https://arxiv.org/abs/1610.01568
The ratio of domination and independent domination numbers on trees

Abstract: Let $\gamma(G)$ and $i(G)$ be the domination number and the independent domination number of $G$, respectively. In 1977, Hedetniemi and Mitchell began the comparison of $i(G)$ and $\gamma(G)$, and recently Rad and Volkmann posed the conjecture that $i(G)/ \gamma(G) \leq \Delta(G)/2$, where $\Delta(G)$ is the maximum degree of $G$. In this work, we prove the conjecture for trees and exhibit the graphs attaining the sharp bound.

\section{Introduction}
Throughout this paper $G = (V, E)$ is a simple undirected graph with vertex set $ V(G)$ and edge set $ E(G)$. For $v\in V(G)$, $N_G(v)=\{ w\in V(G): vw\in E(G)\}$ is
the open neighborhood of $v$ and $N_G[v]=N_G(v) \cup \{v\}$ is
the closed neighborhood of $v$ in $G$. If $N_G(v) = \emptyset$, then $v$ is called an isolated vertex. For $S\subseteq V(G)$, $N_G(S)$ is the open neighborhood of $S$, $N_G[S]=N_G(S)\cup S$ is the closed neighborhood of $S$, and $G-S$ is the subgraph induced by $V(G)-S$. A graph $F$ is a forest if it has no cycles; in particular, a forest is a tree if it has exactly one component. A double star is a tree with exactly two vertices of degree greater than 1. In particular, if these two vertices have the same degree, then it is called a balanced double star.
The line graph $L(G)$ of a connected graph is a graph such that each vertex of $L(G)$ represents an edge of $G$ and two vertices of $L(G)$ are adjacent if and only if their corresponding edges share a common endpoint in $G$.
Recall that a vertex set $D \subseteq V(G)$ is a dominating set if every vertex of $V(G)-D$ is adjacent to some vertex of $D$. The minimum cardinality of a dominating set is called the domination number, denoted by $\gamma(G)$. Similarly,
a vertex set $I \subseteq V(G)$ is an independent dominating set if $I$ is both an independent set and a dominating set in $G$, where an independent set is a set of vertices no two of which are adjacent. The minimum cardinality of an independent dominating set is called the independent domination number, denoted by $ i(G)$. Much work relating the domination number and the independent domination number has been done; see the surveys \cite{2,4}.
In 1977, S. Hedetniemi and S. Mitchell \cite {1977} showed that for any tree $T$,
$\frac{ i(L(T))} { \gamma (L(T))} =1$, where $ L(T)$ is the line graph of $T$.
Because any line graph is a $K_{1,3}$-free graph,
R. B. Allan and R. Laskar \cite {1978} extended the previous result in 1978 and obtained that if a graph does not have an induced subgraph isomorphic to $K_{1,3}$, then $ i(G)/ \gamma(G) = 1$.
Recently, Goddard et al.\cite{3} considered the ratio $i(G)/ \gamma(G) $ for regular graphs and proved that $i(G)/ \gamma(G) \leq 3/2$ for cubic graphs. In 2013, Southey and Henning \cite{6} improved the previous result to $i(G)/ \gamma(G) \leq 4/3$ for connected cubic graphs except for $K_{3,3}$.
During the same year, Rad and Volkmann \cite{5} obtained an upper bound on $i(G)/\gamma(G)$ for a graph $G$ and proposed the following conjecture.
\vskip 2mm {\bf Theorem 1}\emph{ (Rad and Volkmann \cite{5})
Let $G$ be a graph, then
$$ \frac{i(G)}{\gamma(G)} \leq
\left\{ \begin{array}{rcl}
\frac{\Delta(G)}{2},\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;&& \mbox{if } 3 \leq \Delta(G) \leq 5,
\\\Delta(G) -3 +\frac{2}{\Delta(G) - 1},&& \mbox{if } \Delta(G) \geq 6.
\end{array}\right.$$
}
{\bf Conjecture 2} \emph{(Rad and Volkmann \cite{5})
Let $G$ be a graph with $\Delta(G) \geq 3$, then $i(G)/\gamma(G) \leq \Delta(G)/2$.
}
In 2014, Furuta et al.\cite{7} showed that $i(G)/ \gamma(G) \leq \Delta(G) - 2 \sqrt{\Delta(G)} +2$ for a graph $G$ and exhibited graphs attaining this new bound. However,
when $\Delta(G)$ is large enough, $ \Delta(G) - 2 \sqrt{\Delta(G)} +2 > \Delta(G) / 2.$ This raises a natural question:\\
{\it Q: Are there other classes of graphs for which Conjecture 2 holds? }
Motivated by Conjecture 2 and the above question, we prove that Conjecture 2 is true for trees and exhibit the graphs $G$ attaining the sharp bound $\Delta(G)/2$.
\begin{figure}[thb]
\center
\begin{tikzpicture}
\tikzstyle {every node} =[fill=black!60,circle,inner sep=0.5pt,text=white] \path
(2,0) node(0) {$w_1$}
(4,0) node(1) {$w_2$}
(1,1.5) node(2) {$v_1$}
(1,0.5) node(3) {$v_2$}
(1, -0.5)node(4){$*$}
(1,-1.5) node (5) {$v_s$}
(5,1.5) node(6) {$u_1$}
(5,0.5) node (7) {$u_2$}
(5,-0.5) node (8) {$*$}
(5,-1.5) node (9) {$u_s$}
;
\draw[black,line width=1pt] (0)--(1) (0)--(2) (0)--(3) (0)--(4) (0)--(5)
(1)--(6) (1)--(7) (1)--(8) (1)--(9)
;
\end{tikzpicture}
\caption{A balanced double star}
\label{fig:cor}
\end{figure}
\vskip 2mm {\bf Theorem 3} \emph{
Let $G$ be a forest, then $$ \frac{i(G)}{\gamma(G)} \leq
\left\{ \begin{array}{rcl}
1,\;\;\; && \mbox{if } \Delta(G) \leq 2,
\\\frac{\Delta(G)}{2},&& \mbox{if } \Delta(G) \geq 3,
\end{array}\right.$$ and the equalities hold if either $\Delta(G) \leq 2$ or each component of $G$ is a balanced double star with maximum degree $\Delta(G)$ (see Figure 1).
}
As an immediate consequence of Theorem 3, we obtain the following.
\vskip 2mm {\bf Theorem 4} \emph{
Let $G$ be a tree, then $$ \frac{i(G)}{\gamma(G)} \leq
\left\{ \begin{array}{rcl}
1,\;\;\; && \mbox{if } \Delta(G) \leq 2,
\\\frac{\Delta(G)}{2},&& \mbox{if } \Delta(G) \geq 3,
\end{array}\right.$$ and the equalities hold if either $\Delta(G) \leq 2$ or $G$ is a balanced double star (see Figure 1).
}
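To illustrate the sharp example, the claim is easy to confirm computationally. The sketch below (ours, not part of the paper; a brute-force search in Python, so only practical for small graphs) builds the balanced double star with $s=3$ leaves per center, so $\Delta = 4$, and confirms $\gamma = 2$ and $i = 4$, giving $i/\gamma = \Delta/2$.

```python
from itertools import combinations

def build_double_star(s):
    """Balanced double star: centers 0 and 1 are adjacent, and each
    carries s leaves, so Delta = s + 1."""
    adj = {v: set() for v in range(2 * s + 2)}
    edges = [(0, 1)] + [(0, 2 + k) for k in range(s)] + [(1, 2 + s + k) for k in range(s)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def dominates(adj, S):
    covered = set()
    for v in S:
        covered |= adj[v] | {v}
    return len(covered) == len(adj)

def independent(adj, S):
    return all(u not in adj[v] for u in S for v in S if u != v)

def gamma(adj):          # domination number, by brute force
    verts = list(adj)
    for size in range(1, len(verts) + 1):
        if any(dominates(adj, set(S)) for S in combinations(verts, size)):
            return size

def i_number(adj):       # independent domination number, by brute force
    verts = list(adj)
    for size in range(1, len(verts) + 1):
        if any(independent(adj, set(S)) and dominates(adj, set(S))
               for S in combinations(verts, size)):
            return size

adj = build_double_star(3)                  # Delta = 4
assert gamma(adj) == 2                      # the two centers dominate
assert i_number(adj) == 4                   # one center plus the other's leaves
# hence i/gamma = 4/2 = Delta/2, matching the sharpness claim
```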
\section{Proof of Theorem 3}
In this section, we will prove Theorem 3 and start with an interesting lemma.
\vskip 2mm {\bf Lemma 1} \emph{
Let $r_1, r_2,r_3,r_4,t$ be positive numbers with $\frac{r_1}{r_2} \leq t$ and $\frac{r_3}{r_4} \leq t$. Then $\frac{r_1+r_3}{r_2+r_4} \leq t$.
}
Since $r_1 \leq r_2 t$ and $r_3 \leq r_4 t$, adding the two inequalities gives $r_1+r_3 \leq (r_2+r_4)t$, which proves Lemma 1. Next we give the main proof of this note.
{\bf Proof of Theorem 3.}
For $\Delta(G) \leq 1$, $G$ contains only isolated vertices or edges and $i(G) = \gamma(G)$, that is, $i(G)/ \gamma(G) =1$. Next, we will consider the case of $\Delta(G) \geq 2$ and
begin with the case that the forest $G$ contains only one component, that is, $G$ is a tree.
Let $D$ be a minimum dominating set of $G$. Then $G[D]$ is also
a forest. We build sequences $\{G_i\}, \{x_i\}$ with $i \geq 1$ as follows. Let $G_1 = G[D]$ and choose $ x_1 \in V(G_1)$ with $d_{G_1}(x_1) = 0$ or $1$. For $i \geq 2$, let $G_{i} = G_{i-1}-N_{G_{i-1}}[x_{i-1}]$; if $V(G_i) = \emptyset$, then stop and set $k = i-1$; otherwise choose $x_{i} \in V(G_i)$ with $d_{G_i}(x_i) =0 $ or $ 1$. (Such a vertex always exists since each $G_i$ is a forest.)
Set $X = \{x_1,x_2,...,x_k\}$. Then $X$ is an independent dominating set of $G[D]$
and $\{N_{G_i}[x_i], 1 \leq i \leq k\}$ is a partition of $D$, that is, $\sum_{1 \leq i \leq k} (d_{G_i}(x_i) +1) = |D| = \gamma(G)$. Choose $I \subset V(G) - D$ such that
$X \cup I$ is an independent dominating set of $G$, that is,
$
i(G) \leq |X|+|I| = k+ |I|.
$
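The peeling construction of $X$ above can be sketched in code (our illustration; the small path below stands in for the induced forest $G[D]$): repeatedly pick a vertex of degree $0$ or $1$, record its closed neighborhood as one block of the partition, and delete that neighborhood.

```python
def peel_forest(adj):
    """Sketch of the construction of X: in the forest G[D], repeatedly pick a
    vertex x of degree 0 or 1, record its closed neighborhood as one block of
    the partition, and delete that neighborhood."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}     # work on a copy
    X, parts = [], []
    while adj:
        x = min(adj, key=lambda v: len(adj[v]))         # a forest always has such x
        assert len(adj[x]) <= 1
        closed = adj[x] | {x}                           # N[x] in the current forest
        X.append(x)
        parts.append(closed)
        for v in closed:
            for u in adj[v]:
                adj[u].discard(v)
        for v in closed:
            del adj[v]
    return X, parts

# a small path a-b-c-d standing in for G[D]
forest = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
X, parts = peel_forest(forest)
assert sorted(v for p in parts for v in p) == ['a', 'b', 'c', 'd']   # partition
assert all(u not in forest[v] for u in X for v in X if u != v)       # X independent
assert all(v in X or forest[v] & set(X) for v in forest)             # X dominates
```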
Since $D$ is a dominating set of $G$, we have $I = \cup_{v \in {D-X}}(N_G(v) \cap I) = \cup_{1 \leq i \leq k} (\cup_{v \in {N_{G_i}(x_i)}}(N_G(v) \cap I))$.
By the choice of $x_i$, for $1 \leq i \leq k$ and $v \in N_{G_i}(x_i)$, we have $d_{G_i}(x_i) \leq d_{G_i}(v)$. Thus,
$|N_G(v) \cap I| \leq d_G(v) - d_{G_i}(v) \leq \Delta(G) - d_{G_i}(x_i)$ and
\begin{eqnarray}
|I| & \leq &\sum_{1 \leq i \leq k}( \sum _{v \in N_{G_i}(x_i)} |N_{G}(v) \cap I|) \nonumber \\
&\leq & \sum_{1 \leq i \leq k} (\sum_{v \in N_{G_i}(x_i)} (\Delta(G) - d_{G_i}(x_i))) \nonumber \\
&= & \sum_{1 \leq i \leq k} (\sum_{v\in N_{G_i}(x_i)} \Delta (G)) - \sum_{1 \leq i \leq k} ( \sum_{v \in N_{G_i}(x_i)} d_{G_i}(x_i)) \nonumber\\
&=& (|D| - k ) \Delta(G) - \sum_{1 \leq i \leq k}d_{G_i}(x_i)^2.
\end{eqnarray}
By $(1)$ and $|D| = \gamma(G)$, we can obtain that \begin{eqnarray}
i(G) &\leq & k+ |I| \nonumber \\
&\leq & k+ (|D| - k ) \Delta(G) - \sum_{1 \leq i \leq k}d_{G_i}(x_i)^2 \nonumber \\
&=& \Delta(G)\gamma(G) - \sum_{1 \leq i \leq k } (\Delta(G) -1 +d_{G_i}(x_i)^2), \nonumber
\end{eqnarray}
that is, $$
\frac{i(G)}{\gamma(G)} \leq \Delta(G) - \frac{ \sum_{1 \leq i \leq k } (\Delta(G) -1 +d_{G_i}(x_i)^2)}{\gamma(G)}.
$$
Now, it suffices to show that $ - \frac{ \sum_{1 \leq i \leq k } (\Delta(G) -1 +d_{G_i}(x_i)^2)}{\gamma(G)} \leq -\frac{\Delta(G)}{2}$, that is,
\begin{eqnarray}
\sum_{1 \leq i \leq k} (\Delta(G) - 1 + d_{G_i}(x_i)^2) &\geq& \frac{1}{2} \Delta(G) \gamma(G) \nonumber \\&=& \frac{1}{2} \Delta(G) (\sum_{1 \leq i \leq k} (d_{G_i}(x_i) +1))
\end{eqnarray}
By the construction of $G_i, x_i$, $d_{G_i}(x_i) = d_{G_i}(x_i)^2 = 0$ or $1$.
Thus, $(2)$ is the same as $(3)$ below.
\begin{eqnarray}
&&\Leftrightarrow
k\Delta(G) - k + \sum_{1\leq i \leq k} d_{G_i}(x_i) - \frac{1}{2} \Delta(G)(\sum_{1\leq i \leq k} d_{G_i}(x_i) ) \nonumber \\&& \;\;\;\;\; -\frac{1}{2} \Delta(G) k \geq 0 \nonumber
\\&&
\Leftrightarrow
(1- \frac{1}{2} \Delta(G)) ((\sum_{1\leq i \leq k} d_{G_i}(x_i) ) - k) \geq 0
\end{eqnarray}
Furthermore, $d_{G_i}(x_i) = 0$ or $1$ yields that $(\sum_{1\leq i \leq k} d_{G_i}(x_i) ) - k \leq 0$.
Since $\Delta(G) \geq 2$, we have
$1- \frac{1}{2} \Delta(G) \leq 0$. Thus, $(3)$ is true, that is, Theorem 3 holds for trees.
Next we consider the case that $G$ has more than one component. In this case, each component of $G$ is either an isolated vertex or a tree, say $G_1, G_2, ..., G_s$ for an integer $s \geq 2$. For $1 \leq j \leq s $, if $G_j$ is an isolated vertex, then $i(G_j)/\gamma(G_j) = 1/1 \leq \Delta(G)/2$; if $G_j$ is a tree, then by the above proof $i(G_j)/\gamma(G_j) \leq \Delta(G)/2$. Finally, using Lemma 1, $i(G)/\gamma(G) \leq \Delta(G)/2$ holds for forests.
Furthermore, if $\Delta(G) \leq 2$, all forests achieve the bound; if $\Delta(G) \geq 3$, a disjoint union of balanced double stars, each with maximum degree $\Delta(G)$, attains the bound. Thus, Theorem 3 is true.
$\hfill\Box$
\bigskip
\noindent {\small (arXiv:1610.01568, Combinatorics (math.CO), October 2016.)}
https://arxiv.org/abs/2203.13870
A simple mnemonic to compute sums of powers

Abstract: We give a simple recursive formula to obtain the general sum of the first $N$ natural numbers to the $r$th power. Our method allows one to obtain the general formula for the $(r+1)$th power once one knows the general formula for the $r$th power. The method is very simple to remember owing to an analogy with differentiation and integration. Unlike previously known methods, no knowledge of additional specific constants (such as the Bernoulli numbers) is needed. This makes it particularly suitable for applications in cases when one cannot consult external references, for example mathematics competitions.

\section{Introduction}
Sums of powers have fascinated mathematicians for centuries. The sum of the first $N$ natural numbers is given by the simple well-known formula
\begin{equation}
\sum_{n=1}^N n = 1+2+\cdots + N = \frac{N(N+1)}{2} \ .
\label{sum r=1}
\end{equation}
On the other hand, while occasionally useful, the general formulas for the sums of higher powers are much less well-known. For example, one has
\begin{equation}
\sum_{n=1}^N n^2 = \frac{1}{3} N^3 + \frac{1}{2} N^2 + \frac{1}{6} N \ .
\label{sum r=2}
\end{equation}
In fact once a formula of the type of eq. \eqref{sum r=1} or \eqref{sum r=2} is given, its proof is a trivial exercise by using induction. However, \textit{guessing} the formula in the first place is non-trivial.
Here we will provide a simple-to-remember method to recursively compute formulas for sums of a generic power $r \in \mathbb{N}$:
\begin{equation}
\sum_{n=1}^N n^r \equiv S(N;r) \ .
\label{sum r}
\end{equation}
In other words, our method allows us to obtain $S(N;r+1)$ from $S(N;r)$, and therefore recursively all the $S(N;r)$. In general, $S(N;r)$ will be a polynomial in $N$ of degree $r+1$.
\section{Faulhaber's Formula}\label{sec:faulhaber}
In the early 18th century, Jacob Bernoulli obtained a general formula for the sums of powers, now known as Faulhaber's formula,
\begin{equation}
\sum_{n=1}^N n^r = \frac{1}{r+1}\sum_{j=0}^r (-1)^j {r+1 \choose j} B_j N^{r+1-j} \ ,
\label{faulhaber}
\end{equation}
where the $B_j$ are the Bernoulli numbers with $B_1 = -1/2$. The proof of eq. \eqref{faulhaber} is straightforward starting from one of the forms of the Euler-Maclaurin identity \cite{spivey}.
While technically this is a solution to the problem, it is very hard to remember. Moreover, despite the apparent simplicity of eq. \eqref{faulhaber}, the computation of the Bernoulli numbers is itself non-trivial. Formulas are either implicit or involve double summations of complicated summands. For these reasons we look for an easier-to-remember method that one can perform with pen and paper without having to look up additional references.
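Both eq. \eqref{faulhaber} and the Bernoulli numbers can be computed exactly with rational arithmetic. The sketch below (ours, not part of the text) uses the standard recurrence $\sum_{j=0}^{k}\binom{k+1}{j} B_j = 0$ for $k \ge 1$ and cross-checks the formula against direct summation:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    # B_0..B_m with B_1 = -1/2, via sum_{j<=k} C(k+1, j) B_j = 0 for k >= 1
    B = [Fraction(1)]
    for k in range(1, m + 1):
        B.append(Fraction(-1, k + 1) * sum(comb(k + 1, j) * B[j] for j in range(k)))
    return B

def faulhaber_sum(N, r):
    # Faulhaber's formula: sum_{n=1}^N n^r, evaluated exactly
    B = bernoulli(r)
    return sum(Fraction((-1) ** j * comb(r + 1, j), r + 1) * B[j] * N ** (r + 1 - j)
               for j in range(r + 1))

assert bernoulli(4) == [Fraction(1), Fraction(-1, 2), Fraction(1, 6),
                        Fraction(0), Fraction(-1, 30)]
for r in range(7):
    for N in range(1, 12):
        assert faulhaber_sum(N, r) == sum(n ** r for n in range(1, N + 1))
```

Note how even this short program needs the full table of Bernoulli numbers, which is exactly the dependence the mnemonic of the next section avoids.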
\section{A simple mnemonic for sums of powers}
Here we illustrate the mnemonic to recursively compute sums of powers, while in the next section we give a proof of the correctness of the method. Let's consider the concrete example of the computation of $\sum_{n=1}^N n^2$. Here we first do something \say{illegal}, that is we differentiate the whole sum with respect to $N$, and by this we mean that we differentiate term-by-term. Doing this we obtain (twice) the sum of one power lower, $\sum_{n=1}^N n$, which we know:
\begin{equation}
\frac{d}{dN} \sum_{n=1}^N n^2 \overset{?}{=} \sum_{n=1}^N \frac{d}{dn} (n^2) =2\sum_{n=1}^N n = N(N+1) = N^2+N \ .
\label{miao}
\end{equation}
This is clearly not a valid operation, however one can formally think of it as a map between polynomials that happens to be given by a term-by-term derivative. In order to recover the original sum, one may exploit the analogy with differentiation and integration, whereby integrating reverses the derivative. Thus integrating the end result of \eqref{miao}, one obtains
\begin{equation}
\int_0^N (N^2 +N) \, dN =\frac{1}{3} N^3 + \frac{1}{2} N^2 \ .
\end{equation}
Comparing with eq. \eqref{sum r=2}, this is almost the correct result, lacking only the linear term $N/6$. The procedure thus works as follows:
\begin{itemize}
\item Differentiate the sum term-by-term and substitute the formula for the lower sum of power;
\item Integrate the resulting polynomial;
\item Add a linear term $CN$ and fix the constant $C$ by requiring that the sum give $1$ for $N=1$ as it should.
\end{itemize}
Thus one is able to recursively obtain all the sums of powers for any $r$ starting from either $r=1$ or trivially $r=0$. In the next section we prove that this method indeed gives the correct result. This method can be easily remembered (\say{differentiate, then integrate and add a linear term}).
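The three steps above translate directly into a short program (our sketch, using exact rational arithmetic; $S(N;r)$ is stored as a map from powers of $N$ to coefficients). It reproduces the formulas computed by hand in the rest of this section:

```python
from fractions import Fraction

def next_power_sum(coeffs, r):
    # "differentiate, then integrate and add a linear term":
    # d/dN applied term-by-term gives r * S(N; r-1), so integrate r * S(N; r-1) ...
    integrated = {k + 1: Fraction(r) * c / (k + 1) for k, c in coeffs.items()}
    # ... then fix the linear term so that S(1; r) = 1
    C = 1 - sum(integrated.values())
    integrated[1] = integrated.get(1, Fraction(0)) + C
    return {k: c for k, c in integrated.items() if c != 0}

def power_sum_formula(r):
    S = {1: Fraction(1)}                 # S(N; 0) = N
    for k in range(1, r + 1):
        S = next_power_sum(S, k)
    return S

def evaluate(coeffs, N):
    return sum(c * N ** k for k, c in coeffs.items())

# matches eq. (2) and the r = 4 formula worked out later in this section
assert power_sum_formula(2) == {3: Fraction(1, 3), 2: Fraction(1, 2), 1: Fraction(1, 6)}
assert power_sum_formula(4) == {5: Fraction(1, 5), 4: Fraction(1, 2),
                                3: Fraction(1, 3), 1: Fraction(-1, 30)}
for r in range(8):
    S = power_sum_formula(r)
    assert all(evaluate(S, N) == sum(n ** r for n in range(1, N + 1))
               for N in range(1, 10))
```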
Now we compute the first few sums of powers as an example. Starting with $r=0$,
\begin{equation}
\sum_{n=1}^N n^0=\sum_{n=1}^N 1 = N
\end{equation}
trivially. Now we apply our procedure to the $r=1$ case. First we differentiate the sum term-by-term and substitute the known lower formula:
\begin{equation}
\frac{d}{dN} \sum_{n=1}^N n = \sum_{n=1}^N 1 = N \ .
\end{equation}
Then we integrate and add a linear term, finding
\begin{equation}
\sum_{n=1}^N n = \frac{1}{2} N^2 + CN \ .
\end{equation}
Substituting $N=1$ we find $1/2 + C = 1$, that is $C=1/2$, so
\begin{equation}
\sum_{n=1}^N n = \frac{1}{2} N(N + 1) \ ,
\end{equation}
which is the correct formula. Now for $r=2$ we have
\begin{equation}
\frac{d}{dN} \sum_{n=1}^N n^2 = 2\sum_{n=1}^N n = N^2+N \ .
\end{equation}
Integrating and adding a linear term
\begin{equation}
\sum_{n=1}^N n^2 = \frac{1}{3} N^3+\frac{1}{2} N^2 + CN \ .
\end{equation}
Substituting $N=1$ we have $1/3+1/2 + C = 1$, that is $C=1/6$, so that
\begin{equation}
\sum_{n=1}^N n^2 = \frac{1}{3} N^3+\frac{1}{2} N^2 + \frac{1}{6}N \ .
\end{equation}
For $r=3$, we have
\begin{equation}
\frac{d}{dN} \sum_{n=1}^N n^3 = 3\sum_{n=1}^N n^2 = N^3+\frac{3}{2} N^2 + \frac{1}{2}N \ .
\end{equation}
Again integrating and adding a linear term
\begin{equation}
\sum_{n=1}^N n^3 =\frac{1}{4}N^4+\frac{1}{2} N^3 + \frac{1}{4}N^2 + CN \ .
\end{equation}
For $N=1$ we have $1/4+1/2 + 1/4 + C = 1$, that is $C=0$, so
\begin{equation}
\sum_{n=1}^N n^3 =\frac{1}{4}N^4+\frac{1}{2} N^3 + \frac{1}{4}N^2 \ .
\end{equation}
For $r=4$,
\begin{equation}
\frac{d}{dN} \sum_{n=1}^N n^4 = 4\sum_{n=1}^N n^3 = N^4+2 N^3 + N^2 \ .
\end{equation}
Therefore
\begin{equation}
\sum_{n=1}^N n^4 =\frac{1}{5}N^5+\frac{1}{2} N^4 + \frac{1}{3}N^3 + CN \ ,
\end{equation}
where $C=-1/30$, so that
\begin{equation}
\sum_{n=1}^N n^4 =\frac{1}{5}N^5+\frac{1}{2} N^4 + \frac{1}{3}N^3 - \frac{1}{30}N \ .
\end{equation}
In principle one can keep going and compute the formula for an arbitrary $r$.
\section{Proof of correctness}
Here we prove that the method explained in the previous section is correct. To do so, define
\begin{equation}
S(N;r) = \sum_{n=1}^N n^r \ ,
\end{equation}
where $S(N;r)$ is a polynomial in $N$. Then the statement of the previous method is essentially that
\begin{equation}
S(N;r) = CN + r \int_0^N S(N;r-1)\,dN
\end{equation}
for some constant $C$. To show that this is true, we compute the integral using Faulhaber's formula:
\begin{align*}
r\int_0^N S(N;r-1)\,dN &=r\int_0^N \frac{1}{r}\sum_{j=0}^{r-1} (-1)^j {r \choose j} B_j N^{r-j}\,dN =\\
&=\sum_{j=0}^{r-1} (-1)^j {r \choose j} B_j \frac{1}{r+1-j} N^{r+1-j}=\\
&=\frac{1}{r+1}\sum_{j=0}^{r-1} (-1)^j {r+1 \choose j} B_j N^{r+1-j}=\\
&=S(N;r) - \frac{1}{r+1} (-1)^r {r+1 \choose r} B_r N =\\
&=S(N;r) -(-1)^r B_r N
\end{align*}
That is
\begin{equation}
S(N;r) = (-1)^r B_r N + r \int_0^N S(N;r-1)\,dN \ .
\end{equation}
This is precisely the statement that we meant to prove, with an explicit value for the constant $C$. While this explicit value may be useful, we prefer to determine $C$ by requiring $S(1;r)=1$, as this is easier to remember.
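As a quick sanity check (ours, not in the paper), the constants $C$ found in the worked examples, namely $1/2$, $1/6$, $0$ and $-1/30$ for $r=1,\dots,4$, indeed equal $(-1)^r B_r$, with $B_r$ computed from the standard recurrence $\sum_{j \le k} \binom{k+1}{j} B_j = 0$:

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers (B_1 = -1/2 convention)
B = [Fraction(1)]
for k in range(1, 5):
    B.append(Fraction(-1, k + 1) * sum(comb(k + 1, j) * B[j] for j in range(k)))

# the constants C found in the examples for r = 1, 2, 3, 4
C = [Fraction(1, 2), Fraction(1, 6), Fraction(0), Fraction(-1, 30)]
for r in range(1, 5):
    assert C[r - 1] == (-1) ** r * B[r]
```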
\bigskip
\noindent {\small (arXiv:2203.13870, General Mathematics (math.GM), March 2022.)}
https://arxiv.org/abs/1903.05910
Efficient evaluation of noncommutative polynomials using tensor and noncommutative Waring decompositions

Abstract: This paper analyses a Waring type decomposition of a noncommuting (NC) polynomial $p$ with respect to the goal of evaluating $p$ efficiently on tuples of matrices. Such a decomposition can reduce the number of matrix multiplications needed to evaluate a noncommutative polynomial and is valuable when a single polynomial must be evaluated on many matrix tuples. In pursuit of this goal we examine a noncommutative analog of the classical Waring problem and various related decompositions. For example, we consider a "Waring decomposition" in which each product of linear terms is actually a power of a single linear NC polynomial or, more generally, a power of a homogeneous NC polynomial. We describe how NC polynomials compare to commutative ones with regard to these decompositions, describe a method for computing the NC decompositions, and compare the effect of various decompositions on the speed of evaluation of generic NC polynomials.

\section{Introduction}
\ \ \
The Waring problem has had a long history since it was first proposed in 1770 by Edward Waring. Initially it was a question about integers, asking whether a natural number could be written as a sum of powers of natural numbers. Later it was extended to polynomials: it asks whether a given polynomial $f(x_1, x_2, \dots, x_n)$ can be represented as a sum of powers of polynomials, where the $x_i$'s are commuting variables. In this form, the Waring problem is closely related to symmetric tensor decomposition; see Section \ref{sec:ClassicalBackground}. This problem
was treated successfully in \cite{RS00} and \cite{FOS12} and
has been studied extensively, as is shown, for example, in \cite{BC11} and \cite{GV08}. In this paper, we assume that $x_i$'s are noncommutative variables, and derive solutions for the noncommutative version of the Waring problem. Our results can be used to efficiently evaluate noncommutative polynomials on tuples of matrices.
Pursuing the noncommutative Waring problem is in the spirit of the burgeoning area called {\it free analysis}.
Here one takes classical problems and works out analogs with noncommutative variables,
which are {\it free} of constraints.
These free analogues typically have interpretations for matrix or operator variables
and their development often impacts various areas.
One of the original efforts here was Voiculescu's free probability,
which started by developing a notion of entropy for operator variables
and which has become a large area with many connections to
random matrix theory \cite{MS17}.
Some other directions are free
analytic function theory, cf. \cite{KVV14}, and free real algebraic geometry \cite{BKP16},
with some consequences for systems engineering in \cite{HMPV09}. Our paper develops the noncommutative analogue of the classical Waring problem.
\subsection{Problem statement}
\ \ \ \
We now state a natural noncommutative version of the classical polynomial Waring problem. We shall work with functions of
$g$ noncommutative variables
$$x = (x_1, x_2, ..., x_g)$$
and be interested in powers of
linear functions
$$L_s(x) := A^s_1x_1 + A^s_2x_2 + \cdots + A^s_g x_g,$$
where $s$ is an index and $A^s_i \in \mathbb{R}$ or $\mathbb{C}$ for $1 \le i \le g$.
For any (index) tuple $\alpha = (\alpha_1, \alpha_2, ..., \alpha_d)$, where each $\alpha_i$ for $1 \le i \le d$ is an integer between $1$ and $g$, we denote
$$ x^\alpha = x_{\alpha_1}x_{\alpha_2}x_{\alpha_3}...x_{\alpha_d}.$$
For example,
if $\alpha = ( 1, 2, 1,3)$, then
$x^\alpha = x_1x_2x_1x_3$.
\begin{defi}
A homogeneous NC polynomial $p$ of degree $d$ has a
$t$-term \df{real Waring (resp. complex Waring)
decomposition}
provided that
$p(x)$ can be written
as the sum of $t$ terms of the $d^{th}$-power of linear functions of $x$, i.e.,
\begin{equation} \label{Waring_rep}
p(x)=\sum_{s=1}^t [ A^s_1x_1 + A^s_2x_2 + \cdots + A^s_gx_g]^d
\end{equation}
with real (resp. complex) numbers $A_j^s$.
\qed \end{defi}
\def{\mathbb C}{{\mathbb C}}
\noindent \df{NC Waring Problem: }
{\bf Determine if a noncommutative homogeneous degree $d$ polynomial $p$ has a $t$-term Waring decomposition.}
\bigskip
In this paper we will reduce this problem to the classical commutative variable Waring problem, thereby effectively solving it over ${\mathbb C}$.
We also examine the
\bigskip
\noindent
\df{General NC Waring Problem:}
{\bf Determine if a homogeneous noncommutative polynomial is the sum of $d^{th}$ powers of homogeneous polynomials
of degree $\delta$.}
\bigskip
In a similar spirit, we reduce the NC general Waring problem to a classical Waring problem, but in more variables.
\subsection{Background on the Waring Problem}
\ \ \ \
The classical commutative versions of these problems are well summarized in \cite{FOS12}.
\bigskip
\subsubsection{Classical Waring Problem.}
\label{sec:ClassicalBackground}
\ \ \ \
According to Theorem 2.2 of \cite{OO12}, we have the following theorem for the classical Waring decomposition of linear terms:
\begin{thm} \label{classical_result}
A homogeneous polynomial of degree $d$ in $g$ variables can always be expressed as the sum of powers of linear forms with complex coefficients. Moreover, for a general homogeneous polynomial, the number of linear forms needed is $\lceil \frac 1g {{g+d-1} \choose d} \rceil$, except for
\begin{itemize}
\item $d = 2$, where $g$ terms are needed
\item $(d,g) = (3,5),(4,3),(4,4),(4,5)$ where $\lceil \frac 1g {{g+d-1} \choose d} \rceil + 1$ terms are needed.
\end{itemize}
\end{thm}
\noindent
{\bf Waring vs Tensor Decomposition.}
\ \ \ \
It is well known that the classical polynomial Waring problem is equivalent to symmetric tensor decomposition. Let $T \in (\mathbb C^{g})^{\otimes d}$ be a symmetric tensor, i.e. a symmetric multi-indexed array, with entries $T_\alpha \in \mathbb C$ where $\alpha= (\alpha_1, \dots, \alpha_d)$ is a $d$-tuple of integers between $1$ and $g$. We may associate to $T$ a homogeneous degree $d$ polynomial $p_T(z)$ in the commuting variables $z=(z_1, \dots, z_g)$ by setting
\[
p_T (z)= \sum_{|\alpha|=d} T_\alpha z^\alpha.
\]
Suppose $T$ has rank $r$ symmetric tensor decomposition
\[
T= \sum_{s=1}^r A^s \otimes \cdots \otimes A^s \quad \quad \mathrm{where \ } d \mathrm{\ copies \ of \ } A^s \mathrm{\ appear \ in \ each \ tensor \ product.}
\]
Here $A^s=(A_1^s, \dots, A_g^s) \in \mathbb C^{g}$ for each $s$. Then it is straightforward to check that
\[
p_T (z) = \sum_{s=1}^r \left( \sum_{i=1}^g A_i^s z_i \right)^d.
\]
That is, a rank $r$ symmetric tensor decomposition of $T$ corresponds to a rank $r$ Waring decomposition for $p_T (z)$. By reversing this correspondence one sees that a rank $r$ Waring decomposition for a homogeneous polynomial gives a rank $r$ symmetric tensor decomposition for the associated symmetric tensor.
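This correspondence is easy to check on a toy example (our sketch, not from the paper; the two rank-one factors $A^1=(1,2)$ and $A^2=(3,-1)$ are arbitrary choices): build $T$ from two rank-one symmetric terms and confirm that $p_T(z)$ equals the sum of $d$-th powers of the corresponding linear forms.

```python
from itertools import product
from math import prod

g, d = 2, 3
A = [(1, 2), (3, -1)]                    # rank-one factors A^s

# symmetric tensor T_alpha = sum_s A^s_{alpha_1} * ... * A^s_{alpha_d}
T = {alpha: sum(prod(As[i] for i in alpha) for As in A)
     for alpha in product(range(g), repeat=d)}

def p_T(z):                              # associated polynomial, over ordered tuples
    return sum(c * prod(z[i] for i in alpha) for alpha, c in T.items())

def waring(z):                           # sum of d-th powers of the linear forms
    return sum(sum(As[i] * z[i] for i in range(g)) ** d for As in A)

for z in [(1, 0), (0, 1), (2, 3), (-1, 4)]:
    assert p_T(z) == waring(z)           # the two sides agree
```

The equality holds identically because expanding each $d$-th power over ordered index tuples reproduces exactly the entries $T_\alpha$.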
\noindent
\subsubsection{Classical General Waring Problem.}
\ \ \ \
The classical commutative Waring problem can be generalized from representation by powers of linear functions to powers of any degree homogeneous polynomials. The generalized classical Waring problem has also been well studied.
According to Theorem 4 in \cite{FOS12},
there is an upper bound for the number of terms needed for
such problems:
\begin{thm}
A general homogeneous polynomial of degree $\delta d$ in $g$ variables, where $d \ge 2$, can be expressed as a sum of at most
$d^{g - 1}$ $d^{th}$ powers of degree $\delta$ homogeneous
complex coefficient polynomials. Moreover, for a fixed $g$, this bound is sharp for all sufficiently large $\delta$.
\end{thm}
\subsection{An easily stated result}
\ \ \ \
Before stating a result we need a definition.
Define an \df{indicator function} on
an index $d$-tuple $\alpha=(\alpha_1, \dots, \alpha_d)$ by first defining
\index{$\mathbbm{1}_j^{\alpha_i}$ }
$$\mathbbm{1}_j^{\alpha_i} =
\begin{cases}
1 & \textrm{if } \alpha_i = j \\
0 & \textrm{if } \alpha_i \neq j
\end{cases}.
$$
Then the indicator function $\mathbbm{1}_j^\alpha$ which gives the number of $j$'s appearing in $\alpha$ is
\index{$\mathbbm{1}_j^{\alpha}$ }
$$
\mathbbm{1}_j^\alpha := \sum_{i=1}^d \mathbbm{1}_j^{\alpha_i}.
$$
A corollary for ${\delta}=1$ of Theorem \ref{thm:LinearWaring} is:
\begin{cor} \label{cor}
Suppose a NC homogeneous polynomial $p(x) = \sum_\alpha {P_\alpha x^\alpha}$, where $P_\alpha = P_{\alpha_1, \alpha_2, ..., \alpha_d}$ $\in \mathbb{C} $, satisfies $P_\alpha = P_{\tilde{\alpha}}$ for any index tuples $\alpha, \tilde{\alpha}$ such that $\mathbbm{1}_j^\alpha = \mathbbm{1}_j^{\tilde{\alpha}}$ for all $1 \le j \le g$. Then $p$ has an NC complex coefficient Waring decomposition with linear powers.
Moreover, for such an NC homogeneous polynomial,
the number of terms needed is
$$
\left \lceil \frac{\binom{g+d-1}{d}} g \right \rceil,
$$
except in the cases
\begin{itemize}
\item $d = 2$, where $g$ terms are needed
\item $(d,g) = (3,5),(4,3),(4,4),(4,5)$ where $\lceil \frac 1g {{g+d-1} \choose d} \rceil + 1$ terms are needed.
\end{itemize}
\end{cor}
\begin{proof}
This corollary is a combination of Theorem \ref{thm:LinearWaring}, our main result in Section \ref{main_result}, and the well developed solutions for the
classical Waring Theorem stated in Theorem \ref{classical_result}.
\end{proof}
\subsection{NC polynomial evaluation}
Let $p(x)=\sum_{|\alpha| \leq d} P_\alpha x^\alpha$ be a noncommutative polynomial. Then for any $n$ and for any $g$-tuple of $n \times n$ matrices $X=(X_1, \dots, X_g)$, we define the \df{evaluation} of $p$ on $X$ by
\[
p(X)=\sum_{|\alpha| \leq d} P_\alpha X^\alpha
\]
where $X^0=I_n$. In the case where $p$ is a homogeneous noncommutative polynomial and has a NC Waring decomposition, the NC Waring decomposition of $p$ may be used to efficiently evaluate $p$ on matrix tuples. This is especially useful in situations where a single $p$ must be evaluated on many different tuples of matrices.
Suppose $p$ has the NC Waring decomposition
\[
p(x)=\sum_{s=1}^t [A_1^s x_1 + A_2^s x_2 + \cdots + A_g^s x_g]^d.
\]
Then for any matrix tuple $X$ we may evaluate $p(X)$ using $tg-1$ matrix additions and $t$ matrix exponentiations of degree $d$, where $t \leq \lceil \frac 1g {{g+d-1} \choose d} \rceil + 1$ by Corollary \ref{cor}. We note that powers of a matrix may be efficiently computed either by decomposing the exponent as a sum of powers of two, or by first computing the Jordan form of the matrix.
In contrast, a naive evaluation of a single degree $d$ NC monomial requires $d-1$ matrix multiplications. As a consequence, the naive approach to evaluating a homogeneous degree $d$ NC polynomial in $g$ variables on a matrix tuple can require up to $g^d (d-1)$ matrix multiplications and $g^d-1$ matrix additions.
A matrix exponentiation of degree $d$ can easily be evaluated with at most $d-1$ matrix multiplications, so to compare the computational complexity of these methods we may compare $g^d$ and $\lceil \frac 1g {{g+d-1} \choose d} \rceil + 1$. A more informative comparison comes from Stirling's approximation, which shows
\[
\left\lceil \frac 1g {{g+d-1} \choose d} \right\rceil + 1 \lessapprox \frac{1}{g} \left( \frac{e(g+d)}{d} \right)^d.
\]
It follows that the ratio of $\lceil \frac 1g {{g+d-1} \choose d} \rceil$ to $g^d$ is approximately bounded above by
\[
\frac{1}{g} \left( \frac{e(g+d)}{gd} \right)^d,
\]
a quantity that rapidly (geometrically) approaches zero as $g$ or $d$ increase, provided $3 \leq d,g$.
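The first matrix-power strategy mentioned above, decomposing the exponent into powers of two, is binary exponentiation. A minimal sketch (ours; plain Python lists stand in for matrices), together with the term-count comparison from the text:

```python
from math import ceil, comb

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, d):
    """X^d by binary exponentiation: O(log d) multiplications instead of d - 1."""
    n = len(A)
    result = [[int(i == j) for j in range(n)] for i in range(n)]   # identity
    while d:
        if d & 1:
            result = mat_mul(result, A)
        A = mat_mul(A, A)
        d >>= 1
    return result

X = [[1, 1], [1, 0]]                 # Fibonacci matrix as an easy correctness check
assert mat_pow(X, 10)[0][0] == 89    # F_11 = 89

# Waring term bound vs number of degree-d monomials in g variables
g, d = 4, 5
assert ceil(comb(g + d - 1, d) / g) + 1 == 15
assert 15 < g ** d == 1024
```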
The authors thank Ignat Domanov for suggesting efficient polynomial evaluation as an application of NC Waring decompositions.
\subsection{Numerical computation of NC Waring decompositions}
\ \ \ \
We conclude the introduction with an example which computes an NC Waring decomposition by using popular tensor decomposition software. Consider the homogeneous noncommutative polynomial
\[
\begin{array}{rclcl}
p(x)&=& x_1^3-4 x_2^3 -4 x_3^3 + 5 x_1 x_1 x_2+ 5 x_1 x_2 x_1 + 5 x_2 x_1 x_1 -3 x_1 x_1 x_3-3 x_1 x_3 x_1-3 x_3 x_1 x_1 \\
& & +7 x_2 x_2 x_1 + 7 x_2 x_1 x_2 + 7 x_1 x_2 x_2-11 x_2 x_2 x_3 - 11 x_2 x_3 x_2 - 11 x_3 x_2 x_2 \\
& & + 6 x_3 x_3 x_1 + 6 x_3 x_1 x_3 +6 x_1 x_3 x_3 - 6 x_3 x_3 x_2 -6 x_3 x_2 x_3-6 x_2 x_3 x_3 \\
& & + x_1 x_2 x_3 +x_1 x_3 x_2 + x_2 x_1 x_3 +x_2 x_3 x_1+ x_3 x_1 x_2+x_3 x_2 x_1.
\end{array}
\]
We associate $p(x)$ to the symmetric tensor $T$ defined by its frontal slices
\[
T(:,:,1)=\begin{pmatrix}
1 & 5 & -3 \\
5 & 7 & 1 \\
-3 & 1 & 6
\end{pmatrix}
\quad \quad
\mathrm{and}
\quad \quad
T(:,:,2)=\begin{pmatrix}
5 & 7 & 1 \\
7 & -4 & -11 \\
1 & -11 & -6
\end{pmatrix}
\]
and
\[
T(:,:,3)=\begin{pmatrix}
-3 & 1 & 6 \\
1 & -11 & -6 \\
6 & -6 & -4
\end{pmatrix},
\]
where $T(:,:,i)$ is the standard Matlab index notation.
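The passage from $p$ to $T$ can be checked mechanically. Below is a small Python sketch of our own (not part of the paper's Matlab toolchain) that rebuilds the symmetric tensor from the coefficients of $p$, using the fact that compatibility makes each coefficient depend only on the multiset of indices, and recovers the frontal slices above.

```python
from itertools import product

# Coefficient of p for each sorted index pattern (compatibility: the
# coefficient depends only on the multiset of indices).
COEFF = {
    (1, 1, 1): 1, (2, 2, 2): -4, (3, 3, 3): -4,
    (1, 1, 2): 5, (1, 1, 3): -3, (1, 2, 2): 7,
    (2, 2, 3): -11, (1, 3, 3): 6, (2, 3, 3): -6,
    (1, 2, 3): 1,
}


def tensor_from_coeffs(coeff, g=3):
    # T[i][j][k] = coefficient of the NC monomial x_i x_j x_k (0-based storage).
    T = [[[0] * g for _ in range(g)] for _ in range(g)]
    for i, j, k in product(range(1, g + 1), repeat=3):
        T[i - 1][j - 1][k - 1] = coeff[tuple(sorted((i, j, k)))]
    return T


T = tensor_from_coeffs(COEFF)
```

The frontal slice `[[T[i][j][k-1] for j ...] for i ...]` then reproduces $T(:,:,k)$ exactly.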
Using Tensorlab \cite{VDSBL16} we compute that $T$ is a rank $4$ tensor and has symmetric tensor decomposition
\[
T= v_1 \otimes v_1 \otimes v_1 +v_2 \otimes v_2 \otimes v_2 +v_3 \otimes v_3 \otimes v_3 + v_4 \otimes v_4 \otimes v_4
\]
where
\[
v_1=\begin{pmatrix}
3.839 \\
-1.591 \\
2.593
\end{pmatrix}
\quad \quad
v_2=(-1)^{1/3} \begin{pmatrix}
-1.577 \\
-1.697 \\
0.902
\end{pmatrix}
\]
and
\[
v_3=\begin{pmatrix}
0.821 \\
-2.121 \\
-1.793
\end{pmatrix}
\quad \quad
v_4=\begin{pmatrix}
-3.917 \\
1.673 \\
-2.462
\end{pmatrix}.
\]
It follows that $p$ has the rank 4 NC Waring decomposition
\[
\begin{array}{rclcl}
p(x)&=& (3.839 x_1 - 1.591 x_2 + 2.593 x_3)^3 \\
& & + ((-1)^{1/3} (-1.577 x_1 - 1.697 x_2 + .902 x_3))^3 \\
& & + (.821 x_1 - 2.121 x_2 - 1.793 x_3)^3 \\
& & + (-3.917 x_1 + 1.673 x_2 -2.462 x_3)^3.
\end{array}
\]
This is easy to numerically verify using NCAlgebra \cite{OHMS}.
A naive evaluation of $p$ on a matrix tuple using the original definition of $p$ requires $54$ matrix multiplications. In contrast, evaluating $p$ on a matrix tuple using its NC Waring decomposition only requires $8$ matrix multiplications.
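To make the operation count concrete, here is a hypothetical Python helper of our own (not from the paper or NCAlgebra; tested below on exact integer data rather than the rounded vectors above). It evaluates a sum of $d$-th powers of linear matrix pencils while counting matrix multiplications; with $t=4$ terms and $d=3$ it performs $4 \cdot 2 = 8$ multiplications, matching the count above.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]


def eval_waring(vectors, X, d):
    # Evaluate sum_s (sum_k v_s[k] * X_k)^d on a tuple X of square matrices,
    # counting matrix multiplications.  Forming each linear combination costs
    # no matrix multiplications (only scalar scalings and entrywise additions).
    n = len(X[0])
    total = [[0] * n for _ in range(n)]
    mults = 0
    for v in vectors:
        L = [[sum(c * X[k][i][j] for k, c in enumerate(v)) for j in range(n)]
             for i in range(n)]
        P = L
        for _ in range(d - 1):   # d-1 multiplications per term
            P = matmul(P, L)
            mults += 1
        total = [[total[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return total, mults
```

For instance, a single term $(X_1 + X_2)^3$ on $2\times 2$ matrices uses exactly two matrix multiplications.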
\subsection{Guide to readers}
\ \ \ \
In Section \ref{sec:linear} we show that the NC Waring problem reduces to the classical Waring problem. The section begins by introducing a compatibility condition which is necessary for a NC homogeneous polynomial $p$ to have a Waring decomposition. The main result of this section is Theorem \ref{thm:LinearWaring} which shows that a NC homogeneous polynomial $p$ has a $t$-term Waring decomposition if and only if it satisfies our compatibility condition and its commutative collapse has a $t$-term Waring decomposition.
Section \ref{sec:general} considers the general NC Waring problem. Similar to the $\delta=1$ case, we begin by introducing a general $\delta$-compatibility condition which is necessary for the existence of a $(\delta,d)$-NC Waring decomposition. The main result of the section is Proposition \ref{prop:GenNCWaringMain} which shows that, under the $\delta$-compatibility condition, the general NC Waring problem is equivalent to a commutative Waring problem for a polynomial with an increased number of variables. We end with Section \ref{sec:NeedExtraVars} which illustrates that an increase in the number of variables is necessary to reduce the general NC Waring decomposition to a commutative Waring decomposition.
\section{The noncommutative Waring problem} \label{sec:linear}
\ \ \ \
In this section we will present our main results on the linear noncommutative Waring problem.
\subsection{Commutative collapse}
\ \ \ \
Our results are associated with commutative problems through a correspondence we now describe.
For a NC polynomial $p$, the associated \df{commutative collapse}, $p^c$, is the commutative polynomial obtained by considering the variables of $p$ to be commutative.
Our notation for the commutative collapse of an NC monomial $x^\alpha = x_{\alpha_1} x_{\alpha_2} \dots x_{\alpha_d}$ is $X^\alpha = X_{\alpha_1} X_{\alpha_2} \dots X_{\alpha_d}$. For example, when $\alpha = (1, 2, 1, 2 )$, $x^\alpha = x_1x_2x_1x_2$ collapses to $X^\alpha = X_1^2X_2^2$.
We impose an equivalence relation $\sim_c$ \index{$\sim_c$} on NC monomials by saying that $x^\alpha$ and $x^{\tilde{\alpha}}$ are \df{commutative equivalent} if they have the same commutative collapse:
$$
x^\alpha \sim_c x^{\tilde{\alpha}} \quad \text{iff} \quad
X^\alpha = X^{\tilde{\alpha}}.
$$
Moreover, two index tuples $\alpha$ and $\tilde{\alpha}$ are \df{commutative equivalent}, denoted $\alpha \sim_c {\tilde{\alpha}}$,
iff $x^\alpha \sim_c x^{\tilde{\alpha}}$.
Note that
$$\alpha \sim_c {\tilde{\alpha}}\qquad \mbox{ iff} \qquad \bmone^\alpha_i = \mathbbm{1}^{\widetilde \alpha}_i
\ \ for \ i = 1, \dots, g.$$
\subsection{Main results on the NC Waring decomposition}
\ \ \ \
Our results contain two parts. First, we state a compatibility condition necessary for the existence of a Waring decomposition (\S \ref{sssec:Compatibility}). Second, if the compatibility condition holds,
we reduce the NC Waring problem to the classical commutative
Waring problem (\S \ref{main_result}).
\subsubsection{The Compatibility Condition}
\label{sssec:Compatibility}
\ \ \ \
As we next see, the following condition is necessary for the existence
of a NC Waring decomposition.
\begin{defi}
A noncommutative homogeneous degree $d$ polynomial $p(x)$,
$$
p(x) = \sum_{|\alpha|=d} {P_\alpha x^\alpha}\qquad P_\alpha := P_{\alpha_1, \alpha_2, ..., \alpha_d} \in \mathbb{R} \ or \ \mathbb{C},
$$
satisfies the \df{compatibility condition} means
\begin{equation}
\label{eq:Paequiv}
P_\alpha = P_{\tilde{\alpha}} \qquad for \ all \
\alpha \sim_c \tilde{\alpha}.
\end{equation}
Sometimes we say {$p$ is compatible}.
\qed \end{defi}
A noncommutative homogeneous polynomial $p$ of degree $d$ which satisfies the compatibility condition can be thought of as a ``symmetric\footnote{The term symmetric typically refers to a noncommutative polynomial $p$ which is equal to its transpose $p^T$, which is different than the notion discussed here.} noncommutative homogeneous polynomial of degree $d$" in the sense that $p$ is invariant under the following action of the symmetric group. Given a tuple $\alpha=(\alpha_1,\alpha_2, \dots, \alpha_d)$ of length $d$ and a permutation $\pi \in \mathcal{S}_d$ define
\[
\pi (\alpha)=(\alpha_{\pi(1)}, \alpha_{\pi(2)}, \dots, \alpha_{\pi(d)}) \quad \mathrm{that \ is} \quad \pi (x^\alpha)=x^{\pi(\alpha)}.
\]
It is then straightforward to check that $x^\alpha \sim_c x^{\tilde{\alpha}}$ and $\alpha \sim_c \tilde{\alpha}$ if and only if there is a permutation $\pi \in \mathcal{S}_d$ such that $\pi(\alpha)=\tilde{\alpha}$.
We extend the action of $\mathcal{S}_d$ to noncommutative homogeneous polynomials of degree $d$ by
\[
\pi (p(x))= \sum_{|\alpha|=d} {P_\alpha x^{\pi(\alpha)}}.
\]
Then $p$ meets the compatibility condition if and only if
\[
\pi(p(x))=p(x)
\]
for all permutations $\pi \in \mathcal{S}_d$.
The following lemma shows that the compatibility condition is necessary for existence of an NC Waring decomposition.
\begin{lem} \label{NecessaryCond}
If a NC homogeneous polynomial of degree $d$
has a $t$-term NC Waring decomposition,
then the compatibility condition \eqref{eq:Paequiv} holds. Moreover, if $p$ meets the compatibility condition, then $p$ has a $t$-term NC Waring decomposition over the complex numbers (resp. real numbers) if and only if
\begin{equation} \label {Peq}
P_\alpha = \sum_{s=1}^t \prod_{j=1}^g \left( A_j^s \right)^{\mathbbm{1}_j^\alpha}
\end{equation}
has a solution $A_j^s \in \mathbbm{C} ( \textrm{resp. } A_j^s \in \mathbbm{R} ).$
\end{lem}
\begin {proof}
$p$ has a $t$-term Waring decomposition \ if and only if
\begin{equation*}
\label{eq:pcore}
\sum_{|\alpha|=d} P_\alpha x^\alpha = \sum_{s=1}^t [L_s(x)]^d
= \sum_{s=1}^t \sum_{|\alpha|=d} \left( \prod_{i=1}^d A_{\alpha_i}^s \right) x^\alpha
= \sum_{|\alpha|=d} \left( \sum_{s=1}^t \prod_{i=1}^d A_{\alpha_i}^s \right) x^\alpha.
\end{equation*}
Comparing the coefficients of $x^\alpha$ on both sides, we get
\begin{equation}
\label{eq:pwkeyG}
P_\alpha = \sum_{s=1}^t \prod_{i=1}^d A_{\alpha_i}^s
= \sum_{s=1}^t \prod_{j=1}^g \left( A_j^s \right)^{\mathbbm{1}_j^\alpha}
\end{equation}
This also implies $P_\alpha = P_{\tilde{\alpha}}$ if $\mathbbm{1}_j^\alpha
= \mathbbm{1}_j^{\tilde{\alpha}}$ for all $1 \le j \le g $.
\end {proof}
\begin{exa}
\rm A NC homogeneous polynomial $p(x) = \sum_\alpha {P_\alpha x^\alpha}$ has the complex (resp. real) $2$-term Waring decomposition \
$$
p(x) = (a x_1 + c x_2)^3 + (b x_1 + d x_2)^3
$$
if and only if
\begin{equation}
\begin{aligned}
P_{1,1,1} &
= a^3 + b^3
\\
P_{1,1,2} &= a^2 c + b^2 d = \frac 1 6 ((a+c)^3 +(b+d)^3- (a-c)^3-(b-d)^3)- \frac 1 3 P_{2,2,2}
\\
P_{1,2,2} &= a c^2 + b d^2 = \frac 1 6 ((a+c)^3 +(b+d)^3+ (a-c)^3+(b-d)^3) - \frac 1 3 P_{1,1,1}
\\
P_{2,2,2} &= c^3 + d^3
\end{aligned}
\end{equation}
has a solution $a,b,c,d \in \mathbbm{C}$ (resp. $\mathbbm{R}$).
\qed
\end{exa}
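The four coefficient identities in the example can be verified numerically. The following Python check is our own illustration, with arbitrarily chosen values of $a,b,c,d$:

```python
def waring2_coeffs(a, b, c, d):
    # Coefficients of p(x) = (a x1 + c x2)^3 + (b x1 + d x2)^3 by direct
    # NC expansion (each coefficient depends only on the multiset of indices).
    return {
        (1, 1, 1): a**3 + b**3,
        (1, 1, 2): a * a * c + b * b * d,
        (1, 2, 2): a * c * c + b * d * d,
        (2, 2, 2): c**3 + d**3,
    }


def rhs_identities(a, b, c, d):
    # Right-hand sides of the P_{1,1,2} and P_{1,2,2} identities above.
    P111 = a**3 + b**3
    P222 = c**3 + d**3
    P112 = ((a + c)**3 + (b + d)**3 - (a - c)**3 - (b - d)**3) / 6 - P222 / 3
    P122 = ((a + c)**3 + (b + d)**3 + (a - c)**3 + (b - d)**3) / 6 - P111 / 3
    return P112, P122
```

Running both sides on any sample values of $a,b,c,d$ confirms the identities exactly.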
\subsubsection{Reduction of NC Waring to Classical Waring}
\label{main_result}
\ \ \ \
There has been extensive work on the Waring problem in the classical commutative case and what we see in this section is that the NC Waring problem reduces to the commutative one.
\begin{lem}\label{lem:count_eta}
For an index tuple $\alpha$, let $\eta[\alpha]$ denote the number of index tuples $\tilde{\alpha}$ that satisfy $\bmone^\alpha_j = \bmone^{\talpha}_j$ for all $1 \le j \le g$. Then
$$
\eta[\alpha] = \frac {d !} {\prod_{j = 1}^{g} (\mathbbm{1}_j^\alpha) !}
$$
\end{lem}
\begin{proof}
The problem is equivalent to counting the distinct $d$-tuples that can be formed by permuting the entries of
$
\alpha = ( \alpha_1, \alpha_2, \dots, \alpha_d )
$, which gives
$$
\eta[\alpha] = \frac {\text{\# of permutations of }d\text{ items}} {\text{\# of permutations of repetitions}} =\frac {d !} {\prod_{j = 1}^{g} (\mathbbm{1}_j^\alpha) !}.
$$
\end{proof}
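The counting formula is easy to sanity-check by brute force. A minimal Python sketch of our own, comparing the multinomial formula with direct enumeration of permuted tuples:

```python
from itertools import permutations
from math import factorial


def eta(alpha, g):
    # eta[alpha] = d! / prod_j (1_j^alpha)!, where 1_j^alpha counts
    # the occurrences of the letter j in alpha.
    d = len(alpha)
    denom = 1
    for j in range(1, g + 1):
        denom *= factorial(alpha.count(j))
    return factorial(d) // denom


def eta_brute(alpha):
    # Number of distinct tuples obtainable by permuting the entries of alpha.
    return len(set(permutations(alpha)))
```

For example, $\alpha = (1,2,1,2)$ gives $\eta[\alpha] = 4!/(2!\,2!) = 6$ by either method.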
\begin{thm} \label{thm:LinearWaring}
Suppose $p$ is an NC homogeneous polynomial which satisfies the compatibility conditions \eqref{eq:Paequiv}.
Then the commutative collapse $p^c$ has the Waring decomposition \
\begin{equation} \label{eq:comm_WR}
p^c(X)=\sum_{s=1}^t [ A^s_1X_1 + A^s_2X_2 + \cdots + A^s_g X_g]^d
\end{equation}
(with $X_i$ being commuting variables) if and only if $p$ has the
NC Waring decomposition
\begin{equation} \label{eq:nc_WR}
p(x)=\sum_{s=1}^t [ A^s_1x_1 + A^s_2x_2 + \cdots +A^s_g x_g]^d .
\end{equation}
Note that the number of terms is the same and the
real coefficients
(resp. complex coefficients) $A_j^s$ are the same.
\end{thm}
\def{\mathcal R}{{\mathcal R}}
\begin{proof}
The proof begins by laying out the algebraic connection
between $p$ and $p^c$. Let ${\mathcal R}$ denote a set consisting of one representative from each $\sim_c$ equivalence class. Then from \eqref{eq:Paequiv}, the NC polynomial
$
p(x) = \sum_{|\alpha| = d} P_\alpha x^\alpha
$
has
commutative collapse satisfying
$$
p^c(X)
=
\sum_{\alpha \in {\mathcal R} }
\quad \sum_{ \tilde{\alpha} \sim_c \alpha } P_{\tilde{\alpha}} X^{\alpha}
=
\sum_{\alpha \in {\mathcal R} }
P_\alpha^c \ X^{\alpha},
$$
where
$
P_\alpha^c =
\sum_{ \tilde{\alpha} \sim_c \alpha } P_{\tilde{\alpha}}
$.
Thus if $p$ satisfies the compatibility condition
\eqref{eq:Paequiv}, then
\begin{equation}
\label{ncToc}
P_\alpha^c = \eta[\alpha ] P_{\widetilde \alpha} \qquad for \ {\alpha} \in {\mathcal R}
\ \ and \ \
{\alpha} \sim_c {\widetilde \alpha}.
\end{equation}
Therefore, $p^c$ is the commutative collapse of a
compatible NC homogeneous degree $d$ polynomial $p$
iff $P_\alpha^c = \eta[\alpha ] P_\alpha$ for all index tuples $\alpha$ of length $d$.
Now we proceed to prove our theorem. Assume $p$ has the NC Waring decomposition \eqref{eq:nc_WR}; we shall obtain a reversible formula for the Waring decomposition of $p^c$.
By equation \eqref{ncToc} and Lemma \ref{NecessaryCond}, the commutative collapse $p^c$ is
\begin{equation} \label{eq:collapse}
p^c(X) =
\sum_{\alpha \in {\mathcal R}, \ | \alpha | = d }\eta[\alpha ] P_\alpha X^\alpha = \sum_{|\alpha|=d} P_\alpha X^\alpha = \sum_{|\alpha|=d} \sum_{s=1}^t \prod_{j=1}^g \left( A_j^s \right)^{\mathbbm{1}_j^\alpha} X^\alpha
\end{equation}
Thus
\begin{equation} \label{eq:collapse2}
p^c(X)
= \sum_{s=1}^t \sum_{|\alpha|=d} \prod_{i=1}^d A^s_{\alpha_i} X^\alpha
= \sum_{s=1}^t [ A^s_1X_1 + A^s_2X_2 + \cdots + A^s_g X_g]^d
\end{equation}
On the other hand, suppose $p$'s commutative collapse, $p^c$, has the commutative Waring decomposition \ \eqref{eq:comm_WR}, then
the calculations in \eqref{eq:collapse} and \eqref{eq:collapse2} can be reversed. By comparing coefficients, this is equivalent to
$$
P^c_\alpha = \eta[\alpha ]
\sum_{s=1}^t \prod_{j=1}^g \left( A_j^s \right)^{\mathbbm{1}_j^\alpha}
$$
for all $\alpha \in {\mathcal R}$.
Therefore by \eqref{ncToc},
$p$ satisfies
$$
P_\alpha =\sum_{s=1}^t \prod_{j=1}^g \left( A_j^s \right)^{\mathbbm{1}_j^\alpha}
$$
for all index tuples $\alpha$ of length $d$. Hence by Lemma \ref{NecessaryCond}, $p$ has the Waring decomposition \ \eqref{eq:nc_WR}.
Thus under the compatibility condition \eqref{eq:Paequiv}, the NC polynomial $p$ has a Waring decomposition \ iff its commutative collapse $p^c$ has the same Waring decomposition.
\end{proof}
\section {The general noncommutative Waring problem} \label{sec:general}
\ \ \ \
We now consider a more general situation of which the problem in the preceding section is the base case. As you will see, the bookkeeping and notation are formidable, so it is very helpful to have done a simpler case. In the previous section our focus was to determine if a degree $d$ noncommutative \ homogeneous polynomial can be expressed as sums of powers of linear terms. Now we examine when a degree ${\delta} d$ noncommutative \ homogeneous polynomial can be expressed as sums of powers of homogeneous degree ${\delta}$ terms.
\subsection{Problem formulation and notation}
\ \ \ \
Let \index{$T^g_\delta$} $T^g_\delta$ be the set of all possible $\delta$-tuples whose elements are integers between $1$ and $g$, i.e.,
$$
T^g_\delta = \{ (\alpha^1, \alpha^2, \dots, \alpha^\delta) \mid 1 \le \alpha^i \le g \}.
$$
Additionally, define \index{$( T^g_\delta )^d $} $( T^g_\delta )^d $ by
$$
( T^g_\delta )^d = \{ (\alpha_1, \alpha_2, \dots, \alpha_d) \mid \alpha_i \in T^g_\delta \}.
$$
That is, $( T^g_\delta )^d$ is the set of $d$-tuples of $\delta$ tuples of indices.
For any $\alpha = (\alpha_1, \dots, \alpha_d) \in (T^g_\delta)^d$, where $\alpha_i = (\alpha_i^1, \dots, \alpha_i^{\delta}) \in T^g_\delta, $
we can write
\[
x^{\alpha} = x^{\alpha_1}x^{\alpha_2} \dots x^{\alpha_d}.
\]
It is natural to identify $ x^\alpha$ with
\[
x_{\alpha_1^1} x_{\alpha_1^2} \dots x_{\alpha_1^\delta} x_{\alpha_2^1}
\dots x_{\alpha_d^1}
\dots x_{\alpha_d^\delta}.
\]
Recall our notation for a degree $\delta$ homogeneous polynomial is
$$
H(x) = \sum_{\beta \in T^g_\delta} A_{\beta} x^\beta,
$$
where $A_{\beta} = A_{(\beta^1, \beta^2, \dots, \beta^\delta)} \in \mathbb{C}$.
\begin{rmk}
\rm For any $ \alpha = (\alpha_1, \alpha_2, \dots, \alpha_d) \in (T^g_\delta)^d$, we can identify
$$\alpha = ((\alpha_1^1, \alpha_1^2, \dots, \alpha_1^\delta), \dots, (\alpha_d^1, \alpha_d^2, \dots, \alpha_d^\delta))$$
with
$$
(\alpha_1^1, \alpha_1^2, \dots, \alpha_1^\delta, \dots, \alpha_d^\delta) \in T^g_{\delta d}.$$ On the other hand, for any element of $T^g_{\delta d}$, we can reverse this identification and form groups of size $\delta$ to get a $d$-tuple of $\delta$-tuples. Hence $(T^g_\delta)^d$ and $T^g_{\delta d}$ are isomorphic
and we let $\tau$ \index{$\tau$} denote the isomorphism
$$\tau : T^g_{\delta d} \to (T^g_\delta)^d$$
which accomplishes this grouping.
\qed
\end{rmk}
\bigskip
\noindent
{\bf The} \df{General NC Waring Problem:}
{\bf
\ \ \ \ Given a NC homogeneous degree $\delta d$ polynomial $p$, does it have a $t$-term $d^{th}$ power real NC Waring (resp. complex NC Waring)
decomposition of degree $\delta$? That is, can $p(x)$ be written as
\begin{equation}
\label{eq:deldWaring}
p(x) = \sum_{s = 1}^t (H_s(x))^d = \sum_{s = 1}^t \left ( \sum_{\beta \in T^g_\delta} A^s_{\beta} x^{\beta} \right )^d?
\end{equation}}
\bigskip
We call this problem the $({\delta},d)$-NC Waring problem and say a decomposition of the form \eqref{eq:deldWaring} is a \df{$t$-term $({\delta},d)$-NC Waring decomposition}. Similarly, for a commutative polynomial $p^c$, we say a decomposition of the form \eqref{eq:deldWaring} (with $x^\beta$ replaced by $X^\beta$) is a \df{$t$-term $({\delta},d)$-Waring decomposition}. Note that the problem treated in Section \ref{sec:linear} is exactly the $(1,d)$-NC Waring problem.
An obvious fact is that if $p$ is a degree $\delta d$ NC homogeneous polynomial and $p$ has a $t$-term $(\delta,d)$-NC Waring decomposition,
then its commutative collapse $p^c$ has a $t$-term $(\delta,d)$-Waring decomposition.
\subsubsection{Tuple indicator functions}
\ \ \ \
We now extend the notion of indicator function to tuples of ${\delta}$-tuples.
For two $\delta$-tuples $\beta,\gamma \in T^g_{\delta}$, denote
\index{$\mathbbm{1}_\beta^{\gamma}$ }
$$
\mathbbm{1}_\beta^{\gamma} =
\begin{cases}
1 & \textrm{if } \gamma = \beta \\
0 & \textrm{otherwise}
\end{cases}.
$$
Then for an index tuple $\mu \in (T^g_\delta)^d$, the number of times a particular $\delta$-tuple $\beta \in T^g_\delta$ appears in $\mu$ is
$$
\mathbbm{1}_\beta^\mu := \sum_{k=1}^d \mathbbm{1}_\beta^{\mu_k}.
$$
Furthermore, denote
\begin{equation}\label{eq:1abi}
\mathbbm{1}_i^\mu
:= \sum_{\beta \in T^g_\delta, i \in \beta}
\mathbbm{1}^\mu_\beta
= \sum_{\beta \in T^g_\delta} \mathbbm{1}^\mu_\beta \bmone^\beta_i
\end{equation}
as the number of times the integer $i$ appears among the $\delta$-tuples of $\mu$.
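The identity \eqref{eq:1abi} can be checked on small examples. An illustrative Python sketch (the function names are ours):

```python
from itertools import product


def all_blocks(g, delta):
    # T^g_delta: all delta-tuples with entries in {1, ..., g}.
    return list(product(range(1, g + 1), repeat=delta))


def ind_block(mu, beta):
    # 1_beta^mu: how often the block beta appears among the blocks of mu.
    return sum(1 for blk in mu if blk == beta)


def ind_letter(mu, i, g, delta):
    # 1_i^mu via the identity: sum over beta in T^g_delta of
    # 1_beta^mu times 1_i^beta (occurrences of i in beta).
    return sum(ind_block(mu, b) * b.count(i) for b in all_blocks(g, delta))
```

For $\mu = ((1,2),(2,1),(1,2))$ the block $(1,2)$ appears twice, and the letter $1$ appears three times in total, in agreement with the identity.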
\subsection{Main results on the general Waring decomposition}
\ \ \ \
Similar to Section 2, we first state a compatibility
condition which is necessary for the existence of a generalized NC Waring decomposition. We then prove that, if this condition holds, we can reduce the generalized NC Waring problem to a commutative one at the price of an increase in the number of variables.
\subsubsection{The Compatibility Condition}
\ \ \ \
The generalized version of the $\delta = 1$ compatibility condition is defined as follows:
\begin{defi}
A noncommutative homogeneous polynomial of degree $\delta d$ in $g$ variables of the form
\begin{equation} \label{eq:ncPoly}
p(x) = \sum_{\alpha \in {T^g_{\delta d} }} {P_\alpha x^\alpha} \qquad P_\alpha
\in \mathbb{R} \ or \ \mathbb{C}
\end{equation}
satisfies the \df {$\delta$-compatibility condition} means
\begin{equation} \label{eq:GeneralPaEquiv}
P_{\alpha} = P_{\tilde{\alpha}}
\end{equation}
for all index tuples $\alpha$, $\tilde{\alpha} \in {T^g_{\delta d} }$
such that $\bmone^{\tau(\alpha)}_\beta = \bmone^{\tau(\tilde \alpha) }_\beta \text{ for all }
\beta \in T^g_\delta $.
Consistent with this,
we define the \df{$\delta$-equivalence relation}, denoted $\sim_\delta$ \index{$\sim_\delta$}, on ${T^g_{\delta d} }$ by
$$
\alpha \sim_{\delta} \tilde{\alpha}
\quad
\text{iff}
\quad
\bmone^{\tau(\alpha)}_\beta = \bmone^{\tau(\tilde \alpha) }_\beta
$$
for all $\beta \in T^g_\delta$.
\qed \end{defi}
\begin{rmk}
\label{rem:tupequiv}
\rm Here are a few bookkeeping properties of $\delta$-equivalences.
\begin{enumerate}
\item
\label{it:1EquivIsCommEquiv}
We have $\alpha \sim_1 \tilde{\alpha}$ if and only if $\alpha \sim_c \tilde{\alpha}$.
\item
\label{it:monEquivCompare}
Let $\delta_1,\delta_2 \in \mathbb{N}$ and let $\alpha,\tilde{\alpha} \in T_{\delta_1 d}^g$. If $\delta_2$ divides $\delta_1$, then $\alpha \sim_{\delta_1} \tilde{\alpha}$ implies $\alpha \sim_{\delta_2} \tilde{\alpha}$. In the case where $\delta_2=1$ this follows from equation \eqref{eq:1abi}. The general case is similar.
\item
\label{it:polyEquivCompare}
Let $\delta_1, \delta_2, d \in \mathbb{N}$ and let $p$ be a degree $\delta_1 d$ NC homogeneous polynomial. If $\delta_2$ divides $\delta_1$ and $p$ satisfies the $\delta_2$-compatibility condition then $p$ satisfies the $\delta_1$-compatibility condition.
\end{enumerate}
Items \eqref{it:monEquivCompare} and \eqref{it:polyEquivCompare} highlight that, as $\delta$ grows, it becomes increasingly difficult for fixed monomials $\alpha$ and $\tilde{\alpha}$ of degree divisible by $\delta$ to be $\delta$-equivalent. As an immediate consequence, as $\delta$ grows, it becomes more likely that a fixed NC homogeneous polynomial $p$ of degree divisible by $\delta$ satisfies the $\delta$-compatibility condition. In the extreme case, monomials $\alpha$ and $\tilde{\alpha}$ of degree $\delta$ are $\delta$-equivalent if and only if $\alpha=\tilde{\alpha}$. As a result, every degree $\delta$ NC homogeneous polynomial satisfies the $\delta$-compatibility condition. \qed
\end{rmk}
\begin{exa}
\rm Let
\[
\alpha=x_1 x_2 x_2 x_1 \quad \quad \quad \quad \tilde{\alpha}=x_2 x_1 x_1 x_2.
\]
Then
\[
\alpha \sim_1 \tilde{\alpha} \quad \mathrm{and} \quad \alpha \sim_2 \tilde{\alpha} \quad \mathrm{however} \quad \alpha \not\sim_4 \tilde{\alpha}.
\]
Now let $p$ be the degree four homogeneous NC polynomial
\[
p(x)=\alpha+\tilde{\alpha}=x_1 x_2 x_2 x_1+x_2 x_1 x_1 x_2.
\]
Then $p$ satisfies the $2$-compatibility condition and the $4$-compatibility condition. However, $p$ does not satisfy the $1$-compatibility condition, since the coefficient of $x_1 x_1 x_2 x_2$ in $p$ is $0$ but the coefficient of $x_1 x_2 x_2 x_1$ is $1$ and
\[
x_1 x_1 x_2 x_2 \sim_1 x_1 x_2 x_2 x_1. \qed
\]
\end{exa}
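The $\delta$-equivalences in this example reduce to comparing multisets of $\delta$-blocks, which is straightforward to code. A hypothetical Python sketch of our own:

```python
from collections import Counter


def blocks(word, delta):
    # Split a flat index word into consecutive delta-blocks.
    assert len(word) % delta == 0
    return [tuple(word[i:i + delta]) for i in range(0, len(word), delta)]


def equiv(word1, word2, delta):
    # word1 ~_delta word2 iff their multisets of delta-blocks agree.
    return Counter(blocks(word1, delta)) == Counter(blocks(word2, delta))
```

Applied to the monomials above, $(1,2,2,1)$ and $(2,1,1,2)$ are $\sim_1$- and $\sim_2$-equivalent but not $\sim_4$-equivalent.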
The following lemma shows that the $\delta$-compatibility condition is necessary for the general NC Waring problem.
\begin{lem}
\label{lem:condgen}
Suppose a NC homogeneous polynomial $p$ of degree
$\delta d$ in $g$ variables has a $t$-term $({\delta},d)$-NC Waring decomposition ,
then $p$ satisfies the $\delta$-compatibility condition,
$P_{\alpha} = P_{\widetilde \alpha}$ if ${\alpha} \sim_\delta {\widetilde \alpha}$.
Here $p$ has coefficients $P_{\alpha}$.
Moreover, the $({\delta},d)$-NC
Waring problem has a
solution over the complex numbers (resp. real numbers) if and only if the equation
\begin{equation} \label {eq:pcoef}
P_\alpha = \sum_{s=1}^t \prod_{\beta \in T^g_\delta}
\left (A^s_\beta \right)^{\bmone^{\tau(\alpha)}_\beta}
\qquad {\alpha} \in T_{{\delta} d}^g
\end{equation}
has a solution $A_\beta^s \in \mathbbm{C}$ (resp. $A_\beta^s \in \mathbbm{R}$).
\end{lem}
\begin{proof}
$p$ has a $t$-term $(\delta,d)$-NC Waring decomposition
if and only if there exist degree $\delta$ homogeneous polynomials
$H_1, H_2, \dots, H_t$ satisfying
\begin{align}
\sum_{\alpha \in {T^g_{\delta d} }} P_\alpha x^\alpha
& = \sum_{s=1}^t [H_s(x)]^d
= \sum_{s=1}^t \left [\sum_{\beta \in T^g_\delta}
A_{\beta}^s x^{\beta} \right ]^d\\
%
& = \sum_{s=1}^t \sum_{\alpha \in {T^g_{\delta d} } }
\left( \prod_{ \substack { 1 \le j \le d}}
A_{\tau(\alpha)_j}^s x^{\tau(\alpha)_j}\right) \\
& = \sum_{\alpha \in {T^g_{\delta d} }}
\left( \sum_{s=1}^t \prod_{ \substack { 1 \le j \le d}}
A_{\tau(\alpha)_j}^s \right) x^\alpha.
\label{eq:pmain}
\end{align}
Comparing coefficients we see, equivalent to the $(\delta, d)$-NC Waring
decomposition is:
$$
P_\alpha = \sum_{s=1}^t \prod_{ \substack {
1 \le j \le d}} A_{\tau(\alpha)_j}^s
= \sum_{s=1}^t \prod_{\substack{j_1, \dots,j_\delta \\
1 \le j_k \le g}}
\left( A_{(j_1, \dots,j_\delta)}^s \right)^{\mathbbm{1}_{(j_1, \dots,j_\delta)}^{\tau(\alpha)}}
= \sum_{s=1}^t \prod_{\beta \in T^g_\delta} \left (A^s_\beta \right)^{\bmone^{\tau(\alpha)}_\beta},
$$
yielding \eqref{eq:pcoef}.
As a consequence
$
P_\alpha = P_{\tilde{\alpha}}
$
for any
$\alpha$
satisfying
$\bmone^{\tau(\alpha)}_\beta = \bmone^{\tau(\tilde \alpha) }_\beta$
for every
$
\beta \in T^g_\delta
$, yielding the first assertion of the lemma.
\end{proof}
\begin{exa}
\rm
Let
$$
p(x) = (x_1x_2 + x_1^2)(x_2x_1 + x_1^2)
= x_1x_2^2x_1 + x_1x_2x_1^2 + x_1^2x_2x_1 + x_1^4.
$$
Then $p$ is an example where there is no $({\delta},d)=(2,2)$-NC
Waring decomposition; indeed
the $2$-compatibility condition is violated because
$
P_{(1,1,1,2)} = 0 \neq 1 = P_{(1,2,1,1)}
$.
However, its commutative collapse does have the Waring decomposition :
\begin{equation*}
p^c(X)=X_1^2X_2^2 + 2 X_1^3 X_2 + X_1^4
= (X_1X_2 + X_1^2)^2. \qquad \qed
\end{equation*}
\end{exa}
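Checking the $\delta$-compatibility condition is a finite computation: group all index words by the multiset of their $\delta$-blocks and test whether each group carries a single coefficient. A small Python sketch of our own, applied to the example above:

```python
from collections import defaultdict
from itertools import product


def is_delta_compatible(P, delta, g):
    # P: coefficients of a homogeneous NC polynomial given as a dict
    # {index word (tuple of ints in 1..g): coefficient}; absent words have
    # coefficient 0.  Checks P_alpha = P_alpha~ whenever alpha and alpha~
    # have the same multiset of delta-blocks.
    length = len(next(iter(P)))
    d = length // delta
    classes = defaultdict(set)
    for w in product(range(1, g + 1), repeat=length):
        key = tuple(sorted(w[i * delta:(i + 1) * delta] for i in range(d)))
        classes[key].add(P.get(w, 0))
    return all(len(coeffs) == 1 for coeffs in classes.values())


# p(x) = x1 x2 x2 x1 + x1 x2 x1^2 + x1^2 x2 x1 + x1^4  (the example above)
p = {(1, 2, 2, 1): 1, (1, 2, 1, 1): 1, (1, 1, 2, 1): 1, (1, 1, 1, 1): 1}
```

The check reproduces the violation noted above: $(1,1,1,2)$ and $(1,2,1,1)$ share the block multiset $\{(1,1),(1,2)\}$ yet have coefficients $0$ and $1$.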
\subsection{Reduction to classical Waring in more variables}
\label{sec:GenWaringReduction}
\ \ \ \
To solve the general $(\delta,d)$-noncommutative \ Waring problem we reduce to the $\delta=1$ case solved by Theorem \ref{thm:LinearWaring}.
This reduction is accomplished by identifying a monomial $x^\beta$
with a new variable $z_\beta$. Namely, fix $\delta$ and define the map $\phi$ on monomials of the form $x^\beta$ for $\beta \in T_\delta^g$ by
$$
\phi (x^\beta) := z_\beta \qquad for \ each \ \beta \in T_\delta^g
$$
where the $z_\beta$ are noncommutative indeterminates indexed by elements of $T_\delta^g$.
We extend our definition of $\phi$ to a noncommutative homogeneous polynomial
\[
p(x)=\sum_{\mu \in (T_\delta^g)^d} P_\mu x^{\mu_1} x^{\mu_2} \cdots x^{\mu_d}
\]
of degree $\delta d$
by
\begin{equation}
\label{eq:phiOnPolys}
\phi(p(x))=\sum_{\mu \in (T_\delta^g)^d} P_\mu \phi(x^{\mu_1}) \phi(x^{\mu_2}) \cdots \phi(x^{\mu_d})=\sum_{\mu \in (T_\delta^g)^d} P_\mu z_{\mu_1} z_{\mu_2} \cdots z_{\mu_d}.
\end{equation}
\begin{lem}
\label{lem:phiAlgIso}
The map $\phi$ as defined in equation \eqref{eq:phiOnPolys} defines an algebra isomorphism from the algebra of noncommutative homogeneous polynomials of degree divisible by $\delta$ in the noncommutative indeterminates $x=(x_1,x_2, \dots, x_g)$ onto the algebra of noncommutative homogeneous polynomials in the noncommutative indeterminates $\{z_\beta\}_{\beta \in T_\delta^g}$.
\end{lem}
\begin{proof}
This is straightforward from the definition of $\phi$ on a noncommutative homogeneous polynomial of degree $d\delta$.
\end{proof}
We now give our main result for the $(\delta,d)$-NC Waring problem.
\begin{prop}
\label{prop:GenNCWaringMain}
Let $p$ be a noncommutative homogeneous polynomial of degree $\delta d$ in the indeterminate $x=(x_1, \dots, x_g)$, and let $\phi$ be as defined in equation \eqref{eq:phiOnPolys}. Then we have the following.
\begin{enumerate}
\item
\label{it:GenMain1}
$p(x)$ has a $t$-term $({\delta},d)$-noncommutative \ Waring decomposition if and only if
$\phi(p(x))$ has a $t$-term $(1,d)$-noncommutative Waring decomposition.
\item
\label{it:GenMain2}
$p(x)$ satisfies the ${\delta}$-compatibility condition if and only if $\phi(p(x))$ satisfies the $1$-compatibility condition.
\item
\label{it:GenMain3}
$p(x)$ has a $t$-term $({\delta},d)$-noncommutative \ Waring decomposition if and only if $p(x)$ satisfies the ${\delta}$-compatibility condition and the commutative collapse of $\phi(p(x))$ has a $t$-term $(1,d)$-Waring decomposition.
\end{enumerate}
\end{prop}
\begin{proof}
To prove item \eqref{it:GenMain1}, assume $p(x)$ has a $t$-term $(\delta,d)$-noncommutative Waring decomposition
\[
p(x)= \sum_{s=1}^t [\sum_{\beta \in T_{\delta}^g } A^s_\beta \; x^\beta ]^d.
\]
By Lemma \ref{lem:phiAlgIso}, $\phi$ is an algebra isomorphism so
\[
\phi(p(x))=\phi\left(\sum_{s=1}^t [\sum_{\beta \in T_{\delta}^g } A^s_\beta \; x^\beta ]^d \right) = \sum_{s=1}^t [\sum_{\beta \in T_{\delta}^g } A^s_\beta \; \phi(x^\beta) ]^d = \sum_{s=1}^t [\sum_{\beta \in T_{\delta}^g } A^s_\beta \; z_\beta ]^d.
\]
This shows $\phi(p(x))$ has a $t$-term $(1,d)$ noncommutative Waring decomposition. The reverse direction follows from the same reasoning using $\phi^{-1}$ in place of $\phi$.
To prove item \eqref{it:GenMain2} let
\[
p(x)=\sum_{\mu \in (T_\delta^g)^d} P_\mu x^{\mu_1} x^{\mu_2} \cdots x^{\mu_d}.
\]
Then
\[
\phi(p(x))=\sum_{\mu \in (T_\delta^g)^d} P_\mu z_{\mu_1} z_{\mu_2} \cdots z_{\mu_d}.
\]
Observe that
\[
(\mu_1, \dots, \mu_d) \sim_1 (\tilde \mu_1, \dots, \tilde \mu_d),
\]
where the $\mu_j$ are viewed as elements of the index set $T_\delta^g$, if and only if
\[
(\mu_1, \dots, \mu_d) \sim_{\delta} (\tilde \mu_1, \dots, \tilde \mu_d),
\]
where $(\mu_1, \dots, \mu_d)$ is viewed as an element of $T^g_{\delta d}$ whose $\delta$-blocks are the $\mu_j$. It follows that
\[
P_{(\mu_1, \dots, \mu_d)}=P_{(\tilde \mu_1, \dots, \tilde \mu_d)} \quad \quad \mathrm{for\ all\ } (\mu_1, \dots, \mu_d) \sim_1 (\tilde \mu_1, \dots, \tilde \mu_d),
\]
where the $\mu_j$ are viewed as elements of the index set $T_\delta^g$, if and only if
\[
P_{(\mu_1, \dots, \mu_d)}=P_{(\tilde \mu_1, \dots, \tilde \mu_d)} \quad \quad \mathrm{for\ all\ } (\mu_1, \dots, \mu_d) \sim_{\delta} (\tilde \mu_1, \dots, \tilde \mu_d),
\]
where $(\mu_1, \dots, \mu_d)$ is viewed as an element of $T^g_{\delta d}$ whose $\delta$-blocks are the $\mu_j$.
Item \eqref{it:GenMain3} is an immediate consequence of items \eqref{it:GenMain1} and \eqref{it:GenMain2} with Theorem \ref{thm:LinearWaring}, our main result for $(1,d)$-NC Waring decompositions.
\end{proof}
\subsection{Additional variables are necessary for the reduction}
\label{sec:NeedExtraVars}
\ \ \ \
It is tempting to try to solve the general $({\delta},d)$-NC Waring problem by reducing to the commutative case without introducing additional variables. This section will show that this is not possible.
One may hope that the following are true:
\begin{enumerate}
\item
\label{it:comm_WR_general}
\textit{If $p$ is a degree $\delta d$ NC homogeneous polynomial, which satisfies the $\delta$-compatibility condition \eqref{eq:GeneralPaEquiv},
then its commutative collapse $p^c$ has the Waring decomposition \
\begin{equation} \label{comm_WR_general}
p^c(X) = \sum_{s = 1}^t \left ( \sum_{\beta \in T^g_\delta} A^s_{\beta} X^{\beta} \right )^d
\end{equation}
(with $X_i$ being commuting variables) if and only if $p$ has the
NC Waring decomposition \
\begin{equation} \label{nc_WR_general}
p(x)=\sum_{s=1}^t \left ( \sum_{\beta \in T^g_\delta} A^s_{\beta} x^{\beta} \right )^d.
\end{equation}
}
\item
\label{it:nc_WR_general}
\textit{The commutative collapse $p^c$ of $p$
has a $t$-term $({\delta},d)$-NC Waring decomposition
iff
the commutative collapse
$\phi(p)^c$ of $\phi(p)$ has a $t$-term
$(1,d)$-NC Waring decomposition.}
\end{enumerate}
The following polynomial gives a counterexample to both items. Let
\[
p(x)=x_1^4+x_1 x_2 x_2 x_1 + x_2 x_1 x_1 x_2 +x_2^4
\]
and let $\delta=d=2$. Then $p$ satisfies the $2$-compatibility condition. We will show that the commutative collapse of $p$ has a two term $(2,2)$-Waring decomposition \ but that $p$ does not have a two term $(2,2)$-NC Waring decomposition.
It is straightforward to check
\[
p^c(X)=X_1^4+2X_1^2 X_2^2 +X_2^4=(X_1^2+X_2^2)^2.
\]
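The collapse computation can be verified mechanically. A minimal Python sketch of our own, representing NC monomials as index words and commutative monomials as sorted words:

```python
from collections import Counter


def collapse(P):
    # Commutative collapse: sum the coefficients of NC monomials having the
    # same multiset of variable indices (encoded as a sorted index word).
    out = Counter()
    for word, coeff in P.items():
        out[tuple(sorted(word))] += coeff
    return dict(out)


# p(x) = x1^4 + x1 x2 x2 x1 + x2 x1 x1 x2 + x2^4
p = {(1, 1, 1, 1): 1, (1, 2, 2, 1): 1, (2, 1, 1, 2): 1, (2, 2, 2, 2): 1}

# (X1^2 + X2^2)^2 = X1^4 + 2 X1^2 X2^2 + X2^4
square = {(1, 1, 1, 1): 1, (1, 1, 2, 2): 2, (2, 2, 2, 2): 1}
```

The collapse of $p$ agrees term by term with $(X_1^2+X_2^2)^2$.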
Item \eqref{it:comm_WR_general} would imply that
\[
p(x)=(x_1^2+x_2^2)^2=x_1^4+x_1^2 x_2^2+x_2^2 x_1^2 +x_2^4 \neq x_1^4+x_1 x_2 x_2 x_1 + x_2 x_1 x_1 x_2 +x_2^4 = p(x)
\]
which is a contradiction. This shows that item \eqref{it:comm_WR_general} cannot be correct.
In fact, $p$ does not have a two term $(2,2)$-NC Waring decomposition. To check this set
\[
z_{(1,1)}=x_1 x_1 \quad \quad z_{(1,2)}= x_1 x_2 \quad \quad z_{(2,1)}=x_2 x_1 \quad \quad z_{(2,2)}=x_2 x_2.
\]
Then $\phi(p) (z)=z_{(1,1)}^2+z_{(1,2)} z_{(2,1)} +z_{(2,1)} z_{(1,2)} +z_{(2,2)}^2$ satisfies the $1$-compatibility condition but
\[
(\phi(p))^c (Z)= Z_{(1,1)}^2+ 2 Z_{(1,2)} Z_{(2,1)} + Z_{(2,2)} ^2
\]
does not have a two term $(1,2)$-Waring decomposition.\footnote{This is a straightforward calculation.}
It follows from Proposition \ref{prop:GenNCWaringMain} \eqref{it:GenMain3} that $p$ does not have a two term $(2,2)$-NC Waring decomposition.
% arXiv:1903.05910, "Efficient evaluation of noncommutative polynomials using tensor and noncommutative Waring decompositions"
https://arxiv.org/abs/1608.01666 | A central limit theorem for a new statistic on permutations | This paper does three things: It proves a central limit theorem for novel permutation statistics (for example, the number of descents plus the number of descents in the inverse). It provides a clear illustration of a new approach to proving central limit theorems more generally. It gives us an opportunity to acknowledge the work of our teacher and friend B. V. Rao. | \section{Introduction}
Let $S_n$ be the group of all $n!$ permutations of $\{1,\ldots, n\}$. A variety of statistics $T(\pi)$ are used to enable tasks such as tests of randomness of a time series, comparison of voter profiles when candidates are ranked, non-parametric statistical tests and evaluation of search engine rankings. A basic feature of a permutation is a local `up-down' pattern. Let the number of descents be defined~as
\[
D(\pi) := |\{i: 1\le i\le n-1,\, \pi(i+1)<\pi(i)\}|\,.
\]
For example, when $n=10$, the permutation $\pi = (\underline{7} \; 1\; \underline{5}\; 3\; \underline{10}\; \underline{8}\; \underline{6}\; 2\; 4\; 9)$ has $D(\pi)= 5$. The enumerative theory of permutations by descents has been intensively studied since Euler. An overview is in Section \ref{overview} below. In seeking to make a metric on permutations using descents we were led to study
\begin{equation}\label{tpi}
T(\pi) := D(\pi)+D(\pi^{-1})\,.
\end{equation}
For a statistician or a probabilist it is natural to ask ``Pick $\pi\in S_n$ uniformly; what is the distribution of $T(\pi)$?'' While a host of limit theorems are known for $D(\pi)$, we found $T(\pi)$ challenging. A main result of this paper establishes a central limit theorem.
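These statistics are immediate to compute in one-line notation; the following pure-Python sketch (helper names are ours) verifies the ten-element example above.

```python
def descents(pi):
    """D(pi): number of indices i with pi(i+1) < pi(i), pi in one-line notation."""
    return sum(1 for i in range(len(pi) - 1) if pi[i + 1] < pi[i])

def inverse(pi):
    """One-line notation of pi^{-1}: inv[v-1] is the position of value v in pi."""
    inv = [0] * len(pi)
    for pos, val in enumerate(pi, start=1):
        inv[val - 1] = pos
    return inv

pi = [7, 1, 5, 3, 10, 8, 6, 2, 4, 9]
print(descents(pi))                           # 5, matching the underlined entries
print(descents(pi) + descents(inverse(pi)))   # the statistic T(pi)
```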
\begin{thm}\label{clt}
For $\pi$ chosen uniformly from the symmetric group $S_n$, and $T(\pi)$ defined by \eqref{tpi}, for $n\ge 2$
\[
\mathbb{E}(T(\pi)) = n-1\,,\; \; \mathrm{Var}(T(\pi)) = \frac{n+7}{6}-\frac{1}{n}\,,
\]
and, normalized by its mean and variance, $T(\pi)$ has a limiting standard normal distribution.
\end{thm}
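The exact mean and variance formulas can be confirmed by brute-force enumeration over $S_n$ for small $n$; a sketch in exact rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations

def descents(p):
    return sum(1 for i in range(len(p) - 1) if p[i + 1] < p[i])

def inverse(p):
    inv = [0] * len(p)
    for pos, val in enumerate(p, start=1):
        inv[val - 1] = pos
    return inv

def T_moments(n):
    """Exact mean and variance of T(pi) = D(pi) + D(pi^{-1}) over all of S_n."""
    vals = [descents(p) + descents(inverse(p))
            for p in permutations(range(1, n + 1))]
    N = len(vals)
    mean = Fraction(sum(vals), N)
    var = Fraction(sum(v * v for v in vals), N) - mean ** 2
    return mean, var

for n in range(2, 7):
    mean, var = T_moments(n)
    assert mean == n - 1
    assert var == Fraction(n + 7, 6) - Fraction(1, n)
```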
The proof of Theorem \ref{clt} uses a method of proving central limit theorems for complicated functions of independent random variables due to Chatterjee~\cite{cha08}. This seems to be a useful extension of Stein's approach. Indeed, we were unable to prove Theorem~\ref{clt} by standard variations of Stein's method such as dependency graphs, exchangeable pairs or size-biased couplings. Theorem \ref{clt} is a special case of the following more general result. Call a statistic $F$ on $S_n$ ``local of degree $k$'' if $F$ can be expressed as
\[
F(\pi) = \sum_{i=0}^{n-k} f_i(\pi)\,,
\]
where the quantity $f_i(\pi)$ depends only on the relative ordering of $\pi(i+1),\ldots, \pi(i+k)$. For example, the number of descents is local of degree $2$, and the number of peaks is local of degree $3$. We will refer to $f_0,\ldots, f_{n-k}$ as the ``local components'' of $F$.
\begin{thm}\label{genclt}
Suppose that $F$ and $G$ are local statistics of degree $k$ on $S_n$, as defined above. Suppose further that the absolute values of the local components of $F$ and $G$ are uniformly bounded by $1$. Let $\pi$ be chosen uniformly from $S_n$ and let
\[
W := F(\pi) + G(\pi^{-1})\,.
\]
Let $s^2 := \mathrm{Var}(W)$. Then the Wasserstein distance between $(W-\mathbb{E}(W))/s$ and the standard normal distribution is bounded by $C(n^{1/2}s^{-2} + n s^{-3}) k^3$, where $C$ is a universal constant.
\end{thm}
After the first draft of this paper was posted on arXiv, it was brought to our notice that the joint normality of $D(\pi)$, $D(\pi^{-1})$ was proved in Vatutin~\cite{vatutin96} in 1996 via a technical tour de force with generating functions. The asymptotic normality in Theorem \ref{clt} follows as a corollary of Vatutin's theorem. Theorem \ref{genclt} is a new contribution of this paper.
In outline, Section \ref{metrics} describes metrics on permutations and our motivation for the study of $T(\pi)$. Section \ref{overview} reviews the probability and combinatorics of $D(\pi)$ and $T(\pi)$. Section \ref{method} describes Chatterjee's central limit theorem. Section \ref{proof} proves Theorem \ref{clt}, and the proof of Theorem \ref{genclt} is in Section \ref{proof2}. Section \ref{future} outlines some other problems where the present approach should work.
\vskip.2in
\noindent{\bf Acknowledgments.} This work derives from conversations with Ron Graham. Walter Stromquist provided the neat formula for the variance of $T(\pi)$. Jason Fulman provided some useful references. Vladimir Vatutin brought the important reference \cite{vatutin96} to our notice. Finally, B.~V.~Rao has inspired both of us by the clarity, elegance and joy that he brings to mathematics.
\section{Metrics on permutations}\label{metrics}
A variety of metrics are in widespread use in statistics, machine learning, probability, computer science and the social sciences. They are used in conjunction with statistical tests, evaluation of election results (when voters rank order a list of candidates), and for combining results of search engine rankings. The book by Marden \cite{marden} gives a comprehensive account of various approaches to statistics on permutations. The book by Critchlow \cite{critchlow} extends the use of metrics on permutations to partially ranked data (top $k$ out of $n$) and other quotient spaces of the symmetric group. The Ph.D.~thesis of Eric Sibony (available on the web) has a comprehensive review of machine learning methods for studying partially ranked data. Finally, the book by Diaconis \cite[Chapter 6]{diaconis} contains many metrics on groups and an extensive list of applications.
Some widely used metrics are:
\begin{itemize}
\item Spearman's footrule: $d_s(\pi, \sigma) = \sum_{i=1}^n |\pi(i)-\sigma(i)|$.
\item Spearman's rho: $d_\rho^2(\pi, \sigma) = \sum_{i=1}^n (\pi(i)-\sigma(i))^2$.
\item Kendall's tau: $d_\tau(\pi,\sigma) = $ minimum number of adjacent transpositions to bring $\sigma$ to $\pi$.
\item Cayley: $d_C(\pi, \sigma) = $ minimum number of transpositions to bring $\sigma$ to $\pi$.
\item Hamming: $d_H(\pi, \sigma) = |\{i: \pi(i)\ne \sigma(i)\}|$.
\item Ulam: $d_U(\pi,\sigma) = n - \text{length of longest increasing subsequence in } \pi\sigma^{-1}$.
\end{itemize}
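Sketches of these metrics in Python (one common convention: right invariance reduces each distance to a computation on the relative permutation $\pi\sigma^{-1}$; Kendall's tau is realized as an inversion count and Cayley's as $n$ minus the number of cycles, both standard facts):

```python
import bisect

def inverse(pi):
    inv = [0] * len(pi)
    for pos, val in enumerate(pi, start=1):
        inv[val - 1] = pos
    return inv

def relative(pi, sigma):
    """One-line notation of pi o sigma^{-1}."""
    sinv = inverse(sigma)
    return [pi[sinv[i] - 1] for i in range(len(pi))]

def footrule(pi, sigma):
    return sum(abs(p - s) for p, s in zip(pi, sigma))

def hamming(pi, sigma):
    return sum(1 for p, s in zip(pi, sigma) if p != s)

def kendall(pi, sigma):
    # minimum adjacent transpositions = inversions of the relative permutation
    rel = relative(pi, sigma)
    n = len(rel)
    return sum(1 for i in range(n) for j in range(i + 1, n) if rel[i] > rel[j])

def cayley(pi, sigma):
    # minimum transpositions = n minus the number of cycles of the relative permutation
    rel = relative(pi, sigma)
    seen, cycles = set(), 0
    for start in range(1, len(rel) + 1):
        if start in seen:
            continue
        cycles += 1
        v = start
        while v not in seen:
            seen.add(v)
            v = rel[v - 1]
    return len(rel) - cycles

def ulam(pi, sigma):
    # n minus the length of a longest increasing subsequence (patience sorting)
    rel, tails = relative(pi, sigma), []
    for v in rel:
        k = bisect.bisect_left(tails, v)
        if k == len(tails):
            tails.append(v)
        else:
            tails[k] = v
    return len(rel) - len(tails)
```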
All of these have known means, variances and limit laws \cite{diaconis}. Some of this is quite deep mathematically. For example, the limit law for Ulam's metric is the Tracy--Widom distribution of random matrix theory.
In addition to the metric properties, metrics can be classified by their invariance properties; for example, right invariance ($d(\pi, \sigma) = d(\pi\eta, \sigma\eta)$), left invariance ($d(\pi, \sigma) = d(\eta \pi, \eta \sigma)$), two-sided invariance ($d(\pi, \sigma) = d(\eta_1\pi\eta_2, \eta_1\sigma\eta_2)$), and conjugacy invariance ($d(\pi, \sigma) = d(\eta^{-1}\pi\eta, \eta^{-1}\sigma\eta)$). Common sense requires right invariance; if $\pi$ and $\sigma$ are the rankings of a class on the midterm and on the final, we would not want $d(\pi,\sigma)$ to depend on the class being listed by last name or identity number. All of the metrics above are right invariant. Only the Cayley and Hamming metrics are bi-invariant.
\vskip.1in
\noindent{\underline{A metric from descent structure?}}
It is natural to try and make a metric from descents. Let us call this $d_D(\pi, \sigma)$. By right invariance only the distance $d_D(\text{id}, \sigma)$ must be defined (and then $d_D(\pi, \sigma) = d_D(\text{id}, \sigma\pi^{-1})$). A zeroth try is $d^0_D(\text{id}, \sigma) = D(\sigma)$; at least $d_D^0(\text{id}, \text{id}) = 0$. However, symmetry requires $d_D^0(\text{id}, \sigma) = d^0_D(\sigma, \text{id}) = d_D^0(\text{id}, \sigma^{-1})$ and $D(\sigma)\ne D(\sigma^{-1})$ for many $\sigma$ (for example, when $n=4$, $\sigma = (2\; 4\; 1\; 3)$, $\sigma^{-1}= (3\; 1\; 4\; 2)$, $D(\sigma) = 1$, $D(\sigma^{-1})=2$). In small samples $D(\sigma)=D(\sigma^{-1})$ occurs fairly often. However, in Section \ref{proof} we will prove a bivariate central limit theorem for $D(\sigma)$, $D(\sigma^{-1})$, which suggests that the chance of $D(\sigma)=D(\sigma^{-1})$ is asymptotic to $Cn^{-1/2}$ when $n$ is large.
A next try is $d_D^1(\text{id}, \sigma) = D(\sigma)+D(\sigma^{-1})$. Then $d_D^1(\pi, \sigma) = D(\sigma \pi^{-1}) + D(\pi\sigma^{-1}) = d_D^1(\sigma, \pi)$. Alas, Ron Graham showed us simple examples where this definition fails to satisfy the triangle inequality! Take $\pi = (3\;4\;1\;2\;5)$, $\sigma =(1\;4\;5\;2\;3)$. A simple check shows that
\[
2+2 = d_D^1(\pi, \text{id}) + d_D^1(\text{id}, \sigma) < d_D^1(\pi, \sigma) = 6\,.
\]
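Graham's counterexample is quick to verify by machine; a sketch (`d1` is our name for the tentative distance $d_D^1$):

```python
def descents(p):
    return sum(1 for i in range(len(p) - 1) if p[i + 1] < p[i])

def inverse(p):
    inv = [0] * len(p)
    for pos, val in enumerate(p, start=1):
        inv[val - 1] = pos
    return inv

def compose(a, b):
    """(a o b)(i) = a(b(i)) in one-line notation."""
    return [a[b[i] - 1] for i in range(len(a))]

def d1(pi, sigma):
    """Tentative distance D(sigma pi^{-1}) + D(pi sigma^{-1})."""
    return (descents(compose(sigma, inverse(pi)))
            + descents(compose(pi, inverse(sigma))))

pi = [3, 4, 1, 2, 5]
sigma = [1, 4, 5, 2, 3]
ident = [1, 2, 3, 4, 5]
print(d1(pi, ident), d1(ident, sigma), d1(pi, sigma))  # 2 2 6: the triangle inequality fails
```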
The next idea does work. Form a graph with vertices the $n!$ permutations and an edge from $\pi$ to $\sigma$ with weight $D(\pi\sigma^{-1}) + D(\sigma \pi^{-1})$. Define $d_D^2(\pi, \sigma)$ as the minimum sum of the weights of paths from $\pi$ to $\sigma$. Experiments show that {\it usually} the minimum path is the edge from $\pi$ to $\sigma$. But the example above shows this is not always the case. We believe that for almost all pairs the graph distance equals the edge weight.
The statistic $T(\pi) = D(\pi)+D(\pi^{-1})$ arose from these considerations. Of course, $T$ does not have to give rise to a metric to be a useful measure of disarray. The Kullback--Leibler `divergence' is a case in point.
\section{Combinatorics and probability for descents}\label{overview}
The study of descents starts with Euler. In studying power series which allow closed form evaluation, Euler showed that
\begin{equation}\label{euler}
\sum_{k=0}^\infty k^n t^k = \frac{A_n(t)}{(1-t)^{n+1}}\,,
\end{equation}
where the Eulerian polynomial is
\[
A_n(t) = \sum_{\pi\in S_n} t^{D(\pi)+1} = \sum_{i=1}^n A_{n,i} t^i\,,
\]
with $A_{n,i} = |\{\pi\in S_n: D(\pi)=i-1\}|$, the Eulerian numbers. Thus,
\[
\sum_{k=0}^\infty t^k = \frac{1}{1-t}\,,\;\;\sum_{k=0}^\infty kt^k = \frac{t}{(1-t)^2}\,, \;\;\sum_{k=0}^\infty k^2 t^k = \frac{t+t^2}{(1-t)^3}\, , \;\;\ldots
\]
Fulman \cite{fulman99} connects \eqref{euler} and other descent identities to `higher math'.
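Euler's identity \eqref{euler} can be checked coefficientwise for small $n$: expanding $1/(1-t)^{n+1} = \sum_m \binom{m+n}{n} t^m$, the coefficient of $t^k$ in $A_n(t)/(1-t)^{n+1}$ must equal $k^n$. A brute-force sketch (helper names are ours):

```python
from itertools import permutations
from math import comb

def eulerian_numbers(n):
    """A[i] = #{pi in S_n : D(pi) = i - 1} for i = 1..n, by brute force."""
    A = [0] * (n + 1)
    for p in permutations(range(1, n + 1)):
        d = sum(1 for i in range(n - 1) if p[i + 1] < p[i])
        A[d + 1] += 1
    return A

def series_coeff(n, k):
    """Coefficient of t^k in A_n(t) / (1 - t)^{n+1}."""
    A = eulerian_numbers(n)
    return sum(A[i] * comb(k - i + n, n) for i in range(1, min(k, n) + 1))

for n in (2, 3, 4):
    assert all(series_coeff(n, k) == k ** n for k in range(8))
```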
The Eulerian numbers and polynomials satisfy a host of identities and have neat generating functions. We recommend Carlitz~\cite{carlitz}, Petersen~\cite{petersen13}, Stanley~\cite[p.\ 6]{stanley77}, Graham--Knuth--Patashnik~\cite[Chapter 6]{grahametal}, the book by Petersen~\cite{petersen15} and the references in Sloane~\cite[Seq A008292]{sloane} for basics with pointers to a large literature.
The probability theory of descents is similarly well studied. The central limit theorem reads:
\begin{thm}\label{descent}
For $\pi$ chosen uniformly in $S_n$,
\[
\mathbb{E}(D(\pi)) = \frac{n-1}{2}\,,\;\;\mathrm{Var}(D(\pi)) = \frac{n+1}{12}\,,
\]
and, normalized by its mean and variance, $D(\pi)$ has a limiting standard normal distribution.
\end{thm}
\noindent{\it \underline{Remark.}} We point here to six different approaches to the proof of Theorem \ref{descent}. Each comes with an error term (for the Kolmogorov distance) of order $n^{-1/2}$. The first proof uses $m$-dependence: For $1\le i\le n-1$, let
\[
X_i(\pi) =
\begin{cases}
1&\text{ if } \pi(i+1)<\pi(i),\\
0 &\text{ else.}
\end{cases}
\]
It is easy to see that $X_1,\ldots, X_{n-1}$ are $2$-dependent. The central limit theorem follows. See Chen and Shao \cite{chenshao} for a version with error terms. A second proof uses a geometrical interpretation due to Stanley~\cite{stanley77}. Let $U_1,\ldots, U_n$ be independent uniform random variables on $[0,1]$. Stanley shows that for all $n$ and $0\le j\le n-1$,
\[
\mathbb{P}(D(\pi)=j) = \mathbb{P}(j < U_1+\cdots +U_n < j+1)\,.
\]
From here the classical central limit theorem for sums of i.i.d.~random variables gives the claimed result. A third proof due to Harper~\cite{harper} uses the surprising fact that the generating function $A_n(t)$ has all real zeros. Now, general theorems for such generating functions show that $D(\pi)$ has the same distribution as the sum of $n$ independent Bernoulli random variables with success probabilities determined by the zeros. Pitman \cite{pitman} surveys this topic and shows
\[
\sup_{-\infty<x<\infty}\left|\mathbb{P}\left(\frac{D(\pi)-\frac{n-1}{2}}{\sqrt{\frac{n+1}{12}}} \le x\right) - \Phi(x)\right|\le \sqrt{\frac{12}{n}}\,,
\]
where $\Phi$ is the standard normal cumulative distribution function.
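Stanley's uniform-sum identity above can be verified exactly for small $n$: the interval probabilities of the sum of $n$ uniforms (the Irwin--Hall distribution) are rational and computable by inclusion-exclusion, so they can be compared against brute-force descent counts. A sketch:

```python
from fractions import Fraction
from itertools import permutations
from math import comb, factorial

def irwin_hall_cdf(n, x):
    """P(U_1 + ... + U_n <= x) at an integer point x, by inclusion-exclusion."""
    return Fraction(sum((-1) ** k * comb(n, k) * (x - k) ** n
                        for k in range(x + 1)), factorial(n))

for n in (2, 3, 4, 5):
    counts = [0] * n  # counts[j] = #{pi in S_n : D(pi) = j}
    for p in permutations(range(1, n + 1)):
        counts[sum(1 for i in range(n - 1) if p[i + 1] < p[i])] += 1
    for j in range(n):
        assert (irwin_hall_cdf(n, j + 1) - irwin_hall_cdf(n, j)
                == Fraction(counts[j], factorial(n)))
```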
A fourth approach due to Fulman \cite{fulman04} uses a clever version of Stein's method of exchangeable pairs (see also Conger \cite{conger}). Each of the papers cited above gives pointers to yet other proofs. A related probabilistic development is in Borodin, Fulman and Diaconis~\cite{borodinetal}. They show that the descent process is a determinantal point process and hence the many general theorems for determinantal point processes apply.
The fifth approach is due to Bender~\cite{bender}, who uses generating functions to prove the CLT. Lastly, David and Barton~\cite{davidbarton} give a proof using the method of moments.
We make two points regarding the above paragraphs. First, all of the proofs depend on some sort of combinatorial magic trick. Second, we were unable to get any of these techniques to work for~$T(\pi)$.
There have been some applications of descents and related local structures (for example, peaks) in statistics. This is nicely surveyed in Warren and Seneta~\cite{warrenseneta} and Stigler \cite{stigler}.
\vskip.1in
\noindent{\underline{Joint distribution of $D(\pi)$, $D(\pi^{-1})$}}
There have been a number of papers that study the joint distribution of $D(\pi)$, $D(\pi^{-1})$ (indeed, along with other statistics). We mention Rawlings~\cite{rawlings} and Garsia and Gessel~\cite{garsiagessel}. As shown below, these papers derive generating functions in a sufficiently arcane form that we have not seen any way of deriving the information we need from them.
Two other papers seem more useful. The first, by Kyle Petersen \cite{petersen13}, treats only $D(\pi)$, $D(\pi^{-1})$ and is very accessible. The second, by Carlitz, Roselle and Scoville~\cite{carlitzetal}, gives useful recurrences via `manipulatorics'.
First, let
\[
A_{n,r,s} = |\{\pi\in S_n: D(\pi)=r-1,\, D(\pi^{-1}) = s-1\}| \,.
\]
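These joint counts are small enough to tabulate by brute force. The sketch below builds the table for $n=5$ and checks two easy facts: the symmetry $A_{n,r,s} = A_{n,s,r}$ (from the bijection $\pi \mapsto \pi^{-1}$) and that the row sums are the Eulerian numbers:

```python
from itertools import permutations

def descents(p):
    return sum(1 for i in range(len(p) - 1) if p[i + 1] < p[i])

def joint_table(n):
    """A[r][s] = #{pi in S_n : D(pi) = r - 1, D(pi^{-1}) = s - 1} (1-indexed)."""
    A = [[0] * (n + 1) for _ in range(n + 1)]
    for p in permutations(range(1, n + 1)):
        inv = [0] * n
        for pos, val in enumerate(p, start=1):
            inv[val - 1] = pos
        A[descents(p) + 1][descents(inv) + 1] += 1
    return A

A = joint_table(5)
assert all(A[r][s] == A[s][r] for r in range(1, 6) for s in range(1, 6))
assert [sum(A[r][1:]) for r in range(1, 6)] == [1, 26, 66, 26, 1]  # Eulerian numbers, n = 5
```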
A table of $A_{8,r,s}$ from Petersen \cite[p.~12]{petersen13} shows a roughly elliptic shape and suggests that the limiting distribution of $D(\pi)$, $D(\pi^{-1})$ might be a bivariate normal distribution with vanishing correlation. Theorem \ref{clt2} from Section \ref{proof} proves normality with limiting correlation zero.
Carlitz et al.~\cite{carlitzetal} delineate which $r$, $s$ are possible:
\[
A_{n,r,s}=0 \iff r\ge \frac{s-1}{s} n + 1\,.
\]
Let
\[
A_n(u,v) = \sum_{\pi\in S_n} u^{D(\pi)+1} v^{D(\pi^{-1})+1} = \sum_{i,j=1}^{n} A_{n,i,j} u^i v^j\,.
\]
Petersen \cite[Theorem 2]{petersen13} gives the following formula for the generating function defined above:
\[
\frac{A_n(u,v)}{(1-u)^{n+1}(1-v)^{n+1}} = \sum_{k,l\ge 0} {kl+n-1\choose n} u^k v^l\,.
\]
From this, he derives a recurrence which may be useful for deriving moments:
\begin{align*}
nA_n(u,v) &= (n^2uv + (n-1)(1-u)(1-v))A_{n-1}(u,v)\\
&\qquad + nuv(1-u)\fpar{}{u}A_{n-1}(u,v) + nuv(1-v) \fpar{}{v} A_{n-1}(u,v)\\
&\qquad + uv(1-u)(1-v) \mpar{}{u}{v} A_{n-1}(u,v)\,.
\end{align*}
From the above identity, he derives a complicated recurrence for $A_{n,i,j}$. We are interested in $D(\pi)+D(\pi^{-1})$, so the relevant generating function is
\[
A_n(u,u) = (1-u)^{2n+2} \sum_{k,l\ge 0} {kl+n-1\choose n} u^{k+l}\,.
\]
Finally, Carlitz et al.~\cite{carlitzetal} give
\[
\sum_{n=0}^\infty\sum_{i=0}^\infty \sum_{j=0}^\infty A_{n,i,j} z^ix^jy^n(1-x)^{-(n+1)} (1-z)^{-(n+1)} = \sum_{k=0}^\infty \frac{z^k}{1-x(1-y)^{-k}}\,.
\]
Complicated and seemingly intractable as they are, the generating functions displayed above are in fact amenable to analysis. In a remarkable piece of work from twenty years ago, Vatutin \cite{vatutin96} was able to use these generating functions to prove a class of multivariate central limit theorems for functions of $\pi$ and $\pi^{-1}$. The results were generalized to other settings in Vatutin~\cite{vatutin94} and Vatutin and Mikhailov~\cite{vm96}.
\section{The method of interaction graphs}\label{method}
One motivation for the present paper is to call attention to a new approach to proving central limit theorems for non-linear functions of independent random variables. Since most random variables can be so presented, the method has broad scope. The method is presented in \cite{cha08} in abstract form and used to solve a spatial statistics problem of Bickel; this involved `locally dependent' summands where the notion of local itself is determined from the data. A very different application is given in~\cite{chasound}. We hope that the applications in Sections \ref{proof} and \ref{future} will show the utility of this new approach.
The technique of defining `local neighborhoods' using the data was named `the method of interaction graphs' in \cite{cha08}. The method can be described very briefly as follows.
Let $\mathcal{X}$ be a set endowed with a sigma algebra and let $f:\mathcal{X}^n \rightarrow \mathbb{R}$ be a measurable map, where $n\ge 1$ is a given positive integer. Suppose that $G$ is a map that associates to every $x\in \mathcal{X}^n$ a simple graph $G(x)$ on $[n] := \{1,\ldots,n\}$. Such a map will be called a graphical rule on $\mathcal{X}^n$. We will say that a graphical rule $G$ is symmetric if for any permutation $\pi$ of $[n]$ and any $(x_1,\ldots,x_n)\in \mathcal{X}^n$, the set of edges in $G(x_{\pi(1)}, \ldots, x_{\pi(n)})$ is exactly
\[
\{\{\pi(i),\pi(j)\}: \{i,j\} \text{ is an edge of } G(x_1,\ldots,x_n)\}.
\]
For $m \ge n$, a symmetric graphical rule $G'$ on $\mathcal{X}^m$ will be called an extension of $G$ if for any $(x_1,\ldots,x_m) \in \mathcal{X}^m$, $G(x_1,\ldots,x_n)$ is a subgraph of $G'(x_1,\ldots,x_m)$.
Now take any $x,x'\in \mathcal{X}^n$. For each $i\in [n]$, let $x^i$ be the vector obtained by replacing $x_i$ with $x'_i$ in the vector $x$. For any two distinct elements $i$ and $j$ of~$[n]$, let $x^{ij}$ be the vector obtained by replacing $x_i$ with $x'_i$ and $x_j$ with~$x_j'$. We will say that the coordinates $i$ and $j$ are non-interacting for the triple $(f,x,x')$ if
\[
f(x)-f(x^j) = f(x^i) - f(x^{ij}).
\]
We will say that a graphical rule $G$ is an interaction rule for a function $f$ if for any choice of $x,x'$ and $i,j$, the event that $\{i,j\}$ is not an edge in the graphs $G(x)$, $G(x^i)$, $G(x^j)$, and $G(x^{ij})$ implies that $i$ and $j$ are non-interacting coordinates for the triple $(f,x,x')$. The following theorem implies central limit behavior if one can construct an interaction graph that has, with high probability, small maximum degree.
\begin{thm}[\cite{cha08}]\label{main}
Let $f:\mathcal{X}^n \rightarrow \mathbb{R}$ be a measurable map that admits a symmetric interaction rule $G$.
Let $X_1,X_2,\ldots $ be a sequence of i.i.d.\ $\mathcal{X}$-valued random variables and let $X= (X_1,\ldots,X_n)$. Let $W := f(X)$ and $\sigma^2 := \mathrm{Var}(W)$. Let $X^\prime = (X^\prime_1,\ldots,X^\prime_n)$ be an independent copy of $X$. For each $j$, define
\[
\Delta_j f(X) = W - f(X_1,\ldots,X_{j-1}, X^\prime_j, X_{j+1}, \ldots,X_n),
\]
and let $M = \max_j|\Delta_jf(X)|$. Let $G'$ be an extension of $G$ on $\mathcal{X}^{n+4}$, and put
\[
\delta := 1+ \textup{degree of the vertex $1$ in } G'(X_1,\ldots,X_{n+4}).
\]
Then
\[
\delta_W \le \frac{Cn^{1/2}}{\sigma^2}\mathbb{E}(M^8)^{1/4} \mathbb{E}(\delta^4)^{1/4} + \frac{1}{2\sigma^3}\sum_{j=1}^n \mathbb{E}|\Delta_j f(X)|^3,
\]
where $\delta_W$ is the Wasserstein distance between $(W-\mathbb{E}(W))/\sigma$ and $N(0,1)$, and $C$ is a universal constant.
\end{thm}
\section{Proof of Theorem \ref{clt}}\label{proof}
We now apply Theorem \ref{main} to prove Theorem \ref{clt}. Take $\mathcal{X} = [0,1]^2$, and let $X_1,X_2,\ldots$ be independent uniformly distributed points on $\mathcal{X}$. Let $X= (X_1,\ldots, X_n)$. Write each $X_i$ as a pair $(U_i,V_i)$. Define the $x$-rank of the point $X_i$ as the rank of $U_i$ among all the $U_j$'s, and the $y$-rank of the point $X_i$ as the rank of $V_i$ among all the $V_j$'s. More accurately, we should say ``$x$-rank of $X_i$ in $X$'' and ``$y$-rank of $X_i$ in $X$''.
Let $X_{(1)},\ldots,X_{(n)}$ be the $X_i$'s arranged according to their $x$-ranks, and let $X^{(1)},\ldots,X^{(n)}$ be the $X_i$'s arranged according to their $y$-ranks. Write $X_{(i)} = (U_{(i)}, V_{(i)})$ and $X^{(i)} = (U^{(i)}, V^{(i)})$. Let $\pi(i)$ be the $y$-rank of $X_{(i)}$. Then clearly $\pi$ is a uniform random permutation. Let $\sigma(i)$ be the $x$-rank of $X^{(i)}$. Then $X^{(i)} = X_{(\sigma(i))}$. Therefore
\[
\pi(\sigma(i)) = \text{the $y$-rank of $X_{(\sigma(i))}$} = \text{the $y$-rank of $X^{(i)}$} = i\,.
\]
Thus, $\sigma = \pi^{-1}$. Let
\begin{align*}
W = f(X) &:= \sum_{i=1}^{n-1} 1_{\{\pi(i)> \pi(i+1)\}} + \sum_{i=1}^{n-1} 1_{\{\sigma(i)>\sigma(i+1)\}}\\
&= \sum_{i=1}^{n-1} 1_{\{V_{(i)} > V_{(i+1)}\}} + \sum_{i=1}^{n-1}1_{\{U^{(i)} > U^{(i+1)}\}}\,.
\end{align*}
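This coupling is easy to simulate. The sketch below draws $n$ uniform points, reads off $\pi$ and $\sigma$ from the ranks, and confirms that $\sigma = \pi^{-1}$ and that $W$ equals $T(\pi)$ (variable names are ours, not from the proof):

```python
import random

def ranks(values):
    """1-based ranks; ties occur with probability zero for continuous draws."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def descents(p):
    return sum(1 for i in range(len(p) - 1) if p[i + 1] < p[i])

random.seed(2016)
n = 200
pts = [(random.random(), random.random()) for _ in range(n)]
xr = ranks([u for u, v in pts])
yr = ranks([v for u, v in pts])

pi = [0] * n     # pi(i) = y-rank of the point with x-rank i
sigma = [0] * n  # sigma(i) = x-rank of the point with y-rank i
for j in range(n):
    pi[xr[j] - 1] = yr[j]
    sigma[yr[j] - 1] = xr[j]

assert all(pi[sigma[i] - 1] == i + 1 for i in range(n))  # sigma = pi^{-1}

by_x = sorted(pts)                        # X_(1), ..., X_(n)
by_y = sorted(pts, key=lambda p: p[1])    # X^(1), ..., X^(n)
W = (sum(1 for i in range(n - 1) if by_x[i][1] > by_x[i + 1][1])
     + sum(1 for i in range(n - 1) if by_y[i][0] > by_y[i + 1][0]))
assert W == descents(pi) + descents(sigma)  # W = T(pi)
```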
Now suppose that one of the $X_i$'s is replaced by an independent copy $X_i'$. Then $W$ can change by at most $4$. Therefore in the notation of Theorem \ref{main}, $|\Delta_j f(X)|\le 4$ for every $j$ and hence $M\le 4$.
For $x\in \mathcal{X}^n$, define a simple graph $G(x)$ on $[n]$ as follows. For any $1\le i\ne j\le n$, let $\{i,j\}$ be an edge if and only if the $x$-rank of $x_i$ and the $x$-rank of $x_j$ differ by at most $1$, or the $y$-rank of $x_i$ and the $y$-rank of $x_j$ differ by at most $1$. This construction is clearly invariant under relabeling of indices, and hence $G$ is a symmetric graphical rule.
Given $x,x'\in \mathcal{X}^n$, let $x^i$, $x^j$ and $x^{ij}$ be defined as in the paragraphs preceding the statement of Theorem \ref{main}. Suppose that $\{i,j\}$ is not an edge in the graphs $G(x)$, $G(x^i)$, $G(x^j)$ and $G(x^{ij})$. Since the difference $f(x)-f(x^i)$ is determined only by those $x_k$'s such that $k$ is a neighbor of $i$ in $G(x)$ or in $G(x^i)$, and the difference $f(x^j)-f(x^{ij})$ is determined only by those $x_k$'s such that $k$ is a neighbor of $i$ in $G(x^j)$ or in $G(x^{ij})$, the above condition implies that $i$ and $j$ are non-interacting for the triple $(f,x,x')$. Thus, $G$ is a symmetric interaction rule for $f$.
Next, define a graphical rule $G'$ on $\mathcal{X}^{n+4}$ as follows. For any $x\in \mathcal{X}^{n+4}$ and $1\le i\ne j\le n+4$, let $\{i,j\}$ be an edge if and only if the $x$-rank of $x_i$ and the $x$-rank of $x_j$ differ by at most $5$, or if the $y$-rank of $x_i$ and the $y$-rank of $x_j$ differ by at most $5$. Then $G'$ is a symmetric graphical rule. Since the addition of four extra points can alter the difference between the ranks of two points by at most $4$, any edge of $G(x_1,\ldots,x_n)$ is also an edge of $G'(x_1,\ldots,x_{n+4})$. Thus, $G'$ is an extension of $G$.
Since the degree of $G'(x)$ is bounded by $10$ for any $x\in \mathcal{X}^{n+4}$, the quantity $\delta$ of Theorem \ref{main} is bounded by $10$ in this example. The upper bounds on $\Delta_j f(X)$, $M$ and $\delta$ obtained above imply that for this $W$,
\[
\delta_W\le \frac{Cn^{1/2}}{\sigma^2} + \frac{Cn}{\sigma^3}\,,
\]
where $\sigma^2 = \mathrm{Var}(W)$ and $\delta_W$ is the Wasserstein distance between $(W-\mathbb{E}(W))/\sigma$ and $N(0,1)$.
To complete the proof of Theorem \ref{clt}, we derive the given expression for $\mathrm{Var}(T(\pi))$. We thank Walter Stromquist for the following elegant calculation. From classically available generating functions, it is straightforward to show that
\[
\mathbb{E}(D(\pi)) = \frac{n-1}{2}\,,\;\;\mathrm{Var}(D(\pi)) = \frac{n+1}{12}\,,
\]
as given in Theorem \ref{descent} (see Fulman \cite{fulman04}). Consider, with obvious notation,
\[
\mathbb{E}(D(\pi)D(\pi^{-1})) = \mathbb{E}\biggl(\sum_{i=1}^{n-1} \sum_{j=1}^{n-1}D_i(\pi)D_j(\pi^{-1})\biggr)\,.
\]
For a given $\pi$, the $(n-1)^2$ terms in the sum may be broken into three types, depending on the size of the set $\{\pi(i), \pi(i+1)\} \cap \{j,j+1\}$, which may be $0$, $1$ or $2$. Carefully working out the expected value in each case, it follows that
\[
\mathbb{E}(D(\pi)D(\pi^{-1})) = \frac{(n-1)^2}{4} + \frac{n-1}{2n}\,.
\]
From this,
\[
\mathrm{Cov}(D(\pi), D(\pi^{-1})) = \frac{n-1}{2n},\ \ \ \mathrm{Var}(D(\pi)+D(\pi^{-1})) = 2\biggl(\frac{n+1}{12}\biggr) + 2\biggl(\frac{n-1}{2n}\biggr) = \frac{n+7}{6}-\frac{1}{n}\,.
\]
This completes the proof of Theorem \ref{clt}.
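As a sanity check, the covariance and variance formulas above can be confirmed by exact enumeration for small $n$:

```python
from fractions import Fraction
from itertools import permutations

def descents(p):
    return sum(1 for i in range(len(p) - 1) if p[i + 1] < p[i])

def inverse(p):
    inv = [0] * len(p)
    for pos, val in enumerate(p, start=1):
        inv[val - 1] = pos
    return inv

for n in range(2, 7):
    pairs = [(descents(p), descents(inverse(p)))
             for p in permutations(range(1, n + 1))]
    N = len(pairs)
    mean = Fraction(sum(a for a, b in pairs), N)               # = (n - 1)/2
    cov = Fraction(sum(a * b for a, b in pairs), N) - mean ** 2
    assert cov == Fraction(n - 1, 2 * n)
    var_T = (Fraction(sum((a + b) ** 2 for a, b in pairs), N)
             - (2 * mean) ** 2)
    assert var_T == Fraction(n + 7, 6) - Fraction(1, n)
```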
\vskip.1in
\noindent{\it \underline{Remarks.}} The argument used to prove Theorem \ref{clt} can be adapted to prove the joint limiting normality of $(D(\pi), D(\pi^{-1}))$. Fix real $(s,t)$ and consider $T_{s,t}(\pi)=s D(\pi)+t D(\pi^{-1})$. The same symmetric graph $G$ can be used. Because $s$, $t$ are fixed, the limiting normality follows as above. The limiting variance follows from $\mathrm{Cov}(D(\pi), D(\pi^{-1}))$. We state the conclusion:
\begin{thm}\label{clt2}
For $\pi$ chosen uniformly from the symmetric group $S_n$,
\[
\left(\frac{D(\pi)-\frac{n-1}{2}}{\sqrt{\frac{n+1}{12}}},\, \frac{D(\pi^{-1})-\frac{n-1}{2}}{\sqrt{\frac{n+1}{12}}}\right) \stackrel{d}{\longrightarrow} Z_2\,,
\]
where $Z_2$ is bivariate normal with mean $(0,0)$ and identity covariance matrix.
\end{thm}
Like Theorem \ref{clt}, Theorem \ref{clt2} can also be deduced as a corollary of the results in \cite{vatutin96}.
\section{Proof of Theorem \ref{genclt}}\label{proof2}
The proof goes exactly as the proof of Theorem \ref{clt}, except for slight differences in the definitions of the interaction graphs $G$ and $G'$. Recall the quantity $k$ from the statement of Theorem \ref{genclt}. With all the same notation as in the previous section, define the interaction graph $G$ as follows. For any $1\le i\ne j\le n$, let $\{i,j\}$ be an edge if and only if the $x$-rank of $x_i$ and the $x$-rank of $x_j$ differ by at most $k-1$, or the $y$-rank of $x_i$ and the $y$-rank of $x_j$ differ by at most $k-1$. It is easy to see that this is indeed an interaction graph for $F(\pi)+G(\pi^{-1})$. Define the extended graph $G'$ similarly, with $k-1$ replaced by $k+3$. Everything goes through as before, except that the quantity $M$ of Theorem \ref{main} is now bounded by $C_1 k$ (since the local components of $F$ and $G$ are bounded by $1$) and the quantity $\delta$ of Theorem \ref{main} is now bounded by $C_2 k$, where $C_1$ and $C_2$ are universal constants. The proof is now easily completed by plugging in these bounds in Theorem \ref{main}.
\section{Going further}\label{future}
There are two directions where the methods of this paper should allow natural limit theorems to be proved. The first is to develop a similar theory for {\it global} patterns in subsequences (for example triples $i<j<k$ with $\pi(i)<\pi(j)<\pi(k)$). Of course, inversions, the simplest case ($i<j$ with $\pi(i)>\pi(j)$), have a nice probabilistic and combinatorial theory, so perhaps the joint distribution of the number of occurrences of a fixed pattern in $\pi$ and $\pi^{-1}$ does as well.
The second direction of natural generalization is to other finite reflection groups. There is a notion of `descents' --- the number of simple positive roots sent to negative roots. The question of the number of descents in $x$ plus the number of descents in $x^{-1}$ makes sense, and the present methods should apply. In specific contexts, for example the hyperoctahedral group of signed permutations, these are concrete, interesting questions.
Lastly, it would be interesting to see if the dependence of the error bound on $k$ in Theorem \ref{genclt} can be improved, and to figure out what is the optimal dependence.
| {
"timestamp": "2016-10-28T02:01:23",
"yymm": "1608",
"arxiv_id": "1608.01666",
"language": "en",
"url": "https://arxiv.org/abs/1608.01666",
"abstract": "This paper does three things: It proves a central limit theorem for novel permutation statistics (for example, the number of descents plus the number of descents in the inverse). It provides a clear illustration of a new approach to proving central limit theorems more generally. It gives us an opportunity to acknowledge the work of our teacher and friend B. V. Rao.",
"subjects": "Probability (math.PR)",
"title": "A central limit theorem for a new statistic on permutations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683465856102,
"lm_q2_score": 0.8539127548105611,
"lm_q1q2_score": 0.8432972073966294
} |
https://arxiv.org/abs/1805.01412 | Upper bounds for the regularity of powers of edge ideals of graphs | Let $G$ be a finite simple graph and $I(G)$ denote the corresponding edge ideal. In this paper, we obtain upper bounds for the Castelnuovo-Mumford regularity of $I(G)^q$ in terms of certain combinatorial invariants associated with $G$. We also prove a weaker version of a conjecture by Alilooee, Banerjee, Beyarslan and Hà on an upper bound for the regularity of $I(G)^q$ and we prove the conjectured upper bound for the class of vertex decomposable graphs. Using these results, we explicitly compute the regularity of $I(G)^q$ for several classes of graphs. | \section{Introduction}
Let $I$ be a homogeneous ideal of a polynomial ring
$R = \K[x_1,\ldots,x_n]$ over a field $\K$ with usual grading.
In \cite{BEL91}, Bertram, Ein and Lazarsfeld
have initiated the study of the Castelnuovo-Mumford regularity of $I^q$
as a function of $q$ by proving that if $I$ is the defining ideal of a smooth complex
projective variety, then $\reg(I^q)$ is bounded by a linear function of $q$.
Then, Chandler \cite{Cha97} and Geramita, Gimigliano and Pitteloud \cite{GGP95} proved that
if $\dim(R/I)\leq 1$, then $\reg(I^q) \leq q \cdot \reg(I)$ for all $q\geq 1$.
For arbitrary homogeneous ideals, Swanson \cite{Swa97} proved that there exists $k \geq 1$ such that
$\reg(I^q)\leq kq$ for all $q \geq 1$. Thereafter, Cutkosky, Herzog and Trung \cite{CHT},
and independently Kodiyalam \cite{vijay}, proved that for a homogeneous ideal
$I$ in a polynomial ring, $\reg(I^q)$ is a linear
function for $q \gg 0$, i.e.,
there exist non-negative integers $a$ and $b$ depending on $I$ such that
$\reg(I^q)=aq+b \text{ for all } q \gg 0.$
While the coefficient $a$ is well-understood (\cite{CHT}, \cite{vijay}, \cite{TW}),
the free constant $b$ and the stabilization index $q_0=\min\{q' \mid \reg(I^q)=aq+b,
\text{ for all } q \geq q'\}$ are quite mysterious.
Therefore, attention has turned to identifying classes of ideals for which the linear
polynomial can be computed or bounded using invariants associated to $I$.
There have been some attempts at computing the free constant and the stabilization index
for several classes of ideals. For instance, if $I$ is an equigenerated homogeneous ideal, then
$b$ is related to the regularity of the fibers of a certain projection map
(see, for example, \cite{Romer2001}). If
$I$ is $(x_1,\ldots,x_n)$-primary, then $q_0$ can be related to the partial
regularity of the Rees algebra of $I$ (see, for example, \cite{Berle}).
In this paper, we study the regularity
of powers of edge ideals associated to finite simple graphs.
Let $G$ be a finite simple graph on the vertex set $\{x_1, \ldots,
x_n\}$ and $I(G) := (\{x_ix_j \mid \{i,j\} \in E(G)\}) \subset \K[x_1,
\ldots, x_n]$ be the edge ideal corresponding to the graph $G$.
It is known that
$\reg(I(G)^q) = 2q + b$ for some constant $b$ and all $q \geq q_0$.
There are very few classes of graphs for which $b$ and $q_0$ are known. We
refer the reader to \cite{BBH17} and the references cited there for a
review of results in the literature in this direction.
While the aim is to obtain the linear polynomial corresponding to
$\reg(I(G)^q)$, it seems unlikely that a single combinatorial invariant
will represent the constant term for all graphs. This naturally gives
rise to two directions of research. One, to obtain linear polynomials
for particular classes of graphs. Two, to obtain upper and
lower bounds for $\reg(I(G)^q)$ using combinatorial invariants
associated to the graph $G$. It was proved by Beyarslan, H\`a and Trung that $2q + \nu(G) -
1 \leq \reg(I(G)^q)$ for all $q \geq 1$, where $\nu(G)$ denotes the
induced matching number of $G$, \cite{selvi_ha}. In \cite{jayanthan},
the authors along with Narayanan proved that for a bipartite graph
$G$, $\reg(I(G)^q) \leq 2q + \cochord(G) - 1$ for all $q \geq 1$,
where $\cochord(G)$ denotes the co-chordal cover number of $G$.
There is no general upper bound known for powers of edge ideals of
arbitrary graphs. Therefore, one may ask:
\begin{enumerate}
\item[Q1.] Does there exist a graph invariant of $G$, say $\rho(G)$, such
that $\reg(I(G)^q) \leq 2q + \rho(G)$ for all $q \geq 1$?
\item[Q2.] Can one obtain the linear polynomial corresponding to
$\reg(I(G)^q)$ for various classes of graphs?
\end{enumerate}
This paper evolves around these two questions.
The first main result of the paper answers Question Q1.
H\`a and Woodroofe
\cite{HaWood} defined an invariant in terms of star
packing, denoted by $\zeta(G)$ (see Section \ref{reg-pow} for
definition), and proved that $\reg(I(G)) \leq \zeta(G) + 1$. In this
paper, we extend H\`a and Woodroofe's bound to all powers of
$I(G)$. We prove:
\vskip 2mm \noindent
\textbf{Theorem \ref{main-first}.}
{\em If $G$ is a graph, then for all $q \geq 1$,}
$$\reg(I(G)^q) \leq 2q+\zeta(G)-1.$$
So far, the classes of graphs for which the regularity of powers of
edge ideals has been computed in the literature all satisfy either
$\reg(I(G)^q) = 2q + \nu(G) -
1$ or $\reg(I(G)^q) = 2q + \cochord(G) - 1$, for all $q \geq 2$. In \cite{jayanthan}, the
authors raised the question whether there exists a graph $G$ with
\[2q + \nu(G) - 1 < \reg(I(G)^q) < 2q + \cochord(G) - 1, \text{ for $q \gg 0$.}\] As a
consequence of our investigation, we obtain a class of graphs which
attain the upper bound in Theorem \ref{main-first} and the above
strict inequalities are satisfied.
Another way of bounding the function $\reg(I(G)^q)$, other than using
combinatorial invariants, is to relate it to the regularity of $G$
itself. It was conjectured by Alilooee, Banerjee, Beyarslan and H\`a,
\cite[Conjecture 7.11(2)]{BBH17}:
\begin{conj}\label{ABBH-conj}
If $G$ is a graph, then for all $q \geq 1$,
$\reg(I(G)^q) \leq 2q + \reg(I(G)) - 2.$
\end{conj}
There are some classes of graphs for which this conjecture is known to
be true, see \cite{BBH17}. As a consequence of the techniques that we have
developed, we prove the conjecture with an additional hypothesis:
\vskip 2mm \noindent
\textbf{Theorem \ref{weak-conj1}.} {\em
Let $G$ be a graph. If every induced subgraph $H$ of
$G$ has a vertex $x$ with $\reg(I(H \setminus N_H[x]))+1 \leq
\reg(I(H))$, then for all $q \geq 1$,}
\[
\reg(I(G)^q) \leq 2q+\reg(I(G))-2.
\]
We then move on to study regularity of powers of vertex decomposable graphs.
A graph $G$ is said to be vertex decomposable if $\Delta(G)$ is vertex decomposable,
where $\Delta(G)$ denotes the independence complex of $G$
(see Section \ref{reg-vertex} for definition).
Vertex decomposability was first introduced by Provan and Billera \cite{ProvLouis},
in the case when all the maximal faces are of equal cardinality, and
extended to the arbitrary case by Bj\"orner and Wachs \cite{BjWachs}.
We have the chain of implications:
\begin{equation*}
\text{vertex decomposable}\Longrightarrow \text{shellable} \Longrightarrow \text{sequentially
Cohen-Macaulay,}
\end{equation*}
where a graph $G$ is shellable if $\Delta(G)$ is a shellable
simplicial complex and $G$ is sequentially Cohen-Macaulay if $R/I(G)$
is sequentially Cohen-Macaulay.
Both the above implications are known to be strict.
Recently, a number of authors have
been interested in classifying or identifying vertex decomposable graphs $G$ in
terms of the combinatorial properties of $G$, \cite{BFH15, DochEng09, Wood2}.
We prove Conjecture \ref{ABBH-conj} for vertex decomposable graphs.
\vskip 2mm \noindent
\textbf{Theorem \ref{upper-ver}.}
{\em Let $G$ be a vertex decomposable graph. Then for all
$q \geq 1$,} \[
\reg(I(G)^q) \leq 2q+\reg(I(G))-2.
\]
As a consequence, we obtain the linear polynomial corresponding to
$\reg(I(G)^q)$ for several classes of graphs, such as
$C_5$-free vertex decomposable graphs, chordal graphs, sequentially
Cohen-Macaulay bipartite graphs and certain whiskered graphs.
Our paper is organized as follows. In Section \ref{pre}, we collect the terminology
and preliminary results that are essential for the rest of the paper.
We prove, in Section \ref{tech}, several technical lemmas which are
needed for the proof of our main results which appear in Sections \ref{reg-pow} and
\ref{reg-vertex}.
\section{Preliminaries}\label{pre}
Throughout this article, $G$ denotes a finite simple graph without
isolated vertices.
For a graph $G$, $V(G)$ and $E(G)$ denote the set of all
vertices and the set of all edges of $G$ respectively.
The \textit{degree} of a vertex $x \in V(G),$ denoted by
$\deg_G(x),$ is the number of edges incident to $x.$
A subgraph $H \subseteq G$ is called \textit{induced} if for $u, v
\in V(H)$, $\{u,v\} \in
E(H)$ if and only if $\{u,v\} \in E(G)$.
For $\{u_1,\ldots,u_r\} \subseteq V(G)$, let $N_G(u_1,\ldots,u_r) = \{v \in V (G)\mid \{u_i, v\} \in E(G)~
\text{for some $1 \leq i \leq r$}\}$
and $N_G[u_1,\ldots,u_r]= N_G(u_1,\ldots,u_r) \cup \{u_1,\ldots,u_r\}$.
For $U \subseteq V(G)$, denote by $G \setminus U$
the induced subgraph of $G$ on the vertex set $V(G) \setminus U$.
A subset $X$ of $V(G)$ is called \textit{independent} if there is no edge $\{x,y\} \in E(G)$
for $x, y \in X$.
A \textit{matching} in a graph $G$ is a subgraph consisting of pairwise disjoint edges.
The largest size of a matching in $G$ is called its matching number.
If the subgraph is an induced subgraph, the matching is an \textit{induced matching}. The largest size of an induced matching in $G$ is called its induced
matching number and denoted by $\nu(G)$.
The \textit{complement} of a graph $G$, denoted by $G^c$, is the graph on the same
vertex set in which $\{u,v\}$ is an edge of $G^c$ if and only if it is not an edge of $G$.
A graph $G$ is \textit{chordal} if every induced
cycle in $G$ has length $3$, and is co-chordal if the complement graph $G^c$ is chordal.
The co-chordal cover number, denoted $\cochord(G)$, is the
minimum number $n$ such that there exist co-chordal subgraphs
$H_1,\ldots, H_n$ of $G$ with $E(G) = \bigcup_{i=1}^n E(H_i)$.
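The following small example, added here for illustration, makes the last two notions concrete.

```latex
\begin{example}
The cycle $C_4$ is co-chordal: its complement is the disjoint union of
two edges, which has no cycles and is therefore chordal, so
$\cochord(C_4)=1$. On the other hand, $C_5$ is self-complementary,
hence not co-chordal, while its edge set is covered by two paths (with
$2$ and $3$ edges respectively), each of which is co-chordal.
Therefore $\cochord(C_5)=2$.
\end{example}
```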
One important tool in the study of regularity of powers of edge ideals
is even-connections.
We recall the concept of \textit{even-connectedness} from \cite{banerjee}.
\begin{definition}\label{even_connected} Let $G$ be a graph. Two vertices $u$ and $v$ ($u$ may be the same as $v$) are said to be even-connected with respect to an $s$-fold
product $e_1\cdots e_s$, where the $e_i$'s are edges of $G$, not necessarily distinct, if there is a path $p_0p_1\cdots p_{2k+1}$, $k\geq 1$, in $G$ such that:
\begin{enumerate}
\item $p_0=u,p_{2k+1}=v.$
\item For all $0 \leq l \leq k-1,$ $p_{2l+1}p_{2l+2}=e_i$ for some $i$.
\item For all $i$, $ \mid\{l \geq 0 \mid p_{2l+1}p_{2l+2}=e_i \}\mid
~ \leq ~ \mid \{j \mid e_j=e_i\} \mid$.
\item For all $0 \leq r \leq 2k$, $p_rp_{r+1}$ is an edge in $G$.
\end{enumerate}
\end{definition}
\begin{remark} For convenience, we regard every edge as trivially
even-connected, i.e., as the even-connection obtained by setting $k = 0$
in the above definition.
\end{remark}
The following theorem due to Banerjee is used repeatedly throughout this paper:
\begin{theorem}\label{even_connec_equivalent}\cite[Theorem 6.1 and Theorem 6.7]{banerjee} Let $G$ be a graph with edge ideal
$I = I(G)$, and let $s \geq 1$ be an integer. Let $M$ be a minimal generator of $I^s$.
Then $(I^{s+1} : M)$ is minimally generated by monomials of degree 2, and $uv$ ($u$ and $v$ may
be the same) is a minimal generator of $(I^{s+1} : M )$ if and only if either $\{u, v\} \in E(G) $ or $u$ and $v$ are even-connected with respect to $M$.
\end{theorem}
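The following small example, added here for illustration, shows Definition \ref{even_connected} and Theorem \ref{even_connec_equivalent} at work.

```latex
\begin{example}
Let $G$ be the path $x_1x_2x_3x_4x_5$, so that
$I = I(G) = (x_1x_2,\, x_2x_3,\, x_3x_4,\, x_4x_5)$, and take $s=1$
with $e_1 = x_2x_3$. The path $p_0p_1p_2p_3 = x_1x_2x_3x_4$ satisfies
the conditions of Definition \ref{even_connected} with $k=1$, so $x_1$
and $x_4$ are even-connected with respect to $e_1$. Indeed,
$x_1x_4 \cdot x_2x_3 = (x_1x_2)(x_3x_4) \in I^2$, and one checks that
$(I^2 : x_2x_3) = I + (x_1x_4)$; the graph associated to this ideal is
$G$ together with the new edge $\{x_1,x_4\}$.
\end{example}
```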
Polarization is a process that creates a squarefree monomial ideal (in
a possibly different polynomial ring) from a given monomial ideal,
\cite[Section 1.6]{Herzog'sBook}.
In this paper, we repeatedly use one of the important properties of the
polarization, namely:
\begin{corollary} \cite[Corollary 1.6.3(a)]{Herzog'sBook}\label{pol_reg} Let $I$ be a monomial
ideal in $\K[x_1, \ldots, x_n].$ Then
$$\reg(I)=\reg(\widetilde{I}).$$
\end{corollary}
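The following standard example, recalled here for convenience, illustrates polarization and its role in our setting.

```latex
\begin{example}
If $I = (x^2,\, xy,\, y^3) \subseteq \K[x,y]$, then its polarization is
the squarefree monomial ideal
$\widetilde{I} = (x_1x_2,\, x_1y_1,\, y_1y_2y_3) \subseteq
\K[x_1,x_2,y_1,y_2,y_3]$, and $\reg(I)=\reg(\widetilde{I})$. In our
setting, $(I(G)^{s+1}:e_1\cdots e_s)$ may contain a square $u^2$, when
$u$ is even-connected to itself, and polarization replaces $u^2$ by
the squarefree quadric $uu'$ for a new variable $u'$, which is why the
polarized ideal is again the edge ideal of a graph.
\end{example}
```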
\section{Technical Lemmas}\label{tech}
In this section, we prove several technical results concerning the
graph associated with $\widetilde{(I(G)^{s+1} : e_1 \cdots e_s)}$ and
some of its induced subgraphs. We begin by fixing the notation for
most of our results.
\begin{notation}\label{esetup}
Let $G$ be a graph with $V(G)=\{x_1,\ldots,x_n\}$ and $e_1,\ldots,e_s$, $s \geq 1,$ be some edges of $G$ which are not
necessarily distinct.
By Theorem \ref{even_connec_equivalent} and Corollary \ref{pol_reg}, $\widetilde{(I(G)^{s+1}:e_1 \cdots e_s)}$ is a
quadratic squarefree monomial ideal in an appropriate polynomial ring. Let
$G'$ be the graph associated to $\widetilde{(I(G)^{s+1}:e_1 \cdots e_s)}$.
\end{notation}
One of the key ingredients in the proof of the main results is a new
graph, $G'$, obtained from a given graph $G$ by joining even-connected
vertices by an edge. Our main aim in this section is to get an upper
bound for regularity of certain induced subgraphs of $G'$ which in
turn will help us in bounding $\reg(I(G'))$. For this purpose, we need
to understand the structure of the graph $G'$ in more detail. First we
show that whiskers can be ignored while taking even-connections.
\begin{lemma}\label{even_obs2}
Let $G$ be a graph with $e_1,\ldots,e_s \in E(G)$, $s \geq 1$ and $N_G(x)=\{y\}$. If $e_i=\{x,y\}$, for some $1 \leq i \leq s$,
then $$(I(G)^{s+1}:e_1 \cdots e_s)=(I(G)^s:\prod_{j\neq i}e_j).$$
\end{lemma}
\begin{proof}
Clearly $(I(G)^s:\prod_{j\neq i}e_j) \subseteq (I(G)^{s+1}:e_1 \cdots e_s)$.
Let $uv \in (I(G)^{s+1}:e_1 \cdots e_s)$. For $k \geq 0$, let $(u=p_0) \cdots (p_{2k+1}=v)$
be an even-connection in $G$ with respect to $e_1 \cdots e_s$.
For $0 \leq r \leq k-1$, set $e_{i_r}=\{p_{2r+1},p_{2r+2}\}$. If $k=0$, then
$uv \in (I(G)^s:\prod_{j\neq i}e_j)$. Assume $k\geq 1$.
If $e_i \neq e_{i_r}$, for all $0 \leq r \leq k-1$, then $u$ is even-connected to $v$ in $G$
with respect to $e_1 \cdots e_{i-1}e_{i+1} \cdots e_s$. Suppose $e_i=e_{i_r}$, for some
$0 \leq r \leq k-1$. Since $N_G(x)=\{y\}$, $p_{2r+1}=p_{2r+3}$. Then
$(u=p_0)p_1 \cdots (p_{2r+1}=p_{2r+3})p_{2r+4} \cdots (p_{2k+1}=v)$ gives an even-connection
in $G$ with respect to $e_1 \cdots e_{i-1}e_{i+1} \cdots e_s$.
Hence $uv \in (I(G)^s:\prod_{j\neq i}e_j)$.
\end{proof}
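The following small example, added here for illustration, verifies Lemma \ref{even_obs2} in the simplest case.

```latex
\begin{example}
Let $G$ be the path $x_1x_2x_3$ and $e_1 = x_1x_2$, so that
$N_G(x_1)=\{x_2\}$. The only even-connected pairs with respect to
$e_1$ are the edges of $G$ themselves, and hence
$(I(G)^2 : x_1x_2) = I(G) = (I(G)^1 : 1)$, in accordance with the
lemma (the empty product of edges being $1$).
\end{example}
```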
The following result shows that if a vertex does not belong to any
edge in a given collection of edges, then removing that vertex
commutes with taking even-connections with respect to those edges.
\begin{lemma}\label{single_vertex} Let the notation be as in \ref{esetup}.
If $x \in V(G)$ is such that $\{x\} \cap e_i=\emptyset$, for all $1 \leq i \leq s$, then
$$I(G' \setminus x)=(\widetilde{I(G \setminus x)^{s+1}:e_1 \cdots e_s}).$$
\end{lemma}
\begin{proof}
Clearly $(I(G \setminus x)^{s+1}:e_1 \cdots e_s) \subseteq I(G'
\setminus x)$. Suppose $\{u,v\} \in E(G' \setminus x)$.
Let $(u=p_0)p_1 \cdots p_{2k}(p_{2k+1}=v)$ be an even-connection
in $G$ with respect to $e_1 \cdots e_s$. Since $e_i \cap
\{x\}=\emptyset$, for all $1 \leq i \leq s$, $u$ is even-connected to
$v$ in $G \setminus x$ with respect to $e_1 \cdots e_s$.
\end{proof}
The next result talks about new even-connections made out of a given
even-connection and the neighbors of some of the vertices in the
even-connection.
\begin{lemma}\label{even_obs} Let the notation be as in \ref{esetup}.
Suppose $(u=p_0)p_1 \cdots p_{2k}(p_{2k+1}=v)$ is an even-connection in $G$ with respect to
$e_1 \cdots e_s$, for some $k \geq 1$. If $\{w,p_i\} \in E(G')$, for some
$0 \leq i \leq 2k+1$, then either $\{u,w\} \in E(G')$ or $\{v,w\} \in E(G')$.
\end{lemma}
\begin{proof}
If $i=0$ or $i=2k+1$, then we are done.
Assume that $i=2j+1$, for some $j \geq 0$.
Let
$(w=q_0)q_1 \cdots (q_{2j+1}=p_i)$ be an even-connection with respect to $e_1 \cdots e_s$
in $G$. If $\{q_{2\alpha+1},q_{2\alpha+2}\}$ and $\{p_{2\beta+1},p_{2 \beta+2}\}$ do not have
a common vertex, for
all $0 \leq \alpha \leq j-1$, $j \leq \beta \leq k-1$, then
$(w=q_0)q_1 \cdots (q_{2j+1}=p_i)p_{i+1} \cdots (p_{2k+1}=v)$
is an even-connection with respect to $e_1 \cdots e_s$ in $G$.
Therefore, $wv \in I(G')$.
If $\{q_{2\alpha+1},q_{2\alpha+2}\}$ and $\{p_{2\beta+1},p_{2 \beta+2}\}$
have a common vertex,
for some $0 \leq \alpha \leq j-1$, $j \leq \beta \leq k-1$, then
by \cite[Lemma 6.13]{banerjee},
$w$ is even-connected either to $u$ or to $v$ in $G$.
Therefore either $wu\in I(G')$ or $wv \in I(G')$.
If $i = 2j+2$, then the proof is similar.
\end{proof}
The following lemma, which sheds more light on the structure of
$G'$, is very useful in the induction process.
\begin{lemma}\label{tech_lemma} Let the notation be as in \ref{esetup}.
Let $y \in V(G)$ and $H=G \setminus N_G[y]$.
If $\{e_1,\ldots,e_s\} \cap E(H) = \{e_{i_1},\ldots,e_{i_t}\}$ and $H'$ is the
graph associated to $\widetilde{(I(H)^{t+1}:e_{i_1} \cdots e_{i_t})}$, then
$G' \setminus N_{G'}[y]$ is an induced subgraph of $H'$.
In particular, $$\reg(I(G' \setminus N_{G'}[y])) \leq \reg(I(H')).$$
\end{lemma}
\begin{proof}
Let $\{u,v\} \in E(G' \setminus N_{G'}[y])$.
Let $(u=p_0)p_1 \cdots (p_{2k+1}=v)$
be an even-connection in $G$ with respect to $e_1 \cdots e_s$ for some
$k \geq 0$. If $p_i \in N_{G'}[y]$, for some $0 \leq i \leq 2k+1$, then by Lemma \ref{even_obs}, $y$
is even-connected either to $u$ or to $v$. This contradicts the
assumption that $\{u,v\} \in E(G' \setminus N_{G'}[y])$. Therefore, for
each $0 \leq i \leq 2k+1$, $p_i \notin N_{G'}[y]$. Hence $\{u,v\} \in E(H')$, which
proves that $G' \setminus N_{G'}[y]$ is a subgraph of $H'$.
If $a,b \in V(G' \setminus
N_{G'}[y])$ are such that $\{a,b\} \in E(H')$,
then $\{a,b\} \in E(G' \setminus N_{G'}[y])$. Hence
$G' \setminus N_{G'}[y]$ is an induced subgraph of $H'$.
The assertion on the regularity follows from \cite[Proposition
4.1.1]{sean_thesis}.
\end{proof}
In the following results, we show that the even-connections in a
parent graph with respect to edges coming from an induced subgraph,
induces an even-connection in the induced subgraph.
\begin{lemma}\label{ind-reg}
Let $G$ be a graph and $H$ be an induced subgraph of $G$. For any $e_1,\ldots,e_s \in E(H)$,
$s \geq 1$, the graph $H'$ is an induced subgraph of $G'$, where
$H'$ and $G'$ are the graphs associated to $\widetilde{(I(H)^{s+1}:e_1 \cdots e_s)}$
and $\widetilde{(I(G)^{s+1}:e_1 \cdots e_s)}$ respectively.
In particular, $$\reg(I(H')) \leq \reg(I(G')).$$
\end{lemma}
\begin{proof}
Let $\{a,b\} \in E(H')$. For some $k \geq 0$, let $(a=p_0)p_1 \cdots p_{2k}(p_{2k+1}=b)$ be an
even-connection in $H$ with respect to $e_1 \cdots e_s$. Since
$H$ is an induced subgraph of $G$ and $\{p_{2r+1},p_{2r+2}\} \in E(H)$, for all $0 \leq r \leq k-1$,
$(a=p_0)p_1 \cdots p_{2k}(p_{2k+1}=b)$ is an even-connection in $G$ with respect to
$e_1 \cdots e_s$. Therefore, $\{a,b\}\in E(G')$.
Hence $H'$ is a subgraph of $G'$.
As in the previous lemma, it can be seen that the subgraph is an induced subgraph.
\end{proof}
Let the notation be as in \ref{esetup}. For some $1 \leq i \leq s$, set $e_i =
\{x,y\}$. We further explore the even-connections between $N_{G'}[y]$ and
$N_{G'}(x)$. If $(u=p_0)p_1\cdots p_{2k}(p_{2k+1}=y)$ ($u$ may be equal to $y$) is an even-connection in $G$ with respect
to $e_1 \cdots e_s$, for some $k \geq 0$, then there are three possibilities:
\begin{enumerate}
\item $\{p_{2\lambda+1}, p_{2\lambda+2}\} \neq e_i$ for any $0 \leq
\lambda \leq k-1$;
\item There exists $0 \leq \lambda \leq k-1$ with $\{p_{2\lambda+1},
p_{2\lambda+2}\} = e_i$ and $p_{2\lambda+1} = y, p_{2\lambda+2} =
x$;
\item For some $0 \leq \lambda \leq k-1$, $p_{2\lambda+1}=x$ and $p_{2\lambda+2}=y$ whenever
$\{p_{2\lambda+1},p_{2\lambda+2}\}=e_i$.
\end{enumerate}
Note that, for a fixed even-connection between $u$ and $y$, conditions
(1), (2) and (3) are mutually exclusive. Let
\begin{equation}\label{nota_nbd}
\begin{split}
X_1=&~\Big \{u \in N_{G'}[y] \mid (u=p_0)p_1\cdots p_{2k}(p_{2k+1}=y)
\text{ satisfies either (1) or (2)}\Big\};\\
X_2=&~ \Big\{u \in (N_{G'}[y]) \setminus X_1 \mid (u=p_0)p_1\cdots p_{2k}(p_{2k+1}=y)
\text{ satisfies (3)}\Big\}.
\end{split}
\end{equation}
\begin{lemma}\label{even-lemma} Following the notation set above, let
$E(G\setminus N_G[u,x]) \cap \{e_1, \ldots, e_s\} = \{e_{j_1},
\ldots, e_{j_t}\}$ and $(G \setminus N_G[u,x])'$ denote the graph
associated to $\widetilde{(I(G \setminus N_G[u,x])^{t+1}:e_{j_1} \cdots e_{j_t})}$.
\begin{enumerate}
\item If $u \in X_1$, then $G' \setminus N_{G'}[u] \text{ is an
induced subgraph of } (G \setminus N_{G}[u,x])'.$ In particular,
$$\reg(I(G' \setminus N_{G'}[u]))\leq \reg(I((G \setminus N_{G}[u,x])')).$$
\item Set $G_1'=G' \setminus X_1$. If $u \in X_2$, then $G_1'
\setminus N_{G_1'}[u]$ is an induced subgraph of $(G \setminus
N_{G}[u,x])'$. In particular,
$$\reg(I(G_1'\setminus N_{G_1'}[u]))\leq \reg(I((G \setminus N_{G}[u,x])')).$$
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Set $H=G' \setminus N_{G'}[u]$ and $K=(G \setminus N_{G}[u,x])'$. Let
$\{a,b\} \in E(H)$ and $(a=q_0)q_1 \cdots (q_{2l+1}=b)$ be an
even-connection in $G$ with respect to $e_1 \cdots e_s$, for some $l \geq 0$. We show that
$\{a,b\} \in E(K)$. Note that by Lemma \ref{even_obs}, $q_i \notin N_{G'}[u]$, for all $0 \leq i \leq 2l+1$.
Suppose
$q_i \in N_{G}[x]$, for some $0 \leq i \leq 2l+1$. Since $u \in X_1$,
$\{u,q_i\} \in E(G')$. By Lemma \ref{even_obs}, $u$ is even-connected either to
$a$ or to $b$ in $G$ with respect to $e_1 \cdots e_s$. This is a contradiction to
$\{a,b\} \in E(H)$.
Therefore, $q_i \notin N_{G}[x]$, for all $0 \leq i \leq 2l+1$. Hence
$a$ is even-connected to $b$ in $G \setminus N_G[u,x]$ with respect to
$e_{j_1} \cdots e_{j_t}$. Therefore, $\{a,b\} \in E(K)$.
\vskip 1mm \noindent
$(2)$ Set $H=G_1' \setminus N_{G_1'}[u]$ and $K=(G \setminus
N_{G}[u,x])'$. Let $\{a,b\} \in E(H)$ and $(a=q_0)q_1 \cdots
(q_{2l+1}=b)$ be an even-connection in $G$ with respect to $e_1 \cdots
e_s$, for some $l \geq 0$. If $\{p_{2\alpha+1},p_{2\alpha+2}\}$ and
$\{q_{2\beta+1},q_{2\beta+2}\}$ have a common vertex, for some $0 \leq
\alpha \leq k-1$, $0 \leq \beta \leq l-1$, then by \cite[Lemma
6.13]{banerjee}, $u$ is even-connected either to $a$ or to $b$ in
$G$ with respect to $e_1 \cdots e_s$. This is a contradiction to our
assumption that
$\{a,b\} \in E(H)$. Therefore, for all $0 \leq \alpha \leq k-1$ and $0
\leq \beta \leq l-1$, $\{p_{2\alpha+1},p_{2\alpha+2}\}$ and
$\{q_{2\beta+1},q_{2\beta+2}\}$ do not have a common vertex. Note
that by Lemma \ref{even_obs}, $q_i \notin N_{G'}[u]$, for all $0 \leq
i \leq 2l+1$. Suppose $q_i \in N_{G}[x]$, for some $0 \leq i \leq
2l+1$. Since $u \in X_2$, there exists $0 \leq \lambda \leq k-1$ with
$p_{2\lambda+1}=x$ and $p_{2\lambda+2}=y$. Therefore,
$(p_{2k+1}=y)p_{2k} \cdots p_{2\lambda+1} p_{2\lambda+2}q_i$ is an
even-connection in $G$ with respect to $e_1 \cdots e_s$. By Lemma
\ref{even_obs}, $y$ is even-connected either to $a$ or to $b$ in $G$
with respect to $e_1 \cdots e_s$. Since
for all $0 \leq \alpha \leq k-1$ and $0
\leq \beta \leq l-1$, $\{p_{2\alpha+1},p_{2\alpha+2}\}$ and
$\{q_{2\beta+1},q_{2\beta+2}\}$ do not have a common vertex,
either $a \in X_1$ or $b
\in X_1$. This is a contradiction to our assumption that $\{a,b\} \in E(H)$. Hence $q_i
\notin N_G[x]$, for all $0 \leq i \leq 2l+1$, so that $a$ is
even-connected to $b$ in $G \setminus N_{G}[u,x]$ with respect to
$e_{j_1}\cdots e_{j_t}$. Therefore $\{a,b\} \in E(K)$.
As in the previous lemma, it can be seen that the subgraphs
considered in (1) and (2) are induced subgraphs.
The assertion on the regularity in (1) and (2) follows from \cite[Proposition
4.1.1]{sean_thesis}.
\end{proof}
\section{Regularity of powers of edge ideals}\label{reg-pow}
In this section, we obtain a general upper bound for the regularity of
powers of edge ideals of graphs. We first recall the
definition of the invariant $\zeta(G)$, introduced in \cite{HaWood}.
Let $v_1 \in V(G)$ be a vertex of degree at least $2$, and let $H_1$ be
the graph obtained by removing all the isolated vertices of $G \setminus
N_G[v_1]$. If $H_1$ has no vertex of degree at least $2$, then set
$\sigma = \{v_1\}$. Otherwise, choose $v_2 \in V(H_1)$ with
$\deg_{H_1}(v_2) \geq 2$, set $\sigma = \{v_1, v_2\}$ and let $H_2$ be
the graph obtained by removing the isolated vertices of $H_1 \setminus
N_{H_1}[v_2]$. Continuing in this manner, we obtain $\sigma = \{v_1,
\ldots, v_k\}$ such that $H_{k-1} \setminus N_{H_{k-1}}[v_k]$ consists
of pairwise disconnected edges, say $w_1, \ldots, w_r$, and isolated
vertices. Let $\PP = \{N_G[v_1],
\ldots, N_G[v_k]\} \cup \{w_1, \ldots, w_r\}.$ We call this $\PP$ a
\textit{star packing} of $G$ and set $\zeta_{\PP}(G) = k+r$. Define $$\zeta(G)
:= \max \Big\{\zeta_{\PP}(G) \mid \PP \text{ is a star packing of } G \Big\}.$$
For example, if $G = C_n$, cycle on $n$ vertices, then for any $x \in
V(G)$, $N_G[x]$ is a path on $3$ vertices and hence $G \setminus
N_G[x]$ is a path on $n-3$ vertices. A maximal star packing can
be obtained by successively taking out $N_G[x]$, where $x$ is the
neighbour of a degree $1$ vertex. Therefore, if $n \equiv \{0,1\}(\text{mod }3)$,
then $\zeta(G) = \lfloor \frac{n}{3} \rfloor = \nu(G)$ and if $n
\equiv 2(\text{mod }3)$, then $\zeta(G) = \lfloor \frac{n}{3} \rfloor + 1 =
\nu(G) + 1$.
It may be noted that for a graph $G$, $\nu(G) \leq \zeta(G)$,
\cite{HaWood}. H\`a and Woodroofe proved:
\begin{theorem}\cite[Theorem 1.6]{HaWood} \label{reg-upper} Let $G$ be
a graph. Then $$\reg(I(G)) \leq \zeta(G)+1.$$
\end{theorem}
It is easy to see that $\zeta(G)$ is at most the matching number of
$G$. There are two general upper bounds known for the class of edge
ideals, namely, $\reg(I(G)) \leq \cochord(G) + 1$, \cite[Theorem
1]{russ} and $\reg(I(G)) \leq \text{min-max}(G) + 1$, \cite{ha_adam,
russ}, where min-max$(G)$ denotes the minimum size of a maximal
matching in $G$. We would like to note here that the invariants
$\zeta(G)$, $\cochord(G)$ and min-max$(G)$ are not comparable in
general, as can be seen from the following examples.
\begin{example}\label{ex:incomparable}
Let $G=C_7$. Then one can see that the co-chordal subgraphs of
$G$ are paths with at most $3$ edges, so that $\cochord(G)=3$. Also, one can see that
$\zeta(G)=2$. Let $H$ be obtained from $C_4$ by attaching a pendant to
any of the vertices.
It is easy to see that $\cochord(H)=1$ and $\zeta(H)=2$.
Let $G$ be the graph obtained by adding a pendant vertex each to two
vertices having a common neighbor in $C_6$. Then min-max$(G) = 2$ and
$\zeta(G) = 3$. If $H = C_4$, then it can be seen that min-max$(H) =
2$ while $\zeta(H) = 1$.
\end{example}
We now make an observation about the behaviour of the invariant
$\zeta$, which is crucial in our inductive arguments.
\begin{obs}\label{inv-obs}
If $x$ is a vertex of $G$ of degree at least $2$, then $\zeta(G \setminus
N_G[x]) + 1 \leq \zeta(G)$.
\end{obs}
\begin{proof}
Let $H = G \setminus N_G[x]$. Let $\PP'$ be a star packing of $H$ such
that $\zeta_{\PP'}(H) = \zeta(H)$. Then $\PP = \PP' \cup \{N_G[x]\}$
is a star packing of $G$. Thus,
$\zeta(H) + 1 = \zeta_{\PP'}(H) + 1 = \zeta_{\PP}(G) \leq \zeta(G).$
\end{proof}
We first prove that for a given graph $G$ and edges $e_1, \ldots,
e_s$, the regularity of $G'$
is bounded above by one more than $\zeta(G)$.
\begin{theorem}\label{base-reg}
If $G$ is a graph, then for any
$e_1,\ldots,e_s \in E(G)$, $s \geq 1$,
$$\reg(I(G)^{s+1}:e_1 \cdots e_s) \leq \zeta(G)+1.$$
\end{theorem}
\begin{proof}
Let $G'$ be the graph associated to
$\widetilde{(I(G)^{s+1}:e_1 \cdots e_s)}$ contained in an appropriate polynomial ring
$R_1$.
We prove the assertion by induction
on $s$. Suppose $s=1$.
Set $e_1=\{x,y\}$. If either $\deg_G(x)=1$ or $\deg_G(y)=1$, then
by Lemma \ref{even_obs2}, $I(G')=I(G)$. Therefore by
Theorem \ref{reg-upper}, $\reg(I(G'))=\reg(I(G))\leq \zeta(G)+1$.
Suppose $\deg_G(x)>1$ and $\deg_G(y)>1$. Setting
$U=N_{G'}(y)=\{y_1,\ldots,y_r\}$ and
$J=I(G')$, consider the exact sequences:
\begin{eqnarray}\label{main-exact-seq}
0 & \longrightarrow & \frac{R_1}{(J : y_1)}(-1)
\overset{\cdot y_1}{\longrightarrow} \frac{R_1}{J} \longrightarrow
\frac{R_1}{(J , y_1)} \longrightarrow 0;\nonumber \\
& & \hspace*{1cm} \vdots\hspace*{5cm}\vdots \hspace*{3cm}\vdots \\
0 & \longrightarrow & \frac{R_1}{((J,
y_1,\ldots,y_{r-1}):y_r)}(-1)
\overset{\cdot y_r}{\longrightarrow}
\frac{R_1}{(J, y_1,\ldots,y_{r-1})} \longrightarrow
\frac{R_1}{(J, U)} \longrightarrow 0.\nonumber
\end{eqnarray}
It follows from these exact sequences that
\[
\reg(R_1/J) \leq \max \left\{
\begin{array}{l}
\reg\left(\frac{R_1}{(J :y_1)}\right)+1,\ldots,
\reg\left(\frac{R_1}{((J,y_1,\ldots,y_{r-1}):y_r)}\right) + 1,~
\reg\left(\frac{R_1}{(J,~U)}\right)
\end{array}\right\}.
\]
We now prove that each of the regularities appearing on the right hand
side of the above inequality is bounded above by $\zeta(G)$.
Since $(J,U)$ corresponds to an induced subgraph of $G$, by \cite[Proposition
4.1.1]{sean_thesis} and Theorem \ref{reg-upper},
\begin{equation*}\label{case2:eq3}
\reg \left(\frac{R_1}{(J,U)}\right) \leq \reg \left(\frac{R}{I(G)}\right)\leq \zeta(G).
\end{equation*}
Let the notation be as in Equation (\ref{nota_nbd}).
Since $N_{G'}(y)=N_G(y)$, $U \subseteq X_1$.
Note that $x \in N_G(y)$ and $e_1 \cap E(G \setminus N_{G}[y_j,x])=\emptyset$, for any $y_j \in U$. Therefore,
we have
$$
\begin{array}{lcll}
\reg((J:y_j)) & = &\reg(I(G' \setminus N_{G'}[y_j])) \leq
\reg(I(G \setminus N_{G}[y_j,x])) & \mbox{ (by Lemma \ref{even-lemma})}\\
& \leq & \reg(I(G \setminus N_{G}[x]))\leq\zeta(G \setminus N_{G}[x])+1 \leq \zeta(G). & \mbox{ (by Observation
\ref{inv-obs}) }\\
\end{array}
$$
Since $((J,y_1,\ldots,y_{j-1}):y_j)$
corresponds to an induced subgraph of $(J:y_j)$, it follows that
$$\reg \left( \frac{R_1}{((J,y_1,\ldots,y_{j-1}):y_j)}\right)+1 \leq
\reg \left( \frac{R_1}{(J:y_j)}\right) +1\leq \zeta(G).$$
Therefore,
$\reg\left( \frac{R_1}{J}\right) \leq \zeta(G).$
Suppose $s > 1$. Assume by induction that for any graph $G$ and for any $e_1,
\ldots, e_{s-1}\in E(G)$, $\reg(I(G)^s : e_1 \cdots e_{s-1}) \leq
\zeta(G) + 1$.
Let $G$ be a graph, $e_1, \ldots, e_s \in E(G), ~s\geq 2$.
For $1 \leq i \leq s$, let $e_i=\{a_i,b_i\}$. If for some $1 \leq j \leq s$, either
$\deg_G(a_j)=1$ or $\deg_G(b_j)=1$, then
by Lemma \ref{even_obs2},
$$\reg((I(G)^{s+1}:e_1 \cdots e_s))=\reg((I(G)^{s}:e_1 \cdots e_{j-1}e_{j+1} \cdots e_{s}))
\leq \zeta(G)+1,$$
where the last inequality follows by induction hypothesis on $s$.
Suppose
$\deg_G(a_i)>1$ and $\deg_G(b_i)>1$, for all $1 \leq i \leq s$.
Let $N_{G'}(b_s)=\{y_1,\ldots,y_p,z_1,\ldots,z_q\}$.
Following the notation in Equation (\ref{nota_nbd}),
set $X_1=\{y_1,\ldots,y_p\}$,
$X_2=\{z_1,\ldots,z_q\}$ and $J=I(G')$.
It follows from a set of short exact sequences, similar to Equation
(\ref{main-exact-seq}), that
\[
\reg(R_1/J) \leq \max \left\{
\begin{array}{l}
\reg\left(\frac{R_1}{(J :y_1)}\right)+1, \ldots, \reg\left(\frac{R_1}{((J,X_1) : z_1)}\right)+1,
\ldots, \\
\;\; \\
\reg\left(\frac{R_1}{((J,X_1,z_1,\ldots,z_{q-1}):z_q)}\right) + 1,~
\reg\left(\frac{R_1}{(J,X_1,X_2)}\right)
\end{array}\right\}.
\]
Now,
\[
\begin{array}{llll}
\reg((J,X_1,X_2))& = & \reg(I(G' \setminus N_{G'}[b_s])) &
(\text{by \cite[Remark 2.5]{selvi_ha}}) \\
& \leq& \reg(I((G \setminus
N_{G}[b_s])')) & (\text{by Lemma \ref{tech_lemma}}),
\end{array}\]
where
$E(G \setminus N_G[b_s]) \cap
\{e_1,\ldots,e_s\}=\{e_{j_1},\ldots,e_{j_t}\}$ and $(G \setminus N_{G}[b_s])'$
is the graph associated to $\widetilde{(I(G \setminus
N_G[b_s])^{t+1}:e_{j_1} \cdots e_{j_t})}$.
Since $a_s \in N_G(b_s)$, $e_s \notin E(G\setminus N_G[b_s])$, so that
$t<s$. Hence by
induction hypothesis on $s$ and Observation \ref{inv-obs}, we get
$$\reg(I(G' \setminus N_{G'}[b_s])) \leq \reg(I((G \setminus N_{G}[b_s])')) \leq
\zeta(G \setminus N_{G}[b_s])+1 \leq \zeta(G).$$
Let $E(G \setminus N_G[y_i,a_s]) \cap
\{e_1,\ldots,e_s\}=\{e_{j_1},\ldots,e_{j_t}\}$ and $(G \setminus
N_G[y_i,a_s])'$ be the graph associated to
$\widetilde{(I(G \setminus N_G[y_i,a_s])^{t+1}:e_{j_1} \cdots
e_{j_t})}$.
For any $y_i \in X_1$, we have
\[
\begin{array}{llll}
\reg((J:y_i))=\reg(I(G' \setminus N_{G'}[y_i])) & \leq &
\reg(I((G \setminus N_G[y_i,a_s])')) & (\text{by Lemma
\ref{even-lemma}}) \\
& \leq & \zeta(G \setminus N_{G}[y_i,a_s])+1 & (\text{by
induction on } s) \\
& \leq & \zeta(G \setminus N_{G}[a_s])+1
\leq \zeta(G). & (\text{by Observation \ref{inv-obs}})
\end{array}
\]
Since $((J,y_1,\ldots,y_{i-1}):y_i)$
corresponds to an induced subgraph of $(J:y_i)$, it follows that
$$\reg(((J,y_1,\ldots,y_{i-1}):y_i)) \leq \reg(J:y_i)\leq \zeta(G).$$
Let $G_1'=G' \setminus X_1$. For any $z_i \in X_2$, we may conclude,
as done earlier, that
\[\reg(((J,X_1):z_i)) \leq \reg(I((G \setminus N_G[z_i,a_s])'))
\leq \zeta(G \setminus N_{G}[a_s])+1\leq \zeta(G).\]
Since $((J,X_1,z_1,\ldots,z_{i-1}):z_i)$
corresponds to an induced subgraph of $((J,X_1):z_i)$, it follows that
$$\reg(((J,X_1,z_1,\ldots,z_{i-1}):z_i)) \leq \reg(((J,X_1):z_i))\leq \zeta(G).$$
Therefore
$\reg(J) \leq \zeta(G)+1.$
Hence, for any $e_1,\ldots,e_s \in E(G)$, $s \geq 1$,
$$\reg((I(G)^{s+1}:e_1 \cdots e_s)) \leq \zeta(G)+1.$$
\end{proof}
Now we prove an upper bound for the regularity of powers of edge
ideals of graphs.
\begin{theorem}\label{main-first}
If $G$ is a graph, then for all $q \geq 1$,
$$\reg(I(G)^q) \leq 2q+\zeta(G)-1.$$
\end{theorem}
\begin{proof}
We prove by induction on $q$. If $q=1,$ then the
assertion follows from Theorem \ref{reg-upper}.
Assume that $q >1$. By applying \cite[Theorem 5.2]{banerjee} and using
induction, it is enough to prove that for edges $e_1, \ldots, e_q$ of
$G$,
$\reg((I(G)^{q+1}:e_1 \cdots e_q)) \leq \zeta(G)+1$ for all $q > 1$.
This follows from Theorem \ref{base-reg}.
\end{proof}
If $G_1$ and $G_2$ are graphs for which the linearity of $\reg(I(G_1)^s)$ and
$\reg(I(G_2)^s)$ is known for $s \geq s_1$ and $s \geq s_2$
respectively, then by \cite[Theorem 5.7]{nguyen_vu}, it is known that
$\reg(I(G_1 \coprod G_2)^s)$ is linear for $s \geq s_1+s_2$. Using
this result, we obtain a class of graphs for which the upper bound in
Theorem \ref{main-first} is attained.
\begin{proposition}\label{disjoint}
For $p \geq 0$ and $r > p$, let
$
H=\left(\coprod_{i=1}^p C_{n_i}\right) \coprod \left(\coprod_{j=p+1}^rC_{n_j}\right),
$ where
$n_1,\ldots,n_p \equiv 2~(\text{mod }3)$ and $n_{p+1},\ldots,n_r \equiv
\{0,1\}~(\text{mod }3)$.
Then for all $q \geq 1$,
$$\reg(I(H)^q)=2q+\zeta(H)-1.$$
\end{proposition}
\begin{proof}
It follows from
\cite[Theorem 7.6.28]{sean_thesis} and \cite[Theorem 5.2]{selvi_ha}
that
\[
\reg(I(C_n)) = \left\{\def\arraystretch{1.2}%
\begin{array}{@{}c@{\quad}l@{}}
\nu(C_n)+1 & \text{ if $n \equiv \{0,1\}~(\text{mod }3)$,}\\
\nu(C_n)+2 & \text{ if $n \equiv 2~(\text{mod }3)$,}\\
\end{array}\right.
\]
and that for all $q \geq 2$, $\reg(I(C_n)^q)=2q+\nu(C_n)-1.$
If $q=1$, then by \cite[Lemma 8]{russ}, we get $\reg(I(H))=\zeta(H)+1$. If $p = 0$, then
$H=(\coprod_{j=p+1}^rC_{n_j})$.
By \cite[Theorem 5.7]{nguyen_vu},
$\reg(I(H)^q)=2q+\zeta(H)-1$ for all $q \geq 1$.
Suppose $p>0$. Set $H'=(\coprod_{j=p+1}^rC_{n_j})$.
Let $L_1 = H' \coprod C_{n_1}$, where $n_1 \equiv 2(\text{mod }3)$.
Then by \cite[Proposition 2.7]{HTT},
$\reg(I(L_1)^2)=\zeta(L_1)+3.$
By \cite[Theorem 5.7]{nguyen_vu}, for $q \geq 3$, we have
$\reg(I(L_1)^q)=2q+\zeta(L_1)-1.$
For $i \geq 2$, let $L_i = L_{i-1} \coprod C_{n_i}$, where $n_i \equiv
2(\text{mod }3)$.
Recursively applying \cite[Proposition 2.7]{HTT} and \cite[Theorem
5.7]{nguyen_vu}, to the graphs $L_i$, we get for all $q \geq 2$,
$\reg(I(H)^q)=2q+\zeta(H)-1.$
\end{proof}
In \cite{jayanthan}, the authors asked if there exists a graph
$G$ with $2q + \nu(G) - 1 < \reg(I(G)^q) < 2q + \cochord(G) - 1$ for
all $q \gg 0$, \cite[Question 5.8]{jayanthan}. We show that some of
the graphs considered in Proposition
\ref{disjoint} satisfy this inequality.
Let $H$ be a graph as in Proposition \ref{disjoint} with $p \geq 1$ and $n_j \equiv
1 \pmod 3$ for $j = p+1, \ldots, r$. Then $\nu(H) =
\sum_{i=1}^r \lfloor\frac{n_i}{3} \rfloor$, $\zeta(H) =
p+ \sum_{i=1}^r \lfloor \frac{n_i}{3} \rfloor$ and $\cochord(H) =
r+ \sum_{i=1}^r \lfloor \frac{n_i}{3} \rfloor$. Therefore, we get
$$2q+\nu(H)-1<\reg(I(H)^q) = 2q + \zeta(H) - 1 <2q+\cochord(H)-1.$$
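For concreteness, here is one such instance (the parameter choice is ours): take $p=1$, $r=2$, $n_1=5$, $n_2=7$, so that $H=C_5 \coprod C_7$ with $5 \equiv 2 \pmod 3$ and $7 \equiv 1 \pmod 3$.

```latex
% One instance of the strict gap: H = C_5 \coprod C_7 (p = 1, r = 2).
\[
\nu(H)=\Big\lfloor\tfrac{5}{3}\Big\rfloor+\Big\lfloor\tfrac{7}{3}\Big\rfloor=3,
\qquad
\zeta(H)=p+3=4,
\qquad
\cochord(H)=r+3=5,
\]
\[
\underbrace{2q+2}_{2q+\nu(H)-1}
\;<\;
\reg(I(H)^q)=2q+3
\;<\;
\underbrace{2q+4}_{2q+\cochord(H)-1}.
\]
```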
Using techniques very similar to the ones used in the proof of Theorem
\ref{base-reg}, we prove a weaker version of Conjecture
\ref{ABBH-conj}.
\begin{theorem}\label{weak-conj1} Let $G$ be a graph. If every induced subgraph $H$ of
$G$ has a vertex $x$ with $\reg(I(H \setminus N_H[x]))+1 \leq
\reg(I(H))$, then for all $q \geq 1$,
\[
\reg(I(G)^q) \leq 2q+\reg(I(G))-2.
\]
\end{theorem}
\begin{proof}
Let $G$ be a graph satisfying the given hypothesis. We prove the
assertion by induction on $q$. If $q=1,$ then we are done.
Assume that $q >1$.
For any graph $K$, set
\[
\PP(K)=\{x \in V(K) \mid \reg(I(K)) \geq \reg(I(K \setminus N_K[x]))+1\}.
\]
By hypothesis, $\PP(G) \neq \emptyset$.
By applying \cite[Theorem 5.2]{banerjee} and using
induction, it is enough to prove that for edges $e_1, \ldots, e_s$ of
$G$,
$\reg((I(G)^{s+1}:e_1 \cdots e_s)) \leq \reg(I(G))$ for all $s \geq 1$.
We prove this by induction
on $s$.
Let $G'$ be the graph associated to the ideal
$\widetilde{(I(G)^{s+1}:e_1\cdots e_s)}$ which is contained in an appropriate polynomial ring
$R_1$.
Suppose $s=1$.
\vskip 1mm \noindent
\textsc{Case 1:} Suppose $e_1\cap \PP(G) \neq \emptyset$.
\vskip 1mm \noindent
Let $e_1=\{x,y\}$ with $x \in \PP(G)$.
We proceed as in the proof of Theorem \ref{base-reg}.
If either $\deg_G(x)=1$ or $\deg_G(y)=1$,
then by Lemma \ref{even_obs2}, $I(G')=I(G)$. Therefore,
$\reg(I(G'))=\reg(I(G))$.
\vskip 1mm \noindent
Suppose $\deg_G(x)>1$ and $\deg_G(y)>1$. Setting
$U=N_G(y)=\{y_1,\ldots,y_r\}$ and
$J=I(G')$ and considering the exact sequences as in Equation
(\ref{main-exact-seq}), we get
\[
\reg(R_1/J) \leq \max \left\{
\reg\left(\frac{R_1}{(J :y_1)}\right)+1,
\ldots,
\reg\left(\frac{R_1}{((J,y_1,y_2,\ldots,y_{r-1}):y_r)}\right) + 1,~
\reg\left(\frac{R_1}{(J,~U)}\right)
\right\}.
\]
Proceeding as in that proof, we conclude that $\reg(R_1/(J,U)) \leq
\reg(R/I(G))$ and $\reg(J: y_j) \leq \reg(I(G \setminus N_G[x])) <
\reg(I(G))$, where the last inequality follows from the hypothesis.
Since the graph associated to $((J,y_1,\ldots, y_{j-1}): y_j)$ is
an induced subgraph of the graph associated to $(J : y_j)$, we get
$$\reg \left( \frac{R_1}{((J,y_1,\ldots,y_{j-1}):y_j)}\right)+1 \leq
\reg \left( \frac{R_1}{(J:y_j)}\right) +1\leq \reg\left(\frac{R}{I(G)}\right).$$
Therefore,
$\reg\left( \frac{R_1}{J}\right) \leq \reg\left(\frac{R}{I(G)}\right).$
\vskip 1mm \noindent
\textsc{Case 2:} Suppose $e_1 \cap \PP(G) = \emptyset$.
\vskip 1mm \noindent
Let $|V(G)|=n$. We proceed by induction on $n$. If $n \leq 4$, then it can be
seen that $e \cap \PP(G)\neq \emptyset$, for all $e \in E(G)$ so that
\textsc{Case 2} does not occur.
Therefore $n \geq 5$.
\noindent
\begin{minipage}{\linewidth}
\begin{minipage}{0.2\linewidth}
\begin{figure}[H]
\begin{tikzpicture}[scale=.75]
\draw (4,3)-- (3,2);
\draw (4,3)-- (5,2);
\draw (3,2)-- (5,2);
\draw (3,2)-- (3.01,0.59);
\draw (5,2)-- (4.99,0.61);
\draw (3.01,0.59)-- (4.99,0.61);
\draw (4.99,0.61)-- (3.01,0.59);
\draw (5,2)-- (3,2);
\begin{scriptsize}
\fill [color=black] (4,3) circle (1.5pt);
\draw[color=black] (4.25,3.23) node {$t_1$};
\fill [color=black] (3,2) circle (1.5pt);
\draw[color=black] (2.88,2.25) node {$t_5$};
\fill [color=black] (5,2) circle (1.5pt);
\draw[color=black] (5.25,2.23) node {$t_2$};
\fill [color=black] (3.01,0.59) circle (1.5pt);
\draw[color=black] (2.79,0.7) node {$t_4$};
\fill [color=black] (4.99,0.61) circle (1.5pt);
\draw[color=black] (5.23,0.85) node {$t_3$};
\end{scriptsize}
\end{tikzpicture}
\end{figure}
\end{minipage}
\begin{minipage}{0.78\linewidth}
Suppose $n=5$. There are $23$ simple graphs (without isolated
vertices) on $5$ vertices. Among these graphs, it can be verified,
manually or using computational packages such as Macaulay 2 \cite{M2}
or SAGE \cite{sage}, that all, except the graph given on the left,
satisfy the property that $G \setminus \PP(G)$ is a set of isolated
vertices.
\end{minipage}
\end{minipage}
For this graph $G$, $\reg(I(G)) = 2$ and $\PP(G) = \{t_2, t_5\}$. Hence
$\{t_3,t_4\} \in E(G \setminus \PP(G))$ and $(I(G)^2 : t_3t_4) = I(G)$ so
that $G' = G$. Therefore, the assertion holds true.
Now assume that $n>5$. By induction, assume that if $K$ is a
graph with $|V(K)| < n$ and for every induced subgraph $K'$ of $K$,
there exists $z \in V(K')$ such that
$\reg(I(K' \setminus N_{K'}[z]))+1 \leq \reg(I(K'))$,
then $\reg(I(K)^2 :e) \leq \reg(I(K))$ for every
$e\in E(K)$ such that $e \cap \PP(K)= \emptyset$.
Let $x \in \PP(G)$. Then by \cite[Theorem 3.4]{Ha2},
\[\reg(I(G')) \leq \max
\Big\{\reg(I(G' \setminus \{x\})), \reg(I(G' \setminus
N_{G'}[x]))+1\Big\}.\]
Since $e_1 \cap \{x\}=\emptyset$, by Lemma \ref{single_vertex},
$$\reg(I(G' \setminus x))=\reg((I(G \setminus x)^2:e_1)).$$
Note that $G \setminus x$ is a graph with $|V(G \setminus x)|<n$ and every induced
subgraph of $G \setminus x$ is an induced subgraph of $G$.
If $e_1 \cap \PP(G \setminus x) \neq \emptyset$, then by Case 1, or if
$e_1 \cap \PP(G \setminus x) = \emptyset$, then by induction on the number of vertices, we get
$$\reg((I(G \setminus x)^2:e_1)) \leq \reg(I(G \setminus x)) \leq \reg(I(G)).$$
Now we prove that $\reg(I(G' \setminus N_{G'}[x]))+1 \leq \reg(I(G))$.
Note that $G \setminus N_G[x]$ is a graph with $|V(G \setminus N_G[x])|<n$.
If $e_1 \cap \PP(G \setminus N_G[x]) \neq \emptyset$, then by Case 1, or if
$e_1 \cap \PP(G \setminus N_G[x]) = \emptyset$, then by induction on the number of vertices, we get
\[
\begin{array}{llll}
\reg(I(G' \setminus N_{G'}[x]))+1 & \leq & \reg(I(G \setminus N_G[x])^2:e_1)+1 &
(\text{by Lemma }\ref{tech_lemma}) \\
& \leq & \reg(I(G \setminus N_G[x]))+1 & \\
& \leq & \reg(I(G)). &
\end{array}
\]
Hence $\reg(I(G')) \leq \reg(I(G))$. This proves the case $s =1$.
Suppose $s > 1$. We now show that $\reg(I(G)^{s+1}: e_1\cdots e_s)
\leq \reg(I(G))$. Let $e_i = \{a_i, b_i\}$ for $1 \leq i \leq s$. If
$\deg_G(a_i) = 1$ or $\deg_G(b_i) = 1$ for some $i$, then by Lemma
\ref{even_obs2}, it follows that
\[\reg(I(G)^{s+1} : e_1\cdots e_s) = \reg(I(G)^s : e_1\cdots
e_{i-1}e_{i+1}\cdots e_s) \leq \reg(I(G)),\]
where the last inequality follows from induction on $s$.
Assume now that $\deg_G(a_i) \geq 2$ and $\deg_G(b_i) \geq 2$ for all
$1 \leq i \leq s$.
\vskip 1mm \noindent
\textsc{Case 3:} Suppose $e_i \cap \PP(G) \neq \emptyset$, for some $1
\leq i \leq s$.
\vskip 1mm \noindent
Without loss of generality, assume that $e_s \cap \PP(G) \neq
\emptyset$ and $a_s \in \PP(G)$.
Proceeding as in the proof of Theorem \ref{base-reg} and following the same
notation, one gets
\begin{eqnarray*}
\reg(J,X_1,X_2) &\leq & \reg(I(G' \setminus N_{G'}[b_s]))\leq \reg(I(G \setminus N_G[b_s])) \leq
\reg(I(G));\\
\reg(J : y_i) & \leq & \reg(I((G \setminus N_G[y_i,a_s])')) \leq \reg(I((G \setminus N_G[a_s])')) \\
& \leq& \reg(I(G \setminus N_G[a_s])) < \reg(I(G));
\\
\reg(J,X_1 : z_i) &\leq &\reg(I((G \setminus N_G[y_i,a_s])')) \leq \reg(I((G \setminus N_G[a_s])'))\\
& \leq & \reg(I(G \setminus N_G[a_s])) < \reg(I(G)).
\end{eqnarray*}
For the above conclusions, we use, in a similar manner as in the proof
of Theorem \ref{base-reg}, Lemmas \ref{tech_lemma}, \ref{ind-reg},
\ref{even-lemma} and induction on $s$. Using these inequalities, we
conclude that $\reg(J)
\leq \reg(I(G))$.
\vskip 1mm \noindent
\textsc{Case 4:}
Suppose $e_i \cap \PP(G) = \emptyset$, for all $1 \leq i \leq s$.
If $|V(G)| \leq 4$, then one can see that the case $e_i \cap \PP(G) =
\emptyset$ does not occur. If $|V(G)| = 5$, then as remarked in
\textsc{Case 2}, one can see that there is only one graph $G$, which
is given there, such that $G \setminus \PP(G)$ has an edge. In this
case, the only possibility is $e_i = \{t_3, t_4\}$ for all $i = 1,
\ldots, s$. Hence $G' = G$ and hence the assertion holds true.
Now assume that $|V(G)| = n > 5$.
Let $x \in \PP(G)$. Then by \cite[Theorem 3.4]{Ha2},
\[\reg(I(G')) \leq \max
\Big\{\reg(I(G' \setminus \{x\})), \reg(I(G' \setminus
N_{G'}[x]))+1\Big\}.\]
By Lemma \ref{single_vertex}, $\reg(I(G' \setminus x)) =
\reg((I(G\setminus x)^{s+1} : e_1 \cdots e_s))$ and by Lemma
\ref{tech_lemma}, $\reg(I(G' \setminus N_{G'}[x])) \leq \reg(I(G
\setminus N_G[x])^{t+1} : e_{i_1}\cdots e_{i_t})$, where $E(G\setminus
N_G[x]) \cap \{e_1, \ldots, e_s\} = \{e_{i_1}, \ldots, e_{i_t}\}$.
Proceeding as in \textsc{Case 2}, using the above inequalities and
induction on $|V(G)|$ as well as on $s$,
one can conclude that $\reg(I(G')) \leq \reg(I(G))$.
Therefore $\reg(I(G)^{s+1} : e_1\cdots e_s) \leq \reg(I(G))$.
\end{proof}
In view of the above theorem, we would like to ask:
\begin{question}
Does every finite simple graph $G,$ having no isolated vertices, have
a vertex $x$ such that $\reg(I(G): x) + 1 \leq \reg(I(G))$?
\end{question}
We note here that if the answer to the above question is positive,
then it follows from Theorem \ref{weak-conj1} that the Conjecture
\ref{ABBH-conj} is true.
\section{Vertex decomposable graphs}\label{reg-vertex}
In this section, we prove Conjecture \ref{ABBH-conj} for vertex
decomposable graphs.
We first recall the definition of simplicial complex and vertex decomposable graph.
A \textit{simplicial complex} $\Delta$ on $V = \{x_1,\ldots,x_n\}$ is
a collection of subsets of $V$ such that:
\begin{enumerate}
\item $\{x_i\}\in \Delta $ for $i =1,\ldots,n$, and
\item if $F \in \Delta$ and $F' \subseteq F$, then $F' \in \Delta$.
\end{enumerate}
Elements of $\Delta$ are called the \textit{faces} of $\Delta$, and the maximal elements, with
respect to inclusion, are called the facets. The link of a face $F$ in
$\Delta$ is $\link_\Delta(F) = \{F' \mid F' \cup F \text{ is a face in
} \Delta, ~F' \cap F = \emptyset \}$.
A simplicial complex $\Delta$ is
recursively defined to be {\em vertex decomposable} if it is either a
simplex or else has some vertex $v$ so that
\begin{enumerate}
\item both $\Delta \setminus v$ and $\link_\Delta v$ are vertex decomposable, and
\item no face of $\link_\Delta v$ is a facet of $\Delta \setminus v$.
\end{enumerate}
The \textit{independence complex} of $G$, denoted $\Delta(G)$, is the simplicial
complex on $V(G)$ with face set
$$\Delta(G)=\Big\{F \subseteq V(G) \mid F \text{ is an independent set of $G$ } \Big\}.$$
A graph $G$ is said to be \textit{vertex decomposable} if $\Delta(G)$ is a
vertex decomposable simplicial complex.
In \cite{Wood2}, Woodroofe translated the notion of vertex decomposable
for graphs as follows.
\begin{definition} \cite[Lemma 4]{Wood2}
A graph $G$ is recursively defined to be vertex decomposable if
$G$ is totally disconnected (with no edges) or if
\begin{enumerate}
\item there is a vertex $x$ in $G$ such that $G \setminus x$ and
$G \setminus N_G[x]$ are both vertex decomposable, and
\item no independent set in $G\setminus N_G[x]$ is a maximal independent set
in $G\setminus x$.
\end{enumerate}
\end{definition}
A vertex $x$ which satisfies the second condition is called a \textit{shedding vertex} of
$G$.
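Woodroofe's recursion can be run directly on small graphs. The brute-force sketch below is our illustration (exponential time, so tiny examples only); it uses condition (2) in the equivalent form that $x$ is a shedding vertex iff every maximal independent set of $G \setminus x$ contains a neighbor of $x$.

```python
from itertools import combinations

def is_independent(S, edges):
    """True if no edge has both endpoints in S."""
    return not any(e <= S for e in edges)

def maximal_independent_sets(vertices, edges):
    """Brute-force list of all maximal independent sets (small graphs only)."""
    mis = []
    for r in range(len(vertices) + 1):
        for c in combinations(sorted(vertices), r):
            S = set(c)
            if is_independent(S, edges) and \
               all(not is_independent(S | {v}, edges) for v in vertices - S):
                mis.append(S)
    return mis

def is_vertex_decomposable(vertices, edges):
    """Woodroofe's recursion: G is vertex decomposable if it has no edges,
    or has a shedding vertex x with G\\x and G\\N[x] both vertex decomposable."""
    vertices = set(vertices)
    edges = {frozenset(e) for e in edges}
    if not edges:                          # totally disconnected: base case
        return True
    for x in vertices:
        Nx = {u for e in edges if x in e for u in e} - {x}
        Vx = vertices - {x}                # G \ x
        Ex = {e for e in edges if e <= Vx}
        Vn = Vx - Nx                       # G \ N[x]
        En = {e for e in edges if e <= Vn}
        # shedding condition: every maximal independent set of G\x meets N(x)
        if any(not (S & Nx) for S in maximal_independent_sets(Vx, Ex)):
            continue
        if is_vertex_decomposable(Vx, Ex) and is_vertex_decomposable(Vn, En):
            return True
    return False

C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_vertex_decomposable(range(5), C5))   # True
print(is_vertex_decomposable(range(4), C4))   # False: C_4 has no shedding vertex
```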
If $G$ is a vertex decomposable graph, then by \cite[Theorem 2.5]{BFH15},
$G \setminus N_G[x]$ is a vertex decomposable graph, for any $x \in V(G)$.
For any vertex decomposable graph $K$, set
\[
\s(K)=\Big \{ x \in V(K) \mid \text{$x$ is a shedding vertex and
$K \setminus x$ is a vertex decomposable graph} \Big\}.
\]
Note that if $K$ is vertex decomposable, then $\s(K) \neq \emptyset$.
\begin{obs} \label{reg-obs}
Let $G$ be a vertex decomposable graph and $x \in \s(G)$. By \cite[Theorem 4.2]{HaWood},
$$\reg(I(G))=\max\Big\{\reg(I(G \setminus x)), ~\reg(I(G \setminus N_G[x]))+1\Big\}.$$
Therefore, $\reg(I(G \setminus N_G[x]))+1 \leq \reg(I(G))$.
\end{obs}
We prove Conjecture \ref{ABBH-conj} for the class of vertex decomposable graphs.
\begin{theorem}\label{upper-ver}
Let $G$ be a vertex decomposable graph. Then for all $q \geq 1$,
$$\reg(I(G)^q) \leq 2q+\reg(I(G))-2.$$
\end{theorem}
\begin{proof}
This proof is also very similar to the proof of Theorem
\ref{weak-conj1}. We give a sketch of the proof here. The proof is by
induction on $q$. If $q=1,$ then we are done. Assume that $q >1$. By
applying \cite[Theorem 5.2]{banerjee} and using induction, it is
enough to prove that for edges $e_1, \ldots, e_s$ of $G$ (not
necessarily distinct), $\reg((I(G)^{s+1}:e_1 \cdots e_s)) \leq
\reg(I(G))$ for all $s \geq 1$. We prove this by induction on $s$.
Let $G'$ be the graph associated to $\widetilde{(I(G)^{s+1}:e_1\cdots
e_s)}$, for any $e_1, \ldots, e_s \in E(G)$.
Let $s = 1$. As in the proof of Theorem \ref{weak-conj1}, we split the
proof into two cases.
\vskip 1mm \noindent
\textsc{Case 1:} Suppose $e_1\cap \s(G) \neq \emptyset$.
\vskip 1mm \noindent
The proof is identical to \textsc{Case 1} in Theorem \ref{weak-conj1}.
The only difference in the proof is that
while we used the hypothesis in Theorem \ref{weak-conj1} to conclude
that $\reg(J:y_i) < \reg(I(G))$, here we use Observation \ref{reg-obs}
for that conclusion.
\vskip 1mm \noindent
\textsc{Case 2:} Suppose $e_1 \cap \s(G) = \emptyset$.
\vskip 1mm \noindent
Let $|V(G)|=n$. We proceed by induction on $n$.
If $G$ is a vertex decomposable graph with $n \leq 4$, then
it can be seen that $e \cap \s(G)\neq \emptyset$ for all $e \in
E(G)$. Therefore $n \geq 5$.
\noindent
\begin{minipage}{\linewidth}
\begin{minipage}{0.41\linewidth}
\begin{figure}[H]
\begin{tikzpicture}
\draw (3,3)-- (2,2);
\draw (3,3)-- (4,2);
\draw (2,2)-- (4,2);
\draw (5,3)-- (5,2);
\draw (5,2)-- (7,2);
\draw (7,2)-- (7,1);
\draw (5,2)-- (5,1);
\draw (5,1)-- (7,1);
\draw (4,2)-- (4,1);
\draw (2,2)-- (2,1);
\draw (2,1)-- (4,1);
\begin{scriptsize}
\fill [color=black] (3,3) circle (1.5pt);
\fill [color=black] (2,2) circle (2.5pt);
\fill [color=black] (4,2) circle (2.5pt);
\fill [color=black] (5,3) circle (1.5pt);
\fill [color=black] (5,2) circle (2.5pt);
\fill [color=black] (7,2) circle (1.5pt);
\fill [color=black] (7,1) circle (1.5pt);
\fill [color=black] (5,1) circle (1.5pt);
\fill [color=black] (4,1) circle (1.5pt);
\fill [color=black] (2,1) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\caption*{Vertex decomposable graphs with $|V(G)|=5$, where $\bullet$
denotes a shedding vertex.}\label{fig:vertexdecom}
\end{figure}
\end{minipage}
\begin{minipage}{0.59\linewidth}
Suppose $n=5$.
It can be verified, manually or using computational packages such as
Macaulay 2 \cite{M2}, SimplicialDecomposability \cite{Cook} or SAGE \cite{sage}, that there are $20$ vertex decomposable
graphs (without isolated vertices) on $5$ vertices. Among these
graphs, all except two graphs satisfy the property that $G \setminus
\s(G)$ is a set of isolated vertices. The two graphs for which $G
\setminus \s(G)$ contain edges are given in the figure on the left.
Then for any choice of $e \in E(G \setminus
\s(G)), ~ (I(G)^2 : e) = I(G)$ so that $G' = G$.
\end{minipage}
\end{minipage}
Now assume that $n>5$. By induction, assume that if $H$ is a
vertex decomposable graph with $|V(H)| < n$ and $e
\in E(H)$ such that $e \cap \s(H)= \emptyset$, then $\reg(I(H)^2 :
e) \leq \reg(I(H))$.
Let $x \in \s(G)$. By \cite[Theorem 3.4]{Ha2},
\[\reg(I(G')) \leq \max
\Big\{\reg(I(G' \setminus \{x\})), \reg(I(G' \setminus
N_{G'}[x]))+1\Big\}.\]
Now the proof is identical to that of the proof of \textsc{Case 2} in
Theorem \ref{weak-conj1}. In this case also, we use Observation
\ref{reg-obs} to conclude $\reg(I(G \setminus N_G[x])) < \reg(I(G))$.
Suppose $s > 1$. Assume by induction that for any vertex decomposable graph $H$
and edges $e_1,\ldots, e_{s-1}$ of $H$, $\reg(I(H)^s : e_1 \cdots e_{s-1}) \leq \reg(I(H))$.
We now prove that for edges $e_1, \ldots, e_s$, $\reg(I(G)^{s+1} :
e_1\cdots e_s) \leq \reg(I(G))$. Let $e_i = \{a_i, b_i\}$. If
$\deg_G(a_i) = 1$ or $\deg_G(b_i) = 1$ for some $i$, then the
assertion follows as in the proof of Theorem \ref{weak-conj1}.
Therefore, we may assume that $\deg_G(a_i) \geq 2$ and $\deg_G(b_i)
\geq 2$ for all $1 \leq i \leq s$.
\vskip 1mm \noindent
\textsc{Case 3:} Suppose $e_i \cap \s(G) \neq \emptyset$, for some $1
\leq i \leq s$.
\vskip 1mm \noindent
Without loss of generality, assume that $e_s \cap \s(G) \neq \emptyset$ and $a_s \in \s(G)$.
Proceeding as in the proof of Theorem \ref{weak-conj1} and following the same
notation, one gets
\begin{eqnarray*}
\reg(J,X_1,X_2) &\leq & \reg(I(G' \setminus N_{G'}[b_s]))\leq \reg(I(G \setminus N_G[b_s])) \leq
\reg(I(G)); \\
\reg(J : y_i) & \leq & \reg(I((G \setminus N_G[y_i,a_s])')) \leq \reg(I((G \setminus N_G[a_s])')) \\
& \leq& \reg(I(G \setminus N_G[a_s])) < \reg(I(G));
\\
\reg(J,X_1 : z_i) &\leq &\reg(I((G \setminus N_G[y_i,a_s])')) \leq \reg(I((G \setminus N_G[a_s])'))\\
& \leq & \reg(I(G \setminus N_G[a_s])) < \reg(I(G)).
\end{eqnarray*}
Here also, we use Observation \ref{reg-obs} along with
Lemmas \ref{tech_lemma},
\ref{ind-reg}, \ref{even-lemma} and induction on $s$ for the above
conclusions.
Using these inequalities, we conclude, as in the proof
of Theorem \ref{weak-conj1}, that $\reg(J) \leq \reg(I(G))$.
\vskip 1mm \noindent
\textsc{Case 4:}
Suppose $e_i \cap \s(G) = \emptyset$, for all $1 \leq i \leq s$.
Let $x \in \s(G)$. By \cite[Theorem 3.4]{Ha2},
\[\reg(I(G')) \leq \max
\Big\{\reg(I(G' \setminus \{x\})), \reg(I(G' \setminus
N_{G'}[x]))+1\Big\}.\]
Here too, the proof is identical to \textsc{Case 4} of Theorem
\ref{weak-conj1}.
Finally, we obtain
$\reg(I(G)^{s+1} : e_1\cdots e_s) \leq \reg(I(G))$. Therefore, for all
$q \geq 1$,
$\reg(I(G)^q) \leq 2q + \reg(I(G)) - 2$.
\end{proof}
As an immediate consequence of the above result, we obtain the linear
polynomial corresponding to $\reg(I(G)^q)$ for several
classes of graphs.
\begin{corollary}\label{main-cor} If
\begin{enumerate}
\item $G$ is $C_5$-free vertex decomposable;
\item $G$ is chordal;
\item $G$ is sequentially Cohen-Macaulay bipartite; or
\item $G=H \cup W(S)$, where $S \subseteq V(H)$, $H \setminus S$ is a
chordal graph and $H \cup W(S)$ denotes the graph obtained from $H$ by adding a whisker to each vertex in $S$,
\end{enumerate}
then for all $q \geq 1$, $$\reg(I(G)^q)=2q+\nu(G)-1.$$
\end{corollary}
\begin{proof} (1) Follows from Theorem \ref{upper-ver}, \cite[Theorem 4.5]{selvi_ha} and
\cite[Lemma 2.3]{khosh_moradi}.
\vskip 1mm \noindent
(2) Since $G$ is chordal, it is $C_5$-free and by \cite{Wood2} $G$ is
vertex decomposable. By (1), $\reg(I(G)^q)=2q+\nu(G)-1$.
\vskip 1mm \noindent
(3) By \cite[Theorem 2.10]{adam}, $G$ is vertex decomposable.
Since a bipartite graph is $C_5$-free, the assertion follows from
(1).
\vskip 1mm \noindent
(4) First we show that $\reg(I(G))=\nu(G)+1$. By \cite[Lemma 2.2]{Kat}, we have
$\nu(G)+1 \leq \reg(I(G))$. Therefore, it is enough to prove that $\reg(I(G))\leq
\nu(G)+1$. Set $|S|=m$.
If $m=0$, then by \cite[Corollary 6.9]{ha_adam}
$\reg(I(G))=\nu(G)+1.$
Suppose $m \geq 1$. Let $x \in S$ and let $z_x$ denote the whisker vertex attached to $x$, so that $N_{G}(z_x)=\{x\}$.
By \cite[Theorem 3.4]{Ha2},
$$\reg(I(G)) \leq \max \{\reg(I(G \setminus x)),\reg(I(G \setminus N_{G}[x]))+1\}.$$
By the induction hypothesis on $m$,
$\reg(I(G \setminus x)) \leq \nu(G)+1$ and $\reg(I(G \setminus N_{G}[x]))+1 \leq
\nu(G \setminus N_G[x])+2$. If $\{f_1,\ldots,f_t\}$ is an induced matching of $G \setminus N_G[x]$,
then $\{f_1,\ldots,f_t,\{x,z_x\}\}$ is an induced matching of $G$.
Therefore $\nu(G \setminus N_G[x])+1 \leq \nu(G)$. Hence
$ \reg(I(G))=\nu(G)+1.$
By \cite[Corollary 4.6]{BFH15}, $G$ is a vertex decomposable graph. Therefore, by
Theorem \ref{upper-ver} and \cite[Theorem 4.5]{selvi_ha},
$\reg(I(G)^q)=2q+\nu(G)-1.$
\end{proof}
\vskip 2mm \noindent
\textbf{Acknowledgement:} We would like to thank Fahimeh Khosh-Ahang,
Huy T{\`a}i H\`a, Adam Van Tuyl and Russ Woodroofe for several
clarifications on our doubts on their results. We also would like to
thank Arindam Banerjee and Huy T\`ai H\`a for pointing out an error in
one of the proofs in an earlier version of the manuscript. We
extensively used \textsc{Macaulay2}, \cite{M2}, \textsc{SAGE},
\cite{sage} and the package \textsc{SimplicialDecomposability},
\cite{Cook}, for our computations.
\bibliographystyle{abbrv}
% https://arxiv.org/abs/1805.10510
\title{A modification of the Chang-Wilson-Wolff Inequality via the Bellman Function}
\begin{abstract}
We describe the Bellman function technique for proving sharp inequalities in harmonic analysis. To provide an example along with historical context, we present how it was originally used by Donald Burkholder to prove $L^p$ boundedness of the $\pm 1$ martingale transform. Finally, with Burkholder's result as a blueprint, we use the Bellman function to prove a new result related to the Chang-Wilson-Wolff Inequality.
\end{abstract}
\section{Introduction}
The Bellman function technique, named for applied mathematician Richard Bellman, is a tool that has been imported from the applied field of stochastic optimal control, and is now being used to tackle problems in probability and harmonic analysis. It was introduced to the world of analysis by Donald Burkholder, who in \cite{Burkholder} used it to prove that the $\pm$1 transform of a martingale is a bounded operator on $L^p$. We will borrow Burkholder's ideas to prove a new result concerning the exponential integrability of dyadic martingales.
Section 2 is expository. We briefly summarize Burkholder's use of the Bellman function to prove a sharp martingale inequality. This will serve as homage to Burkholder, the pioneer, and provide some historical context. As importantly, it will give a template upon which we will build the proof of our main result. The Bellman function technique is much easier to demonstrate than it is to describe abstractly.
In section 3, we introduce a well-known inequality from harmonic analysis due to Chang, Wilson, and Wolff which classifies the order of local integrability of a function whose dyadic square function is bounded. Their result says that given a function $f:[0,1) \to \mathbb{R}$ and its dyadic square function $Sf$,
\begin{equation*}
\int_0^1 e^{f(x)-\langle f \rangle_{[0,1)}}dx \leq e^{\frac{1}{2}\|(Sf)^2\|_{L^\infty}}.
\end{equation*}
Next, we address a related question from \cite{SV}. Namely, we explore whether there exists a constant $\alpha$ such that
\begin{equation*}
\int_0^1 e^{f(x)-\langle f \rangle_{[0,1)}}dx \leq \int_0^1 e^{\alpha(Sf)^2(x)}dx,
\end{equation*}
and if so, what the smallest valid choice of $\alpha$ is. Stated probabilistically, given a dyadic martingale with $f_0 = 0$, what is the smallest $\alpha$ such that
\begin{equation*}
\mathbb{E}e^{f_n} \leq \mathbb{E}e^{\alpha(Sf_n)^2}?
\end{equation*}
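Before any theory, the inequality can be tested exactly on the simplest dyadic martingales: sums of independent symmetric $\pm$ steps (our illustration, not part of the paper's argument). For these, each $|df_k|$ is constant, so $(Sf_n)^2$ is deterministic and $\mathbb{E}e^{f_n}=\prod_k \cosh(s_k)$; since $\cosh s \leq e^{s^2/2}$, even $\alpha=\tfrac12$ passes here, so this check exercises the inequality but says nothing about sharpness.

```python
import itertools
import math

def check(alpha, steps):
    """Exact check of E[e^{f_n}] <= E[e^{alpha*(Sf_n)^2}] for the martingale
    f_n = sum_k eps_k*steps[k], with independent signs eps_k = +-1, each with
    probability 1/2.  Here df_k = eps_k*steps[k], so (Sf_n)^2 = sum_k steps[k]^2
    is deterministic."""
    n = len(steps)
    lhs = sum(math.exp(sum(e * s for e, s in zip(signs, steps)))
              for signs in itertools.product((-1, 1), repeat=n)) / 2 ** n
    rhs = math.exp(alpha * sum(s * s for s in steps))
    return lhs <= rhs

# alpha = 2 passes on this (hypothetical) example; a too-small alpha fails.
print(check(2.0, [1.0, 0.5, 2.0, 0.25]))   # True
print(check(0.1, [2.0]))                   # False: cosh(2) > e^{0.4}
```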
We use the Bellman function to prove, without constructing an explicit example, that $\alpha = 2$ makes this inequality sharp.
Lastly, in section 4, we provide an alternate proof of our inequality by applying Cauchy-Schwarz to a result known as Rubin's lemma. We attempt to construct an example of a martingale which shows $\alpha = 2$ is sharp, but come up short. Our example pushes $\alpha$ up to $\log_2(e) \approx 1.44$, but finding the extremal martingale is left for future work.
\section{Bellman Function Technique}
We begin by illustrating the utility of the Bellman function. We shall summarize Burkholder's arguments, which we will repurpose for our own problem later on. The exposition roughly follows that of \cite{Osekowski}.
\begin{definition}
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space filtered by $\{\mathcal{F}_n\}$. Then the sequence of random variables $\{f_n\}$ is called a martingale if for each $n \in \mathbb{N}$
\begin{enumerate}
\item $f_n$ is $\mathcal{F}_n$ measurable\\
(i.e., $\{f_n\}$ is adapted to the filtration $\{\mathcal{F}_n\}$)
\item $\|f_n\|_{L^1} < \infty$
\item $\mathbb{E}[f_{n+1}|\mathcal{F}_n] = f_n$
\end{enumerate}
\end{definition}
If we replace the equality in condition 3 with $\leq$ or $\geq$, $\{f_n\}$ is called a supermartingale or submartingale respectively.
\begin{definition}
Let $\{f_n\}$ be a martingale. Define $df_0 = f_0$ and $df_n = f_n - f_{n-1}$ for $n>0$. $\{df_n\}$ is called the difference sequence of $\{f_n\}$.
\end{definition}
Note that $\{df_n\}$ is adapted to $\{\mathcal{F}_n\}$ and $\mathbb{E}[df_{n+1}|\mathcal{F}_n] = 0$. It is often useful to express a martingale as the sum of its difference sequence, $f_n = \sum_{k=0}^n df_k$.
\begin{definition}
Let $\{f_n\}$ be a martingale, and define
$$g_n = \sum_{k=0}^n \epsilon_k df_k$$
where $\{\epsilon_k\}$ is a deterministic sequence, all of whose terms are $\pm1$. $\{g_n\}$ is called a $\pm 1$ transform of $\{f_n\}$.
\end{definition}
In \cite{Burkholder}, Burkholder used the Bellman function technique to prove the following result.
\begin{theorem}
For each $1<p<\infty$, there is a constant $\beta_p$ such that if $\{g_n\}$ is a $\pm 1$ transform of $\{f_n\}$, then for all $n \in \mathbb{N}$
\begin{equation}
\|g_n\|_p \leq \beta_p \|f_n\|_p
\end{equation}
\end{theorem}
In other words, each $\pm 1$ transform is a bounded operator on $L^p$ for $1<p<\infty$. Theorem 2.4 has an interesting corollary, which highlights the connection between harmonic analysis and probability: it implies that the Haar basis for $L^p([0,1))$, $1<p<\infty$, is unconditional, since the partial sums of a Haar series form a martingale on $([0,1),\mathcal{B},|\cdot|)$ (see \cite{Burkholder} or \cite{Osekowski} for details). Thinking about functions as random variables is sometimes a useful perspective in analysis, as it allows us to import tools from probability where it is not obvious that they belong.
We now outline Burkholder's proof of Theorem 1 in order to demonstrate the Bellman function technique and provide a blueprint. Its essence is to relate the validity of an inequality to the existence of a special function. Often the existence of the function is easier to prove or disprove than the given inequality. We will henceforth assume that $\{f_n\}$ is a simple martingale (each $f_n$ takes on finitely many values and eventually $f_n = f_{n+1} = \dots$). Passage to the general case follows from an approximation argument.
\begin{theorem}
Suppose there exists a function $B:\mathbb{R}^2 \to \mathbb{R}$ with the following three properties.
\begin{enumerate}
\item (Majorization) $B(x,y) \geq |y|^p - \beta_p^p |x|^p \eqqcolon V(x,y)$
\item (Concavity) For all $x,y,t_1,t_2 \in \mathbb{R}, \epsilon = \pm 1$, and $\alpha \in (0,1)$ such that $\alpha t_1 + (1 - \alpha) t_2 = 0$, we have
$$\alpha B(x + t_1,y + \epsilon t_1) + (1-\alpha)B(x + t_2, y + \epsilon t_2) \leq B(x,y)$$
\item (Initial condition) $B(x,\pm x) \leq 0$
\end{enumerate}
Then (2.1) holds.
\end{theorem}
As promised, we've reduced the veracity of (2.1) to the existence of a certain function with special properties. Note that, due to the majorization property, the existence of $B$ depends on our choice of $\beta_p$ and $p$. This is to be expected because (2.1) may hold for some $\beta_p$ and $p$ but not others.
$B$ is actually not the Bellman function, but a pointwise majorant of it. The definition of Bellman function and its relationship to our $B$ will be discussed momentarily.
Property 1 states that $B$ dominates a function $V$ whose definition is suggested by the inequality we're after. Note that (2.1) is equivalent to $\mathbb{E}V(f_n,g_n) \leq 0$. Property 2 says that $B$ is ``diagonally concave,'' i.e., it is concave along the lines of slope $\pm 1$. This implies $\mathbb{E}B(x + \xi, y \pm \xi) \leq B(x,y)$ for all mean zero random variables $\xi$ by Jensen's inequality. The form of this concavity condition also varies with the inequality to be proven. It is chosen so that $\{B(f_n,g_n)\}$ is a supermartingale.
\begin{lemma}
If $B$ satisfies condition 2 in Theorem 2.5, then $\{B(f_n,g_n)\}$ is a supermartingale.
\end{lemma}
\begin{proof}
\begin{align}
\mathbb{E}[B(f_{n+1},g_{n+1})|\mathcal{F}_n] &= \mathbb{E}[B(f_n + df_{n+1},g_n \pm df_{n+1})|\mathcal{F}_n]\\
&\leq \mathbb{E}[B(f_{n},g_{n})|\mathcal{F}_n]\\
&= B(f_n,g_n)
\end{align}
where (2.2) uses that $g_n$ is a $\pm 1$ transform of $f_n$ and (2.3) is from applying condition 2 in Theorem 2.5 conditionally. (2.4) is because $(f_n,g_n)$ is $\mathcal{F}_n$ measurable.
\end{proof}
We now prove Theorem 2.5.
\begin{proof}
Recall that it suffices to show $\mathbb{E}V(f_n,g_n) \leq 0$.
\begin{align}
\mathbb{E}V(f_n,g_n) &\leq \mathbb{E}B(f_n,g_n)\\
&=\mathbb{E}\big[\mathbb{E}[B(f_n,g_n)|\mathcal{F}_{n-1}]\big]\\
&\leq \mathbb{E}B(f_{n-1},g_{n-1})
\end{align}
where (2.5) follows from majorization, (2.6) holds because conditional expectation preserves expectation, and (2.7) follows from the fact that $\{B(f_n,g_n)\}$ is a supermartingale.
Repeating the argument $n$ times, we get
$$\mathbb{E}V(f_n,g_n) \leq \mathbb{E}B(f_0,g_0) = \mathbb{E}B(df_0,\pm df_0) \leq 0$$
The final inequality is from the initial condition.
\end{proof}
With Theorem 2.5 proved, we can now prove Theorem 2.4 by producing an appropriate $B$. However, we can actually do more: given $1<p<\infty$ and $\beta_p$, if no such $B$ exists, then (2.1) is false.
\begin{theorem}
Given $1<p<\infty$ and $\beta_p$, (2.1) holds if and only if there exists a function $B$ with the three properties from Theorem 2.5.
\end{theorem}
\begin{proof}
We know that if $B$ exists, then (2.1) holds by Theorem 2.5. Now we must show that if (2.1) holds, then $B$ exists.
Let $\mathcal{M}(x,y)$ be the set of all $\mathbb{R}^2$ valued martingales $(f_n,g_n)$ such that $(f_0,g_0) \equiv (x,y)$ and $dg_n = \pm df_n$ for $n \geq 1$. We define the Bellman function
$$\mathcal{B}(x,y) \coloneqq \sup\{\mathbb{E}V(f_n,g_n) : (f_n,g_n)\in \mathcal{M}(x,y)\}$$
We will show that $\mathcal{B}$ is the desired function $B$ possessing the three properties. Majorization is straightforward: observe that the deterministic pair $(x,y) \in \mathcal{M}(x,y)$. The initial condition $\mathcal{B}(x,\pm x) \leq 0$ follows from (2.1). To show concavity, we use a ``splicing argument.'' Take $x,y,t_1,t_2,\alpha,\epsilon$ as in the statement of the concavity condition. Choose any $(f_n^a,g_n^a) \in \mathcal{M}(x + t_1, y + \epsilon t_1)$ and $(f_n^b,g_n^b) \in \mathcal{M}(x + t_2, y + \epsilon t_2)$. We may assume the pairs are given on $([0,1),\mathcal{B},|\cdot|)$. We will define another martingale by ``splicing'' these two. Let $(f_0,g_0) \equiv (x,y)$ and for $n \geq 0$
\[ (f_{n+1},g_{n+1})(\omega) =
\begin{cases}
(f_n^a,g_n^a)(\frac{\omega}{\alpha}) & \omega \in [0,\alpha) \\
(f_n^b,g_n^b)(\frac{\omega - \alpha}{1 - \alpha}) & \omega \in [\alpha,1) \\
\end{cases}
\]
One can check that $\{(f_n,g_n)\} \in \mathcal{M}(x,y)$, and
\begin{align*}
\mathcal{B}(x,y) \geq \mathbb{E}V(f_n,g_n) = \alpha\mathbb{E}V(f_n^a,g_n^a) + (1 - \alpha)\mathbb{E}V(f_n^b,g_n^b)
\end{align*}
Taking the supremum over all such $(f_n^a,g_n^a)$ and $(f_n^b,g_n^b)$ yields
\begin{align*}
\mathcal{B}(x,y) \geq \alpha \mathcal{B}(x + t_1,y + \epsilon t_1) + (1-\alpha)\mathcal{B}(x + t_2, y + \epsilon t_2)
\end{align*}
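The splicing construction can be illustrated with a short computation. The Python sketch below (ours, not part of the paper) takes $\alpha = 1/2$ so that the splice aligns with a dyadic grid, and for simplicity splices only the first coordinate: it glues two randomly generated martingales started at $x + t$ and $x - t$, prepends the constant $x$, and verifies that the result is again a martingale.

```python
import numpy as np

# Numerical sketch of the splicing argument above (ours, not the paper's):
# with alpha = 1/2 the splice aligns with the dyadic grid. We glue a
# martingale started at x + t on [0, 1/2) to one started at x - t on
# [1/2, 1), prepend the constant x, and check the result is a martingale.

rng = np.random.default_rng(1)

def random_levels(start, depth, rng):
    """A finite dyadic martingale as a list of per-level value arrays."""
    levels = [np.array([float(start)])]
    for _ in range(depth):
        f = levels[-1]
        d = rng.standard_normal(len(f))
        # each parent cell splits into children with values f + d and f - d
        levels.append(np.repeat(f, 2) + np.column_stack((d, -d)).ravel())
    return levels

def splice(levels_a, levels_b, x):
    """Level 0 is the constant x; level n+1 is a on [0,1/2), b on [1/2,1)."""
    return [np.array([float(x)])] + [np.concatenate((la, lb))
                                     for la, lb in zip(levels_a, levels_b)]

def is_martingale(levels):
    return all(np.allclose(levels[n], levels[n + 1].reshape(-1, 2).mean(axis=1))
               for n in range(len(levels) - 1))

x, t = 0.3, 1.2
spliced = splice(random_levels(x + t, 3, rng), random_levels(x - t, 3, rng), x)
```

The key point, as in the proof, is that the spliced sequence starts at $x = \alpha(x+t) + (1-\alpha)(x-t)$, so the martingale property survives the gluing.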
The last thing that must be checked is that $\mathcal{B}$ is finite on all of $\mathbb{R}^2$. We know $\mathcal{B} \geq V > -\infty$ so we only need to check $\mathcal{B}(x,y) < \infty$. The initial condition says that $\mathcal{B}(x,\pm x) \leq 0$. Now suppose $|x|\neq |y|$ and let $(f_n,g_n)\in \mathcal{M}(x,y)$. We construct another martingale $(f_n',g_n')$ as follows (again, we may assume our martingales are defined on $([0,1),\mathcal{B},|\cdot|)$).
\begin{align*}
(f'_0,g'_0) &\equiv (\frac{x+y}{2},\frac{x+y}{2}) \\
(f'_1,g'_1)(\omega) &=
\begin{cases}
(x,y) & \omega\in [0,\frac{1}{2}) \\
(y,x) & \omega\in [\frac{1}{2},1) \\
\end{cases} \\
(f'_n,g'_n)(\omega) &=
\begin{cases}
(f_{n-1},g_{n-1})(2\omega) & \omega\in [0,\frac{1}{2}) \\
(y,x) & \omega\in [\frac{1}{2},1)\\
\end{cases}
\end{align*}
\noindent
where the last equality holds for $n \geq 2$. Note that $f'$ is a $\pm 1$ transform of $g'$ and
$$0 \geq \mathbb{E}V(f_n',g_n') = \frac{1}{2}V(y,x) + \frac{1}{2}\mathbb{E}V(f_{n-1},g_{n-1})$$
\noindent
Taking the supremum over all $(f_n,g_n)\in \mathcal{M}(x,y)$ gives $\mathcal{B}(x,y) \leq -V(y,x) < \infty$ and we're done.
\end{proof}
Equipped with Theorem 2.7, we can do more than just prove our inequality. We can find the optimal constant $\beta_p$. If $B$ does not exist when $\beta_p < C_p$ but $B$ does exist when $\beta_p \geq C_p$, then $C_p$ is optimal. We will not construct $B$ here because our intention is not to reproduce Burkholder's result, but rather demonstrate the utility of his approach and provide a blueprint for proving our new inequality.
\section{A modification of the Chang-Wilson-Wolff Inequality}
Let $f \in L^1(I)$ for some real interval $I$. Let $\mathcal{F}_n$ be the $\sigma$-algebra generated by the dyadic subintervals of $I$ of length $|I|2^{-n}$ and let $\mathcal{F} = \sigma\big(\bigcup_n \mathcal{F}_n\big)$. Then $f_n = \mathbb{E}[f|\mathcal{F}_n]$ is a martingale on the probability space $(I,\mathcal{F},\frac{|\cdot|}{|I|})$. Due to the dyadic filtration, such a martingale is called a dyadic martingale.
Now define $Sf_n = \|\{df_k\}_{k=1}^n\|_{\ell^2} = \sqrt{\sum_{k=1}^n (df_k)^2}$ and let $Sf = \lim\limits_{n \to \infty} Sf_n$. In the context of probability, the sequence $\{Sf_n\}$ is called the quadratic variation of the martingale $\{f_n\}$. In the Littlewood-Paley theory of harmonic analysis, $Sf$ is called the dyadic square function of $f$ and is the discrete counterpart of the Lusin area function, whose definition would take us too far from our present goal. For a description of the connection between the square function and the Lusin area function, refer to \cite{Llorente}. For a survey of the role of square functions in harmonic analysis, consult \cite{Stein}.
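On a grid of $2^N$ dyadic cells these objects are straightforward to compute. The following Python sketch (ours, not part of the paper; the test function is an arbitrary choice) builds $f_n = \mathbb{E}[f|\mathcal{F}_n]$ by block averaging and assembles the square function.

```python
import numpy as np

# Sketch (ours): represent f by its values on 2^N dyadic cells of [0,1),
# compute the dyadic martingale f_n = E[f | F_n] by block averaging, and
# form the square function Sf_N = sqrt(sum_k (df_k)^2).

def dyadic_levels(f_vals):
    """Return [f_0, ..., f_N], each expanded back to the full grid."""
    N = int(np.log2(len(f_vals)))
    levels = []
    for n in range(N + 1):
        block = len(f_vals) // 2 ** n
        fn = f_vals.reshape(2 ** n, block).mean(axis=1)  # averages on gen-n cells
        levels.append(np.repeat(fn, block))
    return levels

def square_function(levels):
    return np.sqrt(sum((levels[k] - levels[k - 1]) ** 2
                       for k in range(1, len(levels))))

grid = (np.arange(16) + 0.5) / 16        # midpoints of 16 dyadic cells
f = np.sin(2 * np.pi * grid)             # an arbitrary test function
levels = dyadic_levels(f)
Sf = square_function(levels)             # f_N recovers f; Sf >= 0 pointwise
```

Block averaging preserves the mean, which is exactly the martingale property $\mathbb{E}[f_{n+1}|\mathcal{F}_n] = f_n$ in this discrete picture.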
In \cite{CWW}, the authors answer a question posed by the illustrious harmonic analyst Elias Stein. The question concerns the size of the functions in a certain subspace of BMO: What is the sharp order of local integrability of a function $f$ with a pointwise bounded square function $Sf$? Suppose for simplicity and concreteness that $f \in L^1([0,1))$. The authors show that
\begin{equation}
\int_0^1 e^{f(x)-\langle f \rangle_{[0,1)}}dx \leq e^{\frac{1}{2}\|Sf\|^2_{L_{[0,1)}^\infty}}
\end{equation}
This result, called the Chang-Wilson-Wolff inequality, can be used to show that if $Sf$ is bounded, then $f$ is exponentially square integrable \cite{Pipher}.
In \cite{SV}, the authors explore a related question: Is there a constant $\alpha$ such that
\begin{equation}
\int_0^1 e^{f(x)-\langle f \rangle_{[0,1)}}dx \leq \int_0^1 e^{\alpha(Sf)^2(x)}dx
\end{equation}
Inspired by their work, our goal in this section is to use the Bellman function technique to find the smallest such $\alpha$. In the language of probability, given a dyadic martingale with $f_0 = 0$, is there a constant $\alpha$ such that for each $n \in \mathbb{N}$
\begin{equation}
\mathbb{E}e^{f_n} \leq \mathbb{E}e^{\alpha(Sf_n)^2}
\end{equation}
If so, what is the smallest such $\alpha$? We shall use Burkholder's approach as a blueprint. As before, the veracity of (3.3) can be recast as a question of the existence of a special function satisfying three properties. The nature of these properties is gleaned from the details of the inequality we are after.
\begin{theorem}
(3.3) holds if and only if there exists a function $B:\mathbb{R} \times [0,\infty) \to \mathbb{R}$ with the following three properties.
\begin{enumerate}
\item (Majorization) $B(x,y) \geq e^x-e^{\alpha y} \coloneqq V(x,y)$
\item (Concavity) $\frac{B(x + \delta, y + \delta^2) + B(x - \delta, y + \delta^2)}{2} \leq B(x,y)$ for any $\delta \in \mathbb{R}$
\item (Initial condition) $B(0,0) \leq 0$
\end{enumerate}
\end{theorem}
\noindent
\textbf{Remark:} As before, property 2 has a probabilistic interpretation. Given any discrete random variable $\xi$ with $\mathbb{P}(\xi = \delta) = \mathbb{P}(\xi = -\delta) = \frac{1}{2}$, we have $\mathbb{E}B(x + \xi, y + \xi^2) \leq B(x,y)$.
\begin{proof}
The proof is the same as those of Theorems 2.5 and 2.7 in the previous section, only adapted to the present inequality. Keep in mind that we are assuming throughout $\{f_n\}$ is a simple dyadic martingale with $f_0=0$. First, assume $B$ exists. Then $B(f_n,(Sf)^2_n)$ is a supermartingale. Indeed,
\begin{align}
\mathbb{E}[B(f_{n+1},(Sf)^2_{n+1})|\mathcal{F}_n] &= \mathbb{E}[B(f_n + df_{n+1}, (Sf)^2_n + (df)^2_{n+1})|\mathcal{F}_n]\\
&\leq \mathbb{E}[B(f_{n},(Sf)^2_{n})|\mathcal{F}_n]\\
&= B(f_n,(Sf)^2_n)
\end{align}
where (3.5) follows from the concavity property. Therefore, we have
\begin{align}
\mathbb{E}V(f_n,(Sf)^2_n) &\leq \mathbb{E}B(f_{n},(Sf)^2_{n})\\
&= \mathbb{E}\big[\mathbb{E}[B(f_{n},(Sf)^2_{n})|\mathcal{F}_{n-1}]\big]\\
&\leq \mathbb{E}B(f_{n-1},(Sf)^2_{n-1})
\end{align}
where (3.7) follows from majorization and (3.9) from the fact that $\{B(f_n,(Sf)^2_n)\}$ is a supermartingale.
Repeating the argument $n$ times, we get
$$\mathbb{E}V(f_n,(Sf)^2_n) \leq \mathbb{E}B(f_0,(Sf)^2_0) = \mathbb{E}B(0,0) \leq 0$$
To prove the other direction, we assume (3.3) holds, and construct $B$. As before, we define the Bellman function $\mathcal{B}(x,y) \coloneqq \sup\{\mathbb{E}V(f_n,(Sf)^2_n) : (f_n,(Sf)^2_n)\in \mathcal{M}(x,y)\}$ where $\mathcal{M}(x,y)$ is the set of all $\mathbb{R} \times [0,\infty)$ valued processes $(f_n,(Sf)^2_n)$ such that $\{f_n\}$ is a dyadic martingale with $f_0 = x$ and $(Sf)^2_n = y + \sum_{k=1}^n (df_k)^2$ where $df_0 \coloneqq 0$. Again, as before, we show that $\mathcal{B}$ satisfies the three properties.
Majorization follows from observing the constant pair $(x,y) \in \mathcal{M}(x,y)$ and the initial condition follows from (3.3). Once again, we can get concavity by a splicing argument.
Choose any $(f_n^a,(Sf_n^a)^2) \in \mathcal{M}(x + \delta, y + \delta^2)$ and $(f_n^b,(Sf_n^b)^2) \in \mathcal{M}(x - \delta, y + \delta^2)$. We may assume the pairs are given on $([0,1),\mathcal{B},|\cdot|)$. We will define another martingale by "splicing" these two. Let $(f_0,(Sf_0)^2) \equiv (x,y)$ and for $n \geq 0$
\[ (f_{n+1},(Sf_{n+1})^2)(\omega) =
\begin{cases}
(f_n^a,(Sf_n^a)^2)(2\omega) & \omega \in [0,\frac{1}{2}) \\
(f_n^b,(Sf_n^b)^2)(2(\omega - \frac{1}{2})) & \omega \in [\frac{1}{2},1) \\
\end{cases}
\]
One can check that $\{(f_n,(Sf_n)^2)\} \in \mathcal{M}(x,y)$, and
\begin{align*}
\mathcal{B}(x,y) \geq \mathbb{E}V(f_n,(Sf_n)^2) = \frac{\mathbb{E}V(f_n^a,(Sf_n^a)^2) + \mathbb{E}V(f_n^b,(Sf_n^b)^2)}{2}
\end{align*}
Taking the supremum over all such $(f_n^a,(Sf_n^a)^2)$ and $(f_n^b,(Sf_n^b)^2)$ yields
\begin{align*}
\frac{\mathcal{B}(x + \delta, y + \delta^2) + \mathcal{B}(x - \delta, y + \delta^2)}{2} \leq \mathcal{B}(x,y)
\end{align*}
\noindent
Finally, we must show $\mathcal{B}$ is finite on $\mathbb{R}\times [0,\infty)$. We know $-\infty < V \leq \mathcal{B}$, so we only need to show $\mathcal{B} < \infty$. It follows from the definition of $\mathcal{B}$ that $\mathcal{B}(x + \delta, y + \frac{\delta}{\alpha}) = e^\delta \mathcal{B}(x,y)$ for any $\delta \in \mathbb{R}$ such that $y + \frac{\delta}{\alpha} \geq 0$. Hence it suffices to show that $\mathcal{B}$ is finite along the $x$-axis. Using the concavity property, we have
\begin{align*}
\mathcal{B}(x,0) &\geq \frac{\mathcal{B}(x + \delta, \delta^2) + \mathcal{B}(x - \delta, \delta^2)}{2}\\
&= \frac{e^{\alpha\delta^2}\mathcal{B}(x + \delta - \alpha\delta^2, 0) + e^{\alpha\delta^2}\mathcal{B}(x - \delta - \alpha\delta^2, 0)}{2}
\end{align*}
\noindent
Therefore,
\begin{align*}
\mathcal{B}(x+\delta-\alpha\delta^2, 0) &\leq 2e^{-\alpha\delta^2}\mathcal{B}(x,0) - \mathcal{B}(x-\delta-\alpha\delta^2, 0)\\
&\leq 2e^{-\alpha\delta^2}\mathcal{B}(x,0) - V(x-\delta-\alpha\delta^2, 0)
\end{align*}
\noindent
The initial condition says that $\mathcal{B}(0,0) \leq 0,$ and we can see from the definition that $\mathcal{B}$ is non-decreasing in the positive $x$-direction, so $\mathcal{B}(x,0) \leq 0$ for all $x \leq 0$. Furthermore, this monotonicity means it suffices to show $\mathcal{B}(x_n,0) < \infty$ for some sequence $x_n \to \infty$. If we take $\delta$ small enough that $\delta - \alpha\delta^2 > 0,$ we have
\begin{align*}
\mathcal{B}(\delta-\alpha\delta^2, 0) &\leq 2e^{-\alpha\delta^2}\mathcal{B}(0,0) - V(-\delta-\alpha\delta^2, 0) < \infty \\
\mathcal{B}(2(\delta-\alpha\delta^2), 0) &\leq 2e^{-\alpha\delta^2}\mathcal{B}(\delta-\alpha\delta^2,0) - V(-2\alpha\delta^2, 0) < \infty\\
\mathcal{B}(3(\delta-\alpha\delta^2), 0) &\leq 2e^{-\alpha\delta^2}\mathcal{B}(2(\delta-\alpha\delta^2),0) - V(\delta - 3\alpha\delta^2, 0) < \infty\\
\dots\\
\mathcal{B}(n(\delta-\alpha\delta^2), 0) &\leq 2e^{-\alpha\delta^2}\mathcal{B}((n-1)(\delta-\alpha\delta^2),0) - V((n-2)\delta - n\alpha\delta^2, 0) < \infty\\
\end{align*}
\end{proof}
\subsection*{Searching for a Bellman Function Candidate}
What remains is to find a function $B$ so that we may apply Theorem 3.1. Our reasoning along the way needn't be rigorous or even correct because once we arrive at a candidate $B$, we will prove that it has the three properties. The purpose of this section is to demonstrate how one might approach the task of searching for $B$. Our approach will be to build a PDE. It should be noted that, a priori, we don't even know that a differentiable $B$ exists, but the purpose of this section is not to rigorously prove anything. Rather, it's to explain how one might arrive at a candidate, which we can then test for the three required properties.
The concavity condition $\frac{B(x - \delta, y + \delta^2) + B(x + \delta, y + \delta^2)}{2} \leq B(x,y)$ suggests that $B$ is concave along parabolic paths. Thus, we expect that $\frac{d^2}{d\delta^2}B(x + \delta, y + \delta^2) \leq 0$.
\begin{align*}
\frac{d^2}{d\delta^2}B(x + \delta, y + \delta^2) &= B_{xx}(x+\delta, y + \delta^2) + 4\delta B_{xy}(x+\delta, y + \delta^2)\\
&+ 4\delta^2 B_{yy}(x+\delta, y + \delta^2) + 2B_y(x+\delta, y + \delta^2)\\
&\leq 0
\end{align*}
Evaluating at $\delta = 0$ yields
\begin{equation}
B_{xx}(x,y) + 2B_y(x,y) \leq 0
\end{equation}
So $B$ is a subsolution of the reverse heat equation.
A common strategy is to search for Bellman candidates which share certain homogeneity properties with $V$. In our case, we have
$$V\big(x + \delta,y + \frac{\delta}{\alpha}\big) = e^{x+\delta} - e^{\alpha(y + \frac{\delta}{\alpha})} = e^\delta V(x,y)$$
Let $\delta = -\alpha y$. Then
$$V(x - \alpha y, 0) = e^{-\alpha y}V(x,y)$$
Note that functions with this property are completely determined by their values along the $x$-axis. We shall search for a $B$ that shares this property.\\
Define $f(x) = B(x, 0)$ and assume $B$ has the form
\begin{equation}
B(x,y) = e^{\alpha y}f(x-\alpha y)
\end{equation}
Plugging (3.11) into (3.10) and replacing the inequality with an equation gives
\begin{equation}
e^{\alpha y}f''(x-\alpha y) - 2\alpha e^{\alpha y} f'(x-\alpha y) +2\alpha e^{\alpha y} f(x-\alpha y) = 0
\end{equation}
Dividing through by $e^{\alpha y}$ reveals a second order ODE with constant coefficients. The solutions to the characteristic equation are $\alpha \pm \sqrt{\alpha^2 -2\alpha}$. If $0 < \alpha < 2$, then these solutions are complex, meaning $f$ oscillates. Thus $B(x,0)=f(x)$ will have no hope of dominating $V(x,0) = e^x - 1$. On the other hand, if $\alpha = 2$, then $f(x) = C_1xe^{2x} + C_2e^{2x}$. We must select $C_1$ and $C_2$ such that $f(x) = B(x,0) \geq V(x,0) = e^x - 1$ (majorization) and $f(0) = B(0,0) \leq V(0,0) = 0$ (initial condition). Given these constraints, we find $f(x)=xe^{2x}$, and so our Bellman function candidate is $B(x,y) = (x - 2y)e^{2x - 2y}$. Unfortunately, this is not a valid $B$, as the concavity property is violated. For example, $\frac{B(1 - \delta, 1 + \delta^2) + B(1 + \delta, 1 + \delta^2)}{2} \to 0$ as $\delta \to \infty$, but $B(1,1) = -1$. Interestingly, although this line of thinking produced the wrong $B$, it produced the correct $\alpha$. It turns out $2$ is the sharp constant in (3.3).
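The claimed failure of concavity is easy to check numerically. In the Python snippet below (ours, not part of the paper), the parabolic average at $(1,1)$ exceeds $B(1,1) = -1$ already for $\delta = 2$ and approaches $0$ for large $\delta$.

```python
import math

# Numerical check (ours) that the first candidate B(x,y) = (x - 2y) e^{2x-2y}
# violates the concavity property at (x,y) = (1,1).

def B(x, y):
    return (x - 2 * y) * math.exp(2 * x - 2 * y)

def parabolic_average(x, y, d):
    # average of B over the two endpoints of the parabolic step
    return 0.5 * (B(x + d, y + d * d) + B(x - d, y + d * d))

center = B(1, 1)                          # exactly -1.0
avgs = [parabolic_average(1, 1, d) for d in (1.0, 2.0, 5.0)]
# concavity would force every average to be <= center; for d = 2 and d = 5
# the average is near 0, so the property fails.
```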
One problem is that, assuming $B$ is smooth, (3.10) is a necessary but insufficient condition for concavity along parabolic paths. It only tests for concavity at the vertex of each path. Also, turning our differential inequality
\begin{equation}
f'' - 2\alpha f' +2\alpha f \leq 0
\end{equation}
into an equation was unjustified even though doing so produced the optimal $\alpha$.
At this point, there is no shame in resorting to guess and check: that is, looking for solutions to (3.13) with $f(0) = 0$ and $f'(0)=1$ (to ensure compliance with the majorization property and the initial condition) and testing the corresponding $B(x,y)$ for the concavity property. $f(x) = e^{2x} - e^x$ is such a function (for $\alpha = 2$), and the associated Bellman candidate is $B(x,y) = e^{2x - 2y} - e^x$. If we prove this $B$ has the three desired properties, we will have shown that (3.3) holds with $\alpha = 2$.
\begin{theorem}
Suppose $\{f_n\}$ is a dyadic martingale with $f_0=0$. Then
$$\mathbb{E}e^{f_n} \leq \mathbb{E}e^{2(Sf_n)^2}$$
\end{theorem}
\begin{proof}
$B(x,y) = e^{2x - 2y} - e^x$ obeys the three properties from Theorem 3.1. We begin with majorization. Using the inequality $1 - e^t \leq e^{-t} - 1$ we have
\begin{align*}
V(x,y) = e^x - e^{2y} = e^x(1 - e^{2y - x}) \leq e^x(e^{x - 2y} - 1) = B(x,y)
\end{align*}
Next, we show that $B$ satisfies the concavity condition.
\begin{align*}
&\frac{B(x+\delta,y+\delta^2)+B(x-\delta,y+\delta^2)}{2}\\
&= \frac{e^{2(x+\delta) - 2(y+\delta^2)} - e^{x+\delta} + e^{2(x-\delta) - 2(y+\delta^2)} - e^{x-\delta}}{2}\\
&= e^{2x - 2y - 2\delta^2}\cosh2\delta - e^x\cosh\delta\\
&\leq e^{2x - 2y} - e^x = B(x,y)
\end{align*}
In the last line, we use the inequality $1 \leq \cosh t \leq e^\frac{t^2}{2}$.
Lastly, the initial condition is immediate: $B(0,0) = 0$.
\end{proof}
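The three properties just verified can also be confirmed numerically on a grid; the Python snippet below (ours, not part of the paper; grid ranges are arbitrary choices) checks majorization, parabolic concavity, and the initial condition for $B(x,y) = e^{2x-2y} - e^x$ with $\alpha = 2$.

```python
import numpy as np

# Grid check (ours) of the three properties of B(x,y) = e^{2x-2y} - e^x
# from Theorem 3.2, with V(x,y) = e^x - e^{2y} (alpha = 2).

B = lambda x, y: np.exp(2 * x - 2 * y) - np.exp(x)
V = lambda x, y: np.exp(x) - np.exp(2 * y)

X, Y = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(0, 3, 31))

# majorization: B >= V (exactly, since B - V = (e^{x-y} - e^y)^2)
majorization = np.all(B(X, Y) >= V(X, Y) - 1e-12)

# parabolic concavity: average over (x +- d, y + d^2) never exceeds B(x,y)
concavity = all(
    np.all(0.5 * (B(X + d, Y + d * d) + B(X - d, Y + d * d))
           <= B(X, Y) + 1e-9)
    for d in np.linspace(-2, 2, 41))

initial = B(0.0, 0.0) == 0.0
```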
At this point, we've answered half of our original question: (3.3) holds with $\alpha = 2$. To show that $\alpha = 2$ is minimal, we must show that the three properties in Theorem 3.1 are mutually incompatible when $\alpha < 2$. Then Theorem 3.1 implies (3.3) is false for this range of $\alpha$.
Our strategy will be to prove that the Bellman function cannot possess the three properties assuming it is twice continuously differentiable. Then we will drop this assumption using a mollification argument. This scheme will first require a couple of lemmas about a class of differential inequalities. \footnote{The author thanks Iosif Pinelis for his assistance provided via \texttt{mathoverflow.net} in the proof of these lemmas.}
\begin{lemma}
If $g:\mathbb{R} \to \mathbb{R}$ satisfies $g'' \leq -bg$ where $b$ is some positive constant and $g(x_0) > 0$, then there is some $x > x_0$ such that $g(x) = 0$.
\end{lemma}
\begin{proof}
Of course, by the intermediate value theorem, it suffices to show that eventually $g(x) \leq 0$. Suppose for contradiction that $g(x) > 0$ for all $x>x_0$. Then on $[x_0,\infty)$, $g'$ is monotonically decreasing because $g'' \leq -bg < 0$. Thus $\lim\limits_{x \to \infty}g'(x)$ exists and is either finite or $-\infty$. Call this limit $L$.\\
If $L<0$, then we are done because $g'(x)$ is eventually bounded above by $L+\epsilon < 0$ and thus $g(x) \to -\infty$.\\
On the other hand, suppose that $L \geq 0$. Then $g'(x)\geq 0$ for all $x \geq x_0$, and thus for such $x$ we have $g(x) \geq g(x_0)$.\\
Now fix $\epsilon \in \left(0,\frac{bg(x_0)}{2}\right)$. Select $c$ such that $|g'(x) - L| < \epsilon$ when $x>c$. Lastly, choose $x_1$ and $x_2$ such that $c<x_1<x_2$ and $x_2 - x_1 > 2$. Then for some $t \in (x_1, x_2)$ we have
\begin{align*}
g''(t) = \frac{g'(x_2) - g'(x_1)}{x_2 - x_1} &\geq \frac{L-\epsilon - (L + \epsilon)}{x_2 - x_1}\\
&= \frac{-2\epsilon}{x_2 - x_1}\\
&\geq -\epsilon\\
&\geq \frac{-bg(x_0)}{2}
\end{align*}
Therefore,
$$\frac{-bg(x_0)}{2} \leq g''(t) \leq -bg(t) \leq -bg(x_0)$$
which gives our contradiction.
\end{proof}
\begin{lemma}
Suppose $g:\mathbb{R} \to \mathbb{R}$ satisfies $g'' \leq -bg$ where $b$ is some real positive constant. If $g(x_0) > 0$ then $g(x) = 0$ for some $x \in \big[x_0, x_0 + \frac{\pi}{\sqrt{b}}\big]$.
\end{lemma}
\begin{proof}
We break into two cases.\\
Case 1: $g'(x_0) \leq 0$\\
Let $x_1 = \min\{x>x_0: g(x)=0\}$. We know $x_1$ exists by the previous lemma. Note that $g''\leq -bg \leq 0$ on $[x_0,x_1]$ since $g\geq 0$ there. Hence $g'\leq 0$ on $[x_0,x_1]$ because $g'(x_0)\leq 0$ and $g'$ is decreasing.
Let $E(x) = g'(x)^2 + bg(x)^2$. Then for all $x \in [x_0,x_1]$
\begin{align*}
E'(x) &= 2g'(x)g''(x) + 2bg(x)g'(x)\\
&= 2g'(x)(g''(x) + bg(x))\\
&\geq 0
\end{align*}
Therefore $E(x_0)\leq E(x)$ for all $x\in[x_0,x_1]$, i.e., $g'(x_0)^2 + bg(x_0)^2 \leq g'(x)^2 + bg(x)^2$. Hence, recalling that $g'(x)\leq 0$, we have
$$1 \leq \frac{-g'(x)}{\sqrt{g'(x_0)^2 + bg(x_0)^2 - bg(x)^2}}$$
Integrating both sides over the interval $[x_0,x_1]$ yields
\begin{align*}
x_1 - x_0 &\leq \left.\frac{-1}{\sqrt{b}}\sin^{-1}\left(g(x)\sqrt{\frac{b}{g'(x_0)^2 + bg(x_0)^2}}\right)\right\vert_{x=x_0}^{x=x_1}\\
&= \frac{-1}{\sqrt{b}}\left[\sin^{-1}(0) - \sin^{-1}\left(\sqrt{\frac{bg(x_0)^2}{g'(x_0)^2 + bg(x_0)^2}}\right)\right]\\
&\leq \frac{-1}{\sqrt{b}}(0 - \sin^{-1}(1))\\
& = \frac{\pi}{2\sqrt{b}}
\end{align*}
Case 2: $g'(x_0) > 0$\\
This time we let $x_2 = \min\{x>x_0: g(x)=0\}$. Again we note that $g''\leq 0$ on $[x_0,x_2]$. Therefore, there is a point $x_1 \in [x_0,x_2]$ such that $g'(x_1) = 0$ with $g'>0$ on $[x_0,x_1]$ and $g'<0$ on $[x_1,x_2]$. From Case 1, we know that $x_2 - x_1 \leq \frac{\pi}{2\sqrt{b}}$. It suffices to show $x_1 - x_0 \leq \frac{\pi}{2\sqrt{b}}$.\\
Recall $E(x) = g'(x)^2 + bg(x)^2$ and $E'(x) = 2g'(x)(g''(x) + bg(x))$. Since $g' \geq 0$ on $[x_0,x_1]$, $E'(x) \leq 0$ there. Thus on that interval, we have $g'(x)^2 + bg(x)^2 \geq g'(x_1)^2 + bg(x_1)^2 = bg(x_1)^2$ which implies
$$1 \leq \frac{g'(x)}{\sqrt{bg(x_1)^2 - bg(x)^2}}$$
As before, we integrate both sides over $[x_0,x_1]$ to get
\begin{align*}
x_1 - x_0 &\leq \left.\frac{1}{\sqrt{b}}\sin^{-1}\left(\frac{g(x)}{g(x_1)}\right)\right\vert_{x=x_0}^{x=x_1}\\
&= \frac{1}{\sqrt{b}}\left[\sin^{-1}(1) - \sin^{-1}\left(\frac{g(x_0)}{g(x_1)}\right)\right]\\
&\leq \frac{\pi}{2\sqrt{b}}
\end{align*}
\end{proof}
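In the extremal case $g'' = -bg$ the bound of Lemma 3.4 is visible explicitly: every solution is $g(x) = A\sin(\sqrt{b}\,x + \varphi)$, whose consecutive zeros are exactly $\pi/\sqrt{b}$ apart. A small Python check (ours, not part of the paper; the parameters $b$, $A$, $\varphi$, $x_0$ are arbitrary choices):

```python
import math

# Extremal-case illustration (ours) of Lemma 3.4: for g'' = -b g, solutions
# are g(x) = A sin(sqrt(b) x + phi), and the next zero after any x_0 with
# g(x_0) > 0 lies within pi / sqrt(b).

b, A, phi, x0 = 3.0, 2.0, 0.7, 1.0
g = lambda x: A * math.sin(math.sqrt(b) * x + phi)

# next zero of sin(sqrt(b) x + phi) strictly to the right of x0
k = math.ceil((math.sqrt(b) * x0 + phi) / math.pi)
zero = (k * math.pi - phi) / math.sqrt(b)
gap = zero - x0          # at most pi / sqrt(b), as the lemma predicts
```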
\begin{theorem}
If $\mathcal{B}(x,0) \in \mathcal{C}^2(\mathbb{R})$, then $\alpha = 2$ is the minimal constant such that (3.3) holds. Here $\mathcal{B}$ is the theoretical Bellman function as in the proof of Theorem 3.1.
\end{theorem}
\begin{proof}
Recall that $\mathcal{B}(x,y) = \sup\{\mathbb{E}V(f_n,(Sf)^2_n) : (f_n,(Sf)^2_n)\in \mathcal{M}(x,y)\}$ where $\mathcal{M}(x,y)$ is the set of all $\mathbb{R} \times [0,\infty)$ valued processes $(f_n,(Sf)^2_n)$ such that $\{f_n\}$ is a dyadic martingale with $f_0 = x$ and $(Sf)^2_n = y + \sum_{k=1}^n (df_k)^2$. By the proof of Theorem 3.1, $\mathcal{B}$ satisfies the majorization and concavity properties as well as the initial condition as long as (3.3) holds. We will show that if $\alpha < 2$, $\mathcal{B}$ does not obey these properties, and thus (3.3) is false.
From the definition of $\mathcal{B}$, we have $$e^{\alpha y}\mathcal{B}(x - \alpha y, 0) = \mathcal{B}(x,y)$$
This is essentially a consequence of the fact that $V$ has this property. Thus, as before, if we define $f(x) = \mathcal{B}(x,0)$ then $\mathcal{B}$ has the form
\begin{equation}
\mathcal{B}(x,y) = e^{\alpha y}f(x - \alpha y)
\end{equation}
Plugging (3.14) into the concavity property and setting $y=0$ gives
\begin{equation}
\frac{f(x - \delta -\alpha \delta^2) + f(x + \delta -\alpha \delta^2)}{2} \leq e^{-\alpha\delta^2}f(x)
\end{equation}
If we express $f$ as a second order Taylor polynomial in $\delta$ centered at $x$ (valid because we are assuming $f\in\mathcal{C}^2$), then (3.15) becomes
\begin{align*}
f(x) - \alpha\delta^2 f'(x) + \frac{\delta^2}{2} f''(x) + O(\delta^3) &\leq e^{-\alpha\delta^2}f(x)\\
&= f(x) - \alpha\delta^2 f(x) + O(\delta^4)
\end{align*}
Dividing through by $\frac{\delta^2}{2}$ and letting $\delta \to 0$, we get
\begin{equation}
f''(x) - 2\alpha f'(x) +2\alpha f(x) \leq 0
\end{equation}
Multiply both sides by $e^{-\alpha x}$ to get
\begin{equation*}
e^{-\alpha x}f''(x) - 2\alpha e^{-\alpha x}f'(x) +2\alpha e^{-\alpha x}f(x) = \left(e^{-\alpha x}f(x) \right)'' + (2\alpha - \alpha^2)e^{-\alpha x}f(x) \leq 0
\end{equation*}
Letting $g(x) \coloneqq e^{-\alpha x}f(x)$ and $b \coloneqq 2\alpha - \alpha^2$ we have $g'' + bg \leq 0$.
Suppose\footnote{Assuming $\alpha > 0$ is valid because if (3.3) fails for some $\alpha$, it clearly fails for all smaller $\alpha$.} $0 < \alpha < 2$ so that $b > 0$. Then by Lemma 3.3, $g(x) \leq 0$ and thus $f(x) \leq 0$ for some $x>0$. Hence, for this $x$, $\mathcal{B}(x,0) = f(x) \leq 0 < V(x,0) = e^x - 1$. Therefore, for this range of $\alpha$, the Bellman function $\mathcal{B}$ cannot possess both the majorization and concavity properties of Theorem 3.1, and (3.3) is false.
\end{proof}
The next theorem allows us to drop the smoothness assumption on $\mathcal{B}$. However, we will use a mollification argument which requires $\mathcal{B}$ to be continuous. We will subsequently show that the continuity of $\mathcal{B}$ is a consequence of the concavity property.
\begin{theorem}
If $\mathcal{B}(x,0)$ is continuous, then $\alpha = 2$ is the minimal constant such that (3.3) holds.
\end{theorem}
\begin{proof}
Let $\eta(x)$ be the standard mollifier.
$$
\eta(x) \coloneqq \begin{cases}
Ce^{\frac{1}{x^2-1}} & \text{if } |x| < 1\\
0 & \text{if } |x| \geq 1
\end{cases}
$$
with $C>0$ chosen so that $\int_\mathbb{R} \eta(x) dx = 1$.
Let $\eta_\epsilon(x) \coloneqq \frac{1}{\epsilon}\eta(\frac{x}{\epsilon})$.
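As a side note, the normalizing constant can be estimated numerically; the following snippet (ours, not part of the paper) approximates $C = 1/\int_{-1}^{1} e^{1/(x^2-1)}\,dx \approx 2.25$ by a midpoint Riemann sum.

```python
import numpy as np

# Numerical estimate (ours) of the normalizing constant C of the standard
# mollifier, C = 1 / integral_{-1}^{1} exp(1/(x^2 - 1)) dx.

def eta_unnormalized(x):
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(1.0 / (x[inside] ** 2 - 1.0))
    return out

n = 2_000_000
dx = 2.0 / n
xs = -1.0 + dx * (np.arange(n) + 0.5)      # midpoint rule on [-1, 1]
mass = eta_unnormalized(xs).sum() * dx     # roughly 0.444
C = 1.0 / mass                             # roughly 2.25
```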
Convolving both sides of (3.15) with $\eta_\epsilon$ gives
$$\frac{f_\epsilon(x - \delta -\alpha \delta^2) + f_\epsilon(x + \delta -\alpha \delta^2)}{2} \leq e^{-\alpha\delta^2}f_\epsilon(x)$$
where $f_\epsilon = f*\eta_\epsilon$.
$f_\epsilon \in C^\infty(\mathbb{R})$, so by the proof of the previous theorem, we have $g_\epsilon'' + bg_\epsilon \leq 0$ where $g_\epsilon(x) \coloneqq e^{-\alpha x}f_\epsilon(x)$ and $b \coloneqq 2\alpha - \alpha^2$. As before, we suppose $0 < \alpha < 2$ so that $b>0$.
Fix $\epsilon' > 0$. Select $x_0$ such that $\epsilon' < e^{x_0} - 1$. Since $f$ is continuous by assumption, $f_\epsilon \to f$ uniformly on compact sets \cite{Evans}, and we can select $\epsilon$ such that $\left \vert f - f_\epsilon \right \vert < \epsilon'$ on $[x_0,x_0 + \frac{\pi}{\sqrt{b}}]$. By Lemma 3.4, there exists $x \in [x_0, x_0 + \frac{\pi}{\sqrt{b}}]$ such that $g_\epsilon(x) \leq 0$ which implies $f_\epsilon(x) \leq 0$. Therefore, for this $x$, we have
$$\mathcal{B}(x,0) = f(x) < f_\epsilon(x) + \epsilon' \leq \epsilon' < e^{x_0}-1<e^x-1 = V(x,0)$$
Once again, we've shown $\mathcal{B}$ cannot possess both the majorization and concavity properties of Theorem 3.1, and so (3.3) is false for $\alpha < 2$ assuming $\mathcal{B}$ is continuous.
\end{proof}
Our final task is to justify the assumption that $\mathcal{B}$ is continuous. Indeed, this follows from the concavity property.
\begin{lemma}
Given $\alpha \in \mathbb{R}$, if $f:\mathbb{R} \to \mathbb{R}$ is an increasing function such that
$$\frac{f(x_0 - t -\alpha t^2) + f(x_0 + t -\alpha t^2)}{2} \leq e^{-\alpha t^2}f(x_0)$$
for all $t$, then $f$ is continuous at $x_0$.
\end{lemma}
\begin{proof}
Let $U = \lim\limits_{x \to x_0^+} f(x)$ and $L = \lim\limits_{x \to x_0^-} f(x)$. $U$ and $L$ exist, with $L \leq U$, because $f$ is increasing. It suffices to show that $L \geq U$.
Fix $\epsilon > 0$. Choose $\delta > 0$ such that $f(x) > L - \epsilon$ when $x \in (x_0 - \delta, x_0)$. Select $t$ such that
$$0 < t < \min\left(\frac{1}{\alpha},\sqrt{\frac{\delta}{\alpha} + \frac{1}{4\alpha^2}}-\frac{1}{2\alpha}\right)$$
Note that this selection of $t$ ensures that $0 < t - \alpha t^2$ and $\alpha t^2 + t < \delta$. Finally, choose $x$ such that
$$\max (x_0 + \alpha t^2 - t, x_0 + \alpha t^2 + t - \delta) < x < x_0$$
Such an $x$ exists since $\alpha t^2 - t < 0$ and $\alpha t^2 + t < \delta$.
Then we have
$$x_0 - \delta < x - \alpha t^2 - t < x < x_0 < x - \alpha t^2 + t$$
and hence
$$\frac{L - \epsilon + U}{2} \leq \frac{f(x - \alpha t^2 - t) + f(x - \alpha t^2 + t)}{2} \leq f(x)e^{-\alpha t^2} \leq f(x) \leq L$$
Since $\epsilon$ was arbitrary, we have $\frac{L + U}{2} \leq L$ and thus $U \leq L$ as desired.
\end{proof}
We are now prepared to state our main result.
\begin{theorem}
$\alpha = 2$ is the smallest constant such that
$$\mathbb{E}e^{f_n} \leq \mathbb{E}e^{\alpha(Sf_n)^2}$$
for all dyadic martingales with $f_0 = 0$.
\end{theorem}
\begin{proof}
It is clear from the definition that $\mathcal{B}(x,0)$ is increasing. This fact coupled with the concavity property implies $\mathcal{B}(x,0)$ is continuous by the previous lemma. Now apply Theorem 3.6.
\end{proof}
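As a sanity check (ours, not part of the paper), the inequality of Theorem 3.8 can be probed numerically on randomly generated finite dyadic martingales with $f_0 = 0$; the increment scale and depth below are arbitrary choices.

```python
import numpy as np

# Monte Carlo check (ours) of E e^{f_n} <= E e^{2 (Sf_n)^2} for random
# finite dyadic martingales with f_0 = 0 on [0, 1).

rng = np.random.default_rng(0)

def random_dyadic_martingale(N, rng, scale=0.5):
    """Return (f_N, (Sf_N)^2) as arrays over the 2^N dyadic cells."""
    f, S2 = np.zeros(1), np.zeros(1)
    for _ in range(N):
        d = scale * rng.standard_normal(len(f))      # one increment per cell
        f = np.repeat(f, 2) + np.column_stack((d, -d)).ravel()
        S2 = np.repeat(S2 + d ** 2, 2)
    return f, S2

results = []
for _ in range(50):
    f, S2 = random_dyadic_martingale(8, rng)
    results.append((np.exp(f).mean(), np.exp(2.0 * S2).mean()))
# every pair should satisfy lhs <= rhs, in line with Theorem 3.8
```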
\section{Examples and Future Work}
It is worth noting that a simpler proof of Theorem 3.2 exists. It uses a result known as Rubin's lemma \cite{Pipher}, which says that if $\{f_n\}$ is a real valued dyadic martingale on $[0,1)$ whose limit is $f$, then for all $\lambda \geq 0$,
$$\int_0^1 e^{\lambda\left( f(x) -\langle f \rangle_{[0,1)}\right) - \frac{\lambda^2}{2}\left(Sf\right)^2(x)}dx \leq 1$$
See \cite{BM} for a proof.
We can use the Cauchy-Schwarz inequality with Rubin's lemma to prove Theorem 3.2. As usual, we may work on $([0,1),\mathcal{B},|\cdot|)$.
\begin{align*}
\int_0^1 e^{f(x)-\langle f \rangle_{[0,1)}}dx &= \int_0^1 e^{f(x)-\langle f \rangle_{[0,1)}-(Sf)^2(x)+(Sf)^2(x)}dx &\\
&\leq \sqrt{\int_0^1 e^{2[f(x)-\langle f \rangle_{[0,1)}]-2(Sf)^2(x)}dx} \sqrt{\int_0^1 e^{2(Sf)^2(x)}dx} &\\
&\leq \sqrt{1} \sqrt{\int_0^1 e^{2(Sf)^2(x)}dx} &\\
&\leq \int_0^1 e^{2 (Sf)^2(x)}dx &
\end{align*}
The Bellman function method is still desirable in at least two ways. First, it's a general technique for proving inequalities, while this simpler proof is very particular to the details of our problem. Second, the Bellman function allowed us to prove the sharpness of Theorem 3.2. The shorter proof would require an accompanying construction of a martingale which maximizes the left side of the inequality relative to the right side.
Such a construction could provide the basis for some future work. Sometimes, extremal examples can be deduced from the Bellman function itself (see, e.g. \cite{Wang}). Although we were unable to do this, we have an example of a martingale which shows that
\begin{equation}
\int_0^1 e^{f(x)-\langle f \rangle_{[0,1)}}dx \leq \int_0^1 e^{\alpha (Sf)^2(x)}dx
\end{equation}
is false for $\alpha < \log_2(e) \approx 1.44$. Recall that $\alpha = 2$ was optimal.
Let $f=\sum\limits_{n=0}^\infty \chi_{I_n^0} = \sum\limits_{n=1}^\infty n\chi_{I_n^1}$ where $I_n^k = \left[\frac{k}{2^n},\frac{k+1}{2^n}\right)$. As always, we can think of $f$ as a dyadic martingale by letting $f_n = \mathbb{E}[f\vert \mathcal{D}_n]$ where $\mathcal{D}_n$ is the $\sigma$-algebra generated by the $n$th generation of dyadic subintervals of $[0,1)$. This function is a discrete approximation of $-\log_2x$ and has the property that $(Sf)^2 = f$: it is a "fixed point" of the operator that sends $f \mapsto (Sf)^2$. The left side of (4.1) becomes
$$\int_0^1 e^{f(x)-\langle f \rangle_{[0,1)}}dx \approx \int_0^1 e^{-\log_2x}dx = \int_0^1 x^{-\log_2(e)}dx = \infty$$
On the other hand, the right side becomes
$$\int_0^1 e^{\alpha (Sf)^2(x)}dx = \int_0^1 e^{\alpha f(x)}dx \approx \int_0^1 e^{-\alpha\log_2x}dx = \int_0^1 x^{-\alpha\log_2(e)}dx$$
which is finite for $\alpha < \frac{1}{\log_2(e)} = \ln(2) \approx .69$. So our example falsifies (4.1) for this range of $\alpha$.
We can push this example further. $[S(\lambda g)]^2 = \lambda^2 [S(g)]^2$ for all $g$ follows immediately from the definition of $S$. Applying this to our present example gives $[S(\lambda f)]^2 = \lambda^2 [S(f)]^2 = \lambda^2 f$. Plugging $\lambda f$ into (4.1), the left side is approximately $\int_0^1 x^{-\lambda\log_2(e)}dx$ and the right side is approximately $\int_0^1 x^{-\alpha\lambda^2\log_2(e)}dx$. Thus for $\lambda = \frac{1}{\log_2(e)}$, the left side is infinite and the right side is finite when $\alpha < \log_2(e) \approx 1.44$, so $\frac{f}{\log_2(e)}$ falsifies (4.1) for this range of $\alpha$.
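The fixed-point property of this example can be checked on a dyadic grid. In the Python snippet below (ours, not part of the paper), the truncated martingale $f_N = \mathbb{E}[f|\mathcal{D}_N]$ satisfies $(Sf_N)^2 = f_N$ on every dyadic cell except the leftmost one, which carries the averaged tail of $f$.

```python
import numpy as np

# Grid check (ours) that f = sum_n chi_{[0, 2^{-n})} is a fixed point of
# f -> (Sf)^2: with f_N = E[f | D_N] on 2^N cells, (Sf_N)^2 equals f_N
# everywhere except on the leftmost cell [0, 2^{-N}).

N = 10
M = 2 ** N
fN = np.empty(M)
fN[0] = N + 2                        # average of f over [0, 2^{-N})
for j in range(N):                   # f = j + 1 on [2^{-(j+1)}, 2^{-j})
    fN[M // 2 ** (j + 1): M // 2 ** j] = j + 1

levels = []
for n in range(N + 1):               # f_n = E[f | D_n] by block averaging
    block = M // 2 ** n
    levels.append(np.repeat(fN.reshape(2 ** n, block).mean(axis=1), block))

S2 = sum((levels[k] - levels[k - 1]) ** 2 for k in range(1, N + 1))
```

Every increment $df_k$ takes only the values $\pm 1$, which is why the squares sum back to $f$ itself away from the leftmost cell.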
We are still left without a martingale showing $\alpha = 2$ is sharp. While the Bellman function proof makes it unnecessary, finding an explicit example is an interesting future problem.
\bibliographystyle{ieeetr}
| {
"timestamp": "2018-05-29T02:06:56",
"yymm": "1805",
"arxiv_id": "1805.10510",
"language": "en",
"url": "https://arxiv.org/abs/1805.10510",
"abstract": "We describe the Bellman function technique for proving sharp inequalities in harmonic analysis. To provide an example along with historical context, we present how it was originally used by Donald Burkholder to prove $L^p$ boundedness of the $\\pm 1$ martingale transform. Finally, with Burkholder's result as a blueprint, we use the Bellman function to prove a new result related to the Chang-Wilson-Wolff Inequality.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "A modification of the Chang-Wilson-Wolff Inequality via the Bellman Function"
} |
https://arxiv.org/abs/2009.07893 | Largest small polygons: A sequential convex optimization approach | A small polygon is a polygon of unit diameter. The maximal area of a small polygon with $n=2m$ vertices is not known when $m\ge 7$. Finding the largest small $n$-gon for a given number $n\ge 3$ can be formulated as a nonconvex quadratically constrained quadratic optimization problem. We propose to solve this problem with a sequential convex optimization approach, which is an ascent algorithm guaranteeing convergence to a locally optimal solution. Numerical experiments on polygons with up to $n=128$ sides suggest that the optimal solutions obtained are near-global. Indeed, for even $6 \le n \le 12$, the algorithm proposed in this work converges to known global optimal solutions found in the literature. | \section{Introduction}
The {\em diameter} of a polygon is the largest Euclidean distance between pairs of its vertices. A polygon is said to be {\em small} if its diameter equals one. For a given integer $n \ge 3$, the maximal area problem consists in finding the small $n$-gon with the largest area. The problem was first investigated by Reinhardt~\cite{reinhardt1922} in 1922. He proved that
\begin{itemize}
\item when $n$ is odd, the regular small $n$-gon is the unique optimal solution;
\item when $n=4$, there are infinitely many optimal solutions, including the small square;
\item when $n \ge 6$ is even, the regular small $n$-gon is not optimal.
\end{itemize}
The maximal area is known for even $n \le 12$. In 1961, Bieri~\cite{bieri1961} found the largest small $6$-gon, assuming the existence of an axis of symmetry. In 1975, Graham~\cite{graham1975} independently constructed the same $6$-gon, represented in Figure~\ref{figure:6gon:U6}. In 2002, Audet, Hansen, Messine, and Xiong~\cite{audet2002} combined Graham's strategy with global optimization methods to find the largest small $8$-gon, illustrated in Figure~\ref{figure:8gon:U8}. In 2013, Henrion and Messine~\cite{henrion2013} found the largest small $10$- and $12$-gons by also solving globally a nonconvex quadratically constrained quadratic optimization problem. They also found the largest small axially symmetrical $14$- and $16$-gons. In 2017, Audet~\cite{audet2017} showed that the regular small polygon has the maximal area among all equilateral small polygons. In 2020, Audet, Hansen, and Svrtan~\cite{audet2020} determined analytically the largest small axially symmetrical $8$-gon.
The diameter graph of a small polygon is defined as the graph whose vertices are those of the polygon, with an edge between two vertices if and only if the distance between them equals one. Graham~\cite{graham1975} conjectured that, for even $n \ge 6$, the diameter graph of a small $n$-gon with maximal area has a cycle of length $n-1$ and one additional edge from the remaining vertex. The case $n=6$ was proven by Graham himself~\cite{graham1975} and the case $n=8$ by Audet, Hansen, Messine, and Xiong~\cite{audet2002}. In 2007, Foster and Szabo~\cite{foster2007} proved Graham's conjecture for all even $n \ge 6$. Figures~\ref{figure:4gon}, \ref{figure:6gon}, and~\ref{figure:8gon} show the diameter graphs of some small polygons; the solid lines join pairs of vertices that are unit distance apart.
In addition to exact results and bounds, uncertified largest small polygons have been obtained both by metaheuristics and by nonlinear optimization. Assuming Graham's conjecture and the existence of an axis of symmetry, Mossinghoff~\cite{mossinghoff2006b} in 2006 constructed large small $n$-gons for even $6 \le n \le 20$. In 2018, using a formulation based on polar coordinates, Pinter~\cite{pinter2018} presented numerical estimates of the maximal area for even $6 \le n \le 80$. However, the solutions obtained by Pinter are not optimal for $n\ge 32$.
The maximal area problem can be formulated as a nonconvex quadratically constrained quadratic optimization problem. In this work, we propose to solve it with a sequential convex optimization approach, also known as the concave-convex procedure~\cite{marks1978,lanckriet2009}. This approach is an ascent algorithm guaranteeing convergence to a locally optimal solution. Numerical experiments on polygons with up to $n=128$ sides suggest that the optimal solutions obtained are near-global. Indeed, without assuming Graham's conjecture or the existence of an axis of symmetry in our quadratic formulation, the optimal $n$-gons obtained with the proposed algorithm satisfy both conditions within the limits of numerical computation. Moreover, for even $6 \le n \le 12$, the algorithm converges to known global optimal solutions. The algorithm is implemented as a MATLAB-based package, OPTIGON, which is available on GitHub~\cite{optigon}. OPTIGON requires that CVX~\cite{cvx2} be installed.
The remainder of this paper is organized as follows. In Section~\ref{sec:ngon}, we recall principal results on largest small polygons. Section~\ref{sec:nqcqo} presents the quadratic formulation of the maximal area problem and the sequential convex optimization approach to solve it. We report in Section~\ref{sec:results} computational results. Section~\ref{sec:conclusion} concludes the paper.
\begin{figure}[h]
\centering
\subfloat[$(\geo{R}_4,0.5)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.5000) -- (0,1) -- (-0.5000,0.5000) -- cycle;
\draw (0,0) -- (0,1);
\draw (0.5000,0.5000) -- (-0.5000,0.5000);
\end{tikzpicture}
}
\subfloat[$(\geo{R}_3^+,0.5)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0.5000,0.8660) -- (0,1) -- (-0.5000,0.8660);
\draw (0,1) -- (0,0) -- (0.5000,0.8660) -- (-0.5000,0.8660) -- (0,0);
\end{tikzpicture}
}
\caption{Two small $4$-gons $(\geo{P}_4,A(\geo{P}_4))$}
\label{figure:4gon}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[$(\geo{R}_6,0.649519)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.4330,0.2500) -- (0.4330,0.7500) -- (0,1) -- (-0.4330,0.7500) -- (-0.4330,0.2500) -- cycle;
\draw (0,0) -- (0,1);
\draw (0.4330,0.2500) -- (-0.4330,0.7500);
\draw (0.4330,0.7500) -- (-0.4330,0.2500);
\end{tikzpicture}
}
\subfloat[$(\geo{R}_5^+,0.672288)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.3633) -- (0.3090,0.9511) -- (0,1) -- (-0.3090,0.9511) -- (-0.5000,0.3633) -- cycle;
\draw (0,1) -- (0,0) -- (0.3090,0.9511) -- (-0.5000,0.3633) -- (0.5000,0.3633) -- (-0.3090,0.9511) -- (0,0);
\end{tikzpicture}
}
\subfloat[$(\geo{U}_6,0.674981)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.4024) -- (0.3438,0.9391) -- (0,1) -- (-0.3438,0.9391) -- (-0.5000,0.4024) -- cycle;
\draw (0,1) -- (0,0) -- (0.3438,0.9391) -- (-0.5000,0.4024) -- (0.5000,0.4024) -- (-0.3438,0.9391) -- (0,0);
\end{tikzpicture}
\label{figure:6gon:U6}
}
\caption{Three small $6$-gons $(\geo{P}_6,A(\geo{P}_6))$}
\label{figure:6gon}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[$(\geo{R}_8,0.707107)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.3536,0.1464) -- (0.5000,0.5000) -- (0.3536,0.8536) -- (0,1) -- (-0.3536,0.8536) -- (-0.5000,0.5000) -- (-0.3536,0.1464) -- cycle;
\draw (0,0) -- (0,1);
\draw (0.3536,0.1464) -- (-0.3536,0.8536);
\draw (0.5000,0.5000) -- (-0.5000,0.5000);
\draw (0.3536,0.8536) -- (-0.3536,0.1464);
\end{tikzpicture}
}
\subfloat[$(\geo{R}_7^+,0.725320)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.4010,0.1931) -- (0.5000,0.6270) -- (0.2225,0.9749) -- (0,1) -- (-0.2225,0.9749) -- (-0.5000,0.6270) -- (-0.4010,0.1931) -- cycle;
\draw (0,1) -- (0,0) -- (0.2225,0.9749) -- (-0.4010,0.1931) -- (0.5000,0.6270) -- (-0.5000,0.6270) -- (0.4010,0.1931) -- (-0.2225,0.9749) -- (0,0);
\end{tikzpicture}
}
\subfloat[$(\geo{U}_8,0.726868)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.4091,0.2238) -- (0.5000,0.6404) -- (0.2621,0.9650) -- (0,1) -- (-0.2621,0.9650) -- (-0.5000,0.6404) -- (-0.4091,0.2238) -- cycle;
\draw (0,1) -- (0,0) -- (0.2621,0.9650) -- (-0.4091,0.2238) -- (0.5000,0.6404) -- (-0.5000,0.6404) -- (0.4091,0.2238) -- (-0.2621,0.9650) -- (0,0);
\end{tikzpicture}
\label{figure:8gon:U8}
}
\caption{Three small $8$-gons $(\geo{P}_8,A(\geo{P}_8))$}
\label{figure:8gon}
\end{figure}
\section{Largest small polygons}\label{sec:ngon}
Let $A(\geo{P})$ denote the area of a polygon $\geo{P}$. Let $\geo{R}_n$ denote the regular small $n$-gon. We have
\[
A(\geo{R}_n) =
\begin{cases}
\frac{n}{2}\left(\sin \frac{\pi}{n} - \tan \frac{\pi}{2n}\right) &\text{if $n$ is odd,}\\
\frac{n}{8}\sin \frac{2\pi}{n} &\text{if $n$ is even.}\\
\end{cases}
\]
We remark that $A(\geo{R}_n) < A(\geo{R}_{n-1})$ for all even $n\ge 6$~\cite{audet2009}. This suggests that $\geo{R}_n$ does not have maximum area for any even $n\ge 6$. Indeed, when $n$ is even, we can construct a small $n$-gon with a larger area than $\geo{R}_n$ by adding a vertex at distance $1$ along the mediatrix of an angle in $\geo{R}_{n-1}$. We denote this $n$-gon by $\geo{R}_{n-1}^+$ and we have
\[
A(\geo{R}_{n-1}^+) = \frac{n-1}{2} \left(\sin \frac{\pi}{n-1} - \tan \frac{\pi}{2n-2}\right) + \sin \frac{\pi}{2n-2} - \frac{1}{2}\sin \frac{\pi}{n-1}.
\]
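As a quick numerical check (a short Python sketch, not part of the paper's MATLAB code), these two area formulas reproduce the values quoted in the text and figures, e.g. $A(\geo{R}_6) \approx 0.649519$ and $A(\geo{R}_5^+) \approx 0.672288$:

```python
from math import sin, tan, pi

def area_regular(n):
    """Area of the regular small n-gon R_n (Reinhardt's formulas)."""
    if n % 2 == 1:
        return n / 2 * (sin(pi / n) - tan(pi / (2 * n)))
    return n / 8 * sin(2 * pi / n)

def area_regular_plus(n):
    """Area of R_{n-1}^+: R_{n-1} plus one vertex at distance 1 (n even)."""
    return area_regular(n - 1) + sin(pi / (2 * n - 2)) - sin(pi / (n - 1)) / 2

print(area_regular(6))       # ~0.649519
print(area_regular(8))       # ~0.707107
print(area_regular_plus(6))  # ~0.672288
print(area_regular_plus(8))  # ~0.725320
```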
\begin{theorem}[Reinhardt~\cite{reinhardt1922}]
For all $n \ge 3$, let $A_n^*$ denote the maximal area among all small $n$-gons and let $\ub{A}_n := \frac{n}{2}\left(\sin \frac{\pi}{n} - \tan \frac{\pi}{2n}\right)$.
\begin{itemize}
\item When $n$ is odd, $A_n^* = \ub{A}_n$ is only achieved by $\geo{R}_n$.
\item $A_4^* = 0.5 < \ub{A}_4$ is achieved by infinitely many $4$-gons, including $\geo{R}_4$ and~$\geo{R}_3^+$ illustrated in Figure~\ref{figure:4gon}.
\item When $n\ge 6$ is even, $A(\geo{R}_n) < A_n^* < \ub{A}_n$.
\end{itemize}
\end{theorem}
The maximal area~$A_n^*$ is known for even $n \le 12$. Using geometric arguments, Graham~\cite{graham1975} determined analytically the largest small $6$-gon, represented in Figure~\ref{figure:6gon:U6}. Its area $A_6^* \approx 0.674981$ is about $3.92\%$ larger than $A(\geo{R}_6) \approx 0.649519$. The approach of Graham, combined with methods of global optimization, has been followed by~\cite{audet2002} to determine the largest small $8$-gon, represented in Figure~\ref{figure:8gon:U8}. Its area $A_8^* \approx 0.726868$ is about $2.79\%$ larger than $A(\geo{R}_8) \approx 0.707107$. Henrion and Messine~\cite{henrion2013} found that $A_{10}^* \approx 0.749137$ and $A_{12}^* \approx 0.760730$.
For all even $n\ge 6$, let $\geo{U}_n$ denote the largest small $n$-gon.
\begin{theorem}[Graham~\cite{graham1975}, Foster and Szabo~\cite{foster2007}]
\label{thm:area:diam}
For even $n \ge 6$, the diameter graph of $\geo{U}_n$ has a cycle of length $n-1$ and one additional edge from the remaining vertex.
\end{theorem}
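Theorem~\ref{thm:area:diam} can be checked numerically on a small polygon. The following Python sketch (an illustration, not from the paper) builds the diameter graph of $\geo{R}_5^+$ from the rounded four-digit coordinates shown in Figure~\ref{figure:6gon}, using a small tolerance to absorb the rounding, and recovers a $5$-cycle plus one pendant edge:

```python
from math import hypot
from itertools import combinations

# Rounded coordinates of R_5^+ (see Figure "Three small 6-gons" in the text)
V = [(0, 0), (0.5, 0.3633), (0.3090, 0.9511), (0, 1),
     (-0.3090, 0.9511), (-0.5, 0.3633)]

# Edges of the diameter graph: pairs of vertices at distance ~1
edges = [(i, j) for i, j in combinations(range(len(V)), 2)
         if abs(hypot(V[j][0] - V[i][0], V[j][1] - V[i][1]) - 1) < 1e-3]

deg = [sum(i in e for e in edges) for i in range(len(V))]
print(len(edges))   # 6 edges: a cycle of length 5 plus one additional edge
print(sorted(deg))  # [1, 2, 2, 2, 2, 3]: one pendant vertex, one of degree 3
```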
\begin{conjecture}
\label{thm:area:sym}
For even $n \ge 6$, $\geo{U}_n$ has an axis of symmetry corresponding to the pending edge in its diameter graph.
\end{conjecture}
From Theorem~\ref{thm:area:diam}, we note that $\geo{R}_{n-1}^+$ has the same diameter graph as the largest small $n$-gon $\geo{U}_n$. Conjecture~\ref{thm:area:sym} has only been proven for $n=6$, by Yuan~\cite{yuan2004}. However, the largest small polygons obtained by~\cite{audet2002} and~\cite{henrion2013} provide further evidence that the conjecture may be true.
\section{Nonconvex quadratically constrained quadratic optimization} \label{sec:nqcqo}
We use Cartesian coordinates to describe an $n$-gon $\geo{P}_n$, assuming that a vertex $\geo{v}_i$, $i=0,1,\ldots,n-1$, is positioned at abscissa $x_i$ and ordinate $y_i$. Placing the vertex $\geo{v}_0$ at the origin, we set $x_0 = y_0 = 0$. We also assume that the $n$-gon $\geo{P}_n$ lies in the half-plane $y\ge 0$ and that the vertices $\geo{v}_i$, $i=1,2,\ldots,n-1$, are arranged in counterclockwise order as illustrated in Figure~\ref{figure:model}, i.e., $y_{i+1}x_i \ge x_{i+1}y_i$ for all $i=1,2,\ldots,n-2$. The maximal area problem can be formulated as follows:
\begin{subequations}\label{eq:ngon:area}
\begin{align}
\max_{\rv{x},\rv{y},\rv{u}} \quad & \sum_{i=1}^{n-2} u_i\\
\subj \quad & (x_j - x_i)^2 + (y_j - y_i)^2 \le 1 &\forall 1\le i < j \le n-1,\label{eq:ngon:d}\\
& x_i^2 + y_i^2 \le 1 &\forall 1 \le i \le n-1,\label{eq:ngon:r}\\
& y_i \ge 0 &\forall 1 \le i \le n-1,\label{eq:ngon:y}\\
& 2u_i \le y_{i+1}x_i - x_{i+1}y_i &\forall 1 \le i \le n-2,\label{eq:ngon:u}\\
& u_i \ge 0 &\forall 1 \le i \le n-2.
\end{align}
\end{subequations}
At optimality, for all $i=1,2,\ldots,n-2$, $u_i = (y_{i+1}x_i - x_{i+1}y_i)/2$, which corresponds to the area of the triangle $\geo{v}_0\geo{v}_i\geo{v}_{i+1}$.
It is important to note that, unlike what was done in~\cite{audet2002,henrion2013}, this formulation does not make the assumption of Graham's conjecture, nor of the existence of an axis of symmetry.
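The objective of the formulation, $\sum_i u_i$, is thus the area of the polygon computed as a fan of triangles from $\geo{v}_0$. A small Python sketch (for illustration; the paper's implementation is in MATLAB) checks this on the rounded coordinates of $\geo{R}_5^+$ from the figures:

```python
# Area as the sum of triangle areas u_i = (y_{i+1}*x_i - x_{i+1}*y_i)/2,
# fanning out from v_0 = (0, 0); coordinates of R_5^+ rounded to 4 digits.
x = [0, 0.5, 0.3090, 0, -0.3090, -0.5]
y = [0, 0.3633, 0.9511, 1, 0.9511, 0.3633]
n = len(x)

u = [(y[i + 1] * x[i] - x[i + 1] * y[i]) / 2 for i in range(1, n - 1)]
area = sum(u)
print(round(area, 6))  # ~0.67229, close to A(R_5^+) ~ 0.672288
```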
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) node[below]{$\geo{v}_0(0,0)$} -- (0.4370,0.4370) node[right]{$\geo{v}_1(x_1,y_1)$} -- (0.1564,0.9877) node[right]{$\geo{v}_2(x_2,y_2)$} -- (-0.1564,0.9877) node[above]{$\geo{v}_3(x_3,y_3)$} -- (-0.4540,0.8911) node[left]{$\geo{v}_4(x_4,y_4)$} -- (-0.5507,0.2806) node[left]{$\geo{v}_5(x_5,y_5)$} -- cycle;
\draw[->] (-0.25,0)--(0.25,0)node[below]{$x$};
\draw[->] (0,0)--(0,0.5)node[left]{$y$};
\end{tikzpicture}
\caption{Definition of variables: Case of $n=6$ vertices}
\label{figure:model}
\end{figure}
Problem~\eqref{eq:ngon:area} is a nonconvex quadratically constrained quadratic optimization problem and can be reformulated as a difference-of-convex optimization (DCO) problem of the form
\begin{subequations}\label{eq:dco}
\begin{align}
\max_{\rv{z}} \quad & g_0(\rv{z}) - h_0(\rv{z})\\
\subj \quad& g_i(\rv{z}) - h_i(\rv{z}) \ge 0 &\forall 1 \le i \le m,
\end{align}
\end{subequations}
where $g_0,\ldots,g_m$ and $h_0,\ldots,h_m$ are convex quadratic functions. We note that the feasible set
\[
\Omega := \{\rv{z} \colon g_i(\rv{z}) - h_i(\rv{z}) \ge 0, i =1,2,\ldots,m\}
\]
is compact with a nonempty interior, which implies that $g_0(\rv{z}) - h_0(\rv{z}) < \infty$ for all $\rv{z} \in \Omega$.
For a fixed $\rv{c}$, we have $\lb{g}_i(\rv{z};\rv{c}) := g_i(\rv{c}) + \nabla g_i(\rv{c})^T (\rv{z} - \rv{c}) \le g_i(\rv{z})$ for all $i=0,1,\ldots,m$. Then the following problem
\begin{subequations}\label{eq:dcocvx}
\begin{align}
\max_{\rv{z}} \quad & \lb{g}_0(\rv{z};\rv{c}) - h_0(\rv{z})\\
\subj \quad& \lb{g}_i(\rv{z};\rv{c}) - h_i(\rv{z}) \ge 0 &\forall 1 \le i \le m
\end{align}
\end{subequations}
is a convex restriction of the DCO problem~\eqref{eq:dco} as stated by Proposition~\ref{thm:cvxrestr}. Constraint~\eqref{eq:ngon:u} is equivalent to
\[
(y_{i+1}-x_i)^2+(x_{i+1}+y_i)^2+8u_i \le (y_{i+1}+x_i)^2+(x_{i+1}-y_i)^2
\]
for all $i=1,2,\ldots,n-2$. For a fixed $(\rv{a},\rv{b}) \in \mathbb{R}^{n-1} \times \mathbb{R}^{n-1}$, if we replace~\eqref{eq:ngon:u} in~\eqref{eq:ngon:area} by
\[
(y_{i+1}-x_i)^2+(x_{i+1}+y_i)^2+8u_i \le 2(b_{i+1}+a_i)(y_{i+1}+x_i)-(b_{i+1}+a_i)^2
+ 2(a_{i+1}-b_i)(x_{i+1}-y_i)-(a_{i+1}-b_i)^2
\]
for all $i=1,2,\ldots,n-2$, we obtain a convex restriction of the maximal area problem.
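The equivalence of constraint~\eqref{eq:ngon:u} and its difference-of-convex rewriting follows from expanding the squares: the right-hand side minus the left-hand side equals $4(y_{i+1}x_i - x_{i+1}y_i) - 8u_i$. A quick numerical check of this identity (a Python sketch, not from the paper):

```python
import random

random.seed(0)
for _ in range(1000):
    xi, yi, xj, yj, u = (random.uniform(-1, 1) for _ in range(5))
    lhs = (yj - xi) ** 2 + (xj + yi) ** 2 + 8 * u
    rhs = (yj + xi) ** 2 + (xj - yi) ** 2
    # Expanding the squares: rhs - lhs = 4*(yj*xi - xj*yi) - 8*u, so the
    # rewritten inequality lhs <= rhs is exactly 2*u <= yj*xi - xj*yi.
    assert abs((rhs - lhs) - (4 * (yj * xi - xj * yi) - 8 * u)) < 1e-12
print("equivalence verified")
```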
\begin{proposition}\label{thm:cvxrestr}
If $\rv{z}$ is a feasible solution of~\eqref{eq:dcocvx} then $\rv{z}$ is a feasible solution of~\eqref{eq:dco}.
\end{proposition}
\begin{proof}
Let $\rv{z}$ be a feasible solution of~\eqref{eq:dcocvx}, i.e., $\lb{g}_i(\rv{z};\rv{c}) - h_i(\rv{z}) \ge 0$ for all $i=1,2,\ldots,m$. Then $g_i(\rv{z}) - h_i(\rv{z}) \ge \lb{g}_i(\rv{z};\rv{c}) - h_i(\rv{z}) \ge 0$ for all $i=1,2,\ldots,m$. Thus, $\rv{z}$ is a feasible solution of~\eqref{eq:dco}.
\end{proof}
\begin{proposition}\label{thm:ascent}
If $\rv{c}$ is a feasible solution of~\eqref{eq:dco} then \eqref{eq:dcocvx} is a feasible problem. Moreover, if $\rv{z}^*$ is an optimal solution of~\eqref{eq:dcocvx} then $g_0(\rv{c}) - h_0(\rv{c}) \le g_0(\rv{z}^*) - h_0(\rv{z}^*)$.
\end{proposition}
\begin{proof}
Let $\rv{c}$ be a feasible solution of~\eqref{eq:dco}, i.e., $g_i(\rv{c}) - h_i(\rv{c}) \ge 0$ for all $i=1,2,\ldots,m$. Then $\rv{z} = \rv{c}$ satisfies $\lb{g}_i(\rv{c};\rv{c}) - h_i(\rv{c}) = g_i(\rv{c}) - h_i(\rv{c}) \ge 0$ for all $i=1,2,\ldots,m$. Thus, \eqref{eq:dcocvx} is a feasible problem. Moreover, if $\rv{z}^*$ is an optimal solution of~\eqref{eq:dcocvx}, we have $g_0(\rv{c}) - h_0(\rv{c}) = \lb{g}_0(\rv{c};\rv{c}) - h_0(\rv{c}) \le \lb{g}_0(\rv{z}^*;\rv{c}) - h_0(\rv{z}^*) \le g_0(\rv{z}^*) - h_0(\rv{z}^*)$.
\end{proof}
From Proposition~\ref{thm:ascent}, the optimal small $n$-gon $(\rv{x},\rv{y})$ obtained by solving a convex restriction of Problem~\eqref{eq:ngon:area} constructed around a small $n$-gon $(\rv{a},\rv{b})$ has an area at least as large as that of $(\rv{a},\rv{b})$. Proposition~\ref{thm:local} states that if $(\rv{a},\rv{b})$ is the optimal $n$-gon of the convex restriction constructed around itself, then it is a critical point of the maximal area problem.
\begin{proposition}\label{thm:local}
Let $\rv{c}$ be a feasible solution of~\eqref{eq:dco}. We suppose that $\lb{\Omega}(\rv{c}) := \{\rv{z}\colon \lb{g}_i(\rv{z};\rv{c}) - h_i(\rv{z}) \ge 0, i =1,2,\ldots,m\}$ satisfies the Slater condition. If $\rv{c}$ is an optimal solution of~\eqref{eq:dcocvx} then $\rv{c}$ is a critical point of~\eqref{eq:dco}.
\end{proposition}
\begin{proof}
If $\rv{c}$ is an optimal solution of~\eqref{eq:dcocvx} then there exist $m$ scalars $\mu_1, \mu_2,\ldots,\mu_m$ such that
\[
\begin{aligned}
\nabla\lb{g}_0 (\rv{c};\rv{c}) + \sum_{i=1}^m \mu_i\nabla \lb{g}_i(\rv{c};\rv{c}) &= \nabla h_0(\rv{c}) + \sum_{i=1}^m \mu_i\nabla h_i(\rv{c}),\\
\lb{g}_i(\rv{c};\rv{c}) &\ge h_i(\rv{c}) &\forall i=1,2,\ldots,m,\\
\mu_i &\ge 0 &\forall i=1,2,\ldots,m,\\
\mu_i\lb{g}_i(\rv{c};\rv{c}) &= \mu_i h_i(\rv{c}) &\forall i=1,2,\ldots,m.
\end{aligned}
\]
Since $\lb{g}_i (\rv{c};\rv{c}) = g_i(\rv{c})$ and $\nabla \lb{g}_i (\rv{c};\rv{c}) = \nabla g_i(\rv{c})$ for all $i=0,1,\ldots,m$, we conclude that $\rv{c}$ is a critical point of~\eqref{eq:dco}.
\end{proof}
We propose to solve the DCO problem~\eqref{eq:dco} with the sequential convex optimization approach given in Algorithm~\ref{algo:ccp}, also known as the concave-convex procedure. A proof that a sequence $\{\rv{z}_k\}_{k=0}^\infty$ generated by Algorithm~\ref{algo:ccp} converges to a critical point $\rv{z}^*$ of the original DCO problem~\eqref{eq:dco} can be found in~\cite{marks1978,lanckriet2009}.
\begin{algorithm}
\caption{Sequential convex optimization}
\label{algo:ccp}
\begin{algorithmic}[1]
\STATE Initialization: choose a feasible solution $\rv{z}_0$ and a stopping criterion $\varepsilon > 0$.
\STATE $\rv{z}_1 \in \arg\max \{\lb{g}_0(\rv{z};\rv{z}_0) - h_0(\rv{z})\colon \lb{g}_i(\rv{z};\rv{z}_0) - h_i(\rv{z}) \ge 0, i =1,2,\ldots,m\}$
\STATE $k := 1$
\WHILE{$\frac{\|\rv{z}_k - \rv{z}_{k-1}\|}{\|\rv{z}_k\|} > \varepsilon$}
\STATE $\rv{z}_{k+1} \in \arg\max \{\lb{g}_0(\rv{z};\rv{z}_k) - h_0(\rv{z})\colon \lb{g}_i(\rv{z};\rv{z}_k) - h_i(\rv{z}) \ge 0, i =1,2,\ldots,m\}$
\STATE $k := k+1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
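To make the iteration concrete, here is a minimal Python sketch of Algorithm~\ref{algo:ccp} on a toy one-dimensional DCO instance (an illustration we introduce here, not the paper's MATLAB/CVX implementation): maximize $g_0(z)=z^2$ with $h_0=0$ over $\{z\colon 1-z^2\ge 0\}=[-1,1]$. The convex subproblem maximizes the linearization $\lb{g}_0(z;z_k)=2z_kz-z_k^2$ over $[-1,1]$, which is linear and therefore solvable in closed form:

```python
# Toy DCO instance: maximize g0(z) = z^2 (convex, so h0 = 0) over [-1, 1].
# The restriction around z_k replaces g0 by its linearization
# g0_hat(z; z_k) = 2*z_k*z - z_k**2 <= g0(z).

def solve_restriction(c):
    """Maximize the linear function 2*c*z - c**2 over [-1, 1] (closed form)."""
    return 1.0 if c > 0 else (-1.0 if c < 0 else 0.0)

def ccp(z0, eps=1e-5, max_iter=100):
    """Sequential convex optimization (concave-convex procedure) loop."""
    z_prev, z = z0, solve_restriction(z0)
    k = 1
    while abs(z - z_prev) > eps * abs(z) and k < max_iter:
        z_prev, z = z, solve_restriction(z)
        k += 1
    return z, k

z_star, iters = ccp(0.3)
print(z_star, z_star ** 2)  # 1.0 1.0: a critical (here globally optimal) point
```

Starting from a negative $z_0$, the same loop converges to the other critical point $z=-1$; the procedure only guarantees a critical point, not a global optimum.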
\section{Computational results}\label{sec:results}
Problem~\eqref{eq:ngon:area} was solved in MATLAB using CVX~2.2 with MOSEK~9.1.9 and \texttt{default precision} (tolerance $\epsilon = 1.49 \times 10^{-8}$). All the computations were carried out on an \texttt{Intel(R) Core(TM) i7-3540M CPU @ 3.00 GHz} computing platform. Algorithm~\ref{algo:ccp} was implemented as a MATLAB package: OPTIGON, which is freely available at \url{https://github.com/cbingane/optigon}. OPTIGON requires that CVX be installed. CVX is a MATLAB-based modeling system for convex
optimization, which turns MATLAB into a modeling language, allowing constraints and objectives to be specified using standard MATLAB expression syntax~\cite{cvx2}.
We chose the following values as the initial solution:
\[
\begin{aligned}
a_0 &= 0, &b_0 &= 0,\\
a_i &= \frac{\sin \frac{2i\pi}{n-1}}{2\cos \frac{\pi}{2n-2}} = -a_{n-i}, & b_i &= \frac{1-\cos \frac{2i\pi}{n-1}}{2\cos \frac{\pi}{2n-2}} = b_{n-i} &\forall i=1,\ldots,n/2-1,\\
a_{n/2} &= 0, &b_{n/2} &= 1,
\end{aligned}
\]
which define the $n$-gon $\geo{R}_{n-1}^+$, and the stopping criterion $\varepsilon = 10^{-5}$. Table~\ref{table:area} shows the optimal values $A_n^*$ of the maximal area problem for even numbers $n=6,8,\ldots,128$, along with the areas of the initial $n$-gons $\geo{R}_{n-1}^+$, the best lower bounds $\lb{A}_n$ found in the literature, and the upper bounds~$\ub{A}_n$. We also report the number $k$ of iterations of Algorithm~\ref{algo:ccp} for each~$n$. The results support the following key points:
\begin{enumerate}
\item For $6 \le n\le 12$, $\lb{A}_n - A_n^* \le 10^{-8}$, i.e., Algorithm~\ref{algo:ccp} converges to the best known optimal solutions found in the literature.
\item For $32 \le n \le 80$, $\lb{A}_n < A(\geo{R}_{n-1}^+) < A_n^*$, i.e., the solutions obtained by Pinter~\cite{pinter2018} are suboptimal.
\item For all $n$, the solutions obtained with Algorithm~\ref{algo:ccp} satisfy, within the limits of numerical computation, Theorem~\ref{thm:area:diam} and Conjecture~\ref{thm:area:sym}, i.e.,
\[
\begin{aligned}
x_{n/2} &=0, &y_{n/2} &=1,\\
\|\geo{v}_{n/2-1}\| &=1, &\|\geo{v}_{n/2+1}\| &=1,\\
\|\geo{v}_{i+n/2}-\geo{v}_i\| &=1, &\|\geo{v}_{i+n/2+1}-\geo{v}_i\| &=1 &\forall i=1,2,\ldots,n/2-2,\\
\|\geo{v}_{n-1}-\geo{v}_{n/2-1}\| &=1,\\
x_{n-i} &=-x_i, &y_{n-i} &= y_i &\forall i=1,2,\ldots,n/2-1.
\end{aligned}
\]
We illustrate the largest small $16$-, $32$- and $64$-gons in Figure~\ref{figure:Un}. Furthermore, we remark that Theorem~\ref{thm:area:diam} and Conjecture~\ref{thm:area:sym} are verified by each polygon of the sequence generated by Algorithm~\ref{algo:ccp}. All $6$-gons generated by the algorithm are represented in Figure~\ref{figure:ccp:U6} and the coordinates of their vertices are given in Table~\ref{table:ccp:U6}.
\end{enumerate}
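These optimality conditions can be checked directly on the rounded coordinates of $\geo{U}_6$ from the figures. The following Python sketch (an illustration we add here; the tolerance absorbs the four-digit rounding) verifies each condition for $n=6$:

```python
from math import hypot

# Rounded coordinates of U_6 from the figures (v_0, v_1, ..., v_5)
V = [(0, 0), (0.5, 0.4024), (0.3438, 0.9391), (0, 1),
     (-0.3438, 0.9391), (-0.5, 0.4024)]
n, tol = len(V), 1e-3

def dist(p, q):
    return hypot(q[0] - p[0], q[1] - p[1])

m = n // 2
assert abs(V[m][0]) < tol and abs(V[m][1] - 1) < tol  # x_{n/2}=0, y_{n/2}=1
assert abs(dist(V[0], V[m - 1]) - 1) < tol            # ||v_{n/2-1}|| = 1
assert abs(dist(V[0], V[m + 1]) - 1) < tol            # ||v_{n/2+1}|| = 1
for i in range(1, m - 1):                             # unit diagonals
    assert abs(dist(V[i], V[i + m]) - 1) < tol
    assert abs(dist(V[i], V[i + m + 1]) - 1) < tol
assert abs(dist(V[m - 1], V[n - 1]) - 1) < tol        # ||v_{n-1}-v_{n/2-1}|| = 1
for i in range(1, m):                                 # axial symmetry
    assert abs(V[n - i][0] + V[i][0]) < tol
    assert abs(V[n - i][1] - V[i][1]) < tol
print("U_6 satisfies the diameter-graph and symmetry conditions")
```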
\begin{table}[t]
\footnotesize
\centering
\caption{Maximal area problem}
\label{table:area}
\resizebox{!}{10cm}{
\begin{tabular}{@{}rlll|lr@{}}
\toprule
$n$ & $A(\geo{R}_{n-1}^+)$ & $\lb{A}_n$ & $\ub{A}_n$ & $A_n^*$ & \# ite. $k$ \\
\midrule
6 & 0.6722882584 & 0.6749814429~\cite{bieri1961,graham1975,mossinghoff2006b} & 0.6961524227 & 0.6749814387 & 5 \\
8 & 0.7253199909 & 0.7268684828~\cite{audet2002,mossinghoff2006b} & 0.7350842599 & 0.7268684802 & 10 \\
10 & 0.7482573378 & 0.7491373459~\cite{henrion2013,mossinghoff2006b} & 0.7531627703 & 0.7491373454 & 16 \\
12 & 0.7601970055 & 0.7607298734~\cite{henrion2013,mossinghoff2006b} & 0.7629992851 & 0.7607298710 & 24 \\
14 & 0.7671877750 & 0.7675310111~\cite{mossinghoff2006b} & 0.7689359584 & 0.7675310093 & 33 \\
16 & 0.7716285345 & 0.7718613220~\cite{mossinghoff2006b} & 0.7727913493 & 0.7718613187 & 43 \\
18 & 0.7746235089 & 0.7747881651~\cite{mossinghoff2006b} & 0.7754356273 & 0.7747881619 & 55 \\
20 & 0.7767382147 & 0.7768587560~\cite{mossinghoff2006b} & 0.7773275822 & 0.7768587517 & 68 \\
22 & 0.7782865351 & 0.7783773308~\cite{pinter2018} & 0.7787276939 & 0.7783773228 & 81 \\
24 & 0.7794540033 & 0.7795240461~\cite{pinter2018} & 0.7797927529 & 0.7795240330 & 95 \\
26 & 0.7803559816 & 0.7804111201~\cite{pinter2018} & 0.7806217145 & 0.7804111058 & 109 \\
28 & 0.7810672517 & 0.7811114192~\cite{pinter2018} & 0.7812795297 & 0.7811114002 & 122 \\
30 & 0.7816380102 & 0.7816739255~\cite{pinter2018} & 0.7818102598 & 0.7816739044 & 136 \\
32 & 0.7821029651 & 0.7818946320~\cite{pinter2018} & 0.7822446490 & 0.7821325276 & 148 \\
34 & 0.7824867354 & 0.7823103007~\cite{pinter2018} & 0.7826046775 & 0.7825113660 & 159 \\
36 & 0.7828071755 & 0.7826513767~\cite{pinter2018} & 0.7829063971 & 0.7828279054 & 169 \\
38 & 0.7830774889 & 0.7829526627~\cite{pinter2018} & 0.7831617511 & 0.7830950955 & 177 \\
40 & 0.7833076096 & 0.7832011589~\cite{pinter2018} & 0.7833797744 & 0.7833226804 & 183 \\
42 & 0.7835051276 & 0.7834135187~\cite{pinter2018} & 0.7835674041 & 0.7835181187 & 185 \\
44 & 0.7836759223 & 0.7835966860~\cite{pinter2018} & 0.7837300377 & 0.7836871900 & 184 \\
46 & 0.7838246055 & 0.7837554636~\cite{pinter2018} & 0.7838719255 & 0.7838344336 & 179 \\
48 & 0.7839548353 & 0.7838942710~\cite{pinter2018} & 0.7839964516 & 0.7839634510 & 172 \\
50 & 0.7840695435 & 0.7840161496~\cite{pinter2018} & 0.7841063371 & 0.7840771278 & 162 \\
52 & 0.7841711020 & 0.7841233641~\cite{pinter2018} & 0.7842037903 & 0.7841778072 & 150 \\
54 & 0.7842614465 & 0.7842192995~\cite{pinter2018} & 0.7842906181 & 0.7842674010 & 138 \\
56 & 0.7843421691 & 0.7843044654~\cite{pinter2018} & 0.7843683109 & 0.7843474779 & 128 \\
58 & 0.7844145892 & 0.7843807534~\cite{pinter2018} & 0.7844381066 & 0.7844193386 & 118 \\
60 & 0.7844798073 & 0.7844492943~\cite{pinter2018} & 0.7845010402 & 0.7844840717 & 109 \\
62 & 0.7845387477 & 0.7845111362~\cite{pinter2018} & 0.7845579827 & 0.7845425886 & 101 \\
64 & 0.7845921910 & 0.7834620877~\cite{pinter2018} & 0.7846096710 & 0.7845956631 & 94 \\
66 & 0.7846408000 & 0.7845910589~\cite{pinter2018} & 0.7846567322 & 0.7846439473 & 88 \\
68 & 0.7846851407 & 0.7846139029~\cite{pinter2018} & 0.7846997026 & 0.7846880001 & 82 \\
70 & 0.7847256986 & 0.7846403575~\cite{pinter2018} & 0.7847390429 & 0.7847283036 & 77 \\
72 & 0.7847628920 & 0.7847454020~\cite{pinter2018} & 0.7847751508 & 0.7847652718 & 72 \\
74 & 0.7847970830 & 0.7845564840~\cite{pinter2018} & 0.7848083708 & 0.7847992622 & 68 \\
76 & 0.7848285863 & 0.7847585719~\cite{pinter2018} & 0.7848390031 & 0.7848305850 & 64 \\
78 & 0.7848576763 & 0.7845160579~\cite{pinter2018} & 0.7848673094 & 0.7848595143 & 61 \\
80 & 0.7848845934 & 0.7848252941~\cite{pinter2018} & 0.7848935195 & 0.7848862871 & 58 \\
82 & 0.7849095487 & -- & 0.7849178354 & 0.7849111119 & 55 \\
84 & 0.7849327284 & -- & 0.7849404352 & 0.7849341725 & 52 \\
86 & 0.7849542969 & -- & 0.7849614768 & 0.7849556352 & 50 \\
88 & 0.7849744002 & -- & 0.7849811001 & 0.7849756425 & 48 \\
90 & 0.7849931681 & -- & 0.7849994298 & 0.7849943223 & 46 \\
92 & 0.7850107163 & -- & 0.7850165772 & 0.7850117894 & 44 \\
94 & 0.7850271482 & -- & 0.7850326419 & 0.7850281477 & 42 \\
96 & 0.7850425565 & -- & 0.7850477130 & 0.7850434878 & 40 \\
98 & 0.7850570245 & -- & 0.7850618708 & 0.7850578951 & 39 \\
100 & 0.7850706272 & -- & 0.7850751877 & 0.7850714422 & 38 \\
102 & 0.7850834323 & -- & 0.7850877290 & 0.7850841941 & 36 \\
104 & 0.7850955008 & -- & 0.7850995538 & 0.7850962152 & 35 \\
106 & 0.7851068883 & -- & 0.7851107156 & 0.7851075587 & 34 \\
108 & 0.7851176450 & -- & 0.7851212630 & 0.7851182747 & 33 \\
110 & 0.7851278167 & -- & 0.7851312404 & 0.7851284086 & 32 \\
112 & 0.7851374450 & -- & 0.7851406881 & 0.7851380017 & 31 \\
114 & 0.7851465680 & -- & 0.7851496430 & 0.7851470916 & 30 \\
116 & 0.7851552203 & -- & 0.7851581386 & 0.7851557129 & 29 \\
118 & 0.7851634339 & -- & 0.7851662060 & 0.7851639010 & 29 \\
120 & 0.7851712379 & -- & 0.7851738734 & 0.7851716781 & 28 \\
122 & 0.7851786591 & -- & 0.7851811668 & 0.7851790741 & 27 \\
124 & 0.7851857221 & -- & 0.7851881101 & 0.7851861129 & 26 \\
126 & 0.7851924497 & -- & 0.7851947255 & 0.7851928211 & 26 \\
128 & 0.7851988626 & -- & 0.7852010332 & 0.7851992126 & 25 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}
\centering
\subfloat[$(\geo{U}_{16},0.771861)$]{
\begin{tikzpicture}[scale=5]
\draw[dashed] (0,0) -- (0.2163,0.0539) -- (0.3801,0.1794) -- (0.4793,0.3573) -- (0.5000,0.5595) -- (0.4390,0.7532) -- (0.3070,0.9060) -- (0.1320,0.9912) -- (0,1) -- (-0.1320,0.9912) -- (-0.3070,0.9060) -- (-0.4390,0.7532) -- (-0.5000,0.5595) -- (-0.4793,0.3573) -- (-0.3801,0.1794) -- (-0.2163,0.0539) -- cycle;
\draw (0,0)--(0,1);
\draw (0,0)--(0.1320,0.9912);\draw (0,0)--(-0.1320,0.9912);
\draw (0.2163,0.0539)--(-0.1320,0.9912);\draw (0.2163,0.0539)--(-0.3070,0.9060);
\draw (0.3801,0.1794)--(-0.3070,0.9060);\draw (0.3801,0.1794)--(-0.4390,0.7532);
\draw (0.4793,0.3573)--(-0.4390,0.7532);\draw (0.4793,0.3573)--(-0.5000,0.5595);
\draw (0.5000,0.5595)--(-0.5000,0.5595);\draw (0.5000,0.5595)--(-0.4793,0.3573);
\draw (0.4390,0.7532)--(-0.4793,0.3573);\draw (0.4390,0.7532)--(-0.3801,0.1794);
\draw (0.3070,0.9060)--(-0.3801,0.1794);\draw (0.3070,0.9060)--(-0.2163,0.0539);
\draw (0.1320,0.9912)--(-0.2163,0.0539);
\end{tikzpicture}
}
\subfloat[$(\geo{U}_{32},0.782133)$]{
\begin{tikzpicture}[scale=5]
\draw[dashed] (0,0) -- (0.1083,0.0131) -- (0.2043,0.0450) -- (0.2910,0.0947) -- (0.3661,0.1606) -- (0.4266,0.2401) -- (0.4702,0.3301) -- (0.4950,0.4271) -- (0.5000,0.5271) -- (0.4850,0.6261) -- (0.4507,0.7200) -- (0.3984,0.8052) -- (0.3302,0.8783) -- (0.2491,0.9363) -- (0.1587,0.9768) -- (0.0654,0.9979) -- (0,1) -- (-0.0654,0.9979) -- (-0.1587,0.9768) -- (-0.2491,0.9363) -- (-0.3302,0.8783) -- (-0.3984,0.8052) -- (-0.4507,0.7200) -- (-0.4850,0.6261) -- (-0.5000,0.5271) -- (-0.4950,0.4271) -- (-0.4702,0.3301) -- (-0.4266,0.2401) -- (-0.3661,0.1606) -- (-0.2910,0.0947) -- (-0.2043,0.0450) -- (-0.1083,0.0131) -- cycle;
\draw (0,0)--(0,1);
\draw (0,0)--(0.0654,0.9979);\draw (0,0)--(-0.0654,0.9979);
\draw (0.1083,0.0131)--(-0.0654,0.9979);\draw (0.1083,0.0131)--(-0.1587,0.9768);
\draw (0.2043,0.0450)--(-0.1587,0.9768);\draw (0.2043,0.0450)--(-0.2491,0.9363);
\draw (0.2910,0.0947)--(-0.2491,0.9363);\draw (0.2910,0.0947)--(-0.3302,0.8783);
\draw (0.3661,0.1606)--(-0.3302,0.8783);\draw (0.3661,0.1606)--(-0.3984,0.8052);
\draw (0.4266,0.2401)--(-0.3984,0.8052);\draw (0.4266,0.2401)--(-0.4507,0.7200);
\draw (0.4702,0.3301)--(-0.4507,0.7200);\draw (0.4702,0.3301)--(-0.4850,0.6261);
\draw (0.4950,0.4271)--(-0.4850,0.6261);\draw (0.4950,0.4271)--(-0.5000,0.5271);
\draw (0.5000,0.5271)--(-0.5000,0.5271);\draw (0.5000,0.5271)--(-0.4950,0.4271);
\draw (0.4850,0.6261)--(-0.4950,0.4271);\draw (0.4850,0.6261)--(-0.4702,0.3301);
\draw (0.4507,0.7200)--(-0.4702,0.3301);\draw (0.4507,0.7200)--(-0.4266,0.2401);
\draw (0.3984,0.8052)--(-0.4266,0.2401);\draw (0.3984,0.8052)--(-0.3661,0.1606);
\draw (0.3302,0.8783)--(-0.3661,0.1606);\draw (0.3302,0.8783)--(-0.2910,0.0947);
\draw (0.2491,0.9363)--(-0.2910,0.0947);\draw (0.2491,0.9363)--(-0.2043,0.0450);
\draw (0.1587,0.9768)--(-0.2043,0.0450);\draw (0.1587,0.9768)--(-0.1083,0.0131);
\draw (0.0654,0.9979)--(-0.1083,0.0131);
\end{tikzpicture}
}
\subfloat[$(\geo{U}_{64},0.784596)$]{
\begin{tikzpicture}[scale=5]
\draw[dashed] (0,0) -- (0.0531,0.0031) -- (0.1018,0.0108) -- (0.1492,0.0231) -- (0.1953,0.0400) -- (0.2398,0.0615) -- (0.2820,0.0874) -- (0.3216,0.1174) -- (0.3581,0.1513) -- (0.3911,0.1887) -- (0.4201,0.2291) -- (0.4451,0.2723) -- (0.4656,0.3178) -- (0.4815,0.3650) -- (0.4926,0.4136) -- (0.4988,0.4631) -- (0.5000,0.5129) -- (0.4963,0.5627) -- (0.4876,0.6118) -- (0.4741,0.6598) -- (0.4559,0.7062) -- (0.4332,0.7506) -- (0.4061,0.7924) -- (0.3750,0.8314) -- (0.3403,0.8670) -- (0.3022,0.8990) -- (0.2612,0.9270) -- (0.2178,0.9507) -- (0.1725,0.9700) -- (0.1257,0.9846) -- (0.0783,0.9944) -- (0.0319,0.9995) -- (0,1) -- (-0.0319,0.9995) -- (-0.0783,0.9944) -- (-0.1257,0.9846) -- (-0.1725,0.9700) -- (-0.2178,0.9507) -- (-0.2612,0.9270) -- (-0.3022,0.8990) -- (-0.3403,0.8670) -- (-0.3750,0.8314) -- (-0.4061,0.7924) -- (-0.4332,0.7506) -- (-0.4559,0.7062) -- (-0.4741,0.6598) -- (-0.4876,0.6118) -- (-0.4963,0.5627) -- (-0.5000,0.5129) -- (-0.4988,0.4631) -- (-0.4926,0.4136) -- (-0.4815,0.3650) -- (-0.4656,0.3178) -- (-0.4451,0.2723) -- (-0.4201,0.2291) -- (-0.3911,0.1887) -- (-0.3581,0.1513) -- (-0.3216,0.1174) -- (-0.2820,0.0874) -- (-0.2398,0.0615) -- (-0.1953,0.0400) -- (-0.1492,0.0231) -- (-0.1018,0.0108) -- (-0.0531,0.0031) -- cycle;
\draw (0,0)--(0,1);
\draw (0,0)--(0.0319,0.9995);\draw (0,0)--(-0.0319,0.9995);
\draw (0.0531,0.0031)--(-0.0319,0.9995);\draw (0.0531,0.0031)--(-0.0783,0.9944);
\draw (0.1018,0.0108)--(-0.0783,0.9944);\draw (0.1018,0.0108)--(-0.1257,0.9846);
\draw (0.1492,0.0231)--(-0.1257,0.9846);\draw (0.1492,0.0231)--(-0.1725,0.9700);
\draw (0.1953,0.0400)--(-0.1725,0.9700);\draw (0.1953,0.0400)--(-0.2178,0.9507);
\draw (0.2398,0.0615)--(-0.2178,0.9507);\draw (0.2398,0.0615)--(-0.2612,0.9270);
\draw (0.2820,0.0874)--(-0.2612,0.9270);\draw (0.2820,0.0874)--(-0.3022,0.8990);
\draw (0.3216,0.1174)--(-0.3022,0.8990);\draw (0.3216,0.1174)--(-0.3403,0.8670);
\draw (0.3581,0.1513)--(-0.3403,0.8670);\draw (0.3581,0.1513)--(-0.3750,0.8314);
\draw (0.3911,0.1887)--(-0.3750,0.8314);\draw (0.3911,0.1887)--(-0.4061,0.7924);
\draw (0.4201,0.2291)--(-0.4061,0.7924);\draw (0.4201,0.2291)--(-0.4332,0.7506);
\draw (0.4451,0.2723)--(-0.4332,0.7506);\draw (0.4451,0.2723)--(-0.4559,0.7062);
\draw (0.4656,0.3178)--(-0.4559,0.7062);\draw (0.4656,0.3178)--(-0.4741,0.6598);
\draw (0.4815,0.3650)--(-0.4741,0.6598);\draw (0.4815,0.3650)--(-0.4876,0.6118);
\draw (0.4926,0.4136)--(-0.4876,0.6118);\draw (0.4926,0.4136)--(-0.4963,0.5627);
\draw (0.4988,0.4631)--(-0.4963,0.5627);\draw (0.4988,0.4631)--(-0.5000,0.5129);
\draw (0.5000,0.5129)--(-0.5000,0.5129);\draw (0.5000,0.5129)--(-0.4988,0.4631);
\draw (0.4963,0.5627)--(-0.4988,0.4631);\draw (0.4963,0.5627)--(-0.4926,0.4136);
\draw (0.4876,0.6118)--(-0.4926,0.4136);\draw (0.4876,0.6118)--(-0.4815,0.3650);
\draw (0.4741,0.6598)--(-0.4815,0.3650);\draw (0.4741,0.6598)--(-0.4656,0.3178);
\draw (0.4559,0.7062)--(-0.4656,0.3178);\draw (0.4559,0.7062)--(-0.4451,0.2723);
\draw (0.4332,0.7506)--(-0.4451,0.2723);\draw (0.4332,0.7506)--(-0.4201,0.2291);
\draw (0.4061,0.7924)--(-0.4201,0.2291);\draw (0.4061,0.7924)--(-0.3911,0.1887);
\draw (0.3750,0.8314)--(-0.3911,0.1887);\draw (0.3750,0.8314)--(-0.3581,0.1513);
\draw (0.3403,0.8670)--(-0.3581,0.1513);\draw (0.3403,0.8670)--(-0.3216,0.1174);
\draw (0.3022,0.8990)--(-0.3216,0.1174);\draw (0.3022,0.8990)--(-0.2820,0.0874);
\draw (0.2612,0.9270)--(-0.2820,0.0874);\draw (0.2612,0.9270)--(-0.2398,0.0615);
\draw (0.2178,0.9507)--(-0.2398,0.0615);\draw (0.2178,0.9507)--(-0.1953,0.0400);
\draw (0.1725,0.9700)--(-0.1953,0.0400);\draw (0.1725,0.9700)--(-0.1492,0.0231);
\draw (0.1257,0.9846)--(-0.1492,0.0231);\draw (0.1257,0.9846)--(-0.1018,0.0108);
\draw (0.0783,0.9944)--(-0.1018,0.0108);\draw (0.0783,0.9944)--(-0.0531,0.0031);
\draw (0.0319,0.9995)--(-0.0531,0.0031);
\end{tikzpicture}
}
\caption{Three largest small $n$-gons $(\geo{U}_n,A_n^*)$}
\label{figure:Un}
\end{figure}
\begin{figure}[h]
\centering
\subfloat[$(\geo{R}_5^+,0.672288)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.3633) -- (0.3090,0.9511) -- (0,1) -- (-0.3090,0.9511) -- (-0.5000,0.3633) -- cycle;
\draw (0,1) -- (0,0) -- (0.3090,0.9511) -- (-0.5000,0.3633) -- (0.5000,0.3633) -- (-0.3090,0.9511) -- (0,0);
\end{tikzpicture}
}
\subfloat[$(\geo{P}_6^1,0.674941)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.3975) -- (0.3397,0.9405) -- (0,1) -- (-0.3397,0.9405) -- (-0.5000,0.3975) -- cycle;
\draw (0,1) -- (0,0) -- (0.3397,0.9405) -- (-0.5000,0.3975) -- (0.5000,0.3975) -- (-0.3397,0.9405) -- (0,0);
\end{tikzpicture}
}
\subfloat[$(\geo{P}_6^2,0.674981)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.4018) -- (0.3433,0.9392) -- (0,1) -- (-0.3433,0.9392) -- (-0.5000,0.4024) -- cycle;
\draw (0,1) -- (0,0) -- (0.3433,0.9392) -- (-0.5000,0.4018) -- (0.5000,0.4018) -- (-0.3433,0.9392) -- (0,0);
\end{tikzpicture}
}\\
\subfloat[$(\geo{P}_6^3,0.674981)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.4023) -- (0.3437,0.9391) -- (0,1) -- (-0.3437,0.9391) -- (-0.5000,0.4023) -- cycle;
\draw (0,1) -- (0,0) -- (0.3437,0.9391) -- (-0.5000,0.4023) -- (0.5000,0.4023) -- (-0.3437,0.9391) -- (0,0);
\end{tikzpicture}
}
\subfloat[$(\geo{P}_6^4,0.674981)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.4023) -- (0.3438,0.9391) -- (0,1) -- (-0.3438,0.9391) -- (-0.5000,0.4023) -- cycle;
\draw (0,1) -- (0,0) -- (0.3438,0.9391) -- (-0.5000,0.4023) -- (0.5000,0.4023) -- (-0.3438,0.9391) -- (0,0);
\end{tikzpicture}
}
\subfloat[$(\geo{P}_6^5,0.674981)$]{
\begin{tikzpicture}[scale=4]
\draw[dashed] (0,0) -- (0.5000,0.4024) -- (0.3438,0.9391) -- (0,1) -- (-0.3438,0.9391) -- (-0.5000,0.4024) -- cycle;
\draw (0,1) -- (0,0) -- (0.3438,0.9391) -- (-0.5000,0.4024) -- (0.5000,0.4024) -- (-0.3438,0.9391) -- (0,0);
\end{tikzpicture}
}
\caption{All $6$-gons $(\geo{P}_6^k,A(\geo{P}_6^k))$ generated by Algorithm~\ref{algo:ccp}}
\label{figure:ccp:U6}
\end{figure}
\begin{table}[t]
\footnotesize
\centering
\caption{Vertices of $6$-gons generated by Algorithm~\ref{algo:ccp}}
\label{table:ccp:U6}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}l|ccccc|l@{}}
\toprule
$6$-gon & \multicolumn{5}{c|}{Coordinates $(x_i,y_i)$} & Area \\
\cmidrule{2-6} & $(x_1,y_1)$ & $(x_2,y_2)$ & $(x_3,y_3)$ & $(x_4,y_4)$ & $(x_5,y_5)$ &\\
\midrule
$\geo{R}_5^+$ & $(0.500000,0.363271)$ & $(0.309017,0.951057)$ & $(0.000000,1.000000)$ & $(-0.309017,0.951057)$ & $(-0.500000,0.363271)$ & $0.6722882584$ \\
$\geo{P}_6^1$ & $(0.500000,0.397460)$ & $(0.339680,0.940541)$ & $(0.000000,1.000000)$ & $(-0.339680,0.940541)$ & $(-0.500000,0.397460)$ & $0.6749414624$ \\
$\geo{P}_6^2$ & $(0.500000,0.401764)$ & $(0.343285,0.939231)$ & $(0.000000,1.000000)$ & $(-0.343285,0.939231)$ & $(-0.500000,0.401764)$ & $0.6749808685$ \\
$\geo{P}_6^3$ & $(0.500000,0.402283)$ & $(0.343715,0.939074)$ & $(0.000000,1.000000)$ & $(-0.343715,0.939074)$ & $(-0.500000,0.402283)$ & $0.6749814310$ \\
$\geo{P}_6^4$ & $(0.500000,0.402345)$ & $(0.343766,0.939055)$ & $(0.000000,1.000000)$ & $(-0.343766,0.939055)$ & $(-0.500000,0.402345)$ & $0.6749814386$ \\
$\geo{P}_6^5$ & $(0.500000,0.402352)$ & $(0.343773,0.939053)$ & $(0.000000,1.000000)$ & $(-0.343773,0.939053)$ & $(-0.500000,0.402352)$ & $0.6749814387$ \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Conclusion}\label{sec:conclusion}
We proposed a sequential convex optimization approach to find the largest small $n$-gon for a given even number $n\ge 6$, a problem formulated as a nonconvex quadratically constrained quadratic optimization problem. The algorithm, also known as the concave-convex procedure, guarantees convergence to a locally optimal solution.
Without assuming Graham's conjecture or the existence of an axis of symmetry in our quadratic formulation, numerical experiments on polygons with up to $n=128$ sides showed that each optimal $n$-gon obtained with the proposed algorithm satisfies both conditions within the limits of the numerical computations. Furthermore, for even $6\le n\le 12$, the $n$-gons obtained correspond to the known largest small $n$-gons.
\section*{Acknowledgements}
The author thanks Charles Audet, Professor at Polytechnique Montreal, for helpful discussions on largest small polygons and helpful comments on early drafts of this paper.
\bibliographystyle{ieeetr}
| {
"timestamp": "2021-06-02T02:11:56",
"yymm": "2009",
"arxiv_id": "2009.07893",
"language": "en",
"url": "https://arxiv.org/abs/2009.07893",
"abstract": "A small polygon is a polygon of unit diameter. The maximal area of a small polygon with $n=2m$ vertices is not known when $m\\ge 7$. Finding the largest small $n$-gon for a given number $n\\ge 3$ can be formulated as a nonconvex quadratically constrained quadratic optimization problem. We propose to solve this problem with a sequential convex optimization approach, which is an ascent algorithm guaranteeing convergence to a locally optimal solution. Numerical experiments on polygons with up to $n=128$ sides suggest that the optimal solutions obtained are near-global. Indeed, for even $6 \\le n \\le 12$, the algorithm proposed in this work converges to known global optimal solutions found in the literature.",
"subjects": "Optimization and Control (math.OC)",
"title": "Largest small polygons: A sequential convex optimization approach",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130599601177,
"lm_q2_score": 0.8519528019683106,
"lm_q1q2_score": 0.842592447616275
} |
https://arxiv.org/abs/2209.13199 | Normal Bundles of Rational Normal Curves on Hypersurfaces | Let $C$ be the rational normal curve of degree $e$ in $\mathbb{P}^n$, and let $X\subset \mathbb{P}^n$ be a degree $d\ge 2$ hypersurface containing $C$. In previous work, I. Coskun and E. Riedl showed that the normal bundle $N_{C/X}$ is balanced for a general $X$. H. Larson studied the case of lines ($e=1$) and computed the dimension of the space of hypersurfaces for which $N_{C/X}$ has a given splitting type. In this paper, we work with any $e\ge 2$. We compute explicit examples of hypersurfaces for all possible splitting types, and for $d\ge 3$, we compute the dimension of the space of hypersurfaces for which $N_{C/X}$ has a given splitting type. For $d=2$, we give a lower bound on the maximum rank of quadrics with fixed splitting type. |
\section{Introduction}
Rational curves play an important role in the study of birational and arithmetic geometry of projective varieties. The local structure of the space of rational curves on a variety is determined by its normal bundle. In this paper, we study the possible splitting types of the normal bundle of rational normal curves on a hypersurface in $\P^n$ and obtain explicit examples of hypersurfaces for each splitting type. Unless otherwise specified, we work over an algebraically closed field $K$ of arbitrary characteristic $p$.
Let $X$ be a degree $d$ hypersurface in $\P^n$ containing a smooth rational curve $C$. By the Birkhoff--Grothendieck theorem, the normal bundle of $C$ in $X$, $N_{C/X}$, splits as a direct sum $N_{C/X}\cong \bigoplus_{i=1}^{n-2}\mathcal{O}_{\P^1}(a_i)$. The collection of integers $a_i$ is called the \textit{splitting type} of $N_{C/X}$. We say a splitting type is \textit{balanced} when $|a_i-a_j|\le 1$ for all $i$ and $j$. We are interested in studying the possible splitting types when $C$ is the rational normal curve of degree $e$ in $\P^n$. Additionally, given a splitting type $E_{\vec{a}}=\bigoplus_{i=1}^{n-2}\mathcal{O}(a_i)$, we can study the space of hypersurfaces $X$ containing $C$ for which $N_{C/X}\cong \bigoplus_{i=1}^{n-2}\mathcal{O}(a_i)$. Let $X$ be given by a degree $d$ polynomial $F$ as $X = V(F)$, and define
\[ \Sigma = \{ F \ | \ X \text{ is a degree } d \text{ hypersurface smooth along } C \}\subset H^0(\mathcal{O}_{\P^n}(d)) \]
and
\[ \Sigma_{\vec{a}} = \{ F\in \Sigma \ | \ N_{C/X}\cong E_{\vec{a}} \}\subset \Sigma . \]
The codimension of the locus of vector bundles on $\P^1$ with a specified splitting type $E_{\vec{a}}$ in the versal deformation space is given by $h^1(\P^1, \End(E_{\vec{a}}))$ (see \cite{C08} Lemma 2.4 and \cite{CR18}). We call it the \textit{expected codimension} for $\Sigma_{\vec{a}}$ in $\Sigma$ and observe that
\[ h^1(\End(E_{\vec{a}})) = h^1(E_{\vec{a}}^*\otimes E_{\vec{a}}) = \sum_{\{i,j | a_i-a_j\le -2\}}(a_j-a_i-1). \]
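As a quick sanity check (not part of the original argument), the displayed formula for the expected codimension is easy to evaluate; the sketch below is a hypothetical Python helper encoding it.

```python
def h1_end(a):
    # h^1(End(E)) for E = O(a_1) + ... + O(a_r): sum of (a_j - a_i - 1)
    # over ordered pairs (i, j) with a_i - a_j <= -2, as in the formula above.
    return sum(aj - ai - 1 for ai in a for aj in a if ai - aj <= -2)

print(h1_end([3, 3, 2]))  # 0: a balanced bundle has no pairs with gap >= 2
print(h1_end([4, 1]))     # 2: the single pair with gap 3 contributes 3 - 1 = 2
```

In particular, balanced splitting types have expected codimension $0$, consistent with the general hypersurface having balanced normal bundle.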
Normal bundles of rational curves have been studied in many different works (see \cite{Sa80}, \cite{GS80}, \cite{EV81}, \cite{EV82}, \cite{M86}, \cite{Ran}, \cite{AlzatiRe}, \cite{AlzatiReTortora}, \cite{CR18}, \cite{Ran20}, \cite{Ran22}). In \cite{CR18}, I. Coskun and E. Riedl show that the locus of rational curves in $\P^n$ with a given splitting type can have arbitrarily many components, and that the difference between the expected dimension and the actual dimension of a component can grow arbitrarily large as the degree of the curve increases.
For rational curves on hypersurfaces, H. Larson (\cite{L21}, proof of Theorem 1.1) studies the case of lines and shows that $\Sigma_{\vec{a}}$ is smooth of the expected codimension. For rational normal curves of degree $e\ge 2$, I. Coskun and E. Riedl prove (\cite{CR} Corollary 3.8) that the normal bundle on a degree $d\ge 2$ general hypersurface is balanced. Here, we examine more closely the case of rational normal curves on hypersurfaces, and find explicit examples of hypersurfaces $X$ for each possible splitting type of $N_{C/X}$.
In Section 2, we describe the normal bundle as the kernel in the sequence
\[ 0\longrightarrow N_{C/X}\longrightarrow \mathcal{O}(e+2)^{e-1}\oplus \mathcal{O}(e)^{n-e}\overset{\psi_F}{\longrightarrow} \mathcal{O}(de)\longrightarrow 0, \]
so the splitting type of $N_{C/X}$ must have the form of a rank $(n-2)$ direct sum
\[ N_{C/X}\cong \left(\bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i)\right )\oplus \left (\bigoplus_{j=e+1}^{n}\mathcal{O}(e-b_j) \right ), \]
with $a_i, b_j\ge 0$, and degree $\deg N_{C/X} = \deg N_{C/\P^n} - \deg \mathcal{O}(de) = e(n-d+1)-2$, equivalently $\sum_{i=1}^{e-2}a_i + \sum_{j=e+1}^n b_j = e(d-1)-2$. As a direct consequence of the surjectivity of the map $\phi : H^0(\mathcal{I}_C(d))\to \Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de))$ for $d\ge 3$ from \cite{CR} (see Section 2), we show that all these splitting types are achieved, and we look for explicit examples. We first examine the case $d\ge 3$:
\begin{theorem}\label{existencetheorem} (cf. Theorem \ref{existence})
Let $e\le n$, $d\ge 3$, and let $C$ be the rational normal curve of degree $e$ in $\P^n$. Then for all splitting types
\[ E = \left(\bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i)\right )\oplus \left (\bigoplus_{j=e+1}^{n}\mathcal{O}(e-b_j) \right ), \]
with $a_i, b_j\ge 0$ and $\sum_{i=1}^{e-2}a_i + \sum_{j=e+1}^n b_j = e(d-1)-2$, we obtain explicit examples of degree $d$ hypersurfaces $X$, smooth along the curve $C$, with normal bundle $N_{C/X}\cong E$.
\end{theorem}
When $d\ge 4$, Bertini's Theorem implies that the general example is smooth.
\begin{corollary}\label{existsmooth} (cf. Corollary \ref{smoothexample})
Let $d\ge 4$, and assume the base field $K$ has characteristic $0$. For all splitting types $E$ as in the theorem, there exists a smooth hypersurface $X$ of degree $d$ containing the curve $C$ with normal bundle $N_{C/X}\cong E$.
\end{corollary}
In addition, for $d\ge 3$, we compute the dimension of $\Sigma_{\vec{a}}$ and show that, when $e=n$ or the splitting type has no terms of degree $e+2$, we get the expected codimension in $\Sigma$. When the codimension is not the expected one, we compute the difference between the expected and actual values. In particular, we show this difference can grow arbitrarily large as $n$ grows.
\begin{theorem} (cf. Theorem \ref{teoremasigmaa})
Let $e\le n$ and $d\ge 3$. Given a splitting type $E_{\vec{a}}$ as in Theorem \ref{existencetheorem}, the locus $\Sigma_{\vec{a}}$ is irreducible and smooth of codimension $h^1(\End(E_{\vec{a}})) - h^1(\sHom(E_{\vec{a}}, N_{C/\P^n}))$ in $\Sigma$.
\end{theorem}
\begin{corollary} (cf. Corollary \ref{difference}) Let $z = |\{ i \ | \ a_i = e+2 \}|$ be the number of terms $\mathcal{O}(e+2)$ in the splitting type $E_{\vec{a}}$. Then the codimension of $\Sigma_{\vec{a}}$ in $\Sigma$ is $h^1(\End(E_{\vec{a}})) - (n-e)z$.
\end{corollary}
The case $d=2$ presents additional difficulties: for example, the map $\phi : H^0(\mathcal{I}_C(2))\to \Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(2e))$ is not surjective, so it is not immediately clear that all splitting types are achievable. Nonetheless, we can adapt the computation from the case $d\ge 3$ and find explicit examples of quadrics for every possible splitting type.
\begin{theorem}\label{quadricsexamples} (cf. Theorem \ref{quadricprop1})
For any given splitting type
\[ E = \left ( \bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i) \right )\oplus \left ( \bigoplus _{j=e+1}^{n}\mathcal{O}(e-b_j)\right ), \] with $a_i, b_j\ge 0$ and $\sum_{i=1}^{e-2}a_i + \sum_{j=e+1}^nb_j = e-2$, we produce a quadric $X=V(F)$, smooth along $C$, for which $N_{C/X}\cong E$.
More explicitly, for $e=n$, rearrange the $a_i$ in increasing order $a_1\le \cdots \le a_{n-2}$, and let $\beta_0 = 0$ and $\beta_i = a_1 + \cdots + a_i$ for $i=1, \cdots , n-2$. Then the quadric $X$ given by the polynomial
\[ F = \sum_{i=0}^{n-2}Q_{\beta_i+1,\beta_i+2} = Q_{1,2} + Q_{\beta_1 + 1, \beta_1 +2} + Q_{\beta_2 + 1, \beta_2 + 2} + \cdots + Q_{\beta_{n-3}+1, \beta_{n-3}+2} + Q_{n-1,n}, \]
where $Q_{ij} = x_ix_{j-1}-x_{i-1}x_j$, has normal bundle $N_{C/X}\cong E$.
For $e<n$, let $\beta_0 = 0$ and $\beta_i = a_1 + \cdots + a_i$ for $1\le i\le e-2$. Let also $\gamma_n = e$ and $\gamma_{j} = e - b_n - b_{n-1} - \cdots - b_{j+1}$ for $e+1\le j\le n-1$. Then a quadric $X$ such that $N_{C/X}\cong E$ is given by
\begin{align*}
F = & \sum_{i=0}^{e-2}Q_{\beta_i+1,\beta_i+2} + \sum_{j=e+1}^n x_{\gamma_j}x_j\\
= & (Q_{1,2} + Q_{\beta_1 + 1, \beta_1 +2} + \cdots + Q_{\beta_{e-2}+1, \beta_{e-2}+2}) + (x_{\gamma_{e+1}}x_{e+1} + \cdots + x_{\gamma_{n-1}}x_{n-1} + x_ex_n).
\end{align*}
\end{theorem}
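As an illustration (with sample values of $n$ and the $a_i$ that are our choice, not the paper's), the sympy sketch below checks only that the quadric $F$ of Theorem \ref{quadricsexamples} for $e=n$ contains the curve $C$; it does not verify the splitting type.

```python
import sympy as sp

s, t = sp.symbols('s t')
n = 5                    # sample ambient dimension; C is the degree-n RNC in P^n
a = [1, 1, 1]            # sample a_i, already increasing, with sum = n - 2
x = [s**(n - k) * t**k for k in range(n + 1)]   # x_k restricted to C

def Q(i, j):             # ideal generators Q_{ij} = x_i x_{j-1} - x_{i-1} x_j
    return x[i] * x[j - 1] - x[i - 1] * x[j]

beta = [0]
for ai in a:             # beta_0 = 0 and beta_i = a_1 + ... + a_i
    beta.append(beta[-1] + ai)
F = sum(Q(b + 1, b + 2) for b in beta)          # F = sum_i Q_{beta_i+1, beta_i+2}
print(sp.expand(F))      # 0: F vanishes identically on C, so C is contained in V(F)
```

Each $Q_{ij}$ restricts to $s^{2n-i-j+1}t^{i+j-1}-s^{2n-i-j+1}t^{i+j-1}=0$ on the parametrization, so the expansion collapses term by term.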
The quadrics constructed in Theorem \ref{quadricsexamples} are smooth along the curve $C$ but are not necessarily smooth quadrics. We can, however, modify their quadratic form matrices to produce examples with a smaller singular locus. This allows us to find a lower bound for the maximum rank of quadrics with a given normal bundle:
\begin{theorem}\label{quadricstheorem} (cf. Theorem \ref{corankexample})
For every splitting type
\[ E = \left ( \bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i) \right )\oplus \left ( \bigoplus _{j=e+1}^{n}\mathcal{O}(e-b_j)\right ), \]
with $a_i, b_j\ge 0$ and $\sum_{i=1}^{e-2}a_i + \sum_{j=e+1}^nb_j = e-2$, we obtain an example of a quadric $X$ of corank at most $\sum _{a_i\ge 4}(a_i-3)$ with $N_{C/X}\cong E$. In particular, if $a_i\le 3$ for all $i$, we find a smooth quadric $X$ with $N_{C/X}\cong E$.
\end{theorem}
\noindent \textbf{Organization of the paper.} In Section 2, we discuss definitions and computations that will be used throughout the paper. In Section 3, we work on the case $d\ge 3$, both providing the example of a hypersurface with $N_{C/X}\cong E_{\vec{a}}$ and computing the dimension of $\Sigma_{\vec{a}}$. Section 4 is dedicated to the case $d=2$; we compute the example with a given splitting type, and prove Theorem \ref{quadricstheorem}.
\medskip
\noindent \textbf{Acknowledgements.} I would like to thank my advisor Izzet Coskun for our many discussions and his guidance throughout the writing of this paper. I would also like to thank Benjamin Gould, Yeqin Liu, Eric Riedl and Geoffrey Smith for reading my drafts and giving valuable suggestions.
\section{Preliminaries}
In this section we cover the definitions and results necessary to describe our approach. Most results here are proved in \cite{CR}. We describe the rational normal curves, and give an explicit computation of their normal bundle on a given hypersurface.
\medskip
\subsection{Rational Normal Curves} For $e\le n$, the rational normal curve $R_e$ of degree $e$ in $\P^n$ is the image of the map
\[ \P^1\to \P^n, \quad (s:t)\mapsto (s^e : s^{e-1}t : s^{e-2}t^2 : \cdots : st^{e-1} : t^e : 0 : \cdots : 0). \]
Its homogeneous ideal $I_{R_e}\subset K[x_0, \cdots , x_n]$ is generated by quadrics and linear forms:
\[ I_{R_e} = ( \{ Q_{ij} = x_ix_{j-1} - x_{i-1}x_j \mid 1\le i < j\le e \}\cup \{x_{e+1}, \cdots , x_n \} ). \]
The $Q_{ij}$ correspond to the $2\times 2$ minors of the matrix
\[ \begin{pmatrix}
x_1 & x_2 & \cdots & x_e\\
x_0 & x_1 & \cdots & x_{e-1}
\end{pmatrix}. \]
We obtain the relations between the $Q_{ij}$ from the identically vanishing $3\times 3$ minors of the matrices
\[ \begin{pmatrix}
x_1 & x_2 & \cdots & x_e\\
x_1 & x_2 & \cdots & x_e\\
x_0 & x_1 & \cdots & x_{e-1}
\end{pmatrix} \text{ and } \begin{pmatrix}
x_0 & x_1 & \cdots & x_{e-1}\\
x_1 & x_2 & \cdots & x_e\\
x_0 & x_1 & \cdots & x_{e-1}
\end{pmatrix}. \]
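Expanding each vanishing $3\times 3$ minor along its repeated row yields, for $1\le i<j<k\le e$, the relations $x_iQ_{jk}-x_jQ_{ik}+x_kQ_{ij}=0$ and $x_{i-1}Q_{jk}-x_{j-1}Q_{ik}+x_{k-1}Q_{ij}=0$. The sympy sketch below verifies both identities for sample indices (our choice, for illustration only).

```python
import sympy as sp

e = 6
x = sp.symbols(f'x0:{e + 1}')        # polynomial ring variables x_0, ..., x_e

def Q(i, j):                         # Q_{ij} = x_i x_{j-1} - x_{i-1} x_j
    return x[i] * x[j - 1] - x[i - 1] * x[j]

i, j, k = 2, 4, 5                    # any 1 <= i < j < k <= e works
r1 = x[i] * Q(j, k) - x[j] * Q(i, k) + x[k] * Q(i, j)
r2 = x[i - 1] * Q(j, k) - x[j - 1] * Q(i, k) + x[k - 1] * Q(i, j)
print(sp.expand(r1), sp.expand(r2))  # 0 0: both relations hold identically
```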
Let $R_n$ be the rational normal curve of degree $n$ in $\P^n$. From the relations above, Proposition 2.4 in \cite{CR} shows that the quadrics $Q_{i,i+1}$, for $1\le i\le n-1$, suffice to determine the elements of $H^0(N_{R_n/\P^n})$.
\begin{proposition}\label{relations}(\cite{CR} Proposition 2.4)
An element $\alpha \in H^0(N_{R_n/\P^n}) = \Hom (\mathcal{I}_{R_n/\P^n}, \mathcal{O}_C)$ is determined by the images $\alpha (Q_{i,i+1})$, for $1\le i\le n-1$. Furthermore, $s^{n-i-1}t^{i-1}$ divides $\alpha (Q_{i,i+1})$ and this is the only constraint on $\alpha (Q_{i,i+1})$. If $b_{i,i+1}$, for $1\le i\le n-1$, are arbitrary polynomials of degree $n+2$, there exists an element $\alpha \in H^0(N_{R_n/\P^n})$ such that $\alpha (Q_{i,i+1}) = s^{n-i-1}t^{i-1}b_{i,i+1}$.
In addition, the image $\alpha (Q_{i,j})$ of the other generators of $I_{R_n}$ are expressed in terms of $b_{i,i+1}$ by
\[ \alpha (Q_{i,j}) = \sum _{l=i}^{j-1} s^{n-j-i+l}t^{j+i-l-2}b_{l,l+1}. \]
\end{proposition}
\begin{corollary}\label{normalbundlee} (\cite{CR} Corollary 2.6)
For an integer $e\le n$, the normal bundle $N_{R_e/\P^n}$ is $N_{R_e/\P^e}\oplus N_{\P^e/\P^n}|_{R_e}\cong \mathcal{O}_{\P^1}(e+2)^{e-1}\oplus \mathcal{O}_{\P^1}(e)^{n-e}$.
\end{corollary}
\subsection{Normal bundles on hypersurfaces} Let $C$ be a smooth rational curve of degree $e$ and let $X$ be a degree $d$ hypersurface in $\P^n$ containing $C$. Using the identification $N_{X/\P^n}\cong \mathcal{O}_{X}(d)$, we can write the standard normal bundle sequence as
\[ 0\longrightarrow N_{C/X}\longrightarrow N_{C/\P^n}\overset{\psi}{\longrightarrow} N_{X/\P^n}|_C\cong \mathcal{O}_{\P^1}(de). \]
Thus, we get a map
\[ \phi : H^0(\mathcal{I}_C(d))\to \Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de)) \]
that sends a polynomial $F$ defining $X = V(F)$ to the corresponding map $\psi \in \Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de))$.
This map can be identified via the conormal sequence
\[ 0\longrightarrow \mathcal{I}_{C/\P^n}^2\longrightarrow \mathcal{I}_{C/\P^n}\longrightarrow N_{C/\P^n}^*\longrightarrow 0 \]
when we twist it by $\mathcal{O}_{\P^n}(d)$ and take global sections:
\begin{equation*}\label{phisequence}
0\longrightarrow H^0(\mathcal{I}_{C/\P^n}^2(d))\longrightarrow H^0(\mathcal{I}_{C/\P^n}(d))\overset{\phi}{\longrightarrow} H^0(N_{C/\P^n}^*(d)).
\end{equation*}
When $C = R_e$ is the rational normal curve of degree $e$ in $\P^n$ and $d\ge 3$, I. Coskun and E. Riedl showed in \cite{CR} that the map $\phi$ is surjective, that is, every element $\psi \in \Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de))$ can be obtained as the map of a normal bundle sequence for some hypersurface $X$. It is not surjective when $d=2$. In Section 4, we show $\phi$ is injective for $d=2$ and $e=n$, and we compute the dimension of its image.
The relations given in Proposition \ref{relations} allow us to explicitly write the map $\psi _F: N_{C/\P^n}\to \mathcal{O}_{\P^1}(de)$ for a hypersurface $X=V(F)$. First, let $e=n$. Write $F$ in terms of the generators of $I_C$, $F = \sum _{i,j}F_{i,j}Q_{i,j}$. Then $\psi_F(\alpha) = \sum_{i,j}F_{i,j}|_C\alpha(Q_{i,j})$. By the relation from Proposition \ref{relations}, we get
\[ \psi_F(\alpha) = \sum _{i,j}F_{i,j}|_C\sum _{l=i}^{j-1} s^{n-j-i+l}t^{j+i-l-2}b_{l,l+1}. \]
Collect the terms and write the sum as $\sum_{i=1}^{n-1}c_ib_{i,i+1}$, then the map $\psi_F: \mathcal{O}_{\P^1}(n+2)^{n-1}\to \mathcal{O}_{\P^1}(dn)$ is given by the matrix $(c_1 \ \cdots \ c_{n-1})$.
When $e<n$, by Corollary \ref{normalbundlee} the normal bundle $N_{C/\P^n}$ splits as the direct sum $N_{C/\P^e}\oplus N_{\P^e/\P^n}|_C$. We write $F = \sum _{i,j}F_{i,j}Q_{i,j} + \sum _{k=e+1}^nG_kx_k$, and collect the coefficients $c_1, \cdots , c_{e-1}$ of the $b_{l,l+1}$ as above. Then the map $\psi_F: \mathcal{O}_{\P^1}(e+2)^{e-1}\oplus \mathcal{O}_{\P^1}(e)^{n-e}\to \mathcal{O}_{\P^1}(de)$ is given by the matrix $(c_1 \ \cdots \ c_{e-1}; \ G_{e+1}|_C \ \cdots \ G_n|_C)$.
\medskip
We can use this description to obtain the map $\psi_F$ from explicit hypersurfaces $X=V(F)$. By choosing the appropriate $\psi_F$, we will then obtain each possible splitting type for $N_{C/X}$.
First, observe that when $C$ is the rational normal curve of degree $e$ in $\P^e$, the restrictions $F|_C$ of degree $k\ge 1$ forms cut out the complete linear series $|\mathcal{O}_C(k)|$; that is, rational normal curves are projectively normal.
\begin{lemma}\label{projectivelynormal}\cite{ACGH}
For every $k\ge 1$, the map $H^0(\mathcal{O}_{\P^n}(k))\to H^0(\mathcal{O}_{C}(k))\cong H^0(\mathcal{O}_{\P^1}(ek))$, $F\mapsto F|_C$, is surjective.
\end{lemma}
To obtain an explicit $F$ for each polynomial in $H^0(\mathcal{O}_{\P^1}(ek))$ it suffices to write each monomial as a product of $k$ monomials of degree $e$. For instance, for $F|_C = s^{ek-2}t^2$ we can write $s^{ek-2}t^2 = s^{e(k-1)}(s^{e-2}t^2)$ and choose $F = x_0^{k-1}x_2$.
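This example can be checked directly; the sympy sketch below does so for sample values of $e$ and $k$ (our choice).

```python
import sympy as sp

s, t = sp.symbols('s t')
e, k = 4, 3                                     # sample degrees (our choice)
x = [s**(e - i) * t**i for i in range(e + 1)]   # x_i restricted to C
restricted = x[0]**(k - 1) * x[2]               # F = x_0^{k-1} x_2 restricted to C
print(sp.expand(restricted - s**(e * k - 2) * t**2))  # 0: F|_C = s^{ek-2} t^2
```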
Before we advance to the general case, let us compute an example of a hypersurface $X$ for which $N_{C/X}$ is balanced, in order to illustrate the computation with simpler notation. In particular, we recover Corollary 3.8 of \cite{CR} for the rational normal curve of degree $n$ in $\P^n$. We make use of the same method in the proof of Theorem \ref{existence}.
\begin{proposition}\label{balancedexample} (\cite{CR}, Corollary 3.8)
Let $d\ge 2$, and let $X$ be a general hypersurface of degree $d$ containing the rational normal curve $C$ of degree $n$ in $\P^n$. Then the normal bundle $N_{C/X}$ is balanced.
\end{proposition}
\begin{proof}
Since being balanced is an open condition in a family of vector bundles on $\P^1$ with fixed rank and degree, it suffices to exhibit an example of a hypersurface $X$ for each $d$. The idea is to compute the kernel of $\psi_F: \mathcal{O}(n+2)^{n-1}\to \mathcal{O}(dn)$ directly from the linear relations between the entries of $\psi_F$, which we call the \textit{column relations} of $\psi_F$.
If $d=2$, consider $F = \sum_{i=1}^{n-1} Q_{i,i+1}$, then
\[ \psi_F = ( s^{n-2},\ s^{n-3}t, \cdots ,\ st^{n-3},\ t^{n-2} ). \]
The columns $C_1, \cdots , C_{n-1}$ of $\psi_F$, satisfy the relations
\[ tC_i - sC_{i+1} = t(s^{n-1-i}t^{i-1}) - s(s^{n-2-i}t^{i}) = 0, \ \ 1\le i\le n-2. \]
These relations define the vectors $K_i = (a_1, \cdots , a_{n-1})$ with $a_i = t$, $a_{i+1} = -s$ and $a_j = 0$ for $j\neq i, i+1$. Let $K$ be the matrix whose columns are $K_i$:
\[ K = \begin{pmatrix}
t & 0 & 0 & \cdots & 0 & 0\\
-s & t & 0 & \cdots & 0 & 0\\
0 & -s & t & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & t & 0\\
0 & 0 & 0 & \cdots & -s & t\\
0 & 0 & 0 & \cdots & 0 & -s\\
\end{pmatrix}\]
Then $K$ defines a map
\[ K: \mathcal{O}(n+1)^{n-2}\to \mathcal{O}(n+2)^{n-1} \]
whose image, by the column relations above, is contained in the kernel $N_{C/X}$ of $\psi_F$. Notice also that $K$ has maximal rank, so the map is injective, and thus it factors through $N_{C/X}$. As $N_{C/X}$ and $\mathcal{O}(n+1)^{n-2}$ have the same rank $n-2$ and the same degree $(n+1)(n-2) = n(n-1)-2$, we have $N_{C/X}\cong \mathcal{O}(n+1)^{n-2}$.
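The containment $\psi_F\cdot K=0$ can also be checked mechanically; the sympy sketch below does so for the sample value $n=5$ (the bidiagonal shape of $K$ is the one displayed above).

```python
import sympy as sp

s, t = sp.symbols('s t')
n = 5                                            # sample value (our choice)
# psi_F = (s^{n-2}, s^{n-3} t, ..., t^{n-2}) as a 1 x (n-1) row vector
psi = sp.Matrix([[s**(n - 2 - i) * t**i for i in range(n - 1)]])
K = sp.zeros(n - 1, n - 2)                       # bidiagonal kernel matrix
for i in range(n - 2):
    K[i, i] = t
    K[i + 1, i] = -s
print((psi * K).expand())                        # zero row: im(K) lies in ker(psi_F)
```

Each entry of the product is $s^{n-2-i}t^{i}\cdot t - s\cdot s^{n-3-i}t^{i+1}=0$, exactly the $i$-th column relation.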
Now, let $d\ge 3$. First, observe that if $N_{C/X}$ is balanced, then it must be $\mathcal{O}(A)^{m}\oplus \mathcal{O}(A+1)^{n-2-m}$ with
\[ A = \left \lfloor \dfrac{n(n-d+1)-2}{n-2} \right \rfloor \text{ and } m = A(n-2)-n(n-d). \]
The approach is to construct $\psi_F$ such that its column relations define an injective map
\[ K: \mathcal{O}(A)^{m}\oplus \mathcal{O}(A+1)^{n-2-m}\to \mathcal{O}(n+2)^{n-1} \]
which, as in the case $d=2$, will be the kernel of $\psi_F$. This means we want to obtain $m$ relations of degree $(n+2)-A$ and $n-2-m$ relations of degree $(n+2)-(A+1)$ between the columns of $\psi_F$. To simplify the analysis, we look at polynomials $F = \sum_{i=1}^{n-1}F_iQ_{i,i+1}$ with $F_i$ monomials. Set $F_1 = x_0^{d-2}$ and $F_{n-1} = x_n^{d-2}$, so $F$ induces a map of the form
\[ \psi_F = ( (s^{n-2})(s^{n(d-2)}), \ (s^{n-3}t)F_2|_C, \cdots ,\ (st^{n-3})F_{n-2}|_C, \ (t^{n-2})(t^{n(d-2)}) ). \]
Choosing each $F_i$ so that its restriction to $C$ is a monomial $F_i|_C = s^{n(d-2)-\beta_i}t^{\beta_i}$, which is possible by Lemma \ref{projectivelynormal}, the map $\psi_F$ has the form
\[ \psi_F = ( (s^{n-2})(s^{n(d-2)}), \ (s^{n-3}t)(s^{n(d-2)-\beta_2}t^{\beta_2}), \cdots ,\ (st^{n-3})(s^{n(d-2)-\beta_{n-2}}t^{\beta_{n-2}}), \ (t^{n-2})(t^{n(d-2)}) ). \]
To further simplify, we look for relations that only involve two consecutive entries of $\psi_F$. One way to find such $\psi_F$ is to choose the $\beta_i$ in increasing order $0\le \beta_2\le \cdots \le \beta_{n-2}\le n(d-2)$. This way, we get the column relations
\begin{align*}
& t^{\beta_2+1}C_1 - s^{\beta_2+1}C_2 = 0, \\
& t^{\beta_{i+1}-\beta_i+1}C_i - s^{\beta_{i+1}-\beta_i+1}C_{i+1} = 0, \ \ 2\le i\le n-3 \\
& t^{n(d-2)-\beta_{n-2}+1}C_{n-2} - s^{n(d-2)-\beta_{n-2}+1}C_{n-1} = 0.
\end{align*}
And as before, we define the matrix $K$ whose columns follow from the relations above:
\[ K = \begin{pmatrix}
t^{\beta_2+1} & 0 & 0 & \cdots & 0 & 0\\
-s^{\beta_2+1} & t^{\beta_3-\beta_2+1} & 0 & \cdots & 0 & 0\\
0 & -s^{\beta_3-\beta_2+1} & t^{\beta_4-\beta_3+1} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & t^{\beta_{n-2}-\beta_{n-3}+1} & 0\\
0 & 0 & 0 & \cdots & -s^{\beta_{n-2}-\beta_{n-3}+1} & t^{n(d-2)-\beta_{n-2}+1}\\
0 & 0 & 0 & \cdots & 0 & -s^{n(d-2)-\beta_{n-2}+1}\
\end{pmatrix}\]
Since $K$ has maximal rank, it defines an injective map
\[ \mathcal{O}(n+2-(\beta_2+1))\oplus \left (\bigoplus_{i=2}^{n-3}\mathcal{O}(n+2-(\beta_{i+1}-\beta_i+1))\right )\oplus \mathcal{O}(n+2-(n(d-2)-\beta_{n-2}+1))\overset{K}{\to}\mathcal{O}(n+2)^{n-1} \]
which, due to the relations, factors through the kernel of $\psi_F$. Since the direct sum above and $N_{C/X}$ have the same rank, it follows that
\[ N_{C/X}\cong \mathcal{O}(n+2-(\beta_2+1))\oplus \left (\bigoplus_{i=2}^{n-3}\mathcal{O}(n+2-(\beta_{i+1}-\beta_i+1))\right )\oplus \mathcal{O}(n+2-(n(d-2)-\beta_{n-2}+1)). \]
To conclude, we pick the appropriate values for $\beta_i$, $2\le i\le n-2$. Choose
\[ \beta _i = \left\{\begin{matrix}
(i-1)(n+1-A), & 2\le i\le m+1\\
(i-1)(n-A)+m, & m+2\le i\le n-2
\end{matrix}\right., \]
noting that, by Lemma \ref{projectivelynormal}, there exist $F_i$ such that $F_i|_C = s^{n(d-2)-\beta_i}t^{\beta_i}$. A simple computation then shows that the direct sum is $\mathcal{O}(A)^{m}\oplus \mathcal{O}(A+1)^{n-2-m}$.
\end{proof}
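Outside the proof, the final choice of the $\beta_i$ can be verified numerically. The pure-Python sketch below (helper names are ours) recomputes the degrees of the summands and checks that they form the balanced type $\mathcal{O}(A)^m\oplus\mathcal{O}(A+1)^{n-2-m}$ for a few sample pairs $(n,d)$.

```python
def balanced_data(n, d):
    # A and m as in the proof: A = floor((n(n-d+1)-2)/(n-2)), m = A(n-2)-n(n-d)
    A = (n * (n - d + 1) - 2) // (n - 2)
    m = A * (n - 2) - n * (n - d)
    return A, m

def summand_degrees(n, d):
    A, m = balanced_data(n, d)
    beta = {i: (i - 1) * (n + 1 - A) if i <= m + 1 else (i - 1) * (n - A) + m
            for i in range(2, n - 1)}              # beta_2, ..., beta_{n-2}
    degs = [n + 1 - beta[2]]                       # from the first column relation
    degs += [n + 1 - (beta[i + 1] - beta[i]) for i in range(2, n - 2)]
    degs += [n + 1 - (n * (d - 2) - beta[n - 2])]  # from the last relation
    return sorted(degs)

for n, d in [(5, 3), (6, 3), (7, 4)]:
    A, m = balanced_data(n, d)
    assert summand_degrees(n, d) == [A] * m + [A + 1] * (n - 2 - m)
print("balanced splitting confirmed for the sample cases")
```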
\section{Hypersurfaces of Degree at Least 3}
Throughout this section, we assume $d\ge 3$. We study the splitting types of $N_{C/X}$ for hypersurfaces of degree $d$ in $\P^n$ containing a degree $e\le n$ rational normal curve $C = R_e$. By Corollary \ref{normalbundlee}, the normal bundle $N_{C/X}$ is the kernel in the sequence
\[ 0\longrightarrow N_{C/X}\longrightarrow \mathcal{O}(e+2)^{e-1}\oplus \mathcal{O}(e)^{n-e}\overset{\psi_F}{\longrightarrow} \mathcal{O}(de)\longrightarrow 0 \]
for $F$ a degree $d$ polynomial defining $X = V(F)$.
The surjectivity of the map $\phi : H^0(\mathcal{I}_{C/\P^n}(d))\to \Hom (N_{C/\P^n}, \mathcal{O}(de))$ for $d\ge 3$ (\cite{CR} Theorem 3.1) implies the existence of hypersurfaces $X$ for any possible splitting type $E_{\vec{a}}$: consider an injection $E_{\vec{a}}\to N_{C/\P^n}$ and compute its cokernel map $\psi: N_{C/\P^n}\to \mathcal{O}(de)$. Since $\phi$ is surjective, there is a polynomial $F\in H^0(\mathcal{I}_{C/\P^n}(d))$ that defines $\psi$, and the corresponding hypersurface $X = V(F)$ then satisfies $N_{C/X}\cong E_{\vec{a}}$.
Here, we look for examples of each possible splitting type for $N_{C/X}$, and the dimension of the space $\Sigma_{\vec{a}}$ of hypersurfaces $X$ with a given splitting type $E_{\vec{a}}$.
The surjectivity of $\phi$ is also enough to show that $N_{C/X}$ is balanced for the general $X$ (\cite{CR}, Corollary 3.8), and it is a result we use to compute the dimension of $\Sigma_{\vec{a}}$.
\subsection{Examples of hypersurfaces for each splitting type}
From the short exact sequence, a candidate for splitting type of $N_{C/X}$ must have the form of a rank $(n-2)$ direct sum
\[ N_{C/X}\cong \left(\bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i)\right )\oplus \left (\bigoplus_{j=e+1}^{n}\mathcal{O}(e-b_j) \right ), \]
with $a_i, b_j\ge 0$, and degree $\deg N_{C/X} = \deg N_{C/\P^n} - \deg \mathcal{O}(de) = e(n-d+1)-2$, equivalently $\sum_{i=1}^{e-2}a_i + \sum_{j=e+1}^n b_j = e(d-1)-2$.
\medskip
We start by finding examples of hypersurfaces $X$ smooth along $C$ for each splitting type. The proof of the theorem is a generalization of the computation from Proposition \ref{balancedexample}.
\begin{theorem}\label{existence} For all splitting types
\[ E = \left(\bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i)\right )\oplus \left (\bigoplus_{j=e+1}^{n}\mathcal{O}(e-b_j) \right ), \]
with $a_i, b_j\ge 0$ and $\sum_{i=1}^{e-2}a_i + \sum_{j=e+1}^n b_j = e(d-1)-2$, we obtain explicit examples of degree $d$ hypersurfaces $X$, smooth along the curve $C$, with normal bundle $N_{C/X}\cong E$.
\end{theorem}
\begin{proof}
We divide the proof into the cases $e=n$ and $e<n$.
For $e=n$, we ask if we can get all splitting types $E = \bigoplus_{i=1}^{n-2}\mathcal{O}(n+2-a_i)$ with $a_i\ge 0$ and $\sum _{i=1}^{n-2}a_i = n(d-1)-2$. The approach is to construct a map $\psi_F$ whose kernel we can compute directly from the linear relations between its entries, which we call the \textit{column relations} of $\psi_F$.
In order to start with a simpler $\psi_F$, consider polynomials $F$ of the form $F = \sum_{i=1}^{n-1}F_iQ_{i,i+1}$ with $F_1 = x_0^{d-2}$ and $F_{n-1} = x_n^{d-2}$, so the map $\psi_F: \mathcal{O}(n+2)^{n-1}\to \mathcal{O}(dn)$ has the form
\[ \psi_F = (s^{n-2}(s^{n(d-2)}), \ \ (s^{n-3}t)F_2|_C, \ \cdots ,\ (st^{n-3})F_{n-2}|_C, \ \ t^{n-2}(t^{n(d-2)})). \]
First, we check that these are all smooth along the curve $C$. Taking partial derivatives $\frac{\partial F}{\partial x_i}$ and restricting to the curve (that is, taking $Q_{ij} = 0$), we obtain
\[ \frac{\partial F}{\partial x_0} = -x_0^{d-2}x_2, \ \ \frac{\partial F}{\partial x_{2}} = -x_0^{d-1} + 2x_{2}F_{2} - x_{4}F_{3} \ \ \text{ and } \ \ \frac{\partial F}{\partial x_{n-2}} = -x_n^{d-1} + 2x_{n-2}F_{n-2} - x_{n-4}F_{n-3}. \]
If $X$ is singular at a point $P = (s^n : s^{n-1}t : \cdots : t^n)$ on $C$, then $\frac{\partial F}{\partial x_0} = 0$ gives $s=0$ or $t=0$. If $s=0$, then $\frac{\partial F}{\partial x_{n-2}}=0$ implies $t=0$; and if $t=0$, then $\frac{\partial F}{\partial x_2}=0$ implies $s=0$. Thus, polynomials $F$ of this form give hypersurfaces smooth along $C$.
Now, look at the map $\psi_F$. One way to obtain a $\psi_F$ with clear relations between its entries is to choose increasing exponents for $t$. This way, we get a relation between every two consecutive entries by multiplying the first by a power of $t$ and the second by a power of $s$. We will also need zero entries to get the terms $\mathcal{O}(n+2)$ of $E$. So, let $0\le z\le n-3$ and $0\le \beta_2\le \beta _3\le \cdots \le \beta_{n-z-2}\le n(d-2)+z+1$ be integers, and let $F_i$ be such that
\[ F_i|_C = s^{n(d-2)-\beta_i}t^{\beta_i} \text{ for } 2\le i\le n-z-2, \]
and
\[ F_i|_C = 0 \text{ for } n-z-1\le i\le n-2. \]
Note that such $F_i$ exist by Lemma \ref{projectivelynormal}. Then, $\psi_F$ has increasing powers of $t$ and $z$ zero entries:
\begin{multline*}
\psi_F = ( s^{n-2}(s^{n(d-2)}), \ (s^{n-3}t)(s^{n(d-2)-\beta_2}t^{\beta_2}), \cdots , (s^{z+1}t^{n-z-3})(s^{n(d-2)-\beta_{n-z-2}}t^{\beta_{n-z-2}}), \ 0, \ 0, \cdots \\
\cdots 0, \ t^{n-2}(t^{n(d-2)}) ).
\end{multline*}
Its columns $C_1, C_2, \cdots , C_{n-1}$ satisfy the column relations
\begin{align*}
& t^{\beta_2+1}C_1 - s^{\beta_2+1}C_2 = 0, \\
& t^{\beta_{i+1}-\beta_i+1}C_i - s^{\beta_{i+1}-\beta_i+1}C_{i+1} = 0, \ \ 2\le i\le n-z-3, \\
& C_j = 0, \ \ n-z-1\le j\le n-2, \\
& t^{n(d-2)+z+1-\beta_{n-z-2}}C_{n-z-2} - s^{n(d-2)+z+1-\beta_{n-z-2}}C_{n-1} = 0.
\end{align*}
To simplify notation and make the final result simpler to parse, we write $\gamma_i = \beta_{i+1} - \beta_i$ for $2\le i\le n-z-3$, $\gamma_1 = \beta_2$ and $\gamma_{n-z-2} = \beta_{n-z-2}$. The conditions for $\beta_i$ translate into the conditions $\gamma_i\ge 0$ and $\gamma_{n-z-2} = \sum_{i=1}^{n-z-3}\gamma_i\le n(d-2)+z+1$. Rewriting the column relations:
\begin{align*}
& t^{\gamma_1+1}C_1 - s^{\gamma_1+1}C_2 = 0, \\
& t^{\gamma_i+1}C_i - s^{\gamma_i+1}C_{i+1} = 0, \ \ 2\le i\le n-z-3, \\
& C_j = 0, \ \ n-z-1\le j\le n-2, \\
& t^{n(d-2)+z+1-\gamma_{n-z-2}}C_{n-z-2} - s^{n(d-2)+z+1-\gamma_{n-z-2}}C_{n-1} = 0.
\end{align*}
We define the matrix $K$ whose columns give the relations above:
\[ K = \begin{pmatrix}
t^{\gamma_1+1} & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
-s^{\gamma_1+1} & t^{\gamma_2+1} & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & -s^{\gamma_2+1} & t^{\gamma_3+1} & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & t^{\gamma_{n-z-3}+1} & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & 0 & \cdots & -s^{\gamma_{n-z-3}+1} & 0 & 0 & \cdots & 0 & t^{n(d-2)+z+1-\gamma_{n-z-2}}\\
0 & 0 & 0 & \cdots & 0 & 1 & 0 &\cdots & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 0 & 1 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 1 & 0\\
0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & -s^{n(d-2)+z+1-\gamma_{n-z-2}}
\end{pmatrix}\]
Then $K$ defines a map
\[ K: \left (\bigoplus_{i=1}^{n-z-3} \mathcal{O}(n+1-\gamma_i)\right )\oplus \mathcal{O}(n+2)^z\oplus \mathcal{O}(n+1 - n(d-2)-z+\gamma_{n-z-2})\to \mathcal{O}(n+2)^{n-1}. \]
The column relations imply that the image of $K$ is contained in the kernel $N_{C/X}$ of $\psi_F$. Since $K$ has maximal rank, its image is a subbundle of $N_{C/X}$. As both $N_{C/X}$ and the subbundle have rank $n-2$, it follows that
\[ N_{C/X}\cong \mathcal{O}(n+2)^z\oplus \left (\bigoplus_{i=1}^{n-z-3} \mathcal{O}(n+1-\gamma_i)\right )\oplus \mathcal{O}(n+1 - n(d-2)-z+\gamma_{n-z-2}).\]
Thus, we get splitting types of $N_{C/X}$ with exactly $z$ terms $\mathcal{O}(n+2)$. By appropriately choosing the $\gamma_i\ge 0$ with $\gamma_{n-z-2} = \sum_{i=1}^{n-z-3}\gamma_i\le n(d-2)+z+1$, or equivalently, by choosing the $\beta_i$ in increasing order, we get all possible choices for the summands of smaller degrees. Note the last term is determined by the previous ones and the degree of $N_{C/X}$. Finally, by varying the number of zero entries $z$ between $0$ and $n-3$, we obtain all splitting types $E$ for $N_{C/X}$.
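The column-relation bookkeeping above is easy to check mechanically. The following short Python script is our own sanity check, not part of the proof; the sample values of $n$, $d$, $z$ and the sequence of $\beta_i$ are our choices. It verifies symbolically that each column relation annihilates $\psi_F$ in the case $e=n$, and that the resulting kernel degrees sum to $\deg N_{C/X} = n(n-d+1)-2$:

```python
# our choices of sample parameters; any 0 <= z <= n-3 and increasing
# beta_1 = 0 <= ... <= beta_{n-z-2} <= n(d-2)+z+1 would do
n, d, z = 7, 3, 2
beta = [0, 1, 3]
assert len(beta) == n - z - 2 and beta == sorted(beta)
assert beta[-1] <= n*(d - 2) + z + 1

def times(m, a, b):
    # multiply a sparse polynomial {(s_exp, t_exp): coeff} by s^a t^b
    return {(i + a, j + b): c for (i, j), c in m.items()}

def minus(m1, m2):
    out = dict(m1)
    for k, c in m2.items():
        out[k] = out.get(k, 0) - c
        if out[k] == 0:
            del out[k]
    return out

# columns of psi_F as sparse polynomials in s, t
psi = {i: {(n - 1 - i + n*(d - 2) - beta[i - 1], i - 1 + beta[i - 1]): 1}
       for i in range(1, n - z - 1)}
psi.update({j: {} for j in range(n - z - 1, n - 1)})       # z zero columns
psi[n - 1] = {(0, n - 2 + n*(d - 2)): 1}

relations = []
for i in range(1, n - z - 2):
    g = beta[i] - beta[i - 1]                              # gamma_i
    relations.append(minus(times(psi[i], 0, g + 1), times(psi[i + 1], g + 1, 0)))
last = n*(d - 2) + z + 1 - beta[-1]                        # final relation
relations.append(minus(times(psi[n - z - 2], 0, last), times(psi[n - 1], last, 0)))
assert all(r == {} for r in relations)

gammas = [beta[i] - beta[i - 1] for i in range(1, n - z - 2)]
degrees = [n + 2]*z + [n + 1 - g for g in gammas] + [n + 1 - n*(d - 2) - z + beta[-1]]
assert sum(degrees) == n*(n - d + 1) - 2
print("column relations vanish; kernel degrees sum correctly")
```

Monomials are stored as sparse dictionaries keyed by $(s,t)$-exponents, so the cancellations are exact rather than numerical.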
\medskip
Now, we investigate the case $e<n$. Similarly, consider polynomials $F$ of the form $F = \sum _{i=1}^{e-1}F_iQ_{i,i+1} + \sum _{j=e+1}^nG_jx_j$ with $F_1 = x_0^{d-2}$ and $G_n = x_e^{d-1}$, so that $N_{C/X}$ is the kernel of maps $\psi_F:\mathcal{O}(e+2)^{e-1}\oplus \mathcal{O}(e)^{n-e}\to \mathcal{O}(de)$ of the form
\[ \psi_F = \left ( s^{e-2}(s^{e(d-2)}), \ (s^{e-3}t)F_2|_C, \cdots , t^{e-2}F_{e-1}|_C \ ; \ G_{e+1}|_C, \ \cdots , G_{n-1}|_C, \ t^{e(d-1)} \right ). \]
The approach in this case is similar. The main difference is that $N_{C/\P^n}$ splits into two summands of different degrees: $\mathcal{O}(e+2)^{e-1}\oplus \mathcal{O}(e)^{n-e}$. We separate $\psi_F$ into two parts as above, corresponding to the $F_i$ and $G_j$. We will choose the zero entries all in the first part so we can get the $\mathcal{O}(e+2)$ terms of $E$.
First, we show that these $F$ define hypersurfaces smooth along the curve. Taking partial derivatives $\frac{\partial F}{\partial x_i}$ and restricting to the curve (letting $Q_{ij} = 0$ and $x_l = 0$, $l\ge e+1$), we obtain
\[ \frac{\partial F}{\partial x_2} = -x_0^{d-1} + 2x_2F_2 - F_3x_4 \ \ \ \text{ and } \ \ \ \frac{\partial F}{\partial x_n} = x_e^{d-1}. \]
So, if $F$ is singular at a point $P=(s^e : s^{e-1}t : \cdots : t^e : 0 : \cdots : 0)$ on $C$, then $\frac{\partial F}{\partial x_n} = x_e^{d-1} = 0$ implies $t=0$; hence $x_2 = x_4 = 0$, and the first equation gives $x_0^{d-1} = 0$, so $s=0$. Thus, all hypersurfaces defined by $F$ of this form are smooth along $C$.
As in the case $e=n$, we choose maps $\psi_F$ with $z$ zero entries and increasing exponents of $t$, so to get relations between every two consecutive entries. This time, consider integers
\[ 0\le z\le e-2, \ \ 0\le \beta_2\le \beta_3\le \cdots \le \beta_{e-1-z} \ \text{ and } \ \beta_{e-1-z}-z+e\le \beta_e\le \cdots \le \beta_{n-2}\le e(d-1), \]
and let $F_i$ and $G_j$ be such that
\begin{align*}
& F_i|_C = s^{e(d-2)-\beta_i}t^{\beta_i} \text{ for } 2\le i\le e-1-z,\\
& F_i|_C = 0 \text{ for } e-z\le i\le e-1 \text{ and }\\
& G_j|_C = s^{e(d-1)-\beta_{j-1}}t^{\beta_{j-1}} \text{ for } e+1\le j\le n-1.
\end{align*}
Then the map $\psi_F$ has the form
\begin{multline*}
\psi_F = ( s^{e-2}(s^{e(d-2)}), \ \ (s^{e-3}t)(s^{e(d-2)-\beta_2}t^{\beta_2}), \ \cdots , \ (s^zt^{e-2-z})(s^{e(d-2)-\beta_{e-1-z}}t^{\beta_{e-1-z}}), \ \ 0, \ \cdots , \ 0 ; \\
(s^{e(d-1)-\beta_e}t^{\beta_e}), \ \ (s^{e(d-1)-\beta_{e+1}}t^{\beta_{e+1}}), \ \cdots , \ (s^{e(d-1)-\beta_{n-2}}t^{\beta_{n-2}}), \ \ t^{e(d-1)} ).
\end{multline*}
We obtain the following relations between the columns $C_1, \cdots , C_{e-1}; C_e, \cdots C_{n-1}$:
\begin{align*}
& t^{\beta_2+1}C_1 - s^{\beta_2+1}C_2 = 0\\
& t^{\beta_{i+1}-\beta_i+1}C_i - s^{\beta_{i+1}-\beta_i+1}C_{i+1} = 0, \text{ for } 2\le i\le e-2-z\\
& C_i = 0, \text{ for } e-z\le i\le e-1\\
& t^{\beta_e-\beta_{e-1-z}-e+2+z}C_{e-1-z} - s^{\beta_e-\beta_{e-1-z}+z-e}C_{e} = 0\\
& t^{\beta_{j+1}-\beta_j}C_j - s^{\beta_{j+1}-\beta_j}C_{j+1} = 0, \text{ for } e\le j\le n-3\\
& t^{e(d-1)-\beta_{n-2}}C_{n-2}-s^{e(d-1)-\beta_{n-2}}C_{n-1} = 0.
\end{align*}
As before, rename $\gamma_1 = \beta_2$, $\gamma_i = \beta_{i+1} - \beta_i$ for $2\le i\le e-2-z$ and for $e\le i\le n-3$, $\gamma_{e-1} = \beta_e - \beta_{e-1-z} - e + z$ and $\gamma_{n-2} = \beta_{n-2}$. These satisfy the conditions $\gamma_i\ge 0$ for all $i$ and $\gamma_{n-2} = \sum_{i=1}^{e-2-z}\gamma_i + \sum_{j=e-1}^{n-3}\gamma_j + e - z\le e(d-1)$. Then the column relations become
\begin{align*}
& t^{\gamma_1+1}C_1 - s^{\gamma_1+1}C_2 = 0\\
& t^{\gamma_i+1}C_i - s^{\gamma_i+1}C_{i+1} = 0, \text{ for } 2\le i\le e-2-z\\
& C_i = 0, \text{ for } e-z\le i\le e-1\\
& t^{\gamma_{e-1}+2}C_{e-1-z} - s^{\gamma_{e-1}}C_{e} = 0\\
& t^{\gamma_j}C_j - s^{\gamma_j}C_{j+1} = 0, \text{ for } e\le j\le n-3\\
& t^{e(d-1)-\gamma_{n-2}}C_{n-2}-s^{e(d-1)-\gamma_{n-2}}C_{n-1} = 0.
\end{align*}
They induce the matrix
\[ K = \begin{pmatrix}
t^{\gamma_1+1} & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
-s^{\gamma_1+1} & t^{\gamma_2+1} & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & -s^{\gamma_2+1} & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & t^{\gamma_{e-z-2}+1} & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & \cdots & -s^{\gamma_{e-z-2}+1} & 0 & 0 & \cdots & 0 & t^{\gamma_{e-1}+2} & 0 & \cdots & 0 & 0\\
0 & 0 & \cdots & 0 & 1 & 0 &\cdots & 0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & \cdots & 0 & 0 & 1 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 1 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & -s^{\gamma_{e-1}} & t^{\gamma_e} & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & -s^{\gamma_e} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & t^{\gamma_{n-3}} & 0\\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & -s^{\gamma_{n-3}} & t^{e(d-1)-\gamma_{n-2}}\\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & -s^{e(d-1)-\gamma_{n-2}}
\end{pmatrix}. \]
\medskip
Then $K$ defines an injective map
\[ K: \left( \bigoplus _{i=1}^{e-2-z}\mathcal{O}(e+1-\gamma_i) \right )
\oplus \mathcal{O}(e+2)^{z}\oplus \left ( \bigoplus _{j=e-1}^{n-3}\mathcal{O}(e-\gamma_j) \right ) \oplus \mathcal{O}(e-e(d-1)+\gamma_{n-2})\to \mathcal{O}(e+2)^{e-1}\oplus \mathcal{O}(e)^{n-e}. \]
As before, the column relations imply that $K$ factors through the kernel of $\psi_F$, and consequently, the kernel is
\[ N_{C/X}\cong \mathcal{O}(e+2)^{z}\oplus \left( \bigoplus _{i=1}^{e-2-z}\mathcal{O}(e+1-\gamma_i) \right )
\oplus \left ( \bigoplus _{j=e-1}^{n-3}\mathcal{O}(e-\gamma_j) \right ) \oplus \mathcal{O}(e-e(d-1)+\gamma_{n-2}). \]
This is a splitting type with exactly $z$ terms $\mathcal{O}(e+2)$, $e-2-z$ terms of degree at most $e+1$, and the remaining terms of degree at most $e$. By varying the $\gamma_i$ with $\gamma_i\ge 0$ and $\gamma_{n-2} = \sum_{i=1}^{e-2-z}\gamma_i + \sum_{j=e-1}^{n-3}\gamma_j + e - z\le e(d-1)$, we get all splitting types $E$ of this form. Finally, by varying the number of zero entries $z$ between $0$ and $e-2$, we obtain all possible splitting types $E$ for $N_{C/X}$.
\end{proof}
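The degree bookkeeping in the case $e<n$ can be sanity-checked numerically. The following short Python script is our own illustration; the sample values of $n$, $e$, $d$, $z$ and the $\gamma_i$ are our choices. Writing $\gamma_{n-2}$ as the sum of the remaining $\gamma$'s plus $e-z$ (which follows from $\gamma_{n-2}=\beta_{n-2}$ and the definitions of the $\gamma$'s), it checks that the proposed summands have total degree $\deg N_{C/X} = e(n-d+1)-2$:

```python
# Sample values (ours); gammas_first lists gamma_1..gamma_{e-2-z},
# gammas_second lists gamma_{e-1}..gamma_{n-3}.
n, e, d, z = 7, 4, 3, 1
gammas_first = [1]
gammas_second = [0, 2]
# gamma_{n-2} = beta_{n-2} expands to the sum of the other gammas plus e - z
g_last = sum(gammas_first) + sum(gammas_second) + e - z
assert g_last <= e*(d - 1)
degrees = ([e + 2]*z + [e + 1 - g for g in gammas_first]
           + [e - g for g in gammas_second] + [e - e*(d - 1) + g_last])
assert len(degrees) == n - 2                 # rank of N_{C/X}
assert sum(degrees) == e*(n - d + 1) - 2     # degree of N_{C/X}
print(sorted(degrees))
```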
\begin{corollary}\label{smoothexample}
(char $K=0$) Let $d\ge 4$, and assume the base field $K$ has characteristic $0$. For all splitting types $E$ as in the theorem, there exists a smooth hypersurface $X$ of degree $d$ containing the curve $C$ with normal bundle $N_{C/X}\cong E$.
\end{corollary}
\begin{proof} Let $\psi_F\in \Hom (N_{C/\P^n},\mathcal{O}(de))$ be the map with $N_{C/X}\cong E$, induced by the polynomial $F$ obtained in the theorem. From the exact sequence
\[ 0\longrightarrow H^0(\mathcal{I}_C^2(d))\longrightarrow H^0(\mathcal{I}_C(d))\overset{\phi}{\longrightarrow} \Hom (N_{C/\P^n},\mathcal{O}(de)), \]
we have that the space of polynomials that induce the same $\psi_F$ is the coset
\[ F+H^0(\mathcal{I}_C^2(d)) = \{ F+G \ | \ G\in H^0(\mathcal{I}_C^2(d)) \}. \]
If $d\ge 4$, then multiples of $Q_{ij}^2$ are in $H^0(\mathcal{I}_C^2(d))$, so the base locus of $F + H^0(\mathcal{I}_C^2(d))$ is the curve $C$. Thus, by Bertini's theorem, the general member of $F+H^0(\mathcal{I}_C^2(d))$ is smooth away from $C$. As $F$ is smooth along $C$, the general member of $F+H^0(\mathcal{I}_C^2(d))$ is smooth.
\end{proof}
\subsection{Dimension Count}
Theorem \ref{existence} shows that for every possible splitting type $E$ of $N_{C/X}$ we can construct a hypersurface $X$ for which $N_{C/X}\cong E$. A natural question is whether we can compute the dimension of the space of such hypersurfaces, and whether it is the expected one.
Let $E_{\vec{a}} = \bigoplus _{i=1}^{n-2}\mathcal{O}(a_i)$ be a possible splitting type. The numerical conditions in Theorem \ref{existence} translate to $a_i\le e+2$ for $1\le i\le e-2$, $a_j\le e$ for $e-1\le j\le n-2$, and $\sum_{i=1}^{n-2}a_i = e(n-d+1)-2$.
In the introduction, we defined the spaces
\[ \Sigma = \{ F \ | \ X \text{ is a degree } d \text{ hypersurface smooth along } C \}\subset H^0(\mathcal{O}_{\P^n}(d)) \]
and
\[ \Sigma_{\vec{a}} = \{ F\in \Sigma \ | \ N_{C/X}\cong E_{\vec{a}} \}\subset \Sigma , \]
and observed that the expected codimension of $\Sigma_{\vec{a}}$ in $\Sigma$ is $h^1(\End(E_{\vec{a}}))$.
We start by computing this codimension at the level of homomorphisms. For that, define
\[ \Phi_{\vec{a}} = \{ M\in \Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de)) \ | \ \ker M\cong E_{\vec{a}} \}\subset \Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de)). \]
\begin{proposition}\label{lemmaHom} The codimension of $\Phi_{\vec{a}}$ in $\Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de))$ is
\[ h^1(\End (E_{\vec{a}})) - h^1(\sHom(E_{\vec{a}}, N_{C/\P^n})). \]
\end{proposition}
\begin{proof}
We first compute the dimension of $\Phi_{\vec{a}}$. Every $M$ in $\Phi_{\vec{a}}$ comes from an injection $\theta: E_{\vec{a}}\hookrightarrow N_{C/\P^n}$, and every injection defines an $M$, since its cokernel is a line bundle of degree $de$. By our numerical conditions on $\vec{a}$, $\sHom (E_{\vec{a}}, N_{C/\P^n})$ is globally generated, so the space of injections $\theta$ is a nonempty open subset of $\Hom (E_{\vec{a}}, N_{C/\P^n})$, of dimension $h^0(\sHom(E_{\vec{a}}, N_{C/\P^n}))$. However, different injections may induce the same $M$. To count them, consider the set of pairs of homomorphisms
\[ B = \{ (\theta, M)\mid E_{\vec{a}}\overset{\theta}{\hookrightarrow} N_{C/\P^n}\overset{M}{\twoheadrightarrow} \mathcal{O}(de) \}\subset \Hom (E_{\vec{a}}, N_{C/\P^n})\times \Hom (N_{C/\P^n}, \mathcal{O}(de)) \]
together with the projections $\pi_1$ and $\pi_2$:
\[ \begin{tikzcd}
& B \arrow[ld, "\pi_1"'] \arrow[rd, "\pi_2"] & \\
\Hom (E_{\vec{a}}, N_{C/\P^n}) & & \Hom (N_{C/\P^n}, \mathcal{O}(de))
\end{tikzcd} \]
A fiber $\pi_1^{-1}(\theta)$ of the map $\pi_1$ corresponds to the $M$'s induced by $\theta$ up to a choice of basis for $\mathcal{O}(de)$; thus, the fibers of $\pi_1$ are isomorphic to $\Aut(\mathcal{O}(de))$. The image of $\pi_1$ is the open set of injections $\theta \in \Hom (E_{\vec{a}}, N_{C/\P^n})$. Hence, $\dim B = h^0(\sHom(E_{\vec{a}}, N_{C/\P^n})) + h^0(\End(\mathcal{O}(de)))$.
On the other hand, the image of $\pi_2$ is exactly the set $\Phi_{\vec{a}}$, and the fibers of $\pi_2$ are isomorphic to $\Aut (E_{\vec{a}})$. Thus,
\[ \dim \Phi_{\vec{a}} = h^0(\sHom(E_{\vec{a}}, N_{C/\P^n})) + h^0(\End(\mathcal{O}(de))) - h^0(\End(E_{\vec{a}})). \]
Therefore, its codimension in $\Hom (N_{C/\P^n}, \mathcal{O}(de))$ is
\[ \codim \Phi_{\vec{a}} = h^0(\sHom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de))) - h^0(\sHom(E_{\vec{a}}, N_{C/\P^n})) + h^0(\End(E_{\vec{a}})) - h^0(\End(\mathcal{O}(de))). \]
Now, applying the functors $\sHom (E_{\vec{a}}, - )$ and $\sHom (-, \mathcal{O}(de))$ to the sequence
\[ 0\longrightarrow E_{\vec{a}}\longrightarrow N_{C/\P^n}\longrightarrow \mathcal{O}(de)\longrightarrow 0 \]
we obtain, after taking cohomology, the long exact sequence
\begin{align*}
0\longrightarrow & H^0(\End(E_{\vec{a}}))\longrightarrow H^0(\sHom(E_{\vec{a}}, N_{C/\P^n}))\longrightarrow H^0(\sHom(E_{\vec{a}}, \mathcal{O}(de)))\longrightarrow \\
\longrightarrow & H^1(\End(E_{\vec{a}}))\longrightarrow H^1(\sHom(E_{\vec{a}}, N_{C/\P^n}))\longrightarrow H^1(\sHom(E_{\vec{a}}, \mathcal{O}(de))) = 0
\end{align*}
and the exact sequence
\[ 0\longrightarrow H^0(\End(\mathcal{O}(de)))\longrightarrow H^0(\sHom(N_{C/\P^n}, \mathcal{O}(de)))\longrightarrow H^0(\sHom(E_{\vec{a}}, \mathcal{O}(de)))\longrightarrow 0. \]
Observe that $H^1(\sHom(E_{\vec{a}}, \mathcal{O}(de))) = 0$ since $a_i\le e+2\le de$ for all $1\le i\le n-2$, and that the second sequence is exact on the right since $H^1(\End(\mathcal{O}(de))) = H^1(\mathcal{O}_{\P^1}) = 0$.
Computing the alternating sum of dimensions in the first sequence and substituting into $\codim \Phi_{\vec{a}}$, we get
\begin{multline*}
\codim \Phi_{\vec{a}} = h^0(\sHom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de))) - h^0(\sHom(E_{\vec{a}}, \mathcal{O}(de))) + h^1(\End(E_{\vec{a}}))\\
- h^1(\sHom(E_{\vec{a}}, N_{C/\P^n})) - h^0(\End(\mathcal{O}(de))).
\end{multline*}
From the second sequence it follows that
\begin{align*}
\codim \Phi_{\vec{a}} = & h^0(\End(\mathcal{O}(de))) + h^0(\sHom(E_{\vec{a}}, \mathcal{O}(de))) - h^0(\sHom(E_{\vec{a}}, \mathcal{O}(de))) \\
& \hspace{2.5cm} + h^1(\End(E_{\vec{a}})) - h^1(\sHom(E_{\vec{a}}, N_{C/\P^n})) - h^0(\End(\mathcal{O}(de)))\\
= & h^1(\End(E_{\vec{a}})) - h^1(\sHom(E_{\vec{a}}, N_{C/\P^n})).
\end{align*}
\end{proof}
\begin{corollary}\label{corhom} If $e=n$ or $a_i\le e+1$ for all $i$, then the codimension of $\Phi_{\vec{a}}$ in $\Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(de))$ is $h^1(\End(E_{\vec{a}}))$.
\end{corollary}
\begin{proof}
In either case, every summand of $\sHom(E_{\vec{a}}, N_{C/\P^n})\cong \bigoplus_{i=1}^{n-2}\left ( \mathcal{O}(e+2-a_i)^{e-1}\oplus \mathcal{O}(e-a_i)^{n-e}\right )$ has degree at least $-1$: indeed $a_i\le e+2$ always, and either $n-e=0$ or $a_i\le e+1$. Hence $h^1(\sHom(E_{\vec{a}}, N_{C/\P^n})) = 0$.
\end{proof}
We can compute the dimension of $\Phi_{\vec{a}}$ for the balanced splitting type, recovering a particular case of \cite{CR}, Corollary 2.2.
\begin{corollary}\label{phiissurjective} (\cite{CR}, Corollary 2.2)
The kernel of the general $M\in \Hom (N_{C/\P^n}, \mathcal{O}(de))$ is balanced.
\end{corollary}
\begin{proof}
For $E_{\vec{a}}$ the balanced splitting type we have $a_i\le \left \lceil \frac{e(n-d+1)-2}{n-2} \right \rceil \le e+1$ and $|a_i-a_j|\le 1$ for all $i,j$, thus
\[ \codim \Phi_{\vec{a}} = h^1(\End(E_{\vec{a}})) = 0. \]
Hence $\Phi_{\vec{a}}$ has dimension equal to $\dim \Hom (N_{C/\P^n}, \mathcal{O}(de))$.
\end{proof}
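The vanishing $h^1(\End(E_{\vec{a}})) = 0$ for the balanced type can be confirmed by a quick script (our own check, not part of the argument); it only uses $h^1(\mathcal{O}_{\P^1}(k)) = \max(0, -k-1)$:

```python
# split a total degree into r parts differing by at most 1 (balanced type)
def balanced(total, r):
    q, rem = divmod(total, r)
    return [q + 1]*rem + [q]*(r - rem)

def h1_end(a):
    # h^1(End E) = sum over ordered pairs of h^1(O(a_i - a_j)) on P^1
    return sum(max(0, aj - ai - 1) for ai in a for aj in a)

for n in range(4, 10):
    for d in range(3, n + 2):
        for e in range(2, n + 1):
            a = balanced(e*(n - d + 1) - 2, n - 2)
            assert max(a) - min(a) <= 1
            assert h1_end(a) == 0
print("h1(End) vanishes for all balanced types tested")
```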
\begin{theorem}\label{teoremasigmaa} The locus $\Sigma_{\vec{a}}$ is irreducible and smooth of codimension $h^1(\End(E_{\vec{a}})) - h^1(\sHom(E_{\vec{a}}, N_{C/\P^n}))$ in $\Sigma$.
\end{theorem}
\begin{proof}
Let $\overline{\Sigma_{\vec{a}}}$ be the space of all polynomials $F\in H^0(\mathcal{I}_C(d))$, with $V(F)$ not necessarily smooth along $C$, such that the kernel of $\psi_F$ has splitting type $E_{\vec{a}}$. Consider the map $\beta :\overline{\Sigma_{\vec{a}}} \to \Phi_{\vec{a}}$, $\beta (F) = \psi _F$. By \cite{CR}, Theorem 3.1, the map $\phi : H^0(\mathcal{I}_C(d))\to \Hom (N_{C/\P^n},\mathcal{O}(de))$ is surjective. It follows that $\beta$ is also surjective. Note that its image $\Phi_{\vec{a}}$ is irreducible, since it is the image of $\pi_2$ from the last proposition. In addition, by the short exact sequence
\[ 0\longrightarrow H^0(\mathcal{I}_{C/\P^n}^2(d))\longrightarrow H^0(\mathcal{I}_{C/\P^n}(d))\overset{\phi}{\longrightarrow} H^0(\Hom (N_{C/\P^n}, \mathcal{O}(de)))\longrightarrow 0, \]
the fiber $\beta ^{-1}(M)$ is isomorphic to the linear system $H^0(\mathcal{I}_C^2(d))$. Thus, $\beta ^{-1}(M)$ is smooth and irreducible of dimension $h^0(\mathcal{I}^2_C(d))$. Then, by Proposition \ref{lemmaHom}, it follows that $\overline{\Sigma_{\vec{a}}}$ is smooth and irreducible of dimension
\[ h^0(\sHom(N_{C/\P^n}, \mathcal{O}(de))) - h^1(\End(E_{\vec{a}})) + h^1(\sHom(E_{\vec{a}}, N_{C/\P^n})) + h^0(\mathcal{I}^2_C(d)). \]
Since $F$ being smooth along $C$ is an open condition in $\overline{\Sigma_{\vec{a}}}$, and $\Sigma_{\vec{a}}$ is not empty by Theorem \ref{existence}, it follows that $\Sigma_{\vec{a}}$ is an open dense subset of $\overline{\Sigma_{\vec{a}}}$. Therefore, $\Sigma_{\vec{a}}$ is irreducible and smooth of the same dimension.
The dimension of $\Sigma$ is $h^0(\mathcal{I}_C(d))$, and by the sequence above,
\[ h^0(\mathcal{I}^2_C(d)) = h^0(\mathcal{I}_C(d)) - h^0(\sHom(N_{C/\P^n}, \mathcal{O}(de))). \]
Hence, the codimension of $\Sigma _{\vec{a}}$ in $\Sigma$ is
\[ \codim (\Sigma_{\vec{a}}\subset \Sigma) = h^1(\End(E_{\vec{a}})) - h^1(\sHom(E_{\vec{a}}, N_{C/\P^n})). \]
\end{proof}
By Corollary \ref{smoothexample}, when $d\ge4$ the general hypersurface of $\Sigma_{\vec{a}}$ is smooth.
\begin{corollary}
(char $K=0$) Let $d\ge 4$, and assume the base field $K$ has characteristic $0$. Let $S\Sigma_{\vec{a}}$ be the subspace of $\Sigma_{\vec{a}}$ of polynomials $F$ with $X$ smooth. Then $S\Sigma_{\vec{a}}$ is irreducible and smooth of codimension $h^1(\End(E_{\vec{a}})) - h^1(\sHom(E_{\vec{a}}, N_{C/\P^n}))$ in $\Sigma$.
\end{corollary}
As in Corollary \ref{corhom}, we get the expected codimension when $e=n$ or all $a_i$ are smaller than $e+2$.
\begin{corollary} If $e=n$ or $a_i\le e+1$ for all $i$, then the codimension of $\Sigma_{\vec{a}}$ in $\Sigma$ is the expected $h^1(\End(E_{\vec{a}}))$.
\end{corollary}
When $e < n$ and there exist terms $a_i = e+2$, the expected and the actual codimension differ. We can compute this difference as follows.
\begin{corollary}\label{difference} Let $z = |\{ i \ | \ a_i = e+2 \}|$ be the number of terms $\mathcal{O}(e+2)$ in the splitting type $E_{\vec{a}}$. Then the codimension of $\Sigma_{\vec{a}}$ in $\Sigma$ is $h^1(\End(E_{\vec{a}})) - (n-e)z$.
\end{corollary}
\begin{proof}
By Theorem \ref{teoremasigmaa} and Serre duality for $\P^1$,
\begin{align*}
h^1(\sHom(E_{\vec{a}}, N_{C/\P^n})) & = h^1(\sHom(E_{\vec{a}}, \mathcal{O}(e+2)^{e-1}\oplus \mathcal{O}(e)^{n-e})) \\
& = h^0(\sHom(\mathcal{O}, (\mathcal{O}(-e-2)^{e-1}\oplus \mathcal{O}(-e)^{n-e})\otimes (\oplus_{i=1}^{n-2} \mathcal{O}(a_i))\otimes \mathcal{O}(-2))) \\
& = \sum_{i=1}^{n-2} h^0(\P^1, \mathcal{O}(-e-4+a_i)^{e-1}\oplus \mathcal{O}(-e-2+a_i)^{n-e}) = (n-e)z.
\end{align*}
\end{proof}
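For a concrete feel for Corollary \ref{difference}, the following small computation (our own numeric example, not taken from the text) evaluates the expected codimension $h^1(\End(E_{\vec{a}}))$ and the corrected value for one splitting type containing a term $\mathcal{O}(e+2)$:

```python
def h1_end(a):
    # h^1(End E) for E = O(a_1) + ... + O(a_r) on P^1
    return sum(max(0, aj - ai - 1) for ai in a for aj in a)

n, e, d = 6, 4, 3
# splitting type: first e-2 = 2 entries at most e+2 = 6, the remaining
# n-e = 2 entries at most e = 4, total degree e(n-d+1)-2 = 14
a = [6, 4, 2, 2]
assert sum(a) == e*(n - d + 1) - 2
z = a.count(e + 2)                 # number of terms O(e+2)
expected = h1_end(a)
actual = expected - (n - e)*z
print(expected, actual)            # prints 9 7
```

Here the expected codimension overestimates the actual one by $(n-e)z = 2$.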
In particular, the difference between the actual and the expected codimension can get arbitrarily large as $n$ grows.
\section{Quadric Hypersurfaces}
In this section, we study the case $d=2$. Let $X = V(F)$ be a degree 2 hypersurface that contains the rational normal curve $C$ of degree $e$, and consider the exact sequence of the map $\phi$ described in Section 2:
\[ 0\longrightarrow H^0(\mathcal{I}^2_C(2))\longrightarrow H^0(\mathcal{I}_C(2))\overset{\phi}{\longrightarrow} \Hom (N_{C/\P^n}, \mathcal{O}_{\P^1}(2e)). \]
Unlike when $d\ge 3$, the map $\phi$ is not surjective. Thus, the cokernel of an injection $E_{\vec{a}}\hookrightarrow N_{C/\P^n}$ may not be in the image of $\phi$, so we cannot repeat the arguments from Proposition \ref{lemmaHom} to compute the dimension of $\Sigma_{\vec{a}}$. Nevertheless, we can compute the dimension of the image of $\phi$.
\begin{proposition} The dimension of $H^0(\mathcal{I}_C^2(2))$ is $\frac{(n-e)(n-e+1)}{2}$. In particular, $\phi$ is injective when $e=n$.
\end{proposition}
\begin{proof}
If $F\in H^0(\mathcal{I}_C^2(2))$, then the quadric $V(F)$ is singular along $C$. Since the singular locus of a quadric is a linear space, $V(F)$ must be singular along the whole $\P^e$ spanned by $C$, namely $\{ x_{e+1} = \cdots = x_n = 0 \}$. Thus, $F$ must lie in the square of the ideal of this $\P^e$, that is, it is a combination of the monomials $x_ix_j$ with $e+1\le i\le j\le n$:
\[ F = \sum _{e+1\le i\le j\le n} \lambda_{ij}x_ix_j, \ \ \lambda_{ij}\in K. \]
Conversely, every such $F$ is double along $C$. Hence there are $\frac{(n-e)(n-e+1)}{2}$ independent choices of coefficients, and $h^0(\mathcal{I}_C^2(2)) = \frac{(n-e)(n-e+1)}{2}$.
\end{proof}
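The count of coefficients can be double-checked by brute force; the script below (our own check) enumerates the monomials $x_ix_j$ with $e+1\le i\le j\le n$:

```python
# number of monomials x_i x_j with e+1 <= i <= j <= n
def dim_sq(n, e):
    return sum(1 for i in range(e + 1, n + 1) for j in range(i, n + 1))

for n in range(3, 10):
    for e in range(1, n + 1):
        assert dim_sq(n, e) == (n - e)*(n - e + 1)//2
print("count matches (n-e)(n-e+1)/2")
```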
\begin{corollary} The image of $\phi$ has dimension $\frac{2ne + 2n -3e - e^2}{2}$.
\end{corollary}
\begin{proof}
From the exact sequence,
\[ \dim (\mathrm{im} \ \phi ) = h^0(\mathcal{I}_C(2)) - h^0(\mathcal{I}^2_C(2)) = \dbinom{n+2}{2} - (2e+1) - \frac{(n-e)(n-e+1)}{2} = \frac{2ne + 2n -3e - e^2}{2}. \]
\end{proof}
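The arithmetic in this corollary is easy to verify over a range of parameters; the following check (ours, not part of the proof) confirms the closed form:

```python
from math import comb

# dim(im phi) = h^0(I_C(2)) - h^0(I_C^2(2))
for n in range(3, 12):
    for e in range(1, n + 1):
        dim_image = comb(n + 2, 2) - (2*e + 1) - (n - e)*(n - e + 1)//2
        assert 2*dim_image == 2*n*e + 2*n - 3*e - e*e
print("closed form verified")
```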
\subsection{Splitting Types}
As in the case $d\ge 3$, we start by showing that we can obtain examples of quadrics smooth along $C$ for every possible splitting type of $N_{C/X}$.
The normal bundle sequence in the case $d=2$ is
\[ 0\longrightarrow N_{C/X}\longrightarrow \mathcal{O}(e+2)^{e-1}\oplus \mathcal{O}(e)^{n-e}\overset{\psi_F}{\longrightarrow}\mathcal{O}(2e)\longrightarrow 0 \]
and $\deg N_{C/X} = e(n-1)-2$. Then, any splitting type for $N_{C/X}$ must be a direct sum of the form $\left ( \bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i) \right )\oplus \left ( \bigoplus _{j=e+1}^{n}\mathcal{O}(e-b_j)\right )$ with $a_i, b_j\ge 0$ and $\sum_{i=1}^{e-2}a_i+\sum_{j=e+1}^nb_j = e-2$.
\begin{theorem}\label{quadricprop1}
For any given splitting type
\[ E = \left ( \bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i) \right )\oplus \left ( \bigoplus _{j=e+1}^{n}\mathcal{O}(e-b_j)\right ), \] with $a_i, b_j\ge 0$ and $\sum_{i=1}^{e-2}a_i + \sum_{j=e+1}^nb_j = e-2$, we produce a quadric $X=V(F)$, smooth along $C$, for which $N_{C/X}\cong E$.
More explicitly, for $e=n$, rearrange the $a_i$ in increasing order $a_1\le \cdots \le a_{n-2}$, and let $\beta_0 = 0$ and $\beta_i = a_1 + \cdots + a_i$ for $i=1, \cdots , n-2$. Then the quadric $X$ given by the polynomial
\[ F = \sum_{i=0}^{n-2}Q_{\beta_i+1,\beta_i+2} = Q_{1,2} + Q_{\beta_1 + 1, \beta_1 +2} + Q_{\beta_2 + 1, \beta_2 + 2} + \cdots + Q_{\beta_{n-3}+1, \beta_{n-3}+2} + Q_{n-1,n}, \]
where $Q_{ij} = x_ix_{j-1}-x_{i-1}x_j$, has normal bundle $N_{C/X}\cong E$.
For $e<n$, let $\beta_0 = 0$ and $\beta_i = a_1 + \cdots + a_i$ for $1\le i\le e-2$. Let also $\gamma_n = e$ and $\gamma_{j} = e - b_n - b_{n-1} - \cdots - b_{j+1}$ for $e+1\le j\le n-1$. Then a quadric $X$ such that $N_{C/X}\cong E$ is given by
\begin{align*}
F = & \sum_{i=0}^{e-2}Q_{\beta_i+1,\beta_i+2} + \sum_{j=e+1}^n x_{\gamma_j}x_j\\
= & (Q_{1,2} + Q_{\beta_1 + 1, \beta_1 +2} + \cdots + Q_{\beta_{e-2}+1, \beta_{e-2}+2}) + (x_{\gamma_{e+1}}x_{e+1} + \cdots + x_{\gamma_{n-1}}x_{n-1} + x_ex_n).
\end{align*}
\end{theorem}
Before we proceed to the proof of the theorem, let us use an example to better illustrate the idea of the proof, and to show how we are using the relations between the entries of $\psi_F$ to compute its kernel.
Let $n=e=5$. Consider polynomials $F$ of the form $F = \lambda_1Q_{1,2} + \lambda_2Q_{2,3} + \lambda_3Q_{3,4} + \lambda_4Q_{4,5}$. So $F$ induces the map $\psi_F: \mathcal{O}(7)^{4}\to \mathcal{O}(10)$,
\[ \psi_F = (\lambda_1s^3, \ \lambda_2s^2t, \ \lambda_3st^2, \ \lambda_4t^3). \]
When $\lambda_i = 1$, $i=1,\dots ,4$, $\psi_F = (s^3, \ s^2t, \ st^2, \ t^3)$ and we get the following degree $1$ relations between consecutive entries of $\psi_F$, which we call \textit{column relations} of $\psi_F$:
\begin{align*}
& t(s^3) - s(s^2t) = 0\\
& t(s^2t) - s(st^2) = 0\\
& t(st^2) - s(t^3) = 0\\
\end{align*}
Writing the coefficients of these relations as columns vectors, we get the matrix $K$
\[ K = \begin{pmatrix}
t & 0 & 0\\
-s & t & 0\\
0 & -s & t\\
0 & 0 & -s
\end{pmatrix}, \]
which defines a map $\mathcal{O}(6)^3\overset{K}{\to} \mathcal{O}(7)^4$. The column relations imply that the image of $K$ is contained in the kernel of $\psi_F$. As $K$ has rank 3, the map is injective and its image coincides with the kernel. Hence, $N_{C/X}\cong \mathcal{O}(6)^3$.
Note that the splitting type of $N_{C/X}$ is determined by the degrees of the column relations of $\psi_F$. So, if we wish to get $N_{C/X}\cong \mathcal{O}(5)\oplus \mathcal{O}(6)\oplus \mathcal{O}(7)$, we need relations of degrees $0$, $1$ and $2$, where a zero column counts as a relation of degree $0$. Consecutive nonzero entries give degree $1$ relations; nonzero entries separated by one zero entry give a degree $2$ relation. Then, let $\lambda_3 = 0$, so $\psi_F = (s^3, \ s^2t, \ 0, \ t^3)$. It satisfies the column relations
\begin{align*}
& t(s^3) - s(s^2t) = 0\\
& t^2(s^2t) - s^2(t^3) = 0
\end{align*}
and similarly, we define the matrix
\[ K = \begin{pmatrix}
t & 0 & 0\\
-s & 0 & t^2\\
0 & 1 & 0\\
0 & 0 & -s^2
\end{pmatrix}, \]
and obtain that $N_{C/X}\cong \mathcal{O}(6)\oplus \mathcal{O}(7)\oplus \mathcal{O}(5)\overset{K}{\to} \mathcal{O}(7)^4$ is the kernel of $\psi_F$.
Finally, the splitting type $\mathcal{O}(4)\oplus \mathcal{O}(7)^2$ must come from a $\psi_F$ that has a column relation of degree $3$. We get it by letting two consecutive entries of $\psi_F$ be $0$, that is, let $\psi_F = (s^3, \ 0, \ 0, \ t^3)$.
In summary, for every term $\mathcal{O}(n+2-a_i)$ of the splitting type, we need a corresponding column relation of degree $a_i$. We can get it by letting $a_i-1$ consecutive terms of $\psi_F$ be $0$. By doing it for all $i$, we get all needed relations.
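Both examples above can be verified mechanically. In the script below (our own dependency-free check, with polynomials in $s,t$ stored as sparse dictionaries of exponents), each product $\psi_F\cdot K$ is computed exactly and vanishes:

```python
def pmul(p, q):
    # multiply two sparse polynomials {(s_exp, t_exp): coeff}
    out = {}
    for (a, b), c in p.items():
        for (x, y), w in q.items():
            k = (a + x, b + y)
            out[k] = out.get(k, 0) + c*w
            if out[k] == 0:
                del out[k]
    return out

def padd(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + c
        if out[k] == 0:
            del out[k]
    return out

def m(i, j, c=1):
    return {(i, j): c}

psi1 = [m(3, 0), m(2, 1), m(1, 2), m(0, 3)]          # (s^3, s^2 t, s t^2, t^3)
K1 = [[m(0, 1), {}, {}], [m(1, 0, -1), m(0, 1), {}],
      [{}, m(1, 0, -1), m(0, 1)], [{}, {}, m(1, 0, -1)]]
psi2 = [m(3, 0), m(2, 1), {}, m(0, 3)]               # (s^3, s^2 t, 0, t^3)
K2 = [[m(0, 1), {}, {}], [m(1, 0, -1), {}, m(0, 2)],
      [{}, m(0, 0), {}], [{}, {}, m(2, 0, -1)]]

all_zero = True
for psi, K in [(psi1, K1), (psi2, K2)]:
    for col in range(3):
        acc = {}
        for row in range(4):
            acc = padd(acc, pmul(psi[row], K[row][col]))
        all_zero &= (acc == {})
assert all_zero
print("both kernels verified")
```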
\medskip
\begin{proofofthm4.3}
Let first $e=n$. In this case, the possible splitting types are $E = \bigoplus_{i=1}^{n-2}\mathcal{O}(n+2-a_i)$ with $a_i\ge 0$ and $\sum_{i=1}^{n-2}a_i = n-2$. We look at polynomials $F$ of the form $F = \sum_{i=1}^{n-1} \lambda_iQ_{i,i+1}$ with $\lambda_i\in \{ 0, 1 \}$ for all $i$. Then, $F$ induces the map $\psi_F: \mathcal{O}(n+2)^{n-1}\to \mathcal{O}(2n)$,
\[ \psi_F = (\lambda_1 s^{n-2}, \ \lambda_2 s^{n-3}t, \cdots , \lambda_i s^{n-1-i}t^{i-1}, \cdots , \lambda_{n-1}t^{n-2}). \]
As in the examples above, we want to find a $\psi_F$ with column relations of degrees $a_i$. We can get them by letting $a_i-1$ consecutive entries of $\psi_F$ be $0$ for every $i$. So let $\beta_0 = 0$ and $\beta_i = a_1 + \cdots + a_i$ for $i=1, \cdots , n-2$, and consider the map
\[ \psi_F = (s^{n-2}, \ 0, \dots , 0, \ s^{n-2-\beta_1}t^{\beta_1},\ 0, \dots , 0,\ s^{n-2-\beta_2}t^{\beta_2}, \ \ \cdots \ \ , \ s^{n-2-\beta_{n-3}}t^{\beta_{n-3}}, \ 0, \dots , 0 , \ t^{n-2}). \]
Notice that $\psi_F$ is induced by the polynomial
\[ F = \sum_{i=0}^{n-2}Q_{\beta_i+1,\beta_i+2} = Q_{1,2} + Q_{\beta_1 + 1, \beta_1 +2} + Q_{\beta_2 + 1, \beta_2 + 2} + \cdots + Q_{\beta_{n-3}+1, \beta_{n-3}+2} + Q_{n-1,n}. \]
The entries of $\psi_F$ satisfy the degree $a_{i+1}$ relations
\begin{align*}
& t^{\beta_{i+1}-\beta_i}(s^{n-2-\beta_i}t^{\beta_i}) - s^{\beta_{i+1}-\beta_i}(s^{n-2-\beta_{i+1}}t^{\beta_{i+1}})\\
& = t^{a_{i+1}}(s^{n-2-\beta_i}t^{\beta_i}) - s^{a_{i+1}}(s^{n-2-\beta_{i+1}}t^{\beta_{i+1}}) = 0 \ \ \text{ for } \ \ 0\le i\le n-3.
\end{align*}
Since each relation involves a different pair of columns, they are linearly independent, that is, the matrix $K$ whose columns are the coefficients of the relations has maximal rank. Therefore, $K$ gives the kernel of $\psi_F$, hence $N_{C/X}\cong \bigoplus_{i=1}^{n-2}\mathcal{O}(n+2-a_i)$.
Finally, we show that these $F$ are smooth along the rational normal curve. Consider the partial derivatives of $F$
\[ \frac{\partial F}{\partial x_0} = -x_2, \ \ \frac{\partial F}{\partial x_2} = 2\lambda_2x_2 - x_0 - \lambda_3x_4, \ \ \frac{\partial F}{\partial x_{n-2}} = 2\lambda_{n-2}x_{n-2} - x_n - \lambda_{n-3}x_{n-4}. \]
If $F$ is singular at a point $P = (s^n : s^{n-1}t : \cdots : t^n)\in C$, then $\frac{\partial F}{\partial x_0} = 0$ at $P$ implies $s=0$ or $t=0$. If $s=0$, then $\frac{\partial F}{\partial x_{n-2}} = 0$ implies $t=0$; and if $t=0$ then $\frac{\partial F}{\partial x_2} = 0$ implies $s=0$. Hence, $F$ is smooth along $C$.
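The combinatorics of the case $e=n$ can be phrased purely in terms of the positions of the nonzero entries of $\psi_F$: the gaps between consecutive nonzero slots give the relation degrees, and each zero column contributes a term $\mathcal{O}(n+2)$. The short script below (ours, not part of the proof; the vector of $a_i$'s is a sample choice, and zero entries $a_i=0$ are allowed) recovers the splitting type this way:

```python
n = 6
a = [2, 0, 1, 1]                  # a_1, ..., a_{n-2}, sum = n - 2
assert len(a) == n - 2 and sum(a) == n - 2

beta = [0]
for ai in a:
    beta.append(beta[-1] + ai)                    # beta_0, ..., beta_{n-2}
positions = sorted(set(bi + 1 for bi in beta))    # nonzero slots of psi_F
zeros = (n - 1) - len(positions)                  # zero columns give O(n+2)
gaps = [q - p for p, q in zip(positions, positions[1:])]
kernel = sorted([n + 2]*zeros + [n + 2 - g for g in gaps])
assert kernel == sorted(n + 2 - ai for ai in a)
print("splitting type recovered:", kernel)        # prints [6, 7, 7, 8]
```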
\medskip
Now, let $e<n$. We want to obtain all the possible splitting types $E = \left ( \bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i) \right )\oplus \left ( \bigoplus _{j=e+1}^{n}\mathcal{O}(e-b_j)\right )$. Consider polynomials $F$ of the form $F = \sum_{i=1}^{e-1}\lambda_iQ_{i,i+1} + \sum_{j=e+1}^{n}L_jx_j$ with $\lambda_i\in \{0,1\}$ and $L_j$ linear forms. Then $F$ induces the map $\psi_F: \mathcal{O}(e+2)^{e-1}\oplus \mathcal{O}(e)^{n-e}\to \mathcal{O}(2e)$,
\[ \psi_F = (\lambda_1 s^{e-2}, \ \lambda_2 s^{e-3}t, \cdots , \lambda_i s^{e-1-i}t^{i-1}, \cdots , \lambda_{e-1}t^{e-2}; \ L_{e+1}|_C, L_{e+2}|_C, \cdots , L_n|_C). \]
We separate $\psi_F$ into two parts, one corresponding to the $\lambda_i$ and the other to the $L_j|_C$. The idea is to use the first part to obtain all the terms $\mathcal{O}(e+2-a_i)$ of $E$ as in the case $e=n$, and to use the second part for the $\mathcal{O}(e-b_j)$ terms.
Let $\beta_0 = 0$ and $\beta_i = a_1 + \cdots + a_i$ for $1\le i\le e-2$. Let also $\gamma_n = e$ and $\gamma_{j} = e - b_n - b_{n-1} - \cdots - b_{j+1}$ for $e+1\le j\le n-1$. Consider the map
\begin{multline*}
\psi_F = (s^{e-2}, \ 0, \dots , 0, \ s^{e-2-\beta_1}t^{\beta_1},\ 0, \dots , 0,\ s^{e-2-\beta_2}t^{\beta_2}, \ \ \cdots \ \ , \ s^{e-2-\beta_{e-2}}t^{\beta_{e-2}}, \ 0, \dots , 0 ;\\
\ \ s^{e-\gamma_{e+1}}t^{\gamma_{e+1}}, \ s^{e-\gamma_{e+2}}t^{\gamma_{e+2}}, \dots , s^{e-\gamma_{n-1}}t^{\gamma_{n-1}}, \ t^e ).
\end{multline*}
Notice $\psi_F$ is induced by the polynomial
\begin{align*}
F = & \sum_{i=0}^{e-2}Q_{\beta_i+1,\beta_i+2} + \sum_{j=e+1}^n x_{\gamma_j}x_j\\
= & (Q_{1,2} + Q_{\beta_1 + 1, \beta_1 +2} + \cdots + Q_{\beta_{e-2}+1, \beta_{e-2}+2}) + (x_{\gamma_{e+1}}x_{e+1} + \cdots + x_{\gamma_{n-1}}x_{n-1} + x_ex_n).
\end{align*}
We obtain the following relations between the entries of the first part of $\psi_F$:
\begin{align*}
& t^{\beta_{i+1}-\beta_i}(s^{e-2-\beta_i}t^{\beta_i}) - s^{\beta_{i+1}-\beta_i}(s^{e-2-\beta_{i+1}}t^{\beta_{i+1}}) = 0\\
& \Leftrightarrow t^{a_{i+1}}(s^{e-2-\beta_i}t^{\beta_i}) - s^{a_{i+1}}(s^{e-2-\beta_{i+1}}t^{\beta_{i+1}}) = 0 \ \ \text{ for } \ \ 0\le i\le e-3.
\end{align*}
And between every two consecutive entries of the second part of $\psi_F$:
\begin{align*}
& t^{\gamma_{j+1}-\gamma_j}(s^{e-\gamma_j}t^{\gamma_j}) - s^{\gamma_{j+1}-\gamma_j}(s^{e-\gamma_{j+1}}t^{\gamma_{j+1}}) = 0\\
& \Leftrightarrow t^{b_{j+1}}(s^{e-\gamma_j}t^{\gamma_j}) - s^{b_{j+1}}(s^{e-\gamma_{j+1}}t^{\gamma_{j+1}}) = 0 \ \ \text{ for } \ \ e+1\le j\le n-1.
\end{align*}
Note that $\gamma_{e+1}-\beta_{e-2} = e - \left (\sum_{i=1}^{e-2} a_i + \sum_{j=e+2}^n b_j\right ) = e - (e-2-b_{e+1}) = b_{e+1}+2$. We also get a relation between the first and second part of $\psi_F$:
\begin{align*}
& t^{\gamma_{e+1}-\beta_{e-2}}(s^{e-2-\beta_{e-2}}t^{\beta_{e-2}}) - s^{\gamma_{e+1}-\beta_{e-2}-2}(s^{e-\gamma_{e+1}}t^{\gamma_{e+1}}) = 0 \\
& \Leftrightarrow t^{b_{e+1}+2}(s^{e-2-\beta_{e-2}}t^{\beta_{e-2}}) - s^{b_{e+1}}(s^{e-\gamma_{e+1}}t^{\gamma_{e+1}}) = 0.
\end{align*}
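These exponent identities are elementary but easy to get wrong. The following SymPy sketch verifies all three families of relations for one arbitrarily chosen sample with $e<n$, namely $e=5$, $n=7$, $a=(0,1,1)$ and $(b_6,b_7)=(0,1)$, which satisfy the degree condition $\sum a_i + \sum b_j = e-2$; the specific values are purely illustrative.

```python
import sympy as sp

s, t = sp.symbols('s t')
# Sample data: e = 5, n = 7, a = (0, 1, 1), (b_6, b_7) = (0, 1),
# so that a_1 + ... + a_{e-2} + b_{e+1} + ... + b_n = e - 2 = 3.
e, n = 5, 7
a = {1: 0, 2: 1, 3: 1}
b = {6: 0, 7: 1}
beta = {0: 0}
for i in range(1, e - 1):
    beta[i] = beta[i - 1] + a[i]            # beta_i = a_1 + ... + a_i
gamma = {n: e}
for j in range(n - 1, e, -1):
    gamma[j] = gamma[j + 1] - b[j + 1]      # gamma_j = e - b_n - ... - b_{j+1}

first = lambda i: s**(e - 2 - beta[i]) * t**beta[i]    # first part of psi_F
second = lambda j: s**(e - gamma[j]) * t**gamma[j]     # second part of psi_F

# Relations within the first part, between the two parts, and within the second part:
assert all(sp.expand(t**a[i+1]*first(i) - s**a[i+1]*first(i+1)) == 0 for i in range(e - 2))
assert sp.expand(t**(b[e+1] + 2)*first(e - 2) - s**b[e+1]*second(e + 1)) == 0
assert all(sp.expand(t**b[j+1]*second(j) - s**b[j+1]*second(j+1)) == 0 for j in range(e + 1, n))
```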
The matrix $K$ defined by the coefficients of these relations defines a map
\[ K: \left ( \bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i) \right )\oplus \left ( \bigoplus _{j=e+1}^{n}\mathcal{O}(e-b_j)\right )\to \mathcal{O}(e+2)^{e-1}\oplus \mathcal{O}(e)^{n-e} \]
that factors through the kernel $N_{C/X}$ of $\psi_F$. Since the relations involve different pairs of columns, $K$ has maximum rank, so the map is injective, and it follows that
\[ N_{C/X}\cong \left ( \bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i) \right )\oplus \left ( \bigoplus _{j=e+1}^{n}\mathcal{O}(e-b_j)\right ). \]
To conclude, we check that $F$ is smooth along the curve. Consider the partial derivatives restricted to $C$ (that is, take $Q_{ij} = 0$ and $x_l = 0$ for $l\ge e+1$)
\[ \frac{\partial F}{\partial x_2} = -x_0 + 2\lambda_2x_2 - \lambda_3x_4, \ \ \frac{\partial F}{\partial x_{n}} = x_e. \]
If $P = (s^e : s^{e-1}t : \cdots : t^e : 0 : \cdots : 0)\in C$ is a singular point of $X$, then $\frac{\partial F}{\partial x_{n}} = x_e = t^e = 0$ implies $t=0$, thus $\frac{\partial F}{\partial x_2} = -x_0 + 2\lambda_2x_2 - \lambda_3x_4 = -s^e = 0$, so $s=0$. Hence, $X$ is smooth along $C$.
\end{proofofthm4.3}
A natural question that arises is whether we can obtain all splitting types from smooth quadric hypersurfaces $X$. The construction above produces quadrics that are smooth along $C$, but not necessarily smooth everywhere. We can, however, use the quadratic form matrix of $F$ to reduce the dimension of the singular locus of $X$.
\begin{theorem}\label{corankexample}
For every splitting type
\[ E = \left ( \bigoplus_{i=1}^{e-2}\mathcal{O}(e+2-a_i) \right )\oplus \left ( \bigoplus _{j=e+1}^{n}\mathcal{O}(e-b_j)\right ), \]
with $a_i, b_j\ge 0$ and $\sum_{i=1}^{e-2}a_i + \sum_{j=e+1}^nb_j = e-2$, we obtain an example of a quadric $X$ of corank at most $\sum _{a_i\ge 4}(a_i-3)$ with $N_{C/X}\cong E$. In particular, if $a_i\le 3$ for all $i$, there exists a smooth quadric $X$ with $N_{C/X}\cong E$.
\end{theorem}
\begin{proof} We first refer to the proof of Theorem \ref{quadricprop1}, as we will use the quadric $F$ constructed there. We divide the proof into the cases $e=n$ and $e<n$.
Let $e=n$. Rearrange the $a_i$ in increasing order $a_1\le \cdots \le a_{n-2}$ and follow the construction of $F$ in the proof of Theorem \ref{quadricprop1}. For $\beta_0 = 0$ and $\beta_i = a_1 + \cdots + a_i$ for $i=1, \cdots , n-2$, we obtain the polynomial
\[ F = \sum_{i=0}^{n-2}Q_{\beta_i+1,\beta_i+2} = Q_{1,2} + Q_{\beta_1 + 1, \beta_1 +2} + Q_{\beta_2 + 1, \beta_2 + 2} + \cdots + Q_{\beta_{n-3}+1, \beta_{n-3}+2} + Q_{n-1,n}, \]
corresponding to the map
\[ \psi_F = (s^{n-2}, \ 0, \dots , 0, \ s^{n-2-\beta_1}t^{\beta_1},\ 0, \dots , 0,\ s^{n-2-\beta_2}t^{\beta_2}, \ \ \cdots \ \ , \ s^{n-2-\beta_{n-3}}t^{\beta_{n-3}}, \ 0, \dots , 0 , \ t^{n-2}), \]
whose kernel is $N_{C/X}\cong \bigoplus_{i=1}^{n-2}\mathcal{O}(n+2-a_i)$.
We look at the quadratic form matrix of $F$. It is the $(n+1)\times (n+1)$ symmetric matrix $Q = (c_{i,j})_{i,j=0}^n$ with entries $c_{\beta_{i}+1,\beta_{i}+1} = 1$, $c_{\beta_{i},\beta_{i}+2} = -\frac{1}{2}$, $c_{\beta_{i}+2,\beta_{i}} = -\frac{1}{2}$, $0\le i\le n-2$, and zero elsewhere. It has 1's and 0's on the diagonal, with $a_i-1$ consecutive 0's for each $a_i$, in increasing order of $i$. For example, for $n=5$ and $N_{C/X}\cong \mathcal{O}(5)\oplus \mathcal{O}(6)\oplus \mathcal{O}(7)$, we have $a_1=0$, $a_2=1$ and $a_3=2$, and we construct $\psi_F = (s^3,\ s^2t,\ 0,\ t^3)$, which corresponds to the matrix
\[ Q = \left (\begin{smallmatrix}
0 & 0 & -\frac{1}{2} & 0 & 0 & 0 \\
0 & 1 & 0 & -\frac{1}{2} & 0 & 0 \\
-\frac{1}{2} & 0 & 1 & 0 & 0 & 0 \\
0 & -\frac{1}{2} & 0 & 0 & 0 & -\frac{1}{2} \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & -\frac{1}{2} & 0 & 0
\end{smallmatrix}\right ). \]
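As a sanity check, one can verify symbolically that the quadric defined by this matrix contains the rational normal curve, i.e., that $x^TQx$ vanishes identically under the parametrization $x_i = s^{5-i}t^i$. A short SymPy computation (independent of the notation $Q_{i,j}$) confirms this:

```python
import sympy as sp

s, t = sp.symbols('s t')
half = sp.Rational(1, 2)
# The displayed 6 x 6 quadratic form matrix for n = 5.
Q = sp.Matrix([
    [0,     0,     -half, 0,     0, 0],
    [0,     1,     0,     -half, 0, 0],
    [-half, 0,     1,     0,     0, 0],
    [0,     -half, 0,     0,     0, -half],
    [0,     0,     0,     0,     1, 0],
    [0,     0,     0,     -half, 0, 0],
])
# Parametrization of the rational normal curve C: x_i = s^{5-i} t^i.
v = sp.Matrix([s**(5 - i) * t**i for i in range(6)])
F_on_C = sp.expand((v.T * Q * v)[0])
print(F_on_C)  # 0: the quadric x^T Q x vanishes on C
```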
Notice that, for each $a_i\ge 4$, we have an $(a_i-3)\times (a_i-3)$ zero diagonal block in $Q$. Thus, the rank of $F$ drops by at least $a_i-3$ for each relation of degree $a_i\ge 4$. The nonzero diagonal blocks might be singular, further reducing the rank of $F$. To prove the theorem, we will show that we can replace the nonzero blocks by nonsingular ones without changing the splitting type of $N_{C/X}$.
First, notice that the first diagonal block of $Q$ corresponds to the $a_i\le 2$. This block is followed by $(a_i-3)\times (a_i-3)$ zero diagonal blocks alternating with blocks
\[ B = \begin{pmatrix}
0 & 0 & -\frac{1}{2}\\
0 & 1 & 0\\
-\frac{1}{2} & 0 & 0
\end{pmatrix}. \]
The blocks $B$ are already nonsingular, so we do not need to replace them. We might need to replace the first block. When we replace it by a new block, we change $F$ to an $F'$, and we need to check that the new $\psi_{F'}$ induces the same bundle $N_{C/X}$. We do this by checking that $\psi_{F'}$ has column relations of the same degrees as those of $\psi_F$.
Let us analyze the first block closer. It has size $(m+1)\times (m+1)$, for $m = \sum_{a_i\le 2}a_i + 2$. Its diagonal is formed by 0, a sequence of consecutive 1's, and then alternating 1's and 0's:
\[ B_1 = \left ( \begin{smallmatrix}
0 & & -\frac{1}{2} & & & & & & & & \\
& 1 & & -\frac{1}{2} & & & & & & & \\
-\frac{1}{2} & & 1 & & & & & & & & \\
& -\frac{1}{2} & & ... & & -\frac{1}{2} & & & & & \\
& & & & 1 & & & & & & \\
& & & -\frac{1}{2} & & 0 & & -\frac{1}{2} & & & \\
& & & & & & 1 & & & & \\
& & & & & -\frac{1}{2} & & ... & & & \\
& & & & & & & & 0 & & -\frac{1}{2}\\
& & & & & & & & & 1 & \\
& & & & & & & & -\frac{1}{2} & & 0
\end{smallmatrix}\right ). \]
Let $l = |\{ i \ | \ a_i = 1 \}| + 1$ be the number of consecutive 1's on the diagonal. We divide into the cases $l=1, 2, 3$ and $l\ge 4$.
For the case $l=1$, the diagonal of $B_1$ alternates between $0$ and $1$, thus $m$ must be even. If $4\nmid m$, Gauss-Jordan elimination shows that the original matrix is nonsingular and we do not change it. If $4\mid m$, it may be singular, and we define $F' = F + Q_{\frac{m}{2}, \frac{m}{2}+1} + Q_{\frac{m}{2}-1, \frac{m}{2}+1}$. This change will replace the three middle entries of $\psi_F$:
\[ \psi_F = (s^{n-2}, \ 0, \ s^{n-4}t^2, \ 0, \cdots , s^{n-\frac{m}{2}}t^{\frac{m}{2}-2}, \ 0, \ s^{n-\frac{m}{2}-2}t^{\frac{m}{2}}, \cdots ,\ 0, \ s^{n-m}t^{m-2} \cdots ) \]
by
\begin{multline*}
\psi_{F'} = (s^{n-2}, \ 0, \ s^{n-4}t^2, \ 0, \cdots , s^{n-\frac{m}{2}}t^{\frac{m}{2}-2} + s^{n-\frac{m}{2}-2}t^{\frac{m}{2}}, \ s^{n-\frac{m}{2}-1}t^{\frac{m}{2}-1}, \ s^{n-\frac{m}{2}-2}t^{\frac{m}{2}} + s^{n-\frac{m}{2}}t^{\frac{m}{2}-2}, \cdots \\
\cdots ,\ 0, \ s^{n-m}t^{m-2} \cdots ),
\end{multline*}
and instead of the original column relations
\begin{align*}
& t^2(s^{n-\frac{m}{2}}t^{\frac{m}{2}-2}) - s^2(s^{n-\frac{m}{2}-2}t^{\frac{m}{2}}) = 0\\
& 1\cdot (s^{n-\frac{m}{2}}t^{\frac{m}{2}-2}) - 1\cdot 0 = 0,
\end{align*}
we get
\begin{align*}
& (st)(s^{n-\frac{m}{2}}t^{\frac{m}{2}-2} + s^{n-\frac{m}{2}-2}t^{\frac{m}{2}}) - (s^2+t^2)(s^{n-\frac{m}{2}-1}t^{\frac{m}{2}-1}) = 0\\
& 1\cdot (s^{n-\frac{m}{2}}t^{\frac{m}{2}-2} + s^{n-\frac{m}{2}-2}t^{\frac{m}{2}}) - 1\cdot (s^{n-\frac{m}{2}-2}t^{\frac{m}{2}} + s^{n-\frac{m}{2}}t^{\frac{m}{2}-2}) = 0,
\end{align*}
thus preserving the degrees of the column relations.
For $l=2$ or $l\ge 4$, Gauss-Jordan elimination shows that $B_1$ is nonsingular, and we do not replace it.
For $l=3$, consider the cases $m=4$ and $m>4$. If $m>4$, switch one of the 0's to the third row:
\[ \text{replace } \left ( \begin{smallmatrix}
0 & & -\frac{1}{2} & & & & & &\\
& 1 & & -\frac{1}{2} & & & & &\\
-\frac{1}{2} & & 1 & & -\frac{1}{2} & & & &\\
& -\frac{1}{2} & & 1 & & & & &\\
& & -\frac{1}{2} & & 0 & & -\frac{1}{2} & &\\
& & & & & 1 & & &\\
& & & & -\frac{1}{2} & & 0 & & -\frac{1}{2}\\
& & & & & & & 1 &\\
& & & & & & -\frac{1}{2} & &...
\end{smallmatrix}\right )\text{ by }
\left ( \begin{smallmatrix}
0 & & -\frac{1}{2} & & & & & &\\
& 1 & & & & & & &\\
-\frac{1}{2} & & 0 & & -\frac{1}{2} & & & &\\
& & & 1 & & -\frac{1}{2} & & &\\
& & -\frac{1}{2} & & 1 & & -\frac{1}{2} & &\\
& & & -\frac{1}{2} & & 1 & & &\\
& & & & -\frac{1}{2} & & 0 & & -\frac{1}{2}\\
& & & & & & & 1 &\\
& & & & & & -\frac{1}{2} & &...
\end{smallmatrix}\right ).\]
The second matrix is nonsingular, as one can also see by Gauss-Jordan elimination. In terms of the polynomial, this corresponds to taking $F' = F - Q_{2,3} + Q_{4,5}$, and it replaces
\[ \psi_F = ( s^{n-2},\ s^{n-3}t, \ s^{n-4}t^2, \ 0, \ s^{n-6}t^4, \ 0, \ s^{n-8}t^6, \cdots ) \]
by
\[ \psi_{F'} = ( s^{n-2},\ 0, \ s^{n-4}t^2, \ s^{n-5}t^3, \ s^{n-6}t^4, \ 0, \ s^{n-8}t^6, \cdots ). \]
Note that this only changes the position of one degree $2$ relation, that is, instead of the column relations
\begin{align*}
& t(s^{n-2}) - s(s^{n-3}t) = 0\\
& t(s^{n-3}t) - s(s^{n-4}t^2) = 0\\
& t^2(s^{n-4}t^2) - s^2(s^{n-6}t^4) = 0,
\end{align*}
we get the relations
\begin{align*}
& t^2(s^{n-2}) - s^2(s^{n-4}t^2) = 0\\
& t(s^{n-4}t^2) - s(s^{n-5}t^3) = 0\\
& t(s^{n-5}t^3) - s(s^{n-6}t^4) = 0.
\end{align*}
If $m=4$, the first block of the original matrix $Q$ is
\[ \left (\begin{smallmatrix}
0 & & -\frac{1}{2} & & \\
& 1 & & -\frac{1}{2} & \\
-\frac{1}{2} & & 1 & & -\frac{1}{2}\\
& -\frac{1}{2} & & 1 & \\
& & -\frac{1}{2} & & 0
\end{smallmatrix}\right ). \]
We replace it by the nonsingular
\[ \left (\begin{smallmatrix}
0 & & -\frac{1}{2} & -\frac{1}{2} & -\frac{1}{2}\\
& 1 & \frac{1}{2} & 0 & \\
-\frac{1}{2} & \frac{1}{2} & 1 & & -\frac{1}{2}\\
-\frac{1}{2} & 0 & & 1 & \\
-\frac{1}{2} & & -\frac{1}{2} & & 0
\end{smallmatrix}\right ), \]
corresponding to the polynomial $F' = F + Q_{1,3} + Q_{1,4}$. The original map $\psi_F$ is
\[ \psi_F = ( s^{n-2},\ s^{n-3}t, \ s^{n-4}t^2, \ 0, \cdots ) \]
and satisfies the degree $1$ column relations
\begin{align*}
& t(s^{n-2}) - s(s^{n-3}t) = 0 \\
& t(s^{n-3}t) - s(s^{n-4}t^2) = 0.
\end{align*}
The new map is
\[ \psi_{F'} = (s^{n-2} + s^{n-3}t + s^{n-4}t^2, \ s^{n-2}+2s^{n-3}t, \ s^{n-2} + s^{n-4}t^2, \cdots) \]
and satisfies the degree $1$ relations
\begin{align*}
& (-s+3t)(s^{n-2} + s^{n-3}t + s^{n-4}t^2) + (s-t)(s^{n-2}+2s^{n-3}t) - (3t)(s^{n-2} + s^{n-4}t^2) = 0 \\
& (-s-2t)(s^{n-2} + s^{n-3}t + s^{n-4}t^2) + t(s^{n-2}+2s^{n-3}t) + (s+2t)(s^{n-2} + s^{n-4}t^2) = 0,
\end{align*}
confirming that replacing $F$ by $F'$ does not change the degrees of the column relations.
So far, we have shown that we can replace the first block of the matrix of $F$ by a nonsingular block with column relations in $\psi_F$ of the same degrees. We now need to check the column relation between the first and the other blocks. It is enough to check the relation between the first and the second nonzero block.
Except for the case $l = 3$ and $m = 4$, the last entry of $\psi_F$ corresponding to the first block remains the same, and so does the column relation between the first and the second blocks. When $l = 3$ and $m = 4$, the map $\psi_F$ looks like
\[ \psi_{F} = (s^{n-2}, \ s^{n-3}t, \ s^{n-4}t^2, \ 0, \cdots , 0, \ s^{n-2-b}t^b, \cdots), \]
and the relation between the first and second blocks is the degree $b-2$ relation
\[ t^{b-2}(s^{n-4}t^2) - s^{b-2}(s^{n-2-b}t^b) = 0. \]
It gets replaced by
\[ \psi_{F'} = (s^{n-2} + s^{n-3}t + s^{n-4}t^2, \ s^{n-2}+2s^{n-3}t, \ s^{n-2} + s^{n-4}t^2, \ 0, \cdots , 0, \ s^{n-2-b}t^b, \cdots), \]
and we have the relation between the first and the second blocks
\[ t^{b-2}(2(s^{n-2} + s^{n-3}t + s^{n-4}t^2) - (s^{n-2}+2s^{n-3}t) - (s^{n-2} + s^{n-4}t^2)) - s^{b-2}(s^{n-2-b}t^b) = 0, \]
also of degree $b-2$.
Therefore, the new matrix we obtain by replacing the first block induces column relations of the same degrees as the original matrix. Hence, we still get $N_{C/X}\cong \bigoplus _{i=1}^{n-2}\mathcal{O}(n+2-a_i)$. The rank of the new matrix is only decreased by the $(a_i-3)\times (a_i-3)$ zero diagonal blocks, thus it has corank $\sum _{a_i\ge 4}(a_i-3)$.
\medskip
Now, let $e<n$. The $F$ constructed in the proof of Theorem \ref{quadricprop1} is $F = \sum_{i=0}^{e-2}Q_{\beta_i+1,\beta_i+2} + \sum_{j=e+1}^n x_{\gamma_j}x_j$. The quadratic form matrix of $F$ can be written as the symmetric block matrix
\[ M = \begin{pmatrix}
Q & A \\
A^t & 0
\end{pmatrix} \]
where $Q$ is the $(e+1)\times (e+1)$ matrix corresponding to the quadratic form $\sum_{i=0}^{e-2}Q_{\beta_i+1,\beta_i+2}$ in $\P^e = V(x_{e+1}, \cdots , x_n)$. Thus, by applying the case $e=n$ to the matrix $Q$, we may assume that $Q$ has corank $\sum _{a_i\ge 4}(a_i-3)$. In other words, we are able to replace $Q$ by a corank $\sum _{a_i\ge 4}(a_i-3)$ matrix with the same normal bundle $N_{C/X}$.
The $0$ block of $M$ corresponds to the terms $x_lx_j$ of $F$ with $l,j\ge e+1$. These are $0$ when restricted to $C$, and therefore do not interfere with the map $\psi_F$. Thus, we can replace the $0$ block by any matrix $L$ without changing $\psi_F$ and hence preserving $N_{C/X}$. Therefore, all matrices
\[ M = \begin{pmatrix}
Q & A \\
A^t & L
\end{pmatrix} \]
with any $L$ induce the same $N_{C/X}$. To compute its rank, assume $L$ is invertible, let $I$ be the identity matrix, and use the Schur complement:
\[ \begin{pmatrix}
I & -AL^{-1} \\
0 & I
\end{pmatrix}
\begin{pmatrix}
Q & A \\
A^t & L
\end{pmatrix}
\begin{pmatrix}
I & 0\\
-L^{-1}A^t & I
\end{pmatrix} =
\begin{pmatrix}
Q-AL^{-1}A^t & 0 \\
0 & L
\end{pmatrix}. \]
Since
\[ \begin{pmatrix}
I & -AL^{-1} \\
0 & I
\end{pmatrix} \ \ \text{ and } \ \ \begin{pmatrix}
I & 0\\
-L^{-1}A^t & I
\end{pmatrix} \]
are invertible, it follows that
\[ \rk \begin{pmatrix}
Q & A \\
A^t & L
\end{pmatrix} = \rk \begin{pmatrix}
Q-AL^{-1}A^t & 0 \\
0 & L
\end{pmatrix} = \rk (Q-AL^{-1}A^t) + \rk (L). \]
Since matrices of rank at least $\rk (Q)$ form an open set containing $Q$, we can choose an invertible matrix $L$ such that $\rk (Q-AL^{-1}A^t)\ge \rk (Q)$, and therefore $\rk M\ge \rk(Q) + \rk(L)$, that is, the corank of $M$ is at most $\sum _{a_i\ge 4}(a_i-3)$.
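The Schur complement rank identity used here is easy to confirm numerically. The sketch below (the sizes, the random seed, and the particular singular $Q$ are illustrative choices, not the matrices from the proof) checks it for a singular symmetric $Q$ and an invertible $L$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, r = 6, 3
Q = rng.standard_normal((k, k)); Q = Q + Q.T
Q[:, -2:] = 0; Q[-2:, :] = 0                  # force Q to be singular (corank >= 2)
A = rng.standard_normal((k, r)); A[-2:, :] = 0
L = rng.standard_normal((r, r)); L = L + L.T + 10 * np.eye(r)  # invertible block
M = np.block([[Q, A], [A.T, L]])              # four-block symmetric matrix
schur = Q - A @ np.linalg.inv(L) @ A.T        # Schur complement of L in M
assert np.linalg.matrix_rank(M) == np.linalg.matrix_rank(schur) + np.linalg.matrix_rank(L)
```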
\end{proof}
\printbibliography[title={References}]
\end{document}
| {
"timestamp": "2022-09-28T02:12:50",
"yymm": "2209",
"arxiv_id": "2209.13199",
"language": "en",
"url": "https://arxiv.org/abs/2209.13199",
"abstract": "Let $C$ be the rational normal curve of degree $e$ in $\\mathbb{P}^n$, and let $X\\subset \\mathbb{P}^n$ be a degree $d\\ge 2$ hypersurface containing $C$. In previous work, I. Coskun and E. Riedl showed that the normal bundle $N_{C/X}$ is balanced for a general $X$. H. Larson studied the case of lines ($e=1$) and computed the dimension of the space of hypersurfaces for which $N_{C/X}$ has a given splitting type. In this paper, we work with any $e\\ge 2$. We compute explicit examples of hypersurfaces for all possible splitting types, and for $d\\ge 3$, we compute the dimension of the space of hypersurfaces for which $N_{C/X}$ has a given splitting type. For $d=2$, we give a lower bound on the maximum rank of quadrics with fixed splitting type.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Normal Bundles of Rational Normal Curves on Hypersurfaces"
} |
https://arxiv.org/abs/1804.02614 | Randomized subspace iteration: Analysis of canonical angles and unitarily invariant norms | This paper is concerned with the analysis of the randomized subspace iteration for the computation of low-rank approximations. We present three different kinds of bounds. First, we derive both bounds for the canonical angles between the exact and the approximate singular subspaces. Second, we derive bounds for the low-rank approximation in any unitarily invariant norm (including the Schatten-p norm). This generalizes the bounds for Spectral and Frobenius norms found in the literature. Third, we present bounds for the accuracy of the singular values. The bounds are structural in that they are applicable to any starting guess, be it random or deterministic, that satisfies some minimal assumptions. Specialized bounds are provided when a Gaussian random matrix is used as the starting guess. Numerical experiments demonstrate the effectiveness of the proposed bounds. | \section{Introduction}
\section{Introduction}
The computation of low-rank approximations of large-scale matrices is a vital step in many applications in data analysis and scientific computing. These applications include principal component analysis, facial recognition, spectral clustering, model reduction techniques such as proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM), and approximation algorithms for partial differential and integral equations. The celebrated Eckart-Young theorem~\cite{GovL13} says that the optimal low-rank approximation can be obtained by means of the Singular Value Decomposition (SVD); however, computing the full or truncated SVD can be computationally challenging, or even prohibitively expensive, for many applications of interest.
Randomized algorithms for computing low-rank approximations have become increasingly popular in the last two decades; see, for example, the survey papers~\cite{HMT09,mahoney2011randomized}. Randomized methods have gained in popularity since they are easy to implement, computationally efficient, and numerically robust. Although randomized algorithms tend to have the same asymptotic cost as classical methods, they have several advantages that make them suitable for large-scale computing. Specifically, for datasets that are too large to fit in memory, randomized algorithms are able to exploit parallel computing efficiently and are economical in the number of times they access the data. Randomized algorithms also have excellent numerical robustness and are very reliable in practical applications.
We focus on a specific randomized algorithm known as {\em randomized subspace iteration}. The main idea of this method is to use random sampling to identify a subspace that approximately captures the range of the matrix. A low-rank approximation to the matrix is then obtained by projecting the matrix onto this subspace. A post-processing step is then performed to compress the low-rank representation to achieve a desired target rank, followed by a conversion step to obtain an equivalent representation in the desired format (typically, a truncated SVD representation)---both these steps are deterministic.
Many advances have been made in the analysis of randomized algorithms for low-rank approximations. The analysis typically has two stages: a \textit{structural, or deterministic stage}, in which minimal assumptions about the distribution of the random matrix are made, and a \textit{probabilistic stage}, in which the distribution of the random matrix is taken into account to derive expectation and tail bounds for the error. As mentioned earlier, existing literature only targets the error in the low-rank representation~\cite{gu2015subspace,HMT09}. When the low-rank representation is in the SVD format, it is desirable to understand the quality of the approximate subspaces and the individual singular triplets. This paper aims to fill some of the missing gaps in the literature through a rigorous analysis of the accuracy of approximate singular values, vectors, and subspaces obtained using the randomized subspace iteration. This analysis will be beneficial in applications where an analysis beyond the low-rank approximation is desired. Examples include model reduction techniques~\cite{erichson2017randomized,balabanov2018randomized}, leverage score computation~\cite{holodnak2015conditioning}, spectral clustering~\cite{boutsidis2015spectral}, FEAST eigensolvers~\cite{peter2014feast}, and canonical correlation analysis~\cite{avron2013efficient}.
\subsection{Contributions and overview of paper}
We survey the contents and the main contributions of this paper.
\textbf{Canonical angles.} We have developed bounds for \textit{all} the canonical angles between the spaces spanned by the exact and the approximate singular vectors. Several different flavors of bounds are provided:
\begin{enumerate}
\item The bounds in \cref{ssec:angles} relate the canonical angles between the exact and the approximate singular subspaces. Analysis is also provided for unitarily invariant norms of the canonical angles.
\item In applications where lower dimensional subspaces are extracted from the approximate singular subspaces, the bounds in \cref{ssec:sintheta} quantify the accuracy of the extraction process.
\item \cref{ssec:sintheta} also presents bounds for the angles between the individual exact and approximate singular vectors, extracted from the appropriate subspaces.
\end{enumerate}
Our bounds suggest that the accuracy of the singular values and vectors, in addition to that of the low-rank approximations, is high provided that (1) the singular values decay rapidly beyond the target rank $k$, and (2) the singular value gaps are large; the larger the gaps, the higher the expected accuracy. Furthermore, the truncation step to extract the $k$ dimensional subspaces does not significantly lower the accuracy of the subspaces.
\textbf{Low-rank approximation.} This paper provides the first known analysis of the randomized subspace iteration for an arbitrary unitarily invariant norm, with stronger, specialized results for Schatten-p norms. Bounds for the special cases of the Schatten-p norm, namely the spectral and Frobenius norms, have already appeared in the literature---our result for the Schatten-p norm recovers these results as special cases.
\textbf{Singular values.} We derive upper and lower bounds on the approximate singular values obtained by the randomized subspace iteration. Similar bounds also appear in~\cite{gu2015subspace}; however, our proof technique is different. We also present Hoffman-Wielandt type bounds for the accuracy of the singular values.
The conclusion of the bounds for the low-rank approximations and the singular values are similar to those of the conclusions for the canonical angles.
\textbf{Generalization of the sin theta theorem.} The sin theta theorem~\cite{wedin1983angles} is a well-known result in numerical analysis that relates the canonical angles between the true and approximate singular subspaces in the unitarily invariant norms. We derive a generalization of the sin theta theorem that gives bounds for the individual canonical angles between the two subspaces. The sin theta theorem is recovered as a special case. This result may be of independent interest beyond the study of randomized algorithms.
\section{Background and preliminaries}
\subsection{Notation}\label{ssec:prelim}
Denote the target rank by $k$ and let $1 \leq k \leq \mathsf{rank}\,(A)$. Let the matrix $A\in \mathbb{C}^{m\times n}$ have the SVD
\[A = \bmat{U_k & U_{\perp}}\bmat{\Sigma_k & \\ &\Sigma_{\perp} }\bmat{ V_k^* \\ V_\perp^*}. \]
Here, $\Sigma_k \in \mathbb{C}^{k\times k}$ and $\Sigma_{\perp} \in \mathbb{C}^{(m-k) \times (n-k)}$; the columns of $U_k$ and $U_{\perp}$ are the corresponding left singular vectors, and the columns of $V_k$ and $V_{\perp}$ are the corresponding right singular vectors. We denote by $A_k = U_k\Sigma_kV_k^*$ the best rank-$k$ approximation to the matrix $A$ in any unitarily invariant norm (for a definition, see below). We also define $A_\perp = U_\perp \Sigma_\perp V_\perp^*$ and observe that
\[ A = A_k + A_\perp . \]
\paragraph{Singular values and ratios} Let $\normtwo{\cdot}$ denote the spectral norm, so that $\normtwo{\Sigma_{\perp}} = \sigma_{k+1}$ and $\normtwo{\Sigma_k^{-1}} = \frac{1}{\sigma_k}$. The singular values of $A$ can be arranged in decreasing order as
\[ \sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_k \geq \sigma_{k+1} \geq \dots \geq \sigma_n.\]
For later use, we define the singular value ratios
\begin{equation}
\gamma_j = \frac{\sigma_{k+1}}{\sigma_j} \qquad j=1,\dots,k.
\end{equation}
Since the singular values are monotonically decreasing, the singular value ratios are monotonically increasing, i.e., $\gamma_1 \leq \dots \leq \gamma_k \leq 1$.
\paragraph{Norms} We have already defined the spectral norm. The Frobenius norm of a matrix is $\normf{A} = \sqrt{\mathsf{trace}\,(A^*A)}$. We use the symbol $\uninorm{\cdot}$ to denote any unitarily invariant norm, i.e., a norm that satisfies $\uninorm{QAZ} = \uninorm{A}$ for unitary matrices $Q,Z$. An example of a unitarily invariant norm is the Schatten-$p$ class of norms, defined as the vector $\ell_p$ norm of the singular values of $A$, i.e.,
\[ \schattenp{A} = \left( \sum_{j=1}^{\min\{m,n\}}\sigma_j^{p}\right)^{1/p}.\]
With this definition, it can be readily seen that $\normtwo{A}= \uninorm{A}_{\infty}$ and $\normf{A} = \uninorm{A}_{2}$. Another example is the Ky-Fan-$k$ class of norms, defined by $\|A \|_{(k)} = \sum_{j=1}^k \sigma_j$ for every $k=1,\dots,\min\{m,n\}$. Associated with every unitarily invariant norm is a symmetric gauge function of the singular values of its argument.
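These special cases are easy to check numerically. In the NumPy sketch below, the helper `schatten` and the test matrix are illustrative choices, not part of the paper:

```python
import numpy as np

def schatten(A, p):
    """Schatten-p norm: the vector l_p norm of the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.linalg.norm(s, ord=p)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(schatten(A, np.inf), np.linalg.norm(A, 2))   # spectral = Schatten-inf
assert np.isclose(schatten(A, 2), np.linalg.norm(A, 'fro'))    # Frobenius = Schatten-2
assert np.isclose(schatten(A, 1), s.sum())                     # Schatten-1 = Ky-Fan-min(m,n)
```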
\paragraph{Projection matrices} Suppose the matrix $Z$ has full column rank with column space $\mc{R}\,(Z)$; then $Z^\dagger$ is a left multiplicative inverse, where ${}^\dagger$ represents the Moore-Penrose inverse. We define the (orthogonal) projection matrix $\proj{Z} = ZZ^\dagger$. An orthogonal projection matrix is uniquely defined by its range, and $\mc{R}\,(\proj{Z}) = \mc{R}\,(Z)$. For a matrix $Q$ with orthonormal columns, the formula simplifies to $\proj{Q} = QQ^*$.
\paragraph{Canonical angles} The separation between subspaces can be measured by the principal or canonical angles. Let $\mc{M}$ and $\mc{N}$ be two subspaces of $\mb{C}^n$, such that $\dim \mc{M} = \ell$, $\dim\mc{N} = k$ and $\ell\geq k$. Then the principal angles between the subspaces $\mc{M}$ and $\mc{N}$ are recursively defined to be the numbers $0 \leq \theta_i \leq \pi/2$ such that
\[ \cos\theta_i = \max_{\substack{u \in \mc{M},\, v \in \mc{N} \\ \normtwo{u} = \normtwo{v} = 1}} v^*u = v_i^*u_i, \qquad i = 1,\dots,k,\]
subject to the constraints $\normtwo{u_i} = \normtwo{v_i} = 1$, and
\[u_j^*u = 0, \quad v_j^*v = 0, \qquad j=1,\dots,i-1.\]
The canonical angles are arranged in increasing order as
\[ 0 \leq \theta_1 \leq \dots \leq \theta_k \leq \pi/2.\]
It can also be shown that the $\sin\theta_i$ are singular values of $\proj{\mc{M}} - \proj{\mc{N}}$.
We denote by $\angle(\mc{M},\mc{N})$ the canonical angles between the subspaces $\mc{M}$ and $\mc{N}$. Let $M$ and $N$ be matrices with orthonormal columns, which form bases for the subspaces $\mc{M}$ and $\mc{N}$, respectively. Then, the singular values of $(I-MM^*)N$ can be used to compute $\sin\angle(\mc{M},\mc{N})$ and the singular values of $M^*N$ can be used to compute $\cos\angle(\mc{M},\mc{N})$~\cite[Section 3]{bjorck1973numerical}. For ease of notation, in the rest of this paper, we write $\angle(M,N)$ instead of $\angle(\mc{M},\mc{N})$.
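Both characterizations can be implemented in a few lines. The following real-valued NumPy sketch (the function name and the dimensions are illustrative assumptions) computes the angles from $M^*N$ and checks them against the sines obtained from $(I-MM^*)N$:

```python
import numpy as np

def canonical_angles(M, N):
    """Canonical angles (in increasing order) between R(M) and R(N),
    where M, N have orthonormal columns and M has at least as many."""
    c = np.linalg.svd(M.T @ N, compute_uv=False)   # cosines, in decreasing order
    return np.sort(np.arccos(np.clip(c, 0.0, 1.0)))

rng = np.random.default_rng(1)
n, l, k = 8, 4, 3
M, _ = np.linalg.qr(rng.standard_normal((n, l)))
N, _ = np.linalg.qr(rng.standard_normal((n, k)))
theta = canonical_angles(M, N)
# Sines from the other characterization: singular values of (I - MM^*)N.
sines = np.sort(np.linalg.svd(N - M @ (M.T @ N), compute_uv=False))
assert np.allclose(np.sin(theta), sines)
```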
\subsection{Randomized subspace iteration} The basic version of the randomized subspace iteration is summarized in \cref{alg:randsvd}. Given a starting guess, denoted by $\Omega \in \mathbb{C}^{n\times (k+\rho)}$, the algorithm performs $q$ steps of the randomized subspace iteration to obtain the matrix $Y$, also known as the ``sketch.'' A thin-QR factorization of $Y$ is performed to obtain $Q$ whose columns form an orthonormal basis for the range of $Y$. The main idea is that, under suitable conditions, the range of $Q$ is a good approximation for the range of $A$. We obtain a low-rank approximation to $A$ by the projection $\widehat{A} = QQ^*A$. The rest of the algorithm involves converting this low-rank approximation into the SVD format.
\begin{algorithm}
\begin{algorithmic}[1]
\REQUIRE Matrix $A$, Starting guess $\Omega \in \mathbb{C}^{n\times (k + \rho)}$, an integer $q \geq 0$.
\STATE Compute $Y = (AA^*)^qA\Omega$
\STATE Compute thin QR factorization of $Y$, so that $Y = QR$.
\STATE Compute $B = Q^*A$ and its SVD $B = U_{B}\widehat{\Sigma} \widehat{V}^*$.
\STATE Compute $\widehat{U} = QU_{B}$.
\RETURN Matrices $\widehat{U},\widehat{\Sigma},\widehat{V} $ that define $\widehat{A} \equiv \widehat{U}\widehat{\Sigma}\widehat{V}^*$.
\end{algorithmic}
\caption{Idealized version of Subspace iteration for Singular Value Decomposition}
\label{alg:randsvd}
\end{algorithm}
The algorithm to compute an approximate singular value decomposition, given a starting guess $\Omega \in \mb{C}^{n\times (k+\rho)}$, is summarized in~\cref{alg:randsvd}. We say that this is an idealized version, since the algorithm can behave poorly in the presence of round-off errors. A practical implementation of this algorithm alternates the QR factorization with matrix-vector products (matvecs) involving $A$; for more details regarding the implementation, the reader is referred to~\cite{saad2011numerical,HMT09}. In~\cref{alg:randsvd}, the output $$\widehat{A} \equiv QQ^*A = \widehat{U}\widehat{\Sigma}\widehat{V}^*$$
may have a larger rank than (or equal to) $k$. If a rank-$k$ approximation to $A$ is desired, then it can be obtained by discarding the $\rho$ smallest singular values of $\widehat{A}$. We denote this low-rank representation by
\[ \widehat{A}_k = \widehat{U}_k \widehat{\Sigma}_k \widehat{V}_k^*. \]
This is summarized in~\cref{alg:truncation}.
\begin{algorithm}[!ht]
\begin{algorithmic}[1]
\REQUIRE Matrix $A\in \mathbb{C}^{m\times n}$ and $Q \in \mathbb{C}^{m\times (k+\rho)}$. Target rank $1 \leq k \leq \mathsf{rank}\,(A)$.
\STATE Form matrix $B = Q^*A$.
\STATE Compute the truncated SVD representation $B_k = \widehat{U}_{B,k} \widehat{\Sigma}_k \widehat{V}_k^*$.
\STATE Form $\widehat{U}_k = Q\widehat{U}_{B,k}$
\RETURN Matrices $\widehat{U}_k,\widehat{\Sigma}_k,\widehat{V}_k$ such that $\widehat{A}_k = \widehat{U}_k\widehat{\Sigma}_k\widehat{V}_k^*$.
\end{algorithmic}
\caption{Truncated SVD of $\widehat{A} = QQ^*A$}
\label{alg:truncation}
\end{algorithm}
\noindent Before we state the assumptions needed for our analysis, we introduce the following notation. The matrix $V^*\Omega$ captures the influence of the starting guess on the right singular matrix $V$. Partition this matrix as
\begin{equation}\label{eqn:omegadef} V^*\Omega = \bmat{V_{k}^*\Omega \\ V_{\perp}^*\Omega} = \bmat{\Omega_1 \\ \Omega_2},\end{equation}
where $\Omega_1 = V_k^*\Omega \in \mathbb{C}^{k\times (k+\rho)}$ and $\Omega_2 = V_\perp^*\Omega \in \mathbb{C}^{(n-k)\times (k+\rho)}$.
As was mentioned earlier, we assume that the target rank $k$ satisfies $1 \leq k \leq \mathsf{rank}\,(A)$. Additionally, the following assumptions will be required for our analysis.
\begin{assumption}\label{ass:main}
\begin{enumerate}
\item Let $\Omega_1 \in \mathbb{C}^{k\times (k+\rho)}$ be defined as above. We assume that
\begin{equation}\label{eqn:omega1} \mathsf{rank}\,(\Omega_1) = k. \end{equation}
\item The singular value ratio at index $k$, which is inversely related to the singular value gap, satisfies
\begin{equation}
\gamma_k = \normtwo{\Sigma_\perp}\normtwo{\Sigma_k^{-1}}= \frac{\sigma_{k+1}}{\sigma_k} < 1.
\end{equation}
\end{enumerate}
\end{assumption}
The first assumption guarantees that the starting guess $\Omega$ has a significant influence over the right singular vectors, whereas the second assumption ensures that the $k$ dimensional subspace $\mc{R}\,(U_k)$ is well defined. In practice, it is highly desirable that $\gamma_k \ll 1$, which ensures that there is a large singular value gap.
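For a Gaussian starting guess, the rank condition on $\Omega_1$ holds almost surely, which is easy to confirm numerically (all sizes below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, rho = 50, 8, 4
V, _ = np.linalg.qr(rng.standard_normal((n, n)))  # stand-in for the right singular vectors
Omega = rng.standard_normal((n, k + rho))         # Gaussian starting guess
Omega1 = V[:, :k].T @ Omega                       # k x (k + rho) matrix
assert np.linalg.matrix_rank(Omega1) == k         # Assumption 1 holds almost surely
```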
\section{Accuracy of singular vectors}\label{sec:angles}
We want to understand how well $\mc{R}\, (\widehat{U})$ approximates $\mc{R}\,(U_k)$, measured in terms of the canonical angles between the subspaces. To this end, abbreviate the subspace angles between $\widehat{U} \in \mathbb{C}^{m\times \ell}$ and $U_k \in \mathbb{C}^{m \times k}$ as $\theta_1,\dots,\theta_k$. Similarly, denote the angles between $\widehat{V} \in \mathbb{C}^{n\times \ell}$ and $V_k \in \mathbb{C}^{n \times k}$ by $\nu_1,\dots,\nu_k$. We are also interested in obtaining bounds for the canonical angles $\angle(U_k,\widehat{U}_k)$ and $\angle(V_k,\widehat{V}_k)$. To distinguish these angles from $\angle(U_k,\widehat{U})$ and $\angle(V_k,\widehat{V})$, we call them $\theta_j'$ and $\nu_j'$ for $j=1,\dots,k$.
\subsection{Bounds for canonical angles}\label{ssec:angles}
Our first result derives bounds for the canonical angles $\angle(U_k,\widehat{U})$. The analysis is based on the perturbation of projectors and the tools used here are similar to~\cite{HMT09}.
\begin{theorem}\label{thm:u}
Let $\widehat{U}$ and $\widehat{V}$ be obtained from~\cref{alg:randsvd}. With~\cref{ass:main}, the canonical angles $\theta_j$ and $\nu_j$ satisfy
\[ \sin\theta_j\leq \frac{\gamma_{j}^{2q+1} \normtwo{\Omega_2\Omega_1^\dagger}}{\sqrt{1 + \gamma_{j}^{4q+2} \normtwo{\Omega_2\Omega_1^\dagger}^2}} \qquad \sin\nu_j \leq \frac{\gamma_{j}^{2q+2} \normtwo{\Omega_2\Omega_1^\dagger}}{\sqrt{1 + \gamma_{j}^{4q+4} \normtwo{\Omega_2\Omega_1^\dagger}^2}} \]
for $j=1,\dots,k$.
\end{theorem}
{
This theorem has several interesting features worth pointing out. First, if the matrix has exact rank $k$, then all of the canonical angles are uniformly equal to zero; that is, the randomized subspace iteration identifies the subspace exactly. On the other hand, when $\gamma_k$ is very close to $1$, the subspaces may not be well defined and may be difficult to identify. In practice, it is highly desirable that $\gamma_k \ll 1$, so that the angles are captured accurately.
Second, the bounds for the canonical angles show explicit dependence on the singular value ratios $\gamma_j$. In particular, the canonical angles $\theta_j$ and $\nu_j$ converge to zero quadratically, but at different rates depending on the singular value ratios: the smaller the singular value ratio, the smaller the canonical angle.
Third, the term $\normtwo{\Omega_2\Omega_1^\dagger}$ can be written in terms of the right singular vector matrix $V$ and the starting guess $\Omega$ as
\[ \normtwo{\Omega_2\Omega_1^\dagger} = \normtwo{ (V_{\perp}^*\Omega)(V_k^*\Omega)^\dagger }.\]
When the columns of $\Omega$ are linearly independent, this quantity is nothing but the tangent of the largest canonical angle between $\mc{R}\,(V_k)$ and $\mc{R}\,(\Omega)$. This term appears frequently in randomized linear algebra and can be interpreted as a measure of the subspace overlap between the starting guess and the right singular vectors. In the ideal case, $\Omega$ contains the singular vectors in $V_k$.
A discussion of the meaning and interpretation of this term is provided in~\cite[Section 2.5]{drineas2017structural}. In particular, when $\Omega$ is a Gaussian random matrix, $\normtwo{\Omega_2\Omega_1^\dagger}$ is roughly on the order of $\sqrt{(n-k)k}$.
Fourth, the influence of $\normtwo{\Omega_2\Omega_1^\dagger}$ is subdued by the singular value ratios $\gamma_j^{2q+1}$. With a sufficiently large number of iterations $q$, the canonical angles become smaller than any user-defined tolerance. Rigorous bounds for the requisite number of iterations are provided in~\cref{ssec:prob}.
Lastly, the bounds for the canonical angles $\nu_j$ are smaller than those for $\theta_j$, because the former contain an additional power of $\gamma_j$. The reason for this higher accuracy is as follows: the columns of $\widehat{V}$ are the right singular vectors of $Q^*A$. Therefore, the multiplication step with $Q$ amounts to an additional step of subspace iteration and gives the extra factor.
}
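To make the discussion concrete, the bound of~\cref{thm:u} can be checked numerically. The sketch below is our own, not from the paper; it assumes the form of the randomized subspace iteration used later in the proofs, $Y = (AA^*)^qA\Omega$ followed by a thin QR factorization $Y = QR$, with $\mc{R}\,(Q) = \mc{R}\,(\widehat{U})$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, rho, q = 60, 50, 5, 5, 1
ell = k + rho

# Synthetic test matrix with a known SVD and a singular value gap at index k.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.concatenate([np.linspace(10.0, 5.0, k), 0.5 * 0.8 ** np.arange(n - k)])
A = U @ np.diag(sigma) @ W.T

# Randomized subspace iteration: Y = (A A^*)^q A Omega, then a thin QR.
# range(Q) equals range(Uhat), so the angles can be measured against Q.
Omega = rng.standard_normal((n, ell))
Y = A @ Omega
for _ in range(q):
    Y = A @ (A.T @ Y)
Q, _ = np.linalg.qr(Y)

# Canonical angles between range(U_k) and range(Q): the cosines are the
# singular values of U_k^* Q, so sin_theta is sorted with theta_1 first.
cosines = np.linalg.svd(U[:, :k].T @ Q, compute_uv=False)
sin_theta = np.sqrt(1.0 - np.clip(cosines, 0.0, 1.0) ** 2)

# Right-hand side of the theorem: gamma_j = sigma_{k+1}/sigma_j together
# with the subspace-overlap term ||Omega_2 Omega_1^dagger||_2.
Omega1 = W[:, :k].T @ Omega
Omega2 = W[:, k:].T @ Omega
c = np.linalg.norm(Omega2 @ np.linalg.pinv(Omega1), 2)
gamma = sigma[k] / sigma[:k]
bound = gamma ** (2 * q + 1) * c / np.sqrt(1.0 + gamma ** (4 * q + 2) * c ** 2)
```

Here the check succeeds deterministically because the Gaussian $\Omega_1$ has full row rank with probability one, so the hypotheses of the theorem hold.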
\begin{remark}
\cref{thm:u} gives the sine of the canonical angles; these bounds can also be used to obtain upper bounds for the tangents and lower bounds for the cosines.
With the same assumptions and notation as in~\cref{thm:u}, the relationship between the tangent and sine implies
\[\tan\theta_j \leq {\gamma_{j}^{2q+1} \normtwo{\Omega_2\Omega_1^\dagger}} \qquad \tan\nu_j \leq {\gamma_{j}^{2q+2} \normtwo{\Omega_2\Omega_1^\dagger}} \]
for $j=1,\dots,k$. Lower bounds for cosine of the canonical angles follow similarly.
\end{remark}
\paragraph{Unitarily invariant norms} The following result derives bounds for the canonical angles in any unitarily invariant norm, in contrast to~\cref{thm:u} which bounds the individual canonical angles.
\begin{theorem}\label{thm:theta2F} Let the approximate singular vectors $\widehat{U}$ and $\widehat{V}$ for a matrix $A$ be computed according to~\cref{alg:randsvd}. Under~\cref{ass:main}, for every unitarily invariant norm,
\begin{equation}
\begin{aligned}
\uninorm{\sin\angle(U_k,\widehat{U}) } \leq & \> \gamma_k^{2q} \frac{ \uninorm{ \Sigma_\perp}}{\sigma_k} \normtwo{\Omega_2\Omega_1^\dagger}, \\
\uninorm{\sin\angle(V_k,\widehat{V}) } \leq & \> \gamma_k^{2q+1} \frac{\uninorm{ \Sigma_\perp}}{\sigma_k} \normtwo{\Omega_2\Omega_1^\dagger}.
\end{aligned}
\end{equation}
\end{theorem}
The interpretation of this theorem is similar to that of~\cref{thm:u}. The connection between the two theorems follows from the identity $\sin\theta_k = \|\sin\angle(U_k,\widehat{U}) \|_{2}$. If we specialize the result in~\cref{thm:theta2F} to the spectral norm, then it is clear that this result is weaker than the bound in~\cref{thm:u}.
\subsection{Extraction of $k$-dimensional subspaces}\label{ssec:sintheta}
In the previous subsection, the columns of $\widehat{U}$ and $\widehat{V}$ spanned $\ell = k +\rho$ dimensional subspaces. Many applications, however, require the extraction of $k$-dimensional singular subspaces from the low-rank approximation $\widehat{A} \equiv QQ^*A$. One way to extract the appropriate subspaces is to first compute the optimal rank-$k$ truncation of $\widehat{A}$, denoted by $\widehat{A}_k$. The singular vectors of $\widehat{A}_k$, denoted by $\widehat{U}_k$ and $\widehat{V}_k$, are then used instead of $\widehat{U}$ and $\widehat{V}$. See~\cref{alg:truncation} for details regarding the implementation. The bounds derived in the previous subsection are not directly applicable since~\cite[Corollary 10]{ye2016schubert} says
\[ \theta_j \leq \theta_j' \qquad \nu_j \leq \nu_j' \qquad j=1,\dots,k.\]
To understand how much additional error is incurred during this extraction process, we present several results. The important conclusion of all these results is that the accuracy of the extracted subspaces of dimension $k$ is comparable to the accuracy of the $k+\rho$ dimensional subspace provided the singular values are sufficiently well separated.
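One way to implement the extraction described above is to compute the SVD of the small matrix $B = Q^*A$ and keep its $k$ dominant terms. A minimal numpy sketch of our own (\cref{alg:truncation} itself is not displayed in this excerpt):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k, rho, q = 40, 30, 4, 4, 1

# Synthetic matrix with a known SVD and a gap at index k.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.concatenate([np.linspace(8.0, 4.0, k), 0.4 * 0.8 ** np.arange(n - k)])
A = U @ np.diag(sigma) @ W.T

# Stage 1: randomized subspace iteration Y = (A A^*)^q A Omega, thin QR.
Omega = rng.standard_normal((n, k + rho))
Y = A @ Omega
for _ in range(q):
    Y = A @ (A.T @ Y)
Q, _ = np.linalg.qr(Y)

# Stage 2 (truncation): SVD of the small matrix B = Q^* A; keeping the k
# dominant terms gives the rank-k truncation of QQ^*A and the extracted
# k-dimensional singular subspaces.
B = Q.T @ A
Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
Uhat_k = Q @ Ub[:, :k]
Vhat_k = Vbt[:k].T
Ahat_k = Uhat_k @ np.diag(sb[:k]) @ Vhat_k.T

err = np.linalg.norm(A - Ahat_k, 2)
```

Since no rank-$k$ approximation of $A$ can beat $\sigma_{k+1}(A)$ in the spectral norm, the computed error is bounded below by the optimal error.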
The approach we take is different from that of the previous section. The starting point of our analysis is the well-known sin theta theorem for singular subspaces~\cite{wedin1983angles}. Let $A,\widehat{A}$ be two matrices of conformal dimensions. Assuming that
\begin{equation}\label{eqn:gap}
\zeta \equiv \sigma_k(A) - \sigma_{k+1}(\widehat{A}) > 0,
\end{equation}
we have
\begin{equation}\label{eqn:sintheta} \max\left\{ \uninorm{\sin \angle(U_k,\widehat{U}_k)} ,\uninorm{\sin\angle(V_k,\widehat{V}_k)} \right\} \leq \frac{\max\{\uninorm{E_{12}},\uninorm{E_{21}}\}}{\zeta},
\end{equation}
where the two matrices $E_{12}$ and $E_{21}$ are
\begin{equation} \label{eqn:e1}
\begin{aligned}
E_{12} = & \>(I-\proj{\widehat{U}_k}) (A-\widehat{A}) \proj{V_k}\\
E_{21} = & \> \proj{{U}_k} (A-\widehat{A}) (I-\proj{\widehat{V}_k}).
\end{aligned}
\end{equation}
However, this version of the sin theta theorem does not provide us with a way to obtain bounds for the individual canonical angles. To this end, we first present a new generalization of the sin theta theorem.
\begin{theorem}\label{thm:gensintheta} Let $A \in \mathbb{C}^{m\times n}$ with $\mathsf{rank}\,(A) \geq k$ and let $\widehat{A}$ be a perturbed matrix of the same dimensions. Suppose the singular value gap satisfies~\cref{eqn:gap}.
Let $\widehat{A}_k = \widehat{U}_k\widehat{\Sigma}_k\widehat{V}_k^*$ be the truncated SVD of $\widehat{A}$. Then
\[ \max \{ \sin \theta_j' , \sin \nu_j' \} \leq \frac{\sigma_k(A)}{\sigma_{j}(A)} \max\{\sin\theta_k',\sin\nu_k'\} \qquad j=1,\dots,k.\]
\end{theorem}
This theorem states that, for each $j$, $\sin \theta_j'$ and $\sin \nu_j'$ are bounded by $\max\{\sin\theta_k',\sin\nu_k'\}$ up to the multiplicative factor $\sigma_k(A)/\sigma_j(A)$, which is at most $1$.
Our main result provides the following bounds for canonical angles between the exact and the approximate singular subspaces, when both the subspaces have the same dimension. The proof involves simplifying every term in~\cref{eqn:e1}.
\begin{theorem}\label{thm:sintheta}
Let $\widehat{U}$ and $\widehat{V}$ be obtained from~\cref{alg:randsvd}, and matrices $\widehat{U}_k$ and $\widehat{V}_k$ from~\cref{alg:truncation}. Under~\cref{ass:main},
\begin{itemize}
\item for every unitarily invariant norm
\[ \max\left\{ \uninorm{ \sin \angle (U_k,\widehat{U}_k) } ,\uninorm{\sin\angle(V_k,\widehat{V}_k)} \right\} \leq \phi\frac{\gamma_k^{2q}}{1-\gamma_k} \frac{\uninorm{\Sigma_\perp}}{\sigma_k} \normtwo{\Omega_2\Omega_1^\dagger}.\]
The factor $\phi$ takes different values depending on the specific norm used. For an arbitrary unitarily invariant norm, we have $\phi= \sqrt{2}$, whereas for the spectral and Frobenius norms, we have $\phi = 1$.
\item canonical angles $\theta_j'$ and $\nu_j'$ satisfy
\[\max \{ \sin \theta_j' , \sin \nu_j' \} \leq \gamma_{j} \frac{\gamma_k^{2q}}{1-\gamma_k}\normtwo{\Omega_2\Omega_1^\dagger} \qquad j=1,\dots,k.\]
\end{itemize}
\end{theorem}
The interpretation of this theorem is: (1) as the number of iterations $q$ increases, the largest canonical angle converges to $0$ quadratically, and (2) a larger singular value gap means that the subspace is computed more accurately. Comparing this result with~\cref{thm:theta2F}, we see that the upper bound in~\cref{thm:sintheta} has additional factors which depend on the specific norm used. For an arbitrary unitarily invariant norm, there is an additional factor $\max\{1,\sqrt{2}\gamma_k\}/(1-\gamma_k)$; for the spectral and Frobenius norms, the additional factor is $1/(1-\gamma_k)$. Both factors are greater than $1$, suggesting that the truncation process can introduce additional error. The additional factor is also independent of the number of iterations $q$, suggesting that it is a one-time price to be paid for the extraction process. The bound degrades severely when $\gamma_k \approx 1$, but in that case the subspaces may not be well defined anyway.
\paragraph{Individual singular vectors}
The previous results give insight into the accuracy measured using the canonical angles between the exact and approximate singular subspaces. When individual singular vectors need to be extracted, does the extraction process introduce additional error? The following result quantifies the accuracy of the extraction process.
\begin{theorem}\label{t_singleuv}
Let the approximate singular vectors $\widehat{U}$ and $\widehat{V}$ be computed according to~\cref{alg:randsvd}. With~\cref{ass:main}, we have the following inequalities
\begin{equation}\label{eqn:singleuv}
\sin \angle(u_j,\widehat{U})\leq {\gamma_{j}^{2q+1} \normtwo{\Omega_2\Omega_1^\dagger}} \qquad
\sin\angle(v_j,\widehat{V}) \leq {\gamma_{j}^{2q+2} \normtwo{\Omega_2\Omega_1^\dagger}}
\end{equation}
for $j=1,\dots,k$. Denote the approximate singular triplets by $(\hat\sigma_j,\hat{u}_j,\hat{v}_j)$ for $j=1,\dots,k$. Under~\cref{ass:main},
\begin{equation}\label{eqn:svs}
\max\left\{\sin\angle(u_j,\hat{u}_j),\sin\angle(v_j,\hat{v}_j) \right\} \leq \sqrt{1 + 2\frac{\tilde{\gamma}^2}{\tilde\delta^2}} \, \gamma_j^{2q+1}\normtwo{\Omega_2\Omega_1^\dagger}.
\end{equation}
Here, $\tilde\gamma^2 \equiv \normtwo{\Sigma_\perp}^2 + \normtwo{\Sigma_\perp\Omega_2\Omega_1^\dagger}^2 $ and $\tilde\delta \equiv \min\{\min_{\tilde{\sigma}_i\neq \tilde\sigma_j}\{ |\sigma_j - \tilde\sigma_i| ,\sigma_j\} \} $.
\end{theorem}
The first result bounds the angles between the exact singular vector and the corresponding approximate singular subspaces. The second result compares the angles of the exact and the approximate singular vectors. This result also says that the extraction process does not adversely increase the error in the singular subspaces, provided the singular values are well-separated.
The convergence of the individual singular vectors tells a similar story to that of~\cref{thm:u}. The singular vectors corresponding to the largest singular values converge earlier than those corresponding to the smaller singular values. This is a consequence of the fact that the singular value ratios $\gamma_j$ are non-decreasing in $j$.
\subsection{Comparison with other bounds}
The subspace iteration dates back to a 1957 paper by Bauer~\cite{Bauer1957} on eigenvalue problems, and its analysis is well established; see, for example,~\cite[Chapter 14]{Par80}. {Randomized subspace iteration has attracted a lot of attention in the last two decades, with a special emphasis on quantifying the influence of the starting guess $\Omega$. In particular, recent research has focused on the choice of the distribution and the effect of the oversampling parameter $\rho$. The effect of randomized subspace iteration on the accuracy of singular vectors was studied in the context of spectral clustering in~\cite{boutsidis2015spectral}. However, the authors made the rather strong assumption that $\Omega \in \mathbb{R}^{n\times k}$, which amounts to setting the oversampling parameter $\rho = 0$. This is a strong requirement since~\cref{ass:main} then requires $\Omega_1$ to be invertible. The authors were able to show (in our notation)
\[ \normtwo{ \sin \angle (U_k,\widehat{U}_k)} \leq \frac{\gamma_k^{2q+1}\normtwo{\Omega_2\Omega_1^{-1}}}{\sqrt{1 + \gamma_k^{4q+2}\normtwo{\Omega_2\Omega_1^{-1}}^2}}.\]
Notice that this bound coincides with~\cref{thm:u} (for $\sin\theta_k$) when $\rho = 0$. Our results provide bounds for the right singular vectors as well as all the canonical angles.
Let us return to the assumption that $\mathsf{rank}\,(\Omega_1) = k$. When $\Omega$ is a standard Gaussian matrix and $\rho = 0$,~\cite[Theorem 3.3]{sankar2006smoothed} says
$$\|\Omega_1^{-1}\|_2 \leq \frac{2.35 \sqrt{k}}{\delta}$$
with probability at least $1-\delta$. For a small probability of failure $0 <\delta < 1$, this bound can be devastating. By contrast, suppose that $\Omega$ is a Gaussian random matrix with $\rho \geq 2$, so that $\Omega_1 \in \mathbb{C}^{k\times (k+\rho)}$. Then, with probability at least $1-\delta$,~\cite[Proposition 10.4]{HMT09} says
$$\normtwo{\Omega_1^\dagger} \leq e\frac{\sqrt{k+\rho}}{\rho} \left(\frac{1}{\delta}\right)^{1/(\rho+1)}. $$
It is clear that when the random matrix is Gaussian, oversampling improves the accuracy of the randomized subspace iteration: the larger the oversampling, the more accurate the subspace. }
Oversampling plays a bigger role for random matrices drawn from distributions other than the Gaussian. When $\Omega$ is generated from the subsampled randomized Hadamard transform (SRHT) or the Rademacher distribution, a more aggressive form of oversampling $\ell \sim k\log k$ is necessary to ensure that $\mathsf{rank}\,(\Omega_1) = k$. Therefore, by allowing for oversampling, our bounds are applicable to starting guesses that are not restricted to Gaussian random matrices. Moreover, our bounds are also informative for matrices with decaying singular values and a significant singular value gap.
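The effect of oversampling on $\normtwo{\Omega_1^\dagger}$ for Gaussian starting guesses can be observed empirically. The sketch below is our own; it uses the fact that, by unitary invariance of the Gaussian distribution, $\Omega_1 = V_k^*\Omega$ is itself a $k\times(k+\rho)$ standard Gaussian matrix, so it can be sampled directly:

```python
import numpy as np

rng = np.random.default_rng(3)
k, trials = 10, 200

def median_pinv_norm(rho):
    # ||Omega_1^dagger||_2 = 1 / sigma_min(Omega_1) for a full-row-rank matrix;
    # we report the median over many Gaussian draws.
    samples = [
        1.0 / np.linalg.svd(rng.standard_normal((k, k + rho)),
                            compute_uv=False)[-1]
        for _ in range(trials)
    ]
    return float(np.median(samples))

med_square = median_pinv_norm(0)        # rho = 0: Omega_1 must be inverted
med_oversampled = median_pinv_norm(10)  # rho = 10: well-conditioned pseudoinverse
```

The square case exhibits the heavy-tailed behavior predicted by the $2.35\sqrt{k}/\delta$ bound, while even modest oversampling keeps $\normtwo{\Omega_1^\dagger}$ small.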
A recent paper by Nakatsukasa~\cite{nakatsukasa2017accuracy} considered the issue of accuracy of extracting singular subspaces for general projection-based approximation methods. In our notation, these refer to relating bounds for $\angle(U_k,\widehat{U})$ to $\angle(U_k,\widehat{U}_k)$. Our bounds for the canonical angles appear to be tighter than the result implied by~\cite[Corollary 1]{nakatsukasa2017accuracy}. This may be because the analysis was applicable to arbitrary subspace projections, whereas ours is specialized to randomized subspace iteration; we do not go into a detailed comparison here. Furthermore, our analysis is able to bound the individual canonical angles which is missing in~\cite{nakatsukasa2017accuracy}.
\subsection{Probabilistic bounds}\label{ssec:prob} Thus far, we
have not made specific assumptions on the matrix $\Omega$, beyond requiring that it
satisfies $\mathsf{rank}\,(\Omega_1) = k$. In particular, $\Omega$ need not even be random, and
may be deterministic. However, more can be said about the bounds when $\Omega$
is drawn from a specific distribution.
In many applications, the matrix $\Omega \in \mathbb{R}^{n\times (k+\rho)}$ is taken to be the standard Gaussian random matrix. That is, the entries of $\Omega$ are i.i.d.\ $\mc{N}(0,1)$ random variables. Here we derive a few probabilistic results that provide insight into the accuracy of the subspaces. Let $\rho \geq 2$ and define the constant
\begin{equation}
\label{eqn:ce}
C_e = \sqrt{\frac{k}{\rho-1}} + \frac{e\sqrt{(k+\rho)(n-k)}}{\rho}
\end{equation}
and for $0 < \delta < 1$ define the constant
\begin{equation}
\label{eqn:cd}
C_d = \frac{e\sqrt{k+\rho}}{\rho+1}\left(\frac{2}{\delta}\right)^{1/(\rho+1)} \left( \sqrt{n-k} + \sqrt{k+\rho} + \sqrt{2\log\frac{2}{\delta}}\right).
\end{equation}
\begin{theorem}[Probabilistic bounds]\label{thm:can_gauss} Let $\Omega \in \mb{R}^{n\times (k+\rho)}$ be a standard Gaussian random matrix with $\rho \geq 2$. Assume that the singular value ratio $\gamma_k < 1$. Let $\widehat{U}$ and $\widehat{V}$ be obtained from~\cref{alg:randsvd}. For $j=1,\dots,k$, the expected values of the canonical angles satisfy
\[ \mb{E}\, \left[\sin\theta_j \right] \leq \frac{\gamma_{j}^{2q+1}C_e}{\sqrt{1 + \gamma_{j}^{4q+2}C_e^2} } \qquad \mb{E} \, \left[\sin\nu_j \right] \leq \frac{\gamma_{j}^{2q+2}C_e}{\sqrt{1 + \gamma_{j}^{4q+4}C_e^2} }.\]
Let $0 < \delta < 1$ be a user-defined failure tolerance. With probability at least $1-\delta$, the following inequalities hold independently for $j=1,\dots,k$
\[ \sin\theta_{j} \leq \frac{\gamma_{j}^{2q+1}C_d}{\sqrt{1 + \gamma_{j}^{4q+2}C_d^2} } \qquad \sin\nu_j \leq \frac{\gamma_{j}^{2q+2}C_d}{\sqrt{1 + \gamma_{j}^{4q+4}C_d^2} }.\]
\end{theorem}
The main message of this theorem can be seen from the following bound on the number of subspace iterations $q$. Specifically, suppose $0 < \epsilon < 1$; if the number of subspace iterations satisfies
\[ q \geq \frac12 \left( \frac{\log (\epsilon/C_e)}{\log \gamma_k} -1\right),\]
then $ \mb{E}\, \left[\sin \theta_j\right] \leq \epsilon$ for $j=1,\dots,k$.
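The iteration count suggested by this bound is easy to compute in practice. A small sketch of our own (the function name is hypothetical) that returns the smallest integer $q$ with $\gamma_k^{2q+1}C_e \leq \epsilon$:

```python
import numpy as np

def requisite_iterations(eps, C_e, gamma_k):
    """Smallest integer q >= 0 with gamma_k**(2*q + 1) * C_e <= eps.

    Requires 0 < gamma_k < 1; solves (2q + 1) * log(gamma_k) <= log(eps / C_e).
    """
    q = 0.5 * (np.log(eps / C_e) / np.log(gamma_k) - 1.0)
    return max(0, int(np.ceil(q)))

# Illustrative values (not from the paper): eps = 1e-6, C_e = 50, gamma_k = 0.5.
eps, C_e, gamma_k = 1e-6, 50.0, 0.5
q = requisite_iterations(eps, C_e, gamma_k)
```

The test checks both that the returned $q$ achieves the tolerance and that it is minimal.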
Several extensions of these results are possible. First, following the proof technique of~\cref{thm:can_gauss}, we can extend the probabilistic analysis to~\cref{thm:theta2F,t_singleuv} as well. Second, following the strategy in~\cite{HMT09}, the probabilistic results can be extended to other distributions. However, we will not pursue these extensions here.
\section{Low-rank approximation and singular values}\label{sec:aux}
In this section, we provide several structural bounds for the accuracy of the low-rank approximation and the accuracy of the singular values.
\subsection{Low-rank approximation} Several results are available for
estimating the error in the low-rank approximation $A \approx QQ^*A$ in the
spectral and Frobenius norms, when the matrix $Q$ is obtained from the
randomized subspace
iteration~\cite{HMT09,gu2015subspace,zhang2016randomized}. As was mentioned
earlier, the spectral and Frobenius norms are special cases of the Schatten-$p$
norms, which are examples of unitarily invariant norms.
Here we present the first known analysis of randomized subspace iteration in a unitarily invariant norm.
\begin{theorem}\label{thm:lowrank_schatten}
Let $\widehat{A} \in \mathbb{C}^{m\times n}$ be computed using~\cref{alg:randsvd}. Under~\cref{ass:main}, the following inequalities hold in every unitarily invariant norm
\begin{align}\label{eqn:nnorm}
\uninorm{(I-QQ^*)A} \leq & \> \uninorm{\Sigma_\perp}+ \gamma_k^{2q}\uninorm{\Sigma_\perp\Omega_2 \Omega_1^\dagger } \\ \label{eqn:nnorm_rankr}
\uninorm{(I-QQ^*)A_k } \leq & \> \gamma_k^{2q}\uninorm{\Sigma_\perp\Omega_2\Omega_1^\dagger}.
\end{align}
Let $B = Q^*A$, and let $B_k$ be its best rank-$k$ approximation. If $A$ is approximated by $QB_k$, then the error in the low-rank approximation is
\begin{equation}\label{eqn:nnorm_rankr2}
\uninorm{A-QB_k} \leq \> \left( 1+ \frac{\sigma_1}{\sigma_k}\frac{\phi\gamma_k^{2q}}{1-\gamma_k} \normtwo{\Omega_2 \Omega_1^\dagger} \right)\uninorm{\Sigma_\perp}.\end{equation}
As in~\cref{thm:sintheta}, $\phi = 1$ for spectral and Frobenius norms, and $\sqrt{2}$ for an arbitrary unitarily invariant norm.
\end{theorem}
In this theorem, as the number of iterations $q\rightarrow \infty$, the term $\gamma_k^{2q}\uninorm{\Sigma_\perp\Omega_2\Omega_1^\dagger}$ goes to zero: the error in approximating $A_k$ vanishes, and the error in the low-rank approximation of $A$ approaches the optimal value $\uninorm{\Sigma_\perp}$.
We present a variant of the error bound for the low-rank approximation in the special case of the Schatten-$p$ norm. The proof for the special case of the Frobenius norm was provided in~\cite{zhang2016randomized}.
\begin{theorem}\label{thm:lowrank}
Let $\widehat{A}$ be computed using~\cref{alg:randsvd}. Under~\cref{ass:main}, we have
\begin{equation}\label{eqn:2norm}
\schattenp{ (I-QQ^*)A }^2 \leq \schattenp{\Sigma_\perp}^2 + \gamma_k^{4q}\schattenp{\Sigma_\perp\Omega_2\Omega_1^\dagger}^2.
\end{equation}
\end{theorem}
The error bound in~\cref{thm:lowrank_schatten} is weaker than that in~\cref{thm:lowrank} for the Schatten-$p$ norm, since for $\alpha,\beta \geq 0$ we have $\sqrt{\alpha^2 + \beta^2} \leq \alpha + \beta$. More generally,~\cref{thm:lowrank} is applicable to any unitarily invariant norm that is
also a Q-norm~\cite[Definition IV.2.9]{bhatia1997matrix}. A unitarily invariant norm $\uninorm{\cdot}_Q$ is a $Q$-norm, if there exists
another unitarily invariant norm $\uninorm{\cdot}_a$ such that $\uninorm{A}_Q^2 =
\uninorm{A^*A}_a$. Note that the Schatten-$p$ norms satisfy this property for $p
\geq 2$, since $\schattenp{A}^2 = \uninorm{A^*A}_{p/2}$.
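The $Q$-norm property of the Schatten-$p$ norms is easy to verify numerically. A short sketch of our own, computing the Schatten norm directly from the singular values:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 5))

def schatten(M, p):
    # Schatten-p norm: the l_p norm of the vector of singular values.
    return float(np.sum(np.linalg.svd(M, compute_uv=False) ** p) ** (1.0 / p))

# Since sigma(A^*A) = sigma(A)^2, we have ||A||_p^2 = ||A^*A||_{p/2}.
p = 4.0
lhs = schatten(A, p) ** 2
rhs = schatten(A.T @ A, p / 2.0)
```

The identity holds for any $p \geq 2$; the spectral norm is recovered as $p \to \infty$ and the Frobenius norm as $p = 2$.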
\subsection{Accuracy of singular values} How are the singular values of $A$
related to the singular values of $\widehat{A}$? We now present a result that
quantifies the accuracy of the individual singular values. This result is
similar to~\cite[Theorem 4.3]{gu2015subspace}. Our proof techniques are
substantially different. We make extensive use of the Cauchy interlacing
theorem and the multiplicative singular value inequalities~\cref{eqn:singprod}.
\begin{theorem}\label{thm:sigma}
Let $\widehat{A} = \widehat{U}\widehat{\Sigma}\widehat{V}^*$ be computed using~\cref{alg:randsvd}. Under~\cref{ass:main}, the
approximate singular values $\sigma_j(\widehat{A})$ satisfy for $j=1,\dots,k$
\[ \sigma_j(A) \geq \sigma_j(\widehat{A}) \geq \frac{\sigma_j(A)}{\sqrt{1 + \gamma^{4q+2}_{j}\normtwo{\Omega_2\Omega_1^\dagger}^2 }}.\]
\end{theorem}
It can be readily seen that the large singular values are computed more accurately since the singular value ratio corresponding to larger singular values is smaller.
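As a numerical sanity check of these two-sided bounds (our own sketch, using the same form of the subspace iteration as in the proofs, $Y = (AA^*)^qA\Omega$ with a thin QR), note that the approximate singular values are simply the singular values of $Q^*A$:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k, rho, q = 50, 40, 5, 5, 1

# Synthetic matrix with known singular values and a gap at index k.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.concatenate([np.linspace(4.0, 2.0, k), 0.2 * 0.9 ** np.arange(n - k)])
A = U @ np.diag(sigma) @ W.T

# Subspace iteration; sigma_j(QQ^*A) = sigma_j(Q^*A).
Omega = rng.standard_normal((n, k + rho))
Y = A @ Omega
for _ in range(q):
    Y = A @ (A.T @ Y)
Q, _ = np.linalg.qr(Y)
sigma_hat = np.linalg.svd(Q.T @ A, compute_uv=False)[:k]

# Lower bound of the theorem: sigma_j / sqrt(1 + gamma_j^(4q+2) c^2).
Omega1 = W[:, :k].T @ Omega
Omega2 = W[:, k:].T @ Omega
c = np.linalg.norm(Omega2 @ np.linalg.pinv(Omega1), 2)
gamma = sigma[k] / sigma[:k]
lower = sigma[:k] / np.sqrt(1.0 + gamma ** (4 * q + 2) * c ** 2)
```

The upper bound $\sigma_j(\widehat{A}) \leq \sigma_j(A)$ holds by interlacing; the lower bound is the content of the theorem.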
Rather than quantify the accuracy of the individual singular values, the next results are of the Hoffman-Wielandt type and account for all the singular values together. Define the two matrices of conformal sizes
\[ \Sigma = \bmat{\Sigma_k \\ & \Sigma_\perp} \qquad \Sigma' = \bmat{\widehat{\Sigma} \\ & 0}.\]
Under~\cref{ass:main}, the error in the singular values satisfies
\begin{equation}
\uninorm{\Sigma-\Sigma'} \leq \uninorm{\Sigma_\perp} + \gamma_k^{2q} \uninorm{\Sigma_\perp\Omega_2\Omega_1^\dagger} .
\end{equation}
The proof combines~\cite[III.6.13]{bhatia1997matrix} with~\cref{thm:lowrank_schatten}. For the Schatten-$p$ norm with $p\geq 2$, we can derive the bound
\begin{equation}\schattenp{\Sigma-\Sigma'} \leq \sqrt{\schattenp{\Sigma_\perp}^2 + \gamma_k^{4q} \schattenp{\Sigma_\perp\Omega_2\Omega_1^\dagger}^2} .
\end{equation}
The proof is similar, and is therefore omitted.
\section{Proofs}\label{sec:proofs}
We recall some results here that will be useful in our analysis; see~\cite[Section 7.7]{HoJ13} for proofs. Let $M,N$ be Hermitian matrices. The notation $M \preceq N$ means that $N-M$ is positive semidefinite, and it defines a partial ordering on the set of Hermitian matrices. Clearly, $M \preceq N$ also implies $I -N \preceq I -M$. The partial order is preserved under conjugation; that is,
\[ SMS^* \preceq SNS^* \qquad \forall \> S \in \mb{C}^{m\times n}. \]
Weyl's theorem implies that the eigenvalues satisfy $\lambda_j(M) \leq \lambda_j(N)$ for all $j=1,\dots,n$. If additionally, $M,N$ are both positive semidefinite then $M^{1/2} \preceq N^{1/2}$~\cite[Proposition V.1.8]{bhatia1997matrix} and $(I+N)^{-1} \preceq (I+M)^{-1}$.
\paragraph{Singular value inequalities} Let $A,B \in \mathbb{C}^{m\times n}$. For all $i,j$ such that $1 \leq i,j \leq \min\{m,n\}$ and $i+j-1\leq \min\{m,n\}$, the following singular value inequalities hold for the sum $A+B$~\cite[Equation 7.3.13]{HoJ13}
\begin{equation}
\sigma_{i+j-1}(A+B) \leq \sigma_i(A) + \sigma_j(B),
\end{equation}
and product $AB^*$~\cite[Equation (7.3.14)]{HoJ13}
\begin{equation}\label{eqn:singprod}
\sigma_{i+j-1}(AB^*)\leq \sigma_i(A)\sigma_j(B).
\end{equation}
A useful corollary of these results is that $\sigma_i(A+B) \leq \sigma_i (A) + \sigma_1(B)$ and $\sigma_i(AB^*) \leq \sigma_i(A) \sigma_1(B)$ for $i=1,\dots,\min\{m,n\}$.
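These inequalities are straightforward to check numerically; the sketch below (our own) verifies both the additive and multiplicative versions, stated in zero-based indexing:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((7, 5))
B = rng.standard_normal((7, 5))

sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
s_sum = np.linalg.svd(A + B, compute_uv=False)
s_prod = np.linalg.svd(A @ B.T, compute_uv=False)  # the product A B^*

r = min(A.shape)
# 0-indexed form of sigma_{i+j-1}(A + B) <= sigma_i(A) + sigma_j(B).
ok_sum = all(
    s_sum[i + j] <= sA[i] + sB[j] + 1e-10
    for i in range(r) for j in range(r) if i + j < r
)
# 0-indexed form of sigma_{i+j-1}(A B^*) <= sigma_i(A) sigma_j(B).
ok_prod = all(
    s_prod[i + j] <= sA[i] * sB[j] + 1e-10
    for i in range(r) for j in range(r) if i + j < r
)
```

Setting $j = 1$ (that is, index $0$ above) recovers the corollary stated in the text.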
\paragraph{Unitarily invariant norms} {It is useful to recall some properties of unitarily invariant norms. Every unitarily invariant norm $\uninorm{\cdot}$ on $\mathbb{C}^{m\times n}$ is associated with a symmetric gauge function on $\mathbb{R}^{\min\{m,n\}}$. It satisfies $\uninorm{M} = \uninorm{(M^*M)^{1/2}}$, since both matrices have the same nonzero singular values. The following inequality for unitarily invariant norms, also known as strong sub-multiplicativity, will be useful~\cite[(IV.40)]{bhatia1997matrix}
$$\uninorm{ABC} \leq \normtwo{A}\normtwo{C}\uninorm{B}.$$
We will also need the following lemma.
\begin{lemma}\label{lemma:schatten}
Let $A, B, D \in \mathbb{C}^{n\times n} $ such that $A,B$ Hermitian and $0 \preceq A \preceq B$, then
\[ \uninorm{(D^*AD)^{1/2}} \leq \uninorm{(D^*BD)^{1/2}}.\]
\end{lemma}
\begin{proof}
Combining the properties of the partial ordering, the eigenvalues of the conjugated matrices satisfy $\lambda_j(D^*AD) \leq \lambda_j(D^*BD)$, and hence $\lambda_j^{1/2}(D^*AD) \leq \lambda_j^{1/2}(D^*BD)$, for all $j=1,\dots,n$. Since the matrices are positive semidefinite, their eigenvalues coincide with their singular values, so $\|(D^*AD)^{1/2}\|_{(k)} \leq \|(D^*BD)^{1/2}\|_{(k)}$ for every Ky Fan-$k$ norm, $k=1,\dots,n$. By the Fan dominance theorem~\cite[Theorem IV.2.2]{bhatia1997matrix}, the advertised inequality holds in every unitarily invariant norm.
\end{proof}
}
\subsection{Proofs of the theorems in~\cref{ssec:angles}}
\begin{proof}[\cref{thm:u}] We tackle each case separately. \\
\textbf{Bounds for $\sin\theta_j$}: The proof is lengthy and proceeds in four steps. We give a generous level of detail here, since the proof technique will be reused in subsequent proofs.
\paragraph{1. Converting an SVD to an EVD} We compute the thin SVD of $(I-\proj{\widehat{U}})U_k = K S_U G^*.$ The matrix $$S_U = \mathsf{diag}\,(\sin\theta_k,\dots,\sin\theta_1) \in \mb{R}^{k\times k}$$
contains the sine of the canonical angles between the subspaces spanned by the columns of $\widehat{U}$ and $U_k$~\cite[Equation (13)]{bjorck1973numerical}. It is readily seen that
\begin{equation}\label{eqn:su} G S_U^2 G^* = U_k^*(I-\proj{\widehat{U}})U_k.\end{equation}
\paragraph{2. Shrinking space} In~\cref{alg:randsvd}, we had defined $Y = (AA^*)^qA\Omega$. It follows that
\[ U^*Y = \bmat{\Sigma_k^{2q+1} \\ & (\Sigma_\perp\Sigma_\perp^\top)^q\Sigma_\perp} (V^*\Omega)= \begin{bmatrix} \Sigma_k^{2q+1} \Omega_1 \\ (\Sigma_\perp\Sigma_\perp^\top)^q\Sigma_\perp \Omega_2\end{bmatrix},\]
where from~\cref{eqn:omegadef}, $\Omega_1 = V_k^*\Omega$ and $\Omega_2 = V_\perp^*\Omega$. Next, by~\cref{ass:main}, $\Omega_1$ has full row rank and therefore it has a right multiplicative inverse. Define
\[ Z \equiv U^*Y\Omega_1^\dagger \Sigma_k^{-(2q+1)} = \bmat{I \\ F} \qquad F \equiv (\Sigma_\perp\Sigma_\perp^\top)^q\Sigma_\perp\Omega_2\Omega_1^\dagger\Sigma_k^{-(2q+1)}. \]
{Recall that $Y= QR$ is the thin-QR factorization of $Y$. Let $Q_1R_1$ be the thin-QR factorization of $R\Omega_1^\dagger\Sigma_k^{-(2q+1)}$; here, $Q_1 \in \mb{C}^{(k+\rho) \times k}, R_1\in \mb{C}^{k\times k}$.
From $Q_1Q_1^*\preceq I$, the conjugation rule implies
\[
\proj{Z} = U^*QQ_1Q_1^*Q^*U \preceq U^*QQ^*U = \proj{U^*Q}.
\]
Since
$\mc{R}\,(U^*Y) = \mc{R}\,(U^*Q) = \mc{R}\,(U^*\widehat{U})$, they have the same projectors, so
\begin{equation}\label{eqn:projz}\proj{Z} \preceq \proj{U^*\widehat{U}} \qquad I - \proj{U^*\widehat{U}} \preceq I - \proj{Z}.\end{equation}
Plug in $UU^* = I$ into~\eqref{eqn:su}, and use~\eqref{eqn:projz} to obtain
\[ U_k^*(I-\proj{\widehat{U}})U_k = U_k^*U (I-\proj{U^*\widehat{U}}) U^*U_k \preceq \bmat{ I & 0 }(I-\proj{Z}) \bmat{I \\ 0}.\]
}
\paragraph{3. Simplifying $\proj{Z}$}
Since $\proj{Z} = ZZ^\dagger$, we have
\[ \proj{Z} = \bmat{I \\ F} (I + F^*F)^{-1} \bmat{I & F^*}, \]
from which, it can be readily seen that
\begin{align} \nonumber
\bmat{ I & 0 }(I-\proj{Z}) \bmat{I \\ 0} = & \> I - (I+F^*F)^{-1} \\ \label{eqn:H}
= & \> F^*F (I+F^*F)^{-1}\equiv H.
\end{align}
Note that $H$ is positive semidefinite. To summarize the story so far, $GS_U^2 G^* \preceq H$.
\paragraph{4. Applying singular value inequalities} A straightforward SVD argument shows that the $j$-th singular value of $ H$ satisfies
\[ \sigma_j(H) = \sigma_j^2(F)/(1+\sigma_j^2(F)) \quad j=1,\dots,k. \]
The singular value inequalities~\cref{eqn:singprod} imply
\[ \sigma_j(F) \leq \sigma_1\!\left((\Sigma_\perp\Sigma_\perp^\top)^q\Sigma_\perp\Omega_2\Omega_1^\dagger\right)\sigma_j(\Sigma_k^{-(2q+1)}) \leq \left(\frac{\sigma_{k+1}}{\sigma_{k-j+1}}\right)^{2q+1} \normtwo{\Omega_2\Omega_1^\dagger} .\]
Plugging this inequality into the expression for $\sigma_j(H)$ gives
\[ \sigma_j(H) \> \leq \> \frac{\gamma_{k-j+1}^{4q+2} \normtwo{\Omega_2\Omega_1^\dagger}^2}{1 + \gamma_{k-j+1}^{4q+2} \normtwo{\Omega_2\Omega_1^\dagger}^2}\, \qquad j=1,\dots,k. \]
Since $GS_U^2 G^* \preceq H$, Weyl's theorem implies $\sin^2 \theta_{k-j+1} \leq \sigma_j(H)$.
Take square roots on both sides and rename $j \leftarrow k-j+1$ to get the desired result.
\textbf{Bounds for $\sin\nu_j$}: Let $GS_V^2G^*$ be the eigenvalue decomposition of $V_k^*(I-\proj{\widehat{V}})V_k$. Note that the diagonal entries of $S_V$ are the sines of the canonical angles $\angle(V_k,\widehat{V})$. Because $\widehat{V}$
is obtained from the thin SVD of $A^*Q$, we have $\mc{R}\,(A^*Q) =
\mc{R}\,(\widehat{V})$, and therefore $\proj{\widehat{V}} = \proj{A^*Q}$, since an orthogonal projection
matrix is uniquely determined by its range. Next, consider $\widehat{Z}$ defined as
\begin{equation}\widehat{Z} \equiv \Sigma^\top U^*Y\Omega_1^\dagger \Sigma_k^{-2q-2} = \begin{bmatrix}I \\ \widehat{F}\end{bmatrix} \qquad \widehat{F} \equiv (\Sigma_\perp^\top\Sigma_\perp)^{q+1}\Omega_2\Omega_1^\dagger\Sigma_k^{-2q-2}, \label{zhat}
\end{equation}
where, from $(AV)^*Q= \Sigma^*U^*Q $, it can be verified that
\[ \mc{R}\,(\widehat{Z} ) \subset \mc{R}\,(\Sigma^\top U^*Y) = \mc{R}\,(\Sigma^\top U^*Q) = \mc{R}\,((AV)^*Q). \]
Using an argument similar to~\cref{eqn:projz}, we obtain
\[ V_k^*V(I-\proj{\widehat{V}})V^* V_k \preceq V_k^*V(I-\proj{\widehat{Z}})V^* V_k = \bmat{I & 0} (I-\proj{\widehat{Z}}) \bmat{I \\0}. \]
The right hand side simplifies to $I - (I +\widehat{F}^*\widehat{F})^{-1}$. The rest of the proof is similar to that of the proof for $\sin\theta_j$.
\end{proof}
\begin{proof}[\cref{thm:theta2F}]
With the notation of \cref{thm:u}, we follow steps 1--3 of the proof to obtain $$GS_U^2 G^* \preceq H \preceq F^*F.$$
Since the square root preserves the partial ordering, $GS_U G^* \preceq (F^*F)^{1/2}$. {Note that $(F^*F)^{1/2}$ and $F$ have the same nonzero singular values. Therefore, $$\uninorm{\sin\angle(U_k,\widehat{U})} \leq\uninorm{(F^*F)^{1/2}} = \uninorm{F}.$$
By using strong sub-multiplicativity of the unitarily invariant norm, we have
$$\uninorm{\sin\angle(U_k,\widehat{U})} \leq \gamma_k^{2q}\normtwo{\Omega_2\Omega_1^\dagger} \frac{\uninorm{\Sigma_\perp}}{\sigma_k}.$$
}
\end{proof}
\subsection{Proofs of the theorems in~\cref{ssec:sintheta}}
\begin{proof}[\cref{thm:gensintheta}]
Let $X = (I-\proj{\widehat{U}_k})\proj{U_k}$ and $Y = (I-\proj{\widehat{V}_k})\proj{V_k}$. In decreasing order, the singular values of $X$ and $Y$ are $\{ \sin \theta_j'\}_{j=1}^k$ and $ \{\sin \nu_j'\}_{j=1}^k$ respectively. Let $B \equiv \widehat{A} - \widehat{A}_k$. First, we observe that
\begin{align*}
E_{12} = & \> (I-\proj{\widehat{U}_k})(A - \widehat{A})\proj{V_k} \\
= & \> (I-\proj{\widehat{U}_k})A_k - (I-\proj{\widehat{U}_k}) \widehat{A} \proj{V_k} \\
= &\> (I-\proj{\widehat{U}_k})\proj{U_k}A_k - (\widehat{A} - \widehat{A}_k)\proj{V_k} \\
=& \> XA_k - B(I-\proj{\widehat{V}_k})\proj{V_k} =XA_k - BY.
\end{align*}
A similar calculation shows that $E_{21} = X^*B - A_kY^*$. From the first relation, since $\mathsf{rank}\,(A) \geq k$, we have
\[ XA_k A_k^\dagger = (E_{12} + BY) A_k^\dagger.\]
But $A_k A_k^\dagger = \proj{U_k}$ and $X\proj{U_k} = X$. Applying~\cref{eqn:singprod}, we have
\[ \sigma_j(X) \leq (\|E_{12}\|_2 + \|B\|_2\|Y\|_2)/\sigma_{k-j+1}(A) \qquad j=1,\dots,k.\]
A similar argument gives
\[\sigma_j(Y) \leq (\normtwo{E_{21}} + \normtwo{B}\normtwo{X})/\sigma_{k-j+1}(A) \qquad j=1,\dots,k. \]
Combining these relations
\[ \max \{ \sigma_j(X), \sigma_j(Y)\} \leq \frac{\max\{ \|E_{21}\|_2, \|E_{12}\|_2\}}{\sigma_{k-j+1}(A)} + \frac{\|B\|_2}{\sigma_{k-j+1}(A)} \max\{\|X\|_2,\|Y\|_2\}. \]
Recognize that $\|B\|_2 = \sigma_{k+1}(\widehat{A})$.
Applying~\cref{eqn:sintheta} in the spectral norm simplifies the expression since
\[ \frac{1}{\sigma_{k-j+1}(A)}\left(1 + \frac{\sigma_{k+1}(\widehat{A})}{\sigma_{k}(A) - \sigma_{k+1}(\widehat{A})}\right) = \frac{\sigma_k(A)}{\sigma_{k-j+1}(A)(\sigma_{k}(A) - \sigma_{k+1}(\widehat{A}))}. \]
Therefore,
\[\max \{ \sigma_j(X), \sigma_j(Y)\} \leq \frac{\sigma_k(A)}{\sigma_{k-j+1}(A)}\frac{\max\{ \normtwo{E_{21}}, \normtwo{E_{12}}\}}{\zeta}.\]
Now $\sigma_j(X) = \sin\theta_{k-j+1}'$ and $\sigma_j(Y) = \sin\nu_{k-j+1}'$. Rename $j\leftarrow k-j+1$ to finish.
\end{proof}
\begin{proof}[\cref{thm:sintheta}] We tackle each case independently. \\
\textbf{Unitarily invariant norms}: Our proof proceeds in several steps, simplifying each term in~\cref{eqn:sintheta} and~\cref{eqn:e1}.
\paragraph{1. Simplifying the gap} Recall $\zeta = \sigma_k(A) - \sigma_{k+1}(\widehat{A})$ and $\widehat{A} = QQ^*A$. From the first part of~\cref{thm:sigma}
\[ \zeta = \sigma_k(A) - \sigma_{k+1}(\widehat{A}) \geq \sigma_k(A) - \sigma_{k+1}(A).\]
\paragraph{2. Simplifying $\uninorm{E_{12} }$} First observe that $A\proj{V_k} = A_k$. So $$E_{12} =(I-\proj{\widehat{U}_k})(I-QQ^*)A\proj{V_k} = (I-\proj{\widehat{U}_k})(I-QQ^*)A_k.$$ Then applying~\cref{eqn:nnorm_rankr} along with sub-multiplicativity gives
\[ \uninorm{E_{12} } \leq \uninorm{(I-QQ^*)A_k} \leq \gamma_k^{2q}\uninorm{\Sigma_\perp\Omega_2\Omega_1^\dagger } \leq \gamma_k^{2q} \uninorm{\Sigma_\perp} \normtwo{\Omega_2\Omega_1^\dagger }. \]
\paragraph{3. Simplifying $\uninorm{E_{21} }$} First, $E_{21} = \proj{U_k}(I-QQ^*)A\proj{\widehat{V}_k}$, and since $\normtwo{\proj{U_k}(I-QQ^*)} = \normtwo{\sin\angle(U_k,\widehat{U})}$,
\[ \uninorm{E_{21} } \leq \normtwo{\sin\angle(U_k,\widehat{U})} \uninorm{ (I-QQ^*)A},\]
because of strong sub-multiplicativity. Applying \cref{thm:u} and~\cref{eqn:nnorm_rankr}
\[ \uninorm{E_{21}} \leq \frac{\gamma_k^{2q+1} \normtwo{\Omega_2\Omega_1^\dagger}}{\sqrt{1+\gamma_k^{4q+2} \| \Omega_2\Omega_1^\dagger\|_2^2}} \left(1 + \gamma_k^{2q}\normtwo{\Omega_2\Omega_1^\dagger} \right)\uninorm{\Sigma_\perp}.\]
Let $\beta = \gamma_k^{2q}\normtwo{\Omega_2\Omega_1^\dagger}$. Then for $\beta \geq 0$, since $\gamma_k < 1$,
\[ \frac{\gamma_k(1+\beta)}{\sqrt{1+\gamma_k^2\beta^2}} \leq \frac{1+\gamma_k\beta}{\sqrt{1+\gamma_k^2\beta^2}} \leq \sqrt{2}, \]
where the last inequality holds because $(1+x)/\sqrt{1+x^2}$ attains its maximum value $\sqrt{2}$ at $x=1$.
Therefore, $\uninorm{E_{21}} \leq \sqrt{2}\gamma_k^{2q} \uninorm{\Sigma_\perp} \|\Omega_2\Omega_1^\dagger\|_2 $.
\paragraph{4. Putting everything together} Plugging in the intermediate quantities into~\cref{eqn:sintheta}, we have
\[ \max\left\{ \uninorm{ \sin \angle(U_k,\widehat{U}_k)} ,\uninorm{\sin\angle(V_k,\widehat{V}_k)} \right\} \leq \sqrt{2}\gamma_k^{2q} \normtwo{\Omega_2\Omega_1^\dagger} \frac{ \uninorm{\Sigma_\perp}}{\sigma_{k}-\sigma_{k+1}}. \]
Dividing the numerator and denominator by $\sigma_k$ proves the stated result for unitarily invariant norms.
\textbf{Spectral/Frobenius norms}: Let $\norm{\cdot}{\xi}$ denote either the spectral or the Frobenius norm. The first two steps are identical to the proof for unitarily invariant norms. For the third step, using~\cref{eqn:nnorm_rankr}
\[ \norm{E_{21}}{\xi} \leq \frac{\gamma_k^{2q+1} \| \Omega_2\Omega_1^\dagger\|_2}{\sqrt{1+\gamma_k^{4q+2} \| \Omega_2\Omega_1^\dagger\|_2^2}} \norm{\Sigma_\perp}{\xi} \sqrt{1+\gamma_k^{4q} \normtwo{\Omega_2\Omega_1^\dagger}^2}.\]
With $\beta$ defined as before, since $\gamma_k < 1$, ${\sqrt{\gamma_k^2+\gamma_k^2\beta^2}}/{\sqrt{1+\gamma_k^2\beta^2}} \leq 1$.
Therefore, $$\norm{E_{21}}{\xi} \leq \gamma_k^{2q} \norm{\Sigma_\perp}{\xi}\normtwo{\Omega_2\Omega_1^\dagger}.$$ The rest of the proof is the same.
\textbf{Canonical angles}: The proof combines~\cref{thm:gensintheta} with the above analysis for the spectral norm. The right hand side contains the term
$$\max\left\{ \normtwo{ \sin \angle(U_k,\widehat{U}_k)} ,\normtwo{\sin\angle(V_k,\widehat{V}_k)} \right\}.$$
The rest of the proof involves some simple manipulations.
\end{proof}
\begin{proof}[\cref{t_singleuv}]
We first address~\cref{eqn:singleuv}. Following the steps of the proof of~\cref{thm:u}, we have
\[ \sin^2 \angle (u_j,\widehat{U}) = u_j^*U(I-\proj{U^*Q})U^*u_j \preceq \bmat{e_j^\top & 0 } (I - \proj{Z}) \bmat{e_j \\ 0}, \]
where $e_j$ is the $j$--th column of the $k\times k$ identity matrix. Therefore, we have $\sin^2 \angle (u_j,\widehat{U}) \leq e_j^\top He_j$,
where $H$ was defined in~\cref{eqn:H}. The inequality $H \preceq F^*F$ implies
\begin{align*}
\sin^2 \angle (u_j,\widehat{U})
\leq& \>\sigma_{j}^{-4q-2} \normtwo{(\Sigma_\perp\Sigma_\perp^\top)^q\Sigma_\perp(\Omega_2\Omega_1^\dagger)e_j}^2 \\
\leq & \> \gamma_{j}^{4q+2}\normtwo{\Omega_2\Omega_1^\dagger}^2.
\end{align*}
Taking square-roots on both sides gives the desired result. The strategy for bounding the canonical angles $\sin\angle (v_j,\widehat{V})$ is very similar and is omitted.
We now address~\cref{eqn:svs}, which is a straightforward application of~\cite[Theorem 2.5]{hochstenbach2004harmonic}. Let $\proj{\mc{U}} = QQ^*$ and $\proj{\mc{V}} = I$. Then, in our notation, this result takes the form
\[ \max\left\{\sin\angle(u_j,\widehat{u}_j),\sin\angle(v_j,\widehat{v}_j)\right\} \leq \sqrt{1 + 2\frac{\tilde\gamma'^2}{\tilde\delta^2}}\max\left\{\sin\angle(u_j,\widehat{U}),\sin\angle(v_j,I)\right\}, \]
where $\tilde\gamma' = \max\{0, \normtwo{(I-QQ^*)A}\}$ and $\tilde\delta$ is as defined in the statement of the theorem. \cref{thm:lowrank} for the spectral norm implies $\tilde\gamma'\leq \tilde\gamma$, whereas \cref{t_singleuv} implies
\[ \max\left\{\sin\angle(u_j,\widehat{U}),\sin\angle(v_j,I)\right\} \leq \gamma_j^{2q}\normtwo{\Omega_2\Omega_1^\dagger}.\]
Plug in the intermediate steps to obtain the desired bound.
\end{proof}
\begin{proof}[\cref{thm:can_gauss}]
For the quantity $\normtwo{\Omega_2\Omega_1^\dagger}$ appearing in~\cref{thm:u}, bounds are available in the literature. From the proof of~\cite[Theorem 10.6]{HMT09} we find the inequality
$$\mb{E}\,\normtwo{\Omega_2\Omega_1^\dagger} \leq C_e,$$ where the constant $C_e$ was defined in~\cref{eqn:ce}. Let $\alpha > 0$ be a constant. The map $x \mapsto x/\sqrt{1+\alpha x^2}$ is concave and increasing on $[0,\infty)$. Therefore, by Jensen's inequality, the results in expectation follow.
For the concentration inequalities,~\cite[Theorem 5.8]{gu2015subspace} showed that $\normtwo{\Omega_2\Omega_1^\dagger} \leq C_d$ with probability at least $1-\delta$. Here, $C_d$ was defined in~\cref{eqn:cd}. Plug into~\cref{thm:u} to obtain the desired bounds.
\end{proof}
\subsection{Proofs of~\cref{sec:aux} Theorems}
\begin{proof}[\cref{thm:lowrank_schatten}]
\textbf{Proof of~\cref{eqn:nnorm}}:
Using the unitary invariance of the norms
\[ \uninorm{(I-\proj{Q})A} = \uninorm{(I-\proj{U^*Q})\Sigma} = \uninorm{(\Sigma^\top(I-\proj{U^*Q})\Sigma)^{1/2}}.\]
We use~\cref{eqn:projz} combined with \cref{lemma:schatten} to obtain
$$\uninorm{(\Sigma^\top(I-\proj{U^*Q})\Sigma)^{1/2}} \leq \uninorm{(\Sigma^\top(I-\proj{Z})\Sigma)^{1/2}}.$$
With $M_1 \equiv I-(I+F^*F)^{-1}$ and $M_2 \equiv I - F(I+F^*F)^{-1}F^*$, the matrix $\Sigma^\top(I-\proj{Z})\Sigma$ simplifies as
\begin{equation}\label{eqn:block} \Sigma^\top (I-\proj{Z})\Sigma = \bmat{\Sigma_k M_1\Sigma_k & * \\ * & \Sigma_\perp^\top M_2\Sigma_\perp}.\end{equation}
The square root function is concave on $[0,\infty)$ and $\Sigma^\top(I-\proj{Z})\Sigma$ is positive semidefinite. Therefore, an extension to Rotfel'd's theorem says~\cite[Theorem 2.1]{lee2011extension}
\[ \uninorm{(\Sigma^\top(I-\proj{Z})\Sigma)^{1/2}} \leq \uninorm{(\Sigma_k^\top M_1\Sigma_k)^{1/2}} + \uninorm{(\Sigma_\perp^\top M_2\Sigma_\perp)^{1/2}}. \]
Using the inequalities $M_1 \preceq F^*F$ and $M_2\preceq I$, along with \cref{lemma:schatten}, gives
\begin{equation}
\begin{split}
\uninorm{(I-\proj{Q})A} \leq & \> \uninorm{(\Sigma_kF^*F\Sigma_k)^{1/2}} + \uninorm{(\Sigma_\perp^\top\Sigma_\perp)^{1/2}}\\
\leq & \> \uninorm{F\Sigma_k} + \uninorm{\Sigma_\perp}.
\end{split}
\end{equation}
Use $F\Sigma_k = (\Sigma_\perp\Sigma_\perp^\top)^q\Sigma_\perp \Omega_2\Omega_1^\dagger \Sigma_{k}^{-2q}$ and sub-multiplicativity to obtain the advertised bounds.
\textbf{Proof of~\cref{eqn:nnorm_rankr}}: The proof for~\cref{eqn:nnorm_rankr} is similar and is omitted. The main observation is that $A_k$ has only $k$
nonzero singular values.
\textbf{Proof of~\cref{eqn:nnorm_rankr2}}: We follow the strategy in~\cite[Section 3.3]{drineas2018low}. Recall that $B_k$ is the best rank-$k$ approximation to $B = Q^*A$. With the notation in~\cref{alg:truncation}, note that
$$QB_k = Q\widehat{U}_{B,k} \widehat{U}_{B,k}^*B = \widehat{U}_k \widehat{U}_{k}^*A = \proj{\widehat{U}_k}A.$$ The triangle inequality then gives
\[ \uninorm{(I-\proj{\widehat{U}_k})A} \leq \uninorm{(I-\proj{\widehat{U}_k})A_k} + \uninorm{(I-\proj{\widehat{U}_k})A_{\perp}}. \]
Since $A_k = \proj{U_k}A_k$, applying strong sub-multiplicativity
\[ \uninorm{(I-\proj{\widehat{U}_k})A} \leq \uninorm{(I-\proj{\widehat{U}_k})\proj{U_k}} \normtwo{A_k} + \uninorm{A_{\perp}}.\]
We recognize that $\uninorm{(I-\proj{\widehat{U}_k})\proj{U_k}} = \uninorm{\sin\angle(U_k,\widehat{U}_k)}$ and apply~\cref{thm:sintheta} to complete the proof.
\end{proof}
\begin{proof}[\cref{thm:lowrank}]
The proof is similar to that of~\cref{thm:lowrank_schatten}. Consider the term of interest $\schattenp{ (I-QQ^*)A}^2$, which can be simplified to
\[ \schattenp{ (I-QQ^*)A }^2 = \uninorm{A^*(I-QQ^*) A}_{p/2} = \uninorm{ \Sigma^\top (I - \proj{U^*Q}) \Sigma}_{p/2}.\]
The first equality holds only for $p\geq 2$, whereas the last equality follows because of the unitary invariance. As in the proof of~\cref{thm:u}, we have
\[ Z = U^*Y\Omega_1^\dagger \Sigma_k^{-(2q+1)}\qquad F = (\Sigma_\perp \Sigma_\perp^\top)^q \Sigma_\perp\Omega_2 \Omega_1^\dagger \Sigma_k^{-(2q+1)}.\]
The use of~\cref{eqn:projz} and~\cref{lemma:schatten} ensures
\[\uninorm{ \Sigma^\top (I - \proj{U^*Q}) \Sigma}_{p/2} \leq \uninorm{\Sigma^\top (I - \proj{Z}) \Sigma}_{p/2}.\]
We apply~\cite[Theorem 2.1]{lee2011extension} to~\cref{eqn:block} with $f(t) =t$ to obtain
\begin{equation*}
\begin{aligned}\uninorm{\Sigma^\top (I - \proj{Z}) \Sigma}_{p/2} \leq & \> \uninorm{\Sigma_k M_1 \Sigma_k}_{p/2} +\uninorm{\Sigma_\perp^\top M_2 \Sigma_\perp}_{p/2}\\
\leq & \>\uninorm{\Sigma_k F^*F \Sigma_k}_{p/2} + \uninorm{\Sigma_\perp^\top \Sigma_\perp}_{p/2}\\
= & \> \schattenp{F\Sigma_k}^2 + \schattenp{\Sigma_\perp}^2, \end{aligned} \end{equation*}
where we have used $M_1 \preceq F^*F$ and $M_2 \preceq I$. The rest of the proof is similar to that of~\cref{thm:lowrank_schatten}.
\end{proof}
\begin{proof}[\cref{thm:sigma}]
The proof makes heavy use of the partial ordering reviewed at the start of \cref{sec:proofs}. From the inequality $I \succeq QQ^*$, the conjugation rule gives $$A^*A \succeq A^*QQ^*A.$$ Then, Weyl's theorem implies $\lambda_j(A^*A) \geq \lambda_j(A^*QQ^*A)$ for $j=1,\dots,k$. Relating the eigenvalues to the singular values proves the first inequality.
For the second inequality consider again $A^*QQ^*A$. With the aid of~\cref{eqn:projz}
\begin{equation}\label{eqn:inters} A^*QQ^*A = V\Sigma^\top\proj{U^*Q}\Sigma V^* \succeq V\Sigma^\top\proj{Z}\Sigma V^*. \end{equation}
Therefore, $\lambda_j(A^*QQ^*A) \geq \lambda_j(V\Sigma^\top\proj{Z}\Sigma V^*)$ for $j =1,\dots,k$. Since $V\Sigma^\top\proj{Z}\Sigma V^*$ and $\Sigma^\top\proj{Z}\Sigma$ are similar, they share the same eigenvalues. It can be readily shown that
\[ \Sigma^\top\proj{Z}\Sigma = \bmat{ \Sigma_k (I+F^*F)^{-1}\Sigma_k & * \\ * & *} .\]
For $j=1,\dots,k$, the eigenvalues of $A^*QQ^*A$ satisfy
\begin{equation}\label{eqn:ordering} \lambda_j(A^*QQ^*A) \geq \lambda_j(V\Sigma^\top\proj{Z}\Sigma V^*) \geq \lambda_j(\Sigma_k (I+F^*F)^{-1}\Sigma_k).
\end{equation}
The second inequality follows from the Cauchy interlacing theorem~\cite[Section 10-1]{Par80}. Applying the properties of partial ordering, we obtain
\[ F^*F \preceq \sigma_{k+1}^{4q+2} \normtwo{\Omega_2\Omega_1^\dagger}^2 \Sigma_k^{-(4q+2)} = \normtwo{\Omega_2\Omega_1^\dagger}^2 \Gamma_k^{4q+2}, \]
where $\Gamma_k = \mathsf{diag}\,(\gamma_1,\dots,\gamma_k)$ is a diagonal matrix with the singular value gaps. Furthermore,
\[ \Sigma_k (I+F^*F)^{-1}\Sigma_k \succeq \Sigma_k(I + \normtwo{\Omega_2\Omega_1^\dagger}^2\Gamma_k^{4q+2})^{-1}\Sigma_k.\]
The matrix on the right hand side is diagonal, so its eigenvalues can be read off the diagonal; combined with~\cref{eqn:ordering}, this gives for $j=1,\dots,k$
\[ \sigma_j^2(Q^*A) = \lambda_j(A^*QQ^*A) \geq \lambda_j(\Sigma_k (I+F^*F)^{-1}\Sigma_k) \geq \frac{\sigma_j^2(A)}{1 + \normtwo{\Omega_2\Omega_1^\dagger}^2 \gamma_{j}^{4q+2}}.\]
Taking square-roots, we obtain the desired result.
\end{proof}
\section{Numerical Results}
\subsection{Test matrices}\label{ssec:test} To demonstrate the performance of the bounds, we use the following test matrices
\begin{enumerate}
\item \textbf{Controlled gap} The first set of test matrices $A \in \mathbb{R}^{3000\times 300}$ are constructed using the formula
\[ A = \sum_{j=1}^{r}\frac{\text{gap}}{j} \, x_j y_j^\top +
\sum_{j=r+1}^{300}\frac{1}{j} \,x_j y_j^\top,\]
where $x_j \in \mathbb{R}^{3000}$ and $y_j \in \mathbb{R}^{300}$ are sparse random vectors with non-negative entries generated using the MATLAB commands \verb|sprand(3000,1,0.025)| and \verb|sprand(300,1,0.025)|, respectively. The formula above is not an SVD, since the vectors do not form an orthonormal set. Nonetheless, the singular values decay like $1/j$, and the gap between singular values $15$ and $16$ is controlled by the parameter $\text{gap}$. We consider three cases:
\begin{enumerate}
\item Small gap (GapSmall) $\text{gap} = 1$,
\item Medium gap (GapMedium) $\text{gap} = 2$,
\item Large gap (GapLarge) $\text{gap} = 10$.
\end{enumerate}
\begin{figure}[!ht]\centering
\includegraphics[scale=0.25]{figs/sv_gap}
\includegraphics[scale=0.25]{figs/sv_noise}
\includegraphics[scale=0.25]{figs/sv_decay}
\caption{Singular values of the matrices from the (left) `Controlled Gap' example, (right) `Low-rank plus noise' example, (below) `Low-rank plus decay' example. }
\label{fig:svs}
\end{figure}
\item \textbf{Low-rank plus noise} The matrices are of the form
\[ A = \begin{bmatrix} I_r & 0 \\ 0 & 0\end{bmatrix} + \sqrt{\frac{\gamma_n r}{2n^2}}(G + G^\top),\]
where $G \in \mathbb{R}^{n\times n}$ is a random Gaussian matrix.
We consider three cases:
\begin{enumerate}
\item Small noise (NoiseSmall) $\gamma_n = 10^{-2}$,
\item Medium noise (NoiseMedium) $\gamma_n = 10^{-1}$,
\item Large noise (NoiseLarge) $\gamma_n = 1$.
\end{enumerate}
\item \textbf{Low-rank plus decay} The matrices take the form
\[ A = U\mathsf{diag}\,(\underbrace{1,1,\dots,1}_r,2^{-d},3^{-d},\dots,(n-r+1)^{-d})V^*.\]
The unitary matrices $U,V$ are obtained by drawing a random Gaussian matrix, and taking its QR factorization. We distinguish between the following cases
\begin{enumerate}
\item Slow decay (DecaySlow): $d = 0.5$,
\item Medium decay (DecayMedium): $d = 1.0$,
\item Fast decay (DecayFast): $d=2.0$.
\end{enumerate}
\end{enumerate}
The first example is adapted from~\cite{sorensen2016deim}, whereas the second and third examples are drawn from~\cite{tropp2017practical}. In all the examples, the random matrices were fixed by setting the random seed, and we set the parameter $r=15$. The singular values of all the test matrices are plotted in~\cref{fig:svs}.
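For readers who wish to reproduce the test matrices outside MATLAB, the constructions can be sketched in Python/NumPy as follows. This is only an illustrative sketch: the function names, the dense stand-in for \verb|sprand|, and the small sizes used below are ours, not the paper's scripts.

```python
import numpy as np

def sprand_vec(n, density, rng):
    # rough stand-in for MATLAB's sprand(n,1,density): a vector with
    # roughly density*n uniformly random nonnegative entries, zeros elsewhere
    v = np.zeros(n)
    k = max(1, int(density * n))
    v[rng.choice(n, size=k, replace=False)] = rng.random(k)
    return v

def controlled_gap(gap, m=3000, n=300, r=15, rng=None):
    # sum of sparse rank-one terms; singular values decay roughly like 1/j,
    # with a gap controlled by `gap` between indices r and r+1
    rng = np.random.default_rng(0) if rng is None else rng
    A = np.zeros((m, n))
    for j in range(1, n + 1):
        c = gap / j if j <= r else 1.0 / j
        A += c * np.outer(sprand_vec(m, 0.025, rng), sprand_vec(n, 0.025, rng))
    return A

def low_rank_plus_noise(gamma_n, n=1000, r=15, rng=None):
    # rank-r identity block plus a scaled symmetric Gaussian perturbation
    rng = np.random.default_rng(0) if rng is None else rng
    G = rng.standard_normal((n, n))
    A = np.zeros((n, n))
    A[:r, :r] = np.eye(r)
    return A + np.sqrt(gamma_n * r / (2.0 * n**2)) * (G + G.T)

def low_rank_plus_decay(d, m=1000, n=1000, r=15, rng=None):
    # exact SVD with r unit singular values followed by algebraic decay j^{-d}
    rng = np.random.default_rng(0) if rng is None else rng
    U, _ = np.linalg.qr(rng.standard_normal((m, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.concatenate([np.ones(r), np.arange(2.0, n - r + 2.0) ** -d])
    return U @ np.diag(s) @ V.T
```

Because the last family is an exact SVD by construction, its computed singular values should match the prescribed diagonal to machine precision.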
\begin{figure}[!ht]%
\centering
\subfigure[GapSmall]{%
\label{fig:cana1}%
\includegraphics[scale=0.24]{figs/canon_gap_1}}%
\subfigure[GapMedium]{%
\label{fig:canb1}%
\includegraphics[scale=0.24]{figs/canon_gap_2}}%
\subfigure[GapLarge]{%
\label{fig:canc1}%
\includegraphics[scale=0.24]{figs/canon_gap_3}}%
\\
\subfigure[NoiseSmall]{%
\label{fig:cana2}%
\includegraphics[scale=0.24]{figs/canon_noise_1}}%
\subfigure[NoiseMedium]{%
\label{fig:canb2}%
\includegraphics[scale=0.24]{figs/canon_noise_2}}%
\subfigure[NoiseLarge]{%
\label{fig:canc2}%
\includegraphics[scale=0.24]{figs/canon_noise_3}}%
\\
\subfigure[DecaySlow]{%
\label{fig:cana3}%
\includegraphics[scale=0.24]{figs/canon_decay_1}}%
\subfigure[DecayMedium]{%
\label{fig:canb3}%
\includegraphics[scale=0.24]{figs/canon_decay_2}}%
\subfigure[DecayFast]{%
\label{fig:canc3}%
\includegraphics[scale=0.24]{figs/canon_decay_3}}%
\caption{Plots of $\sin \theta_j $ for $j=1,\dots,k$. The test matrices were described in \cref{ssec:test}. The target rank $k=25$ and an oversampling parameter of $20$ were used for all the experiments. The solid lines correspond to the computed values, and the dashed lines correspond to bounds obtained using~\cref{thm:u}. The parameter $q$ corresponds to the number of subspace iterations. }
\label{fig:can}
\end{figure}
\subsection{Canonical angles}
For the first numerical example, we use the $9$ test matrices in~\cref{ssec:test}. For each matrix, we chose an oversampling parameter $\rho=20$ and the target rank $k$ was chosen to be $25$. The starting guess $\Omega$ was taken to be a random Gaussian matrix.
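The experimental setup just described can be sketched as follows. This is an illustrative reimplementation, not the paper's script: the basic subspace iteration without intermediate re-orthonormalization and the synthetic test matrix are our simplifications.

```python
import numpy as np

def rand_subspace_iter(A, ell, q, rng):
    # basic randomized subspace iteration: Q = orth((A A^T)^q A Omega),
    # with a Gaussian starting guess and no intermediate re-orthonormalization
    Y = A @ rng.standard_normal((A.shape[1], ell))
    for _ in range(q):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    return Q

def sin_canonical_angles(Uk, Q):
    # sines of the canonical angles between range(Uk) and range(Q):
    # the singular values of (I - Q Q^T) Uk
    return np.linalg.svd(Uk - Q @ (Q.T @ Uk), compute_uv=False)

# synthetic matrix with geometrically decaying singular values
rng = np.random.default_rng(0)
m, n, k, rho = 200, 100, 25, 20
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(0.8 ** np.arange(n)) @ V.T

sines = {q: sin_canonical_angles(U[:, :k], rand_subspace_iter(A, k + rho, q, rng))
         for q in (0, 1, 2)}
```

As in the figures, the computed angles shrink as the number of iterations $q$ grows.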
\subsubsection{No extraction} We plot the canonical angles $\sin\angle(U_k,\widehat{U})$ in solid lines; the corresponding bounds from~\cref{thm:u} are plotted in dashed lines. The results are displayed in~\cref{fig:can}. We make the following general observations:
{
\begin{itemize}
\item The influence of the subspace iterations on the canonical angles is clear: the angles become smaller as the number of iterations $q$ increases. This implies that the subspace is becoming more accurate.
\item If there is a large singular value gap in the spectrum, this means that all the canonical angles below that index are captured accurately. This is prominently seen in~\cref{fig:canc1}, in which there is a large gap between singular values $15$ and $16$. Similar observations can be made in the other figures.
\item As the decay rate of the singular values increases, the corresponding canonical angles become smaller.
\item In most figures the bounds are qualitatively informative, but in some figures, the bounds are also quantitatively accurate (e.g., GapLarge).
\item Similar results were observed for $\sin\angle(V_k,\widehat{V})$ and, therefore, omitted.
\end{itemize}
}
{
We now make observations specific to the test examples:
\begin{description}
\item [1. Gap examples] The computed canonical angles decrease as the gap increases, and with more iterations. The test matrices (GapMedium and GapLarge) have both a decay in the singular values and a prominent singular value gap between indices $15$ and $16$. These matrices satisfy the assumptions of our analysis, and therefore the bounds can be expected to be good. We see that as the size of the gap increases, the bounds become more accurate, in accordance with~\cref{thm:u}. GapSmall has decay in the singular values but no special singular value gap. Even in this case, the bounds are qualitatively good.
\item [2. Noise examples] NoiseSmall is close to a low-rank matrix and there is a large singular value gap at index $15$. For this example, the bounds are qualitatively good. As the level of noise increases, the gap decreases and therefore, the computed angles increase, as predicted by~\cref{thm:u}. The bounds are uninformative for $q=0$, but qualitatively good for $q=1$ and $2$. Compared to the Gap examples, the bounds are not as sharp since there is very little decay in the singular values.
\item [3. Decay examples] In these examples, the singular values decay beyond index $15$ but there is no prominent gap. As the rate of decay increases, in general, the canonical angles decrease. It is also seen that the bounds are qualitatively accurate (except for $q=0$).
\end{description}
}
\subsubsection{Extraction step} Our next experiment tests the effect of the extraction step on the accuracy of the canonical angles. We now compute $\sin\theta_j'$ and $\sin\nu_j'$ for the test matrices described in~\cref{ssec:test}. We plot the quantities $\max\{\sin\theta_j',\sin\nu_j'\}$ for $j=1,\dots,k$ in solid lines. The corresponding bounds from~\cref{thm:sintheta} are plotted in dashed lines. Here, the target rank was chosen to be $k=15$, to exploit the singular value gap in the matrices. We make the following general observations:
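A minimal sketch of this extraction experiment is given below (the synthetic matrix, parameter values, and variable names are ours, not the paper's scripts): build a matrix with a factor-of-two gap at index $k$, run the sampling phase, truncate $B = Q^*A$ to rank $k$, and compare the singular subspaces.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, rho, q = 200, 100, 15, 20, 1
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.concatenate([np.ones(k), 0.5 * 0.8 ** np.arange(n - k)])  # gap at index k
A = U @ np.diag(s) @ V.T

# sampling phase: Q spans (A A^T)^q A Omega
Y = A @ rng.standard_normal((n, k + rho))
for _ in range(q):
    Y = A @ (A.T @ Y)
Q, _ = np.linalg.qr(Y)

# extraction: the best rank-k approximation of B = Q^T A yields Uhat_k, Vhat_k
Ub, _, Vbt = np.linalg.svd(Q.T @ A, full_matrices=False)
Uhat_k, Vhat_k = Q @ Ub[:, :k], Vbt[:k, :].T

# sin(theta_j'): singular values of (I - P_{Uhat_k}) P_{U_k}; similarly sin(nu_j')
sin_theta = np.linalg.svd(U[:, :k] - Uhat_k @ (Uhat_k.T @ U[:, :k]), compute_uv=False)
sin_nu = np.linalg.svd(V[:, :k] - Vhat_k @ (Vhat_k.T @ V[:, :k]), compute_uv=False)
```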
{
\begin{itemize}
\item The extraction step did not significantly affect the canonical angles and the accuracy is comparable to~\cref{fig:can}. The subspaces are more accurate as the number of iterations increase, and if there is a large singular value gap at index $j$, then the canonical angles with index $j' < j$ are captured accurately.
\item Although the canonical angles are small, compared to~\cref{thm:u}, the bounds in~\cref{thm:sintheta} are not as accurate. One reason is that the upper bounds in~\cref{thm:u} are at most $1$, but the bounds in~\cref{thm:sintheta} are allowed to be greater than $1$. Furthermore, the bound in~\cref{thm:sintheta} has the factor $1-\gamma_k$ in its denominator, which makes it quite large when there is a small singular value gap. It may be possible to derive better bounds, but we could not immediately see how to derive them.
\item We also compared the accuracy of the individual singular vectors (not shown here). The results and the conclusions are similar.
\end{itemize}
}
{
We now make observations specific to the test examples:
\begin{description}
\item [1. Gap examples] The behavior of the computed canonical angles is very similar to that without the extraction step. In general, the angles decrease as the gap increases. When the parameter $\text{gap}$ is small, the singular value ratio $\gamma_k$ is close to one, and $(1-\gamma_k)^{-1}$ is large. This explains why the bounds are bad for GapSmall and GapMedium, and show little improvement with more subspace iterations. Only for the GapLarge example with $q=0$ are the bounds qualitatively good.
\item [2. Noise examples] The computed canonical angles decrease as the noise decreases. In all three examples, the bounds are qualitatively good. The bounds are better for NoiseSmall and NoiseMedium because the singular value gap between indices $15$ and $16$ is bigger than that for NoiseLarge.
\item [3. Decay examples] The computed canonical angles become smaller as the decay of the singular values increases. In these examples, there is no prominent gap, so the bounds don't capture the behavior well. However, the computed angles are small, and the subspace is accurate.
\end{description}
}
\begin{figure}[!ht]%
\centering
\subfigure[GapSmall]{%
\label{fig:exta1}%
\includegraphics[scale=0.24]{figs/extract_gap_1}}%
\subfigure[GapMedium]{%
\label{fig:extb1}%
\includegraphics[scale=0.24]{figs/extract_gap_2}}%
\subfigure[GapLarge]{%
\label{fig:extc1}%
\includegraphics[scale=0.24]{figs/extract_gap_3}}%
\\
\subfigure[NoiseSmall]{%
\label{fig:exta2}%
\includegraphics[scale=0.24]{figs/extract_noise_1}}%
\subfigure[NoiseMedium]{%
\label{fig:extb2}%
\includegraphics[scale=0.24]{figs/extract_noise_2}}%
\subfigure[NoiseLarge]{%
\label{fig:extc2}%
\includegraphics[scale=0.24]{figs/extract_noise_3}}%
\\
\subfigure[DecaySlow]{%
\label{fig:exta3}%
\includegraphics[scale=0.24]{figs/extract_decay_1}}%
\subfigure[DecayMedium]{%
\label{fig:extb3}%
\includegraphics[scale=0.24]{figs/extract_decay_2}}%
\subfigure[DecayFast]{%
\label{fig:extc3}%
\includegraphics[scale=0.24]{figs/extract_decay_3}}%
\caption{Plots of $\max\{\sin \theta_j',\sin\nu_j'\} $ for $j=1,\dots,k$. The test matrices were described in~\cref{ssec:test}. The target rank $k=15$ and an oversampling parameter of $20$ were used for all the experiments. The solid lines correspond to the computed values, and the dashed lines correspond to bounds obtained using~\cref{thm:sintheta}. The parameter $q$ corresponds to the number of subspace iterations.}
\label{fig:ext}
\end{figure}
\subsection{Singular Values} We now consider the accuracy of the singular values. We use the same test matrices and the remaining parameters are kept fixed. The computed singular values are plotted against the upper and lower bounds. We make the following general observations:
{
\begin{itemize}
\item For the large singular values, both the upper and lower bounds are qualitatively good for all the examples that we tested.
\item As the number of iterations increases, the singular values are computed more accurately and are close to the upper bounds (the exact singular values). However, for indices close to the target rank, the lower bounds are not tight; they get tighter as the number of iterations $q$ increases.
\item The bounds for the singular values are quantitatively better than the bounds for the canonical angles.
\end{itemize}
}
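The singular-value comparison can be checked directly against \cref{thm:sigma}. The following sketch (our illustrative parameters and synthetic matrix, not the paper's scripts) verifies both the upper bound $\sigma_j(Q^*A)\leq\sigma_j(A)$ and the lower bound of \cref{thm:sigma} for $j=1,\dots,k$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, rho, q = 200, 100, 25, 20, 1
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)
A = U @ np.diag(s) @ V.T  # exact SVD by construction

# sampling phase: Q spans (A A^T)^q A Omega
Omega = rng.standard_normal((n, k + rho))
Y = A @ Omega
for _ in range(q):
    Y = A @ (A.T @ Y)
Q, _ = np.linalg.qr(Y)
s_hat = np.linalg.svd(Q.T @ A, compute_uv=False)

# bound ingredients: c = ||Omega_2 Omega_1^dagger||_2 and gamma_j = sigma_{k+1}/sigma_j
W = V.T @ Omega          # rows 0..k-1 give Omega_1, the rest give Omega_2
c = np.linalg.norm(W[k:] @ np.linalg.pinv(W[:k]), 2)
gamma = s[k] / s[:k]
lower = s[:k] / np.sqrt(1.0 + c**2 * gamma ** (4 * q + 2))
# sigma_j(A) >= sigma_j(Q^*A) >= lower[j-1] for j = 1, ..., k
```

Since the bound is structural (it holds for any starting guess with $\Omega_1$ of full row rank), the comparison must hold up to floating-point error, not just with high probability.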
\begin{figure}[!ht]%
\centering
\subfigure[GapSmall, $q=0$]{%
\label{fig:svsa1}%
\includegraphics[scale=0.24]{figs/svs_gap_q_0}}%
\subfigure[GapSmall, $q=1$]{%
\label{fig:svsb1}%
\includegraphics[scale=0.24]{figs/svs_gap_q_1}}%
\subfigure[GapSmall, $q=2$]{%
\label{fig:svsc1}%
\includegraphics[scale=0.24]{figs/svs_gap_q_2}}%
\\
\subfigure[NoiseMedium, $q=0$]{%
\label{fig:svsa2}%
\includegraphics[scale=0.24]{figs/svs_noise_q_0}}%
\subfigure[NoiseMedium, $q=1$]{%
\label{fig:svsb2}%
\includegraphics[scale=0.24]{figs/svs_noise_q_1}}%
\subfigure[NoiseMedium, $q=2$]{%
\label{fig:svsc2}%
\includegraphics[scale=0.24]{figs/svs_noise_q_2}}%
\\
\subfigure[DecayMedium, $q=0$]{%
\label{fig:svsa3}%
\includegraphics[scale=0.24]{figs/svs_decay_q_0}}%
\subfigure[DecayMedium, $q=1$]{%
\label{fig:svsb3}%
\includegraphics[scale=0.24]{figs/svs_decay_q_1}}%
\subfigure[DecayMedium, $q=2$]{%
\label{fig:svsc3}%
\includegraphics[scale=0.24]{figs/svs_decay_q_2}}%
\caption{Plots of the singular values. The test matrices were described in \cref{ssec:test}. The target rank $k=25$ and an oversampling parameter of $\rho=20$ were used for all the experiments. The solid black and blue lines correspond to the upper and lower bounds respectively, and the dashed red lines correspond to bounds obtained using \cref{thm:sigma}. The parameter $q$ corresponds to the number of subspace iterations.}
\end{figure}
{
We now make observations specific to the test examples:
\begin{description}
\item [1. GapSmall] In these examples, the large singular values are captured accurately. As the number of iterations increase, both the lower bound and the approximate singular values approach the true singular values (upper bound). For GapMedium and GapLarge, the bounds were much more accurate.
\item [2. NoiseMedium] There is a qualitatively different behavior before and after indices $15-16$. The upper and lower bounds are tight before index $15$, but only the upper bound is tight after index $16$. The lower bound significantly under-predicts the singular values.
\item [3. DecayMedium] Similar to the previous example, the lower bounds are good before index $15$, and improve with number of iterations after index $15$.
\end{description}
}
\section{Acknowledgments}
The author would like to thank Ilse C.F.\ Ipsen and Andreas Stathopoulos for helpful conversations. He would also like to acknowledge Ivy Huang for her help with the figures.
% Source metadata:
% arXiv:1804.02614, https://arxiv.org/abs/1804.02614 (timestamp: 2018-11-13)
% Title: Randomized subspace iteration: Analysis of canonical angles and unitarily invariant norms
% Subjects: Numerical Analysis (math.NA)
% Abstract: This paper is concerned with the analysis of the randomized subspace iteration for the computation of low-rank approximations. We present three different kinds of bounds. First, we derive bounds for the canonical angles between the exact and the approximate singular subspaces. Second, we derive bounds for the low-rank approximation in any unitarily invariant norm (including the Schatten-p norm). This generalizes the bounds for Spectral and Frobenius norms found in the literature. Third, we present bounds for the accuracy of the singular values. The bounds are structural in that they are applicable to any starting guess, be it random or deterministic, that satisfies some minimal assumptions. Specialized bounds are provided when a Gaussian random matrix is used as the starting guess. Numerical experiments demonstrate the effectiveness of the proposed bounds.
% Source: https://arxiv.org/abs/2102.00996
% Title: Compositions that are palindromic modulo $m$
% Abstract: In recent work, G. E. Andrews and G. Simay prove a surprising relation involving parity palindromic compositions, and ask whether a combinatorial proof can be found. We extend their results to a more general class of compositions that are palindromic modulo $m$, that includes the parity palindromic case when $m=2$. We then provide combinatorial proofs for the cases $m=2$ and $m=3$.
\section{Introduction}
Let $\sigma=(\sigma_1,\sigma_2,\ldots,\sigma_k)$ be a sequence of positive integers such that $\sum \sigma_i = n$. The sequence $\sigma$ is called a \textit{composition} of $n$ of length $k$. The numbers $\sigma_i$ are called the \textit{parts} of the composition. If $\sigma_i=\sigma_{k-i+1}$ for all $i$, then $\sigma$ is called a \textit{palindromic} composition. If instead $\sigma$ satisfies the weaker condition that $\sigma_i \equiv \sigma_{k-i+1}$ modulo $m$ for some $m\geq 1$ and all $i$, then $\sigma$ is said to be \textit{palindromic modulo $m$}. Let $pc(n,m)$ be the number of compositions of $n$ that are palindromic modulo $m$.
Andrews and Simay \cite{AS20} have shown that $pc(1,2)=1$, and \[pc(2n,2)=pc(2n+1,2)=2\cdot 3^{n-1}\]
for $n\geq 1$.
Our main theorem generalizes their result by giving an ordinary generating function for $pc(n,m)$ for all $m$. Of particular interest is that $pc(1,3)=1$, and \[pc(n,3)=2\cdot f(n-1)\] for $n>1$, where $f(n)$ is the $n$th Fibonacci number (here $f(1)=f(2)=1$).
We then give a combinatorial proof of the above identities for $pc(n,2)$ and $pc(n,3)$, and conclude with some general properties and asymptotic analysis of $pc(n,m)$ for larger $m$.
\begin{thm}
We have \[F_m(q) := \sum_{n\geq 1} pc(n,m) q^n = \frac{q+2q^2-q^{m+1}}{1-2q^2-q^m}. \]
\end{thm}
\noindent{\bf Remark.} Though we intend this as an identity of formal power series, it can be verified that the series converges for all $|q|<1/2$. Indeed, coefficientwise, \[F_m(q) \leq F_1(q) = \frac{q}{1-2q}.\]
\begin{proof}[Proof of Theorem 1]
Notice that to form a composition that is palindromic modulo $m$, we need to form a sequence of pairs of positive integers whose difference is a multiple of $m$. Furthermore, there may or may not be a central part that will always be congruent to itself. If $G_m(q)$ is the ordinary generating function for pairs of positive integers whose difference is a multiple of $m$, then \[F_m(q) = \frac{G_m(q)}{1-G_m(q)} \cdot \frac{1}{1-q} + \frac{q}{1-q}.\] Now for the function $G_m(q)$, either the pair of positive integers is the same, or we must choose to add a multiple of $m$ to either the first or second number. Therefore \[G_m(q) = \frac{q^2}{1-q^2}+\frac{2q^2}{1-q^2}\cdot \frac{q^m}{1-q^m}.\] Simplifying gives the result. \end{proof}
When $m=1$, every composition will be palindromic modulo $m$. Therefore $pc(n,1)=2^{n-1}$, as is evident from $F_1(q)$. We next consider the case when $m=2$.
\section{The case $m=2$}
When $m=2$, Theorem 1 gives \[F_2(q) = q + \frac{2q^2(1+q)}{1-3q^2},\] and after expanding we see that $pc(1,2)=1$, and $pc(2n,2)=pc(2n+1,2)=2\cdot 3^{n-1}$ for $n\geq 1$. Our goal in this section will be to give a combinatorial proof of this formula.
For $n\geq 1$, we start by considering the $3^{n-1}$ elements of the set $\{0,1,2 \}^{n-1}$. Our goal will be to embed two disjoint copies of this set into the set of compositions of $2n$ that are palindromic modulo 2, and show that every composition of $2n$ that is palindromic modulo 2 has a preimage. We then give a bijection between compositions of $2n$ that are palindromic modulo 2 and compositions of $2n+1$ that are palindromic modulo 2.
\begin{proof}[Combinatorial proof that $pc(2n,2)=2\cdot3^{n-1}$] For $n\geq 1$, let \[a=(a_1,a_2,\ldots,a_{n-1})\in \{0,1,2 \}^{n-1}.\] As an example, we take $a=(0,1,2,2,1)\in \{0,1,2 \}^5$. We will first use the sequence of $a_i$s to construct a sequence of triples $(b_j,c_j,d_j)$, where each triple will become a pair of parts $\{ \sigma_j,\sigma_{k-j+1}\}$ or the central part $\sigma_{(k+1)/2}$ in the resulting composition having length $k$.
Initialize the process with the triple $(b_1,c_1,d_1)=(1,0,0)$. If $n=1$, then we are done. Now if $a_1=0$, we create a new triple $(b_2,c_2,d_2)=(1,0,0)$. If $a_1=1$, then we increase $b_1$ by one. If $a_1=2$, then we increase $c_1$ by one. In our example of $(0,1,2,2,1)$, we see that $a_1=0$, so after the first step we have the following list of triples
\[(1,0,0), \ \ (1,0,0).\]
Now for $i>1$, we define the instructions given by each $a_i$ recursively. We assume that at this step we have a total of $j$ triples, for some $j\geq 1$.
\begin{enumerate}
\item If $a_i=0$, then we create the new triple $(b_{j+1},c_{j+1},d_{j+1})=(1,0,0)$.
\item If $a_i=2$, then increase $c_j$ by one.
\item If $a_i=1$ and $a_{i-1}\neq 2$, then increase $b_j$ by one.
\item If $a_i=1$ and $a_{i-1}=2$, then set $d_j=1$ and create the new triple $(b_{j+1},c_{j+1},d_{j+1})=(1,0,0)$.
\end{enumerate}
For our example $(0,1,2,2,1)$, we form the list of triples
\[(1,0,0), \ \ (2,2,1), \ \ (1,0,0). \]
Performing this algorithm on each of the $3^{n-1}$ sequences in $\{ 0,1,2 \}^{n-1}$ gives a set of $3^{n-1}$ sequences of triples. For each sequence $a\in\{0,1,2\}^{n-1}$, we denote by $k_a$ the number of triples in the list we have formed. By construction, all of these sequences of $k_a$ triples are distinct.
We create a second set containing $3^{n-1}$ sequences of triples by setting $d_{k_a} = 1$ for all $a$. For our example $(0,1,2,2,1)$, we get a second sequence of triples
\[(1,0,0), \ \ (2,2,1), \ \ (1,0,1). \]
We now have $2\cdot 3^{n-1}$ sequences of triples, all of which are distinct.
Next, for each sequence $a$, taking one of its two associated sequence of triples $(b_1,c_1,d_1)$, \ldots, $(b_{k_a},c_{k_a},d_{k_a})$, we show how to form a unique composition $\sigma$ of $2n$ that is palindromic modulo 2. Start with the triple $(b_{k_a}, c_{k_a}, d_{k_a})$.
\begin{enumerate}
\item If $d_{k_a}=0$ and $c_{k_a}=0$, then set $\sigma_{k_a}=2b_{k_a}$.
\item If $d_{k_a}=0$ and $c_{k_a}>0$, then set $\sigma_{k_a}=b_{k_a}+2c_{k_a}$ and $\sigma_{k_a+1}=b_{k_a}$.
\item If $d_{k_a}=1$, then set $\sigma_{k_a}=b_{k_a}$ and $\sigma_{k_a+1}=b_{k_a}+2c_{k_a}$.
\end{enumerate}
These cases will determine whether the length of the composition is even or odd. If $d_{k_a}=0$ and $c_{k_a}=0$, then the length of $\sigma$ will be $2k_a-1$. Otherwise, the length of $\sigma$ will be $2k_a$. For convenience we refer to the length of $\sigma$ as $k$ in either case. For our example $(0,1,2,2,1)$, we have initialized the two compositions \[(0,0,2,0,0) \ \text{and} \ (0,0,1,1,0,0).\]
Now if $k_a=1$ we are done. Assuming $k_a>1$, for each $1\leq j < k_a$ we consider the triple $(b_j,c_j,d_j)$. If $d_j=0$, then set $\sigma_j=b_j+2c_j$ and $\sigma_{k-j+1} = b_j$. If $d_j=1$, then set $\sigma_j=b_j$ and $\sigma_{k-j+1}=b_j+2c_j$. Also notice if $d_j=1$, then by construction $c_j>0$, so that $b_j\neq b_j+2c_j$. This ensures that each sequence of triples will be associated to a unique composition. Also notice that the sum of the parts in $\sigma$ equals $2\sum b_j + 2\sum c_j=2n$, and that $|\sigma_j-\sigma_{k-j+1}|$ is even for all $j$. Thus $\sigma$ is a composition of $2n$ that is palindromic modulo 2. Showing that every composition of $2n$ that is palindromic modulo 2 can be formed from one of the described sequences of triples is done by reversing the described algorithm. Returning to our example sequence $(0,1,2,2,1)$, we have formed the two compositions
\[(1,2,2,6,1) \ \text{and} \ (1,2,1,1,6,1),\]
each of which is a composition of 12 that is palindromic modulo 2.\end{proof}
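The construction above is straightforward to implement. The following sketch (our own transcription of the algorithm, so any bugs are ours) reproduces the running example and checks, for small $n$, that the $2\cdot 3^{n-1}$ images are distinct compositions of $2n$ that are palindromic modulo 2:

```python
from itertools import product

def triples(a):
    # build the list of triples (b, c, d) from a word a in {0,1,2}^(n-1)
    T = [[1, 0, 0]]
    for i, x in enumerate(a):
        prev = a[i - 1] if i > 0 else None
        if x == 0:
            T.append([1, 0, 0])
        elif x == 2:
            T[-1][1] += 1
        elif prev != 2:              # x == 1, not right after a 2
            T[-1][0] += 1
        else:                        # x == 1 right after a 2
            T[-1][2] = 1
            T.append([1, 0, 0])
    return T

def composition(T):
    # turn a list of triples into a composition, palindromic modulo 2
    b, c, d = T[-1]
    if d == 0 and c == 0:
        mid = [2 * b]                # odd length, central part 2b
    elif d == 0:
        mid = [b + 2 * c, b]
    else:
        mid = [b, b + 2 * c]
    left, right = [], []
    for b, c, d in T[:-1]:
        if d == 0:
            left.append(b + 2 * c)
            right.append(b)
        else:
            left.append(b)
            right.append(b + 2 * c)
    return tuple(left + mid + right[::-1])

def images(n):
    # both copies: the raw list of triples, and the copy with d_{k_a} set to 1
    out = set()
    for a in product([0, 1, 2], repeat=n - 1):
        T = triples(a)
        out.add(composition(T))
        T[-1][2] = 1
        out.add(composition(T))
    return out

assert composition(triples((0, 1, 2, 2, 1))) == (1, 2, 2, 6, 1)
for n in range(1, 7):
    sigmas = images(n)
    assert len(sigmas) == 2 * 3 ** (n - 1)      # all images are distinct
    assert all(sum(s) == 2 * n for s in sigmas)
    assert all((s[i] - s[-1 - i]) % 2 == 0
               for s in sigmas for i in range(len(s) // 2))
```

The cutoff $n\leq 6$ is arbitrary; the checks only confirm the proof for small cases.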
\begin{proof}[Combinatorial proof that $pc(2n,2)=pc(2n+1,2)$]
We split the proof into two cases. Starting with a composition $\sigma$ of $2n$ that is palindromic modulo 2, suppose the length of $\sigma$ is $2k+1$. Then adding one to $\sigma_{k+1}$ gives a composition of $2n+1$ that is still palindromic modulo 2. Now if the length of $\sigma$ is $2k$, form a composition $\sigma^{\prime}$ of $2n+1$ of length $2k+1$ by setting
\[\sigma^{\prime}_j=\begin{cases}
\sigma_j & 1\leq j\leq k \\
1 & j=k+1 \\
\sigma_{j-1} & k+2\leq j \leq 2k+1.
\end{cases}
\]
We note that $\sigma^{\prime}$ is still palindromic modulo 2. It is straightforward to verify that this map is a bijection. For our examples of $(1,2,2,6,1)$ and $(1,2,1,1,6,1)$ that are compositions of 12, we form the two compositions of 13
\[(1,2,3,6,1) \ \ \text{ and } \ \ (1,2,1,1,1,6,1),\]
each of which is palindromic modulo 2. \end{proof}
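The map of this proof can likewise be checked mechanically; the sketch below (ours, not part of the paper) verifies that it is injective and onto for small $n$:

```python
from itertools import product

def pal2(n):
    # all compositions of n that are palindromic modulo 2
    out = []
    for cuts in product([0, 1], repeat=n - 1):
        parts, prev = [], 0
        for i, c in enumerate(cuts, start=1):
            if c:
                parts.append(i - prev)
                prev = i
        parts.append(n - prev)
        if all((parts[i] - parts[-1 - i]) % 2 == 0
               for i in range(len(parts) // 2)):
            out.append(tuple(parts))
    return out

def lift(s):
    # the map of the proof: odd length -> add 1 to the central part,
    # even length -> insert a new central part equal to 1
    k = len(s)
    if k % 2 == 1:
        return s[: k // 2] + (s[k // 2] + 1,) + s[k // 2 + 1 :]
    return s[: k // 2] + (1,) + s[k // 2 :]

for n in range(1, 8):
    image = {lift(s) for s in pal2(2 * n)}
    assert len(image) == len(pal2(2 * n))       # injective
    assert image == set(pal2(2 * n + 1))        # onto
```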
\section{The case $m=3$}
When $m=3$, Theorem 1 gives
\[F_3(q) = \frac{q+2q^2-q^4}{1-2q^2-q^3} = q + \frac{2q^2}{1-q-q^2},\]
and expanding this function shows $pc(1,3)=1$ and $pc(n,3)=2\cdot f(n-1)$ for $n>1$. Our goal in this section is to give a combinatorial proof of this formula.
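Before turning to the combinatorial proof, the Fibonacci identity can be confirmed numerically (a quick check of ours, not part of the argument):

```python
from itertools import product

def pc(n, m):
    # brute-force count of compositions of n that are palindromic modulo m
    total = 0
    for cuts in product([0, 1], repeat=n - 1):
        parts, prev = [], 0
        for i, c in enumerate(cuts, start=1):
            if c:
                parts.append(i - prev)
                prev = i
        parts.append(n - prev)
        total += all((parts[i] - parts[-1 - i]) % m == 0
                     for i in range(len(parts) // 2))
    return total

fib = [0, 1, 1]                  # f(1) = f(2) = 1
while len(fib) < 16:
    fib.append(fib[-1] + fib[-2])

assert pc(1, 3) == 1
assert all(pc(n, 3) == 2 * fib[n - 1] for n in range(2, 15))
```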
It is well known \cite{AH75} that $f(n+1)$ is equal to the number of compositions of $n$ with parts that are equal to one or two. For $n>1$, our goal will be to embed two disjoint copies of the compositions of $n-2$ with parts equal to one or two into the compositions of $n$ that are palindromic modulo 3. Then we will show each composition of $n$ that is palindromic modulo 3 has a preimage.
\begin{proof}[Combinatorial proof that $pc(n,3)=2\cdot f(n-1)$.] For $n>1$, let $\sigma =( \sigma_1,\sigma_2,\ldots,\sigma_k)$ be a composition of $n-2$, where each $\sigma_i\in \{1,2\}$. Form two distinct compositions of $n$ by setting
\begin{align*}
\sigma^{\prime}&=(1,1,\sigma_1,\sigma_2,\ldots,\sigma_k) \text{, and} \\
\sigma^{\prime\prime}&=(2,\sigma_1,\sigma_2,\ldots,\sigma_k).
\end{align*}
Doing this for each composition gives us a set of $2\cdot f(n-1)$ compositions of $n$ with parts equal to one or two, the set of which we will denote by $A_n$. For each of these compositions, we will form a unique composition of $n$ that is palindromic modulo 3.
Let $+$ denote sequence concatenation, as in $(1,2)+(3,4)=(1,2,3,4)$. Now for any composition $a\in A_n$, we can decompose $a$ in the following way:
\[a=a_1+a_2+\ldots+a_s,\]
where the substrings $\{a_i\}_{i=1}^s$ are determined by the following rules. We will write $k_i$ to be the length of each substring $a_i$.
\begin{enumerate}
\item If $a$ contains no twos, then $a_1=a$.
\item Suppose $a$ begins with a two, and this is the only two in $a$. Then $a_1=(2)$.
\item Suppose $a$ begins with the substring $(2,2)$ or $(2,1,1)$. Then $a_1=(2)$.
\item Suppose $a$ begins with the substring $(1,1)$ or $(2,1,2)$. Then $a_1$ terminates with the first two such that either
\begin{enumerate}
\item it is the final two in $a$;
\item it is immediately followed by the substring $(2)$ or $(1,1)$.
\end{enumerate}
\end{enumerate}
For $i>1$, the substring $a_i$ is determined recursively using these 4 rules, after first deleting the substrings $a_1$, $a_2$, \ldots, $a_{i-1}$ from $a$. The resulting string cannot begin with the substring $(1,2)$, so it still makes sense to apply these rules. To aid the reader, Table \ref{decomp} shows the decomposition of the compositions in the set $A_8$. As an example, take the composition $a=(1,1,1,2,2,1,1,2,1,2,1,1)$ in $A_{16}$. By applying rule (4) we have $a_1=(1,1,1,2)$, and we now consider the string $(2,1,1,2,1,2,1,1)$. By applying rule (3) we have $a_2=(2)$, and we now consider the string $(1,1,2,1,2,1,1)$. By applying rule (4) we have $a_3=(1,1,2,1,2)$, and we now consider the string $(1,1)$. By applying rule (1) we have $a_4=(1,1)$, and are now done. Thus
\[(1,1,1,2,2,1,1,2,1,2,1,1)=(1,1,1,2)+(2)+(1,1,2,1,2)+(1,1).\]
Let $B_n$ be the set of compositions of $n$ that are palindromic modulo 3. For each composition $a=a_1+\ldots+a_s\in A_n$, we will show how to construct a unique $b\in B_n$. If $a_s$ contains a two, the length of $b$ will be $2s$; if $a_s$ does not contain a two, the length of $b$ will be $2s-1$. Either way, we will denote $k_b$ to be the length of $b$ and write $b=(b_1,b_2,\ldots,b_{k_b})$ (recall we have also set $k_i$ to be the length of $a_i$).
Now assume $a_i$ has $o_i$ ones, and $t_i$ twos. If $t_i=0$ (which can only be the case for $a_s$), then set $b_s=k_s$. For what follows we will assume $t_i>0$. Note that $k_i=o_i+t_i$, and \[n=\sum_{i=1}^s (o_i+2\cdot t_i).\]
We will now form the triple $(c_i,d_i,e_i)$ using the following rules.
Let $o^{\prime}_i$ be the number of ones in $a_i$ preceding the first two. Then if
\begin{enumerate}
\item $o^{\prime}_i$ is even (possibly zero), set
\[c_i=\frac{o^{\prime}_i}{2}+1, \ \ d_i=3\cdot(t_i-1), \ \ \text{and} \ \ e_i=0. \]
\item $o^{\prime}_i$ is odd, set
\[c_i=\frac{o^{\prime}_i-1}{2}, \ \ d_i=0, \ \ \text{and} \ \ e_i=3\cdot t_i. \]
\end{enumerate}
By construction this sequence of triples is unique to the composition $a\in A_n$ we began with. We show how our example composition with decomposition $a_1=(1,1,1,2)$, $a_2=(2)$, $a_3=(1,1,2,1,2)$, and $a_4=(1,1)$ maps to three triples (the final substring $a_4$ contains no twos, and therefore we set $b_4=k_4=2$).
\begin{center}
\begin{tabular}{c||c|c|c|c|c|c}
$a_i$ & $o_i$ & $o^{\prime}_i$ & $t_i$ & $c_i$ & $d_i$ & $e_i$ \\
\hline
$(1,1,1,2)$ &3&3&1&1&0&3\\
$(2)$ &0&0&1&1&0&0\\
$(1,1,2,1,2)$ &3&2&2&2&3&0
\end{tabular}
\end{center}
Therefore, we have the triples
\[(1,0,3), \ \ (1,0,0), \ \ \text{ and } \ \ (2,3,0).\]
We now form $b$ by setting $b_i=c_i+d_i$ and $b_{k_b-i+1}=c_i+e_i$, adjusting slightly for the case when $a_s$ has no twos (recall that we just set $b_s=k_s$ in this case). The composition is uniquely determined from the triples, and by construction $d_i$ and $e_i$ are both multiples of 3. Furthermore,
\[b_i+b_{k_b-i+1} = 2c_i + d_i+e_i = o^{\prime}_i +3t_i -1=o_i+2t_i. \]
To see why the last equality holds, consider first the case when $t_i=1$. Then $o^{\prime}_i=o_i$, and $3t_i-1=2t_i$. Now if $t_i>1$, we can pair each two in $a_i$ with a one immediately preceding it. However, we have used a one counted by $o^{\prime}_i$ in this pairing, so we must subtract one. Therefore, we have embedded $A_n$ into $B_n$. For our example
\[a=(1,1,1,2,2,1,1,2,1,2,1,1)=(1,1,1,2)+(2)+(1,1,2,1,2)+(1,1),\]
we have the triples $(1,0,3)$, $(1,0,0)$, and $(2,3,0)$, and we obtain the composition
\[b=(1,1,5,2,2,1,4),\]
which is a composition of 16 that is palindromic modulo 3 ($b\in B_{16}$). It is straightforward to construct a composition $a\in A_n$ from a composition $b\in B_n$ by reversing this construction, which proves the result. \end{proof}
\section{The case $m>3$}
If $m>3$, the formula for $pc(n,m)$ can be deduced from Theorem 1; we only give a detailed study for $m=2$ and $m=3$ based on the elegance of the formulae. For $m=4$, Theorem 1 gives
\[F_4(q)=\frac{q+2q^2-q^5}{1-2q^2-q^4},\]
and after expanding we see $pc(1,4)=1$, and for $n\geq 1$
\[pc(2n,4)=pc(2n+1,4)=\frac{(1+\sqrt{2})^n-(1-\sqrt{2})^n}{\sqrt{2}}.\]
Readers familiar with these numbers will recognize them as twice the Pell numbers \cite{pell}. While the correct context for a combinatorial proof is not immediately clear, we pose the following question.
\begin{question}
Is there a combinatorial proof of the formula for $pc(n,4)$?
\end{question}
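A brute-force check of the Pell identity for small $n$ (our own verification, not part of the paper):

```python
from itertools import product

def pc(n, m):
    # brute-force count of compositions of n that are palindromic modulo m
    total = 0
    for cuts in product([0, 1], repeat=n - 1):
        parts, prev = [], 0
        for i, c in enumerate(cuts, start=1):
            if c:
                parts.append(i - prev)
                prev = i
        parts.append(n - prev)
        total += all((parts[i] - parts[-1 - i]) % m == 0
                     for i in range(len(parts) // 2))
    return total

pell = [0, 1]                    # Pell numbers: P_(k+1) = 2 P_k + P_(k-1)
for _ in range(8):
    pell.append(2 * pell[-1] + pell[-2])

for n in range(1, 7):
    assert pc(2 * n, 4) == pc(2 * n + 1, 4) == 2 * pell[n]
```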
We also include some properties of $pc(n,m)$ that we observed for general $m$.
\begin{prop}
If $m$ is even, then for $n\geq 1$ we have $pc(2n,m)=pc(2n+1,m)$.
\end{prop}
\begin{proof}
The proof of this is identical in spirit to the combinatorial proof that $pc(2n,2)=pc(2n+1,2)$ in Section 2. \end{proof}
\begin{prop}
If $n>1$, then $pc(n,m)$ is even for all $m$.
\end{prop}
\begin{proof}
A quick, one line proof of this fact comes from the identity
\[F_m(q)-q = \frac{2q^2 (1+q)}{1-2q^2-q^m}.\]
This can also be seen combinatorially by pairing up compositions of $n$ that are palindromic modulo $m$. Let $a$ be a composition of $n>1$ of length $s$ that is palindromic modulo $m$. If $a_i\neq a_{s-i+1}$ for some $i$, then we can pair this composition with the composition formed by switching $a_i$ and $a_{s-i+1}$.
Now assume $a_i=a_{s-i+1}$ for all $i$ (i.e. $a$ is a palindrome). If $s$ is odd, then $a_{(s+1)/2}$ is even, and we can pair this composition with the composition of $n$ of length $s+1$ formed by removing $a_{(s+1)/2}$ and appending $a_{(s+1)/2}/2$ to the beginning and the end. \end{proof}
\begin{prop}
If $2n\leq m$, then $pc(2n,m)=pc(2n+1,m)=2^n$.
\end{prop}
\begin{proof}
The condition that $2n\leq m$ ensures that all compositions of $2n$ and $2n+1$ that are palindromic modulo $m$ are palindromes. Let $pc(n,\infty)$ denote the number of palindromic compositions of $n$. Then it is well known \cite{wk} that
\[pc(n,\infty)=2^{\lfloor n/2\rfloor}. \qedhere\]
\end{proof}
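A quick numerical confirmation of this proposition for small cases (our own check): when $2n \leq m$, every palindromic-mod-$m$ composition of $2n$ or $2n+1$ is a true palindrome, so the counts collapse to $2^n$.

```python
from itertools import product

def pc(n, m):
    # brute-force count of compositions of n that are palindromic modulo m
    total = 0
    for cuts in product([0, 1], repeat=n - 1):
        parts, prev = [], 0
        for i, c in enumerate(cuts, start=1):
            if c:
                parts.append(i - prev)
                prev = i
        parts.append(n - prev)
        total += all((parts[i] - parts[-1 - i]) % m == 0
                     for i in range(len(parts) // 2))
    return total

for n in range(1, 7):
    m = 2 * n                              # the boundary case 2n = m
    assert pc(2 * n, m) == pc(2 * n + 1, m) == 2 ** n
```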
\begin{prop}
Let $\alpha_m$ be the unique positive root of $1-2q^2-q^m=0$, and set
\[c_m=\lim_{q\rightarrow \alpha_m} (1-\alpha_m^{-1}q)\cdot F_m(q) \ \ \text{ and } \ \ d_m=\lim_{q\rightarrow -\alpha_m} (1+\alpha_m^{-1}q)\cdot F_m(q).\]
If $m$ is even, then
\[\limsup_{n\rightarrow \infty} \alpha_m^n\cdot pc(n,m) = c_m+d_m \ \ \text{ and } \ \ \liminf_{n\rightarrow\infty} \alpha_m^n\cdot pc(n,m)= c_m-d_m. \]
If $m$ is odd, $d_m=0$, and thus $pc(n,m)\sim c_m \alpha_m^{-n}$ as $n\rightarrow \infty$.
\end{prop}
\begin{proof}
This is a routine analysis of the rational function $F_m(q)$. For convenience, we set $\phi_m(q)=1-2q^2-q^m$. The asymptotics of $pc(n,m)$ are determined by the poles of $F_m(q)$, which are precisely the zeros of $\phi_m(q)$. We will assume $m>4$, as the result can be verified directly for $m\leq 4$.
First note that $\phi_m(q)$ has exactly one positive zero, $\alpha_m$, since $\phi_m^{\prime}(q)<0$ for all $q>0$, $\phi_m(0)>0$, and $\phi_m(1)<0$. When $m$ is even, $\phi_m(q)$ is an even function and thus $-\alpha_m$ is also a zero. When $m$ is odd, $\phi_m(-1)=0$ and $\phi_m^{\prime}(-1)=4-m<0$, so $\phi_m(q)<0$ for $q$ slightly greater than $-1$; since $\phi_m(-\alpha_m)=2\alpha_m^m>0$, there is a zero of $\phi_m$ in the interval $(-1,-\alpha_m)$. Our next goal is to show that these are the only two zeros inside the disc $|q|<0.9$.
Note that $q^2(2+q^{m-2})$ has one zero (with multiplicity equal to 2) inside the disc $|q|<0.9$. Therefore, if we can show
\[|q^2(2+q^{m-2})|>1\]
for all $q$ with $|q|=0.9$, we can apply Rouch\'e's theorem to conclude that $\phi_m(q)$ has exactly two zeros inside the disc $|q|<0.9$. For $m>4$ we have $|q^{m-2}|\leq |q|^3$, so that $|2+q^{m-2}|\geq 2-|q|^3=1.271$ and
\[|q^2(2+q^{m-2})| \geq 0.81\cdot 1.271 = 1.02951>1.\]
We can now perform a partial fraction decomposition of $F_m(q)$,
\[F_m(q) = \frac{c_m}{1-\alpha_m^{-1}q} + \frac{d_m}{1+\alpha_m^{-1}q} +G(q),\]
where $G(q)$ is a rational function whose poles in the disc $|q|<0.9$, if any, have modulus strictly greater than $\alpha_m$ (for even $m$ there are none; for odd $m$ there may be one at the zero of $\phi_m$ in $(-1,-\alpha_m)$). Therefore,
\[pc(n,m) = (c_m + (-1)^n d_m)\alpha_m^{-n} + o(\alpha_m^{-n}) \]
as $n\rightarrow \infty$, noting that $d_m=0$ if $m$ is odd. The result follows. \end{proof}
The following table shows the values of $\alpha_m$, $c_m$, and $d_m$ for various $m$.
\begin{center}
\begin{tabular}{c||c|c|c}
$m$&$\alpha_m^{-1}$&$c_m$&$d_m$ \\
\hline
1& $2$&$\frac{1}{2}=0.5$& 0 \\
2& $\sqrt{3}\approx 1.73$&$\frac{3+\sqrt{3}}{9}\approx0.53$& $\frac{3-\sqrt{3}}{9}\approx 0.14$ \\
3& $\frac{\sqrt{5}+1}{2}\approx 1.61$&$\frac{5-\sqrt{5}}{5}\approx 0.55$& 0 \\
4& $\approx 1.55$&$\approx0.58$& $\approx 0.13$ \\
\vdots&\vdots & \vdots & \vdots \\
$\infty$ &$\sqrt{2}\approx 1.41$&$\frac{2+\sqrt{2}}{4}\approx 0.85$& $\frac{2-\sqrt{2}}{4}\approx 0.14$
\end{tabular}
\end{center}
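The entries of the table can be reproduced numerically. Applying l'H\^opital's rule to the definition gives $c_m = N(\alpha_m)/(4\alpha_m^2 + m\alpha_m^m)$, where $N(q)=q+2q^2-q^{m+1}$ is the numerator of $F_m(q)$; the sketch below (ours, not from the paper) computes $\alpha_m$ by bisection and checks the closed forms for $m=2,3$:

```python
import math
from itertools import product

def pc(n, m):
    # brute-force count of compositions of n that are palindromic modulo m
    total = 0
    for cuts in product([0, 1], repeat=n - 1):
        parts, prev = [], 0
        for i, c_ in enumerate(cuts, start=1):
            if c_:
                parts.append(i - prev)
                prev = i
        parts.append(n - prev)
        total += all((parts[i] - parts[-1 - i]) % m == 0
                     for i in range(len(parts) // 2))
    return total

def alpha(m):
    # unique positive root of phi_m(q) = 1 - 2q^2 - q^m, by bisection
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 1 - 2 * mid ** 2 - mid ** m > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def c(m):
    # c_m = N(alpha_m) / (4 alpha_m^2 + m alpha_m^m), by l'Hopital
    a = alpha(m)
    return (a + 2 * a ** 2 - a ** (m + 1)) / (4 * a ** 2 + m * a ** m)

assert abs(1 / alpha(2) - math.sqrt(3)) < 1e-9
assert abs(1 / alpha(3) - (1 + math.sqrt(5)) / 2) < 1e-9
assert abs(c(2) - (3 + math.sqrt(3)) / 9) < 1e-9
assert abs(c(3) - (5 - math.sqrt(5)) / 5) < 1e-9
# for odd m the asymptotics are one-sided: pc(n, m) ~ c_m * alpha_m^(-n)
assert abs(alpha(3) ** 14 * pc(14, 3) - c(3)) < 1e-3
```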
\section*{Acknowledgements}
The author was partially supported by the Research and Training Group grant DMS-1344994 funded by the National Science Foundation.
[Source: https://arxiv.org/abs/2102.00996, ``Compositions that are palindromic modulo $m$'', Combinatorics (math.CO). Abstract: In recent work, G. E. Andrews and G. Simay prove a surprising relation involving parity palindromic compositions, and ask whether a combinatorial proof can be found. We extend their results to a more general class of compositions that are palindromic modulo $m$, that includes the parity palindromic case when $m=2$. We then provide combinatorial proofs for the cases $m=2$ and $m=3$.]
https://arxiv.org/abs/1505.04151 | Minkowski Symmetrizations of Star Shaped Sets | We provide sharp upper bounds for the number of symmetrizations required to transform a star shaped set in ${\mathbb R}^n$ arbitrarily close (in the Hausdorff metric) to the Euclidean ball.

\section{Introduction and results}\label{Sec_Intro}
A non empty compact set $K\subset {\cal R}^n$ is called {\em star shaped}
if $x\in K$ implies $[0,x]\subseteq K$. We denote the family of star
shaped sets in ${\cal R}^n$ by $\mathscr{S}^n$. Recall that given a set $K$ and a
direction $u \in S^{n-1}$, its Minkowski symmetral is defined to be
\[
M_u(K) = \frac{K + R_u K}{2},
\]
where $R_u$ is the reflection with respect to the hyperplane
$u^\perp$.
The Minkowski symmetrization $M_u$ results in a set that is symmetric
with respect to the hyperplane $u^\perp$, thus it is natural to
expect that successive applications of this procedure in different
directions yield a sequence of sets that converges in some sense
to the Euclidean ball.
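Since support functions satisfy $h_{A+B}=h_A+h_B$ and $h_{R_uK}(\theta)=h_K(R_u\theta)$, a Minkowski symmetrization simply averages $h_K$ with a reflected copy, and the mean width is preserved at every step. The following planar experiment (our own illustration, not from the literature cited; the grid size, number of steps, and random seed are arbitrary choices) shows random symmetrizations driving the support function of a segment toward a constant, i.e. toward a ball:

```python
import numpy as np

N = 720                                      # grid of directions on the circle
theta = 2 * np.pi * np.arange(N) / N
h = np.maximum(np.cos(theta), 0.0)           # support function of the segment [0, e_1]
M0 = h.mean()                                # discrete mean width M*(K)

rng = np.random.default_rng(1)
for _ in range(200):
    k = int(rng.integers(N))                 # symmetrize in the direction u = theta_k
    # the reflection R_u across the line u^perp sends theta_j to theta_{2k + N/2 - j}
    refl = (2 * k + N // 2 - np.arange(N)) % N
    h = 0.5 * (h + h[refl])                  # h of M_u(K) = (h_K + h_K o R_u) / 2

assert abs(h.mean() - M0) < 1e-10            # the mean width never changes
assert h.max() / h.min() < 1.01              # h is nearly constant: almost a ball
```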
This is indeed known in the case where $K$ is convex.
Moreover, there are estimates regarding the convergence rate.
Bourgain, Lindenstrauss and Milman \cite{BLM} obtained the first
quantitative estimate for the convergence rate of Minkowski
symmetrizations. They found a function $n_0: (0,1)\to \mathbb N$ satisfying:
\begin{thm} [Bourgain, Lindenstrauss, Milman] \label{thm-Intro-BLM}
Let $\varepsilon \in(0,1)$ and let $n\in \mathbb N$ such that
$n_0(\varepsilon)
\le n$. If $K\in {\cal K}^n$ is a convex body with mean width $M^*(K)=M_0$,
then there exist $cn\left(C(\varepsilon) + \log n\right)$ Minkowski
symmetrizations transforming $K$ into a (convex) body $\tilde{K}$,
such that
\[
(1-\varepsilon) M_0 D_n \subset
\tilde{K} \subset
(1+\varepsilon) M_0 D_n.
\]
Here $c$ is some positive constant, and the functions
$C(\varepsilon)$, $n_0(\varepsilon)$ are of the order
$\left(\frac{1}{\varepsilon}\right)^\frac{c}{\varepsilon^2}$.
\end{thm}
In \cite{K_Rate} Klartag improved Theorem \ref{thm-Intro-BLM}, and
also removed the restriction $n_0\le n$, thus providing the first
truly isometric result
in all dimensions:
\begin{thm}[Klartag] \label{thm-Klartag}
Let $n\ge 2$ and $\varepsilon\in(0,1/2)$. If $K\in {\cal K}^n$ is a convex
set with mean width $M^*(K)=M_0$, then there exist $cn |\log
(\varepsilon)|$ Minkowski symmetrizations transforming $K$ into
$\tilde{K}$, such that
\[
(1-\varepsilon)M_0 D_n
\subseteq \tilde{K} \subseteq
(1+\varepsilon)M_0 D_n,
\] where $c$ is some universal constant.
\end{thm}
In this note we extend Klartag's theorem to $\mathscr{S}^n$. Namely, we show the following:
\begin{thm}\label{Thm_Mink-Isometric}
Let $n\ge 2$ and $\varepsilon\in(0,1/2)$. If $K\in \mathscr{S}^n$ is a star
shaped set with mean width $M^*(K)=M_0$, then there exist $Cn
|\log(\varepsilon)|$
Minkowski symmetrizations transforming $K$ into $\tilde{K}$, such that
\[
(1-\varepsilon)M_0 D_n
\subseteq \tilde{K} \subseteq
(1+\varepsilon)M_0 D_n,
\] where $C$ is some universal constant.
\end{thm}
The proof of Theorem \ref{Thm_Mink-Isometric} consists of three steps.
First, we make sure that the body at hand contains a small ball around
the origin (Lemma \ref{Lem_Give-Small-Ball}). Next, we consider Minkowski symmetrizations {\em of the convex
hull} of our star shaped body. As mentioned above, for convex bodies
Klartag showed how many steps are required in order to bring a body
isometrically close to the Euclidean ball. We apply these
symmetrizations, and get a body whose convex hull lies between two
balls of very similar radii. This can only happen if the body
contains some ``$\varepsilon$-net'' of the inner ball's sphere (Lemma
\ref{Lem_Conv=Ball-to-eps-net}). In the third and final step, we use
this fact to increase the radius of the small ball, which was
obtained in the first step.
\noindent {\bf Notations:}
The {\em support function} of a (not necessarily convex) body $K$ is
defined by $h_K(u) = \sup \{ \iprod{x}{u} \ | \ x \in K \}$. The
width, or {\em mean width}, of a star shaped set $K$ is defined to be
$M^*(K)= \int_{S^{n-1}} h_K d\sigma$, where $\sigma$ is the
normalized Haar measure on the sphere.
\section{Proof of the Theorem}\label{Sec_Mink}
Our first step is to generate (using Minkowski symmetrizations) a
small ball inside a (non trivial) star shaped set.
\begin{lem}\label{Lem_Give-Small-Ball}
Let $n\ge 2$, $K_0\in \mathscr{S}^n$, and let $M_0=M^*(K_0)>0$. Then there exist
$c_1n$ Minkowski symmetrizations transforming $K_0$ into $K_1$, such that
\[
\frac{c_2}{\sqrt{n}} D_n
\subseteq \frac{K_1}{M_0}
,\]
where $c_1,c_2$ are some universal constants (in fact $c_1=c$ of
Theorem \ref{thm-Klartag}).
\end{lem}
\begin{proof}
Let $R\ge M_0>0$ be the minimal radius of a centered ball enclosing
$K_0$. Then there exists some $u\in S^{n-1}$ such that $I_0=[0,Ru]
\subseteq K_0$. Let $\varepsilon = 1/e$. By Theorem \ref{thm-Klartag}
applied to $I_0$, there exist $N_1 = cn$
Minkowski symmetrizations
$M_{u_1}\dots M_{u_{N_1}}$ which transform the interval $I_0$ into a
convex body $I_1$ satisfying
\[
(1-1/e) M^*(I_0) D_n \subseteq I_1.
\]
Then
\[
(1-1/e) M^*([0,u]) D_n
\subseteq
(1-1/e) \left(\frac{RM^*([0,u])}{M_0}\right) D_n
\subseteq
\frac{I_1}{M_0}
\subseteq
\frac{K_1}{M_0},
\]
where the body $K_1$ is defined by:
\begin{equation}\label{Eq_Mink-1st-Step}
K_1:=M_{u_{N_1}}\dots M_{u_1} K_0.
\end{equation}
Since $M^*([0,u])=
\frac{1}{2}\int_{S^{n-1}}|x_1|d\sigma(x)\approx
\frac{1}{\sqrt{2\pi n}}$, the proof is complete.
\end{proof}
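The value of $M^*([0,u])$ used at the end of the proof can be confirmed by direct sampling (a numerical check of ours, not part of the proof); uniform points on the sphere are obtained by normalizing Gaussian vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 100, 50_000
x = rng.normal(size=(samples, n))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform points on S^(n-1)

# M*([0, e_1]) = (1/2) E|x_1|, which should be about 1/sqrt(2 pi n)
mean_width = 0.5 * np.abs(x[:, 0]).mean()
assert abs(mean_width - 1 / np.sqrt(2 * np.pi * n)) < 1.5e-3
```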
The inner radius $c_2/\sqrt{n}$ does not decrease under additional
symmetrizations (it may increase). Next consider Minkowski
symmetrizations which bring {\em the convex hull} of $K_1$
isometrically close to the ball. By Theorem \ref{thm-Klartag} there
exist $N_2= c n |\log(\varepsilon)|$
directions $u_1,\dots,u_{N_2}$
such that
\[
(1 - \varepsilon) D_n
\subseteq
M_{u_{N_2}}\dots M_{u_1} {\rm conv} \left( \frac{K_1}{M_0} \right)
\subseteq
(1 + \varepsilon) D_n.
\]
We define the body $K_2$ to be:
\begin{equation}\label{Eq_Mink-2nd-Step}
K_2:=M_{u_{N_2}}\dots M_{u_1} K_1.
\end{equation}
Note that Minkowski symmetrizations commute with the convex hull
operation, i.e. $M_u{\rm conv} K = {\rm conv} M_u K$, simply because in general
${\rm conv} (A+B) = {\rm conv} A + {\rm conv} B$. Thus $\frac{K_2}{M_0}$ is a star
shaped body whose convex hull is isometrically close to the ball. We
use the following standard lemma to show that such a body must
contain some $\delta$-net of the sphere. More precisely:
\begin{lem}\label{Lem_Conv=Ball-to-eps-net}
Let $n\ge 2$, $\varepsilon\in(0,1)$, and let $K\in \mathscr{S}^n$ be such that
\[(1 - \varepsilon) D_n
\subseteq {\rm conv} K \subseteq
(1 + \varepsilon) D_n.
\]
Then $K$ contains a $2\sqrt{\varepsilon}$-net of the sphere
$(1-\varepsilon) S^{n-1}$, that is
\begin{equation}\label{Eq_Delta-net}
(1-\varepsilon) S^{n-1}
\subseteq
K + 2\sqrt{\varepsilon}D_n.
\end{equation}
\end{lem}
\begin{proof}
Let $x\in (1-\varepsilon) S^{n-1}$. We claim that the intersection
$ (x+2\sqrt{\varepsilon}D_n)\cap K $ is not empty. Denoting the
hyperplane supporting $(1-\varepsilon) D_n$ at the point $x$ by $H$
and the halfspace with boundary $H$ by $H^+= \{y\,|\,
\iprod{x}{x} \le \iprod{x}{y} \}$, we will show that $H^+\cap K \cap
(x+2\sqrt{\varepsilon}D_n)$ is not empty. Indeed,
\[
H^+ \cap K
\subseteq
H^+ \cap (1+\varepsilon) D_n
\subseteq
(x + 2\sqrt{\varepsilon}D_n),
\]
see Figure \ref{Fig_Instead-of-Proving}. The set $H^+ \cap K$ is not
empty (since $x\in (H^+ \cap{\rm conv} K)$), and thus the proof is complete.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{conv_is_ball}
\caption{The small ball around $x$ contains all points of $K$ in $H^+$.}
\label{Fig_Instead-of-Proving}
\end{figure}
\end{proof}
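For completeness, the inclusion $H^+ \cap (1+\varepsilon) D_n \subseteq x + 2\sqrt{\varepsilon}D_n$ used above amounts to a one-line computation: if $|y|\le 1+\varepsilon$ and $\iprod{x}{y}\ge \iprod{x}{x}=(1-\varepsilon)^2$, then

```latex
\[
|y-x|^2 = |y|^2 - 2\iprod{x}{y} + |x|^2
\le (1+\varepsilon)^2 - 2(1-\varepsilon)^2 + (1-\varepsilon)^2
= (1+\varepsilon)^2 - (1-\varepsilon)^2 = 4\varepsilon,
\]
```

so $y \in x + 2\sqrt{\varepsilon}D_n$.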
Note that in fact, since $K + 2\sqrt{\varepsilon}D_n$ is star shaped, we have:
\begin{equation}
(1-\varepsilon) D_n
\subseteq
K + 2\sqrt{\varepsilon}D_n.
\end{equation}
The outer radius $1+\varepsilon$ does not increase under additional
symmetrizations (it may decrease). As for the inner radius, we begin
with the small ball obtained in Lemma \ref{Lem_Give-Small-Ball}, and
use \eqref{Eq_Delta-net} to increase it geometrically. More precisely:
\begin{lem} \label{lem-StarShapedContainsBall}
Let $n\ge 2$, $K\in \mathscr{S}^n$, $r\in(0,1)$, and $\varepsilon\in
(0,\varepsilon_0)$, where $\varepsilon_0=1/25$. Assume that:
\begin{itemize}
\item $r D_n \subseteq K$.
\item $(1-\varepsilon) D_n
\subseteq
K + 2\sqrt{\varepsilon}D_n$.
\end{itemize}
Then there exist $N=\alpha + \beta|\log \varepsilon| + \gamma|\log r|$ Minkowski symmetrizations transforming $K$
into $\tilde{K}$ satisfying
\[
(1-4\sqrt{\varepsilon}) D_n \subseteq \tilde{K},
\]
where $\alpha, \beta, \gamma$ are positive constants.
\end{lem}
\begin{proof}
In each of the two cases $r < 2\sqrt{\varepsilon}$ and
$2\sqrt{\varepsilon} \le r$, we argue a bit differently, so we handle
them separately. We begin with the case of smaller initial inner radius.
\noindent{\bf Case a. Increasing $r$ geometrically to reach $2\sqrt{\varepsilon}$:}
If $r < 2\sqrt{\varepsilon}$ we may take the second assumption and
write for any $u\in S^{n-1}$:
\[
r\frac{1-\varepsilon}{2\sqrt{\varepsilon}} D_n
\subseteq
\frac{r}{2\sqrt{\varepsilon}}K + r D_n
\subset
K + r D_n
\subseteq
K + R_u (K),
\]
So that $r\frac{1-\varepsilon}{4\sqrt{\varepsilon}} D_n
\subseteq M_u (K)$. Defining the (decreasing) function $q:(0,1)\to{\cal R}^+$
by $q(\varepsilon)= \frac{1-\varepsilon} {4\sqrt{\varepsilon}}$, we
have $q(\varepsilon)\ge q(\varepsilon_0)= 6/5$, so the inner radius
multiplies by at least $6/5$. If $2\sqrt{\varepsilon} \le r(6/5)^m$,
then after $m$ symmetrizations the inner radius reaches
$2\sqrt{\varepsilon}$.
Thus after $N_a = 4 + 3 \left|\log\left( \frac{\varepsilon}{r^2} \right)\right|$
symmetrizations, we have reduced to the second case, where
$2\sqrt{\varepsilon} \le r$.
\noindent{\bf Case b. Increasing $r$ geometrically towards $1$:}
Again, for any $u\in S^{n-1}$ we have
\[
\frac{(1-[2\sqrt{\varepsilon} + \varepsilon]) + r}{2} D_n
=
\frac{(1 - \varepsilon) D_n +
(r - 2\sqrt{\varepsilon}) D_n}{2}
\subseteq
\]
\[
\frac{K + 2\sqrt{\varepsilon}D_n +
(r - 2\sqrt{\varepsilon}) D_n}{2} =
\frac{K + r D_n}{2}
\subseteq
M_u (K).
\]
So the difference $(1-[2\sqrt{\varepsilon} + \varepsilon])- r$
decreases by at least half. The inner radius exceeds $1 -
4\sqrt{\varepsilon}$ if and only if its difference from
$(1-[2\sqrt{\varepsilon} + \varepsilon])$ decreases below
$2\sqrt{\varepsilon} - \varepsilon > \sqrt{\varepsilon}$. Thus it
suffices to decrease that difference to $\sqrt{\varepsilon}$. For that
we require no more than $N_b=|\log_2(\sqrt{\varepsilon})| =
\frac{1}{\log 4} |\log(\varepsilon)|$ symmetrizations.
\end{proof}
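The step counts in the two cases can be simulated directly (a numerical check of ours, using the worst-case growth rates $6/5$ and $1/2$ from the proof) and compared against a bound of the form $\alpha+\beta|\log\varepsilon|+\gamma|\log r|$; the constants $6$, $3$, and $1/\log 4$ below are our own choices, consistent with the estimates in the proof.

```python
import math

def steps(r, eps):
    # phase 1 (case a): the inner radius grows by a factor of at least 6/5
    n = 0
    while r < 2 * math.sqrt(eps):
        r *= 6 / 5
        n += 1
    # phase 2 (case b): the gap to 1 - (2 sqrt(eps) + eps) halves each step;
    # the radius exceeds 1 - 4 sqrt(eps) once the gap drops below sqrt(eps)
    target = 1 - 2 * math.sqrt(eps) - eps
    while target - r > math.sqrt(eps):
        r = (target + r) / 2
        n += 1
    return n

for eps in (1e-3, 1e-4, 1e-5, 1e-6):
    for r in (1e-4, 1e-3, 1e-2, 1e-1):
        bound = 6 + 3 * abs(math.log(eps / r ** 2)) \
                  + math.log(1 / eps) / math.log(4)
        assert steps(r, eps) <= bound
```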
\begin{proof}[{\bf Proof of Theorem \ref{Thm_Mink-Isometric}.}] To
complete the proof one has to combine the steps above. Let $K\in\mathscr{S}^n$,
such that $M^\ast(K) = 1$. By Lemma \ref{Lem_Give-Small-Ball}, there
exist $cn$ Minkowski symmetrizations that transform $K$ into a set
$K_1$ such that
\[
\frac{c_2}{\sqrt{n}}D_n \subset K_1
\]
and no Minkowski symmetrization can change this fact. Let $0 <
\varepsilon < 1/25$. By Theorem \ref{thm-Klartag}, there exist
$cn|\log\varepsilon|$ symmetrizations that transform ${\rm conv}(K_1)$ into
a convex set $L$ such that $(1-\varepsilon)D_n \subset L \subset (1 +
\varepsilon)D_n$. Since $M_u {\rm conv} K = {\rm conv} M_u K$, we may apply the
same symmetrizations to $K_1$ to obtain a new set $K_2$ such that
${\rm conv}(K_2) = L$. By Lemma \ref{Lem_Conv=Ball-to-eps-net}, we get the
following:
\[
(1-\varepsilon)S^{n-1} \subset K_2 + 2\sqrt{\varepsilon}D_n.
\]
In addition,
\[
\frac{c_2}{\sqrt{n}}D_n \subset K_2.
\]
Thus, by Lemma \ref{lem-StarShapedContainsBall} there exist $\alpha +
\beta|\log\varepsilon| + \gamma\log n$ Minkowski symmetrizations that
transform $K_2$ into $K_3$ such that
\[
(1-4\sqrt{\varepsilon})D_n \subset K_3.
\]
Recall that $K_3 \subset (1+\varepsilon)D_n$. To sum it up, we
applied no more than
\[
cn + cn|\log\varepsilon| + \alpha + \beta|\log\varepsilon| +
\gamma\log n \le C n |\log\varepsilon|
\]
symmetrizations, for some universal constant $C > 0$.
During the proof we assumed that $\varepsilon < 1/25$. This can be
easily changed to $\varepsilon < 1/2$, at the cost of a different
constant in the expression $C n |\log\varepsilon|$, by always
symmetrizing the set to be $\varepsilon / 25$ close to the Euclidean
ball. By the same argument one may extend the result to all $\varepsilon\in
(0,1)$, and the corresponding bound on the number of symmetrizations
becomes $n (C|\log\varepsilon| + C')$.
\end{proof}
[Source: https://arxiv.org/abs/1505.04151, Metric Geometry (math.MG); Functional Analysis (math.FA), May 2015.]
https://arxiv.org/abs/1906.12085 | Tutorial: Complexity analysis of Singular Value Decomposition and its variants | We compared the regular Singular Value Decomposition (SVD), truncated SVD, the Krylov method and Randomized PCA, in terms of time and space complexity. It is well known that the Krylov method and Randomized PCA only perform well when k << n, i.e. the number of eigenpairs needed is far less than the matrix size. We compared them for calculating all the eigenpairs. We also discussed the relationship between Principal Component Analysis and SVD.

\section{Introduction}
Dimensionality reduction has always been a trendy topic in machine learning. Linear subspace method for reduction, e.g., Principal Component Analysis and its variation have been widely studied\cite{Xu1995Robust, Partridge2002Robust, Zou2006Sparse}, and some pieces of literature introduce probability and randomness to realize PCA\cite{Tipping2010Probabilistic, Halko2010Finding, Halko2011An, J1989Estimating, Woolfe2008A}. However, linear subspace is not applicable when the data lies in a non-linear manifold\cite{Belkin2014Laplacian, Kai2010Clustered}. Due to the direct connection with PCA, Singular Value Decomposition (SVD) is one of the most well-known algorithms for low-rank approximation\cite{Woodruff2014Low, Frieze1998Fast}, and it has been widely used throughout the machine learning and statistics community.
Applications of SVD include solving least squares problems\cite{Golub1970Singular, Zhang2010Regularized}, latent semantic analysis\cite{deerwester1990indexing, hofmann2001unsupervised}, genetic analysis, matrix completion\cite{Recht2012Exact, Cai2008A, Candes2010Matrix, Cand2010The}, data mining\cite{Chong2016Feature, Belabbas2009On}, etc. However, for a large-scale matrix, the runtime of traditional SVD is intolerable and the memory usage can be enormous.
$\textbf{Notations}$: we have a matrix $A$ with size $m\times n$, usually $m \gg n$. Our goal is to find the Principal Components (PCs) given the cumulative explained variance threshold $t$.
$\textbf{Assumptions}$: In this tutorial, every entry of matrix $A$ is real-valued; W.l.o.g., assume $m \gg n$ and $A$ has zero mean over each feature.
For your information, either each column or row of $A$ could represent an example, and the definition will be specified when necessary.
In the traditional approach to PCA, we compute the covariance matrix $S$ and then perform an eigen-decomposition of $S$. By selecting the top $K$ largest eigenvalues and their corresponding eigenvectors, we obtain the Principal Components (PCs). Nevertheless, if each column of $A$ is an example and the row size of $A$ is tremendously large, even storing the still larger covariance matrix in memory is expensive, let alone performing the eigen-decomposition.
\section{Preliminary knowledge}\label{sec:pre}
\subsection{Singular Value Decomposition (SVD)}
Any real or complex matrix can be written as the sum of a series of rank-1 matrices. In SVD, we have
\begin{align*}
A = U\Sigma V^T
\end{align*}
where
\begin{align*}
U &= [u_1 \cdots u_n]\in\mathbb{R}^{m \times n}\\
\Sigma &= diag(\sigma_1 \cdots \sigma_n)\in\mathbb{R}^{n\times n}\\
V &= [v_1 \cdots v_n]\in\mathbb{R}^{n \times n}
\end{align*}
Here $U$ has orthonormal columns and $V$ is an orthogonal matrix, i.e.
\begin{align*}
U^T U &= I_n\\
V^TV&=VV^T=I_n
\end{align*}
(note that $UU^T \neq I_m$ in general, since $U$ is $m\times n$ with $m>n$).
We could also rewrite SVD as following
\begin{align}\label{AV}
Av_i = \sigma_i u_i, i = 1 \cdots n
\end{align}
Geometrically speaking, the matrix $A$ maps the unit vector $v_i$ to the direction of $u_i$, stretched by a factor of $\sigma_i$.
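As a quick numerical illustration (a sketch in NumPy, not part of the original text), the relation $Av_i = \sigma_i u_i$ of Eq. (\ref{AV}) can be checked directly on a random matrix:

```python
# Sanity check of A v_i = sigma_i u_i for each singular triplet.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 6))                      # m >> n

U, s, Vt = np.linalg.svd(A, full_matrices=False)      # thin SVD

# Each right singular vector is mapped to the corresponding left singular
# vector, scaled by the singular value.
for i in range(6):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])
```

Note that NumPy returns $V^T$ (as `Vt`), so the $i$-th right singular vector is the $i$-th row of `Vt`.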
The matrices $U$ and $V$ can be obtained by eigen-decomposition of the matrices $AA^T$ and $A^TA$, respectively, and the singular values $\sigma_i$ are the square roots of the eigenvalues of $AA^T$ or $A^TA$.
Proof:
\begin{align}
AA^T &= U\Sigma V^T V\Sigma^T U^T = U\Sigma^2 U^T \label{AA}\\
A^TA &= V\Sigma^T U^T U\Sigma V^T = V\Sigma^2 V^T \label{AB}
\end{align}
Set $\Lambda = \Sigma^2= diag(\sigma_1^2 \cdots \sigma_n^2)$, i.e.
\begin{align}\label{LambdaSigma}
\lambda_i = \sigma_i^2
\end{align}
We could rewrite Eq. (\ref{AA}) and Eq. (\ref{AB}) as
\begin{align}
AA^Tu_i &= \lambda_i u_i \\
A^TAv_i &= \lambda_i v_i
\end{align}
Therefore, the column vectors of $U$ and $V$ are the (unit-norm) eigenvectors of $AA^T$ and $A^T A$, respectively. Moreover, the eigenvalues $\lambda_i$ are the squares of the singular values $\sigma_i$, as in Eq. (\ref{LambdaSigma}); in other words, the square roots of the eigenvalues are the singular values.
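This relationship is easy to verify numerically (an illustrative sketch, not from the original text): the singular values of a random matrix coincide with the square roots of the eigenvalues of $A^TA$.

```python
# Verify sigma_i = sqrt(lambda_i), as in Eq. (3).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))                  # m >> n

sigma = np.linalg.svd(A, compute_uv=False)        # singular values, descending
lam = np.linalg.eigvalsh(A.T @ A)[::-1]           # eigenvalues of A^T A, descending

assert np.allclose(sigma, np.sqrt(lam))
```

`eigvalsh` returns eigenvalues in ascending order, hence the `[::-1]` reversal to match the descending singular values.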
\subsection{Relations with PCA}
\textbf{If each column of $A$ represents an example or data point}, let $Y=U^TA$ be the transformed data, where $U$ is the left singular matrix in SVD. The covariance matrix of $Y$ is
\begin{align}
cov(Y) = YY^T = U^TAA^TU = U^TU\Sigma^2U^TU = \Sigma^2
\end{align}
\textbf{If each row of $A$ represents an example or data point}, let $Y=AV$ be the transformed data, where $V$ is the right singular matrix in SVD. The covariance matrix of $Y$ is
\begin{align}
cov(Y) = Y^TY = V^TA^TAV = V^TV\Sigma^2V^TV = \Sigma^2
\end{align}
This means the transformed data $Y$ are uncorrelated. Therefore, the columns of $U$ (when the examples are columns of $A$) or of $V$ (when the examples are rows) are the projection bases for the Principal Components.
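A short NumPy sketch (illustrative, under the zero-mean assumption stated earlier) confirming that the projected data are uncorrelated:

```python
# For rows-as-examples, Y = A V satisfies Y^T Y = Sigma^2 (diagonal).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
A -= A.mean(axis=0)                       # zero mean over each feature

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Y = A @ Vt.T                              # Y = A V, one transformed row per example

C = Y.T @ Y                               # (unnormalized) covariance of Y
assert np.allclose(C, np.diag(s**2), atol=1e-8)
```

The off-diagonal entries of `C` vanish up to rounding, which is exactly the "uncorrelated" statement above.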
\subsection{Truncated SVD}
Although the derivation of SVD is theoretically clear, in practice it is unwise to perform an eigen-decomposition of the matrix $AA^T$: it has the tremendous size $m \times m$, which would deplete memory and cost a great amount of time. By contrast, the matrix $A^TA$ has size only $n \times n$, so it is preferable to compute the orthogonal matrix $V$ first.
Combining Eq. (\ref{AV}) with Eq. (\ref{LambdaSigma}), we get
\begin{align}
u_i = \dfrac{Av_i}{\sqrt{\lambda_i}}, i = 1 \cdots n
\end{align}
This equation is the key to improving time and space efficiency because we do not perform eigen-decomposition on huge matrix $AA^T\in\mathbb{R}^{m\times m}$, which takes $\mathcal{O}(m^3)$ time and $\mathcal{O}(m^2)$ space.
Then we combine the $u_i$ column-wise to form $U$, and likewise the $v_i$ to form $V$. For $\Sigma$, it is $diag(\sigma_1 \cdots \sigma_n) = diag(\sqrt{\lambda_1} \cdots \sqrt{\lambda_n})$.
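The procedure just described can be sketched in a few lines of NumPy (an illustrative sketch assuming $A$ has full column rank, so no $\sigma_i$ is zero):

```python
# Truncated SVD via the small n x n matrix A^T A, then u_i = A v_i / sigma_i.
import numpy as np

def truncated_svd(A):
    lam, V = np.linalg.eigh(A.T @ A)      # eigenvalues ascending
    idx = np.argsort(lam)[::-1]           # reorder to descending
    lam, V = lam[idx], V[:, idx]
    sigma = np.sqrt(np.clip(lam, 0.0, None))   # guard tiny negative round-off
    U = (A @ V) / sigma                   # column-wise u_i = A v_i / sigma_i
    return U, sigma, V

rng = np.random.default_rng(2)
A = rng.standard_normal((1000, 30))
U, sigma, V = truncated_svd(A)

# Reconstruction check: A = U Sigma V^T up to floating-point error.
assert np.allclose(U @ np.diag(sigma) @ V.T, A)
```

Only the $n\times n$ matrix $A^TA$ is ever decomposed; the $m\times m$ matrix $AA^T$ is never formed.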
\subsection{PCA Evaluation}
In Section \ref{sec:pre}, we proved that the columns of the matrices $U$ and $V$ in SVD are the projection bases for PCA. In the PCA literature, there are many criteria for evaluating the residual error, e.g., the Frobenius norm or induced $L_2$ norm of the difference matrix (original matrix minus approximated matrix), the explained variance, and the cumulative explained variance.
Usually, we use the cumulative explained variance criterion for evaluation.
\textbf{cumulative explained variance criterion:} Given the threshold $t$, find the minimal integer $K$ such that
\begin{align}
\frac{\Sigma_{i=1}^{K} \lambda_i }{\Sigma_{i=1}^n \lambda_i}\geq t
\end{align}
where each $\lambda_i$ is an eigenvalue of the matrix $A^TA$ or $AA^T$. Every $\lambda_i=\sigma_i^2$ measures the variance along the $i$-th principal axis, which is why the criterion is named cumulative explained variance.
For SVD or PCA algorithms that do not obtain all the eigenvalues, or cannot compute them accurately, it may seem that the denominator $\Sigma_{i=1}^n \lambda_i$ cannot be calculated. In fact, the sum of all eigenvalues can be computed as
\begin{align}\label{sumeigen}
\Sigma_{i=1}^n \lambda_i = tr(A^TA) = tr(AA^T) = \|A\|_F^2 = \Sigma_{i, j} a_{ij}^2
\end{align}
\textbf{Therefore, to evaluate the denominator we do not need to perform an eigen-decomposition of either the large matrix $AA^T\in\mathbb{R}^{m\times m}$ or the small matrix $A^TA\in\mathbb{R}^{n\times n}$.} Eq. (\ref{sumeigen}) saves us $\mathcal{O}(n^3)$ time and $\mathcal{O}(n^2)$ space.
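The criterion above can be sketched as follows (illustrative; the helper name `num_components` and the synthetic spectrum are our own):

```python
# Smallest K whose leading eigenvalues explain a fraction t of total variance,
# with the denominator taken from ||A||_F^2 as in Eq. (sumeigen).
import numpy as np

def num_components(sigma, A, t=0.95):
    total = np.sum(A**2)                  # ||A||_F^2 = sum of all eigenvalues
    cum = np.cumsum(sigma**2) / total     # cumulative explained variance
    return int(np.searchsorted(cum, t) + 1)

rng = np.random.default_rng(3)
# Give the columns a decaying scale so the spectrum is not flat.
A = rng.standard_normal((500, 20)) @ np.diag(np.linspace(5, 0.1, 20))
sigma = np.linalg.svd(A, compute_uv=False)
K = num_components(sigma, A, t=0.9)
assert 1 <= K <= 20
```

The denominator uses only the entries of $A$, so the criterion remains computable even when an algorithm returns just the top few singular values.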
\section{Complexity Analysis}
In this section, we compare the time and space complexity of the Krylov method, Randomized PCA and truncated SVD. Because the implementations of the MatLab and Python built-in economy-size SVD are proprietary, their complexity analysis is not conducted here.
To restate, our matrix $A$ has size $m\times n$ with $m>n$.
\subsection{Time Complexity}
For time complexity, we use the number of FLoating-point OPerations (FLOP) as a quantification metric.
For matrices $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{n\times l}$, the product $AB$ takes $mnl$ FLOP of products and $ml(n-1)$ FLOP of summations, so multiplying two matrices takes $\mathcal{O}(mnl+ml(n-1))=\mathcal{O}(2mnl-ml) = \mathcal{O}(mnl)$ FLOP; dropping the constant factor for a single multiplication does not bias the analysis.
Note, however, that when comparing the different SVD algorithms below we retain the leading coefficients in the Big-O notation, since they matter for the comparison.
\subsubsection{Krylov method}
For the Krylov method, we discuss the time complexity of each step.
\begin{enumerate}
\item Forming standard normal distribution matrix $G$ of size $n \times l$ takes $\mathcal{O}(nl)$ FLOP, where in practice $l=0.5n$.
\item Forming matrix $H^{(0)}=AG \in \mathbb{R}^{m\times l}$ takes $\mathcal{O}(mnl)$ FLOP.
\item Forming matrix $H^{(i)}=A(A^T H^{(i-1)})\in \mathbb{R}^{m\times l}$ takes $\mathcal{O}(2mnl)$ FLOP each: the multiplication in brackets, $A^T H^{(i-1)}$, takes $\mathcal{O}(mnl)$ FLOP, and multiplying by $A$ takes another $\mathcal{O}(mnl)$ FLOP. In total, generating the $i$ matrices takes $\mathcal{O}(2imnl)$ FLOP.
\item Forming matrix $H=\left(H^{(0)}\left|H^{(1)}\right| \ldots\left|H^{(i-1)}\right| H^{(i)}\right) \in \mathbb{R}^{m\times (i+1)l}$ by concatenating each $H^{(i)}$ takes $\mathcal{O}(1)$ FLOP.
\item Performing QR decomposition on $H$ takes $\mathcal{O}(2m[(i+1)l]^2 - 2[(i+1)l]^3/3)$ FLOP.
\item Forming $T = A^T Q \in \mathbb{R}^{n\times (i+1)l}$ takes $\mathcal{O}((i+1)mnl)$ FLOP.
\item Performing SVD on $T=\tilde{V}\tilde{\Sigma}W^T$ takes $\mathcal{O}(n[(i+1)l]^2)$ FLOP.
\item Forming $\tilde{U}=QW$ takes $\mathcal{O}(m[(i+1)l]^2)$ FLOP.
\end{enumerate}
Summing the per-step costs above, the time complexity of the Krylov method is
\begin{align}
\mathcal{O}\left(nl+(3i+2)mnl+(i+1)^2 l^2 \left(3m+n-\dfrac{2}{3}(i+1)l\right)\right) \text{ FLOP}
\end{align}
In practice, $l = 0.5n$ and $i=1$; the time complexity then becomes
\begin{align}
\mathcal{O}\left(\dfrac{n^2}{2}+\dfrac{11}{2}mn^2+\dfrac{n^3}{3}\right) \text{ FLOP}
\end{align}
\subsubsection{Randomized PCA}
For Randomized PCA, we discuss the time complexity of each step. It is very similar to the Krylov method.
\begin{enumerate}
\item Forming standard normal distribution matrix $G$ of size $n \times l$ takes $\mathcal{O}(nl)$ FLOP, where in practice $l=0.5n$.
\item Forming matrix $H^{(0)}=AG \in \mathbb{R}^{m\times l}$ takes $\mathcal{O}(mnl)$ FLOP.
\item Forming matrix $H^{(i)}=A(A^T H^{(i-1)})\in \mathbb{R}^{m\times l}$ takes $\mathcal{O}(2mnl)$ FLOP each: the multiplication in brackets, $A^T H^{(i-1)}$, takes $\mathcal{O}(mnl)$ FLOP, and multiplying by $A$ takes another $\mathcal{O}(mnl)$ FLOP. In total, it takes $\mathcal{O}(2imnl)$ FLOP.
\item Performing QR decomposition on $H$ takes $\mathcal{O}(2ml^2 - 2l^3/3)$ FLOP.
\item Forming $T = A^T Q \in \mathbb{R}^{n\times l}$ takes $\mathcal{O}(mnl)$ FLOP.
\item Performing SVD on $T=\tilde{V}\tilde{\Sigma}W^T$ takes $\mathcal{O}(nl^2)$ FLOP.
\item Forming $\tilde{U}=QW$ takes $\mathcal{O}(ml^2)$ FLOP.
\end{enumerate}
In total, the time complexity of Randomized PCA is
\begin{align}
\mathcal{O}(nl+(2i+2)mnl+l^2 (3m+n-\dfrac{2}{3}l)) \text{ FLOP}
\end{align}
In practice, $l = 0.5n, i = 1$, then the time complexity will be
\begin{align}
\mathcal{O}(\dfrac{n^2}{2}+\dfrac{11}{4}mn^2+\dfrac{n^3}{6}) \text{ FLOP}
\end{align}
\subsubsection{Truncated SVD}
We discuss the time complexity of Truncated SVD for each step.
\begin{enumerate}
\item Forming matrix $A^T A\in\mathbb{R}^{n\times n}$ takes $\mathcal{O}(mn^2)$ FLOP.
\item Performing eigen-decomposition on $A^TA \in\mathbb{R}^{n\times n}$ takes $\mathcal{O}(n^3)$ FLOP.
\item Taking the square root of each eigenvalue of $A^T A$ takes $\mathcal{O}(n)$ FLOP.
\item Forming $u_i = \dfrac{Av_i}{\sigma_i}$ takes $\mathcal{O}(n(mn+m))$ FLOP, as $Av_i$ takes $\mathcal{O}(mn)$ FLOP while divided by $\sigma_i$ takes $\mathcal{O}(m)$ FLOP. In total, we have $n$ equations like this, thus it takes $\mathcal{O}(n(mn+m))$ FLOP.
\end{enumerate}
In total, the time complexity of Truncated SVD is
\begin{align}
\mathcal{O}(2mn^2+n^3+n+mn)
\end{align}
\subsection{Space Complexity}
We evaluate the space complexity by the number of matrix entries. For a matrix $A\in\mathbb{R}^{m\times n}$, the space complexity is $\mathcal{O}(mn)$. In MatLab or Python, each (double-precision) entry takes 8 bytes of memory.
\subsubsection{Krylov method}
\begin{enumerate}
\item Forming standard normal distribution matrix $G$ of size $n \times l$ takes $\mathcal{O}(nl)$, where in practice $l=0.5n$.
\item Forming matrix $H^{(0)}=AG \in \mathbb{R}^{m\times l}$ takes $\mathcal{O}(ml)$.
\item Forming matrix $H^{(i)}=A(A^T H^{(i-1)})\in \mathbb{R}^{m\times l}$ takes $\mathcal{O}(ml)$.
\item Forming matrix $H=\left(H^{(0)}\left|H^{(1)}\right| \ldots\left|H^{(i-1)}\right| H^{(i)}\right) \in \mathbb{R}^{m\times (i+1)l}$ by concatenating each $H^{(i)}$ takes $\mathcal{O}((i+1)ml)$.
\item Performing QR decomposition on $H$ takes $\mathcal{O}((i+1)ml)$. Note that we discard matrix $R$, only $Q$ is saved.
\item Forming $T = A^T Q \in \mathbb{R}^{n\times (i+1)l}$ takes $\mathcal{O}((i+1)nl)$.
\item Performing SVD on $T=\tilde{V}\tilde{\Sigma}W^T$ takes $\mathcal{O}((i+1)nl + 2[(i+1)l]^2)$, for matrix $\tilde{V}\in\mathbb{R}^{n\times (i+1)l}$ takes $\mathcal{O}((i+1)nl)$ and $\tilde{\Sigma}\in\mathbb{R}^{(i+1)l\times (i+1)l}$ takes $\mathcal{O}([(i+1)l]^2)$, and $W\in\mathbb{R}^{(i+1)l\times (i+1)l}$ takes $\mathcal{O}([(i+1)l]^2)$. In total, it takes $\mathcal{O}((i+1)nl + 2[(i+1)l]^2)$.
\item Forming $\tilde{U}=QW$ takes $\mathcal{O}((i+1)ml)$.
\end{enumerate}
In total with $A$ taking $\mathcal{O}(mn)$, the space complexity of Krylov method is
\begin{align}
\mathcal{O}(mn+(3i+4)ml + (2i+3)nl + 2[(i+1)l]^2)
\end{align}
In practice, $l = 0.5n, i = 1$, then the space complexity of Krylov method will be
\begin{align}
\mathcal{O}(\dfrac{9}{2}mn + \dfrac{9}{2}n^2)
\end{align}
\subsubsection{Randomized PCA}
\begin{enumerate}
\item Forming standard normal distribution matrix $G$ of size $n \times l$ takes $\mathcal{O}(nl)$, where in practice $l=0.5n$.
\item Forming matrix $H^{(0)}=AG \in \mathbb{R}^{m\times l}$ takes $\mathcal{O}(ml)$.
\item Forming matrix $H^{(i)}=A(A^T H^{(i-1)})\in \mathbb{R}^{m\times l}$ takes $\mathcal{O}(ml)$.
\item Performing QR decomposition on $H$ takes $\mathcal{O}(ml)$. Note that we discard matrix $R$, only $Q$ is saved.
\item Forming $T = A^T Q \in \mathbb{R}^{n\times l}$ takes $\mathcal{O}(nl)$.
\item Performing SVD on $T=\tilde{V}\tilde{\Sigma}W^T$ takes $\mathcal{O}(nl + 2l^2)$, for matrix $\tilde{V}\in\mathbb{R}^{n\times l}$ takes $\mathcal{O}(nl)$ and $\tilde{\Sigma}\in\mathbb{R}^{l\times l}$ takes $\mathcal{O}(l^2)$, and $W\in\mathbb{R}^{l\times l}$ takes $\mathcal{O}(l^2)$. In total, it takes $\mathcal{O}(nl + 2l^2)$.
\item Forming $\tilde{U}=QW$ takes $\mathcal{O}(ml)$.
\end{enumerate}
In total with $A$ taking $\mathcal{O}(mn)$, the space complexity of Randomized PCA is
\begin{align}
\mathcal{O}(mn+3ml+3nl+2l^2)
\end{align}
In practice, $l = 0.5n$; the space complexity of Randomized PCA then becomes
\begin{align}
\mathcal{O}(\dfrac{5}{2}mn + 2n^2)
\end{align}
\subsubsection{Truncated SVD}
We discuss the space complexity of Truncated SVD for each step.
\begin{enumerate}
\item Forming matrix $A^T A\in\mathbb{R}^{n\times n}$ takes $\mathcal{O}(n^2)$.
\item Performing eigen-decomposition on $A^TA \in\mathbb{R}^{n\times n}$ takes $\mathcal{O}(n^2+n)$.
\item Taking the square root of each eigenvalue of $A^T A$ takes $\mathcal{O}(n)$.
\item Forming $u_i = \dfrac{Av_i}{\sigma_i}$ takes $\mathcal{O}(mn)$, as each $u_i$ takes $\mathcal{O}(m)$ and we have $n$ equations like this, thus in total it takes $\mathcal{O}(mn)$.
\item Forming $V$ takes $\mathcal{O}(n^2)$.
\item Storing $n$ singular values takes $\mathcal{O}(n)$.
\end{enumerate}
In total with $A$ taking $\mathcal{O}(mn)$, the space complexity of Truncated SVD is
\begin{align}
\mathcal{O}(3n^2+3n+2mn)
\end{align}
\subsection{Summary of Complexity Analysis}
\begin{table}[h]
\caption{Comparison of complexity}
\label{table-complexity}
\centering
\begin{tabular}{lll}
\toprule
Method & Time complexity & Space complexity \\
\midrule
Krylov method &
$\mathcal{O}(\dfrac{n^2}{2}+\dfrac{11}{2}mn^2+\dfrac{n^3}{3})$ & $\mathcal{O}(\dfrac{9}{2}mn + \dfrac{9}{2}n^2)$ \\
& & \\
Randomized PCA & $\mathcal{O}(\dfrac{n^2}{2}+\dfrac{11}{4}mn^2+\dfrac{n^3}{6})$ & $\mathcal{O}(\dfrac{5}{2}mn + 2n^2)$ \\
& & \\
Truncated SVD & $\mathcal{O}(2mn^2+n^3+n+mn)$ & $\mathcal{O}(3n^2+3n+2mn)$ \\
\bottomrule
\end{tabular}
\end{table}
We summarized the time complexity and space complexity in Table \ref{table-complexity}.
Under the assumption that $m \gg n$, keeping the highest-order term and its coefficient, the Krylov method takes $\mathcal{O}(\dfrac{11}{2}mn^2)$ FLOP, Randomized PCA takes $\mathcal{O}(\dfrac{11}{4}mn^2)$ FLOP, and Truncated SVD takes only $\mathcal{O}(2mn^2)$ FLOP. Therefore, Truncated SVD is the fastest of the SVD algorithms considered. Furthermore, Truncated SVD keeps all the eigenpairs rather than only the first $k$ pairs as the Krylov method and Randomized PCA do.
For space complexity, Truncated SVD needs the least memory, $\mathcal{O}(3n^2+3n+2mn)$, while the Krylov method needs the most, $\mathcal{O}(\dfrac{9}{2}mn + \dfrac{9}{2}n^2)$; Randomized PCA lies in between.
\section{Experiments}
We generate matrices $A$ whose entries follow the standard normal distribution, i.e., $a_{ij} \sim \mathcal{N}(0,1)$, with 5 row sizes in $[2000, 4000, 6000, 8000, 10000]$ and 12 column sizes in $[100, 200, 300, 400, 500, 600, 800, 900, 1000, 1200, 1500, 2000]$, 60 matrices in total. Each experiment is repeated 10 times to obtain an average runtime. To evaluate the residual error, the relative Frobenius-norm error is used
\begin{align}
\delta = \dfrac{\|A-U\Sigma V^T \|_F}{\|A\|_F}
\end{align}
In Fig. \ref{runtimefixcol}, we compare the runtime of 4 SVD methods: Truncated SVD (FameSVD), the Krylov method, Randomized PCA, and econ SVD (the MatLab built-in economy-size SVD). The matrix column size is fixed at 2000 and the row size increases gradually. All 4 methods exhibit linear runtime growth as the row size increases, and Truncated SVD outperforms the other 3 approaches.
\graphicspath{{E:/LXC/Graduate2Spring/FastSVD/experiment/MatLab/Fig/}}
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{runtime_columnSize_2000.png}
\caption{Runtime comparisons between 4 methods of SVD. The matrix column size is fixed at 2000, but row size varies. Truncated SVD method outperforms the rest.}
\label{runtimefixcol}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.65\linewidth]{runtime_rowSize_10000.png}
\caption{Runtime comparisons between 4 methods of SVD. The matrix row size is fixed at 10000, but column size varies. Truncated SVD method outperforms the rest.}
\label{runtimefixrow}
\end{figure}
In Fig. \ref{runtimefixrow}, we fix the matrix row size at 10000 and increase the column size gradually. All 4 methods exhibit non-linear runtime growth as the column size increases. Of the 4 methods, truncated SVD takes the least runtime in every scenario.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\linewidth]{memory_rowSize_10000.png}
\caption{Memory usage between 3 methods of SVD. The matrix row size is fixed at 10000, but column size varies. Truncated SVD needs the least memory.}
\label{memoryusagefig}
\end{figure}
In Fig. \ref{memoryusagefig}, the matrix row size is fixed at 10000 while the column size varies. Truncated SVD uses the least memory, while Randomized PCA needs the most.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{fourinone2.png}
\caption{Left to right: Original image, Randomized PCA, Krylov method and truncated SVD. The first 392 (784/2) principal components are reserved.}
\label{mnistfig}
\end{figure}
We also evaluate the algorithms on the handwritten-digit dataset MNIST\cite{Deng2012The}. We form a matrix $A$ of size $60000\times 784$ by stacking $60000$ vectorized $28\times 28$ intensity images. For runtime, Randomized PCA and the Krylov method take 4.54$s$ and 10.79$s$, respectively, to obtain the first 392 (784/2) principal components, whereas Truncated SVD takes only \textbf{3.12s} to obtain all 784 eigenvalues and eigenvectors. For memory usage, Randomized PCA needs 1629.1MB and the Krylov method 1636.1MB, while truncated SVD uses only \textbf{731.9MB}.
Our experiments are conducted on MatLab R2013a and Python 3.7 with NumPy 1.15.4, on an Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz with 8.00GB RAM, running Windows 7. The truncated SVD is faster than the built-in economic SVD of both MatLab and NumPy.
\section{Conclusion}
The regular SVD performs the worst because it needs the most memory and time. When all eigenpairs are needed ($k = n$), truncated SVD outperforms the other SVD variants discussed in this tutorial. The memory usage of these SVD methods grows linearly with the matrix column size, with truncated SVD having the lowest growth rate; the runtime of truncated SVD also grows the most slowly with column size, while the Krylov method grows the fastest.
{\small
\bibliographystyle{ieee}
| {
"timestamp": "2019-10-15T02:24:05",
"yymm": "1906",
"arxiv_id": "1906.12085",
"language": "en",
"url": "https://arxiv.org/abs/1906.12085",
"abstract": "We compared the regular Singular Value Decomposition (SVD), truncated SVD, Krylov method and Randomized PCA, in terms of time and space complexity. It is well-known that Krylov method and Randomized PCA only performs well when k << n, i.e. the number of eigenpair needed is far less than that of matrix size. We compared them for calculating all the eigenpairs. We also discussed the relationship between Principal Component Analysis and SVD.",
"subjects": "Numerical Analysis (math.NA); Machine Learning (cs.LG); Machine Learning (stat.ML)",
"title": "Tutorial: Complexity analysis of Singular Value Decomposition and its variants",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9799765546169712,
"lm_q2_score": 0.8740772433654401,
"lm_q1q2_score": 0.8565752054223639
} |
https://arxiv.org/abs/1210.6582 | Minimal Periods for Ordinary Differential Equations in Strictly Convex Banach Spaces and Explicit Bounds for some l^p-Spaces | Let x(t) be a non-constant T-periodic solution to the ordinary differential equation x'= f(x) in a Banach space X where f is assumed to be Lipschitz continuous with constant L. Then there exists a constant c such that T L >= c, with c only depending on X. It is known that c >= 6 in any Banach space and that c = 2{\pi} in any Hilbert space, but whereas the bound of c = 2 pi is sharp in any Hilbert space, there exists only one known example of a Banach space such that c = 6 is optimal. In this paper, we show that the inequality is in fact strict in any strictly convex Banach space. Moreover, we improve the lower bound for l^p(R^n) and L^p(M, {\mu}) for a range of p close to p = 2 by using a form of Wirtinger's inequality for functions in W^{1,p}([0, T ], L^p(M, {\mu})). | \section{Introduction}
Consider the ordinary differential equation $\dot{x}=f(x)$ in a Banach space $X$, where $f$ is Lipschitz continuous with constant $L$, that is, for any $x,y\in X$
$$
\|f(x)-f(y)\|_X\leq L\|x-y\|_X.
$$
In this case one can relate the period $T$ of any non-constant periodic orbit to the Lipschitz constant $L$ via the inequality $TL\ge c$.
In 1969, Yorke \cite{yor} proved that $c=2\pi$ when $X=\mathbb{R}^n$ with its usual norm. Lasota \& Yorke \cite{lasyor} showed that the proof extends to arbitrary Hilbert spaces and they
proved the bound $c = 4$ for any Banach space. This was improved to $c=4.5$ by Busenberg \& Martelli \cite{busmar} and finally to $c=6$ by Busenberg, Fisher \& Martelli \cite{busfishmar}
who also gave another proof for $c=2\pi$ in any Hilbert space using Wirtinger's inequality. An obvious extension of the simple two-dimensional example
$$
\dot x=Ly\qquad\dot y=-Lx
$$
shows that $c=2\pi$ is sharp in any Hilbert space. Busenberg, Fisher \& Martelli \cite{busfishmar2} also constructed an example of an ODE on a periodic orbit of period $1$, which when viewed as a subset of
$L^1([0,1]^2)$ has Lipschitz constant $L=6$, showing that $c=6$ is sharp for general Banach spaces.\\
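For concreteness, the two-dimensional example above can be verified directly:

```latex
% Direct verification that the planar example attains TL = 2\pi.
\[
  x(t)=\sin(Lt),\quad y(t)=\cos(Lt)
  \quad\Longrightarrow\quad
  \dot x = L\cos(Lt)=Ly,\qquad
  \dot y = -L\sin(Lt)=-Lx,
\]
\[
  \text{so the orbit has period } T=\frac{2\pi}{L},
  \text{ while } f(x,y)=(Ly,-Lx) \text{ is Lipschitz with constant } L,
  \text{ giving } TL=2\pi.
\]
```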
However, some interesting questions about minimal periods remain unanswered. Does there exist an ODE in a finite-dimensional Banach space such
that the lower bound of $TL=6$ is obtained? Does $TL\geq 2\pi$ characterise Hilbert spaces? Is there a (non-Hilbert) Banach space for which $c> 6$?
The results in this paper address this last question. First we show that in strictly convex Banach spaces necessarily $TL>6$. These are the normed spaces whose unit ball is a strictly convex set. It is easy to see that the unit balls of $\ell^1$ and $\ell^\infty$ contain a line segment and are therefore not strictly
convex, whereas the unit balls for all $1<p<\infty$ are strictly convex. This result nicely complements the existing theory, because the only known example
of a Banach space with $c=6$, namely $L^1$, is not strictly convex.\\
However, we prove not only that the inequality is strict in any strictly convex Banach space but we are also able to push the bound a little further
for the simplest family of interesting finite-dimensional Banach spaces, namely $\ell^p({\mathbb R}^n)$, that is $\mathbb{R}^n$ equipped with the $\ell^p$-norm,
$$
\|(x_1,\ldots,x_n)\|_{\ell^p}=\left(\sum_{j=1}^n|x_j|^p\right)^{1/p}.
$$
It is remarkable that even for Euclidean spaces with the family of $\ell^p$-norms the optimal constant is not known\footnote{Unfortunately there appears to be an error in one of the integral calculations in the paper by Zevin \cite{zev} which claims to show that $c=2\pi$ in $\ell^\infty({\mathbb R}^n)$.} for $p\neq 2$.
Our second contribution in this paper is to point out that by using a generalised form of Wirtinger's inequality, one can find explicit bounds on $c$ which are strictly larger than $6$ in a range of $\ell^p$-spaces near $p=2$ ($1.43\lesssim p\lesssim 3.35$). A similar argument also works in the infinite-dimensional Lebesgue spaces $L^p(M,\mu)$.
We should mention the interesting related result, due to Zevin \cite{zev2}, that if $X$ is a finite-dimensional Banach space and one considers the second order equation $\ddot x=f(x)$ with $f:X\to X$ Lipschitz with constant $L^2$, then $TL\ge 2\pi$ independent of the space $X$. (The paper \cite{zev2} claims a similar result for the first order equation $\dot x=f(x)$, but there is a small error in the proof of equation (11). Nevertheless, Zevin's argument readily yields the result we have stated for $\ddot x=f(x)$.)
\section{Minimal periods in strictly convex Banach spaces}
Let us start this section by stating the main result of this paper:
\begin{thm}\label{main result}
Let $X$ be a strictly convex Banach space. Then
$$ TL > 6.$$
\end{thm}
In fact the proof of this statement is a refinement of an integral inequality originally introduced by Busenberg, Fisher \& Martelli \cite{busfishmar}. The revised
version of the result is summarised in the following lemma.
\begin{lem}
\label{BanachLemma}
Let $X$ be a normed space and $y:\mathbb{R}\rightarrow X$ be a continuous, $T$-periodic function such that $ \left\Vert\dot{y}(t)\right\Vert$ is integrable. Then $$\int_0^T \int_0^T \left\Vert y(t)-y(s) \right\Vert {\rm d} s\; {\rm d} t \leq \frac{T}{6} \int_0^T \int_0^T \left\Vert\dot{y}(t)-\dot{y}(s)\right\Vert {\rm d} s \;{\rm d} t.$$
If $X$ is a strictly convex Banach space, then the above inequality is in fact strict.
\end{lem}
Before we go into details of the proof, we show how Busenberg, Fisher \& Martelli used it to establish $TL\geq 6$ for any Banach space.
\begin{proof}[Proof of Theorem \ref{main result}]
Applying Lemma \ref{BanachLemma} and using the Lipschitz continuity of $f$, it follows that
\begin{eqnarray*}
\int_0^T\int_0^T || x(t)- x(s)||{\rm d} s\;{\rm d} t &\leq & \frac{T}{6} \int_0^T\int_0^T ||\dot{ x}(t)-\dot{ x}(s)||{\rm d} s\;{\rm d} t\\
&=& \frac{T}{6}\int_0^T\int_0^T ||f( x(t))-f( x(s))||{\rm d} s\;{\rm d} t\\ &\leq & \frac{LT}{6}\int_0^T \int_0^T || x(t)- x(s)||{\rm d} s\;{\rm d} t.
\end{eqnarray*}
Dividing both sides of the inequality by $\int_0^T\int_0^T || x(t)- x(s)||{\rm d} s\;{\rm d} t$ yields $TL\geq 6$; when $X$ is strictly convex the first inequality is strict by Lemma \ref{BanachLemma}, and hence $TL>6$.
\end{proof}
We now turn to the main proof of this section.
\begin{proof}[Proof of Lemma \ref{BanachLemma}]\label{Integral Lemma Martelli}
We know that $y$ is periodic with period $T$. Hence its integral over one period is shift invariant and thus $$\int_0^T \int_0^T \left\Vert y(t+s)-y(s)\right\Vert {\rm d} s{\rm d} t=\int_0^T \int_0^T \left\Vert y(t)-y(s)\right\Vert {\rm d} s{\rm d} t.$$
Using the above observation, we can derive the following integral expression
\begin{eqnarray}\label{inequality martelli}
\int_0^T \hspace{-0.2cm} \int_0^T \hspace{-0.1cm}\left\Vert y(t)-y(s)\right\Vert {\rm d} s\;{\rm d} t \hspace{-0.2cm} \nonumber
&=&\hspace{-0.3cm}\int_0^T \hspace{-0.2cm} \int_0^T \hspace{-0.05cm}\left\Vert y(t+s)-y(s)\right\Vert {\rm d} s\; {\rm d} t\nonumber\\
&=&\hspace{-0.3cm}\int_0^T \hspace{-0.2cm}\int_0^T \hspace{-0.05cm}\frac{(T-t)t}{T} \left\Vert\frac{y(t+s)-y(s)}{t}-\frac{y(s)-y(s+t-T)}{T-t}\right\Vert {\rm d} s\; {\rm d} t\nonumber\\
&=&\hspace{-0.3cm}\int_0^T \hspace{-0.2cm}\int_0^T \frac{(T-t)t}{T^2}\left\Vert \int_0^{T} \dot{y}\left(s+\frac{tr}{T}\right)- \dot{y}\left(s+\frac{tr}{T}-r\right){\rm d} r\right\Vert {\rm d} s\; {\rm d} t\nonumber\\
&\leq &\hspace{-0.3cm}\int_0^T \hspace{-0.2cm} \int_0^T \frac{(T-t)t}{T^2}\int_0^T \left\Vert \dot{y}\left(s+\frac{tr}{T}\right)-\dot{y}\left(s+\frac{tr}{T}-r\right)\right\Vert {\rm d} r\;{\rm d} s\; {\rm d} t\\
&= &\hspace{-0.3cm}\int_0^T \hspace{-0.05cm} \frac{(T-t)t}{T^2}\int_0^T \hspace{-0.2cm}\int_0^T \left\Vert \dot{y}\left(s+\frac{tr}{T}\right)-\dot{y}\left(s+\frac{tr}{T}-r\right)\right\Vert {\rm d} s\; {\rm d} r\; {\rm d} t\nonumber
\end{eqnarray}
The last inner integral has been taken over one period, so we may shift it by $tr/T$ in order to obtain
\begin{eqnarray*}
\int_0^T\int_0^T \left\Vert y(r)-y(s)\right\Vert {\rm d} r\;{\rm d} s &\leq & \int_0^T \frac{(T-t)t}{T^2}dt \int_0^T \int_0^T \left \Vert \dot{y}(s+r)-\dot{y}(s)\right \Vert {\rm d} s\; {\rm d} r\\
&=& \frac{T}{6} \int_0^T \int_0^T \left \Vert \dot{y}(r)-\dot{y}(s)\right \Vert {\rm d} s\; {\rm d} r
\end{eqnarray*}
giving us the desired inequality for arbitrary Banach spaces.\\
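The constant $T/6$ arises from evaluating the weight integral explicitly:

```latex
\[
  \int_0^T \frac{(T-t)t}{T^2}\,{\rm d} t
  = \frac{1}{T^2}\left[\frac{T t^2}{2}-\frac{t^3}{3}\right]_0^T
  = \frac{1}{T^2}\left(\frac{T^3}{2}-\frac{T^3}{3}\right)
  = \frac{T}{6}.
\]
```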
From now on we consider the case when $X$ is in fact a strictly convex Banach space. The only actual inequality in the above argument occurs in line (\ref{inequality martelli})
where we use the triangle inequality for the Banach space $X$. Note that in doing so, we have a weight
$$\frac{(T-t)t}{T^2}$$
in front of the inner integral, which vanishes at $t = 0,T$. In particular, if we show that this inequality must actually be strict for some $s$ and some $0 < t < T$, our statement follows. Moreover, because of the weight these restrictions on $t$ are necessary: at $t = 0$ or $t=T$ the triangle inequality may hold with equality without making the chain of inequalities strict. Note that from the continuity of $\dot{y}(t)$ we obtain that the functions
$$ (s,t) \rightarrow \left\|\int_{0}^{T}\dot{y}\left(s+\frac{tr}{T}\right) -\dot{y}\left(s+\frac{tr}{T}-r\right){\rm d} r\right\|$$
$$ (s,t) \rightarrow \int_{0}^{T}\left\|\dot{y}\left(s+\frac{tr}{T}\right) -\dot{y}\left(s+\frac{tr}{T}-r\right)\right\| {\rm d} r$$
are continuous as well. Fix $s$ and $0 < t < T$, fix an arbitrarily fine decomposition $0 = a_0 < a_1 < \dots < a_n = T$ and abbreviate
$$ b_i := \int_{a_i}^{a_{i+1}}\dot{y}\left(s+\frac{tr}{T}\right){\rm d} r \qquad \mbox{and} \qquad c_i := \int_{a_i}^{a_{i+1}}\dot{y}\left(s+\frac{tr}{T}-r\right){\rm d} r.$$
If there is in fact equality in (\ref{inequality martelli}), then we must have equality in every step of iteratively applying the triangle inequality and thus
\begin{eqnarray*}
\left\|\sum_{i=0}^{n-1}{b_i-c_i}\right\| &=& \left\|b_0 - c_0\right\| + \left\|\sum_{i=1}^{n-1}{b_i-c_i}\right\| \\
&=& \left\|b_0 - c_0\right\| + \left\|b_1 - c_1\right\| + \left\|\sum_{i=2}^{n-1}{b_i-c_i}\right\| \\
&=& \dots \\
&=& \sum_{i=0}^{n-1}{\left\|b_i-c_i\right\|}.
\end{eqnarray*}
W.l.o.g. we assume that all the terms satisfy $b_{i}-c_{i} \neq \textbf{0}$. Strict convexity implies in the last line of this argument that
$b_{n-2} - c_{n-2}$ and $b_{n-1} - c_{n-1}$ are collinear. By the same reasoning $b_{n-3} - c_{n-3}$ and $(b_{n-2} - c_{n-2}) + (b_{n-1} - c_{n-1})$
are collinear, however, the last expression itself is collinear to $b_{n-2}-c_{n-2}$ as well as $b_{n-1}- c_{n-1}$. Iterating this argument shows
that all $b_{i}-c_{i}$ are necessarily collinear. Using the continuity of $\dot{y}(t)$, making the partition sufficiently small and applying the
fundamental theorem of calculus, we can deduce that for every fixed $s$ and $0 < t < T$ there exists a vector $\textbf{v} \in X$ and a function
$g:[0,T] \rightarrow \mathbb{R}$ such that for all $0 \leq r \leq T$
\begin{equation}\dot{y}\left(s+\frac{tr}{T}\right) - \dot{y}\left(s+\frac{tr}{T}-r\right) = g(r)\textbf{v}. \label{fund}\end{equation}
Note, however, that both $g$ and $\textbf{v}$ depend on the previously fixed $s,t$. Since $y$ is not constant, it is possible to find and fix an $s$ such that
$$ \dot y(s) \neq \textbf{0}.$$
We now claim that this already implies that for all $0\leq r\leq T$
$$\dot{y}(s+r) = \tilde{g}(r)\textbf{v} + \dot{y}(s).$$
Suppose this was false, then there is an $r$ such that
$$ \dot{y}(s+r) \notin \left\{\dot y(s) + \lambda \textbf{v}| \lambda \in \mathbb{R}\right\}.$$
In particular,
$$ \min_{\lambda \in \mathbb{R}}{\|\dot{y}(s+r) - \dot y(s) - \lambda \textbf{v}\|} > 0.$$
This, however, can be seen to contradict (\ref{fund}) by taking $t$ sufficiently small.\\
\noindent Since $y$ is periodic with period $T$,
$$ \int_{0}^{T}{\dot{y}(s+r){\rm d} r} = \textbf{0} = \left(\int_{0}^{T}{\tilde{g}(r){\rm d} r}\right)\textbf{v} + T\dot{y}(s).$$
This implies that $\dot{y}(s)$ is a scalar multiple of $\textbf{v}$, in which case
$$ \dot{y}(s+r) = \left(\tilde{g}(r) - \frac{1}{T}\int_{0}^{T}{\tilde{g}(r){\rm d} r}\right)\textbf{v}.$$
This establishes that $\dot y(t)$ is one-dimensional, that is
$$ \dot y(t) = h(t) \textbf{v}$$
for some $\textbf{v} \neq \textbf{0}$ and a continuous, $T$-periodic function $h:[0,T] \rightarrow \mathbb{R}$.\\
Going back to an earlier stage of the argument, we had that for any fixed $s$ and $0<t<T$ the application of
the triangle inequality needs to be strict, that is
$$ \left\|\int_{0}^{T}\dot{y}\left(s+\frac{tr}{T}\right) -\dot{y}\left(s+\frac{tr}{T}-r\right){\rm d} r\right\| = \int_{0}^{T}\left\|\dot{y}\left(s+\frac{tr}{T}\right) -\dot{y}\left(s+\frac{tr}{T}-r\right)\right\| {\rm d} r.$$
Plugging in the relation $ \dot y(t) = h(t) \textbf{v}$,
we require that for any fixed $s,t$ with $0 < t < T$
$$ \left| \int_{0}^{T}{h\left(s+\frac{tr}{T}\right) - h\left(s+\frac{tr}{T}-r\right){\rm d} r} \right| = \int_{0}^{T}{\left| h\left(s+\frac{tr}{T}\right) - h\left(s+\frac{tr}{T}-r\right)\right| {\rm d} r}.$$
However, since $h$ is continuous and
$$ \int_{0}^{T}{h(z){\rm d} z} = 0,$$
$h$ has to vanish in a point, say $h(s) = 0$. For $t$ very small, we have
$$ \lim_{t \rightarrow 0}\left| \int_{0}^{T}{h\left(s+\frac{tr}{T}\right) - h\left(s+\frac{tr}{T}-r\right){\rm d} r} \right| = \left| \int_{0}^{T}{ h(s-r){\rm d} r} \right| = 0$$
while
$$ \lim_{t \rightarrow 0}\int_{0}^{T}{\left| h\left(s+\frac{tr}{T}\right) - h\left(s+\frac{tr}{T}-r\right)\right| {\rm d} r} = \int_{0}^{T}{\left| h(s-r)\right| {\rm d} r},$$
proving that $h \equiv 0.$
\end{proof}
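The role of strict convexity in the argument above (equality in the triangle inequality forces collinearity) can be illustrated numerically; the following sketch, which is only an illustration and not part of the proof, contrasts $\ell^1$, which is not strictly convex, with $\ell^2$:

```python
import math

def norm1(v):
    return sum(abs(x) for x in v)

def norm2(v):
    return math.sqrt(sum(x * x for x in v))

a, b = (1.0, 0.0), (0.0, 1.0)              # non-collinear vectors
s = tuple(x + y for x, y in zip(a, b))

# l^1 is not strictly convex: the triangle inequality can be an
# equality for non-collinear vectors ...
assert abs(norm1(s) - (norm1(a) + norm1(b))) < 1e-12
# ... while in the strictly convex space l^2 it is strict.
assert norm2(s) < norm2(a) + norm2(b) - 1e-6
```

This is exactly the dichotomy exploited in the proof: in a strictly convex space the only way to have equality throughout the chain of triangle inequalities is for all the increments to be collinear.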
\section{A generalised form of Wirtinger's inequality}
The second part of this paper is devoted to establishing explicit bounds for a certain class of $\ell^p$-spaces. The idea of our approach goes back to the proof
that $TL\geq 2\pi$ in any Hilbert space which is based on an analogue of Wirtinger's inequality for Hilbert spaces. In the following we adapt
this idea by using the work of Croce \& Dacorogna \cite{crocedac} who found the optimal constant in a generalised
set of Wirtinger inequalities, including the case of interest to us here. They showed that for
$$
u\in \left\{W^{1,p}_{\rm per}(0,1) \mbox{ with }\int_0^1 u(t)\,{\rm d} t=0 \mbox{ and } u(0)=u(1)\right\},
$$
where $W^{1,p}_{\rm per}$ is the space of $L^p$-functions $u$ whose weak first derivative lies in $L^p$, one has
$$
\left(\int_0^1|u(t)|^p\,{\rm d} t\right)^{1/p}\leq C_p\left(\int_0^1|\dot{u}(t)|^p\,{\rm d} t\right)^{1/p},
$$
where
\begin{eqnarray}\label{Cpis}
C_p=\frac{p}{4(p-1)^{1/p}\int_0^1 t^{-\frac{1}{p}}(1-t)^{\frac{1}{p}-1}\,{\rm d} t}
\end{eqnarray}
is sharp. (Note that the integral appearing in the denominator is in fact the beta function $B(1/p',1/p)$ where $p'$ is the H\"older conjugate of $p$. Croce and Dacorogna consider functions defined on $(-1,1)$ but the form of the inequality here is more suitable for us in what follows.)
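For later reference, the constant (\ref{Cpis}) is straightforward to evaluate numerically through the beta-function identity just mentioned; the following short sketch (an illustration only, using the standard gamma function) computes $C_p$ directly from the formula in the text:

```python
from math import gamma, pi

def C(p):
    # C_p = p / (4 (p-1)^{1/p} B(1/p', 1/p)) with p' the Hoelder conjugate,
    # using B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y) and 1/p' + 1/p = 1.
    q = p / (p - 1)
    beta = gamma(1 / q) * gamma(1 / p) / gamma(1 / q + 1 / p)
    return p / (4 * (p - 1) ** (1 / p) * beta)

# p = 2 recovers the classical Wirtinger constant: C_2 = 1 / (2 pi)
assert abs(C(2) - 1 / (2 * pi)) < 1e-12
```

In particular $C_2^{-1}=2\pi$, consistent with the Hilbert-space case discussed earlier.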
\begin{cor}
\label{WirtingersInequality}
Let $u\in W^{1,p}_{\rm per}([0,T], X)$ where $X$ is either $\ell^p(\mathbb{R}^n)$ or $L^p(M,\mu)$ and assume that $\int_0^T u(t)\,{\rm d} t=0$. Then
\begin{equation}\label{inLp}
\int_0^T\|u(t)\|^p_{X}\,{\rm d} t\leq C_p^p T^p\int_0^T\|\dot{u}(t)\|^p_{X}\,{\rm d} t,
\end{equation}
where $C_p$ is given in (\ref{Cpis}) and is optimal.
\end{cor}
\begin{proof}
By a simple change of variables it suffices to prove the result for $T=1$. When $X=\ell^p({\mathbb R}^n)$ we have
\begin{eqnarray*}
\int_{0}^1\sum_{j=1}^n|u_j(t)|^p\,{\rm d} t&=&\sum_{j=1}^n \int_{0}^1|u_j(t)|^p\,{\rm d} t\le C_p^p\sum_{j=1}^n\int_{0}^1|\dot u_j(t)|^p\,{\rm d} t,
\end{eqnarray*}
from which (\ref{inLp}) is immediate. One can see that the constant is optimal by considering $u=(u_1,\ldots,u_n)$ with $u_1\in W^{1,p}_{\rm per}(0,1)$ and $u_j=0$ for $j=2,\ldots,n$.
Similarly, for $X=L^p(M,\mu)$ we have
\begin{eqnarray*}
\int_{0}^1\int_M|u(x,t)|^p\,{\rm d}\mu\,{\rm d} t&=&\int_M\int_{0}^1|u(x,t)|^p\,{\rm d} t\,{\rm d}\mu\\
&\le& C_p^p\int_M\int_{0}^1|\dot u(x,t)|^p\,{\rm d} t\,{\rm d}\mu=C_p^p\int_{0}^1\int_M|\dot u(x,t)|^p\,{\rm d}\mu\,{\rm d} t,
\end{eqnarray*}
and (\ref{inLp}) follows once more. Optimality of the constant follows by taking $u(x,t)=f(t){\bf 1}_A$ for some $f\in W^{1,p}_{\rm per}(0,1)$ and $A\subset M$ with $\mu(A)>0$.
\end{proof}
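As a numerical sanity check of the corollary (not part of the proof), one can discretise the scalar case $p=2$, $T=1$ with the mean-zero function $u(t)=\sin(2\pi t)$, which is extremal for $p=2$, so that the two sides of the inequality agree:

```python
import math

N = 100000
ts = [(k + 0.5) / N for k in range(N)]                    # midpoint grid on [0, 1]
u  = [math.sin(2 * math.pi * t) for t in ts]              # mean-zero, 1-periodic
du = [2 * math.pi * math.cos(2 * math.pi * t) for t in ts]

lhs = sum(x * x for x in u) / N                           # midpoint rule for int_0^1 |u|^2
rhs = (1 / (2 * math.pi)) ** 2 * sum(x * x for x in du) / N   # C_2^2 int_0^1 |u'|^2

assert lhs <= rhs + 1e-6
assert abs(lhs - rhs) < 1e-6     # sin(2 pi t) attains equality when p = 2
```

Both sides evaluate to $1/2$, reflecting sharpness of $C_2=1/(2\pi)$.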
\section{Improved lower bounds in $\ell^p(\mathbb{R}^n)$ and $L^p(M,\mu)$}
\label{Better Bounds for lp}
Having established Wirtinger's inequality for $W^{1,p}_{\rm per}([0,T],X)$ where $X$ is either $\ell^p(\mathbb{R}^n)$ or $L^p(M,\mu)$, we can now prove
the second contribution of this paper. The simple proof is essentially that for $p=2$ due to \cite{busfishmar} which is a particular case of this result if one notes that $C_2^{-1}=2\pi$.
\begin{thm}\label{Minimal bound for L^p using Poincare}
Let $x$ be a non-constant $T$-periodic solution to $\dot{ x}=f( x)$ in either $X=\ell^p(\mathbb{R}^n)$ or $X=L^p(M,\mu)$. Further, suppose that $f$ is Lipschitz continuous from $X$ into $X$ with Lipschitz constant $L$. Then
\begin{equation}\label{TLC}
TL\ge C_p^{-1}.
\end{equation}
\end{thm}
\begin{proof}
As the function $x$ is a solution to the ODE, it is differentiable by definition. Moreover, a simple calculation shows that
$$
\int_0^Tx(t+h)-x(t)\,{\rm d} t=0.
$$
Hence Wirtinger's inequality for $W^{1,p}_{\rm per}((0,T),X)$ is applicable to $x(t+h)-x(t)$ and thus
\begin{eqnarray*}
\int_0^T\| x(t+h)- x(t)\|^p_X\,{\rm d} t&\le& C_p^pT^p \int_0^T \|\dot{ x}(t+h)-\dot{ x}(t)\|^p_X\,{\rm d} t\\
&=& C_p^p T^p\int_0^T \|f( x(t+h))-f( x(t))\|^p_X\,{\rm d} t\\
&\le& L^pC_p^pT^p \int_0^T \| x(t+h)- x(t)\|^p_X\,{\rm d} t.
\end{eqnarray*}
Dividing both sides by $\int_0^T \| x(t+h)- x(t)\|^p_X\,{\rm d} t$, which is non-zero as $x$ is non-constant, yields (\ref{TLC}).
\end{proof}
Theorem \ref{Minimal bound for L^p using Poincare} gives an improved lower bound on the product of Lipschitz constant $L$ and period $T$ for the spaces $\ell^p(\mathbb{R}^n)$ and $L^p(M,\mu)$ for a range of $p$ around $p=2$. Figure \ref{fig:BestConstant from generalised Wirtinger} plots $C_p^{-1}$ against $p$ for $1\leq p\leq 4$, and shows that $C_p^{-1}>6$ for $1.43 \leq p\leq 3.35$.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7\columnwidth]{bestconstant2.png}
\caption{Improved lower bound near $p=2$ using Wirtinger's inequality}
\label{fig:BestConstant from generalised Wirtinger}
\end{center}
\end{figure}
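The window quoted above can be checked directly from (\ref{Cpis}); the following sketch (an illustration, using the beta-function evaluation of the constant) confirms that $C_p^{-1}$ exceeds the universal bound $6$ well inside the window and falls below it outside:

```python
from math import gamma

def inv_C(p):
    # C_p^{-1} from (Cpis), via B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    q = p / (p - 1)                       # Hoelder conjugate p'
    beta = gamma(1 / q) * gamma(1 / p) / gamma(1 / q + 1 / p)
    return 4 * (p - 1) ** (1 / p) * beta / p

assert inv_C(2.0) > 6 and inv_C(3.0) > 6   # inside the plotted window
assert inv_C(1.3) < 6 and inv_C(3.7) < 6   # outside 1.43 <= p <= 3.35
```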
\textbf{Remark 1.} For $1\leq p<\infty$ one can construct an ODE in $L^{p}(M,\mu)$ whose right-hand side is Lipschitz continuous with constant $L=1$ and which admits a solution of period $2\pi$.
Suppose there are two disjoint sets $A, B \subseteq M$ such that
\[
0 < \mu(A) =\mu(B),
\]
and consider the ODE
$$\dot{z}=f(z)$$
with $f:L^{p}(M,\mu) \rightarrow L^{p}(M,\mu)$ given by
\[
f(z)= -\frac{\chi_{B}}{\mu(A)}\int_{A}{z \,{\rm d}\mu} + \frac{\chi_{A}}{\mu(B)}\int_{B}{z \,{\rm d}\mu}.
\]
Then H\"older's inequality shows that $f$ is Lipschitz continuous with constant $L=1$: the quantity
$$ I = \|f(z)-f(w)\|^p_{L^{p}(M,\mu)}$$
satisfies
\begin{eqnarray*}
\hspace{-1cm} I \hspace{-0.2cm}&=&\hspace{-0.2cm} \left\| - \chi_B \frac{1}{\mu(A)}\int_{A}{z-w \,{\rm d}\mu} + \chi_{A}\frac{1}{\mu(B)}\int_{B}{z-w \,{\rm d}\mu} \right\|_{L^p}^p \\
\hspace{-0.2cm}&=&\hspace{-0.2cm} \left|\frac{1}{\mu(A)}\int_{A}{z-w \,{\rm d}\mu}\right|^p \mu(B) + \left|\frac{1}{\mu(B)}\int_{B}{z-w \,{\rm d}\mu}\right|^p \mu(A) \\
\hspace{-0.2cm}&\leq&\hspace{-0.2cm} \frac{1}{\mu(A)^p} \left(\int_{A}{|z-w|^p \,{\rm d}\mu}\right) \mu(A)^{p-1}\mu(B) + \frac{1}{\mu(B)^p}\left(\int_{B}{|z-w|^p \,{\rm d}\mu}\right) \mu(B)^{p-1}\mu(A) \\
\hspace{-0.2cm}&\leq&\hspace{-0.2cm} \|z-w\|_{L^p}^{p}
\end{eqnarray*}
and one explicit $2\pi$-periodic solution is given by
$$ z(t) = -(\cos{t})\chi_A + (\sin{t})\chi_B.$$
Notice that this example can be generalised further to the case when $0 < \mu(A) \neq \mu(B)$.\\
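To see the mechanism behind Remark 1 concretely, one can model $A$ and $B$ as single atoms of equal (unit) mass, in which case $f$ reduces to a rotation of the coefficient pair and the claimed $2\pi$-periodic solution can be verified directly; this toy discretisation is an illustration, not part of the remark:

```python
import math

def f(z):
    # With unit-mass atoms, the A-component of f is the B-average of z
    # and the B-component is minus the A-average of z (a rotation).
    zA, zB = z
    return (zB, -zA)

for t in [0.0, 0.7, 2.0, 5.5]:
    z  = (-math.cos(t), math.sin(t))      # z(t) = -cos(t) chi_A + sin(t) chi_B
    dz = ( math.sin(t), math.cos(t))      # its derivative
    fz = f(z)
    assert abs(fz[0] - dz[0]) < 1e-12 and abs(fz[1] - dz[1]) < 1e-12
```

The underlying dynamics is thus the standard planar rotation, embedded into $L^p(M,\mu)$ via the two characteristic functions.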
\textbf{Remark 2.} Let $X$ be a Banach space whose norm is `almost' a Hilbert space norm, in the sense that there exist an inner-product norm $\| \cdot \|_H$ on $X$ and an $\varepsilon>0$ such that
$$(1-\varepsilon) \| x \|_H \leq \| x \|_X \leq (1+\varepsilon) \| x \|_H.$$
Let $x:\mathbb{R}\rightarrow X$ be a $T$-periodic solution to the ODE $\dot{x}=f(x)$ with $f$ being Lipschitz continuous with Lipschitz constant $L$. Since \begin{eqnarray*}
\|f(x) - f(y)\|_H \leq \frac{1}{1-\varepsilon} \| f(x) - f(y) \|_X \leq \frac{1}{1 - \varepsilon} L \| x - y\|_X \leq \frac{1+\varepsilon}{1-\varepsilon} L \|x - y\|_H,
\end{eqnarray*}
it follows that $f$ is also Lipschitz continuous with respect to the Euclidean norm with Lipschitz constant $L'=\frac{1+\varepsilon}{1-\varepsilon}L$. At the same time,
the length of the curve $x$ as measured in the Hilbert space is smaller than $(1+\varepsilon)T$ and using the fact that $c=2\pi$ in any Hilbert space we may conclude
that $$TL\geq 2\pi\frac{1-\varepsilon}{(1+\varepsilon)^2}.$$
However, this approximation lags behind the numerical results for $\ell^p$ obtained at the beginning of this section, especially for high dimensions.\\
\textbf{Remark 3.} Dvoretzky's theorem in \cite{dvor} guarantees that for any $\varepsilon > 0$ there exists $n \in \mathbb{N}$ sufficiently large such that any Banach space
with $\dim X \geq n$ contains a two-dimensional subspace with Banach-Mazur distance to $\ell_2^2$ at most $1+\varepsilon$. The example of a simple circle in $\ell^2_2$ realizes
$TL = 2\pi$. This means that in any Banach space $X$ it is possible to construct an ODE satisfying $TL \leq 2\pi + \varepsilon$, where $\varepsilon$ depends only on
the dimension of $X$. We do not know whether there is always an ODE for which $TL \leq 2\pi$.
\section{Conclusion}
As discussed in the introduction, the key question is what intrinsic property of a space $X$ determines the largest (and hence best) constant $C_X$ such that $LT\ge C_X$. One of
these intrinsic properties is strict convexity for which we have shown that the constant must be strictly larger than $6$. A natural question that arises is whether there exists
a Banach space in which the optimal constant is neither $6$ nor $2\pi$.\\
However, explicit bounds are difficult to obtain. Even in the simple case $X=\ell^p({\mathbb R}^n)$ this is not known, although our simple argument gives an explicit lower bound for
$p$ around $p=2$. It is interesting that a simple calculation shows that $C_p=C_{p'}$ when $p$ and $p'$ are conjugates; but it is not known whether the optimal constants
in $\ell^p$ and $\ell^{p'}$ do in fact coincide (this interesting question was suggested to one of us in a personal communication from Mario Martelli).\\
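The identity $C_p=C_{p'}$ follows from the symmetry of the beta function together with $p(p-1)^{-1/p}=p'(p'-1)^{-1/p'}$; a numerical spot check of (\ref{Cpis}) (an illustration only):

```python
from math import gamma

def C(p):
    q = p / (p - 1)                       # Hoelder conjugate p'
    beta = gamma(1 / q) * gamma(1 / p) / gamma(1 / q + 1 / p)
    return p / (4 * (p - 1) ** (1 / p) * beta)

for p in [1.5, 2.0, 2.7, 3.3]:
    assert abs(C(p) - C(p / (p - 1))) < 1e-9   # C_p agrees with C_{p'}
```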
While the use of an $L^p$-based Wirtinger inequality suits the $\ell^p$-spaces, there is no reason why these exponents should match. Given a Banach space $X$ it would be interesting to determine the optimal constants in the family of inequalities
$$
\left(\int_0^1 \| u(t)\|^p_X\,{\rm d} t\right)^{1/p}\leq C_p(X)\left(\int_0^1 \|\dot{ u}(t)\|^p_X\,{\rm d} t\right)^{1/p},
$$
noting that as a consequence of such a family of inequalities and the argument of Theorem \ref{Minimal bound for L^p using Poincare} one would obtain
$$
TL\ge\sup_p C_p(X)^{-1}.
$$
\section*{Acknowledgement}
JCR is supported by an EPSRC Leadership Fellowship, grant number EP/G007470/1. MACN gratefully acknowledges funding from the European Research Council under the European
Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement n$^{\circ}$ 291053. SS is supported by a Hausdorff scholarship of the Bonn International
Graduate School.
| {
"timestamp": "2013-07-24T02:08:42",
"yymm": "1210",
"arxiv_id": "1210.6582",
"language": "en",
"url": "https://arxiv.org/abs/1210.6582",
"abstract": "Let x(t) be a non-constant T-periodic solution to the ordinary differential equation x'= f(x) in a Banach space X where f is assumed to be Lipschitz continuous with constant L. Then there exists a constant c such that T L >= c, with c only depending on X. It is known that c >= 6 in any Banach space and that c = 2{\\pi} in any Hilbert space, but whereas the bound of c = 2 pi is sharp in any Hilbert space, there exists only one known example of a Banach space such that c = 6 is optimal. In this paper, we show that the inequality is in fact strict in any strictly convex Banach space. Moreover, we improve the lower bound for l^p(R^n) and L^p(M, {\\mu}) for a range of p close to p = 2 by using a form of Wirtinger's inequality for functions in W^{1,p}([0, T ], L^p(M, {\\mu})).",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Minimal Periods for Ordinary Differential Equations in Strictly Convex Banach Spaces and Explicit Bounds for some l^p-Spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9896718474806948,
"lm_q2_score": 0.8652240912652671,
"lm_q1q2_score": 0.8562879248873022
} |
https://arxiv.org/abs/1101.0388 | Demystifying a divisibility property of the Kostant partition function | We study a family of identities regarding a divisibility property of the Kostant partition function which first appeared in a paper of Baldoni and Vergne. To prove the identities, Baldoni and Vergne used techniques of residues and called the resulting divisibility property "mysterious." We prove these identities entirely combinatorially and provide a natural explanation of why the divisibility occurs. We also point out several ways to generalize the identities. | \section{Introduction}
\label{sec:in}
The objective of this paper is to provide a natural combinatorial explanation of a divisibility property of the Kostant partition function. The question of evaluating Kostant partition functions has been the subject of much interest, without a satisfactory combinatorial answer. To mention perhaps the most famous such case: it is known that $$K_{A_n^+}(1, 2, \ldots, n, -{n+1 \choose 2})=\prod_{k=1}^{n}C_k,$$ where $C_k=\frac{1}{k+1}{2k \choose k}$ is the Catalan number, yet there is no combinatorial proof of the above identity!
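Although no combinatorial proof is known, the identity itself is easy to confirm by brute force for small $n$; the following sketch (an illustration only, exponential in the number of roots and hence feasible only for tiny $n$) enumerates all nonnegative integer combinations of the positive roots:

```python
from itertools import product
from math import comb

def kostant_A(n, a):
    # positive roots of A_n: e_i - e_j for 1 <= i < j <= n+1, as coordinate vectors
    roots = [tuple((k == i) - (k == j) for k in range(n + 1))
             for i in range(n + 1) for j in range(i + 1, n + 1)]
    bound = sum(x for x in a if x > 0)   # each coefficient is at most the total inflow
    return sum(
        all(sum(b * r[k] for b, r in zip(bs, roots)) == a[k] for k in range(n + 1))
        for bs in product(range(bound + 1), repeat=len(roots))
    )

def catalan(k):
    return comb(2 * k, k) // (k + 1)

for n in [2, 3]:
    a = tuple(range(1, n + 1)) + (-n * (n + 1) // 2,)
    expected = 1
    for k in range(1, n + 1):
        expected *= catalan(k)
    assert kostant_A(n, a) == expected    # e.g. n = 3 gives C_1 C_2 C_3 = 10
```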
Given such lack of understanding of the evaluation of the Kostant partition function, it seems a worthy proposition to provide a simple explanation for its certain divisibility properties.
We explore divisibility properties of Kostant partition functions of types $A_n$ and $C_{n+1}$, noting that such properties in types $B_{n+1}$ and $D_{n+1}$ are easy consequences of the type $C_{n+1}$ case. The type $A_n$ family of identities we study first appeared in a paper by Baldoni and Vergne \cite{BV}, where the authors prove the identities using residues, and where they call the divisibility property ``mysterious." It is our hope that the combinatorial argument we provide successfully demystifies this divisibility property of the Kostant partition function and gives a natural explanation of why it occurs.
The outline of the paper is as follows.
In Section \ref{sec:bv} we define Kostant partition functions of type $A_n$ and prove the Baldoni-Vergne identities combinatorially. Our proof also yields an affirmative answer to a question of Stanley \cite{S} regarding a possible bijective proof of a special case of the Baldoni-Vergne identities.
In Section \ref{sec:c} we define Kostant partition functions of type $C_{n+1}$, relate them to flows, and show how to modify our proof for the Baldoni-Vergne identities to obtain their analogues for type $C_{n+1}$. We also point out other possible variations of these identities in the type $C_{n+1}$ case.
\section{The Baldoni-Vergne identities}
\label{sec:bv}
Before stating the Baldoni-Vergne identities, we need a few definitions. Throughout this section the graphs $G$ we consider are on the vertex set $[n+1]$ with possible multiple edges, but no loops. Denote by $m_{ij}$ the multiplicity of edge $(i, j)$, $i<j$, in $G$. To each edge $(i, j)$, $i<j$, of $G$, associate the positive type $A_n$ root $e_i-e_j$, where $e_i$ is the $i^{th}$ standard basis vector. Let $\{\{\alpha_1, \ldots, \alpha_N\}\}$ be the multiset of vectors corresponding to the multiset of edges of $G$ as described above. Note that $N=\sum_{1\leq i<j\leq n+1} m_{ij}$.
The {\bf Kostant partition function} $K_G$ evaluated at the vector ${\bf a} \in \mathbb{Z}^{n+1}$ is defined as
\begin{equation} \label{kost} K_G({\bf a})= \# \{ (b_{i})_{i \in [N]} \mid \sum_{i \in [N]} b_{i} \alpha_i ={\bf a} \textrm{ and } b_{i} \in \mathbb{Z}_{\geq 0}\}.\end{equation}
That is, $K_G({\bf a})$ is the number of ways to write the vector ${\bf a}$ as a nonnegative linear combination of the positive type $A_n$ roots corresponding to the edges of $G$, without regard to order. Note that in order for $K_G({\bf a})$ to be nonzero, the partial sums of the coordinates of ${\bf a}$ have to satisfy $a_1+\ldots+a_i \geq 0$, $i\in [n]$, and $a_1+\ldots+a_{n+1}=0$.
We now proceed to state and prove Theorem \ref{thm:bv} which first appeared in \cite{BV}. Baldoni and Vergne gave a proof of it using residues, and called the result ``mysterious." We provide a natural combinatorial explanation of the result. Our explanation also answers in the affirmative a question of Stanley, posed in \cite{S}, regarding a possible bijective proof of a special case of the Baldoni-Vergne identities.
For brevity, we write $G-e$, or $G-\{e_1, \ldots, e_k\}$, to mean a graph obtained from $G$ with edge $e$, or edges $e_1, \ldots, e_k$, deleted.
\begin{theorem} \label{thm:bv} (\cite{BV})
Given a connected graph $G$ on the vertex set $[n+1]$ with $m_{n-1, n}=m_{n-1, n+1}=m_{n, n+1}=1$ and such that $$\frac{m_{j, n-1}+m_{j, n}+m_{j, n+1}}{m_{j, n-1}}=c,$$
\noindent for some constant $c$ independent of $j$ for $j \in [n-2]$, we have that for any ${\bf a}=(a_1, \ldots, a_n, -\sum_{i=1}^n a_i) \in \mathbb{Z}^{n+1}$
\begin{equation} \label{c} K_G({\bf a})=(\frac{a_1+ \cdots+ a_{n-2}}{c}+ a_{n-1}+1) K_{G- (n-1, n)}({\bf a}).\end{equation}
\end{theorem}
\bigskip
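Before turning to the proof, the statement can be sanity-checked by direct enumeration on a small instance; the complete graph $K_4$ (all multiplicities $1$, so $n=3$ and $c=3$) satisfies the hypotheses, and the sketch below, which is an illustration and not part of the argument, verifies equation \eqref{c} for ${\bf a}=(1,2,3,-6)$, where the factor equals $10/3$:

```python
from itertools import product

def root(i, j, m):
    # e_i - e_j on m vertices, 1-based labels
    return tuple((k == i - 1) - (k == j - 1) for k in range(m))

def count_flows(roots, a):
    bound = sum(x for x in a if x > 0)    # crude upper bound on any coefficient
    return sum(
        all(sum(b * r[k] for b, r in zip(bs, roots)) == a[k] for k in range(len(a)))
        for bs in product(range(bound + 1), repeat=len(roots))
    )

m = 4                                                      # n + 1 = 4 vertices, n = 3
G = [root(i, j, m) for i in range(1, m + 1) for j in range(i + 1, m + 1)]  # K_4
G_minus = [r for r in G if r != root(2, 3, m)]             # delete edge (n-1, n) = (2, 3)

a = (1, 2, 3, -6)
KG, Ksub = count_flows(G, a), count_flows(G_minus, a)
assert KG == 10 and Ksub == 3
# factor (a_1/c + a_2 + 1) = 10/3 with c = 3: check 3 K_G = (a_1 + 3 a_2 + 3) K_{G-e}
assert 3 * KG == (a[0] + 3 * a[1] + 3) * Ksub
```

Note that the factor $10/3$ is not an integer while both Kostant partition function values are; this is precisely the divisibility phenomenon the theorem encodes.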
Before proceeding to the formal proof of Theorem \ref{thm:bv} we outline it, to fully expose the underlying combinatorics. Rephrasing equation \eqref{kost}, $K_G({\bf a})$ counts the number of {\bf flows} ${\bf f}_G=(b_i)_{i\in [N]}$ on graph $G$ satisfying $$\sum_{i \in [N]} b_{i} \alpha_i ={\bf a} \textrm{ and } b_{i} \in \mathbb{Z}_{\geq 0}.$$ In the proof of Theorem \ref{thm:bv} we introduce the concept of {\bf partial flows} ${\bf f}_H$ and the following are the key statements we prove:
\begin{itemize}
\item The elements of the set of partial flows are in bijection with the flows on $G-(n-1, n)$ that the Kostant partition function $K_{G-(n-1, n)}({\bf a})$ counts. That is, $$\#\text{ partial flows}=K_{G-(n-1, n)}({\bf a}).$$
\item The elements of the multiset of partial flows ${\bf f}_H$, where the cardinality of the multiset is $\frac{a_1+ \cdots+ a_{n-2}}{c}+ a_{n-1}+1$ times the cardinality of the set of partial flows, are in bijection with the flows on $G$ that the Kostant partition function $K_{G}({\bf a})$ counts. That is, $$(\frac{a_1+ \cdots+ a_{n-2}}{c}+ a_{n-1}+1) \#\text{ partial flows}=K_{G}({\bf a}).$$
\end{itemize}
The above two statements imply a bijection between the elements of the multiset of flows on $G-(n-1, n)$ that the Kostant partition function $K_{G-(n-1, n)}({\bf a})$ counts, where the cardinality of the multiset is $\frac{a_1+ \cdots+ a_{n-2}}{c}+ a_{n-1}+1$ times the cardinality of the set of flows counted by $K_{G-(n-1, n)}({\bf a})$, and the flows on $G$ that the Kostant partition function $K_{G}({\bf a})$ counts, yielding
$$K_G({\bf a})=(\frac{a_1+ \cdots+ a_{n-2}}{c}+ a_{n-1}+1) K_{G- (n-1, n)}({\bf a}).$$
\bigskip
We now proceed to the formal proof of Theorem \ref{thm:bv}.
\bigskip
\noindent \textit{Proof of Theorem \ref{thm:bv}}.
Let $\{\{\alpha_1, \ldots, \alpha_N\}\}$ be the multiset of vectors corresponding to the edges of $G$. Let $\alpha_N=e_{n-1}-e_n, \alpha_{N-1}=e_{n-1}-e_{n+1},$ and $\alpha_{N-2}=e_{n}-e_{n+1}.$ Then equation \eqref{c} can be rewritten as
\begin{align} \label{c1} & \# \{ (b_i)_{i \in [N]} \mid \sum_{i=1}^N b_i \alpha_i ={\bf a}\} = \nonumber \\
&(\frac{a_1+ \cdots+ a_{n-2}}{c}+ a_{n-1}+1) \# \{ (b_i)_{i \in [N-1]} \mid \sum_{i=1}^{N-1} b_i \alpha_i ={\bf a}\}.
\end{align}
Consider a flow ${\bf f}_H=(b_i)_{i \in [N-3]}$, $b_i \in \mathbb{Z}_{\geq 0}$, on the edges of the graph $H:=G-\{(n-1, n), (n-1, n+1), (n, n+1)\}$. We call ${\bf f}_H$ {\bf partial} if $$\sum_{i=1}^{N-3} b_i \alpha_i =(a_1, \ldots, a_{n-2}, x_1, x_2, x_3),$$ \noindent for some $x_1, x_2, x_3 \in \mathbb{Z}$.
Notice that given a partial flow ${\bf f}_H=(b_i)_{i \in [N-3]}$, it can be extended uniquely to a flow ${\bf f}_{G-\{(n-1, n)\}}=(b_i)_{i \in [N-1]}$, $b_i \in \mathbb{Z}_{\geq 0}$, on $G-\{(n-1, n)\}$ such that
$\sum_{i=1}^{N-1} b_i \alpha_i ={\bf a}$. Furthermore, this correspondence is a bijection. Therefore,
\begin{equation} \label{H} \# \{ (b_i)_{i \in [N-1]} \mid \sum_{i=1}^{N-1} b_i \alpha_i ={\bf a}\}=\sum_{{\bf f}_H} 1,\end{equation}
\noindent where the summation runs over all partial flows ${\bf f}_H$.
Also, given a partial flow ${\bf f}_H$ with $Y_{i}({\bf f}_H)$, $i \in \{n-1, n, n+1\}$, denoting the total {\bf inflow} into vertex $i \in \{n-1, n, n+1\}$ in $H$, that is the sum of all the flows $b_i$ on edges of $H$ incident to $i \in \{n-1, n, n+1\}$, the partial flow ${\bf f}_H$ can be extended in $Y_{n-1}({\bf f}_H)+a_{n-1}+1$ ways to a flow ${\bf f}_{G}=(b_i)_{i \in [N]}$, $b_i \in \mathbb{Z}_{\geq 0}$, of $G$ such that
$\sum_{i=1}^{N} b_i \alpha_i ={\bf a}$. Furthermore, given a flow ${\bf f}_G=(b_i)_{i \in [N]}$, $b_i \in \mathbb{Z}_{\geq 0}$ such that $\sum_{i=1}^{N} b_i \alpha_i ={\bf a}$, there is a unique partial flow ${\bf f}_H=(b_i)_{i \in [N-3]}$ from which it can be obtained. Therefore,
\begin{equation} \label{G} \# \{ (b_i)_{i \in [N]} \mid \sum_{i=1}^N b_i \alpha_i ={\bf a} \} =\sum_{{\bf f}_H} (Y_{n-1}({\bf f}_H)+a_{n-1}+1),\end{equation}
\noindent where the summation runs over all partial flows ${\bf f}_H$.
Note that since $$\frac{m_{j, n-1}+m_{j, n}+m_{j, n+1}}{m_{j, n-1}}=c,$$
\noindent for some constant $c$ independent of $j$ for $j \in [n-2]$, it follows that
$$c \sum_{{\bf f}_H} Y_{n-1}({\bf f}_H)=\sum_{{\bf f}_H} (Y_{n-1}({\bf f}_H)+Y_{n}({\bf f}_H)+Y_{ n+1}({\bf f}_H))=\sum_{{\bf f}_H} (a_1+\cdots+a_{n-2}),$$
that is
\begin{equation}\label{eq:y} \sum_{{\bf f}_H} Y_{n-1}({\bf f}_H)=\sum_{{\bf f}_H} \frac{(a_1+\cdots+a_{n-2})}{c}.\end{equation}
Thus, equation \eqref{G} can be rewritten as
\begin{align} \label{G1} & \# \{ (b_i)_{i \in [N]} \mid \sum_{i=1}^N b_i \alpha_i ={\bf a} \} =\sum_{{\bf f}_H} (\frac{(a_1+\cdots+a_{n-2})}{c}+a_{n-1}+1) \\ &=(\frac{(a_1+\cdots+a_{n-2})}{c}+a_{n-1}+1) \sum_{{\bf f}_H} 1\\ &=(\frac{(a_1+\cdots+a_{n-2})}{c}+a_{n-1}+1) \# \{ (b_i)_{i \in [N-1]} \mid \sum_{i=1}^{N-1} b_i \alpha_i ={\bf a}\}, \end{align}
\noindent where the first equality uses equations \eqref{G} and \eqref{eq:y}, and the third equality uses equation \eqref{H}.
\qed
\section{Type $C_{n+1}$ Kostant partition functions and the Baldoni-Vergne identities}
\label{sec:c}
We now show two generalizations of Theorem \ref{thm:bv} in the type $C_{n+1}$ case. We first give the necessary definitions and explain the notion of flow in the context of signed graphs.
Throughout this section the graphs $G$ on the vertex set $[n+1]$ we consider are signed, that is there is a sign $\epsilon \in \{+, -\}$ assigned to each of its edges, with possible multiple edges, and all loops labeled positive. Denote by $(i, j, -)$ and $(i, j, +)$, $i \leq j$, a negative and a positive edge, respectively. Denote by $m_{ij}^\epsilon$ the multiplicity of edge $(i, j, \epsilon)$ in $G$, $i\leq j$, $\epsilon \in \{+, -\}$.
To each edge $(i, j, \epsilon)$, $i\leq j$, of $G$, associate the positive type $C_{n+1}$ root ${\rm v}(i,j, \epsilon)$, where ${\rm v}(i,j, -)=e_i-e_j$ and ${\rm v}(i,j, +)=e_i+e_j$. Let $\{\{\alpha_1, \ldots, \alpha_N\}\}$ be the multiset of vectors corresponding to the multiset of edges of $G$ as described above. Note that $N=\sum_{1\leq i\leq j\leq n+1} (m_{ij}^-+m_{ij}^+)$.
The {\bf Kostant partition function} $K_G$ evaluated at the vector ${\bf a} \in \mathbb{Z}^{n+1}$ is defined as
$$K_G({\bf a})= \# \{ (b_{i})_{i \in [N]} \mid \sum_{i \in [N]} b_{i} \alpha_i ={\bf a} \textrm{ and } b_{i} \in \mathbb{Z}_{\geq 0}\}.$$
That is, $K_G({\bf a})$ is the number of ways to write the vector ${\bf a}$ as a nonnegative linear combination of the positive type $C_{n+1}$ roots corresponding to the edges of $G$, without regard to order.
Just like in the type $A_n$ case, we would like to think of the vector $(b_{i})_{i \in [N]} $ as a {\bf flow}. For this we here give a precise definition of flows in the type $C_{n+1}$ case, of which type $A_{n}$ is of course a special case.
Let $G$ be a signed graph on the vertex set $[n+1]$. Let $\{\{e_1, \ldots, e_N\}\}$ be the multiset of edges of $G$, and $\{\{\alpha_1, \ldots, \alpha_N\}\}$ the multiset of vectors corresponding to the multiset of edges of $G$. Fix an integer vector ${\bf a}=(a_1, \ldots, a_n, a_{n+1}) \in \mathbb{Z}^{n+1}$. A {\bf nonnegative integer} \textbf{${\bf a}$-flow} ${\bf f}_G$ on $G$ is a vector ${\bf f}_G=(b_i)_{i \in [N]}$, $b_i \in \mathbb{Z}_{\geq 0}$ such that for all $1\leq i \leq n+1$, we have
\begin{equation} \label{eqn:flow}
\sum_{e \in E, \textrm{inc}(e, v )=-} b(e)+a_v= \sum_{e \in E, \textrm{inc}(e, v)=+} b(e)+\sum_{e=(v, v, + )} b(e),
\end{equation}
\noindent where $b(e_i)=b_i$, $\textrm{inc}(e, v)=-$ if edge $e=(g, v, -)$, $g <v$, and $\textrm{inc}(e, v)=+$ if $e=(g, v, +)$, $g <v$, or $e=( v, j, \epsilon)$, $v <j,$ and $\epsilon \in \{+, -\}$.
Call $b(e)$ the \textbf{flow} assigned to edge $e$ of $G$. If the edge $e$ is negative, one can think of $b(e)$ units of fluid flowing on $e$ from its smaller to its bigger vertex. If the edge $e$ is positive, then one can think of $b(e)$ units of fluid flowing away both from $e$'s smaller and bigger vertex to infinity. Edge $e$ is then a ``leak" taking away $2b(e)$ units of fluid.
From the above explanation it is clear that if we are given an ${\bf a}$-flow ${\bf f}_G$ such that $\sum_{e=(i, j, +), i \leq j}b(e)=y$, then ${\bf a}=(a_1, \ldots, a_n, 2y-\sum_{i=1}^n a_i)$. It is then a matter of checking the definitions to see that for a signed graph $G$ on the vertex set $[n+1]$ and vector ${\bf a}=(a_1, \ldots, a_n, 2y-\sum_{i=1}^n a_i) \in \mathbb{Z}^{n+1}$, the number of nonnegative integer ${\bf a}$-flows on $G$ is equal to $K_G( {\bf a})$.
Thinking of $K_G({\bf a})$ as the number of nonnegative integer ${\bf a}$-flows on $G$, there is a straightforward generalization of Theorem \ref{thm:bv} in the type $C_{n+1}$ case:
\begin{theorem} \label{thm:bv2}
Given a connected signed graph $G$ on the vertex set $[n+1]$ with $m_{n-1, n}^-=m_{n-1, n+1}^-=m_{n, n+1}^-=1$, $m_{j, n-1}^+=m_{j, n}^+=m_{j, n+1}^+=0$, for $j \in [n+1]$, and such that $$\frac{m_{j, n-1}^-+m_{j, n}^-+m_{j, n+1}^-}{m_{j, n-1}^-}=c,$$
\noindent for some constant $c$ independent of $j$ for $j \in [n-2]$, we have that for any ${\bf a}=(a_1, \ldots, a_n, 2y-\sum_{i=1}^n a_i) \in \mathbb{Z}^{n+1}$,
\begin{equation} \label{cC} K_G({\bf a})=(\frac{a_1+ \cdots+ a_{n-2}-2y}{c}+ a_{n-1}+1) K_{G- (n-1, n)}({\bf a}).\end{equation}
\end{theorem}
\bigskip
The proof of Theorem \ref{thm:bv2} proceeds analogously to that of Theorem \ref{thm:bv}. Namely, define \textbf{partial flows} ${\bf f}_H=(b_i)_{i\in [N-3]}$ on $H:=G-\{(n-1, n, -), (n-1, n+1, -), (n, n+1, -)\}$ such that $$\sum_{i=1}^{N-3} b_i \alpha_i =(a_1, \ldots, a_{n-2}, x_1, x_2, x_3),$$ \noindent for some $x_1, x_2, x_3 \in \mathbb{Z}$ and the sum of flows on positive edges is $y$.
Then, one can prove:
\begin{itemize}
\item The elements of the set of partial flows are in bijection with the nonnegative integer ${\bf a}$-flows on $G-(n-1, n)$. That is, $$\#\text{ partial flows}=K_{G-(n-1, n)}({\bf a}).$$
\item The elements of the multiset of partial flows ${\bf f}_H$, where the cardinality of the multiset is $\frac{a_1+ \cdots+ a_{n-2}-2y}{c}+ a_{n-1}+1$ times the cardinality of the set of partial flows, are in bijection with the nonnegative integer ${\bf a}$-flows on $G$. That is, $$(\frac{a_1+ \cdots+ a_{n-2}-2y}{c}+ a_{n-1}+1) \#\text{ partial flows}=K_{G}({\bf a}).$$
\end{itemize}
The above two statements imply a bijection between the elements of the multiset of nonnegative integer ${\bf a}$-flows on $G-(n-1, n)$, where the cardinality of the multiset is $\frac{a_1+ \cdots+ a_{n-2}-2y}{c}+ a_{n-1}+1$ times the cardinality of the set of nonnegative integer ${\bf a}$-flows on $G-(n-1, n)$, and the nonnegative integer ${\bf a}$-flows on $G$, yielding
$$K_G({\bf a})=(\frac{a_1+ \cdots+ a_{n-2}-2y}{c}+ a_{n-1}+1) K_{G- (n-1, n)}({\bf a}).$$
\bigskip
Note that the requirement that only negative edges are incident to the vertices $n-1, n, n+1$ in $G$ stems from the fact that, for the counting arguments from the proof of Theorem \ref{thm:bv} to work, we need to ensure that we can always assign nonnegative flows to the edges $(n-1, n, -), (n-1, n+1, -), (n, n+1, -)$, and that when extending a partial flow ${\bf f}_H$ to a flow on $G$, it can be extended in $Y_{n-1}({\bf f}_H)+a_{n-1}+1$ ways. These properties are satisfied if we ensure that the ``inflows" at the vertices
$n-1$ and $n$ are at least $-a_{n-1}$ and $-a_n$, respectively. To simplify the formulation, we will assume that there are no loops at the vertices $n-1, n, n+1$, though the following theorem could also be adapted to a somewhat more general setting.
\begin{theorem} \label{thm:bv3}
Given a connected signed graph $G$ on the vertex set $[n+1]$ with $m_{n-1, n}^-=m_{n-1, n+1}^-=m_{n, n+1}^-=1$, $m_{i,j}^+=0$, for $i, j \in \{n-1, n, n+1\}$, and such that $$\frac{m_{j, n-1}^\epsilon+m_{j, n}^\epsilon+m_{j, n+1}^\epsilon}{m_{j, n-1}^\epsilon}=c,$$
\noindent for $\epsilon \in \{+, -\}$ and for some constant $c$ independent of $j$ for $j \in [n-2]$, we have that for any ${\bf a}=(a_1, \ldots, a_n, 2y-\sum_{i=1}^n a_i) \in \mathbb{Z}^{n+1}$ with $y \leq \min(a_{n-1}+1,\, a_n+1)$,
\begin{equation} \label{cC2} K_G({\bf a})=(\frac{a_1+ \cdots+ a_{n-2}-2y}{c}+ a_{n-1}+1) K_{G- (n-1, n)}({\bf a}).\end{equation}
\end{theorem}
The proof technique of Theorem \ref{thm:bv3} is analogous to that of Theorem \ref{thm:bv}. We invite the reader to check each step of the proof of Theorem \ref{thm:bv} and see how it can be adapted to prove Theorem \ref{thm:bv3}.
\section*{Acknowledgement} I thank Richard Stanley for his intriguing slides and for pointing out the work of Baldoni and Vergne. I also thank Alejandro Morales for numerous conversations about flows and the Kostant partition function.
\section{Numerical Results}
\label{sec:examples}
In this section, the convergence behavior of MLSDC, theoretically analyzed in the previous section, is verified by numerical examples. The method is applied to three different initial value problems and the results are compared to those from classical, single-level SDC.
The key question here is whether the conditions derived in the previous sections (smoothness, high spatial/temporal resolution and high interpolation order) are actually sharp, i.e.\ whether MLSDC does indeed show only low order convergence if any of these conditions are violated.
The corresponding programs were written in Python using the \texttt{pySDC} code \cite{pysdc,10.1145/3310410}.
\subsection{Heat equation}
The first numerical example is the one-dimensional heat equation defined by the following initial value problem:
\begin{equation}\begin{gathered}
\frac{\partial}{\partial t} u(x,t) = \nu \frac{\partial^2}{\partial x^2} u(x,t), \quad \forall t \in [0,t_{end}], x \in [0,1]\\
u(0,t) = 0, \quad u(1,t) = 0,\\
u(x,0) = u_0(x),
\end{gathered}\end{equation}
where $u(x,t)$ represents the temperature at the location $x$ and time $t$ and $\nu > 0$ defines the thermal diffusivity of the medium.
This partial differential equation is discretized in space using standard second-order finite differences with $N$ degrees-of-freedom.
As initial value a sine wave with frequency $\kappa$ is selected, i.e.\ $u_0(x) = \sin(\kappa \pi x)$. Under these conditions, the analytical solution of the spatially discretized initial value problem is given by
\begin{align*}
\vec{u}(t) = \sin(\kappa \pi \vec{x})e^{-t \nu \rho}\text{ with } \rho = \frac{1}{\Delta x^2} \left(2 - 2\cos(\kappa \pi \Delta x)\right)
\end{align*}
with $\vec{x} \coloneqq (x_n)_{1 \leq n \leq N}$ and an element-wise application of the trigonometric functions.
For the tests, we choose $\kappa=4$, $\nu = 0.1$ and $M=5$ Gauß-Radau collocation nodes.
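The claimed exact solution can be verified componentwise: with standard second-order finite differences and homogeneous Dirichlet boundaries, the sine mode is an exact eigenvector of the discrete Laplacian. The following minimal check is our own sketch (not part of the paper's pySDC setup); the grid size is an assumption:

```python
import numpy as np

# Sketch (our own check, not part of the paper's pySDC setup): the sine mode
# u0(x) = sin(kappa*pi*x) is an eigenvector of the standard second-order
# finite-difference Laplacian with homogeneous Dirichlet boundaries, with
# eigenvalue -rho for rho = (2 - 2*cos(kappa*pi*dx)) / dx**2.
kappa, N = 4, 255
dx = 1.0 / (N + 1)
x = np.linspace(dx, 1.0 - dx, N)                 # interior grid points
u0 = np.sin(kappa * np.pi * x)

lap = (np.roll(u0, 1) - 2 * u0 + np.roll(u0, -1)) / dx**2
lap[0] = (-2 * u0[0] + u0[1]) / dx**2            # boundary value u(0) = 0
lap[-1] = (u0[-2] - 2 * u0[-1]) / dx**2          # boundary value u(1) = 0

rho = (2 - 2 * np.cos(kappa * np.pi * dx)) / dx**2
assert np.allclose(lap, -rho * u0)               # exact discrete eigenvector
```

Since the sine mode is an exact eigenvector, the spatially discretized problem reduces along this mode to the scalar ODE $u' = -\nu\rho\,u$, which yields the stated exact solution.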
Note that we are using this linear ODE for our tests even though the linearity of the right-hand side $f$ is not a necessary condition in theorems \ref{theo:sdc_conv}, \ref{theo:mlsdc_conv_gen} or \ref{theo:mlsdc_conv_prec}. In fact, $f$ is just assumed to be Lipschitz continuous. However, we will consider the heat equation here since it is well studied and has a convenient exact solution needed to compute the errors of SDC and MLSDC.
The following tests are structured in a particular way: In the first one, we will adjust the method parameters according to the results of \fref{theo:mlsdc_conv_prec} to observe an improved convergence of MLSDC over SDC. More specifically, we will use a small spatial step size $\Delta x$, a high interpolation order $p$ and try to generate smooth errors using smooth initial guesses for the iteration. In a second step, we will then subsequently change these parameters leading to a lower convergence order of MLSDC as described in \fref{theo:mlsdc_conv_gen}. Thereby, we will reveal the dependence of MLSDC's convergence behavior on those parameters and simultaneously verify the general minimal achievable convergence order of the method. Altogether, this will confirm the theoretical results of the previous section.
For the first test, the number of degrees-of-freedom was set to $N_h = 255$ on the fine and $N_H = 127$ on the coarse level of MLSDC. This parameter particularly determines the spatial grid size $\Delta x = \frac{1}{N+1}$.
As discussed before, we use injection as the restriction operator and piecewise Lagrange interpolation of order $p$ as the interpolation operator, for now with $p=8$.
Moreover, the smooth initial value $u_0(x) = \sin(4\pi x)$ was spread across the different nodes $\tau_m$ to form the initial guess.
\begin{figure}
\includegraphics[width=\textwidth]{errors_heat1d}
\caption{Convergence behavior of SDC and MLSDC applied to the discretized one-dimensional heat equation with coarsening in space (for MLSDC) and different parameters}
\label{fig:heat}
\end{figure}
An illustration of the corresponding numerical results is shown in \fref{fig:heat}c, with the reference SDC result in \fref{fig:heat}a. MLSDC was applied with different step sizes $\Delta t$ and numbers of iterations $k$ to the considered problem and the resulting errors were plotted as points in the respective graphs. The drawn lines, on the other hand, represent the expected behavior, i.e.\ the predicted convergence orders of the method according to \fref{theo:mlsdc_conv_prec}. In particular, we assume that the terms $\Delta x^p$ and $C(E)$ are sufficiently small such that $\Delta t^{k_0+2k}$ is the leading order in the corresponding error estimation. As a result, MLSDC is expected to gain two orders per iteration. In the figure, it can be seen that all of the computed points nearly lie on the expected lines which always start at the error resulting for the largest step size. Therefore, the numerical results match the theoretical predictions.
If, by contrast, the spatial grid size $\Delta x$ is chosen to be significantly larger, in particular as large as $\frac{1}{16}$ on the fine and $\frac{1}{8}$ on the coarse level, the leading order in \fref{theo:mlsdc_conv_prec}, presenting an error estimation for MLSDC, changes to $\Delta t^{k_0+k}$. Hence, in this example, we expect MLSDC to only gain one order in $\Delta t$ with each iteration, as SDC does and as it was described in the general convergence \fref{theo:mlsdc_conv_gen}. The corresponding numerical results, presented in \fref{fig:heat}d, confirm this prediction. For both methods, the error decreases by one order in $\Delta t$ with each iteration.
Another possible modification of the first example is a decrease of the interpolation order $p$. \Fref{fig:heat}e shows the numerical results if this parameter is changed to $p=4$. Apparently, this also leads to an order reduction of MLSDC compared to \fref{fig:heat}c. According to \fref{theo:mlsdc_conv_prec}, this behavior is to be expected: the leading order in the presented error estimate is again reduced to $\Delta t^{k_0+k}$ due to the higher magnitude of $\Delta x^p$. Besides, it should be noted that the considered values of $\Delta t$ are significantly smaller here. This is caused by the fact that MLSDC does not converge for greater values of this parameter, i.e.\ the upper bound for $\Delta t$, implicitly occurring in the assumptions of the respective theorem, seems to be lower here. The smaller step sizes $\Delta t$ also entail overall smaller errors. As a result, the accuracy of the collocation solution is reached earlier, which explains the outliers in the considered plots.
The third necessary condition for the improved convergence of MLSDC is the magnitude of the remainders $E_m$ or, in other words, the smoothness of the error. In this context, we will now have a look at the changes which result from a more oscillatory initial guess. In particular, we will assign random values to $U^{(0)}$. The corresponding errors are shown in \fref{fig:heat}b for SDC and \fref{fig:heat}f for MLSDC. It can be seen that this change again results in a lower convergence order of MLSDC; in particular, it gains one order per iteration, just as SDC does. Since the crucial term $\Delta x^p$ is left unchanged this time, the result can only be attributed to a higher value of $C(E)$ and thus to an insufficient smoothness of the error. This suggests that for this problem type a smooth initial guess is a sufficient condition for the smoothness of the error and thus for a low value of $C(E)$.
\subsection{Allen-Cahn equation}
The second test case is the non-linear, two-dimensional Allen-Cahn equation
\begin{align}\label{eq:ac}
u_t &= \Delta u + \frac{1}{\epsilon^2} u(1-u^2)\quad \mathrm{on}\quad [-0.5, 0.5]^2\times[0,T],\ T>0,\\
u(x,0) &= u_0(x),\quad x\in[-0.5,0.5]^2,\nonumber
\end{align}
with periodic boundary conditions and scaling parameter $\epsilon > 0$.
We use again second-order finite differences in space and choose a sine wave in 2D as initial condition, i.e.\ $u_0(x) = \sin(\kappa \pi x)\sin(\kappa\pi y)$.
There is no analytical solution, neither for the continuous nor for the spatially discretized equations.
Therefore, reported errors are computed against a numerically computed high-order reference solution.
For the tests, we choose $\kappa=4$, $\epsilon = 0.2$ and $M=3$ Gauß-Radau collocation nodes.
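For reference, the spatially discretized right-hand side used in such a setup can be sketched as follows. This is our own illustrative code, not the paper's pySDC implementation, and the grid size is an assumption:

```python
import numpy as np

# Sketch (our own, not from pySDC) of the spatially discretized right-hand
# side of the Allen-Cahn equation above: second-order finite differences
# with periodic boundaries on a uniform grid over [-0.5, 0.5]^2.
def ac_rhs(u, dx, eps):
    """f(u) = Laplacian(u) + u * (1 - u^2) / eps^2, periodic boundaries."""
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
           np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u) / dx**2
    return lap + u * (1.0 - u**2) / eps**2

# The constant states u = -1, 0, 1 annihilate both the reaction term and
# the discrete Laplacian, so the full right-hand side vanishes there:
for c in (-1.0, 0.0, 1.0):
    assert np.allclose(ac_rhs(np.full((64, 64), c), 1.0 / 64, 0.2), 0.0)
```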
The tests are structured precisely as for the heat equation: we first show second-order convergence factors using appropriate parameters and then test the sharpness of the conditions on smoothness, the resolution and the interpolation order.
For the first test, the number of degrees-of-freedom per dimension was set to $N_h = 128$ on the fine and $N_H = 64$ on the coarse level of MLSDC.
Transfer operators are the same as before.
\begin{figure}
\includegraphics[width=\textwidth]{errors_allencahn_2d.pdf}
\caption{Convergence behavior of SDC and MLSDC applied to the discretized two-dimensional Allen-Cahn equation with coarsening in space (for MLSDC) and different parameters}
\label{fig:allencahn}
\end{figure}
Figure~\ref{fig:allencahn} shows the results of our tests for the Allen-Cahn equation for both SDC and MLSDC.
The main conclusion here is the same as before: using fewer degrees-of-freedom (here $N=32$ on the fine level instead of $128$), a lower interpolation order (here $p=2$ instead of $8$) or a non-smooth (here random) initial guess leads to a degraded order of the convergence factor.
\subsection{Auzinger's test case}
The third test case is the following two-dimensional ODE introduced in \cite{auzinger2004modified}:
\begin{equation}\begin{gathered}
\dot{u} = \begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix}
= \begin{pmatrix} -y - \lambda x(1-x^2-y^2) \\ x - \lambda \rho y(1-x^2-y^2) \end{pmatrix}, \quad \forall t \in [0,t_{end}] \\
u(0) = u_0,
\end{gathered}\end{equation}
where $\lambda<0$ determines the stiffness of the problem and $\rho>0$ is a parameter.
For the tests, we choose $\lambda=-0.75$, $\rho=3$ and $u_0 = (1,0)^T$.
The analytical solution of this initial value problem is known. It is given by
\begin{align*}
u(t) = \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} \cos(t) \\ \sin(t) \end{pmatrix},\quad t \in [0,t_{end}].
\end{align*}
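One can verify this directly: on the unit circle the factor $1 - x^2 - y^2$ vanishes, so the right-hand side reduces to $(\dot{x}, \dot{y}) = (-y, x)$, which is satisfied by $(\cos t, \sin t)$. A quick numerical check (a sketch, not taken from the paper's code):

```python
import numpy as np

# Quick numerical check (a sketch, not from the paper's code) that
# u(t) = (cos t, sin t) solves the Auzinger ODE: the stiff terms carry the
# factor 1 - x^2 - y^2, which vanishes on the unit circle.
lam, rho = -0.75, 3.0

def f(u):
    x, y = u
    r = 1.0 - x**2 - y**2
    return np.array([-y - lam * x * r, x - lam * rho * y * r])

for t in np.linspace(0.0, 2 * np.pi, 17):
    u = np.array([np.cos(t), np.sin(t)])
    assert np.allclose(f(u), [-np.sin(t), np.cos(t)])   # u' = f(u) holds
```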
The corresponding tests are structured in a similar way as before but this time the ODE version of \fref{theo:mlsdc_conv_prec}, given by \fref{equ:err_coarse_time}, is considered. So, first appropriate parameters are used to reach second-order convergence of MLSDC, and then the sharpness of the implied conditions is tested. In particular, the improved convergence behavior of MLSDC is expected to depend on the time step size $\Delta \tau = C \Delta t$, the (now temporal) interpolation order $p$ and the smoothness of the error in time. In our tests, we always used the maximal interpolation order $p=M_H$ corresponding to the number of collocation nodes on the coarse level, since otherwise it was not possible to get a second order convergence at all. The number of nodes on the fine grid was chosen to be $M_h=8$.
\begin{figure}
\includegraphics[width=\textwidth]{errors_auzinger}
\caption{Convergence behavior of SDC and MLSDC applied to the Auzinger problem with coarsening in the collocation nodes (for MLSDC) and different parameters}
\label{fig:auzinger}
\end{figure}
The numerical results are shown in \fref{fig:auzinger}. Again, they agree with our theoretical predictions: All of the three conditions implied by \fref{equ:err_coarse_time} need to be fulfilled to reach second-order convergence of MLSDC. A larger time step size (here $\Delta t \in [2^{-1}, 2^{-4}]$ instead of $[2^{-3}, 2^{-6}]$), a lower interpolation order (here $p=M_H=2$ instead of $6$) or a non-smooth (here random) initial guess immediately led to a decrease in the order.
However, there are a few oddities in the graphs that we would like to discuss here. First of all, the orders shown in \fref{fig:auzinger}a, c and d are not $2k-1$ and $k-1$ as we would expect, but rather $2k$ and $k$. This behavior is probably related to the $k_0$-term in the estimates which stems from the initial guess of SDC and MLSDC. This explanation would also agree with the result that this additional order gets lost if a random initial guess is used (see \fref{fig:auzinger}b, f).
Aside from that, it should be noted that the use of a lower interpolation order (\fref{fig:auzinger}e) led to a convergence order of $k+1$ instead of $k$ as we would have expected. The reason for this is not clear but could be related to \fref{equ:mlsdc_conv_prec_lte} which implies that all orders between $k$ and $2k$ can potentially be reached. In any case, the result shows that the second-order convergence is lost if the interpolation order is decreased. Finally, we want to discuss the plot in \fref{fig:auzinger}d resulting from the use of a larger time step size $\Delta t$.
It can be seen that the data points do not perfectly agree with the predicted lines here. Apparently, the numerical results are often much better than expected. However, they do not reach order $2k$ and hence confirm our theory that the second-order convergence of MLSDC also depends on a small time step size. The deviations in the data are, in fact, not too surprising here, considering that the time step size is a very crucial parameter for the convergence of MLSDC in general. As described in \fref{theo:mlsdc_conv_gen} and \ref{theo:mlsdc_conv_prec}, $\Delta t$ has to be small enough in order for MLSDC to converge at all. For that reason, the possible testing scope for the time step size is rather small, making it difficult to find appropriate parameters where MLSDC converges exactly with order $k$.
These artifacts shed some light on the ``robustness'' of the results, which we would like to comment on here: during the tests with all three examples, we found it surprisingly hard to obtain consistent results.
All model and method parameters had to be chosen carefully in order to support the theory derived above so clearly.
In many cases the results were much more inconsistent, showing e.g.\ convergence orders somewhere between $k$ and $2k$, changing convergence orders or stagnating results close to machine precision or discretization errors.
None of our tests contradicted the theoretical results, though; they did reveal, however, that the bounds we obtained are rather pessimistic.
\section{Introduction}
The original spectral deferred correction (SDC) method for solving ordinary differential equations (ODEs), a variant of the defect and deferred correction methods developed in the 1960s \cite{def-corr-1,def-corr-2,def-corr-3,def-corr-4}, was first introduced in \cite{sdc-orig} and then subsequently improved, e.g. in \cite{hansen,huang,imex-1,imex-2}. It relies on a discretization of the initial value problem in terms of a collocation problem which is then iteratively solved using a preconditioned fixed-point iteration. The iterative structure of SDC has been proven to provide many opportunities for algorithmic and mathematical improvements.
These include the option of using Newton-Krylov schemes such as the Newton-GMRES method to solve the resulting preconditioned nonlinear systems, leading to the so-called Krylov deferred correction methods~\cite{huang,kdc}. Various semi-implicit and multi-implicit formulations of the method have been explored~\cite{hagstrom,imex-1,imex-2,BourliouxEtAl2003,LaytonMinion2004}. In the last decade, SDC has been applied e.g.\ to gas dynamics and incompressible or reactive flows~\cite{appl-1,imex-2} as well as to fast-wave slow-wave problems~\cite{sdc-expl-euler} or particle dynamics~\cite{appl-2}. The generalized integral deferred correction framework includes further variations of SDC, where the used discretization approach is not limited to collocation methods~\cite{idc-1,idc-2}. Moreover, the SDC approach was used to derive efficient parallel-in-time solvers addressing the needs of modern high-performance computing architectures~\cite{pfasst,sdc-notation}.
Here, we will focus on the multi-level extension of SDC, namely multi-level spectral deferred corrections (MLSDC), which was introduced in \cite{mlsdc-1}. It uses a multigrid-like approach to solve the collocation problem with SDC iterations (now called ``sweeps'' in this context) being performed on the individual levels. The solutions on the different levels are then coupled using the Full Approximation Scheme (FAS) coming from nonlinear multigrid methods. This variation was designed to improve the efficiency of the method by shifting some of the work to coarser, less expensive levels.
While there are several numerical examples which show the correctness and efficiency of MLSDC \cite{mlsdc-2,doi:10.1080/13647830.2019.1566574,HAMON2019435}, a theoretical proof of its convergence is still missing. The convergence of SDC, however, was already extensively examined \cite{causley,hagstrom,hansen,huang,xia,tang}. It could be shown that, under certain conditions, the method gains one order per iteration up to the accuracy of the solution of the collocation problem. The aim of this work now is to prove statements on the convergence behavior of MLSDC using similar concepts and ideas as they were used in the convergence proof of SDC, in particular the one presented in \cite{tang}.
For that, we first review SDC along with one of its existing convergence proofs, forming the basis for the following convergence analysis of MLSDC. Then, MLSDC is described and a first convergence theorem is provided. The theorem specifically states that MLSDC performs at least as well as SDC does. Since this result falls short of our intuitive expectation that the multi-level extension should be more efficient than the original method, we will examine the convergence proof again in greater detail, now for a specific choice of transfer operators between the different levels. As a result, a second theorem on the convergence of MLSDC will be derived, describing an improved behavior of the method if particular conditions are fulfilled.
More specifically, we will provide theoretical guidelines for parameter choices in practical applications of MLSDC in order to achieve this improved efficiency. Finally, the theoretical results will be verified by numerical examples.
\section{Multi-Level Spectral Deferred Corrections}
\label{sec:mlsdc}
Multi-level SDC (MLSDC) is a method that uses a multigrid-like approach to solve the collocation problem \eqref{equ:coll_prob}. It is an extension of SDC in which the iterations, now called ``sweeps'' in this context, are computed on a hierarchy of levels and the individual solutions are coupled in the same manner as used in the full approximation scheme (FAS) for non-linear multigrid methods.
The different levels are typically created by using discretizations of various resolutions. In this paper, only the two-level algorithm is considered. For this purpose, let $\Omega_h$ denote the fine level and $\Omega_H$ the coarse one. Then, $U_h$ denotes the discretized vector on $\Omega_h$. Furthermore, $C_h$, $F_h$ and $Q_h$ are the discretizations of the operators and the quadrature matrix. Likewise, $U_H$, $C_H$, $F_H$ and $Q_H$ represent the corresponding values for the discretization parameter $H$.
Here, we will consider two coarsening strategies. The first one is a re-discretization in time of the collocation problem, i.e.\ a reduction of the number of collocation nodes. The second possibility, only applicable if a partial differential equation has to be solved, is a re-discretization in space, i.e.\ the use of fewer variables for the conversion into an ODE.
Since it is necessary to perform computations on different levels, a method to transfer vectors between the individual levels is needed. For this purpose, let $I_H^h$ denote the operator that transfers a vector from the coarse level $\Omega_H$ to the fine level $\Omega_h$. This operator is called the interpolation operator. $I_h^H$, on the other hand, shall represent the operator for the reverse direction. It is called the restriction operator. Both operators together are called transfer operators.
In detail, the MLSDC two-level algorithm consists of these four steps:
\begin{enumerate}
\item Compute the $\tau$-correction as the difference between coarse and fine level:
\begin{align}\begin{split}
\label{equ:tau-korr}
\tau &= C_H(I_h^H U_h^{(k)}) - I_h^H C_h(U_h^{(k)})\\
&= I_h^H ( \Delta t Q_h F_h (U_h^{(k)}) ) - \Delta t Q_H F_H(I_h^H U_h^{(k)}).
\end{split}\end{align}
\item Perform an SDC sweep to approximate the solution of the modified collocation problem on the coarse level
\begin{equation}
\label{equ:qu:coll_coarse}
C(U_H) = U_{0,H} + \tau
\end{equation}
on $\Omega_H$, beginning with $I_h^H U_h^{(k)}$:
\begin{align}\begin{split}
\label{equ:mlsdc-alg1}
U_H^{(k+\half)} &= U_{0,H} + \tau + \Delta t Q_{\Delta,H} F_H(U_H^{(k+\half)})\\
&\quad+ \Delta t (Q_H - Q_{\Delta,H}) F_H(I_h^H U_h^{(k)}).
\end{split}\end{align}
\item Compute the coarse level correction:
\begin{align}
\label{equ:mlsdc-alg2}
U_h^{(k+\half)} = U_h^{(k)} + I_H^h (U_H^{(k+\half)} - I_h^H U_h^{(k)}).
\end{align}
\item Perform an SDC sweep to approximate the solution of the original collocation problem
\begin{equation*}
C(U_h) = U_{0,h}
\end{equation*}
on $\Omega_h$, beginning with $U_h^{(k+\half)}$:
\begin{align}\begin{split}
\label{equ:mlsdc-alg3}
U_h^{(k+1)} &= U_{0,h} + \Delta t Q_{\Delta,h} F_h(U_h^{(k+1)})\\
&\quad+ \Delta t (Q_h - Q_{\Delta,h}) F_h(U_h^{(k+\half)}).
\end{split}\end{align}
\end{enumerate}
Note that for better readability, the enlargements of the matrices $Q$ and $Q_\Delta$ by applying the Kronecker product with the identity matrix are no longer indicated.
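The four steps above can be made concrete with a minimal self-contained sketch. The following is our own construction, not the paper's pySDC implementation, applied to the scalar linear test problem $u' = \lambda u$, $u(0)=1$, on one time step with coarsening in the collocation nodes; equidistant nodes and Lagrange-based transfer matrices are simplifying assumptions:

```python
import numpy as np

# Minimal sketch of the two-level MLSDC iteration (steps 1-4 above) for the
# scalar linear problem u' = lam*u, u(0) = 1, with coarsening in the
# collocation nodes.  Our own construction, not the paper's pySDC code;
# equidistant nodes and Lagrange transfer matrices are assumptions.
def lagrange_eval(src, dst):
    """T[m, j] = l_j(dst[m]) for the Lagrange basis {l_j} on the nodes src."""
    T = np.zeros((len(dst), len(src)))
    for j in range(len(src)):
        y = np.zeros(len(src)); y[j] = 1.0
        c = np.polynomial.polynomial.polyfit(src, y, len(src) - 1)
        T[:, j] = np.polynomial.polynomial.polyval(dst, c)
    return T

def quadrature_matrix(nodes):
    """Q[m, j] = integral of the j-th Lagrange polynomial from 0 to nodes[m]."""
    M = len(nodes)
    Q = np.zeros((M, M))
    for j in range(M):
        y = np.zeros(M); y[j] = 1.0
        c = np.polynomial.polynomial.polyfit(nodes, y, M - 1)
        Q[:, j] = np.polynomial.polynomial.polyval(
            nodes, np.polynomial.polynomial.polyint(c))
    return Q

def euler_precond(nodes):
    """Implicit-Euler preconditioner Q_Delta: lower-triangular node spacings."""
    dtau = np.diff(np.concatenate(([0.0], nodes)))
    return np.tril(np.tile(dtau, (len(nodes), 1)))

lam, dt = -1.0, 0.5
nh = np.linspace(0.2, 1.0, 5)          # fine collocation nodes in (0, 1]
nH = nh[::2]                           # coarse nodes: every other fine node
Qh, QH = quadrature_matrix(nh), quadrature_matrix(nH)
Qdh, QdH = euler_precond(nh), euler_precond(nH)
R = lagrange_eval(nh, nH)              # restriction (injection: nH subset of nh)
P = lagrange_eval(nH, nh)              # interpolation

U_coll = np.linalg.solve(np.eye(5) - dt * lam * Qh, np.ones(5))  # fine collocation sol.
U, errs = np.ones(5), []
for k in range(5):
    # step 1: FAS tau-correction
    tau = R @ (dt * lam * Qh @ U) - dt * lam * QH @ (R @ U)
    # step 2: coarse sweep for the tau-corrected collocation problem
    UH = np.linalg.solve(np.eye(3) - dt * lam * QdH,
                         np.ones(3) + tau + dt * lam * (QH - QdH) @ (R @ U))
    # step 3: coarse-level correction
    U = U + P @ (UH - R @ U)
    # step 4: fine sweep
    U = np.linalg.solve(np.eye(5) - dt * lam * Qdh,
                        np.ones(5) + dt * lam * (Qh - Qdh) @ U)
    errs.append(np.max(np.abs(U - U_coll)))
assert errs[-1] < 1e-2 * errs[0]       # the iteration contracts toward U_coll
```

Note that the fine collocation solution is a fixed point of this iteration: with $\tau$ from step 1, the coarse sweep reproduces the restricted fine solution, so steps 3 and 4 leave it unchanged.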
\subsection{A first convergence proof}
Here, we will extend the existing convergence proof for SDC, as presented in \fref{theo:sdc_conv}, to prove the convergence of its multi-level extension MLSDC. The following theorem provides an appropriate convergence statement. In the proof, we use very similar ideas as in the one for the convergence of SDC.
\begin{theorem}
\label{theo:mlsdc_conv_gen}
Consider a generic initial value problem like (\ref{equ:ivp}) with a Lipschitz-continuous function $f$ on the right-hand side.
If the step size $\Delta t$ is sufficiently small, MLSDC converges linearly to the solution of the collocation problem with a convergence rate in $\mathcal{O}(\Delta t)$, i.e.\ the following estimate for the error of the $k$-th iterate $U_h^{(k)}$ of MLSDC compared to the solution of the collocation problem $U_h$ holds:
\begin{align}
\label{equ:mlsdc_conv_gen_iter}
\infnorm{U_h - U_h^{(k)}} \leq C_9 \Delta t \infnorm{U_h - U_h^{(k-1)}},
\end{align}
where the constant $C_9$ is independent of $\Delta t$.
If, additionally, the solution of the initial value problem $u$ is $(M+1)$-times continuously differentiable, the LTE of MLSDC compared to the solution of the ODE can be bounded by
\begin{align}
\label{equ:mlsdc_conv_gen_lte}
\|\bar{U} - U_h^{(k)}\|_\infty &\leq C_{10} \Delta t^{k_0+k} \|u\|_{k_0+1}
+ C_{11} \Delta t^{M+1} \|u\|_{M+1}\\
&= \mathcal{O}(\Delta t^{\min(k_0+k, M+1)}),
\end{align}
where the constants $C_{10}$ and $C_{11}$ are independent of $\Delta t$, $k_0$ denotes the approximation order of the initial guess $U^{(0)}$ and $\|u\|_p$ is defined by $\infnorm{u^{(p)}}$.
\end{theorem}
\begin{proof}
For better readability, the maximum norm $\infnorm{\cdot}$ is denoted with the simple norm $\norm{\cdot}$ within this proof.
\renewcommand{\infnorm}[1]{\lVert#1\rVert}
Besides, we consider $U_h^{(k+1)}$ instead of $U_h^{(k)}$ here, in order to enable consistent references to the definition of the MLSDC algorithm above.
As the last step of an MLSDC iteration, in particular \fref{equ:mlsdc-alg3}, corresponds to an SDC iteration, we can use \fref{theo:sdc_conv} to get an initial error estimation. Keeping in mind that the SDC iteration is based on $U_h^{(k+\half)}$ as initial guess here, the application of the mentioned theorem yields the estimation
\begin{align}
\label{equ:u-1_rek_u-half}
\infnorm{U_h - U_h^{(k+1)}} \leq C_4 \Delta t \infnorm{U_h - U_h^{(k+\half)}},
\end{align}
if the step size $\Delta t$ is sufficiently small.
Now, the expression on the right-hand side of the above equation will be further examined. In this context, the definition of an MLSDC iteration, in particular \fref{equ:mlsdc-alg2}, yields
\begin{align*}
\infnorm{U_h - U_h^{(k+\half)}} &= \infnorm{U_h - U_h^{(k)} - I_H^h \left( U_H^{(k+\half)} - I_h^H U_h^{(k)} \right)}\\
&= \infnorm{U_h - U_h^{(k)} - I_H^h \left( U_H^{(k+\half)} + U_H - U_H - I_h^H U_h^{(k)} \right)}\\
&= \infnorm{(I - I_H^h I_h^H) (U_h - U_h^{(k)}) + I_H^h ( U_H - U_H^{(k+\half)} )},
\end{align*}
where in the last step we used the identity $U_H = I_h^H U_h$, which holds as a consequence of the $\tau$-correction stemming from the use of FAS. We get
\begin{align}
\label{equ:u-half_Ihh}
\infnorm{U_h - U_h^{(k+\half)}} &\leq \infnorm{(I - I_H^h I_h^H) (U_h - U_h^{(k)})} + \infnorm{I_H^h ( U_H - U_H^{(k+\half)} )}\\
&\leq \tilde{C}_1 \infnorm{U_h - U_h^{(k)}} + \tilde{C}_2 \infnorm{U_H - U_H^{(k+\half)}},
\label{equ:u-half}
\end{align}
where here and in the following, temporary arising constants will again be denoted by symbols like $\tilde{C}_i$.
Now, it follows a further investigation of the newly emerged term, in particular the second summand of \fref{equ:u-half}. An insertion of the corresponding definitions, namely \fref{equ:qu:coll_coarse} and (\ref{equ:mlsdc-alg1}), together with the application of the triangle inequality and \fref{lem:estimation} yields
\begin{align*}
\begin{split}
\infnorm{U_H - U_H^{(k+\half)}}
&= \infnorm{\Delta t Q_H (F_H(U_H) - F_H(I_h^H U_h^{(k)}))\\
&\quad\quad + \Delta t Q_{\Delta,H} (F_H(I_h^H U_h^{(k)}) - F_H(U_H^{(k+\half)}))}
\end{split}\\
&\leq C_{2,H} \Delta t \infnorm{U_H - I_h^H U_h^{(k)}} + C_{3,H} \Delta t \infnorm{I_h^H U_h^{(k)} - U_H^{(k+\half)}} \nonumber\\
&\leq \tilde{C}_3 \Delta t \infnorm{U_H - I_h^H U_h^{(k)}} + C_{3,H} \Delta t \infnorm{U_H - U_H^{(k+\half)}} \nonumber.
\end{align*}
Subtracting $C_{3,H} \Delta t \infnorm{U_H - U_H^{(k+\half)}}$ from both sides and dividing by $1-C_{3,H} \Delta t$, results in
\begin{align*}
\infnorm{U_H - U_H^{(k+\half)}} &\leq \frac{\tilde{C}_3}{1-C_{3,H} \Delta t} \Delta t \infnorm{U_H - I_h^H U_h^{(k)}}.
\end{align*}
With the same argumentation as above, given a sufficiently small step size, it follows
\begin{align}
\infnorm{U_H - U_H^{(k+\half)}} &\leq \tilde{C}_4 \Delta t \infnorm{U_H - I_h^H U_h^{(k)}} \nonumber\\
&\leq \tilde{C}_5 \Delta t \infnorm{U_h - U_h^{(k)}},
\label{equ:u-half_H}
\end{align}
where in the last step we used the identity $U_H = I_h^H U_h$.
By inserting \fref{equ:u-half} and this result subsequently into \fref{equ:u-1_rek_u-half}, we obtain
\begin{align*}
\infnorm{U_h - U_h^{(k+1)}} &\leq C_4 \Delta t \infnorm{U_h - U_h^{(k+\half)}}\\
&\leq C_4 \Delta t (\tilde{C}_1 \infnorm{U_h - U_h^{(k)}} + \tilde{C}_2 \infnorm{U_H - U_H^{(k+\half)}})\\
&\leq \tilde{C}_6 \Delta t \infnorm{U_h - U_h^{(k)}} + \tilde{C}_7 \Delta t ( \tilde{C}_5 \Delta t \infnorm{U_h - U_h^{(k)}})\\
&= (\tilde{C}_6 + \tilde{C}_8 \Delta t) \Delta t \infnorm{U_h - U_h^{(k)}}.
\end{align*}
Since the step size $\Delta t$ is assumed to be sufficiently small, i.e.\ bounded above, the following estimate is valid
\begin{align*}
\tilde{C}_6 + \tilde{C}_8 \Delta t \leq C_9,
\end{align*}
which concludes the proof for \fref{equ:mlsdc_conv_gen_iter}.
The proof of \fref{equ:mlsdc_conv_gen_lte} is similar to the one of \fref{equ:sdc_conv_lte} in \fref{theo:sdc_conv}, using the previous result.
\end{proof}
\begin{remark}
As before, if the Lipschitz constant of $f$ depends on the spatial resolution, then constants $\tilde{C}_3$ and $C_{3,H}$ do as well.
In~\eqref{equ:u-half_H}, constant $\tilde{C}_5$ then only depends on the condition $C_{3,H} \Delta t < 1$, which is just the condition we know from SDC, but with a scaled constant if spatial coarsening is applied.
As in remark~\ref{rem:dx_sdc} we can write $\tilde{C}_5 = \tilde{C}_5(\delta^{-1})$.
This only affects $C_{10}$ in \fref{equ:mlsdc_conv_gen_lte}, where now $C_{10} = C_{10}(\delta^{-(k+1)})$.
\end{remark}
Like \fref{theo:sdc_conv}, this theorem can also be read as a convergence statement. It shows that MLSDC, interpreted as an iterative method solving the collocation problem, converges linearly with a convergence rate in $\mathcal{O}(\Delta t)$ if $C_9 \Delta t < 1$. Moreover, the second part of the theorem shows that MLSDC, in the sense of a discretization method for ODEs, converges with order $\min(k_0+k, M+1)$.
\begin{remark}
\label{rem:mlsdc_analogous}
The results regarding consistency and stability of SDC, namely \fref{cor:conv_last} and \fref{theo:sdc_stab}, can be easily adapted for MLSDC. Analogous to SDC, it can be proven that the error at the last collocation node can be bounded by $\mathcal{O}(\Delta t^{\min(k_0+k, 2M)})$ and the increment function of MLSDC is Lipschitz continuous for $\Delta t$ small enough.
\end{remark}
Although \fref{theo:mlsdc_conv_gen} is the first general convergence theorem for MLSDC, its statement is rather disappointing: It merely establishes that MLSDC converges at least as fast as SDC, although more work is done per iteration.
A deeper look into the proof of the theorem gives an idea of the cause of the unexpectedly low convergence order. In particular, it is the estimate leading to \fref{equ:u-half} which is responsible for this issue. This equation implies that
\begin{align*}
\norm{U_h - U_h^{(k+\half)}} \leq C \norm{U_h - U_h^{(k)}},
\end{align*}
which essentially means that the additional iteration on the coarse level does not gain any additional order in $\Delta t$ compared to the previously computed iterate on the fine level $U_h^{(k)}$. More specifically, it is the estimate
\begin{align*}
\infnorm{(I-I_H^h I_h^H)(U_h - U_h^{(k)})} \leq C \infnorm{U_h - U_h^{(k)}},
\end{align*}
in equation \eqref{equ:u-half} which leads to this result.
Thus, a possibly superior behavior of MLSDC seems to depend on the magnitude of $\infnorm{(I - I_H^h I_h^H)e_h}$ with $e_h = U_h - U_h^{(k)} \in \mathbb{C}^{MN}$, which describes the difference between an original vector and the one which results from restricting and interpolating it. Consequently, the term can be interpreted as a measure of the quality of the coarse-level approximation or of the accuracy loss it causes, respectively. In the following, this term will be examined in detail, resulting in a new theorem for the convergence of MLSDC with a higher convergence order but additional assumptions which have to be met.
\subsection{An improved convergence result}\label{ssec:improved_mlsdc}
For this purpose, we will mainly focus on a specific coarsening strategy, namely coarsening in space. The differences arising from coarsening in time will be discussed at the end of this section. Moreover, we will focus on particular transfer operators. For $I_H^h$ we consider a piece-wise Lagrange interpolation of order $p$: instead of using all $N_H$ available values to approximate the value at a particular point $x_i \in \Omega_h$, only its $p$ nearest coarse-grid neighbors are taken into account. Hence, $I_H^h$ corresponds to the application of a $p$-th order Lagrange interpolation at each point. For the restriction operator $I_h^H$, on the other hand, we consider simple injection. Thereby, we can focus on the interpolation order and disregard the restriction order.
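To make the interplay of these operators concrete, the following sketch (an illustration under simplifying assumptions, not part of the analysis: factor-two coarsening, $p = 2$, i.e.\ piece-wise linear interpolation, and a hypothetical smooth error vector) checks numerically that the defect $(I - I_H^h I_h^H)e_h$ decays like $\mathcal{O}(\Delta x^2)$:

```python
import numpy as np

def injection(u_fine):
    # restriction I_h^H: keep every other fine-grid value (simple injection)
    return u_fine[::2]

def linear_interp(u_coarse):
    # interpolation I_H^h for p = 2: two-point Lagrange, i.e. piece-wise linear
    u = np.empty(2 * len(u_coarse) - 1)
    u[::2] = u_coarse
    u[1::2] = 0.5 * (u_coarse[:-1] + u_coarse[1:])
    return u

errs = []
for N in (64, 128, 256):
    x = np.linspace(0.0, 1.0, N + 1)
    e_h = np.sin(2.0 * np.pi * x)                 # hypothetical smooth error vector
    defect = e_h - linear_interp(injection(e_h))  # (I - I_H^h I_h^H) e_h
    errs.append(np.abs(defect).max())

# halving the coarse step size reduces the defect by a factor of about 2^p = 4
ratios = [errs[i] / errs[i + 1] for i in range(2)]
```

Replacing the two-point stencil by a higher-order Lagrange stencil would accordingly yield defect ratios of about $2^p$.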
The following lemma now provides an appropriate estimation for the considered term $\infnorm{(I - I_H^h I_h^H)e_h}$.
\begin{lemma}
\label{lem:ihh_gen}
Let $E \coloneqq (E_m)_{1 \leq m \leq M}$ denote the remainder of the truncated inverse discrete Fourier transformation of $U_h - U_h^{(k)}$, i.e.
\begin{align*}
E_m \coloneqq \sum_{\ell=N_0}^{N-1} \abs{c_{m,\ell}},\quad m = 1, \dots, M,
\end{align*}
for some cutoff index $N_0 \le N$ and $c_{m,\ell}\in\mathbb{C}$ being the Fourier coefficients.
Then, the following estimate for this error is valid:
\begin{align*}
\infnorm{(I - I_H^h I_h^H)(U_h - U_h^{(k)})} \leq (C_{11} \Delta x^p + C_{12}(E)) \infnorm{U_h - U_h^{(k)}},
\end{align*}
where $I \equiv I_{MN}$ denotes the identity matrix of size $MN$, $I_H^h$ is the piece-wise spatial Lagrange interpolation of order $p$ and $I_h^H$ denotes the injection operator. Furthermore, $\Delta x \equiv \Delta x_H$ is defined as the resolution in space on the coarse level $\Omega_H$ of MLSDC.
\end{lemma}
\begin{proof}
\renewcommand{\infnorm}[1]{\lVert#1\rVert_\infty}
First of all, we need to introduce some definitions.
For ease of notation, the considered error vector $U_h - U_h^{(k)}$ will be denoted by $e_h$ within the proof.
In detail, the following definition is used:
\begin{align*}
e_{m,n} = e_{h,n}(\tau_m) = u_{h,n}(\tau_m)- u_{h,n}^{(k)}(\tau_m), \quad \forall m=1,\dots,M,\; n=1,\dots,N,
\end{align*}
where $m$ denotes the temporal index identifying the particular collocation node $\tau_m$ and $n$ represents the spatial index referring to some discretized points $(x_n)_{1 \leq n \leq N}$ within the considered interval in space $[0,S]$. Additionally, we assume the spatial steps to be equidistant here.
Another definition needed for the proof is $g_{m,p}(x)$. It denotes the Lagrange interpolation polynomial of order $p$ for the restricted vector $I_h^H e_h$ at each point in time $\tau_m$, $m=1, \dots, M$. As the two levels $\Omega_h$ and $\Omega_H$ differ only in their spatial resolution, the transfer operators act along the spatial axis and can thus be considered separately for each component $e_h(\tau_m)$. More specifically, the restriction operator, corresponding to simple injection according to the assumptions, omits several values of $e_h(\tau_m) \in \mathbb{C}^N$, resulting in $(I_h^H e_h)(\tau_m) \in \mathbb{C}^{N_H}$, where $N_H$ denotes the number of degrees of freedom on the coarse level. The subsequent application of the interpolation operator to this vector then leads to the $M$ interpolating polynomials $(g_{m,p}(x))_{1 \leq m \leq M}$, with $p$ referring to their order of accuracy.
Having introduced these notations, the considered term can be written as
\begin{align*}
(I - I_H^h I_h^H) (U_h - U_h^{(k)}) = (I - I_H^h I_h^H) e_h = \left(e_{m,n} - g_{m,p}(x_n)\right)_{\substack{1 \leq m \leq M,\\1 \leq n \leq N}},
\end{align*}
so that we can focus on $\abs{e_{m,n} - g_{m,p}(x_n)}$.
Since $g_{m,p}(x)$ partially interpolates the points $(e_{m,n})_{1 \leq n \leq N}$, it seems natural to use the general error estimate for Lagrange interpolation to bound the considered term. However, there is a crucial issue: the corresponding error bound, given in \cite{bartels,interp-err} by
\begin{align}
\label{equ:interpol_error}
\max_{x \in [a,b]} \abs{f(x) - g_p(x)} \leq \frac{\Delta x^p}{4p} \abs{f^{(p)} (\xi)}, \quad \xi \in [a,b]
\end{align}
depends on the function $f$ from which the interpolation points are obtained. In our case, namely $\abs{e_{m,n} - g_{m,p}(x_n)}$, there is no function directly available to which the error components $e_{m,n}$ correspond.
To fill this gap, we will now derive an appropriate function for this purpose, using a continuous extension of the inverse Discrete Fourier Transformation (iDFT).
The iDFT along the spatial axis for the points $e_{m,n}$ is given by
\begin{align*}
e_{m,n} = \frac{1}{\sqrt{N}} \sum_{\ell=0}^{N-1} c_{m,\ell} \exp{i \frac{2\pi}{N} (n-1) \ell}, \quad m=1, \dots, M, \quad n=1, \dots, N
\end{align*}
with $c_{m,\ell}$ denoting the Fourier coefficients and $i$ the imaginary unit \cite{fourier}.
A continuous extension $\tilde{e}_m(x), x \in [0,S]$, on the whole spatial interval can then be derived by enforcing $\tilde{e}_m(x_n) \overset{!}{=} e_{m,n}$ for all $m$ and $n$.
With the transformation $x_n = \frac{S}{N} (n-1)$, i.e.\ $n = \frac{N}{S} x_n +1$, implied by an equidistant spatial discretization, it follows
\begin{align*}
\tilde{e}_m(x_n) &= \frac{1}{\sqrt{N}} \sum_{\ell=0}^{N-1} c_{m,\ell} \exp{i \frac{2\pi}{S}\ell x_n}
\end{align*}
and hence
\begin{align*}
\tilde{e}_m(x) \coloneqq \frac{1}{\sqrt{N}} \sum_{\ell=0}^{N-1} c_{m,\ell} \exp{i \frac{2\pi}{S} \ell x}, \quad x \in [0, S], \quad m=1, \dots, M.
\end{align*}
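The interpolation property $\tilde{e}_m(x_n) = e_{m,n}$ of this continuous extension can be verified numerically. The following sketch (a hypothetical example with $S = 1$ and arbitrary sample values, using numpy's orthonormal FFT to match the $1/\sqrt{N}$ scaling above) does so for a single fixed $m$:

```python
import numpy as np

S, N = 1.0, 32
x_nodes = S * np.arange(N) / N             # equidistant grid, x_n = S(n-1)/N
e = np.exp(np.sin(2.0 * np.pi * x_nodes))  # hypothetical sample values e_{m,n}
c = np.fft.fft(e, norm="ortho")            # Fourier coefficients with 1/sqrt(N) scaling

def e_tilde(x):
    # continuous extension: (1/sqrt(N)) * sum_l c_l * exp(i * 2*pi/S * l * x)
    l = np.arange(N)
    phases = np.exp(1j * 2.0 * np.pi / S * np.outer(np.atleast_1d(x), l))
    return phases @ c / np.sqrt(N)

# the extension reproduces the samples at the grid points
max_dev = np.abs(e_tilde(x_nodes) - e).max()
```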
Consequently, we have found a function describing the points $e_{m,n}$. Thus, an interpretation of $g_{m,p}(x)$ as interpolation polynomial of $p$ points stemming from $\tilde{e}_m(x)$ is possible and the error estimation for Lagrange interpolation, presented in \fref{equ:interpol_error}, can now be applied. As a result, we get
\begin{align}
\abs{e_{m,n} - g_{m,p}(x_n)} = \abs{\tilde{e}_m(x_n) - g_{m,p}(x_n)} &\leq \frac{\Delta x^p}{4p} \abs{\tilde{e}_m^{(p)} (\xi)}
\end{align}
with $\xi \in [0,S]$.
Inserting the definition of $\tilde{e}_m(x)$ and computing its $p$-th derivative, it follows that
\begin{align*}
\abs{\tilde{e}_m(x_n) - g_{m,p}(x_n)} &\leq \frac{\Delta x^p}{4p} \abs*{\frac{1}{\sqrt{N}} \sum_{\ell=0}^{N-1} c_{m,\ell} (i \frac{2\pi}{S}\ell)^p \exp{i \frac{2\pi}{S}\ell \xi}}\\
&= \frac{1}{\sqrt{N}} \frac{\Delta x^p}{4p} \left(\frac{2\pi}{S}\right)^p \abs*{\sum_{\ell=0}^{N-1} c_{m,\ell} \ell^p \exp{i \frac{2\pi}{S}\ell \xi}},
\end{align*}
so that with $\tilde{C}(p) \coloneqq \frac{1}{4p} \left(\frac{2\pi}{S}\right)^p$ we have
\begin{align*}
\abs{\tilde{e}_m(x_n) - g_{m,p}(x_n)} &\leq \frac{1}{\sqrt{N}} \tilde{C}(p) \Delta x^p \sum_{\ell=0}^{N-1} \abs*{c_{m,\ell} \ell^p \exp{i \frac{2\pi}{S}\ell \xi}}\\
&\leq \frac{1}{\sqrt{N}} \tilde{C}(p) \Delta x^p \sum_{\ell=0}^{N-1} \abs{c_{m,\ell}} \ell^p.
\end{align*}
We now choose $N_0\le N$ such that
\begin{align*}
\sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \geq \epsilon_m > 0 \quad \forall m
\end{align*}
for given $\epsilon_m$. Note that this is not possible if there exists an $m$ for which the coefficients $(c_{m,\ell})_{\ell = 0, \dots, N-1}$ are all $0$. This would imply that for this particular $m$ the error $e_{m,n}$ equals $0$ for all $n = 1, \dots, N$. However, the respective error $e_h(\tau_m)$ would then be $0$ and could simply be disregarded in our context, as it does not contribute to the considered maximum norm. Therefore, the assumption does not lead to a loss of generality. Finally, we define the remainders of the sums as
\begin{align*}
E_m \coloneqq \sum_{\ell=N_0}^{N-1} \abs{c_{m,\ell}},\quad m = 1, \dots, M.
\end{align*}
Now, the sum in the previous estimation will be split at $N_0$ resulting in the following estimate:
\begin{align*}
\abs{\tilde{e}_m(x_n) - g_{m,p}(x_n)} &\leq \frac{1}{\sqrt{N}} \tilde{C}(p) \Delta x^p \left( \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \ell^p + \sum_{\ell=N_0}^{N-1} \abs{c_{m,\ell}} \ell^p \right).
\end{align*}
With the simple estimations $\ell \leq N_0$ for the first sum and $\ell \leq N$ for the second one, the following formulation using the definition of the remainder $E_m$ is obtained:
\begin{align*}
\abs{\tilde{e}_m(x_n) - g_{m,p}(x_n)} &\leq \frac{1}{\sqrt{N}} \tilde{C}(p) \Delta x^p \left( N_0^p \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} + N^p \sum_{\ell=N_0}^{N-1} \abs{c_{m,\ell}} \right)\\
&= \frac{1}{\sqrt{N}} \tilde{C}(p) \Delta x^p \left( N_0^p \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} + N^p E_m \right).
\end{align*}
Now, we will have a look at the norm of the whole vector $(I - I_H^h I_h^H)e_h$, but instead of the maximum norm, we first consider the squared 2-norm given by
\begin{align*}
\norm{(I - I_H^h I_h^H) e_h}_2^2 &= \sum_{n=1}^N \sum_{m=1}^M \abs{\tilde{e}_m(x_n) - g_{m,p}(x_n)}^2.
\end{align*}
By the insertion of the previous estimation, it follows
\begin{align*}
\norm{(I - I_H^h I_h^H) e_h}_2^2 &\leq \sum_{n=1}^N \sum_{m=1}^M \left[ \frac{1}{\sqrt{N}} \tilde{C}(p) \Delta x^p \left( N_0^p \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} + N^p E_m \right) \right]^2.
\end{align*}
Since the summands are independent of the running index $n$, the equation can be simplified to
\begin{align}
\label{equ:tmp1}
\norm{(I - I_H^h I_h^H) e_h}_2^2 &\leq \tilde{C}(p)^2 \Delta x^{2p} \sum_{m=1}^M \left( N_0^p \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} + N^p E_m \right)^2.
\end{align}
Now, each inner summand can be written as
\begin{align}
\label{equ:tmp2}
\left( N_0^p \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} + N^p E_m \right)^2 &=
N_0^{2p} \left( \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \right)^2 +
2 N_0^p N^p E_m \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \nonumber \\
&\quad+ N^{2p} E_m^2\\
&=: S_1 + S_2 + S_3.\nonumber
\end{align}
While $S_1$ is the intended component (it contains the squared sum of the $\abs{c_{m,\ell}}$, which we will need to get back to the norm of $e_h$), the other two summands $S_2$ and $S_3$ are inconvenient. Therefore, we will now eliminate them by finding a $T(E_m)$ such that
\begin{align}\begin{split}
\label{equ:tmp3}
S_1 + S_2 + S_3 &\leq S_1 + T(E_m) \left( \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \right)^2\\
&= \left( N_0^{2p} + T(E_m) \right) \left( \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \right)^2.
\end{split}\end{align}
This is true if
\begin{align*}
S_2 + S_3 - T(E_m) \left( \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \right)^2 \leq 0,
\end{align*}
which in turn leads to
\begin{align*}
T(E_m) \geq \frac{2 N_0^p N^p}{\sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}}} E_m + \frac{N^{2p}}{\left( \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \right)^2} E_m^2,
\end{align*}
after using the definitions of $S_2$ and $S_3$.
Thus, for
\begin{align}
\label{equ:def_tem}
T(E_m) \coloneqq \frac{2 N_0^p N^p}{\epsilon_m} E_m + \frac{N^{2p}}{\epsilon_m^2} E_m^2
\end{align}
we can bound
\begin{align*}
S_1 + S_2 + S_3 \le \left( N_0^{2p} + T(E_m) \right) \left( \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \right)^2.
\end{align*}
Using the Cauchy-Schwarz inequality we have
\begin{align*}
\left( \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \right) ^2 = \left( \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}} \cdot 1 \right) ^2 \leq \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}}^2 \cdot \sum_{\ell=0}^{N_0-1} 1^2 = N_0 \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}}^2,
\end{align*}
so that with \eqref{equ:tmp1}, \eqref{equ:tmp2} and \eqref{equ:tmp3} we get
\begin{align*}
\norm{(I - I_H^h I_h^H) e_h}_2^2 &\leq \tilde{C}(p)^2 \Delta x^{2p} N_0 \sum_{m=1}^M \left( N_0^{2p} + T(E_m) \right) \sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}}^2.
\end{align*}
Further, it follows from Parseval's theorem~\cite{fourier} that
\begin{align*}
\sum_{\ell=0}^{N_0-1} \abs{c_{m,\ell}}^2 \leq \sum_{\ell=0}^{N-1} \abs{c_{m,\ell}}^2 = \sum_{n=1}^{N} \abs{e_{m,n}}^2
\end{align*}
and thus
\begin{align*}
\norm{(I - I_H^h I_h^H) e_h}_2^2 &\leq \tilde{C}(p)^2 \Delta x^{2p} N_0 \sum_{m=1}^M \left( N_0^{2p} + T(E_m) \right) \sum_{n=1}^{N} \abs{e_{m,n}}^2\\
&= \tilde{C}(p)^2 \Delta x^{2p} N_0 \left( N_0^{2p} + \max_{m=1,\dots,M} T(E_m) \right) \norm{e_h}_2^2.
\end{align*}
Since we are interested in the maximum norm and not the Euclidean one, an appropriate conversion is required, given by the norm equivalence
\begin{align*}
\infnorm{x} \leq \norm{x}_2 \leq \sqrt{n} \infnorm{x} \quad \forall x \in \mathbb{C}^n,
\end{align*}
so that
\begin{align*}
\infnorm{(I - I_H^h I_h^H) e_h} &\leq \norm{(I - I_H^h I_h^H) e_h}_2\\
&\leq \tilde{C}(p) \Delta x^{p} \sqrt{N_0 \left( N_0^{2p} + \max_{m=1,\dots,M} T(E_m) \right)} \norm{e_h}_2\\
&\leq \tilde{C}(p) \Delta x^{p} \sqrt{N_0MN} \sqrt{N_0^{2p} + \max_{m=1,\dots,M} T(E_m)} \infnorm{e_h}.
\end{align*}
Using the subadditivity of the square root, i.e.\ $\sqrt{a+b} \leq \sqrt{a} + \sqrt{b}$ for $a, b \geq 0$, the root can be split as
\begin{align*}
\infnorm{(I - I_H^h I_h^H) e_h} &\leq \tilde{C}(p) \Delta x^{p} \sqrt{N_0MN} \left( N_0^{p} + \sqrt{\max_{m=1,\dots,M} T(E_m)} \right) \infnorm{e_h}.
\end{align*}
With the insertion of the definition of $T(E_m)$ presented in \fref{equ:def_tem}, we get
\begin{align}\begin{split}
\label{equ:first_sight}
\infnorm{(I - I_H^h I_h^H) e_h} &\leq \tilde{C}(p) \sqrt{N_0MN} \Delta x^{p}\\
&\quad \cdot \left( N_0^p + \sqrt{\max_{m=1,\dots,M} \frac{2 N_0^p N^p}{\epsilon_m} E_m + \frac{N^{2p}}{\epsilon_m^2} E_m^2} \right) \infnorm{e_h}.
\end{split}\end{align}
At first sight, it looks like $\Delta x^p$ is the dominating term in this equation. However, a closer look reveals that the root term is in $\mathcal{O}(N^p)$, which shifts the dominance of this summand towards the remainder $E_m$. To see this, we first transform the root term by extracting $N^{2p}$:
\begin{align*}
\sqrt{\max_{m=1,\dots,M} \frac{2 N_0^p N^p}{\epsilon_m} E_m + \frac{N^{2p}}{\epsilon_m^2} E_m^2} &\leq
\sqrt{\max_{m=1,\dots,M} N^{2p} \left( \frac{2 N_0^p}{\epsilon_m} E_m + \frac{1}{\epsilon_m^2} E_m^2 \right)}\\
&= N^p \sqrt{\max_{m=1,\dots,M} \frac{2 N_0^p}{\epsilon_m} E_m + \frac{1}{\epsilon_m^2} E_m^2}.
\end{align*}
Then, we consider the definition of the step size on the fine level, namely $\Delta x_h = \frac{S}{N}$, which allows us to write $N^p$ as $\left( \frac{S}{\Delta x_h} \right)^p$. From this representation, it follows
\begin{align*}
\Delta x^p &\sqrt{\max_{m=1,\dots,M} \frac{2 N_0^p N^p}{\epsilon_m} E_m + \frac{N^{2p}}{\epsilon_m^2} E_m^2}\\&\quad \leq S^p \left( \frac{\Delta x_H}{\Delta x_h} \right)^{p} \sqrt{\max_{m=1,\dots,M} \frac{2 N_0^p}{\epsilon_m} E_m + \frac{1}{\epsilon_m^2} E_m^2}.
\end{align*}
The insertion of this estimation into \fref{equ:first_sight} finally leads to the overall result
\begin{align*}
&\infnorm{(I - I_H^h I_h^H) e_h} \leq \tilde{C}(p) \sqrt{N_0MN} N_0^{p} \Delta x^{p} \infnorm{e_h}\\
&\quad\quad\quad\quad\quad\quad + \frac{(2\pi)^p}{4p} \sqrt{N_0MN} \left( \frac{\Delta x_H}{\Delta x_h} \right)^{p} \left( \sqrt{\max_{m=1,\dots,M} \frac{2 N_0^p}{\epsilon_m} E_m + \frac{1}{\epsilon_m^2} E_m^2} \right) \infnorm{e_h},
\end{align*}
where we used the definition of $\tilde{C}(p)$ to eliminate $S^p$ in the second summand.
In this estimation, we can finally see that $\Delta x^p$ no longer dominates the second summand. It is replaced by the ratio of the step sizes on the coarse and the fine level, which can be regarded as constant. As a result, the remainder $E_m$ is now the dominant factor in this term. Hence, a formulation like
\begin{align*}
\infnorm{(I - I_H^h I_h^H) e_h} &\leq C_{11} \Delta x^p \infnorm{e_h} + C_{12}(E) \infnorm{e_h}
\end{align*}
with $E \coloneqq (E_m)_{1 \leq m \leq M}$ is reasonable and concludes the proof.
\end{proof}
\begin{remark}
The splitting of the sum and the consideration of the vector of remainders $E$ is needed since the constant $C_{11}$ would otherwise depend on $N^p$.
Considering the relation $N^p = \left(\frac{S}{\Delta x_h}\right)^p$ and the fixed ratio between the coarse and fine resolution, the product $N^p \Delta x^p$ would be constant, so the resulting bound would no longer depend on $\Delta x$. As a result, we would not obtain a better estimation than the simple one $\infnorm{(I-I_H^h I_h^H)(e_h)} \ \le\ C \infnorm{e_h}$, which was already used in the proof of \fref{theo:mlsdc_conv_gen}. By the applied split of the series, the term $N^p$ is replaced by $N_0^p$, which yields a more meaningful estimation as it keeps the dependence on the term $\Delta x^p$ while adding another on the smoothness of the error.
\end{remark}
\begin{remark}
Note that ideally the error only has a few low-frequency Fourier coefficients, i.e.\ $\tilde{e}_{m}(x)$ can be written as
\begin{align*}
\tilde{e}_m(x) \coloneqq \frac{1}{\sqrt{N}} \sum_{\ell=0}^{N_0-1} c_{m,\ell} \exp{i \frac{2\pi}{S} \ell x}, \quad x \in [0, S], \quad m=1, \dots, M,
\end{align*}
using $N_0$ summands only.
Then, $E_m = 0$ and the estimate
\begin{align*}
\infnorm{(I - I_H^h I_h^H) e_h} &\leq C_{11} \Delta x^p \infnorm{e_h} + C_{12}(E) \infnorm{e_h}
\end{align*}
reduces to
\begin{align*}
\infnorm{(I - I_H^h I_h^H) e_h} &\leq C_{11} \Delta x^p \infnorm{e_h}.
\end{align*}
\end{remark}
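As a hypothetical numerical illustration of this remark (not part of the analysis; since the cutoff at $N_0$ is one-sided, matching the summation convention above, the band-limited example uses complex exponentials with low non-negative frequencies only), the remainder $E_m$ essentially vanishes for a band-limited error but not for a discontinuous one:

```python
import numpy as np

N, N0 = 256, 16
x = np.linspace(0.0, 1.0, N, endpoint=False)

# band-limited "error": only the modes ell = 1 and ell = 3 are present
smooth = np.exp(1j * 2.0 * np.pi * x) + 0.5 * np.exp(1j * 6.0 * np.pi * x)
# discontinuous "error": Fourier coefficients decay only like 1/ell
rough = np.sign(np.sin(2.0 * np.pi * x)).astype(complex)

def remainder(e, n0):
    # E_m = sum_{ell >= N0} |c_{m,ell}| with the 1/sqrt(N) ("ortho") scaling
    c = np.abs(np.fft.fft(e, norm="ortho"))
    return c[n0:].sum()

E_smooth, E_rough = remainder(smooth, N0), remainder(rough, N0)
```

Here $E_\text{smooth}$ is of the order of rounding errors, so the estimate reduces to the pure $\mathcal{O}(\Delta x^p)$ bound, while $E_\text{rough}$ stays bounded away from zero.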
The following theorem uses \fref{lem:ihh_gen} to extend \fref{theo:mlsdc_conv_gen}. In particular, the provided estimation for $\infnorm{(I - I_H^h I_h^H) (U_h - U_h^{(k)})}$ is used in the corresponding proof, which results in a new convergence theorem for MLSDC.
\begin{theorem}
\label{theo:mlsdc_conv_prec}
Consider a generic initial value problem of the form (\ref{equ:ivp}) with a Lipschitz-continuous function $f$ on the right-hand side. Furthermore, let the conditions of \fref{lem:ihh_gen} be met.
Then, if the step size $\Delta t$ is sufficiently small, MLSDC converges linearly to the solution of the collocation problem with a convergence factor in $\mathcal{O}((\Delta x^p + C(E)) \Delta t + \Delta t^2)$, i.e. the following estimate for the error is valid:
\begin{align}
\label{equ:mlsdc_conv_prec_iter}
\infnorm{U_h - U_h^{(k)}} \leq ((C_{13} \Delta x^p + C_{14}(E)) \Delta t + C_{15} \Delta t^2) \infnorm{U_h - U_h^{(k-1)}},
\end{align}
where $\Delta x \equiv \Delta x_H$ is defined as the resolution in space on the coarse level $\Omega_H$ of MLSDC and the constants $C_{13}$, $C_{14}(E)$ and $C_{15}$ are independent of $\Delta t$.
If, additionally, the solution of the initial value problem $u$ is $(M+1)$-times continuously differentiable, the LTE of MLSDC compared to the solution of the ODE can be bounded by:
\begin{align}\begin{split}
\label{equ:mlsdc_conv_prec_lte}
\infnorm{\bar{U_h} - U_h^{(k)}} &\leq C_{17} \Delta t^{M+1} \norm{u}_{M+1}\\
&\quad+ \sum_{l=0}^{k} C_{l+18} (\Delta x^p + C_{14}(E))^{k-l} \Delta t^{k_0+k+l} \norm{u}_{k_0+1}
\end{split}\end{align}
where the constants $C_{17}, \dots, C_{k+18}$ are independent of $\Delta t$, $k_0$ denotes the approximation order of the initial guess $U_h^{(0)}$ and $\norm{u}_p$ is defined by $\infnorm{u^{(p)}}$.
\end{theorem}
\begin{proof}
The proof is similar to the one of \fref{theo:mlsdc_conv_gen} but differs in the estimation used for $\infnorm{(I - I_H^h I_h^H) (U_h - U_h^{(k)})}$. Here, \fref{lem:ihh_gen} is used for this purpose instead of the simple norm compatibility inequality. Based on the estimations (\ref{equ:u-1_rek_u-half}), (\ref{equ:u-half_Ihh}) and (\ref{equ:u-half_H}) arising in the proof of the mentioned theorem, it follows
\begin{align*}
\infnorm{U_h - U_h^{(k+1)}} \leq \tilde{C}_1 \Delta t \infnorm{(I - I_H^h I_h^H) (U_h - U_h^{(k)})} + C_{15} \Delta t^2 \infnorm{U_h - U_h^{(k)}}.
\end{align*}
As already mentioned, we will now apply \fref{lem:ihh_gen}, namely
\begin{align*}
\infnorm{(I - I_H^h I_h^H)(U_h - U_h^{(k)})} \leq (C_{11} \Delta x^p + C_{12}(E)) \infnorm{U_h - U_h^{(k)}},
\end{align*}
which yields
\begin{align*}
\infnorm{U_h - U_h^{(k+1)}} &\leq (C_{13} \Delta x^p + C_{14}(E)) \Delta t \infnorm{U_h - U_h^{(k)}}\\
&\quad + C_{15} \Delta t^2 \infnorm{U_h - U_h^{(k)}}\\
&= ((C_{13} \Delta x^p + C_{14}(E)) \Delta t + C_{15} \Delta t^2) \infnorm{U_h - U_h^{(k)}}.
\end{align*}
This concludes the proof of \fref{equ:mlsdc_conv_prec_iter}.
The proof of \fref{equ:mlsdc_conv_prec_lte} is again similar to the one of the second equation in \fref{theo:sdc_conv}, using the previous result. Additionally, the binomial theorem is applied to simplify the arising term $((C_{13} \Delta x^p + C_{14}(E)) \Delta t + C_{15} \Delta t^2)^k$.
\end{proof}
\begin{remark}
It is here that a possible dependency of $f$'s Lipschitz constant on $\Delta x$ plays a key role.
Similar to the observations before, we find that in this case \fref{equ:mlsdc_conv_prec_iter} needs to be replaced with
\begin{align*}
\infnorm{U_h - U_h^{(k)}} \leq ((C_{13} \Delta x^{p} + C_{14}&(E))C(\delta^{-1}) \Delta t \\&+ C_{15}(\delta^{-2}) \Delta t^2) \infnorm{U_h - U_h^{(k-1)}},
\end{align*}
where $\delta$ denotes the difference between spatial and temporal resolution (up to constants) and comes from the initial step size restriction of SDC.
The term $C_{13} \Delta x^{p} + C_{14}(E)$ itself does not depend on $\delta$, since it stems from the remainder of the interpolation estimate in lemma~\ref{lem:ihh_gen}, where neither $f$, and hence its Lipschitz constant, nor the step size plays a role.
As before, \fref{equ:mlsdc_conv_prec_lte} has to be modified, now including the term $\delta^{-(k-l+1)}$ in the sum.
The constant $C_{17}$ is still independent of $\delta$.
\end{remark}
\begin{remark}
Similar to \fref{cor:conv_last}, it can be proven that the order limit $M+1$ in \fref{theo:mlsdc_conv_prec} can be replaced by $2M$ if only the error at the last collocation node is considered.
\end{remark}
The theorem states that, under the named conditions, MLSDC converges linearly with a convergence rate of $\mathcal{O}((\Delta x^p + C(E)) \Delta t + \Delta t^2)$ to the collocation solution if $(C_{13} \Delta x^p + C_{14}(E)) \Delta t + C_{15} \Delta t^2 < 1$. This means that if $\Delta x^p$ and the vector of remainders $E$ are sufficiently small, the error of MLSDC decreases by two orders of $\Delta t$ with each iteration, which indeed represents an improved convergence behavior compared to the one described in \fref{theo:mlsdc_conv_gen}. Otherwise, i.e.\ if $\Delta x^p$ and $E$ are not sufficiently small, it only decreases by one order in $\Delta t$, which is equivalent to the result of the previous theorem.
In the second equation of the theorem, it can be seen that again, $\Delta x^p$ and $E$ are the crucial factors here. If they are small enough such that $\Delta t^{k_0 + 2k}$ is the leading order, MLSDC converges with order $\min(k_0+2k-1, 2M-1)$ and thus gains two orders per iteration. Otherwise, the convergence order is only $\min(k_0+k-1, 2M-1)$, i.e.\ the error decreases by one order in $\Delta t$ in each iteration.
Note that, as a result, it is advisable to use a high interpolation order $p$ and a small spatial step size $\Delta x$ on the coarse level in practical applications of MLSDC. This theoretical result matches the numerical observations described in \cite{mlsdc-2}. In section 2.2.5 of that paper, it is mentioned that the convergence properties of MLSDC seem to be highly dependent on the interpolation order and spatial resolution used. Moreover, it is reported that in the considered numerical examples a high resolution in space, i.e.\ a small $\Delta x$, led to a lower sensitivity to the interpolation order $p$. Our theoretical investigation provides an explanation for this behavior.
It seems reasonable to use a similar approach to determine the conditions for a higher convergence order of MLSDC if coarsening in time instead of space is used. Analogous to \fref{equ:interpol_error} in the proof of \fref{lem:ihh_gen}, the Lagrangian error estimation could be used for this purpose, resulting in the following estimation
\begin{align}
\label{equ:err_coarse_time}
\infnorm{(I- I_H^h I_h^H)(U_h - U_H^{(k)})} \leq \frac{\Delta \tau^p}{4p} \infnorm{e^{(p)}(t)},
\end{align}
where $I_H^h$ and $I_h^H$ now denote temporal transfer operators and $e(t)$ is defined as the continuous error of MLSDC compared to the collocation solution. In this case, however, the structure of the function $e(t)$ is known, so it does not have to be approximated by an iDFT. In particular, it is a polynomial of degree $M \equiv M_h$, as both the collocation solution $U$ and each iterate $U_h^{(k)}$ of MLSDC are polynomials of that degree. This can be seen by considering that $\Delta t Q F(U)$ as well as $\Delta t Q_\Delta F(U)$ essentially represent sums of integrals of Lagrange polynomials, which again yield polynomials. Consequently, the $p$-th derivative of $e(t)$ is a polynomial of degree $M_h-p$. The maximal interpolation order $p$ is the number of collocation nodes $M_H$ on the coarse level. Here, we will assume that $p = M_H$, i.e.\ the maximal interpolation order is used. Note that $e^{(p)}(t) = 0$ for $p > M_h$ and hence $\infnorm{(I- I_H^h I_h^H)(U_h - U_H^{(k)})} = 0$ for $M_H = M_h$, which is consistent with the expected behavior, as it means that no coarsening is used at all.
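The exactness argument can be illustrated with a small numpy sketch (hypothetical coefficients; the polynomial stands in for the error $e(t)$ with $M_h = 4$): interpolating at $p > M_h$ nodes is exact because the $p$-th derivative vanishes, while $p \leq M_h$ nodes leave a residual:

```python
import numpy as np

# stand-in for the error polynomial e(t) of degree M_h = 4 (hypothetical coefficients)
e_poly = np.poly1d([1.0, -2.0, 0.5, 3.0, 1.0])
t_eval = np.linspace(0.0, 1.0, 201)

def interp_error(num_nodes):
    # Lagrange interpolation of e_poly at equidistant nodes in [0, 1]
    nodes = np.linspace(0.0, 1.0, num_nodes)
    fit = np.poly1d(np.polyfit(nodes, e_poly(nodes), num_nodes - 1))
    return np.abs(fit(t_eval) - e_poly(t_eval)).max()

err_coarse = interp_error(4)  # p = 4 <= M_h: e^(4) != 0, an interpolation error remains
err_exact = interp_error(5)   # p = 5 >  M_h: e^(5) = 0, the interpolation is exact
```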
As a conclusion, it can be said that, according to \fref{equ:err_coarse_time} the improved convergence behavior of MLSDC using coarsening in time is dependent on the used time step size $\Delta \tau = C \Delta t$ and the number of collocation nodes on the coarse level $M_H$. Note that it is also dependent on the specific coefficients of $e^{(p)}(t)$. However, these are highly dependent on the right-hand side $f$ of the IVP and thus cannot be controlled by any method parameters.
In summary, two convergence theorems for MLSDC were established in this section. While the first one, \fref{theo:mlsdc_conv_gen}, represents a general statement on the convergence of the method, the second one, \fref{theo:mlsdc_conv_prec}, provides theoretically established guidelines for the parameter choice in practical applications of MLSDC in order to achieve an improved convergence behavior of the method. In the next section, we will examine numerical examples of MLSDC to check if the resulting errors match those theoretical predictions.
\section{Conclusions and Outlook}
In this paper, we established two convergence theorems for multi-level spectral deferred correction (MLSDC) methods, using similar concepts and ideas as those presented in \cite{tang} for the proof of the convergence of SDC. In the first theorem, namely \fref{theo:mlsdc_conv_gen}, it was shown that with each iteration of MLSDC the error compared to the solution of the initial value problem decreases by at least one order of the chosen step size $\Delta t$, limited by the accuracy of the underlying collocation solution. The corresponding theorem only requires the operator on the right-hand side of the considered initial value problem to be Lipschitz-continuous, not necessarily linear, and the chosen time step size $\Delta t$ to be sufficiently small.
Consequently, we found a first theoretical convergence result for MLSDC, proving that it converges at least as well as SDC does. However, we would expect, and numerical results already indicated, that the additional computations on the coarse level, more specifically the SDC iterations performed there, lead to an improved convergence behavior of the method.
For that reason, we analyzed the errors in greater detail, leading to a second theorem on the convergence of MLSDC, namely \fref{theo:mlsdc_conv_prec}. Here, we focused on a specific coarsening strategy and transfer operators. In particular, we considered MLSDC using coarsening in space with Lagrangian interpolation. Given these assumptions, we could prove that, if particular conditions are met, the method can even gain two orders of $\Delta t$ in each iteration until the accuracy of the collocation problem is reached. This consequently led us to theoretically established guidelines for the parameter choice in practical applications of MLSDC in order to achieve the described improved convergence behavior of the method. More specifically, the corresponding theorem says that for this purpose the spatial grid size on the coarse level has to be small, the interpolation order has to be high and the errors have to be smooth.
We presented numerical examples which confirm these theoretical results. In particular, it could be observed that the change of one of those crucial parameters immediately led to a decrease in the order of accuracy. Essentially, it resulted in a convergence behavior as it was described in the first presented theorem.
Besides, there are several open questions related to the presented work which have not yet been investigated. Three of them are briefly discussed here.
\textbf{More information, better results.}
The results presented here are quite generic.
As a consequence, since we only assume Lipschitz continuity of the right-hand side of the ODE and do not pose conditions on the SDC preconditioner, both constants and step size restrictions are rather pessimistic.
Using more knowledge of the right-hand side or the matrix $Q_\Delta$ will yield better results, as it already did for SDC.
Since the goal of this paper is to establish a baseline for convergence of MLSDC, exploiting this direction, especially with respect to the treatment of convergence in the stiff limit as done in~\cite{sdc-lu} for SDC, is left for future work.
\textbf{Smoothness of the error.}
The second theorem, describing conditions for an improved convergence behavior of MLSDC, has a drawback regarding its practical significance.
The way \fref{theo:mlsdc_conv_prec} is currently proven requires a smooth error after its periodic extension.
This occurring condition of a smooth error does not always apply and is in particular not easy to control.
Essentially, something like a smoothing property would be needed to ensure that the error always becomes smooth after enough iterations. Numerical results indicate that this property does not hold for SDC, though~\cite{conv-pfasst}. In this context, however, it would be sufficient if we could at least control this condition, i.e.\ derive particular criteria for the parameters of the method ensuring that the errors are smooth. The numerical examples presented in \fref{sec:examples} suggest that selecting a smooth initial guess $U^{(0)}$ results in smooth errors for $U^{(k)}$, $k \geq 1$, at least for a particular set of problems.
\textbf{Other extensions of SDC.}
Furthermore, it could be tried to adapt the presented convergence proofs of MLSDC to other extensions and variations of SDC, as for example the parallel-in-time method PFASST (Parallel Full Approximation Scheme in Space and Time)~\cite{pfasst} or general semi-implicit and multi-implicit formulations of SDC (SISDC/MISDC)~\cite{imex-1,imex-2}. Whereas an adaptation to SISDC and MISDC methods seems to be rather straightforward \cite{causley}, we found that the application of similar concepts and ideas to prove the convergence of PFASST may involve some difficulties. In particular, the coupling of the different time steps, i.e.\ the use of the approximation at the endpoint of the last subinterval for the start point of the next one, could cause a problem in this context since the corresponding operator is independent of $\Delta t$ and would thus add a constant term to our estimations.
\section{Spectral Deferred Corrections}
\label{sec:sdc}
In the following, SDC is presented as a preconditioned Picard iteration for the collocation problem. The approach and notation are substantially based on \cite{mlsdc-1,conv-pfasst} and references therein. First, the collocation problem for a generic initial value problem is explained. Then, SDC is described as a solver for this problem and a compact notation is introduced. Finally, an existing theorem on the convergence of SDC, including its proof, is presented.
\subsection{SDC and the collocation problem}
Consider the following autonomous initial value problem (IVP)
\begin{align}\begin{split}
\label{equ:ivp}
u'(t) &= f(u(t)), \quad t\in[t_0, t_1],\\
u(t_0) &= u_0
\end{split}\end{align}
with $u(t), u_0 \in \mathbb{C}^N$ and $f:\mathbb{C}^N\to\mathbb{C}^N$, $N\in\mathbb{N}$. To guarantee the existence and uniqueness of the solution, $f$ is required to be Lipschitz continuous. Since a high-order method shall be used, $f$ is additionally assumed to be sufficiently smooth.
The IVP can be written as
\begin{align*}
u(t) = u_0 + \int_{t_0}^t f(u(s))ds, \quad t\in[t_0, t_1]
\end{align*}
and choosing $M$ quadrature nodes $\tau_1, \dots, \tau_M$ within the time interval such that $t_0 \le \tau_1 < \tau_2 < \dots < \tau_M = t_1$, the integral is approximated using a spectral quadrature rule such as Gauß-Radau.
This approach results in the discretized system of equations
\begin{align}
\label{equ:intgl1}
u_m = u_0 + \Delta t \sum_{j=1}^M q_{m,j} f(u_j), \quad m=1,...,M,
\end{align}
where $u_m \approx u(\tau_m)$, $\Delta t = t_1 - t_0$ denotes the time step size and $q_{m,j}$ represent the quadrature weights for the unit interval with
\begin{align*}
q_{m,j} = \frac{1}{\Delta t} \int_{t_0}^{\tau_m} l_j(s) ds.
\end{align*}
Here, $l_j$ represents the $j$-th Lagrange polynomial corresponding to the set of nodes $(\tau_m)_{1 \leq m \leq M}$.
We can combine these $M$ equations into the following system of linear or non-linear equations, defining the collocation problem:
\begin{gather}
C(U) \coloneqq (I_{MN} - \Delta t (Q \otimes I_N) F)(U) = U_0,
\label{equ:coll_prob}
\end{gather}
where $U \coloneqq (u_1, u_2, \dots, u_M)^T \in \mathbb{C}^{MN}$, $U_0 \coloneqq (u_0, u_0, \dots, u_0)^T \in \mathbb{C}^{MN}$, $Q \coloneqq (q_{m,j})_{1 \leq m,j \leq M}$ is the matrix gathering the quadrature weights, the vector function $F$ is given by $F(U) \coloneqq (f(u_1), f(u_2), \dots, f(u_M))^T$ and $I_{MN}, I_N$ are the identity matrices of dimensions $MN$ and $N$.
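To make the construction of $Q$ concrete, the following sketch (an illustration, not taken from the paper; the $M = 3$ Radau IIA node values on the unit interval are the standard ones) computes the weights $q_{m,j} = \frac{1}{\Delta t}\int_{t_0}^{\tau_m} l_j(s)\,ds$ by exact integration of the Lagrange polynomials:

```python
import math

def lagrange_coefficients(nodes, j):
    """Monomial coefficients (low degree first) of the j-th Lagrange polynomial."""
    coeffs = [1.0]
    for i, x_i in enumerate(nodes):
        if i == j:
            continue
        denom = nodes[j] - x_i
        new = [0.0] * (len(coeffs) + 1)
        for p, c in enumerate(coeffs):      # multiply current polynomial by (x - x_i) / denom
            new[p] += -x_i * c / denom
            new[p + 1] += c / denom
        coeffs = new
    return coeffs

def integrate_poly(coeffs, a, b):
    """Exact integral over [a, b] of the polynomial with the given coefficients."""
    return sum(c / (p + 1) * (b ** (p + 1) - a ** (p + 1)) for p, c in enumerate(coeffs))

# Standard Radau IIA nodes for M = 3 on [0, 1]; the right endpoint is a node.
nodes = [(4 - math.sqrt(6)) / 10, (4 + math.sqrt(6)) / 10, 1.0]
Q = [[integrate_poly(lagrange_coefficients(nodes, j), 0.0, tau_m) for j in range(3)]
     for tau_m in nodes]
```

The row sums of $Q$ reproduce the nodes, since the Lagrange polynomials sum to one, and the last row is the Radau quadrature rule, which is exact for polynomials up to degree $2M-2$.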
As described above, the solution $U$ of the collocation problem approximates the solution of the initial value problem (\ref{equ:ivp}). With this in mind, the following theorem, referring to \cite[Thm.~7.10]{hairer-wanner-1}, provides a statement on its order of accuracy.
\begin{theorem}
\label{theo:coll_prob_err}
The solution $U = (u_1, u_2, \dots, u_M)^T \in \mathbb{C}^{MN}$ of the collocation problem defined by \fref{equ:coll_prob} approximates the solution $u$ of the IVP \eqref{equ:ivp} at the collocation nodes. In particular, for $\bar{U} \coloneqq (u(\tau_1), \dots, u(\tau_M))^T$ the following error estimation applies:
\begin{align*}
\| \bar{U} - U \|_\infty \leq C_1 \Delta t^{M+1} \|u\|_{M+1},
\end{align*}
where $C_1$ is independent of $\Delta t$, $M$ denotes the number of nodes and $\|u\|_{M+1}$ represents the maximum norm of $u^{(M+1)}$, the $(M+1)$th derivative of $u$.
\end{theorem}
Interpreting the collocation problem as a discretization method with discretization parameter $n \coloneqq \Delta t^{-1}$, the theorem shows that the discrete approximation $U$ defined by the collocation problem converges with order $M+1$ to the solution $\bar{U}$ of the corresponding IVP.
Since the system of equations \eqref{equ:coll_prob} defining the collocation problem is naturally dense as the matrix $Q$ gathering the quadrature weights is fully populated, a direct solution is not advisable, in particular if the right-hand side of the ODE is non-linear. An iterative method to solve the problem is SDC.
The standard Picard iteration for the collocation problem \eqref{equ:coll_prob} is given by
\begin{align}\begin{split}
\label{equ:richardson}
U^{(k+1)} &= U^{(k)} + (U_0 - C(U^{(k)}))\\
&= U_0 + \Delta t (Q \otimes I_N)F(U^{(k)}).
\end{split}\end{align}
As this method only converges for very small step sizes $\Delta t$, using a preconditioner to increase range and speed of convergence is reasonable. The SDC-type preconditioners are defined by
\begin{align*}
P(U) = (I_{MN} - \Delta t (Q_\Delta \otimes I_N) F) (U),
\end{align*}
where the matrix $Q_\Delta = (q_{\Delta_{m,j}})_{1 \leq m,j \leq M} \approx Q$ is formed by the use of a simpler quadrature rule. In particular, $Q_\Delta$ is typically a lower triangular matrix, such that solving the system can be easily done by forward substitution.
Common choices for $Q_\Delta$ include the matrix
\begin{align*}
Q_\Delta = \frac{1}{\Delta t}
\begin{pmatrix}
\Delta \tau_1 \\
\Delta \tau_1 & \Delta \tau_2 \\
\vdots & \vdots & \ddots \\
\Delta \tau_1 & \Delta \tau_2 & \hdots & \Delta \tau_M
\end{pmatrix}
\end{align*}
with $\Delta \tau_m = \tau_m - \tau_{m-1}$ for $m=2,...,M$ and $\Delta \tau_1 = \tau_1 - t_0$, representing the right-sided rectangle rule. Alternatively, the left-sided rectangle rule \cite{sdc-expl-euler} or a part of the $LU$ decomposition of the matrix $Q$ \cite{sdc-lu} can be chosen. The theoretical considerations in the next chapters do not rely on a specific matrix $Q_\Delta$; in the numerical examples, however, the right-sided rectangle rule as given above is used.
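For concreteness, a small sketch (node values chosen purely for illustration) assembling the right-sided rectangle-rule matrix $Q_\Delta$ on the unit interval ($\Delta t = 1$), where row $m$ collects the substep widths $\Delta\tau_1, \dots, \Delta\tau_m$:

```python
def q_delta_right_rectangle(t0, nodes):
    """Lower-triangular Q_Delta of the right-sided rectangle rule: row m
    accumulates the substep widths Delta tau_1, ..., Delta tau_m."""
    widths = [nodes[0] - t0] + [nodes[m] - nodes[m - 1] for m in range(1, len(nodes))]
    return [[widths[j] if j <= m else 0.0 for j in range(len(nodes))]
            for m in range(len(nodes))]

# Example nodes on the unit interval (Delta t = 1), chosen for illustration.
Q_delta = q_delta_right_rectangle(0.0, [1.0 / 3.0, 1.0])
```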
By the use of such an operator to precondition the Picard iteration (\ref{equ:richardson}), the following iterative method for solving the collocation problem is obtained
\begin{align}\begin{split}
\label{equ:sdc}
(I_{MN} - \Delta t (Q_\Delta \otimes I_N)F) (U^{(k+1)})
= U_0 + \Delta t ((Q-Q_\Delta) \otimes I_N)F(U^{(k)}),
\end{split}\end{align}
which constitutes the SDC iteration \cite{sdc-orig,huang}.
Written down line-by-line, this formulation recovers the original SDC notation given in~\cite{sdc-orig}.
A more implicit formulation is given by
\begin{align}\begin{split}
\label{equ:sdc_implicit}
U^{(k+1)} = U_0 + \Delta t (Q_\Delta \otimes I_N)F(U^{(k+1)}) + \Delta t ((Q-Q_\Delta) \otimes I_N)F(U^{(k)})
\end{split}\end{align}
and this will be used for the following convergence considerations.
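A minimal numerical sketch of this iteration for the scalar Dahlquist problem $u' = \lambda u$ (the hard-coded 2-node Radau IIA tableau with nodes $1/3$ and $1$ and all parameter values are illustrative assumptions, not taken from the paper): since $Q_\Delta$ is lower triangular, each sweep reduces to a forward substitution, and the iterates approach the solution of the collocation problem.

```python
# Illustrative 2-node Radau IIA tableau and right-sided rectangle rule.
lam, dt, u0 = -1.0, 0.1, 1.0
Q = [[5.0 / 12.0, -1.0 / 12.0], [3.0 / 4.0, 1.0 / 4.0]]
Qd = [[1.0 / 3.0, 0.0], [1.0 / 3.0, 2.0 / 3.0]]

def sdc_sweep(u_prev):
    """One sweep: solve (I - dt*lam*Qd) u_new = u0 + dt*lam*(Q - Qd) u_prev
    by forward substitution (Qd is lower triangular)."""
    u_new = [0.0, 0.0]
    for m in range(2):
        rhs = u0 + dt * lam * sum((Q[m][j] - Qd[m][j]) * u_prev[j] for j in range(2))
        rhs += dt * lam * sum(Qd[m][j] * u_new[j] for j in range(m))
        u_new[m] = rhs / (1.0 - dt * lam * Qd[m][m])
    return u_new

# Reference: solve the 2x2 collocation system (I - dt*lam*Q) U = U_0 directly.
a11, a12 = 1.0 - dt * lam * Q[0][0], -dt * lam * Q[0][1]
a21, a22 = -dt * lam * Q[1][0], 1.0 - dt * lam * Q[1][1]
det = a11 * a22 - a12 * a21
U_coll = [(a22 - a12) * u0 / det, (a11 - a21) * u0 / det]

U, errors = [u0, u0], []
for _ in range(6):
    U = sdc_sweep(U)
    errors.append(max(abs(U[m] - U_coll[m]) for m in range(2)))
```

With these values the error with respect to the collocation solution contracts by roughly a factor $\mathcal{O}(\Delta t)$ per sweep, in line with the convergence theory below.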
\subsection{Convergence of SDC}
There already exist several approaches proving the convergence of SDC, particularly those presented in \cite{causley,hagstrom,hansen,huang,xia,tang}. Here, we will focus on the idea of the proof from \cite{tang} as it uses the previously introduced matrix formulation of SDC, needed for an appropriate adaptation for a convergence proof of MLSDC, and simultaneously, provides a general result for linear and non-linear initial value problems.
We will review the idea of this proof in some detail to introduce the notation and the key ideas.
This is followed by a concise discussion on stability and convergence of SDC in the sense of one-step ODE solvers.
The approach in \cite{tang} relies on a split of the local truncation error (LTE). The key concept used in the proof is a property of the operators $QF(U)$ and $Q_\Delta F(U)$, respectively, which can be interpreted as a kind of extended Lipschitz continuity. It is presented in the following lemma using the previously introduced notations. For reasons of readability, the sizes of the identity matrices are no longer denoted here.
\begin{lemma}
\label{lem:estimation}
If $f:\mathbb{C}^N \to \mathbb{C}^N$ is Lipschitz continuous, the following estimates apply
\begin{align*}\begin{split}
\infnorm{\Delta t (Q \otimes I) (F(U_1) - F(U_2))}
&\leq C_2 \Delta t \infnorm{U_1 - U_2},\\
\infnorm{\Delta t (Q_\Delta \otimes I) (F(U_1) - F(U_2))}
&\leq C_3 \Delta t \infnorm{U_1 - U_2},
\end{split}\end{align*}
where the constants $C_2$ and $C_3$ are dependent on the Lipschitz constant $L$, but independent of $\Delta t$ and $U_1, U_2 \in \mathbb{C}^{NM}$.
\end{lemma}
\begin{proof}
This follows directly from the definition of the maximum norm, the Lipschitz continuity of $f$, and the compatibility of the maximum absolute row sum norm for matrices with the maximum norm for vectors.
\end{proof}
\begin{remark}
For a system of ODEs stemming from a discretized PDE, the constants $C_2$ and $C_3$ may depend on the spatial resolution given by some grid spacing $\Delta x$, because the Lipschitz constant of $f$ may depend on it.
In this case we have $C_2 = C_2(\Delta x^{-d})$, $C_3 = C_3(\Delta x^{-d})$ for $d\in\mathbb{N}$.
For example, using second-order finite differences in space for the heat equation results in the ODE system $u' = Au$ with matrix $A \in \mathcal{O}(\Delta x^{-2})$, i.e. $d=2$ in this case.
This has to be kept in mind for most of the upcoming results and we will address this point separately in remarks where appropriate.
This will be particularly relevant for the convergence results in section~\ref{ssec:improved_mlsdc}, where the spatial discretization plays a key role.
Note, however, that this is a rather pessimistic estimate.
When focusing on spatial operators with more restrictive properties (e.g.~linearity) or when using a specific matrix $Q_\Delta$, the convergence results can be improved substantially, both in terms of constants and time-step size restrictions.
For SDC, this has been already done, see e.g.~\cite{sdc-lu}.
\end{remark}
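The scaling mentioned in the remark can be illustrated with a short sketch (assumptions: 1D Laplacian with homogeneous Dirichlet boundary conditions on the unit interval): the maximum absolute row sum of the standard second-order finite-difference matrix grows like $\Delta x^{-2}$, i.e.\ $d = 2$.

```python
def heat_matrix_inf_norm(n_interior):
    """Maximum absolute row sum (infinity norm) of the 1D second-order
    finite-difference Laplacian tridiag(1, -2, 1) / dx^2 on the unit interval."""
    dx = 1.0 / (n_interior + 1)
    norm = 0.0
    for i in range(n_interior):
        row = abs(-2.0) / dx ** 2
        if i > 0:
            row += 1.0 / dx ** 2
        if i < n_interior - 1:
            row += 1.0 / dx ** 2
        norm = max(norm, row)
    return norm

# Halving the grid spacing quadruples the norm, i.e. d = 2 in the remark's notation.
growth = heat_matrix_inf_norm(19) / heat_matrix_inf_norm(9)  # dx: 1/10 -> 1/20
```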
The following theorem provides a convergence statement for SDC using the presented lemma in the proof.
\begin{theorem}
\label{theo:sdc_conv}
Consider a generic initial value problem like (\ref{equ:ivp}) with a Lipschitz-continuous function $f$ on the right-hand side.
If the step size $\Delta t$ is sufficiently small, SDC converges linearly to the solution $U$ of the collocation problem with a convergence rate in $\mathcal{O}(\Delta t)$, i.e. the following estimate for the error of the $k$-th iterate $U^{(k)}$ of SDC compared to the solution of the collocation problem is valid:
\begin{align}
\label{equ:sdc_conv_iter}
\infnorm{U-U^{(k)}} &\leq C_4 \Delta t \infnorm{U - U^{(k-1)}}
\end{align}
where the constant $C_4$ is independent of $\Delta t$.
If, additionally, the solution of the initial value problem $u$ is $(M+1)$-times continuously differentiable, the LTE of SDC compared to the solution $\bar{U}$ of the ODE can be bounded by
\begin{align}\begin{split}
\label{equ:sdc_conv_lte}
\infnorm{\bar{U} - U^{(k)}} &\leq C_5 \Delta t^{k_0+k} \|u\|_{k_0+1}
+ C_6 \Delta t^{M+1} \|u\|_{M+1}\\
&= \mathcal{O}(\Delta t^{\min(k_0+k, M+1)}),
\end{split}\end{align}
where the constants $C_5$ and $C_6$ are independent of $\Delta t$, $k_0$ denotes the approximation order of the initial guess $U^{(0)}$ and $\|u\|_p$ is defined by $\infnorm{u^{(p)}}$.
\end{theorem}
\begin{proof}
We again closely follow \cite{tang} here. According to the definition of the collocation problem (\ref{equ:coll_prob}) and an SDC iteration (\ref{equ:sdc_implicit}), it follows
\begin{align*}
\infnorm{U-U^{(k)}} &= \infnorm{\Delta t (Q \otimes I) (F(U) - F(U^{(k-1)}))\\
&\quad\quad+ \Delta t (Q_\Delta \otimes I) (F(U^{(k-1)}) - F(U^{(k)}))}.
\end{align*}
Together with the triangle inequality and \fref{lem:estimation}, we obtain
\begin{align*}
\infnorm{U-U^{(k)}} &\leq C_2 \Delta t \infnorm{U - U^{(k-1)}} + C_3 \Delta t \infnorm{U^{(k-1)} - U^{(k)}}.
\end{align*}
Applying the triangle inequality again yields
\begin{align*}
\infnorm{U-U^{(k)}}
&\leq \tilde{C}_1 \Delta t \infnorm{U - U^{(k-1)}} + C_3 \Delta t \infnorm{U - U^{(k)}},
\end{align*}
where here and in the following, we use variables in the form of $\tilde{C}_i$ to denote temporary arising constants. We continue by subtracting $C_3 \Delta t \infnorm{U - U^{(k)}}$ from both sides and dividing by $1 - C_3 \Delta t$ which results in
\begin{align*}
\infnorm{U-U^{(k)}} &\leq \frac{\tilde{C}_1}{1-C_3 \Delta t} \Delta t \infnorm{U - U^{(k-1)}}.
\end{align*}
If the step size is sufficiently small, in particular
\begin{align}\label{eq:conv_cond}
C_3 \Delta t < 1,
\end{align} the following estimate is valid
\begin{align}\label{eq:conv_cond_II}
\frac{\tilde{C}_1}{1-C_3 \Delta t} \leq C_4,
\end{align}
which concludes the proof for \fref{equ:sdc_conv_iter}.
Continuing with recursive insertion, we get
\begin{align*}
\infnorm{U-U^{(k)}} \leq \tilde{C}_2 \Delta t^k \infnorm{U - U^{(0)}}.
\end{align*}
Since $U^{(0)}$ is assumed to be an approximation of $k_0$-th order, we further know that
\begin{align*}
\norm{\bar{U} - U^{(0)}} \leq \tilde{C}_3 \Delta t^{k_0} \norm{u}_{k_0+1}.
\end{align*}
This estimation together with the triangle inequality and the error estimation for the solution of the collocation problem stated in \fref{theo:coll_prob_err} yields
\begin{align}
\infnorm{U-U^{(k)}} &\leq \tilde{C}_2 \Delta t^k (\infnorm{\bar{U} - U} + \infnorm{\bar{U} - U^{(0)}}) \nonumber\\
\label{equ:proof_sdc_err_coll}
&\leq \tilde{C}_4 \Delta t^{M+k+1} \|u\|_{M+1} + C_5 \Delta t^{k_0+k} \|u\|_{k_0+1}.
\end{align}
Altogether, it follows
\begin{align*}
\infnorm{\bar{U} - U^{(k)}} &\leq \infnorm{\bar{U} - U} + \infnorm{U - U^{(k)}}\\
&\leq C_1 \Delta t^{M+1} \|u\|_{M+1} + \tilde{C}_4 \Delta t^{M+k+1} \|u\|_{M+1} + C_5 \Delta t^{k_0+k} \|u\|_{k_0+1}\\
&= (C_1 + \tilde{C}_4 \Delta t^k) \Delta t^{M+1} \|u\|_{M+1} + C_5 \Delta t^{k_0+k} \|u\|_{k_0+1}.
\end{align*}
Since the step size $\Delta t$ is assumed to be sufficiently small, i.e.\ bounded above, the following estimate is valid
\begin{align*}
C_1 + \tilde{C}_4 \Delta t^k \leq C_6,
\end{align*}
which finally concludes the proof for \fref{equ:sdc_conv_lte}.
\end{proof}
\begin{remark}\label{rem:dx_sdc}
Note that if the right-hand side of the ODE comes from a discretized PDE with a given spatial resolution, no restriction beyond the step-size condition~\eqref{eq:conv_cond} arises.
In this case, $C_3 = C_3(\Delta x^{-d}) = \tilde{C}_5\Delta x^{-d}$ for some constant $\tilde{C}_5$, so that the condition~\eqref{eq:conv_cond} becomes $\tilde{C}_5\Delta t < \Delta x^{d}$.
Since we did not specify $Q_\Delta$ here, the SDC iterations can be explicit or implicit and it is natural to obtain such a restriction on the time-step size.
Similar restrictions can be found in other convergence results for SDC, see e.g.~\cite{causley,hagstrom,huang}.
Condition~\eqref{eq:conv_cond_II} then translates to
\begin{align*}
C_4 \geq \frac{\tilde{C}_1}{1-C_3 \Delta t} = \frac{\tilde{C}_6\Delta x^{-d}}{1-\tilde{C}_5\Delta x^{-d} \Delta t} = \frac{\tilde{C}_6}{\Delta x^d-\tilde{C}_5 \Delta t}
\end{align*}
for some constant $\tilde{C}_6$.
Assuming a fixed distance between $\Delta x^d$ and $\tilde{C}_5 \Delta t$ with $\Delta x^d-\tilde{C}_5 \Delta t = \delta$, we can write $C_4 = C_4(\delta^{-1})$ to indicate the dependence of $C_4$ on this distance and not on $\Delta x^d$ or $\Delta t$ alone.
Thus, $C_4$ does not pose an additional restriction to the convergence of SDC.
Note that for $C_5$ we have $C_5 = C_5(\delta^{-(k+1)})$, so that the choice of $\delta$ can increase the constant in front of the $\Delta t^{k_0+k}$-term quite substantially, but it does not affect the $\Delta t^{M+1}$-term coming from the collocation problem itself.
\end{remark}
The theorem can be read as a convergence statement for SDC. In particular, the first estimation (\ref{equ:sdc_conv_iter}) shows that SDC, interpreted as an iterative method to solve the collocation problem, converges linearly to the solution of the collocation problem with a convergence rate of $\mathcal{O}(\Delta t)$ if $C_4\Delta t < 1$. The second part of the theorem, \fref{equ:sdc_conv_lte} shows that SDC, in the sense of a discretization method, converges with order $\min(k_0+k, M+1)$ to the solution of the initial value problem. In other words, the method gains one order per iteration, limited by the selected number of nodes used for discretization.
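The order-per-sweep behavior can also be observed numerically. In the following sketch (a hard-coded 2-node Radau IIA setup for the Dahlquist problem $u' = -u$, chosen purely for illustration), a single sweep started from the constant initial guess, which has approximation order $k_0 = 1$, should show a local order close to $k_0 + k = 2$ at the right endpoint.

```python
import math

# Hard-coded 2-node Radau IIA tableau and right-sided rectangle rule
# (illustrative assumptions, not taken from the paper).
Q = [[5.0 / 12.0, -1.0 / 12.0], [3.0 / 4.0, 1.0 / 4.0]]
Qd = [[1.0 / 3.0, 0.0], [1.0 / 3.0, 2.0 / 3.0]]

def endpoint_error_one_sweep(dt, lam=-1.0, u0=1.0):
    """Local error at the right endpoint after one sweep started from the
    constant initial guess U^(0) = (u0, u0), which has order k_0 = 1."""
    u_prev, u_new = [u0, u0], [0.0, 0.0]
    for m in range(2):
        rhs = u0 + dt * lam * sum((Q[m][j] - Qd[m][j]) * u_prev[j] for j in range(2))
        rhs += dt * lam * sum(Qd[m][j] * u_new[j] for j in range(m))
        u_new[m] = rhs / (1.0 - dt * lam * Qd[m][m])
    return abs(u_new[1] - u0 * math.exp(lam * dt))

ratio = endpoint_error_one_sweep(0.1) / endpoint_error_one_sweep(0.05)
observed_order = math.log(ratio, 2)  # expected close to k_0 + k = 2
```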
We can now immediately extend this result by looking at the right endpoint of the single time interval (which, in our case, is equal to the last collocation node). There, the convergence rate is limited not by the number of collocation nodes, but by the order of the quadrature.
\begin{corollary}
\label{cor:conv_last}
Consider a generic initial value problem like (\ref{equ:ivp}) with a Lipschitz-continuous function $f$ on the right-hand side. Furthermore, let the solution of the initial value problem $u$ be $(2M)$-times continuously differentiable.
Then, if the step size $\Delta t$ is sufficiently small, the error of the $k$-th SDC iterate, defined by \fref{equ:sdc_implicit}, at the last collocation node, $u_M^{(k)}$, compared to the exact value $u(\tau_M)$ at this point, can be bounded by
\begin{align}
\label{equ:conv_last_point}
\begin{split}
\infnorm{u(\tau_M) - u_M^{(k)}} &\leq C_7 \Delta t^{2M} \max(\|u\|_{2M}, \|u\|_{M+1})\\
&\quad+ C_8 \Delta t^{k_0+k} \max(\|u\|_{k_0+1}, \|u\|_{M+1})
\end{split}\\
&= \mathcal{O}(\Delta t^{\min(k_0+k, 2M)}),\nonumber
\end{align}
where the constants $C_7$ and $C_8$ are independent of $\Delta t$, $k_0$ denotes the approximation order of the initial guess $U^{(0)}$ and $\|u\|_p$ is defined by $\infnorm{u^{(p)}}$.
\end{corollary}
\begin{proof}
The proof mainly relies on the interpretation of the solution of the collocation problem evaluated at the last node $\tau_M$ as the result of a Radau method with $M$ stages. With this in mind, the well-known convergence, or in this case rather consistency, order of Radau methods yields the estimate \cite{hairer-wanner-2}
\begin{align*}
\infnorm{u(\tau_M) - u_M} \leq \tilde{C}_1 \Delta t^{2M} \|u\|_{2M},
\end{align*}
where $\tilde{C}_1$ is independent of $\Delta t$. Here and in the following, temporary arising constants will again be denoted by symbols like $\tilde{C}_i$. However, they are separately defined and thus, do not correspond to the ones used in previous proofs.
To use this estimation, we first have to apply the triangle inequality to the left-hand side of \fref{equ:conv_last_point}, in particular
\begin{align}
\label{equ:proof_last_point_triangle}
\infnorm{u(\tau_M) - u_M^{(k)}} &\leq \infnorm{u(\tau_M) - u_M} + \infnorm{u_M - u_M^{(k)}}.
\end{align}
Then, with the definition of the matrix $R \coloneqq (0,\dots,0,1) \otimes I_N \in \mathbb{R}^{N \times MN}$ which, multiplied with a vector, extracts its last block, the second term on the right-hand side of the above equation can be transferred to
\begin{align*}
\infnorm{u_M - u_M^{(k)}} &= \infnorm{R U - R U^{(k)}} \leq \infnorm{R} \infnorm{U - U^{(k)}} = \infnorm{U - U^{(k)}}\\
&\le \tilde{C}_2 \Delta t^{M+k+1} \norm{u}_{M+1} + C_5 \Delta t^{k_0+k} \norm{u}_{k_0+1},
\end{align*}
where the last estimate comes from \fref{equ:proof_sdc_err_coll} in the proof of \fref{theo:sdc_conv}.
Finally, by inserting all these results in \fref{equ:proof_last_point_triangle}, we obtain
\begin{align*}
\infnorm{u(\tau_M) - u_M^{(k)}} &\leq \infnorm{u(\tau_M) - u_M} + \infnorm{U - U^{(k)}}\\
&\leq \tilde{C}_1 \Delta t^{2M} \|u\|_{2M} + \tilde{C}_2 \Delta t^{M+k+1} \|u\|_{M+1} + C_5 \Delta t^{k_0+k} \|u\|_{k_0+1}.
\end{align*}
Note that the leading order of this term is essentially independent of the summand corresponding to $\Delta t^{M+k+1}$. For $\Delta t$ small enough, this result can be seen by a case analysis for $k$. For $k+1 \geq M$, the considered summand is dominated by $\Delta t^{2M} \geq \Delta t^{M+k+1}$ and thus can be disregarded in terms of leading order analysis. In the other case, i.e.\ for $k+1< M$, the considered summand will, however, be greater than the one of order $2M$. Therefore, we will instead compare it to $\Delta t^{k_0+k}$ in this case. Since the number of collocation nodes $M$ is usually chosen to be greater than the approximation order $k_0$ of the initial guess, the relation $\Delta t^{k_0+k} \geq \Delta t^{M+k+1}$ applies and hence, ${k_0+k}$ will be the leading order for $k < M$. Thus, the considered summand $\Delta t^{M+k+1}$ is again dominated by another term and can be disregarded concerning the overall asymptotic behavior. These considerations consequently lead to the following estimation
\begin{align*}
\infnorm{u(\tau_M) - u_M^{(k)}} \leq \tilde{C}_3 \Delta t^{2M} \max(\|u\|_{2M}, \|u\|_{M+1}) + C_5 \Delta t^{k_0+k} \|u\|_{k_0+1}
\end{align*}
for $ k \geq M$ and
\begin{align*}
\infnorm{u(\tau_M) - u_M^{(k)}} \leq \tilde{C}_1 \Delta t^{2M} \|u\|_{2M} + \tilde{C}_4 \Delta t^{k_0+k} \max(\|u\|_{k_0+1}, \|u\|_{M+1})
\end{align*}
for $k < M$, which can be combined to
\begin{align*}
\infnorm{u(\tau_M) - u_M^{(k)}} &\leq C_7 \Delta t^{2M} \max(\|u\|_{2M}, \|u\|_{M+1})\\
&\quad+ C_8 \Delta t^{k_0+k} \max(\|u\|_{k_0+1}, \|u\|_{M+1}),
\end{align*}
concluding the proof.
\end{proof}
With this corollary, it can be concluded that SDC, in the sense of a single-step method for solving ODEs, has consistency order $\min(k_0+k-1, 2M-1)$. To extend this result to a statement on the convergence order of the method, an additional proof of its stability is needed.
The following theorem provides an appropriate result for SDC.
\begin{theorem}
\label{theo:sdc_stab}
Consider a generic initial value problem like (\ref{equ:ivp}) with a Lipschitz-continuous function $f$ on the right-hand side.
If the step size $\Delta t$ is sufficiently small and an appropriate initial guess is used, the SDC method, defined by \fref{equ:sdc}, is stable.
\end{theorem}
\begin{proof}
As usual for single-step methods, we will prove the Lipschitz continuity of the increment function of SDC in order to prove the stability of the method.
A general single-step method is defined by the formula
\begin{align*}
u_{n+1} = u_n + \Delta t \phi(u_n),
\end{align*}
where $u_n$ denotes the approximation at the time step $t_n$ and $\phi(u_n)$ is the increment function. Our aim now is to identify the specific increment function $\phi$ corresponding to SDC and subsequently to show its Lipschitz continuity, i.e.\ to prove that $\abs{\phi(u_n) - \phi(v_n)} \leq L_\phi \abs{u_n - v_n}$ holds.
First, note that the $k$-th SDC iterate $u_m^{(k)}$ at an arbitrary collocation node $\tau_m$ ($1 \leq m \leq M$) can be written as
\begin{align*}
u_m^{(k)} &= u_n + r_m^{(k)} \mbox{ with}\\
r_m^{(k)} &\coloneqq \Delta t (Q_\Delta F(U_n+r^{(k)}))_m + \Delta t ((Q - Q_{\Delta}) F(U_n + r^{(k-1)}))_m,\\
r^{(k)} &\coloneqq (r_1^{(k)}, \dots, r_M^{(k)})^T \mbox{ and } U_n = (u_n, \dots, u_n)^T
\end{align*}
for $k\ge1$ according to a line-wise consideration of \fref{equ:sdc}, where the subscript $m$ denotes the $m$th line of the vectors. Consequently, the corresponding approximation at the time step $t_{n+1} = \tau_M$ can be written as
\begin{gather*}
u_{n+1} = u_M^{(k)} = u_n + r_M^{(k)} \eqqcolon u_n + \Delta t \phi^{(k)}(u_n)\\
\mbox{with } \phi^{(k)}(u_n) = \frac{1}{\Delta t} r_M^{(k)}.
\end{gather*}
Hence, we have found an appropriate, albeit implicit definition for the increment function $\phi^{(k)}$ of SDC.
As a second step, we investigate the Lipschitz continuity of this function. We start by noting that
\begin{align}
\label{equ:rel_phi_r}
\abs{\phi^{(k)}(u_n) - \phi^{(k)}(v_n)} &= \frac{1}{\Delta t} \abs{r_M^{(k)} - s_M^{(k)}} \leq \frac{1}{\Delta t} \infnorm{r^{(k)} - s^{(k)}},
\end{align}
where the $r$-terms belong to $u_n$ and the $s$-terms to $v_n$. Now, we will further analyze the term $\norm{r^{(k)} - s^{(k)}}$. With the insertion of the corresponding definitions and an application of the triangle inequality, it follows
\begin{align*}
\norm{r^{(k)} - s^{(k)}} &\leq \norm{\Delta t Q_\Delta (F(U_n + r^{(k)}) - F(V_n + s^{(k)}))}\\
&\quad+ \norm{\Delta t (Q - Q_\Delta) (F(U_n + r^{(k-1)}) - F(V_n + s^{(k-1)}))}.
\end{align*}
The use of \fref{lem:estimation} and a reapplication of the triangle inequality further yield
\begin{align*}
\norm{r^{(k)} - s^{(k)}} &\leq \tilde{C}_1 \Delta t(\abs{u_n - v_n} + \norm{r^{(k)} - s^{(k)}} + \norm{r^{(k-1)} - s^{(k-1)}}).
\end{align*}
Continuing with the same trick as in the proof of \fref{theo:sdc_conv}, namely a subtraction of $\tilde{C}_1 \Delta t \norm{r^{(k)} - s^{(k)}}$ and a subsequent division by $1- \tilde{C}_1 \Delta t$, we get
\begin{align*}
\norm{r^{(k)} - s^{(k)}} &\leq \frac{\tilde{C}_1}{1- \tilde{C}_1 \Delta t} \Delta t (\abs{u_n - v_n} + \norm{r^{(k-1)} - s^{(k-1)}}).
\end{align*}
If the step size $\Delta t$ is sufficiently small, the following estimation applies
\begin{align*}
\frac{\tilde{C}_1}{1- \tilde{C}_1 \Delta t} &\leq \tilde{C}_2
\end{align*}
and a subsequent iterative insertion further yields
\begin{align*}
\norm{r^{(k)} - s^{(k)}} \leq \tilde{C}_3 \sum_{l = 1}^{k} \Delta t^l \abs{u_n-v_n} + \tilde{C}_4 \Delta t^{k} \norm{r^{(0)} - s^{(0)}}.
\end{align*}
With the insertion of this result in \fref{equ:rel_phi_r} above, it finally follows
\begin{align*}
\abs{\phi^{(k)}(u_n) - \phi^{(k)}(v_n)} \leq C \sum_{l = 0}^{k-1} \Delta t^l \abs{u_n-v_n} + C \Delta t^{k-1} \norm{r^{(0)} - s^{(0)}}.
\end{align*}
The value of $\norm{r^{(0)} - s^{(0)}}$ depends on the initial guess for the SDC iterations. If, for example, the value at the last time step is used as the initial guess for all collocation nodes, i.e.\ $U^{(0)} = U_n$ and $V^{(0)} = V_n$, we get $\norm{r^{(0)} - s^{(0)}} = 0$. If, by contrast, the initial guess is chosen to be zero, it follows that $\norm{r^{(0)} - s^{(0)}} = \abs{u_n - v_n}$. Both variants, however, guarantee that $\abs{\phi^{(k)}(u_n) - \phi^{(k)}(v_n)} \leq C \abs{u_n - v_n}$, which was to be shown.
\end{proof}
\begin{remark}
Note that the assumed upper bound for the step size $\Delta t$ in the previous theorem is the same as the one in \fref{cor:conv_last}, describing the consistency of SDC. Hence, there is no additional restriction for the convergence of the method.
\end{remark}
Together with the last theorem, \fref{cor:conv_last} can be extended towards a convergence theorem for SDC regarded in the context of single-step methods to solve ODEs. Specifically, the proven stability of the method allows a direct transfer of the order of consistency to the order of convergence. Consequently, it follows that SDC, in the sense of a single-step method, converges with order $\min(k_0+k-1, 2M-1)$.
In the next chapter, MLSDC, a multi-level extension of SDC, is described. It is motivated by the assumption that additional iterations on a coarser level may increase the order of accuracy while keeping the costs rather low. We will investigate whether this assumption holds true, i.e.\ whether the convergence order is indeed increased by the additional execution of relatively low-cost iterations on the coarse level.
% Source metadata: "Convergence analysis of multi-level spectral deferred corrections", arXiv:2002.07555, Numerical Analysis (math.NA).
% Source: https://arxiv.org/abs/1805.00343
\title{Teaching Differentiation: A Rare Case for the Problem of the Slope of the Tangent Line}
\begin{abstract}
In this article we discuss an important student misconception about derivatives: that the expression for the derivative of a function contains the information as to whether the function is differentiable at the points where the expression is undefined. As a working example we consider a typical Calculus problem of finding the horizontal tangent lines of a function. Following the standard procedure, we derive the expression for the derivative using the Product Rule. The search for the values of the independent variable that make the derivative equal to zero leads to missing the unique solution of the problem. We show that in this case, even though the expression for the derivative is undefined, the function indeed possesses a derivative at the point. We also provide a methodological treatment of such functions, which can be used effectively in the classroom.
\end{abstract}
\section{Introduction}
The derivative is one of the most important topics not only in mathematics, but also in physics, chemistry, economics and engineering. Every standard Calculus course provides a variety of exercises for the students to learn how to apply the concept of the derivative. The types of problems range from finding an equation of the tangent line to the application of differentials and advanced curve sketching. Usually, these exercises rely heavily on such differentiation techniques as the Product, Quotient and Chain Rules and Implicit and Logarithmic Differentiation \cite{Stewart2012}. The definition of the derivative is hardly ever applied after the first few classes and its use is not well motivated.
Like many other topics in undergraduate mathematics, the derivative has given rise to many misconceptions \cite{Muzangwa2012}, \cite{Gur2007}, \cite{Li2006}. Just when the students seem to have learned how to use the differentiation rules for the most essential functions, the application of the derivative brings new issues. A common student error of determining the domain of the derivative from its formula is discussed in \cite{Rivera2013}, along with some interesting examples of derivatives defined at points where the functions themselves are undefined. However, the hunt for misconceptions takes another twist for derivatives that are undefined at points where the functions are in fact defined.
The expression for the derivative of a function obtained using differentiation techniques does not necessarily contain information about the existence or the value of the derivative at the points where that expression is undefined. In this article we discuss a type of continuous function that has an undefined expression for the derivative at a certain point, while the derivative itself exists there. We show how relying on the formula for the derivative when finding the horizontal tangent lines of a function leads to a false conclusion and, consequently, to a missed solution. We also provide a simple methodological treatment of similar functions suitable for the classroom.
\section{Calculating the Derivative}
In order to illustrate how deceitful the expression of the derivative can be to a students' eye, let us consider the following problem.
\vspace{12pt}
\fbox{\begin{minipage}{5.25in}
\begin{center}
\begin{minipage}{5.0in}
\vspace{10pt}
\emph{Problem}
\vspace{10pt}
Differentiate the function $f\left(x\right)=\sqrt[3]{x}\sin{\left(x^2\right)}$. For which values of $x$ from the interval $\left[-1,1\right]$ does the graph of $f\left(x\right)$ have a horizontal tangent?
\vspace{10pt}
\end{minipage}
\end{center}
\end{minipage}}
\vspace{12pt}
Problems with similar formulations can be found in many Calculus books \cite{Stewart2012}, \cite{Larson2010}, \cite{Thomas2009}. Following the common procedure, let us find the expression for the derivative of the function $f\left(x\right)$ applying the Product Rule:
\begin{eqnarray}
f'\left(x\right) &=& \left(\sqrt[3]{x}\right)'\sin{\left(x^2\right)}+\left(\sin{\left(x^2\right)}\right)'\sqrt[3]{x} \notag \\ &=& \frac{1}{3\sqrt[3]{x^2}}\sin{\left(x^2\right)}+2x\cos{\left(x^2\right)}\sqrt[3]{x} \notag \\ &=& \frac{6x^2\cos{\left(x^2\right)}+\sin{\left(x^2\right)}}{3\sqrt[3]{x^2}} \label{DerivativeExpression}
\end{eqnarray}
Similar to \cite{Stewart2012}, we find the values of $x$ where the derivative $f'\left(x\right)$ is equal to zero:
\begin{equation}
6x^2\cos{\left(x^2\right)}+\sin{\left(x^2\right)} = 0
\label{DerivativeEqualZero}
\end{equation}
The expression for the derivative (\ref{DerivativeExpression}) is not defined at $x=0$; for all other values of $x$ in $\left[-1,1\right]$, the left-hand side of (\ref{DerivativeEqualZero}) is positive, since $x^2 \in (0,1]$ implies that both $\sin{\left(x^2\right)}$ and $\cos{\left(x^2\right)}$ are positive. Hence, we conclude that the function $f\left(x\right)$ does not have horizontal tangent lines on the interval $\left[-1,1\right]$.
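This positivity claim is easy to confirm numerically. The following quick check (not part of the original problem) samples the numerator $6x^2\cos{\left(x^2\right)}+\sin{\left(x^2\right)}$ of the derivative expression at nonzero points of $\left[-1,1\right]$:

```python
import math

# Sample the numerator of the derivative expression at nonzero points of [-1, 1];
# it stays strictly positive, so equation (2) has no solution there.
samples = [k / 1000.0 for k in range(-1000, 1001) if k != 0]
numerator_values = [6 * x ** 2 * math.cos(x ** 2) + math.sin(x ** 2) for x in samples]
```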
However, a closer look at the graph of the function $f\left(x\right)$ seems to point at a different result: there is a horizontal tangent at $x=0$ (see Figure \ref{fig:FunctionGraph}).
First, note that the function $f\left(x\right)$ is defined at $x=0$. In order to verify whether it has a horizontal tangent at this point, let us find the derivative of the function $f\left(x\right)$ at $x=0$ using the definition:
\begin{eqnarray}
f'\left(0\right) &=& \lim_{h\rightarrow0}{\frac{f\left(0+h\right)-f\left(0\right)}{h}} \notag \\
&=& \lim_{h\rightarrow0}{\frac{\sqrt[3]{h}\sin{\left(h^2\right)}}{h}} \notag \\
&=& \lim_{h\rightarrow0}{\left(\sqrt[3]{h} \cdot {h} \cdot \frac{\sin{\left(h^2\right)}}{h^2}\right)} \notag \\
&=& \lim_{h\rightarrow0}{\sqrt[3]{h}} \cdot \lim_{h\rightarrow0}{h} \cdot \lim_{h\rightarrow0}{\frac{\sin{\left(h^2\right)}}{h^2}} \notag \\
&=& 0 \cdot 0 \cdot 1 = 0 \notag
\end{eqnarray}
since each of the limits above exists. We see that, indeed, the function $f\left(x\right)$ possesses a horizontal tangent line at the point $x=0$.
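This limit can also be observed numerically. The following sketch (our own illustration, not part of the original argument) evaluates the difference quotient of $f$ at $x=0$ for shrinking values of $h$; the quotient behaves like $h^{4/3}$ and vanishes from either side:

```python
import math

def f(x):
    # f(x) = cbrt(x) * sin(x^2); copysign gives the real cube root for x < 0
    return math.copysign(abs(x) ** (1 / 3), x) * math.sin(x * x)

def diff_quotient(func, x0, h):
    """The quotient (func(x0 + h) - func(x0)) / h from the definition of the derivative."""
    return (func(x0 + h) - func(x0)) / h

# The quotient approaches 0 as h -> 0 from either side,
# matching f'(0) = 0 obtained above from the definition.
for h in (1e-1, 1e-3, 1e-5, -1e-5):
    print(f"h = {h:>8}: quotient = {diff_quotient(f, 0.0, h):.3e}")
```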
\section{Closer Look at the Expression for the Derivative}
What is the problem with the standard procedure proposed by many textbooks and repeated in every Calculus class? The explanation lies in the following premise: the expression of the derivative of the function does not contain the information as to whether the function is differentiable or not at the points where it is undefined. As it is pointed out in \cite{Rivera2013}, the domain of the derivative is determined \emph{a priori} and therefore should not be obtained from the formula of the derivative itself.
In the example above, the Product Rule for derivatives requires the existence of the derivatives of both factors at the point of interest. Since the function $\sqrt[3]{x}$ is not differentiable at zero, the Product Rule cannot be applied there.
In order to see what exactly happens when we apply the Product Rule, let us derive the expression for the derivative using the definition of the derivative:
\begin{eqnarray}
f'\left(x\right) &=& \lim_{h\rightarrow0}{\frac{f\left(x+h\right)-f\left(x\right)}{h}} \notag \\
&=& \lim_{h\rightarrow0}{\frac{\sqrt[3]{x+h}\sin{\left(x+h\right)^2}-\sqrt[3]{x}\sin{\left(x^2\right)}}{h}} \notag \\
&=& \lim_{h\rightarrow0}{\frac{\left(\sqrt[3]{x+h}-\sqrt[3]{x}\right)}{h}\sin{\left(x^2\right)}} + \notag \\
&& \lim_{h\rightarrow0}{\frac{\left(\sin{\left(x+h\right)^2}-\sin{\left(x^2\right)}\right)}{h}\sqrt[3]{x+h}} \notag \\
&=& \lim_{h\rightarrow0}{\frac{\sqrt[3]{x+h}-\sqrt[3]{x}}{h}} \cdot \lim_{h\rightarrow0}{\sin{\left(x^2\right)}} + \notag \\&& \lim_{h\rightarrow0}{\frac{\sin{\left(x+h\right)^2}-\sin{\left(x^2\right)}}{h}} \cdot \lim_{h\rightarrow0}{\sqrt[3]{x+h}} \notag \\
&=& \frac{1}{3\sqrt[3]{x^2}} \cdot \sin{\left(x^2\right)}+2x\cos{\left(x^2\right)} \cdot \sqrt[3]{x} \notag
\end{eqnarray}
which seems to be identical to the expression (\ref{DerivativeExpression}).
Students are expected to develop the skill of deriving similar results and to know how to find the derivative of a function using only the definition of the derivative. But how `legal' are the performed operations?
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.0in]{sin.pdf}
\vspace{.1in}
\caption{Graph of the function $f\left(x\right)=\sqrt[3]{x}\sin{\left(x^2\right)}$}
\label{fig:FunctionGraph}
\end{center}
\end{figure}
Let us consider each of the following limits:
\begin{eqnarray*}
&& \lim_{h\rightarrow0}{\frac{\sqrt[3]{x+h}-\sqrt[3]{x}}{h}} \notag \\
&& \lim_{h\rightarrow0}{\sin{\left(x^2\right)}}\notag \\
&& \lim_{h\rightarrow0}{\frac{\sin{\left(x+h\right)^2}-\sin{\left(x^2\right)}}{h}}\notag \\
&& \lim_{h\rightarrow0}{\sqrt[3]{x+h}}.
\end{eqnarray*}
The last three limits exist for all real values of the variable $x$. However, the first limit does not exist when $x=0$. Indeed
\begin{equation*}
\lim_{h\rightarrow0}{\frac{\sqrt[3]{0+h}-\sqrt[3]{0}}{h}} = \lim_{h\rightarrow0}{\frac{1}{\sqrt[3]{h^2}}} = + \infty
\end{equation*}
This implies that the Product and Sum Laws for limits cannot be applied, and therefore this step is not justified in the case $x=0$. When the derivation is performed, we implicitly assume the conditions under which the Product Law for limits can be applied, i.e. that both limits being multiplied exist. It is not hard to see that in our case these conditions are equivalent to $x\neq0$. This is precisely why the expression for the derivative (\ref{DerivativeExpression}) already carried the hidden assumption that it is valid only for values of $x$ different from zero.
Note that in the case $x=0$ the application of the Product and Sum Laws for limits is not necessary, since the term $\left(\sqrt[3]{x+h}-\sqrt[3]{x}\right)\sin{\left(x^2\right)}$ vanishes.
The correct expression for the derivative of the function $f\left(x\right)$ should be the following:
\begin{equation*}
f'\left(x\right) =
\begin{cases}
\frac{6x^2\cos{\left(x^2\right)}+\sin{\left(x^2\right)}}{3\sqrt[3]{x^2}}, & \mbox{if } x \neq 0 \\
0, & \mbox{if } x = 0
\end{cases}
\end{equation*}
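As a quick sanity check (our own illustration, not from the text), the piecewise formula above can be compared against a symmetric difference quotient at a few sample points:

```python
import math

def cbrt(x):
    # real cube root for any sign of x
    return math.copysign(abs(x) ** (1 / 3), x)

def f(x):
    return cbrt(x) * math.sin(x * x)

def f_prime(x):
    """The piecewise derivative: the closed-form expression for x != 0, and 0 at x = 0."""
    if x == 0:
        return 0.0
    return (6 * x * x * math.cos(x * x) + math.sin(x * x)) / (3 * cbrt(x) ** 2)

def central_diff(func, x, h=1e-6):
    return (func(x + h) - func(x - h)) / (2 * h)

# Away from zero the closed-form expression agrees with the numeric derivative;
# at zero only the piecewise definition supplies the correct value.
for x in (0.5, -0.8, 1.0):
    assert abs(f_prime(x) - central_diff(f, x)) < 1e-4
print("agreement away from 0; f'(0) =", f_prime(0.0))
```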
The expression for the derivative of a function provides the correct value of the derivative only for those values of the independent variable for which the expression is defined; it tells us nothing about the existence or the value of the derivative where the expression is undefined. Indeed, let us consider the function
\begin{equation*}
g\left(x\right) = {\sqrt[3]{x}}\cos{\left(x^2\right)}
\end{equation*}
and its derivative $g'\left(x\right)$
\begin{equation*}
g'\left(x\right) = \frac{\cos{\left(x^2\right)}-6x^2\sin{\left(x^2\right)}}{3\sqrt[3]{x^2}}
\end{equation*}
Similarly to the previous example, the expression for the derivative is undefined at $x=0$. Nonetheless, it can be shown that $g\left(x\right)$ is not differentiable at $x=0$ (see Figure \ref{fig:GFunction}): its difference quotient at zero equals $\cos{\left(h^2\right)}/\sqrt[3]{h^2}$, which grows without bound as $h\rightarrow0$. We have thus provided two visually similar functions: both have the expression for the derivative undefined at zero, yet one of them possesses a derivative there while the other does not.
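A numerical check (our own illustration) confirms that the difference quotient of $g$ at zero, $(g(h)-g(0))/h = \cos{\left(h^2\right)}/\sqrt[3]{h^2}$, grows without bound as $h\rightarrow0$:

```python
import math

def g(x):
    # g(x) = cbrt(x) * cos(x^2)
    return math.copysign(abs(x) ** (1 / 3), x) * math.cos(x * x)

def quotient(h):
    """Difference quotient of g at x0 = 0; equals cos(h^2) / cbrt(h^2)."""
    return (g(h) - g(0.0)) / h

# The quotient diverges to +infinity from both sides, so g'(0) does not exist.
for h in (1e-2, 1e-4, 1e-6):
    print(f"h = {h}: quotient = {quotient(h):.1f}")
```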
\section{Methodological Remarks}
Unfortunately, many functions similar to the ones discussed above exist, and they can arise in a variety of typical Calculus problems: finding the points where the tangent line is horizontal, finding an equation of the tangent or normal line to a curve at a given point, the use of differentials, and curve sketching. Relying only on the expression of the derivative to determine its value at the points where it is undefined may lead to missing a solution (as in the example discussed above) or to completely false interpretations (as in the case of curve sketching).
As discussed above, the expression for the derivative provides no information about the existence or the value of the derivative where the expression itself is undefined. Here we present a methodology for the analysis of functions of this type.
Let $f\left(x\right)$ be the function of interest and let $f'\left(x\right)$ be the expression of its derivative, undefined at some point $x_{0}$. In order to find out whether $f\left(x\right)$ is differentiable at $x_{0}$, we suggest following these steps:
\begin{enumerate}
\item Check whether the function $f\left(x\right)$ itself is defined at the point $x_{0}$. If $f\left(x\right)$ is undefined at $x_{0}$, then it is not differentiable at $x_{0}$. If $f\left(x\right)$ is defined at $x_{0}$, then proceed to the next step.
\item Identify the basic functions used in the formula of $f\left(x\right)$ that are themselves defined at the point $x_{0}$ but whose derivatives are not (such as, for example, the root functions).
\item Find the derivative of the function $f\left(x\right)$ at the point $x_{0}$ using the definition.
\end{enumerate}
The importance of the first step comes from the fact that most students tend to pay little attention to analyzing a function's domain when asked to investigate its derivative. Formally, the second step can be skipped; however, it gives students insight into which part of the function causes the problem and teaches them to identify similar cases in the future. The difficulty of accomplishing the third step depends on the form of the function and can sometimes be tedious. Nevertheless, it allows students to apply previously acquired skills and encourages a review of the material.
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.0in]{cos.pdf}
\vspace{.1in}
\caption{Graph of the function $g\left(x\right)=\sqrt[3]{x}\cos{\left(x^2\right)}$}
\label{fig:GFunction}
\end{center}
\end{figure}
\section{Conclusion}
We discussed the misconception that the expression for the derivative of a function contains information about whether the function is differentiable at the points where the expression is undefined. As an example, we considered a typical Calculus problem of looking for the horizontal tangent lines of a function. We showed how the search for the values that make the expression of the derivative equal to zero leads to missing a solution: even though the expression of the derivative is undefined at the point, the function still possesses a derivative there. We also provided an example of a function whose expression for the derivative is similarly undefined, but which is not differentiable at the point. Finally, we presented a methodological treatment of such functions based on the definition of the derivative, which can be used in the classroom.
| {
"timestamp": "2018-05-02T02:09:26",
"yymm": "1805",
"arxiv_id": "1805.00343",
"language": "en",
"url": "https://arxiv.org/abs/1805.00343",
"abstract": "In this article we discuss an important students' misconception about derivatives, that the expression of the derivative of the function contains the information as to whether the function is differentiable or not where the expression is undefined. As a working example we consider a typical Calculus problem of finding the horizontal tangent lines of a function. Following the standard procedure, we derive the expression for the derivative using Product Rule. The search for the values of the independent variable, that make the derivative equal zero, leads to missing the unique solution of the problem. We show that in this case, even though the expression of the derivative is undefined, the function indeed possesses the derivative at the point. We also provide the methodological treatment of such functions, which can be effectively used in the classroom.",
"subjects": "History and Overview (math.HO)",
"title": "Teaching Differentiation: A Rare Case for the Problem of the Slope of the Tangent Line"
} |
https://arxiv.org/abs/1809.04072 | The 21 Card Trick and its Generalization | The 21 card trick is well known. It was recently shown in an episode of the popular YouTube channel Numberphile. In that trick, the audience is asked to remember a card, and through a series of steps, the magician is able to find the card. In this article, we look into the mathematics behind the trick, and look at a complete generalization of the trick. We show that this trick can be performed with any number of cards. | \section{Introduction to the 21 card trick}
The \textbf{21 card trick} (21CT) is a very popular card trick. It was also recently shown in an episode of the popular YouTube channel \href{https://www.youtube.com/watch?v=d7dg7gVDWyg}{Numberphile}. We first explain how this trick is performed in a series of steps. For the purpose of this demonstration, we will assume that a magician (henceforth, Magi), is showing this trick to his friend and audience (henceforth, Audy).
\begin{enumerate}
\item Audy randomly chooses 21 cards from a deck of cards. He remembers one card from that set, shuffles the set of 21 cards and hands it back to Magi.
\item Magi requests Audy to pay attention as he puts the cards face up one at a time adjacent to each other creating 3 stacks of 7 cards each in the process. Magi asks Audy to tell him which stack contains his card.
\item Magi then puts the stack that contained Audy's card between the two other stacks, and then repeats Step (2) two more times.
\item Magi now puts the cards on the table one at a time, and stops at the 11th card, which turns out to be Audy's card.
\end{enumerate}
Note that this trick can be made more ``magical" with some extra activities, which we will not discuss as they don't contribute anything to the main problem. Our current goal is to find out why the 11th card on the deck happened to be Audy's card. We will explore the mathematics behind this after we look at a detailed example of the trick.
Once Audy hands the shuffled deck of cards to Magi, we know that Audy's card could be any one of the 21 cards in that deck. Let's number the cards from top to bottom of the deck by $1, 2, 3, \ldots, 21$. We shall use the term \textbf{\emph{deck id}} to denote the position of Audy's card in the deck. Similarly, we will use the term \textbf{\emph{stack id}} to denote the position of Audy's card in a stack from the top.
\begin{framed}
\begin{definition}
An \textbf{iteration} is considered to be the process of splitting the cards of a deck into stacks.
\end{definition}
\begin{definition}
The \textbf{deck id} of Audy's card after $k(\geq 1)$ iterations of splitting into stacks, denoted by $d_k$, is the position of his card from the top in the full deck of cards.
\end{definition}
\begin{definition}
The \textbf{stack id} of Audy's card after $k( \geq 1)$ iterations of splitting into stacks, denoted by $s_k$, is the position of his card from the top, in its individual stack.
\end{definition}
\end{framed}
\noindent
Note that the initial deck id of Audy's card before any iterations happen is denoted by $d_0$. We now show a detailed example of the 21CT in action.
\begin{enumerate}
\item Audy selects a random collection of 21 cards. The cards are:
\smallskip
\noindent
\psset{inline=boxed}
\fourc, \tenc, \tred, \tenh, \tend, \eigd, \sixc, \tres, \tens, \twoc, \fives, \sixh, \twod, \eigc, \Ac, \Kd, \ninec, \fours, \nineh, \Kh, \Jd
\smallskip
\noindent
He remembers a card from this set. Suppose it's \fours. He then shuffles this deck again and hands it over to Magi. The shuffled deck looks like this:
\smallskip
\noindent
\eigd, \tend, \nineh, \twoc, \tred, \fives, \fourc, \sixc, \sixh, \twod, \tenc, \tenh, \Kd, \Jd, \ninec, \eigc, \tres, \Ac, \tens, \fours, \Kh \, \, ($d_0 = 20$)
\enspace
\noindent
Note that an iteration hasn't happened yet. Thus, $s_0$ is not defined, and $d_0 = 20$ (as \fours \, is the 20th card in the deck at the moment).
\smallskip
\item Now Magi does the first iteration by putting these cards face up one at a time adjacent to each other creating 3 stacks. The stacks then look like this:
\smallskip
\noindent
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
Stack 1 & Stack 2 & Stack 3 \\
\hline
\eigd & \tend & \nineh \\
\hline
\twoc & \tred & \fives \\
\hline
\fourc & \sixc & \sixh \\
\hline
\twod & \tenc & \tenh \\
\hline
\Kd & \Jd & \ninec \\
\hline
\eigc & \tres & \Ac \\
\hline
\tens & \fours & \Kh \\
\hline
\end{tabular}
\caption{Stacks after Iteration 1 ($s_1 = 7$ as \protect\psset{inline=boxed} \protect\fours \, is the 7th card in the second stack)}
\end{table}
\smallskip
\noindent
He then asks Audy which stack contains his card. Audy replies by saying that his card is in Stack 2 (\fours \, is in Stack 2).
\item Magi then puts Stack 2 in between Stack 1 and Stack 3. The set of 21 cards now look as follows from top to bottom:
\smallskip
\noindent
\eigd, \twoc, \fourc, \twod, \Kd, \eigc, \tens, \tend, \tred, \sixc, \tenc, \Jd, \tres, \fours, \nineh, \fives, \sixh, \tenh, \ninec, \Ac, \Kh \, \, ($d_1 = 14$)
\smallskip
\noindent
Magi now repeats Step (2) two more times. The stacks after iteration 2 looks like this:
\smallskip
\noindent
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
Stack 1 & Stack 2 & Stack 3 \\
\hline
\eigd & \twoc & \fourc \\
\hline
\twod & \Kd & \eigc \\
\hline
\tens & \tend & \tred \\
\hline
\sixc & \tenc & \Jd \\
\hline
\tres & \fours & \nineh \\
\hline
\fives & \sixh & \tenh \\
\hline
\ninec & \Ac & \Kh \\
\hline
\end{tabular}
\caption{Stacks after Iteration 2 ($s_2 = 5$)}
\end{table}
\noindent
\psset{inline=boxed}
He then again asks Audy which stack contains his card. Audy replies by saying that his card is in Stack 2 again (\fours \, is in Stack 2). Magi again puts Stack 2 in between Stack 1 and Stack 3. The set of 21 cards now look as follows from top to bottom:
\smallskip
\noindent
\eigd, \twod, \tens, \sixc, \tres, \fives, \ninec, \twoc, \Kd, \tend, \tenc, \fours, \sixh, \Ac, \fourc, \eigc, \tred, \Jd, \nineh, \tenh, \Kh \, \, ($d_2 = 12$)
\smallskip
\noindent
Magi again repeats the process of splitting the cards into 3 stacks for the third and final time. The stacks after iteration 3 look like this:
\smallskip
\noindent
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
Stack 1 & Stack 2 & Stack 3 \\
\hline
\eigd & \twod & \tens \\
\hline
\sixc & \tres & \fives \\
\hline
\ninec & \twoc & \Kd \\
\hline
\tend & \tenc & \fours \\
\hline
\sixh & \Ac & \fourc \\
\hline
\eigc & \tred & \Jd \\
\hline
\nineh & \tenh & \Kh \\
\hline
\end{tabular}
\caption{Stacks after Iteration 3 ($s_3 = 4$)}
\end{table}
\smallskip
\noindent
Now for the final time, Magi asks Audy about the stack that contains his card. Audy replies by saying that his card is in Stack 3 (\fours \, is in Stack 3). Magi then puts Stack 3 in between Stack 1 and Stack 2. The set of 21 cards now look as follows from top to bottom:
\smallskip
\noindent
\eigd, \sixc, \ninec, \tend, \sixh, \eigc, \nineh, \tens, \fives, \Kd, \fours, \fourc, \Jd, \Kh, \twod, \tres, \twoc, \tenc, \Ac, \tred, \tenh \, \, ($d_3 = 11$)
\item Magi now puts each card from the top face down on the table one by one, and flips over the \textbf{11th card} which turns out to be Audy's card, \fours.
\smallskip
\noindent
\begin{center}
\eigd, \sixc, \ninec, \tend, \sixh, \eigc, \nineh, \tens, \fives, \Kd,
\fcolorbox{blue}{green}{\fours}
\end{center}
\end{enumerate}
\smallskip
\noindent
Magi correctly finds Audy's card, leaving Audy startled. He wonders what kind of voodoo Magi applied in all of this. Little does he know about the power of the mathematics behind this trick.
\section{The Mathematics behind the 21 card trick}
Most card tricks rely on mathematics. The 21CT is no exception. We now look at the mathematics behind this trick.
\smallskip
\noindent
Before we look into the steps of the 21CT again, we would like to look at the definition of the \textbf{ceiling} and \textbf{floor} functions, and three lemmas that will be used extensively in this article.
\bigskip
\noindent
\begin{framed}
\begin{definition}
Suppose $x \in \mathbb{R}$. The \textbf{ceiling} of $x$ denoted by $\ceil{x}$ is the smallest integer greater than or equal to $x$. In general, if $\ceil{x} = n \in \mathbb{Z}$, then $n-1 < x \leq n$.
\end{definition}
\begin{definition}
Suppose $x \in \mathbb{R}$. The \textbf{floor} of $x$ denoted by $\floor{x}$ is the largest integer less than or equal to $x$. In general, if $\floor{x} = n \in \mathbb{Z}$, then $n \leq x < n+1$. In this case $x - n = x - \floor{x}$ is called the \emph{fractional part} of $x$, and is denoted by $\{x\}$.
\end{definition}
\end{framed}
\begin{lemma}\label{lem1}
For $x, y \in \mathbb{R}$, if $x \leq y$, then
$$\ceil{x} \leq \ceil{y}$$
\end{lemma}
\begin{proof}
We divide this into two cases.
\enspace
\noindent
\underline{Case 1 :} ($y \leq \ceil{x}$) As $\ceil{x} - 1 < x \leq \ceil{x}$ by definition, and $x \leq y$, therefore, $\ceil{x} - 1 < y \leq \ceil{x}$. This implies that $\ceil{y} = \ceil{x}$, and therefore, $\ceil{x} \leq \ceil{y}$.
\enspace
\noindent
\underline{Case 2 :} ($y > \ceil{x}$). By definition, $y \leq \ceil{y}$. Therefore, we have $\ceil{x} < y \leq \ceil{y}$. Hence, $\ceil{x} < \ceil{y}$.
\enspace
\noindent
Combining the two cases above, we have the desired inequality.
\end{proof}
\begin{lemma}\label{lem2}
For $n \in \mathbb{Z}$ and $x \in \mathbb{R}$,
$$\ceil{n+x} = n + \ceil{x}$$
\end{lemma}
\begin{proof}
Suppose $\ceil{n+x} = d$. Thus,
\begin{equation*}
\begin{split}
d-1 < n + & x \leq d \\
(d-n)-1 < \, & x \leq d-n
\end{split}
\end{equation*}
This implies, $\ceil{x} = d - n$. Adding $n$ to both sides, we have $n + \ceil{x} = d$ which shows the desired result.
\end{proof}
\begin{lemma}\label{lem3}
For $n, m \in \mathbb{Z}$ with $m > 0$, and $x \in \mathbb{R}$,
$$\ceil[\Bigg]{\frac{n + \ceil{x}}{m}} = \ceil[\Bigg]{\frac{n+x}{m}}$$
\end{lemma}
\begin{proof}
Suppose $\ceil[\Bigg]{\dfrac{n+x}{m}} = d$. Thus,
\begin{alignat*}{2}
d-1 &< &\frac{n+x}{m} &\leq d \\
md - m &< &n + x &\leq md \\
md - m - n &< &x \, \, \, \, \, &\leq md - n
\end{alignat*}
As $m, d, n \in \mathbb{Z}$, thus $md-m-n$ and $md-n$ are integers. By definition of the ceiling function, $x \leq \ceil{x}$. Hence, $md - m - n < \ceil{x}$. On the other hand, $x, md-n \in \mathbb{R}$. Hence, by Lemma \ref{lem1}, $\ceil{x} \leq \ceil{md-n} = md-n$ as $md - n \in \mathbb{Z}$. Thus, we have
\begin{alignat*}{2}
md-m-n &< &\ceil{x} \, \, \, \, \, \, \, &\leq md-n \\
md - m &< &n + \ceil{x} &\leq md \\
\frac{md - m}{m} &< &\frac{n + \ceil{x}}{m} &\leq \frac{md}{m} \\
d-1 &< &\frac{n + \ceil{x}}{m} &\leq d
\end{alignat*}
This implies, $\ceil[\Bigg]{\dfrac{n + \ceil{x}}{m}} = d$, which proves the result.
\end{proof}
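Lemma \ref{lem3} can be spot-checked by brute force. The sketch below (our own illustration) uses exact rational arithmetic, so the ceilings are computed without floating-point error:

```python
import math
from fractions import Fraction

def lemma3_holds(n, m, x):
    """Check ceil((n + ceil(x)) / m) == ceil((n + x) / m) for integers n, m with m > 0."""
    lhs = math.ceil(Fraction(n + math.ceil(x), m))
    rhs = math.ceil((n + x) / m)   # n + x is an exact Fraction, so ceil is exact
    return lhs == rhs

# Compare both sides over a grid of integers n, positive integers m,
# and rational values x = p/q.
checks = [
    lemma3_holds(n, m, Fraction(p, q))
    for n in range(-5, 6)
    for m in range(1, 6)
    for p in range(-20, 21)
    for q in range(1, 8)
]
print("all instances hold:", all(checks))
```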
\noindent
Now we look back at the steps of the card trick again.
\enspace
\begin{enumerate}[Step 1:]
\item Audy hands the shuffled deck to Magi. We denote the initial deck id of Audy's card by $d_0$ as no iterations of splitting into stacks has been done yet. Therefore,
$$1 \leq d_0 \leq 21$$
\item Magi now puts the cards from the top, face up, on the table adjacent to each other, creating 3 stacks of 7 cards each. Audy's card is in one of the three stacks. We claim the following.
\begin{claim}\label{claim1}
If $s_k$ denotes the stack id, and $d_k$ denote the deck id of Audy's card after $k$ iterations, then,
$$s_{k} = \ceil[\Big]{\dfrac{d_{k-1}}{3}} \hspace{0.2in} \text{for} \hspace{0.1in} k \geq 1$$
\end{claim}
\begin{proof}
It is imperative to note that a stack id for Audy's card is created after every iteration. Hence, $s_k$ is defined only when $k \geq 1$. On the other hand, a deck id for Audy's card is created when the 21 cards are all in a single deck, which happens for the first time when Audy hands over the shuffled deck to Magi. Thus, $d_k$ is defined for $k \geq 0$. Also, note that the $k$th iteration, which creates $s_{k}$, is performed on the 21 card deck obtained after the $(k-1)$th iteration, which creates $d_{k-1}$; hence the relationship between $s_{k}$ and $d_{k-1}$. The stack id of a card is simply the ``row number" of the card in its stack. Thus for $k \geq 1$, the first row consists of cards with $d_{k-1} = 1, 2, 3$, the second row consists of cards with $d_{k-1} = 4, 5, 6$, and so on. In general, the $q$th row consists of cards with $d_{k-1} = 3q-2, 3q-1, 3q$. Since $q = s_{k}$, we have
\begin{alignat*}{2}
3s_{k}-3 &< &d_{k-1} &\leq 3s_{k} \\
s_{k} -1 &< &\frac{d_{k-1}}{3} &\leq s_{k}
\end{alignat*}
Therefore, $\ceil[\Big]{\dfrac{d_{k-1}}{3}} = s_{k}$.
\end{proof}
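Claim \ref{claim1} can also be confirmed by actually dealing the cards. The sketch below (our own illustration; `deal_into_stacks` is a helper name of our choosing) deals deck ids $1$ through $21$ into three stacks exactly as Magi does and checks that the card with deck id $d$ lands in row $\lceil d/3 \rceil$:

```python
import math

def deal_into_stacks(deck, n=3):
    """Deal cards one at a time, left to right, into n stacks."""
    stacks = [[] for _ in range(n)]
    for pos, card in enumerate(deck):
        stacks[pos % n].append(card)
    return stacks

deck = list(range(1, 22))               # deck ids 1..21, top to bottom
stacks = deal_into_stacks(deck)

for d in deck:
    stack = next(s for s in stacks if d in s)
    row = stack.index(d) + 1            # the card's stack id
    assert row == math.ceil(d / 3)      # matches Claim 1
print("stack id = ceil(deck id / 3) for every card")
```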
\noindent
As $1 \leq d_0 \leq 21$, hence,
\begin{alignat*}{2}
\dfrac{1}{3} &\leq &\dfrac{d_0}{3} \, &\leq 7 \\
1 &\leq &\ceil[\Big]{\dfrac{d_0}{3}} &\leq 7 \qquad (\text{By Lemma } \ref{lem1}) \\
1 &\leq &s_1 \, \, &\leq 7 \qquad (\text{By Claim } \ref{claim1})
\end{alignat*}
\noindent
Now Magi asks Audy to tell him the stack that contains his card. Audy responds by saying that it's Stack 2.
\item Magi then puts Stack 2 in between the other two stacks and creates a full deck of 21 cards. Note that there is a stack of 7 cards on top of Stack 2 at the moment. The position of Audy's card in Stack 2 is currently $s_1$. Hence, the new deck id of Audy's card after the first iteration is,
$$d_1 = 7+ s_1$$
Since $1 \leq s_1 \leq 7$, therefore,
\begin{alignat*}{2}
8 &\leq &7+s_1 &\leq 14 \\
8 &\leq &d_1 \, \, \, &\leq 14
\end{alignat*}
\noindent
Now the second iteration is performed, and the stack id $s_2$ is created,
\begin{alignat*}{2}
8 &\leq &d_1 &\leq 14 \\
\dfrac{8}{3} &\leq &\dfrac{d_1}{3} &\leq \dfrac{14}{3} \\
3 &\leq &\ceil[\Big]{\dfrac{d_1}{3}} &\leq 5 \qquad (\text{By Lemma } \ref{lem1}) \\
3 &\leq &s_2 \, \, &\leq 5 \qquad (\text{By Claim } \ref{claim1})
\end{alignat*}
\noindent
Note that Magi now knows that the stack id of Audy's card is either 3, 4 or 5, but that is not good enough, as he needs to find the exact card. He again asks Audy to tell him the stack where his card belongs. Audy mentions that it's Stack 2, and Magi puts Stack 2 in between the other two stacks to create a deck of 21 cards. The position of Audy's card in Stack 2 is currently $s_2$. Therefore, the new deck id of Audy's card after the second iteration is,
$$d_2 = 7 + s_2$$
Now, $3 \leq s_2 \leq 5$, therefore,
\begin{alignat*}{2}
10 &\leq &7+s_2 &\leq 12 \\
10 &\leq &d_2 \, \, \, &\leq 12
\end{alignat*}
\noindent
Now the third iteration is performed, and the stack id $s_3$ is created,
\begin{alignat*}{2}
10 &\leq &d_2 \, \, &\leq 12 \\
\dfrac{10}{3} &\leq &\dfrac{d_2}{3} \, \, &\leq \dfrac{12}{3} \\
4 &\leq &\ceil[\Big]{\dfrac{d_2}{3}} &\leq 4 \qquad (\text{By Lemma } \ref{lem1}) \\
4 &\leq &s_3 \, \, &\leq 4 \qquad (\text{By Claim } \ref{claim1})
\end{alignat*}
\noindent
This means $s_3 = 4$. Magi can now see the finish line. Magi has now found the exact stack id of Audy's card in a stack, but he does not know exactly which stack contains Audy's card. He thus asks Audy one final time about the stack that contains his card. Audy mentions Stack 3, and Magi puts Stack 3 in between the other two stacks. The position of Audy's card in the middle stack is currently $s_3 = 4$. Hence, the new deck id of Audy's card after the third iteration is,
$$d_3 = 7 + s_3 = 11$$
\item Magi now knows that Audy's card is the 11th card from the top of the 21 card deck. He puts each of the card from the top on the table, face down, until he flips over the 11th card to Audy's delight.
\end{enumerate}
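The four steps above can be simulated directly. The sketch below (our own illustration; the function name is ours) runs the trick from every possible starting position and confirms that the chosen card always ends at deck id 11:

```python
def one_iteration(deck, chosen, n=3):
    """Deal into n stacks, then reassemble with the chosen card's
    stack placed between the other two (one stack on top of it)."""
    stacks = [[] for _ in range(n)]
    for pos, card in enumerate(deck):
        stacks[pos % n].append(card)
    idx = next(i for i, s in enumerate(stacks) if chosen in s)
    others = [s for i, s in enumerate(stacks) if i != idx]
    return others[0] + stacks[idx] + others[1]

final_positions = set()
for d0 in range(1, 22):                  # every possible initial deck id
    deck = list(range(1, 22))            # the card labeled d0 starts at position d0
    for _ in range(3):
        deck = one_iteration(deck, d0)
    final_positions.add(deck.index(d0) + 1)

print(final_positions)  # the chosen card always ends as the 11th card
```

For the worked example above ($d_0 = 20$), a single iteration moves the card to position $14$, matching $d_1 = 7 + \lceil 20/3 \rceil = 14$.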
\section{Generalization of the 21 card trick}\label{sec3}
Now that we have seen how the 21CT works and the mathematics behind it, we ask whether the same trick can be performed by Magi using a random set of, say, $C$ cards. He also wants to find out the number of stacks he needs to split his cards into, where he should put the stack that contains Audy's card, how many iterations he should perform, and lastly, the deck id of Audy's card after the final iteration.
\noindent
Let's help out Magi perform his trick. Before we proceed, we introduce some notations.
\begin{itemize}
\item Number of given cards = $C(>0) \in \mathbb{Z}$.
\item Number of stacks to split into = $n(>0 \text{ and} \leq C) \in \mathbb{Z}$.
\item Number of stacks to put on top of the stack which contains Audy's card = $j(\geq 0 \text{ and} <n)\in \mathbb{Z}$.
\item Number of iterations to be performed = $k(>0) \in \mathbb{Z}$.
\item Deck id of Audy's card after the final iteration = $l(>0) \in \mathbb{Z}$.
\end{itemize}
We have thus created a 4-tuple ($C, n, j, k$) that provides information about the magic trick.
\begin{framed}
\begin{definition}
A magic trick ($C, n, j, k$) is \textbf{solvable}, if there exists an integer $l$ such that the magic trick can be performed with the given parameters in $k$ iterations, with Audy's card ending at deck id $l$ regardless of its initial position. In this case, we write $$(C, n, j, k) = l$$
If no such $l$ exists, then the magic trick cannot be performed, and we say that ($C, n, j, k$) is \textbf{not solvable}.
\end{definition}
\end{framed}
\noindent
The 21 card trick is represented by (21, 3, 1, 3); it is solvable, and $(21, 3, 1, 3) = 11$. Following along the lines of the 21CT, our main assumption here is that each stack contains the same number of cards. Thus, $n \mid C$, which in particular gives $n \leq C$.
\noindent
Before we proceed any further, we first look at a generalization of Claim \ref{claim1} in the form of a theorem and an additional result.
\begin{theorem}\label{thm4}
Suppose $C$ cards are split into $n$ stacks during an iteration. If $s_k$ denotes the stack id, and $d_k$ denote the deck id of Audy's card after $k$ iterations, then,
$$s_{k} = \ceil[\Big]{\dfrac{d_{k-1}}{n}} \hspace{0.2in} \text{for} \hspace{0.1in} k \geq 1$$
\end{theorem}
\begin{proof}
The proof of this theorem is very similar to the earlier proof of Claim \ref{claim1}. We know that the stack id of a card is the ``row number" of the card in the stack. Thus for $k \geq 1$, the first row consists of cards with $d_{k-1} = 1, 2, 3, \ldots, n$. The second row consists of cards with $d_{k-1} = n+1, n+2, \ldots, 2n$, and so on. This means that the $q$th row consists of cards with $d_{k-1} = (q-1)n + 1, (q-1)n + 2, \ldots, (q-1)n + n$. Since $q = s_{k}$, we have
\begin{alignat*}{2}
n(s_{k}-1) &< &d_{k-1} &\leq n(s_{k} - 1) + n \\
n(s_{k}-1) &< &d_{k-1} &\leq ns_{k} \\
s_{k}-1 &< &\frac{d_{k-1}}{n} &\leq s_{k}
\end{alignat*}
Therefore, $\ceil[\Big]{\dfrac{d_{k-1}}{n}} = s_{k}$.
\end{proof}
\begin{theorem}\label{thm5}
Suppose $k \geq 1$, then
$$d_k = \Big(\frac{C}{n}\Big)j + s_k$$
\end{theorem}
\begin{proof}
As $C$ cards are being split into $n$ stacks, each stack contains $\dfrac{C}{n}$ cards. Here $s_k$ is the stack id of the desired card after $k$ iterations, and $j$ stacks are kept on top of the stack that contains the desired card. There are a total of $\Big(\dfrac{C}{n}\Big)j$ cards in these $j$ stacks. Thus, the new deck id after the $k$th iteration is,
$$d_k = \Big(\frac{C}{n}\Big)j +s_k \hspace{0.2in} \text{for} \hspace{0.1in} k \geq 1$$
\end{proof}
\noindent
Note that our goal is to find an exact value for $d_k$. This would tell us the deck id of the desired card which would successfully conclude the trick. Due to Theorem \ref{thm5}, we now have an expression for $d_k$ that we can investigate further. This results in the next theorem which will be used extensively in the next section.
\begin{theorem}\label{thm6}
Suppose $\dfrac{C}{n} = m$ and $k \geq 1$. Then,
$$d_k =
\begin{cases}
mj + \ceil[\Bigg]{\dfrac{mjn\Big(\dfrac{n^{k-1}-1}{n-1}\Big) + d_0}{n^k}}, & \text{for} \hspace{0.1in} n > 1 \\
d_0, & \text{for} \hspace{0.1in} n = 1
\end{cases}
$$
\end{theorem}
\begin{proof}
\underline{Case 1 :} ($n>1$)
We will use mathematical induction on $k$ to prove this result.
\noindent
For $k=1$, L.H.S. = $d_1 = mj + s_1$ from Theorem \ref{thm5}. Thus, $d_1 = mj + \ceil[\Big]{\dfrac{d_0}{n}}$.
\noindent
R.H.S. = $mj + \ceil[\Bigg]{\dfrac{mjn\Big(\dfrac{n^{0}-1}{n-1}\Big) + d_0}{n}} = mj + \ceil[\Big]{\dfrac{d_0}{n}}$. Hence, the result is true for $k=1$. Now assume the result is true for some $k = t \geq 1$. Thus,
$$d_t = mj + \ceil[\Bigg]{\frac{mjn\Big(\dfrac{n^{t-1}-1}{n-1}\Big) + d_0}{n^t}}$$
Now,
$$d_{t+1} = mj + s_{t+1} = mj + \ceil[\Big]{\dfrac{d_t}{n}} = mj + \ceil[\Bigg]{\dfrac{mj + \ceil[\Big]{\frac{mjn\Big(\frac{n^{t-1}-1}{n-1}\Big) + d_0}{n^t}}}{n}}$$
As $mj$ and $n$ are integers, hence by Lemma \ref{lem3},
$$\ceil[\Bigg]{\dfrac{mj + \ceil[\Big]{\frac{mjn\Big(\frac{n^{t-1}-1}{n-1}\Big) + d_0}{n^t}}}{n}} = \ceil[\Bigg]{\dfrac{mj + \frac{mjn\Big(\frac{n^{t-1}-1}{n-1}\Big) + d_0}{n^t}}{n}} = \ceil[\Bigg]{\dfrac{mjn^t + \Big(\frac{mjn^t-mjn}{n-1}\Big) +d_0}{n^{t+1}}}$$
Now,
$$\ceil[\Bigg]{\dfrac{mjn^t + \Big(\dfrac{mjn^t-mjn}{n-1}\Big) +d_0}{n^{t+1}}} = \ceil[\Bigg]{\dfrac{\Big(\dfrac{mjn^{t+1}-mjn}{n-1}\Big) +d_0}{n^{t+1}}} = \ceil[\Bigg]{\dfrac{mjn\Big(\dfrac{n^{t}-1}{n-1}\Big) +d_0}{n^{t+1}}}$$
Therefore,
$$d_{t+1} = mj + \ceil[\Bigg]{\dfrac{mjn\Big(\dfrac{n^{(t+1)-1}-1}{n-1}\Big) +d_0}{n^{(t+1)}}}$$
and thus the result is true for $k = t+1$. Therefore, by induction, the result is true for all $k \geq 1$.
\noindent
\underline{Case 2 :} ($n=1$) As $n=1$, and $0 \leq j < n$, hence $j = 0$. By Theorem \ref{thm5},
$$d_k = s_k \qquad \text{for} \hspace{0.1in} k \geq 1$$
Now, according to Theorem \ref{thm4}, $s_k = \ceil{d_{k-1}} = d_{k-1}$ for $k \geq 1$. Therefore,
$$d_k = d_{k-1} \qquad \text{for} \hspace{0.1in} k \geq 1$$
Solving this recurrence relation, we get $d_k = d_0$.
\end{proof}
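As a sanity check (ours, not part of the paper), the closed form of Theorem \ref{thm6} can be verified numerically against the recurrence $d_k = mj + \ceil[\big]{d_{k-1}/n}$ coming from Theorems \ref{thm4} and \ref{thm5}:

```python
# Sanity check (ours, not part of the paper): the closed form of
# Theorem 6 must agree with the recurrence d_k = mj + ceil(d_{k-1}/n)
# obtained from Theorems 4 and 5.

def d_recurrence(C, n, j, k, d0):
    """Iterate d -> m*j + ceil(d/n) for k steps."""
    m, d = C // n, d0
    for _ in range(k):
        d = m * j + -(-d // n)  # -(-a // b) is ceil(a/b) for positive ints
    return d

def d_closed(C, n, j, k, d0):
    """Closed form of Theorem 6 (both the n > 1 and the n = 1 case)."""
    m = C // n
    if n == 1:
        return d0
    a = m * j * n * (n ** (k - 1) - 1) // (n - 1)  # exact: (n-1) | n^{k-1}-1
    return m * j + -(-(a + d0) // n ** k)
```

For the 21CT $(21, 3, 1, 3)$, both functions return $11$ for every starting deck id $d_0 \in \{1, \ldots, 21\}$.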
\section{Main Result}
\noindent
The 21CT and the results that we have seen so far lead us to a more general question.
\begin{framed}
\noindent
\begin{center}\label{gen-ques}
\underline{\textbf{General Question}}
\end{center}
Given integers $C, n,$ and $j$, does there exist a positive integer $k$, such that the trick $(C, n, j, k)$ is solvable? In that case, what is $(C, n, j, k)$?
\end{framed}
\noindent
We answer this question using an algorithm. We will then look at the proof of the results in our algorithm.
\begin{framed}\label{algorithm}
\begin{enumerate}
\item Start with given integers $C, n, j$.
\item Find $m = \dfrac{C}{n}$ and $b = \dfrac{mj}{n-1}$ (for $n>1$).
\item If $C = 1$, then $(1, 1, 0, k)$ is solvable for any $k \geq 1$. In this case, $(1, 1, 0, k) = 1$.
\item If $C>1$ and $n = 1$, then $(C, 1, 0, k)$ is not solvable for any $k \geq 1$.
\item If $C, n > 1$, then $(C, n, 0, k)$ is solvable for any integer $k \geq \log_nC$. In this case, $(C, n, 0, k) = 1$.
\item If $C, n > 1$, then $(C, n, n-1, k)$ is solvable for any integer $k > \log_n(C-1)$. In this case, $(C, n, n-1, k) = C$.
\item If $C, n > 1$ and $0<j<n-1$, then $(C, n, j, k)$ is solvable if, and only if, $(n-1) \nmid mj$. In this case, $(C, n, j, k)$ is solvable for any integer $k > \log_nt$, where $t = \max\Big\{\dfrac{C-bn}{1-\{b\}}, \dfrac{bn-1}{\{b\}}\Big\}$ and $(C, n, j, k) = mj + \floor{b} + 1$.
\end{enumerate}
\end{framed}
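For concreteness, the algorithm above can be transcribed directly into code. The sketch below is ours, not the paper's (the function name is our invention); it returns whether the trick is solvable, the smallest iteration count $k$ guaranteed by the corresponding step, and the value of the trick.

```python
from fractions import Fraction

def solve_trick(C, n, j):
    """Steps (3)-(7) of the algorithm above (function name is ours).
    Returns (solvable, k_min, value), where k_min is the smallest k
    guaranteed by the corresponding step, or None if unsolvable."""
    if C == 1:                        # step (3)
        return True, 1, 1
    if n == 1:                        # step (4)
        return False, None, None
    m = C // n
    if j == 0:                        # step (5): any k >= log_n C
        k = 1
        while n ** k < C:
            k += 1
        return True, k, 1
    if j == n - 1:                    # step (6): any k > log_n (C - 1)
        k = 1
        while n ** k <= C - 1:
            k += 1
        return True, k, C
    if (m * j) % (n - 1) == 0:        # step (7): unsolvable when (n-1) | mj
        return False, None, None
    b = Fraction(m * j, n - 1)        # exact rational arithmetic for b
    fl = b.numerator // b.denominator   # floor(b)
    frac = b - fl                       # {b}, strictly between 0 and 1
    t = max((C - b * n) / (1 - frac), (b * n - 1) / frac)
    k = 1
    while n ** k <= t:                # smallest integer k with n^k > t
        k += 1
    return True, k, m * j + fl + 1
```

For the 21CT, `solve_trick(21, 3, 1)` returns `(True, 3, 11)`.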
\noindent
Steps (3)--(7) of the above algorithm give a complete answer to the general question posed at the beginning of this section. We will prove each step in the form of a theorem, culminating with the proof of the all-important Step (7).
\begin{theorem} (Step (3)) \label{thm:step3}
If $C = 1$, then $(1, 1, 0, k)$ is solvable for any $k \geq 1$ with $(1, 1, 0, k) = 1$.
\end{theorem}
\begin{proof}
This is a trivial case as there is only 1 card. As $0 < n \leq C$, therefore, $n=1$. Also, as $j < n$, hence $j = 0$. By Theorem \ref{thm6}, $d_k = d_0$. We know that the initial deck id $d_0$ of the desired card is between $1$ and $C$. Thus, $1 \leq d_0 \leq 1$. Therefore, $d_k = d_0 = 1$. Thus $(1, 1, 0, k)$ is solvable for any $k \geq 1$, with $(1, 1, 0, k) = 1$.
\end{proof}
\begin{theorem} (Step (4)) \label{thm:step4}
If $C>1$ and $n = 1$, then $(C, 1, 0, k)$ is not solvable for any $k \geq 1$.
\end{theorem}
\begin{proof}
By Theorem \ref{thm6}, $d_k = d_0$, as $n=1$. But $1 \leq d_0 \leq C$. Thus,
$$1 \leq d_k \leq C \hspace{0.3in} \text{ for } \hspace{0.1in} k \geq 1$$
As $C>1$, the value of $d_k$ depends on $d_0$, so there is no specific value of $d_k$ that we can determine for any $k$. Thus, the trick $(C, 1, 0, k)$ is not solvable.
\end{proof}
\begin{theorem} (Step (5)) \label{thm:step5}
If $C, n > 1$, then $(C, n, 0, k)$ is solvable for any integer $k \geq \log_nC$. In this case, $(C, n, 0, k) = 1$.
\end{theorem}
\begin{proof}
From Theorem \ref{thm6} with $j = 0$, we have
$$d_k = \ceil[\Big]{\frac{d_0}{n^k}}$$
As $1 \leq d_0 \leq C$, therefore,
\begin{alignat*}{2}
\dfrac{1}{n^k} &\leq & \dfrac{d_0}{n^k} \, \,&\leq \dfrac{C}{n^k} \\
\ceil[\Big]{\dfrac{1}{n^k}} &\leq & \, \ceil[\Big]{\dfrac{d_0}{n^k}} &\leq \ceil[\Big]{\dfrac{C}{n^k}}
\end{alignat*}
As $n>1$, therefore $0 < \dfrac{1}{n^k} < 1$. Hence,
\begin{alignat*}{2}
1 &\leq & \, d_k & \leq \ceil[\Big]{\dfrac{C}{n^k}} \numberthis \label{step5:ineq1}
\end{alignat*}
\noindent
Now, $k \geq \log_nC$ and $C>1$. This implies,
\begin{align*}
n^k &\geq C \\
0 < & \frac{C}{n^k} \leq 1
\end{align*}
Therefore, $\ceil[\Big]{\dfrac{C}{n^k}} = 1$. Using this result in inequality \eqref{step5:ineq1}, we have
$$1 \leq d_k \leq 1$$
Thus, $d_k = 1$. This shows that after $k$ iterations, we have a specific value for the deck id $d_k$, and this value is the position of the desired card, which happens to be the first card in the deck. Hence, the trick $(C, n, 0, k)$ is solvable for any integer $k \geq \log_nC$, and $(C, n, 0, k) = 1$.
\end{proof}
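A quick numerical illustration of Step (5) (ours, not the paper's): with $C = 27$, $n = 3$, $j = 0$ and $k = 3 = \log_3 27$, every starting deck id ends at position $1$, while $k = 2$ iterations are not yet enough.

```python
# Numerical illustration of Step (5) (ours): C = 27, n = 3, j = 0.
n = 3

def after(d0, k):
    """Deck id after k iterations with j = 0, i.e. d -> ceil(d/n)."""
    d = d0
    for _ in range(k):
        d = -(-d // n)  # ceiling division for positive integers
    return d

# k = 3 >= log_3 27 suffices: every starting position ends at the top.
assert {after(d0, 3) for d0 in range(1, 28)} == {1}
# k = 2 is too small: the final position still depends on d0.
assert {after(d0, 2) for d0 in range(1, 28)} == {1, 2, 3}
```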
\begin{theorem} (Step (6)) \label{thm:step6}
If $C, n > 1$, then $(C, n, n-1, k)$ is solvable for any integer $k > \log_n(C-1)$. In this case, $(C, n, n-1, k) = C$.
\end{theorem}
\begin{proof}
Here $j = n-1$. Therefore, by Theorem \ref{thm6},
\begin{equation*}
\begin{split}
d_k = m(n-1) + \ceil[\Bigg]{\frac{m(n-1)n\Big(\dfrac{n^{k-1}-1}{n-1}\Big) + d_0}{n^k}} & = m(n-1) + \ceil[\Bigg]{\frac{m(n^k-n)}{n^k} + \frac{d_0}{n^k}} \\
& = m(n-1) + \ceil[\Bigg]{m + \frac{d_0 - mn}{n^k}}
\end{split}
\end{equation*}
As $m \in \mathbb{Z}$, therefore, by Lemma \ref{lem2},
\begin{equation*}
\begin{split}
d_k = m(n-1) + \ceil[\Bigg]{m + \frac{d_0 - mn}{n^k}} &= m(n-1) + m + \ceil[\Bigg]{\frac{d_0 - mn}{n^k}} \\
&= mn + \ceil[\Bigg]{\frac{d_0 - C}{n^k}} \hspace{0.2in} \text{as } C = mn.
\end{split}
\end{equation*}
$d_0$ is the initial deck id before any iteration. Hence $1 \leq d_0 \leq C$. This implies,
\begin{alignat*}{2}
1-C &\leq & \, d_0 - C \,\,\, &\leq 0 \\
\ceil[\Bigg]{\frac{1-C}{n^k}} &\leq & \, \ceil[\Bigg]{\frac{d_0 - C}{n^k}} &\leq 0 \numberthis \label{step6:ineq1}
\end{alignat*}
\noindent
We are told that $k > \log_n(C-1)$, hence $n^k >C-1$. This implies,
\begin{equation*}
\begin{split}
\frac{1-C}{n^k} &> -1 \\
\ceil[\Bigg]{\frac{1-C}{n^k}} & > -1
\end{split}
\end{equation*}
This along with inequality $\ceil[\Bigg]{\dfrac{1-C}{n^k}} \leq 0$ from \eqref{step6:ineq1}, implies that $\ceil[\Bigg]{\dfrac{1-C}{n^k}} = 0$. Thus,
$$0 \leq \ceil[\Bigg]{\frac{d_0 - C}{n^k}} \leq 0$$
which means $\ceil[\Bigg]{\dfrac{d_0 - C}{n^k}} = 0$. Therefore,
$$d_k = mn = C$$
This shows that after $k$ iterations, we have a specific value of the deck id $d_k$, and this value is the position of the desired card from the top of the deck. Thus, the trick $(C, n, n-1, k)$ is always solvable for $k > \log_n(C-1)$, and $(C, n, n-1, k) = C$.
\end{proof}
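A matching numerical check for Step (6) (ours, not the paper's): with $C = 20$, $n = 4$, $j = n-1 = 3$, so $m = 5$, the smallest integer $k$ with $n^k > C-1 = 19$ is $k = 3$, and every starting deck id then ends at the bottom position $C = 20$.

```python
# Numerical illustration of Step (6) (ours): C = 20, n = 4, j = 3, m = 5.
C, n, j, m = 20, 4, 3, 5

def after(d0, k):
    """Deck id after k iterations: d -> m*j + ceil(d/n)."""
    d = d0
    for _ in range(k):
        d = m * j + -(-d // n)  # ceiling division for positive integers
    return d

# k = 3 (smallest k with 4^k > 19) sends every card to the bottom.
assert {after(d0, 3) for d0 in range(1, C + 1)} == {20}
# k = 2 is not yet enough: starting from d0 = 1 leaves the card at 19.
assert after(1, 2) == 19
```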
\begin{theorem} (Step (7)) \label{thm:step7}
Suppose $m = \dfrac{C}{n}$ and $b = \dfrac{mj}{n-1}$. If $C, n > 1$ and $0<j<n-1$, then $(C, n, j, k)$ is solvable if, and only if, $(n-1) \nmid mj$. In this case, $(C, n, j, k)$ is solvable for any integer $k > \log_nt$, where $t = \max\Big\{\dfrac{C-bn}{1-\{b\}}, \dfrac{bn-1}{\{b\}}\Big\}$ and, $(C, n , j, k) = mj + \floor{b} + 1$.
\end{theorem}
\begin{proof}
Before we proceed further, we first verify that each of these logarithms are defined. As $0 < j < n-1$, hence,
\begin{align*}
mj & < m(n-1) \\
b(n-1) & < m(n-1) \\
b &< m \qquad \qquad \text{ as } \hspace{0.1in} n > 1 \\
bn &< mn \\
C &> bn \qquad \qquad \text{ as } \hspace{0.1in} C = mn
\end{align*}
Also, when the trick is solvable, $b \not\in \mathbb{Z}$. So $0 < \{b\} < 1$, hence $1 - \{b\} > 0$. Thus $\dfrac{C-bn}{1-\{b\}} > 0$, and hence $\log_n\Big(\dfrac{C-bn}{1-\{b\}} \Big)$ is defined. For the other logarithm, we know that $C \geq n$, thus $C > n-1$, and since $j \geq 1$,
\begin{align*}
Cj &> n-1 \\
\frac{Cj}{n-1} &> 1 \\
\frac{mnj}{n-1} &>1 \\
bn & > 1 \qquad \qquad \text{ as } \hspace{0.1in} \frac{mj}{n-1} = b
\end{align*}
As $\{b\} > 0$, hence, $\dfrac{bn-1}{\{b\}} > 0$, and thus, $\log_n\Big(\dfrac{bn-1}{\{b\}}\Big)$ is defined.
\noindent
Now we come back to the main proof.
\begin{center}
\underline{\textbf{Part 1}}
\end{center}
We first show that if $(n-1) \nmid mj$ then $(C, n, j, k)$ is solvable. As $(n-1) \nmid mj$, thus $b \not\in \mathbb{Z}$. Hence $0<\{b\} < 1$. From Theorem \ref{thm6}, we have,
\begin{align*}
d_k = mj + \ceil[\Bigg]{\frac{b(n-1)n\Big(\dfrac{n^{k-1}-1}{n-1}\Big) + d_0}{n^k}} & = mj + \ceil[\Bigg]{\frac{b(n^k-n)}{n^k} + \frac{d_0}{n^k}} \\
& = mj + \ceil[\Bigg]{b + \frac{d_0 - bn}{n^k}}
\end{align*}
As $b = \floor{b} + \{b\}$ and $\floor{b} \in \mathbb{Z}$, hence, by Lemma \ref{lem2},
$$d_k = mj + \floor{b} + \ceil[\Bigg]{\{b\} + \frac{d_0 - bn}{n^k}}$$
As usual, $1 \leq d_0 \leq C$. Thus,
\begin{alignat*}{2}
\{b\} + \frac{1 - bn}{n^k} &\leq &\{b\} + \frac{d_0 - bn}{n^k} \, \, \, &\leq \{b\} + \frac{C - bn}{n^k} \numberthis \label{step7:ineq1}\\
\text{Hence, } \qquad \ceil[\Bigg]{\{b\} + \frac{1 - bn}{n^k}} &\leq &\ceil[\Bigg]{\{b\} + \frac{d_0 - bn}{n^k}} &\leq \ceil[\Bigg]{\{b\} + \frac{C - bn}{n^k}} \numberthis \label{step7:ineq2}
\end{alignat*}
Now suppose $t = \dfrac{C-bn}{1-\{b\}}$. Thus $k > \log_n\Big(\dfrac{C-bn}{1-\{b\}} \Big) \geq \log_n\Big(\dfrac{bn-1}{\{b\}}\Big)$. This implies,
\begin{align*}
n^k &> \frac{C-bn}{1-\{b\}} \qquad \text{ and } \qquad n^k > \frac{bn-1}{\{b\}}\\
\frac{C-bn}{n^k} &<1-\{b\} \qquad \text{ and } \qquad \frac{1-bn}{n^k} > -\{b\}\\
\{b\} + \frac{C-bn}{n^k} &<1 \qquad \qquad \text{ and } \qquad \{b\} + \frac{1-bn}{n^k} > 0\\
\end{align*}
We have already seen that $C>bn$ and $\{b\} >0$. Thus,
$$0 < \{b\} + \frac{C-bn}{n^k} < 1$$
This implies,
\begin{align*}
\ceil[\Bigg]{\{b\} + \frac{C - bn}{n^k}} = 1 \numberthis \label{step7:ineq3}
\end{align*}
From inequality \eqref{step7:ineq1}, we see that,
$$\{b\} + \frac{1 - bn}{n^k} \leq \{b\} + \frac{C - bn}{n^k} < 1$$
Thus,
$$0 < \{b\} + \frac{1-bn}{n^k} < 1$$
This again implies,
\begin{align*}
\ceil[\Bigg]{\{b\} + \frac{1 - bn}{n^k}} = 1 \numberthis \label{step7:ineq4}
\end{align*}
Using results from \eqref{step7:ineq3} and \eqref{step7:ineq4} in inequality \eqref{step7:ineq2}, we have
$$1 \leq \ceil[\Bigg]{\{b\} + \frac{d_0 - bn}{n^k}} \leq 1$$
Therefore,
$$ \ceil[\Bigg]{\{b\} + \frac{d_0 - bn}{n^k}} = 1$$
and $d_k = mj + \floor{b} + 1$. As there is a specific value of $d_k$ after $k$ iterations, thus $(C, n, j, k)$ is solvable in this case with $(C, n, j, k) = mj + \floor{b} + 1$.
\enspace
\noindent
Now suppose $t = \dfrac{bn-1}{\{b\}}$. Thus $k > \log_n\Big(\dfrac{bn-1}{\{b\}}\Big) \geq \log_n\Big(\dfrac{C-bn}{1-\{b\}} \Big)$. We follow the same argument as before to conclude that
$$ \ceil[\Bigg]{\{b\} + \frac{d_0 - bn}{n^k}} = 1$$
and $d_k = mj + \floor{b} + 1$. Again, as there is a specific value of $d_k$ after $k$ iterations, thus $(C, n, j, k)$ is solvable in this case with $(C, n, j, k) = mj + \floor{b} + 1$.
\noindent
Now we prove the other direction.
\begin{center}
\underline{\textbf{Part 2}}
\end{center}
We have to prove that for $C, n>1$ and $0<j<n-1$, if $(C, n, j, k)$ is solvable then $(n-1) \nmid mj$. We will however prove the contrapositive of this statement as they are equivalent. Hence we will show that for $C, n>1$ and $0<j<n-1$, if $(n-1) \mid mj$, then $(C, n, j, k)$ is not solvable.
\noindent
As $(n-1) \mid mj$, thus $b \in \mathbb{Z}$. Using Theorem \ref{thm6},
\begin{align*}
d_k = mj + \ceil[\Bigg]{\frac{b(n-1)n\Big(\dfrac{n^{k-1}-1}{n-1}\Big) + d_0}{n^k}} & = mj + \ceil[\Bigg]{\frac{b(n^k-n)}{n^k} + \frac{d_0}{n^k}} \\
& = mj + \ceil[\Bigg]{b + \frac{d_0 - bn}{n^k}}\\
& = mj + b + \ceil[\Bigg]{\frac{d_0 - bn}{n^k}} \text{ as } b \in \mathbb{Z}
\end{align*}
We know that $1 \leq d_0 \leq C$. Thus,
\begin{alignat*}{2}
\frac{1 - bn}{n^k} &\leq &\frac{d_0 - bn}{n^k} \, \, \, &\leq \frac{C - bn}{n^k} \numberthis \label{step7:ineq5}\\
\text{Hence, } \qquad \ceil[\Bigg]{\frac{1 - bn}{n^k}} &\leq &\ceil[\Bigg]{\frac{d_0 - bn}{n^k}} &\leq \ceil[\Bigg]{\frac{C - bn}{n^k}} \numberthis \label{step7:ineq6}
\end{alignat*}
We have already seen that $C > bn$. Thus,
\begin{align*}
\frac{C-bn}{n^k} &> 0 \\
\ceil[\Bigg]{\frac{C-bn}{n^k}} & > 0
\end{align*}
Similarly, we have also seen that $bn > 1$. Hence,
\begin{align*}
\frac{1-bn}{n^k} &< 0 \\
\ceil[\Bigg]{\frac{1-bn}{n^k}} & \leq 0
\end{align*}
As $\ceil[\Bigg]{\dfrac{C-bn}{n^k}} \in \{1, 2, 3, \ldots\}$ and $\ceil[\Bigg]{\dfrac{1-bn}{n^k}} \in \{0, -1, -2, \ldots \}$, hence from inequality \eqref{step7:ineq6}, $\ceil[\Bigg]{\dfrac{d_0 - bn}{n^k}}$ does not yield any specific integer for any $k$. Thus, $d_k = mj + b + \ceil[\Bigg]{\dfrac{d_0 - bn}{n^k}}$ is also not a specific integer for any $k$. Therefore, $(C, n, j, k)$ is not solvable.
\end{proof}
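Both directions of Step (7) can be illustrated numerically (ours, not the paper's). In the solvable direction, $C = 20$, $n = 4$, $j = 2$ gives $m = 5$, $b = 10/3$, $(n-1) = 3 \nmid mj = 10$, $t = 37$, and the final position is $mj + \floor{b} + 1 = 14$ for every $d_0$. In the unsolvable direction, $C = 6$, $n = 3$, $j = 1$ gives $(n-1) = 2 \mid mj = 2$, and the final position never stops depending on $d_0$.

```python
# Numerical check of both directions of Step (7) (ours, not the paper's).

def after(C, n, j, k, d0):
    """Deck id after k iterations: d -> m*j + ceil(d/n)."""
    d, m = d0, C // n
    for _ in range(k):
        d = m * j + -(-d // n)  # ceiling division for positive integers
    return d

# Solvable: k = 3 > log_4 37, and the result is 14 regardless of d0.
assert {after(20, 4, 2, 3, d0) for d0 in range(1, 21)} == {14}
# Unsolvable: even after many iterations, two final positions survive.
assert {after(6, 3, 1, 10, d0) for d0 in range(1, 7)} == {3, 4}
```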
Note that from Theorems \ref{thm:step3}, \ref{thm:step5}, \ref{thm:step6} and \ref{thm:step7}, we see that the value of a solvable trick $(C, n, j, k)$ does not depend on $k$. Hence, two solvable tricks $(C_1, n_1, j_1, k_1)$ and $(C_2, n_2, j_2, k_2)$ will be considered the \textbf{same trick} if $C_1 = C_2, n_1 = n_2,$ and $j_1 = j_2$.
\section{Other Interesting Results}
\noindent
Now that we have a complete mathematical understanding of the trick, we ask ourselves some interesting questions.
\begin{enumerate}
\item Given a number of cards $C$, how many solvable magic tricks are there?
\item How does our algorithm solve the 21CT?
\item Assuming $C, n > 1$, what choices of $C$ and $n$ will guarantee that $(C, n, j, k)$ is solvable for all $0 \leq j \leq n-1$ and an appropriate $k$?
\item Assuming $C, n > 1$, what choices of $C$ and $n$ will guarantee that $(C, n, j, k)$ is not solvable for any $0 < j < n-1$?
\end{enumerate}
We answer these questions in the form of a theorem and several corollaries.
\begin{theorem}\label{thm:interestingresult1}
For a given number of cards $C \geq 1$, the number of solvable magic tricks $p$, is given by
$$p =
\begin{cases}
1, & C = 1 \\
\sum\limits_{\substack{n>1 \\ n \mid C }} \Big(n+1 - \gcd\Big(\frac{C}{n}, n-1\Big)\Big), & C>1
\end{cases}
$$
\end{theorem}
\begin{proof}
\noindent
\underline{Case 1 ($C=1$):} If $C= 1$, then $n = 1$, therefore, there is only one trick $(1, 1, 0, k)$ which is solvable for any $k >0$ by Theorem \ref{thm:step3}. Thus $p=1$.
\enspace
\noindent
\underline{Case 2 ($C>1$):} If $n = 1$, then there is again only one trick $(C, 1, 0, k)$ which is not solvable for any $k>0$ by Theorem \ref{thm:step4}. Therefore, $p=0$.
Now suppose $C>1$ and $n>1$. We choose a fixed $n$ that is a divisor of $C$. We will first find out how many tricks are not solvable. By Theorem \ref{thm:step7}, if $0 < j < n-1$, then $(C, n, j, k)$ is not solvable for any $k$, if $(n-1) \mid mj$ where $m = \dfrac{C}{n}$. As $0 < j < n-1$, thus $(n-1) \nmid j$. The possible values of $mj$ are $m, 2m, 3m, \ldots, (n-2)m$. There are $n-2$ such values. We thus need to find out which multiples of $m$ are also multiples of $n-1$ for the trick $(C, n, j, k)$ to be unsolvable. These are precisely the multiples of $\lcm(m, n-1)$, by the definition of the least common multiple. We need to find out how many multiples of $\lcm(m, n-1)$ exist in the set $\{m, 2m, 3m, \ldots, (n-2)m\}$. This is given by $\dfrac{m(n-1)}{\lcm(m, n-1)} - 1$. We subtract 1 as $m(n-1)$ is one such multiple that does not belong to the set. From elementary number theory \cite{Burton}, we know that $\dfrac{m(n-1)}{\lcm(m, n-1)} = \gcd(m, n-1)$. Thus, the number of tricks $(C, n, j, k)$ that are not solvable for $0 < j < n-1$ is $\gcd(m, n-1) - 1$. So, the number of tricks that are solvable for $0 < j < n-1$ is $(n-2) - (\gcd(m, n-1) - 1) = n-1 - \gcd(m, n-1)$.
\enspace
\noindent
Now for $j = 0$ and $j=n-1$, we have seen that $(C, n, j, k)$ is solvable by Theorems \ref{thm:step5} and \ref{thm:step6}. These add 2 more solvable tricks. Hence, the total number of solvable tricks $(C, n, j, k)$ for a fixed $n$ and an appropriate $k$ is
$$n-1 - \gcd(m, n-1) + 2 = n+1 -\gcd\Big(\frac{C}{n}, n-1\Big)$$
As $n > 1$ cycles through the divisors of $C$, we can see the total number of solvable tricks for a particular $C$ to be
$$p = \sum\limits_{\substack{n>1 \\ n \mid C }} \Big(n+1 - \gcd\Big(\frac{C}{n}, n-1\Big)\Big)$$
\end{proof}
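The two counts in this proof, the $\gcd$ formula and the direct case analysis of Steps (5)--(7), can be cross-checked by a short computation (ours, not the paper's):

```python
# Cross-check (ours): the gcd formula of the theorem versus a direct
# count of solvable (n, j) pairs using Steps (5)-(7).
from math import gcd

def p_formula(C):
    """The count given by the theorem above."""
    if C == 1:
        return 1
    return sum(n + 1 - gcd(C // n, n - 1)
               for n in range(2, C + 1) if C % n == 0)

def p_direct(C):
    """Count solvable (n, j) pairs directly from Steps (5)-(7)."""
    if C == 1:
        return 1
    total = 0
    for n in range(2, C + 1):
        if C % n:
            continue
        m = C // n
        for j in range(n):
            # j = 0 and j = n-1 are always solvable; otherwise
            # solvability means (n-1) does not divide mj.
            if j in (0, n - 1) or (m * j) % (n - 1) != 0:
                total += 1
    return total
```

For instance, `p_formula(21)` is $29$: the divisors $n = 3, 7, 21$ contribute $3$, $5$ and $21$ solvable tricks respectively.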
\begin{corollary}
The 21CT, $(21, 3, 1, 3)$ is solvable.
\end{corollary}
\begin{proof}
We use Step (7) of our algorithm as $C, n >1$ and $0 < j < 2$. Here $m = 21/3 = 7$, $mj = 7\cdot 1 = 7$, and $n-1 = 2$. As $2 \nmid 7$, $(21, 3, 1, k)$ is solvable for any integer $k > \log_nt$. To find $t$, we need $b$ and $\{b\}$: $b = \dfrac{mj}{n-1} = \dfrac{7}{2}$, hence $\floor{b} = 3$ and $\{b\} = \dfrac{1}{2}$.
\noindent
$t = \max \Bigg\{ \dfrac{21-\Big(\dfrac{7}{2}\Big)\cdot 3}{1-\dfrac{1}{2}},
\dfrac{\Big(\dfrac{7}{2}\Big)\cdot 3 - 1}{\dfrac{1}{2}}\Bigg\} = \max \{21, 19\} = 21$. As $k > \log_3 21 \approx 2.771$, the smallest valid choice is $k = 3$. Thus, the 21CT, $(21, 3, 1, 3)$ is solvable. Moreover, $d_3 = mj + \floor{b} + 1 = 7 + 3 + 1 = 11$. Therefore, $(21, 3, 1, 3) = 11$.
\end{proof}
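The corollary can also be confirmed by simulating the physical trick itself (ours, not the paper's). We assume the dealing model underlying Theorem \ref{thm4}: cards are dealt round-robin into $n$ stacks, each stack keeps its deal order (first-dealt card on top), and the deck is reassembled with $j$ stacks above the stack holding the chosen card.

```python
# Full simulation of the 21 card trick (ours): whichever card the
# spectator picks, after 3 iterations it sits at deck position 11.

def iterate(deck, n, j, target):
    """One iteration: round-robin deal into n stacks, then reassemble
    with j stacks on top of the stack containing the target card."""
    stacks = [deck[i::n] for i in range(n)]  # each stack keeps deal order
    s = next(i for i, st in enumerate(stacks) if target in st)
    others = [st for i, st in enumerate(stacks) if i != s]
    return ([c for st in others[:j] for c in st] + stacks[s]
            + [c for st in others[j:] for c in st])

for target in range(1, 22):          # any card the spectator might pick
    deck = list(range(1, 22))        # position 1 = top of the deck
    for _ in range(3):
        deck = iterate(deck, 3, 1, target)
    assert deck.index(target) + 1 == 11
```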
\begin{corollary}\label{cor:intresults3}
Suppose $C, n > 1$. If $\gcd\Big((n-1), \dfrac{C}{n}\Big) = 1$, then $(C, n, j, k)$ is solvable for all $0 \leq j \leq n-1$ and an appropriate $k$.
\end{corollary}
\begin{proof}
By Theorem \ref{thm:interestingresult1}, the number of solvable tricks for a specific $n$, is equal to $n+1-\gcd\Big(\dfrac{C}{n}, n-1\Big)$. As $\gcd\Big(\dfrac{C}{n}, n-1\Big) = 1$, hence the number of solvable tricks is $n+1 - 1 = n$ which are for all the possible values of $j = 0, 1, 2, \ldots, n-1$. Hence, $(C, n, j, k)$ is solvable for all $0 \leq j \leq n-1$ and an appropriate $k$.
\end{proof}
\begin{comment}
\begin{corollary}
Suppose $C, n > 1$. If $n=C$, then $(C, n, j, k)$ is solvable for all $0 \leq j \leq n-1$ and $k \geq 1$. In this case, $(C, n, j, k) = j+1$.
\end{corollary}
\begin{proof}
Here $\gcd(n-1, \dfrac{C}{n}) = \gcd(C-1, 1) = 1$. Hence, by Corollary \ref{cor:intresults3}, $(C, n, j, k)$ is solvable for all $0 \leq j \leq n-1$.
\end{proof}
\end{comment}
\begin{corollary}
Suppose $C, n > 1$. If $C = n(n-1)$, then $(C, n, j, k)$ is not solvable for any $0 < j < n-1$.
\end{corollary}
\begin{proof}
Using previous notation, $m = \dfrac{C}{n}$, we see that $C = n(n-1)$ is the same as $m = n-1$. Thus, $n-1 \mid mj$ for all $0 < j < n-1$. Hence, by Theorem \ref{thm:step7}, $(C, n, j, k)$ is not solvable.
\end{proof}
\section{Conclusion}
We have found a complete answer to the \hyperref[gen-ques]{general question} posed at the start of Section 4 (Main Result). The \hyperref[algorithm]{algorithm} presented in that section shows us a way of determining if a trick is solvable while starting with $C$ cards, split into $n$ stacks, with $j$ stacks going on top of the stack with the desired card after every iteration. In brief, here is a summary of questions that are answered in this paper.
\begin{itemize}
\item If Magi is handed $C$ $(>1)$ cards and is asked to split them into $n$ $(>1)$ stacks, without any specific $j$, then he can perform the trick $(C, n, 0, k)$ or $(C, n, n-1, k)$ for appropriate $k$'s as they are solvable due to Theorem \ref{thm:step5}, and \ref{thm:step6} respectively.
\item If Magi is handed $C$ $(>1)$ cards and is asked to split them into $n$ $(>1)$ stacks, with a specific $0<j <n-1$, then he can use Theorem \ref{thm:step7} to determine if the trick $(C, n, j, k)$ is solvable for any $k$ and proceed.
\end{itemize}
In addition, we also found the total number of solvable tricks for a given $C$ in Theorem \ref{thm:interestingresult1}. Finally, using our algorithm we list some solvable tricks other than the 21CT below, that can be performed by anyone with a deck of cards to impress their friends.
\smallskip
\noindent
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
$(20, 4, 2, 3) = 14$ & $(28, 4, 2, 3) = 19$ & $(36, 6, 4, 3) = 29$ \\
\hline
$(21, 7, 5, 2) = 18$ & $(30, 5, 3, 3) = 23$ & $(36, 9, 3, 2) = 14$ \\
\hline
$(24, 6, 4, 3) = 20$ & $(32, 4, 2, 3) = 22$ & $(39, 3, 1, 4) = 20$ \\
\hline
$(25, 5, 3, 3) = 19$ & $(33, 3, 1, 4) = 17$ & $(40, 4, 2, 3) = 27$ \\
\hline
$(27, 3, 1 , 4) = 14$ & $(35, 5, 3, 3) = 27$ & $(40, 8, 5, 2) = 29$ \\
\hline
\end{tabular}
\caption{List of 15 solvable tricks}
\end{table}
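The table above can be double-checked by simulation (ours, not the paper's): iterate the recurrence $d \mapsto mj + \ceil[\big]{d/n}$ of Theorem \ref{thm5} exactly $k$ times from every possible starting position and confirm the listed value.

```python
# Verifying the table of 15 solvable tricks (ours): for each listed
# trick (C, n, j, k) = v, iterating d -> m*j + ceil(d/n) k times must
# give v from every possible starting deck id d0.
table = {
    (20, 4, 2, 3): 14, (28, 4, 2, 3): 19, (36, 6, 4, 3): 29,
    (21, 7, 5, 2): 18, (30, 5, 3, 3): 23, (36, 9, 3, 2): 14,
    (24, 6, 4, 3): 20, (32, 4, 2, 3): 22, (39, 3, 1, 4): 20,
    (25, 5, 3, 3): 19, (33, 3, 1, 4): 17, (40, 4, 2, 3): 27,
    (27, 3, 1, 4): 14, (35, 5, 3, 3): 27, (40, 8, 5, 2): 29,
}
for (C, n, j, k), v in table.items():
    m = C // n
    for d0 in range(1, C + 1):
        d = d0
        for _ in range(k):
            d = m * j + -(-d // n)  # ceiling division for positive ints
        assert d == v, (C, n, j, k, d0, d)
```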
| {
"timestamp": "2018-10-22T02:11:28",
"yymm": "1809",
"arxiv_id": "1809.04072",
"language": "en",
"url": "https://arxiv.org/abs/1809.04072",
"abstract": "The 21 card trick is well known. It was recently shown in an episode of the popular YouTube channel Numberphile. In that trick, the audience is asked to remember a card, and through a series of steps, the magician is able to find the card. In this article, we look into the mathematics behind the trick, and look at a complete generalization of the trick. We show that this trick can be performed with any number of cards.",
"subjects": "Combinatorics (math.CO)",
"title": "The 21 Card Trick and its Generalization",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9850429156022934,
"lm_q2_score": 0.8652240825770432,
"lm_q1q2_score": 0.8522828529510101
} |
https://arxiv.org/abs/2007.14389 | Asymptotic behaviour of minimal complements | The notion of minimal complements was introduced by Nathanson in 2011 as a natural group-theoretic analogue of the metric concept of nets. Given two non-empty subsets $W,W'$ in a group $G$, the set $W'$ is said to be a complement to $W$ if $W\cdot W'=G$ and it is minimal if no proper subset of $W'$ is a complement to $W$. The inverse problem asks which sets may or not occur as minimal complements. We show some new results on the inverse problem and investigate how the study of the inverse problem naturally gives rise to questions about the asymptotic behaviour of these sets, providing partial answers to some of them. | \section{Introduction}
\subsection{Motivation}
Let $A, B$ be non-empty subsets in a group $G$. The set $A$ is said to be a left (resp. right) complement to $B$ if $A \cdot B = G$ (resp. $B\cdot A = G$). The set $A$ is a minimal left complement to $B$ if $$A \cdot B = G \textnormal{ and } (A\setminus \lbrace a\rbrace )\cdot B \subsetneq G \text{ for any } a\in A.$$
The minimal right complements are defined analogously. In the literature, complements (as defined above) are also known as additive or multiplicative complements (depending on the group structure) to distinguish them from set-theoretic complements, but in this article we shall use the term complement or minimal complement to mean the above.
The study of minimal complements began with Nathanson in \cite{NathansonAddNT4}, who introduced the notion in the context of additive number theory and geometric group theory as an analogue of the metric concept of nets adapted to groups. Indeed, (group-theoretic) nets and minimal nets are related with complements and minimal complements. See \cite[Lemma 2]{NathansonAddNT4}. In the same article, he posed certain questions regarding the classification of sets which admit minimal complements. We shall refer to these problems as the direct problems. Works on the direct problems include those of Nathanson \cite{NathansonAddNT4}, Chen--Yang \cite{ChenYang12}, Kiss--S\'{a}ndor--Yang \cite{KissSandorYangJCT19}, the authors \cite{MinComp1}, \cite{MinComp2} etc. Recently, the study of sets which may or may not occur as minimal complements has also become popular. We shall refer to them as the inverse problems (a term coined by Alon--Kravitz--Larson). Works mainly on the inverse problems include those of Kwon \cite{Kwon}, Alon--Kravitz--Larson \cite{AlonKravitzLarson}, Burcroff--Luntzlara \cite{BurcroffLuntzlara}, the authors \cite{CoMin1,CoMin2, CoMin3, CoMin4} etc. (in fact, \cite{CoMin2}, \cite{CoMin3} deal with co-minimal pairs (see Definition \ref{Def:CoMin}) and hence are concerned with both the direct and the inverse problems). It is often the case that the inverse problems are harder to answer, e.g., even if we restrict to finite groups, it is easy to see that any non-empty subset admits a minimal complement, but the corresponding inverse problem, asking whether any non-empty subset occurs as a minimal complement or not, has a negative answer. This forms the basis of our investigation on the asymptotic behaviour of sets which occur as minimal complements.
\subsection{Results obtained}
In the context of groups of several kinds, it follows from the works of Alon--Kravitz--Larson (see Proposition \ref{Prop:23rdBdd}), Burcroff--Luntzlara \cite[Lemma 5]{BurcroffLuntzlara} and that of the authors \cite[Theorem C]{CoMin1}, \cite[Corollary 2.9]{CoMin1} (see also \cite{CoMin4}) that ``large'' subsets cannot occur as minimal complements.
On the other hand, the recent works of Kwon \cite[Theorem 9]{Kwon}, Alon--Kravitz--Larson (Theorem \ref{Thm:SmallWorks}) and the authors \cite[Theorem B]{CoMin1} show that ``small'' subsets of several groups are minimal complements. However, it had not been established that any nonempty finite subset of any infinite group is a minimal complement (to the best of the authors' knowledge). Using the ideas of the proof of \cite[Theorem 16]{AlonKravitzLarson}, we prove Theorem \ref{Thm:SmallInNon-abelian}, which implies that this is indeed the case (see Corollary \ref{Cor:SmallInNon-abelian}).
\begin{theorem}\label{Thm:SmallInNon-abelian}
If $C$ is a nonempty finite subset of a group $G$ such that $|G| > |C|^5 - |C|^4$, then $C$ is a minimal complement in $G$.
\end{theorem}
\begin{corollary}\label{Cor:SmallInNon-abelian}
Any nonempty finite subset of any infinite group is a minimal complement.
\end{corollary}
The above corollary generalizes \cite[Theorem 9]{Kwon}, \cite[Theorem B]{CoMin1}, \cite[Theorem 2]{AlonKravitzLarson}.
Moreover, increasing our understanding of which sets occur as minimal complements, the following result is also shown in section \ref{Sec:MinComp}.
\begin{theorem}\label{Thm:UnionOfCosetsInZ^d}
Let $d, n, k$ be positive integers such that
$$k \leq \frac{n^{d/3}} {2(d \log_2 n)^{2/3}}.$$
Let $\ensuremath{\mathcal{X}}_1, \cdots, \ensuremath{\mathcal{X}}_k$ be subsets of $\ensuremath{\mathbb{Z}}^d$ which are minimal complements in $\ensuremath{\mathbb{Z}}^d$. Let $c_1, \cdots, c_k$ be elements of $\ensuremath{\mathbb{Z}}^d$ which are pairwise distinct modulo $n\ensuremath{\mathbb{Z}}^d$. Then
$\cup_{1 \leq i \leq k} (c_i + n\ensuremath{\mathcal{X}}_i)$
is a minimal complement in $\ensuremath{\mathbb{Z}}^d$.
Moreover, if any nonempty subset of $\ensuremath{\mathbb{Z}}^d$ having finite symmetric difference with any one of $\ensuremath{\mathcal{X}}_1, \cdots, \ensuremath{\mathcal{X}}_k$ is a minimal complement in $\ensuremath{\mathbb{Z}}^d$, then any nonempty subset of $\cup_{1\leq i \leq k} (c_i + n\ensuremath{\mathbb{Z}}^d)$ having finite symmetric difference with
$\cup_{1 \leq i \leq k} (c_i + n\ensuremath{\mathcal{X}}_i)$
is a minimal complement in $\ensuremath{\mathbb{Z}}^d$.
\end{theorem}
Let $G$ be a finite group of order $n$ and consider the collection $\mathcal{C}$ of all non-empty subsets of $G$ which occur as minimal complements. There are several immediate questions about the elements of $\mathcal{C}$, e.g., what are the sizes of the elements of $\mathcal{C}$, what are the asymptotic properties of the sizes as $n\rightarrow \infty$, what are the integers $k$ between $1$ and $n$ such that any subset (or some subset) of $G$ of size $k$ is a minimal complement?
In a prior work of the authors, some such questions were asked \cite[Question 1]{CoMin1}.
One can also study these questions by restricting to particular classes of groups, e.g., in the context of cyclic groups, or abelian groups, or non-abelian groups. We shall investigate these questions and provide partial answers to some of them in section \ref{Sec:Limit}. In section \ref{Sec:AsyCoMin}, we shall study the above questions for co-minimal pairs (see Definition \ref{Def:CoMin}).
\section{Background literature}
\label{Sec:BackLit}
To study the asymptotic behaviour of minimal complements and that of co-minimal pairs we shall repeatedly use some previous results. Most of them are very recent and not yet in widespread use, so it is worthwhile to collect them here.
\begin{theorem}
[{\cite[Theorem 1]{AlonKravitzLarson}}]\label{Thm:SmallWorks}
Let $G$ be a group of order $n\geq 2$. If $C$ is a nonempty subset of $G$ of size
$$\leq \frac{n^{1/3}}{2(\log_2 n)^{2/3}},$$
then $C$ is a minimal left complement to some subset in $G$ and it is a minimal right complement to some subset in $G$.
\end{theorem}
\begin{proposition}
[{\cite[Proposition 13]{AlonKravitzLarson}}]
\label{Prop:23rdBdd}
Let $G$ be a finite group. If a subset $C$ of $G$ is a minimal complement to some subset $W$,
then
$$|C|
\leq |G| \frac{|W|} {2|W| -1}.$$
\end{proposition}
Alon, Kravitz and Larson showed the above results in the context of abelian groups, but it can be seen that their proof extends to the setting when $G$ is not assumed to be abelian (by replacing the sums of the form $a+b$ (resp. $a-b$) by $a\cdot b$ (resp. $a\cdot b^{-1}$), and the sets of the form $a+B$ (resp. $a-B$) by $a\cdot B$ (resp. $a\cdot B^{-1}$)). Recently, Burcroff and Luntzlara have proved a result which is more general than
Proposition \ref{Prop:23rdBdd} in the context of abelian groups
\cite[Lemma 5]{BurcroffLuntzlara}.
\begin{theorem}
[{\cite[Theorem B]{CoMin1}}]
Given any nonempty subset $S$ of a group $G$ with $|S|\leq 2$, there are subsets $L, R$ of $G$ such that $(S, R), (L, S)$ are co-minimal pairs.
\end{theorem}
\begin{proposition}
[{\cite[Proposition 2.17]{CoMin4}}]
\label{Prop:fini}
Let $G$ be a finite group and $C$ be a subset of a subgroup $H$ of $G$ satisfying
$$ |H| > |C| > 2[G:H] |H\setminus C|.$$
Then $C$ is not a minimal complement to any subset of $G$.
\end{proposition}
Proposition \ref{Prop:fini} was first established by Alon, Kravitz and Larson in the context of abelian groups \cite[Proposition 17]{AlonKravitzLarson}.
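To make these statements concrete, one can brute-force which subsets of a small cyclic group occur as minimal complements (the sketch below is ours, not from the paper, and the function name is our invention). A singleton, and in fact the pair $\{0,1\}$, occur as minimal complements, whereas a set $C \subset \mathbb{Z}/8\mathbb{Z}$ with $|G| > |C| > 2|G\setminus C|$ does not, illustrating Proposition \ref{Prop:fini} with $H = G$.

```python
# Brute-force illustration (ours) in the cyclic group Z/nZ.
from itertools import combinations

def is_minimal_complement(C, n):
    """Does C occur as a minimal (right) complement to some W in Z/nZ,
    i.e. W + C = Z/nZ while no proper subset of C still covers?"""
    G = set(range(n))
    for r in range(1, n + 1):
        for W in combinations(range(n), r):
            if {(w + c) % n for w in W for c in C} != G:
                continue
            if all({(w + c) % n for w in W for c in C if c != c0} != G
                   for c0 in C):
                return True
    return False

# A singleton is always a minimal complement (take W to be all of G).
assert is_minimal_complement({0}, 8)
# C = {0,...,5} in Z/8Z satisfies 8 = |G| > |C| = 6 > 2|G \ C| = 4,
# so by the proposition above (with H = G) it is not one.
assert not is_minimal_complement({0, 1, 2, 3, 4, 5}, 8)
```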
\begin{proposition}
\label{Prop:CoMinCartesian}
If $G_1, G_2$ are groups and $(A_1, B_1)$ (resp. $(A_2, B_2)$) is a co-minimal pair in $G_1$ (resp. $G_2$), then $(A_1 \times A_2, B_1\times B_2)$ is a co-minimal pair in $G_1\times G_2$.
\end{proposition}
The above result is a special case of \cite[Proposition 3.2]{CoMin1}.
\section{Sets occurring as minimal complements}
\label{Sec:MinComp}
In this section, we exhibit several sets that occur as minimal complements.
\begin{theorem}
\label{Thm:MinCompTwoSidedTra}
Let $C$ be a nonempty subset of a group $G$. Assume that the only right translate of $C$ contained in $C$ is $C$ itself. If for each $c\in C$ there exists an element $g_c\in G$ such that the sets $g_c \cdot C^{-1} \cdot C$ are pairwise disjoint, then $C$ is a minimal right complement in $G$.
Moreover, if $C$ is finite and
$$|G| > |C|^5 - |C|^4,$$
then $C$ is a minimal complement in $G$.
\end{theorem}
\begin{proof}
For $c\in C$, let $w_c$ denote the element $g_c \cdot c^{-1}$. It follows that the sets $w_c\cdot c \cdot C^{-1} \cdot C$ are pairwise disjoint.
It also follows that the sets $w_c\cdot c \cdot C^{-1}$ are pairwise disjoint.
Let $W$ denote the union of the sets $\{w_c\,|\,c\in C\}$ and $G \setminus (\cup_{c\in C} w_c \cdot c \cdot C^{-1})$.
Choose an element $z$ of $G$. If $G \setminus (\cup_{c\in C} w_c \cdot c \cdot C^{-1})$ contains some element of $z\cdot C^{-1}$, then $W\cdot C$ contains $z$. Suppose $z\cdot C^{-1}$ is contained in $\cup_{c\in C} w_c \cdot c \cdot C^{-1}$.
Since the sets $w_c\cdot c \cdot C^{-1} \cdot C$ are pairwise disjoint, it follows that $z\cdot C^{-1}$ is contained in $w_c \cdot c \cdot C^{-1}$ for exactly one $c\in C$. By the hypothesis, we obtain $z\cdot C^{-1} = w_c\cdot c \cdot C^{-1}$. So $z$ belongs to $W\cdot C$.
It follows that $W\cdot C = G$. Since the sets $w_c\cdot c \cdot C^{-1}$ are pairwise disjoint, it follows that $w_c \cdot c \notin w_d \cdot C$ for any two distinct $c, d \in C$. So, $C$ is a minimal right complement to $W$.
Note that given finite subsets $A_1, \cdots, A_r$ of $G$, there exist elements $g_1, \cdots, g_r$ in $G$ such that $g_1\cdot A_1, \cdots, g_r\cdot A_r$ are pairwise disjoint
if
$$|G| > |A_1 \cdot A_s^{-1} | + \cdots + |A_{s-1} \cdot A_s^{-1}| $$
holds for any $1< s \leq r$.
If $|G| > |C|^5 - |C|^4$, then, writing $C = \{c_1, \cdots, c_k\}$, there exist $w_1, \cdots, w_k$ in $G$ such that the sets $w_i\cdot c_i \cdot C^{-1} \cdot C$ are pairwise disjoint.
Let $W$ denote the union of the sets $\{w_1, \cdots, w_k\}$ and $G \setminus (\cup_{1\leq i \leq k} w_i \cdot c_i \cdot C^{-1})$.
Choose an element $z$ of $G$.
Since the sets $w_i\cdot c_i \cdot C^{-1} \cdot C$ are pairwise disjoint, at most one of $w_1 \cdot c_1 \cdot C^{-1}, \cdots, w_k \cdot c_k \cdot C^{-1}$ intersects with $z\cdot C^{-1}$.
If $w_i\cdot c_i \cdot C^{-1}$ intersects with $z\cdot C^{-1}$ for some $i$, then either $z\cdot C^{-1} = w_i\cdot c_i \cdot C^{-1}$, in which case $w_i \in z\cdot C^{-1}$ and hence $z$ belongs to $w_i \cdot C \subseteq W\cdot C$,
or $z\cdot C^{-1} \neq w_i\cdot c_i \cdot C^{-1}$, in which case (both sets having cardinality $|C^{-1}|$) some element of $z\cdot C^{-1}$ lies outside $\cup_{1\leq j \leq k} w_j \cdot c_j \cdot C^{-1}$, hence in $W$, so again $z$ belongs to $W\cdot C$.
If none of $w_1 \cdot c_1 \cdot C^{-1}, \cdots, w_k \cdot c_k \cdot C^{-1}$ intersects with $z\cdot C^{-1}$, then $z\in W\cdot C$. It follows that $W\cdot C = G$. Since the sets $w_i\cdot c_i \cdot C^{-1} \cdot C$ are pairwise disjoint, it follows that the sets $w_i \cdot c_i \cdot C^{-1}$ are pairwise disjoint, and hence $w_i \cdot c_i \notin w_j \cdot C$ for any two distinct $i, j$. So, $C$ is a minimal right complement to $W$.
\end{proof}
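The construction in the proof above can be carried out explicitly in small examples. The following computational sketch (hypothetical code, not part of the paper) works additively in $G = \ensuremath{\mathbb{Z}}/200\ensuremath{\mathbb{Z}}$ with $C = \{0, 1, 3\}$; here $|G| = 200 > |C|^5 - |C|^4 = 162$, and no nonzero translate of $C$ is contained in $C$, so the theorem applies.

```python
# Additive analogue of the proof's construction in G = Z/200Z, C = {0, 1, 3}.
n = 200
G = set(range(n))
C = [0, 1, 3]

def shift(S, g):
    # the translate g + S in Z/nZ
    return {(g + s) % n for s in S}

def sumset(A, B):
    return {(a + b) % n for a in A for b in B}

negC = {(-c) % n for c in C}
CC = sumset(negC, C)          # -C + C, the analogue of C^{-1} . C

# Greedily pick w_c so that the sets w_c + c + (-C + C) are pairwise disjoint.
w, used = {}, set()
for c in C:
    g = next(g for g in range(n) if shift(CC, g + c).isdisjoint(used))
    w[c] = g
    used |= shift(CC, g + c)

# W = {w_c : c in C}  union  G \ (union over c of w_c + c - C), as in the proof.
covered = set().union(*(shift(negC, w[c] + c) for c in C))
W = set(w.values()) | (G - covered)

assert sumset(W, C) == G                              # C is a complement to W
for c in C:                                           # and it is minimal:
    assert sumset(W, [d for d in C if d != c]) != G   # dropping any c breaks covering
```

The assertions check both that $W + C = G$ and that removing any single element of $C$ destroys the covering, which (by monotonicity of sumsets) is exactly minimality.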
\begin{proof}[Proof of Theorem \ref{Thm:SmallInNon-abelian}]
It follows from Theorem \ref{Thm:MinCompTwoSidedTra}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{Cor:SmallInNon-abelian}]
It follows from Theorem \ref{Thm:MinCompTwoSidedTra}.
\end{proof}
\begin{corollary}
For any $d\geq 1$, any nonempty bounded subset of $\ensuremath{\mathbb{Q}}^d$ is a minimal complement.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{Thm:MinCompTwoSidedTra}.
\end{proof}
One of the crucial steps of the proof of \cite[Theorem 16]{AlonKravitzLarson} is to establish the following.
\begin{proposition}
\label{Prop:ExistenceOfSKTuple}
Given any finite group $\Gamma$ of order $n$, two integers $s\geq 2, k\geq 1$ satisfying
$$\frac{s^2 k^3}{n} + \frac{e^s k^{3s}} {n^{s-1}} + k
\left(
\frac{s^2
k^3}{n}
\right)^s
<1$$
and a subset $C = \{c_1, \cdots, c_k\}$ of $\Gamma$ of size $k$, there exists an $sk$-tuple
$$
\mathbf w :=
\begin{pmatrix}
w_1^{(1)} & w_1^{(2)} & \cdots & w_1^{(s)}\\
w_2^{(1)} & w_2^{(2)} & \cdots & w_2^{(s)}\\
\vdots & \vdots & \ddots & \vdots \\
w_k^{(1)} & w_k^{(2)} & \cdots & w_k^{(s)}\\
\end{pmatrix}
$$
with values in $\Gamma$ such that each of the following statements is false.
\begin{enumerate}
\item
there exist distinct pairs $(i, p), (j, q)$ with $1\leq i, j\leq k, 1\leq p,q\leq s$ such that $w_i^{(p)} \cdot c_i \in w_j^{(q)} \cdot C$.
\item
there exist at least $s$ distinct pairs $(i, p)$ such that the corresponding sets $w_i^{(p)} \cdot c_i \cdot C^{-1} \cdot C$ have a nonempty intersection.
\item
there exists an integer $1\leq i \leq k$ such that for any $1\leq p \leq s$, there exist $z\in \Gamma$, $1\leq j \leq k, j \neq i, 1\leq q\leq s$ such that the following conditions hold.
\begin{itemize}
\item
$$
\frac ks <
|(w_i^{(p)} \cdot c_i \cdot C^{-1}) \cap (z\cdot C^{-1})| < k$$
\item
$w_j^{(q)} \cdot c_j \cdot C^{-1}$ contains the first element\footnote{The elements of $z\cdot C^{-1}$ are ordered according to the order of the elements of $C$.} of $(z\cdot C^{-1}) \setminus
(w_i^{(p)} \cdot c_i \cdot C^{-1})$.
\end{itemize}
\end{enumerate}
\end{proposition}
The above result is established by Alon, Kravitz and Larson in the context of abelian groups. Moreover, their argument works without the hypothesis that the underlying group is abelian, and it yields the above result.
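The hypothesis of the proposition is easy to test numerically. The following sketch (hypothetical code, not from the paper) finds, for a given set size $k$, the least group order $n$ for which some integer $s\geq 2$ satisfies $\frac{s^2k^3}{n} + \frac{e^sk^{3s}}{n^{s-1}} + k\left(\frac{s^2k^3}{n}\right)^s < 1$.

```python
# Locate the smallest n for which the proposition's hypothesis holds for some s.
import math

def condition(n, s, k):
    t = s * s * k**3 / n
    return t + math.e**s * k**(3 * s) / n**(s - 1) + k * t**s < 1

def least_order(k, s_range=range(2, 8)):
    n = 1
    while not any(condition(n, s, k) for s in s_range):
        n += 1
    return n

# For k = 2 the condition first holds at a group order in the low hundreds
# (attained via s = 3); the threshold grows quickly with k.
print(least_order(2), least_order(3))
```

Note that each term of the condition is decreasing in $n$ for fixed $s$ and $k$, so for each $s$ there is a single threshold order beyond which the hypothesis holds.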
\begin{theorem}\label{Thm:UnionOfCosets}
Let $G$ be a group and $H$ be a normal subgroup of $G$ of index $n\geq 1$.
Let $C = \{c_1, \cdots, c_k\} \cdot H$ be a subset of $G$, which is the union of $k$ distinct cosets of $H$ in $G$. Let $\ensuremath{\mathcal{C}}_1, \cdots, \ensuremath{\mathcal{C}}_k$ be subsets of $c_1 H, \cdots, c_k H$ respectively such that for any $1\leq i \leq k$, $c_i^{-1} \ensuremath{\mathcal{C}}_i$ is a minimal right complement in $H$. If there exists an integer $s\geq 2$ satisfying
$$\frac{s^2 k^3}{n} + \frac{e^s k^{3s}} {n^{s-1}} + k
\left(
\frac{s^2
k^3}{n}
\right)^s
<1,
$$
then the set
$$\ensuremath{\mathcal{C}}:=\cup_{1\leq i \leq k} \ensuremath{\mathcal{C}}_i$$
is a minimal right complement in $G$.
\end{theorem}
\begin{proof}
For $1\leq i \leq k$, let $\ensuremath{\mathcal{W}}_i$ be a subset of $H$ such that $c_i^{-1} \cdot \ensuremath{\mathcal{C}}_i$ is a minimal right complement to $\ensuremath{\mathcal{W}}_i$ in $H$. After replacing $\ensuremath{\mathcal{W}}_i$ by a left translate of it, we may assume that $\ensuremath{\mathcal{W}}_i$ contains the identity element.
In the following, the image of an element $x$ of $G$ under the mod $H$ reduction map $G\to G/H$ is denoted by $\overline x$.
By Proposition \ref{Prop:ExistenceOfSKTuple}, there exists an $sk$-tuple
$$
\mathbf w :=
\begin{pmatrix}
w_1^{(1)} & w_1^{(2)} & \cdots & w_1^{(s)}\\
w_2^{(1)} & w_2^{(2)} & \cdots & w_2^{(s)}\\
\vdots & \vdots & \ddots & \vdots \\
w_k^{(1)} & w_k^{(2)} & \cdots & w_k^{(s)}\\
\end{pmatrix}
$$
with values in $G$ such that the following conditions hold.
\begin{enumerate}[(i)]
\item
for any two distinct pairs $(i, p), (j, q)$ with $1\leq i, j\leq k, 1\leq p,q\leq s$, $\overline w_i^{(p)} \cdot \overline c_i \notin \overline w_j^{(q)} \cdot \overline C$.
\item
the number of pairs $(i, p)$ such that the corresponding sets $\overline w_i^{(p)} \cdot \overline c_i \cdot \overline C^{-1} \cdot \overline C$ have a nonempty intersection is less than $s$.
\item
for any $1\leq i \leq k$, there exists $1\leq p\leq s$ such that for any $z\in G$ and for any entry of the $sk$-tuple not lying in the $i$-th row (i.e., for any $j\neq i$ and $1\leq q\leq s$),
at least one of the following conditions is false.
\begin{itemize}
\item
$$
\frac ks <
|(\overline w_i^{(p)} \cdot \overline c_i \cdot \overline C^{-1}) \cap (\overline z\cdot \overline C^{-1})| < k$$
\item
$\overline w_j^{(q)} \cdot \overline c_j \cdot \overline C^{-1}$ contains the first element\footnote{The elements of $\overline z\cdot \overline C^{-1}$ are ordered according to the order $\overline c_1, \cdots, \overline c_k$ of the elements of $\overline C$.} of $(\overline z\cdot \overline C^{-1}) \setminus
(\overline w_i^{(p)} \cdot \overline c_i \cdot \overline C^{-1})$.
\end{itemize}
\end{enumerate}
For each $1\leq i \leq k$, choose an integer $1\leq p = p_i \leq s$ as in the third condition above. Let $W$ denote the union of the sets
$$
w_1^{(p_1)} c_1\ensuremath{\mathcal{W}}_1c_1^{-1}, \cdots, w_k^{(p_k)} c_k\ensuremath{\mathcal{W}}_kc_k^{-1}
$$
and $G \setminus (\cup_{1\leq i \leq k} w_i^{(p_i)}\cdot c_i \cdot C^{-1})$.
By the first condition, for any $1\leq i \leq k$, no element of
$w_i^{(p_i)}\cdot c_i H
=
w_i^{(p_i)}\cdot c_i \ensuremath{\mathcal{W}}_i c_i^{-1} \ensuremath{\mathcal{C}}_i
$
is contained in $w_j^{(p_j)} c_j \ensuremath{\mathcal{W}}_j c_j^{-1} \cdot \ensuremath{\mathcal{C}}$ for any $j\neq i$. So, it is enough to show that $W \cdot \ensuremath{\mathcal{C}} = G$ to conclude that $\ensuremath{\mathcal{C}}$ is a minimal right complement to $W$.
Choose an element $z$ of $G$. If $z\cdot C^{-1} = w_i^{(p_i)} \cdot c_i \cdot C^{-1}$ for some $i$,
then
$$z\cdot c_{i_z}^{-1} \cdot h_z = w_i^{(p_i)}$$
for some $1\leq i_z\leq k, h_z\in H$. This shows that
$$
z
= w_i^{(p_i)} c_{i_z} c_{i_z}^{-1} h_z^{-1} c_{i_z}.$$
Since $H$ is normal in $G$, it follows that
\begin{align*}
z
& \in w_i^{(p_i)} c_{i_z} H \\
& =
w_i^{(p_i)} c_{i_z} \ensuremath{\mathcal{W}}_{i_z} c_{i_z}^{-1} \ensuremath{\mathcal{C}}_{i_z} \\
&
\subseteq
W \cdot \ensuremath{\mathcal{C}}.
\end{align*}
So, $z$ lies in $W\cdot \ensuremath{\mathcal{C}}$.
Assume that $z\cdot C^{-1} \neq w_i^{(p_i)} \cdot c_i \cdot C^{-1}$ for any $i$, i.e., none of the $k$ sets $\overline w_1^{(p_1)} \cdot \overline c_1 \cdot \overline C^{-1}, \cdots, \overline w_k^{(p_k)} \cdot \overline c_k \cdot \overline C^{-1}$ contains $\overline z\cdot \overline C^{-1}$.
By the second condition, at most $s-1$ of them intersect with $\overline z\cdot \overline C^{-1}$. If the intersection of each such set with $\overline z\cdot \overline C^{-1}$ contains $\leq \frac ks$ elements, then the union of the $k$ sets $\overline w_1^{(p_1)} \cdot \overline c_1 \cdot \overline C^{-1}, \cdots, \overline w_k^{(p_k)} \cdot \overline c_k \cdot \overline C^{-1}$ does not contain $\overline z\cdot \overline C^{-1}$, and hence
the union of the $k$ sets $w_1^{(p_1)} \cdot c_1 \cdot C^{-1}, \cdots, w_k^{(p_k)} \cdot c_k \cdot C^{-1}$ does not contain any element of $z\cdot c_r^{-1} H$ for some $1\leq r\leq k$. So, the union of the $k$ sets $w_1^{(p_1)} \cdot c_1 \cdot C^{-1}, \cdots, w_k^{(p_k)} \cdot c_k \cdot C^{-1}$ does not contain any element of $z\cdot \ensuremath{\mathcal{C}}_r^{-1}$ for some $1\leq r\leq k$. It follows that $z$ lies in $W \cdot \ensuremath{\mathcal{C}}$.
If the intersection of one of those $s-1$ sets with $\overline z\cdot \overline C^{-1}$ contains $>\frac ks$ elements, then by the third condition, it follows that for some $i$, the first element of $(\overline z\cdot \overline C^{-1}) \setminus
(\overline w_i^{(p_i)} \cdot \overline c_i \cdot \overline C^{-1})$ is not contained in $\overline w_j^{(q)}\cdot \overline c_j \cdot \overline C^{-1}$ for any $j \neq i$ and for any $1\leq q \leq s$, and hence the union of the $k$ sets $\overline w_1^{(p_1)} \cdot \overline c_1 \cdot \overline C^{-1}, \cdots, \overline w_k^{(p_k)} \cdot \overline c_k \cdot \overline C^{-1}$ does not contain $\overline z\cdot \overline C^{-1}$, and consequently, $z$ is contained in $W\cdot \ensuremath{\mathcal{C}}$. So $\ensuremath{\mathcal{C}}$ is a minimal right complement to $W$ in $G$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm:UnionOfCosetsInZ^d}]
It follows from Theorem \ref{Thm:UnionOfCosets}.
\end{proof}
\section{Study of asymptotic behaviour}
\label{Sec:Limit}
For a finite group $G$ of order $n$, let $\ensuremath{\mathcal{A}}(G)$ (resp. $\ensuremath{\mathcal{S}}(G)$) denote the set of integers $1 \leq k \leq n$ such that every (resp. some) subset of $G$ of size $k$ is a minimal complement. Note that the inclusion
$$\ensuremath{\mathcal{A}}(G) \subseteq \ensuremath{\mathcal{S}}(G)$$
holds.
There are several immediate questions about the structure of these sets, and the common structure of these sets when $G$ runs over a certain class of groups. We explain them below.
For $* = \ensuremath{\mathrm{cyc}}$ (resp. $\ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}$), a finite group $G$ is said to be a $*$-group if it is cyclic (resp. abelian, nilpotent, supersolvable, solvable).
For $* = \emptyset$, a finite group $G$ is said to be a $*$-group if it satisfies no additional condition other than being a group.
For $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$ and for any positive integer $n$, consider the following subsets of $\{1, 2, \cdots, n\}$.
\begin{align*}
\ensuremath{\mathcal{A}}_n^*
& := \bigcap_{G \text{ is a $*$-group of order }n}\ensuremath{\mathcal{A}}(G) ,\\
\ensuremath{\mathcal{S}}_n^*
& := \bigcap_{G \text{ is a $*$-group of order }n}\ensuremath{\mathcal{S}}(G).
\end{align*}
The sets $\ensuremath{\mathcal{A}}_n^\emptyset, \ensuremath{\mathcal{S}}_n^\emptyset$ are also denoted by $\ensuremath{\mathcal{A}}_n, \ensuremath{\mathcal{S}}_n$ respectively.
In \cite[Question 1]{CoMin1}, the authors asked to determine the structures of the sets $\ensuremath{\mathcal{A}}_n^\ensuremath{\mathrm{cyc}}, \ensuremath{\mathcal{S}}_n^\ensuremath{\mathrm{cyc}}, \ensuremath{\mathcal{A}}_n^\ensuremath{\mathrm{ab}}, \ensuremath{\mathcal{S}}_n^\ensuremath{\mathrm{ab}}, \ensuremath{\mathcal{A}}_n, \ensuremath{\mathcal{S}}_n$.
Very recently, some parts of this question have been answered by some of the results of Alon, Kravitz and Larson. They established that the sizes of the minimal complements in the group $\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}$ are exactly $1, 2, \cdots, \lfloor 2n/3\rfloor, n$ \cite[p. 5]{AlonKravitzLarson}. This shows that
\begin{equation}
\label{Eqn:SnCyc}
\ensuremath{\mathcal{S}}^\ensuremath{\mathrm{cyc}}_n
=
\left\{1, 2, \cdots, \left\lfloor 2n/3\right\rfloor, n\right\}
\end{equation}
for $n\geq 2$.
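The description of $\ensuremath{\mathcal{S}}_n^\ensuremath{\mathrm{cyc}}$ above can be confirmed by exhaustive search for a small modulus. The following brute-force sketch (hypothetical code, not from the paper) computes the sizes of minimal complements in $\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}$ and compares them with $\{1, \cdots, \lfloor 2n/3\rfloor\} \cup \{n\}$.

```python
# Exhaustively compute the sizes of minimal complements in Z/nZ (small n only).
from itertools import combinations

def min_complement_sizes(n):
    G = set(range(n))
    def sumset(W, C):
        return {(w + c) % n for w in W for c in C}
    sizes = set()
    for k in range(1, n + 1):
        found = False
        # Up to translation we may assume that 0 lies in C.
        for C in combinations(range(n), k):
            if 0 not in C:
                continue
            for m in range(1, 2**n):
                W = [i for i in range(n) if (m >> i) & 1]
                if sumset(W, C) != G:
                    continue
                if all(sumset(W, [d for d in C if d != c]) != G for c in C):
                    found = True
                    break
            if found:
                break
        if found:
            sizes.add(k)
    return sizes

n = 8
assert min_complement_sizes(n) == set(range(1, 2 * n // 3 + 1)) | {n}
```

For $n = 8$ the search returns $\{1, 2, 3, 4, 5, 8\}$, matching $\lfloor 2n/3\rfloor = 5$; the search is doubly exponential in $n$, so it is only feasible for very small moduli.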
It would be interesting to investigate the structure of the sets $\ensuremath{\mathcal{X}}_n^*$, and the asymptotic behaviour of the sets $\frac 1n \ensuremath{\mathcal{X}}_n^*$ for $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$. Using the results of Section \ref{Sec:BackLit}, one can conclude several results, as we describe below.
For $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, one has the inclusion
$$\ensuremath{\mathcal{A}}_n^* \subseteq \ensuremath{\mathcal{S}}_n^*,$$
and the inclusions
\begin{equation}
\label{Eqn:Inclu*}
\ensuremath{\mathcal{X}}_n
\subseteq \ensuremath{\mathcal{X}}_n^\ensuremath{\mathrm{sol}}
\subseteq \ensuremath{\mathcal{X}}_n^\ensuremath{\mathrm{ssol}}
\subseteq \ensuremath{\mathcal{X}}_n^\ensuremath{\mathrm{nil}}
\subseteq \ensuremath{\mathcal{X}}_n^\ensuremath{\mathrm{ab}}
\subseteq \ensuremath{\mathcal{X}}_n^\ensuremath{\mathrm{cyc}}
\end{equation}
hold for $\ensuremath{\mathcal{X}} = \ensuremath{\mathcal{A}}, \ensuremath{\mathcal{S}}$.
Moreover, for $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}\}$,
\begin{equation}
\label{Eqn:Inclumn}
m \ensuremath{\mathcal{S}}_n^* \subseteq \ensuremath{\mathcal{S}}_{mn}^*
\end{equation}
holds for any positive integers $n,m$ with $\gcd (m,n)=1$,
and
\begin{equation}
\label{Eqn:InclumnDiv}
\frac 1n \ensuremath{\mathcal{S}}_n^* \subseteq \frac 1m \ensuremath{\mathcal{S}}_m^*
\end{equation}
holds for any positive integers $n,m$ with $n\mid m$ and $\gcd(n, m/n) =1$ (see Lemma \ref{Lemma:SnInclusionSm}).
\begin{lemma}
\label{Lemma:SnInclusionSm}
Equations \eqref{Eqn:Inclumn}, \eqref{Eqn:InclumnDiv} hold.
\end{lemma}
\begin{proof}
Fix $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}\}$, and let $k$ be an element of $\ensuremath{\mathcal{S}}_n^*$.
Let $G$ be a $*$-group of order $mn$ where $m$ is a positive integer with $\gcd(m,n)=1$. Since $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}\}$ and $\gcd(m, n) = 1$, it follows that $G$ is isomorphic to $G_1 \times G_2$ where $G_1$ (resp. $G_2$) is a group of order $m$ (resp. $n$).
Note that $G_2$ contains a subset $A$ of size $k$, which is a minimal complement in $G_2$. Then the subset $G_1 \times A$ of $G_1 \times G_2$ contains $mk$ elements. By Proposition \ref{Prop:CoMinCartesian}, $G_1 \times A$ is a minimal complement to some subset of $G_1\times G_2$. Hence $mk$ is an element of $\ensuremath{\mathcal{S}}(G_1\times G_2)$. Thus $mk$ lies in $\ensuremath{\mathcal{S}}(G)$ for any $*$-group $G$ of order $mn$. So $mk$ lies in $\ensuremath{\mathcal{S}}_{mn}^*$. This establishes Equation \eqref{Eqn:Inclumn}.
Equation \eqref{Eqn:InclumnDiv} follows from Equation \eqref{Eqn:Inclumn}.
\end{proof}
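The product construction in the proof can be checked directly in a small case. The following sketch (hypothetical code, not from the paper) takes $G_1 = \ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}}$, $G_2 = \ensuremath{\mathbb{Z}}/5\ensuremath{\mathbb{Z}}$ and the minimal complement $A = \{0, 1\}$ in $G_2$, and verifies by brute force that $G_1\times A$ is a minimal complement in $G_1\times G_2$.

```python
# Verify that G1 x A is a minimal complement in G1 x G2 for a small example.
from itertools import product

G1, G2 = range(2), range(5)
G = set(product(G1, G2))

def sumset(W, C):
    return {((w1 + c1) % 2, (w2 + c2) % 5) for (w1, w2) in W for (c1, c2) in C}

def is_min_complement(C):
    # C is a minimal complement if some W satisfies W + C = G while
    # W + (C \ {c}) is a proper subset of G for every c in C.
    Glist = sorted(G)
    for m in range(1, 2**len(Glist)):
        W = {g for i, g in enumerate(Glist) if (m >> i) & 1}
        if sumset(W, C) == G and all(sumset(W, C - {c}) != G for c in C):
            return True
    return False

A = {0, 1}                                  # a minimal complement in Z/5Z
C = {(g1, a) for g1 in G1 for a in A}       # the set G1 x A, of size 2 * 2 = 4
assert is_min_complement(C)
```

Concretely, $A$ is a minimal complement to $\{0, 2, 4\}$ in $\ensuremath{\mathbb{Z}}/5\ensuremath{\mathbb{Z}}$, and the brute-force search finds a corresponding witness for $G_1\times A$, in line with Proposition \ref{Prop:CoMinCartesian}.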
For $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, it follows from Theorem \ref{Thm:SmallWorks} that any large finite group contains many minimal complements and thus
$$\lim_{n\to \infty} |\ensuremath{\mathcal{X}}_n^*| = \infty.$$
\begin{questionIntro}
\label{Qn:XnGetsLarger}
For $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}$ and $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, describe the asymptotic property of the sequence $|\ensuremath{\mathcal{X}}_n^*|$.
\end{questionIntro}
For $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, it follows from Theorem \ref{Thm:SmallWorks} that the smallest positive integer lying outside $\ensuremath{\mathcal{X}}_n^*$ diverges to $\infty$, i.e.,
$$\lim_{n\to \infty} \min (\{1, 2, \cdots, n\} \setminus \ensuremath{\mathcal{X}}_n^*)= \infty.$$
\begin{questionIntro}
\label{Qn:LargeStringAt1}
For $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}$ and $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, describe the asymptotic property of the sequence
$\min (\{1, 2, \cdots, n\} \setminus \ensuremath{\mathcal{X}}_n^*).$
\end{questionIntro}
For $\ensuremath{\mathcal{X}} = \ensuremath{\mathcal{A}}$, $* = \ensuremath{\mathrm{ab}}$, it follows from \cite[Corollary 18]{AlonKravitzLarson} that
$$
\liminf_{n\to \infty}
\frac{\min (\{1, 2, \cdots, n\} \setminus \ensuremath{\mathcal{X}}_n^*)}{\sqrt n}
\leq
\sqrt 2,$$
and from \cite[Theorem 3]{AlonKravitzLarson} that
$$
\min (\{1, 2, \cdots, n\} \setminus \ensuremath{\mathcal{X}}_n^*)
=
O(n^{3/4 + \varepsilon})$$
for any $\varepsilon > 0$.
Moreover, Alon, Kravitz and Larson conjectured that
$$
\min (\{1, 2, \cdots, n\} \setminus \ensuremath{\mathcal{X}}_n^*)
=
\widetilde \Theta(\sqrt n)$$
for $\ensuremath{\mathcal{X}} = \ensuremath{\mathcal{A}}, * = \ensuremath{\mathrm{ab}}$ \cite[Conjecture 7]{AlonKravitzLarson}.
From Propositions \ref{Prop:23rdBdd}, \ref{Prop:fini}, it follows that for $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$,
$$\lim_{n\to \infty} |(\{1, 2, \cdots, n\} \setminus \ensuremath{\mathcal{X}}_n^*)| = \infty.$$
\begin{questionIntro}
\label{Qn:XnComplementGetsLarger}
For $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, determine the asymptotic property of the sequence
$$ |(\{1, 2, \cdots, \lfloor 2n/3\rfloor \} \setminus \ensuremath{\mathcal{X}}_n^*)|.$$
\end{questionIntro}
From Propositions \ref{Prop:23rdBdd}, \ref{Prop:fini}, it follows that for $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, the maximum of $\ensuremath{\mathcal{X}}_n^*$ (excluding $n$) is $\leq \frac 23n$ (for $n\geq 2$). Thus for $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$,
$$\left(\frac 23, 1\right) \cap \frac 1n \ensuremath{\mathcal{X}}_n^* = \emptyset.$$
Moreover, it follows that for $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$ and for any $\varepsilon > 0$,
$$(0, \varepsilon) \cap \frac 1n \ensuremath{\mathcal{X}}_n^* \neq \emptyset$$
for large enough $n$ (since $1$ lies in $\ensuremath{\mathcal{X}}_n^*$).
This motivates the following question about the asymptotic behaviour of
$$\frac 1n \ensuremath{\mathcal{X}}^*_n$$
as $n\to \infty$ for $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$,
and the asymptotic behaviour of these sets when $n$ ranges over an infinite set of positive integers (for instance, the set of primes, or the set of all prime powers, or the set of powers of a fixed prime, or the set of square-free integers etc.).
\begin{questionIntro}
\label{Qn:Measure}
Let $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
\begin{enumerate}[(i)]
\item
Let $0 \leq a < b \leq \frac 23$. Evaluate
$$
\limsup_{n\to \infty} \frac{|[a, b] \cap \frac 1n \ensuremath{\mathcal{X}}_n^*|}
{|\ensuremath{\mathcal{X}}_n^*|},
\qquad
\liminf_{n\to \infty} \frac{|[a, b] \cap \frac 1n \ensuremath{\mathcal{X}}_n^*|}
{|\ensuremath{\mathcal{X}}_n^*|}.
$$
Does the sequence
$$
\frac{|[a, b] \cap \frac 1n \ensuremath{\mathcal{X}}_n^*|}
{|\ensuremath{\mathcal{X}}_n^*|}
$$
converge? Otherwise, what are its subsequential limits?
\item
Does there exist a probability measure $\mu$ on $[0, \frac 23]$ such that
$$
\lim_{n\to \infty}
\frac{|[a, b] \cap \frac 1n \ensuremath{\mathcal{X}}_n^*|}
{|\ensuremath{\mathcal{X}}_n^*|}
= \int_{[0, \frac 23]} \chi_{[a, b]} d\mu
$$
for any $0 \leq a < b \leq \frac 23$, where $\chi_A$ denotes the characteristic function of $A$ for $A\subseteq [0, \frac 23]$?
\end{enumerate}
\end{questionIntro}
\begin{questionIntro}
\label{Qn:BddMinCompDetailed}
Let $\ensuremath{\mathcal{X}}\in \{\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{A}}\}, *\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
\begin{enumerate}[(i)]
\item
Determine the open subsets of $[0, \frac 23]$ which do not intersect with $\frac 1n \ensuremath{\mathcal{X}}_n^*$ for any/large enough/infinitely many $n$.
\item
Determine the open subsets of $[0, \frac 23]$ which have nonempty intersection with $\frac 1n \ensuremath{\mathcal{X}}_n^*$ for any/large enough/infinitely many $n$.
\end{enumerate}
\end{questionIntro}
\begin{remark}
In the above questions, one can restrict the integer $n$ from the set of positive integers to some smaller sets, for instance, the set of primes, or the set of all prime powers, or the set of powers of a fixed prime, or the set of square-free integers etc., and study these questions when $n$ varies over such a smaller subset.
\end{remark}
Using results from Section \ref{Sec:BackLit}, we partially answer Question \ref{Qn:XnComplementGetsLarger} in Proposition \ref{Prop:Qn3Ans}.
\begin{lemma}
\label{Lemma:B1}
Let $n$ be a positive integer and $d_1, \cdots, d_k$ be distinct divisors of $n$ satisfying $d_1\mid d_2 \mid \cdots \mid d_k$.
For $1\leq i \leq k$, let $B_i$ denote the set defined by
$$
B_i
:=
\left\{
\frac n{d_i} -1 , \frac n{d_i} - 2, \cdots, \frac n{d_i} -
\left(\left\lceil \frac n{d_i(2d_i + 1)} \right\rceil -1\right)
\right\}
.
$$
The union
$
\cup_{i=1}^k B_i
$
consists of
$$\sum_{i=1}^k
\left(\left\lceil \frac n{d_i(2d_i + 1)} \right\rceil -1\right)
$$
elements.
\end{lemma}
\begin{proof}
Note that the sets $B_1, \cdots, B_k$ are pairwise disjoint since for any $i< j$,
\begin{align*}
\frac n{d_i} - \left(\left\lceil \frac n{d_i(2d_i + 1)} \right\rceil -1\right)
& \geq
\frac n{d_i} - \frac n{d_i(2d_i + 1)} \\
& = \frac {2nd_i}{d_i(2d_i + 1)} \\
& = \frac {2n}{2d_i + 1} \\
& > \frac n{2d_i} \\
& \geq
\frac n{d_j}
\end{align*}
holds, so every element of $B_i$ is greater than every element of $B_j$.
This proves the lemma.
\end{proof}
\begin{lemma}
\label{Lemma:B2}
Let $G$ be a group of order $n$. Let $d_1, \cdots, d_k$ be distinct divisors of $n$ such that $d_1\mid \cdots \mid d_k$. Assume that $G$ contains a subgroup of size $n/d_i$ for any $1\leq i \leq k$.
Then the set $\{1, 2, \cdots, n\} \setminus \ensuremath{\mathcal{A}}(G) $ contains the set
$$
\cup_{i=1}^k B_i ,
$$
and hence contains at least
$$\sum_{i=1}^k
\left(\left\lceil \frac n{d_i(2d_i + 1)} \right\rceil -1\right)
$$
elements.
\end{lemma}
\begin{proof}
Note that for any integer $r$ satisfying
$$
1\leq r \leq
\left(\left\lceil \frac n{d_i(2d_i + 1)} \right\rceil -1\right),
$$
the inequality
$$
\frac{\frac n{d_i} -r}{r}
> 2d_i $$
holds,
and hence by Proposition \ref{Prop:fini}, $G$ contains a subset of size $n/d_i -r$ which is not a minimal complement.
Thus for any $1\leq i \leq k$, the set $B_i$ does not intersect with $\ensuremath{\mathcal{A}}(G)$. So the set $\{1, 2, \cdots, n\} \setminus \ensuremath{\mathcal{A}}(G) $ contains
$$
\cup_{i=1}^k B_i .
$$
Its cardinality is given by Lemma \ref{Lemma:B1}.
\end{proof}
\begin{lemma}
\label{Lemma:Bdd}
Let $n$ be a positive integer, $p$ be a prime number and $M$ be a positive integer such that $p^M$ divides $n$.
Then there exists a sequence of $k$ distinct divisors $d_1, \cdots, d_k$ of $n$ satisfying
$1< d_1 \mid d_2 \mid \cdots \mid d_k$
such that
$$\sum_{i=1}^k
\left(\left\lceil \frac n{d_i(2d_i + 1)} \right\rceil -1\right)
\geq
\frac n{p^2(2 + \frac 1{p})} - M.
$$
\end{lemma}
\begin{proof}
Let $k = M$ and $d_i = p ^{i}$ for $1\leq i\leq M$.
Note that
\begin{align*}
\sum_{i=1}^k
\left(\left\lceil \frac n{d_i(2d_i + 1)} \right\rceil -1\right)
& \geq
\sum_{i=1}^M
\left(\frac n{d_i(2d_i + 1)} -1\right)
\\
& \geq
\sum_{i=1}^M
\frac n{(2 + \frac 1{p})d_i^2} - M
\\
& =
\frac n{2 + \frac 1{p}}
\sum_{i=1}^M
\frac 1{d_i^2} - M
\\
& =
\frac n{2 + \frac 1{p}} \frac 1{p^2} \frac {1 - \frac 1{p^{2M}}}{1 -\frac 1{p^2}} - M
\\
&
\geq
\frac n{p^2(2 + \frac 1{p})}- M .
\end{align*}
\end{proof}
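The inequality of the lemma can be checked numerically for sample parameters. The following sketch (hypothetical code, not from the paper) takes $d_i = p^i$ for $1\leq i\leq M$ and compares the two sides for $n = 3\cdot 2^{10}$, $p = 2$, $M = 10$.

```python
# Numeric sanity check of the lemma's bound with d_i = p^i, 1 <= i <= M.
from math import ceil

def lhs(n, p, M):
    return sum(ceil(n / (p**i * (2 * p**i + 1))) - 1 for i in range(1, M + 1))

def rhs(n, p, M):
    return n / (p**2 * (2 + 1 / p)) - M

n, p, M = 3 * 2**10, 2, 10   # p^M = 1024 divides n = 3072
assert lhs(n, p, M) >= rhs(n, p, M)
```

For these parameters the left-hand side equals $420$, comfortably above the right-hand side $3072/10 - 10 = 297.2$; only the first few summands contribute, since the terms vanish once $d_i(2d_i+1) > n$.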
\begin{proposition}
\label{Prop:Qn3Ans}
Let $\{n_k\}_{k\geq 1}$ be a sequence of positive integers. Assume that no term of this sequence gets repeated infinitely often, i.e., it does not admit any constant subsequence. Let $\ensuremath{\mathcal{P}}$ be a finite set of primes such that all the prime divisors of any term of this sequence lie in this set.
Then for any $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$,
$$|(\{1, 2, \cdots, \lfloor 2n_k/3\rfloor \} \setminus \ensuremath{\mathcal{A}}_{n_k}^*)|
\geq
\frac {n_k} {(\max \ensuremath{\mathcal{P}})^2(2 + \frac 1{\min \ensuremath{\mathcal{P}}})} - \frac{\log n_k}{\log \min \ensuremath{\mathcal{P}}}
$$
holds for large enough $k$, and consequently,
$$\lim_{k\to \infty} |(\{1, 2, \cdots, \lfloor 2n_k/3\rfloor \} \setminus \ensuremath{\mathcal{A}}_{n_k}^*)| = \infty.$$
\end{proposition}
\begin{proof}
By Equation \eqref{Eqn:Inclu*}, it suffices to prove the above inequality for $* = \ensuremath{\mathrm{cyc}}$.
Since the terms of the sequence $\{n_k\}_{k\geq 1}$ have prime factors from a finite set of primes and no term of this sequence gets repeated infinitely often, it follows that for any $M> 0$, there exists a positive integer $K$ such that for each $k\geq K$,
the integer $n_k$ is divisible by $p_k^M$ for some $p_k\in\ensuremath{\mathcal{P}}$.
By Lemmas \ref{Lemma:B1}, \ref{Lemma:B2}, \ref{Lemma:Bdd},
for any $k\geq K$, it follows that
there exists a subset $B$ of $\{1, 2, \cdots, \lfloor 2n_k/3\rfloor \}$ containing at least
\begin{align*}
\frac {n_k} {p_k^2(2 + \frac 1{p_k})} - M
& \geq
\frac {n_k} {p_k^2(2 + \frac 1{p_k})} - \frac{\log n_k}{\log p_k}\\
& \geq
\frac {n_k} {(\max \ensuremath{\mathcal{P}})^2(2 + \frac 1{\min \ensuremath{\mathcal{P}}})} - \frac{\log n_k}{\log \min \ensuremath{\mathcal{P}}}
\end{align*}
many elements (the first inequality holds since $p_k^M \leq n_k$, i.e., $M \leq \frac{\log n_k}{\log p_k}$) such that $\ensuremath{\mathcal{A}}(G)$ avoids $B$ for any nilpotent group $G$ of order $n_k$. This establishes the result.
\end{proof}
Using Equation \eqref{Eqn:SnCyc}, we partially answer Question \ref{Qn:Measure} in Proposition \ref{Prop:Qn4Ans}.
\begin{proposition}
\label{Prop:Qn4Ans}
For $\ensuremath{\mathcal{X}} = \ensuremath{\mathcal{S}}, * = \ensuremath{\mathrm{cyc}}$,
$$
\lim_{n\to \infty}
\frac{|(a, b) \cap \frac 1n \ensuremath{\mathcal{X}}_n^*|}
{|\ensuremath{\mathcal{X}}_n^*|} = \frac 32( b-a)
$$
holds for any $0 \leq a < b \leq \frac 23$, and Question \ref{Qn:Measure}(ii) admits an answer in the affirmative.
\end{proposition}
\begin{proof}
From Equation \eqref{Eqn:SnCyc}, it follows that
$$
\lim_{n\to \infty}
\frac{|(a, b) \cap \frac 1n \ensuremath{\mathcal{X}}_n^*|}
{|\ensuremath{\mathcal{X}}_n^*|} = \frac 32( b-a);
$$
indeed, $|\ensuremath{\mathcal{X}}_n^*| = \lfloor 2n/3 \rfloor + 1$, while $|(a, b) \cap \frac 1n \ensuremath{\mathcal{X}}_n^*|$ differs from $n(b-a)$ by a bounded quantity for $0 \leq a < b \leq \frac 23$.
Note that for the probability measure $\mu$ given by the normalized Lebesgue measure on $[0, \frac 23]$, it follows that
$$
\lim_{n\to \infty}
\frac{|(a, b) \cap \frac 1n \ensuremath{\mathcal{X}}_n^*|}
{|\ensuremath{\mathcal{X}}_n^*|}
= \int_{[0, \frac 23]} \chi_{[a, b]} d\mu
$$
for any $0 \leq a < b \leq \frac 23$.
This answers part (ii).
\end{proof}
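The limit in the proposition is easy to see numerically. The following sketch (hypothetical code, not from the paper) builds $\ensuremath{\mathcal{S}}_n^\ensuremath{\mathrm{cyc}}$ from the explicit description in Equation \eqref{Eqn:SnCyc} and computes the proportion of $\frac 1n \ensuremath{\mathcal{S}}_n^\ensuremath{\mathrm{cyc}}$ lying in an interval $(a, b)$.

```python
# Proportion of (1/n) S_n^cyc lying in (a, b), using S_n^cyc from Eqn (SnCyc).
def ratio(n, a, b):
    S = list(range(1, 2 * n // 3 + 1)) + [n]    # S_n^cyc for this n
    hits = sum(1 for k in S if a < k / n < b)
    return hits / len(S)

a, b = 0.1, 0.5
print(ratio(10**6, a, b))   # close to (3/2) * (b - a) = 0.6
```

For $n = 10^6$ and $(a, b) = (0.1, 0.5)$ the output agrees with $\frac 32(b-a) = 0.6$ to within about $10^{-5}$.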
Using Proposition \ref{Prop:fini}, we prove the following lemma, and then establish Proposition \ref{Prop:OpenAvoidance} which partially answers part (i) of Question \ref{Qn:BddMinCompDetailed}.
\begin{lemma}
\label{Lemma:OpenAvoidance}
Let $G$ be a group of order $n$. Let $i$ be an integer such that $G$ admits a subgroup of index $i$.
Then
$$\left(\frac 2{2i+1} , \frac 1i\right) \cap \frac 1{|G|} \ensuremath{\mathcal{A}}(G) = \emptyset.$$
\end{lemma}
\begin{proof}
Let $H$ be a subgroup of $G$ of index $i$.
For any integer $m$ satisfying
$$\frac {2i}{2i+1} |H| < m < |H|,$$
it follows from Proposition \ref{Prop:fini} that no subset of $H$ containing $m$ elements is a minimal complement in $G$. Thus
$$\left(\frac {2i}{2i+1} |H|, |H|\right) \cap \ensuremath{\mathcal{A}}(G) = \emptyset,$$
which yields the result.
\end{proof}
\begin{proposition}
\label{Prop:OpenAvoidance}
For any prime $p$, any integer $n\geq 0$ and any $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, the set
$$
\bigcup_{i=0}^\infty \left(\frac 2{2p^i+1} , \frac 1{p^i}\right)
$$
does not intersect with $\frac 1{p^n} \ensuremath{\mathcal{A}}_{p^n}^*$.
\end{proposition}
\begin{proof}
For any group $G$ of order $p^n$, Lemma \ref{Lemma:OpenAvoidance} implies that the set
$$
\bigcup_{i=0}^n \left(\frac 2{2p^i+1} , \frac 1{p^i}\right)
$$
does not intersect with
$\frac 1{|G|} \ensuremath{\mathcal{A}}(G)$. Since the smallest element of $\frac 1{|G|} \ensuremath{\mathcal{A}}(G)$ is $\frac 1{|G|}$, it follows that
$$
\bigcup_{i=0}^\infty \left(\frac 2{2p^i+1} , \frac 1{p^i}\right)
$$
does not intersect with
$\frac 1{|G|} \ensuremath{\mathcal{A}}(G)$.
Hence
$$
\bigcup_{i=0}^\infty \left(\frac 2{2p^i+1} , \frac 1{p^i}\right)
$$
does not intersect with $\frac 1{p^n} \ensuremath{\mathcal{A}}_{p^n}^*$
for any $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
\end{proof}
Note that it follows from Equation \eqref{Eqn:SnCyc} that any nonempty open subset $U$ of $[0, \frac 23]$ has nonempty intersection with $\frac 1n \ensuremath{\mathcal{S}}_n^\ensuremath{\mathrm{cyc}}$ for large enough $n$.
This partially answers part (ii) of Question \ref{Qn:BddMinCompDetailed}.
\section{Asymptotic behaviour of co-minimal pairs}
\label{Sec:AsyCoMin}
\begin{definition}\label{Def:CoMin}
A pair $(A, B)$ of two nonempty subsets $A, B$ of a group $G$ is called a co-minimal pair if $A \cdot B = G$, and $A' \cdot B \subsetneq G$ for any $\emptyset \neq A' \subsetneq A$ and $A\cdot B' \subsetneq G$ for any $\emptyset \neq B' \subsetneq B$.
\end{definition}
For any finite group $G$, let
$\ensuremath{\mathcal{S}}_2(G)$
denote the
set of pairs of the form $(a, b)$ such that there is a co-minimal pair $(A, B)$ in $G$ with $|A| = a, |B| = b$.
For $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$ and for any positive integer $n$, consider the subset of $\{1, 2, \cdots, n\}\times \{1, 2, \cdots, n\}$ defined as follows.
\begin{align*}
\ensuremath{\mathcal{S}}_{2, n}^*
& := \bigcap_{G \text{ is a $*$-group of order }n}\ensuremath{\mathcal{S}}_2(G).
\end{align*}
The set $\ensuremath{\mathcal{S}}_{2,n}^\emptyset$ is also denoted by $\ensuremath{\mathcal{S}}_{2,n}$.
By Theorem \ref{Thm:SmallWorks}, it follows that
$$\lim_{n\to \infty} |\ensuremath{\mathcal{S}}_{2,n}^*| = \infty$$
for $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
\begin{questionIntro}
\label{Qn:XnGetsLarger2}
For $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, describe the asymptotic property of the sequence $|\ensuremath{\mathcal{S}}_{2,n}^*|$.
\end{questionIntro}
Let $I$ denote the unit interval $[0,1]$ and $I^2$ denote the unit square $[0,1]\times [0,1]$. The square $[0, 1/2]\times [0, 1/2]$ is denoted by $I^2_{1/2}$.
It follows from Proposition \ref{Prop:23rdBdd} that for a co-minimal pair $(A, B)$ in any finite group $G$,
$$
2|A||B| - |A| \leq |G| |B|
$$
holds; since the roles of $A$ and $B$ in a co-minimal pair are symmetric, also $2|A||B| - |B| \leq |G| |A|$ holds.
So, for any $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$ and any $(x, y)\in \frac 1n \ensuremath{\mathcal{S}}_{2,n}^*$,
$$\left(x - \frac 1{2n} \right)
\left(y - \frac 12\right) \leq \frac 1 {4n}$$
and
$$\left(x - \frac 1{2} \right)
\left(y - \frac 1{2n}\right) \leq \frac 1 {4n}$$
hold. For $n\geq 1$, define
$$U_n
=
\left\{(x, y) \,|\, 0 \leq x \leq 1 , 0 \leq y \leq 1,
2xy
\leq x + \frac 1n y,
2xy
\leq y + \frac 1n x\right\}.$$
Note that $U_n$ contains $\frac 1n \ensuremath{\mathcal{S}}_{2,n}^*$ for any $n\geq 1$ and any $*\in \{\ensuremath{\mathrm{cyc}},\ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
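Membership in $U_n$ is just a pair of inequalities and can be checked directly; as $n\to\infty$ the region shrinks towards $\{\max(x,y)\leq 1/2\}$ together with thin strips along the axes. The following is an illustrative sketch of ours, not part of the argument:

```python
def in_Un(x, y, n):
    """Membership in U_n: 0 <= x, y <= 1, 2xy <= x + y/n and 2xy <= y + x/n."""
    return (0 <= x <= 1 and 0 <= y <= 1
            and 2 * x * y <= x + y / n
            and 2 * x * y <= y + x / n)
```

For instance, $(0.6, 0.6)$ leaves $U_n$ already for moderate $n$, while points with one very small coordinate remain in $U_n$.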
\begin{lemma}
\label{Lemma:Avoidance}
For any $0 < \varepsilon < 1/2$, let $R_\varepsilon$ denote the subset of $I^2$ defined by
$$
R_\varepsilon =
([\varepsilon, 1] \times [1/2 + \varepsilon, 1] )
\cup
([1/2 + \varepsilon, 1] \times [\varepsilon, 1] ).
$$
The region $R_\varepsilon$ does not intersect with $U_n$ for large enough $n$.
\end{lemma}
\begin{proof}
Let $N$ be a positive integer such that $\frac 1{2N} + \frac 1{2\sqrt N} < \varepsilon$.
Let $(x, y)$ be an element of $R_\varepsilon$.
If $(x, y)$ lies in $[\varepsilon, 1] \times [1/2 + \varepsilon, 1]$, then for any $n\geq N$,
\begin{align*}
\left(x - \frac 1{2n} \right)\left(y - \frac 12 \right)
& \geq
\varepsilon
\left(x - \frac 1{2n} \right)\\
& \geq
\varepsilon
\left(x - \frac 1{2N} \right)\\
& >
\frac 1{2\sqrt N} \cdot \frac 1{2\sqrt N} \\
& \geq \frac 1 {4n}.
\end{align*}
If $(x, y)$ lies in $[1/2 + \varepsilon, 1] \times [\varepsilon, 1]$, then for any $n\geq N$,
\begin{align*}
\left(x - \frac 12 \right)\left(y - \frac 1{2n}\right)
& \geq
\varepsilon
\left(y - \frac 1{2n} \right)\\
& \geq
\varepsilon
\left(y - \frac 1{2N} \right)\\
& >
\frac 1{2\sqrt N} \cdot \frac 1{2\sqrt N} \\
& \geq \frac 1 {4n}.
\end{align*}
So no element of $R_\varepsilon$ lies in $U_n$ for $n\geq N$.
\end{proof}
Note that for any $\varepsilon >0$ and $*\in \{\ensuremath{\mathrm{cyc}},\ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, the set $R_\varepsilon$ avoids
$\frac 1n \ensuremath{\mathcal{S}}_{2,n}^*$ for large enough $n$.
\begin{questionIntro}
\label{Qn:XnComplementGetsLarger2}
For $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$ and for $0 < \varepsilon < 1/2$, determine the asymptotic property of the sequence
$$ |(([1/n, 2/n, \cdots, n/n]^2\setminus R_\varepsilon)\setminus \tfrac 1n \ensuremath{\mathcal{S}}_{2,n}^*)|.$$
\end{questionIntro}
Note that
$$|(([1/n, 2/n, \cdots, n/n]^2\setminus R_\varepsilon)\setminus \tfrac 1n \ensuremath{\mathcal{S}}_{2,n}^*)|
\geq
2\left\lfloor \frac n2 \right\rfloor -1$$
since the points $(1/n, i/n), (i/n, 1/n)$ belong to $([1/n, 2/n, \cdots, n/n]^2\setminus R_\varepsilon)\setminus \tfrac 1n \ensuremath{\mathcal{S}}_{2,n}^*$ for $1\leq i \leq \lfloor n/2\rfloor$. Thus a more interesting question would be to study the asymptotic property of the sequence
$$ |(([1/n, 2/n, \cdots, n/n]^2\setminus (R_\varepsilon \cup R'_n))\setminus \tfrac 1n \ensuremath{\mathcal{S}}_{2,n}^*)|$$
where $R'_n$ consists of those points of $[1/n, 2/n, \cdots, n/n]^2$ which are avoided by $\tfrac 1n \ensuremath{\mathcal{S}}_{2,n}^*$ for ``obvious reasons''. For instance, $R'_n$ contains those points satisfying $xy < 1/n$.
\begin{questionIntro}
\label{Qn:Measure2}
Let $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
\begin{enumerate}[(i)]
\item
Let $0 \leq a < b \leq 1, 0 \leq c < d \leq 1$. Evaluate
$$
\limsup_{n\to\infty} \frac{\left|([a, b]\times [c,d]) \cap \frac 1n \ensuremath{\mathcal{S}}_{2,n}^*\right|}
{|\ensuremath{\mathcal{S}}_{2,n}^*|},
\qquad
\liminf_{n\to\infty} \frac{\left|([a, b]\times [c,d]) \cap \frac 1n \ensuremath{\mathcal{S}}_{2,n}^*\right|}
{|\ensuremath{\mathcal{S}}_{2,n}^*|}.
$$
Does the sequence
$$
\frac{\left|([a, b]\times [c,d]) \cap \frac 1n \ensuremath{\mathcal{S}}_{2,n}^*\right|}
{|\ensuremath{\mathcal{S}}_{2,n}^*|}
$$
converge? If not, what are its subsequential limits?
\item
Does there exist a probability measure $\mu$ on $I^2$ such that
$$
\lim_{n\to \infty}
\frac{\left|([a, b] \times [c,d])\cap \frac 1n \ensuremath{\mathcal{S}}_{2,n}^*\right|}
{|\ensuremath{\mathcal{S}}_{2,n}^*|}
= \int_{I^2} \chi_{[a, b]\times [c,d]} d\mu
$$
for any $0 \leq a < b \leq 1, 0 \leq c < d \leq 1$, where $\chi_A$ denotes the characteristic function of $A$ for $A\subseteq I^2$?
\end{enumerate}
\end{questionIntro}
\begin{questionIntro}
\label{Qn:BddMinCompDetailed2}
Let $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
\begin{enumerate}[(i)]
\item
Determine the open subsets of $I^2$ which do not intersect with $\frac 1n \ensuremath{\mathcal{S}}_{2,n}^*$ for any/large enough/infinitely many $n$.
\item
Determine the open subsets of $I^2$ which have nonempty intersection with $\frac 1n \ensuremath{\mathcal{S}}_{2,n}^*$ for any/large enough/infinitely many $n$.
\end{enumerate}
\end{questionIntro}
Note that Lemma \ref{Lemma:Avoidance} partially answers part (i) of the above question.
\begin{definition}
A $k$-tuple $(A_1, \cdots, A_k)$ of non-empty subsets of a group $G$ is said to be a co-minimal $k$-tuple if
$$A_1 \cdot A_2 \cdot \cdots \cdot A_k = G$$
and for any $1\leq i \leq k$ and for any $a_i \in A_i$,
$$A_1 \cdot A_2 \cdot \cdots \cdot(A_i \setminus \{a_i\}) \cdot \cdots \cdot A_k \neq G.$$
\end{definition}
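The brute-force enumeration used for pairs extends verbatim to $k$-tuples; the sketch below (our own helper names, feasible only for tiny $n$ and $k$) computes the realised size tuples in $\mathbb{Z}/n\mathbb{Z}$:

```python
from itertools import combinations, product

def kfold_sum(sets, n):
    """A_1 + ... + A_k inside Z_n (empty if any factor is empty)."""
    cur = {0}
    for A in sets:
        cur = {(c + a) % n for c in cur for a in A}
    return cur

def is_cominimal_tuple(sets, n):
    """Check the definition: the k-fold sumset is Z_n, and deleting any
    single element of any factor destroys the covering."""
    G = set(range(n))
    if kfold_sum(sets, n) != G:
        return False
    for i, A in enumerate(sets):
        for a in A:
            reduced = list(sets)
            reduced[i] = A - {a}
            if kfold_sum(reduced, n) == G:
                return False
    return True

def Sk(n, k):
    """Realised size tuples (|A_1|, ..., |A_k|) of co-minimal k-tuples in Z_n."""
    subsets = [frozenset(c) for r in range(1, n + 1)
               for c in combinations(range(n), r)]
    return {tuple(len(A) for A in sets)
            for sets in product(subsets, repeat=k)
            if is_cominimal_tuple(sets, n)}
```

For example, in $\mathbb{Z}/4\mathbb{Z}$ the triple $(\{0\}, \{0,1\}, \{0,2\})$ is a co-minimal $3$-tuple, so $(1,2,2)$ is realised.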
Note that one can define an analogue of $\ensuremath{\mathcal{S}}_{2, n}^*$ as follows.
Let $k\geq 2$ be an integer.
For any finite group $G$, let
$\ensuremath{\mathcal{S}}_k(G)$
denote the
set of tuples of the form $(a_1, \cdots, a_k)$ such that there is a co-minimal $k$-tuple $(A_1, \cdots, A_k)$ in $G$ with $|A_i| = a_i$ for $1\leq i \leq k$.
For $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$ and for any positive integer $n$, consider the subset of $\{1, 2, \cdots, n\}^k$ defined as follows.
\begin{align*}
\ensuremath{\mathcal{S}}_{k, n}^*
& := \bigcap_{G \text{ is a $*$-group of order }n}\ensuremath{\mathcal{S}}_k(G).
\end{align*}
The set $\ensuremath{\mathcal{S}}_{k,n}^\emptyset$ is also denoted by $\ensuremath{\mathcal{S}}_{k,n}$.
By Theorem \ref{Thm:SmallWorks}, it follows that
$$\lim_{n\to \infty} |\ensuremath{\mathcal{S}}_{k,n}^*| = \infty$$
for $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
One could ask the following questions, which are analogous to Questions \ref{Qn:XnGetsLarger2},
\ref{Qn:XnComplementGetsLarger2},
\ref{Qn:Measure2},
\ref{Qn:BddMinCompDetailed2}.
\begin{questionIntro}
For $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, describe the asymptotic property of the sequence $|\ensuremath{\mathcal{S}}_{k,n}^*|$.
\end{questionIntro}
\begin{questionIntro}
For $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$, determine the asymptotic property of the sequence
$$ |(([1/n, 2/n, \cdots, n/n]^k\setminus R'_n)\setminus \tfrac 1n \ensuremath{\mathcal{S}}_{k,n}^*)|,$$
where $R'_n$ consists of those points of $[1/n, 2/n, \cdots, n/n]^k$ which are avoided by $\tfrac 1n \ensuremath{\mathcal{S}}_{k,n}^*$ for ``obvious reasons''. For instance, $R'_n$ contains those points satisfying $x_1\cdots x_k < 1/n^{k-1}$.
\end{questionIntro}
\begin{questionIntro}
Let $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
\begin{enumerate}[(i)]
\item
Let $a_1, b_1, \cdots, a_k, b_k$ be real numbers satisfying $0 \leq a_i < b_i \leq 1$ for all $1\leq i \leq k$. Evaluate
$$
\limsup_{n\to\infty} \frac{\left|([a_1, b_1] \times \cdots \times [a_k,b_k]) \cap \frac 1n \ensuremath{\mathcal{S}}_{k,n}^*\right|}
{|\ensuremath{\mathcal{S}}_{k,n}^*|},
\qquad
\liminf_{n\to\infty} \frac{\left|([a_1, b_1] \times \cdots \times [a_k,b_k]) \cap \frac 1n \ensuremath{\mathcal{S}}_{k,n}^*\right|}
{|\ensuremath{\mathcal{S}}_{k,n}^*|}.
$$
Does the sequence
$$
\frac{\left|([a_1, b_1] \times \cdots \times [a_k,b_k]) \cap \frac 1n \ensuremath{\mathcal{S}}_{k,n}^*\right|}
{|\ensuremath{\mathcal{S}}_{k,n}^*|}
$$
converge? If not, what are its subsequential limits?
\item
Does there exist a probability measure $\mu$ on $I^k$ such that
$$
\lim_{n\to \infty}
\frac{\left|([a_1, b_1] \times \cdots \times [a_k,b_k])\cap \frac 1n \ensuremath{\mathcal{S}}_{k,n}^*\right|}
{|\ensuremath{\mathcal{S}}_{k,n}^*|}
= \int_{I^k} \chi_{[a_1, b_1] \times \cdots \times [a_k,b_k]} d\mu
$$
for any real numbers $a_1, b_1, \cdots, a_k, b_k$ satisfying $0 \leq a_i < b_i \leq 1$ for all $1\leq i \leq k$, where $\chi_A$ denotes the characteristic function of $A$ for $A\subseteq I^k$?
\end{enumerate}
\end{questionIntro}
\begin{questionIntro}
Let $*\in \{\ensuremath{\mathrm{cyc}}, \ensuremath{\mathrm{ab}}, \ensuremath{\mathrm{nil}}, \ensuremath{\mathrm{ssol}}, \ensuremath{\mathrm{sol}}, \emptyset\}$.
\begin{enumerate}[(i)]
\item
Determine the open subsets of $I^k$ which do not intersect with $\frac 1n \ensuremath{\mathcal{S}}_{k,n}^*$ for any/large enough/infinitely many $n$.
\item
Determine the open subsets of $I^k$ which have nonempty intersection with $\frac 1n \ensuremath{\mathcal{S}}_{k,n}^*$ for any/large enough/infinitely many $n$.
\end{enumerate}
\end{questionIntro}
\section{Acknowledgements}
The first author is supported by the ISF Grant no. 662/15. He wishes to thank the Department of Mathematics at the Technion where a part of the work was carried out. The second author would like to acknowledge the Initiation Grant from the Indian Institute of Science Education and Research Bhopal, and the INSPIRE Faculty Award from the Department of Science and Technology, Government of India.
\title{On consecutive sums in permutations}
% Source: https://arxiv.org/abs/1504.07156
\begin{abstract}
We study the number of values taken by the sums $\sum_{i=u}^{v-1} a_i$, where $a_1,a_2,\dots,a_n$ is a permutation of $1,2,\dots,n$ and $1 \leq u < v \leq n+1$. In particular, we show that for a random choice of a permutation, with high probability there are $(\frac{1+e^{-2}}{4} +o(1)) n^2$ such sums. This answers an old question of Erd\H{o}s and Harzheim. We also obtain non-trivial bounds on the maximum possible number of distinct sums, ranging over all permutations of $1,2,\dots,n$. We close with some questions concerning the minimal possible number of distinct sums.
\end{abstract}
\section{Introduction}
For a sequence $a = (a_i)_{i=1}^n $ with $a_i \in \mathbb{Z}$, let $S(a)$ denote the set of all distinct sums $\sum_{i=u}^v a_i$ with $1 \leq u \leq v \leq n$. We shall mostly be interested in the size of $S(a)$ when $a$ is a permutation of the set $[n] := \{1,2,\dots,n\}$. A trivial upper bound $\abs{S(a)} \leq \binom {n+1}{2}$ follows from counting the number of choices of $u$ and $v$ (or, incidentally, from computing $\max S(a)$).
The special case when $a_i = i$ for all $i$ was considered by Erd\H{o}s and Harzheim \cite{Erdos-1977-27}. It can be shown that for such $a$ one has $\abs{S(a)} = o(n^2)$. Because of the elementary formula
$$\sum_{i=u}^v i = \frac{(v-u+1)(v+u)}{2},$$
we have $S(a) \subset [n] \cdot [2n] \subset [2n]\cdot [2n]$, where we use the notation $A \cdot B = \{ab \ : \ a \in A, b \in B\}$. Hence, obtaining good bounds on $\abs{S(a)}$ is essentially equivalent to obtaining good bounds on $\abs{[n]\cdot[n]}$. The latter question, known as the \emph{multiplication table problem}, has been extensively studied.
The first proof that $\abs{[n] \cdot [n] } = o(n^2)$ is due to Erd\H{o}s \cite{Erdos-1955}, with further improvements by the same author \cite{Erdos-1960}. The exact asymptotics were only recently obtained by Ford \cite{Ford-2008}, who shows that:
\begin{equation}
\abs{ [n] \cdot [n] } = \Theta\bra{ n^2 (\log n)^{-c} (\log \log n)^{-3/2}},
\end{equation}
where $c=1-\frac{1+\log \log 2}{\log 2}$.
This special case led Erd\H{o}s to pose the following question \cite{Erdos-1977-27}:
\begin{question}\label{question:Erdos-Harheim}
Is it true that for any $\varepsilon > 0$ there exists $n_0$ such that for any $n \geq n_0$ and for any permutation $a_1,\dots,a_n$ of $[n]$ we have $\abs{S(a)} \leq \varepsilon n^2$?
\end{question}
The purpose of this note is to show that the answer to this question is an emphatic \emph{``no''}. Without further ado, we give the simplest example we are aware of where the conjecture fails.
\begin{proposition}\label{I:prop:cexple-basic}
For any $n$ one can find a permutation $a$ of $[n]$ such that $\abs{S(a)} \geq \frac{1}{4}n^2. $
\end{proposition}
\begin{proof}
Let $a$ be the permutation $1,n,2,n-1,3,n-2,\dots$, so that for each odd $i$ one has $a_i + a_{i+1} = n+1$.
Let $S_1 \subset S(a)$ be the set of the sums $s = \sum_{i=u}^v a_i$ with $u \equiv v \pmod{2}$. If $v = u+2l$ then $s = (n+1)l + k$, where $k = a_v$ if $u,v$ are odd and $k = a_u$ otherwise. If $s$ has another representation $s = \sum_{i=u'}^{v'} a_i = (n+1)l' + k'$ of this form, then it follows that $l'=l,\ k'=k$, hence $u,v \equiv u',v' \pmod{2}$ and finally $u'=u,\ v'=v$. Thus, all $ \ceil{ \frac{n+1}{2} }\cdot \floor{ \frac{n+1}{2} } \geq \frac{n^2}{4}$ sums in $S_1$ are distinct.
\end{proof}
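The construction in the proof is easy to verify by computer. Below is a small sketch of ours (the helper names are not from the paper) that builds the permutation $1, n, 2, n-1, \dots$ and counts $\abs{S(a)}$ via prefix sums:

```python
def zigzag(n):
    """The permutation 1, n, 2, n-1, 3, n-2, ... of [n]."""
    out, lo, hi = [], 1, n
    while lo <= hi:
        out.append(lo)
        lo += 1
        if lo <= hi:
            out.append(hi)
            hi -= 1
    return out

def num_distinct_sums(a):
    """|S(a)|: distinct values of sum(a[u:v]) over 0 <= u < v <= len(a)."""
    pre = [0]
    for x in a:
        pre.append(pre[-1] + x)
    return len({pre[v] - pre[u]
                for u in range(len(a)) for v in range(u + 1, len(a) + 1)})
```

For every $n$ tried, `num_distinct_sums(zigzag(n))` is indeed at least $n^2/4$, as the proposition guarantees.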
The constant $\frac{1}{4}$ in the above Proposition \ref{I:prop:cexple-basic} can be improved slightly, using a randomised variant of the construction above.
\begin{proposition}\label{I:prop:cexple-improved}
For any $n$ there exists a permutation $a$ of $[n]$ such that $\abs{S(a)} \geq (c_1+o(1)) n^2$, where $c_1 = \frac{3}{2}-\frac{2}{\sqrt{e}} = 0.286\dots$.
\end{proposition}
In the other direction, we have the following slight improvement over the trivial bound $\abs{S(a)} \leq (\frac{1}{2}+o(1))n^2$.
\begin{proposition}\label{I:prop:upper-bound}
For any permutation $a$ of $[n]$, we have $\abs{S(a)} \leq (c_2+o(1)) n^2$ where $c_2 = \frac{1}{4} + \frac{\pi}{16} = 0.446\dots$.
\end{proposition}
These bounds are proven in Sections \ref{section:EXT-low} and \ref{section:EXT-up} respectively. It would be very surprising if either of these results were optimal. We expect that $\max_{a} \abs{S(a)} = (c+o(1)) n^2$ for some constant $c$ with $c_1 < c < c_2$. It may be interesting to find the exact value of $c$.
While the above examples resolve the original question of Erd\H{o}s, they do not say what happens for a ``typical'' permutation. Our main result shows that the answer to Question \ref{question:Erdos-Harheim} is still negative ``on average'' in a rather strong sense.
\begin{proposition}\label{I:prop:exact}
Let $a$ be a permutation of $[n]$ chosen uniformly at random.
Then $\EE(\abs{S(a)}) = (c + o(1)) n^2$ where $c= \frac{1+e^{-2}}{4} = 0.283\dots$ as $n \to \infty$.
Moreover,
for any $\delta >0$ we have $\mathbb{P}\bra{ \abs{ \frac{\abs{S(a)}}{n^2} - c } > \delta} \to 0$ as $n \to \infty$.
\end{proposition}
The proof of this proposition is carried out in Section \ref{section:EXC}, dealing with the expected value of $\abs{S(a)}$, and Section \ref{section:HM}, dealing with the second moment.
To close this section, we remark that a somewhat similar problem was considered by Hegyv{\'a}ri in \cite{Hegyvari}. Instead of a permutation of $[n]$, \cite{Hegyvari} deals with a sequence $(a_i)_{i = 1}^k$ where $k \leq n$ and $a_i \in [n]$, and asks for which $k$ it is possible that \emph{all} sums $\sum_{i=u}^v a_i$ are distinct. This turns out to be possible for $k \geq (\frac{1}{3}-o(1)) n$, which gives an alternative proof of Proposition \ref{I:prop:cexple-basic} with the worse constant $\frac{1}{18}$ in place of $\frac{1}{4}$. Conversely, our Proposition \ref{I:prop:exact} implies a non-trivial bound $k \leq \bra{\sqrt{\frac{\pi}{8} + \frac{1}{2}} + o(1)}n$.
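A quick Monte Carlo experiment is consistent with Proposition \ref{I:prop:exact}: for a single random permutation of $[n]$ with $n$ in the few hundreds, the ratio $\abs{S(a)}/n^2$ already lands near $\frac{1+e^{-2}}{4} \approx 0.284$. The code below is our own illustration, with a deliberately generous tolerance to allow for finite-size effects:

```python
import random

def distinct_sum_ratio(n, seed=1):
    """|S(a)| / n^2 for a pseudorandom permutation a of [n]."""
    rng = random.Random(seed)
    a = rng.sample(range(1, n + 1), n)   # a uniformly random permutation
    pre = [0]
    for x in a:
        pre.append(pre[-1] + x)
    sums = {pre[v] - pre[u] for u in range(n) for v in range(u + 1, n + 1)}
    return len(sums) / n ** 2
```

At $n = 400$ the ratio comfortably exceeds the $\varepsilon n^2$ threshold of Question \ref{question:Erdos-Harheim} and stays below the trivial bound.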
\subsection*{Notation}
We use a variety of asymptotic notation. We write $X = O(Y)$, or $X \ll Y$, if there exists a constant $C > 0$ such that $\abs{X} \leq C Y$. If the constant $C$ is allowed to depend on a parameter $M$, we occasionally write $X = O_M(Y)$. Likewise, we write $X = \Omega(Y)$ if there exists a constant $c > 0$ such that $X > c Y$. Throughout the paper, we work in the regime $n \to \infty$. We write $X = o(Y)$ if $Y > 0$ for $n$ large enough and $\frac XY \to 0$ as $n \to \infty$, and similarly $X = \omega(Y)$ if instead $\frac XY \to \infty$. Finally, if $X = O(Y)$ and $X = \Omega(Y)$ we write $X \sim Y$ or $X = \Theta(Y)$. Occasionally, we use this notation with a different limit in mind, with the obvious modifications.
In particular, we freely use expressions such as $O(Y)$, $\Omega(Y)$, $o(Y)$, $\omega(Y)$, $\Theta(Y)$ to denote unspecified functions with the asymptotic behaviour as just described.
\subsection*{Acknowledgements} The author wishes to thank Ben Green for pointing out this problem, and for much advice during the work on it. The author is also grateful to Sean Eberhard, Freddie Manners, Przemek Mazur, Rudi Mrazovi\'{c} and Aled Walker for many fruitful discussions. Finally, the author thanks Jozsef Solymosi and Norbert Hegyv\'{a}ri for helpful comments.
\section{Expected value of $\abs{S(a)}$}\label{section:EXC}
\newcommand{{\mathrm{uni}}}{{\mathrm{uni}}}
In this section we study the asymptotic behaviour of $\EE\abs{S(a)}$. Our main goal is to prove the first part of Proposition \ref{I:prop:exact}, namely the asymptotic formula for $\EE\abs{S(a)}$. We will reuse much of our work here in the subsequent section to compute the second moment $\EE\abs{S(a)}^2$.
Throughout the argument, $n$ is a large integer, and $a$ is a permutation selected uniformly at random from the set of the permutations of $[n]$.
One of our main tools is an inequality due to Hoeffding. We will mostly use the slightly less well-known variant of it, pertaining to random variables chosen from a finite set without replacement, such as the $a_i$.
\begin{theorem}[Hoeffding \cite{Hoeffding-1963}]
Let $X_i$ ($i=1,\dots,k$) be independent random variables with $\a \leq X_i \leq \b$ for some constants $\a,\b$, and let $\mu = \sum_{i=1}^k \EE X_i$. Then for any $t > 0$,
\begin{equation}\label{EXP:eq:01}
\mathbb{P}\bra{ \abs{{\sum_{i=1}^k} X_i - \mu } \geq t } \leq 2\exp\bra{- \frac{2 t^2}{k(\b-\a)^2}}.
\end{equation}
Moreover, the same inequality \eqref{EXP:eq:01} is satisfied if instead of independence we assume that $X_i$ are drawn without replacement from a finite (multi-)set.
\end{theorem}
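As a sanity check on the without-replacement version, one can compare the empirical tail of $\sum X_i$, for draws from $\{1,\dots,n\}$ without replacement, against the stated bound. This simulation is our own illustration with arbitrary parameter choices:

```python
import math
import random

def hoeffding_bound(k, a, b, t):
    """Right-hand side of the inequality: 2 exp(-2 t^2 / (k (b - a)^2))."""
    return 2 * math.exp(-2 * t * t / (k * (b - a) ** 2))

random.seed(0)
n, k, trials = 1000, 60, 2000
mu = k * (n + 1) / 2                 # mean of a sum of k draws from {1, ..., n}
t = 0.15 * k * n
hits = sum(abs(sum(random.sample(range(1, n + 1), k)) - mu) >= t
           for _ in range(trials))
empirical = hits / trials
bound = hoeffding_bound(k, 1, n, t)
```

The observed tail frequency is far below the Hoeffding bound, as expected (the bound is loose in this regime).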
For any $\varepsilon > 0$, let $S_\varepsilon(a)$ denote the set of all the sums $s$ of the form $\sum_{i=u}^v a_i$ with $\varepsilon n \leq u \leq n - \frac{2s}{n} - \varepsilon n$. It would be sufficient to work with a concrete choice such as $\varepsilon = \frac{1}{\log n}$, but many other choices are equally valid.
We first reduce the problem of determining $\EE\abs{S(a)}$ to dealing with a single (putative) sum in $S_\varepsilon(a)$.
\begin{proposition}\label{EXC:prop:P-s-in-S}
Fix a choice of $\varepsilon = \varepsilon(n)$ with $\varepsilon = o(1)$ and $\varepsilon = n^{-o(1)}$ as $n \to \infty$. Let $s = \sigma \frac{n^2}{2}$ with $\varepsilon < \sigma < 1-\varepsilon$. Then
\begin{equation}\label{EXC:eq:P-s-in-S}
\mathbb{P}(s \not\in S_{\varepsilon}(a)) = e^{-2+2\sigma} + o(1).
\end{equation}
Here, the error term is uniform with respect to $s$, but may depend on the implicit rates of convergence in the remaining usages of $o(1)$ notation.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{I:prop:exact}, assuming \ref{EXC:prop:P-s-in-S}]
Take, for instance, $\varepsilon(n) := \frac{1}{\log n}$. If the bound in equation \eqref{EXC:eq:P-s-in-S} holds for each $s$, then we have:
\begin{align*}
\EE\abs{S_{\varepsilon}(a)} &= \sum_{\varepsilon \frac{n^2}{2} < s < (1 - \varepsilon) \frac{n^2}{2} } \mathbb{P}(s \in S_\varepsilon(a)) + O(\varepsilon n^2) \\
&= \frac{n^2}{2} \int_{\varepsilon}^{1-\varepsilon} (1-e^{-2+2\sigma}) d \sigma + o( n^2)
\\ &= n^2 \bra{ \frac{1+e^{-2}}{4} + o(1) }.
\end{align*}
Hoeffding's inequality implies that $\abs{S(a) \setminus S_{\varepsilon}(a) } = o(n^2)$ with probability $1-o(1)$. Indeed, if $s = \sum_{i=u}^v a_i \in S(a) \setminus S_\varepsilon(a)$ then one of the following three possibilities holds: either $u < \varepsilon n$, or $v > n-2\varepsilon n$, or $v-u \leq \frac{2s}{n}-\varepsilon n$. The first two cases account for $O(\varepsilon n^2)$ elements of $S(a) \setminus S_\varepsilon(a)$. For the third case, note that by Hoeffding's inequality and the union bound, we have:
$$
\mathbb{P}\bra{(\exists u,v) \ : \ \sum_{i=u}^v a_i > \frac{n}{2}(v-u) + \varepsilon n^2 } \ll
{n^2 \exp\bra{-2\varepsilon^2 n}} = o(1).
$$
Of course, both sets $S(a), S_\varepsilon(a)$ have size $O(n^2)$. Hence,
\begin{equation}
{\EE\abs{S(a)}} = (1+o(1)){\EE\abs{S_\varepsilon(a)}} = (1+o(1)) \frac{1+e^{-2}}{4} n^2.
\qedhere
\end{equation}
\end{proof}
We devote most of the remainder of this section to proving Proposition \ref{EXC:prop:P-s-in-S}.
Given an index $u$, we define the set
$$S_u(a) := \left\{ \sum_{i=u}^v a_i \ : \ u \leq v \leq n\right\}.$$
Clearly, $s \in S_\varepsilon(a)$ precisely when $s \in S_{u}(a)$ for some $\varepsilon n \leq u \leq (1 - \sigma - \varepsilon)n$ (where $\sigma = \frac{2s}{n^2}$).
Let $N$ be a large even integer. By the inclusion/exclusion formula, we have:
\begin{equation}\label{EXC:eq:39}
\mathbb{P}( s \not \in S_\varepsilon(a)) \leq \sum_{\abs{U} \leq N} (-1)^{\abs{U}} \mathbb{P}\bra{ \bigwedge_{u \in U} s \in S_u(a)},
\end{equation}
where the sum is taken over all sets $U \subset [n]$ with $\min U \geq \varepsilon n$ and $\max U \leq (1 - \sigma - \varepsilon)n$. An analogous formula holds for odd $N$, except that the inequality sign is reversed. Hence, Proposition \ref{EXC:prop:P-s-in-S} will follow easily once a good estimate is found for each of the sums $\sum_{\abs{U} = M}\mathbb{P}\bra{ \bigwedge_{u \in U} s \in S_u(a)}$.
We next prove a lemma which will allow us to eliminate a proportion of terms in such sums. The statement is slightly more general, to accommodate later applications to the second moment. We will only need it in the special case when the $s_k$ take at most two distinct values, but this additional assumption would not simplify the argument.
\newcommand{\mathcal{S}}{\mathcal{S}}
\begin{lemma}\label{EXC:lem:kill-bad-U}
Fix a choice of $\varepsilon = \varepsilon(n)$ with $\varepsilon = o(1)$ and $\varepsilon = n^{-o(1)}$ as $n \to \infty$. Let $M$ be an integer; for $1 \leq k \leq M$, let $s_k = \sigma_k \frac{n^2}{2}$ with $\varepsilon < \sigma_k < 1-\sigma_k$ replaced by $\varepsilon < \sigma_k < 1-\varepsilon$, and let $u_k \in [n]$ be distinct indices with $u_k \geq \varepsilon n$. Then
\begin{equation}\label{EXC:eq:40a}
\mathbb{P}\bra{ \bigwedge_{k=1}^M s_k \in S_{u_k}(a)} = O \bra{ \frac{\log n}{\varepsilon n}}^M ,
\end{equation}
where the implicit constant depends only on $M$.
\end{lemma}
\begin{proof}
By Hoeffding's inequality, there is a constant $C_M$ such that with probability $1-O\bra{\frac{1}{n^M}}$ any interval $[u,v]$ of length $C_M \log n$ has $\sum_{i=u}^v a_i \geq n$.
Let us now select $a$ by choosing the $a_i$ in order of decreasing $i$. For each $k$, the event $s_k \in S_{u_k}(a)$ becomes determined at the time when $a_{u_k}$ is chosen. Either there are at most $C_M \log n$ choices of $a_{u_k}$ leading to $s_k \in S_{u_k}(a)$, or there is an interval of length $C_M \log n$ with sum less than $n$. Since the number of available choices of $a_{u_k}$ is at least $\varepsilon n$ at each step, we obtain the sought bound by a standard inductive argument.
\end{proof}
Our next goal is to obtain bounds on the probabilities in \eqref{EXC:eq:39} for ``generic'' $U$. The general strategy at this point is to select $a_i$ in several ``stages''. For each $u \in U$, we shall select $a_i$ with $u \leq i < u+m$ from a suitably constructed set $A_u$ of size $\abs{A_u} \sim \frac{1}{m} n$, where $m$ is an integer to be specified later. The rationale is that (assuming some genericity conditions on $A_u$) it should be relatively easy to understand the distribution of the sum $\sum_{i=u}^{u+m-1} a_i$, and conditioning on the choice of $A_u$, the sums $\sum_{i=u}^{u+m-1} a_i$ are independent. The main reason why this approach succeeds is contained in the following innocuous lemma.
Recall that for two functions $f,g \colon \mathbb{Z} \to \mathbb{R}$ with finite support, we define their convolution $f \ast g$ by $f \ast g (x) = \sum_{y \in \mathbb{Z}} f(x-y) g(y)$. We shall denote $\norm{f}_{\infty} := \max_{x \in \mathbb{Z}} \abs{f(x)}$. Also recall that we use $\omega(1)$ to denote an arbitrary function tending to $\infty$ as $n \to \infty$.
\begin{lemma}\label{EXC:lem:f-A,2=uni}
Fix a choice of integer $m = m(n)$ with $m = n^{o(1)}$ and $m = \omega(\log n)$ as $n \to \infty$.
Let $A \subset [n]$ be a randomly chosen set with $\abs{A} = \floor{n/m^2}$. Then the bound
\begin{equation}
\norm{\frac{1_{A}}{\abs{A}} \ast \frac{1_{A}}{\abs{A}} - \frac{1_{[n]}}{n} \ast \frac{1_{[n]}}{n} }_{\infty} \leq \frac{1}{n}\cdot \frac{m^5}{n^{1/2}}
\label{EXC:eq:41}
\end{equation}
holds with probability $1-n^{-\omega(1)}$.
\end{lemma}
Again, we could work with a specific choice of $m$, such as $m \sim n^{\frac{1}{\log \log n}}$ or (with minor modifications) $m \sim n^{\frac{1}{100}}$. The slightly unnatural looking condition $\abs{A} = \floor{n/m^2}$ will simplify notation later on.
\begin{proof}
Take arbitrary $x \in [n]$. It will suffice to show that the inequality
$$
\abs{ 1_{A} * 1_{A}(x) - \frac{1}{m^4} 1_{[n]} * 1_{[n]}(x) } \leq \sqrt{n} m
$$
holds with probability $1-n^{-\omega(1)}$, and apply the union bound. We may assume without loss of generality that $x \leq n/2$. For ease of notation, we additionally assume that $x$ is odd. In this case, $1_{[n]} * 1_{[n]}(x) = x-1$.
Define $A_1 = A\cap [1,x/2)$ and $A_2 = A \cap (x/2,x]$, so that we have $1_A * 1_A(x) = 2\abs{A_1 \cap (x-A_2)}$. An application of Hoeffding's inequality shows that the bounds:
$$
\abs{ \abs{A_1} - \frac{x-1}{2m} } \leq \sqrt{x m}, \qquad
\abs{ \abs{A_2} - \frac{x-1}{2m} } \leq \sqrt{x m}
$$
hold with probability $\geq 1 - 2 \exp(-2m) = 1-n^{-\omega(1)}$. Similarly, the bound
$$
\abs{
\abs{ A_1 \cap (x-A_2)} - \frac{\abs{A_1}\abs{A_2}}{x/2}
} \leq \sqrt{ x m }
$$
holds with probability $\geq 1 - 2 \exp(-2m) = 1-n^{-\omega(1)}$. Hence, by the union bound, we have with probability $1-n^{-\omega(1)}$ that
\begin{align*}
\abs{ 1_{A} * 1_{A}(x) - \frac{1}{m^4} 1_{[n]} * 1_{[n]}(x) }
&= \abs{2 \abs{A_1 \cap (x-A_2)} - \frac{x-1}{m^4} }
\\& \leq \abs{ \frac{4 \abs{A_1} \abs{A_2}}{x} - \frac{x-1}{m^4} } + \sqrt{n m}
\\& \leq 2 \abs{ \abs{A_1} - \frac{x}{2m} } + 2\abs{ \abs{A_2} - \frac{x}{2m} } + \sqrt{nm}
\\& \leq 5 \sqrt{nm} < m \sqrt{n},
\end{align*}
as required.
\end{proof}
As the reader will have noticed, we make no serious attempt to optimize the dependence on $m$. The exponent $5$ does not play a special role; we merely need it to be a constant.
\newcommand{{r}}{{r}}
For a set $A \subset [n]$ and an integer $k$, let us agree to denote by ${r}_{A,k}$ the distribution of $Y = X_1 + X_2 + \dots + X_k$, where the $X_i$ are chosen uniformly from $A$ without replacement, that is
$${r}_{A,k}(x) = \mathbb{P}(Y = x) = \mathbb{P}\bra{\sum_{i=1}^k X_i = x}.$$ Also, let ${r}_{{\mathrm{uni}},k}$ denote the distribution of $V = U_1 + U_2 + \dots + U_k$ where $U_i \in [n]$ are chosen uniformly and independently, hence in particular ${r}_{{\mathrm{uni}},k} = {r}_{{\mathrm{uni}},1} * \cdots * {r}_{{\mathrm{uni}},1}$.
For instance, if $\abs{A} = \floor{n/m^2}$ then ${r}_{A,1} = \frac{1_A}{\abs{A} }$, and since we have the bound $\norm{ {r}_{A,1} \ast {r}_{A,1} - {r}_{A,2}}_\infty = O\bra{\frac{m^4}{n^2}}$, the above lemma says that with overwhelming probability $\norm{ {r}_{A,2} - {r}_{{\mathrm{uni}},2} }_{\infty} \ll \frac{1}{n} \frac{m^5}{n^{1/2}}$. More generally, we have the following.
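For $k=2$ the uniform model has the explicit triangular law ${r}_{{\mathrm{uni}},2}(s) = \min(s-1,\, 2n+1-s)/n^2$ for $2 \leq s \leq 2n$, and both laws are simple to tabulate. The sketch below uses our own helper names, and the sup-norm threshold is deliberately crude:

```python
import random
from collections import Counter

def r_uni2(n):
    """Law of U1 + U2 with U1, U2 independent uniform on {1, ..., n}."""
    return {s: min(s - 1, 2 * n + 1 - s) / n ** 2 for s in range(2, 2 * n + 1)}

def r_A2(A):
    """Law of X1 + X2, two draws from A without replacement."""
    A = list(A)
    cnt = Counter(x + y for x in A for y in A if x != y)
    m = len(A) * (len(A) - 1)        # number of ordered pairs
    return {s: c / m for s, c in cnt.items()}

random.seed(0)
n = 400
A = random.sample(range(1, n + 1), 100)   # a random spread-out set
f, g = r_A2(A), r_uni2(n)
sup_dist = max(abs(f.get(s, 0.0) - g.get(s, 0.0)) for s in range(2, 2 * n + 1))
```

For a random spread-out $A$ the sup-distance to the uniform model is indeed small, in the spirit of Lemma \ref{EXC:lem:f-A,2=uni}.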
\begin{corollary}\label{EXC:lem:f-A,m=uni}
Fix $m$ with $m = n^{o(1)}$ and $m = \omega(\log n)$, even. Let $A \subset [n]$ be a randomly chosen set with $\abs{A} = m\floor{n/m^2}$. Then the bound
\begin{equation}
\norm{ {r}_{A,m} - {r}_{{\mathrm{uni}},m} }_{\infty} \leq \frac{1}{n} \cdot \frac{m^6}{n^{1/2}}
\label{EXC:eq:42}
\end{equation}
holds with probability $1-n^{-\omega(1)}$.
\end{corollary}
\begin{proof}
We consider a two-stage random process. First, we select disjoint sets $A_1,\dots,A_m \subset A$ with $\abs{A_i} = \floor{n/m^2}$ uniformly at random. Second, we select $b_{2j-1},b_{2j} \in A_j$ uniformly at random without replacement. It follows that
\begin{equation}
{r}_{A,m} = \EEbig_{A_1,\dots,A_{m/2}} {r}_{A_1,2} \ast {r}_{A_2,2} \ast \dots {r}_{A_{m/2},2}.
\end{equation}
where the expectation is taken over all admissible choices of the disjoint sets $A_1,\dots,A_{m/2}$.
It is a standard fact that if $f$ is a probability distribution then for any bounded $g$ one has $\norm{ f \ast g}_{\infty} \leq \norm{g}_\infty$. Hence, we obtain by a simple inductive argument
\begin{equation}
\norm{ {r}_{A_1,2} \ast {r}_{A_2,2} \ast \dots \ast {r}_{A_{m/2},2} - {r}_{{\mathrm{uni}},m} }_\infty \leq \sum_{j=1}^{m/2} \norm{{r}_{A_j,2} - {r}_{{\mathrm{uni}},2}}_\infty.
\end{equation}
Note that if $A$ is selected uniformly at random (subject to cardinality), then so is each of the sets $A_j$.
Using Lemma \ref{EXC:lem:f-A,2=uni} and the union bound, with probability $1
-n^{-\omega(1)}$, the bound \eqref{EXC:eq:41} holds for each $A_j$. Hence, with probability $1-n^{-\omega(1)}$, also the bound \eqref{EXC:eq:42} holds.
\end{proof}
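As a quick numerical sanity check (not part of the argument), one can compare the exact distribution of a sum of two elements drawn from a random set $A$ without replacement with the sum of two independent uniforms on $[n]$. The parameters below (in particular $n = 200$ and $\abs{A} = 50$) are illustrative only and do not match the asymptotic regime of the corollary.

```python
import random
from collections import Counter

# Illustrative check: the distribution of b_1 + b_2 (drawn from a random set A
# without replacement) is close in sup-norm to that of U_1 + U_2 (independent
# uniforms on [n]).  Sizes here are chosen for speed, not to match the lemma.
n = 200
random.seed(0)
A = random.sample(range(1, n + 1), 50)

# r_{A,2}: exact distribution of the sum of two distinct elements of A.
pair_counts = Counter(x + y for x in A for y in A if x != y)
total_pairs = len(A) * (len(A) - 1)
r_A2 = {s: c / total_pairs for s, c in pair_counts.items()}

# r_{uni,2}: exact distribution of the sum of two independent uniforms on [n].
uni_counts = Counter(x + y for x in range(1, n + 1) for y in range(1, n + 1))
r_uni2 = {s: c / n**2 for s, c in uni_counts.items()}

sup_norm = max(abs(r_A2.get(s, 0.0) - p) for s, p in r_uni2.items())
print(sup_norm)  # small compared to the peak value r_uni2(n+1) = 1/n
```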
We shall exploit the fact that the sums $\sum_{i=u}^v a_i$ tend to have roughly the correct magnitude if $v-u$ is larger than a power of $\log n$. To make this idea precise, we introduce the following piece of notation. Let $I \subset [n]$ be a (connected) interval, and let $b = (b_i)_{i \in I}$ be a $[n]$-valued sequence indexed by $I$. Then $\mathscr{R}(b)$ is the event that for any non-empty interval $J \subset I$ one has
\begin{equation}
\abs{ \sum_{j \in J}b_j - \abs{J} \frac{n}{2} } \leq \sqrt{\abs{J}} n \log n.
\label{EXC:eq:R}
\end{equation}
It is an easy consequence of the Hoeffding inequality that for any $I$ we have
$$\mathbb{P}\brabig{\mathscr{R}(a|_{I})} = 1 - n^{-\omega(1)},$$ where we use the notation $a|_I$ to denote the restriction of $a$ to $I$. (In fact, we could replace $\log n$ in \eqref{EXC:eq:R} above with any function that is $\omega(\sqrt{\log n})$.) Slightly more generally, we have the following result.
\begin{lemma}\label{EXC:lem:P-R-uniform} Fix a choice of even integer $m$ with $m = n^{o(1)}$ and $m = \omega(\log n)$.
Let $A \subset [n]$ be a randomly chosen set with $\abs{A} = m \floor{n/m^2}$. Let $b = (b_1,\dots,b_m)$ be chosen uniformly at random without replacement from $A$. Then there is a function $h$ with $h(n) \to \infty$ as $n \to \infty$ such that
\begin{equation}\label{EXC:cond:R1}
\mathbb{P}_{A}\bra{ \mathbb{P}_{b_i \in A}( \neg \mathscr{R}(b)) \geq n^{-h(n)} } \leq n^{-h(n)}.
\end{equation}
\end{lemma}
\begin{proof}
By Hoeffding's inequality, for each interval $J$ with $\abs{J} = l$ one has:
$$
\mathbb{P}_{b_i \in [n] }
\bra{ \abs{ \sum_{j \in J} b_j - l \frac{n}{2} } \geq \sqrt{l} n \log n}
\leq 2\exp\bra{ - 2 \log^2 n}
= n^{-\omega(1)},
$$
where $b_i$ are chosen from $[n]$ without replacement, uniformly at random. Hence, after an application of the union bound over $O(n^2)$ choices of $J$, we also know that $$\EEbig_{A}\brabig{ \mathbb{P}_{b_i \in A}( \neg \mathscr{R}(b)) } = \mathbb{P}_{b_i \in [n]} \bra{ \neg \mathscr{R}(b) } = n^{-\omega(1)}.$$ The claim now follows from Markov's inequality.
\end{proof}
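The event $\mathscr{R}(b)$ can be probed numerically (again, purely as an illustration, with small parameters chosen for speed): for a sample drawn from $[n]$ without replacement, every interval sum sits far inside the tolerance $\sqrt{\abs{J}}\, n \log n$.

```python
import math
import random

# Illustration only: for b drawn without replacement from [n], the worst ratio
# of |interval sum - |J| * n/2| to the allowed sqrt(|J|) * n * log(n) is far
# below 1, i.e. the event R(b) from (EXC:eq:R) holds with a wide margin.
n, m = 500, 100
random.seed(1)
b = random.sample(range(1, n + 1), m)

prefix = [0]
for x in b:
    prefix.append(prefix[-1] + x)

worst = max(
    abs((prefix[v] - prefix[u]) - (v - u) * n / 2)
    / (math.sqrt(v - u) * n * math.log(n))
    for u in range(m) for v in range(u + 1, m + 1)
)
print(worst)  # well below 1
```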
The above lemma can be restated by saying that for a randomly selected $A$, the bound
\begin{equation}\label{EXC:cond:R2}
\mathbb{P}_{b_i \in A}( \mathscr{R}(b)) \geq 1 - n^{-\omega(1)}
\end{equation}
holds with probability $1 - n^{-\omega(1)}$.
We are now ready to estimate the probabilities in \eqref{EXC:eq:39}. The only caveat is that we shall need to assume a certain genericity condition on $U$. Namely, we shall require that the set $U$ be well-separated:
\begin{equation}\label{EXC:cond:U-reg}
(\forall u, v \in U) \ u = v \vee \abs{u-v} > m,
\end{equation}
where $m$ is a parameter yet to be specified. Under this condition, we can now prove the needed bound. Again, we prove slightly more than necessary at the moment, since in this section we will only apply the following proposition with a constant sequence $s_k = s$.
\begin{proposition}\label{EXC:prop:P-wedge-s-in-S}
Fix a choice of $\varepsilon = \varepsilon(n)$ and even $m = m(n)$ such that $\varepsilon = o(1)$, $m = n^{o(1)}$ and $m = \bra{ \frac{1}{\varepsilon} \log n}^{\omega(1)}$ as $n \to \infty$. Let $M$ be an integer. For $1 \leq k \leq M$ let $s_k = \sigma_k \frac{n^2}{2}$ with $\varepsilon < \sigma_k < 1-\varepsilon$ and let $u_k \in [n]$ be distinct, and such that $U := \{u_k\}_{k=1}^M$ satisfies the well-separatedness condition \eqref{EXC:cond:U-reg}. Then
\begin{equation}
\mathbb{P}\bra{ \bigwedge_{k=1}^M \bra{ s_k \in S_{u_k}(a) } } = \bra{\frac{2}{n}}^M (1+ o(1)),
\end{equation}
where the implicit error term depends only on $M$.
\end{proposition}
\begin{proof}
Define intervals $I_k = [u_k,u_k+ m)$ for $k \in [M]$, and $I_{0} = [1, M m\floor{n/m^2} ]$, and finally $I_{\infty} = [n] \setminus \bigcup_{k=0}^M I_k$. Because of the well-separatedness \eqref{EXC:cond:U-reg}, these are disjoint.
We can think of $a$ as being selected in two stages. Firstly, for each $1 \leq k \leq M$ we select a set $A_k$ of size $m\floor{n/m^2}$ uniformly at random, and put $A_{\infty} = [n] \setminus \bigcup A_k$. Next, for each $k \in [M] \cup \{\infty\}$ and each $i \in I_{k}$, we select $a_i$ uniformly at random from $A_{k}$ without replacement. Finally, for $i \in I_0$, we select $a_i$ from $\bigcup_{k \in [M]} A_k$ (except for values which have already been used) uniformly at random. It is not hard to see that this procedure produces the uniform distribution of $a$.
By Lemmas \ref{EXC:lem:f-A,m=uni} and \ref{EXC:lem:P-R-uniform}, for each $k \in [M]$ the regularity
conditions \eqref{EXC:eq:42} and \eqref{EXC:cond:R2} hold for $A_k$ with probability $1-n^{-\omega(1)}$. Hence, it will suffice to prove the bound
\begin{equation}\label{EXC:eq:49}
\mathbb{P}\bra{ \bigwedge_{k=1}^M (s_k \in S_{u_k}(a)) \middle| a_{i} \in A_k \text{ for all } k,\ i \in I_k } = \bra{\frac{2}{n}}^M \bra{1+ o(1)}
\end{equation}
where each of the sets $A_k$ satisfies \eqref{EXC:eq:42} and \eqref{EXC:cond:R2} (here, the error term does not depend on $s_k$).
Because for each fixed $k$ we have $\mathbb{P}(\neg \mathscr{R}( a|_{I_k} ) | \ a_{i} \in A_k \text{ for } i \in I_k) = n^{-\omega(1)}$ by \eqref{EXC:cond:R2}, we may replace each event $s_k \in S_{u_k}(a)$ in \eqref{EXC:eq:49} with the event $s_k \in S_{u_k}(a) \wedge \mathscr{R}(a|_{I_k})$ at the cost of introducing a negligible error term of the order $O(n^{-\omega(1)})$.
The claim now follows by a standard inductive argument. It suffices to prove the following.
\begin{lemma*}
Let $k \in [M]$. For $k < l \leq M$ and $l = \infty$, let $a^*_{I_l} = (a^*_j)_{j \in I_l}$ be $[n]$-valued sequences such that the well-distribution condition $\mathscr{R}(a^*_{I_l})$ defined in \eqref{EXC:eq:R} holds. Let $A \subset [n]$ be a set with $\abs{A} = m \floor{n/m^2}$ satisfying the regularity condition \eqref{EXC:eq:42} and the well-distribution condition \eqref{EXC:cond:R2}. Then
\begin{equation}\label{EXC:eq:50}
\mathbb{P} \left( s_k \in S_{u_k}(a) \wedge \mathscr{R}(a_{I_k})
\middle|
\begin{matrix}
a_{i} \in A \text{ for } i \in I_k \text{ and } \\
a_{I_{l}} = a_{I_l}^* \text{ for } l = k+1,\dots,M, \infty
\end{matrix}
\right) = \frac{2+o(1)}{n},
\end{equation}
where the error term does not depend on $s_k$.
\end{lemma*}
\begin{proof}
Let $b_1,\dots,b_m$ be chosen uniformly at random without replacement from $A$. Then the probability in \eqref{EXC:eq:50} can be rewritten as
\begin{equation}\label{EXC:eq:51}
\mathbb{P}_{b_i \in A} \bra{ s_k - \sum_{i=1}^m b_i \in S_{u_k+m}(a^*) \wedge \mathscr{R}(b)}.
\end{equation}
Because of \eqref{EXC:cond:R2}, we have $\mathbb{P}_b(\mathscr{R}(b)) = 1 - n^{-\omega(1)}$.
Putting $$R := s_k-S_{u_k+m}(a^*),$$ it will hence suffice to estimate $\mathbb{P}_{b} \bra{ \sum_{i=1}^m b_i \in R}$.
The assumption $\mathscr{R}(a^*_{I_l})$ for $l = k+1,\dots,M,\infty$ implies that for any interval $J \subset \bigcup_{l =k+1}^M I_l \cup I_\infty$ with $\abs{J} \gg \frac{\log^2n}{\varepsilon^4}$ one has:
\begin{equation}
\sum_{j \in J} a^*_j = \frac{n}{2} \abs{J}(1+O(\varepsilon^2)).
\end{equation}
Note in particular that
$$\max S_{u_k+m}(a^*) = \frac{n}{2}(n-u_k-m)(1+O(\varepsilon^2)) \geq \frac{1}{2}n^2(\sigma_k+\varepsilon - O(\varepsilon^2)).$$ Thus, we have somewhat crude bounds $\min R < 0$ and $ \max R > \frac{1}{2} \varepsilon n^2 > mn$. (Recall that $\varepsilon > \frac{1}{\sqrt{n}} > \frac{m}{n}$.)
If $K \subset [\min R, \max R]$ is an arbitrary interval of length at least $n \frac{\log^2n}{\varepsilon^4}$ then
\begin{equation}
\abs{K \cap R} = \frac{2}{n} \abs{K} (1+ O(\varepsilon^2)).
\label{EXC:eq:45}
\end{equation}
Indeed, if $J$ denotes the set of $v$ such that $s_k - \sum_{i=u_k+m}^v a_i^* \in K$ then $\abs{J} = \abs{K \cap R}$ and
$$ \abs{K} + O(n) = \sum_{v \in J} a_{v}^* = \frac{n}{2} \abs{K\cap R}(1+O(\varepsilon^2)),$$
which easily implies the claim.
Returning to the probability we wish to estimate, we clearly have
\begin{equation}\label{EXC:eq:53}
\mathbb{P}_{b}\bra{ \sum_{i=1}^m b_i \in R} = \sum_{x } 1_R(x) {r}_{A,m}(x),
\end{equation}
where ${r}_{A,m}(x)$ is the distribution of $\sum_{i \in I_k} b_i$. The sum is formally taken over $\mathbb{Z}$ but ${r}_{A,m}$ is identically $0$ outside the interval $[m,mn]$.
It follows from Corollary \ref{EXC:lem:f-A,m=uni} that
\begin{align}\label{EXC:eq:55}
\abs{
\sum_{x } 1_R(x) \bra{ {r}_{A,m}(x) - {r}_{{\mathrm{uni}},m}(x) }
} \leq mn \norm{{r}_{A,m} - {r}_{{\mathrm{uni}},m}}_\infty
= O\bra{ \frac{m^7}{n^{1/2}} } .
\end{align}
We note some basic properties of ${r}_{{\mathrm{uni}},m}$. Firstly, ${r}_{{\mathrm{uni}},m}$ takes its maximal value at precisely one point, with respect to which it is symmetric. Indeed, this property holds for ${r}_{{\mathrm{uni}},2}$ and is preserved under convolution with ${r}_{{\mathrm{uni}},2}$.
Secondly, for any integer $x$ we claim that ${r}_{{\mathrm{uni}},m}(x) \ll \frac{1}{m^{1/2}n}$. For this purpose we introduce the exponential sum $\phi(t) := \frac{1}{n} \sum_{k=1}^n e( k t) = \frac{e(t)}{n} \frac{e(tn)-1}{e(t)-1}$ where $e(t) := e^{2 \pi i t}$, so that for each $x$ we have $${r}_{{\mathrm{uni}},m}(x) = \int_{0}^1 e(-xt) \phi(t)^m dt \leq \int_0^1 \abs{\phi(t)}^m dt.$$
We have the trivial bound $\abs{\phi(t)} \leq 1$ for each $t$. Slightly less trivially, $\abs{\phi(t)} = \abs{ \frac{\sin n t\pi}{n \sin t \pi}}$. Hence, for $\frac{1}{2n} \leq t \leq 1 - \frac{1}{2n}$ we find $\abs{\phi(t)} \leq \frac{1}{\pi/2 - o(1)} < \frac{2}{3}$ if $n$ is large enough, and hence
$$\int_{1/2n}^{1-1/2n} \abs{\phi(t)}^m dt < \bra{\frac{2}{3}}^m \ll \frac{1}{m^{1/2}n}.$$
For $0 < t < \frac{1}{2n}$ we have $\abs{\phi(t)} = \abs{ \frac{\sin \pi n t}{ n\pi t(1 + O(1/n^2))}}$ and hence $\abs{\phi(t)}^m = \abs{ \frac{\sin \pi n t}{n \pi t}}^me^{O(m/n^2)}$. One can find a universal constant $c$ such that $\abs{ \frac{\sin u}{u}} \leq e^{-c u^2}$ for $0 < u < \pi/2$, and thus $\abs{ \phi(t) }^m \leq e^{-c t^2 n^2 m}$ (possibly with a different constant). Consequently,
$$\int_{0}^{1/2n} \abs{ \phi(t) }^m dt < \int_{0}^\infty e^{-c t^2 n^2 m} dt < \frac{C}{m^{1/2}n}$$
for an absolute constant $C>0$. Applying the same argument to $1-\frac{1}{2n} < t < 1$, we conclude that ${r}_{{\mathrm{uni}},m}(x) \leq \frac{C}{m^{1/2}n}$, where $C$ is absolute.
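The pointwise bound ${r}_{{\mathrm{uni}},m}(x) \ll \frac{1}{m^{1/2}n}$ can also be verified directly for small parameters (illustration only; the values of $n$ and $m$ below are chosen for speed and the constant $2$ is an experimental stand-in for $C$):

```python
import math

# Illustration: compute the exact distribution of U_1 + ... + U_m for
# independent uniforms on [n] by repeated convolution, and compare its peak
# with 2 / (sqrt(m) * n).  The constant 2 here is empirical, not from the text.
def uniform_sum_distribution(n, m):
    dist = [1.0 / n] * n          # distribution of U_1, supported on 1..n
    for _ in range(m - 1):        # convolve with one more uniform each round
        new = [0.0] * (len(dist) + n - 1)
        for i, p in enumerate(dist):
            for j in range(n):
                new[i + j] += p / n
        dist = new
    return dist                   # index k holds P(sum = k + m)

n, m = 50, 6
dist = uniform_sum_distribution(n, m)
peak = max(dist)
print(peak, 2 / (math.sqrt(m) * n))
```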
We may now estimate $\sum_{x } 1_R(x) {r}_{{\mathrm{uni}},m}(x)$. Let us separate the interval $[0, m n]$ into pieces $\dots,K_{-2},K_{-1},K_0,K_{1},K_{2},\dots$, arranged in this order, disjoint, each of length $(1+o(1)) n \frac{\log^2n}{\varepsilon^4}$, except for $K_0 := \{\frac{m(n+1)}{2}\}$ of length $1$. Consider the function $g_{-}$ defined by $g_{-}(x) = \frac{2}{n}(1-C \varepsilon^2)$ for $x \in {K_i}$ with $\abs{i} > 1$ and $g_{-}(x) = 0$ for $x \in K_{1},K_0,K_{-1}$. Similarly, consider $g_{+}$ defined by $g_{+}(x) = \frac{2}{n}(1+C \varepsilon^2)$ for $x \in {K_i}$ with $\abs{i} \neq 0$ and $g_{+}\bra{\frac{m(n+1)}{2}} = 2 \frac{\log^2n}{\varepsilon^4}$.
The density assumption \eqref{EXC:eq:45} and monotonicity of ${r}_{{\mathrm{uni}},m}$ imply that
\begin{equation}
\sum_{x} {r}_{{\mathrm{uni}},m}(x) g_{-}(x) \leq \sum_{x} {r}_{{\mathrm{uni}},m}(x) 1_{R}(x) \leq \sum_{x} {r}_{{\mathrm{uni}},m}(x) g_{+}(x),
\end{equation}
provided that $C$ is sufficiently large. Moreover, we have
$$
\sum_{\substack{x \in K_i, \abs{i} \leq 1 }}
{r}_{{\mathrm{uni}},m}(x) \ll \frac{\log^{2} n}{\varepsilon^2 m^{1/2}} \ll \varepsilon^2,
$$
and inserting this bound into the above inequality we find
\begin{equation}
\sum_{x} {r}_{{\mathrm{uni}},m}(x) g_{-}(x) \geq \frac{2}{n}\bra{1-C \varepsilon^2}\bra{1 - O(\varepsilon^2)
} = \frac{2}{n}\bra{1-O(\varepsilon^2)}.
\end{equation}
By a similar argument, we have
\begin{equation}
\sum_{x} {r}_{{\mathrm{uni}},m}(x) g_{+}(x) \leq \frac{2}{n}(1+C \varepsilon^2) + {r}_{{\mathrm{uni}},m}\bra{\frac{m(n+1)}{2}} \cdot 2\frac{\log^2n}{\varepsilon^4} = \frac{2}{n}(1+O(\varepsilon^2)).
\end{equation}
Hence, we obtain
\begin{equation}
\sum_{x} {r}_{{\mathrm{uni}},m}(x) 1_{R}(x) = \frac{2}{n}(1+O(\varepsilon^2)).
\end{equation}
Combined with \eqref{EXC:eq:53} and \eqref{EXC:eq:55}, this finishes the proof of the inductive step, and hence the entire proof.
\renewcommand{\qedsymbol}{}\end{proof}
\end{proof}
We are now in position to finish the proof of Proposition \ref{EXC:prop:P-s-in-S} (and hence also the first part of Proposition \ref{I:prop:exact}).
\begin{proof}[Proof of Proposition \ref{EXC:prop:P-s-in-S}]
Let $N$ be a large even integer. From \eqref{EXC:eq:39} we have
\begin{equation}
\mathbb{P}(s \not\in S_\varepsilon(a)) \leq \sum_{M \leq N} (-1)^M \sum_{\abs{U} = M} \mathbb{P}\bra{ \bigwedge_{u \in U} s \in S_u(a) }.
\end{equation}
The inner sum runs over $ \binom{ (1-\sigma + o(1))n}{M} = (1+o(1)) \frac{(1-\sigma)^M}{M!}n^M$ summands. The proportion of summands for which \eqref{EXC:cond:U-reg} fails is $O(\frac{m}{n})$, and by Lemma \ref{EXC:lem:kill-bad-U} each of them contributes at most $O\bra{\frac{\bra{ \frac{1}{\varepsilon}\log n}^M}{n^M}}$. Hence, their total contribution is $O\bra{ \frac{m \bra{ \frac{1}{\varepsilon}\log n}^M}{n}} = o(1)$.
All remaining summands are by Proposition \ref{EXC:prop:P-wedge-s-in-S} equal to $\bra{\frac{2}{n}}^M(1+o(1))$. Thus, the inner sum can be estimated as
\begin{align*}
\sum_{\abs{U} = M} \mathbb{P}\bra{ \bigwedge_{u \in U} s \in S_u(a) } &= \frac{(1-\sigma)^M}{M!}n^M \cdot \bra{\frac{2}{n}}^M(1+o(1)) + o(1)
\\&= \frac{ (2-2\sigma)^M }{M!} + o(1).
\end{align*}
This leads to the upper bound
\begin{align*}
\mathbb{P}(s \not \in S_\varepsilon(a)) &\leq \sum_{M=0}^N (-1)^M \frac{ (2-2\sigma )^M }{M!} + o(1)
\leq e^{-2+2\sigma} + \frac{C}{N!} + o(1),
\end{align*}
where the constant $C$ is absolute. Letting $N \to \infty$ slowly with $n$ we conclude that $\mathbb{P}(s \not\in S_\varepsilon(a)) \leq e^{-2+2\sigma} + o(1)$.
Running the same argument with $N$ odd we conclude the reversed bound $\mathbb{P}(s \not\in S_\varepsilon(a)) \geq e^{-2+2\sigma} + o(1)$. Hence $\mathbb{P}(s \not\in S_\varepsilon(a)) = e^{-2+2\sigma}(1+o(1))$, where the error term is uniform with respect to $s$.
\end{proof}
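The conclusion of Proposition \ref{EXC:prop:P-s-in-S} is easy to probe by Monte Carlo simulation (illustration only; the parameters below are small and the agreement is only approximate at finite $n$):

```python
import math
import random

# Illustration: for s = sigma * n^2 / 2, estimate the probability that s is an
# interval subsum of a uniform random permutation and compare with
# 1 - e^{-2 + 2*sigma}.  For sigma = 1/2 the limit is 1 - e^{-1} ~ 0.632.
n, trials, sigma = 200, 500, 0.5
s = round(sigma * n * n / 2)
random.seed(2)

hits = 0
for _ in range(trials):
    a = random.sample(range(1, n + 1), n)      # a uniform permutation of [n]
    prefix, seen = 0, {0}
    found = False
    for x in a:
        prefix += x
        if prefix - s in seen:                 # s equals some interval sum
            found = True
            break
        seen.add(prefix)
    hits += found

estimate = hits / trials
print(estimate, 1 - math.exp(-2 + 2 * sigma))
```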
\section{Higher moments of $\abs{S(a)}$}\label{section:HM}
In this section, we (asymptotically) compute the second moment $\EE\abs{S(a)}^2$, where $a$ is a randomly selected permutation of $[n]$, and thereby establish the concentration of $\abs{S(a)}$ around its mean.
By a standard application of the second moment method, we have for any $\delta > 0$ that
$$
\mathbb{P}\bra{ \abs{ \abs{S(a)} - \EE\abs{S(a)} } > \delta n^2 } \leq \frac{\EE\abs{S(a)}^2 - \bra{\EE\abs{S(a)}}^2}{\delta^2 n^4}.
$$
Hence, the concentration around the mean in Proposition \ref{I:prop:exact} will follow directly from the following result.
\begin{proposition}\label{HM:prop:E(S(a)^p)}
The second moment of $\abs{S(a)}$ is given by $$\EE \abs{S(a)}^2 = (1+o(1))c^2 n^{4}$$ as $n \to \infty$, where $c = \frac{1+e^{-2}}{4}$.
\end{proposition}
Because $\abs{S(a)}/n^2$ is bounded, the concentration around the mean, as stated in Proposition \ref{I:prop:exact}, implies the asymptotic formula for the higher moments $\EE \abs{S(a)}^p$ for all $p\geq 1$, namely $\EE \abs{S(a)}^p = (1+o(1)) c^p n^{2p}$. A similar formula also follows from a slight adaptation of the argument we give.
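The concentration itself is visible in simulation (illustration only; $n = 150$ and six samples are far from the asymptotic regime, but the clustering around $c \approx 0.284$ is already apparent):

```python
import random

# Illustration: |S(a)| / n^2 for independent uniform permutations clusters
# around c = (1 + e^{-2}) / 4 ~ 0.284.  Parameters chosen for speed only.
def normalised_subsum_count(n, rng):
    a = rng.sample(range(1, n + 1), n)     # a uniform permutation of [n]
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    # S(a) = set of all distinct interval sums of a
    sums = {prefix[v] - prefix[u] for u in range(n) for v in range(u + 1, n + 1)}
    return len(sums) / n**2

rng = random.Random(4)
samples = [normalised_subsum_count(150, rng) for _ in range(6)]
print(min(samples), max(samples))
```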
We will argue along the same lines as in Section \ref{section:EXC}. The only missing ingredient we need in order to compute the higher moments of $\abs{S(a)}$ is a strengthening of Lemma \ref{EXC:lem:kill-bad-U} where $u_k$ are not necessarily distinct. This task is less trivial than it might appear at first.
Consider an example where $u_1 = u_2$, $s_3 = s_1 - s_2$ (e.g. $s_3=s_2=\frac{1}{2} s_1$) and $\EE \sum_{i=u_1}^{u_3} a_i = s_2$. For the condition $\bigwedge_{k=1}^3 (s_k \in S_{u_k}(a))$ to hold, it suffices that $s_2 = \sum_{i=u_1}^{u_3-1} a_i $ and $s_3 \in S_{u_3}(a)$. One expects the probability of the latter event to be roughly $\frac{1}{n^{5/2}}$, at least when $s_k$ are not too small and $u_3$ is chosen to maximise $\mathbb{P}(s_2 = \sum_{i=u_1}^{u_3-1} a_i)$. This is significantly larger than the bound $\frac{1}{n^3}$ which one might expect by analogy to Lemma \ref{EXC:lem:kill-bad-U}. Hence, the direct generalisation of Lemma \ref{EXC:lem:kill-bad-U} is not possible; instead we prove an averaged version.
As before, we will only apply the following Lemma in the case when $s_k$ take at most two distinct values.
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\operatorname{out-deg}}{\operatorname{out-deg}}
\newcommand{\operatorname{in-deg}}{\operatorname{in-deg}}
\begin{lemma}\label{EXC:lem:kill-bad-U-2}
Fix a choice of $\varepsilon = \varepsilon(n)$ with $\varepsilon = o(1)$ and $\varepsilon = n^{-o(1)}$ as $n \to \infty$. Let $M$ be an integer and for $1 \leq k \leq M$ let $s_k = \sigma_k \frac{n^2}{2}$ with $\varepsilon < \sigma_k < 1-\varepsilon$. Denote by $\mathcal{U}$ the set of all sequences $u= (u_k)_{k=1}^M$ such that $\varepsilon n < u_k < (1-\sigma_k - \varepsilon)n$ for all $k$ and \emph{not} all $u_k$ are distinct. Then
\begin{equation}\label{EXC:eq:71}
\sum_{u \in \mathcal{U}} \mathbb{P}\bra{ \bigwedge_{k=1}^M s_k \in S_{u_k}(a)} = O\bra{ \frac{\bra{\frac{1}{\varepsilon} \log n}^M}{n}},
\end{equation}
where the implicit constant depends only on $M$.
\end{lemma}
\begin{proof}
Throughout the argument, $M$ is fixed, and we allow all constructions to depend on $M$.
We introduce a structure which we call ``type graph'', intended to record possible linear dependencies between different sums of the form $\sum_{i=u_k}^{v} a_{i}$ and target sums $s_k$. A type graph $G$ consists of the following data:
\begin{enumerate}
\item An initial interval $K = [N]$ together with a partition $K = I \cup J$, $I \cap J = \emptyset$ and $\abs{I},\abs{J} \leq M$.
\item A collection of $M$ edges $E \subset K \times K$, ordered so that if $\bra{i,j} \in E$ then $i < j$, such that $\operatorname{out-deg}(i) \geq 1$ for each $i \in I$ and $\operatorname{in-deg}(j) \geq 1$ for each $j \in J$.
\item A collection of labels $s_{{i,j}}$ for each $\bra{i,j} \in E$, where each $s_{{i,j}}$ has the form $s_{{i,j}} = \sum_{k} q_k s_k$ with $q_k \in \mathbb{Z}$ and $\sum_{k} \abs{ q_k } \leq 2^{M^2 - L} $ where $L = \sum_{\bra{i,j} \in E} (j-i)$.
\end{enumerate}
The bound in the final condition is purely technical; its significance will become clear in the course of the proof. Formally, $G$ is a 4-tuple consisting of $I,J,E$ and $(s_{{i,j}})_{\bra{i,j} \in E}$. Whenever we introduce a type graph $G$, we implicitly also introduce the $I,J,K$, $E$ and $s_{{i,j}}$ appearing in the definition.
If $G$ is a type graph, and $(w_k)_{k \in K}$ is a (strictly) increasing sequence, then we define the condition $\mathscr{A}_G(a,w)$ to hold precisely when the system of equations:
\begin{equation}\label{EXC:eq:78}
\sum_{i=w_k}^{w_l-1} a_i = s_{{k,l}}, \quad \bra{k,l} \in E
\end{equation}
is satisfied.
We will say that two type graphs $G$ and $G'$ are equivalent if the events $\mathscr{A}_G(a,w)$ and $\mathscr{A}_{G'}(a,w)$ are equivalent (in particular, $K = K'$). We will call a type graph $G$ minimal if it minimises the ``total edge length'' (which also appears in the final condition defining graph types)
\begin{align}\label{EXC::eq:def-L}
L = \sum_{\bra{k,l} \in E} (l-k)
\end{align}
within its equivalence class. Clearly, each equivalence class contains a minimal element.
\begin{observation*}
If $G$ is a minimal type graph, then the underlying unlabelled graph $(K,E)$ is a union of disjoint paths.
\end{observation*}
\begin{proof}
First note that if $G$ is minimal, then there is no pair of edges $\bra{k,l}$, $\bra{m,l} \in E$ with equal endpoints.
Indeed, if there were such a pair, without loss of generality with $k < m$, then we could replace the edge $\bra{k,l}$ with $\bra{k,m}$ and put $s_{k,m} = s_{{k,l}} - s_{{m,l}}$. (Note that this decreases the total edge length $L$ from \eqref{EXC::eq:def-L} by at least $1$, and thus $s_{{k,l}} - s_{{m,l}}$ is a feasible label.) Hence, if $G$ is minimal then for each $l$, $\operatorname{in-deg}(l) \leq 1$.
By a similar argument, $\operatorname{out-deg}(k) \leq 1$ for each $k$. It follows that
$G$ (as an unlabelled graph) is a union of disjoint paths.
\end{proof}
\begin{observation*}
If $(u_k)_{k=1}^M \in \mathcal{U}$ and if $a$ is a permutation such that $s_k \in S_{u_k}(a) $ for each $k$, then there exists a minimal type graph $G$ and an increasing sequence $(w_k)_{k \in K}$ such that $\{w_i\}_{i \in I} = \{u_k\}_{k=1}^M$, $\abs{I} < M$ and $\mathscr{A}_G(a,w)$ holds.
\end{observation*}
\begin{proof}
By assumption, for each $k$ there is some $v_k$ such that $\sum_{i=u_k}^{v_k-1} a_i = s_{k}$. We may define an increasing sequence $(w_k)_{k=1}^N$ so that $\{w_k\}_{k=1}^N = \{u_k\}_{k=1}^M \cup \{v_k\}_{k=1}^M$ and put $K = [N]$. We next take $I = \{i \in K \ : \ (\exists k \in [M]) \ w_i = u_k \}$ and $J = K \setminus I$. Finally, we define $E$ by putting $\bra{i,j} \in E$ precisely when $(w_i,w_j) = (u_k,v_k)$ for some $k \in [M]$, and take $s_{{i,j}} := s_k$.
The type graph $G$ thus defined satisfies all of the imposed conditions, except possibly minimality. Replacing $G$ by a minimal type graph within its equivalence class, we obtain the claim.
\end{proof}
It follows from the above Observation that the sum in \eqref{EXC:eq:71} is at most:
\begin{equation}\label{EXC:eq:75}
\sum_{\substack{ G \text{ -- m.t.g.} \\ \abs{I} < M}}
\sum_{\substack{ (w_k)_{k \in K} \\ w_k \geq \varepsilon n}}
\mathbb{P}( \mathscr{A}_G(a, w))
\end{equation}
where the sum $\sum_G$ is taken over all minimal type graphs $G$ with $\abs{I} < M$, and the sum $\sum_{w}$ is taken over all choices of $(w_k)_{k\in K}$ with $\varepsilon n \leq w_k$ for all $k$.
Since there are $O_M(1)$ type graphs, to prove the lemma, it will suffice to show that for each minimal type graph $G$ with $\abs{I} < M$ we have
\begin{equation}\label{EXC:eq:76}
\sum_{\substack{ (w_k)_{k \in K} \\ w_k \geq \varepsilon n}}
\mathbb{P}( \mathscr{A}_G(a, w))
\leq C_M \frac{ \bra{ \frac{1}{\varepsilon} \log n}^M}{n},
\end{equation}
where the constant $C_M$ depends only on $M$.
Let $K_0$ be the set of $k \in K$ such that $\operatorname{in-deg}(k) = 0$, and $K_1 = K \setminus K_0$. If $\{w_k \ : \ k \in K_0\}$ is specified then for any permutation $a$ there is at most one choice of $\{w_k \ : \ k \in K_1\}$ such that $\mathscr{A}_G(a,w)$ holds. Hence, we may rewrite the probability in \eqref{EXC:eq:76} as:
\begin{equation}\label{EXC:eq:77}
\sum_{\substack{ (w_k)_{k \in K} \\ w_k \geq \varepsilon n}}
\mathbb{P}( \mathscr{A}_G(a, w)) = \sum_{\substack{(w_m)_{m \in K_0} \\ w_m \geq \varepsilon n }} \mathbb{P}( (\exists (w_m)_{m \in K_1}) \ : \ \mathscr{A}_G(a, w)).
\end{equation}
We will now select $a_i$ in the order of increasing $i$, starting at $i(0) = \floor{ \varepsilon n} $. Thus, at ``time'' $0 \leq t \leq n-i(0)$, we select $a_{i(t)}$, where $i(t) = i(0) + t$. We then select $a_{i}$ with $1 \leq i < i(0)$ at times $n - i(0) < t \leq n$, but these values will not play a significant role. To study the time evolution, we let $\mathcal{F}_t$ be the $\sigma$-algebra generated by $a_{i(t')}$ with $0 \leq t' \leq t$.
At time $t = 0$, the values $w_m$ with $m \in K_0$ are specified, but not the values $w_m$ with $m \in K_1$. If $w_k$ has been specified and $\bra{k,l} \in E$ then as $t$ increases, so will $\sum_{i=w_k}^{i(t)} a_{i}$. As long as $\sum_{i=w_k}^{i(t)} a_{i} \neq s_{{k,l}}$ we consider $w_l$ to be unspecified, but as soon as $\sum_{i=w_k}^{i(t)} a_{i} = s_{{k,l}}$ we put $w_l = i(t)+1$. If no such $t$ is found, then $w_l$ remains unspecified indefinitely. Note that whether or not $w_k$ is specified at time $t$ is an $\mathcal{F}_t$-measurable event.
At each time $t$, either there is a least value $l = l(t) \in K_1$ such that $w_{l}$ has not yet been specified, or else all $w_{l}$ have been specified, and we set $l(t) := \infty$. If $l(t) \neq \infty$, there is some $k = k(t)$, such that $\bra { {k(t)}, {l(t)}} \in E$ and hence $\mathscr{A}_G(a,w)$ imposes the condition $\sum_{i=w_{k(t)}}^{w_{l(t)}} a_i = s_{k(t),l(t)}$. If $l(t) = \infty$, set $k(t) = \infty$.
We introduce random variables $X_t,Y_t$, adapted to the filtration $\mathcal{F}_t$. The variable $X_t$ counts equations required by $\mathscr{A}_G(a,w)$ which have been satisfied:
$$
X_{t+1}=
\begin{cases}
1 & \text{ if $\sum_{i=w_{k(t)}}^{i(t+1)} a_i = s_{k(t),l(t)}$ ,} \\
0 & \text{ otherwise (including $k(t) = \infty$) }.
\end{cases}
$$
Note that the event $\mathscr{A}_G(a,w)$ is equivalent to $\sum_{t} X_t = \abs{K_1}$. The variable $Y_t$ counts potential opportunities for $X_{t+1}$ to take value $1$:
$$
Y_{t+1}=
\begin{cases}
1 & \text{ if $s_{k(t),l(t)}-\sum_{i=w_{k(t)}}^{i(t)} a_i \in [n]$,} \\
0 & \text{ otherwise (including $k(t) = \infty$)}.
\end{cases}
$$
Note that if $X_{t+1} = 1$ then $Y_{t+1} = 1$. More precisely, since $Y_{t+1}$ is $\mathcal{F}_t$-measurable, we have:
$$
\EE(X_{t+1} \ | \ \mathcal{F}_t) \leq \frac{Y_{t+1}}{\varepsilon n},
$$
since at most one of more than $\varepsilon n$ possible values of $a_{i(t+1)}$ guarantees that $X_{t+1} = 1$.
Let $C = C_M$ be a large constant. We always have the following bound:
\begin{align*}
\mathbb{P}\bra{ \sum_t X_t = \abs{K_1} }
&\leq \mathbb{P}\bra{ \sum_t X_t = \abs{K_1} \ \wedge \ \sum_t Y_t \leq C \log n }
\\& + \mathbb{P}\bra{ \sum_t Y_t > C \log n }
\end{align*}
(where the sums $\sum_t$ can be assumed to run over $0 \leq t \leq n-\varepsilon n$).
The first summand can be estimated using a union bound over the choice of the set of $t$ with $X_t = 1$:
\begin{align*}
\mathbb{P}\bra{ \sum_t X_t = \abs{K_1} \ \wedge \ \sum_t Y_t \leq C \log n } &\leq \binom{C \log n}{\abs{K_1}}\bra{ \frac{1}{\varepsilon n} }^{\abs{K_1}}
\\ &= O_M \bra{ { \frac{ \bra{ \frac{1}{\varepsilon}\log n}^{\abs{K_1}} }{n^{\abs{K_1}} }}}.
\end{align*}
For the second summand, if $C$ is sufficiently large (with respect to $M$), we have by a standard application of Hoeffding's inequality
$$
\mathbb{P}\bra{ \sum_t Y_t > C \log n } = O_M \bra{\frac{1}{n^{\abs{K_1}} } }.
$$
In total, the sum in \eqref{EXC:eq:77} is bounded from above by $O_M\bra{ \frac{\bra{ \frac{1}{\varepsilon}\log n}^{M}}{n^{\abs{K_1} - \abs{K_0}} }}$.
To finish the proof, it remains to verify that $\abs{K_1} > \abs{K_0}$. Recall that, being minimal, $G$ is a union of a certain number $P$ of paths of lengths $l_1,l_2,\dots,l_P \geq 2$ with startpoints in $I$ and endpoints in $J$. We have $\abs{K_0} = P \leq \abs{I}$ and $\abs{K_1} = \sum_{j=1}^P (l_j -1) = \abs{E} = M$. Since $M > \abs{I}$ by assumption, we are done.
\end{proof}
We now have all the tools necessary to compute the second moment of $\abs{S(a)}$. The argument is very similar to the one we used to compute $\EE \abs{S(a)}$. In places where the arguments are virtually identical, we give only the outline, and encourage the reader to look to Section \ref{section:EXC} for details.
\begin{proof}[Proof of Proposition \ref{HM:prop:E(S(a)^p)}]
Fix a choice of $\varepsilon$, for instance $\varepsilon(n) = \frac{1}{\log n}$. With a Riemann integral approximation argument, much as before, it will suffice to show that for any $s_1,s_2$ with $s_j = \sigma_j \frac{n^2}{2}$, $\varepsilon < \sigma_j < 1-\varepsilon$ one has
\begin{equation}\label{EXC:eq:61}
\mathbb{P}\bra{ s_1,s_2 \not \in S_\varepsilon(a) } = (1+o(1)) \prod_{j=1}^2 e^{-2+2\sigma_j},
\end{equation}
where the error term is uniform with respect to the choice of $s_1,s_2$. Once this is established, we also have
$$\mathbb{P}\bra{ s_1,s_2 \in S_\varepsilon(a) } = (1+o(1)) \prod_{j=1}^2 (1-e^{-2+2\sigma_j})$$
and hence
\begin{align*}
\EE\abs{S_\varepsilon(a)}^2 &= \sum_{s_1,s_2} \mathbb{P}\bra{ s_1,s_2 \in S_\varepsilon(a) } + o(n^4)
\\ &= \sum_{s_1,s_2} \prod_{j=1}^2 (1-e^{-2+ \frac{4 s_j}{n^2}}) + o(n^4)
\\ &= n^4 \int_{\varepsilon}^{1-\varepsilon} \prod_{j=1}^2 (1-e^{-2+2\sigma_j})d \sigma_1 d\sigma_2 + o(n^4)
= n^4(c^2 + o(1)),
\end{align*}
where the sums run over $\varepsilon \frac{n^2}{2} < s_1,s_2 < (1- \varepsilon) \frac{n^2}{2}$, and $c = \frac{1 + e^{-2}}{4}$. Finally, we note that $\abs{S(a)} = (1+o(1))\abs{S_\varepsilon(a)}$, as we have already shown in Section \ref{section:EXC}. Hence, it remains to prove \eqref{EXC:eq:61}.
We may rewrite the probability in \eqref{EXC:eq:61} using inclusion/exclusion as:
\begin{equation}\label{EXC:eq:62}
\sum_{M_1=0}^\infty (-1)^{M_1} \sum_{\abs{U_1} = M_1} \sum_{M_2=0}^\infty (-1)^{M_2} \sum_{\abs{U_2} = M_2} \mathbb{P}\bra{ \bigwedge_{j=1}^2 \bigwedge_{k=1}^{M_j} \bra{ s_j \in S_{u_{j,k}}(a) }},
\end{equation}
where the inner sums are taken over all choices of $U_{j} = \{u_{j,k}\}_{k=1}^{M_j} \subset [n]$ such that $\varepsilon n < u_{j,k} < (1-\varepsilon-\sigma_j) n$.
Let $N$ be a large even integer, and put $N_1 = N$ and
$$
N_2 = N_2(M_1) =
\begin{cases}
N & \text{ if } M_1\equiv 0 \pmod{2} \\
N+1 & \text{ if } M_1 \equiv 1 \pmod{2}.
\end{cases}
$$
Then the sum in \eqref{EXC:eq:62} is bounded from above by the truncated sum
\begin{equation}\label{EXC:eq:62a}
\sum_{M_1=0}^{N_1} (-1)^{M_1}
\sum_{M_2=0}^{N_2} (-1)^{M_2}
\sum_{\substack{ \abs{U_j} = M_j \\ j=1,2}}
\mathbb{P}\bra{ \bigwedge_{j=1}^2 \bigwedge_{k=1}^{M_j} \bra{ s_j \in S_{u_{j,k}}(a) }}.
\end{equation}
With the same definitions, for odd values of $N$ the expression above gives a lower bound. Hence, to find asymptotics for the sum \eqref{EXC:eq:62}, it will suffice to find asymptotics for each of the summands with $M_1,M_2$ fixed.
We now consider the innermost sum. Let $U = U_{1} \cup U_2$. Using Lemmas \ref{EXC:lem:kill-bad-U} and \ref{EXC:lem:kill-bad-U-2}, we may disregard the contribution $O\bra{\frac{m \bra{ \frac{1}{\varepsilon} \log n}^M}{n}} =o(1)$ from $U$ with $U_1 \cap U_2 \neq \emptyset$, and from $U$ which do not satisfy the regularity condition \eqref{EXC:cond:U-reg} for a suitable choice of the parameter $m = m(n)$. For remaining $U$, we have from Proposition \ref{EXC:prop:P-wedge-s-in-S} (or more precisely, the internal Lemma in its proof) that
\begin{equation}\label{EXC:eq:63}
\mathbb{P}\bra{ \bigwedge_{j=1}^2 \bigwedge_{k=1}^{M_j} \bra{ s_j \in S_{u_{j,k}}(a) }}
= (1+o(1))\bra{ \frac{2}{n} }^{M_1+M_2}.
\end{equation}
The number of choices of $(U_1,U_2)$ when $(M_1,M_2)$ are fixed is equal to $(1+o(1)) \prod_{j=1}^2 \frac{(1-\sigma_j)^{M_j}}{M_j!}n^{M_j}$. Thus the inner sum is asymptotically equal to:
\begin{equation}\label{EXC:eq:64}
\bra{ \frac{2}{n} }^{M_1+M_2} \prod_{j=1}^2 \frac{(1-\sigma_j)^{M_j}}{M_j!}n^{M_j}
= \prod_{j=1}^2 \frac{(2-2\sigma_j)^{M_j}}{M_j!}.
\end{equation}
Thus, for any large even integer $N$ the sum in \eqref{EXC:eq:62a} is bounded from above by:
\begin{equation}\label{EXC:eq:65}
\sum_{\substack{ M_j=0 \\ j = 1,2}}^{\infty} (-1)^{M_1+M_2}
\prod_{j=1}^2 \frac{(2-2\sigma_j)^{M_j}}{M_j!}
+ \frac{C}{N!} + o(1),
\end{equation}
where $C$ is an absolute constant. The sum in \eqref{EXC:eq:65} is simply the Taylor expansion of $\prod_{j=1}^2 e^{-2+2\sigma_j}$. Letting $N \to \infty$ slowly with $n$, and repeating the same argument for $N$ odd, we conclude that \eqref{EXC:eq:61} holds.
\end{proof}
\section{Lower bound}\label{section:EXT-low}
We will presently prove a lower bound on $\max_{a} \abs{S\bra{a}}$ as $a$ runs over permutations of $[n]$, which is slightly better than $\max_{a} \abs{S\bra{a}} \geq \EE_a \abs{S\bra{a}} = \bra{ \frac{1+e^{-2}}{4} +o(1)} n^2 $ (which in turn is slightly better than $\max_{a} \abs{S\bra{a}} \geq (\frac{1}{4}+o(1))n^2$ from Proposition \ref{I:prop:cexple-basic}).
Namely, we will show that $\max_{a} \abs{S\bra{a}} \geq (\frac{3}{2}-\frac{2}{\sqrt{e}} + o(1)) n^2$, thus proving Proposition \ref{I:prop:cexple-improved}. We accomplish this by introducing a randomised variant of the construction from Proposition \ref{I:prop:cexple-basic}.
We note that the numerical values of the two constants mentioned above are rather close: $\frac{3}{2}-\frac{2}{\sqrt{e}} = 0.286\dots$, while $\frac{1+e^{-2}}{4} = 0.283\dots$. Hence, the quality of the bound is hardly a reason for interest in the results of this section. Instead, we point out that our construction gives a fairly broad and explicit class of permutations with $\abs{S(a)}$ significantly above the expected value. In particular, the bound $\max_{a} \abs{S\bra{a}} \geq \EE_a \abs{S\bra{a}}$ is far from sharp.
\renewcommand{\l}{\lambda}
\begin{proof}[Proof of Proposition \ref{I:prop:cexple-improved}]\mbox{}\\
Let $a$ be a permutation of $[n]$ selected uniformly at random, subject to the condition that $a_{i} + a_{i+1} = n+1$ for each odd $i$. We will show that $\EE \abs{S(a)} \geq (c+o(1)) n^2$ where $c = \frac{3}{2}-\frac{2}{\sqrt{e}}$.
Let us fix a choice of $s = l\bra{n+1} + r$, where $l = \lambda \frac{n}{2}$ and $r = \rho n$ are integers and $0 \leq \lambda, \rho \leq 1$.
Our first goal is to estimate $\mathbb{P}\bra{s \in S(a)}$ for this fixed value of $s$.
It is clear that if $s = \sum_{i=u}^v a_i$ then $v-u+1 \in \{2l,2l+1,2l+2\}$. Let us enumerate all possible intervals of length $2l$, and let $\mathscr{A}_i$ be the event that $s$ is the sum of $a_j$ over the $i$-th interval, where $1 \leq i \leq n-2l = \bra{1-\l + o(1)}n$. Likewise, let $\mathscr{B}_j$ be the event that $s$ is the sum over the $j$-th interval of length $2l+2$, $1 \leq j \leq n-2l-2$. Finally, let $\mathscr{C}$ be the event that $s$ is the sum over some interval of length $2l+1$; there can be at most one such interval.
For each $i$ there are some $u,v$ (namely the endpoints of the relevant interval) such that $\mathscr{A}_i$ holds if and only if $r = a_u + a_v$. Hence $\mathbb{P}\bra{\mathscr{A}_i} = \frac{\rho}{n} + O\bra{\frac{1}{n^2}}$. Similarly, each $\mathscr{B}_j$ is equivalent to $r + \bra{n+1} = a_v + a_u$ for some $u,v$, and $\mathbb{P}\bra{\mathscr{B}_j} = \frac{1-\rho}{n} + O\bra{\frac{1}{n^2}}$. Likewise, $\mathscr{C}$ holds if and only if $r = a_u$ where either $u$ is even and $\leq n-2l$ or $u$ is odd and $\geq 2l$, and thus $\mathbb{P}\bra{\mathscr{C}} = 1-\l + O\bra{\frac{1}{n}}$.
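Before estimating probabilities it may help to see the construction concretely. The following Python sketch is ours and not part of the argument (the function names are hypothetical): it samples a permutation subject to $a_i + a_{i+1} = n+1$ for odd $i$, and counts $\abs{S(a)}$ by brute force, so that the ratio $\abs{S(a)}/n^2$ can be compared against $\frac{3}{2}-\frac{2}{\sqrt{e}} \approx 0.286$.

```python
import random

def constrained_permutation(n, rng):
    """Sample a permutation a of [1..n], uniformly subject to the condition
    a_i + a_{i+1} = n + 1 for each odd position i (1-based). Assumes n even."""
    pairs = [(x, n + 1 - x) for x in range(1, n // 2 + 1)]
    rng.shuffle(pairs)  # random order of the complementary pairs
    a = []
    for x, y in pairs:
        a += [x, y] if rng.random() < 0.5 else [y, x]  # random orientation
    return a

def num_interval_sums(a):
    """|S(a)|: the number of distinct sums a_u + ... + a_v over intervals."""
    sums = set()
    for u in range(len(a)):
        acc = 0
        for v in range(u, len(a)):
            acc += a[v]
            sums.add(acc)
    return len(sums)
```

For moderate $n$ this already exhibits a ratio well above the trivial $\frac14$, consistent with the estimate proved below.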
Moreover, we have the following asymptotic independence condition.
\begin{lemma*}
Let $k,l$ be integers. Then
\begin{equation}\label{EXT:eq:11}
\EEE_{\abs{I} = k} \EEE_{\abs{J}=l} \mathbb{P}\brabig{ \bigwedge_{i \in I} \mathscr{A}_i \wedge \bigwedge_{j \in J} \mathscr{B}_j } = \bra{\frac{\rho}{n}}^k \bra{\frac{1-\rho}{n}}^l + O\bra{\frac{1}{n^{k+l+1}}},
\end{equation}
where the expected values are taken over all possible choices of index sets of given size.
Likewise, we have
\begin{equation}\label{EXT:eq:12}
\EEE_{\abs{I} = k} \EEE_{\abs{J}=l} \mathbb{P}\brabig{ \bigwedge_{i \in I} \mathscr{A}_i \wedge \bigwedge_{j \in J} \mathscr{B}_j \wedge \mathscr{C} }
= \bra{\frac{\rho}{n}}^k \bra{\frac{1-\rho}{n}}^l \bra{1-\l} + O\bra{\frac{1}{n^{k+l+1}}}.
\end{equation}
Here, the implicit constants may depend on $k,l$.
\end{lemma*}
\begin{proof}
We only give the argument for the former equality, the latter being analogous. For given $I,J$ with $\abs{I} = k,\ \abs{J} = l$, let $\mathscr{E} = \mathscr{E}_{I,J} = \bigwedge_{i \in I} \mathscr{A}_i \wedge \bigwedge_{j \in J} \mathscr{B}_j$. Also, let $m = k+l$.
Recall that each event $\mathscr{A}_i$ is equivalent to $a_{u_i} + a_{v_i} = r$, and likewise $\mathscr{B}_j$ is equivalent to $a_{u_j'} + a_{v_j'} = r+n+1$. Because of the restriction $a_{i}+a_{i+1} = n+1$ for odd $i$, the event $\mathscr{E}$ is equivalent to a system of $m$ linear equations of the form
\begin{equation}\label{EXT:eq:01}
\sum_{j=1}^{2m} M_{ij} b_j = c_i \qquad (i \in [m]),
\end{equation}
where $M_{ij} \in \{0,-1,+1\}$ with $M_{ij} \neq 0$ for precisely $2$ values of $j$ for a fixed $i$, $b_j = a_{w_j}$ for some odd $w_j \in [n]$ and $c_{i} \in \{r,r+(n+1),r-(n+1),r-2(n+1)\}$. We may write \eqref{EXT:eq:01} more briefly as $Mb = c$.
We will need a uniform bound on $\mathbb{P}(\mathscr{E}_{I,J})$, valid for all choices of $I,J$ with $\abs{I} = k,\abs{J} = l$. For this purpose, we introduce an (undirected) graph $G = G_{I,J}$ with vertex set $[2m]$ such that the edge $(j_1, j_2)$ is present precisely when there exists a row $i$ such that $M_{i,j_1}, M_{i,j_2} \neq 0$.
Note that if $\mathbb{P}(\mathscr{E}_{I,J}) \neq 0$ then for any pair $j_1,j_2$ there can be at most one $i$ such that $M_{i,j_1}, M_{i,j_2} \neq 0$; else one could find a pair of intervals with endpoints differing by $1$ and both sums equal to $s$, which is impossible. Hence, $G$ has precisely $m$ edges. Moreover, it follows from the construction that $G$ is acyclic.
Hence, $G$ has $2m-m = m$ connected components (possibly some of which are singletons).
For each connected component of $G$, one value of $b_j$ may be chosen freely, and then \eqref{EXT:eq:01} determines the remaining ones. Hence,
$$\mathbb{P}(\mathscr{E}_{I,J}) = \mathbb{P}(Mb=c) = O\bra{\frac{1}{n^m}},$$ where the implicit constant depends at most on $m$.
Having obtained the universal upper bound above, we will restrict to suitably generic $I,J$. If $I, J$ are chosen randomly from all subsets of $[n]$ of suitable sizes $\abs{I} = k,\ \abs{J} =l$, then with probability $1 - O\bra{\frac{1}{n}}$ the corresponding graph $G$ consists of $m$ pairwise disjoint edges --- this happens precisely if no pair of the indices $u_i,v_i,u'_j,v'_j$ introduced above is equal, nor is any pair of the form $w,w+1$ with $w$ odd.
Hence, at the cost of introducing an error term $O\bra{\frac{1}{n^{m+1}}}$, we may restrict our attention to $I,J$ such that the corresponding graph $G$ is a union of $m$ connected components as described above.
We now argue for a fixed choice of $I,J$. Suppose that $\mathscr{D}$ is one of the events $\mathscr{A}_i,\mathscr{B}_j$, and let $u,v$ be the relevant coordinates $u_i,v_i$ or $u'_j,v'_j$ accordingly. For an integer $N$ (bounded in terms of $k,l$), let $d_1,\dots,d_N \in [n]$ and $i_{1},\dots,i_{N} \in [n]$ be arbitrary. Then
\begin{equation}\label{EXT:eq:02a}
\mathbb{P}\bra{\mathscr{D} \mid a_{i_1}=d_1,\dots,a_{i_N} = d_N} = \mathbb{P}\bra{\mathscr{D}} + O\bra{\frac{1}{n^2}},
\end{equation}
where the implicit constant depends only on $N$, provided that the event we condition on above has non-zero probability and that $i_1,\dots,i_N$ differ from $u,v$ by at least $2$. By a standard argument, the above equality \eqref{EXT:eq:02a} implies that for any $I' \subset I,\ J' \subset J$ we have
\begin{equation}\label{EXT:eq:02b}
\mathbb{P}\bra{\mathscr{D} \mid \bigwedge_{i \in I'} \mathscr{A}_i \wedge \bigwedge_{j \in J'} \mathscr{B}_j } = \mathbb{P}\bra{\mathscr{D}} + O\bra{\frac{1}{n^2}}.
\end{equation}
Ordering $\mathscr{A}_i,\mathscr{B}_j$ as $\mathscr{D}_i$ with $i \in K$, putting $\mathscr{E}_j' = \bigwedge_{i < j} \mathscr{D}_i$ and using $\mathbb{P}(\mathscr{D}_j) = O\bra{\frac 1n}$ we find
\begin{align*}\label{EXT:eq:02c}
\mathbb{P}\bra{\mathscr{E} } &= \prod_{j \in K} \mathbb{P}\bra{\mathscr{D}_j \mid \mathscr{E}_j'} = \prod_{j \in K} \bra{ \mathbb{P}\bra{\mathscr{D}_j} + O\bra{\frac{1}{n^2}}}
\\ &= \prod_{j \in K} \mathbb{P}\bra{\mathscr{D}_j} + O\bra{\frac{1}{n^{m+1}}}.
\end{align*}
This is precisely the stated bound.
\end{proof}
We may now estimate $\mathbb{P}\bra{s \not \in S(a)}$ from the inclusion-exclusion formula, truncated at level $N$, where $N$ is a large even integer:
\begin{align*}
\mathbb{P}\bra{s \not \in S(a)} &
\leq \sum_{k+l\leq N}\sum_{\abs{I} = k} \sum_{\abs{J} = l} \bra{-1}^{k+l} \mathbb{P}\bra{\bigwedge_{i \in I} \mathscr{A}_i \wedge \bigwedge_{j \in J} \mathscr{B}_j}
\\& -
\sum_{k+l < N}\sum_{\abs{I} = k} \sum_{\abs{J} = l} \bra{-1}^{k+l} \mathbb{P}\bra{\bigwedge_{i \in I} \mathscr{A}_i \wedge \bigwedge_{j \in J} \mathscr{B}_j \wedge \mathscr{C}}.
\end{align*}
Substituting bounds from the Lemma above we find:
\begin{align*}
\mathbb{P}\bra{s \not \in S(a)} &
\leq \sum_{k+l\leq N} \bra{-1}^{k+l} \bra{ \frac{\rho}{n}}^k
\bra{ \frac{1-\rho}{n}}^l \frac{\bra{\frac{1-\lambda}{2}n}^{k+l} }{k! \ l!}
\\ &-\sum_{k+l < N} \bra{-1}^{k+l} \bra{ \frac{\rho}{n}}^k
\bra{ \frac{1-\rho}{n}}^l \frac{\bra{\frac{1-\lambda}{2}n}^{k+l} }{k! \ l!} \bra{1-\lambda} + O_N\bra{\frac{1}{n}}
\\& = \sum_{m \leq N} \bra{-1}^{m} \frac{\bra{\frac{1-\lambda}{2}}^{m} }{ m!} \lambda
+ O\bra{\frac{1}{N!}} + O_N\bra{\frac{1}{n}}
\\& = e^{-\frac{1-\lambda}2}\lambda + O\bra{\frac{1}{N!}} + O_N\bra{\frac{1}{n}},
\end{align*}
where we use the notation $O_N\bra{\cdot}$ to signify that the implicit constant is allowed to depend on $N$. Letting $N \to \infty$ slowly with $n$, we conclude that:
$$
\mathbb{P}\bra{s \not \in S(a)} \leq e^{-\frac{1-\lambda}{2}}\lambda + o(1),
$$
with the error term uniform with respect to the choice of $s$. It follows that:
\begin{align*}
\EE \abs{S(a)}
& \geq n^2\frac{1+o(1)}{2}\int_0^{1} \int_0^{1} \bra{1 - e^{-\frac{1-\lambda}{2}}\lambda} d\lambda d\rho
\\ &= n^2 \bra{\frac{3}{2}-\frac{2}{\sqrt{e}} - o\bra{1} }.
\end{align*}
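For the reader's convenience, the evaluation of the double integral above is a routine computation (added here for completeness): one integration by parts gives
$$
\int_0^{1} e^{-\frac{1-\lambda}{2}}\lambda \, d\lambda
= e^{-\frac{1}{2}}\Big[ 2\lambda e^{\frac{\lambda}{2}} - 4 e^{\frac{\lambda}{2}} \Big]_0^1
= 4e^{-\frac{1}{2}} - 2,
$$
so that
$$
\frac{1}{2}\int_0^1\int_0^1 \bra{1 - e^{-\frac{1-\lambda}{2}}\lambda}\, d\lambda \, d\rho
= \frac{1}{2}\bra{3 - 4e^{-\frac{1}{2}}} = \frac{3}{2} - \frac{2}{\sqrt{e}}.
$$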
A symmetric argument yields the matching upper bound $\EE \abs{S(a)} \leq n^2 \bra{\frac{3}{2}-\frac{2}{\sqrt{e}} + o\bra{1} }$. Since $\max_{a} \abs{S(a)} \geq \EE \abs{S(a)}$, this finishes the proof.
\end{proof}
\section{Upper bound}\label{section:EXT-up}
At this stage it seems that the distinct sums $\sum_{i=u}^v a_i$ tend to be rather numerous. The trivial upper bound on their quantity is $\binom{n+1}{2}$, which happens to be both the upper bound for any single sum and the number of distinct intervals.
It is natural to ask if this bound is sharp, and it turns out that it is not. In this section we obtain a slight improvement, namely
\begin{equation}
\abs{S(a)} \leq \bra{\frac 14 + \frac{\pi}{16} + o(1)}{n^2},
\label{EXTup::eq:L-def}
\end{equation}
thus proving Proposition \ref{I:prop:upper-bound}.
We will consider the set of sums which are above average value. Define:
\begin{equation}
L(a) := \bbra{ (u,v) \in [n]^2 \ : \ u \leq v, \ \sum_{i=u}^v a_i \geq \frac{1}{2} \binom{n+1}{2} }.
\label{EXTup:eq:La-def}
\end{equation}
Our main idea is to show that $L(a)$ can never be too large. The following proposition easily implies Proposition \ref{I:prop:upper-bound}.
\begin{proposition}\label{EXTup:prop:L(a)-bound}
Let $a$ be a permutation of $[n]$, and let $L(a)$ be defined as above. Then:
$$
\abs{L(a)} \leq \bra{\frac{\pi}{16} + o(1)} n^2.
$$
\end{proposition}
\begin{remark*}
The constant $\frac{\pi}{16}$ cannot be improved, as shown by the ``tent map'' permutation:
$$
a_i =
\begin{cases}
2 i, & i \leq \frac{n}{2} \\
2(n-i) + 1, & i > \frac{n}{2}.
\end{cases}
$$
This is essentially the only possible example, as will become clear in the course of the proof. Note, however, that for this particular permutation we have $\abs{S(a)} = o(n^2)$ for similar reasons as for the trivial permutation; see also Proposition \ref{LOW:prop:small-S}. We believe that the bound in Proposition \ref{I:prop:upper-bound} is not sharp.
\end{remark*}
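As a quick numerical sanity check of the remark, one can count $\abs{L(a)}$ for the tent map directly. The sketch below is ours and not part of the paper (it assumes $n$ even, so that the formula defines a permutation of $[n]$); the ratio $\abs{L(a)}/n^2$ approaches $\frac{\pi}{16} \approx 0.19635$ as $n$ grows.

```python
from math import pi

def tent_permutation(n):
    """The ``tent map'' permutation: a_i = 2i for i <= n/2 and
    a_i = 2(n - i) + 1 for i > n/2 (1-based indices, n even)."""
    return [2 * i if i <= n // 2 else 2 * (n - i) + 1 for i in range(1, n + 1)]

def count_L(a):
    """|L(a)|: the number of pairs u <= v whose block sum a_u + ... + a_v
    is at least one half of 1 + 2 + ... + n."""
    n = len(a)
    threshold = n * (n + 1) / 4
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    return sum(1
               for u in range(n)
               for v in range(u, n)
               if prefix[v + 1] - prefix[u] >= threshold)
```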
\begin{proof}[Proof of Proposition \ref{I:prop:upper-bound} assuming Proposition \ref{EXTup:prop:L(a)-bound}]
Partitioning $S(a)$ into two parts at $\frac{1}{2} \binom{n+1}{2}$ we easily find that:
\begin{equation*}
\abs{S(a)} \leq \frac{1}{2} \binom{n+1}{2} + \abs{L(a)} \leq\bra{ \frac{1}{4} + \frac{\pi}{16} + o(1)}n^2.
\end{equation*}
This is precisely the sought bound.
\end{proof}
We will devote the remainder of this section to proving Proposition \ref{EXTup:prop:L(a)-bound}.
To begin with, we reduce the problem to the case when the permutation can be partitioned into two monotone parts. This step restricts the sample space significantly, and will be crucial to ensure that the continuous version of the problem has enough compactness for the maximum to be attained.
\begin{observation}\label{EXTup:obs:perm-mono}
Let $a$ be a permutation of $[n]$. Then there exist a permutation $a'$ with $\abs{L(a)} \leq \abs{L(a')}$ and an index $k$ such that $a_1',\dots,a_k'$ is increasing and $a_k',a_{k+1}',\dots, a_n'$ is decreasing.
\end{observation}
\begin{proof}
Choose $k$ so that $\sum_{i=1}^{k} a_i \geq \frac{1}{2} \binom{n+1}{2}$ and $\sum_{i=k}^{n} a_i \geq \frac{1}{2} \binom{n+1}{2}$.
Consider the permutation $a'$ obtained from $a$ by sorting $a_1,\dots,a_k$ in increasing order and $a_{k+1},\dots,a_n$ in decreasing order. More precisely, let $a'$ be such that $\{ a_i' \ : \ 1 \leq i \leq k\} = \{ a_i \ : \ 1 \leq i \leq k\}$, $\{ a_i' \ : \ k < i \leq n\} = \{ a_i \ : \ k < i \leq n\}$, $a_1' < \dots < a_k'$ and $a_{k+1}' > \dots > a_n'$. Clearly, $a'$ has the required monotonicity property.
We claim that $L(a) \subseteq L(a')$. Suppose that $(u,v) \in L(a)$. By the choice of $k$, we have $u \leq k \leq v$. Thus,
$$
\frac{1}{2} \binom{n+1}{2} \leq \sum_{i=u}^v a_i
= \sum_{i=u}^k a_i + \sum_{i=k+1}^v a_i
\leq \sum_{i=u}^k a_i' + \sum_{i=k+1}^v a_i' = \sum_{i=u}^v a_i'.
$$
Thus, $(u,v) \in L(a')$, as desired.
\end{proof}
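The sorting step above is easy to test mechanically. The Python sketch below is ours (the helper names are hypothetical, not from the paper): it chooses $k$ as the smallest index with $\sum_{i \leq k} a_i \geq \frac{1}{2}\binom{n+1}{2}$ (by minimality the complementary inequality $\sum_{i \geq k} a_i \geq \frac{1}{2}\binom{n+1}{2}$ then holds automatically), performs the rearrangement, and verifies the inclusion $L(a) \subseteq L(a')$ on random permutations.

```python
import random

def monotone_rearrangement(a):
    """Sort a[:k] increasingly and a[k:] decreasingly, where k is the smallest
    index with sum(a[:k]) >= half of sum(a); as in the Observation, L only grows."""
    total, s, k = sum(a), 0, 0
    while 2 * s < total:
        s += a[k]
        k += 1
    return sorted(a[:k]) + sorted(a[k:], reverse=True)

def L_set(a):
    """L(a) as a set of 0-based index pairs (u, v), u <= v; assumes a is a
    permutation of [1..n], so the total sum is n(n+1)/2."""
    n = len(a)
    threshold = n * (n + 1) / 4
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    return {(u, v)
            for u in range(n)
            for v in range(u, n)
            if prefix[v + 1] - prefix[u] >= threshold}
```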
We are now ready to introduce the continuous variant. The analogue of the space of all permutations of $[n]$ obeying the monotonicity condition in Observation \ref{EXTup:obs:perm-mono} is the family ${ \mathcal{F}_{\mathrm{mon}} }$ of measurable functions $f\colon [0,1] \to [0,1]$ obeying the following conditions:
\begin{enumerate}
\item\label{EXTup:cond:01} for any measurable $E \subset [0,1]$, $\int_{E} f(x)dx \geq \frac{\abs{E}^2}{2}$, and $\int_0^1 f(x) dx = \frac{1}{2}$,
\item\label{EXTup:cond:02} there exists $\kappa=\kappa_f$ such that $\int_{0}^\kappa f(x) dx = \int_{\kappa}^1 f(x) dx = \frac{1}{4}$, and moreover $f|_{[0,\kappa]}$ is increasing and $f|_{[\kappa,1]}$ is decreasing.
\end{enumerate}
(Here and elsewhere, if $E \subset \mathbb{R}$ is measurable then $\abs{E}$ denotes the Lebesgue measure of $E$.) Note that for a permutation $a$ and any index set $I$ we have $\sum_{i \in I} a_i \geq \binom{\abs{I}+1}{2}$, in analogy to condition \eqref{EXTup:cond:01}.
We will also occasionally need to use the larger family $\mathcal{F}$ of functions $f\colon [0,1] \to [0,1]$ which only satisfy the condition \eqref{EXTup:cond:01} but not necessarily \eqref{EXTup:cond:02}. We note in passing that $\mathcal{F}$ is convex, and both ${ \mathcal{F}_{\mathrm{mon}} }$ and $\mathcal{F}$ are closed in the $L^1$ topology.
We also need a continuous analogue of $L(a)$ from \eqref{EXTup:eq:La-def}. For any $f \in \mathcal{F}$ we define:
\begin{align}
L(f) &:= \bbra{(x,y) \in [0,1]^2 \ : \ x \leq y,\ \int_x^y f(t) dt \geq \frac{1}{4} }
\end{align}
The continuous analogue of Proposition \ref{EXTup:prop:L(a)-bound} is the following statement.
\begin{proposition}\label{EXTup:prop:L(f)-bound}
Suppose that $f \in { \mathcal{F}_{\mathrm{mon}} }$. Then $\abs{L(f)} \leq \frac{\pi}{16}$.
\end{proposition}
This bound is sharp. The (unique) function $f$ with $\Lambda(f) = \frac{\pi}{16}$ will turn out to be the ``tent map'':
$$
f(x) =
\begin{cases}
2x & \text{ if } x \leq \frac{1}{2}\\
2(1-x) & \text{ if } x \geq \frac{1}{2}.
\end{cases}
$$
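To see where the value $\frac{\pi}{16}$ comes from, we record a short computation (ours, for the reader's convenience). For this $f$ and $x \leq \frac{1}{2} \leq y$ we have
$$
\int_x^y f(t)\, dt = \bra{\frac{1}{4} - x^2} + \bra{\frac{1}{4} - (1-y)^2} = \frac{1}{2} - x^2 - (1-y)^2,
$$
so, up to a set of measure zero,
$$
L(f) = \bbra{ (x,y) \ : \ x \leq \tfrac{1}{2} \leq y, \ x^2 + (1-y)^2 \leq \tfrac{1}{4} },
$$
which is a quarter-disc of radius $\frac{1}{2}$ and hence has area $\frac{1}{4} \cdot \pi \bra{\frac{1}{2}}^2 = \frac{\pi}{16}$.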
We defer the proof of Proposition \ref{EXTup:prop:L(f)-bound}. Our immediate goal is to deduce Proposition \ref{EXTup:prop:L(a)-bound} from Proposition \ref{EXTup:prop:L(f)-bound}. Before we do that, we make some preliminary observations which will be useful in the course of the proof, as well as in the main body of the argument. For $f \in \mathcal{F}$, define
\begin{align}
v_f(u) & := \sup\left\{ v \in [0,1] \ : \ \int_{u}^v f(x) dx \leq \frac{1}{4} \right\}, \\
u_f(v) & := \inf\left\{ u \in [0,1] \ : \ \int_{u}^v f(x) dx \leq \frac{1}{4} \right\} , \\
\Lambda(f) &:= \abs{L(f)}.
\end{align}
\begin{observation}\label{EXTup:obs:prelim}
With the definitions as above, the following are true.
\begin{enumerate}
\item The maps $\mathcal{F} \ni f \mapsto v_f \in L^1([0,1])$ and $f \mapsto u_f$ are well defined and continuous, where on $\mathcal{F}$ we take the $L^1$ topology.
\item For any $u_0,v_0$ with $\int_{u_0}^{v_0} f(x) dx = \frac{1}{4}$ we have
$$ \Lambda(f) = \int_{0}^{u_0} (1-v_f(x)) dx + \int_{v_0}^1 u_f(x)dx- u_0 (1-v_0).$$
\item In particular, we have the formulas $$\Lambda(f) = \int_{0}^1 (1-v_f(u))du = \int_{0}^1 u_f(v) dv.$$
\item The map $\mathcal{F} \ni f \mapsto \Lambda(f) \in \mathbb{R}$ is continuous.
\item If $f \in { \mathcal{F}_{\mathrm{mon}} }$, then the set $L(f)$ is convex.
\end{enumerate}
\end{observation}
\begin{proof}
The integral formula for $\Lambda(f)$ follows from partitioning $L(f)$ into three parts: $ L_- = L(f) \cap \{ u < u_0,\ v< v_0 \}$, ${L}_+ = {L}(f) \cap \{ u > u_0,\ v > v_0 \}$ and ${L}_0 = {L}(f) \cap \{ u \leq u_0,\ v \geq v_0 \} = [0,u_0] \times [v_0,1]$.
Continuity of $f \mapsto \Lambda(f)$ is clear once the previous points are established. It remains to prove continuity of $f \mapsto v_f$ (the argument for $f \mapsto u_f$ being analogous).
Take any $f,f_n \in \mathcal{F}$ with $f_n \to f$ in $L^1$. Fix $u$ and let $v = v_f(u)$. Let us suppose that $v < 1$, since the case $v = 1$ is easier. For any $\delta > 0$ we have
$$
\int_u^{v-\delta} f(x) dx + \frac{1}{4} \delta^2 <
\int_u^{v} f(x) dx = \frac{1}{4} < \int_u^{v+\delta} f(x) dx - \frac{1}{4} \delta^2
$$
Thus, for $n > n_0(\delta)$ we have:
$$
\int_u^{v-\delta} f_n(x) dx + \frac{1}{8} \delta^2 < \frac{1}{4} < \int_u^{v+\delta} f_n(x) dx - \frac{1}{8} \delta^2.
$$
Consequently, $v-\delta < v_{f_n}(u) < v + \delta$. Taking $\delta \to 0$ we conclude that $v_{f_n}(u) \to v$ as $n \to \infty$. Hence, $v_{f_n} \to v_f$ pointwise. Since all relevant functions are bounded, $v_{f_n} \to v_f$ in $L^1$.
Finally, we prove convexity of $L(f)$. Suppose that $(u_1,v_1), (u_2,v_2) \in L(f)$ and $u = \frac{u_1+u_2}{2}, v= \frac{v_1+v_2}{2}$. We may without loss of generality assume that $u_1 < u_2 < \kappa_f < v_1 < v_2$, and that $\int_{u_1}^{v_1}f(x)dx = \int_{u_2}^{v_2}f(x)dx = \frac{1}{4}$. We then have
$$
\int_{u_1}^{u_2}f(x)dx = \int_{v_1}^{v_2}f(x)dx =: I,
$$
and because of monotonicity $\int_{u_1}^{u}f(x)dx \leq \frac{I}{2} \leq \int_{u}^{u_2}f(x)dx$ and $\int_{v_1}^{v}f(x)dx \geq \frac{I}{2} \geq \int_{v}^{v_2}f(x)dx$. Hence,
$$ \int_{u}^{v}f(x)dx = \int_{u_1}^{v_1}f(x)dx - \int_{u_1}^{u}f(x)dx + \int_{v_1}^{v}f(x)dx \geq \frac{1}{4},$$
and consequently $(u,v) \in L(f)$. Since $L(f)$ is closed, this proves convexity.
\end{proof}
\begin{proof}[Proof of Proposition \ref{EXTup:prop:L(a)-bound} assuming Proposition \ref{EXTup:prop:L(f)-bound}]
For each $n$, let $a^{(n)}$ be a permutation of $[n]$ which maximizes $\abs{L(a^{(n)})}$.
We may assume without loss of generality that $a^{(n)}_i$ are increasing for $i=1,\dots,k^{(n)}$ and decreasing for $i = k^{(n)},\dots,n$.
To the permutation $a^{(n)}$ we may associate a function $f_n\colon[0,1] \to \mathbb{R}$ defined by
\begin{equation}
f_n\bra{\frac{i + t}{n}} = \frac{a_i^{(n)}}{n+1},\quad \text{ for } i \in [n],\ t \in [-1,0),
\end{equation}
(and for completeness, put $f_n(1) := \frac{a_n^{(n)}}{n+1}$).
We have the obvious bounds $0 \leq f_n(x) \leq 1$, as well as the formula $$\sum_{i=u+1}^{v} a_i^{(n)} = n(n+1) { \int_{u/n}^{v/n} f_n(x) dx}$$ for integers $0 \leq u \leq v \leq n$. In particular, $\int_0^1 f_n(x) dx = \frac{1}{2}$. It is not difficult to see that for any measurable $E$ with $\abs{E} = \frac{m + \mu}{n}$, where $m \in \mathbb{N}$ and $\mu \in [0,1)$, we have
$$\int_E f_n(x) dx \geq \frac{1 + 2 + \dots + m + (m+1)\mu }{n(n+1)} = \frac{(m+1)(m+2\mu)}{2n(n+1)} \geq \frac{\abs{E}^2}{2},$$
where the last inequality can be checked by elementary means. It is also clear from the construction that $f_n$ is increasing on $[0,\frac{k}{n}]$ and decreasing on $[\frac{k-1}{n},1]$, where $k = k^{(n)}$.
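The elementary check alluded to above can be carried out as follows (our computation). Since $\frac{1 + 2 + \dots + m + (m+1)\mu}{n(n+1)} = \frac{(m+1)(m+2\mu)}{2n(n+1)}$ and $\abs{E} = \frac{m+\mu}{n} \leq 1$, it suffices to note that
$$
(m+1)(m+2\mu) - (m+\mu)^2 = m + 2\mu - \mu^2 \geq m + \mu,
$$
whence
$$
\frac{(m+1)(m+2\mu)}{2n(n+1)} \geq \frac{(m+\mu)^2 + (m+\mu)}{2n(n+1)} \geq \frac{(m+\mu)^2}{2n^2} = \frac{\abs{E}^2}{2},
$$
where the final inequality is equivalent to $m + \mu \leq n$.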
Hence, $f_n \in { \mathcal{F}_{\mathrm{mon}} }$. By Proposition \ref{EXTup:prop:L(f)-bound}, $\Lambda(f_n) \leq \frac{\pi}{16}$. It remains to relate $\Lambda(f_n)$ to $\abs{L(a^{(n)})}$. For any $u,v$, if $(u,v) \in L(a^{(n)})$ then the square $[\frac{u-2}{n},\frac{u-1}{n}] \times [\frac{v}{n},\frac{v+1}{n}]$ is contained in ${L}(f_n)$, and conversely if $(u,v) \not\in L(a^{(n)})$ then the square $[\frac{u+1}{n},\frac{u+2}{n}] \times [\frac{v-1}{n},\frac{v}{n}]$ is contained in ${L}(f_n)^c$. Together with the observation that $L(f_n)$ is convex,
this shows that $\frac{\abs{L(a^{(n)})}}{n^2} - \Lambda(f_n) = O(\frac 1n)$.
Thus, for every permutation $a$ of $[n]$, $\abs{L(a)} \leq \abs{L(a^{(n)})} = \Lambda(f_n) n^2 + O(n) \leq \bra{\frac{\pi}{16} + o(1)}n^2,$ which finishes the proof.
\end{proof}
The rest of this section will be devoted to proving Proposition \ref{EXTup:prop:L(f)-bound}. Our first step in that direction is to show that the supremum $\sup_{g \in \mathcal{F}} \Lambda(g)$ is realised by some $f \in { \mathcal{F}_{\mathrm{mon}} }$, where it is convenient to allow $g$ to range over the larger family $\mathcal{F}$ to simplify perturbation arguments later.
\begin{observation}
There exists $f \in { \mathcal{F}_{\mathrm{mon}} }$ such that $\Lambda(f) = \sup_{g \in \mathcal{F}} \Lambda(g)$.
\end{observation}
\begin{proof}
For any $g \in \mathcal{F}$, there exists $\tilde g \in { \mathcal{F}_{\mathrm{mon}} }$ such that $\Lambda(\tilde g) \geq \Lambda(g)$. This follows from an argument essentially equivalent to the one in Observation \ref{EXTup:obs:perm-mono}.
Hence, it will suffice to show that the supremum $\sup_{g \in { \mathcal{F}_{\mathrm{mon}} }} \Lambda(g)$ is realised by some $f$, which in turn will follow once we show that ${ \mathcal{F}_{\mathrm{mon}} }$ is compact.
Compactness of ${ \mathcal{F}_{\mathrm{mon}} }$ is a direct consequence of the classical Helly selection theorem, see e.g. \cite{Brunk-1956} for details. For a direct proof, consider any sequence $f_n \in { \mathcal{F}_{\mathrm{mon}} }$. Passing to a subsequence, we may assume that $f_n$ converges pointwise on $\mathbb{Q} \cap [0,1]$. By monotonicity, $f_n$ converges pointwise a.e. to some function $f$. Thus, by dominated convergence, $f_n$ converges to $f$ in $L^1$. It is clear that $f \in { \mathcal{F}_{\mathrm{mon}} }$.
\end{proof}
Once we know that there is $f \in { \mathcal{F}_{\mathrm{mon}} }$ which maximises $\Lambda$, we may study such $f$ more closely. It comes as no surprise that the behaviour of $\Lambda(f)$ under small distortions is relevant.
\begin{lemma}\label{EXTup:lem:delta-Lambda}
Let $f \in \mathcal{F}$, and suppose that $h \in L^\infty([0,1])$ is such that $f+\tau h \in \mathcal{F}$ for sufficiently small $\tau > 0$.
For $\tau > 0$, denote:
\begin{align}
\Delta_\tau \Lambda(f) &:= \Lambda\bra{f+\tau h} - \Lambda\bra{f},\\
\delta_h \Lambda(f) &:= \lim_{\tau \to 0} \frac{1}{\tau} \Delta_\tau \Lambda(f).
\end{align}
We then have the formula:
\begin{align*}
\delta_h \Lambda(f) = \int_{0}^1 h(x) w(x) dx, \ \text{where } \
w(x) :=
\begin{cases}
\int_{0}^x \frac{du}{f(v(u))} & \text{ if } x \leq \kappa, \\
\int_{x}^1 \frac{dv}{f(u(v))} & \text{ if } x \geq \kappa,
\end{cases}
\end{align*}
where $\kappa=\kappa_f$ is such that $\int_{0}^\kappa f(x) dx = \int_{\kappa}^1 f(x) dx = \frac{1}{4}$.
\end{lemma}
\begin{proof}
We may assume without loss of generality that $\norm{h}_\infty \leq 1$.
Following the convention suggested above, for $\tau > 0$ we denote:
\begin{align*}
(\Delta_\tau u_f)(x) &:= u_{f+\tau h}(x) - u_{f}(x),\
&(\Delta_\tau v_f)(x) &:= v_{f+\tau h}(x) - v_{f}(x),
\\
(\delta_h u_f)(x) &:= \lim_{\tau \to 0} \frac{1}{\tau} (\Delta_\tau u_f)(x),
&(\delta_h v_f)(x) &:= \lim_{\tau \to 0} \frac{1}{\tau} (\Delta_\tau v_f)(x).
\end{align*}
Since $f$ is fixed, we will suppress dependence on $f$, writing $\Delta_\tau u$, $\Delta_\tau v$ and $\delta_h u$ and $\delta_h v$ whenever ambiguity does not arise.
We have a trivial estimate $\abs{\Delta_\tau v (u)} \leq 2 \sqrt{ \tau}$ for each $u$, which follows directly from the chain of inequalities
$$\tau \norm{h}_{1} \geq \abs{
\int_{v(u)}^{v(u)+\Delta_\tau v(u)} (f+\tau h)(x) dx }
\geq \frac{(\Delta_\tau v(u))^2}{2} - \tau \norm{h}_{1}.
$$
$$
(Here and elsewhere, we use the convention that $\int_a^b \equiv -\int_b^a$ if $a > b$).
In the same way, we have $\abs{\Delta_\tau u (v)} \leq 2 \sqrt{ \tau}$ for each $v$.
Now, fix $\mu < \kappa$. If $\tau$ is small enough, then for $0 \leq u \leq \mu$ we have that $v(u) + \Delta_\tau v(u) < 1$. For $u < \mu$ we have, slightly more precisely than above:
\begin{align*}
\frac{1}{4} &= \int_{u}^{v(u) + \Delta_\tau v(u)} (f + \tau h)(x) dx
\\ &=
\frac{1}{4} + \tau \int_u^{v(u)} h(x) dx +
\int_{v(u)}^{v(u) + \Delta_\tau v(u)} f(x) dx + O\bra{ \tau^{3/2}}.
\end{align*}
Hence
\begin{align*}
\frac{1}{\Delta_\tau v(u)} \int_{v(u)}^{v(u) + \Delta_\tau v(u)} f(x) dx
&= - \frac{\tau}{\Delta_\tau v(u)} \bra{
\int_u^{v(u)} h(x) dx + O\bra{\sqrt{\tau}}
}.
\end{align*}
For a.e. $u$, the expression on the left hand side tends to $f(v(u))$ as $\tau \to 0$, by the Lebesgue density theorem. Hence, for a.e. $u$, $\delta_h v(u)$ is well defined and:
\begin{align}
\delta_h v(u) = \lim_{\tau \to 0} \frac{\Delta_\tau v(u)}{\tau} = -\frac{\int_u^{v(u)} h(x) dx }{f(v(u))}. \label{EXTup:eq:54}
\end{align}
By a symmetric argument, if we fix $\kappa < \nu \leq 1$ then for a.e. $v > \nu$ we have:
\begin{align}
\delta_h u(v) = \frac{\int_{u(v)}^{v} h(x) dx }{f(u(v))}. \label{EXTup:eq:55}
\end{align}
Fix $0 < \mu < \kappa $ and put $\nu = v(\mu)$. Using Observation \ref{EXTup:obs:prelim} we have:
$$
\Lambda(f) = \int_0^{\mu} (1-v(u)) du + \int_{\nu}^1 u(v) dv - \mu (1-\nu).
$$
Note that for any $\varepsilon > 0$ and any sufficiently small $\tau$ we have $\abs{\Delta_\tau v(u)} \leq \frac{2\tau \norm{h}_1}{ f( \nu + \varepsilon)}$ for $u \leq \mu$, and hence $\frac{ \Delta_\tau v(u)}{\tau} $ is uniformly bounded. Likewise, $\frac{ \Delta_\tau u(v)}{\tau} $ is uniformly bounded for $v \geq \nu$. We may now compute:
\begin{align*}
\delta_h \Lambda(f) &= \lim_{\tau \to 0} \frac 1\tau \Delta_\tau \Lambda(f)
\\&= \lim_{\tau \to 0}
\int_0^{\mu} - \frac{ \Delta_\tau v(u)}{\tau} du + \int_{\nu}^1 \frac{ \Delta_\tau u(v)}{\tau} dv + \mu \frac{ \Delta_\tau v(\mu) }{\tau} + O(\tau)
\\&= -\int_0^{\mu} \delta_h v(u) du + \int_{\nu}^1 \delta_h u(v) dv + \mu \delta_h v(\mu)
\end{align*}
where the last equality uses the dominated convergence theorem. Note that this holds for any $0 < \mu < \kappa$. Passing to the limit $\mu \to 0$ or $\mu \to \kappa$ we find simpler expressions:
\begin{align}
\delta_h \Lambda(f)
&= -\int_0^{\kappa} \delta_h v(u) du = \int_{\kappa}^1 \delta_h u(v) dv \label{EXTup:eq:56}
\end{align}
Inserting \eqref{EXTup:eq:54} into the first equation of \eqref{EXTup:eq:56} and exchanging the order of integration, we now find:
\begin{align*}
\delta_h \Lambda(f) &= \int_0^\kappa \frac{\int_u^{v(u)} h(x) dx }{f(v(u))} du
= \int_{0}^{1} h(x) \int_{u(x)}^{\min(x,\kappa)} \frac{du}{f(v(u))}dx.
\end{align*}
Let $w(x)$ denote the value of the inner integral $\int_{u(x)}^{\min(x,\kappa)} \frac{du}{f(v(u))}$. If $x \leq \kappa$ then $u(x) = 0$, so we obtain the sought formula $w(x) = \int_{0}^x \frac{du}{f(v(u))}$. Otherwise, the formula follows from the change of variables $v = v( u)$ together with the observation that $\frac{dv}{du} = \frac{f(u)}{f(v)}$.
\end{proof}
Using standard techniques, we can extract from the above Lemma strong structural information about the function $f$ maximising $\Lambda$. The proof is complicated by the fact that we need to account for a variety of pathological behaviours that $f$ may potentially exhibit. However, the key idea is simply to relate to each undesirable behaviour of $f$ a perturbation which increases $\Lambda(f)$.
\newcommand{\operatorname{sgn}}{\operatorname{sgn}}
For $t \in \mathbb{R}$, we let $\operatorname{sgn} t$ denote the sign of $t$, with the convention that $\operatorname{sgn} t = +1$ for $t > 0$, $\operatorname{sgn} t = -1$ for $t < 0$ and $\operatorname{sgn} 0 = 0$.
\begin{lemma}\label{EXTup:lem:constraints}
Suppose that $f \in { \mathcal{F}_{\mathrm{mon}} }$ is such that $\Lambda(f) = \sup_{g \in \mathcal{F}} \Lambda(g)$. Then:
\begin{enumerate}
\item\label{EXTup:cond:11mono@lem:cons} For each $x,y$ we have $\operatorname{sgn}( f(x) - f(y) ) = \operatorname{sgn} ( w(x) - w(y) )$.
\item\label{EXTup:cond:12cont@lem:cons} The function $f$ is continuous (except perhaps at $\kappa_f$).
\item\label{EXTup:cond:13meas@lem:cons} For each measurable $E \subset [0,1]$ we have $\abs{f^{-1}(E)} = \abs{E}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We separate the proof into several steps.
\textit{Step 1.} If $f(x) > f(y)$ then $w(x) \geq w(y)$.
Suppose otherwise, so that $w(x) < w(y)$. Since $w$ is continuous, there is some $\varepsilon > 0$ such that $w(x') < w(y')$ when $\abs{x'-x}, \abs{y'-y} < \varepsilon$. Suppose for concreteness that $x < \kappa_f < y$ (the remaining cases being fully analogous), so that $f$ is increasing in a neighbourhood of $x$ and decreasing in a neighbourhood of $y$.
We wish to consider a function $\tilde f$ obtained by ``swapping'' the intervals $[x,x+\varepsilon]$ and $[y-\varepsilon,y]$. Hence, we define
\begin{align*}
\tilde f(x + t) &= f(y-\varepsilon + t) & t \in [0,\varepsilon]& \\
\tilde f(y-\varepsilon + t) &= f(x + t) &t \in [0,\varepsilon]& \\
\tilde f(z) &= f(z) \qquad & z \not \in [x,x+\varepsilon] \cup [y-\varepsilon,y]&
\end{align*}
It is clear that $\tilde f \in \mathcal{F}$, since membership in $\mathcal{F}$ is ``invariant under rearrangement'', in the sense that the condition $g \in \mathcal{F}$ can be phrased purely in terms of the values $\abs{g^{-1}(E)}$ for $E \subset [0,1]$, measurable.
Put $h = \tilde f - f$, so that $f + \tau h \in \mathcal{F}$ for $\tau \in [0,1]$ because of convexity. Using Lemma \ref{EXTup:lem:delta-Lambda} (and the notation therein) we have:
\begin{align*}
\delta_h \Lambda(f) &= \int_0^1 h(z)w(z) dz
\\&= \int_0^\varepsilon (w(x + t) - w(y-\varepsilon+t)) ( f(y-\varepsilon+t) - f(x+t)) dt > 0.
\end{align*}
Hence, for sufficiently small $\tau$ we conclude that $\Lambda( f + \tau h) > \Lambda(f)$, contradicting the choice of $f$.
\textit{Step 2.} There is no interval where $f$ is constant.
Suppose otherwise, so that there exists some $t_0$ such that $U := f^{-1}(\{t_0\})$ has positive measure. We claim that for any test function $h$ with $\operatorname{supp} h \subset U$, $\norm{h}_{\infty} \leq 1$ and $\int_0^1 h(x) dx = 0 $, we have $f + \tau h \in \mathcal{F}$ for sufficiently small $\tau > 0$. (Recall that $\operatorname{supp} h = \{ x \ : \ h(x) \neq 0 \}$.) The fact that $U$ will generally consist of two disjoint intervals contained in $[0,\kappa_f]$ and $[\kappa_f,1]$ respectively complicates the notation slightly. Instead, we prove a slightly simpler statement, and leave it to the interested reader to work out the details.
\textit{Claim.} Pick two points $0 \leq x_0 < x_1 \leq 1$. Let $G(x) = \frac{x^2}{2}$ and let $F(x) = \frac{x_0+x_1}{2} x - \frac{x_0x_1}{2}$ (so that $F(x) = G(x)$ for $x = x_0,x_1$). Also, fix a differentiable function $H$ with $\operatorname{supp} H \subset [x_0,x_1]$ and $\norm{H'}_{\infty} \leq 1$. Then, for sufficiently small $\tau$, for any $x \in [x_0,x_1]$ we have $F(x) + \tau H(x) \geq G(x)$.
\textit{Proof of Claim.} Taking $\tau$ sufficiently small, we may ensure that $F'(x) + \tau H'(x) > G'(x)$ for $x_0 \leq x \leq x_0 + \frac{x_1-x_0}{10}$ and $F'(x) + \tau H'(x) < G'(x)$ for $x_1 - \frac{x_1-x_0}{10} \leq x \leq x_1$. Hence $F(x) + \tau H(x) \geq G(x)$ when $x$ is within $ \frac{x_1-x_0}{10} $ of $x_0$ or $x_1$. Since $F(x) - G(x)$ is bounded away from $0$ for $x_0 + \frac{x_1-x_0}{10} \leq x \leq x_1 - \frac{x_1-x_0}{10}$, we can ensure $F(x) + \tau H(x) \geq G(x)$ for such $x$ by choosing sufficiently small $\tau$.
We are now ready to derive a contradiction. Since for any fixed $h$ as above we have $f \pm \tau h \in \mathcal{F}$, we conclude that $\delta_h \Lambda(f) = 0$. Hence, by Lemma \ref{EXTup:lem:delta-Lambda} we have $\int_U h(z)w(z) dz = 0$ for every such $h$. However, this is only possible if $w$ is constant almost everywhere on $U$, which it clearly is not, since $w$ is strictly monotone on each of $[0,\kappa_f]$ and $[\kappa_f,1]$. Thus, $f$ is not constant on any interval.
\textit{Step 3.} For any $t > 0$, we have $\inf_{\abs{E}=t} \int_{E} f(x) dx = \frac{t^2}{2}$.
Once this equality is proved, the condition \eqref{EXTup:cond:13meas@lem:cons} follows. (Both are easily seen to be equivalent to the statement that the increasing rearrangement of $f$ is the map $\tilde f(x) = x$.)
Note that inequality in one direction follows directly from the fact that $f \in \mathcal{F}$. We just need to prove $\inf_{\abs{E}=t} \int_{E} f(x) dx \leq \frac{t^2}{2}$.
Suppose for the sake of contradiction that for some $t_0$ we have strict inequality $\int_{E_0} f(x) dx > \frac{1}{2}t_0^2$ where $E_0$ is chosen so that $\abs{E_0} = t_0$ and the integral is minimised. From monotonicity of $f$ it follows that $E_0$ takes the form $E_0 = [0,x_0] \cup [y_0,1]$ for some $0 \leq x_0 \leq \kappa_f \leq y_0 \leq 1$, and because $f$ is nowhere constant, the choice of $x_0,y_0$ is unique.
Let $t_1 > t_0$ be such that $\int_{E_0} f(x) dx = \frac{1}{2}t_1^2$, and let $E_1 = [0,x_1] \cup [y_1,1]$ with $\abs{E_1} = t_1$ minimise $\int_{E_1} f(x) dx$. Take $F = E_1 \setminus E_0 = [x_0,x_1] \cup [y_1, y_0]$, and take a set $U$ such that $\operatorname{cl} U \subset \operatorname{int} F$ and $\abs{U} > 0$. Since for any $x \in U$ and $x' \in E_0$ we have $f(x) > f(x')$, it is not difficult to show that for any $h$ with $\operatorname{supp} h \subset U$, $\norm{h}_\infty \leq 1$, $\int_0^1 h(x)dx = 0$, for sufficiently small $\tau$ we have $f + \tau h \in \mathcal{F}$.
By the same argument as in Step 2., this means that $\int_0^1 h(x) w(x)dx = 0$. Since $w$ is non-constant and $h$ is arbitrary, this is the sought contradiction.
\textit{Step 4.} The function $f$ is continuous (except possibly at $\kappa_f$).
Note that the only way $f$ could be discontinuous is if it had a jump discontinuity at some point $x$. Suppose that this is the case, and assume for concreteness that $x < \kappa_f$, so that $f$ is increasing at $x$. Since by Step 3. the image of $f$ is dense in $[0, 1]$, we may find some $y$ where $f$ is continuous such that $\lim_{z \to x-} f(z) < f(y) < \lim_{z \to x+} f(z)$. The assumption that $x < \kappa_f$ forces $y > \kappa_f$, so for $y' < y$, sufficiently close, we have $w(y') > w(y)$. Hence, using Step 1. we have $w(x) \leq w(y) < w(y') \leq w(x)$, which is a contradiction.
\textit{Step 5.} If $f(x) = f(y)$ then $w(x) = w(y)$.
Assume for concreteness that $x < \kappa_f$. Using Steps 1. and 2. we have $w(x) = \lim_{z \to x} w(z) \leq w(y)$, and symmetrically $w(x) \geq w(y)$.
Steps 5. and 1. together imply condition \eqref{EXTup:cond:11mono@lem:cons}, hence the proof is complete.
\end{proof}
Note that modifying a function $f \in \mathcal{F}$ at a single point does not affect $\Lambda(f)$. Hence, the above Lemma \ref{EXTup:lem:constraints} implies in particular that $\Lambda$ is maximised by a continuous, nowhere constant function $f \in { \mathcal{F}_{\mathrm{mon}} }$.
For continuous $f \in { \mathcal{F}_{\mathrm{mon}} }$ which are not constant on any interval, we introduce local inverse functions $\alpha_f \colon [0,1] \to [0,\kappa_f]$ and $\beta_f\colon [0,1] \to [\kappa_f,1]$ so that
$$f(\a_f(t)) = f(\b_f(t)) = t.$$
Whenever possible, we will suppress dependence on $f$, writing simply $\a$ and $\b$. We have the following, somewhat unexpected, relation.
\begin{lemma}\label{EXTup:obs:at+bt=cons}
Suppose that $f \in { \mathcal{F}_{\mathrm{mon}} }$ is such that $\Lambda(f) = \sup_{g \in \mathcal{F}} \Lambda(g)$ and that $f$ is continuous. Then, for any $t \in [0,1]$ we have $v \circ \a(t) + u \circ \b(t) = 1$. In particular, $\kappa = \frac{1}{2}$.
\end{lemma}
\begin{proof}
The claim is clearly true for $t = 1$, and the value of $\kappa$ follows from the remaining part of the statement by taking $t = 0$.
It is a direct consequence of Lemma \ref{EXTup:lem:constraints} that $\a$ and $\b$ are Lipschitz continuous, with Lipschitz constant at most $1$. Moreover, $u$ and $v$ are Lipschitz continuous on $[0,1] \setminus (\k - \varepsilon,\k+\varepsilon)$ for any $\varepsilon > 0$ (with the Lipschitz constant dependent on $\varepsilon$). Thus, $v \circ \a + u \circ \b$ is Lipschitz continuous on $[0,1 -\varepsilon]$ for any $\varepsilon$. In particular, $v \circ \a + u \circ \b$ is absolutely continuous on $[0,1 -\varepsilon]$, and to prove that it is constant it will suffice to show that $(v \circ \a + u \circ \b)'(t)=0$ for almost all $t$. Passing to the limit $\varepsilon \to 0$ will then complete the proof.
A simple computation yields:
\begin{equation}
(u \circ \b)'(t) = \b'(t) \frac{ f \circ \b(t) }{f \circ u \circ \b(t)} = \frac{t \b'(t) }{f \circ u \circ \b(t)}, \qquad
(v \circ \a)'(t) = - \frac{t \a'(t) }{f\circ v \circ \a(t)} \label{EXTup:eq:43-a}
\end{equation}
almost everywhere (where $\a'(t)$ and $\b'(t)$ are defined).
Another application of Lemma \ref{EXTup:lem:constraints} implies that $w(\a(t)) = w(\b(t))$ (with $w$ defined as in Lemma \ref{EXTup:lem:delta-Lambda}). Differentiating this equality, we conclude that:
\begin{equation}
\frac{\b'(t)}{f \circ u \circ \b(t)} = \frac{\a'(t)}{f\circ v \circ \a(t)}.
\label{EXTup:eq:43-b}
\end{equation}
Combining \eqref{EXTup:eq:43-a} and \eqref{EXTup:eq:43-b} we conclude that indeed $(v \circ \a + u \circ \b)'(t)=0$ a.e., which finishes the proof.
\end{proof}
We are now ready to prove the final bit of information we need about the function maximising $\Lambda$, namely the symmetry.
\begin{observation}\label{EXTup:obs:symmetry}
Suppose that $f \in { \mathcal{F}_{\mathrm{mon}} }$, continuous, is such that $\Lambda(f) = \sup_{g \in \mathcal{F}} \Lambda(g)$.
Then for any $s, t \in [0,1]$, if $\a(s) = {u \circ \b}(t)$ then also $\a(t) = {u \circ \b}(s)$. In particular, the function $f$ is symmetric: $f(x) = f(1-x)$ for all $x$.
\end{observation}
\begin{proof}
We begin by proving the symmetry of $f$, assuming the former part of the claim. It will be enough to show that for any $t$, $\a(t) + \b(t) = 1$. Take any $t$, and let $s$ be such that $\a(s) = {u \circ \b}(t)$. By assumption, $\a(t) = {u \circ \b}(s)$. We have ${v \circ \a}(s) = v \circ u \circ \b(t) = \b(t)$. Hence, $\a(t) + \b(t) = {u \circ \b}(s) + {v \circ \a}(s) = 1$.
For the remaining part of the argument, it will be convenient to define a pair of transformations $T_\a$ and $T_\b$ on $[0,1]$ given by $T_\a = f \circ {u \circ \b}$ and $T_\b = f \circ {v \circ \a}$. We note several properties of these transformations.
\begin{enumerate}
\item\label{EXTup:cond:T1} $T_\a \circ T_\b (t) = t$ and $T_\b \circ T_\a (t) = t$ for any $t$;
\item\label{EXTup:cond:T2} $\a(T_\a(t)) = {u \circ \b}(t)$ and $\b(t) = {v \circ \a}(T_\a(t))$ for any $t$;
\item\label{EXTup:cond:T3} $\abs{ \a(T_\a^2(t)) - \a(T_\a^2(t')) } = \abs{ \b(t) - \b(t')}$ for any $t,t'$;
\item\label{EXTup:cond:T4} $T_\a(t) > T_\a(t')$ and $T_\b(t) > T_\b(t')$ for any $t < t'$.
\end{enumerate}
Assertions \eqref{EXTup:cond:T1} and \eqref{EXTup:cond:T2} follow directly by substitution. For example, we have:
$$
T_\a \circ T_\b (t) = f \circ {u \circ \b} \circ f \circ {v \circ \a}(t)
= f \circ {u} \circ {v \circ \a}(t) = f \circ{ \a} (t) = t,
$$
where we use that $u \circ v(x) = x$ and $\beta \circ f(x) = x$ in appropriate ranges of $x$. The remaining equalities follow along similar lines.
Assertion \eqref{EXTup:cond:T4} follows from known monotonicity properties of $f$, $\a, \b$ and $u, v$. For instance, if $t < t'$ then $\b(t) > \b(t')$, hence $u \circ \beta(t) > u \circ \beta(t')$ and $f\circ u \circ \beta(t) > f \circ u \circ \beta(t')$ (note that $f$ is increasing in the relevant range).
Assertion \eqref{EXTup:cond:T3} is the least obvious and the most crucial. Using Observation \ref{EXTup:obs:at+bt=cons} and previously established properties of $T_\a$ we have:
\begin{align*}
\abs{ \a(T_\a^2(t)) - \a(T_\a^2(t')) } &=
\abs{ u \circ \b (T_\a(t)) - u \circ \b (T_\a(t')) }
\\ &= \abs{ v \circ \a (T_\a(t)) - v \circ \a (T_\a(t')) }
= \abs{ \b (t) - \b(t') }.
\end{align*}
Our main claim is equivalent to the statement that for each $t$, $T_\a^2(t) = t$, where we may restrict our attention to $t$ with $T_\a(t) < t$. For the sake of contradiction, suppose that for some $t_0$ we have $T_\a^2(t_0) \neq t_0$. For concreteness, we may suppose that $T_\a^2(t_0) > t_0$, the other case being fully analogous.
Let us consider the consecutive iterates $t_n := T_\a^n(t_0)$ for $n \in \mathbb{Z}$ (for $n < 0$ we use $T_\beta = T_\a^{-1}$). By \eqref{EXTup:cond:T4}, it is clear that $t_{2n}$ is monotonically increasing, while $t_{2n+1}$ is monotonically decreasing. Moreover, we may define:
\begin{align*}
I_n &:= { \int_{\a(t_n)}^{\a(t_{n+2})} f(x) dx } = { \int_{\b(t_{n-1})}^{\b(t_{n+1})} f(x) dx }
\end{align*}
where the two integrals are equal because by \eqref{EXTup:cond:T2} we have
\begin{align*}
0 &= \int_{\a(t_n)}^{v\circ \a(t_n)}f(x)dx - \int_{\a(t_{n+2})}^{v\circ \a(t_{n+2})} f(x)dx
\\&= \int_{\a(t_n)}^{\a(t_{n+2})}f(x)dx - \int_{\b(t_{n-1})}^{\b(t_{n+1})}f(x)dx.
\end{align*}
In similar spirit, we may define
\begin{align*}
l_n &:= {\a(t_{n+2}) - \a(t_{n}) } = {\b(t_{n-2}) - \b(t_{n}) }
\end{align*}
where the two quantities are equal by a direct application of \eqref{EXTup:cond:T3}.
Using monotonicity of $f$ and $t_n$, we find that for any $n \in \mathbb{Z}$ we have
\begin{align*}
l_{2n} t_{2n+2} &\geq I_{2n} \geq l_{2n} t_{2n},
&l_{2n+1} t_{2n+3} &\leq I_{2n+1} \leq l_{2n+1} t_{2n+1}, \\
l_{2n+1} t_{2n-1} &\geq I_{2n} \geq l_{2n+1} t_{2n+1},
&l_{2n+2} t_{2n} &\leq I_{2n+2} \leq l_{2n+2} t_{2n+2} .
\end{align*}
Combining the above inequalities, we find
$$
\frac{l_{2n}}{l_{2n+2}}
= \frac{l_{2n}}{l_{2n+1}} \cdot \frac{l_{2n+1}}{l_{2n+2}}
\geq \frac{t_{2n+1}}{t_{2n+2}}
\cdot \frac{t_{2n}}{t_{2n+1}}
= \frac{t_{2n}}{t_{2n+2}}.
$$
Taking the product as $n$ ranges over $-N+1,\dots,0$ and letting $N \to \infty$ we conclude that
$$\liminf_{N\to \infty} l_{-2N} \geq \frac{ l_0}{t_0} \lim_{N\to \infty} t_{-2N}. $$
The limit on the right hand side is defined, because the sequence $t_{-2N}$ is decreasing.
We claim that $\lim_{N\to \infty} t_{-2N} > 0$. Clearly, we have $T_\a(t_0) = t_1 < t_0$ and $T_\a(t_1) = t_2 > t_1$. Thus, there is some $t_1 < t_* < t_0$ such that $T_\a(t_*) = t_*$. By monotonicity of $T_\a$, for any $t$ we have $t > t_*$ if and only if $T_\a(t) < t_*$, which is further equivalent to $T_\a^2(t) > t_*$. Hence, $t_{2n} > t_*$ for all $n \in \mathbb{Z}$, and in particular $\lim_{N\to \infty} t_{-2N} \geq t_* > 0$.
It follows that $l_{-2N}$ converges to a non-zero limit as $N \to \infty$. On the other hand, by construction of $l_n$, the interval $[0,1]$ contains a disjoint union of intervals with lengths $l_n$, $n \in \mathbb{Z}$. In particular, $\sum_{n \in \mathbb{Z}} l_n < \infty$, contradicting the above statement.
\end{proof}
\begin{corollary}
The unique (up to a.e. equality) function $f \in { \mathcal{F}_{\mathrm{mon}} }$ maximising $\Lambda$ is given by:
$$
f(x) =
\begin{cases}
2x & \text{ if } x \leq \frac{1}{2}\\
2(1-x) & \text{ if } x \geq \frac{1}{2}.
\end{cases}
$$
\end{corollary}
We are now in position to finish the proof of Proposition \ref{EXTup:prop:L(f)-bound}. Let $f$ be the function defined above. The region ${L}(f)$ can be described explicitly:
$${L}(f) = \bbra{ (x, y) \in [0,1]^2 \ : \ x \leq y, \ x^2 + (1-y)^2 \leq \frac{1}{4} }.$$
This gives $\Lambda(f) = \frac{\pi}{16}$, which is precisely the needed bound.
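As a quick numerical sanity check (not part of the proof), the area of this region can be estimated on a grid; note that the disk condition already forces $x \leq \frac12 \leq y$, so ${L}(f)$ is a quarter-disk of radius $\frac12$ and its area is $\frac{\pi}{16}$.

```python
import math

# Grid estimate of the area of
#   L(f) = {(x, y) in [0,1]^2 : x <= y, x^2 + (1-y)^2 <= 1/4},
# assuming (as in the text) that Lambda(f) is the area of this region.
N = 1000
count = 0
for i in range(N):
    x = (i + 0.5) / N
    for j in range(N):
        y = (j + 0.5) / N
        if x <= y and x * x + (1 - y) ** 2 <= 0.25:
            count += 1
area = count / N ** 2
assert abs(area - math.pi / 16) < 3e-3        # pi/16 = 0.196349...
```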
\section{Closing remarks}\label{section:END}
In the previous sections, we have obtained a fairly satisfactory understanding of $\abs{S(a)}$ for a random permutation, as well as in the ``best case scenario'' where $a$ is chosen to maximize $\abs{S(a)}$. It is natural to ask about the behaviour of $\abs{S(a)}$ in the ``worst case scenario'', when $a$ is chosen to minimize $\abs{S(a)}$. We now address this problem, but we ask more questions than we answer.
The best lower bound we are aware of can be obtained by an argument in \cite{Solymosi-2005} (also present in \cite{Balog-2014}), a variant of which we sketch below for the convenience of the reader.
\begin{proposition}
For any permutation $a$ of $[n]$ we have $\abs{S(a)} \geq \frac{n^{3/2}}{4\sqrt{2}}$.
\end{proposition}
\begin{proof}
For any integer $k$, consider the set $S_k(a)$ of the sums $s \in S(a)$ with $kn+1 \leq s \leq (k+1)n$. Clearly, $S(a) = \bigcup_{k \geq 0} S_k(a)$, and the union is disjoint.
Take any $1 \leq k \leq \frac{n+1}{4}$. For any $u$, at least one of the sums $\sum_{i=u}^n a_i$ or $\sum_{i=1}^u a_i$ exceeds $ \frac{1}{2} \binom{n+1}{2} \geq k n $. For concreteness, suppose $\sum_{i=u}^n a_i > kn$ and let $v$ be the smallest integer such that $\sum_{i=u}^v a_i > k n$.
By the choice of $v$ we have $\sum_{i=u}^v a_i \in S_{k}(a)$ and $\sum_{i=u+1}^v a_i \in S_{k}(a)\cup S_{k-1}(a)$. Hence, $a_u \in S_k(a) - (S_k(a) \cup S_{k-1}(a))$. Since $u$ was chosen arbitrarily, we conclude that
$ S_k(a) - (S_k(a) \cup S_{k-1}(a)) \supset [n],$
which implies that $$\abs{S_k(a)} + \abs{S_{k-1}(a)} \geq \sqrt{2n}.$$
Summing over $1 \leq k \leq \frac{n+1}{4}$ and using $\abs{S_0(a)} = n \geq \sqrt{2n} $ we conclude that:
$$\abs{S(a)} \geq \frac{1}{2} \sum_{k=0}^{\floor{\frac{n+1}4}} (\abs{S_k(a)} + \abs{S_{k-1}(a)}) \geq \frac{1}{2} \floor{\frac{n+5}{4}} \sqrt{2n} \geq \frac{n^{3/2}}{4\sqrt{2}}.$$
\end{proof}
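For small $n$ the bound can be confirmed exhaustively over all permutations; the brute-force check below is an illustration only, computing $S(a)$ via prefix sums.

```python
import itertools
import math

def consecutive_sums(a):
    """S(a): the set of sums a_u + ... + a_v over all nonempty consecutive blocks."""
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    n = len(a)
    return {prefix[v] - prefix[u] for u in range(n) for v in range(u + 1, n + 1)}

# Exhaustive check of |S(a)| >= n^{3/2} / (4*sqrt(2)) over all permutations of [n].
for n in range(1, 8):
    bound = n ** 1.5 / (4 * math.sqrt(2))
    assert min(len(consecutive_sums(p))
               for p in itertools.permutations(range(1, n + 1))) >= bound
```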
This bound is rather far from what one would expect. The trivial permutation $a_i = i$ is essentially the only known example with $\abs{S(a)} = o(n^2)$. In this case, we have $\abs{S(a)} \sim n^{2-o(1)} $.
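The slow decay for the trivial permutation can be glimpsed numerically: consecutive sums of $\mathrm{id}_n$ are exactly differences of triangular numbers, so $S(\mathrm{id}_n)$ is quick to enumerate. The values of $n$ below are illustrative choices, far too small to exhibit the $o(n^2)$ decay conclusively.

```python
# |S(id_n)| / n^2 for the trivial permutation a_i = i, via the prefix sums
# T_m = m(m+1)/2: every element of S(id_n) is a difference T_v - T_u.
for n in [100, 200, 400, 800]:
    prefix = [m * (m + 1) // 2 for m in range(n + 1)]
    s = {prefix[v] - prefix[u] for u in range(n) for v in range(u + 1, n + 1)}
    assert set(range(1, n + 1)) <= s       # each m <= n equals T_m - T_{m-1}
    assert len(s) <= n * (n + 1) // 2      # all sums lie in [1, T_n]
    print(n, round(len(s) / n ** 2, 4))
```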
Slightly more generally, we have a similar result for permutations of ``bounded complexity''. Because the result is rather standard, we only sketch the argument.
\begin{example}\label{LOW:prop:small-S}
Fix an integer $M$. Let $a$ be a permutation of $[n]$ of complexity at most $M$, by which we mean that there is a partition $[n] = \bigcup_{j=1}^{M} I_j$ into intervals such that for each $j \in [M]$, $i \in I_j$ we have $a_i = i b_j + c_j$, where $b_j,c_j$ are some constants and $\abs{b_j} \leq M$.
Then $\abs{S(a)} = o(n^2)$ but for any $\delta > 0$ also $\abs{S(a)} = \Omega(n^{2-\delta})$ (the implicit rates of convergence are allowed to depend on $M$ and $\delta$).
\end{example}
\begin{proof}[Sketch of the proof]
For the lower bound, it is enough to observe that one of the intervals $I_j$ has large length $\abs{I_j} \geq n/M$. For $u,v \in I_j$, $S(a)$ contains the sum
$$ s = \sum_{i=u}^{v} a_i = (v-u+1)\bra{c_j + \frac{u+v}2 b_j}.$$
There are at least $\binom{n/M}2$ choices of $u,v$, and each sum $s$ is represented in at most $d(s) \ll n^{\delta}$ possible ways. Hence, $\abs{S(a)} = \Omega(n^{2-\delta})$.
For the upper bound, it suffices to show that for any $j,k \in [M]$ the number of sums $\sum_{i=u}^v a_i$ with $u \in I_j,\ v \in I_k$ is $o(n^2)$. If we fix $j,k$, and let $u \in I_j,\ v \in I_k$ then
$$\sum_{i=u}^v a_i = Av^2 + Bu^2 + C u + D v + E,$$
where $A = \frac{b_k}{2}$, $B = - \frac{b_j}{2}$. Hence, the problem reduces to showing that for the polynomial $P(u,v) = Av^2 + Bu^2 + C u + D v + E$ we have
$$
\abs{\bbra{ P(u,v) \ : \ u,v \in [n] }} = o(n^2).
$$
It is not difficult to reduce to the case $C = D = E = 0$ and $\gcd(A,B) = 1$. There are now two cases to consider.
\textit{Case 1.} Suppose that $-AB$ is a square. Then $P(u,v)$ factors as $(A'v + B'u)(A'v - B'u)$ and the bound follows from the theorem of Erd\H{o}s cited in the introduction.
\textit{Case 2.} Suppose that $-AB$ is not a square. Then, by Chebotarev's density theorem, there is a family $\mathcal{P}$ of primes $p$ with positive relative density such that $-AB$ is not a square modulo $p$. For any $p \in \mathcal{P}$ we then have $$P(u,v) \not \equiv p,2p,\dots,(p-1) p \pmod{p^2},$$
and thus
\begin{align*} \lim_{n \to \infty} \frac{ \abs{\bbra{ P(u,v) \ : \ u,v \in [n] }} }{ n^2 } \ll
\prod_{\substack{ p \in \mathcal{P}}} \bra{1-\frac{p-1}{p^2}} = 0. & \qedhere
\end{align*}
\end{proof}
In the above example, we could equally well allow the ``complexity'' to tend to $\infty$ with $n$, but do so sufficiently slowly.
Hence, both for random and highly structured permutations $a$, we have $\abs{S(a)} \sim n^{2-o(1)}$. In absence of plausible counterexamples, we pose the following question.
\begin{question}
Is it the case that for any $\delta > 0$, there exists a constant $c_\delta > 0$ such that for any $n$, and for any permutation $a$ of $[n]$ one has $\abs{S(a)} \geq c_\delta n^{2-\delta}$?
\end{question}
In fact, all examples with $\abs{S(a)} = o(n^2)$ we are aware of exhibit some algebraic structure, much as in Example \ref{LOW:prop:small-S}. It is not quite the case that $\abs{S(a)}$ is minimised for the trivial permutation, but none of the known examples seems to be significantly worse. Hence, we may ask more boldly:
\begin{question}
Does there exist an absolute constant $c$ such that for any $n$, and for any permutation $a$ of $[n]$ one has $\abs{S(a)} \geq c \abs{S(\mathrm{id}_n)}$?
(Here, $\mathrm{id}_n$ denotes the trivial permutation on $[n]$).
\end{question}
In similar spirit, we may also ask if the only way for $S(a)$ to be small is if $a$ has some algebraic structure. To give an indication of just how much structure one may hope to find, we give the following examples.
\begin{example}
Fix a constant $M$, and suppose that $M \mid n$. Consider the permutation
$$a = (1,\frac{n}{M}+1,\dots,(M-1)\frac{n}{M}+1, 2,\frac{n}{M}+2,\dots,(M-1)\frac{n}{M}+2,3,\dots),$$ that is $a_i = \ceil{\frac{i}{M}} + \frac{n}{M} (i-1 \mod M)$. Then $\abs{S(a)} = o(n^2)$ (where the decay rate may depend on $M$).
\end{example}
Note, however, that a similar looking permutation of Proposition \ref{I:prop:cexple-basic} has $\Theta(n^2)$ sums.
\begin{example}
Take any permutation $a$ with $\abs{S(a)} = o(n^2)$, and let $m = o(n)$. Consider a permutation $b$ obtained from $a$ by choosing $m$ pairs of consecutive indices $i,i+1$ and swapping $a_i$ with $a_{i+1}$. More precisely, pick a set $I$ with $\abs{I} = m$ and such that $i \in I$ implies $i+1 \not \in I$, and define:
$$
b_i =
\begin{cases}
a_{i+1} & \text{ if } i \in I,\\
a_{i-1} & \text{ if } i+1 \in I,\\
a_{i} & \text{ otherwise}.\\
\end{cases}
$$
Then $\abs{S(b)} = o(n^2)$.
\end{example}
In view of the above examples, one cannot hope to ensure that $a$ looks structured on any large structured piece of $[n]$, nor that it looks structured on a long interval. We can, however, hope that the following should have a positive answer.
\begin{question}
Does there exist $\varepsilon > 0$ such that the following is true?
Let $a$ be a permutation of $[n]$ with $\abs{S(a)} \leq \varepsilon n^2$. Then there exists an index set $I \subset [n]$ with $\abs{I} = \omega(1)$ as $n \to \infty$ and constants $b,c$ such that for $i \in I$ we have $a_i = b i + c$.
\end{question}
One may also ask a similar question in a more general context. Let $a$ be a permutation of a set $A \subset \mathbb{N}$ of size $\abs{A} = n$, not necessarily equal to $[n]$. Let $S(a)$ be defined just as before, to be the set $\bbra{ \sum_{i=u}^v a_i \ : \ u,v \in [n]}$. How small can $\abs{S(a)}$ be?
It is perhaps more natural to phrase this question in different terms. For a set $B = \{b_i \ : \ i \in [m] \}$ with $b_1 < b_2 < \dots < b_m$, define (following the terminology of \cite{Solymosi-2005} and \cite{Balog-2014}) the set of gaps $D(B) = \{ b_{i+1} - b_{i} \ : \ i \in [m-1]\}$. Note that setting $B = \{ \sum_{i=1}^{v} a_i \ : \ v \in [n] \} \cup \{0\}$ we recover $S(a) = (B-B) \cap \mathbb{N}$ and $D(B) = \{a_i \ : \ i \in [n] \} = A$.
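The dictionary between $S(a)$, the prefix-sum set $B$ and the gap set $D(B)$ can be checked mechanically on a small example (the permutation below is an arbitrary illustrative choice):

```python
# One small permutation: verify S(a) = (B - B) ∩ N and D(B) = {a_i} = A.
a = [3, 1, 4, 2, 6, 5]
n = len(a)
B = [0]
for x in a:
    B.append(B[-1] + x)                    # B = {sum_{i<=v} a_i : v} ∪ {0}, increasing
S = {B[v] - B[u] for u in range(n) for v in range(u + 1, n + 1)}
D = {B[i + 1] - B[i] for i in range(n)}
assert S == {x - y for x in B for y in B if x > y}   # S(a) = (B - B) ∩ N
assert D == set(a)                                   # D(B) = A
```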
\begin{question}
Suppose that $B \subset \mathbb{N}$ with $\abs{B} = n$ is such that $\abs{D(B)} = n-1$. For which $\delta > 0$ does there exist $c_\delta > 0$ (independent of $B$) such that $\abs{B-B} \geq c_\delta n^{2-\delta}$?
\end{question}
This question is already alluded to in \cite{Solymosi-2005}, and resolved positively in the case $\delta = \frac{1}{2}$. For $\delta \in (0,\frac{1}{2})$, to the best of our knowledge, the answer is not known.
\nocite{Alon-Spencer,Rudin}
\bibliographystyle{plain}
| {
"timestamp": "2015-05-21T02:00:46",
"yymm": "1504",
"arxiv_id": "1504.07156",
"language": "en",
"url": "https://arxiv.org/abs/1504.07156",
"abstract": "We study the number of values taken by the sums $\\sum_{i=u}^{v-1} a_i$, where $a_1,a_2,\\dots,a_n$ is a permutation of $1,2,\\dots,n$ and $1 \\leq u < v \\leq n+1$. In particular, we show that for a random choice of a permutation, with high probability there are $(\\frac{1+e^{-2}}{4} +o(1)) n^2$ such sums. This answers an old question of Erdős and Harzheim. We also obtain non-trivial bounds on the maximum possible number of distinct sums, ranging over all permutations of $1,2,\\dots,n$. We close with some questions concerning the minimal possible number of distinct sums.",
"subjects": "Combinatorics (math.CO); Number Theory (math.NT)",
"title": "On consecutive sums in permutations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9845754510863375,
"lm_q2_score": 0.8652240756264639,
"lm_q1q2_score": 0.851878384550685
} |
https://arxiv.org/abs/1608.07247 | Minimal number of points on a grid forming line segments of equal length | We consider the minimal number of points on a regular grid on the plane that generates $n$ line segments of points of exactly length $k$. We illustrate how this is related to the $n$-queens problem on the toroidal chessboard and show that this number is upper bounded by $kn/3$ and approaches $kn/4$ as $n\rightarrow\infty$ when $k+1$ is coprime with $6$ or when $k$ is large. | \section{Introduction}
We consider points on a regular grid on the plane which form horizontal, vertical or diagonal blocks of exactly $k$ points (which we will call {\em patterns})\footnote{We use the convention that an isolated point corresponds to $4$ patterns of length $1$; a horizontal, a vertical and 2 diagonal patterns.}. For example, the set of points in Fig. \ref{fig:one} shows 12 points forming 3 patterns of length $5$. Note that since a pattern of length $k$ has to have exactly $k$ points flanked by empty grid locations, the set of points in Fig. \ref{fig:one} contains 4 patterns of length 2 and does not contain any patterns of length $4$ or of length $3$. Our motivation for studying this problem is the Bingo-4 problem proposed by Sun et al. and described in OEIS\cite{OEIS} sequence \href{http://oeis.org/A273916}{A273916} where the case $k=4$ is considered. Let $a_k(n)$ denote the minimal number of points needed to form $n$ patterns of length $k$, i.e. Fig. \ref{fig:one} shows that $a_5(3) = 12$. Finding the exact value of $a_k(n)$ appears to be difficult and not feasible for large $n$. The purpose of this note is to provide an analysis of the asymptotic behavior of $a_k(n)$.
\begin{figure}[htbp]
\centerline{\includegraphics[width=3in]{figure1.pdf}}
\caption{12 points on a grid forming 3 patterns of length 5.}\label{fig:one}
\end{figure}
\section{Bounds and asymptotic behavior of $a_k(n)$}
It is easy to see that $a_k(1) = k$, $a_k(2) = 2k-1$ and $a_k(3) = 3(k-1)$. Next, consider Fekete's subadditive Lemma \cite{fekete:subadditive:1923} which is applicable to subadditive sequences.
\begin{lemma}[Fekete's subadditive Lemma]
If the sequence $a(n)$ is subadditive, i.e. $a(n+m) \leq a(n)+a(m)$, then $\lim_{n\rightarrow\infty}\frac{a(n)}{n}$ exists and is equal to $\inf_n \frac{a(n)}{n}$.
\label{lem:fekete}
\end{lemma}
\begin{theorem}
For all $k$, $a_k(n)$ is subadditive, and $f(k) = \lim_{n\rightarrow\infty}\frac{a_k(n)}{n} $ exists and satisfies $\frac{k}{4}\leq f(k) \leq \frac{k}{3}$.
\label{thm:bound}
\end{theorem}
\begin{proof}
Since each pattern takes $k$ points and each point can be part of at most $4$ patterns, $a_k(n) \geq \frac{kn}{4}$.
It is clear that $a_k(n)$ is subadditive. Lemma \ref{lem:fekete} implies that $f(k)$ exists and is equal to $\inf_n \frac{a_k(n)}{n}$.
Consider a $k$ by $m$ rectangular array of points with $k\leq m$. It is easy to see that there are $3m-2k+2$ length $k$ patterns there. This shows that $a_k(3m-2k+2)\leq km$ which implies that
$\frac{k}{4}\leq f(k) \leq \frac{k}{3}$.
\end{proof}
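These counts are easy to verify by machine. The following counter is an illustrative implementation of the pattern convention from the introduction (an isolated point contributes $4$ patterns of length $1$); it confirms the constellations realizing $a_k(1) = k$ and $a_k(2) = 2k-1$, and the $3m-2k+2$ count for a $k$ by $m$ rectangle used in the proof.

```python
def count_patterns(points, k):
    """Number of maximal runs of exactly k adjacent grid points, summed over
    the horizontal, vertical and two diagonal directions (so an isolated
    point contributes 4 patterns of length 1, as in the text)."""
    pts = set(points)
    total = 0
    for dx, dy in [(1, 0), (0, 1), (1, 1), (1, -1)]:
        for (x, y) in pts:
            if (x - dx, y - dy) in pts:
                continue                       # not the start of a run
            run, cx, cy = 0, x, y
            while (cx, cy) in pts:
                run += 1
                cx, cy = cx + dx, cy + dy
            if run == k:
                total += 1
    return total

assert count_patterns({(0, 0)}, 1) == 4                        # isolated point
k = 5
assert count_patterns({(i, 0) for i in range(k)}, k) == 1      # a_k(1): k points
cross = {(i, 2) for i in range(k)} | {(2, j) for j in range(k)}
assert len(cross) == 2 * k - 1 and count_patterns(cross, k) == 2   # a_k(2)
m = 9                                                          # k by m rectangle
rect = {(i, j) for i in range(m) for j in range(k)}
assert count_patterns(rect, k) == 3 * m - 2 * k + 2
```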
\section{Constellations where each point is part of 4 different patterns}
The upper bound $\frac{k}{3}$ on $f(k)$ in Theorem \ref{thm:bound} shows that for large $n$ we can construct a constellation of $n$ points such that most points are part of $3$ different patterns. Is it possible to construct a constellation such that most points are part of $4$ different patterns (a horizontal, a vertical and two diagonal patterns) and thus achieve the lower bound $\frac{k}{4}$? The case $k=1$ is simple. Since $a_1(4n) = n$ as exhibited by the constellation of $n$ isolated points, this implies that $f(1) = \frac{1}{4}$.
Let $\sigma$ be a permutation on the integers $\{0,1,\cdots , k\}$. Consider a $k+1$ by $k+1$ square grid and place a point on each position $(i,j)$ except when it is of the form $(i,\sigma(i))$. It is clear that tiling this grid on the plane results in a constellation that has horizontal and vertical patterns of length $k$.
In order for the diagonals to also have a block of exactly $k$ points, $\{i+\sigma(i) \mod k+1\}$ and $\{i-\sigma(i) \mod k+1\}$ need to be permutations of $\{0,1,\cdots , k\}$ as well. Consider an $N$ by $N$ subgrid of this tiling. Except for points near the edges, whose number is on the order of $kN \propto k\sqrt{n}$, all points belong to $4$ patterns of length $k$. Thus we have proved the following:
\begin{theorem} \label{thm:perm}
If there is a permutation $\sigma$ of the numbers $\{0,1,\cdots ,k\}$ such that
$\sigma_1 = \{i+\sigma(i) \mod k+1\}$ and $\sigma_2 = \{i-\sigma(i) \mod k+1\}$ are both permutations, then $f(k) = \frac{k}{4}$. In particular, $\frac{a_k(n)}{n}$ converges to $f(k)$ at a rate of
$O\left(\frac{1}{\sqrt{n}}\right)$.
\end{theorem}
If $\sigma$ satisfies the conditions of Theorem \ref{thm:perm}, then so does $\sigma^{-1}$. For a fixed integer $m$, the permutation
$\sigma(i) + m \mod k+1$ also satisfies these conditions. We will use this to partition the set of admissible permutations into equivalence classes. More specifically,
\begin{definition}
Let $S_{k+1}$ be the set of permutations on $\{0,1,\cdots , k\}$. $T_{k+1}\subset S_{k+1}$ is defined as the set of permutations $\sigma$ such that $\{i+\sigma(i)\mod k+1\}$ and $\{i-\sigma(i)\mod k+1\}$ are in $S_{k+1}$. The equivalence relation $\sim$ is defined as follows. If $\sigma, \tau \in T_{k+1}$, then $\sigma \sim \tau$ if $\tau = \sigma^{-1}$ or
there exists an integer $m$ such that $\sigma(i) = \tau(i) + m \mod k+1$ for all $i$.
\end{definition}
Thus if $T_{k+1}\neq \emptyset$, then $f(k) = \frac{k}{4}$.
\section{Modular $n$-queens problem}
The $n$-queens problem asks whether $n$ nonattacking queens can be placed on an $n$ by $n$ chessboard. The answer is yes, as first shown by Pauls \cite{pauls:nqueens:1874,bell:nqueens:2009}.
Next consider a toroidal $n$ by $n$ chessboard, where the top edge is connected to the bottom edge and the left edge is connected to the right edge.
P\'olya \cite{polya:nqueens:1918} showed that a solution to the corresponding modular $n$-queens problem exists if and only if $n$ is coprime with $6$. It is clear that
a permutation in $T_{k+1}$ corresponds to a solution of the modular $(k+1)$-queens problem. Thus P\'olya's result is equivalent to the following result:
\begin{theorem}\label{thm:polyaT}
$T_{k+1} \neq \emptyset$ if and only if $k+1$ is coprime with $6$.
\end{theorem}
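This criterion can be confirmed by brute force for small boards; exhaustive search is only feasible for small $k+1$, so the check below is illustrative.

```python
import itertools
import math

def in_T(sigma):
    """sigma lies in T_n if i + sigma(i) and i - sigma(i) are permutations mod n."""
    n = len(sigma)
    return (len({(i + s) % n for i, s in enumerate(sigma)}) == n and
            len({(i - s) % n for i, s in enumerate(sigma)}) == n)

# T_{k+1} is nonempty if and only if gcd(k+1, 6) = 1, checked here for k+1 <= 8.
for n in range(1, 9):
    found = any(in_T(p) for p in itertools.permutations(range(n)))
    assert found == (math.gcd(n, 6) == 1)
```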
\begin{corollary}\label{cor:one}
If $k+1$ is coprime with $6$, then $f(k) = \frac{k}{4}$.
\end{corollary}
Monsky \cite{monsky:nqueens:1989} showed that $n-2$ nonattacking queens can be placed on an $n$ by $n$ toroidal chessboard, and that $n-1$ queens can be placed if
$n$ is not divisible by $3$ or $4$.
This implies the following, which shows that for large $k$, $f(k)$ approaches the lower bound $\frac{k}{4}$:
\begin{theorem}\label{cor:upperbound}
$f(k) \leq \frac{k(k+1)+2}{4(k-1)}$. If $k+1$ is not divisible by $3$ or $4$, then
$f(k) \leq \frac{k(k+1)+1}{4k}$.
\end{theorem}
\begin{proof}
Consider a $k+1$ by $k+1$ array with $k+1-r$ nonattacking queens. By placing a point on each location not occupied by a queen, we obtain a constellation with $(k+1)^2-(k+1-r)$ points.
Each queen position corresponds to $4$ patterns. Thus, when this array is tiled, the ratio $\frac{a_k(n)}{n}$ approaches
$\frac{(k+1)^2-(k+1-r)}{4(k+1-r)} = \frac{k(k+1)+r}{4(k+1-r)}$ for a large number of points. The conclusion follows by setting $r =1 $ or $r= 2$.
\end{proof}
\begin{corollary}
$\lim_{k\rightarrow \infty} \frac{f(k)}{k} = \frac{1}{4}$.
\end{corollary}
\subsection{Lattice construction}
As in the $n$-queens problem, we can construct permutations in $T_{k+1}$ via a lattice construction.
In particular, we construct a constellation of points by placing a point on the grid if and only if it is not a point on a lattice spanned by two vectors $v_1$ and $v_2$.
For instance, with the lattice generated by the vectors $(1,2)$ and $(2,-1)$, the resulting set of points for $N=15$ is shown in Fig. \ref{fig:two}.
In particular, this configuration shows that $f(4) = 1$.
\begin{figure}[htbp]
\centerline{\includegraphics[width=6in]{figure2.pdf}}
\caption{A lattice constellation. Points in the center of the grid are part of $4$ different patterns, showing that $\frac{a_{4}(n)}{n} \rightarrow 1$ as $n\rightarrow \infty$.}\label{fig:two}
\end{figure}
The following result appears to be well-known \cite{bell:nqueens:2009}, but we include it here for completeness.
\begin{theorem} \label{thm:coprime}
If there exists $1 <m < k$ such that $m-1$, $m $ and $m+1$ are all coprime with $k+1$, then the lattice construction with $v_1 = (1,m)$ and $v_2 = (k+1,0)$ generates a permutation $\sigma$ in $T_{k+1}$.
\end{theorem}
\begin{proof}
Consider the lattice generated by the vectors $(1,m)$ and $(0,k+1)$ (which span the same lattice as $v_1$ and $v_2$, since $(k+1,0) = (k+1)(1,m) - m(0,k+1)$). Clearly, if $m$ is coprime with $k+1$, then in each $k+1$ by $k+1$ subarray the locations which do not have a point are of the form $(i,\sigma(i))$ with $\sigma$ a permutation.
The lattice points have coordinates $(a, ma+(k+1)b)$ which lie on the $2$ main diagonals if $a=ma+(k+1)b$ or $-a = ma+(k+1)b$.
In the first case $-(m-1)a=(k+1)b$. Since $m-1$ is coprime with $k+1$, this means that $a$ is a multiple of $k+1$, i.e., a diagonal pattern must have length $k$. In the second case $-(m+1)a = (k+1)b$. Since $k+1$ is coprime with $m+1$, again this means that $a$ is a multiple of $k+1$.
\end{proof}
Theorem \ref{thm:coprime} also provides a proof of Corollary \ref{cor:one}, since if $k+1$ is coprime with $6$, then $1$, $2$ and $3$ are all coprime with $k+1$. In particular, the lattice construction with $v_1 = (1,2)$ and $v_2 = (k+1,0)$ generates a permutation $\sigma$ in $T_{k+1}$.
Fig. \ref{fig:three} shows the construction for $k= 12$.
\begin{figure}[htbp]
\centerline{\includegraphics[width=6in]{figure3.pdf}}
\caption{A lattice constellation for $k=12$ generated by vectors $(1,2)$ and $(0,13)$.}\label{fig:three}
\end{figure}
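The special case $m = 2$ can be checked directly for many values of $k$: when $\gcd(k+1,6)=1$, the numbers $1$, $2$, $3$ are coprime with $k+1$, so $\sigma(i) = 2i \bmod (k+1)$, $i+\sigma(i) = 3i$ and $i-\sigma(i) = -i$ are all permutations. A small illustrative verification:

```python
import math

# sigma(i) = 2i mod (k+1) satisfies the conditions of Theorem thm:perm
# whenever gcd(k+1, 6) = 1 (here n = k+1, checked for all n < 200).
for n in range(2, 200):
    if math.gcd(n, 6) == 1:
        sigma = [(2 * i) % n for i in range(n)]
        assert sorted(sigma) == list(range(n))                          # sigma
        assert sorted((i + sigma[i]) % n for i in range(n)) == list(range(n))
        assert sorted((i - sigma[i]) % n for i in range(n)) == list(range(n))
```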
For $k=4$,
there is only one equivalence class, represented by $(0,2,4,1,3)$, in $T_{k+1}$ satisfying the conditions of Theorem \ref{thm:perm}. For $k=6$, there are two equivalence classes, represented by
$(0,2,4,6,1,3,5)$ and $(0,3,6,2,5,1,4)$. For $k=10$, there are $4$ equivalence classes.
In particular, Theorem \ref{thm:coprime} shows that if $k+1 > 4$ is prime, then there are at least $\frac{k-2}{2}$ equivalence classes in $T_{k+1}$. This is because each $2\leq m \leq k-1$ is coprime with $k+1$, and the permutation generated by $m$ is the inverse of the permutation generated by $m^{-1} \bmod (k+1)$, so these two permutations are equivalent\footnote{For general $k$, see \cite{burger:nqueens:2004} for a formula for the number of such permutations.}. It is possible to have more than $\frac{k-2}{2}$ equivalence classes as there are permutations in $T_{k+1}$ not generated by a lattice.
For $k+1$ coprime with $6$, in the cases $k = 4, 6$ and $10$, all permutations in $T_{k+1}$ are generated by a lattice. For $k = 12$, there are permutations in $T_{k+1}$ that are not generated by a lattice. One such example is shown in Fig. \ref{fig:k12}. Such solutions are referred to as {\em nonlinear} solutions \cite{bell:nqueens:2009}.
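The class counts for $k=4$ and $k=6$ can be reproduced by listing $T_{k+1}$ and closing each element under inversion and addition of constants mod $k+1$; this brute force runs over $(k+1)!$ permutations, so the illustrative check below is limited to small $k$.

```python
import itertools

def in_T(sigma):
    n = len(sigma)
    return (len({(i + s) % n for i, s in enumerate(sigma)}) == n and
            len({(i - s) % n for i, s in enumerate(sigma)}) == n)

def num_classes(n):
    """Equivalence classes of T_n under sigma ~ sigma^{-1} and sigma ~ sigma + m."""
    T = {p for p in itertools.permutations(range(n)) if in_T(p)}
    classes = 0
    while T:
        classes += 1
        orbit, frontier = set(), [next(iter(T))]
        while frontier:
            q = frontier.pop()
            if q in orbit:
                continue
            orbit.add(q)
            inv = [0] * n
            for i, s in enumerate(q):
                inv[s] = i                     # q^{-1}
            frontier.append(tuple(inv))
            frontier.extend(tuple((s + m) % n for s in q) for m in range(1, n))
        T -= orbit
    return classes

assert num_classes(5) == 1 and num_classes(7) == 2   # the cases k = 4 and k = 6
```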
\begin{figure}[htbp]
\centerline{\includegraphics[width=6in]{k12nonlattice.pdf}}
\caption{A constellation for $k=12$ not generated by a lattice corresponding to the permutation $(0,2,4,6,11,9,12,5,3,1,7,10,8)$.}\label{fig:k12}
\end{figure}
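One can check directly that the permutation of Fig. \ref{fig:k12} lies in $T_{13}$ yet is not of the linear form $\sigma(a) = ma+b \pmod{13}$ for any $m$, $b$. A short verification sketch (variable names are ours):

```python
# The claimed nonlinear k = 12 constellation from Fig. 4.
n = 13
sigma = [0, 2, 4, 6, 11, 9, 12, 5, 3, 1, 7, 10, 8]

# Membership in T_13: permutation with distinct diagonals and anti-diagonals.
assert sorted(sigma) == list(range(n))
assert len({(sigma[a] - a) % n for a in range(n)}) == n
assert len({(sigma[a] + a) % n for a in range(n)}) == n

# Nonlinearity: no choice of slope m and offset b reproduces sigma.
is_linear = any(all(sigma[a] == (m * a + b) % n for a in range(n))
                for m in range(n) for b in range(n))
assert not is_linear
print("valid nonlinear solution")
```

Indeed, a linear solution would force $b=\sigma(0)=0$ and $m=\sigma(1)=2$, but then $\sigma(4)$ would be $8$ rather than $11$.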
\section{Conclusions}
We studied the asymptotic behavior of the minimal number of points needed to generate $n$ patterns of length $k$ using a construction based on permutations of
$\{0,1,\cdots, k\}$ with certain properties. We showed that this construction allows us to create patterns where asymptotically most points are part of 4 patterns. This construction is equivalent to the modular $(k+1)$-queens problem, and thus $f(k)=\frac{k}{4}$ for $k+1$ coprime with $6$. If $k+1$ is even or $k+1$ is divisible by $3$, this construction fails to provide such a constellation.
However, results on the modular $n$-queens problem can still provide an upper bound on $f(k)$ which shows that $\lim_{k\rightarrow\infty}\frac{f(k)}{k} = \frac{1}{4}$. Even though these constructions for the modular $n$-queens problem provide the limiting value of $\frac{a_k(n)}{n}$ as $n\rightarrow \infty$, for a fixed $n$ the optimal constellation achieving $a_k(n)$ can be quite different (see for example \url{https://oeis.org/A273916/a273916.png}).
\section{Acknowledgements}
We are indebted to Don Coppersmith for stimulating discussions and for providing his many insights during the preparation of this note.
| {
"timestamp": "2017-07-20T02:01:08",
"yymm": "1608",
"arxiv_id": "1608.07247",
"language": "en",
"url": "https://arxiv.org/abs/1608.07247",
"abstract": "We consider the minimal number of points on a regular grid on the plane that generates $n$ line segments of points of exactly length $k$. We illustrate how this is related to the $n$-queens problem on the toroidal chessboard and show that this number is upper bounded by $kn/3$ and approaches $kn/4$ as $n\\rightarrow\\infty$ when $k+1$ is coprime with $6$ or when $k$ is large.",
"subjects": "Combinatorics (math.CO)",
"title": "Minimal number of points on a grid forming line segments of equal length",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683473173829,
"lm_q2_score": 0.8807970889295664,
"lm_q1q2_score": 0.8698473254361339
} |
https://arxiv.org/abs/1901.07685 | A Reider-type Result for Smooth Projective Toric Surfaces | Let $L$ be an ample line bundle over a smooth projective toric surface $X$. Then $L$ corresponds to a very ample lattice polytope $P$ that encodes many geometric properties of $L$. In this article, by studying $P$, we will give some necessary and sufficient numerical criteria for the adjoint series $|K_X+L|$ to be either nef or (very) ample. | \section{Introduction}
The problem of determining whether a line bundle is nef or (very) ample is an important question in algebraic geometry. The Nakai-Moishezon criterion \cite{Nakai1963, Moishezon1964} states that a Cartier divisor $D$ on a proper scheme $X$ over an algebraically closed field is ample if and only if $D^{\dim(Y)}\cdot Y > 0$ for every closed integral subscheme $Y$ of $X$. For toric varieties, a special form of the criterion holds: if $D\cdot C>0$ for every torus-invariant curve $C\subset X$ then $D$ is ample. Furthermore, if $D\cdot C\ge 0$ for every torus-invariant curve $C\subset X$ then $D$ is globally generated \cite{Laterveer1996, Mavlyutov2000, Mustata2002}. However, the question is more complicated when we consider the adjoint bundle $D+K_X$. Namely, are there numerical conditions for $D\cdot C$ so that $D+K_X$ is globally generated or ample? Fujita conjectured the following:
\begin{conjecture}[\cite{Fujita1985}]\label{Fujita conjecture}
Let $X$ be an $n$-dimensional projective algebraic variety, smooth or with mild singularities, and $D$ an ample divisor on $X$. Then
\begin{enumerate}[(1)]
\item For $t\ge n+1$, $tD+K_X$ is basepoint free.
\item For $t\ge n+2$, $tD+K_X$ is very ample.
\end{enumerate}
\end{conjecture}
The conjecture is true for toric varieties \cite{Fujino2003, Payne2006}. For smooth surfaces, Fujita's conjecture follows from Reider's theorem \cite{Reider1988}.
In this article, we will present a combinatorial proof for a Reider-type result for smooth projective toric surfaces.
\begin{proposition}\label{My Reider variant}
Let $X$ be a smooth projective toric surface not isomorphic to $\mathbb{P}^2$, and let $L$ be an ample line bundle on $X$.
\begin{enumerate}
\item The adjoint series $|K_X+L|$ is not base point free if and only if there exists an effective torus-invariant divisor $D\subset X$ such that
\begin{align*}
D\cdot L=1&\text{ and } D^2=0.
\end{align*}
\item The adjoint series $|K_X+L|$ is not ample if and only if there exists an effective torus-invariant divisor $D\subset X$ such that either
\begin{align*}
D\cdot L=1 &\text { and } D^2=-1 \text{ or } D^2=0 \text{; or}\\
D\cdot L=2&\text{ and } D^2=0\text{; or}\\
D\cdot L=3&\text{ and } D^2=1.
\end{align*}
Furthermore, if $L^2\ge 10$, then $|K_X+L|$ is not ample if and only if there exists an effective torus-invariant divisor $D\subset X$ such that either
\begin{align*}
D\cdot L=1 &\text { and } D^2=-1 \text{ or } D^2=0 \text{; or}\\
D\cdot L=2&\text{ and } D^2=0.
\end{align*}
\end{enumerate}
\end{proposition}
As a convention, in this article we will follow the notation of \cite{Cox2011}. In particular, we will always use $M$ to denote the ambient lattice when there is no confusion.
\subsection*{Acknowledgments}
We would like to thank Milena Hering for suggesting the problem and for her invaluable guidance. We also want to thank Ivan Cheltsov for some of the comments.
\section{Toric Surfaces Reviewed}
Let $A$ be an ample line bundle over a projective toric variety $X$ corresponding to a polytope $P\subset M_{\mathbb{R}}$. Then we have a combinatorial interpretation of the intersection number $A\cdot C$ where $C\subset X$ is any torus-invariant curve as follows.
\begin{lemma}[{\cite[(1.4) and Page 457]{Laterveer1996}}]\label{lattice length}
Let $A$ be an ample line bundle on a projective toric variety $X$ corresponding to a polytope $P$. For a torus invariant curve $C$, let $E$ be the corresponding edge on $P$. Then $A\cdot C$ is equal to the lattice length of $E$, i.e.,
\[A\cdot C=|E\cap M|-1.\]
\end{lemma}
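Concretely, the lattice length in Lemma \ref{lattice length} is just the gcd of the absolute coordinate differences of the edge's endpoints, so $A\cdot C$ can be read off directly from the vertex data of $P$. A minimal sketch (the helper name is ours):

```python
from math import gcd

def lattice_length(u, w):
    """Number of lattice points on the segment from u to w, minus one:
    gcd(|dx|, |dy|), since interior lattice points occur at equal steps."""
    return gcd(abs(w[0] - u[0]), abs(w[1] - u[1]))

# The edge from (0,0) to (4,0) contains (1,0), (2,0), (3,0) plus the
# endpoints, so |E ∩ M| = 5 and the lattice length is 4.
assert lattice_length((0, 0), (4, 0)) == 4
assert lattice_length((1, 2), (3, 1)) == 1  # a primitive edge
assert lattice_length((0, 0), (6, 4)) == 2
```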
For our purpose, we will need the classification of smooth projective toric surfaces: every smooth complete toric surface is obtained from either $\mathbb{P}^2$, $\mathbb{P}^1\times\mathbb{P}^1$, or a Hirzebruch surface $\mathscr{F}_a$, $a\ge 2$, by a finite sequence of blowups (\cite[Theorem 10.4.3]{Cox2011}). Another important fact that we will use is that every ample line bundle on a smooth projective toric surface is also very ample.
\begin{lemma}[{\cite[Theorem 6.1.15]{Cox2011}}]
A line bundle on a smooth complete toric variety is ample if and only if it is very ample.
\end{lemma}
Smooth toric surfaces are interesting objects to work with; partially because of their computability. For example, we have the following lemma.
\begin{lemma}[{\cite[Proposition 10.4.11]{Cox2011}}]\label{K_X x D_i=b_i-2}
Let $u_0,\ldots, u_r$ be ray generators of a smooth complete fan $\Sigma$ in $N_{\mathbb{R}}\cong \mathbb{R}^2$. Let $X=X_{\Sigma}$ be the smooth projective toric surface from $\Sigma$ and $D_i=V(u_i)$ for $0\le i\le r$. Let $K_X$ be the canonical divisor $K_X=-\sum_{i=0}^rD_i$. Then
\[K_X\cdot D_i=b_i-2,\]
where the $b_1,\ldots,b_{r-1}$ are integers such that $u_{i-1}+u_{i+1}=b_iu_i$ for all $0\le i\le r$, where $u_{-1}=u_r$ and $u_{r+1}=u_0$.
\end{lemma}
The following corollary follows directly from \cite[Lemma 10.4.1]{Cox2011} and Lemma \ref{K_X x D_i=b_i-2}.
\begin{corollary}\label{(L+KX)D=LD-D^2-2}
Let $u_0,\ldots, u_r$ be ray generators of a smooth complete fan $\Sigma$ in $N_{\mathbb{R}}\cong \mathbb{R}^2$. Let $X=X_{\Sigma}$ be the smooth projective toric surface from $\Sigma$ and $D_i=V(u_i)$ for $0\le i\le r$. Let $K_X$ be the canonical divisor $K_X=-\sum_{i=0}^rD_i$. Then for $0\le i\le r$,
\[(L+K_X)\cdot D_i=L\cdot D_i-D_i^2-2.\]
\end{corollary}
We also know that the blowup of a toric variety corresponds to a subdivision of fan. Thus the number of generating rays of the fan corresponding to a toric surface increases after a blowup (\cite[Proposition 3.3.15]{Cox2011}).
\begin{example}\label{Nef cone if Hirzebruch surface}
Consider the Hirzebruch surface $\mathscr{F}_r=\mathbb{P}(\mathcal{O}_{\mathbb{P}^1}\oplus\mathcal{O}_{\mathbb{P}^1}(r))$, $r\ge 1$, whose fan $\Sigma$ is given by the following figure.
\begin{figure}[H]\label{Hirzebruch fan}
\begin{center}
\begin{tikzpicture}[scale=1]
\coordinate (5) at (0,0);
\coordinate (6) at (2,0);
\coordinate (7) at (0,2);
\coordinate (8) at (-1,2);
\coordinate (9) at (0,-2);
\fill[pattern=dots, opacity=.5] (6)--(7)--(5)--(6);
\fill[pattern=checkerboard light gray, fill opacity=0.5] (5)--(8)--(9)--(5);
\fill[pattern=crosshatch dots gray, fill opacity=.5] (5)--(9)--(6)--(5);
\fill[fill=lightgray, fill opacity=.5] (5)--(7)--(8)--(5);
\node at (.6,.6) {$\sigma_1$};
\node at (.6,-.6) {$\sigma_2$};
\node at (-.25,0) {$\sigma_3$};
\node at (-.3,1.5) {$\sigma_4$};
\node [left] at (-.5,1) {$(-1,r)$};
\draw[thick,->] (5)--(6);
\draw[thick,->] (5)--(7);
\draw[thick,->] (5)--(8);
\draw[thick,->] (5)--(9);
\filldraw (0,0) circle (1pt);
\filldraw (-.5,1) circle (1pt);
\end{tikzpicture}
\caption{The Hirzebruch fan}
\end{center}
\end{figure}
The ray generators of $\Sigma$ are $v_1=(1,0)$, $v_2=(0,1)$, $v_3=(-1,r)$, and $v_4=(0,-1)$. Let the associated divisors be $D_1$, $D_2$, $D_3$, and $D_4$, respectively. By \cite[Proposition 4.1.2]{Cox2011},
\begin{align*}
0\sim \divisor(\chi^{e_1})&=\sum_{i=1}^4\langle e_1,v_i\rangle D_i= D_1-D_3\\
0\sim \divisor(\chi^{e_2})&=\sum_{i=1}^4\langle e_2,v_i\rangle D_i=D_2+rD_3-D_4.
\end{align*}
Thus $D_3\sim D_1$, $D_4\sim D_2+rD_3$, and
\[\Pic(\mathscr{F}_r)\simeq \left\{aD_3+bD_4\hspace{1mm}\middle\vert\hspace{1mm} a,b\in\mathbb{Z}\right\}.\]
The maximal cones of $\Sigma$ are $\sigma_1$, $\sigma_2$, $\sigma_3$ and $\sigma_4$ as in Figure \ref{Hirzebruch fan}. Let $D=aD_3+bD_4$. We compute the $m_{\sigma_i}$ to be
\[m_1=(-a,0),\hspace{2mm} m_2=(-a,b),\hspace{2mm} m_3=(rb,b),\hspace{2mm} m_4=(0,0).\]
Then by \cite[Lemma 6.1.13]{Cox2011}, $D$ is very ample if and only if $a,b>0$. The nef cone of $\mathscr{F}_r$ is given by
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=1.5]
\draw [thick,->](0,0) --(0,2);
\draw [thick,->](0,0) --(2,0);
\draw [dashed] (0,0)--(0,-1);
\draw [dashed] (0,0)--(-1,0);
\draw [fill] (0,0) circle [radius=0.025];
\draw [fill] (1,0) circle [radius=0.025];
\draw [fill] (0,1) circle [radius=0.025];
\node [left] at (0,1) {$[D_4]$};
\node [below] at (1,0) {$[D_3]$};
\fill [pattern=dots, fill opacity=0.2] (0,0)--(0,2)--(2,2)--(2,0)--(0,0);
\end{tikzpicture}
\caption{The nef cone of $\mathscr{F}_r$}
\end{center}
\end{figure}
By \cite[Lemma 10.4.1]{Cox2011}, we have $D_1^2=D_3^2=0$, $D_2^2=-r$, $D_4^2=r$, $D_1\cdot D_2=D_2\cdot D_3=D_3\cdot D_4=D_4\cdot D_1=1$, and $D_1\cdot D_3=D_2\cdot D_4=0$.
\end{example}
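The intersection numbers above are easy to machine-check: writing divisor classes in the basis $(D_3, D_4)$ with $D_3^2=0$, $D_4^2=r$, $D_3\cdot D_4=1$, and using the linear equivalences $D_1\sim D_3$ and $D_2\sim D_4-rD_3$ coming from the ray data $v_3=(-1,r)$, all the stated products follow. A sketch of the check (helper names are ours):

```python
import numpy as np

def check_hirzebruch(r):
    # Intersection form of the Hirzebruch surface F_r in the basis (D_3, D_4).
    Q = np.array([[0, 1], [1, r]])
    dot = lambda u, v: u @ Q @ v
    D3, D4 = np.array([1, 0]), np.array([0, 1])
    D1, D2 = D3, D4 - r * D3          # D_1 ~ D_3,  D_2 ~ D_4 - r D_3

    assert dot(D1, D1) == 0 and dot(D3, D3) == 0
    assert dot(D2, D2) == -r and dot(D4, D4) == r
    assert dot(D1, D2) == dot(D2, D3) == dot(D3, D4) == dot(D4, D1) == 1
    assert dot(D1, D3) == dot(D2, D4) == 0

for r in range(1, 6):
    check_hirzebruch(r)
```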
Finally, we will make use of the Hodge Index Theorem:
\begin{lemma}[{\cite[Theorem V.1.9]{Hartshorne1977}}]\label{Hodge index theorem}
Let $D$ be an ample divisor on a smooth projective surface $S$. If $E$ is a divisor such that $D\cdot E=0$, then $E^2\le 0$. The equality occurs if and only if $E$ is numerically equivalent to $0$.
\end{lemma}
\begin{corollary}[{\cite[Exercise V.1.9]{Hartshorne1977}}]\label{Hodge inequality}
Let $D$ be an ample divisor on a smooth projective surface $S$ and $E$ an arbitrary divisor. Then
\[(D\cdot E)^2\ge D^2 E^2.\]
\end{corollary}
\begin{proof}
Since $D$ is ample, $D^2>0$. Let $H=(D^2)E-(D\cdot E)D$. We have
\[D\cdot H=(D^2)E\cdot D-(D\cdot E)D^2=0.\]
Then by Lemma \ref{Hodge index theorem}, we must have $H^2\le 0$. In other words,
\begin{align*}
0\ge &\left((D^2)E-(D\cdot E)D\right)\cdot \left((D^2)E-(D\cdot E)D\right)\\
= &D^4E^2-2(D\cdot E)^2(D^2)+D^2(D\cdot E)^2\\
= &D^2\left(D^2E^2-(D\cdot E)^2\right).
\end{align*}
Since $D^2>0$, it follows that $(D\cdot E)^2\ge D^2E^2$.
\end{proof}
\section{Toric Surfaces and Lattice Polygons}
In this section, we review and prove some lemmas on lattice polygons that we will use in the proof of Proposition \ref{My Reider variant}.
\begin{lemma}[{\cite[Lemma 1]{Arkinstall1980}}]\label{pigeonhole}
Every lattice polygon with at least 5 edges has at least one interior lattice point.
\end{lemma}
\begin{lemma}\label{second pigeonhole}
Let $v_1,\ldots,v_5$ be lattice points such that no three points are collinear. Then there exists a lattice point in $\conv(v_1,\ldots,v_5)\backslash\{v_1,\ldots,v_5\}$.
\end{lemma}
\begin{proof}
Let the coordinates of $v_i$ be $(x_i,y_i)$ for $i=1,\ldots, 5$. By the pigeonhole principle, there must be $i\neq j$ such that $x_i\equiv x_j\pmod 2$ and $y_i\equiv y_j\pmod 2$. Then the midpoint $m$ of $v_iv_j$ is a lattice point. Since no three points in $\{v_1,\ldots,v_5\}$ are collinear, it follows that $m\in \conv(v_1,\ldots,v_5)\backslash\{v_1,\ldots,v_5\}$.
\end{proof}
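The proof is constructive: among any five lattice points there are only $2\times 2$ parity classes for the coordinates, so two points must agree in both parities, and their midpoint is then a lattice point. A small sketch (the helper is ours):

```python
from itertools import combinations

def lattice_midpoint(points):
    """Return the midpoint of the first pair of points whose coordinates
    agree in parity; with at least 5 points such a pair always exists
    (pigeonhole over the 2 x 2 parity classes)."""
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if (x1 - x2) % 2 == 0 and (y1 - y2) % 2 == 0:
            return ((x1 + x2) // 2, (y1 + y2) // 2)

pts = [(0, 0), (3, 1), (1, 2), (4, 4), (2, 5)]
m = lattice_midpoint(pts)
assert m == (2, 2)   # midpoint of (0,0) and (4,4)
assert m not in pts  # a genuinely new lattice point
```

The no-three-collinear hypothesis of Lemma \ref{second pigeonhole} is what guarantees that the midpoint is not one of the original five points.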
As a consequence, we obtain:
\begin{lemma}\label{Volume of polygon >=9}
Let $P$ be a lattice polygon that has at least $5$ vertices and assume that one of its edges has lattice length $4$. Then $\Vol(P)\ge 9$.
\end{lemma}
\begin{proof}
It suffices to prove the lemma when $P$ is a lattice pentagon. Let $P=\conv(v_1,\ldots,v_5)$, where $v_1,\ldots,v_5$ are ordered clockwise in $M$. Without loss of generality suppose that the lattice length of the edge joining $v_1$ and $v_5$ is $4$; i.e., there are $3$ other lattice points $y_1$, $y_2$, $y_3$ in between $v_1$ and $v_5$.
\begin{figure}[H]\label{pentagon}
\begin{center}
\begin{tikzpicture}[scale=1.2]
\draw [help lines, opacity=.5] (0,0) grid (4,2);
\coordinate (1) at (0,0);
\coordinate (2) at (0,1);
\coordinate (3) at (1,2);
\coordinate (4) at (3,1);
\coordinate (5) at (4,0);
\coordinate (6) at (1,1);
\coordinate (7) at (1,0);
\coordinate (8) at (2,0);
\coordinate (9) at (3,0);
\coordinate (10) at (2,1);
\fill [color=gray, opacity=.5] (1)--(2)--(3)--(4)--(7)--(1);
\filldraw [pattern=dots, opacity=.5] (3)--(6)--(9)--(5)--(4)--(3);
\draw [thick] (1)--(2)--(3)--(4)--(5)--(1);
\draw [thick] (4)--(7);
\node [below] at (1) {$v_1$};
\node [left] at (2) {$v_2$};
\node [above] at (3) {$v_3$};
\node [right] at (4) {$v_4$};
\node [below] at (5) {$v_5$};
\node [below right] at (6) {$x$};
\node [below right] at (10) {$y$};
\node [below] at (7) {$y_1$};
\node [below] at (8) {$y_2$};
\node [below] at (9) {$y_3$};
\filldraw (1) circle (1pt);
\filldraw (2) circle (1pt);
\filldraw (3) circle (1pt);
\filldraw (4) circle (1pt);
\filldraw (5) circle (1pt);
\filldraw (6) circle (1pt);
\filldraw (10) circle (1pt);
\end{tikzpicture}
\caption{A lattice pentagon that has an edge whose lattice length is $4$}
\end{center}
\end{figure}
Consider the polytope $Q=\conv(v_1,v_2,v_3,v_4,y_1)$. Then by Lemma \ref{pigeonhole}, there must be a lattice point $x$ in the interior of $Q$. Then $x$ lies in at most one of the segments $v_1v_3$, $y_1v_3$, $y_2v_3$, $y_3v_3$, $v_5v_3$. If $x$ lies in $v_1v_3$ or if $x$ does not lie in any mentioned segments, consider the set of $5$ points $\{x,v_3,v_4,v_5,y_1\}$. By Lemma \ref{second pigeonhole}, there must be another lattice point $y$ in $P$ that is not the same as the points listed before. If $y\in \partial P$, then $|\partial P\cap M|\ge 9$ and $|P^0\cap M|\ge 1$. By Pick's theorem \cite{Pick1899},
\[\Vol(P)=|\partial P\cap M|+2|P^0\cap M|-2\ge 9.\]
If $y\in P^0$, then $|\partial P\cap M|\ge 8$ and $|P^0\cap M|\ge 2$. Again, by Pick's theorem,
\[\Vol(P)=|\partial P\cap M|+2|P^0\cap M|-2\ge 10.\]
If $x$ lies in $v_3y_1$ or $v_3y_2$, then we obtain such a point $y$ from $\conv(x,v_3,v_4,v_5,y_3)$. If $x$ lies in $v_3y_3$ or $v_3v_5$, then we obtain $y$ from $\conv(v_1,v_2,v_3,x,y_2)$. The same argument applies, and the lemma is proved.
\end{proof}
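The pentagon drawn above is itself a worked instance of this counting: with vertices $(0,0)$, $(0,1)$, $(1,2)$, $(3,1)$, $(4,0)$ one finds $|\partial P\cap M|=8$, $|P^0\cap M|=2$ (the points $x$ and $y$), and $\Vol(P)=10\ge 9$, consistent with Lemma \ref{Volume of polygon >=9}. A sketch verifying this via the shoelace formula and Pick's theorem (helper names are ours; $\Vol$ denotes the normalized volume, i.e., twice the Euclidean area, as in the Pick formula used in the proof):

```python
from math import gcd

def normalized_volume(verts):
    """Twice the area of a lattice polygon, via the shoelace formula."""
    n = len(verts)
    s = sum(verts[i][0] * verts[(i + 1) % n][1]
            - verts[(i + 1) % n][0] * verts[i][1] for i in range(n))
    return abs(s)

def boundary_points(verts):
    """|boundary of P ∩ M|: each edge contributes gcd(|dx|, |dy|) points."""
    n = len(verts)
    return sum(gcd(abs(verts[(i + 1) % n][0] - verts[i][0]),
                   abs(verts[(i + 1) % n][1] - verts[i][1])) for i in range(n))

# The pentagon of the figure above (v1, ..., v5, listed clockwise).
P = [(0, 0), (0, 1), (1, 2), (3, 1), (4, 0)]
vol, B = normalized_volume(P), boundary_points(P)
I = (vol - B + 2) // 2            # Pick's theorem: vol = B + 2I - 2
assert (vol, B, I) == (10, 8, 2)  # Vol(P) = 10 >= 9, as the lemma predicts
```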
We will also need the following lemmas for the proof of Proposition \ref{My Reider variant}.
\begin{lemma}\label{L^2>=LD+4}
Let $L$ be an ample line bundle over a smooth projective toric surface $X$. Let $\Sigma$ be the fan of $X$. Suppose that $\Sigma$ has $n\ge 5$ rays $\rho_1,\ldots, \rho_n$. Then for any integer $1\le i\le n$,
\[L^2\ge L\cdot D_{\rho_i}+4.\]
\end{lemma}
\begin{proof}
Let $P$ be the polytope associated to $L$. Since $L$ is ample, $L\cdot D_{\rho_i}\ge 1$ for all $i$. By Pick's theorem \cite{Pick1899},
\begin{equation}\label{Pick}
\vol(P)=\frac{L^2}{2}=\frac{|\partial P\cap M|}{2}+|P^0\cap M|-1,
\end{equation}
where $\partial P$ and $P^0$ are the sets of all boundary points and interior points of $P$, respectively. By Lemma \ref{lattice length},
\begin{equation}\label{perimeter of polytope}
|\partial P\cap M|=\sum_{j=1}^nL\cdot D_{\rho_j}.
\end{equation}
Hence, combining \eqref{Pick} and \eqref{perimeter of polytope} gives
\[L^2=\sum_{j=1}^n L\cdot D_{\rho_j}+2|P^0\cap M|-2
\ge L\cdot D_{\rho_i}+(n-1)+2|P^0\cap M|-2.\]
Since $n\ge 5$, by Lemma \ref{pigeonhole}, $|P^0\cap M|\ge 1$. Therefore,
\[L^2\ge L\cdot D_{\rho_i}+4.\]
\end{proof}
\section{A Reider-type Result for Toric Surfaces}
We devote this section to the proof of Proposition \ref{My Reider variant}. First of all, it holds for $X\cong \mathbb{P}^1\times \mathbb{P}^1$.
\begin{lemma}\label{P1xP1}
Proposition \ref{My Reider variant} holds for $X\cong\mathbb{P}^1\times\mathbb{P}^1$.
\end{lemma}
\begin{proof}
Let $\Sigma$ be the fan of $X=\mathbb{P}^1\times \mathbb{P}^1$ as follows.
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=2]
\fill[pattern=dots, opacity=.5] (0,0)--(1,0)--(1,1)--(0,1)--(0,0);
\fill[pattern=checkerboard light gray, fill opacity=0.5] (0,0)--(0,1)--(-1,1)--(-1,0)--(0,0);
\fill[pattern=crosshatch dots gray, fill opacity=.5] (0,0)--(-1,0)--(-1,-1)--(0,-1)--(0,0);
\fill[fill=lightgray, fill opacity=.5] (0,0)--(0,-1)--(1,-1)--(1,0)--(0,0);
\draw[thick,->] (0,0)--(0,1);
\draw[thick,->] (0,0)--(1,0);
\draw[thick,->] (0,0)--(-1,0);
\draw[thick,->] (0,0)--(0,-1);
\node at (.5,.5) {$\sigma_1$};
\node at (.5,-.5) {$\sigma_2$};
\node at (-.5,-.5) {$\sigma_3$};
\node at (-.5,.5) {$\sigma_4$};
\end{tikzpicture}
\caption{The fan of $\mathbb{P}^1\times\mathbb{P}^1$}
\end{center}
\end{figure}
By \cite[Lemma 10.4.1]{Cox2011}, $D_{\rho}^2=0$ for all $\rho\in\Sigma(1)$. Thus, we need to show that there exists $\rho$ such that $L\cdot D_{\rho}=1$ in the first part and $L\cdot D_{\rho}\le 2$ in the second part.
For any ample bundle $L$ on $X$, if $L+K_X$ is not basepoint free, then there exists $\rho\in \Sigma(1)$ such that $(L+K_X)\cdot D_{\rho}< 0$. Then by Lemma \ref{K_X x D_i=b_i-2},
\[(L+K_X)\cdot D_{\rho}=L\cdot D_{\rho}-D_{\rho}^2-2<0.\]
This implies $0<L\cdot D_{\rho}< D_{\rho}^2+2=2$, so that $L\cdot D_{\rho}=1$.
Now suppose that $L+K_X$ is not ample, so there exists $\rho\in\Sigma(1)$ with $(L+K_X)\cdot D_{\rho}\le 0$. Then by Lemma \ref{K_X x D_i=b_i-2},
\[(L+K_X)\cdot D_{\rho}=L\cdot D_{\rho}-D_{\rho}^2-2\le 0.\]
This implies $1\le L\cdot D_{\rho}\le D_{\rho}^2+2=2$. Hence, either $L\cdot D_{\rho}=1$ and $D_{\rho}^2=0$ or $L\cdot D_{\rho}=2$ and $D_{\rho}^2=0$. The conclusion follows.
\end{proof}
Secondly, we show that Proposition \ref{My Reider variant} holds for Hirzebruch surfaces.
\begin{lemma}\label{Hirzebruch}
Proposition \ref{My Reider variant} holds for $X\cong \mathscr{F}_r$, $r\ge 1$.
\end{lemma}
\begin{proof}
Consider the Hirzebruch surface $X=\mathscr{F}_r=\mathbb{P}(\mathcal{O}_{\mathbb{P}^1}\oplus\mathcal{O}_{\mathbb{P}^1}(r))$, $r\ge 1$ as in Example \ref{Nef cone if Hirzebruch surface}. We have
\[\Pic(\mathscr{F}_r)\simeq \left\{aD_3+bD_4\hspace{1mm}\middle\vert\hspace{1mm} a,b\in\mathbb{Z}\right\}.\]
The canonical divisor of $X$ is given by
\[K_X=-(D_1+D_2+D_3+D_4)\sim -(2-r)D_3-2D_4.\]
Recall that $D_1^2=D_3^2=0$, $D_2^2=-r$, $D_4^2=r$, $D_1\cdot D_2=D_2\cdot D_3=D_3\cdot D_4=D_4\cdot D_1=1$, and $D_1\cdot D_3=D_2\cdot D_4=0$ (cf. \cite[Lemma 10.4.1]{Cox2011}).
Let $L$ be an ample line bundle over $\mathscr{F}_r$. Then $L^2>0$. We have two cases as follows.
\begin{itemize}
\item If $r=1$ then $K_X=-D_3-2D_4$. For $L$ to be ample while $L+K_X$ is not nef, $L$ has to be of the form $L\sim cD_3+D_4$, $c>0$. In this case, take $D=D_3$, then
\[L\cdot D=1\text{ and } D^2=0.\]
For $L$ to be ample while $L+K_X$ is not ample, $L$ has to be of the form $L\sim D_3+cD_4$, or $L\sim cD_3+D_4$, or $L\sim cD_3+2D_4$, where $c\ge 1$.
\begin{enumerate}
\item If $L\sim D_3+cD_4$, take $D=D_2$, then
\[L\cdot D=1\text{ and } D^2=-1.\]
\item If $L\sim cD_3+D_4$, take $D=D_3$, then
\[L\cdot D=1\text{ and } D^2=0.\]
\item If $L\sim cD_3+2D_4$, take $D=D_3$, then
\[L\cdot D=2\text{ and } D^2=0.\]
\end{enumerate}
\item $r\ge 2$:
For $L$ to be ample while $K_X+L$ is not nef, $L$ must have the form
\[L\sim cD_3+D_4 \hspace{5mm} (c\ge 1).\]
Take $D=D_3$, then $L\cdot D=1\text{ and } D^2=0$.
For $L$ to be ample while $K_X+L$ is not ample, $L$ must have the form $L\sim cD_3+D_4$ or $L\sim cD_3+2D_4$, where $c\ge 1$.
\begin{enumerate}
\item If $L\sim cD_3+D_4$, take $D=D_3$, then
\[L\cdot D=1\text{ and } D^2=0.\]
\item If $L\sim cD_3+2D_4$, take $D=D_3$, then
\[L\cdot D=2\text{ and } D^2=0.\]
\end{enumerate}
\end{itemize}
\end{proof}
Finally, we give the proof of the remaining case of Proposition \ref{My Reider variant}, when $X$ is an arbitrary blowup of $\mathbb{P}^1\times\mathbb{P}^1$ or a Hirzebruch surface.
\begin{proof}[Proof of Proposition \ref{My Reider variant}]\label{proof of Reider}
The sufficiency trivially holds by Corollary \ref{(L+KX)D=LD-D^2-2}. We now prove the necessity.
By the classification of smooth projective toric surfaces and the proofs for the cases of $\mathbb{P}^1\times\mathbb{P}^1$ (Lemma \ref{P1xP1}) and $\mathscr{F}_a$ (Lemma \ref{Hirzebruch}), it suffices to prove the proposition in the case that the fan $\Sigma$ of $X$ has at least $5$ rays.
We first prove part 1. Suppose that $K_X+L$ is not basepoint free. Then there exists $\rho\in \Sigma(1)$ such that $(K_X+L)\cdot D_{\rho}<0$. Take $D=D_{\rho}$. By Lemma \ref{K_X x D_i=b_i-2},
\[(L+K_X)\cdot D=L\cdot D-D^2-2<0.\]
This implies $L\cdot D< D^2+2$, so since $L$ is ample,
\begin{equation}\label{eq1}
0\le L\cdot D-1\le D^2.
\end{equation}
\begin{itemize}
\item If $D^2\le -1$, then $L\cdot D\le 0$, which is a contradiction to the hypothesis that $L$ is ample.
\item If $D^2=0$, either $D\cdot L=0$ or $D\cdot L=1$. But $D\cdot L>0$ since $L$ is ample. Thus $D\cdot L=1$. The proposition holds for this case.
\end{itemize}
It remains to show that $D^2$ cannot be positive.
Since the fan of $X$ contains at least 5 rays, by Lemma \ref{L^2>=LD+4},
\begin{equation}\label{eq2}
L^2\ge L\cdot D+4.
\end{equation}
In addition, it follows from Corollary \ref{Hodge inequality} that
\begin{equation}\label{eq4}
(L\cdot D)^2\ge L^2\cdot D^2.
\end{equation}
Combining \eqref{eq4} with \eqref{eq1} and \eqref{eq2} yields
\[(L\cdot D)^2\ge (L\cdot D-1)(L\cdot D+4)=(L\cdot D)^2+3L\cdot D-4.\]
This implies $L\cdot D\le 1$. The only possibility is $L\cdot D=1$. Then by \eqref{eq4}, $D^2=L^2=1$, which is impossible since $L^2\ge L\cdot D+4=5$. Therefore, it cannot be the case that $D^2>0$.
We now prove the second part of the proposition. Suppose that $K_X+L$ is not ample, so there exists $\rho\in \Sigma(1)$ such that $(K_X+L)\cdot D_{\rho}\le 0$. Let $D=D_{\rho}$. By Corollary \ref{(L+KX)D=LD-D^2-2},
\begin{align*}
(L+K_X)\cdot D=L\cdot D-D^2-2\le 0.
\end{align*}
This implies $L\cdot D\le D^2+2$; hence,
\begin{equation}\label{eq5}
1\le L\cdot D\le D^2+2.
\end{equation}
\begin{itemize}
\item If $D^2=-1$, then $1\le L\cdot D\le 1$, so $L\cdot D=1$.
\item If $D^2=0$, either $D\cdot L=1$ or $D\cdot L=2$.
\end{itemize}
Now we consider the case that $D^2\ge 1$. Since the fan of $X$ contains at least 5 rays, by Lemma \ref{L^2>=LD+4},
\begin{equation}\label{eq6}
L^2\ge L\cdot D+4.
\end{equation}
By Corollary \ref{Hodge inequality},
\begin{equation}\label{eq7}
(L\cdot D)^2\ge L^2\cdot D^2.
\end{equation}
Since $D^2\ge 1$, by \eqref{eq6} we have $L^2\ge 5$. Thus by \eqref{eq7}, $(L\cdot D)^2\ge L^2\cdot D^2\ge 5$, so $L\cdot D>2$. It follows that $L\cdot D\ge 3$, and hence $L\cdot D-2\ge 1$. Combining this inequality with \eqref{eq5} and \eqref{eq6} yields
\[(L\cdot D)^2\ge (L\cdot D-2)(L\cdot D+4)=(L\cdot D)^2+2L\cdot D-8.\]
This implies $L\cdot D\le 4$. The only possibilities are $L\cdot D=3$ or $L\cdot D=4$.
\begin{itemize}
\item If $D^2=1$ then $L\cdot D\le 3$ by \eqref{eq5}. Since $L\cdot D$ can only be either $3$ or $4$, $L\cdot D=3$ in this case. Furthermore, suppose that $L^2\ge 10$. If $L\cdot D=3$ and $D^2=1$ then $9=(L\cdot D)^2< 10\le L^2\cdot D^2$, a contradiction to \eqref{eq7}.
\item Now assume that $D^2\ge 2$. If $L\cdot D=3$, then $L^2\ge 7$ by \eqref{eq6}, and $L^2\cdot D^2\ge 7\cdot 2=14>9=(L\cdot D)^2$, a contradiction to \eqref{eq7}. Now assume that $L\cdot D=4$. Then the polygon $P_L$ associated to $L$ has at least $5$ vertices and one of its edges has lattice length $4$ by Lemma \ref{lattice length}. Hence, $L^2\ge 9$ by Lemma \ref{Volume of polygon >=9}. It follows that $16=(L\cdot D)^2< 18\le L^2\cdot D^2$, a contradiction to \eqref{eq7}.
\end{itemize}
The proposition follows.
\end{proof}
\section{Some Applications}
The following corollary gives an affirmative answer to a stronger form of Fujita's conjecture (Conjecture \ref{Fujita conjecture}) in the case of smooth complete toric surfaces. Note that for $n$-dimensional toric varieties, Fujita's conjecture is in fact a corollary of \cite[Corollary 0.2]{Fujino2003} and \cite[Theorem 1]{Payne2006}.
\begin{corollary}[{\cite{Fujino2003, Payne2006}}]
Let $X$ be a smooth complete toric surface not isomorphic to $\mathbb{P}^2$. Let $L$ be an ample line bundle on $X$ such that $L\cdot C\ge 2$ for every torus-invariant curve $C\subset X$. Then $\mathcal{O}_X(K_X+L)$ is globally generated. If $L^2\ge 10$ and $L\cdot C\ge 3$ for every torus-invariant curve $C\subset X$, then $\mathcal{O}_X(K_X+L)$ is very ample.
\end{corollary}
\begin{proof}
Suppose that $\mathcal{O}_X(K_X+L)$ is not globally generated. By Proposition \ref{My Reider variant}, there exists a torus-invariant curve $C$ such that $L\cdot C=1$, contradicting $L\cdot C\ge 2$. Similarly, if $L^2\ge 10$ and $\mathcal{O}_X(K_X+L)$ is not very ample, then it is not ample, and the second part of Proposition \ref{My Reider variant} yields a torus-invariant curve $C$ with $L\cdot C\le 2$, contradicting $L\cdot C\ge 3$.
\end{proof}
As a corollary, we have a stronger form of \cite[Corollary 2.7]{Lazarsfeld1994} for smooth toric surfaces as follows.
\begin{corollary}
If $A$ is an ample line bundle on a smooth complete toric surface $X$ not isomorphic to $\mathbb{P}^2$, then $|K_X+2A|$ is nef, and $|K_X+4A|$ is very ample.
\end{corollary}
\begin{proof}
Take $L=2A$; then for any torus-invariant curve $C\subset X$, $L\cdot C=2A\cdot C\ge 2$. By Proposition \ref{My Reider variant}, $|K_X+2A|$ is nef. Similarly, take $L'=4A$; then $(L')^2=16A^2\ge 16>10$ and $L'\cdot C=4A\cdot C\ge 4$. By Proposition \ref{My Reider variant}, $|K_X+4A|$ is very ample.
\end{proof}
\begin{remark}
It would be interesting to see if we can apply the classification in Proposition \ref{My Reider variant} to the study of Iskovskikh-Shokurov conjecture \cite{Iskovskikh2005} for conic bundles over smooth toric surfaces.
\end{remark}
| {
"timestamp": "2019-01-24T02:06:09",
"yymm": "1901",
"arxiv_id": "1901.07685",
"language": "en",
"url": "https://arxiv.org/abs/1901.07685",
"abstract": "Let $L$ be an ample line bundle over a smooth projective toric surface $X$. Then $L$ corresponds to a very ample lattice polytope $P$ that encodes many geometric properties of $L$. In this article, by studying $P$, we will give some necessary and sufficient numerical criteria for the adjoint series $|K_X+L|$ to be either nef or (very) ample.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "A Reider-type Result for Smooth Projective Toric Surfaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9773707973303964,
"lm_q2_score": 0.8887587979121383,
"lm_q1q2_score": 0.8686468949497913
} |
https://arxiv.org/abs/2111.03191 | A note on using the mass matrix as a preconditioner for the Poisson equation | We show that the mass matrix derived from finite elements can be effectively used as a preconditioner for iteratively solving the linear system arising from finite-difference discretization of the Poisson equation, using the conjugate gradient method. We derive analytically the condition number of the preconditioned operator. Theoretical analysis shows that the ratio of the condition number of the Laplacian to the preconditioned operator is $8/3$ in one dimension, $9/2$ in two dimensions, and $2^9/3^4 \approx 6.3$ in three dimensions. From this it follows that the expected iteration count for achieving a fixed reduction of the norm of the residual is smaller than a half of the number of the iterations of unpreconditioned CG in 2D and 3D. The scheme is easy to implement, and numerical experiments show its efficiency. | \section{Introduction}
\label{sec:intro}
Consider a standard finite difference discretization of the Poisson equation in one, two and three dimensions:
\begin{equation}
-\Delta u = f
\label{eq:poisson}
\end{equation}
on a simple domain $\Omega$, e.g., the unit interval, square or cube respectively, and subject to simple boundary conditions such as Dirichlet.
Suppose we discretize the problem on a uniform mesh whose size is $h = \frac{1}{n+1}$, where $n$ is the number of meshpoints in a single direction of the domain.
The computational stencil for the Laplacian is given by
\begin{equation}\label{eq:Laplace-stencil-1d}
A_1= \frac{1}{h^2} \begin{bmatrix} -1 & 2 & -1 \end{bmatrix},
\end{equation}
\begin{equation}\label{eq:Laplace-stencil-2d}
A_2=\frac{1}{h^2}
\begin{bmatrix}
& -1 & \\
-1 & 4 & -1 \\
& -1 &
\end{bmatrix},
\end{equation}
and
\begin{equation}\label{eq:Laplace-stencil-3d}
A_3=\frac{1}{h^2}
\begin{bmatrix}
\begin{bmatrix}
- 1
\end{bmatrix}
\quad
\begin{bmatrix}
& -1 & \\
-1 & 6 & -1 \\
& -1 &
\end{bmatrix}\quad
\begin{bmatrix}
-1
\end{bmatrix}
\end{bmatrix}.
\end{equation}
For large problems in two and three dimensions we are interested in iterative methods, and specifically, in solving the resulting linear system by the Conjugate Gradient (CG) method.
In our recent work \cite{CH2021addVanka}, we showed that in one dimension and in the context of multigrid, an element-wise Vanka smoother is equivalent to the scaled mass operator obtained from the linear finite element method, and in two dimensions, the
element-wise Vanka smoother is equivalent to the scaled mass operator discretized by the bilinear finite element method plus a scaled identity operator. While the context of that work is different, this motivated us to ask whether the mass matrix obtained from the finite element method can be utilized as a preconditioner for the Laplacian. Here, we mean that preconditioning amounts to multiplying by the mass matrix; no inversion is involved. Such a possibility seems attractive given the ease of multiplying the Laplacian by the mass matrix, which is sparse and well conditioned. In this short note we provide analytical and numerical evidence that using the mass matrix in this manner at least doubles the convergence speed of CG in 2D and 3D at a modest computational cost.
In Section \ref{sec:convergence} we provide analytical observations on the condition number and the spectral distribution of the Laplacian scaled by the mass matrix, in comparison with the Laplacian. In Section \ref{sec:numerical} we validate our analysis experimentally. Brief concluding remarks are given in Section \ref{sec:conc}.
\section{Convergence analysis}
\label{sec:convergence}
The stencil of the mass matrix in 1D using linear finite elements is given by
\begin{equation*}
M= \frac{h}{6} \begin{bmatrix}
1 & 4 & 1
\end{bmatrix}.
\end{equation*}
In the context of solving the Poisson equation~\eqref{eq:poisson}, we consider the scaled mass matrix as a preconditioner for the Laplacian operator defined in \eqref{eq:Laplace-stencil-1d}, \eqref{eq:Laplace-stencil-2d} and \eqref{eq:Laplace-stencil-3d}, given by
\begin{equation*}
M_1= h M, \quad M_2= M \otimes M, \quad M_3= h^{-1} M \otimes M\otimes M.
\end{equation*}
A well-known convergence bound for CG is determined by the condition number of the coefficient matrix, so we study the condition numbers of $A_d$ and of the preconditioned operator $M_d A_d$. We hasten to add that formally one would need to consider a symmetric positive definite similarity transformation of the latter, $M_d^{1/2} A_d M_d^{1/2} $, but the spectrum and the condition number are not affected by that transformation.
The following results are straightforward and/or well known, and are provided without a proof.
\begin{lemma}\cite{MR1807961}
The eigenvalues of $A_d, d=1,2,3,$ are given by
\begin{equation*}
\lambda(A_d) = \frac{2}{h^2} \sum_{j=1}^d \left (1-\cos(\pi h k_j) \right), \quad k_j \in \{ 1, 2,\ldots, n\}.
\end{equation*}
\end{lemma}
\begin{lemma}
The eigenvalues of $M_d, d=1,2,3,$ are given by
\begin{equation*}
\lambda(M_d) = \frac{h^2}{3^d} \prod_{j=1}^d \left(2+\cos(\pi h k_j) \right), \quad k_j \in \{ 1, 2,\ldots, n\}.
\end{equation*}
\end{lemma}
\begin{lemma}\label{lemma:cond-Laplace}
The condition number of $A_d, d=1,2,3$, is given by
\begin{equation*}
\kappa = \frac{\lambda_{\rm max}(A_d) }{\lambda_{\rm min}(A_d) } = \frac{1-\cos(\pi h n)}{1-\cos(\pi h)}.
\end{equation*}
\end{lemma}
From Lemma \ref{lemma:cond-Laplace}, we see that the condition number $\kappa$ tends to infinity as $h\rightarrow 0$, which means that the iteration count of CG without a preconditioner increases dramatically as the mesh is refined.
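As a quick numerical illustration of this growth, the closed-form condition number of Lemma~\ref{lemma:cond-Laplace} can be evaluated for a sequence of meshes (a sketch; the convention $h=1/(n+1)$ with $n$ interior points per direction is our assumption):

```python
import math

def kappa_laplacian(n):
    """Condition number of A_d from the lemma above (dimension-independent),
    assuming n interior points per direction and h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (1 - math.cos(math.pi * h * n)) / (1 - math.cos(math.pi * h))

# kappa grows like O(h^{-2}) as the mesh is refined
for n in (8, 16, 32, 64):
    print(n, round(kappa_laplacian(n), 4))
```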
We now consider the preconditioned operator $T_d = M_dA_d$ and analyze its eigenvalues.
\begin{theorem}
The eigenvalues of $T_d = M_dA_d$ are given by
\begin{equation}\label{eq:T_d-eigs}
\lambda(T_d) = \frac{2}{3^d} \prod_{j=1}^d \left(2+\cos(\pi h k_j) \right) \left(\sum_{j=1}^d \left (1-\cos(\pi h k_j) \right) \right).
\end{equation}
Furthermore, the condition number of $T_d$ is as follows:
\\
In 1D,
\begin{equation}\label{eq:kap-1d}
\kappa_p = \frac{\left(2+\cos(\pi h [2/(3h)]) \right) \left(1-\cos(\pi h [2/(3h)]) \right) }{ (2+\cos(\pi h) ) (1-\cos(\pi h))},
\end{equation}
where $[\cdot]$ stands for the integer part of a number.
\noindent In 2D
\begin{equation}\label{eq:kap-2d}
\kappa_p = \frac{\left(2+\cos(\pi h [1/(2h)]) \right)^2 \left(1-\cos(\pi h [1/(2h)]) \right) }{\left(2+\cos(\pi h) \right)^2 \left(1-\cos(\pi h) \right) }.
\end{equation}
In 3D
\begin{equation}\label{eq:kap-3d}
\kappa_p = \frac{\left(2+\cos(\pi h \beta ) \right)^3 \left(1-\cos(\pi h \beta) \right) }{\left(2+\cos(\pi h) \right)^3 \left(1-\cos(\pi h) \right) },
\end{equation}
where $\beta=[\arccos(1/4)/(\pi h)]$.
\end{theorem}
\begin{proof}
We use local Fourier analysis \cite{MR1807961} to compute the eigenvalues of $M_d A_d$. When $M_d$ and $A_d$ are obtained from a periodic operator, the eigenvalues of
$M_d A_d$ are the products of the corresponding eigenvalues of $M_d$ and $A_d$.
When $d=1$, from \eqref{eq:T_d-eigs}, we have
\begin{equation*}
\lambda(T_1) =\frac{2}{3} \left(2+\cos(\pi h k_1) \right) \left (1-\cos(\pi h k_1) \right).
\end{equation*}
Let us consider $f_1(x) =(2+x)(1-x)$ with $x \in[-1,1]$. Note that the maximum of $f_1(x)$ is achieved at $x=-\frac{1}{2}$ and the minimum is achieved at $x=1$.
Thus,
\begin{align*}
\lambda_{\max}(T_1)&=\frac{2}{3} \left(2+\cos(\pi h [2/(3h)]) \right) \left(1-\cos(\pi h [2/(3h)]) \right),\\
\lambda_{\min}(T_1) &=\frac{2}{3}(2+\cos(\pi h) ) (1-\cos(\pi h)),
\end{align*}
which leads to \eqref{eq:kap-1d}.
When $d=2$, from \eqref{eq:T_d-eigs}, we have
\begin{equation*}
\lambda(T_2) =\frac{2}{9} \left(2+\cos(\pi h k_1) \right)\left(2+\cos(\pi h k_2) \right) \left (2-\cos(\pi h k_1) -\cos(\pi h k_2)\right).
\end{equation*}
Let us consider $f_2(x,y) =(2+x)(2+y)(2-x-y)$ with $x,y \in[-1,1]$. We compute the derivatives of $f_2$ with respect to $x$ and $y$, given by
\begin{align*}
f_{2,x}&= -4x-2y-2xy-y^2,\\
f_{2,y} &= -4y-2x-2xy-x^2.
\end{align*}
Solving $f_{2,x}=f_{2,y}=0$ with $x,y\in[-1,1]$ gives $x=y=0$. It readily follows that $(0,0)$ is a local maximum point and $f_2(0,0)=8$.
Next, we consider $f_2$ on the boundary of $\Omega=[-1,1]^2$, where the maximum is $f_2(1/2,-1)=f_2(-1,1/2)=25/4<f_2(0,0)$ and the minimum is $f_2(1,1)=0$.
Thus,
\begin{align*}
\lambda_{\max}(T_2)&=\frac{4}{9} \left(2+\cos(\pi h [1/(2h)]) \right)^2 \left(1-\cos(\pi h [1/(2h)]) \right),\\
\lambda_{\min}(T_2) &=\frac{4}{9}\left(2+\cos(\pi h) \right)^2 \left(1-\cos(\pi h) \right),
\end{align*}
which leads to \eqref{eq:kap-2d}.
When $d=3$, from \eqref{eq:T_d-eigs}, we have
\begin{equation*}
\lambda(T_3) =\frac{2}{27} \left(2+\cos(\pi h k_1) \right)\left(2+\cos(\pi h k_2) \right) \left(2+\cos(\pi h k_3) \right) \left (3-\cos(\pi h k_1) -\cos(\pi h k_2) -\cos(\pi h k_3)\right).
\end{equation*}
Let us consider $f_3(x,y,z) =(2+x)(2+y)(2+z)(3-x-y-z)$ with $x,y,z \in[-1,1]$. We compute the derivatives of $f_3$ with respect to $x, y$ and $z$, given by
\begin{align*}
f_{3,x}&= (2+y)(2+z)(1-2x-y-z),\\
f_{3,y} &=(2+x)(2+z)(1-2y-x-z),\\
f_{3,z} &= (2+x)(2+y)(1-2z-x-y).
\end{align*}
Solving $f_{3,x}=f_{3,y}=f_{3,z}=0$ with $x,y,z\in[-1,1]$ gives $x=y=z=1/4$. It is obvious that $(1/4,1/4,1/4)$ is a local maximum point and $f_3(1/4,1/4,1/4)=(9/4)^4$.
Next, we consider $f_3(x,y,z)$ at the boundary of $\Omega=[-1,1]^3$, and due to the symmetry of $f_3$, we only need to consider two cases.
One is $z=1$ with $(x,y)\in[-1,1]^2$ and the other is $z=-1$ with $(x,y)\in[-1,1]^2$. When $z=1$, $f_3(x,y,1)=g_1(x,y)=3(2+x)(2+y)(2-x-y)$ for $(x,y)\in [-1,1]^2$, and from the proof of the case $d=2$ we know that the maximum of $g_1(x,y)$ is $g_1(0,0)=24$ and the minimum of $g_1(x,y)$ is $g_1(1,1)=0$. When $z=-1$, $f_3(x,y,-1)=g_2(x,y)=(2+x)(2+y)(4-x-y)$. It can easily be shown that the maximum of $g_2(x,y)$ is $g_2(1/2,1)=75/4$ and the minimum of $g_2(x,y)$ is $g_2(-1,-1)=6$.
Thus, the maximum of $f_3$ is $f_3(1/4,1/4,1/4)=(9/4)^4$ and the minimum is $f_3(1,1,1)=0$. This means that
\begin{align*}
\lambda_{\max}(T_3)&=\frac{2}{9} \left(2+\cos(\pi h \beta) \right)^3 \left(1-\cos(\pi h \beta) \right), \quad \beta=[\arccos(1/4)/(\pi h)],\\
\lambda_{\min}(T_3) &=\frac{2}{9}\left(2+\cos(\pi h) \right)^3 \left(1-\cos(\pi h) \right),
\end{align*}
which yields \eqref{eq:kap-3d}.
\end{proof}
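The $d=1$ case of the theorem can also be checked directly: with homogeneous Dirichlet boundary conditions, $A_1$ and $M_1$ are tridiagonal Toeplitz matrices sharing the discrete sine eigenvectors, so the eigenvalues of $M_1A_1$ are exactly the products in \eqref{eq:T_d-eigs}. A small numerical sketch (our own assembly conventions, with $h=1/(n+1)$):

```python
import numpy as np

n = 31
h = 1.0 / (n + 1)

# 1D Laplacian stencil [-1 2 -1]/h^2 and scaled mass matrix M_1 = h*M
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
M1 = h * (h / 6) * (np.diag(4 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                    + np.diag(np.ones(n - 1), -1))

computed = np.sort(np.linalg.eigvals(M1 @ A).real)

k = np.arange(1, n + 1)
predicted = np.sort((2 / 3) * (2 + np.cos(np.pi * h * k))
                    * (1 - np.cos(np.pi * h * k)))

# agreement up to round-off
print(np.max(np.abs(computed - predicted)))
```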
Next, we describe the relationship between the two condition numbers, $\kappa$ and $\kappa_p$.
\begin{theorem}\label{thm:ratio-results}
Define the ratio $r =\frac{\kappa}{\kappa_p} $. The ratio satisfies: \\
In 1D
\begin{equation*}
r_1 = \frac{(1-\cos(\pi h n)) (2+\cos(\pi h) )}{ \left(2+\cos(\pi h [2/(3h)]) \right) \left(1-\cos(\pi h [2/(3h)]) \right) },
\end{equation*}
and
\begin{equation*}
\lim_{h\rightarrow 0} r_1 = \frac{8}{3}\approx 2.7.
\end{equation*}
In 2D
\begin{equation*}
r_2 = \frac{(1-\cos(\pi h n)) \left(2+\cos(\pi h) \right)^2} {\left(2+\cos(\pi h [1/(2h)]) \right)^2 \left(1-\cos(\pi h [1/(2h)]) \right) },
\end{equation*}
and
\begin{equation*}
\lim_{h\rightarrow 0} r_2 = \frac{9}{2}=4.5.
\end{equation*}
In 3D
\begin{equation*}
r_3 = \frac{(1-\cos(\pi h n)) \left(2+\cos(\pi h) \right)^3} {\left(2+\cos(\pi h \beta ) \right)^3 \left(1-\cos(\pi h \beta) \right) },
\end{equation*}
where $\beta=[\arccos(1/4)/(\pi h)]$ and
\begin{equation*}
\lim_{h\rightarrow 0} r_3 = \frac{2^9}{3^4} \approx 6.3.
\end{equation*}
\end{theorem}
From Theorem \ref{thm:ratio-results} it is interesting to notice that the gains in terms of condition number ratios grow with the dimension; this suggests that our approach is particularly effective for 3D.
It is well known that the convergence bound of CG satisfies \cite{saad2003iterative}
\begin{equation*}
\| x-x_k \|_A \leq 2 \left(\frac{ \sqrt{ \kappa }-1 } { \sqrt{ \kappa}+1} \right)^k \| x-x_0\|_A.
\end{equation*}
Requiring $\| x-x_k \|_A \leq \epsilon \| x-x_0\|_A$ gives
\begin{equation*}
k \approx \frac{ 1} {2} \log (2/\epsilon) \sqrt{ \kappa }.
\end{equation*}
It follows that to achieve the same convergence tolerance $\epsilon$, the ratio of iteration numbers of CG without preconditioner to that of CG with preconditioner is
\begin{equation}\label{eq:theoretical-itn-ratio}
\frac{k}{k_p} = \sqrt{\frac{\kappa}{\kappa_p}} =\sqrt{r}.
\end{equation}
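Combining \eqref{eq:theoretical-itn-ratio} with the limiting ratios of Theorem~\ref{thm:ratio-results}, the predicted iteration-count gains are easy to tabulate (a trivial check):

```python
import math

# limiting condition-number ratios r_d from the theorem above
limits = {1: 8 / 3, 2: 9 / 2, 3: 2**9 / 3**4}
for d, r in limits.items():
    # prints 1.63, 2.12 and 2.51 for d = 1, 2, 3
    print(f"d={d}: r={r:.4f}, predicted iteration ratio = {math.sqrt(r):.2f}")
```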
\section{Numerical experiments}
\label{sec:numerical}
To demonstrate the efficiency of the mass matrix as a preconditioner for the Laplacian, we consider the Poisson equation in two and three dimensions on the unit square and cube, respectively, subject to homogeneous Dirichlet boundary conditions. We discretize it using a uniform mesh, as briefly described in Section~\ref{sec:intro}. We run CG with and without preconditioner and stop the iteration when the residual norm is below $10^{-8}$.
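A minimal sketch of such an experiment in 2D is given below, with our own plain CG implementation; note that "preconditioning" here is a single multiplication by $M_2 = M \otimes M$ per iteration, with no solves involved. The mesh convention $h=1/(n+1)$ and the right-hand side are our assumptions.

```python
import numpy as np

def cg(A, b, precond=lambda r: r, tol=1e-8, maxit=2000):
    """Preconditioned CG; `precond` applies M (a multiplication, not a solve)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 31
h = 1.0 / (n + 1)
T = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
I = np.eye(n)
A2 = np.kron(I, T) + np.kron(T, I)          # 2D Laplacian
M = (h / 6) * (np.diag(4 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1))
M2 = np.kron(M, M)                          # M_2 = M (x) M

b = np.ones(n * n)
_, it_plain = cg(A2, b)
_, it_prec = cg(A2, b, precond=lambda r: M2 @ r)
print(it_plain, it_prec)  # the preconditioned count is roughly half
```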
In Figure~\ref{fig:eigenvalues-2d-cg} we illustrate the effect that preconditioning has on the eigenvalues of the matrix. It is evident that most of the eigenvalues of the preconditioned matrix have values relatively close to 1, which explains the effectiveness of the preconditioner.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.99\textwidth]{eigenvalues.pdf}
\caption{Eigenvalues of the Laplacian vs. the product of the mass matrix by the Laplacian. This is a 2D problem with $n=32$, i.e., the matrix is of dimensions $1,024 \times 1,024$.}\label{fig:eigenvalues-2d-cg}
\end{center}
\end{figure}
In Table \ref{tab:ratio}, we numerically compute the ratio $\frac{\kappa}{\kappa_p}$ for different values of the mesh size $h$ and dimension $d$. The results in Table \ref{tab:ratio} closely match the analytical results in Theorem \ref{thm:ratio-results}. For fixed $d$, the ratio $r_d$ increases as $h$ decreases, but $r_d$ remains bounded.
\begin{table}[H]
\caption{The ratio of two condition numbers}
\centering
\begin{tabular}{c l l l }
\hline
$r_d$ &$n=8$ &$n=16$ &$n=32$ \\
\hline
\hline
$r_1$ &32.1634/12.6914 $\approx$ 2.5 & 116.4612/44.2414$\approx$ 2.6 & 440.6886/165.8836 $\approx$ 2.7 \\
\hline
$r_2$ & 32.1634/7.6173 $\approx$ 4.2 & 116.4612/26.3451$\approx$ 4.4 & 440.6886/98.3943 $\approx$ 4.5 \\
\hline
$r_3$ & 32.1634/5.5393 $\approx$ 5.8 & 116.4612/18.8900$\approx$ 6.2 & 440.6886/70.1771 $\approx$ 6.3 \\
\hline
\end{tabular}\label{tab:ratio}
\end{table}
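The first row of the table can be reproduced directly from Lemma~\ref{lemma:cond-Laplace} and \eqref{eq:kap-1d} (a sketch; $h=1/(n+1)$ is our assumed mesh convention, and the table columns are interpreted as values of $n$):

```python
import math

def kappas_1d(n):
    """kappa and kappa_p in 1D for h = 1/(n+1), per the closed-form results."""
    h = 1.0 / (n + 1)
    c1, cn = math.cos(math.pi * h), math.cos(math.pi * h * n)
    kappa = (1 - cn) / (1 - c1)
    m = int(2 * (n + 1) / 3)            # [2/(3h)], integer part
    cm = math.cos(math.pi * h * m)
    kappa_p = ((2 + cm) * (1 - cm)) / ((2 + c1) * (1 - c1))
    return kappa, kappa_p

for n in (8, 16, 32):
    k, kp = kappas_1d(n)
    print(n, round(k, 4), round(kp, 4), round(k / kp, 1))
```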
We now move to show some convergence results in 2D and 3D. In Table~\ref{tab:iterations} we summarize our findings.
\begin{table}[H]
\caption{Iteration counts for 2D and 3D experiments. The column under `mtx-size' gives the sizes of the linear systems considered (number of degrees of freedom). The term `th-itn-ratio' stands for `theoretical iteration counts ratio' (see \eqref{eq:theoretical-itn-ratio}), as explained in the text. The term `itn-ratio' refers to the ratio between iteration counts for the unpreconditioned case (`itn-unprec') and iteration counts for the preconditioned case (`itn-prec').}
\centering
\begin{tabular}{ccccccc}
\hline
Type & $n$ & mtx-size &itn-unprec & itn-prec & th-itn-ratio & itn-ratio\\
\hline
\hline
2D & 32 & 1,024 & 62 & 30 & 2.12 & 2.07 \\
& 64 & 4,096 & 122 & 58 & & 2.10 \\
& 128 & 16,384 & 231 & 110 & & 2.10 \\
& 256 & 65,536 & 454 & 215 & & 2.11 \\ \hline \hline
3D & 32 & 32,768 & 81 & 33 & 2.51 & 2.45 \\
& 64 &262,144 & 158 & 63 & & 2.51 \\
& 96 &884,736 & 225 & 90 & & 2.50 \\
& 128 & 2,097,152 & 296 & 118 & & 2.51 \\ \hline
\end{tabular}\label{tab:iterations}
\end{table}
The results in Table~\ref{tab:iterations} are consistent with our theoretical findings.
When $d=2$, $r_2\approx 4.5$ and $\sqrt{4.5}\approx 2.12$. Thus, unpreconditioned CG is expected to take approximately 2.12 times the iteration number of preconditioned CG.
When $d=3$, $r_3\approx 6.3$ and $\sqrt{6.3}\approx 2.51$. Thus, unpreconditioned CG is expected to take approximately 2.51 times the iteration number of preconditioned CG. The table shows that those predictions are remarkably accurate in both 2D and 3D.
In Figure \ref{fig:Errors-2d-cg}, we show the convergence history for $n=128$ and $n=256$ in 2D. In Figure \ref{fig:Errors-3d-cg} we show the convergence history for $n=64$ and $n=128$ in 3D.
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{cg-Mass-2d-128.pdf}
\includegraphics[width=0.49\textwidth]{cg-Mass-2d-256.pdf}
\caption{Convergence history of CG with and without a preconditioner in 2D. Left: $n=128$. Right: $n=256$.}\label{fig:Errors-2d-cg}
\end{figure}
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{cg-Mass-3d-64.pdf}
\includegraphics[width=0.49\textwidth]{cg-Mass-3d-128.pdf}
\caption{Convergence history of CG with and without preconditioner in 3D. Left: $n=64$. Right: $n=128$. }\label{fig:Errors-3d-cg}
\end{figure}
\section{Concluding remarks}
\label{sec:conc}
Our analytical results provide a remarkably accurate estimate of the condition number and iteration counts for CG. At a minimal cost that amounts to a matrix-vector product by the sparse and well-conditioned mass matrix, convergence speed is at least doubled. The gains are stronger in the 3D case. The cost of the additional matrix-vector product per iteration is modest, especially if considered in a parallel computing environment. Therefore, the overall computational gains are meaningful.
The proposed scheme is extremely simple and easy to implement and may make it possible to utilize the mass matrix in other problems in potentially useful ways.
\bibliographystyle{siam}
| {
"timestamp": "2021-11-08T02:06:06",
"yymm": "2111",
"arxiv_id": "2111.03191",
"language": "en",
"url": "https://arxiv.org/abs/2111.03191",
"abstract": "We show that the mass matrix derived from finite elements can be effectively used as a preconditioner for iteratively solving the linear system arising from finite-difference discretization of the Poisson equation, using the conjugate gradient method. We derive analytically the condition number of the preconditioned operator. Theoretical analysis shows that the ratio of the condition number of the Laplacian to the preconditioned operator is $8/3$ in one dimension, $9/2$ in two dimensions, and $2^9/3^4 \\approx 6.3$ in three dimensions. From this it follows that the expected iteration count for achieving a fixed reduction of the norm of the residual is smaller than a half of the number of the iterations of unpreconditioned CG in 2D and 3D. The scheme is easy to implement, and numerical experiments show its efficiency.",
"subjects": "Numerical Analysis (math.NA)",
"title": "A note on using the mass matrix as a preconditioner for the Poisson equation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771786365549,
"lm_q2_score": 0.8791467564270272,
"lm_q1q2_score": 0.8675219559145405
} |
https://arxiv.org/abs/1403.3431 | Minimal TSP Tour is coNP-Complete | The problem of deciding if a Traveling Salesman Problem (TSP) tour is minimal was proved to be coNP-complete by Papadimitriou and Steiglitz. We give an alternative proof based on a polynomial time reduction from 3SAT. Like the original proof, our reduction also shows that given a graph $G$ and an Hamiltonian path of $G$, it is NP-complete to check if $G$ contains an Hamiltonian cycle (Restricted Hamiltonian Cycle problem). | \section{Introduction}
The \emph{Traveling Salesman Problem} (TSP{}) is a well--known problem from
graph theory \cite{PapadimitriouComplexity},\cite{GJ}: we are given $n$
cities and a nonnegative integer distance $d_{ij}$ between
any two cities $i$ and $j$ (assume that the distances are symmetric, i.e.
for all $i,j, d_{ij} = d_{ji}$). We are asked to find the \emph{shortest tour}
of the cities, that is a permutation $\pi$ of $[1..n]$ such that
$\sum_{i=1}^n d_{\pi(i),\pi(i+1)}$ (where $\pi(n+1) = \pi(1)$) is as small as possible. Its decision version is the following:
\begin{myquote}
{\sc TSPDecision}: If a nonnegative
integer bound $B$ (the traveling salesman's ``budget'') is given along
with the distances, does there exist
a tour of all the cities having total length no more than $B$?
\end{myquote}
\noindent{\sc TSPDecision}{} is {\sf NP}--complete{} (we assume that the reader is familiar with
the theory of {\sf NP}--completeness, for a good introduction see \cite{GJ} or \cite{Sipser}).
In \cite{PapadimitriouComplexity} two other problems are introduced:
\begin{myquote}
{\sc TSPExact}: Given the distances $d_{ij}$ among the $n$ cities and a nonnegative integer $B$, is the length of
the shortest tour \emph{equal} to $B$; and
\\
\\
{\sc TSPCost}: Given the distances $d_{ij}$ among the $n$ cities calculate the \emph{length} of the shortest tour.
\end{myquote}
\noindent{\sc TSPExact}{} is {\sf DP}--complete (a language $L$ is in the class {\sf DP}{} if and only if there are two languages $L_1 \in {\sf NP}$
and $L_2 \in {\sf coNP}$ and $L = L_1 \cap L_2$); {\sc TSPCost}{} and TSP{} are both $\sf{FP^{NP}}$--complete
($\sf{FP^{NP}}${} is the class of all functions from strings to strings that can be computed by
a polynomial--time Turing machine with a {\sc SAT}{} oracle) \cite{PapadimitriouComplexity}.
Recently a post by Jean Francois Puget:
``\href{https://www.ibm.com/developerworks/community/blogs/jfp/entry/no_the_tsp_isn_t_np_complete}{No, The TSP Isn't NP Complete}'' and the subsequent reply by Lance Fortnow:
``\href{http://blog.computationalcomplexity.org/2014/01/is-traveling-salesman-np-complete.html}{Is Traveling Salesman NP-Complete?}'' \cite{blog:tspmindecision} (re--)raised the question of the correct interpretation
of the statement ``TSP is {\sf NP}--complete{}'';
indeed, if we are given a tour, checking that it is the shortest tour seems
not to be in {\sf NP}{}.
A question about the complexity of the following problem:
\begin{myquote}
{\sc TSPMinDecision}{}: Given a set of $n$ cities, the distance between all city pairs and a tour $T$, is $T$ visiting each city exactly once and is $T$ of minimal length?
\end{myquote}
\noindent was posted on \mbox{cstheory.stackexchange.com}, a question and answer site for professional researchers in theoretical computer science and related fields
\cite{cstheory:tspmindecision}.
We gave an answer with a first sketch of the proof that {\sc TSPMinDecision}{} is {\sf coNP}{}--complete,
but after formalising and publishing it on arXiv, \href{http://cstheory.stackexchange.com/a/21644/3247}{we discovered that the result
is not new} and it originally appeared in \cite{papasteitsp} (see also Section~19.9 in \cite{combopt}). The proof given by Papadimitriou and Steiglitz is different:
they prove that the Restricted Hamiltonian Cycle (RHC) problem is {\sf NP}--complete{}
starting from an instance of the Hamiltonian cycle problem $G$ and modifying $G$
into a new graph $G'$ that contains an Hamiltonian path, and has an Hamiltonian
cycle if and only if the original $G$ has an Hamiltonian cycle.
Our alternative proof is a chain of reductions from {\sc 3SAT}{} to
the problem of finding a tour shorter than a given one,
and it may be interesting in and of itself, so we decided not to
withdraw the paper.
\section{Minimal TSP tour is coNP--complete}
\label{sec:proof}
Proving that {\sc TSPMinDecision}{} is {\sf coNP}{}--complete is equivalent to proving the
{\sf NP}{}--completeness of the following:
\begin{definition}[{\sc TSPAnotherTour}{} problem]~\\
\noindent{
\textbf{Instance}: A complete graph $G = (V, E)$ with positive
integer distances $d_{ij}$ between the nodes,
and a simple cycle $C$ that visits all the nodes of $G$.
}
\smallskip
\noindent{\textbf{Question}: Is there a simple cycle $D$ that visits all the nodes of $G$
such that the total length of the tour $D$ in $G$ is strictly less than
the total length of the tour $C$ in $G$?
}
\end{definition}
\begin{theorem}\label{thm:tspanothertour}
{\sc TSPAnotherTour}{} is {\sf NP}--complete{}.
\end{theorem}
\begin{proof}
It is easy to see that a valid solution to the problem can be verified in polynomial
time: just check if the tour $D$ visits all the cities and if its length is strictly
less than the length of the given tour $C$, so the problem is in {\sf NP}{}.
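This certificate check can be made concrete (a sketch with our own data conventions: cities $0,\ldots,n-1$, a symmetric distance matrix, and tours given as permutations); the verification runs in time linear in $n$:

```python
def tour_length(d, tour):
    # closes the cycle: the last city connects back to the first
    n = len(tour)
    return sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))

def verify_another_tour(d, c, candidate):
    """Polynomial-time certificate check for TSPAnotherTour: `candidate`
    must visit every city exactly once and be strictly shorter than `c`."""
    n = len(d)
    return sorted(candidate) == list(range(n)) and \
        tour_length(d, candidate) < tour_length(d, c)

# Tiny example: 4 cities; the given tour 0-1-2-3 has length 8,
# the candidate 0-2-1-3 has length 6.
d = [[0, 2, 1, 2],
     [2, 0, 2, 1],
     [1, 2, 0, 2],
     [2, 1, 2, 0]]
print(verify_another_tour(d, [0, 1, 2, 3], [0, 2, 1, 3]))  # True
```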
To prove its hardness we give a polynomial time reduction from {\sc 3SAT}{}.
Given a {\sc 3CNF}{} formula $\varphi$ with $n$ variables $x_1,\ldots,x_n$ and
$m$ clauses $C_1,\ldots,C_m$, we introduce a new dummy variable $z$ and
add it to every clause: $(x_{i_1} \lor x_{i_2} \lor x_{i_3} \lor z)$. We obtain a {\sc 4CNF}{} formula $\varphi^z$ that
has at least one satisfying assignment (just set $z=true$). Note that
every satisfying assignment of $\varphi^z$ in which $z =false$ is also
a satisfying assignment of $\varphi$.
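The transformation $\varphi \mapsto \varphi^z$ and the two properties just stated can be checked mechanically; a small sketch follows (our own encoding: clauses as tuples of signed variable indices, the dummy variable being $n+1$):

```python
from itertools import product

def add_dummy(phi, n):
    """Append the positive literal z = n+1 to every clause of phi."""
    z = n + 1
    return [clause + (z,) for clause in phi]

def satisfies(phi, assign):
    # assign[v] is the truth value of variable v (1-indexed)
    return all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in phi)

# phi = (x1 v x2 v x3) ^ (~x1 v ~x2 v x3)
phi, n = [(1, 2, 3), (-1, -2, 3)], 3
phi_z = add_dummy(phi, n)

# Setting z = true satisfies phi_z regardless of x1..x3 ...
assert all(satisfies(phi_z, {1: a, 2: b, 3: c, 4: True})
           for a, b, c in product([False, True], repeat=3))

# ... and every satisfying assignment of phi_z with z = false satisfies phi.
for a, b, c in product([False, True], repeat=3):
    assign = {1: a, 2: b, 3: c, 4: False}
    if satisfies(phi_z, assign):
        assert satisfies(phi, assign)
print("ok")
```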
From $\varphi^z$ we generate an undirected graph $G = (V,E)$ following the same standard transformation used
to prove that the Hamiltonian cycle problem is {\sf NP}--complete{}: for every
clause we add a node $c_j$, for every
variable $x_i$ we add a \emph{diamond--like} component,
and we add a directed edge from one of the nodes of the diamond
to the node $c_j$ if $x_i$ appears in $C_j$ as a positive literal;
a directed edge from $c_j$ to one of the nodes of the diamond
if $x_i$ appears in $C_j$ as a negative literal.
Starting from the top we can choose to traverse the diamonds
corresponding to variables $x_1,x_2,...,x_n,z$ from left to right
(i.e. set $x_i$ to $true$) or from right to left (i.e. set $x_i$ to $false$).
The resulting directed graph $G$ has an Hamiltonian cycle if and only if the original formula
is satisfiable. For the details of the construction see \cite{Sipser} or \cite{AroraBarak}.
We focus on the diamond corresponding to the dummy variable $z$; let $e_z$
be the edge that must be traversed if we assign to $z$ the value of $true$
(see Figure~\ref{fig:reduction}).
\begin{figure}[htp]
\centering
\includegraphics[width=7cm]{figreduction.pdf}
\caption{Reduction from {\sc 3SAT}{} to directed Hamiltonian cycle.}\label{fig:reduction}
\end{figure}
We can transform $G$ into an undirected graph $G' = (V',E')$
replacing each node $u \in V$ with three linked nodes $u_1, u_2, u_3 \in V'$ and modify the edges according to the standard reduction used to prove the {\sf NP}{}-completeness of UNDIRECTED HAMILTONIAN CYCLE from DIRECTED HAMILTONIAN CYCLE \cite{Sipser}:
we use $u_1$ for the incoming edges of $u$, and $u_3$ for the outgoing edges,
i.e. we replace every directed edge $(u \to v) \in E$ with $(u_3 \to v_1) \in E'$.
Then $G'$ has an Hamiltonian cycle if and only if $G$ has an Hamiltonian cycle
if and only if $\varphi^z$ is satisfiable.
Finally we transform $G'$ into an instance of {\sc TSPAnotherTour}{} assigning length $1$ to
all edges except edge $e_z$ which has length $2$; and we complete the graph
adding the missing edges and setting their length to $3$.
The dummy variable $z$ guarantees that we can easily find a tour $T$: just
travel the diamonds from left to right without worrying of the clause nodes;
when we reach the diamond corresponding to $z$, traverse it from left to right
(i.e. assign to $z$ the value of $true$), and include all the $c_j$s. By construction the total length of the tour $T$ is exactly $|V'|+1$: all edges of $T$ have length $1$ except $e_z$, which has length $2$.
Another tour $D$ can have a length strictly less than $|V'|+1$ only if
it does not use the edge $e_z$; so if it exists we can derive a valid
satisfying assignment for the original formula $\varphi$, indeed
by construction $\varphi$
is satisfiable if and only if there exists a satisfying assignment for
$\varphi^z$ in which $z=false$. In the opposite direction
if there exists a valid satisfying assignment for $\varphi$ we can
easily find a tour $D$ of length $|V'|$: just traverse the diamonds
according to the truth values of the variables $x_i$ and traverse
the diamond corresponding to $z$ from right to left.
So there is another tour $D$ of total length strictly less than $T$ if
and only if the original {\sc 3SAT}{} formula $\varphi$ is satisfiable.
\end{proof}
Hence we have:
\begin{corollary}
{\sc TSPMinDecision}{} is {\sf coNP}--complete.
\end{corollary}
The reduction used to prove Theorem~\ref{thm:tspanothertour} ``embeds'' the $\sf{NP}$--completeness proof of the \emph{Restricted Hamiltonian Cycle problem}
(RHC) \cite{combopt}:
\begin{theorem}
\label{cor:ham}
Given a graph $G$ and an Hamiltonian path in it, it is {\sf NP}--complete{} to decide
if $G$ contains an Hamiltonian cycle as well.
\end{theorem}
\begin{proof}
In the reduction above, after the creation of
the undirected graph $G'$, if we remove the edge $e_z$,
we are sure that an Hamiltonian path exists from one endpoint
of $e_z$ to the other (just delete $e_z$ from the Hamiltonian cycle that can be constructed setting $z = true$). An Hamiltonian cycle in $(V', E' \setminus \{e_z\})$ \emph{must} use
the edge corresponding to $z = false$, so it exists if and only if
the original {\sc 3SAT}{} formula $\varphi$ is satisfiable.
\end{proof}
\section{Conclusion}
We are optimistic: if someone -- out there -- shouts: ``TSP is NP--complete''
we are confident that he really means: ``The decision version of TSP is NP--complete'';
and we hope that, sooner or later, someone -- out there -- will shout
``We already know that there is [not] a polynomial time algorithm that solves TSP
because $\sf{P}$ is [not] equal to $\sf{NP}$'' :-)
\section*{Acknowledgements}
Thanks to P\'alv\"olgyi D\"om\"ot\"or for the nice hint about
Theorem~\ref{cor:ham}, and to Marcus Ritt for pointing out
the original Papadimitriou and Steiglitz's paper.
\bibliographystyle{plain}
| {
"timestamp": "2014-03-24T01:06:10",
"yymm": "1403",
"arxiv_id": "1403.3431",
"language": "en",
"url": "https://arxiv.org/abs/1403.3431",
"abstract": "The problem of deciding if a Traveling Salesman Problem (TSP) tour is minimal was proved to be coNP-complete by Papadimitriou and Steiglitz. We give an alternative proof based on a polynomial time reduction from 3SAT. Like the original proof, our reduction also shows that given a graph $G$ and an Hamiltonian path of $G$, it is NP-complete to check if $G$ contains an Hamiltonian cycle (Restricted Hamiltonian Cycle problem).",
"subjects": "Computational Complexity (cs.CC)",
"title": "Minimal TSP Tour is coNP-Complete",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513905984457,
"lm_q2_score": 0.8791467564270272,
"lm_q1q2_score": 0.8669717963906259
} |
https://arxiv.org/abs/1905.05312 | Books versus triangles at the extremal density | A celebrated result of Mantel shows that every graph on $n$ vertices with $\lfloor n^2/4 \rfloor + 1$ edges must contain a triangle. A robust version of this result, due to Rademacher, says that there must in fact be at least $\lfloor n/2 \rfloor$ triangles in any such graph. Another strengthening, due to the combined efforts of many authors starting with Erdős, says that any such graph must have an edge which is contained in at least $n/6$ triangles. Following Mubayi, we study the interplay between these two results, that is, between the number of triangles in such graphs and their book number, the largest number of triangles sharing an edge. Among other results, Mubayi showed that for any $1/6 \leq \beta < 1/4$ there is $\gamma > 0$ such that any graph on $n$ vertices with at least $\lfloor n^2/4\rfloor + 1$ edges and book number at most $\beta n$ contains at least $(\gamma -o(1))n^3$ triangles. He also asked for a more precise estimate for $\gamma$ in terms of $\beta$. We make a conjecture about this dependency and prove this conjecture for $\beta = 1/6$ and for $0.2495 \leq \beta < 1/4$, thereby answering Mubayi's question in these ranges. | \section{Introduction}
Mantel's theorem \cite{Man} from 1907 is among the earliest results in extremal graph theory. It states that the maximum number of edges that a triangle-free graph on $n$ vertices can have is $\lfloor n^2/4 \rfloor$, with equality if and only if the graph is the balanced complete bipartite graph. So a graph on $n$ vertices with one more edge must have at least one triangle. Must it have many triangles? Must there be an edge in many triangles? Such questions have a long history of study in extremal graph theory.
In unpublished work, Rademacher answered the first question above in 1950, proving that every graph on $n$ vertices with $\lfloor n^2/4 \rfloor+1$ edges has at least $\lfloor n/2 \rfloor$ triangles, which is tight by adding an edge inside the larger part of a balanced complete bipartite graph. Erd\H{o}s \cite{Er62} then extended this result to graphs with a linear number of extra edges and, in \cite{Er62b}, studied the problem for larger cliques. Over the last fifty years, many further results in this direction have been obtained by various researchers, see, e.g., \cite{Bol, LPS, Lov-Sim, Nikiforov, Razborov, Reiher} and their references.
The second question, about finding an edge in many triangles, was first studied by Erd\H{o}s \cite{Er62} in 1962. A {\it book} in a graph is a collection of triangles that have an edge in common. The \emph{size} of the book is the number of such triangles. The \emph{book number} of a graph $G$, denoted by $b(G)$, is the size of the largest book in the graph. Erd\H{o}s proved that every graph $G$ on $n$ vertices with $\lfloor n^2/4 \rfloor+1$ edges satisfies $b(G) \geq n/6-O(1)$ and conjectured that the $O(1)$-term can be removed. Solving this conjecture and answering the second question above, Edwards and, independently, Khad\v{z}iivanov and Nikiforov~\cite{KN79} proved that every such graph satisfies $b(G)\geq n/6$, which is tight.
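Both classical results are easy to sanity-check by brute force on the extremal example: adding one edge inside a part of the balanced complete bipartite graph on $n=10$ vertices yields exactly $\lfloor n/2\rfloor = 5$ triangles, all sharing the new edge, so the book number is $5 \geq n/6$. A minimal pure-Python sketch (our own encoding of the graph):

```python
from itertools import combinations

n = 10
A, B = range(5), range(5, 10)
edges = {frozenset((a, b)) for a in A for b in B}
edges.add(frozenset((0, 1)))          # one extra edge inside part A

adj = {v: set() for v in range(n)}
for e in edges:
    u, v = tuple(e)
    adj[u].add(v); adj[v].add(u)

triangles = [t for t in combinations(range(n), 3)
             if all(frozenset(p) in edges for p in combinations(t, 2))]
# book number: the largest number of common neighbours over all edges
book = max(len(adj[u] & adj[v]) for e in edges for u, v in [tuple(e)])

print(len(edges), len(triangles), book)  # 26 5 5
```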
Our concern here is with a problem of Mubayi \cite{Mubayi} about the interplay between the two questions above. More precisely, if a graph $G$ on $n$ vertices with $\lfloor n^2/4 \rfloor+1$ edges satisfies $b(G) \leq b$, at least how many triangles must it have? We write $t(n, b)$ for this minimum number. Mubayi proved that for fixed $\beta \in (1/4,1/2)$, if $b(G)<\beta n$, then $t(n,b) \geq \left(\frac{1}{2}\beta(1-2\beta)-o(1)\right)n^2$, a bound which is asymptotically tight. He also showed that $t(n, b)$ changes from quadratic to cubic in $n$ when $b \approx n/4$. More precisely, he proved that for each $\beta \in (1/6,1/4)$ there is $\gamma>0$ such that $t(n,\beta n) \geq \gamma n^3$. He then asked for a more precise determination of the optimal $\gamma$ in terms of $\beta$, but added that the problem `seems very hard'. Our contribution in this paper is to make a conjecture about this dependency and to confirm this conjecture for $\beta = 1/6$ and for $0.2495 \leq \beta < 1/4$.
To say more, consider the $3$-prism graph, the skeleton of the $3$-prism, consisting of two disjoint triangles with a perfect matching between them. For nonnegative integers $b$ and $n$ with $b \leq n/4$, let $S_{b,n}$ be the graph on $n$ vertices formed by blowing up the $3$-prism graph, where four of the six parts, corresponding to the vertices of two edges of the matching, are of size $b$, and the remaining two parts are of size $\lfloor (n-4b)/2\rfloor$ and $\lceil (n-4b)/2\rceil$. Restated, $S_{b,n}$ has vertex set consisting of six parts $U_1,U_2,U_3,V_1,V_2,V_3$ with $|U_1|=|U_2|=|V_1|=|V_2|=b$, $|U_3|=\lfloor (n-4b)/2\rfloor$, $|V_3|=\lceil (n-4b)/2\rceil$, $U_i$ is complete to $U_j$ for $i \not = j$, $V_i$ is complete to $V_j$ for $i \not = j$, $U_i$ is complete to $V_i$ for each $i$ and there are no other edges. The graph $S_{b,n}$ has $n$ vertices, $\lfloor n^2/4 \rfloor$ edges, book number $b$ if $b \geq n/6$ and $b^2(n-4b)$ triangles. If $b=0$ or $n/4$, then $S_{b,n}$ is the balanced complete bipartite graph, but otherwise has triangles. We make the following conjecture.\footnote{Though the final version of Mubayi's paper~\cite{Mubayi} contains no conjecture about the behaviour of $t(n, \beta n)$ for $1/6 \leq \beta < 1/4$, the original arXiv version described a construction which is almost identical to that given here. As such, we might well ascribe an approximate version of Conjecture~\ref{mainconj} to him.}
\begin{conj} \label{mainconj}
If $n/6 \leq b < n/4$, then every graph on $n$ vertices with at least $\lfloor n^2/4 \rfloor$ edges and book number at most $b$
which is not the balanced complete bipartite graph has at least $b^2(n-4b)$ triangles, with equality if and only if the graph is $S_{b,n}$.
\end{conj}
Our main result is a proof of Conjecture \ref{mainconj} when $b$ is not much smaller than $n/4$.
\begin{thm}\label{thmbooknearquarter}
Conjecture \ref{mainconj} holds for graphs with at least $n^2/4$ edges if $0.2495n \leq b < n/4$.
\end{thm}
While Theorem \ref{thmbooknearquarter} is stated for graphs with at least $n^2/4$ edges, the proof is robust enough to yield the analogous result for graphs with $\lfloor n^2/4 \rfloor$ edges. We only prove the weaker statement for simplicity of presentation.
The perceptive reader will have noticed that our Conjecture~\ref{mainconj} differs from Mubayi's question in one small, but important, point of detail: we allow our graphs to have $\lfloor n^2/4 \rfloor$ edges, whereas Mubayi looks at graphs with at least $\lfloor n^2/4 \rfloor + 1$ edges, thus guaranteeing that there are always some triangles. However, Conjecture~\ref{mainconj} also implies an asymptotically tight bound on the function $t(n, b)$. To see this, consider a slightly different blow-up of the $3$-prism graph, adding one vertex to each $U_i$ and subtracting one vertex from each $V_i$. If $n$ is even, we get a graph with book number $b+1$ and with three more edges and $n$ more triangles than $S_{b,n}$. If $n$ is odd, we get a graph with book number $b+1$ and with two more edges and $n-2b$ more triangles than $S_{b,n}$. We now delete two edges, each in $b+1$ triangles but not in a common triangle, if $n$ is even and one edge in $b+1$ triangles if $n$ is odd, yielding the bounds $t(n,b+1) \leq b^2(n-4b)+n-2(b+1)$ if $n$ is even and $t(n,b+1) \leq b^2(n-4b)+n-2b-(b+1)$ if $n$ is odd. Together with Conjecture~\ref{mainconj}, these constructions imply the required asymptotic estimate on $t(n,b)$ for $n/6 \leq b \leq n/4-\omega(1)$, where the $\omega(1)$ term indicates any function tending to infinity with $n$.
We also study what happens at the other end of the range, showing that Conjecture~\ref{mainconj} holds for $b = n/6$. More precisely, we will make use of results from a paper of Bollob\'as and Nikiforov~\cite{BN05}, themselves derived from the earlier work of Edwards and Khad\v{z}iivanov--Nikiforov~\cite{KN79}, to show that the conjecture holds in this case.
\begin{thm}\label{thmbooknearsixth}
Conjecture~\ref{mainconj} holds for graphs with at least $n^2/4$ edges if $b = n/6$.
\end{thm}
Once again, the theorem holds for graphs with $\lfloor n^2/4 \rfloor$ edges, but it is more convenient, principally from a notational standpoint, to assume that there are at least $n^2/4$ edges.
\vspace{3mm}
{\bf Notation.} For a graph $G$ and vertex $v$, the neighborhood $N(v)$ denotes the set of vertices adjacent to $v$, while the degree of $v$ is denoted by $d(v):=|N(v)|$ and the degree of $v$ into a vertex subset $A$ is denoted by $d_A(v):=|N(v) \cap A|$. For two vertices $u$ and $v$, their common neighborhood is denoted by $N(u,v)$ and their codegree $d(u,v)=|N(u,v)|$ is the number of vertices adjacent to both $u$ and $v$. The codegree of $u$ and $v$ into a vertex subset $A$ is denoted by $d_A(u,v):=|N(u,v) \cap A|$. For a vertex subset $A$, we write $E(A)$ for the set of edges in $A$ and $e(A)$ for the number of such edges. Similarly, for vertex subsets $A$ and $B$, the set of edges with one vertex in $A$ and the other in $B$ is denoted by $E(A,B)$ and the number of such edges is $e(A,B)=|E(A,B)|$. If the underlying graph $G$ is not clear from context, we include it in the notation.
\section{Proof of Theorem~\ref{thmbooknearquarter}}
The following lemma gives a bound on the maximum cut of a graph with few triangles. The result and proof in the special case of triangle-free graphs is due to Erd\H{o}s, Faudree, Pach and Spencer \cite{EFPS}.
\begin{lem}\label{firstlemma}
If $G$ is a graph with $n$ vertices, $m$ edges and $t$ triangles, then $G$ can be made bipartite by deleting
at most $m-\frac{4m^2}{n^2}+\frac{6t}{n}$ edges.
\end{lem}
\begin{proof}
We will show that there is a vertex $x$ for which $N(x)$ and $\overline{N(x)}$ form the desired bipartition of the vertex set by picking $x$ uniformly at random. The expected number of edges in the neighborhood of $x$ is $3t/n$. The expected number of edges in $\overline{N(x)}$ is
\begin{align*}
\frac{1}{n}\sum_{x \in V(G)} e(\overline{N(x)}) & = \frac{1}{n}\sum_{(a,b) \in E(G)} \left(n-d(a)-d(b)+d(a,b)\right)\\
& = m+\frac{3t}{n}-\frac{1}{n}\sum_{a \in V(G)} d(a)^2 \leq m+\frac{3t}{n}-\frac{4m^2}{n^2},
\end{align*}
where the first equality follows by double counting the number of triples $(x,a,b)$ of vertices where $(a,b)$ is an edge but $(x,a)$ and $(x,b)$ are not edges and the last inequality is by Cauchy--Schwarz. Thus, the expected number of edges in $N(x)$ and $\overline{N(x)}$ is at most $m+\frac{6t}{n}-\frac{4m^2}{n^2}$. Hence, there exists a choice of $x$ for which this random variable is at most the expected value.
\end{proof}
We use Lemma~\ref{firstlemma} to prove the following result, which gives conditions under which a graph contains a large induced bipartite subgraph.
\begin{lem}\label{secondlemma}
Let $G$ be a graph with $n$ vertices, $m \geq n^2/4$ edges, $t \leq c^2n^3/24$ triangles and book number $b \leq \left(\frac{1}{2}-c\right)n$. Then $G$ has an induced bipartite subgraph containing all but at most $48t/cn^2$ vertices.
\end{lem}
\begin{proof}
By Lemma \ref{firstlemma} and $m \geq n^2/4$, $G$ has a vertex partition $V(G)=A_0 \cup B_0$ such that all but at most $6t/n \leq c^2n^2/4$ edges are in $A_0 \times B_0$. We have $|A_0|,|B_0| \geq \left(1-c\right)n/2$, as otherwise the number of edges in $G$ is at most $|A_0||B_0|+c^2n^2/4 < n^2/4$, a contradiction.
Let $A$ consist of all vertices $a \in A_0$ with more than $(b+|B_0|)/2$ neighbors in $B_0$. The set $A$ is independent, as otherwise we would have an edge in more than $b$ triangles. Each vertex $a \in A_0 \setminus A$ has at least $(|B_0|-b)/2 \geq cn/4$ missing edges to $B_0$, so there are at least $|A_0 \setminus A| \cdot cn/4$ missing edges from $A_0 \setminus A$ to $B_0$. On the other hand, the number of missing edges across $A_0 \times B_0$ is at most $|A_0||B_0|-(m-6t/n) \leq 6t/n$, so it follows that $|A_0 \setminus A| \leq (6t/n)/(cn/4) =24t/cn^2$. Similarly, letting $B$ consist of all vertices $u \in B_0$ with more than $(b+|A_0|)/2$ neighbors in $A_0$, we have that $B$ is independent and $|B_0 \setminus B| \leq 24t/cn^2$. Thus, $A \cup B$ induces a bipartite subgraph that contains all but at most $48t/cn^2$ vertices.
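To make the independence of $A$ explicit: if two vertices $a,a' \in A$ were adjacent, then, counting common neighbors in $B_0$,
\[ d_{B_0}(a,a') \geq d_{B_0}(a)+d_{B_0}(a')-|B_0| > 2 \cdot \frac{b+|B_0|}{2}-|B_0| = b, \]
so the edge $(a,a')$ would be in more than $b$ triangles, contradicting that the book number of $G$ is at most $b$.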
\end{proof}
Our goal in this section is to prove Theorem \ref{thmbooknearquarter}. Since the proof is somewhat long, we first give an outline. Let $G$ be a graph on $n$ vertices with at least $n^2/4$ edges and book number at most $b$ (where $b<n/4$ but is not much smaller) which is not the balanced complete bipartite graph, but contains as few triangles as possible. We let $H$ be an induced bipartite subgraph of $G$ with the maximum number of vertices. Let $A$ and $B$ be the parts of $H$ and let $C$ be the remaining vertices, so that $A$, $B$ and $C$ form a vertex partition of $G$. We begin the proof by deriving some simple properties of the graph $G$. For instance, as there are not many triangles in $G$, we can use Lemma \ref{secondlemma} to deduce that $|C|$ is small. We can also conclude that $C$ is nonempty since $G$ has at least $n^2/4$ edges but is not the balanced complete bipartite graph. Moreover, by the choice of $H$, every vertex in $C$ has a neighbor in both $A$ and $B$. With a little more effort, we can even show that the degree of each vertex in $C$ is at least $\max(|A|,|B|)$.
From this point on, we do not need to use the fact that each edge of $G$ is in at most $b$ triangles, just that a random edge from $E(A \cup B,C)$ is in expectation in at most $b$ triangles. We form a new graph $G_1$ on the same vertex set as $G$ by adding edges to $A \times B$ to make $A$ complete to $B$ and deleting the same number of edges from $E(A \cup B,C)$. We can do this so that in $G_1$ each vertex in $C$ has degree at most $b$ to $A$ and degree at most $b$ to $B$, the total number of triangles does not increase and a random edge in $(A \cup B) \times C$ is in expectation in at most $b$ triangles. We are not able to guarantee that $G_1$ has book number at most $b$, but tracking this related expectation is sufficient for our purposes.
We now form another graph $G_2$ from $G_1$ by deleting edges in $E(C)$ and adding an equal number of edges to $(A \cup B) \times C$ so that each vertex in $C$ has degree $b$ to $A$ and degree $b$ to $B$. There are three types of triangle in $G_2$, those with exactly $i$ vertices in $C$ for $i=1,2,3$. It is easy to compute a lower bound on the number of type $1$ or $2$ triangles in $G_2$ and we show that this also gives a lower bound in $G_1$. Furthermore, the expected number of triangles containing a random edge of $E(G_2) \cap (A \cup B) \times C$ is at most the expected number of triangles containing a random edge of $E(G_1) \cap (A \cup B) \times C$. If $|C|<n-4b$, this expected number is larger than $b$, contradicting the fact that the corresponding expected number in $G$ is at most $b$. If $|C| \geq n-4b$, we find that the number of type $1$ or $2$ triangles in $G_2$ (and, hence, in $G$) is at least $b^2(n-4b)$, with equality only if $|C|=n-4b$. Furthermore, equality can occur only if $G = G_2$ and all triangles are of type $1$, so no edge in $C$ is in a triangle. But equality also implies that $|E(C)| \geq |C|^2/4$, so Mantel's theorem forces $C$ to induce a balanced complete bipartite graph. The parts of this partition determine two parts of the graph $S_{b,n}$, while the set of neighbors and nonneighbors of any vertex in $C$ partition each of $A$ and $B$ into two pieces, determining the remaining parts.
\vspace{3mm}
{\it Proof of Theorem \ref{thmbooknearquarter}.} Let $G$ be a graph on $n$ vertices with $m \geq n^2/4$ edges and book number at most $b=(1-\epsilon)n/4$, where $\epsilon \leq 1/500$, which is not the balanced complete bipartite graph, but for which the number $t$ of triangles is as small as possible. As the graph $S_{b,n}$ satisfies all of these conditions except possibly the last and has $b^2(n-4b)$ triangles, we may assume that $t \leq b^2(n-4b) \leq \epsilon n^3/16$.
Let $H$ be the largest induced bipartite subgraph of $G$ and let $A$ and $B$ denote the parts of $H$ with $|A| \geq |B|$. Let $C=V(G)\setminus V(H)$. If a vertex in $C$ is not adjacent to some vertex in $A$, then we can add it to $A$ and get a larger induced bipartite subgraph of $G$, a contradiction. Since similar reasoning holds with $B$ in place of $A$, we have the following claim.
{\bf Claim 1:} Every vertex in $C$ has a neighbor in both $A$ and $B$.
By Lemma \ref{secondlemma} with $c=1/4$, we have the next claim.
{\bf Claim 2:} $|C| \leq 192t/n^2 \leq 12\epsilon n$.
If $|C|=0$, then $G$ is bipartite and, as the number of edges is at least $n^2/4$, $G$ has to be the balanced complete bipartite graph, a contradiction which yields the following claim.
{\bf Claim 3:} The set $C$ is nonempty.
We next observe that $G$ must have large minimum degree.
{\bf Claim 4:} Every vertex in $B \cup C$ has degree at least $|A|$ and every vertex in $A$ has degree at least $|B|$.
{\bf Proof:} Suppose that $v \in B \cup C$. If $d(v)<|A|$, we can delete all edges containing $v$ and then make $v$ complete to $A$. This operation increases the number of edges of $G$ and, as $v$ is not in any triangle in the new graph, does not increase $b(G)$ or $t(G)$. We can then delete an edge of the resulting graph which is in a triangle, obtaining a new graph $G'$ with at least $n^2/4$ edges which still has $b(G') \leq b$ but has fewer triangles than $G$. If $G'$ has zero triangles, then it is the balanced complete bipartite graph on an even number of vertices and the deleted edge would be in $n/2$ triangles, contradicting that the book number is at most $n/4$. Otherwise, $G'$ contradicts the choice of $G$ and the claim follows. The case where $v \in A$ follows similarly. \qed
As $A$ is an independent set with minimum degree at least $|B|$, each vertex $u \in A$ is adjacent to all but at most $|C| $ vertices in $B$. Similarly, every vertex in $B$ is adjacent to all but at most $|C|$ vertices in $A$. We thus have the following claim.
{\bf Claim 5:} Every vertex in $A$ (respectively, $B$) is adjacent to all but at most $|C|$ vertices in $B$ (respectively, $A$).
From Claims 1 and 5, we have the following claim, as otherwise $v$ is in an edge in more than $b$ triangles.
{\bf Claim 6:} For every vertex $v \in C$, $d_A(v),d_B(v) \leq b+|C|$.
From the previous claim, for each vertex $v \in C$, we have
$$d_A(v) = d(v)-d_C(v)-d_B(v) \geq |A|-|C|-(b+|C|) \geq \frac{n-|C|}{2}-|C|-(b+|C|)=\frac{n}{2}-b-\frac{5}{2}|C|.$$
Since the same bound clearly holds for $d_B(v)$, we have the following claim.
{\bf Claim 7:} For every vertex $v \in C$, $d_A(v),d_B(v) \geq \frac{n}{2}-b-\frac{5}{2}|C|$.
Let $D=D(G)=\max_{v \in C}(d_A(v),d_B(v))$ and $d = d(G)=\min_{v \in C}(d_A(v),d_B(v))$ so that $d \leq d_A(v),d_B(v) \leq D$ for all vertices $v \in C$. In general, for a graph parameter, we will usually not specify the graph if it is $G$, but we will if it is another graph, as we did in the proof of Claim 4.
{\bf Claim 8:} $|C| > \frac{2}{3}\left(n-4b\right)$.
{\bf Proof:} Suppose otherwise, that $|C| \leq \frac{2}{3}\left(n-4b\right)$. If $D \leq b$, then the total number of edges of $G$ is at most
\begin{eqnarray*} |A||B|+\sum_{v \in C} (d_A(v)+d_B(v))+{|C| \choose 2} & < & \left(\frac{n-|C|}{2}\right)^2+2b|C|+\frac{|C|^2}{2}
\\ & = & \frac{n^2}{4}+\frac{|C|}{2}\left(\frac{3}{2}|C|-(n-4b)\right) \\ & \leq & \frac{n^2}{4},\end{eqnarray*}
a contradiction. Thus, we must have $D > b$. Suppose $D=d_A(v)$ with $v \in C$ (the case $D=d_B(v)$ is handled in the same way).
For each $u \in N_B(v)$, as the edge $(u,v)$ is in at most $b$ triangles, there must be at least $D-b$ missing edges from $u$ to $N_A(v)$. Hence, there are at least $(D-b)d_B(v)$ missing edges between $A$ and $B$. Then the number of edges in $G$ is at most
\begin{eqnarray*} |A||B|-(D-b)d_B(v)+2D|C|+{|C| \choose 2} \! \!& < & \!\! \left(\frac{n-|C|}{2}\right)^2-(D-b)\left(\frac{n}{2}-b-\frac{5}{2}|C|\right)+2D|C|+\frac{|C|^2}{2} \\ \! \! & = &\!\! \frac{n^2}{4}+\frac{|C|}{2}\left(\frac{3}{2}|C|-(n-4b)\right)-(D-b)\left(\frac{n}{2}-b-\frac{9}{2}|C|\right)
\\\! \! & < &\! \! \frac{n^2}{4}+\frac{|C|}{2}\left(\frac{3}{2}|C|-(n-4b)\right)
\\\!\! & \leq & \! \!\frac{n^2}{4},
\end{eqnarray*}
a contradiction. The first inequality above uses Claim 7, while the second inequality uses $D-b>0$ and $\frac{n}{2}-b-\frac{9}{2}|C|>0$, which follows from $b \leq \frac{n}{4}$, Claim 2 and $\epsilon < 1/216$.
\qed
For $i \in \{0,1,2,3\}$, we say that a triangle in $G$ is of type $i$ if it contains exactly $i$ vertices from $C$. We let $t_i$ denote the number of triangles of type $i$. As there are no triangles in $H=G[A \cup B]$, we have $t_0=0$. Let $t'=t_1+t_2$ be the number of triangles of type 1 or 2.
Let $\bar b(G)$ denote the expected number of triangles containing a random edge in $E(A \cup B,C)$. That is, $\bar b(G)=2t'(G)/e_G(A \cup B,C)$.
{\bf Claim 9:} There is a graph $G_1$ with $V(G_1)=V(G)$ and $e(G_1)=e(G)$ such that $G_1$ induces a complete bipartite graph on $A \cup B$ with parts $A$ and $B$, $D(G_1) \leq b$, $d(G_1) \geq \frac{n}{4}-\frac{5}{2}|C|$, $t(G_1) \leq t(G)$, $t'(G_1) \leq t'(G)$ and $\bar b(G_1) \leq \bar b(G)$. Moreover, if $G_1 \not =G$, then $t(G_1)<t(G)$.
{\bf Proof:}
Suppose there are $s$ missing edges between $A$ and $B$ in $G$. Consider adding all $s$ missing edges between $A$ and $B$ (so $A$ is now complete to $B$) and then deleting $s$ edges between $C$ and $A \cup B$, deleting them one at a time from a vertex in $C$ of largest degree to $A$ or $B$ to obtain a new graph $G_1$. To see that this process is possible, note that each vertex $v \in B$ has degree at least $|A|$ by Claim 4 and so has at least as many neighbors in $C$ as it has nonneighbors in $A$. Note, by construction, that $V(G_1) = V(G)$ and $e(G_1)=e(G)$.
If $D(G)>b$ and $v$ is a vertex with $d_A(v)=D(G)$ (the case $d_B(v)=D(G)$ is handled in the same way), then, in the graph $G$, for each $u \in N_B(v)$, the edge $(u,v)$ is in at most $b$ triangles, so $u$ has at least $D(G)-b$ missing edges to $A$. Thus, by Claim 7,
$$s \geq d_B(v)(D(G)-b) \geq \left(\frac{n}{2}-b-\frac{5}{2}|C|\right)(D(G)-b) \geq \left(\frac{n}{4}-\frac{5}{2}|C|\right)(D(G)-b) \geq 2|C|(D(G)-b),$$
where the last inequality follows from Claim 2 and $\epsilon < 1/216$. The final expression is an upper bound on the number of edges that must be deleted between $C$ and $A \cup B$ to guarantee $\max_{v \in C}(d_A(v),d_B(v)) \leq b$. We thus have $D(G_1) \leq b$ in this case. If $D(G) \leq b$, then, since we only deleted edges between $C$ and $A \cup B$ to make $G_1$, $D(G_1) \leq D(G) \leq b$. Hence, in either case, we have $D(G_1) \leq b$.
Observe that if $D(G_1)>d(G_1)+1$, then we must have $d(G_1)=d(G)$ as if, say, $d_A(v)=d(G)$, we would never delete an edge from $v$ to $A$ in the process of obtaining $G_1$. In this case, we have, by Claim 7, that $d(G_1)=d(G) \geq \frac{n}{2}-b-\frac{5}{2}|C| \geq \frac{n}{4}-\frac{5}{2}|C|$. Otherwise, we have that the degrees $d_A(v)$, $d_B(v)$ in $G_1$ are all simply the average degree rounded up or down. The number of edges of $G_1$ between $C$ and $A \cup B$ satisfies
$$e_{G_1}(A \cup B,C) \geq \frac{n^2}{4}-|A||B|-{|C| \choose 2} \geq \frac{n^2}{4}-\left(\frac{n-|C|}{2}\right)^2-\frac{|C|^2}{2}=\frac{|C|n}{2}-\frac{3}{4}|C|^2.$$
So the average value of $d_X(v)$ over all $2|C|$ choices of $v \in C$ and $X \in \{A,B\}$ is at least $\frac{n}{4}-\frac{3}{8}|C|$.
Hence,
\begin{equation}\label{dG1}
d(G_1) \geq \min\left(\frac{n}{4}-\frac{5}{2}|C|,\frac{n}{4}-\frac{3}{8}|C|-1\right) \geq \frac{n}{4}-\frac{5}{2}|C|.
\end{equation}
Since each of the $s$ edges added between $A$ and $B$ is in at most $|C|$ triangles, in total this process added at most $s|C|$ triangles. Once these edges have been added and the graph between $A$ and $B$ is complete bipartite, we remove the $s$ edges from between $C$ and $A \cup B$. Since each such edge is contained in at least $d(G_1)$ triangles, we remove at least $s d(G_1)$ triangles in total. Hence,
$$t'(G_1)-t'(G) \leq s(|C|-d(G_1)).$$
As $14|C| \leq 14 \cdot 12 \epsilon n < n$ for $\epsilon < 1/168$, it follows from (\ref{dG1}) that $d(G_1) > |C|$ and, hence, that $t'(G_1) \leq t'(G)$. As no edges in $C$ are added or deleted in obtaining $G_1$ from $G$, we have $t_3(G_1)=t_3(G)$ and, hence, $t(G_1) \leq t(G)$. Moreover, if $s \not =0$, then $t'(G_1)<t'(G)$ and, hence, $t(G_1)<t(G)$.
Finally, we check that $\bar b(G_1) \leq \bar b(G)$. This is equivalent to showing that
$$\frac{2t'(G_1)}{e_{G_1}(A \cup B,C)} \leq \frac{2t'(G)}{e(A \cup B,C)}$$
and, as $e_{G_1}(A \cup B,C)=e(A \cup B,C)-s$, this is equivalent to showing that
$$\left(t'(G)-t'(G_1)\right)e(A \cup B,C) \geq st'(G).$$
From the bound $t'(G)-t'(G_1) \geq s(d(G_1)-|C|)$, this would follow if we could show that
$$(d(G_1)-|C|)e(A \cup B,C) \geq t'(G).$$
Each edge in $E(A \cup B,C)$ is in at most $b$ triangles in $G$ and each type $1$ or $2$ triangle has exactly two such edges, so $t'(G) \leq e(A \cup B,C)b/2$. Hence, it suffices to show that $d(G_1)-|C|\geq b/2$, which follows from (\ref{dG1}), $|C| \leq 12\epsilon n$, $b \leq n/4$ and $\epsilon \leq 1/336$. This completes the proof of Claim 9. \qed
{\bf Claim 10:} $|C| \leq \frac{1}{1-240\epsilon}(n-4b) \leq 2(n-4b)=2\epsilon n$.
{\bf Proof:} Suppose, for the sake of contradiction, that $|C|>\frac{1}{1-240\epsilon}(n-4b)$. Each vertex $v \in C$ is in $d_A(v,G_1)d_B(v,G_1)$ type $1$ triangles in $G_1$. As $d_A(v,G_1)d_B(v,G_1)\geq d(G_1)^2 \geq \left(\frac{n}{4}-\frac{5}{2}|C|\right)^2$ by Claim 9, the number of triangles in $G_1$ is at least $$|C|\left(\frac{n}{4}-\frac{5}{2}|C|\right)^2 \geq \left(\frac{n}{4}\right)^2|C|\left(1-\frac{20|C|}{n}\right).$$
This last expression is an increasing function of $|C|$ for $|C|\leq \frac{n}{40}$ (which holds since $|C| \leq 12\epsilon n$ and $\epsilon \leq 1/480$). Hence, as $b \leq n/4$ and $\frac{1}{1-240\epsilon}(n-4b) < |C| \leq 12\epsilon n$, we have that the number of triangles in $G_1$ (and, hence, $G$) is larger than $b^2(n-4b)$, a contradiction. \qed
For any graph $G'$ on $V(G)$ for which $A \cup B$ induces a complete bipartite graph, the number of triangles containing an edge $(u,v) \in E_{G'}(A,C)$ is
$$d_B(u,v,G')+d_{C}(u,v,G')=d_B(v,G')+d_C(u,v,G') \geq d_B(v,G')+d_C(u,G')+d_C(v,G')-|C|.$$
Similarly, if $(u,v) \in E_{G'}(B,C)$, then the number of triangles in $G'$ containing the edge $(u,v)$ is at least $d_A(v,G')+d_C(u,G')+d_C(v,G')-|C|$. Summing over all edges in $E_{G'}(A \cup B,C)$ and using the fact that each type 1 or 2 triangle contains exactly two such edges, the number of type 1 or 2 triangles in $G'$ is at least $\tilde{t}(G')$, defined by
$$2\tilde{t}(G'):=-|C|e_{G'}(A \cup B,C)+\sum_{v\in C} \left(2d_A(v,G')d_B(v,G')+d_C(v,G')d_{A \cup B}(v,G')\right)+\sum_{u \in A \cup B}d_C(u,G')^2.$$
To see this, note, for example, that each term of the form $d_C(u, G')$ appears $d_C(u, G')$ times, once for each edge $(u, v) \in E_{G'}(A \cup B, C)$.
{\bf Claim 11:} There is a graph $G_2$ obtained from $G_1$ by deleting some edges with both vertices in $C$ and adding an equal number of edges to $(A \cup B) \times C$ such that $d_A(v,G_2)=d_B(v,G_2)=b$ for all $v \in C$, $\tilde{t}(G_2) \leq \tilde{t}(G_1)$ and $e_{G_2}(A \cup B,C) \geq e_{G_1}(A \cup B,C)$. Moreover, if
$G_2 \not =G_1$, then $\tilde{t}(G_2)<\tilde{t}(G_1)$.
{\bf Proof:} As $d_A(v,G_1),d_B(v,G_1) \leq b$, we can repeatedly delete an arbitrary edge inside $C$ (as long as any remain) and add an edge of $(A \cup B) \times C$ to obtain a graph $G_2$ with $d_A(v,G_2)=d_B(v,G_2)=b$ for all $v \in C$. This is possible because, by Claim 10 and $b = (1 - \epsilon)n/4$, the number of edges of $G_2$ outside $C$ is
$$|A||B|+2b|C| \leq \left(\frac{n-|C|}{2}\right)^2+2b|C| = \frac{n^2}{4}+\frac{|C|^2}{4}-\frac{\epsilon}{2}|C|n \leq \frac{n^2}{4},$$
leaving enough room for a nonnegative number of edges in $C$. Note that, by construction, $G_2$ has at least as many edges across $(A \cup B) \times C$ as $G_1$.
Let $G'$ be a graph obtained at some stage of the process of transforming $G_1$ into $G_2$. If we delete an edge $(v,v')$ from $G'$ with $v,v'\in C$ to obtain $G''$, then this decreases the value of $2\tilde{t}$ by $d_{A \cup B}(v,G')+d_{A \cup B}(v',G') \geq 4\left( \frac{n}{4}-\frac{5}{2}|C|\right)=n-10|C|$, where the inequality is by the lower bound on $d(G_1)$ from Claim 9. If we add an edge $(u,v) \in (A \cup B) \times C$ to this graph (with, say, $u \in A$), it increases the value of $2\tilde{t}$ by $$-|C|+ 2 d_B(v,G'')+d_C(v,G'')+2d_C(u,G'')+1 \leq 2|C|+1+ 2b,$$ where the last inequality uses $d_C(v,G''),d_C(u,G'') \leq |C|$ and $d_B(v,G'') \leq b$. Hence, in deleting an edge with both vertices in $C$ and adding an edge in $(A \cup B) \times C$, we decreased the value of $2\tilde{t}$ by at least $n-10|C|-(2|C|+1+2b) \geq \frac{n}{2}-13|C| \geq \frac{n}{2}-156\epsilon n > 0$, where we used Claim 2. Thus, in the process of going from $G_1$ to $G_2$, $\tilde{t}$ decreases at each step, so
$\tilde{t}(G_2) \leq \tilde{t}(G_1)$, with equality only if $G_2 = G_1$. \qed
We have
\begin{eqnarray*} 2\tilde{t}(G_2) & = & -2|C|^2b+2|C|b^2+4be_{G_2}(C)+\sum_{u \in A \cup B} d_C(u,G_2)^2 \\ & \geq & -2|C|^2b+2|C|b^2+4b\left(n^2/4-|A||B|-2b|C|\right)+4|C|^2b^2/|A \cup B| \\ & \geq &
-2|C|^2b+2|C|b^2+4b\left(n^2/4-\left((n-|C|)/2\right)^2 -2b|C|\right)+4|C|^2b^2/|A \cup B|
\\ & = & -3|C|^2b-6|C|b^2+2|C|bn+4|C|^2b^2/(n-|C|),\end{eqnarray*}
where, in the first inequality, we used $e_{G_2}(C) = e(G_2)-|A||B|-2b|C| \geq n^2/4-|A||B|-2b|C|$ together with the Cauchy--Schwarz inequality and $\sum_{u \in A \cup B} d_C(u,G_2) = 2 |C| b$. The last expression, viewed as a function of $|C|$, is increasing for $b$ in the range of interest and $|C|$ in the range determined by Claims 8 and 10, as can be seen by taking the derivative with respect to $|C|$. Using this fact, we may evaluate this expression at $|C|=n-4b$ to conclude that
$$b^2(n-4b) \leq \tilde{t}(G_2) \leq \tilde{t}(G_1) \leq t(G_1) \leq t(G)$$
for $|C| \geq n-4b$. Furthermore, the only way we could get equality in the above bound is if $|C|=n-4b$, $|A|=|B|=2b$ and if we moved no edges in getting $G_1$ from $G$ and $G_2$ from $G_1$, so that $G_2$ and $G$ are the same. Therefore, in $G$, $A$ is complete to $B$ and $d_A(v)=d_B(v)=b$ for each vertex $v \in C$. Hence, as each vertex in $C$ is in $b^2$ type $1$ triangles, the number of triangles of type $1$ in $G$ is $b^2(n-4b)$ so there are no type $2$ or $3$ triangles in $G$. In particular, no edge in $C$ belongs to a triangle. On the other hand,
$$e(C) \geq \frac{n^2}{4} - |A||B| - 2b|C| \geq \frac{n^2}{4} - \left(\frac{n - |C|}{2}\right)^2 - 2b|C| = -\frac{|C|^2}{4} + \frac{\epsilon}{2} |C| n = \frac{|C|^2}{4},$$
where, in the last equality, we used that $|C| = n - 4 b = \epsilon n$. As $C$ has at least $|C|^2/4$ edges but induces a triangle-free graph, Mantel's theorem implies that $|C|$ is even (which is equivalent to $n$ being even) and $C$ induces a balanced complete bipartite graph with parts $C_1$, $C_2$ of equal size. As no edge in $C$ is in a triangle with a vertex in $A$ or $B$ and yet $d_A(v)=b=|A|/2$ and $d_B(v)=b=|B|/2$ for each $v \in C$, we have equitable partitions $A=A_1 \cup A_2$ and $B=B_1 \cup B_2$ such that $C_1$ is complete to $A_1 \cup B_1$, $C_2$ is complete to $A_2 \cup B_2$ and there are no other edges between $A \cup B$ and $C$. It is now easy to check that $G$ is the graph $S_{b,n}$ with parts $A_1,B_1,C_1,B_2,A_2,C_2$.
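For completeness, we record the evaluation of the lower bound at $|C|=n-4b$ used above. Since $n-|C|=4b$,
\begin{align*}
2\tilde{t}(G_2) &\geq -3b(n-4b)^2-6b^2(n-4b)+2bn(n-4b)+\frac{4(n-4b)^2b^2}{4b} \\
&= (n-4b)\left(-3b(n-4b)-6b^2+2bn+b(n-4b)\right) \\
&= (n-4b)\left(-2b(n-4b)+2b(n-3b)\right) = 2b^2(n-4b),
\end{align*}
which is the bound $\tilde{t}(G_2) \geq b^2(n-4b)$ claimed above.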
It remains to check the case $|C|<n-4b$. We will show that there is an edge in more than $b$ triangles, a contradiction. Indeed,
\begin{eqnarray*} b(G) & \geq & \bar b(G) \geq \bar b(G_1) = 2 t'(G_1)/e_{G_1}(A \cup B,C) \geq 2 \tilde{t}(G_1)/e_{G_1}(A \cup B,C)\\ & \geq & 2 \tilde{t}(G_2)/e_{G_1}(A \cup B,C) \geq 2 \tilde{t}(G_2)/e_{G_2}(A \cup B,C).\end{eqnarray*}
This last expression is at least
$$\frac{1}{2b|C|}\left( -3|C|^2b-6|C|b^2+2|C|bn+4|C|^2b^2/(n-|C|) \right) = -\frac{3}{2}|C|-3b+n+2|C|b/(n-|C|).$$
In the range of interest, this function is strictly decreasing in $|C|$. Moreover, substituting $|C|=n-4b$ into it gives
$$-\frac{3}{2}(n-4b)-3b+n+\frac{2(n-4b)b}{4b}=-\frac{3}{2}n+6b-3b+n+\frac{n}{2}-2b=b.$$
Since we are assuming that $|C|<n-4b$, it follows that $b(G)>b$, a contradiction. This completes the proof of Theorem \ref{thmbooknearquarter}. \qed
\section{Proof of Theorem~\ref{thmbooknearsixth}} \label{sec:sixth}
The main result of this section, which easily implies Theorem~\ref{thmbooknearsixth}, is as follows.
\begin{thm}
If $\epsilon > 0$ is sufficiently small, then every graph on $n$ vertices with at least $n^2/4$ edges and book number at most $\left( \frac16 + \epsilon^3 \right) n$ which is not the balanced complete bipartite graph has at least $\left( \frac{1}{108} - O( \epsilon ) \right) n^3$ triangles.
\end{thm}
\begin{proof}
Suppose that $G$ is a graph satisfying the assumptions of the theorem. As with Theorem~\ref{thmbooknearquarter}, we will prove the result through a sequence of claims.
{\bf Claim A:}
$G$ is approximately regular, in that $| \{ v : \left| d(v) - \frac{n}{2} \right| \ge \epsilon n \} | \le \epsilon n$.
{\bf Proof:}
Let $b = b(G)$, let $t(G)$ be the number of triangles in $G$ and let $m$ be the number of edges. We use the following inequality, proved by Bollob\'as and Nikiforov \cite{BN05} (see Equation (8) there),
\[ ( 6 b - n ) t(G) \ge b \left( \sum_v d(v)^2 - nm \right). \]
Since $\sum_v d(v)^2 = \sum_v \left( d(v) - \frac{n}{2} \right)^2 + 2mn - \frac{n^3}{4}$, we have
\[ ( 6 b - n ) t(G) \ge b \left( \sum_v \left( d(v) - \frac{n}{2} \right)^2 + nm - \frac{n^3}{4} \right) \geq b \sum_v \left( d(v) - \frac{n}{2} \right)^2. \]
As the right-hand side is non-negative and $t(G) > 0$ (since $G$ is not the balanced complete bipartite graph), it follows that $6 b - n \ge 0$. Using the simple bound $t(G) \le \frac13 b m$, we find that
$$6 b - n \ge \frac{3}{m} \sum_v \left( d(v) - \frac{n}{2} \right)^2 \ge \frac{3}{m} | \{ v : | d(v) - \tfrac{n}{2} | \ge \epsilon n \} | \epsilon^2 n^2.$$
Suppose now that $| \{ v : \left| d(v) - \frac{n}{2} \right| \ge \epsilon n \} | > \epsilon n$. Substituting this in and using $m \leq n^2/2$ yields $6 b - n > \frac{3 \epsilon^3 n^3}{m} \ge 6 \epsilon^3 n$ and, hence, $b > \left( \frac16 + \epsilon^3 \right)n$, a contradiction.
\qed
Now remove any vertices of degree less than $\left( \frac12 - \epsilon \right)n$ from $G$. By Claim A, this gives a new graph $G'$ on $n' \geq (1 - \epsilon)n$ vertices.
Since $G$ had at least $n^2/4$ edges and we removed at most $(n - n')(\frac{1}{2}-\epsilon) n$ edges, $G'$ also has at least
$(n')^2/4$ edges. The minimum degree of $G'$ is at least $(\frac{1}{2}-2\epsilon)n \geq (\frac{1}{2}-2\epsilon)n'$ and $b(G') \leq ( \frac16 + \epsilon^3) n \leq ( \frac16 + \frac{\epsilon}{5}) n'$. For simplicity, we shall again call this smaller graph $G$ and suppose that it has $n$ vertices. Furthermore, increasing $\epsilon$ by at most a factor $2$, we have that the minimum degree of $G$ is at least $(\frac{1}{2}-\epsilon)n$ and $b(G) \leq ( \frac16 + \frac{\epsilon}{10}) n$. The additional error introduced by increasing $\epsilon$ is easily covered by the $O(\epsilon)$ term in our bound on the number of triangles.
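For the edge count of $G'$ claimed above, note that $n' \geq (1-\epsilon)n$ implies $\left(\frac{1}{2}-\epsilon\right)n \leq \frac{n+n'}{4}$ and, hence,
\[ e(G') \geq \frac{n^2}{4}-(n-n')\left(\frac{1}{2}-\epsilon\right)n \geq \frac{n^2}{4}-(n-n')\cdot \frac{n+n'}{4} = \frac{(n')^2}{4}. \]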
Given any vertex $v \in G$, we will use the shorthand $N_v$ for the neighbors of $v$ and $M_v$ for the set of nonneighbors (including $v$). Note that we have $|N_v| \ge \left( \frac12 - \epsilon \right)n$ and $|M_v| \le \left( \frac12 + \epsilon \right) n$ for all $v$. Clearly, for any $v \in G$ and $x \in N_v$, we have $d_{N_v}(x) \le b(G) \le \left( \frac16 + \frac{\epsilon}{10} \right)n$.
{\bf Claim B:}
Given $v \in G$ and $x \in N_v$, if $d_{N_v}(x) \ne 0$, then $\left( \frac16 - 4\epsilon \right)n \le d_{N_v}(x) \le \left( \frac16 + \frac{\epsilon}{10} \right) n$.
{\bf Proof:}
Let $y \in N_v$ be a neighbour of $x$. Note that $x$ and $y$ both have degree at least $\left( \frac12 - \epsilon \right)n$. Thus, the number of common neighbors in $M_v$ is at least
\begin{eqnarray}
\label{claimB}
d_{M_v}(x,y) &\ge& d_{M_v}(x) + d_{M_v}(y) - |N_{M_v}(x) \cup N_{M_v}(y)| \ge d_{M_v}(x) + d_{M_v}(y) -|M_v|\nonumber\\
&\ge& \left( \frac12 - 3 \epsilon \right) n - d_{N_v}(x) - d_{N_v}(y),
\end{eqnarray}
where we used that $d_{M_v}(x) = d(x) - d_{N_v}(x) \geq \left(\frac{1}{2} - \epsilon\right) n - d_{N_v}(x)$ and $|M_v| \le \left( \frac12 + \epsilon \right) n$. Using the bounds $d_{M_v}(x,y) \le d(x,y) \le b(G) \le \left( \frac16 + \frac{\epsilon}{10} \right)n$ and $d_{N_v}(y) \le b(G) \le \left( \frac16 + \frac{\epsilon}{10} \right) n$, we deduce that $d_{N_v}(x) \ge \left( \frac16 - 3 \epsilon - \frac{\epsilon}{5} \right) n \ge \left( \frac16 - 4 \epsilon \right)n$.
\qed
This proof also shows that if $x$ and $y$ are neighbors in $N_v$, then they have at least $\left( \frac16 - 4 \epsilon \right)n$ common neighbors in $M_v$. Therefore,
they can have at most $O( \epsilon n)$ common neighbors in $N_v$. Note also that we must have $d(v) = |N_v| \le \left( \frac12 + O( \epsilon ) \right) n$, since if $|M_v|$ were any smaller than $\left( \frac12 - O( \epsilon ) \right) n$, inequality (\ref{claimB}) would force $d_{M_v}(x,y) > \left( \frac16 + \frac{\epsilon}{10} \right) n$.
{\bf Claim C:}
Suppose $e(N_v) > 0$. Then $e(N_v) \ge \left( \frac{1}{36} - O( \epsilon ) \right)n^2$ and there are at least $\left( \frac{1}{216} - O( \epsilon ) \right)n^3$ triangles with two vertices in $N_v$ and one in $M_v$.
{\bf Proof:}
Suppose $x \sim y$ in $N_v$. Then $d_{N_v}(x) > 0$, so $d_{N_v}(x) \ge \left( \frac16 - 4 \epsilon \right)n$ and similarly for $y$. Moreover, since $x$ and $y$ have at most $O( \epsilon n)$ common neighbors in $N_v$, the neighbors of $x$ and the neighbors of $y$ in $N_v$ give at least $\left( \frac13 - O( \epsilon ) \right)n$ vertices of positive degree in $G[N_v]$, each of which has degree at least $\left( \frac16 - 4 \epsilon \right)n$. Thus, $e(N_v) \ge \frac12 \left( \frac13 - O( \epsilon ) \right) \left( \frac16 - 4 \epsilon \right) n^2 = \left( \frac{1}{36} - O( \epsilon ) \right) n^2$. Moreover, each of these edges must have at least $\left( \frac16 - O( \epsilon ) \right) n$ common neighbors in $M_v$, giving $\left( \frac{1}{36} - O( \epsilon ) \right) \left( \frac16 - O( \epsilon ) \right) n^3 = \left( \frac{1}{216} - O( \epsilon ) \right) n^3$ triangles with two vertices in $N_v$ and one in $M_v$.
\qed
{\bf Claim D:}
For every $v \in G$, $e(N_v) = e(M_v) + O( \epsilon n^2)$.
{\bf Proof:}
We have $\sum_{x \in N_v} d(x) = 2e(N_v) + e(N_v,M_v)$ and $\sum_{x \in M_v} d(x) = 2e(M_v) + e(N_v, M_v)$. Since the graph is almost regular by Claim A and $|N_v|, |M_v| = \left( \frac12 + O( \epsilon ) \right)n$, it follows that the two sums are approximately equal, that is, $e(N_v) = e(M_v) + O( \epsilon n^2)$.
\qed
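The step from the two degree sums to the conclusion can be made explicit. Assuming, as the appeal to Claim A suggests, that every degree is $\left( \frac12 + O(\epsilon) \right)n$, and that $|N_v|, |M_v| = \left( \frac12 + O( \epsilon ) \right)n$, we have

```latex
\sum_{x \in N_v} d(x)
  = |N_v| \left( \tfrac12 + O(\epsilon) \right) n
  = \left( \tfrac14 + O(\epsilon) \right) n^2
  = |M_v| \left( \tfrac12 + O(\epsilon) \right) n
  = \sum_{x \in M_v} d(x) + O(\epsilon n^2) .
```

Subtracting the two identities from the first sentence of the proof cancels $e(N_v, M_v)$ and leaves $2e(N_v) - 2e(M_v) = O(\epsilon n^2)$.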
{\bf Claim E:}
For every $v \in G$, there is some $w \in N_v$ with $d_{N_v}(w) = 0$.
{\bf Proof:}
Suppose on the contrary that $d_{N_v}(w) > 0$ for all $w \in N_v$. Then, by $|N_v| \geq \left( \frac12 - \epsilon \right)n$ and Claim B,
\[ e(N_v) \ge \frac12 \left( \frac12 - \epsilon \right) \left( \frac16 - 4 \epsilon \right)n^2 = \left( \frac{1}{24} - O( \epsilon ) \right) n^2. \]
Each of these edges extends to at least $\left( \frac16 - O( \epsilon ) \right)n$ triangles with a vertex in $M_v$, giving at least $\left( \frac{1}{144} - O(\epsilon) \right)n^3$ such triangles. Moreover, by Claim D, we have $e(M_v) \ge \left( \frac{1}{24} - O( \epsilon ) \right) n^2$.
Now consider any edge $x \sim y$ in $N_v$ and count the number of triangles containing $x$ or $y$ with two vertices in $M_v$. By Claim B, we have $|N_{M_v}(x)| \leq \left(\frac{1}{3} + O(\epsilon)\right) n$. Together with $d_{M_v}(x, y) \ge \left( \frac16 - 4 \epsilon \right)n$, this implies that $| N_{M_v}(x) \setminus N_{M_v}(y) | \le \left(\frac16 + O(\epsilon) \right)n$ and similarly for $| N_{M_v}(y) \setminus N_{M_v}(x)|$.
The proof of Claim B also implies that $|M_v \setminus (N_{M_v}(x) \cup N_{M_v}(y))|=O(\epsilon n)$, otherwise, from inequality (\ref{claimB}), we have that $d_{M_v}(x,y)$ is too big, a contradiction.
Therefore, there are at most $\left( \frac{1}{36} + O(\epsilon) \right)n^2$ edges in $M_v$ that do not form a triangle with $x$ or $y$. This leaves at least $\left( \frac{1}{72} - O(\epsilon) \right)n^2$ edges that do form a triangle with at least one of $x$ or $y$. Summing these up for every edge in $N_v$ gives $\left( \frac{1}{1728} - O( \epsilon ) \right)n^4$. By Claim B, each vertex $x$ has degree at most $\left(\frac16 + \frac{\epsilon}{10} \right) n$ in $N_v$, so this gives an upper bound on the number of times any triangle can be counted. Hence, the total number of such triangles (one vertex in $N_v$, two in $M_v$) is at least $\left( \frac{1}{288} - O(\epsilon) \right)n^3$. This implies that the total number of triangles in $G$ is at least $\left( \frac{1}{96} - O( \epsilon ) \right) n^3$, which is too large.
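The closing count simply adds the two triangle estimates obtained above:

```latex
\left( \tfrac{1}{144} - O(\epsilon) \right) n^3
+ \left( \tfrac{1}{288} - O(\epsilon) \right) n^3
= \left( \tfrac{3}{288} - O(\epsilon) \right) n^3
= \left( \tfrac{1}{96} - O(\epsilon) \right) n^3 .
```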
\qed
We may now complete the proof. Since $e(G) \geq n^2/4$ and $G$ is not the balanced complete bipartite graph, $G$ must contain a triangle. Let $v$ be a vertex of this triangle, so that $e(N_v) > 0$. By Claim C, we have $e(N_v) \ge \left( \frac{1}{36} - O( \epsilon ) \right)n^2$ and at least $\left( \frac{1}{216} - O( \epsilon ) \right) n^3$ triangles with two vertices in $N_v$. Moreover, by Claim E, there is some $w \in N_v$ with $d_{N_v}(w) = 0$, which implies that $N_w \subset M_v$. Note now that $|M_v \setminus N_w | = O( \epsilon n)$. Since $e(M_v) = e(N_v) + O(\epsilon n^2)$ by Claim D, it follows that $e(N_w) \ge e(N_v) - O( \epsilon n^2) > 0$. Hence, again by Claim C, there are at least $\left( \frac{1}{216} - O( \epsilon ) \right)n^3$ triangles with two vertices in $N_w$. Since $N_w \cap N_v = \emptyset$, these triangles are distinct from those above, which gives $\left( \frac{1}{108} - O( \epsilon ) \right)n^3$ triangles in total.
\end{proof}
If $\epsilon = 0$ and equality holds throughout the argument above, consider a vertex $v$ which is contained in a triangle and a vertex $x \in N_v$ with $d_{N_v}(x) \neq 0$.
Then, by Claim A, the graph is $n/2$-regular and, by Claim B, we have $d_{N_v}(x) = n/6$. Note, moreover, that $N_v$ is triangle-free by the comments after Claim B, which implies that $N(v, x)$ is an independent set. Similarly, for any $y \in N(v, x)$, $d_{N_v}(y) = n/6$ and $N(v, y)$ must be an independent set. We now split $N_v$ into three parts, each with $n/6$ vertices, namely, $N(v, x)$, $N(v, y)$ and the remainder, which we label $R_v$.
By the proof of Claim C, we see that if $e(N_v)>n^2/36$, then there are more than $n^3/216$ triangles with two vertices in $N_v$ and one in $M_v$. Since, by Claim E, there is a vertex $w \in N_v$ with no neighbors in $N_v$, we have that
$N_w=M_v$ and, hence, there are at least $n^3/216$ triangles with two vertices in $M_v$ and one in $N_v$. So altogether there are more than $n^3/108$ triangles, a contradiction. This implies that $e(N_v)=n^2/36$ and, therefore, there are exactly $n/3$ vertices in $N_v$
with degree $n/6$ in $N_v$. Since the neighbors of $x$ and $y$ must all have positive degree in $N_v$ (which by the above discussion should be $n/6$), we conclude that the vertices in $R_v$ have no neighbors in $N_v$, while there must be a complete bipartite graph between $N(v,x)$ and $N(v,y)$.
Picking now any vertex $u \in R_v$, we see that its neighborhood must be $M_v$, the complement of $N_v$. By the same argument as above, the induced graph on $M_v = N_u$ must consist of a balanced complete bipartite graph between two parts $N(u, x'), N(u, y')$, each with $n/6$ vertices, and a set $R_u$ of $n/6$ vertices with no neighbors in $M_v$, each of which must then be complete to $N_v$. Since there are $n^3/216$ triangles between $R_u, N(v, x)$ and $N(v, y)$ and a similar number between $R_v, N(u, x')$ and $N(u, y')$, we see that there are no more triangles, so any vertex in $N(v, x) \cup N(v, y)$ can only have neighbors in one of $N(u, x')$ or $N(u, y')$ and vice versa. Putting all this together, we see that equality holds only if the graph is the blow-up of a $3$-prism with $n/6$ vertices in each part, as claimed.
\vspace{-2mm}
\section{Concluding remarks}
The most obvious question that we have left open is Conjecture~\ref{mainconj}. Our results only establish this conjecture when $b = n/6$ or when $0.2495 n \leq b < n/4$, so much more remains to be done. In the first instance, it might be interesting to show that there is some $\epsilon > 0$ such that the conjecture holds for all $n/6 \leq b \leq \left(\frac{1}{6} + \epsilon\right) n$.
There are of course many natural variants of Mubayi's question: how does the tradeoff between triangles and books change if we assume there are at least $\alpha n^2$ edges for some $1/4 < \alpha < 1/2$? What happens for larger cliques? What about hypergraphs? But the question also points to a more general meta-question of how the local and global counts for substructures play off against one another. There are many contexts besides graphs in which such questions can be asked.
\vspace{3mm}
{\bf Acknowledgements.} We are grateful to Shagnik Das and Nina Kam\v cev for helpful conversations and are particularly indebted to Shagnik for writing up an early draft of Section~\ref{sec:sixth}. We would also like to thank the anonymous referees for their detailed and insightful reports.
| {
"timestamp": "2019-10-22T02:18:05",
"yymm": "1905",
"arxiv_id": "1905.05312",
"language": "en",
"url": "https://arxiv.org/abs/1905.05312",
"abstract": "A celebrated result of Mantel shows that every graph on $n$ vertices with $\\lfloor n^2/4 \\rfloor + 1$ edges must contain a triangle. A robust version of this result, due to Rademacher, says that there must in fact be at least $\\lfloor n/2 \\rfloor$ triangles in any such graph. Another strengthening, due to the combined efforts of many authors starting with Erdős, says that any such graph must have an edge which is contained in at least $n/6$ triangles. Following Mubayi, we study the interplay between these two results, that is, between the number of triangles in such graphs and their book number, the largest number of triangles sharing an edge. Among other results, Mubayi showed that for any $1/6 \\leq \\beta < 1/4$ there is $\\gamma > 0$ such that any graph on $n$ vertices with at least $\\lfloor n^2/4\\rfloor + 1$ edges and book number at most $\\beta n$ contains at least $(\\gamma -o(1))n^3$ triangles. He also asked for a more precise estimate for $\\gamma$ in terms of $\\beta$. We make a conjecture about this dependency and prove this conjecture for $\\beta = 1/6$ and for $0.2495 \\leq \\beta < 1/4$, thereby answering Mubayi's question in these ranges.",
"subjects": "Combinatorics (math.CO)",
  "title": "Books versus triangles at the extremal density"
} |
https://arxiv.org/abs/math/0412443 | Minimum Perimeter Rectangles That Enclose Congruent Non-Overlapping Circles | \section{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus
 -.2ex}{2.3ex plus .2ex}{\normalsize\bf}}
\begin{document}
\title{Minimum Perimeter Rectangles That Enclose\\
Congruent Non-Overlapping Circles}
\date{}
\maketitle
\begin{center}
\author{Boris D. Lubachevsky \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Ronald L. Graham\\
{\em lubachevsky@netscape.net \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ graham@ucsd.edu}\\
Bell Laboratories \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ University of California \\
600 Mountain Avenue \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ at San Diego \\
Murray Hill, New Jersey \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ La Jolla, California }
\end{center}
\setlength{\baselineskip}{0.995\baselineskip}
\normalsize
\vspace{0.5\baselineskip}
\vspace{1.5\baselineskip}
\begin{abstract}
We use computational experiments
to find the rectangles of minimum perimeter into which a given number
$n$ of non-overlapping congruent circles can be packed. No
assumption is made on the shape of the rectangles.
In many of the packings found,
the circles form the usual regular square-grid or hexagonal
patterns or their hybrids. However,
for most values of $n$ in the tested
range $n \le 5000$, e.g., for
$n = 7, 13, 17, 21, 22, 26, 31, 37, 38, 41, 43, \ldots, 4997, 4998, 4999, 5000$,
we prove that the optimum cannot possibly be achieved
by such regular arrangements.
Usually, the irregularities in
the best packings found for such $n$ are
small, localized modifications to regular patterns;
those irregularities are usually easy to predict.
Yet
for some such irregular $n$,
the best packings found show substantial, extended irregularities
which we did not anticipate.
In the range we explored carefully,
the optimal packings were substantially irregular
only for $n$ of the form
$n = k(k+1)+1$, $k = 3, 4, 5, 6, 7$,
i.e. for $n = 13, 21, 31, 43$, and 57.
Also, we prove that the height-to-width ratio
of rectangles of minimum perimeter containing packings of $n$ congruent
circles tends to 1 as $n \rightarrow \infty$.
{\bf Key words}: disk packings, rectangle, container design,
hexagonal, square grid
{\bf AMS subject classification:} primary 52C15, secondary 05B40, 90C59
\end{abstract}
\section{Introduction}\label{sec:intro}
\hspace*{\parindent}
Consider the task of finding
the rectangular region of least perimeter
that encloses a given number $n$ of
circular disks of equal diameter.
The circles must not
overlap with each other or extend outside the rectangle.
The
aspect ratio of the rectangle,
i.e. the ratio of its height to width,
is variable and
subject to the perimeter-minimizing choice
as well as the positions of the circles
inside the rectangle.
Dense packings of circles in rectangles of a fixed shape,
in particular, in squares,
have been the subject of
many investigations \cite{GL2}, \cite{NO1}, \cite{NO2}, \cite{NO3};
a comprehensive survey is given in \cite{SMC},
see also \cite{Specht}.
When the aspect ratio of the enclosing rectangle is fixed,
the densest packing minimizes
both the area and the perimeter of the rectangle.
However, when the aspect ratio is allowed to vary,
the two optima may differ for the same number of circles $n$.
In 1970,
Ruda relaxed the restriction of the fixed aspect
ratio while trying to minimize the area, see
\cite{Ruda}.
He found the minimum-area packings of $n$ congruent circles
in the variably shaped rectangles for $n \le 8$ and
conjectured the minima for $9 \le n \le 12$.
In \cite{LG}, we extended Ruda's conjectures to
$n \le 5000$.
In this paper we switch our attention to
minimizing the rectangle perimeter,
while keeping the aspect ratio of the rectangle variable.
We report the results of essentially
the same computational procedure for finding the minimum-perimeter
packings as the procedure used in \cite{LG} for finding
the minimum-area packings.
Even though
the optima themselves are usually different,
the structures of the set of optimal
packings turn out to be
very similar
for the two minimization tasks.
In either case for many $n$,
the optimum pattern
is regular, i.e. it is a square-grid pattern,
or a hexagonal pattern, or a hybrid of these two patterns.
One difference is that the occurrence of
the non-regular patterns is more frequent in the minimum-perimeter
case than in the minimum-area case:
the smallest non-regular $n$ for the minimum-perimeter criterion is $n = 7$
while that for the minimum-area criterion is $n = 49$ in \cite{LG};
for almost all $n$ that are close to $n = 5000$
the minimum-perimeter packings are not regular
while
for the majority of $n$ everywhere in the range $1 \le n \le 5000$,
the regular packings
still supply the minimum-area rectangles.
It appeared in \cite{LG}
that
if the minimum-area rectangular packing
for a particular $n \le 5000$ is not regular,
then
the minimum could be obtained by a small and
easy-to-predict,
localized modification to a regular pattern.
In the case of the minimum perimeter,
modifications of the same type
apparently supply the optima for most
numbers of circles $n$ in the studied range
$n \le 5000$ for which the minimum-perimeter rectangular
packing happens not to be regular.
But not for all such numbers!
For certain exceptional numbers of circles $n$,
the packing pattern with the smallest perimeter we found is
complex and/or requires extended modifications to a regular
pattern.\footnote{
We now realize that similar exceptions might also exist in the case of minimizing
the area.
See the footnote
in Section~\ref{sec:largn} for an example.
}
A further surprise was that
these exceptionally irregular packings,
their patterns being complex and unpredictable,
seem to occur quite predictably. In particular,
in the range $n \le 62$ which we explored carefully,
our experiments detected such irregular packings
only for $n$ of the form
$n=k(k+1)+1$, i.e.
for $n = 13, 21, 31, 43$, and $57$.
The best packings
for larger terms $n$ of this sequence,
i.e. for $n = 73$, 91,..., are probably similarly irregular,
although we could not test that as thoroughly
as for the smaller terms
because the computational resources
needed for such a testing grow very rapidly with $n$.
Most of our findings are unproven conjectures,
the outcomes of computer experiments.
We do not know why the exceptionally irregular values of $n$
appear along the sequence $n=k(k+1)+1$
and only speculate
by suggesting a possible reason.
A few facts which we can prove are
explicitly stated as being provable.
\section{Computational method}\label{sec:cmethod}
\hspace*{\parindent}
To obtain the minimum-perimeter packing conjectures
we use a variant of
the computational technique employed in \cite{LG}
for generating the minimum-area packing conjectures.
The technique consists of two independent algorithms:
the restricted search algorithm and
the ``compactor'' simulation algorithm.
We now review these procedures.
The restricted search algorithm operates on the
assumption
that the desired minimum is achieved on a set of
configurations which is much smaller than the set of all possible
configurations.
The set is restricted to include only
hexagonal patterns, square-grid patterns, their hybrids,
and the patterns obtained by removing some circles from
these patterns.
For a given number $n > 0$ of circles,
a configuration in the restricted set $R_n$
is defined by 6 integers:
\\
$~~~~w$, the number of circles in the longest row,
\\
$~~~~h$, the number of rows arranged in a hexagonal alternating pattern,
\\
$~~~~h_{-}$, the number of rows, among the $h$ hexagonally arranged rows,
that consist of $w-1$ circles each;
the remaining $h - h_-$ rows consist of $w$ circles each,
\\
$~~~~s$, the number of rows, in addition to the $h$ rows,
that are stacked in the square-grid pattern,
\\
$~~~~s_-$, the number of rows, among the $s$ square-grid rows, that consist of $w-1$
circles each; the remaining $s - s_-$ rows consist of $w$ circles each,
\\
$~~~~v$, the number of ``mono-vacancies'' or holes.
\\
The numbers must be non-negative and must
satisfy the following additional restrictions:
\\
$w > 0$, $h + s > 0$, $s_- \le s$, $s_- < s + h$, $h \ne 1$,
$v \le \min \{ w,h + s \} - 1$,
\\
if $h$ is even, $h = 2k$,
then $h_-$ can take on only two values $h_- = 0$ or $h_- = k$,
\\
if $h$ is odd, $h = 2k+1$,
then $h_-$ can take on only three values $h_- = 0$ or $h_- = k$ or $h_- = k+1$.
\\
Finally, the total number of circles must equal $n$,
\begin{equation}
\label{totaln}
w (h + s) - h_- - s_- - v = n .
\end{equation}
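As a quick consistency check, the restrictions above, together with equation (\ref{totaln}), can be encoded as a single predicate. The sketch below is our own illustration (the helper name is ours; no code accompanies the paper):

```python
def is_restricted_config(n, w, h, h_minus, s, s_minus, v):
    """Check whether the 6-tuple (w, h, h_-, s, s_-, v) lies in R_n."""
    if min(w, h, h_minus, s, s_minus, v) < 0:
        return False
    if w <= 0 or h + s <= 0 or h == 1:
        return False
    if s_minus > s or s_minus >= s + h:
        return False
    if v > min(w, h + s) - 1:
        return False
    # h_- may only take the special values tied to the parity of h
    if h % 2 == 0 and h_minus not in (0, h // 2):
        return False
    if h % 2 == 1 and h_minus not in (0, h // 2, h // 2 + 1):
        return False
    # total number of circles, equation (1)
    return w * (h + s) - h_minus - s_minus - v == n
```

For instance, the configuration of Figure \ref{fig:class} ($n = 29$ with $w = 5$, $h = 5$, $h_- = 2$, $s = 2$, $s_- = 1$, $v = 3$) passes the check, while changing $h_-$ to a value not tied to the parity of $h$ fails it.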
\begin{figure}
\centering
\includegraphics*[width=3.45in]{class.ps}
\caption{A configuration in the restricted set $R_n$
for $n=29$ circles;
here $w = 5$, $h = 5$, $h_- = 2$, $s = 2$, $s_{-} = 1$, and $v = 3$,
so that $29 = w(h + s) - h_- - s_{-} - v$.
}
\label{fig:class}
\end{figure}
A ``general case'' example is shown in Figure \ref{fig:class}.
As further examples,
we now identify the 6-tuples ($w$, $h$, $h_-$, $s$, $s_-$, $v$)
for some configurations presented in the following sections.
In each example that follows, the unmentioned parameters
of the tuple equal zero.
In Figure \ref{fig:same}:
\\
configuration ``1 circle'' has $w = s = 1$;
\\
configuration ``2 circles'' has $w = 2$, $s = 1$;
it can also be identified as having $w = 1$, $s = 2$;
\\
configuration ``4 circles'' has $w = s = 2$;
\\
configuration ``6 circles'' has $w = 3$, $s = 2$;
it can also be identified as having $w = 2$, $s = 3$;
\\
configuration ``9 circles'' has $w = s = 3$;
\\
configuration ``11 circles'' has $w = 4$, $h = 3$, $h_- = 1$;
\\
configuration ``12 circles'' has $w = 4$, $s = 3$;
it can also be identified as having $w = 3$, $s = 4$;
\\
configuration ``15 circles'' has $w = 4$, $h = 3$, $h_- = 1$, $s = 1$;
\\
configuration ``19 circles'' has $w = 5$, $h = 3$, $h_- = 1$, $s = 1$.
In Figure \ref{fig:7}:
\\
configuration $a$ has $w = h = 3$, $h_- = 2$;
\\
configuration $b$ has $w = 2$, $h = 3$, $h_- = 1$, $s = 1$;
\\
configuration $c$ has $w = h = 3$, $h_- = 1$, $v = 1$.
In Figure \ref{fig:17a26}:
\\
configuration $a$ has $w = 4$, $h = 5$, $h_- = 2$, $v = 1$;
\\
configuration $c$ has $w = 5$, $h = 6$, $h_- = 3$, $v = 1$.
In Figure \ref{fig:200}:
\\
configuration $a$ has $w = 13$, $h = 16$, $h_- = 8$;
\\
configuration $b$ has $w = 29$, $h = 7$, $h_- = 3$.
Given the values of $w$, $h$, $h_-$, and $s$,
and that of the common radius of the circles $r$,
the height $H$, width $W$, and perimeter $P$
of the enclosing rectangle can be found from
\begin{equation}
\label{height}
\ \ \ H/r = 2 s + \left\{ \begin{array}{ll}
2 + ( h - 1 ) \sqrt 3 & \mbox{if $h > 0$} \\
0 & \mbox{if $h = 0$}
\end{array}
\right .
\end{equation}
\begin{equation}
\label{width}
W/r = 2 w + \left\{ \begin{array}{ll}
1 & \mbox{if $h > 0$ and $h_{-} = 0$} \\
0 & \mbox{if $h = 0$ \ or \ $h_{-} > 0$}
\end{array}
\right .
\end{equation}
\begin{equation}
\label{perim}
P = 2 ( W + H )
\end{equation}
Note that the ratio $P/r$ is a number of the form $ x + y \sqrt 3 $,
where the non-negative integers
$x$ and $y$ are obvious functions
of $w$, $h$, $h_-$, and $s$.
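Formulas \eqref{height}--\eqref{perim} translate directly into exact integer arithmetic. The sketch below (our own helper name, not taken from the authors' code) returns the pair $(x, y)$ with $P/r = x + y\sqrt{3}$:

```python
import math

def perimeter_exact(w, h, h_minus, s):
    """Return (x, y) with P/r = x + y*sqrt(3), following eqs. (2)-(4)."""
    if h > 0:
        h_int, h_sqrt3 = 2 * s + 2, h - 1    # H/r = 2s + 2 + (h-1)*sqrt(3)
    else:
        h_int, h_sqrt3 = 2 * s, 0            # H/r = 2s
    w_int = 2 * w + (1 if h > 0 and h_minus == 0 else 0)
    return 2 * (h_int + w_int), 2 * h_sqrt3  # P/r = 2(W/r + H/r)

x, y = perimeter_exact(3, 3, 1, 1)           # hybrid packing of 11 circles
print(x, y, x + y * math.sqrt(3))            # 20 4 26.9282...
```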
Sometimes several different patterns correspond
to a given tuple $w$, $h$, $h_-$, $s$, $s_-$, $v$;
those may differ in the ways the $s$ or $s_-$ rows are attached or
the $v$ holes are selected.
Since the shape and the perimeter of the enclosing rectangle
do not change among these variations,
the minimization procedure treats them all as the same packing.
An important fact is that for each $n > 0$,
there are only a finite number of 6-tuples
$w$, $h$, $h_-$, $s$, $s_-$, $v$,
that satisfy the restrictions above.
For a given value of $n$,
our procedure lists all such 6-tuples,
and for each of them computes $P/r$,
and selects the configurations that correspond to the minimum
value of $P/r$.
By presenting the values $P/r$ in the form $x+y\sqrt{3}$
with integers $x$ and $y$,
only comparisons among integers are involved
and the selection of the minimum is exact.
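The whole restricted search can be sketched in a few lines. The program below is a reconstruction under our own naming, not the authors' implementation: it enumerates the 6-tuples of $R_n$, evaluates $P/r$ in the exact form $x + y\sqrt{3}$, and compares candidates using only integer arithmetic, mirroring the exact selection just described.

```python
def min_perimeter(n):
    """Exhaustive search over R_n: return the minimal (x, y) with
    P/r = x + y*sqrt(3), together with the tuples achieving it."""
    def perim(w, h, h_minus, s):               # eqs. (2)-(4), exact form
        hi, hy = (2 * s + 2, h - 1) if h > 0 else (2 * s, 0)
        wi = 2 * w + (1 if h > 0 and h_minus == 0 else 0)
        return 2 * (hi + wi), 2 * hy

    def less(a, b):                            # exact test a < b for x + y*sqrt(3)
        dx, dy = a[0] - b[0], a[1] - b[1]      # decide the sign of dx + dy*sqrt(3)
        if dx >= 0 and dy >= 0:
            return False
        if dx <= 0 and dy <= 0:
            return True
        return dx * dx < 3 * dy * dy if dx > 0 else dx * dx > 3 * dy * dy

    best, argmin = None, []
    for w in range(1, n + 1):
        for h in [0] + list(range(2, n + 1)):  # h = 1 is excluded
            hset = {0, h // 2} if h % 2 == 0 else {0, h // 2, h // 2 + 1}
            for s in range(0, n + 1):
                if h + s == 0 or w * (h + s) < n:
                    continue
                for h_minus in hset:
                    for s_minus in range(0, s + 1):
                        if s_minus >= s + h:
                            continue
                        v = w * (h + s) - h_minus - s_minus - n
                        if not 0 <= v <= min(w, h + s) - 1:
                            continue
                        p = perim(w, h, h_minus, s)
                        if best is None or less(p, best):
                            best, argmin = p, []
                        if p == best:
                            argmin.append((w, h, h_minus, s, s_minus, v))
    return best, argmin
```

On $n = 11$ this reproduces the two conjectured optima of Table \ref{tab:1t62}, both with $P/r = 20 + 4\sqrt{3}$.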
The reader should be reminded
that we do not claim that our restricted
search procedure produces
the global optimum for packing $n$ circles in a rectangle.
In fact, we include
some configurations which are clearly
non-optimal,
for example, those with $v > 0$.
The usefulness of the given definition of the sets $R_n$
should become apparent when we compare below the restricted
search algorithm outcomes with those of the
``compactor'' simulation algorithm.
The ``compactor'' simulation works as follows
(see also \cite{PAS}).
It begins by generating a random starting configuration
with $n$ circles lying inside a (large) rectangle
without circle-circle overlaps.
The starting configuration is feasible but is usually rather sparse.
Then the computer imitates a ``compactor'' with
each side of the rectangle pressing against the circles,
so that the circles are being forced
towards each other until they ``jam.''
Possible circle-circle or circle-boundary conflicts
are resolved using a simulation of hard collisions so
that no overlaps or boundary-penetrating
circles occur during the process.
The simulation for a particular $n$ is repeated many times,
with different starting circle configurations.
If the final perimeter value in a run is smaller than
the record achieved thus far, it replaces the current record.
Eventually in this process, the record stops improving up to
the level of accuracy allowed by the double precision accuracy
of the computer. The resulting packing now becomes a candidate
for the optimal packing for this value of $n$.
The main advantage of the ``compactor'' simulation vs.
the restricted search is that in the simulation
no assumption is made about the resulting packing pattern.
The circles are ``free to choose'' any final configuration
as long as it is ``jammed.''
This advantage comes at a price: the simulation time needed
in multiple attempts
to achieve a good candidate packing for a particular $n$
is typically several orders of magnitude longer than the time needed
on the same computer to deliver
the minimum in set $R_n$ by the restricted search procedure.
For example, it may take a fraction of a second to
find the minimum perimeter
packing of 15 circles by the search in $R_{15}$ and
it may take days
with thousands of attempts
to produce the same answer by simulation.
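The event-driven hard-collision dynamics of the actual ``compactor'' are beyond a short sketch, but the overall shrink-and-relax loop can be illustrated with a much cruder stand-in: a random start in a generous box, iterative overlap resolution by pairwise pushes, and repeated uniform shrinking of the box until the relaxation fails. All names and numeric choices below are ours, not the authors'.

```python
import math
import random

def feasible(pts, r, W, H, tol=1e-7):
    """True if all circles lie inside the W x H box without overlaps."""
    for i, (x, y) in enumerate(pts):
        if x < r - tol or x > W - r + tol or y < r - tol or y > H - r + tol:
            return False
        for j in range(i + 1, len(pts)):
            if math.hypot(pts[j][0] - x, pts[j][1] - y) < 2 * r - tol:
                return False
    return True

def relax(pts, r, W, H, sweeps=500):
    """Resolve circle-circle and circle-wall conflicts by repeated pushes."""
    n = len(pts)
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                xi, yi = pts[i]; xj, yj = pts[j]
                dx, dy = xj - xi, yj - yi
                d = math.hypot(dx, dy) or 1e-12
                if d < 2 * r:                      # overlapping pair: push apart
                    push = (2 * r - d) / 2
                    ux, uy = dx / d, dy / d
                    pts[i] = (xi - push * ux, yi - push * uy)
                    pts[j] = (xj + push * ux, yj + push * uy)
            x, y = pts[i]                          # clamp the center into the box
            pts[i] = (min(max(x, r), W - r), min(max(y, r), H - r))
        if feasible(pts, r, W, H):
            return True
    return False

def compact(n, r=1.0, shrink=0.995, seed=0):
    """Shrink the box until the circles jam; return (centers, W, H)."""
    rng = random.Random(seed)
    W = H = 6.0 * r * math.ceil(math.sqrt(n))      # generous starting box
    pts = [(rng.uniform(r, W - r), rng.uniform(r, H - r)) for _ in range(n)]
    assert relax(pts, r, W, H)
    for _ in range(5000):                          # cap the number of shrink steps
        trial = [(x * shrink, y * shrink) for x, y in pts]
        if relax(trial, r, W * shrink, H * shrink):
            pts, W, H = trial, W * shrink, H * shrink
        else:
            break
    return pts, W, H
```

Unlike the true compactor, this sketch keeps the box square and easily gets stuck in local jams, so repeated runs with different seeds (and a far better relaxation scheme) are needed before the final perimeter approaches the records reported here.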
\section{Results: regular and semi-regular optimal packings}\label{sec:regasem}
\hspace*{\parindent}
Table \ref{tab:1t62} lists the packings of $n$ equal circles
in rectangles of the smallest found perimeter
for each $n$ in the range
$1 \le n \le 62$, except $n = 13, 21, 31, 43$, and $57$.
A somewhat arbitrary bound $n = 62$ was set so that
the ``compactor'' simulation was performed
for each $n \le 62$ but only
for a few isolated values $n > 62$
since the simulation slows down significantly for larger $n$.
On the other hand,
the minimum perimeter packings in $R_n$ were produced
by the restricted search procedure for each $n \le 5000$.
\begin{table}
\begin{center}
\fbox{
\begin{tabular}{r|r|r|r|r|r||r|r|r|r|r|r||r|r|r|r|r|r}
$n$&$w$&$h$&$h_{-}$&$s$&$\delta$&
$n$&$w$&$h$&$h_{-}$&$s$&$\delta$&
$n$&$w$&$h$&$h_{-}$&$s$&$\delta$ \\ \hline
1 & 1 & 0 & 0 & 1 & 0 & *22 & 5 & 5 & 2 & 0 &$\delta_2$
& *41 & 6 & 7 & 0 & 0 &$\delta_2$ \\
2 & 2 & 0 & 0 & 1 & 0 & 23 & 5 & 5 & 2 & 0 & 0
& 42 & 6 & 7 & 0 & 0 & 0 \\
3 & 2 & 2 & 1 & 0 & 0 & 24 & 5 & 3 & 1 & 2 & 0
& 44 & 6 & 8 & 4 & 0 & 0 \\
4 & 2 & 0 & 0 & 2 & 0 & 25 & 5 & 5 & 0 & 0 & 0
& *45 & 7 & 7 & 3 & 0 &$\delta_3$ \\
5 & 2 & 3 & 1 & 0 & 0 & *26 & 5 & 6 & 3 & 0 &$\delta_2$
& 46 & 7 & 7 & 3 & 0 & 0 \\
6 & 3 & 0 & 0 & 2 & 0 & 27 & 5 & 6 & 3 & 0 & 0
& 47 & 7 & 5 & 2 & 2 & 0 \\
*7 & 3 & 3 & 1 & 0 &$\delta_1$ & 28 & 5 & 5 & 2 & 1 & 0
& 48 & 6 & 8 & 0 & 0 & 0 \\
8 & 3 & 3 & 1 & 0 & 0 & & 6 & 5 & 2 & 0 & 0
& 49 & 7 & 7 & 0 & 0 & 0 \\
9 & 3 & 0 & 0 & 3 & 0 & 29 & 5 & 3 & 1 & 3 & 0
& 50 & 6 & 9 & 4 & 0 & 0 \\
10 & 3 & 4 & 2 & 0 & 0 & & 6 & 3 & 1 & 2 & 0
& *51 & 7 & 8 & 4 & 0 &$\delta_3$ \\
11 & 3 & 3 & 1 & 1 & 0 & 30 & 5 & 6 & 0 & 0 & 0
& 52 & 7 & 8 & 4 & 0 & 0 \\
& 4 & 3 & 1 & 0 & 0 & 32 & 5 & 7 & 3 & 0 & 0
& 53 & 7 & 7 & 3 & 1 & 0 \\
12 & 4 & 0 & 0 & 3 & 0 & 33 & 6 & 6 & 3 & 0 & 0
& & 8 & 7 & 3 & 0 & 0 \\
14 & 4 & 4 & 2 & 0 & 0 & 34 & 6 & 5 & 2 & 1 & 0
& 54 & 6 & 9 & 0 & 0 & 0 \\
15 & 4 & 3 & 1 & 1 & 0 & 35 & 5 & 7 & 0 & 0 & 0
& *55 & 7 & 8 & 0 & 0 &$\delta_3$ \\
16 & 4 & 0 & 0 & 4 & 0 & 36 & 6 & 6 & 0 & 0 & 0
& 56 & 7 & 8 & 0 & 0 & 0 \\
*17 & 4 & 5 & 2 & 0 &$\delta_2$ &**37 & 6 & 7 & 3 & 0 &$\delta_1$
& *58 & 7 & 9 & 4 & 0 &$\delta_4$ \\
18 & 4 & 5 & 2 & 0 & 0 & *38 & 6 & 7 & 3 & 0 &$\delta_3$
& 59 & 7 & 9 & 4 & 0 & 0 \\
19 & 4 & 3 & 1 & 2 & 0 & 39 & 6 & 7 & 3 & 0 & 0
& 60 & 8 & 8 & 4 & 0 & 0 \\
& 5 & 3 & 1 & 1 & 0 & 40 & 6 & 5 & 2 & 2 & 0
& 61 & 8 & 7 & 3 & 1 & 0 \\
20 & 4 & 5 & 0 & 0 & 0 & & 7 & 5 & 2 & 1 & 0
& *62 & 7 & 9 & 0 & 0 &$\delta_3$ \\
\end{tabular}
}
\caption{Packings of $n$ circles in rectangles
of the smallest found perimeter
for all $n$ in the range $1 \le n \le 62$,
except $n = 13, 21, 31, 43$, and 57.
The packing patterns are described with parameters
$w$, $h$, $h_-$, $s$, and $\delta_i$ and with star markings
as explained in the text
}
\label{tab:1t62}
\end{center}
\end{table}
All packings presented
in Table \ref{tab:1t62}
can be split into two sets.
The first set consists
of either perfectly hexagonal packings
or perfectly square-grid packings
or their hybrids.
We will call these {\em regular} packings.
A regular packing of $n$ circles is characterized
in Table \ref{tab:1t62}
by the parameters $n$, $w$, $h$, $h_-$, and $s$,
as defined
in Section~\ref{sec:cmethod}.
Note that the
parameters $s_-$ and $v$ which are also defined
in Section~\ref{sec:cmethod}
are not present
in Table \ref{tab:1t62}.
These $s_-$ and $v$ equal 0 for each regular packing.
For example,
Table \ref{tab:1t62} lists
two conjectured minimum-perimeter packings of $n=11$ circles:
one with
$w=3$, $h=3$, $h_-=1$, and $s=1$,
and the other with
$w=4$, $h=3$, $h_-=1$, and $s=0$.
The latter is perfectly hexagonal as seen
in Figure \ref{fig:same} (configuration ``11 circles'').
The former is a hybrid of hexagonal and square-grid packings.
A similar hybrid is also
the only conjectured minimum-perimeter packing of $n=15$ circles
which is listed in
Table \ref{tab:1t62}.
It has parameters
$w=4$, $h=3$, $h_-=1$, and $s=1$
and it is shown
in Figure \ref{fig:same} (configuration ``15 circles'').
Given the parameters of a regular packing,
one easily determines the shape and the perimeter of the
rectangle that encloses the circles
using formulas \eqref{height}, \eqref{width}, and \eqref{perim}.
The perimeter of each conjectured minimum-perimeter packing of 11 circles is
$20 + 4 \sqrt{3}$
and that of the packing of 15 circles is
$24 + 4 \sqrt{3}$,
in units equal to the common circle radius.
It will be convenient to define the shape of a rectangle
by the ratio $L/S$ where
\begin{equation}
\label{los}
L = \max \{ H, W \},
S = \min \{ H, W \},
\end{equation}
so that
\begin{equation}
\label{argt1}
L/S \ge 1 .
\end{equation}
In these three examples, we have, respectively
\\
$L/S = (4 + 2 \sqrt{3})/6
= 1.2440169..$,
for the first packing of 11 circles
in Table~\ref{tab:1t62};
\\
$L/S = 8/(2 + 2 \sqrt{3})
= 1.4641016..$,
for the second packing of 11 circles
in Table~\ref{tab:1t62};
\\
$L/S = 8/(4 + 2 \sqrt{3})
= 1.0717968..$,
for the packing of 15 circles.
Similar simple calculations can be done for all
the other regular packings.
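The three ratios above can be recomputed mechanically from formulas \eqref{height}, \eqref{width}, and \eqref{los}; the snippet below (our own helper, for checking only) does so:

```python
import math

def shape_ratio(w, h, h_minus, s):
    """L/S for a regular packing, from eqs. (2), (3), and (5)."""
    H = 2 * s + (2 + (h - 1) * math.sqrt(3) if h > 0 else 0)
    W = 2 * w + (1 if h > 0 and h_minus == 0 else 0)
    return max(H, W) / min(H, W)

print(round(shape_ratio(3, 3, 1, 1), 7))  # 1.2440169
print(round(shape_ratio(4, 3, 1, 0), 7))  # 1.4641016
print(round(shape_ratio(4, 3, 1, 1), 7))  # 1.0717968
```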
The entries $n$ that correspond to regular packings
are not marked by stars in the table.
Now consider the other set of the packings,
those with entries $n$ that are marked by stars in the table.
The smallest example is $n=7$.
\begin{figure}
\centering
\includegraphics*[width=5.1in]{7.ps}
\caption{
Packings of 7 equal circles in the smallest perimeter rectangle:
$a$) among the hexagonal configurations,
$b$) among the hybrid configurations,
$c$) among the regular-with-holes configurations,
$d$) among all configurations tested.
The perimeters of the rectangles of the packings $a$, $b$, and $c$ are the same
and are larger than the perimeter of the packing $d$.
}
\label{fig:7}
\end{figure}
The smallest perimeter rectangle that we could find
(using the simulated ``compactor'' procedure)
is that for configuration $d$
in Figure~\ref{fig:7}.
Because
this configuration does not fit
the description
in Section~\ref{sec:cmethod}
of a possible pattern in $R_7$,
the restricted search procedure cannot find the packing $d$.
Instead, the best in $R_7$ happen to be
the configurations $a$, $b$, and $c$.
Although the aspect ratios of their enclosing rectangles differ,
the configurations $a$, $b$, and $c$ have the same perimeter
$P/r = 16 + 4 \sqrt{3} = 22.928203..$.
Out of these three, configuration $c$ requires the smallest
number of circles to be moved
and the smallest readjustment of the boundary
to obtain $d$.
We move circles labeled
in Figure~\ref{fig:7}
as $A$, $B$, and $C$
to turn $c$ into $d$.
For the entry $n = 7$,
Table~\ref{tab:1t62} lists parameters
$w = 3$, $h = 3$, and $h_- = 1$ and
those are of the configuration $c$.
Also the entry is marked with one star
which represents one mono-vacancy
in the pattern $c$.
The implied convention here is that this entry represents
the configuration $d$ because $d$, while
not being describable in the terms of the table,
can be
obtained by a simple and standard transformation
from $c$.
(Note that the configuration $c$ can be defined in several
ways depending on the position of the hole; the resulting
different configurations are not distinguished
by the restricted search procedure or in the table.)
As $c$ is turned into $d$, the width of the rectangle decreases
by the value $\delta = \delta_1$ where
\begin{equation}
\label{delta1}
\delta_1 = 2 - \sqrt{2 \sqrt {3}}.
\end{equation}
The smallest found perimeter for the 7 circles thus becomes
\begin{equation}
\label{odd}
P^{opt}/r = P/r - 2 \delta
\end{equation}
which is 22.650624.. here.
The $\delta$ will be called the {\em improvement} parameter.
The equality $\delta = 0$ together with
the absence of star markings distinguishes
a regular packing entry in
Table \ref{tab:1t62}.
On the other hand, the entries
with $\delta = \delta_i > 0,~i = 1, 2, 3$ or 4, in
Table \ref{tab:1t62}
correspond to packings that are not regular.
The number of stars that mark the $n$ for such an entry equals
the number of mono-vacancies $v$
in the packing according to the definition of class $R_n$
in Section~\ref{sec:cmethod}.
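As a quick sanity check, the arithmetic of Equations~\eqref{delta1} and \eqref{odd} can be reproduced in a few lines of Python; the constants below are only those quoted above and in Table~\ref{tab:delta}:

```python
import math

# Perimeter of the configurations a, b, c for n = 7, in units of r:
P = 16 + 4 * math.sqrt(3)                  # 22.928203...

# Improvement parameter of Equation (delta1):
delta1 = 2 - math.sqrt(2 * math.sqrt(3))   # 0.13879028...

# Equation (odd): the width shrinks by delta1, so the perimeter
# shrinks by 2 * delta1 while the height stays unchanged.
P_opt = P - 2 * delta1
```

The computed $\delta_1$ agrees with the entry for $i = 1$ in Table~\ref{tab:delta}.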
\begin{figure}
\centering
\includegraphics*[width=6.0in]{17a26.ps}
\caption{Packings of 17 ($a$ and $b$) and 26 ($c$ and $d$)
equal circles in rectangles. Packings $a$ and $c$ are the best
in $R_{17}$ and $R_{26}$, respectively.
Packings $b$ and $d$ are the best we could find
for their number of circles.
Alternative equivalent positions of circles $D$ in packing $b$
and $C$ in packing $d$ are shown
}
\label{fig:17a26}
\end{figure}
\begin{figure}
\centering
\includegraphics*[width=6.0in]{37a38.ps}
\caption{Packings of 38 ($a$ and $b$) and 37 ($c$ and $d$)
equal circles in rectangles. Packings $a$ and $c$ are the best
in $R_{38}$ and $R_{37}$, respectively.
Packings $b$ and $d$ are the best we could find
for their number of circles
}
\label{fig:37a38}
\end{figure}
\begin{figure}
\centering
\includegraphics*[width=6.0in]{58a62.ps}
\caption{Packings of 58 ($a$ and $b$) and 62 ($c$ and $d$)
equal circles in rectangles. Packings $a$ and $c$ are the best
in $R_{58}$ and $R_{62}$, respectively.
Packings $b$ and $d$ are the best we could find
for their number of circles.
A positive gap exists between circles $C$ and $J$
and also between circles $G$ and $K$ in packing $b$.
An alternative equivalent position of circle $F$ is shown
in packing $b$
}
\label{fig:58a62}
\end{figure}
Figures~\ref{fig:17a26}, \ref{fig:37a38}, and \ref{fig:58a62}
show six other
non-regular packings.
Those are labeled
$b$ and $d$ in each figure.
Their regular-with-holes precursors,
as found by the restricted search procedure,
are the configurations which are labeled $a$ and $c$ in each figure.
Note that the best found packings of $n=17$ and $n=26$ circles
shown in the diagrams $b$ and $d$ in Figure~\ref{fig:17a26}
are obtained from
the best packings found by the restricted search procedure
and
shown, respectively, in the diagrams $a$ and $c$
in this figure,
using
improvement parameter $\delta = \delta_2$
where
\begin{equation}
\label{delta2}
\delta_2 =
2 - 0.5 \sqrt 3 -
3^{1/4} (2 \sqrt 3 - 1)/(2 \sqrt {4 - \sqrt 3 }).
\end{equation}
Also note that
there are several equivalent ways of making the improvement,
all resulting in the same value of $\delta = \delta_2$.
For example, circle $D$ in
Figure~\ref{fig:17a26}$b$
can occupy an alternative position
in which
$D$ contacts
the right side of the rectangle instead of the unlabeled circle
to its left while remaining in contact with circles $B$ and $E$.
This position is also
shown in the figure.
A similar equivalent re-positioning of circle $C$ is shown in
Figure~\ref{fig:17a26}$d$.
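The numerical value of $\delta_2$ listed in Table~\ref{tab:delta} follows directly from Equation~\eqref{delta2}; a short Python check:

```python
import math

s3 = math.sqrt(3)
# Equation (delta2):
delta2 = 2 - 0.5 * s3 - 3 ** 0.25 * (2 * s3 - 1) / (2 * math.sqrt(4 - s3))
# delta2 = 0.05728065...
```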
The best found packing of $n=38$ circles
shown in Figure~\ref{fig:37a38}$b$ is obtained from
the best packing found by the restricted search procedure
and
shown in Figure~\ref{fig:37a38}$a$
using
improvement parameter $\delta = \delta_3$.
The best found packing of $n=58$ circles
shown in Figure~\ref{fig:58a62}$b$ is obtained from
the best packing found by the restricted search procedure
and
shown in Figure~\ref{fig:58a62}$a$
using
improvement parameter $\delta = \delta_4$.
The values of $\delta_i$ are given in Table~\ref{tab:delta}.
In the packing $a$ shown in Figure~\ref{fig:58a62} and in all the previously
discussed packing diagrams,
the distances in
circle-circle or circle-boundary pairs are zero
whenever the circles in each pair are
apparently in contact with each other.
The packing $b$ in
Figure~\ref{fig:58a62} gives an exception to this rule:
the distance between the circles $C$ and $J$
and that between the circles $G$ and $K$ are both
0.05323824.. of the common circle radius,
perhaps too small to be discerned as positive from the diagram.
Note that the $\delta$-improvement of the configuration
sometimes releases certain circles from contact with their
neighbors. The released circles become so-called {\em rattlers}.
In Figure~\ref{fig:17a26}, circle $A$ becomes a rattler
during the $\delta_2$-conversion of the configuration $c$
into
the configuration $d$.
In Figure~\ref{fig:37a38}, circle $D$ becomes a rattler
during the $\delta_1$-conversion of the configuration $c$
into
the configuration $d$.
Rattlers are represented by
unshaded circles in the packing diagrams.
\begin{table}
\begin{center}
\fbox{
\begin{tabular}{r|c|c}
$i$&$\delta_i$& defined or used in \\ \hline
1 & 0.13879028..
& Equation~\eqref{delta1},
Figures~\ref{fig:7}$d$, \ref{fig:37a38}$d$ \\
2 & 0.05728065..
& Equation~\eqref{delta2}, Figures~\ref{fig:17a26}$b$,
\ref{fig:17a26}$d$, \ref{fig:57}$b$ \\
3 & 0.01935364..
& Figures~\ref{fig:37a38}$b$, \ref{fig:31}$b$, \ref{fig:43}$b$,
\ref{fig:58a62}$d$
\\
4 & 0.00403953.. & Figure~\ref{fig:58a62}$b$
\\
\end{tabular}
}
\caption{Improvement parameters $\delta_i$}
\label{tab:delta}
\end{center}
\end{table}
In all the examples in Table~\ref{tab:1t62}, the $\delta$-improvement
of the configuration is localized:
not counting
the circles possibly used for covering the holes,
all the circles
involved in the change are located along the boundary on one side.
During the change
the width of the rectangle
decreases by $\delta$ while
the height stays unchanged.
The obtained packings, although they are non-regular,
are close to their regular-with-holes precursors.
We will call such non-regular packings {\em semi-regular}.
All the non-regular packings listed by their
regular-with-holes representations in
Table~\ref{tab:1t62} are semi-regular.
We skipped several values of $n$ in
Table~\ref{tab:1t62}.
The best found packings obtained for the skipped $n$
show more irregularity than the
semi-regular packings do.
These packings, excluded from
Table~\ref{tab:1t62},
will be called {\em irregular}.
An irregular packing is defined by negation:
it is a packing that cannot be generated
by the simple adjustment described above,
in which the circles that move are
limited to those covering the holes and
to those located at a side column of a regular-with-holes packing.
We conclude this section with the following observation.
If among the best packings in $R_n$ delivered by the
restricted search there is at least one with holes
or if a non-regular packing of $n$ circles is known
with a smaller perimeter than of those best in $R_n$,
then the packing of $n$ circles
in a rectangle of the minimum perimeter
{\em provably} cannot have a regular pattern.
That is, it cannot be purely hexagonal or purely square-grid
or a hybrid pattern.
Thus, the optimum packings for each
star-marked semi-regular $n$ in
Table~\ref{tab:1t62} cannot possibly be regular.
We will see in the following section that
the optimum packings for the irregular $n$,
those skipped in
Table~\ref{tab:1t62}, cannot be regular either:
for each skipped $n$ we will present a packing which is better
than the record best in $R_n$.
\section{Results: irregular optimal packings}\label{sec:irreg}
\hspace*{\parindent}
The smallest skipped entry in
Table~\ref{tab:1t62} is $n = 13$.
Figure~\ref{fig:13}$a$ presents the only existing best in $R_{13}$ packing
as found by the restricted search procedure.
The packing has $w = 3$, $h = 5$, $h_- = 2$.
Its perimeter is $P/r = 16 + 8 \sqrt{3} = 29.856406...$.
Since there is no hole in the packing,
the case $n = 13$ would have qualified as a regular one
and would have been listed as such
with its parameters $w$, $h$, and $h_-$
in Table~\ref{tab:1t62}
were it not for the ``compactor'' simulation.
Unexpectedly for us, the ``compactor'' produced
a better packing!
That packing, shown as $b$ in the same figure, has the perimeter
$P^{opt}/r = 29.851847510...$, which is smaller
than the perimeter of the packing $a$
in Figure~\ref{fig:13} by at least 0.004.
\begin{figure}
\centering
\includegraphics*[width=5.8in]{13.ps}
\caption{Packings of 13 equal circles in rectangles:
$a$) with the smallest perimeter of the enclosing rectangle
among the set $R_{13}$,
$b$) with the smallest perimeter of the enclosing rectangle
we could find
}
\label{fig:13}
\end{figure}
The pattern of the packing $b$
in Figure~\ref{fig:13} is truly irregular and non-obvious,
unlike the straightforward pattern of the packing $a$.
Even the existence of the packing $b$ should not be taken for granted.
By contrast, the existence of the packing $a$
in Figure~\ref{fig:13}
can be easily proven by construction
and so can the existence of all the other
regular and semi-regular packings discussed above.
The small black dots in the packing diagram $b$
indicate the so-called {\em bonds} or contact points
in circle-circle or circle-boundary
pairs.
A bond
indicates the distance being exactly zero between the pair,
while the absence of a bond in a spot of an apparent contact
indicates the distance being positive, i.e. no contact.
For example, there is no contact between circle 9 and the bottom
boundary in the packing $b$.
(The 13 circles are arbitrarily assigned distinct labels 1 to 13
in Figure~\ref{fig:13}$b$ to facilitate their
referencing.)
No bond indication is needed in the diagram of the packing $a$
in Figure~\ref{fig:13}
nor in any other regular
packing diagram
because the points of apparent contacts
are always the true
contacts in such packings.
In semi-regular packings such non-contacts
do occur, for example
the one between circles $C$ and $J$
in
Figure~\ref{fig:58a62}$b$.
With the explicit indication of the bonds,
it is {\em provably} possible to construct the packing
in Figure~\ref{fig:13}$b$ and this construction is {\em provably} unique,
so the positions of all circles, except the rattler,
and the rectangle dimensions are
uniquely defined.
The computed horizontal width and vertical height of the packing
in units equal to the circle radius $r$ are
\\
$~~~~~~~~~~~~~~~~~~~~~~~W/r = 5.463267269314... ~~~~H/r = 5.462656485780...$
\\
which implies the perimeter value given above
and
$L/S = 1.000111810716...$
so the rectangle is almost a square, to within about 0.01\%.
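The quoted dimensions, perimeter, and $L/S$ ratio are mutually consistent under the convention, inferred here from that consistency rather than stated explicitly, that $W$ and $H$ measure the bounding box of the circle centers, so the enclosing rectangle adds one diameter ($2r$) in each direction. A Python sketch of the check:

```python
# Dimensions quoted above for the n = 13 packing, in units of r.
W = 5.463267269314
H = 5.462656485780

# Assumed convention: the enclosing rectangle measures (W + 2) by (H + 2)
# in units of r, hence the perimeter P/r = 2*(W + H) + 8.
P_opt = 2 * (W + H) + 8
ratio = W / H
```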
Note that
when we minimized the area of the rectangle in \cite{LG},
we were unable to find packings better than,
in the terminology of the present paper,
either regular or semi-regular ones.
The case $n = 13$ is a violation
of such structure for the case of
minimizing the perimeter.
Suspicious of other such violations,
we ran
many more tries
of the ``compactor'' simulation for $n=14,15,16,17,18,19$ and 20.
No violation was detected.
All these cases seem to be either regular or semi-regular
as presented in Table~\ref{tab:1t62}.
But for $n = 21$ we encountered another gross
violation of regularity.
\begin{figure}
\centering
\includegraphics*[width=5.8in]{21.ps}
\caption{Packings of 21 equal circles in rectangles:
$a$) with the smallest perimeter of the enclosing rectangle
among the set $R_{21}$,
$b$) with the smallest perimeter of the enclosing rectangle
we could find
}
\label{fig:21}
\end{figure}
The case of $n = 21$ is similar to that of $n = 13$.
Here again, the restricted search procedure
delivers the best in $R_{21}$ packing (shown in Figure~\ref{fig:21}$a$)
with $w = 4$, $h = 6$, $h_- = 3$ and
perimeter $P/r = 20 + 10 \sqrt{3} = 37.3205081..$.
The packing is regular and looks very different from the best packing
found by the simulation (shown in Figure~\ref{fig:21}$b$),
the latter
with the perimeter $P^{opt}/r = 37.309294229..$ which is smaller
than the perimeter of the packing
shown in Figure~\ref{fig:21}$a$ by at least 0.01.
As in the case of $n = 13$,
the best found packing for $n = 21$
exhibits a rather irregular structure,
which makes
the existence of the packing non-obvious.
With the bonds shown in Figure~\ref{fig:21}$b$,
it is possible to {\em prove} the existence
of the packing by construction and
it is possible to {\em provably} uniquely determine the position
of the circles, except the rattlers,
the width, height and the $L/S$ ratio of the rectangle:
\\
$~~~~~~~~~~~~~W/r = 7.433745175630.. ~~~H/r = 7.220901938764.. ~~~ L/S = 1.029475990489..$
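The same consistency check as for $n = 13$ applies here, under the same inferred convention for $W$ and $H$:

```python
# Dimensions quoted above for the n = 21 packing, in units of r.
W = 7.433745175630
H = 7.220901938764

# Perimeter under the assumed convention P/r = 2*(W + H) + 8,
# and the aspect ratio L/S = W/H.
P_opt = 2 * (W + H) + 8
ratio = W / H
```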
The patterns of both irregular best found packings,
while being dissimilar to all the other
conjectured optimum packings considered thus far,
show some similarity between themselves.
This similarity
is emphasized by the circle labeling.
Labels 1 to 13
in Figure~\ref{fig:21}$b$ are assigned
to the circles that occupy the positions in
that figure which are similar to the corresponding
circles 1 to 13 in Figure~\ref{fig:13}$b$.
The similarity, however, is not perfect.
For example,
the bond between circle 9 and the bottom boundary
in Figure~\ref{fig:21}$b$ does not find its counterpart
in Figure~\ref{fig:13}$b$.
The remaining skipped entries in Table~\ref{tab:1t62}
are $n = 31$, 43, and 57.
For these three values of $n$, unlike $n=13$ or 21,
the best in $R_n$ packings all have holes,
as seen in Figures~\ref{fig:31}$a$,
\ref{fig:43}$a$, and
\ref{fig:57}$a$,
and hence avail themselves to $\delta$-improvements.
\begin{figure}
\centering
\includegraphics*[width=6.0in]{31.ps}
\caption{Packings of 31 equal circles in rectangles:
$a$) with the smallest perimeter of the enclosing rectangle
among the set $R_{31}$,
$b$) $\delta_3$-improved packing $a$,
$c$) with the smallest perimeter of the enclosing rectangle
we could find
}
\label{fig:31}
\end{figure}
\begin{figure}
\centering
\includegraphics*[width=6.1in]{43.ps}
\caption{Packings of 43 equal circles in rectangles:
$a$) with the smallest perimeter of the enclosing rectangle
among the set $R_{43}$,
$b$) $\delta_3$-improved packing $a$,
$c$) with the smallest perimeter of the enclosing rectangle
we could find. The black dots indicate bonds
of the labeled circles in the packing $c$
}
\label{fig:43}
\end{figure}
\begin{figure}
\centering
\includegraphics*[width=6.3in]{57.ps}
\caption{Packings of 57 equal circles in rectangles:
$a$) with the smallest perimeter of the enclosing rectangle
among the set $R_{57}$,
$b$) $\delta_2$-improved packing $a$,
$c$) with the smallest perimeter of the enclosing rectangle
we could find.
Alternative position of circle $F$ is shown in packing $b$.
The black dots indicate bonds
of the labeled circles in the packing $c$
}
\label{fig:57}
\end{figure}
The improved packings
labeled $b$ in these three figures
have perimeters, respectively
\\
$P/r =
12(2+\sqrt{3}) - 2 \delta_3
=
44.74590240843..
$ for 31 circles,
\\
$P/r =
14(2+\sqrt{3}) - 2 \delta_3
=
52.21000402357..
$ for 43 circles,
\\
$P/r =
16(2+\sqrt{3}) - 2 \delta_2
=
59.59825161939..
$ for 57 circles.
Those improved perimeters still exceed the
perimeters of the corresponding best packings found,
which happen to be irregular, namely
\\
for 31 circles $P^{opt}/r =
44.7095500424198..
$ is smaller than $P/r$ by at least 0.035,
\\
for 43 circles $P^{opt}/r =
51.99029827020367..
$ is smaller than $P/r$ by at least 0.2,
\\
for 57 circles $P^{opt}/r =
59.4543998853414..
$ is smaller than $P/r$ by at least 0.14.
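These perimeter values and the quoted gaps can be verified numerically; the sketch below uses the $\delta_i$ values from Table~\ref{tab:delta}:

```python
import math

t = 2 + math.sqrt(3)
delta2, delta3 = 0.05728065, 0.01935364  # from Table tab:delta

# delta-improved perimeters of the packings labeled b:
P31 = 12 * t - 2 * delta3   # 44.74590240...
P43 = 14 * t - 2 * delta3   # 52.21000402...
P57 = 16 * t - 2 * delta2   # 59.59825161...

# Best found (irregular) perimeters quoted in the text:
P31_opt = 44.7095500424198
P43_opt = 51.99029827020367
P57_opt = 59.4543998853414
```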
The patterns of the irregular packings $c$ in
Figures~\ref{fig:31},
\ref{fig:43}, and
\ref{fig:57} somewhat resemble each other, especially
the packings of 43 and 57 circles.
Moreover, the best found packing of 43 circles in
Figure~\ref{fig:43}$c$ is an exact subset of
the best found packing of 57 circles in
Figure~\ref{fig:57}$c$.
In both packings, the dots attached to 5 circles labeled $A$ to $E$
indicate the bonds of these circles.
Thus, for example, the circles $B$ and $D$ do contact the circle $C$,
but do not contact the right side of the rectangle, where
the gap is
0.00957...
of the circle radius, too small to be
discerned in the diagrams.
Similarly, the circle $C$ does contact the right side of the rectangle
but does not contact either of the two unlabeled
circles immediately to $C$'s left.
For the pairs that do not include at least one labeled circle,
the bonds exist in the obvious places,
and they are not specifically indicated by dots
in the figures.
As before, it is possible to {\em prove} the existence of
the irregular packings $c$ in
Figures~\ref{fig:31},
\ref{fig:43}, and
\ref{fig:57}.
This existence might be non-obvious, especially
for the latter two packings.
\section{Double optimality and related properties}\label{sec:dopt}
\hspace*{\parindent}
As mentioned in the Introduction,
minimizing the perimeter and minimizing the area
of the rectangle
lead to
generally different optimal packings
for the same number of circles $n$,
if the rectangle aspect ratio is variable.
Are there packings
optimal under both criteria at the same time?
Figure~\ref{fig:same} displays
such conjectured double optimal packings
that were obtained by comparing the list of smallest area
packings reported in \cite{LG} with that of
the smallest perimeter packings reported here.
\begin{figure}
\centering
\includegraphics*[width=6.0in]{same.ps}
\caption{The rectangle that encloses
each of these packings has the smallest perimeter and the smallest area
that we could find for their number of circles
}
\label{fig:same}
\end{figure}
For a particular $n$ and a particular optimality criterion,
there may be several equivalent optimum packings.
For example,
two minimum-perimeter packings exist for $n=11$ according
to Table~\ref{tab:1t62} and two minimum-area packings exist
for $n=15$ according to \cite{LG}.
However, no more than one packing was found to be double optimal
for any $n$.
Since we will show
in Section~\ref{sec:prf}
that for $n \rightarrow \infty$ the $L/S$ ratio
of the minimum-perimeter rectangles tends to 1,
and it is conjectured in this case
that the $L/S$ ratio for the minimum-area rectangles tends to
$2 + \sqrt{3}$ (see \cite{LG}),
then, conjecturally,
there may be only a finite number of double optimal packings.
In fact, we believe Figure~\ref{fig:same} lists
all double optimal packings.
For larger $n$, best rectangular shapes found
under the two optimality criteria become
noticeably different from each other,
e.g., see the best packings found under either criterion
for $n = 200$ in Figure~\ref{fig:200}.
\begin{figure}
\centering
\includegraphics*[width=6.1in]{200.ps}
\caption{The best packings found for $n=200$ equal circles
in rectangles of a variable shape: $a$) under the criterion
of the minimum perimeter, $b$) under the criterion
of the minimum area
}
\label{fig:200}
\end{figure}
Consider a configuration $C^{opt}$ of $n$ equal circles
which supplies the
global minimum perimeter for its enclosing rectangle
and suppose that $C^{opt}$
happens {\em not} to be the one that supplies
the global minimum to the area of the enclosing rectangle.
We believe, however, that the rectangle of this $C^{opt}$ still
holds a record of being a rectangle of the minimum area,
albeit locally.
Specifically, if we vary slightly the ratio $L/S$ of the rectangle
around its value given by $C^{opt}$ and for each such $L/S$
find the rectangle of the densest possible packing of $n$ unit-radius circles
(this rectangle possesses
both the minimum perimeter and the minimum area for its value $L/S$),
then the area of the rectangle for the configuration $C^{opt}$
will turn out to be the minimum among the areas of all those varied rectangles.
We also believe that the statement which is obtained from
the statement above by the interchange of
the minimum-perimeter criterion with the minimum-area criterion is also
true.
That is,
a configuration that delivers the
global minimum of the area of the enclosing rectangle
also holds a record of supplying the minimum of its perimeter,
though perhaps only locally.
Can the two sets of configurations,
those that deliver the local minima for the rectangle area
and those that deliver the local minima for the rectangle perimeter,
be the same sets?
We have tested the former statement
(that the global minimum-perimeter optimality implies the local
minimum-area optimality) numerically
for some values of $n$.
If this conjecture is true,
then the minimum-perimeter packings for some values of $n$ have to be
of a higher density than the densest packings of $n$ equal circles
in a square.
Cases $n=13$ and $n=21$ seem to be
such occurrences.
The configurations of 13 and 21 circles with
the smallest found rectangular perimeter shown
in Figures~\ref{fig:13}$b$ and \ref{fig:21}$b$ resemble the
respective best found packings of 13 and 21 equal circles in a square,
see for example, \cite{GL2}.
The only visible difference between the pairs for each $n$
is in the positions of the bonds.
The $L/S$ ratio in either minimum-perimeter packing
is very close to 1, so
each respective densest packing in a square is a local neighbor
of the corresponding minimum-perimeter packing.
It can be verified that each of the two densest-in-a-square packings
in \cite{GL2}
has a lower density than that of
its minimum-perimeter counterpart reported here.
\section{Minimum-perimeter packings for larger $n$}\label{sec:largn}
\hspace*{\parindent}
Because the results of the restricted search, available for all
$n \le 5000$, were
supported by simulation only for $n \le 62$,
our conjectures become more speculative for larger $n$.
For $n \le 62$
the restricted search reliably predicts the best packings
found by the simulation
except those of the form
\begin{equation}
\label{k(k+1)+1}
n = k(k+1)+1,
\end{equation}
where $3 \le k \le 7$.
All those predicted packings
happen to be either regular or semi-regular.
The optimal packings for the values of $n$ of the form
\eqref{k(k+1)+1}
appear to be exceptionally irregular.
For which $n > 62$ are the best packings similarly irregular?
Our previous experience with packing equal circles
in various shapes suggests that
the numbers of circles which result in
exceptionally ``bad'' or irregular optimal packings
often follow immediately after the numbers which result
in exceptionally ``good'' or regular optimal packings.
For example,
triangular numbers of circles $n=k(k+1)/2$
arrange themselves optimally in regular triangular patterns
inside equilateral triangles and
in \cite{GL1} we observed that
the optimal arrangements
of $n=k(k+1)/2+1$ circles
look irregular and
disturbed.
The minimum-perimeter packings of $n = k(k+1)$ equal circles
inside rectangles with a variable aspect ratio, as
conjectured
by the restricted search for
\begin{equation}
\label{333}
4 \le k \le 33 ,
\end{equation}
are
regular hexagonal arrangements of
$h = k+1$ alternating rows
with $w = k$ circles in each row.
For $k > 33~(n > 1122)$ this
regular pattern with $w=k$, $h=k+1$ does not serve as the optimum.
The latter statement is proven at least for $n \le 5000$.
Thus we speculate that the 30 values of $k$ in \eqref{333}
might also correspond to the 30 cases
of $n$ as computed by formula \eqref{k(k+1)+1}
in each of which the minimum-perimeter
packing of $n$ circles in rectangles with variable aspect ratio
is irregular.
The irregular minimum-perimeter packings,
probably, also occur
for some $n$ which are
{\em not}
of the form \eqref{k(k+1)+1}.
We believe $n = 66$ is the smallest such $n$.
In fact, $n = 66$ is the smallest one with the properties:
(A) it is not of the form \eqref{k(k+1)+1} for any integer $k$,
(B) the best in $R_n$ packing, as delivered by the restricted search,
has
$h = 9$ alternating rows,
$h_{-} = 4$ of which are one circle shorter, and
$v = 2$ holes.
The smallest $n$ for which (B) holds is
$n = 57$.
The best in $R_{57}$ packing is shown
in Figure~\ref{fig:57}$a$.
By attaching
a column of 9 alternating circles at the left of
Figure~\ref{fig:57}$a$ we obtain a diagram of the
best in $R_{66}$ packing.
The best in $R_{57}$ packing
can be $\delta_2$-improved as shown
in Figure~\ref{fig:57}$b$.
The best in $R_{66}$ packing
can be $\delta_2$-improved in the same way.
In either one of these $\delta_2$-improved packings
all the circles can be ``unjammed''
so that there would be no contacts among them or with the boundary.
Hence the perimeter of either one can be further
reduced by subsequent ``compaction'' of the rectangle.
Our ``compactor'' simulation suggests
that for $n = 57$
the irregular packing
thus obtained
does not overtake the conjectured
optimum packing shown in
Figure~\ref{fig:57}$c$.
However, we believe
that its analogue for $n = 66$ might just be the optimum packing
and it would be irregular.
Approximately,
each of the two ``compacted'' irregular packings might look similar
to the packing in Figure~\ref{fig:57}$b$.
Unfortunately,
obtaining their exact patterns,
including the identification of the bonds,
proved to be beyond our
current computing capabilities.
(The hardness of the computations might indicate existence
of several local minima
near the ``unjammed'' configurations.)
Note that the pattern of the least perimeter
packing for $n = 66$
would probably differ substantially
from the patterns of the irregular least perimeter packings
for $n = 13, 21, 31, 43$, and 57,
those conjectured
in Section~\ref{sec:irreg}.
The chance of encountering a non-regular $n$ (which
by definition
has to correspond to either a semi-regular or an irregular optimal packing)
increases quickly with $n$.
In Table~\ref{tab:5000} we list the
packings found by the restricted search procedure for several segments
of consecutive $n$.
The segments are arbitrarily selected within the set $62 < n \le 5000$.
\begin{table}
\begin{center}
\fbox{
\begin{tabular}{r|r|r|r|r|r||r|r|r|r|r|r||r|r|r|r|r|r}
$n$&$w$&$h$&$h_{-}$&$s$&$v$&
$n$&$w$&$h$&$h_{-}$&$s$&$v$&
$n$&$w$&$h$&$h_{-}$&$s$&$v$ \\ \hline
101 & 9 & 12 & 6 & 0 & 1 & 501 & 21 & 24 & 0 & 0 & 3
& 2001 & 44 & 46 & 23 & 0 & 0 \\
102 & 9 & 12 & 6 & 0 & 0 & 502 & 21 & 24 & 0 & 0 & 2
& 2002 & 41 & 49 & 0 & 0 & 7 \\
103 & 10 & 11 & 5 & 0 & 2 & 503 & 21 & 24 & 0 & 0 & 1
& 2003 & 41 & 49 & 0 & 0 & 6 \\
104 & 10 & 11 & 5 & 0 & 1 & 504 & 21 & 24 & 0 & 0 & 0
& 2004 & 41 & 49 & 0 & 0 & 5 \\
105 & 10 & 11 & 5 & 0 & 0 & 505 & 22 & 23 & 0 & 0 & 1
& 2005 & 41 & 49 & 0 & 0 & 4 \\
106 & 10 & 9 & 4 & 2 & 0 & 506 & 22 & 23 & 0 & 0 & 0
& 2006 & 41 & 49 & 0 & 0 & 3 \\
& 11 & 9 & 4 & 1 & 0 & *507 & 20 & 26 & 13 & 0 & 0
& 2007 & 41 & 49 & 0 & 0 & 2 \\
107 & 9 & 12 & 0 & 0 & 1 & 508 & 21 & 25 & 12 & 0 & 5
& 2008 & 41 & 49 & 0 & 0 & 1 \\
108 & 9 & 12 & 0 & 0 & 0 & 509 & 21 & 25 & 12 & 0 & 4
& 2009 & 41 & 49 & 0 & 0 & 0 \\
109 & 10 & 11 & 0 & 0 & 1 & 510 & 21 & 25 & 12 & 0 & 3
& 2010 & 42 & 48 & 0 & 0 & 6 \\
110 & 10 & 11 & 0 & 0 & 0 & 511 & 21 & 25 & 12 & 0 & 2
& 2011 & 42 & 48 & 0 & 0 & 5 \\
... & .. & .. & . & . & . & ... & . & . & . & . & .
& .. & ..& ..& ..& ..& .. \\
251 & 14 & 18 & 0 & 0 & 1 & 1001 & 30 & 34 & 17 & 0 & 2
& 4991 & 64 & 78 & 0 & 0 & 1 \\
252 & 14 & 18 & 0 & 0 & 0 & 1002 & 30 & 34 & 17 & 0 & 1
& 4992 & 64 & 78 & 0 & 0 & 0 \\
253 & 15 & 17 & 0 & 0 & 2 & 1003 & 30 & 34 & 17 & 0 & 0
& 4993 & 68 & 74 & 37 & 0 & 2 \\
254 & 15 & 17 & 0 & 0 & 1 & 1004 & 31 & 33 & 16 & 0 & 3
& 4994 & 68 & 74 & 37 & 0 & 1 \\
255 & 15 & 17 & 0 & 0 & 0 & 1005 & 31 & 33 & 16 & 0 & 2
& 4995 & 68 & 74 & 37 & 0 & 0 \\
256 & 16 & 16 & 0 & 0 & 0 & 1006 & 31 & 33 & 16 & 0 & 1
& 4996 & 65 & 77 & 0 & 0 & 9 \\
257 & 14 & 19 & 9 & 0 & 0 & 1007 & 31 & 33 & 16 & 0 & 0
& 4997 & 65 & 77 & 0 & 0 & 8 \\
258 & 15 & 18 & 9 & 0 & 3 & 1008 & 28 & 36 & 0 & 0 & 0
& 4998 & 65 & 77 & 0 & 0 & 7 \\
259 & 15 & 18 & 9 & 0 & 2 & 1009 & 29 & 35 & 0 & 0 & 6
& 4999 & 65 & 77 & 0 & 0 & 6 \\
260 & 15 & 18 & 9 & 0 & 1 & 1010 & 29 & 35 & 0 & 0 & 5
& 5000 & 65 & 77 & 0 & 0 & 5 \\
... & .. & .. & ..& ..& .. & .... & .. & .. & ..& ..& ..
& ... & . & . & . & . & . \\
\end{tabular}
}
\caption{Packings of $n$ circles in rectangles
of the smallest perimeter as found by the restricted search
for several contiguous segments of $n$ that were
arbitrarily selected
within the set $62 < n \le 5000$.
The packings shown are either regular (with $v = 0$),
in which case they are all believed to be globally optimal,
with the exception of the case $n=507$,
or regular with holes
(with $v > 0$), in which case they can be $\delta$-improved into
semi-regular packings.
The exceptional entry $n = 507 = 22\times23+1$ is marked with a star.
The optimal packing for 507 circles might
be of an unknown irregular pattern
}
\label{tab:5000}
\end{center}
\end{table}
The structure of Table~\ref{tab:5000} is similar to that
of Table~\ref{tab:1t62}, except that
an additional column is provided for
the number of holes $v$.
The entries $n$ with $v > 0$ are frequent
in Table~\ref{tab:5000}, unlike
Table~\ref{tab:1t62}.
Also, we choose to skip the $\delta$ column here.
(Determining and presenting the higher-order $\delta_i$
would involve many details
exceeding the reasonable limits for this paper.)
The discussion above suggests
that, perhaps,
some of the entries with
multiple holes, $v > 1$,
correspond to irregular packings,
if their $\delta$-improvements
avail themselves
to further improvements in the same way as the packing in
Figure~\ref{fig:57}$b$.\footnote{
As reported in \cite{LG},
for some $n \ge 393$ the configurations with the least area
among the set $R_n$ have multiple holes.
In particular,
the smallest $n$ for which
property (B) holds for
the least rectangular area configuration
among the set $R_n$
is $n = 453$.
The non-zero parameters of the configuration
are $w~=~51, h~=~9, h_{-}~=~4, v~=~2$.
The packing of $453$ congruent circles which delivers
the global minimum to the area of the enclosing rectangle
is, probably, irregular.
}
As $n$ grows, the values of $n$ that correspond to regular packings
become rare.
However, we do not believe regular $n$ eventually disappear.
In other words, we do not believe the largest such $n$ exists.
For the infinite sequence of values of $n$
of the form
\begin{equation}
\label{square1}
n=\frac{1}{2}(a_k+1)(b_k+1),~~ k = 1,2,..
\end{equation}
with
\begin{equation}
\label{square2}
a_1 = 1, a_2 = 3, a_{k+2} = 4a_{k+1} - a_k,~~~
b_1 = 1, b_2 = 5, b_{k+2} = 4b_{k+1} - b_k,~~k = 1,2,...
\end{equation}
the minimum-perimeter
packings are probably regular.
The fractions
$a_k /b_k$ are (alternate) convergents to
$1/ \sqrt 3$, and Nurmela et al.
\cite{NOR} conjectured that for these $n$ a ``nearly'' hexagonal
packing of $n$ circles in a square, which they describe, is in fact optimal,
that is, it has the largest possible density among packings in a square.
The beginning terms $n = 12, 120, 1512$
of sequence \eqref{square1} are within the range $n \le 5000$
for which we exercised the restricted search procedure.
For these three $n$'s
the search delivers regular patterns
as the minimum-perimeter packing in $R_n$:
a $4 \times 3$ square-grid for $n=12$,
the hexagonal packing
with $w = 10$, $h=12$ for $n=120$ and
that
with $w = 36$, $h=42$ for $n=1512$.
Note that the latter two are
the {\em exact} hexagonal packings
and they have densities larger than that of the corresponding
``nearly'' hexagonal packings, those that are best in a square
according to the conjecture in \cite{NOR}.
We believe the same relation between
the best packings of $n$ circles in a square and the minimum-perimeter
packings of $n$ circles in rectangles continues
for all larger $n$ of the form \eqref{square1}, \eqref{square2}.
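The terms of sequence \eqref{square1}--\eqref{square2} and the convergence of $a_k/b_k$ to $1/\sqrt 3$ are easy to tabulate; a short Python sketch (note that the $k = 1$ term of \eqref{square1} is the trivial $n = 2$):

```python
import math

# Recurrences (square2):
a, b = [1, 3], [1, 5]
for _ in range(6):
    a.append(4 * a[-1] - a[-2])
    b.append(4 * b[-1] - b[-2])

# Sequence (square1); the terms 12, 120, 1512 fall within
# the searched range n <= 5000.
n_seq = [(ai + 1) * (bi + 1) // 2 for ai, bi in zip(a, b)]

# a_k / b_k are alternate convergents to 1/sqrt(3):
conv = [ai / bi for ai, bi in zip(a, b)]
```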
Increasing the value of $n$ seems to diminish
such phenomena as dimorphism and hybrid packings.
Dimorphism of the optima
is the existence of two different optimal rectangular shapes.
The dimorphism occurs within the interval $1 \le n \le 62$
for $n = 11$, 19, 28, 29, 40 and 53,
see Table~\ref{tab:1t62}.
The hybrid packings,
identifiable in the table by
$h$ and $s$
being both positive,
occur for $n = 11$, 15, 19, 24, 28, 29, 34, 40, 47, 53, and 61.
Among the $n$'s selected for Table~\ref{tab:5000},
the two different optima exist
only for $n = 106$.
Both optima also happen to be hybrids
and no other hybrid occurs in Table~\ref{tab:5000}.
The remaining cases of dimorphism and/or hybrid packings
among $1 \le n \le 5000$ are all listed
in Table~\ref{tab:pm}.
\begin{table}
\begin{center}
\fbox{
\begin{tabular}{r|r|r|r|r||r|r|r|r|r||r|r|r|r|r}
$n$&$w$&$h$&$h_{-}$&$s$&
$n$&$w$&$h$&$h_{-}$&$s$&
$n$&$w$&$h$&$h_{-}$&$s$ \\ \hline
69 & 8 & 7 & 3 & 2 & 151 &12 &11 & 5 & 2
& 298 &17 &17 & 8 & 1 \\
& 9 & 7 & 3 & 1 & &13 &11 & 5 & 1
& &18 &17 & 8 & 0 \\
78 & 9 & 7 & 3 & 2 & 176 &13 &13 & 6 & 1
& 316 &18 &17 & 8 & 1 \\
86 & 9 & 9 & 4 & 1 & &14 &13 & 6 & 0
& 371 &19 &19 & 9 & 1 \\
&10 & 9 & 4 & 0 & 190 &14 &13 & 6 & 1
& &20 &19 & 9 & 0 \\
96 &10 & 9 & 4 & 1 & 233 &15 &15 & 7 & 1
& 452 &21 &21 &10 & 1 \\
127 &11 &11 & 5 & 1 & &16 &15 & 7 & 0
& &22 &21 &10 & 0 \\
&12 &11 & 5 & 0 & 249 &16 &15 & 7 & 1
& 541 &23 &23 &11 & 1 \\
139 &12 &11 & 5 & 1 & & & & &
& &24 &23 &11 & 0 \\
\end{tabular}
}
\caption{Packings of $n$ circles in a rectangle
of the smallest found perimeter which are hybrid
and/or exist in two differently shaped rectangles.
All such cases with $n > 62$ are listed in this table, except $n = 106$,
which is listed in Table~\ref{tab:5000}
}
\label{tab:pm}
\end{center}
\end{table}
For either phenomenon, dimorphism or hybrid packings,
the largest $n$ for which the phenomenon still occurs
appears to be $n = 541$.
For comparison:
both phenomena also occur for the criterion of the minimum area
in \cite{LG}, but both end much sooner: the largest
$n$ for which either phenomenon still occurs appears to be $n = 31$.
\section{Optimal rectangles are asymptotically square}\label{sec:prf}
\hspace*{\parindent}
In this section we show that, as $n$ goes to infinity, the ratio
$L/S$ for the minimum-perimeter rectangle in which $n$ congruent circles
can be packed tends to 1. This will follow from the following
considerations. For a compact, convex subset $X$ of the Euclidean
plane, define the {\em packing number} $p(X)$ to be the cardinality of
the largest possible set of points within $X$ such that the distance
between any two of the points is at least 1.
Let $A(X)$ be the area and $P(X)$ be the perimeter of $X$.
The following result of
Oler (see \cite{Oler}, \cite{FG}) bounds $p(X)$:
{\bf Theorem:}
$$p(X) \le \frac{2}{\sqrt 3} A(X) + \frac{1}{2} P(X) +1.$$
It is easy to prove the following lower bound on the packing number for
a square $S(\alpha)$ of side $ \alpha$:
{\bf Fact}: $$p(S(\alpha)) \ge \frac{2}{\sqrt 3} \alpha^2.$$
Suppose $R$ is an optimal rectangle with side
lengths $m + \epsilon$ and $m - \epsilon$.
Thus, $R$ has perimeter $4m$ and area $m^2 - \epsilon^2$. By the preceding
upper bound on $p(R)$ we have
$$p(R) \le \frac{2}{\sqrt 3}(m^2-\epsilon^2) + \frac{1}{2}(4m) + 1.$$
Since $R$ is optimal, we must have $p(R) \ge p(S(m))$: the square $S(m)$ has the same perimeter $4m$, so if it packed more than $n$ points, a slightly smaller square would still contain $n$ points, contradicting the optimality of $R$.
This implies that
$$\frac{2}{\sqrt 3}(m^2-\epsilon^2) + 2m + 1 \ge \frac{2}{\sqrt 3}m^2.$$
From this it follows that
$$\epsilon \le ( \frac{\sqrt 3}{2}(2m+1) )^{1/2},$$
which is of a lower order than $m$, the order of the side lengths.
It is now straightforward to convert this inequality to one for packing
circles rather than points, and our claim is proved. As an example, if
$m = 1000$ then $\epsilon < 41.628\ldots$, so that the ratio $L/S \le 1.0869$.
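As a quick numerical check (our sketch, not part of the original argument), the bound $\epsilon \le (\frac{\sqrt 3}{2}(2m+1))^{1/2}$ and the resulting ratio can be evaluated directly for $m = 1000$:

```python
from math import sqrt

def eps_bound(m):
    # epsilon <= sqrt((sqrt(3) / 2) * (2m + 1)), the bound derived above
    return sqrt(sqrt(3) / 2 * (2 * m + 1))

m = 1000.0
eps = eps_bound(m)             # about 41.628
ratio = (m + eps) / (m - eps)  # upper bound on L/S for sides m +/- eps
assert eps < 42 and round(ratio, 4) == 1.0869
```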
\newpage
| {
"timestamp": "2008-05-30T11:34:11",
"yymm": "0412",
"arxiv_id": "math/0412443",
"language": "en",
"url": "https://arxiv.org/abs/math/0412443",
"abstract": "We use computational experiments to find the rectangles of minimum perimeter into which a given number n of non-overlapping congruent circles can be packed. No assumption is made on the shape of the rectangles. In many of the packings found, the circles form the usual regular square-grid or hexagonal patterns or their hybrids. However, for most values of n in the tested range n =< 5000, e.g., for n = 7, 13, 17, 21, 22, 26, 31, 37, 38, 41, 43...,4997, 4998, 4999, 5000, we prove that the optimum cannot possibly be achieved by such regular arrangements. Usually, the irregularities in the best packings found for such n are small, localized modifications to regular patterns; those irregularities are usually easy to predict. Yet for some such irregular n, the best packings found show substantial, extended irregularities which we did not anticipate. In the range we explored carefully, the optimal packings were substantially irregular only for n of the form n = k(k+1)+1, k = 3, 4, 5, 6, 7, i.e., for n = 13, 21, 31, 43, and 57. Also, we prove that the height-to-width ratio of rectangles of minimum perimeter containing packings of n congruent circles tends to 1 as n tends to infinity.",
"subjects": "Metric Geometry (math.MG)",
"title": "Minimum Perimeter Rectangles That Enclose Congruent Non-Overlapping Circles",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771755256741,
"lm_q2_score": 0.872347369700144,
"lm_q1q2_score": 0.8608124735499592
} |
https://arxiv.org/abs/1412.7702 | Convergence from below suffices | An elementary application of Fatou's lemma gives a strengthened version of the monotone convergence theorem. We call this the convergence from below theorem. We make the case that this result should be better known, and deserves a place in any introductory course on measure and integration. | \section{The convergence from below theorem}
Three famous convergence-related results appear in most introductory courses on measure and integration:
the monotone convergence theorem, Fatou's lemma and the dominated convergence theorem.
In teaching this material it is common to follow the approach taken in, for example, \cite[Chapter 1]{Rudin}.
There Rudin begins by proving the monotone convergence theorem and then deduces Fatou's lemma. Finally, he deduces the dominated convergence theorem from Fatou's lemma. The result which we call the {\em convergence from below theorem}
(Theorem \ref{CBT} below) is essentially distilled from this proof of the dominated convergence theorem (\cite[pp. 26-27]{Rudin}).
We do not claim originality for this result, or for the related Theorem \ref{infinite-integral}. They are presumably known, although we know of no explicit references for them. However, we wish to make the case that they should be better known than they are. In particular, we suggest that Theorem \ref{CBT} deserves a name and a place in the syllabus when this material is taught.
\vskip .2cm
Throughout we discuss results concerning pointwise convergence. In the usual way, there are versions of all these results in terms of almost-everywhere convergence instead.
For convenience, we shall use the following terminology.
Let $X$ be a set, let $(f_n)$ be a sequence of functions from $X$ to $[0,\infty]$ and let
$f$ be another function from $X$ to $[0,\infty]$. We say that the functions $f_n$ converge to $f$
{\bf from below on $X$} if the functions $f_n$ tend to $f$ pointwise on $X$ and
$f_n(x) \leq f(x)$ $(n \in {\mathbb N},~x \in X)$.
We say that the functions $f_n$ converge to $f$
{\bf monotonely from below on $X$} if the functions $f_n$ tend to $f$ pointwise on $X$ and, for all $x \in X$, we have
$f_1(x) \leq f_2(x) \leq f_3(x) \leq \cdots$.
\medskip
We begin by recalling the statement of the monotone convergence theorem.
\newpage
\begin{theorem}
{\bf (Monotone convergence theorem)}
Let $(X,{\cal F},\mu )$
be a measure space, and let $f : X \to [0,\infty ]$ be a measurable function.
Let $(f_n)$ be a sequence of measurable functions from $X$ to $[0,\infty]$ which converge to $f$ monotonely from below on $X$.
Then
\[
\int _X f\,{\rm d} \mu = \lim _{n\to \infty} \int _X f_n\,{\rm d} \mu\,.
\]
\end{theorem}
The measurability assumption on
$f$ is, of course, redundant here as it follows from the pointwise convergence of $f_n$ to $f$.
We now observe that an elementary application of Fatou's lemma shows that we may weaken the monotone convergence assumption. We have not found this result stated explicitly in the literature, and it does not appear to have a name. We propose to call it the {\em convergence from below theorem}.
The concepts involved in the statements and applications of the monotone convergence theorem and the dominated convergence theorem are relatively simple. We suggest that convergence from below is a similarly simple concept, which should appeal to all levels of student. In particular, those students who find the concepts of $\liminf$ and $\limsup$ difficult may be happier applying the convergence from below theorem rather than Fatou's lemma (where possible).
\begin{theorem}
{\bf (Convergence from below theorem)}
\label{CBT}
Let $(X,{\cal F},\mu )$
be a measure space, and let $f : X \to [0,\infty ]$ be a measurable function.
Let $(f_n)$ be a sequence of measurable functions from $X$ to $[0,\infty]$ which converge to $f$ from below on $X$.
Then
\[
\int _X f\,{\rm d} \mu = \lim _{n\to \infty} \int _X f_n\,{\rm d} \mu\,.
\]
\end{theorem}
{\bf Proof.}
Clearly
\[
\limsup_{n\to \infty}\int_X f_n\,{\rm d} \mu \leq \int_X f\, {\rm d} \mu\,.
\]
However, by Fatou's lemma,
\[
\int_X f\,{\rm d} \mu \leq \liminf_{n\to\infty}\int_X f_n\,{\rm d} \mu\,.
\]
The result follows immediately.
\hfill$\Box$\par
\vskip .2cm
\noindent
{\bf Remarks.}
\begin{enumerate}
\item[(1)]
The monotone convergence theorem is now a special case of our stronger convergence from below theorem.
\item[(2)]
In the case where $\int_X f\,{\rm d} \mu < \infty$, the convergence from below theorem is an immediate consequence of the dominated convergence theorem.
\item[(3)]
In the case where $\int_X f\,{\rm d} \mu = \infty$, the result does not follow directly from either the monotone convergence theorem or the dominated convergence theorem. The following elementary result clarifies the situation in this case.
\end{enumerate}
\begin{theorem}
\label{infinite-integral}
Let $(X,{\cal F},\mu )$
be a measure space, and let $f : X \to [0,\infty ]$ be a measurable function with $\int_X f\,{\rm d} \mu = \infty$.
Let $(f_n)$ be a sequence of measurable functions from $X$ to $[0,\infty]$ which converge to $f$ pointwise on $X$.
Then
\[
\lim _{n\to \infty} \int _X f_n\,{\rm d} \mu\, = \infty.
\]
\end{theorem}
{\bf Proof.}
By Fatou's lemma,
\[
\infty = \int_X f\,{\rm d} \mu \leq \liminf_{n\to\infty}\int_X f_n\,{\rm d} \mu\,.
\]
It follows immediately that
$\lim _{n\to \infty} \int _X f_n\,{\rm d} \mu\, = \infty$, as required.
\hfill$\Box$\par
\vskip .2cm
We suggest that the convergence from below theorem deserves a place between Fatou's lemma and the dominated convergence theorem: the dominated convergence theorem may be deduced from the convergence from below theorem as follows. This proof is based on the proof given in \cite[pp. 26-27]{Rudin}, but applying the convergence from below theorem in the middle.
\begin{theorem} {\bf (Dominated convergence theorem)}
Let $(X,{\cal F},\mu )$
be a measure space, let $g : X \to [0,\infty]$ be a measurable function
with $\int_X g\,{\rm d} \mu < \infty$ and let $f$ be a measurable function from $X$ to ${\mathbb C}$.
Let $(f_n)$ be a sequence of measurable functions from $X$ to ${\mathbb C}$ which converge to $f$ pointwise on $X$
and such that $|f_n(x)| \leq g(x)$ $(n \in {\mathbb N},~x \in X)$.
Then
\[
\lim_{n \to \infty}\int_X|f_n-f|\,{\rm d} \mu\, = 0
\]
and
\[
\int _X f\,{\rm d} \mu = \lim _{n\to \infty} \int _X f_n\,{\rm d} \mu\,.
\]
\end{theorem}
{\bf Proof.}
The second equality follows quickly from the first.
To prove the first equality, observe that the non-negative, measurable functions $2g-|f_n-f|$ converge to the function $2g$ from below. Thus, by the convergence from below theorem,
\[
\lim_{n \to \infty}\int_X\left( 2g -|f_n-f|\right)\,{\rm d} \mu\, = \int_X 2g \,{\rm d} \mu\,.
\]
The result now follows by subtracting $\int_X 2g \,{\rm d} \mu\,$ from both sides and rearranging.
\hfill$\Box$\par
\vskip .2cm
As discussed above, the convergence from below theorem is more than covered by a combination of the dominated convergence theorem and Theorem \ref{infinite-integral}. Also, since the convergence from below theorem is such an elementary consequence of Fatou's lemma, any applications may also be deduced from that lemma.
However, the monotone convergence theorem continues to be used in the literature, and any application of the monotone convergence theorem can be replaced directly by an application of the
convergence from below theorem.
Of course, we then only need to check the weaker conditions of the latter theorem.
Also, the convergence from below theorem can be used to give elegant solutions to simple problems where neither the monotone convergence theorem nor the dominated convergence theorem apply directly. Here is such an application (an elementary undergraduate exercise).
\vskip .2cm
\noindent
{\bf Exercise.}
Let $\lambda$ denote Lebesgue measure on ${\mathbb R}$.
Prove that, for every Lebesgue measurable subset $E$ of ${\mathbb R}$, we have
\[
\int_E x^2\, {\rm d} \lambda(x) = \lim_{n \to \infty} \int_E \left( x^2 - \frac{1}{n} | x \sin n x|\right) \, {\rm d} \lambda(x)\,.
\]
\vskip .2cm
\noindent
{\bf Solution.}
Since $|x \sin n x| \leq n x^2$ ($n \in {\mathbb N}$, $x \in {\mathbb R}$), the result is an immediate consequence of the convergence from
below theorem.
We may, instead, apply Fatou's lemma directly. This does, of course, lead to a quick solution which essentially proves the convergence from below theorem again along the way.
We may also consider separately the cases where $\int_E x^2\, {\rm d} \lambda(x)<\infty$ and
where $\int_E x^2\, {\rm d} \lambda(x)=\infty$.
In the first case we may apply the dominated convergence theorem, and in the second case we may use Theorem \ref{infinite-integral}. However the use of the convergence from below theorem renders this splitting into two cases unnecessary.
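As a numerical sanity check of this solution (our sketch, not part of the paper; it takes $E = [0,1]$, where the Lebesgue integrals agree with Riemann sums for these continuous integrands):

```python
import math

def midpoint_integral(f, a, b, m=20000):
    # midpoint Riemann sum approximation of the integral of f over [a, b]
    h = (b - a) / m
    return h * sum(f(a + (k + 0.5) * h) for k in range(m))

target = midpoint_integral(lambda x: x * x, 0.0, 1.0)  # integral of x^2, about 1/3

for n in (1, 10, 100, 1000):
    fn = lambda x, n=n: x * x - abs(x * math.sin(n * x)) / n
    approx = midpoint_integral(fn, 0.0, 1.0)
    # the deficit is (1/n) * int_0^1 |x sin nx| dx <= (1/n) * int_0^1 x dx = 1/(2n)
    assert 0.0 <= target - approx <= 0.5 / n + 1e-9
```

The deficit bound $1/(2n)$ (from $|\sin nx| \le 1$) shows directly why the integrals converge; the convergence from below theorem handles both the finite and infinite cases of $\int_E x^2\,{\rm d}\lambda$ at once.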
\section{Proving the convergence from below theorem directly}
Above we suggested following the usual development of the theory, but inserting the convergence from below theorem between Fatou's lemma and the dominated convergence theorem. There are several alternatives, however. For example, we can prove Fatou's lemma directly first and then deduce the convergence from below theorem. The monotone convergence theorem and the dominated convergence theorem then follow easily.
Another approach is to modify the standard proof of the monotone convergence theorem (\cite[1.26]{Rudin}) in order to give a direct proof of the convergence from below theorem. The monotone convergence theorem, dominated convergence theorem and Fatou's lemma are then corollaries of this.
We conclude with such a direct proof.
In this proof we
avoid explicit reference to $\liminf$ and $\limsup$ in order to make the proof more accessible to students who have
difficulty with these concepts. However, only minor changes are needed to give a direct proof of Fatou's lemma instead.
\vskip .2cm
\noindent{\bf Direct proof of Theorem \ref{CBT}.}
First note that we have $\int_X f_n\,{\rm d} \mu \leq \int_X f\,{\rm d} \mu$ $(n \in {\mathbb N})$. Thus it is sufficient to prove that,
for all $\alpha < \int_X f\,{\rm d} \mu$, $\int_X f_n\,{\rm d} \mu$ is eventually greater than $\alpha$, i.e., there is an $N \in {\mathbb N}$ such that, for all $n \geq N$, we have $\int_X f_n\,{\rm d} \mu > \alpha$.
Given such an $\alpha$, the definition of the integral tells us that there is a nonnegative, simple measurable function $s$ with $s(x)\leq f(x)$ $(x \in X)$ and such that
$\int_X s \,{\rm d} \mu > \alpha$.
Choose $c \in (0,1)$ large enough that
$\int_X c s \,{\rm d} \mu > \alpha$.
Set
$A_n = \{x \in X: cs(x) \leq f_n(x)\}$
and, for each $k \in {\mathbb N}$, set
\[
B_k = \bigcap_{n \geq k} A_n = \{x \in X: cs(x) \leq f_n(x) \text{ for all } n \geq k\}.
\]
Clearly, $B_1\subseteq B_2\subseteq \cdots$. We claim that $\bigcup_{k=1}^\infty B_k = X$.
Let $x \in X$. If $s(x)>0$, then $cs(x)<f(x)$, and so
$x \in B_k$ provided that $k$ is large enough. On the other hand, if $s(x)=0$, then $x \in B_k$ for all $k \in {\mathbb N}$.
This proves our claim.
By standard continuity properties of measures, we have
\[
\int_X{cs\,{\rm d} \mu} = \lim_{k\to \infty} \int_{B_k} cs\,{\rm d}\mu\,.
\]
Choose $N \in {\mathbb N}$ such that $\int_{B_N} cs\,{\rm d}\mu > \alpha$.
For all $n \geq N$ and $x \in B_N$ we have $cs(x) \leq f_n(x)$.
Thus, for $n \geq N$, we have
\[
\int_X f_n \,{\rm d}\mu\, \geq \int_{B_N} f_n \,{\rm d}\mu\, \geq \int_{B_N} cs \,{\rm d}\mu\, > \alpha,
\]
as required.
\hfill$\Box$\par
| {
"timestamp": "2014-12-25T02:09:06",
"yymm": "1412",
"arxiv_id": "1412.7702",
"language": "en",
"url": "https://arxiv.org/abs/1412.7702",
"abstract": "An elementary application of Fatou's lemma gives a strengthened version of the monotone convergence theorem. We call this the convergence from below theorem. We make the case that this result should be better known, and deserves a place in any introductory course on measure and integration.",
"subjects": "Functional Analysis (math.FA)",
"title": "Convergence from below suffices",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620077761345,
"lm_q2_score": 0.8807970764133561,
"lm_q1q2_score": 0.8738053160699835
} |
https://arxiv.org/abs/1911.09792 | Minority Voter Distributions and Partisan Gerrymandering | Many people believe that it is disadvantageous for members aligning with a minority party to cluster in cities, as this makes it easier for the majority party to gerrymander district boundaries to diminish the representation of the minority. We examine this effect by exhaustively computing the average representation for every possible $5\times 5$ grid of population placement and district boundaries. We show that, in fact, it is advantageous for the minority to arrange themselves in clusters, as it is positively correlated with representation. We extend this result to more general cases by considering the dual graph of districts, and we also propose and analyze metaheuristic algorithms that allow us to find strong lower bounds for maximum expected representation. |
\section{Introduction and Motivation}
\input{sections/intro.tex}
\section{Basic Definitions and Terminology}
\input{sections/definitions.tex}
\section{Grid}
\input{sections/grid.tex}
\newpage
\section{Metaheuristics for Optimization}
\input{sections/metaheuristics.tex}
\section{Conclusion and Further Research}
\input{sections/conclusion.tex}
\section*{Acknowledgements}
The results in this paper originated from a research project at PROMYS 2019. We are deeply grateful to Diana Davis for proposing the problem, as well as for her constant support and guidance. We thank our counselor Kenz Kallal for his mentoring and for always being there to support us. This research would not be possible without David Fried, Glenn Stevens, Roger Van Peski, the PROMYS Foundation, and the Clay Mathematics Institute. We would also like to thank Moon Duchin for her ideas, suggestions, and insight into this topic. Additionally, the first author would like to thank Justin Almeida for his guidance and support following the end of the program.
\printbibliography
\newpage
\input{sections/appendix.tex}
\end{document}
\section{Computation}
The code used to perform our search, as well as the code for our metaheuristic algorithms (and their evaluations), can be found at \url{https://github.com/jiahuac/GerryGrid}.
\section{$\mathsf{ClusP}$ Against $\mathbb{E}(\mathsf{Rep})$ Given $\mathsf{Num}$}
\label{appendix:clush-v-rep}
\input{figures/appendixgraphs.tex}
\begin{landscape}
\section{The rates at which each algorithm approaches the absolute known maximum for a $5\times 5$ grid for differing $k_\mathrm{max}$. ($\mathsf{Num}=5,6,7,8,9,10,11,12$)}
\label{appendix:comparisons}
\includegraphics[scale=0.48, trim={1cm 0.48cm 2cm 1cm},clip]{figures/algorithm-comparison/5.pdf}
\includegraphics[scale=0.48, trim={1cm 0.48cm 2cm 1cm},clip]{figures/algorithm-comparison/6.pdf}
\includegraphics[scale=0.48, trim={1cm 0.48cm 2cm 1cm},clip]{figures/algorithm-comparison/7.pdf}
\includegraphics[scale=0.48, trim={1cm 0.48cm 2cm 1cm},clip]{figures/algorithm-comparison/8.pdf}
\includegraphics[scale=0.48, trim={1cm 0.48cm 2cm 1cm},clip]{figures/algorithm-comparison/9.pdf}
\includegraphics[scale=0.48, trim={1cm 0.48cm 2cm 1cm},clip]{figures/algorithm-comparison/10.pdf}
\includegraphics[scale=0.48, trim={1cm 0.48cm 2cm 1cm},clip]{figures/algorithm-comparison/11.pdf}
\includegraphics[scale=0.48, trim={1cm 0.48cm 2cm 1cm},clip]{figures/algorithm-comparison/12.pdf}
\end{landscape}
\subsection{Methodology}
Harris provides an efficient algorithm for enumerating the partitions of an $n\times n$ square grid into equal-size districts when the dimensions are sufficiently small \cite{Harris2010CountingIlk}. We use this algorithm to generate an exhaustive list $\mathbb{D}$ of all $4006$ possible districting plans for a $5\times 5$ square grid. There are a total of $2^{25}$ possible voter distributions $\Delta$ on a $5\times 5$ grid, with $2$ choices for each of the $25$ blocks. Taking the quotient by the dihedral group $D_4$ (of order $8$) to account for the symmetries of the square leaves roughly $2^{25}/8 \approx 4.2 \times 10^{6}$ distinct voter distributions. We were able to compute this exhaustive search over every pair of voter distribution and districting plan in several hours on a standard laptop. The $6\times 6$ square grid increases the computational complexity by a factor of over $200{,}000$, since $\frac{451206}{4006} \cdot \frac{2^{36}}{2^{25}} > 200{,}000$; this exceeds the computational capacity available to us and most institutions.
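To make the symmetry reduction concrete, Burnside's lemma counts the two-colorings of the $5\times 5$ grid up to the eight symmetries of the square. The following Python sketch is our illustration (the helper names are ours, not the paper's):

```python
def d4_perms(n=5):
    """The 8 symmetries of an n x n grid, each as a permutation of cell indices."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    index = {rc: i for i, rc in enumerate(cells)}
    syms = [
        lambda r, c: (r, c),                  # identity
        lambda r, c: (c, n - 1 - r),          # rotate 90
        lambda r, c: (n - 1 - r, n - 1 - c),  # rotate 180
        lambda r, c: (n - 1 - c, r),          # rotate 270
        lambda r, c: (r, n - 1 - c),          # horizontal reflection
        lambda r, c: (n - 1 - r, c),          # vertical reflection
        lambda r, c: (c, r),                  # main-diagonal reflection
        lambda r, c: (n - 1 - c, n - 1 - r),  # anti-diagonal reflection
    ]
    return [[index[s(r, c)] for (r, c) in cells] for s in syms]

def cycle_count(perm):
    # number of cycles of a permutation given as a list of images
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start in seen:
            continue
        cycles += 1
        j = start
        while j not in seen:
            seen.add(j)
            j = perm[j]
    return cycles

# Burnside's lemma: average, over the group, the number of colorings fixed
total = sum(2 ** cycle_count(p) for p in d4_perms(5))
classes = total // 8
assert classes == 4211744  # about 4.2 million distinct voter distributions
```

This is close to the naive estimate $2^{25}/8 \approx 4.19 \times 10^{6}$, since only a small fraction of distributions have any symmetry.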
For each voter distribution that we iterate through, we record the following pieces of information: the number of {\color{red}$\bullet$} blocks, the clustering score ($\mathsf{Clus}$ and $\mathsf{ClusP}$), the distribution of the number of seats this voter distribution yields over all districting plans, and the expected (average) seats this voter distribution yields over all districting plans.
\begin{Def}
We assume there are two parties, Dot ({\color{red}$\bullet$}) and Blank. Define $\mathsf{Num}$ as the number of dot-voting blocks in a given voting distribution $\Delta$.
\end{Def}
\subsection{Observations}
We use our exhaustive data from the $5\times5$ grid search to set a certain number of {\color{red}$\bullet$} blocks as a constant and retrieve all such voter distributions $\Delta$ with that constant number of {\color{red}$\bullet$} blocks. Using these, we examine the correlation between clustering $\mathsf{ClusP}(\Delta)$ and expected representation $\mathbb{E}(\mathsf{Rep}_\Delta)$ for that specific $\mathsf{Num}$. Additionally, we quantify this correlation, which we find to be approximately linear, using a linear regression test.
\begin{figure}[h]
\centering
\includegraphics[width=4in]{figures/cluspnumh/ClusP-10-hex.pdf}
\caption{$\mathsf{ClusP}$ vs. $\mathbb{E}(\mathsf{Rep})$ for $\mathsf{Num} = 10$.}
\label{fig:clusp10}
\end{figure}
For instance, Figure \ref{fig:clusp10} shows a plot of $\mathsf{ClusP}(\Delta)$ against $\mathbb{E}(\mathsf{Rep}_\Delta)$ with a linear regression line of best fit superimposed. There are $\binom{25}{\mathsf{Num}}$ possible ways to arrange $\mathsf{Num}$ Dot blocks on a $5\times5$ grid, so a scatter plot would obscure the relative densities of data points. Hence, we use a heatmap-bin plot to display the data, together with histograms of each data set, $\mathsf{ClusP}(\Delta)$ and $\mathbb{E}(\mathsf{Rep}_\Delta)$. The darker shades of the plot indicate higher relative densities.
Furthermore, we can vary $\mathsf{Num}$ to extend these results, which gives us a scatterplot for each value of $\mathsf{Num}$. We then find the slope of the linear regression line for each case and compare it with the corresponding $\mathsf{Num}$ value. Note that when $\mathsf{Num} = 0$, there is no $\Delta$ from which to compute $\mathsf{ClusP}(\Delta)$. When $\mathsf{Num} = 1$ or $2$, the state space of possible voter distributions is so small that the linear regression test results in a slope of $0$. Table \ref{tab:LinRegTestNumH312} gives the slope of the linear regression line for each $\mathsf{Num} < 13$; the plots for the other values of $\mathsf{Num}$, which produce these results, can be found in Appendix \ref{appendix:clush-v-rep}.
\begin{table}[h]
\begin{tabular}{@{}rl@{}}
\toprule
$\mathsf{Num}$ & \textbf{Slope} \\ \midrule
1 & 0 \\
2 & 0 \\
3 & 0.2993106942 \\
4 & 0.6704477756 \\
5 & 1.040404652 \\
6 & 1.350056768 \\
7 & 1.553744171 \\
8 & 1.619139053 \\
9 & 1.527148112 \\
10 & 1.271799538 \\
11 & 0.8600922591 \\
12 & 0.3117844548 \\ \bottomrule \\
\end{tabular}
\caption{Results of linear regression test for $\mathsf{Num} = 3$ to $12$}
\label{tab:LinRegTestNumH312}
\end{table}
Then, we take the slope from each linear regression test and plot $\mathsf{Num}$ against these slope values; the results appear in Figure \ref{fig:numh-vs-slope}. When Dots are in the minority, with $3 \leq \mathsf{Num} \leq 12,$ the slope is positive: there is a positive correlation between increased clustering, as measured by a higher clustering score, and increased representation. Therefore, if the minority desires maximal representation, clustering is the better strategy. This correlation peaks at $\mathsf{Num}=8$ and then weakens as $\mathsf{Num}$ increases. Furthermore, when Dots are in the majority, with $13 \leq \mathsf{Num} \leq 22$, the slope is negative: when the majority clusters, its expected representation decreases, so the majority's best strategy is to disperse evenly instead. For $\mathsf{Num} = 23, 24$, the cases are symmetric to $\mathsf{Num} = 1, 2$, with very few possible $\Delta$ and a linear regression slope of $0$.
\input{figures/numHvslope.tex}
Using our data, we can also determine the \textbf{absolute} best and worst voter distributions that yield the highest and lowest amounts of representation averaged over all the districting plans. We can make observations regarding these best and worst performing voter distributions.
\input{figures/outliersgood5x5.tex}
\input{figures/outliersbad5x5.tex}
Clustering leads to packing for the minority party, which tends to yield higher expected representation across all possible districting plans. We observe common trends across the outlier cases that maximize expected representation. In Figure \ref{fig:square-gridgood}, we show the representation-maximizing voter distributions for Dot vote shares of 9, 10, and 11 out of 25. Note that although a higher clustering score generally correlates with higher expected representation, the voter distribution that maximizes expected representation does not actually have the highest clustering score. An important observation is that these optimal voter distributions all contain \emph{enclaves} of blocks of the opposing party. Within an enclave, the opposing party is the local minority, far from the bulk of its blocks; this is sub-optimal for the opposing party, since it is difficult for any district to contain enough of its blocks to win representation. The votes secured for the opposing party by these enclaves are therefore almost always wasted.
We also observe trends across all outliers that minimize expected representation. Figure \ref{fig:square-gridbad} displays the representation-minimizing voter distributions for Dot vote share of 9, 10, and 11 out of 25. The least successful minority voter distributions have extremely low cluster scores, with each minority block adjacent to mostly or only majority blocks. The lack of minority clustering allows the opposing (majority) party to easily crack minority votes and secure maximal representation.
This leads us to make the following conjecture:
\begin{Conj}
\label{conj:clustering-relation}
\textbf{(Clustering Relation).} For minority $\mathsf{Num}$, there is a positive correlation between $\mathsf{ClusP}$ and $\mathbb{E}(\mathsf{Rep}_\Delta)$.
\end{Conj}
Figure \ref{fig:3dscatter} plots voter distribution, clustering score for Dots, and expected representation together on a three-dimensional plot. Figure \ref{fig:3dscatter} shows how expected representation varies with both clustering and vote share. It is clear from the plot that both increased clustering and increased vote share increase expected representation.
\input{figures/3dscatter.tex}
\subsection{Evaluative function}
\label{subsection:eval}
We tackle the first problem using a known algorithm. Our score for each voter distribution is $\mathbb{E}(\mathsf{Rep}_\Delta)$, the expected representation over all possible districting plans for that distribution. A higher score indicates a voter distribution that is robust against gerrymandering, in the sense that it is difficult and improbable to create a districting plan that grossly disadvantages the party in question. This measure also aligns with the `mean test' that the Supreme Court proposes. However, it is hard to compute this measure through an enumeration of all districting plans: even for an $n \times n$ grid of squares, exhaustively enumerating, or even counting, all possible partitions into $n$ equal-sized districts is computationally infeasible above $n=9$, as demonstrated in Table \ref{tab:polyomino-tilings}. Duchin gives and demonstrates a Metropolis-Hastings algorithm (also known as Markov chain Monte Carlo) to stratify and uniformly sample over all districting plans \cite{Duchin2018GerrymanderingBaseline}. We denote this stochastic algorithm by the evaluative function $\mathsf{Eval}(\Delta)$, which approximates $\mathbb{E}(\mathsf{Rep}_\Delta)$. However optimized this algorithm may be, it is still the time-limiting step in our proposed algorithms.
Hence, we turn to the second problem: finding a voter distribution yielding maximal representation (or one that is good enough) within the large, discrete space of all voter distributions. A crude method would be to generate random voter distributions, record the representation that each yields, and return the voter distribution that yielded the highest representation after a set number of tries. This is the \emph{Random algorithm}, and we use it as a benchmark for evaluating our proposed algorithms. One strategy we use to improve on it is a quasi-greedy algorithm that mutates a voter distribution at every step instead of generating a completely new and random one.
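A minimal sketch of the Random-algorithm benchmark (our illustration; the toy scorer below is a stand-in for the MCMC-based $\mathsf{Eval}(\Delta)$, which we do not reproduce here):

```python
import random

def random_search(eval_fn, n_blocks=25, num_dots=10, tries=200, seed=0):
    # sample voter distributions with a fixed Num and keep the best-scoring one
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(tries):
        delta = [1] * num_dots + [0] * (n_blocks - num_dots)
        rng.shuffle(delta)
        score = eval_fn(delta)
        if score > best_score:
            best, best_score = delta, score
    return best, best_score

def toy_eval(delta, w=5):
    # stand-in scorer: count horizontally adjacent Dot pairs (a crude clustering proxy)
    return sum(1 for i in range(len(delta) - 1)
               if i % w != w - 1 and delta[i] == delta[i + 1] == 1)

best, score = random_search(toy_eval)
assert sum(best) == 10 and score >= 0  # Num is fixed by construction
```

Every call to the real $\mathsf{Eval}$ runs a Markov chain, so the number of tries, not the bookkeeping above, dominates the running time.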
\subsection{Cellular automata}
\emph{Cellular automata} are discrete models in which the state of each cell affects its neighboring cells. We apply a cellular-automaton algorithm to our $5 \times 5$ grid to simulate clustering.
First, we define a condition for each block that determines if it is \emph{happy} or \emph{unhappy}.
\begin{Def}
A block $b\in V$ is \emph{happy} when the proportion of its like neighbors among its total number of neighbors (i.e., those it shares an edge with) is at least a set threshold $\theta$. Likewise, the block $b$ is \emph{unhappy} when this proportion is less than $\theta$.
\end{Def}
The algorithm proposes an evolution of our voter distribution $\Delta$ by taking the set of all unhappy blocks (which may include blocks of both parties) and shuffling them, in essence permuting the individual blocks. Notably, this preserves $\mathsf{Num}$ while tending to increase the clustering score: the blocks being perturbed are exactly those contributing to a lower clustering score (they have few like neighbors), so the shuffle tends to decrease the number of `unlike' connections. Its implementation is given in Algorithm \ref{algorithm:cellular-automata}.
\begin{algorithm}[H]
\caption{Cellular automata evolution}
\label{algorithm:cellular-automata}
\begin{algorithmic}[1]
\Require
\Statex Dual graph $G = (V, E)$, discrete voter distribution function $\Delta: V\to \{0, 1\}$, and threshold $\theta\in [0,1]$.
\Procedure{Evolve}{$\Delta$} \Comment{Initial voter distribution $\Delta$}
\State $S \gets \{\}$ \Comment{Set of blocks $b$ with proportion $<\theta$ of similar neighbors}
\State $\{\Delta\}_{\mathrm{swap}} \gets \{\}$ \Comment{$\{\Delta\}_{\mathrm{swap}} = \{\Delta(b)\ |\ b\in S\}$}
\For{$b$ in $V$}
\State $conn_{total} \gets 0$
\State $conn_{\mathrm{same}} \gets 0$
\For{$b_n$ in $\{b_n\ |\ (b, b_n)\in E\}$} \Comment{For all blocks adjacent to $b$}
\IfThen{$\Delta(b) = \Delta(b_n)$}{$conn_{\mathrm{same}} \gets conn_{\mathrm{same}} + 1$} \Comment{\# of similar connections}
\State $conn_{\mathrm{total}} \gets conn_{\mathrm{total}} + 1$ \Comment{\# of total connections}
\EndFor \Comment{After enumeration of $b_n$}
\If{$conn_{\mathrm{same}}/conn_{\mathrm{total}} < \theta$}
\State $S \gets S \cup \{b\}$
\State \textbf{append} $\Delta(b) \to \{\Delta\}_{\mathrm{swap}}$, preserving order
\EndIf
\EndFor \Comment{After enumeration of $b$ in $V$}
\State \textbf{shuffle} $\{\Delta\}_{\mathrm{swap}}$ \Comment{Permutes indices of $\Delta_i\in\{\Delta\}_{\mathrm{swap}}$}
\State \textbf{define} $\Delta'(b) = \begin{cases}
\Delta_i\text{ in }\{\Delta\}_{\mathrm{swap}} & \textbf{for } b = b_i\in S \\
\Delta(b) & \textbf{otherwise } (\text{i.e. }b\not\in S)
\end{cases}$
\State \Return $\Delta'$ \Comment{Returns mutated voter distribution function $\Delta'$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
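A runnable Python sketch of \textsc{Evolve} follows; the function and variable names are ours rather than the paper's, and it assumes every block has at least one neighbor, as on a grid dual graph:

```python
import random

def evolve(delta, edges, theta, rng=random):
    """One cellular-automata evolution step, mirroring Algorithm 1.

    delta -- dict mapping each block to its party label (0 or 1)
    edges -- iterable of undirected edges (u, v) of the dual graph
    theta -- happiness threshold in [0, 1]
    """
    # Build adjacency lists from the edge set.
    adj = {b: [] for b in delta}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # A block is unhappy when its proportion of like neighbors is < theta.
    unhappy = [b for b in delta
               if sum(delta[n] == delta[b] for n in adj[b]) / len(adj[b]) < theta]
    # Permute the unhappy blocks' labels among themselves; this preserves Num.
    labels = [delta[b] for b in unhappy]
    rng.shuffle(labels)
    new_delta = dict(delta)
    for b, lab in zip(unhappy, labels):
        new_delta[b] = lab
    return new_delta
```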
The progression below gives a visual example of what the cellular automata algorithm does in each iteration.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=.43]
\begin{scope}
\foreach \x in {0,...,9} {\draw [gray!85] (\x,0)--(\x,9);}
\foreach \y in {0,...,9} {\draw [gray!85] (0,\y)--(9,\y);}
\foreach \p/\q in {9/2, 9/9, 8/6, 8/7, 7/1, 7/3, 7/4, 7/5, 7/6, 6/2, 6/3, 6/5, 6/6, 6/9, 4/1, 4/5, 4/6, 4/7, 4/8, 3/8, 3/9, 2/1, 2/2, 2/3, 2/4, 2/5, 2/7, 2/9, 1/1, 1/7} {\node at (\p-.5,\q-.5) {\color{red}$\bullet$};}
\draw [line width=1.0] (0,0)--(9,0)--(9,9)--(0,9)--cycle;
\end{scope}
\begin{scope}[xshift = 10cm]
\foreach \x in {0,...,9} {\draw [gray!85] (\x,0)--(\x,9);}
\foreach \y in {0,...,9} {\draw [gray!85] (0,\y)--(9,\y);}
\foreach \r/\c in {9/2, 9/9, 8/7, 7/1, 7/2, 6/1, 6/2, 6/4, 6/9, 4/1, 4/5, 4/9, 3/1, 3/7, 2/5, 2/7, 2/8, 2/9, 1/2, 1/7} {\draw [fill=orange!20] (\r,\c)--(\r -1,\c)--(\r -1,\c -1)--(\r,\c -1)--cycle;}
\foreach \p/\q in {9/2, 9/9, 8/6, 8/7, 7/1, 7/3, 7/4, 7/5, 7/6, 6/2, 6/3, 6/5, 6/6, 6/9, 4/1, 4/5, 4/6, 4/7, 4/8, 3/8, 3/9, 2/1, 2/2, 2/3, 2/4, 2/5, 2/7, 2/9, 1/1, 1/7} {\node at (\p-.5,\q-.5) {\color{red}$\bullet$};}
\draw [line width=1.0] (0,0)--(9,0)--(9,9)--(0,9)--cycle;
\end{scope}
\begin{scope}[xshift = 20cm]
\foreach \x in {0,...,9} {\draw [gray!85] (\x,0)--(\x,9);}
\foreach \y in {0,...,9} {\draw [gray!85] (0,\y)--(9,\y);}
\foreach \r/\c in {9/2, 9/9, 8/7, 7/1, 7/2, 6/1, 6/2, 6/4, 6/9, 4/1, 4/5, 4/9, 3/1, 3/7, 2/5, 2/7, 2/8, 2/9, 1/2, 1/7} {\draw [fill=orange!20] (\r,\c)--(\r -1,\c)--(\r -1,\c -1)--(\r,\c -1)--cycle;}
\foreach \p/\q in {9/2, 9/9, 8/6, 7/1, 7/3, 7/4, 7/5, 7/6, 6/1, 6/2, 6/3, 6/4, 6/5, 6/6, 6/9, 4/1, 4/6, 4/7, 4/8, 3/1, 3/8, 3/9, 2/1, 2/2, 2/3, 2/4, 2/8, 2/9, 1/1, 1/7} {\node at (\p-.5,\q-.5) {\color{red}$\bullet$};}
\draw [line width=1.0] (0,0)--(9,0)--(9,9)--(0,9)--cycle;
\end{scope}
\begin{scope}[xshift=30cm]
\foreach \x in {0,...,9} {\draw [gray!85] (\x,0)--(\x,9);}
\foreach \y in {0,...,9} {\draw [gray!85] (0,\y)--(9,\y);}
\foreach \r/\c in {9/2, 9/9, 8/6, 7/1, 7/2, 6/9, 5/1, 4/1, 4/6, 4/9, 2/4, 1/2, 1/7, 1/8} {\draw [fill=orange!20] (\r,\c)--(\r -1,\c)--(\r -1,\c -1)--(\r,\c -1)--cycle;}
\foreach \p/\q in {9/2, 9/9, 8/6, 7/1, 7/3, 7/4, 7/5, 7/6, 6/1, 6/2, 6/3, 6/4, 6/5, 6/6, 6/9, 4/1, 4/6, 4/7, 4/8, 3/1, 3/8, 3/9, 2/1, 2/2, 2/3, 2/4, 2/8, 2/9, 1/1, 1/7} {\node at (\p-.5,\q-.5) {\color{red}\small$\bullet$};}
\draw [line width=1.0] (0,0)--(9,0)--(9,9)--(0,9)--cycle;
\end{scope}
\end{tikzpicture}
\caption{A $9\times9$ square grid with $\textsf{Num} = 30$ demonstrating the respective steps of the cellular automata evolution, with $\theta= 0.4$.}
\label{fig:cellular-automata}
\end{figure}
The first grid on the left of Figure \ref{fig:cellular-automata} is our initial voter distribution $\Delta$. We can identify all the unhappy tiles on our grid by enumerating the tiles for which the proportion of like tiles connected to them is less than $0.4$. In our case, this means a tile with fewer than $2$ similar edges out of $4$ or $3$ total edges, or $0$ similar edges if it has $2$ total outgoing edges. Unhappy tiles are highlighted on the second grid. The algorithm then shuffles all the unhappy tiles around, leaving the happy tiles as they are. This produces the third grid, which still highlights the unhappy tile locations after the shuffle; blocks that are not highlighted have not been changed. The last grid shows what re-evaluating the unhappy tiles in the new grid looks like: the number of unhappy tiles has decreased.
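The per-degree thresholds above follow directly from the definition: for a tile of degree $d$, the largest number of like edges that still leaves it unhappy is $\lceil \theta d\rceil - 1$. A one-line check (the helper name is ours, for illustration only):

```python
import math

def max_unhappy_same(degree, theta=0.4):
    """Largest count of like edges for which same/degree < theta still holds."""
    return math.ceil(theta * degree) - 1
```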
Relying on our conclusion in Conjecture \ref{conj:clustering-relation} and the fact that cellular automata generally gives a more clustered voter distribution with every evolution, we can say that the cellular automata algorithm also improves expected representation $\mathbb{E}(\mathsf{Rep}_\Delta)$ for the minority. Hence, we can use the cellular automata algorithm to approximate a quasi-greedy algorithm, which attempts to find similar voter distributions with better expected representation every time. This becomes useful for Algorithms \ref{algorithm:rrils} and \ref{algorithm:sa} detailed below, both of which rely on the fact that we can easily find or generate an optimal `neighbor' given some voter distribution $\Delta$. This is non-trivial, given that $\{\Delta\}$ is discrete and so large.
\newpage
\subsection{Random-restart iterated local search algorithm (RRILS)}
\label{subsection:rrils}
Using the cellular automata algorithm as a quasi-greedy algorithm allows us to apply it repeatedly to generate increasingly better voter distributions (hence the `iterated local search'). Of course, this procedure is susceptible to terminating at a local maximum when we are searching for the global maximum instead. By randomly restarting the algorithm any time it reaches a local maximum, we obtain a relatively straightforward method for finding a stronger maximum.
In words, the algorithm executes as follows: first, we set the number of trials as $k_\mathrm{max}$. The algorithm then generates a random voter distribution $\Delta$ and approximates its $\mathbb{E}(\mathsf{Rep})$ using the evaluative function $\mathsf{Eval}$ detailed in Subsection \ref{subsection:eval}. Using the quasi-greedy cellular automata evolution, which shuffles unhappy blocks, RRILS then proposes a new candidate distribution. If no new evolution can be proposed, another random initial voter distribution is generated. This process repeats, and after iterating $k_\mathrm{max}$ times, the algorithm returns the $\Delta$ with the maximum $\mathbb{E}(\mathsf{Rep}_\Delta)$ that it has found.
\begin{algorithm}[H]
\caption{Random-restart iterated local search}
\label{algorithm:rrils}
\begin{algorithmic}[1]
\Require
\Statex Evaluative function $\mathsf{Eval}: \{\Delta\} \to \mathbb{R}$ which approximates $\mathbb{E}(\mathsf{Rep}_\Delta)$.
\Procedure{RandomRestart}{$k_{\mathrm{max}}$}
\State $k \gets 0$
\While{$k < k_{\mathrm{max}}$}
\State $\Delta \gets \mathrm{rand}(\Delta)$ \Comment{Generates a random initial voter distribution}
\State $\Delta_{\mathrm{new}} \gets$ null \Comment{Always accepts initial $\Delta$}
\While{$\Delta \neq \Delta_{\mathrm{new}}$}
\State $\mathrm{Trials}[\Delta]\gets \mathsf{Eval}(\Delta)$ \Comment{Appends $\mathsf{Eval}(\Delta)$ to dictionary \underline{Trials} with key $\Delta$}
\State $\Delta_{\mathrm{new}} \gets \Delta$ \Comment{Saves the current distribution}
\State $\Delta \gets \textsc{Evolve}(\Delta)$ \Comment{Proposes an evolution of $\Delta$}
\State $k \gets k + 1$
\EndWhile
\EndWhile
\State $\Delta_{\mathrm{max}} \gets \kappa$ such that $\mathrm{Trials}[\kappa]$ maximal
\State \Return $\Delta_{\mathrm{max}}$ \Comment{Returns $\Delta$ that gives maximal $\mathsf{Eval}(\Delta)$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
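A compact Python rendering of Algorithm \ref{algorithm:rrils}; the callables `random_delta`, `evolve`, and `evaluate` are passed in, distributions are assumed hashable (e.g. tuples), and $k_\mathrm{max} \geq 1$ is assumed:

```python
def random_restart(k_max, random_delta, evolve, evaluate):
    """Random-restart iterated local search (a sketch of Algorithm 2).

    random_delta -- callable returning a fresh random voter distribution
    evolve       -- callable proposing a quasi-greedy evolution
    evaluate     -- callable approximating E(Rep) of a distribution
    """
    trials = {}          # maps distribution -> approximate E(Rep)
    k = 0
    while k < k_max:
        delta = random_delta()        # random restart
        delta_prev = None             # forces at least one inner iteration
        while delta != delta_prev and k < k_max:
            trials[delta] = evaluate(delta)
            delta_prev, delta = delta, evolve(delta)  # propose an evolution
            k += 1
    # Return the distribution with maximal evaluated score.
    return max(trials, key=trials.get)
```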
\subsection{Simulated annealing}
\label{subsection:sa}
\emph{Simulated annealing} is another metaheuristic designed for optimization in cases where the search space is discrete, such as the space of all voter distributions $\{\Delta\}$.
The algorithm tries to balance exploitation (descending a gradient to reach a local extremum, as an iterated local search does) with exploration (sampling the search space randomly for optimal voter distributions). The algorithm begins with an initial temperature (a parameter) $T_0$. This is the maximum temperature it will ever attain and represents a tendency to explore rather than exploit, meaning the algorithm readily accepts worse proposed voter distributions. With every iteration, the temperature $T$ becomes $T\cdot \alpha$ according to a cooling schedule $\alpha$, which lowers the temperature. As the temperature decreases, so does the likelihood of accepting a worse state (defined by a function $\mathsf{Prob}$).
This first version of simulated annealing is modified to take into account a known greedy algorithm. As in RRILS, we evaluate each $\Delta$ and propose an evolution of it. The algorithm then accepts the proposed state with probability $\mathsf{Prob}(\delta_\mathrm{score},T) = e^{\delta_\mathrm{score}/T}$. If it accepts the proposed state, it decreases the temperature; otherwise, it accepts a random state with probability $\theta_r$, a predefined constant. This process repeats, and after iterating $k_\mathrm{max}$ times, the algorithm returns the $\Delta$ that maximizes $\mathbb{E}(\mathsf{Rep}_\Delta)$.
\begin{algorithm}[H]
\caption{Simulated annealing}
\label{algorithm:sa}
\begin{algorithmic}[1]
\Require
\Statex Evaluative function $\mathsf{Eval}: \{\Delta\} \to \mathbb{R}$ which approximates $\mathbb{E}(\mathsf{Rep}_\Delta)$, acceptance probability function $\mathsf{Prob}: \mathbb{R} \times \mathbb{R} \to [0, 1]$, and the following hyper-parameters: initial temperature $T_0$, cooling schedule $\alpha \in [0, 1]$, threshold $\theta$ (of \textsc{Evolve}), and constant probability $\theta_r$ of accepting a random state.
\Procedure{SimulatedAnneal}{$k_{\mathrm{max}}$}
\State $k \gets 0$
\State $T \gets T_0$ \Comment{Begins at initial temperature $T_0$}
\State $\Delta \gets \mathrm{rand}(\Delta)$
\State $score \gets \mathsf{Eval}(\Delta)$
\While{$k < k_{\mathrm{max}}$}
\State $\mathrm{Trials}[\Delta]\gets score$ \Comment{Appends $\mathsf{Eval}(\Delta)$ to dictionary \underline{Trials} with key $\Delta$}
\State $\Delta_{\mathrm{new}} \gets \textsc{Evolve}(\Delta)$ \Comment{Proposes an evolution of $\Delta$}
\State $score_\mathrm{new} \gets \mathsf{Eval}(\Delta_\mathrm{new})$
\State $\delta_\mathrm{score} \gets score_\mathrm{new} - score$
\If{$\mathsf{Prob}(\delta_\mathrm{score}, T) > \mathrm{rand}(0, 1)$} \Comment{Accepts proposed state with $\mathsf{Prob}(\delta_\mathrm{score}, T)$}
\State $\Delta \gets \Delta_\mathrm{new}$
\State $score \gets score_\mathrm{new}$
\State $T \gets T\cdot \alpha$ \Comment{Decreases temperature}
\ElsIf{$\theta_r > \mathrm{rand}(0, 1)$} \Comment{Otherwise accepts random state with probability $\theta_r$}
\State $\Delta \gets \mathrm{rand}(\Delta)$
\State $score \gets \mathsf{Eval}(\Delta)$
\EndIf
\State $k \gets k + 1$
\EndWhile
\State $\Delta_{\mathrm{max}} \gets \kappa$ such that $\mathrm{Trials}[\kappa]$ maximal
\State \Return $\Delta_{\mathrm{max}}$ \Comment{Returns $\Delta$ that gives maximal $\mathsf{Eval}(\Delta)$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
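A Python sketch of Algorithm \ref{algorithm:sa}, with the same illustrative conventions as before; the guard on $\delta_\mathrm{score} \geq 0$ merely short-circuits the case $\mathsf{Prob} = e^{\delta_\mathrm{score}/T} > 1$ and avoids floating-point overflow:

```python
import math
import random

def simulated_anneal(k_max, random_delta, evolve, evaluate,
                     T0=1.0, alpha=0.95, theta_r=0.05, rng=random):
    """Simulated annealing with a greedy proposal step (a sketch of Algorithm 3)."""
    trials = {}
    T = T0
    delta = random_delta()
    score = evaluate(delta)
    for _ in range(k_max):
        trials[delta] = score
        delta_new = evolve(delta)           # quasi-greedy proposal
        score_new = evaluate(delta_new)
        d_score = score_new - score
        # Prob(d, T) = exp(d/T): improvements (d >= 0) are always accepted,
        # worse states with probability exp(d/T) < 1.
        if d_score >= 0 or math.exp(d_score / T) > rng.random():
            delta, score = delta_new, score_new
            T *= alpha                      # cool down after accepting
        elif theta_r > rng.random():
            delta = random_delta()          # jump to a random state
            score = evaluate(delta)
    return max(trials, key=trials.get)
```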
\subsection{Random Simulated Annealing}
\label{subsection:rsa}
\emph{Random Simulated Annealing} works largely the same way as the aforementioned simulated annealing in Algorithm \ref{algorithm:sa}. However, the steps that this algorithm takes are no longer the greedy steps prescribed by the cellular automata evolution algorithm. Where Algorithm \ref{algorithm:sa} would call {\sc Evolve} on line 8, Random Simulated Annealing calls {\sc Step}. {\sc Step} takes a parameter $n$, the number of blocks to shuffle, randomly selects $n$ blocks, and shuffles their political preferences in place on the dual graph. This is very similar to what our cellular automata algorithm {\sc Evolve} does, but instead of selecting a set of blocks that meets a certain criterion (in {\sc Evolve} we selected `unhappy' blocks), {\sc Step} simply selects $n$ random blocks to swap. From our tests, we have found it best to set the random threshold $\theta_r$ to $0$, meaning the algorithm never jumps to a totally random state. This is because the steps the algorithm makes are already random, so it is not susceptible to being trapped at a local extremum.
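A sketch of the {\sc Step} proposal (names ours, for illustration):

```python
import random

def step(delta, n, rng=random):
    """Randomly shuffle the labels of n blocks (a sketch of Step).

    Unlike Evolve, the n blocks are chosen uniformly at random rather
    than by the happiness criterion; Num is still preserved, since
    labels are only permuted among the selected blocks.
    """
    blocks = rng.sample(list(delta), n)
    labels = [delta[b] for b in blocks]
    rng.shuffle(labels)
    new_delta = dict(delta)
    for b, lab in zip(blocks, labels):
        new_delta[b] = lab
    return new_delta
```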
One benefit of the random variant of simulated annealing is that it does not rely on the Clustering Conjecture, and hence, we can use this algorithm for both the minority and majority parties.
\subsection{Benchmark Algorithm}
\label{subsection:benchmark}
We use a naive benchmark algorithm to test the performance of the proposed algorithms. The benchmark algorithm simply generates new voter distributions of a set $\mathsf{Num}$ and evaluates them at every step. It returns the voter distribution that has yielded the largest expected representation for a given $\mathsf{Num}$.
\subsection{Results}
Using the three algorithms detailed in Subsections \ref{subsection:rrils}, \ref{subsection:sa}, and \ref{subsection:rsa}, we can evaluate the computational improvements that they provide over our benchmark algorithm (the next best naive alternative), the random sampling in Subsection \ref{subsection:benchmark}.
In these trials, we use the dual graph of the $5\times 5$ square grid as described in Section \ref{sec:grid}. Since we have exhaustively analyzed this case, we know the maximum outcomes for each $\mathsf{Num}$, which gives us yet another benchmark to compare the results to. We run simulations by stochastically sampling the performance of each algorithm a large number of times (here, 10,000 trials) while capping $k_\mathrm{max}$ at a certain value. For each $k_\mathrm{max}$ value, we can determine the average outcome (the optimized maximum expected representation) of each algorithm. Since $k_\mathrm{max}$ is the number of iterations our program makes (a proxy for its computation time), each algorithm produces results closer to the global maximum as $k_\mathrm{max}$ increases.
Plotting out each of these results gives Figure \ref{fig:algorithm-comparison-6}, where the rates of each algorithm are visually displayed for $\mathsf{Num}=6$ and up to a $k_\mathrm{max}$ of 1000. For the case of $\mathsf{Num}=6$, the random-restart iterated local search algorithm performed similarly to simulated annealing, both of which performed better than the random variation of simulated annealing. Notably, all three algorithms asymptotically approached the absolute maximum significantly faster than random selection.
However, when we change $\mathsf{Num}$ to equal $10$, comparing the algorithms gives us different results. Figure \ref{fig:algorithm-comparison-10} is a variation on Figure \ref{fig:algorithm-comparison-6} with $\mathsf{Num}=10$ instead of $6$. In this case, the random variation of simulated annealing performs better than both random-restart iterated local search and the original greedy variant of simulated annealing.
Unfortunately, we can only conclude that the relative performance of our algorithms differs on a case-by-case basis and that additional evaluation is required to determine the best algorithm for a given optimization.
\input{figures/algorithm-comparison.tex}
% arXiv:1906.04970
\title{Unified Viscous-to-inertial Scaling in Liquid Droplet Coalescence}
\begin{abstract}
This letter presents a theory on the coalescence of two spherical liquid droplets that are initially stationary. The evolution of the radius of a liquid neck formed upon coalescence was formulated as an initial value problem and then solved to yield an exact solution without free parameters, with its two asymptotic approximations reproducing the well-known scaling relations in the viscous and inertial regimes. The viscous-to-inertial crossover observed by Paulsen \textit{et al.} [Phys.\ Rev.\ Lett.\ \textbf{106}, 114501 (2011)] is also recovered by the theory, rendering the collapse of data of different viscosities onto a single curve.
\end{abstract}
Droplet coalescence \cite{EggersJ:99a,AartsDGAL:05a,Thoroddsen:05a,PaulsenJD:11a,Xia:17a} is a ubiquitous phenomenon involving impact or contact of dispersed two-phase flows \cite{YarinAL:06a,ZhangP:11a,ThoravalMJ:12a,TranT:13a,KavehpourHP:15a,ZhangP:16a,Xia:19a}. Among the various relevant problems, the initial coalescence of two liquid droplets has been of core interest. The first quantitative analysis of sphere coalescence was provided by Frenkel \cite{FrenkelJJ:45a} based on the assumption of internal Stokes flow; however, the result was described as ``misleading'' by Hopper \cite{HopperRW:93a}, who gave an analytical solution for the coalescence of two cylindrical droplets of radius $R_0$ for viscous sintering. His studies \cite{HopperRW:90a,HopperRW:92a} show that the time evolution of the radius $R$ of the neck (or bridge) between the droplets approximately satisfies $t \sim -R/\ln{R^{\ast}}$ where $R^{\ast}=R/R_0$. Later, Eggers \textit{et al.} \cite{EggersJ:99a} considered the three-dimensional coalescence and attained $R^{\ast} \sim -t^{\ast} \ln{t^{\ast}}$ for $R^{\ast} < 0.03$, where $t^{\ast} = t/\tau_v$ ($\tau_v=\mu R_0/\sigma$ with $\mu$ and $\sigma$ being the dynamic viscosity of the liquid and the surface tension coefficient, respectively). For larger $R^{\ast}$, they \cite{EggersJ:99a,DucheminL:03a} argued that the neck flow goes beyond the Stokes regime to the inertial (or inviscid) regime, and further arrived at the $1/2$ power-law scaling, $R^{\ast} \sim (t/\tau_i)^{1/2}$ with the time scale being $\tau_i=(\rho R_0^3/\sigma)^{1/2}$, where $\rho$ is the liquid density.
Recent advances in high-speed digital imaging \cite{AartsDGAL:05a,Thoroddsen:05a,YaoW:05a}, state-of-the-art probing techniques \cite{CaseSC:08a,FezzaaK:08a,PaulsenJD:11a}, and numerical simulation \cite{SprittlesJE:12a,SprittlesJE:14a} enabled researchers to scrutinize the early stages of drop coalescence when $R^{\ast} \ll 1$. As a result, the $1/2$ power-law scaling was confirmed by many experimental \cite{WuM:04a,AartsDGAL:05a,Thoroddsen:05a,BurtonJC:07a,FezzaaK:08a,CaseSC:09a} and numerical \cite{DucheminL:03a,EiswirthRT:12a,PothierJC:12a,GrossM:13a,SprittlesJE:12a} studies. The same scaling was also observed for droplet coalescence on a substrate \cite{StoneHA:06a,EddiA:12a,EddiA:13a}. However, the experiments of Aarts \textit{et al.} \cite{AartsDGAL:05a} and Thoroddsen \textit{et al.} \cite{Thoroddsen:05a} indicate that the viscous regime is well predicted by the linear scaling $R^{\ast} \sim t^{\ast}$, noting that most of their data were in the $R^{\ast} > 0.03$ range. This linear correlation was also corroborated by other studies \cite{YaoW:05a,BurtonJC:07a,PaulsenJD:11a}.
More recently, research interests have been directed towards the crossover (or transition) between the viscous and inertial regimes. The first direct evidence of the crossover from $R^{\ast} \sim t^{\ast}$ to $R^{\ast} \sim (t^{\ast})^{1/2}$ was reported by Burton and Taborek \cite{BurtonJC:07a}. By equating the characteristic velocities from the two scaling laws, they derived the crossover length, $l_c \sim \mu(R_0/\rho \sigma)^{1/2}$, which was later confirmed by Paulsen \textit{et al.} \cite{PaulsenJD:11a,PaulsenJD:13a}, who further obtained the crossover time, $\tau_c \sim \mu^2(R_0/\rho \sigma^3)^{1/2}$. With these time and length scales, Paulsen \textit{et al.} applied a fitting curve, $(R/l_c)^{-1} \sim (t/\tau_c)^{-1}+(t/\tau_c)^{-1/2}$, to collapse the neck evolutions of distinct viscosities, which points to a universality in droplet coalescence. To theoretically explore this universality, we derived a scaling model for the viscous-to-inertial combined coalescence regime \cite{Xia:18a}, with two scaling constants determined by fitting experimental data.
In this letter, we present a theory that puts the previous scaling model \cite{Xia:18a} on a rigorous footing and contains no empirical constants.
A schematic of the neck between two merging droplets of initial radius $R_0$ is shown in Fig.~\ref{fig:1}. The neck radius, $R$, is defined as the minimum radial distance from the $z$-axis to the neck. Under capillary pressure difference, the neck expands out at a speed of $U(t)$. We assume the flow to be\\
(i) \emph{Quasi-steady}, meaning the flow acceleration is mainly associated with the convection induced by the neck movement.\\
(ii) \emph{Quasi-radial}, meaning the neck region can be treated as a ring of radius $R$ and width $2r_R$, which is driven by a distributed and quasi-radially directed capillary force \cite{EggersJ:99a}. The capillary force is related to two principal curvatures, $1/R$ and $1/r_R$ \cite{GrossM:13a}, with the latter being the effective curvature in the $zr$-plane.\\
(iii) \emph{Localized}, meaning the significant velocity gradients are restricted to the vicinity of the neck as illustrated in Fig.~\ref{fig:1}. This assumption accords with the finding of Paulsen \textit{et al.} \cite{PaulsenJD:11a} that the flow extends over a length comparable to the neck width rather than the neck radius. It follows that the main vortical structure has a length of $O(r_R)$, and the origin (point 1) is considered as the far field where the velocity gradients are effectively zero.\\
(iv) \emph{Geometrically self-similar}, so that the neck width satisfies the simple geometric relation, $r_R/R = \tan{(\theta/2)}$. Under the coalescence regime of $R \ll R_0$, we have $\tan{(\theta/2)} \approx \theta/2 \approx R/(2R_0)$ and, consequently,
\begin{equation}
\label{eq:4}
\frac{r_R}{R} \approx \frac{R}{2R_0} \ll 1,
\end{equation}
which is consistent with other studies \cite{PaulsenJD:11a,GrossM:13a}, but different from Eggers \textit{et al.} \cite{EggersJ:99a} who assumed $r_R/R$ to be higher-order small. This could be responsible for their logarithmic scaling law, which is believed to occur at a very early stage of droplet coalescence \cite{Xia:18a}, beyond the resolution of existing experiments.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{Fig2-JFM2019.png}
\caption{A zoomed-in schematic of the neck region between two merging droplets. The red and blue contours illustrate the vorticity distribution localized around the neck.}
\label{fig:1}
\end{center}
\end{figure}
For the axisymmetric and quasi-steady flow, the $r$-direction Navier--Stokes equation is expressed as
\begin{equation}
\label{eq:1}
\rho (u_z \partial_z u_r + u_r \partial_r u_r) = -\partial_r p + \mu \left[\partial_z^2 u_r + \partial_r^2 u_r + \partial_r (\frac{u_r}{r}) \right],
\end{equation}
where $u_z$ and $u_r$ are the velocity components in the $z$- and $r$-directions, respectively, and $p$ is the pressure. Along the $r$-axis, $u_z$ and $\partial_z u_r$ both vanish owing to the condition of symmetry, so the term $u_z \partial_z u_r$ vanishes in Eq.~\ref{eq:1}. We now integrate Eq.~\ref{eq:1} along the $r$-axis from point 1 ($r=0,z=0$) to point 2 ($r=R,z=0$) as
\begin{equation}
\begin{split}
\label{eq:2}
&\int_{1\rightarrow2} \left[ \frac{1}{2}\rho\partial_r u^2_r + \partial_r p - \mu \left(\partial_z^2 u_r + \partial_r^2 u_r + \partial_r (\frac{u_r}{r})\right) \right] \mathrm{d}r\\
& = \frac{1}{2}\rho U^2 + p_2-p_1 - \mu \left(\int_{0}^{R} \partial_z^2 u_r \mathrm{d}r + (\partial_r u_r)|_2 + \frac{U}{R}\right) = 0,
\end{split}
\end{equation}
where the subscripts $_1$ and $_2$ denote the quantities associated with point 1 and 2, respectively. In attaining Eq.~\ref{eq:2}, we have also applied $(u_r)|_1 = 0$ according to the axisymmetric condition, $(\partial_r u_r)|_1 = 0$ following assumption (iii), and $(u_r)|_2 = U(t)$. As the present theory concerns the coalescence of liquid droplets in a gaseous environment, the liquid-gas interface can be treated as a free surface, across which the capillary pressure jump is $p_\infty - p = -2\mu \boldsymbol{n}\cdot \boldsymbol{S} \cdot \boldsymbol{n} + \sigma\kappa$ \citep{TryggvasonG:11a}, where $p_\infty$ is the ambient gas pressure, $\boldsymbol{n}$ and $\kappa$ are the unit normal vector and curvature of the interface, respectively, and $\boldsymbol{S}$ is the rate-of-strain tensor. Accordingly, the pressures at the far-side droplet and the neck satisfy $p_\infty - p_1 = -2\sigma/R_0$ and $p_\infty - p_2 = -2\mu(\partial_r u_r)|_2+\sigma(1/r_R - 1/R)$, respectively. Here, $p_1$ serves as the pressure at the far-side droplet according to assumption (iii). Subtracting the two equations yields $p_2-p_1 = -\sigma(1/r_R - 1/R + 2/R_0) + 2\mu (\partial_r u_r)|_2$, which can be plugged into Eq.~\ref{eq:2} to obtain
\begin{equation}
\begin{split}
\label{eq:2-1}
&\frac{1}{2}\rho U^2 - \sigma \left(\frac{1}{r_R} - \frac{1}{R} + \frac{2}{R_0}\right)\\
&- \mu \left(\int_{0}^{R} \partial_z^2 u_r \mathrm{d}r + (\partial_z u_z)|_2 + \frac{2U}{R}\right) = 0.
\end{split}
\end{equation}
Note that the continuity equation, $\partial_z u_z + \partial_r u_r + u_r/r = 0$, has been used in the above derivation.
The quasi-radial assumption (ii) implies $u_z = 0$ around point 2 and further $(\partial_z u_z)|_2 = 0$ in Eq.~\ref{eq:2-1}. Furthermore, $\partial_z^2 u_r$ can be expressed as
\begin{equation}
\label{eq:2-3}
\partial_z^2 u_r \approx \frac{(\partial_z u_r)|_{z=r_R}-(\partial_z u_r)|_{z=0}}{r_R} = \frac{(\partial_r u_z + \omega)|_{z=r_R}}{r_R},
\end{equation}
with $(\partial_z u_r)|_{z=0}=0$ by axisymmetry and $\omega = \partial_z u_r - \partial_r u_z$ being the vorticity. Eq.~\ref{eq:2-3} essentially gives a leading-order approximation based on linearizing the strain rate near the plane of symmetry. Integrating Eq.~\ref{eq:2-3} from point 4 ($r=0,z=r_R$) to point 3 ($r=R,z=r_R$) yields
\begin{equation}
\label{eq:2-4}
\int_{0}^{R} \partial_z^2 u_r \mathrm{d}r \approx \frac{1}{r_R} \left( (u_z)|_{4}^{3} + \int_0^R \omega|_{z=r_R} \mathrm{d}r \right),
\end{equation}
with $(u_z)|_3 = 0$ by assumption (ii) and $(u_z)|_4 = 0$ by assumption (iii).
Lacking \emph{a priori} knowledge of the vorticity field, we seek an approximation of $\omega|_{z=r_R}$ based on the computational observation that in the $orz$ plane the vortex-dynamical effect of the neck movement induces two opposite-sign vortices that are centered around the two edges of the neck, as illustrated in Fig.~\ref{fig:1-1}. This physical picture is also consistent with assumption (iii). The radial vorticity decay displayed in Fig.~\ref{fig:1-1} further implies that the vortex is analogous to a Batchelor vortex \cite{Batchelor:64a} and has a Gaussian vorticity distribution as
\begin{equation}
\label{eq:2-5}
\omega_0(r') = \frac{U}{r_v}e^{-\left(\frac{r'}{r_v}\right)^2},
\end{equation}
where $r' = R-r$ is the radial location relative to the vortex center located at the neck interface, and $r_v$ is an effective radius of the vortex core. Eq.~\ref{eq:2-5} is also similar to the Lamb--Oseen vortex \cite{WuJZ:06a}, which is an analytical solution to the vorticity diffusion equation.
Recognizing that $\omega_0(r') = \omega(r) = \omega(R-r')$ for $z=r_R$ and $0 \leq r \leq R$, the right-hand side of Eq.~\ref{eq:2-4} can be evaluated as
\begin{equation}
\label{eq:2-6}
-\frac{U}{r_R} \int_0^{\frac{R}{r_v}} e^{-\left(\frac{r'}{r_v}\right)^2} \mathrm{d}\left(\frac{r'}{r_v}\right) = -\frac{\sqrt{\pi}U}{2r_R} \mathrm{erf}\left(\frac{R}{r_v}\right) \approx -\frac{\sqrt{\pi}U}{2r_R},
\end{equation}
with $R/r_v \gg 1$ given by assumption (iii) and Eq.~\ref{eq:4}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.63]{Fig3-JFM2019.png}
\caption{The simulated flow field for a representative case of $Oh = 4$. The numerical method is reported in the Supplementary Materials \cite{SM:19a}.}
\label{fig:1-1}
\end{center}
\end{figure}
We now plug in Eq.~\ref{eq:2-6} to cast Eq.~\ref{eq:2-1} in the form,
\begin{equation}
\label{eq:2-7}
\frac{1}{2}\rho U^2 - \sigma \left(\frac{1}{r_R} - \frac{1}{R} + \frac{2}{R_0}\right) - \mu \left(- \frac{\sqrt{\pi}U}{2r_R} + \frac{2U}{R}\right) = 0.
\end{equation}
Applying Eq.~\ref{eq:4} and balancing the leading-order terms of Eq.~\ref{eq:2-7} yields
\begin{equation}
\label{eq:3}
\rho U^2 - \frac{4\sigma R_0}{R^2} + \frac{2\sqrt{\pi} \mu R_0 U}{R^2} = 0,
\end{equation}
which can be combined with $\dot{R} = \mathrm{d}R/\mathrm{d}t = U$ to derive
\begin{equation}
\label{eq:6}
\frac{\rho \dot{R}^{\ast 2} L^2}{T^2} - \frac{2\sigma D_0}{R^{\ast 2}L^2} + \frac{\sqrt{\pi} \mu D_0 \dot{R}^{\ast}}{R^{\ast 2}LT} = 0,
\end{equation}
where $D_0=2R_0$, $R^{\ast} = R/L$, $\dot{R}^{\ast}=\dot{R}/U$, and $T=L/U$, with $L$, $U$, and $T$ being the characteristic length, velocity, and time scales, respectively.
The experimental studies of Paulsen \textit{et al.} \cite{PaulsenJD:11a,PaulsenJD:13a} imply the existence of a unified formula for the neck movement, provided the length and time are scaled properly. Letting Eq.~\ref{eq:6} be such a formula, we require
\begin{equation}
\label{eq:7}
\frac{\rho L^2}{T^2} = \frac{\sigma D_0}{L^2} = \frac{\mu D_0}{LT},
\end{equation}
The second equality gives $T = \mu L/\sigma$; substituting it into the first yields $L = \mu\sqrt{D_0/(\rho\sigma)} = Oh\, D_0$ and hence $T = \mu\, Oh\, D_0/\sigma$, where $Oh = \mu/\sqrt{\rho \sigma D_0}$ is the Ohnesorge number. Note that $L$ and $T$ match exactly the viscous-to-inertial crossover scales found by previous studies \cite{BurtonJC:07a,PaulsenJD:11a,PaulsenJD:13a}. Accordingly, Eq.~\ref{eq:6} takes the dimensionless form,
\begin{equation}
\label{eq:8}
\dot{R}^{\ast 2} - \frac{2}{R^{\ast 2}} + \frac{\sqrt{\pi} \dot{R}^{\ast}}{R^{\ast 2}} = 0.
\end{equation}
We can integrate Eq.~\ref{eq:8} with the initial condition $R^{\ast}(t^{\ast}=0) = 0$, where $t^{\ast} = t/T$, to obtain the exact solution,
\begin{equation}
\begin{split}
\label{eq:9}
t^{\ast} = \frac{\sqrt{\pi} R^{\ast}}{4} + \frac{\sqrt{\pi}}{8}\left[R^{\ast}\sqrt{\frac{8 R^{\ast 2}}{\pi} + 1} + \frac{\sqrt{\pi}}{2\sqrt{2}}\sinh^{-1}\left(\frac{2\sqrt{2} R^{\ast}}{\sqrt{\pi}}\right) \right].
\end{split}
\end{equation}
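As a numerical sanity check (ours, not part of the letter), the derivative $\mathrm{d}t^{\ast}/\mathrm{d}R^{\ast}$ of the closed form in Eq.~\ref{eq:9} should equal $1/\dot{R}^{\ast}$, where $\dot{R}^{\ast}$ is the positive root of the quadratic Eq.~\ref{eq:8}:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def t_star(R):
    """Closed-form t*(R*) of Eq. (9)."""
    return (SQRT_PI * R / 4
            + (SQRT_PI / 8) * (R * math.sqrt(8 * R * R / math.pi + 1)
                               + (SQRT_PI / (2 * math.sqrt(2)))
                               * math.asinh(2 * math.sqrt(2) * R / SQRT_PI)))

def R_dot(R):
    """Positive root of Eq. (8): R'^2 - 2/R^2 + sqrt(pi)*R'/R^2 = 0."""
    return (-SQRT_PI + math.sqrt(math.pi + 8 * R * R)) / (2 * R * R)

# dt*/dR* from the closed form must match 1/R_dot at every R*.
for R in (0.05, 0.5, 5.0):
    h = 1e-6
    deriv = (t_star(R + h) - t_star(R - h)) / (2 * h)
    assert abs(deriv - 1.0 / R_dot(R)) < 1e-6
```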
Eq.~\ref{eq:9} readily dictates the asymptotic behaviors associated with the viscous and inertial regimes. For the inertial regime, $R^{\ast} \gg \sqrt{2\pi}/4$, Eq.~\ref{eq:9} yields
\begin{equation}
\label{eq:10}
t^{\ast} \approx \frac{R^{\ast 2}}{2\sqrt{2}} + O(R^{\ast}).
\end{equation}
For the viscous regime, $R^{\ast} \ll \sqrt{2\pi}/4$, Eq.~\ref{eq:9} yields
\begin{equation}
\label{eq:11}
t^{\ast} \approx \frac{\sqrt{\pi}}{4}\left[\frac{3R^{\ast}}{2} + \frac{\sqrt{\pi}}{4\sqrt{2}}\ln\left(\frac{2\sqrt{2} R^{\ast}}{\sqrt{\pi}} + 1\right) \right] \approx \frac{\sqrt{\pi} R^{\ast}}{2} + O(R^{\ast 2}).
\end{equation}
Eq.~\ref{eq:11} can also be reduced to the form $R \sim t\sigma/\mu$, which is devoid of any characteristic length. This can be interpreted as the viscous regime exhibiting intermediate self-similarity \cite{BarenblattGI:96a}.
To evaluate our theory, we first write Eq.~\ref{eq:10} in the dimensional form $R/R_0 \approx c_1(t/\tau_i)^{1/2}$ with $c_1 = 2$, which recovers the $1/2$ power-law scaling of the inertial regime. Similarly, Eq.~\ref{eq:11} can be expressed in the dimensional form $R/R_0 \approx c_2 t/\tau_v$ with $c_2=2/\sqrt{\pi}$, which gives the linear scaling observed in experiments on high-viscosity droplets \cite{AartsDGAL:05a}. As a reference, the fitted coefficients $c_1 = 1.68$ and $c_2 = 1$ were obtained by Paulsen \cite{PaulsenJD:13a}, although different values have been reported by others \cite{AartsDGAL:05a,Thoroddsen:05a,WuM:04a}.
Fig.~\ref{fig:2} shows existing experimental data at various $Oh$, corresponding to a variety of fluids, such as water, silicone oil, and glycerol-salt-water mixtures, with distinct fluid properties as summarized in the Supplementary Materials \cite{SM:19a}. All data tend to collapse onto a single curve that is well predicted by the current theory. Considering the assumptions and approximations made in the derivation, the agreement between theory and experiment is quite satisfactory. The theory also captures the asymptotic behaviors of the data in the viscous and inertial regimes. Specifically, the $R^{\ast} \sim t^{\ast}$ and $R^{\ast} \sim \sqrt{t^{\ast}}$ scalings show up for $R^{\ast} \ll 1$ and $R^{\ast} \gg 1$, respectively, whereas a clear inflection point can be identified around $R^{\ast} \sim O(1)$ and $t^{\ast} \sim O(1)$, marking the viscous-to-inertial transition. It should be emphasized that, although empirical \cite{PaulsenJD:13a} and semi-empirical \cite{Xia:18a} models existed previously, this letter presents the first theory that resolves the unified scaling across the viscous-to-inertial coalescence process.
\begin{figure}
\begin{center}
\includegraphics[scale=0.37]{Fig4-PRL2019.png}
\caption{Model validation against experimental data from previous studies (see Supplementary Materials \cite{SM:19a} for detailed experimental parameters). A close-up of the crossover regime is shown in the inset plot.}
\label{fig:2}
\end{center}
\end{figure}
Next, we provide further validation of the theory against droplet coalescence simulations at various viscosities. The simulation setup is specified in the Supplementary Materials \cite{SM:19a}. The neck interface evolution for a representative case of $Oh = 0.0016$ is shown in the inset plot of Fig.~\ref{fig:4}. Similar simulations were conducted for $Oh=$ 0.0082, 0.0179, 0.0718, 0.1795, 0.8975, and 4. The corresponding neck radius evolutions are presented in the main plot of Fig.~\ref{fig:4}. Each simulation data set originates from a finite neck radius, causing the early-time evolution to deviate from the theory. Nevertheless, the later-stage coalescence behavior is less affected by the simulation onset: each neck evolution curve gradually approaches and then follows its designated scaling, showing that the overall trends of the simulation curves are still captured by the theory. Similar neck evolution behaviors were also observed in previous simulations \cite{SprittlesJE:12a,SprittlesJE:14a}.
Lastly, this theory suggests that $R^{\ast} = R/(Oh\,D_0)$ is a criterion segmenting the different coalescence regimes. Although the criterion involves both $R/D_0$ and $Oh$, it is $Oh$ that, for a given fluid, ultimately decides whether the inertial regime can be reached. It is evident from both Fig.~\ref{fig:2} and Fig.~\ref{fig:4} that data in the inertial regime generally correspond to smaller $Oh$, and vice versa. This criterion has important practical use. For example, Aarts \textit{et al.} \cite{AartsDGAL:05a} considered Data 3 (20 mPa s silicone oil) and Data 4 (50 mPa s silicone oil) to be within the inertial regime, whereas Fig.~\ref{fig:2} clearly shows that Data 3 mainly covers the crossover regime and Data 4 extends from the viscous regime to the crossover regime.
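To illustrate the criterion, the sketch below evaluates the crossover scales $Oh$, $L = Oh\,D_0$ and $T = \mu\, Oh\, D_0/\sigma$ for two fluids. The property values (water and a 50 mPa s silicone oil, both with $D_0 = 1$ mm) are nominal round numbers chosen for illustration, not the exact parameters of the cited experiments:

```python
import math

def crossover_scales(mu, rho, sigma, D0):
    # viscous-to-inertial crossover scales: Oh = mu/sqrt(rho sigma D0),
    # L = Oh * D0, T = mu * Oh * D0 / sigma
    Oh = mu / math.sqrt(rho * sigma * D0)
    return Oh, Oh * D0, mu * Oh * D0 / sigma

# nominal fluid properties in SI units (illustrative, not measured values)
Oh_w, L_w, T_w = crossover_scales(1.0e-3, 1.0e3, 0.072, 1.0e-3)  # water
Oh_o, L_o, T_o = crossover_scales(5.0e-2, 9.6e2, 0.021, 1.0e-3)  # 50 mPa s silicone oil
print(f"water:        Oh = {Oh_w:.4f}, L = {L_w*1e6:.1f} um, T = {T_w*1e9:.0f} ns")
print(f"silicone oil: Oh = {Oh_o:.3f},  L = {L_o*1e6:.0f} um,  T = {T_o*1e6:.0f} us")
```

With these nominal numbers the water droplet has $Oh \sim 10^{-3}$ and a micron-scale crossover length, so nearly all of its observable neck evolution is inertial, while the viscous oil has $Oh \sim 0.3$ and a crossover length comparable to the droplet size, consistent with the discussion of Data 4 above.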
\begin{figure}
\begin{center}
\includegraphics[scale=0.37]{Fig5-PRL2019.png}
\caption{Main: validation of the current theory (Eq.~\ref{eq:9}) against simulated neck evolution for droplets of different viscosities ($Oh$). Inset: time evolution of the simulated neck interface for a representative case with $Oh = 0.0016$.}
\label{fig:4}
\end{center}
\end{figure}
To summarize, this letter presents a theory for the neck evolution during initial coalescence of binary liquid droplets. We have derived and validated a unified solution that applies to the viscous, viscous-to-inertial crossover, and inertial regimes of droplet coalescence. This provides a fundamental framework to support the prominent scaling laws as well as the crossover behaviors observed from previous experimental studies.
We would like to acknowledge the support from the Hong Kong RGC/GRF (PolyU 152217/14E and PolyU 152651/16E) and the ``Open Fund'' of State Key Laboratory of Engines (Tianjin University, No. K2018-12).
\section{Introduction}
Consider a quantile regression problem with a handful of distinct values of the covariates, where each covariate profile is replicated many times. A linear regression model for the quantiles is often preferred for such data. If one ignores the replications, the linear quantile regression estimator of \cite{Koenker_1978} can be used for estimating the parameters and for related inference. However, the replicated nature of the data enables one to fit a linear (mean) regression model to the conditional sample quantiles at each value of the covariates. Since these conditional sample quantiles in general have different variances, a weighted least squares (WLS) estimator with weights inversely proportional to the estimated variances of the respective conditional sample quantiles may be used. Many researchers, apparently oblivious to this common-sense option, have used the method of \cite{Koenker_1978} for linear quantile regression with replicated data \citep{Redden_2004, Fernandez_2004, Nature_2008, Jagger_2009, Kossin_2013}. Before this trend continues further, it is worth studying how the two methods compare.
We show in this paper that the WLS estimator is asymptotically more efficient than the estimator of \cite{Koenker_1978}. Small-sample simulations are conducted to chart the domain of this dominance relation, and an illustrative data analysis is carried out to demonstrate the gains made.
\section{Comparison of asymptotic variances}
Suppose the $\tau$-quantile of the conditional distribution of a random variable $Y$ given another random vector $\bf x$ is $q_{Y} (\tau|{\bf x}):= \inf\{q : P(Y \leq q |\bf x) \geq \tau \}$. For a given $\tau\in [0,1]$, consider the linear regression model \citep{Koenker_2005}
\begin{equation}\label{model1}
q_{Y} (\tau|{\bf x}) = {\bf x}' {\boldsymbol\beta}(\tau),
\end{equation}
where $\bf x$ is the vector of regressors (along with intercept) and ${\boldsymbol\beta}(\tau)$ is the vector of corresponding regression coefficients. Consider independent sets of data of the form (${\bf x}_i,Y_{ij}$) with $j=1,\ldots,n_i$, $i=1,\ldots,k$, such that for given ${\bf x}_i$, the $Y_{ij}$s are conditionally iid with common distribution $F_i$. The sample $\tau$-quantile for given ${\bf x}_i$ is
\begin{equation}\label{model2}
\widehat q_i(\tau)=\mathop{\arg\min}_{m_i} \sum_{j=1}^{n_i}\rho_{\tau}(Y_{ij}-m_i),\;\;\;i=1,\ldots,k,
\end{equation}
where $\rho_{\tau}(u)=u(\tau-I(u<0))$. We assume that the distribution $F_i$ has continuous Lebesgue density, $f_i$, with $f_i(u) > 0$ on $\{u: 0 < F_i(u) < 1\}$, for $i=1,\ldots,k$. The limiting distribution of $\widehat q_i(\tau)$ has mean $q_{Y} (\tau|{\bf x}_i)$ and variance given by \citep{Shorack_2009}
\begin{equation}\label{weigts}
\sigma^2_i(\tau)=\frac{\tau(1-\tau)}{n_i f_i^2(F^{-1}_i(\tau))},\;i=1,\ldots,k.
\end{equation}
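This asymptotic variance formula can be checked by a small Monte Carlo experiment. The sketch below (standard-library Python; the sample size, number of replications and seed are arbitrary choices) compares the empirical variance of the sample median of standard normal samples with $\tau(1-\tau)/(n f^2(F^{-1}(\tau)))$:

```python
import math
import random
import statistics

random.seed(0)
tau, n, reps = 0.5, 400, 4000   # quantile, per-sample size, Monte Carlo reps

def sample_quantile(data, t):
    # empirical t-quantile: order statistic of rank ceil(len(data) * t)
    data = sorted(data)
    return data[max(0, math.ceil(t * len(data)) - 1)]

# Monte Carlo variance of the sample median of n standard normal draws
meds = [sample_quantile([random.gauss(0.0, 1.0) for _ in range(n)], tau)
        for _ in range(reps)]
mc_var = statistics.variance(meds)

f0 = 1.0 / math.sqrt(2.0 * math.pi)            # N(0,1) density at its median
asym_var = tau * (1.0 - tau) / (n * f0 ** 2)   # asymptotic variance formula
print(mc_var, asym_var)  # the two should agree to within a few percent
```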
Linear regression of $\widehat q_i (\tau)$ on ${\bf x}_i$, with $\sigma_i^{-2}(\tau)$ as weights, produces WLS estimator of ${\boldsymbol \beta}(\tau)$
\begin{eqnarray}\label{wls}
\widehat {\boldsymbol \beta}_{wls}(\tau)
=({\bf X}'\Omega^{-1}_{\tau}{\bf X})^{-1}{\bf X}'\Omega^{-1}_{\tau}\widehat {\bf q}(\tau)
\end{eqnarray}
where ${\bf X}=({\bf x}_1:\ldots:{\bf x}_k)'$, for $i=1,\ldots,k$, $\widehat{\bf q}(\tau)=(\widehat q_1(\tau),\ldots,\widehat q_k(\tau))'$ and $\Omega_{\tau}$ is a diagonal matrix with $\sigma^2_1(\tau),\ldots,\sigma^2_k(\tau)$ as diagonal elements, which have to be replaced by consistent estimates.
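For concreteness, the WLS estimator can be computed through ordinary weighted normal equations. In the sketch below the covariate values, conditional sample quantiles and variances are purely hypothetical stand-ins for $\widehat q_i(\tau)$ and $\sigma^2_i(\tau)$; with a single covariate the $2\times 2$ system can be solved directly:

```python
# hypothetical inputs, purely for illustration: k = 4 distinct covariate
# values, their conditional sample tau-quantiles, and (assumed known) variances
x  = [0.5, 1.0, 2.0, 4.0]        # covariate values x_i
qh = [1.26, 1.49, 2.04, 3.01]    # conditional sample quantiles qhat_i(tau)
s2 = [0.04, 0.02, 0.02, 0.05]    # sigma_i^2(tau)

w = [1.0 / v for v in s2]        # WLS weights: inverse variances

# weighted normal equations for q = b0 + b1 * x (a 2x2 linear system)
S0 = sum(w)
S1 = sum(wi * xi for wi, xi in zip(w, x))
S2 = sum(wi * xi * xi for wi, xi in zip(w, x))
T0 = sum(wi * qi for wi, qi in zip(w, qh))
T1 = sum(wi * xi * qi for wi, xi, qi in zip(w, x, qh))
det = S0 * S2 - S1 * S1
b0 = (S2 * T0 - S1 * T1) / det   # weighted intercept estimate
b1 = (S0 * T1 - S1 * T0) / det   # weighted slope estimate
print(b0, b1)
```

The solution satisfies the weighted orthogonality conditions $\sum_i w_i r_i = \sum_i w_i x_i r_i = 0$ for the residuals $r_i$, which is the defining property of the WLS fit.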
The estimator proposed by \cite{Koenker_1978} is
\begin{equation}\label{kb}
\widehat {\boldsymbol \beta}_{kb}(\tau)=\mathop{\arg\min}_{\boldsymbol{\beta}(\tau)}\sum_{i=1}^k\sum_{j=1}^{n_i}
\rho_\tau(Y_{ij}-{\bf x}'_i\boldsymbol{\beta}(\tau)).
\end{equation}
This estimator (the KB estimator) works even if $n_i=1$ for some or all $i$.
In order to show that \eqref{wls} is asymptotically more efficient than \eqref{kb}, we need the following regularity conditions.\\
{\bf Condition A1.}
For some vector $(\xi_{1},\xi_{2},\dots,\xi_{k})^T$ with positive components,
\begin{equation}\label{condE}
\left(\frac{n_1}{n},\frac{n_2}{n},\ldots,\frac{n_k}{n}\right)^T\rightarrow \left(\xi_{1},\xi_{2},\dots,\xi_{k}\right)^T
\end{equation}
in Euclidean norm, as $n=\sum_{i=1}^kn_i\rightarrow\infty$.\\
{\bf Condition A2.}
The distribution functions $F_i$ are absolutely continuous, with continuous density $f_i$ uniformly bounded away from $0$ and $\infty$ at $F^{-1}_i(\tau)$.\\
{\bf Condition A3.}
$\underset{i=1,\ldots,k}{\max}||{\bf x}_i||/\sqrt{n}\rightarrow 0$ as $n\rightarrow\infty$. Further, the sample matrices
$D_{0n}=n^{-1}\sum_{i=1}^k{n_i} {\bf x}_i {\bf x}^{T}_i$, $D_{1n}=n^{-1}\sum_{i=1}^k{n_i} f_i(F^{-1}_i(\tau)) {\bf x}_i {\bf x}^{T}_i$ and $D_{2n}=n^{-1}\sum_{i=1}^k{n_i} f^2_i(F^{-1}_i(\tau)) {\bf x}_i {\bf x}^{T}_i$ converge to positive definite matrices $D_0$, $D_1$ and $D_2$, respectively, as $n\rightarrow\infty$.
{\bf Theorem 1:}
{\it Under Conditions A1, A2 and A3, and assuming the $\Omega_{\tau}$ in \eqref{wls} is replaced by a consistent estimator,
\begin{itemize}
\item [(a)] $\sqrt{n}(\widehat{\boldsymbol \beta}_{kb}(\tau)-{\boldsymbol \beta}(\tau))\rightarrow \mathcal{N}\left(0,\tau(1-\tau)D^{-1}_1 D_0 D^{-1}_1\right),$
\item [(b)] $\sqrt{n}(\widehat{\boldsymbol \beta}_{wls}(\tau)-{\boldsymbol \beta}(\tau))\rightarrow \mathcal{N}\left(0,\tau(1-\tau) D^{-1}_2\right),$
\item [(c)] the limiting dispersion matrix of $\sqrt{n}(\widehat{\boldsymbol \beta}_{kb}(\tau)-{\boldsymbol \beta}(\tau))$
is larger than or equal to that of $\sqrt{n}(\widehat{\boldsymbol \beta}_{wls}(\tau)-{\boldsymbol \beta}(\tau))$ in the sense of the L\"{o}wner order
\footnote{A symmetric matrix $A$ is said to be greater than or equal to another symmetric matrix $B$ in the sense of the L\"{o}wner order if $A-B$ is a non-negative definite matrix.}.
\end{itemize}}
\vspace{.1cm}
{\bf Proof:}
The result of part (a) follows from \cite{Koenker_2005}, page 121. Part (b) follows from the fact that the WLS estimator is a linear function of the conditional sample quantiles $\widehat q_i(\tau)$, $i=1,\ldots,k$, whose limiting distributions under the given conditions are well known \citep{Shorack_2009}. The continuous mapping theorem ensures that a consistent estimator of $\Omega_{\tau}$ is an adequate substitute for it.
Note that the asymptotic dispersion matrices of $\sqrt{n}(\widehat{\boldsymbol \beta}_{kb}(\tau)-{\boldsymbol \beta}(\tau))$ and $\sqrt{n}(\widehat{\boldsymbol \beta}_{wls}(\tau)-{\boldsymbol \beta}(\tau))$ are the limits of $\tau(1-\tau)D^{-1}_{1n} D_{0n} D^{-1}_{1n}$ and $\tau(1-\tau)D^{-1}_{2n}$, respectively, where $D_{0n}$, $D_{1n}$ and $D_{2n}$ are as defined in Condition A3.
Thus, part (c) is proved if we can show that for every $n$, $D^{-1}_{2n}\le D^{-1}_{1n} D_{0n} D^{-1}_{1n}$ in the sense of the L\"{o}wner order. It suffices to show that $D_{1n} D^{-1}_{0n} D_{1n} \le D_{2n}.$
Let $D_{0n}=n^{-1}B'B$, $D_{1n}=n^{-1}A'B=n^{-1}B'A$ and $D_{2n}=n^{-1}A'A$, where
\begin{equation}\label{matrixab}
B=\begin{bmatrix} \sqrt{n_1} & \cdots &0\\ \vdots & \ddots&\vdots\\0 &\cdots& \sqrt{n_k}\end{bmatrix}{\bf X},\ \ A=\begin{bmatrix} \sqrt{n_1}f_1(F^{-1}_1(\tau)) & \cdots &0\\
\vdots & \ddots&\vdots\\0 &\cdots& \sqrt{n_k}f_k(F^{-1}_k(\tau))\end{bmatrix}{\bf X}.
\end{equation}
It follows that $$D_{1n} D^{-1}_{0n} D_{1n} = n^{-1}A'B(B'B)^{-1}B'A =n^{-1} A'P_BA \le n^{-1} A'A = D_{2n},$$
where $P_B$ is the orthogonal projection matrix for the column space of $B$. Part~(c) is proved by taking limits of the two sides of the above inequality as $n$ goes to infinity.\hfill$\Box$
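The matrix inequality $D_{1n} D^{-1}_{0n} D_{1n} \le D_{2n}$ can be verified numerically for any concrete design. The sketch below uses hypothetical values of $x_i$, $n_i$ and $f_i(F^{-1}_i(\tau))$ with ${\bf x}_i = (1, x_i)'$, so that all moment matrices are $2\times 2$ and a symmetric matrix is nonnegative definite exactly when its trace and determinant are nonnegative:

```python
# hypothetical design, purely for illustration
x = [0.5, 1.5, 3.0]      # distinct covariate values, X_i = (1, x_i)'
n = [100, 200, 150]      # replicate counts n_i
f = [0.30, 0.45, 0.25]   # densities f_i(F_i^{-1}(tau)) at the tau-quantile
ntot = sum(n)

def moment(p):
    # symmetric 2x2 matrix n^{-1} sum_i n_i f_i^p X_i X_i',
    # stored row-major as (m11, m12, m21, m22)
    a = sum(ni * fi**p for ni, fi in zip(n, f)) / ntot
    b = sum(ni * fi**p * xi for ni, fi, xi in zip(n, f, x)) / ntot
    c = sum(ni * fi**p * xi * xi for ni, fi, xi in zip(n, f, x)) / ntot
    return (a, b, b, c)

def mul(A, B):
    # 2x2 matrix product, row-major tuples
    return (A[0]*B[0] + A[1]*B[2], A[0]*B[1] + A[1]*B[3],
            A[2]*B[0] + A[3]*B[2], A[2]*B[1] + A[3]*B[3])

D0, D1, D2 = moment(0), moment(1), moment(2)
det0 = D0[0]*D0[3] - D0[1]*D0[2]
inv0 = (D0[3]/det0, -D0[1]/det0, -D0[2]/det0, D0[0]/det0)

# M = D2 - D1 D0^{-1} D1; the Loewner order says M is nonnegative definite
Q = mul(mul(D1, inv0), D1)
M = tuple(d - q for d, q in zip(D2, Q))
trace_M = M[0] + M[3]
det_M = M[0]*M[3] - M[1]*M[2]
print(trace_M, det_M)  # both nonnegative, as the projection argument requires
```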
The next theorem provides a necessary and sufficient condition for the L\"{o}wner order of part (c) to hold with equality.
{\bf Theorem 2:}
{\it Suppose Conditions A1, A2 and A3 hold and assume that $\Omega_{\tau}$ in \eqref{wls} is replaced by a consistent estimator.
\begin{itemize}
\item [(a)] The asymptotic dispersion matrices of the estimators \eqref{wls} and \eqref{kb} coincide if all
$f_i(F^{-1}_i(\tau))$'s in \eqref{weigts} for $i=1,\ldots,k$ are equal.
\item [(b)] Suppose ${\bf x}_i={1\choose {\bf z}_i}$ for $i=1,\ldots,k$, where ${\bf z}_1,\ldots,{\bf z}_k$ are samples from a $p$-variate continuous
distribution not restricted to any lower dimensional subspace. The asymptotic dispersion matrices of the estimators \eqref{wls} and \eqref{kb}
coincide only if all $f_i(F^{-1}_i(\tau))$'s in \eqref{weigts} for $i=1,\ldots,k$ are equal.
\end{itemize}
}
{\bf Proof:}
For simplicity of notation, we refer to $f_i(F^{-1}_i(\tau))$ simply by $f_i$ in this proof. The point of departure of the proof of this theorem is part (c) of Theorem~1, where a L\"{o}wner order between the two dispersion matrices has been established. This order follows from the inequality at the end of the proof of that theorem, which holds with equality if and only if the column space of $A$ is contained in the column space of $B$. From the definition of $A$ and $B$ given in \eqref{matrixab}, this condition amounts to the containment of the column space of ${\bf FX}$ in that of ${\bf X}$, where ${\bf F}$ is the diagonal matrix with $f_1,\ldots,f_k$ as its diagonal elements.
Part (a) is proved by using the fact that if all the $f_i$'s are equal, then ${\bf FX}$ is a constant multiple of ${\bf X}$, implying the equivalence of the column spaces of these two matrices.
In order to prove part (b), we start from the assumption that the column space of ${\bf FX}$ is contained in that of ${\bf X}$, that is, there is a $(p+1)\times(p+1)$ matrix ${\bf C}$ such that ${\bf XC}'=\bf{FX}$. By writing this matrix equation in terms of equality of the corresponding rows of the two sides, we have
$${\bf C}{\bf x}_i=f_i{\bf x}_i\quad \mbox{for $i=1,\ldots,k$}.$$ Therefore, every $f_i$ is an eigenvalue of the $(p+1)\times (p+1)$ matrix ${\bf C}$ with eigenvector ${\bf x}_i$. Lemma~1 proved below implies that all the $f_i$'s have to be equal almost surely over the distribution of the ${\bf z}_i$'s mentioned in the statement of the theorem.\hfill$\Box$
{\bf Lemma 1:}
{\it Suppose ${\bf z}_1,\ldots,{\bf z}_k$ are samples from a $p$-variate continuous distribution
not restricted to any lower dimensional subspace. If ${\bf C}$ is a $(p+1)\times(p+1)$ matrix with
${1\choose {\bf z}_1},\ldots,{1\choose {\bf z}_{k}}$ as eigenvectors,
then ${\bf C}$ is almost surely a multiple of the $(p+1)\times(p+1)$ identity matrix.}
{\bf Proof:} Suppose ${\bf z}_1,\ldots,{\bf z}_{p+1}$ are drawn as in the statement of the lemma and ${\bf C}$ is a $(p+1)\times(p+1)$ matrix
having ${1\choose {\bf z}_1},\ldots,{1\choose {\bf z}_{p+1}}$
as eigenvectors. If ${\bf C}$ is not a multiple of the identity matrix, no eigenvalue of ${\bf C}$ has multiplicity
$(p+1)$. Therefore, the eigenspace (space of eigenvectors) corresponding to each eigenvalue has dimension $p$ or less.
For ${1\choose {\bf z}_{p+2}},\ldots,{1\choose {\bf z}_k}$ to be eigenvectors of ${\bf C}$, they have to
belong to the union of these eigenspaces, each a proper subspace of dimension at most $p$. This event has probability zero under the hypothesis
of the lemma. The result follows.\hfill$\Box$
\bigskip
{\bf Remark 1:}
The condition $f_1(F^{-1}_1(\tau))=\cdots=f_k(F^{-1}_k(\tau))$ mentioned in Theorem~2 may occur when, for instance,
the model \eqref{model1} arises from the more restrictive observation model
$$Y_{ij}=\beta_0+\beta_1X_i+e_{ij},\; j=1,\ldots,n_i,\; i=1,\ldots,k,$$
where $e_{ij}\sim F$ for some common distribution $F$ that does not depend on $X_i$. This is a special case of \eqref{model1} with
$\beta_0(\tau)=\beta_0+F^{-1}(\tau)$ and $\beta_1(\tau)=\beta_1$ for all $\tau$.
By denoting $\mu_i=\beta_0+\beta_1 X_i$, we get $F_i(y)=F(y-\mu_i)$ and $f_i(y)=f(y-\mu_i)$.
Thus, the conditional $\tau$-quantile is $F_i^{-1}(\tau)=F^{-1}(\tau)+\mu_i$ and the value of the conditional density at that
quantile is $f_i(F_i^{-1}(\tau))=f(F_i^{-1}(\tau)-\mu_i)=f(F^{-1}(\tau))$,
for $i=1,\ldots,k$. The equality holds for all $\tau$, which is a much stronger condition than that required by Theorem~2.
\bigskip
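The location-shift mechanism of Remark~1 is easy to confirm numerically. The sketch below (standard-library Python, with arbitrary illustrative location values $\mu_i$) shows that shifted normal distributions all have the same density at their own $\tau$-quantiles:

```python
from statistics import NormalDist

tau = 0.3
dens = []
for mu in (0.0, 1.7, -2.3):      # mu_i = beta_0 + beta_1 X_i, illustrative values
    Fi = NormalDist(mu, 1.0)     # F_i(y) = F(y - mu_i): a pure location shift
    q_i = Fi.inv_cdf(tau)        # conditional tau-quantile F_i^{-1}(tau)
    dens.append(Fi.pdf(q_i))     # density at that quantile
print(dens)  # identical across mu: f_i(F_i^{-1}(tau)) = f(F^{-1}(tau))
```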
In order to define the estimator \eqref{wls} completely, one has to choose a consistent estimator of $\Omega_{\tau}$, which may be obtained by
plugging any consistent estimator of $1/f_i(F_i^{-1}(\tau))$ into \eqref{weigts}. Let us denote $s_i(\tau)=1/f_i(F_i^{-1}(\tau))$ and
consider some consistent estimators of this parameter under various conditions.
A simple plug-in estimator is obtained by using the sample quantile to estimate $F_i^{-1}$ and the kernel density estimator
\citep{Silverman_1986} of $f_i$, for each $i$. If $h_{n_i}$ is the kernel bandwidth, then this estimator would be consistent as long as
$h_{n_i}\rightarrow0$ and $n_ih_{n_i}\rightarrow\infty$ as $n_i\rightarrow\infty$, and the conditions of Theorem 1 hold.
By noting that $s_i(\tau)=\frac{d}{d\tau}F_i^{-1}(\tau)$, \cite{siddiqui} proposed the finite-difference estimator
\begin{equation}\label{s_t}
\hat{s}_i(\tau)=[\widehat q_i(\tau+h_{n_i})-\widehat q_i(\tau-h_{n_i})]/{2h_{n_i}},
\end{equation}
which has been quite popular. This estimator is consistent under the conditions of Theorem~1 when
the bandwidth parameter $h_{n_i}$ tends to 0 as $n_i\rightarrow\infty$.
A bandwidth rule, suggested by \cite{Hall_88} for the purpose of obtaining confidence intervals for the $\tau$-quantile based on
Edgeworth expansions, is
\begin{equation}
h_{n_i}=n_i^{-1/3} z_{\alpha}^{2/3}[1.5 s_i(\tau)/s_i^{''}(\tau)]^{1/3},\nonumber
\end{equation}
where $z_\alpha$ satisfies $\Phi(z_\alpha)=1-\frac{\alpha}{2}$, and $1-\alpha$ is the specified coverage probability of the said confidence
interval. In the absence of any information about $s_i(\cdot)$, one can use the Gaussian model, as in \cite{Koenker_1999}, to choose
\begin{equation}\label{bandwidth}
h_{n_i}=n_i^{-1/3} z_{\alpha}^{2/3}[1.5 \phi^2(\Phi^{-1}(\tau))/(2(\Phi^{-1}(\tau))^2+1)]^{1/3}.
\end{equation}
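The sketch below puts \eqref{s_t} and \eqref{bandwidth} together for a single simulated sample. The sample size, quantile level and random seed are arbitrary choices, and the true value $1/f(F^{-1}(\tau))$ is known here because the data are standard normal:

```python
import math
import random
from statistics import NormalDist

random.seed(1)
tau, alpha, n = 0.3, 0.05, 5000
nd = NormalDist()

# Gaussian-model bandwidth: h = n^{-1/3} z^{2/3} [1.5 phi^2(q)/(2 q^2 + 1)]^{1/3}
z = nd.inv_cdf(1.0 - alpha / 2.0)
q = nd.inv_cdf(tau)
h = n ** (-1/3) * z ** (2/3) * (1.5 * nd.pdf(q) ** 2 / (2.0 * q * q + 1.0)) ** (1/3)

data = sorted(random.gauss(0.0, 1.0) for _ in range(n))

def emp_q(t):
    # empirical t-quantile as the order statistic of rank ceil(n t)
    return data[max(0, math.ceil(t * n) - 1)]

# Siddiqui finite-difference estimate of s(tau) = 1/f(F^{-1}(tau))
s_hat = (emp_q(tau + h) - emp_q(tau - h)) / (2.0 * h)
s_true = 1.0 / nd.pdf(q)
print(h, s_hat, s_true)  # the estimate should sit close to the truth
```

Repeating this for each covariate profile $i$, and inserting $\hat s_i(\tau)$ into \eqref{weigts}, yields the consistent plug-in estimate of $\Omega_\tau$ used below.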
\section{Simulations of performance}
We now compare the small sample performances of the estimators $\widehat {\boldsymbol \beta}_{wls}(\tau)$
and $\widehat {\boldsymbol \beta}_{kb} (\tau)$ defined in \eqref{wls}
and \eqref{kb}, in terms of their empirical Mean Squared Error (MSE). The specific version of the WLS estimator we use here is defined by \eqref{wls}
with $\Omega_{\tau}$ replaced by
$$\widehat{\Omega}_{\tau}=\begin{pmatrix} \frac{\tau(1-\tau)}{n_1}\hat{s}^2_1(\tau)&0&\cdots&0\\
0&\frac{\tau(1-\tau)}{n_2}\hat{s}^2_2(\tau)&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\frac{\tau(1-\tau)}{n_k}\hat{s}^2_k(\tau)\\
\end{pmatrix},$$
where $\hat{s}_i(\tau)$ is defined as in \eqref{s_t} together with \eqref{bandwidth} and $\alpha=0.05$.
For $i=1,\ldots,k$, we simulate a scalar covariate $x_i$ from the gamma distribution with shape parameter $p=2$ and scale parameter
$\theta=0.5$. Then, for every $i$ and $j=1,\ldots,n_i$, we simulate $Y_{ij}$ from $\mathcal{N}(\mu_i,\eta^2_i)$ where
$\mu_i=\beta_1+\beta_2 x_i-\eta_i\Phi^{-1}(\tau)$, so that the $\tau$-quantile of $Y_{ij}$ is $\beta_1+\beta_2x_i$. As for $\eta_i^2$,
we choose two different values: $\eta_i=1/x_i$ and $\eta_i=1$. Only the second choice ensures asymptotic equivalence of the
two estimators as per Theorem~2. We use $\beta_1=1$, $\beta_2=0.5$, quantile $\tau=0.1$, 0.3, 0.5, 0.7 and 0.9 and
number of distinct covariate values $k=5$, 10 and 30. As for the number of replicates $n_i$ for the $i$th distinct value of the covariate,
we choose the balanced design $n_1=\cdots=n_k=n_0$ (say), and use the values 50, 100, 200 and 500 for $n_0$.
These choices of $\tau$, $k$ and $n_i$ by and large cover the data analytic problems of
\cite{Redden_2004}, \cite{Fernandez_2004}, \cite{Nature_2008}, \cite{Jagger_2009} and \cite {Kossin_2013}.
We compute the KB estimator \eqref{kb} using the quantile regression package quantreg
(R package version 5.29; https://www.r-project.org).
Table~\ref{tab1} shows the empirical MSE of the WLS and KB estimators of the two regression parameters,
for $\eta_i=1/x_i$ and the specified values of the other parameters, based on 10,000 simulation runs.
It can be seen that the empirical MSE of the WLS estimator is generally less than that of the KB estimator.
The only cases where the KB estimator has a much smaller MSE than the WLS estimator occur at the extreme quantiles ($\tau=0.1$ or 0.9)
with small samples ($n_i=50$ and $k=30$). This may be because $n_i=50$ is too small for estimating the
variance of extreme quantiles. For $n_i=200$ or higher, the MSE of the
WLS estimator is smaller for all the quantiles considered here. For $\tau=0.3$, 0.5 and 0.7, the superiority holds for all the sample sizes considered.
These small sample findings nicely complement the large sample superiority of the WLS estimator over the KB estimator, as described in~Theorem~1.
We now turn to the case $\eta_i=1$ for all $i$, so that the condition of Theorem~2 holds and the two estimators are asymptotically equivalent. Table~\ref{tab2} shows the empirical MSE of the WLS and KB estimators of the regression parameters, based on 10,000 simulation runs, for $\eta_i=1$ and the other parameters at the values specified in Table~\ref{tab1}. There is no clear dominance of either estimator over the other for any choice of sample size. The WLS estimator of $\beta_0$ generally has smaller MSE than the KB estimator, while the KB estimator appears to work better for $\beta_1$. Overall, the empirical MSEs of the two estimators are very close to one another.
\begin{table}
\caption{Empirical MSE of $\widehat{\boldsymbol\beta}_{wls}$ and $\widehat{\boldsymbol\beta}_{kb}$ for $\eta_i=1/x_i$, $i=1,\dots,k$ and for different values of $\tau$, $k$ and $n_0$.}
\vspace{0.4cm}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccc@{\hskip 5pt}cc@{\hskip 5pt}cc@{\hskip 5pt}cc@{\hskip 5pt}c}\hline
& & & \multicolumn{2}{c}{$n_0$=50}&\multicolumn{2}{c}{$n_0$=100} & \multicolumn{2}{c}{\centering $n_0$=200}
& \multicolumn{2}{c}{\centering $n_0$=500}\\\hline\hline
$\tau$&$k$&Estimator & $\beta_0$& $\beta_1$ & $\beta_0$& $\beta_1$ & $\beta_0$& $\beta_1$ & $\beta_0$& $\beta_1$\\\hline
& \multirow{2}{*}{5} &WLS &0.2662& 0.2769 &0.1291& 0.1521&0.0610& 0.0640&0.0238& 0.0273
\\&&KB& 0.3272 & 0.3521 & 0.1589 & 0.1875&0.0809& 0.0872&0.0332& 0.0389 \vspace{.1cm} \\
0.1&\multirow{2}{*}{10} &WLS &0.0912& 0.0445 &0.0380& 0.0203&0.0174& 0.0094& 0.0063& 0.0035
\\&& KB& 0.0877& 0.0521 &0.0426 & 0.0250&0.0208& 0.0120&0.0087& 0.0051 \vspace{.1cm}\\
&\multirow{2}{*}{30} &WLS &0.0299& 0.0064 &0.0104& 0.0025&0.0042& 0.0011&0.0014& 0.0004
\\&& KB&0.0172& 0.0059 &0.0084& 0.0028&0.0044& 0.0015&0.0017& 0.0006\\\hline\hline
& \multirow{2}{*}{5} &WLS &0.1390& 0.1514 &0.0739& 0.0892 &0.0347& 0.0414&0.0090&0.0054
\\&& KB&0.1889& 0.2027&0.0999& 0.1241&0.0477& 0.0550&0.0128& 0.0076\vspace{.1cm} \\
0.3&\multirow{2}{*}{10} &WLS &0.0373& 0.0212 &0.0184& 0.0102&0.0133& 0.0165&0.0035& 0.0021
\\&& KB&0.0494& 0.0286&0.0247& 0.0139&0.0189& 0.0213 &0.0050& 0.0029 \vspace{.1cm} \\
&\multirow{2}{*}{30} &WLS &0.0079& 0.0024 &0.0036& 0.0011&0.0090& 0.0054& 0.0017& 0.0005
\\&& KB&0.0104& 0.0035&0.0052& 0.0017&0.0128& 0.0076& 0.0025&0.0008\\\hline\hline
& \multirow{2}{*}{5} &WLS &0.1228& 0.142&0.0605&0.0709&0.0156& 0.0089&0.0031&0.0010
\\&& KB&0.1707& 0.1870&0.0846& 0.0978&0.0228& 0.0132 &0.0046& 0.0016 \vspace{.1cm} \\
0.5&\multirow{2}{*}{10} &WLS &0.0320& 0.0190&0.0304& 0.0350&0.0078& 0.0044& 0.0015& 0.0005
\\&& KB&0.0459& 0.0278&0.0425& 0.0477&0.0114& 0.0066& 0.0023& 0.0007 \vspace{.1cm} \\
&\multirow{2}{*}{30} &WLS &0.0061& 0.0019& 0.0122& 0.0134& 0.0031& 0.0018&0.0006& 0.0002
\\&& KB&0.0092& 0.0031&0.0167& 0.0175& 0.0046& 0.0028&0.0009& 0.0003 \\\hline\hline
& \multirow{2}{*}{5} &WLS &0.1453& 0.1890&0.0696&0.0832&0.0185& 0.0104&0.0037&0.0011
\\&& KB&0.1981 & 0.2505&0.1003& 0.1202 &0.0259& 0.0148 &0.0052& 0.0017 \vspace{.1cm}\\
0.7&\multirow{2}{*}{10} &WLS &0.0380& 0.0221&0.0362&0.0469&0.0089& 0.0051& 0.0017& 0.0005
\\&& KB& 0.0512 & 0.0314 &0.0481& 0.0639&0.0125& 0.0072&0.0025& 0.0008 \vspace{.1cm}\\
&\multirow{2}{*}{30} &WLS &0.0079& 0.0024&0.0141& 0.0182&0.0036& 0.0020&0.0006& 0.0002
\\&& KB&0.0104 & 0.0035&0.0192& 0.0239&0.0051& 0.0030&0.0010& 0.0003 \\\hline\hline
& \multirow{2}{*}{5} &WLS &0.2719& 0.2833&0.1302& 0.1570&0.0375& 0.0192&0.0107& 0.0026
\\&& KB&0.3546& 0.3697&0.1623& 0.1863 &0.0437& 0.0254&0.0085& 0.0029 \vspace{.1cm}\\
0.9&\multirow{2}{*}{10} &WLS &0.0912& 0.0432&0.0612& 0.0717&0.0177& 0.009&0.0043& 0.0011
\\&&KB& 0.0870& 0.0490 &0.0802& 0.1000& 0.0213& 0.0124&0.0044& 0.0015\vspace{.1cm}\\
&\multirow{2}{*}{30} &WLS &0.0304& 0.0067&0.0253& 0.0297&0.0063& 0.0034&0.0013& 0.0004
\\&& KB&0.0174& 0.0058 &0.0349& 0.0414&0.0083& 0.0047&0.0017& 0.0005\\\hline\hline
\end{tabular}
}\label{tab1}
\end{table}
\begin{table}
\caption{Empirical MSE of $\widehat{\boldsymbol\beta}_{wls}$ and $\widehat{\boldsymbol\beta}_{kb}$ for $\eta_i=1$, $\forall i$ and for different values of $\tau$, $k$ and $n_0$.}
\vspace{0.4cm}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccc@{\hskip 5pt}cc@{\hskip 5pt}cc@{\hskip 5pt}cc@{\hskip 5pt}c}
\hline
& & &
\multicolumn{2}{c}{$n_0$=50}&\multicolumn{2}{c}{$n_0$=100} & \multicolumn{2}{c}{\centering $n_0$=200}
& \multicolumn{2}{c}{\centering $n_0$=500}\\ \hline\hline
$\tau$&$k$&Estimator & $\beta_0$& $\beta_1$ & $\beta_0$& $\beta_1$ & $\beta_0$& $\beta_1$ & $\beta_0$& $\beta_1$\\\hline
& \multirow{2}{*}{5} &WLS &0.0631& 0.0795 &0.0305& 0.0487&0.0168& 0.0207&0.0064& 0.0096
\\&&KB& 0.0679& 0.0797 & 0.0313& 0.0421&0.0165& 0.0201&0.0070& 0.0092\vspace{.1cm} \\
0.1&\multirow{2}{*}{10} &WLS &0.0227& 0.0256 &0.0106& 0.0128&0.0061& 0.0051& 0.0025& 0.0020
\\&& KB& 0.0231& 0.0233 &0.0112& 0.0119&0.0057& 0.0069&0.0023& 0.0020\vspace{.1cm} \\
&\multirow{2}{*}{30} &WLS &0.0038& 0.0069 &0.0012& 0.0013&0.0008& 0.0005&0.0006& 0.0005
\\&& KB&0.0059& 0.0064 &0.0016& 0.0013&0.0007& 0.0005&0.0007& 0.0005\\\hline\hline
& \multirow{2}{*}{5} &WLS &0.0451& 0.0598 &0.0201& 0.0318 &0.0101& 0.0115&0.0042& 0.0043
\\&& KB&0.0466& 0.0512&0.0206& 0.0286&0.0099& 0.0122&0.0041& 0.0044\vspace{.1cm}\\
0.3&\multirow{2}{*}{10} &WLS &0.0112& 0.0143 &0.0043& 0.0113&0.0080& 0.0065&0.0035& 0.0033
\\&& KB&0.0128& 0.0110&0.0074& 0.0066& 0.0083& 0.0091 &0.0015& 0.0042\vspace{.1cm}\\
&\multirow{2}{*}{30} &WLS &0.0035& 0.0041 &0.0020& 0.0014&0.0004& 0.0003& 0.0004& 0.0003
\\&& KB&0.0039& 0.0039&0.0018& 0.0013&0.0004& 0.0003& 0.0004& 0.0003\\\hline\hline
& \multirow{2}{*}{5} &WLS &0.0372& 0.0664&0.0177& 0.0213&0.0093& 0.0140&0.0041& 0.0046
\\&& KB&0.0373& 0.0420&0.0189& 0.0213&0.0090& 0.0102 &0.0038& 0.0043 \vspace{.1cm} \\
0.5&\multirow{2}{*}{10} &WLS &0.0127& 0.0114&0.0064& 0.0056&0.0031& 0.0027& 0.0012& 0.0011
\\&& KB&0.0126& 0.0114&0.0063& 0.0055&0.0031& 0.0026& 0.0012& 0.0011\vspace{.1cm} \\
&\multirow{2}{*}{30} &WLS &0.0035& 0.0025& 0.0017& 0.0013& 0.0008& 0.0006&0.0003& 0.0002
\\&& KB&0.0034& 0.0025&0.0017& 0.0012& 0.0008& 0.0006&0.0003& 0.0002 \\\hline\hline
& \multirow{2}{*}{5} &WLS &0.0502& 0.0571 &0.0193& 0.0294 &0.0085& 0.0123&0.0046& 0.0042
\\&& KB&0.0524& 0.0553&0.0206& 0.0286&0.0099& 0.0112&0.0041& 0.0042\vspace{.1cm}\\
0.7&\multirow{2}{*}{10} &WLS &0.0041& 0.0145 &0.0065& 0.0121&0.0081& 0.0063&0.0017& 0.0012
\\&& KB&0.0129& 0.0121&0.0085& 0.0066& 0.0031& 0.0034 &0.0017& 0.0011\vspace{.1cm}\\
&\multirow{2}{*}{30} &WLS &0.0032& 0.0045 &0.0016& 0.0018&0.0008& 0.0003& 0.0005& 0.0003
\\&& KB&0.0039& 0.0039&0.0017& 0.0014&0.0004& 0.0004& 0.0004& 0.0003\\\hline\hline
& \multirow{2}{*}{5} &WLS &0.0751& 0.0755 &0.0365& 0.0415&0.0154& 0.0198&0.0072& 0.0083
\\&&KB& 0.0779& 0.0752 & 0.0378& 0.0385&0.0174& 0.0245&0.0071& 0.0084\vspace{.1cm} \\
0.9&\multirow{2}{*}{10} &WLS &0.0215& 0.0226 &0.0141& 0.0118&0.0051& 0.0052& 0.0024& 0.0021
\\&& KB& 0.0231& 0.0293 &0.0112& 0.0098&0.0053& 0.0053&0.0021& 0.0022\vspace{.1cm} \\
&\multirow{2}{*}{30} &WLS &0.0136& 0.0053 &0.0021& 0.0014&0.0007& 0.0005&0.0007& 0.0005
\\&& KB&0.0159& 0.0045 &0.0018& 0.0011&0.0006& 0.0004&0.0008& 0.0005\\\hline\hline
\end{tabular}
}\label{tab2}
\end{table}
\section{Data analysis}
We now use the WLS and the KB estimator to fit model \eqref{model1} to the tropical cyclone data considered in \cite{Nature_2008} and available at {http://myweb.fsu.edu/jelsner/temp/extspace/globalTCmax4.txt}. The satellite-based data set consists of the lifetime maximum wind speed (metres per second) of each of the 2097 cyclones that occurred globally over the years 1981 to 2006. The focus is on the upper quantiles, as these are the storms that may cause major damage.
In Table~\ref{tab3}, we report the KB estimator (also used by \cite{Nature_2008}) along with the WLS estimator for the cyclone data at the 0.85, 0.9, 0.95, 0.975 and 0.99 quantiles. We also show the large-sample standard errors of the two estimators of the intercept and slope parameters. We observe that the WLS estimator has smaller standard errors in all cases.
\begin{figure}[]
\centering
\includegraphics[height=3.9 in]{fig11.eps}
\caption[]{\it Scatter plot of the lifetime maximum wind speeds over the years 1981-2006 along with the regression fit using the WLS estimator at 0.85, 0.90, 0.95, 0.975 and 0.99 quantiles.}\label{fig1}
\end{figure}
Figure~\ref{fig1} shows the observed wind speeds in successive years and the regression lines fitted by the WLS method for the 0.85, 0.9, 0.95, 0.975 and 0.99 quantiles. The fitted regression lines at the higher quantiles generally have positive slopes, which points towards extreme cyclones becoming progressively fiercer over the years.
\begin{table}
\vspace{0.4cm}
\caption{For $\tau\in (0,1)$, consider the quantile regression model $q_y(\tau)=\beta_0+\beta_1 x$, where $\beta_0$ and $\beta_1$ are the regression parameters.}\label{table1}
\centering
\begin{tabular}{ccccc}\hline
$\tau$ & $\beta_0$ & $\beta_1$ & p-value ($\beta_0)$ & p-value ($\beta_1$) \\\hline
0.5& 0.017 & 0.997 & 0 & 0\\
0.75 & -0.03 &1.006 & 0& 0\\
0.9&-0.109 &1.102 & 0.02 & 0\\
0.95 &0.47 &1.147 & 0&0\\
0.975 & 1.876 &1.170 & 0 &0\\\hline
\end{tabular}
\end{table}
\section{Concluding remarks}
Thus, the limited simulations and the real data analysis conducted here generally support the wisdom of using the WLS estimator as an alternative to the KB estimator in the case of replicated data, particularly for the middle quantiles.
The key to the better performance of the WLS estimator is its utilization of replications through weights. A weighted version of the KB estimator can also accomplish this. \cite{Knight} have shown in an unpublished work that a weighted quantile regression estimator with the weights $\sigma_i(\tau)$ defined in \eqref{weigts} is first-order equivalent to the WLS estimator with those weights, and is neither uniformly better nor uniformly worse than it at second order. Our simulations (not reported here) confirmed this finding.
\bibliographystyle{apalike}
| {
"timestamp": "2019-01-17T02:16:59",
"yymm": "1901",
"arxiv_id": "1901.05369",
"language": "en",
"url": "https://arxiv.org/abs/1901.05369",
"abstract": "This paper deals with improvement of linear quantile regression, when there are a few distinct values of the covariates but many replicates. On can improve asymptotic efficiency of the estimated regression coefficients by using suitable weights in quantile regression, or simply by using weighted least squares regression on the conditional sample quantiles. The asymptotic variances of the unweighted and weighted estimators coincide only in some restrictive special cases, e.g., when the density of the conditional response has identical values at the quantile of interest over the support of the covariate. The dominance of the weighted estimators is demonstrated in a simulation study, and through the analysis of a data set on tropical cyclones.",
"subjects": "Applications (stat.AP)",
"title": "Improving linear quantile regression for replicated data"
} |
https://arxiv.org/abs/1806.07518 | Eakin-Sathaye type theorems for joint reductions and good filtrations of ideals | Analogues of Eakin-Sathaye theorem for reductions of ideals are proved for ${\mathbb N}^s$-graded good filtrations. These analogues yield bounds on joint reduction vectors for a family of ideals and reduction numbers for $\mathbb N$-graded filtrations. Several examples related to lex-segment ideals, contracted ideals in $2$-dimensional regular local rings and the filtration of integral and tight closures of powers of ideals in hypersurface rings are constructed to show effectiveness of these bounds. |
\section{Introduction}
The objective of this paper is to prove Eakin-Sathaye type theorems \cite{ES1976} for joint reductions and good filtrations of ideals. Recall that an ideal $J$ contained in an ideal $I$ in a commutative ring $R$ is called a reduction of $I$ if there is a non-negative integer $n$ such that $JI^n=I^{n+1}.$ The concept of reduction of an ideal was introduced by Northcott and Rees \cite{NR1954}. It has become an important tool in many investigations in commutative algebra and algebraic geometry, such as Hilbert-Samuel functions \cite{rees1961}, blow-up algebras \cite{GS1979}, singularities of hypersurfaces \cite{teissier1973}, the number of defining equations of algebraic varieties \cite{lyubeznik1986} and many others.
Research in this paper is motivated by the following result of Paul Eakin and Avinash Sathaye \cite{ES1976}. Let $\mu(I)$ denote the minimum number of generators of an ideal $I$ in a local ring.
\begin{theorem}
Let $I$ be an ideal of a local ring $R$ with infinite residue field. If $\mu(I^n)< \binom{n+r}{r}$ for some positive integers $n$ and $r$, then there is a reduction $J$ of $I$ generated by $r$ elements such that $JI^{n-1}=I^n.$
\end{theorem}
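For instance, in the standard example $R=k[[x,y]]$ with maximal ideal $\mathfrak{m} =(x,y)$ (an illustration added here, not taken from \cite{ES1976}), we have
$$\mu(\mathfrak{m} ^n)=n+1=\binom{n+1}{1} \quad \text{for all } n\geq 1,$$
so the hypothesis fails for $r=1$ and every $n$, consistent with the fact that no principal ideal is a reduction of $\mathfrak{m} .$ On the other hand, for $r=2$ the hypothesis holds for every $n\geq 1$ since $n+1<\binom{n+2}{2}$, and indeed $\mathfrak{m} $ is a $2$-generated reduction of itself.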
The case of $n=2, r=1$ was proved by J. D. Sally \cite{sally1975}. The EST (Eakin-Sathaye Theorem) has been revisited by G. Caviglia \cite{caviglia}, N. V. Trung \cite{trung} and Liam O'Carroll \cite{carroll}. Caviglia used Green's hyperplane section theorem to give a new proof of the EST. The EST was generalised by Liam O'Carroll for complete reductions
\cite{carroll}. The versions of the EST proved in this paper follow the approach used by O'Carroll. In order to state his result we recall necessary definitions and results about complete reductions and joint reductions of a family of ideals introduced by Rees in \cite{rees1984}. Recall that the {\em analytic spread } of an ideal $I$ in a local ring $(R,\mathfrak{m} )$
is the Krull dimension of the fiber cone of $I,$ namely, $F(I)=\oplus_{n=0}^\infty I^n/\mathfrak{m} I^n .$ The analytic spread of $I$ is denoted by $\ell(I).$ Let $I_1, I_2,\dots, I_g$ be ideals of $R$ with $s=\ell(I_1I_2\cdots I_g).$ Then there is a $g\times s$ matrix $A=(a_{ij})$ with entries $a_{ij}\in I_i$ for $i=1,2,\dots, g$ and $j=1,2,\dots, s$ such that the ideal
$(y_1,y_2,\dots, y_s)$ is a reduction of the product $I_1I_2\cdots I_g.$ Here $y_j=\prod_{i=1}^ga_{ij}$ for $j=1,2,\dots,s$ and the set of elements $a_{ij}$, $i=1,\ldots,g,$ $j=1,\ldots,s$, is called a complete reduction of the set of ideals $I_1,\ldots,I_g.$
Let $\dim R=d$ and $I_1, I_2,\dots, I_d$ be $\mathfrak{m} $-primary ideals of $R.$ Let $e_1,e_2, \dots, e_d$ be the standard basis of the $d$-dimensional $\mathbb{Q} $-vector space $\mathbb{Q} ^d.$ If $\underline{n} =(n_1,\dots, n_d) \in \mathbb{N} ^d$ then we write $\underline{I}^{\underline{n} }=I_1^{n_1}I_2^{n_2}\cdots I_d^{n_d}.$ A set of elements $(a_1, a_2,\dots, a_d)$ where $a_i\in I_i$ for $i=1,2,\dots, d$ is called a {\em joint reduction} of the set of ideals $(I_1, I_2,\dots, I_d)$ if there exists $\underline{r}=(r_1, r_2,\dots, r_d)\in \mathbb{N} ^d$ such that for all $n_i\geq r_i$ for $i=1,2,\dots, d,$
$$\sum_{i=1}^d a_i \underline{I}^{\underline{n} -e_i}=\underline{I}^{\underline{n} }.$$
The vector $(r_1, r_2,\dots, r_d)$ is called a {\em joint reduction vector} of $(I_1, I_2,\dots, I_d)$ with respect to the joint reduction $(a_1, a_2, \dots, a_d).$ Rees proved that if the set of elements $a_{ij}$, $i,j=1,\ldots,d$, is a complete reduction of $I_1,\ldots,I_d$ and if $i\rightarrow j(i)$ is a permutation of $1,\ldots,d$, then the set of elements $a_{ij(i)}$, $i=1,\ldots,d$, is a joint reduction of $I_1,\ldots,I_d.$ O'Carroll gave the following result for complete reductions in \cite{carroll}.
\begin{theorem}[\bf L. O'Carroll]
Let $(R,\mathfrak{m} )$ be a Noetherian local ring with infinite residue field $k:=R/\mathfrak{m} .$ Let $I_1, I_2,\dots, I_s$ be ideals in $R$ and $I=I_1I_2\cdots I_s.$
Suppose that for some $n\geq 1$ and $r\geq 0$, $\mu(I^n) < \binom{n+r}{r}.$ Then there exist ``general" elements $y_1, y_2,\dots, y_r$ with $y_j=x_{1j}\cdots x_{sj}, j=1,2,\dots, r,$ where $x_{ij}\in I_i$ for $i=1,2,\dots, s,$ such that
$$(y_1, y_2, \dots, y_r)I^{n-1}=I^n.$$
\end{theorem}
We now describe the main results proved in this paper. An $\mathbb{N} ^s$-graded filtration of ideals $\{I_{\underline{n} }\}_{\underline{n} \in \mathbb{N} ^s}$ in $R$ is a collection of ideals which satisfies the conditions
(1) $I_{\underline{n} }\subseteq I_{\underline{m} }$ for all ${\underline{n} } \geq \underline{m} $ and (2) $I_{\underline{n} }I_{\underline{m} } \subseteq I_{\underline{n} +\underline{m} }$
for all $\underline{n} , \underline{m} \in \mathbb{N} ^s.$ We say that the filtration $\mathcal F =\{I_{\underline{n} }\}_{\underline{n} \in \mathbb{N} ^s}$ is {\em good} if the Rees algebra $\mathcal R(\mathcal F)=\oplus_{\underline{n} \in \mathbb{N} ^s}I_{\underline{n} } \underline{t}^{\underline{n} }$ is a finite module over the Rees algebra $\mathcal R(I_{e_1},\dots, I_{e_s})=\oplus_{\underline{n} \in \mathbb{N} ^s} I_{e_1}^{n_1}\cdots I_{e_s}^{n_s}\underline{t}^{\underline{n} }.$ Here $\underline{t}^{\underline{n} }=t_1^{n_1}\dots t_s^{n_s}$ where $t_1, \dots, t_s$ are indeterminates and $\underline{n} =(n_1, \dots, n_s)\in \mathbb{N} ^s$ and $\underline{m} =(m_1,\ldots,m_s)\geq\underline{n} =(n_1,\ldots,n_s)$ if $m_i\geq n_i$ for all $i= 1, \ldots, s.$ We shall prove the following result in Section 3.
\begin{theorem}\label{1.3}
Let $(R, \mathfrak{m} , k)$ be a Noetherian local ring with infinite residue field $k$ and let $\mathcal{F}=\{I_{\underline{n}}\}_{\underline{n} \in \mathbb{N} ^s}$ be an $\mathbb{N} ^s$-graded good filtration in $R.$
Suppose
\[\mu(I_{\underline{n}})< \binom{n_1+r_1}{r_1} \cdots \binom{n_s+r_s}{r_s}\]
for some integers $n_1+\cdots+n_s \geq 1$ and $r_1+\cdots+r_s \geq 0$. Let $\underline{a}$ be the maximum of the degrees of the generators of $F=F(\mathcal{F})$ as a module over $G=F(I_{e_1}, \ldots, I_{e_s}),$ where the maximum is taken component-wise. If $\underline{n} \geq \underline{a}+\underline{1}$, then for all $1 \leq i \leq s$ there exist ``general" elements $x_{i1}, \dots, x_{ir_i} \in I_{e_i}$ such that $I_{\underline{n}}= \sum_{i=1}^s (x_{i1}, \dots, x_{ir_i})I_{\underline{n}-e_i}.$
\end{theorem}
In particular, we get the following result for products of adic filtrations.
\begin{corollary}\label{cor}
Let $(R, \mathfrak{m} , k)$ be a Noetherian local ring with infinite residue field $k$ and $I_1,\ldots, I_s$ be ideals in $R$. Suppose
\[\mu(I_1^{n_1}\cdots I_s^{n_s})< \binom{n_1+r_1}{r_1}\cdots \binom{n_s+r_s}{r_s}\]
for some integers $n_1+\cdots+n_s \geq 1$ and $r_1+\cdots+r_s \geq 0$. Then for all $1 \leq i \leq s$ there exist ``general" elements $x_{i1}, \dots, x_{ir_i} \in I_i$ such that for all $\underline{m} \geq \underline{n}$,
\[I_1^{m_1}\cdots I_s^{m_s}= \sum_{i=1}^s (x_{i1}, \dots, x_{ir_i})I_1^{m_1}\cdots I_i^{m_i-1}\cdots I_s^{m_s}.\]
\end{corollary}
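As a toy illustration of the corollary (a standard example written out here, independent of the examples in Section 5): let $R=k[[x,y]]$, $s=2$ and $I_1=I_2=\mathfrak{m} =(x,y).$ Then
$$\mu(I_1^{n_1}I_2^{n_2})=\mu(\mathfrak{m} ^{n_1+n_2})=n_1+n_2+1<(n_1+1)(n_2+1)=\binom{n_1+1}{1}\binom{n_2+1}{1}$$
whenever $n_1,n_2\geq 1,$ so one may take $r_1=r_2=1.$ Indeed, the general elements can be taken to be $x\in I_1$ and $y\in I_2$:
\[x\, I_1^{m_1-1}I_2^{m_2}+y\, I_1^{m_1}I_2^{m_2-1}=(x,y)\mathfrak{m} ^{m_1+m_2-1}=\mathfrak{m} ^{m_1+m_2}=I_1^{m_1}I_2^{m_2}\]
for all $m_1,m_2\geq 1.$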
Example \ref{counter} illustrates how Corollary \ref{cor} improves O'Carroll's generalisation of the EST for joint reductions. The motivation comes from the following observation. Let $I_1,I_2$ be ideals in a $2$-dimensional ring. Let $x_{11}, x_{12} \in I_1$ and $x_{21},x_{22} \in I_2$ be such that, under the hypothesis of O'Carroll's result, we get $(x_{11}x_{21},x_{12}x_{22})I_1^{n-1}I_2^{n-1} = I_1^n I_2^n.$ This gives a joint reduction equation of the form $x_{11}I_1^{n-1}I_2^n+x_{22}I_1^nI_2^{n-1}=I_1^n I_2^n.$ In contrast, the joint reduction equation obtained from Corollary \ref{cor} is of the form $x_{11}I_1^{n_1-1}I_2^{n_2} + x_{22}I_1^{n_1} I_2^{n_2-1} = I_1^{n_1} I_2^{n_2},$ where $n_1$ need not equal $n_2.$
In Section 4, we consider an analogue of the EST to estimate the reduction number of a good $\mathbb{N} $-graded filtration. For this we need depth conditions on the associated graded ring and the fiber cone.
In addition, we need the notion of an equimultiple filtration:
\begin{definition}
Let $\mathcal{F}$ be an $\mathbb{N} $-graded good filtration. Define $l(\mathcal{F}) = \dim(F(\mathcal{F}))$ to be the analytic spread of the filtration $\mathcal{F}$. We say that the filtration $\mathcal{F}$ is \emph{equimultiple} if $l(\mathcal{F}) = \operatorname{ht} I_1$.
\end{definition}
\begin{theorem}
Let $(R, \mathfrak{m} )$ be a Cohen-Macaulay local ring with $R/\mathfrak{m} =k$ infinite. Let $\mathcal{F}=\{I_n\}_{n \in \mathbb{N} }$ be an equimultiple good filtration such that $\operatorname{grade}\operatorname{gr}_{\mathcal{F}}(R)_+ \geq l(\mathcal{F})=r$ and $F(\mathcal{F})$ is Cohen-Macaulay. Suppose $\mu(I_n)< \binom{n+r}{r}$ for some $n\ge 1$. Then there exist $r$ general elements $x_1, \ldots, x_r \in I_1$ such that $I_m = (x_1,\ldots,x_r)I_{m-1}$ for all $m \geq n$.
\end{theorem}
Finally, in Section 5, we present several examples which illustrate our results and explain the necessity of the depth assumption in the EST for reduction numbers of good filtrations.
{\bf Acknowledgements:} We thank the referee for a very careful reading of the manuscript and suggesting several improvements.
\section{Preliminaries}
In this section, we setup notation, recall definitions and results which are required in the subsequent sections.
\s {\bf Multi-graded filtrations of ideals}
Let $(R,\mathfrak{m} )$ be a $d$-dimensional Noetherian local ring and $I_1, \ldots, I_s$ be ideals in $R$. For $s\geq 1,$ we put $\underline{0}=(0,\ldots,0)\in{\mathbb{N} }^s$, $\underline{1}=(1,\ldots,1)\in{\mathbb{N} }^s$ and $e_i=(0,\ldots,1,\ldots,0)\in{\mathbb{N} }^s$ where $1$ occurs at the $i$-th position. Let $\underline{n} =(n_1,\ldots,n_s)\in{\mathbb{N} }^s,$ then we write $\underline{I}^{\underline{n} }=I_{1}^{n_1}\cdots I_{s}^{n_s}$ and we put $|\underline{n} |=n_1+\cdots+n_s.$ By the phrase ``for all large $\underline{n} $" we mean $n_i\gg 0$ for all $i= 1, \ldots, s$.
\begin{definition}
A set of ideals $\mathcal{F}=\{I_{\underline{n} }\}_{\underline{n} \in \mathbb{N} ^s}$ is called an $\mathbb{N} ^s$-{\it graded filtration} if for all $\underline{m} ,\underline{n} \in\mathbb{N} ^s, I_{\underline{n} }I_{\underline{m} }\subseteq I_{\underline{n} +\underline{m} }$ and if $\underline{m} \geq\underline{n} ,$ $I_{\underline{m} }\subseteq I_{\underline{n} }$. Moreover, $\mathcal{F}$ is called an $\mathbb{N} ^s$-graded {\it{$\underline{I}=(I_1, \ldots, I_s)$-filtration}} if $\underline{I}^{\underline{n} }\subseteq I_{\underline{n} }$ for all $\underline{n} \in\mathbb{N} ^s$.
\end{definition}
Let $t_1,\ldots,t_s$ be indeterminates. For $\underline{n} \in\mathbb{N} ^s,$ put $\underline t^{\underline{n} }=t_{1}^{n_1}\cdots t_{s}^{n_s}$ and denote the $\mathbb{N} ^s$-graded {\it{Rees ring of $\mathcal{F}$}} by $\mathcal{R}(\mathcal{F})=\bigoplus\limits_{\underline{n} \in \mathbb{N} ^s}
I_{\underline{n} }~{\underline{t}}^{\underline{n} }$.
For $\mathcal{F}=\{\underline{I} ^{\underline{n} }\}_{\underline{n} \in \mathbb{N} ^s}$, we set $\mathcal R(\underline{I} )=\mathcal R(\mathcal{F})$.
The fiber cone of the filtration $\mathcal{F}$ is denoted by $F(\mathcal{F})=\mathcal{R}(\mathcal{F}) \otimes_R R/\mathfrak{m} = \bigoplus\limits_{\underline{n} \in \mathbb{N} ^s}
I_{\underline{n} }/\mathfrak{m} I_{\underline{n} }$. Define $l(\mathcal{F}) = \dim F(\mathcal{F})$ to be the {\it analytic spread} of the filtration $\mathcal{F}$. We say
$F=\bigoplus_{\underline{n} \in \mathbb{N} ^s} F_{\underline{n} }$ is {\it standard $\mathbb{N} ^s$- graded algebra over $k$} if $F=k[F_{e_1}, \ldots, F_{e_s}]$.
\begin{definition}
An $\mathbb{N} ^s$-graded filtration $\mathcal{F}=\lbrace I_{\underline{n} }\rbrace_{\underline{n} \in \mathbb{N} ^s}$ of ideals in $R$ is called an $\underline{I}=(I_1, \ldots, I_s)$-{\emph{good filtration}} if $I_i \subseteq I_{e_i}$ for all $i=1,\ldots,s$ and $\mathcal{R}(\mathcal{F})$ is a finite $\mathcal{R}(\underline{I} )$-module.
\end{definition}
\noindent
If $R$ is an analytically unramified local ring and $I$ is an ideal of $R$, then Rees \cite{reesAU} proved that the integral closure filtration $\mathcal{F}=\{\overline{I^n}\}$ is an $I$-good filtration. Using \cite{GMV}, under the same conditions, the tight closure filtration $\mathcal{T} = \{(I^n)^*\}$ is an $I$-good filtration.
A \emph{reduction} of a good filtration $\mathcal{F} = \{I_n\}$ is an ideal $J \subseteq I_1$ such that $JI_n = I_{n+1}$ for all large $n.$ Equivalently, $J \subseteq I_1$ is a reduction of $\mathcal{F}$ if and only if $\mathcal{R}(\mathcal{F})$ is a finite $\mathcal{R}(J)$-module. A minimal reduction of $\mathcal{F}$ is a reduction of $\mathcal{F}$ minimal with respect to containment. Minimal reductions of a good filtration always exist and are generated by $l(I_1)$ elements if the residue field is infinite.
\begin{remarks}
(1) Let $\mathcal{G}=\{J_n\}_{n \geq 0}$ be a $J$-good filtration. Then $J_{n+1}=JJ_n$ for all large $n.$ Since $J \subseteq J_1,$ it follows that $J_{n+1}=JJ_n \subseteq J_1 J_n \subseteq J_{n+1}$ and hence $J_{n+1}=J_1J_n$ for all large $n$. This shows that $\mathcal{G}$ is also a $J_1$-good filtration.
Some basic facts on good filtration are given in the paper \cite{hoaZarzuela}.
(2) Let $\mathcal{F}=\{I_{\underline{n}}\}_{\underline{n}\in \mathbb{N} ^s}$ be an $\mathbb{N} ^s$-graded good filtration. Then $\mathcal{R}(\mathcal{F})$ is a finite $\mathcal{R}(\underline{I} )$-module by definition, where $\underline{I} =(I_{e_1},\ldots,I_{e_s}).$ Set $G=F(I_{e_1}, \ldots, I_{e_s})= \bigoplus_{\underline{m} \in \mathbb{N} ^s} I_{e_1}^{m_1}\cdots I_{e_s}^{m_s}/\mathfrak{m} I_{e_1}^{m_1}\cdots I_{e_s}^{m_s}$, the fiber cone of the ideals $I_{e_1}, \ldots, I_{e_s}$ and $F=F(\mathcal{F})= \bigoplus_{\underline{m} \in \mathbb{N} ^s} I_{\underline{m}}/\mathfrak{m} I_{\underline{m}}$ the fiber cone of the filtration $\mathcal{F}$. Note that $G$ is a standard multi-graded $k$-algebra and $F$ is a finitely generated $G$-module. Set $G^{(i)}= \bigoplus_{m \geq 0} G_{me_i}$ for all $1 \leq i \leq s$. Clearly $G^{(i)}$ is a standard graded $k$-algebra with $G^{(i)}_1=G_{e_i}$.
(3) Let $\mathcal{F}=\{I_{\underline{n} }\}_{\underline{n} \in \mathbb{N} ^s}$ be a good filtration. Then $\mathcal{F}_i=\{I_{ne_i}\}_{n \ge 0}$ is a good filtration for all $i=1,\ldots,s.$ For, it is sufficient to show that $\mathcal{R}(\mathcal{F}_i)$ is a finite $\mathcal{R}(I_{e_i})$-module. Consider the ideal
\[\mathcal{I}_{(i)} = \bigoplus_{\underline{n} \in \mathbb{N} ^s \setminus \mathbb{N} e_i} \ I_{\underline{n} }~\underline{t}^{\underline{n} } \ \subseteq \ \mathcal{R}(\mathcal{F}).\]
Observe that
\[\mathcal{I}_{(i)} \cap \mathcal{R}(\underline{I} ) = \bigoplus_{\underline{n} \in \mathbb{N} ^s \setminus \mathbb{N} e_i} \ I_{e_1}^{n_1}\cdots I_{e_s}^{n_s}~\underline{t}^{\underline{n} } .\]
As $\mathcal{R}(\mathcal{F})$ is a finite $\mathcal{R}(\underline{I} )$-module, it follows that $\mathcal{R}(\mathcal{F})/\mathcal{I}_{(i)}$ is a finite $\mathcal{R}(\underline{I} )/(\mathcal{I}_{(i)} \cap \mathcal{R}(\underline{I} ))$-module. Since $\mathcal{R}(\mathcal{F})/\mathcal{I}_{(i)} \simeq \mathcal{R}(\mathcal{F}_i)$ and $\mathcal{R}(\underline{I} )/(\mathcal{I}_{(i)} \cap \mathcal{R}(\underline{I} )) \simeq \mathcal{R}(I_{e_i})$, we are done.
\end{remarks}
\s {\bf Zariski topology}
Let $V$ be a finite dimensional $k$-vector space and $\dim_k V=N$. Then we can identify any vector $v \in V$ with an element of $k^N$. By a (non-empty) Zariski-open set in $V^r$, for $r \in \mathbb{N} $, we mean a finite union of sets of the form $X_f:=\{{\bf a}\in k^{Nr}\mid f({\bf a})\neq 0 \}$ for a given non-zero polynomial $f \in k[X_1, \ldots, X_{Nr}]$, see \cite[Exercise 17]{atiyahMacd}. Since $X_f \cap X_g = X_{fg}$ for non-zero polynomials $f$ and $g$ in $k[X_1, \ldots, X_{Nr}]$, the intersection of any two non-empty open sets in $V^r$ is non-empty; in fact, the intersection of finitely many non-empty open sets in $V^r$ is non-empty.
\begin{remark}\label{rmk2}
Let $U$ be a subspace of $V$ with $\dim_k U=m$ and $\dim_k V=n$. Note that $\dim_k V/U=n-m=t$ (say) and $m, t \leq n$. We claim that the quotient map $\pi: V \to V/U$ is continuous in Zariski topology. Since $X_{f}$ is a basic open set of $V/U$ for some $f \in k[X_1, \ldots, X_t]$, it is enough to show that $\pi^{-1}(X_f)$ is a Zariski-open subset of $V$. Notice $\pi^{-1}(X_f) \simeq X_f \times k^m$ is a basic open subset of $V$ defined by $f \in k[X_1, \ldots, X_t]\subseteq k[X_1, \ldots, X_n]$.
\end{remark}
\s {\bf General elements}
Let $F = \oplus_{n \geq 0} F_n$ be a standard graded algebra over a field $k$ and suppose that there exists a $k$-vector space epimorphism $\phi: V \to F_1$. Let $\mathcal{P}$ be a property of elements of $F_1$ and let $r \geq 0$. We say that {\it $\mathcal{P}$ holds for $r$ general elements} $y_1,\ldots, y_r$ of $F_1$ if there exists a non-empty Zariski-open subset $U$ of $V^r$ such that $\mathcal{P}$ holds for every sequence of elements $y_j := \phi(v_j )$ where $(v_1,\ldots, v_r )\in U$.
Let $(R,\mathfrak{m} ,k)$ be a Noetherian local ring and $I$ be an ideal in $R$. Set $F(I)=\bigoplus_{n \geq 0}I^n/\mathfrak{m} I^n$, the fiber cone of $I$. We say $x$ is a ``general" element in $I$ if $x$ is a general element in $F(I)_1$, see \cite[Remark 3.2]{carroll}.
\section{Eakin-Sathaye theorem for multi-graded filtration}
In this section, we generalize the Eakin-Sathaye theorem for multi-graded good filtrations. We prove a lemma first. Set $F_{\underline{n}}=0$ when $\underline{n} \notin \mathbb{N}^s.$
\begin{lemma}\label{lem1}
Let $G=\bigoplus_{\underline{m} \in \mathbb{N} ^s} G_{\underline{m}}$ be a standard $\mathbb{N} ^s$-graded algebra over a field $k$. Let $F= \bigoplus_{\underline{n} \in \mathbb{N} ^s} F_{\underline{n}}$ be an $\mathbb{N} ^s$-graded $k$-algebra and a finitely generated $G$-module such that $G_{\underline{0}}=F_{\underline{0}}=k$ and $G_{e_i}=F_{e_i}$ for all $i=1, \ldots, s$.
Let $y_{i1}, \ldots, y_{ip_i}\in G_{e_i}$ be a basis of $G_{e_i}$. Let $y=y_{ij}$ for some $1\leq i \leq s$ and $1 \leq j \leq p_i$. Set $\overline{G}=G/yG =\oplus_{\underline{n} \in \mathbb{N} ^s} \overline{G}_{\underline{n} }$ and $\overline{F}=F/yF=\oplus_{\underline{n} \in \mathbb{N} ^s} \overline{F}_{\underline{n} }$.
Suppose that for some $r_i \geq 2$ there exist general elements $\overline{a_1}, \ldots, \overline{a_{r_i-1}} \in \overline{G}_{e_i}$ and $\overline{b_{t1}}, \ldots, \overline{b_{tr_t}} \in \overline{G}_{e_t}$ for all $t \neq i$ such that
\[ \overline{F}_{\underline{n} }= (\overline{a_1}, \ldots, \overline{a_{r_i-1}}) \overline{F}_{\underline{n} -e_i}+ \sum_{\substack{t=1 \\ t\neq i}}^{s} (\overline{b_{t1}}, \ldots, \overline{b_{tr_t}})\overline{F}_{\underline{n} -e_t}.\]
Then $y, a_1, \ldots, a_{r_i-1}$ are general elements in $G_{e_i}$ and $b_{t1}, \ldots, b_{tr_t}$ are general elements in $G_{e_t}$ for all $t \neq i$ such that \[F_{\underline{n} }= (y, a_1, \ldots, a_{r_i-1}) F_{\underline{n} -e_i}+ \sum_{\substack{t=1 \\ t\neq i}}^{s} (b_{t1}, \ldots, b_{tr_t})F_{\underline{n} -e_t}.\]
\end{lemma}
\begin{proof}
Note that $\overline{G}= \bigoplus_{\underline{n} \in \mathbb{N} ^s}G_{\underline{n} }/yG_{\underline{n} -e_i}$ and $\overline{F}= \bigoplus_{\underline{n} \in \mathbb{N} ^s}F_{\underline{n} }/yF_{\underline{n} -e_i}$. Hence $\overline{G}_{e_i}=G_{e_i}/ky$ and $\overline{G}_{e_t}=G_{e_t}$ for all $t \neq i$.
Let $V= V_1 \times \cdots \times V_s$ be a Cartesian product of finite-dimensional $k$-vector spaces $V_i$'s over an infinite field $k$ such that there is a $k$-epimorphism $\psi: V \to G_{e_1} \oplus \cdots \oplus G_{e_s}$ induced by the $k$-epimorphisms $\psi_i: V_i \to G_{e_i}$ for $1 \leq i \leq s$. For convenience we can assume that $i=1=j$, i.e., $y=y_{11}$. Clearly $\psi_i$ induces $\overline{\psi_i}: V_i \to \overline{G}_{e_i}$ for all $i$. Note that $\overline{\psi_i}=\psi_i$ for all $i \neq 1$.
By the definition of general elements there exists a non-empty Zariski-open subset $U$ of $V_1^{r_1-1}$ such that for all $(a_1, \ldots, a_{r_1-1})\in \psi_1(U)$,
\begin{equation}\label{eq0}
\overline{F}_{\underline{n} }= (\overline{a_1}, \ldots, \overline{a_{r_1-1}}) \overline{F}_{\underline{n} -e_1}+ \sum_{t=2}^{s} (\overline{b_{t1}}, \ldots, \overline{b_{tr_t}})\overline{F}_{\underline{n} -e_t}
\end{equation}
holds.
Let $z \in V_1$ be such that $\psi_1(z)=y$ (as $y \neq 0$, so $z \neq 0$). Set $U'=kz \backslash \{0\}$. Then $U'$ is a non-empty Zariski-open subset of $V_1$. So by Remark \ref{rmk2}, $U'\times V_1^{r_1-1}$ and $V_1 \times U$ are non-empty Zariski-open subsets of $V_1 \times V_1^{r_1-1} \simeq V_1^{r_1}$ and hence $(U' \times V_1^{r_1-1}) \cap (V_1 \times U)= U' \times U$ is a non-empty Zariski-open subset of $V_1^{r_1}$. Note that $G/cyG=\overline{G}$ and $F/cyF=\overline{F}$ for any $0 \neq c \in k.$ Thus if we replace $y$ by $cy$ for any $0 \neq c \in k$, then \eqref{eq0} still holds. This implies that for any $(cy,a_1,\ldots,a_{r_1-1}) \in \psi_1(U' \times U)$,
\begin{equation}\label{eq00}
F_{\underline{n} }= (cy, a_1, \ldots, a_{r_1-1}) F_{\underline{n} -e_1} + \sum_{t=2}^{s} (b_{t1}, \ldots, b_{tr_t})F_{\underline{n} -e_t},
\end{equation}
holds. Hence $cy, a_1, \ldots, a_{r_1-1} \in G_{e_1}$ are general elements. For $ 2 \le t \le s,$ again by the definition of general elements there exists a non-empty Zariski-open subset $U_t$ of $V_t^{r_t}$ such that for all $(\overline{b_{t1}}, \ldots, \overline{b_{tr_t}}) \in \overline{\psi_t}(U_t)$, \eqref{eq0} holds, which implies that for all $(b_{t1}, \ldots, b_{tr_t}) \in \psi_t(U_t)$, \eqref{eq00} holds. Thus $b_{t1}, \ldots, b_{tr_t}$ are $r_t$ general elements in $G_{e_t}$.
\end{proof}
\begin{theorem}\label{esthm-mul-fil}
Let $(R, \mathfrak{m} , k)$ be a Noetherian local ring with infinite residue field $k$ and let $\mathcal{F}=\{I_{\underline{n}}\}_{\underline{n} \in \mathbb{N} ^s}$ be an $\mathbb{N} ^s$-graded good filtration in $R.$
Suppose
\[\mu(I_{\underline{n}})< \binom{n_1+r_1}{r_1} \cdots \binom{n_s+r_s}{r_s}\]
for some integers $n_1+\cdots+n_s \geq 1$ and $r_1+\cdots+r_s \geq 0$. Let $\underline{a}$ be the maximum of the degrees of the generators of $F=F(\mathcal{F})$ as a module over $G=F(I_{e_1}, \ldots, I_{e_s}),$ where the maximum is taken component-wise. If $\underline{n} \geq \underline{a}+\underline{1}$, then for all $1 \leq i \leq s$ there exist ``general" elements $x_{i1}, \dots, x_{ir_i} \in I_{e_i}$ such that \[I_{\underline{n}}= \sum_{i=1}^s (x_{i1}, \dots, x_{ir_i})I_{\underline{n}-e_i}.\]
\end{theorem}
Set $G=F(I_{e_1}, \ldots, I_{e_s})= \bigoplus_{\underline{m} \in \mathbb{N} ^s} I_{e_1}^{m_1}\cdots I_{e_s}^{m_s}/\mathfrak{m} I_{e_1}^{m_1}\cdots I_{e_s}^{m_s}=\bigoplus_{\underline{m} \in \mathbb{N} ^s}\underline{I} ^{\underline{m} }/\mathfrak{m} \underline{I} ^{\underline{m} }$, the fiber cone of the ideals $I_{e_1}, \ldots, I_{e_s}$ where $\underline{I} =(I_{e_1}, \ldots, I_{e_s})$ and $F=F(\mathcal{F})= \bigoplus_{\underline{m} \in \mathbb{N} ^s} I_{\underline{m}}/\mathfrak{m} I_{\underline{m}}$, the fiber cone of the filtration $\mathcal{F}$. Then $G$ is a standard $\mathbb{N} ^s$-graded $k$-algebra and $F$ is a finitely generated $G$-module. Again $G_{e_i}=F_{e_i}$ for all $i$ and $G_{\underline{0}}=F_{\underline{0}}=k$.
Now \[\mu(I_{\underline{n}})< \binom{n_1+r_1}{r_1}\cdots \binom{n_s+r_s}{r_s}\] implies that $\dim_k F_{\underline{n}}<\binom{n_1+r_1}{r_1}\cdots \binom{n_s+r_s}{r_s}$. Set $V_i=I_{e_i}/\mathfrak{m} I_{e_i}$ for all $1 \leq i \leq s$ and $V= V_1 \times \cdots \times V_s$. Since $R$ is Noetherian, $V_1, \ldots, V_s$ are finite dimensional vector spaces. Note that $V=V_1 \times \cdots \times V_s= I_{e_1}/\mathfrak{m} I_{e_1} \times \cdots \times I_{e_s}/\mathfrak{m} I_{e_s}= I_{e_1}/\mathfrak{m} I_{e_1} \oplus \cdots \oplus I_{e_s}/\mathfrak{m} I_{e_s}= G_{e_1} \oplus \cdots \oplus G_{e_s}$. Using graded Nakayama Lemma, to prove the foregoing theorem it is enough to prove the following result.
\begin{proposition}\label{mul-fil-pro}
Let $V= V_1 \times \cdots \times V_s$ be a Cartesian product of finite-dimensional $k$-vector spaces $V_1, \ldots, V_s$ over an infinite field $k$. Let $G=\bigoplus_{\underline{m} \in \mathbb{N} ^s} G_{\underline{m}}$ be a standard $\mathbb{N} ^s$-graded $k$-algebra and let $F= \bigoplus_{\underline{n} \in \mathbb{N} ^s} F_{\underline{n}}$ be an $\mathbb{N} ^s$-graded $k$-algebra which is a finitely generated $G$-module such that $G_{\underline{0}}=F_{\underline{0}}=k$ and $G_{e_i}=F_{e_i}$
for all $i$. Let $\underline{a}$ be the maximum of the degrees of the generators of $F$ as a $G$-module, where the maximum is taken component-wise.
Suppose there is a $k$-epimorphism $\psi: V \to G_{e_1} \oplus \cdots \oplus G_{e_s}$ induced by the $k$-epimorphisms $\psi_i: V_i \to G_{e_i}$ for $1 \leq i \leq s$. Further let for some integers $n_1+\cdots+n_s \geq 1$ and $r_1+\cdots+r_s \geq 0$, $\dim_k F_{\underline{n}}<\binom{n_1+r_1}{r_1}\cdots \binom{n_s+r_s}{r_s}$. If $\underline{n} \geq \underline{a}+\underline{1}$, then there exist ``general" elements $x_{i1}, \dots, x_{ir_i} \in G_{e_i}$ such that
\[F_{\underline{n}}= \sum_{i=1}^s (x_{i1}, \dots, x_{ir_i})F_{\underline{n}-e_i}.\]
\end{proposition}
\begin{proof}
If $r_1+\cdots+r_s=0$ then $r_i=0$ for all $i$. So we get $\dim_k F_{\underline{n}}<1$, i.e., $F_{\underline{n}}=0$. Since each $r_i=0$, $(x_{i1},\ldots,x_{ir_i})=0$ by convention. So the result follows. Again if $n_1+\cdots+n_s=1$ then $n_i=1$ for some $i$ and $n_j=0$ for all $j \neq i$. Then $\dim_k F_{e_i}< \binom{1+r_i}{r_i}$, i.e., $\dim_k G_{e_i}< \binom{1+r_i}{r_i}$ and hence by \cite[Theorem 2.1]{caviglia} the result follows (as $G_{\underline{0}}=F_{\underline{0}}$).
So we may assume that $r_1+\cdots+r_s \geq 1$ and $n_1+\cdots+n_s \geq 2$.
Now suppose that the result is false. Choose a counterexample $F= \bigoplus_{\underline{n} \in \mathbb{N} ^s} F_{\underline{n}}$. Pick $r_1,\ldots, r_s$ such that $r_1+\cdots+r_s$ is minimal, and then $n_1,\ldots,n_s$ such that $n_1+\cdots+n_s$ is minimal for the chosen $r_1,\ldots, r_s$. Let $y_{i1}, \ldots, y_{ip_i}$ be a basis of $G_{e_i}$ for all $i$. Then clearly $\{y_{i1}, \ldots, y_{ip_i}\}_{i=1}^s$ forms a basis for $G_{e_1} \oplus \cdots \oplus G_{e_s}$. Without loss of generality we may assume that $r_1 \geq 1$ and $n_1 \geq 1$.
\noindent
{\bf Case-1:}
There exists $j \in \{1, \ldots, p_1\}$ such that \[\dim_k ~y_{1j} F_{\underline{n}-e_1} \geq \binom{n_1+r_1-1}{r_1}\binom{n_2+r_2}{r_2}\cdots \binom{n_s+r_s}{r_s}. \]
Note that $y_{1j} F_{\underline{n}-e_1} \subseteq F_{\underline{n}}$. Without loss of generality we may assume that $j=1$. As $y_{11}$ is a homogeneous element, we can pass to the quotient $\overline{F}= F/y_{11}F$. So for all $\underline{n} $, we get
\begin{align*}
\dim_k \overline{F}_{\underline{n} }
=& \dim_k F_{\underline{n} }- \dim_k y_{11}F_{\underline{n} -e_1}\\
<& \binom{n_1+r_1}{r_1}\binom{n_2+r_2}{r_2}\cdots \binom{n_s+r_s}{r_s}-\binom{n_1+r_1-1}{r_1}\binom{n_2+r_2}{r_2}\cdots \binom{n_s+r_s}{r_s}\\
=&\binom{n_1+r_1-1}{r_1-1}\binom{n_2+r_2}{r_2}\cdots \binom{n_s+r_s}{r_s}.
\end{align*}
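The last equality above is Pascal's rule, $\binom{n_1+r_1}{r_1}-\binom{n_1+r_1-1}{r_1}=\binom{n_1+r_1-1}{r_1-1}$, applied in the first factor; a quick numerical check of this identity (purely illustrative):

```python
from math import comb

# Pascal's rule, the identity behind the dimension count above:
# C(n + r, r) - C(n + r - 1, r) = C(n + r - 1, r - 1)
for n in range(1, 8):
    for r in range(1, 8):
        assert comb(n + r, r) - comb(n + r - 1, r) == comb(n + r - 1, r - 1)
print("Pascal's rule verified for 1 <= n, r <= 7")
```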
Set $\overline{G}=G/y_{11}G$. Clearly $\overline{G}$ is standard graded and $\overline{F}$ is a finitely generated $\overline{G}$-module. The natural map $\nu: G \to \overline{G}$ induces the $k$-vector space epimorphism $\overline{\psi}:= \nu \circ \psi: V \to \overline{G}_{e_1} \oplus \cdots \oplus \overline{G}_{e_s}$. Note that $\overline{G}_{e_1}= G_{e_1}/y_{11}G_{\underline{0}}$ and $\overline{G}_{e_i}= G_{e_i}$ for all $i \neq 1$. Set $\nu|_{G_{\underline{n} }}=[\nu]_{\underline{n} }: G_{\underline{n} } \to \overline{G}_{\underline{n} }$. Clearly $\overline{\psi}$ is induced by $\overline{\psi}_i:= [\nu]_{e_i} \circ \psi_i: V_i \to G_{e_i} \to \overline{G}_{e_i}$ for all $i$. Now by the minimality of $r_1+\cdots+r_s$, there exist general elements $a_1, \ldots, a_{r_1-1} \in G_{e_1}$ and $b_{i1}, \ldots, b_{ir_i} \in G_{e_i}$ for all $i \neq 1$ such that
\begin{equation}\label{eq3}
\overline{F}_{\underline{n} }= (\overline{a_1}, \ldots, \overline{a_{r_1-1}}) \overline{F}_{\underline{n} -e_1}+ \sum_{i=2}^{s} (\overline{b_{i1}}, \ldots, \overline{b_{ir_i}})\overline{F}_{\underline{n} -e_i}.
\end{equation}
If $r_1=1$, then \eqref{eq3} implies that $F_{\underline{n} }= (y_{11}) F_{\underline{n} -e_1}+ \sum_{i=2}^{s} (b_{i1}, \ldots, b_{ir_i})F_{\underline{n} -e_i}.$ Note that this equality is true even if we replace $y_{11}$ by $c y_{11}$ for any $c \in k \backslash \{0\},$ which is a Zariski open subset of $G_{e_1}.$ Thus $y_{11} \in G_{e_1}$ is a general element.
If $r_1\ge 2$, then by Lemma \ref{lem1} it follows that $y_{11}, a_1, \ldots, a_{r_1-1}\in G_{e_1}$ and $b_{i1}, \ldots, b_{ir_i} \in G_{e_i}$ for $i \neq 1$ are general elements satisfying \[F_{\underline{n} }= (y_{11}, a_1, \ldots, a_{r_1-1}) F_{\underline{n} -e_1}+ \sum_{i=2}^{s} (b_{i1}, \ldots, b_{ir_i})F_{\underline{n} -e_i}.\] Hence in both cases, we arrive at a contradiction to our assumption.
\noindent
{\bf Case-2:}
For each $i$ and each $j_i \in \{1, \ldots, p_i\}$,
\begin{equation}\label{eq1}
\dim_k ~y_{ij_i} F_{\underline{n} -e_i}< \binom{n_1+r_1}{r_1}\cdots\binom{n_i+r_i-1}{r_i}\cdots \binom{n_s+r_s}{r_s}.
\end{equation}
Set $K^{(ij_i)}= \operatorname{ann}_F y_{ij_i}$, $L^{(ij_i)}= \operatorname{ann}_G y_{ij_i}$ for $1 \leq i \leq s$ and $j_i \in \{1, \ldots, p_i\}$. Notice that all $K^{(ij_i)}$, $L^{(ij_i)}$ are homogeneous ideals and $L^{(ij_i)} F \subseteq K^{(ij_i)}$. Set $F^{(ij_i)}=F/K^{(ij_i)}$ and $G^{(ij_i)}=G/L^{(ij_i)}$. Clearly $F^{(ij_i)}$ is a finitely generated $G^{(ij_i)}$-module. Then for each $i, j_i$ we write \[K^{(ij_i)}= \bigoplus_{\underline{n} \in \mathbb{N} ^s} K^{(ij_i)}_{\underline{n} }, \ F^{(ij_i)}=\bigoplus_{\underline{n} \in \mathbb{N} ^s} F^{(ij_i)}_{\underline{n} }, \
L^{(ij_i)}= \bigoplus_{\underline{n} \in \mathbb{N} ^s} L^{(ij_i)}_{\underline{n} } \mbox{ and } G^{(ij_i)}=\bigoplus_{\underline{n} \in \mathbb{N} ^s} G^{(ij_i)}_{\underline{n} },\]
using the natural multi-grading. Clearly $F^{(ij_i)}_{\underline{n} }= F_{\underline{n} }/K^{(ij_i)}_{\underline{n} }$ and $G^{(ij_i)}_{\underline{n} }= G_{\underline{n} }/L^{(ij_i)}_{\underline{n} }$. Note that $K^{(ij_i)}_{e_i}= \operatorname{ann}_{F_{e_i}} y_{ij_i}$ for all $i$ and $j_i$. So for each $i$ and $j_i$ we get a degree $e_i$ isomorphism $y_{ij_i}F_{\underline{n} -e_i} \simeq F^{(ij_i)}_{\underline{n} -e_i}$. Moreover, for each $i$ and $j_i$, the natural map $\nu^{(ij_i)}: G \to G^{(ij_i)}$ induces the $k$-vector space epimorphism $\psi^{(ij_i)}:=\nu^{(ij_i)} \circ \psi : V \to G^{(ij_i)}_{e_1} \oplus \cdots \oplus G^{(ij_i)}_{e_s}$. Clearly $\psi^{(ij_i)}$ can be induced from the epimorphisms
\begin{align*}
\psi^{(ij_i)}_t&:= [\nu^{(ij_i)}]_{e_t} \circ \psi_t: V_t \to G_{e_t} \to G^{(ij_i)}_{e_t}
\end{align*}
for $1 \leq t \leq s$. Note that $n_1+\cdots+(n_i-1)+\cdots+n_s< n_1+\cdots+n_s$ in \eqref{eq1}. By minimality of $n_1+\cdots+n_s$ for the given $r_1+\cdots+r_s$, and by the meaning of ``general", for any $i,j_i$ there exists a non-empty Zariski-open subset $U^{ij_i}_t$ of $V_t^{r_t}$ yielding elements $z^{ij_i}_{t1}, \ldots, z^{ij_i}_{tr_t} \in \psi_t(V_t)$ for all $1 \leq t \leq s$ such that
\[F_{\underline{n} -e_i}^{(ij_i)}= \sum_{t=1}^s (\overline{z^{ij_i}_{t1}}, \ldots, \overline{z^{ij_i}_{tr_t}})F^{(ij_i)}_{\underline{n}-e_i-e_t}.\]
Observe that if there exists $u \in \{1,\ldots,s\}$ such that $n_u=0$, then the above equation holds trivially. Thus $F_{\underline{n} -e_i}= \sum_{t=1}^s (z^{ij_i}_{t1}, \ldots, z^{ij_i}_{tr_t})F_{\underline{n}-e_i-e_t}+ K^{(ij_i)}_{\underline{n} -e_i}$ and hence \[y_{ij_i}F_{\underline{n} -e_i}= \sum_{t=1}^s (z^{ij_i}_{t1}, \ldots, z^{ij_i}_{tr_t})y_{ij_i}F_{\underline{n}-e_i-e_t} \subseteq \sum_{t=1}^s (z^{ij_i}_{t1}, \ldots, z^{ij_i}_{tr_t})F_{\underline{n}-e_t}.\]
Set $U_t= \bigcap_{i=1}^s \left(\bigcap_{j_i=1}^{p_i} U^{ij_i}_t\right)$. By Remark \ref{rmk2} we get that $U_t$ is a non-empty Zariski-open subset of $V_t^{r_t}$ for all $t$ independent of $i$ and $j_i$ such that for the corresponding elements $x_{t1}, \ldots, x_{tr_t} \in \psi_t(V_t)=G_{e_t}$,
\begin{align} \label{thm3.2eq}
\sum_{i=1}^s \left(\sum_{j_i=1}^{p_i}y_{ij_i}\right)F_{\underline{n}-e_i} \subseteq \sum_{t=1}^s (x_{t1}, \ldots, x_{tr_t})F_{\underline{n}-e_t} \subseteq F_{\underline{n} }.
\end{align}
Let $\{g_1, \ldots, g_v\}$ be a generating set of $F$ as a $G$-module, with $\deg g_i=\underline{a_i}$, where $\underline{a_i}=(a_{i1}, \ldots, a_{is})\in \mathbb{N} ^s$. Then for all $\underline{m}$,
\[F_{\underline{m}}=g_1 G_{\underline{m}-\underline{a_1}}+ \cdots+g_v G_{\underline{m}-\underline{a_v}}.\]
Note that $G_{\underline{m}-\underline{a_i}}=0$ if $m_j< a_{ij}$ for some $1 \leq j \leq s$. As $G$ is standard $\mathbb{N} ^s$-graded, we have
\[G_{\underline{m}}= \sum_{i=1}^s\left(\sum_{j_i=1}^{p_i} y_{ij_i}\right)G_{\underline{m}-e_i} \]
for all $\underline{m} \geq \underline{1}.$ Set $\underline{a}=\max\{\underline{a_1}, \ldots, \underline{a_v}\}$, where the maximum is taken component-wise. If $\underline{n}\geq \underline{a}+\underline{1}$, then
\begin{align*}
F_{\underline{n}}
=g_1 G_{\underline{n}-\underline{a_1}}+ \cdots+g_v G_{\underline{n}-\underline{a_v}}
&=\sum_{u=1}^v g_{u} \left(\sum_{i=1}^s\left(\sum_{j_i=1}^{p_i} y_{ij_i}\right)G_{\underline{n}-\underline{a_u}-e_i}\right)\\
&=\sum_{i=1}^s\left(\sum_{j_i=1}^{p_i} y_{ij_i}\right) \left(\sum_{u=1}^v g_{u} G_{\underline{n}-\underline{a_u}-e_i}\right)\\
&=\sum_{i=1}^s\left(\sum_{j_i=1}^{p_i} y_{ij_i}\right)F_{\underline{n}-e_i}.
\end{align*}
Thus if $\underline{n} \geq \underline{a}+\underline{1}$, then using the above equation and \eqref{thm3.2eq}, we get
\[ F_{\underline{n}} = \sum_{t=1}^s (x_{t1}, \ldots, x_{tr_t})F_{\underline{n}-e_t} \]
a contradiction to our assumption. Hence the result follows.
\end{proof}
\begin{remark}
Let $\mathcal{F}=\{I_n\}_{n \geq 0}$ be a good filtration such that $\mu(I_n)< \binom{n+r}{r}$ for some $n>0$ and $r \geq 0.$ If $n \geq a+1$, where $a$ is as defined in the above theorem, then by Theorem \ref{esthm-mul-fil} there exist $r$ general elements $x_1, \ldots, x_r \in I_1$ such that $I_{n}=(x_1, \ldots, x_r)I_{n-1}.$ Suppose $\dim F(\mathcal{F})=r$. If $I_{m}=(x_1, \ldots, x_r)I_{m-1}$ for all $m \geq n$, then $(x_1, \ldots, x_r)$ is a minimal reduction of $\mathcal{F}.$ However, $I_{n}=(x_1, \ldots, x_r)I_{n-1}$ for some $n$ does not always imply that $I_{m}=(x_1, \ldots, x_r)I_{m-1}$ for all $m \geq n$. Later, in Theorem \ref{ESf}, we give a sufficient condition for $(x_1, \ldots, x_r)$ to be a minimal reduction of $\mathcal{F}$.
\end{remark}
As an immediate consequence of the above result, we get the following.
\begin{corollary}\label{mulg-adic}
Let $(R, \mathfrak{m} , k)$ be a Noetherian local ring with infinite residue field $k$ and $I_1,\ldots, I_s$ be ideals in $R$. Suppose
\[\mu(I_1^{n_1}\cdots I_s^{n_s})< \binom{n_1+r_1}{r_1}\cdots \binom{n_s+r_s}{r_s}\]
for some non-negative integers $n_i$ and $r_i$ with $n_1+\cdots+n_s \geq 1$. Then for all $1 \leq i \leq s$ there exist ``general" elements $x_{i1}, \dots, x_{ir_i} \in I_i$ such that for all $\underline{m} \geq \underline{n}$,
\[I_1^{m_1}\cdots I_s^{m_s}= \sum_{i=1}^s (x_{i1}, \dots, x_{ir_i})I_1^{m_1}\cdots I_i^{m_i-1}\cdots I_s^{m_s}.\]
\end{corollary}
\section{Eakin-Sathaye theorem for reduction number of $\mathbb{N} $-graded good filtrations}
We now prove an analogue of the Eakin-Sathaye Theorem to estimate the reduction number of an equimultiple good $\mathbb{N} $-graded filtration in Cohen-Macaulay rings. We impose additional assumptions on the depth of the associated graded ring and the fiber cone.
Let $R$ be a Noetherian local ring and $\mathcal{F}=\{I_n\}$ be a filtration. For an element $x \in I_1,$ $x^* \in I_1/I_2$ denotes the image of $x$ in $\operatorname{gr}_{\mathcal{F}}(R)$ and $x^\circ \in I_1/\mathfrak{m} I_1$ denotes the image of $x$ in $F(\mathcal{F}).$ If $x^* \not= 0$, then $x^*$ is said to be superficial in $\operatorname{gr}_{\mathcal{F}}(R)$ if $(0 : x^*) \cap \operatorname{gr}_{\mathcal{F}}(R)_n = 0$ for all $n$ large. Similarly, if $x^\circ \not= 0,$ then $x^\circ$ is superficial in $F(\mathcal{F})$ if $(0 : x^\circ) \cap F(\mathcal{F})_n = 0$ for all $n$ large. Let $\min(R)$ denote the set of all minimal prime ideals of $R.$
\begin{lemma}\label{depthff}
Let $(R,\mathfrak{m} )$ be a Noetherian local ring of dimension $d>0$ with $R/\mathfrak{m} =k$ infinite and $I$ be an ideal in $R$ such that $I \nsubseteq \mathfrak{p}$ for any $\mathfrak{p} \in \min(R)$. Let
$\mathcal{F}=\{I_n\}_{n \in \mathbb{N} }$ be an $I$-good filtration. Then there exists $x \in I \setminus \mathfrak{m} I_1$ such that:\\
{\rm (1)} $x \notin \bigcup_{\mathfrak{p} \in \min(R)} \mathfrak{p}$ \\
{\rm (2)} $x^\circ$ is superficial in $F(\mathcal{F})$ \\
{\rm (3)} $x^*$ is superficial in $\operatorname{gr}_{\mathcal{F}}(R)$.
\end{lemma}
\begin{proof}
Let
\begin{align*}
\operatorname{Ass} (\operatorname{gr}_{\mathcal{F}}(R)) = \{P_1, \ldots, P_r, P_{r+1} \ldots, P_{r'}\}, \quad
\operatorname{Ass} (F(\mathcal{F})) = \{Q_1, \ldots, Q_m, Q_{m+1}, \ldots, Q_{m'}\}
\end{align*}
such that for all $n$ large, $I_n/I_{n+1} \subseteq P_i$ for $r+1 \le i \le r'$ and $I_n/\mathfrak{m} I_n \subseteq Q_j$ for $m+1 \le j \le m'.$ Consider the ideal $\mathscr{I}= \oplus_{n \ge 0}I_{n+1}t^n$ of $\mathcal{R}(\mathcal{F}).$ As $\operatorname{gr}_{\mathcal{F}}(R) = \mathcal{R}(\mathcal{F})/\mathscr{I}$ and $F(\mathcal{F}) = \mathcal{R}(\mathcal{F})/\mathfrak{m} \mathcal{R}(\mathcal{F})$, we may let $\mathcal{P}=\{P'_1, \ldots, P'_r, Q'_1, \ldots, Q'_m\}$ be the collection of prime ideals in $\mathcal{R}(\mathcal{F})$ which are the pre-images of the ideals $P_1,\ldots,P_r$ and $Q_1,\ldots,Q_m.$ Observe that $\mathcal{R}(I) \subseteq \mathcal{R}(\mathcal{F})$ is an integral extension. Set $P'_i \cap \mathcal{R}(I)=P''_i$ and $Q'_j \cap \mathcal{R}(I)=Q''_j$ for all $i$ and $j$. Since $k$ is infinite, $I/\mathfrak{m} I \not= V$, where
\begin{align*}
V = \left(\frac{\mathfrak{m} I_1 \cap I}{\mathfrak{m} I}\right)
\bigcup \left(\frac{I_2 \cap I + \mathfrak{m} I}{\mathfrak{m} I}\right)
\bigcup_{\mathfrak{p} \in \min(R)} \left(\frac{\mathfrak{p} \cap I + \mathfrak{m} I}{\mathfrak{m} I}\right) \
\bigcup_{i=1}^r \left(\frac{P''_i \cap I + \mathfrak{m} I}{\mathfrak{m} I}\right) \
\bigcup_{j=1}^m \left(\frac{Q''_j \cap I + \mathfrak{m} I}{\mathfrak{m} I}\right).
\end{align*}
Hence we can choose
\[x \in I \setminus \left( (\mathfrak{m} I_1 \cap I) \bigcup \ (I_2 \cap I) \bigcup_{\mathfrak{p} \in \min(R)} (\mathfrak{p} \cap I) \ \bigcup_{i=1}^r (P''_i \cap I) \ \bigcup_{j=1}^m(Q''_j \cap I) \right).\]
We first show that $(0 : x^\circ) \cap F(\mathcal{F})_n = 0$ for $n$ large. Let $y^\circ \in (0 : x^\circ).$ Let $(0) = M_1 \cap \cdots \cap M_{m'}$ be a primary decomposition of $(0)$ in $F(\mathcal{F})$ such that $M_j$ is $Q_j$-primary for $j =1,\dots,m'.$ Then $y^\circ x^\circ \in M_j$ for all $1 \le j \le m'.$ Since $x^\circ \notin Q_j$ for $j =1,\dots,m$, it follows that $y^\circ \in M_j$ for $j =1,\dots,m.$ Thus $(0 : x^\circ) \subseteq M_1 \cap \cdots \cap M_m.$ For $m + 1 \le j \le m'$, $F(\mathcal{F})_n \subseteq Q_j$ for $n$ large. Therefore $F(\mathcal{F})_n \subseteq M_{m+1} \cap \cdots \cap M_{m'}.$ This implies that for all $n$ large, $(0 : x^\circ) \cap F(\mathcal{F})_n \subseteq M_1 \cap \cdots \cap M_{m'} = (0).$ Hence $x^\circ$ is superficial in $F(\mathcal{F}).$ A similar argument shows that $x^*$ is superficial in $\operatorname{gr}_{\mathcal{F}}(R).$
\end{proof}
\begin{remark}\label{rmksup}
In the above proof, observe that there exists a non-empty Zariski open subset $U$ of $I/\mathfrak{m} I$, such that for any $x+\mathfrak{m} I \in U$, the lemma holds. Thus $x \in I$ is a general element.
\end{remark}
\begin{theorem}\label{ESfdim1}
Let $(R, \mathfrak{m} )$ be a Cohen-Macaulay local ring of dimension $d>0$ with $R/\mathfrak{m} =k$ infinite. Let $\mathcal{F}=\{I_n\}_{n \in \mathbb{N} }$ be an equimultiple good filtration such that $\operatorname{grade}\operatorname{gr}_{\mathcal{F}}(R)_+ \ge l(\mathcal{F}) =1$ and $F(\mathcal{F})$ is Cohen-Macaulay. If $\mu(I_n)< n+1$ for some $n \ge 1$, then there exists a general element $x \in I_1$ such that $I_m = (x)I_{m-1}$ for all $m \ge n$.
\end{theorem}
\begin{proof}
By Lemma \ref{depthff} and Remark \ref{rmksup} there exists a general element $x \in I_1$ such that $x \notin \bigcup_{\mathfrak{p} \in \min(R)} \mathfrak{p}$ and $x^\circ$, $x^*$ are superficial elements in $F(\mathcal{F})$ and $\operatorname{gr}_{\mathcal{F}}(R)$ respectively. Since $R$ is Cohen-Macaulay and $\operatorname{ht} I_1=1$, $x$ is a nonzerodivisor. As $\operatorname{depth} F(\mathcal{F})= 1$, all the associated primes of $(0)$ in $F(\mathcal{F})$ are relevant primes and hence $x^\circ$ is $F(\mathcal{F})$-regular. Using \cite[Lemma 2.1]{huckabaMarley}, $x^*$ is $\operatorname{gr}_{\mathcal{F}}(R)$-regular. From \cite[Proposition 3.5]{huckabaMarley} it then follows that $(x) \cap I_i = (x)I_{i-1}$ for all $i \ge 1$. Note that $I_{i-1} \simeq (x)I_{i-1}$, which implies that $\mu((x)I_{i-1})=\mu(I_{i-1})$ for all $i \ge 1$. Since $\operatorname{depth}_{(x^\circ)} F(\mathcal{F})=1$, using \cite[Theorem 2.8]{cortadellasZar} it follows that $(x) \cap \mathfrak{m} I_i= (x)\mathfrak{m} I_{i-1}$ for all $i \geq 1$.
Therefore,
$$(x)I_{i-1} \cap \mathfrak{m} I_i \subseteq (x) \cap \mathfrak{m} I_i=(x)\mathfrak{m} I_{i-1} \subseteq (x)I_{i-1} \cap \mathfrak{m} I_i$$
and hence $(x)I_{i-1} \cap \mathfrak{m} I_i= (x)\mathfrak{m} I_{i-1}$ for all $i \ge 1$. For all $i \ge 1$,
\begin{align*}
\mu(I_i) - \mu(I_{i-1})
= \mu(I_i)-\mu((x)I_{i-1})
&= \dim_k \frac{I_i}{\mathfrak{m} I_i}- \dim_k \frac{(x)I_{i-1}}{\mathfrak{m} (x) I_{i-1}} \\
&= \dim_k \frac{I_i}{\mathfrak{m} I_i}- \dim_k \frac{(x)I_{i-1}}{(x)I_{i-1} \cap \mathfrak{m} I_i}\\
&= \dim_k \frac{I_i}{\mathfrak{m} I_i}- \dim_k \frac{(x)I_{i-1}+\mathfrak{m} I_i}{\mathfrak{m} I_i} \\
&= \dim_k \frac{I_i}{(x)I_{i-1}+\mathfrak{m} I_i} \\
&= \ell(I_i/\mathfrak{m} I_i+(x)I_{i-1}) \geq 0.
\end{align*}
Note that, by Nakayama's Lemma, $\ell(I_i/\mathfrak{m} I_i+(x)I_{i-1})=0$ if and only if $I_i=\mathfrak{m} I_i+(x)I_{i-1}$, i.e., $I_i=(x)I_{i-1}$.
For all $i \ge 1$, the inclusion map $f_i: I_{i+1} \hookrightarrow I_{i}$ induces the map $$\widetilde{f}_i : \frac{I_{i+1}}{(x)I_{i}} \to \frac{I_{i}}{(x)I_{i-1}}.$$
We claim that $\widetilde{f}_i$ is an injective map for all $i.$ It is sufficient to show that $(x)I_{i-1} \cap I_{i+1}=(x)I_{i}$. Clearly $(x)I_{i} \subseteq (x)I_{i-1} \cap I_{i+1}$. Let $xy \in (x)I_{i-1} \cap I_{i+1}$ for some $y \in I_{i-1}$. Then $y \in (I_{i+1}: x)=I_{i}$ as $x^*$ is $\operatorname{gr}_{\mathcal{F}}(R)$-regular. Thus $xy \in (x)I_{i}$ and hence $(x)I_{i-1} \cap I_{i+1} \subseteq (x)I_{i}$. The claim follows and hence $\widetilde{f}_i$ is injective for all $i \ge 1$. It follows that if $I_i = (x)I_{i-1}$ for some $i \ge 1$, then $I_j = (x)I_{j-1}$ for all $j \geq i$.
Suppose, if possible, that $I_n \neq (x)I_{n-1}$. Then by the above observation, $I_i \neq (x)I_{i-1}$ for all $1 \le i \leq n$. Thus we have $0< \mu((x))< \mu(I_1)< \cdots< \mu(I_n)$ and hence $\mu(I_n)\geq n+1$, a contradiction. Therefore $I_n = (x)I_{n-1}$, which implies that $I_m = (x)I_{m-1}$ for all $m \geq n$.
\end{proof}
\begin{remark}
In the above theorem, if $n=1$, i.e., $\mu(I_1)<2$, then $I_m = (x)I_{m-1}$ for all $m \geq 1$. In particular, $(x) = I_1$ and hence $I_n = (x^n)$ for all $n.$
\end{remark}
\begin{theorem}\label{ESf}
Let $(R, \mathfrak{m} )$ be a Cohen-Macaulay local ring with $R/\mathfrak{m} =k$ infinite. Let $\mathcal{F}=\{I_n\}_{n \in \mathbb{N} }$ be an equimultiple good filtration such that $\operatorname{grade}\operatorname{gr}_{\mathcal{F}}(R)_+ \geq l(\mathcal{F})=r$ and $F(\mathcal{F})$ is Cohen-Macaulay. Let $\mu(I_n)< \binom{n+r}{r}$ for some $n\ge 1$. Then there exist $r$ general elements $x_1, \ldots, x_r \in I_1$ such that $I_m = (x_1,\ldots,x_r)I_{m-1}$ for all $m \geq n$.
\end{theorem}
\begin{proof}
If $r=0$, then $(0)$ is the minimal reduction of $\mathcal{F}$. Now $\mu(I_n)< \binom{n+0}{0}=1$ implies $I_n=(0)$ and hence $I_m=(0)$ for all $m \geq n$. Therefore $I_m=(0)I_{m-1}$ for all $m \geq n$. Note that if $r = 1$, then $\dim R \ge r = 1$. Thus the result follows from Theorem \ref{ESfdim1}. Therefore we may assume that $r \geq 2$.
Suppose the result is false. Choose a counterexample $(R, \mathfrak{m} )$ in which $r$ is minimal and $n$ is minimal for this given value of $r$. Let $\dim R= d \geq 0$. If $d = 0$, then $r = 0$, and if $d = 1$, then $r = 0$ or $1$. We have seen that in these cases the result holds. So we may assume that $d \geq 2$.
As $R$ is Cohen-Macaulay, by Lemma \ref{depthff} there exists a nonzerodivisor $a \in I_1$ such that $a^\circ$ and $a^*$ are superficial in $F(\mathcal{F})$ and $\operatorname{gr}_{\mathcal{F}}(R)$ respectively. Using \cite[Lemma 2.1]{huckabaMarley}, $a^*$ is $\operatorname{gr}_{\mathcal{F}}(R)$-regular and from \cite[Proposition 3.5]{huckabaMarley} it then follows that $(a) \cap I_n= a I_{n-1}$ for all $n \ge 1.$ Since $\operatorname{depth} F(\mathcal{F}) \ge 2$, $a^\circ$ is a nonzerodivisor in $F(\mathcal{F})$, so \cite[Theorem 2.8]{cortadellasZar} implies that $(a) \cap \mathfrak{m} I_n= a \mathfrak{m} I_{n-1}$ for all $n \ge 1.$
Set $\overline{R}=R/(a)$ and $\overline{I_n}$ to be the image of $I_n$ in $\overline{R}$. Then
\begin{align*}
\frac{\overline{I_n}}{\mathfrak{m} \overline{I_n}}
= \frac{I_n+(a)}{\mathfrak{m} I_n+(a)}
= \frac{I_n+(\mathfrak{m} I_n+(a))}{\mathfrak{m} I_n+(a)}
=\frac{I_n}{\mathfrak{m} I_n+\left((a)\cap I_n \right)}
=\frac{I_n}{\mathfrak{m} I_n+ aI_{n-1}}.
\end{align*}
Thus for all $n \ge 1$,
\begin{align*}
\mu(\overline{I_n})
&= \dim_k ~I_n/ \left(\mathfrak{m} I_n+ aI_{n-1}\right)\\
&=\dim_k ~I_n/\mathfrak{m} I_n- \dim_k \left(\mathfrak{m} I_n+ aI_{n-1}\right)/ \mathfrak{m} I_n\\
&=\dim_k ~I_n/\mathfrak{m} I_n- \dim_k ~ aI_{n-1}/ \left(\mathfrak{m} I_n \cap aI_{n-1}\right).
\end{align*}
Now
$$\mathfrak{m} I_n \cap aI_{n-1}= \mathfrak{m} I_n \cap (a) \cap aI_{n-1}= a\mathfrak{m} I_{n-1} \cap a I_{n-1}=a\mathfrak{m} I_{n-1}$$
and hence $\mu(\overline{I_n})= \dim_k ~I_n/\mathfrak{m} I_n- \dim_k ~ aI_{n-1}/a\mathfrak{m} I_{n-1}$ for all $n \ge 1$. Note that $aI_{n-1}/a\mathfrak{m} I_{n-1} \simeq I_{n-1}/\mathfrak{m} I_{n-1}$ as $a$ is a nonzerodivisor in $R$. This implies that $\mu(\overline{I_n})= \mu(I_n)-\mu(I_{n-1})$ for all $n \ge 1$.
{\bf Case 1}: $\mu(I_{n-1})< \binom{n-1+r}{r}$.
By minimality of $n$ for the chosen $r$, there exist $r$ general elements $x_1, \ldots, x_r \in I_1$ such that $I_m = (x_1,\ldots,x_r)I_{m-1}$ for all $m \geq n-1$, a contradiction.
{\bf Case 2}: $\mu(I_{n-1})\geq \binom{n-1+r}{r}$.
We get $\mu(\overline{I_n})< \binom{n+r}{r}-\binom{n-1+r}{r} = \binom{n+r-1}{r-1}$. Set $\overline{\mathcal{F}}=\mathcal{F}/(a)=\{\overline{I_n}\}_{n \in \mathbb{N} }$. We claim that $F(\overline{\mathcal{F}}) \simeq F(\mathcal{F})/({a}^\circ)$ and $\operatorname{gr}_{\overline{\mathcal{F}}}(\overline{R}) \simeq \operatorname{gr}_{\mathcal{F}}(R)/({a}^*)$. Indeed,
\begin{align*}
F(\overline{\mathcal{F}})
\simeq \bigoplus_{n=0}^{\infty} \frac{I_n + (a)}{\mathfrak{m} I_n + (a)}
\simeq \bigoplus_{n=0}^{\infty} \frac{I_n}{\mathfrak{m} I_n +(I_n \cap (a))}
\simeq \bigoplus_{n=0}^{\infty} \frac{I_n}{\mathfrak{m} I_n + (a)I_{n-1}}
\simeq F(\mathcal{F})/({a}^\circ) .
\end{align*}
Similarly,
\begin{align*}
\operatorname{gr}_{\overline{\mathcal{F}}}(\overline{R})
\simeq \bigoplus_{n=0}^{\infty} \frac{I_n + (a)}{I_{n+1} + (a)}
\simeq \bigoplus_{n=0}^{\infty} \frac{I_n}{I_{n+1}+(I_n \cap (a))}
\simeq \bigoplus_{n=0}^{\infty} \frac{I_n}{I_{n+1} + (a)I_{n-1}}
\simeq \operatorname{gr}_{\mathcal{F}}(R)/({a}^*) .
\end{align*}
As $a^*$ and $a^\circ$ are regular elements in $\operatorname{gr}_{\mathcal{F}}(R)$ and $F(\mathcal{F})$ respectively, $F(\overline{\mathcal{F}})$ is Cohen-Macaulay with $l(\overline{\mathcal{F}})=l(\mathcal{F})-1=r-1 \geq 1$ and
$$\operatorname{grade} \operatorname{gr}_{\overline{\mathcal{F}}}(\overline{R})_+ = \operatorname{grade} \operatorname{gr}_{\mathcal{F}}(R)_+ -1 \geq l(\mathcal{F})-1=l(\overline{\mathcal{F}}).$$
Since $R/(a)$ is Cohen-Macaulay of dimension $d-1$ and $(R/(a))/(I_1/(a)) \simeq R/I_1$, it follows that $\dim R/(a)-\operatorname{ht} (I_1/(a))= \dim R -\operatorname{ht} I_1$. So $(d-1)-\operatorname{ht} (I_1/(a))=d- \operatorname{ht} I_1$ and hence $\operatorname{ht} (I_1/(a)) = \operatorname{ht} I_1 - 1=r-1$. Thus $\overline{\mathcal{F}}$ is an equimultiple good filtration. By minimality of $r$, there exist $r-1$ general elements $\overline{x_1}, \ldots, \overline{x_{r-1}} \in \overline{I_1}$ such that $\overline{I_m} = (\overline{x_1},\ldots,\overline{x_{r-1}}) \overline{I_{m-1}}$ for all $m \geq n$. By our choice, $0 \neq a+\mathfrak{m} I_1 \in I_1/\mathfrak{m} I_1=F(\mathcal{F})_1$, so by Lemma \ref{lem1} it follows that $a, x_1, \ldots, x_{r-1} \in I_1$ are $r$ general elements for which $I_m=(a, x_1, \ldots, x_{r-1}) I_{m-1}$ for all $m \geq n$, a contradiction.
\end{proof}
\section{Examples}
\subsection{\bf Contracted ideals}
Let $(R,\mathfrak{m} )$ be a $2$-dimensional regular local ring. An $\mathfrak{m} $-primary ideal $I$ is called a contracted ideal \cite[App.~5]{zariskiSamuel} if there exists an $x \in \mathfrak{m} \backslash \mathfrak{m} ^2$ such that $IR[\mathfrak{m} /x] \cap R=I$.
Zariski \cite{zariski} proved that the product of contracted (complete) ideals in $R$ is contracted (complete) and that a complete ideal is contracted. Let $o(I)$ denote the $\mathfrak{m} $-adic order of $I$, that is, $o(I)= \max \{n \mid I \subseteq \mathfrak{m} ^n\}$. Lipman \cite{lipman} and Rees \cite{rees} proved that if $I$ is contracted, then $\mu(I)=1+o(I)$, where $\mu(I)$ denotes the minimal number of generators of $I.$ Huneke-Sally \cite{hunekeSally} proved that if $R/\mathfrak{m} $ is infinite, then the converse also holds.
Thus we have the following result.
\begin{theorem}\label{lip}
Let $(R,\mathfrak{m} )$ be a $2$-dimensional regular local ring with infinite residue field and $I$ be an $\mathfrak{m} $-primary ideal in $R$. Then $I$ is contracted if and only if $\mu(I)=o(I)+1$.
\end{theorem}
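As an aside, Theorem \ref{lip} makes contractedness of an $\mathfrak{m} $-primary monomial ideal easy to test numerically, since for a monomial ideal $o(I)$ is the least total degree among its minimal generators. The following is a minimal sketch (the helper name is ours; monomials in $x,y$ are encoded as exponent pairs):

```python
def is_contracted(gens):
    """Test mu(I) == o(I) + 1 for an m-primary monomial ideal of k[[x, y]],
    given monomial generators as exponent pairs (criterion of Theorem lip,
    assuming an infinite residue field)."""
    # Keep only minimal generators: discard any monomial divisible by another.
    minimal = [g for g in gens
               if not any(all(e <= f for e, f in zip(d, g))
                          for d in gens if d != g)]
    # The m-adic order of a monomial ideal is the least total degree of a
    # minimal generator.
    order = min(a + b for a, b in minimal)
    return len(minimal) == order + 1

assert is_contracted([(2, 0), (1, 1), (0, 3)])   # (x^2, xy, y^3): mu = 3 = o + 1
assert not is_contracted([(2, 0), (0, 3)])       # (x^2, y^3): mu = 2, o = 2
```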
Let $R$ be a $2$-dimensional regular local ring and $I,J$ be contracted ideals. Using Corollary \ref{mulg-adic}, we find a choice of the joint reduction vector $(m, n)$ such that
\begin{equation}\label{eq2}
I^mJ^n= aI^{m-1}J^n+bI^mJ^{n-1}
\end{equation}
for some $a \in I$ and $b \in J.$
\begin{proposition}\label{jred}
Let $(R,\mathfrak{m} )$ be a $2$-dimensional regular local ring with infinite residue field. If $I$ and $J$ are contracted ideals, then in \eqref{eq2} we can take $m=2\cdot o(J)-1$ and $n=2\cdot o(I)-1$.
\end{proposition}
\begin{proof}
Let $I, J$ be contracted ideals. Then $I^rJ^s$ is also a contracted ideal for any $r,s$. Set $o (I)=\alpha$ and $o(J)=\beta$. Note that $\alpha \geq 1$ and $\beta \geq 1$. Then $\mu(I^mJ^n)= o(I^mJ^n)+1= m \alpha+n \beta+1$. If $\mu(I^mJ^n)< \binom{m+1}{1}\binom{n+1}{1}$ for some $m,n$, then by Corollary \ref{mulg-adic} we get equation \eqref{eq2}. So we want $(m,n)$ to be a solution of the inequality
\begin{align*}
\alpha x + \beta y+1< (x+1)(y+1) &\iff
0< xy-(\alpha-1)x -(\beta-1)y \\
&\iff 0< (x-\beta+1)(y-\alpha+1)-(\alpha-1)(\beta-1)
\end{align*}
with $m,n \geq 0$.
Now take $m=2 \beta-1$ and $n=2\alpha-1$. Then we get \[(2\beta-1- \beta+1)(2\alpha-1-\alpha+1)-(\alpha-1)(\beta-1)=\alpha+\beta-1>0.\] Thus the pair $(2 \beta-1,2 \alpha-1)$ satisfies the above inequality.
\end{proof}
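As a quick numerical sanity check (not part of the proof), the inequality underlying Proposition \ref{jred} can be verified for a range of orders; the helper name below is ours:

```python
# Sanity check for Proposition jred: for contracted ideals with orders
# o(I) = alpha and o(J) = beta, the pair (m, n) = (2*beta - 1, 2*alpha - 1)
# satisfies mu(I^m J^n) = m*alpha + n*beta + 1 < (m + 1)(n + 1).

def satisfies_eakin_sathaye_bound(alpha: int, beta: int) -> bool:
    """Check mu(I^m J^n) < (m+1)(n+1) at m = 2*beta - 1, n = 2*alpha - 1."""
    m, n = 2 * beta - 1, 2 * alpha - 1
    mu = m * alpha + n * beta + 1        # Lipman--Rees: mu = order + 1
    return mu < (m + 1) * (n + 1)

# The margin (m-beta+1)(n-alpha+1) - (alpha-1)(beta-1) equals alpha+beta-1 > 0,
# so the check should succeed for every alpha, beta >= 1.
assert all(satisfies_eakin_sathaye_bound(a, b)
           for a in range(1, 50) for b in range(1, 50))
```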
The following example illustrates that Corollary \ref{mulg-adic} gives a better bound than that given by L. O'Carroll's result \cite[Corollary 3.2]{carroll}.
\begin{example} \label{counter}
Let $k$ be a field and $R=k[[x,y]]$ be the power series ring over $k.$ Let $I=(x, y^2)$ and $J=(y, x^2).$ Then $IJ=xJ+yI.$ Therefore the joint reduction vector of $(I,J)$ with respect to $(x,y)$ is $(1,1).$ As $IJ=(xy,x^3,y^3)$ and $I^2J^2 = (x^2y^2,x^4y,xy^4,x^6,y^6)$, it follows that $\mu(IJ)=3 \nless \binom{1+2}{2}=3$ but $\mu(I^2J^2)=5 < \binom{2+2}{2}=6$. So by \cite[Corollary 3.2]{carroll} we get
\[I^2J^2=bI^2J+aIJ^2\]
for some $a \in I$ and $b \in J$, giving the joint reduction vector to be $(2,2)$. Since $I$ and $J$ are contracted ideals, by Proposition \ref{jred} we get $IJ=bI+aJ$ for some $a \in I$ and $b \in J$, clearly giving the exact value of the joint reduction vector.
\end{example}
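The multiplicities in this example can be double-checked mechanically: for monomial ideals, the minimal generators of a product are obtained by forming all pairwise products and discarding monomials divisible by another. A small sketch under that encoding (helper names are ours; monomials are exponent vectors):

```python
from itertools import product

def minimal_generators(monomials):
    """Minimal monomial generators: drop any monomial divisible by another."""
    mons = set(monomials)
    return {m for m in mons
            if not any(all(e <= f for e, f in zip(d, m))
                       for d in mons if d != m)}

def ideal_product(gens1, gens2):
    """Minimal generators of the product of two monomial ideals."""
    return minimal_generators({tuple(a + b for a, b in zip(u, v))
                               for u, v in product(gens1, gens2)})

# I = (x, y^2) and J = (y, x^2) as exponent vectors (deg_x, deg_y).
I = {(1, 0), (0, 2)}
J = {(0, 1), (2, 0)}

IJ = ideal_product(I, J)                                     # (xy, x^3, y^3)
I2J2 = ideal_product(ideal_product(I, I), ideal_product(J, J))
print(len(IJ), len(I2J2))   # mu(IJ) = 3, mu(I^2 J^2) = 5
```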
\subsection{\bf Lexsegment ideals}
\begin{definition} \cite{herzog2017}
An ideal $I \subseteq k[x_1, \ldots, x_n]$ is called a {\it lexsegment ideal}, if for any monomial $u \in I$ and all monomials $v$ with $\deg v=\deg u$ and $v>u$ in the lexicographical order, it follows that $v \in I$.
\end{definition}
\begin{example}
Let $I$ and $J$ be two lexsegment ideals in $R=k[x,y]$. Then by \cite[Lemma 4.3]{herzog2017},
\[I=(x^r, x^{r-1}y^{b_1}, \ldots, x^{r-p}y^{b_p}) \text{ and } J=(x^s, x^{s-1}y^{a_1}, \ldots, x^{s-q}y^{a_q})\]
for some integers $0<b_1< \cdots<b_p$ and $0<a_1< \cdots<a_q$. Clearly $\mu(I)=p+1$ and $\mu(J)=q+1$. By \cite[Corollary 4.5]{herzog2017} it follows that
\[\mu(I^nJ^m)=n \mu(I)+m\mu(J)-(n+m-1)=n(p+1)+m(q+1)-(n+m-1)= pn+qm+1.\]
If $\mu(I^nJ^m)< \binom{n+1}{1}\binom{m+1}{1}=(n+1)(m+1)$, i.e., $pn+qm+1< nm+n+m+1$, equivalently \[(p-1)n +(q-1)m<nm,\] then by Corollary \ref{mulg-adic} there exist $a \in I$ and $b \in J$ such that $I^nJ^m=aI^{n-1}J^{m}+bI^nJ^{m-1}$. Note that if $p=1$ and $q=2$, then the above inequality is satisfied for $n=2,m=1$ and hence the joint reduction vector is $(2,1)$. If we take $p=1, q=2$ and $n=m$, then the minimum choice of $n$ such that $n<n^2$ is $2$. Notice that $\mu(I^2J^2)=7 \nless \binom{2+2}{2}=6$, so L. O'Carroll's result \cite[Corollary 3.2]{carroll} is not applicable for $n=m=2$.
\end{example}
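The search for the smallest admissible joint reduction vector in this example amounts to a direct check of the inequality $(p-1)n+(q-1)m<nm$; a minimal sketch (the function name is ours):

```python
def admits_joint_reduction(p: int, q: int, n: int, m: int) -> bool:
    """mu(I^n J^m) = p*n + q*m + 1 < (n+1)(m+1), i.e. (p-1)n + (q-1)m < nm."""
    return (p - 1) * n + (q - 1) * m < n * m

# Smallest (n, m) with n, m >= 1 (ordered by n+m, then lexicographically)
# for which Corollary mulg-adic applies when p = 1 and q = 2.
pairs = sorted(((n, m) for n in range(1, 10) for m in range(1, 10)
                if admits_joint_reduction(1, 2, n, m)),
               key=lambda t: (t[0] + t[1], t))
print(pairs[0])   # (2, 1)
```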
The following examples show that Theorem \ref{ESf} gives a better bound for the reduction numbers of the respective filtrations. We use the following proposition to characterize the Cohen-Macaulay property of the fiber cone.
\begin{proposition} \cite[Proposition 3.7]{cortadellasZar} \label{cort}
Let $\mathcal{F}=\{I_n\}$ be a good filtration such that $I_1$ is $\mathfrak{m} $-primary. Assume that $\operatorname{gr}_{\mathcal{F}}(R)$ is Cohen-Macaulay and let $J$ be a minimal reduction of $\mathcal{F}.$ The following are equivalent: \\
{\rm(1)} $F(\mathcal{F})$ is Cohen-Macaulay. \\
{\rm(2)} $J \cap \mathfrak{m} I_n = J \mathfrak{m} I_{n-1}$ for all $1 \le n \le r_J(\mathcal{F}).$
\end{proposition}
\begin{example}
Let $k$ be an infinite field such that $p =\operatorname{char} k \neq 3$ and $R=k[[X,Y,Z]]/(X^3+Y^3+Z^3).$ Then $R$ is a $2$-dimensional Cohen-Macaulay, analytically unramified local ring. Let $x,y,z$ denote the images of $X,Y$ and $Z$ respectively in $R$ and $I=(y,z).$ Then $\mathcal{F}=\{(I^n)^*\}$ is an $I$-admissible filtration and using \cite{GMV}, $(I^k)^* = \mathfrak{m} ^{k+1}+I^k$ for all $k \ge 1$ and $(I^k)^* = I(I^{k-1})^*$ for all $k \ge 2$. We show that $\operatorname{gr}_{\mathcal{F}}(R) = \oplus_{n \ge 0}(I^n)^*/(I^{n+1})^*$ and $F(\mathcal{F}) = \oplus_{n \ge 0}(I^n)^*/\mathfrak{m} (I^n)^*$ are Cohen-Macaulay.
In order to show that $\operatorname{gr}_{\mathcal{F}}(R)$ is Cohen-Macaulay, using \cite[Theorem 2.3, Corollary 2.1]{viet}, it is sufficient to show that $I(I^{k-1})^* = (I^k)^* \cap I$ for all $k \ge 1$. This is true as $I(I^{k-1})^* = (I^k)^* \subseteq I$ for all $k \ge 2$. Hence, $\operatorname{gr}_{\mathcal{F}}(R)$ is Cohen-Macaulay. Using Proposition \ref{cort}, for $F(\mathcal{F})$ to be Cohen-Macaulay, it is sufficient to show that $I \cap \mathfrak{m} I^* = I\mathfrak{m} .$ Since $\mathfrak{m} ^3 \subseteq \mathfrak{m} I$ it follows that
\begin{align*}
I \cap \mathfrak{m} I^* = I \cap \mathfrak{m} (I+\mathfrak{m} ^2) = I \cap (\mathfrak{m} I+ \mathfrak{m} ^3) = I \cap \mathfrak{m} I = \mathfrak{m} I.
\end{align*}
As $(I^2)^* = I^2 + \mathfrak{m} ^3 =(x^2y,x^2z,y^2,yz,z^2)$ and $\mu((I^2)^*) = 5 < \binom{2+2}{2} = 6$, Theorem \ref{ESf} implies that $r(\mathcal{F}) \le 1$ and hence $r(\mathcal{F})=1$ as $I \neq I^*$.
\end{example}
\begin{example}\label{ex2}
Let $R=k[X,Y,U]$ be a polynomial ring in three variables over an infinite field $k$, with homogeneous maximal ideal $\mathfrak{m} =(X,Y,U)$. Set $T=R_{\mathfrak{m} }.$ Then $T$ is a regular local ring with maximal ideal $\mathfrak{m} R_{\mathfrak{m} }$ and infinite residue field $k.$ Let $I=(X^2,Y^2,U)R$ and let $\mathcal{F}=\{\overline{I^n}\}_{n \geq 1}$ be the integral closure filtration of $I$. Since $I$ is a homogeneous ideal, it is clear that $I\overline{I^n}=\overline{I^{n+1}}$ in $T$ if and only if $I\overline{I^n}=\overline{I^{n+1}}$ in $R$. As $T$ is an analytically unramified Noetherian local ring, by \cite[Corollary 9.2.1]{swansonHuneke} it follows that $\mathcal{F}$ is an $I$-good filtration in $T$.
We first claim that $\overline{I}=(X^2,XY,Y^2,U).$ Since $XY$ satisfies the equation $t^2-X^2Y^2=0,$ we have $XY \in \overline{I}.$ It is sufficient to show that the ideal $(X^2,XY,Y^2,U)$ is integrally closed. Observe that the ideal $(X^2,XY,Y^2)$ is integrally closed in $k[X,Y] = R/(U).$ As the contraction of a complete ideal is complete, the claim follows. Consider
\[I \overline{I}= (U^2,Y^2U, XYU, X^2U, Y^4, XY^3, X^2Y^2, X^3Y, X^4).\]
We have $I^2 \subseteq I \overline{I} \subseteq \overline{I^2}$. In order to show $I \overline{I}=\overline{I^2}$ it is enough to show that $I \overline{I}$ is integrally closed. Now
\begin{align*}
&(U^2,Y^2U, XYU, X^2U, Y^4, XY^3, X^2Y^2, X^3Y, X^4) \\
&= (X,U^2,Y^2U,Y^4) \cap (YU,U^2,X^2U,Y^4, XY^3, X^2Y^2, X^3Y, X^4)\\
&= (X,U^2,Y^2U,Y^4) \cap (Y,U^2,X^2U, X^4) \cap (U,\mathfrak{n} ^4),
\end{align*}
where $\mathfrak{n} =(X,Y)$ is the unique homogeneous maximal ideal of $k[X,Y]$. In view of \cite[Proposition 1.4.6]{swansonHuneke} it follows that $(U^2,X^2U, X^4)$ and $(U^2,Y^2U,Y^4)$ are integrally closed. Thus $(X,U^2,Y^2U,Y^4)$ and $(Y,U^2,X^2U, X^4)$ are integrally closed in $R$ and hence integrally closed in $T$ by \cite[Proposition 1.1.4]{swansonHuneke}. Again $(U,\mathfrak{n} ^4)$ is integrally closed in $T$. Hence $I \overline{I}$ is integrally closed.
We now claim that $\mathcal{R}(\mathcal{F}, T)=T[\overline{I}t,\overline{I^2}t^2, \ldots]$ is Cohen-Macaulay. Let $\mathcal{R}(\mathcal{F}, R)$ denote the Rees ring of $R$ with respect to $\mathcal{F}$ and let $r(\mathcal{F})=n_0$. Then $\mathcal{R}(\mathcal{F}, R)=R[\overline{I}t,\overline{I^2}t^2, \ldots, \overline{I^{n_0}}t^{n_0}]$, which is Noetherian. Again by \cite[Proposition 1.4.2]{swansonHuneke}, $\overline{I^n}$ is a monomial ideal for all $n \geq 1$. Therefore $M=(X,Y,U,\overline{I}t,\overline{I^2}t^2, \ldots)$ is a semigroup of monomials in $X,Y,U$. Moreover, by \cite[Proposition 5.2.4]{swansonHuneke} the integral closure of $R[It]$ in its field of fractions is $\mathcal{R}(\mathcal{F}, R)$. Since $R[It] \subseteq \mathcal{R}(\mathcal{F}, R) \subseteq \operatorname{Frac}(R[It])$, the field of fractions of $\mathcal{R}(\mathcal{F}, R)$ is also $\operatorname{Frac}(R[It])$. Hence $\mathcal{R}(\mathcal{F}, R)=k[M] \subseteq k[X,Y,U,t]$ is normal. Thus by \cite[Proposition 1]{hochster}, $M$ is a normal semigroup of monomials, and by \cite[Theorem 1]{hochster} it follows that $\mathcal{R}(\mathcal{F}, R)$ is Cohen-Macaulay.
Since $R$ is a subring of $\mathcal{R}(\mathcal{F}, R)$, the set $C=R-\mathfrak{m} $ is also a multiplicatively closed subset of $\mathcal{R}(\mathcal{F}, R)$. Hence $\mathcal{R}(\mathcal{F}, T)= C^{-1}\mathcal{R}(\mathcal{F}, R)$ is Cohen-Macaulay. Thus by \cite[Corollary 2.1]{viet} we get that $\operatorname{gr}_{\mathcal{F}}(T)$ is Cohen-Macaulay.
Using \cite[Theorem 2.3]{viet} it follows that $r(\mathcal{F}) \leq \dim T-1=2$. Since $I$ is a minimal reduction of $\mathcal{F}$, $I \overline{I^{n}}=\overline{I^{n+1}}$ for all $n \geq 2$. Thus in our case, $r_I(\mathcal{F})=1$ (as $I \neq \overline{I}$). Now
\[ I \cap (X,Y,U)\overline{I}= I (X,Y,U)=(U^2, YU, XU, Y^3, XY^2, X^2Y, X^3) \]
and hence by Proposition \ref{cort}, it follows that $F(\mathcal{F})$ is Cohen-Macaulay. Thus our assumptions in Theorem \ref{ESf} are satisfied and, as $\mu(\overline{I^2})=9< \binom{2+3}{3}= 10$, it follows that $r_{I}(\mathcal{F}) \leq 1.$ Hence $r_I(\mathcal{F})=1.$ Note that $\mathcal{R}(\mathcal{F}, T)$ is Cohen-Macaulay, which implies that $r_{J}(\mathcal{F}) \leq 3-1=2$ for any minimal reduction $J$ of $\mathcal{F}$ by \cite[Theorem 2.3]{viet}. This illustrates that our result gives a better (in fact exact) bound.
\end{example}
The following example shows that the depth assumptions in Theorem \ref{ESf} cannot be
dropped.
\begin{example}
Let $R=\mathbb{C} [[X,Y,Z]]/(X^4+Y^4+Z^2) = \mathbb{C} [[x,y,z]]$, where $x,y$ and $z$ denote the images of $X,Y$ and $Z$, respectively, in $R.$ Then $R$ is a $2$-dimensional Cohen-Macaulay local ring. Put $\mathfrak{m} =(x,y,z).$ We first show that $R$ is normal. Set $T=\mathbb{C} [X,Y,Z]/(f)$, where $f=X^4+Y^4+Z^2.$ Put $l_1=X+Y, l_2=X+iY, l_3=X-Y, l_4=X-iY$. Then $f = X^4+Y^4+Z^2 = Z^2 + l_1l_2l_3l_4 \in \mathbb{C} [X,Y][Z]$. By {\it Eisenstein's criterion} (applied with the prime element $l_1$ of $\mathbb{C}[X,Y]$, which divides the constant term $l_1l_2l_3l_4$ but not to the second power), $f$ is irreducible in $\mathbb{C} [X,Y,Z]$ and hence $T$ is a domain. Using \cite[Theorem 4.4.9]{swansonHuneke}, it follows that $\operatorname{Sing}(T)={\rm V}(f,\operatorname{Jac}(f))={\rm V}(4X^3, 4Y^3,2Z)=\{(0,0,0)\}$. Thus $T_{\mathfrak{p}}$ is regular for any $\mathfrak{p} \in \Spec(T)$ with $\mathfrak{p} \neq (X,Y,Z)T$, so $T$ satisfies $R_1$; moreover, as $T$ is Cohen-Macaulay it satisfies $S_2$. Therefore by \cite[Theorem 23.8]{matsumuraCRT} it is normal. Since a localization of a normal domain is normal, $T_{(X,Y,Z)}$ is normal. Hence by \cite[Section 32]{matsumuraCRT} we get that $R$ is normal.
Let $\mathcal{F}=\{\overline{\mathfrak{m} ^n}\}_{n \geq 1}$ and $I=(x,y)R.$ By \cite[Theorem 3.1]{watanabe} we have $\overline{\mathfrak{m} ^n}= I^n+(z)I^{n-2}$ for all $n \ge 1,$ $r(\mathcal{F})=2$ and $\operatorname{gr}_{\mathcal{F}}(R)$ is Cohen-Macaulay. Using Proposition \ref{cort}, it follows that $F(\mathcal{F})$ is Cohen-Macaulay if and only if $I \cap \mathfrak{m} \overline{\mathfrak{m} ^2} = I \mathfrak{m} ^2.$ Observe that $xz \in I \cap \mathfrak{m} \overline{\mathfrak{m} ^2}$, but if $xz \in I \mathfrak{m} ^2 = (x^3,x^2y,x^2z,xy^2,xyz,y^3,y^2z),$ then $XZ \in (X^3,X^2Y,X^2Z,XY^2,XYZ,Y^3,Y^2Z,Z^2),$ a contradiction. Therefore $I \cap \mathfrak{m} \overline{\mathfrak{m} ^2} \neq I \mathfrak{m} ^2$, and hence $F(\mathcal{F})$ is not Cohen-Macaulay. Note that $\mu(\overline{\mathfrak{m} ^2})=4 < \binom{2+2}{2}=6$ whereas $r(\mathcal{F}) =2.$ This shows that Theorem \ref{ESf} may fail without the assumption that $F(\mathcal{F})$ is Cohen-Macaulay.
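The ideal-membership step above can be double-checked with a computer algebra system. The following sketch is ours, not part of the original argument; it assumes SymPy is available, and tests whether $XZ$ lies in the lift of $I\mathfrak{m}^2$ together with the defining relation, via a Gr\"obner basis:

```python
# Hedged check (assumes sympy): xz in I*m^2 in R would force XZ to lie in the
# lift of I*m^2 plus the defining relation X^4 + Y^4 + Z^2 in C[X,Y,Z].
from sympy import symbols, groebner

X, Y, Z = symbols('X Y Z')
gens = [X**3, X**2*Y, X**2*Z, X*Y**2, X*Y*Z, Y**3, Y**2*Z,
        X**4 + Y**4 + Z**2]          # lifted generators + defining relation
G = groebner(gens, X, Y, Z, order='grevlex')
print(G.contains(X*Z))               # False: XZ is not in the ideal
```

Modulo $(X^3, Y^3)$ the relation reduces to $Z^2$, so the Gr\"obner basis is the monomial ideal displayed in the text, and $XZ$ is divisible by none of its generators.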
\end{example}
\begin{example}
Let $R=\mathbb{C}[[X,Y]]/(X^4+Y^2) = \mathbb{C}[[x,y]]$, where $x$ and $y$ are the images of $X$ and $Y$ in $R$, respectively. Consider the equimultiple good filtration $\mathcal{F} = \{\overline{\mathfrak{m} ^n} \}$. We claim that for all $n \ge 2,$
\[\overline{\mathfrak{m} ^n} = (x^n,x^{n-2}y).\]
Write $X^4+Y^2 = (X^2 +iY)(X^2-iY).$ Since each factor is linear in $Y$, the elements $f_1 = X^2+iY$ and $f_2 = X^2-iY$ are irreducible in $T=\mathbb{C}[[X,Y]]$, so $(f_1)$ and $(f_2)$ are the minimal primes of $(X^4+Y^2)$ in $T.$ Let $n \ge 2.$ Using the fact that an element $a \in \overline{\mathfrak{m} ^n}$ if and only if the image of $a$ in $S_j = T/(f_j)$ lies in $\overline{\mathfrak{m} ^nS_j}$ for $j=1,2$, it follows that
\[ \overline{\mathfrak{m} ^n} = (x^n, x^2+iy) \cap (x^n, x^2-iy). \]
Therefore, it is sufficient to show that $(x^n, x^2+iy) \cap (x^n, x^2-iy) = (x^n,x^{n-2}y).$ Set $I=(X^n, X^2+iY), J=(X^n, X^2-iY),$ and $L=(X^n,X^{n-2}Y, X^4+Y^2)$. Then we show that $I \cap J=L$ in $T.$ Clearly, $L \subseteq I \cap J.$ Consider the following exact sequence
\[ 0 \rightarrow \frac{T}{I \cap J} \rightarrow \frac{T}{I} \oplus \frac{T}{J} \rightarrow \frac{T}{I+J} \rightarrow 0. \]
Since $\ell(T/I) = n = \ell(T/J)$ and $\ell(T/(I+J)) = \ell(T/(X^2,Y)) = 2$, it follows that $\ell(T/(I\cap J))=2n-2.$ As a direct computation gives $\ell(T/L)=2n-2$, the claim holds. Observe that $\overline{\mathfrak{m} ^2} \neq (x)\mathfrak{m} $ as $y \notin (x)\mathfrak{m} $, but $\overline{\mathfrak{m} ^n} = (x)\overline{\mathfrak{m} ^{n-1}}$ for all $n \ge 3$. This implies that $r_{(x)}(\mathcal{F})=2.$
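For concrete values of $n$ the length $\ell(T/L)=2n-2$ can be verified by counting standard monomials of a Gr\"obner basis of $L$. The helper below is our own illustration (it assumes SymPy; the function name and the degree bound are ad hoc choices):

```python
from sympy import symbols, groebner, LM, Poly

X, Y = symbols('X Y')

def quotient_length(gens, variables, max_deg=30):
    """Vector-space dimension of k[X,Y]/(gens) (assumed finite), counted as
    the number of standard monomials, i.e. monomials divisible by no leading
    monomial of a Groebner basis of the ideal."""
    G = groebner(gens, *variables, order='grevlex')
    lead = [Poly(LM(g, *variables, order='grevlex'), *variables).monoms()[0]
            for g in G]
    return sum(1 for a in range(max_deg) for b in range(max_deg)
               if not any(a >= e[0] and b >= e[1] for e in lead))

# ell(T/L) = 2n - 2 for L = (X^n, X^{n-2}Y, X^4 + Y^2):
for n in (4, 5, 6):
    assert quotient_length([X**n, X**(n - 2)*Y, X**4 + Y**2], (X, Y)) == 2*n - 2
```

For $n=4$, for instance, the reduced Gr\"obner basis has leading monomials $X^4, X^2Y, Y^2$, leaving the six standard monomials $1, X, X^2, X^3, Y, XY$, in agreement with $2n-2=6$.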
One can partially recover this observation from Theorem \ref{esthm-mul-fil}. Note that $\{1,\overline{y}\}$ forms a basis of $F(\mathcal{F})$ as an $F(\mathfrak{m} )$-module, where $F(\mathfrak{m} )=\bigoplus_{n \ge 0}\mathfrak{m} ^n/\mathfrak{m} ^{n+1}$ and $\deg \overline{y}=2.$ Thus $a=\max\{\deg 1, \deg \overline{y} \} = 2.$ Let $n \geq 3.$ As $\mu(\overline{\mathfrak{m} ^n}) = 2 < \binom{n+1}{1}=n+1$ and $n \geq a+1$, Theorem \ref{esthm-mul-fil} implies that $\overline{\mathfrak{m} ^n} = (x)\overline{\mathfrak{m} ^{n-1}}$ for all $n \geq 3.$ One can also check that $\mu(\overline{\mathfrak{m} ^2}) = 2 < 2+1=3.$ But as $2 \ngeq a+1$, we cannot use Theorem \ref{esthm-mul-fil} in this case.
We now check whether Theorem \ref{ESf} can be used to predict the reduction number of the filtration $\mathcal{F}.$ Since $(x) \cap \overline{\mathfrak{m} ^n} = (x)\overline{\mathfrak{m} ^{n-1}}$ for all $n \ge 1$, using \cite[Proposition 3.5]{huckabaMarley}, it follows that $\operatorname{gr}_{\mathcal{F}}(R)$ is Cohen-Macaulay. As $\mu(\overline{\mathfrak{m} ^2}) = 2 < 2+1=3$, one could conclude that $\overline{\mathfrak{m} ^2}=x\mathfrak{m} $ if the fiber cone of the filtration $\mathcal{F}$ were Cohen-Macaulay. But this fails to be true, as $y \notin x\mathfrak{m} $; the failure is due to the non-Cohen-Macaulayness of $F(\mathcal{F}),$ which we now verify. Using Proposition \ref{cort}, $F(\mathcal{F})$ is Cohen-Macaulay if and only if $(x) \cap \mathfrak{m} \overline{\mathfrak{m} ^n} = (x)\mathfrak{m} \overline{\mathfrak{m} ^{n-1}}$ for $n=1,2.$ We claim that
\[ xy \in (x) \cap \mathfrak{m} \overline{\mathfrak{m} ^2} \setminus (x)\mathfrak{m} ^2. \]
Clearly, $xy \in (x) \cap \mathfrak{m} \overline{\mathfrak{m} ^2}.$ If $xy \in (x)\mathfrak{m} ^2 = (x^3,x^2y,xy^2)$, then $XY \in (X^3,X^2Y,XY^2,X^4+Y^2),$ a contradiction. Hence $F(\mathcal{F})$ is not Cohen-Macaulay.
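This final membership claim can also be confirmed by computer algebra. The check below is our own sketch (assuming SymPy), testing whether $XY$ lies in the lift of $(x)\mathfrak{m}^2$ together with the defining relation:

```python
from sympy import symbols, groebner

X, Y = symbols('X Y')
# xy in (x)*m^2 in R would force XY in (X^3, X^2*Y, X*Y^2, X^4 + Y^2) in C[X,Y].
G = groebner([X**3, X**2*Y, X*Y**2, X**4 + Y**2], X, Y, order='grevlex')
print(G.contains(X*Y))    # False, confirming xy is not in (x)*m^2
```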
\end{example}
\bibliographystyle{plain}
% Source metadata: "Eakin-Sathaye type theorems for joint reductions and good filtrations of ideals", arXiv:1806.07518 [math.AC], https://arxiv.org/abs/1806.07518
% Source metadata: arXiv:2105.10116, https://arxiv.org/abs/2105.10116
\title{Approximate analytical solution for transient heat and mass transfer across an irregular interface}
\begin{abstract}
Motivated by practical applications in heat conduction and contaminant transport, we consider heat and mass diffusion across a perturbed interface separating two finite regions of distinct diffusivity. Under the assumption of continuity of the solution and diffusive flux at the interface, we use perturbation theory to develop an asymptotic expansion of the solution valid for small perturbations. Each term in the asymptotic expansion satisfies an initial-boundary value problem on the unperturbed domain subject to interface conditions depending on the previously determined terms in the asymptotic expansion. Demonstration of the perturbation solution is carried out for a specific, practically-relevant set of initial and boundary conditions with semi-analytical solutions of the initial-boundary value problems developed using standard Laplace transform and eigenfunction expansion techniques. Results for several choices of the perturbed interface confirm the perturbation solution is in good agreement with a standard numerical solution.
\end{abstract}
\section{Introduction}
\noindent The diffusion equation is fundamental to applied mathematics with numerous applications in engineering and the physical and life sciences \cite{liu_1998,pontrelli_2020,simpson_2017,zhao_2016,mantzavinos_2016}. Most practical applications of the diffusion equation involve a complex heterogeneous geometry of irregular shape exhibiting spatial variation in diffusivity. While computing numerical solutions to such problems is relatively straightforward, analytical solutions remain highly sought after since they provide greater insight into the physical significance of key model parameters \cite{ozisik_1993}, generally exhibit higher accuracy and can be evaluated at any point in continuous space and time. Of most interest to this work are analytical solutions for the steady-state diffusion equation in irregular geometries such as circles, ellipses or rectangles with perturbed boundaries and constant diffusivity \cite{simpson_2021,scheffler_1974,jiji_2009,aziz_1980} and analytical solutions of the transient diffusion equation in layered media with parallel/concentric interfaces between regions of distinct diffusivity~ \cite{carr_2016,kaoui_2018,rodrigo_2016,hickson_2009,ozisik_1993,carr_2020c}.
In this paper, we consider the transient diffusion equation in a two-dimensional heterogeneous medium comprising two regions separated by an irregular interface. \revision{Our interest in this problem is mainly motivated by the work of \citet{mcinerney_2019} who studied heat conduction in living heterogeneous skin as a way to better understand scald burn treatments. Other motivations include the work of \citet{carr_2019} who analysed heat conduction in two-layer solid materials to develop formulae for calculating the thermal diffusivity of the two constituent materials and \citet{chen_2009} who considered groundwater contamination by way of diffusion of contaminants through manufactured two-layer composite landfill liners. In each of these articles, the authors assume the two layers are separated by a (perfectly) horizontal or vertical interface, which gives rise to a simplified one-dimensional two-layer mathematical model governing the spatial and temporal dynamics of the solution (temperature or concentration in the above applications).
In practice, interfaces are seldom (perfectly) vertical or horizontal but irregular, as demonstrated for skin in Figure \ref{fig:skin}. In this case, diffusive transport can be described by the following set of equations:
\begin{gather}
\label{eq:pde1}
\frac{\partial u_{1}}{\partial t} = D_{1}\Delta u_{1},\quad 0 < x < \ell + \varepsilon w(y),\enspace 0 < y < H,\enspace t > 0,\\
\label{eq:pde2}
\frac{\partial u_{2}}{\partial t} = D_{2}\Delta u_{2},\quad \ell + \varepsilon w(y) < x < L,\enspace 0 < y < H,\enspace t > 0,\\
\label{eq:int1}
u_{1}(\ell + \varepsilon w(y),y,t) = u_{2}(\ell + \varepsilon w(y),y,t),\quad 0 < y < H,\enspace t > 0,\\
\label{eq:int2}
D_{1}\nabla u_{1}(\ell + \varepsilon w(y),y,t)\cdot\mathbf{n}(y) = D_{2}\nabla u_{2}(\ell + \varepsilon w(y),y,t)\cdot\mathbf{n}(y),\quad 0 < y < H,\enspace t > 0,
\end{gather}
where $u_{1}$ and $u_{2}$ denote the temperature in the first and second regions; $x = \ell + \varepsilon w(y)$ specifies the interface separating the two regions; $\mathbf{n}(y)\in\mathbb{R}^{2}$ denotes a vector normal to $x = \ell + \varepsilon w(y)$; and $D_{1}$ and $D_{2}$ denote the distinct diffusivity values in the first and second regions. Continuity of both temperature and diffusive flux is imposed at the interface by way of conditions (\ref{eq:int1}) and (\ref{eq:int2}).}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{Figures/figure1.pdf}
\caption{\textbf{Heat conduction in heterogeneous skin}. (a) Haemotoxylin and eosin (H\&E) staining of real skin showing a well-defined irregular interface separating the epidermis (purple) and dermis (pink) (adapted from \cite{haridas_2017}). (b) Schematic diagram of a heterogeneous skin medium of depth $L$ and width $H$ with an irregular interface described by $x = \ell + \varepsilon w(y)$. We solve the heat equation across the medium, with distinct diffusivities in the epidermis (purple) and dermis (pink) regions and continuity of temperature and diffusive flux imposed at $x = \ell + \varepsilon w(y)$ \cite{mcinerney_2019}.}
\label{fig:skin}
\end{figure}
In this paper, we present an approximate analytical solution of equations (\ref{eq:pde1})--(\ref{eq:int2}), subject to appropriate initial and boundary conditions, by developing asymptotic expansions of the solutions $u_{1}$ and $u_{2}$ valid for small non-negative $\varepsilon$. Using perturbation theory, we show that each term in the asymptotic expansion satisfies an initial-boundary value problem on the unperturbed two-layer domain (i.e., $\varepsilon = w(y) = 0$) with interface conditions depending on the previously computed terms in the expansion. Each initial-boundary value problem is then solved semi-analytically using the Laplace transform for a specific, practically-relevant set of initial and boundary conditions representing scalding at the surface, $x = 0$ \cite{mcinerney_2019}. The resulting solution fields are visualised for several different choices of the perturbed interface and verified against numerical solutions obtained via a standard finite volume discretisation. MATLAB code implementing our perturbation solution and allowing the solution approach to be investigated for other appropriate choices of the parameters ($\varepsilon$, $w(y)$, $\ell$, $D_{1}$, $D_{2}$, $L$ and $H$) is made available on a GitHub repository: \href{https://github.com/elliotcarr/Carr2022a}{https://github.com/elliotcarr/Carr2022a}.
\section{Perturbation solution}
\label{sec:solution_method}
\noindent We assume the solution of equations (\ref{eq:pde1})--(\ref{eq:int2}), subject to appropriate initial and boundary conditions, can be expanded in powers of $\varepsilon$:
\begin{align}
\label{eq:ansatz1}
u_{1}(x,y,t) = \sum_{i=0}^{\infty} \varepsilon^{i}u_{1}^{(i)}(x,y,t),\\
\label{eq:ansatz2}
u_{2}(x,y,t) = \sum_{i=0}^{\infty} \varepsilon^{i}u_{2}^{(i)}(x,y,t),
\end{align}
where the terms $u_{1}^{(i)}$ and $u_{2}^{(i)}$ for $i\in\mathbb{N} := \{0,1,2,\hdots\}$ satisfy the transient diffusion equation on the unperturbed domain, that is, equation (\ref{eq:pde1}) on $0 < x < \ell$ and equation (\ref{eq:pde2}) on $\ell < x < L$ \cite{simpson_2021,scheffler_1974}.
Now consider the interface condition (\ref{eq:int1}). Expanding both sides of (\ref{eq:int1}) in a Taylor series centered at $x = \ell$,
\begin{align*}
\sum_{k=0}^{\infty} \frac{\varepsilon^{k}w(y)^{k}}{k!} \frac{\partial^{k}u_{1}}{\partial x^{k}}(\ell,y,t) &= \sum_{k=0}^{\infty} \frac{\varepsilon^{k}w(y)^{k}}{k!} \frac{\partial^{k}u_{2}}{\partial x^{k}}(\ell,y,t),
\end{align*}
and inserting the expansions (\ref{eq:ansatz1}) and (\ref{eq:ansatz2}) yields:
\begin{align}
\label{eq:int1_expansion}
\sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k}w(y)^{k}}{k!}\frac{\partial^{k}u_{1}^{(i)}}{\partial x^{k}}(\ell,y,t) = \sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k}w(y)^{k}}{k!}\frac{\partial^{k}u_{2}^{(i)}}{\partial x^{k}}(\ell,y,t).
\end{align}
For all $n\in\mathbb{N}$, the $\mathcal{O}(\varepsilon^{n})$ term on both sides of (\ref{eq:int1_expansion}) is identified when $i + k = n$ or equivalently, $i = n-k$ for $k = 0,\hdots,n$. Matching the $\mathcal{O}(\varepsilon^{n})$ terms on both sides of (\ref{eq:int1_expansion}) therefore yields:
\begin{align*}
\sum_{k=0}^{n}\frac{w(y)^{k}}{k!}\frac{\partial^{k}u_{1}^{(n-k)}}{\partial x^{k}}(\ell,y,t) = \sum_{k=0}^{n}\frac{w(y)^{k}}{k!}\frac{\partial^{k}u_{2}^{(n-k)}}{\partial x^{k}}(\ell,y,t).
\end{align*}
Hence, we have derived the following interface condition satisfied by $u_{1}^{(n)}$ and $u_{2}^{(n)}$ for all $n\in\mathbb{N}$:
\begin{gather*}
u_{1}^{(n)}(\ell,y,t) = u_{2}^{(n)}(\ell,y,t) + f^{(n)}(y,t),
\end{gather*}
where $f^{(n)}(y,t)$ depends on (derivatives of) $u_{1}^{(0)},\hdots,u_{1}^{(n-1)}$ and $u_{2}^{(0)},\hdots,u_{2}^{(n-1)}$:
\begin{gather}
\label{eq:fn}
f^{(n)}(y,t) = \sum_{k=1}^{n}\frac{w(y)^{k}}{k!}\left[\frac{\partial^{k}u_{2}^{(n-k)}}{\partial x^{k}}(\ell,y,t)-\frac{\partial^{k}u_{1}^{(n-k)}}{\partial x^{k}}(\ell,y,t)\right]\!.
\end{gather}
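The order-matching identity behind (\ref{eq:fn}) can be sanity-checked symbolically for sample data. In the sketch below (ours; it assumes SymPy, and the interface position, the value of $w$ and the polynomial stand-ins for the terms $u^{(i)}$ are arbitrary illustrative choices), the $\varepsilon^{n}$ coefficient of $\sum_{i}\varepsilon^{i}u^{(i)}(\ell+\varepsilon w)$ is compared against $\sum_{k=0}^{n}\frac{w^{k}}{k!}\,\partial_{x}^{k}u^{(n-k)}(\ell)$:

```python
import sympy as sp

# Symbolic check of the Taylor-series matching used to derive f^{(n)}.
# ell, wval and the polynomial "terms" u are arbitrary sample data.
eps, x = sp.symbols('eps x')
ell, wval = sp.Rational(1, 2), sp.Rational(1, 3)
u = [x**3 + 1, 2*x**2 - x, x**4, 3*x + 2]   # stand-ins for u^{(0)},...,u^{(3)}

total = sp.expand(sum(eps**i * ui.subs(x, ell + eps*wval)
                      for i, ui in enumerate(u)))

for n in range(len(u)):
    rhs = sum(wval**k / sp.factorial(k) * sp.diff(u[n - k], x, k).subs(x, ell)
              for k in range(n + 1))
    assert sp.expand(total.coeff(eps, n) - rhs) == 0   # identity holds exactly
```

Because the sample terms are polynomials, the substitution $x \mapsto \ell + \varepsilon w$ produces the full (finite) Taylor expansion, so the comparison is exact rather than numerical.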
Next consider the interface condition (\ref{eq:int2}). Expanding the first derivatives of $u_{1}$ and $u_{2}$ in Taylor series centered at $x = \ell$ and then inserting the expansions (\ref{eq:ansatz1}) and (\ref{eq:ansatz2}) yields:
\begin{align*}
\frac{\partial u_{1}}{\partial x}(\ell+\varepsilon w(y),y,t) &= \sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k}w(y)^{k}}{k!}\frac{\partial^{k+1}u_{1}^{(i)}}{\partial x^{k+1}}(\ell,y,t),\\
\frac{\partial u_{1}}{\partial y}(\ell+\varepsilon w(y),y,t) &= \sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k}w(y)^{k}}{k!}\frac{\partial^{k+1}u_{1}^{(i)}}{\partial x^{k}\partial y}(\ell,y,t),\\
\frac{\partial u_{2}}{\partial x}(\ell+\varepsilon w(y),y,t) &= \sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k}w(y)^{k}}{k!}\frac{\partial^{k+1}u_{2}^{(i)}}{\partial x^{k+1}}(\ell,y,t),\\
\frac{\partial u_{2}}{\partial y}(\ell+\varepsilon w(y),y,t) &= \sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k}w(y)^{k}}{k!}\frac{\partial^{k+1}u_{2}^{(i)}}{\partial x^{k}\partial y}(\ell,y,t).
\end{align*}
The perturbed interface can be described by the vector-valued function $\mathbf{v}(y) = (\ell+\varepsilon w(y),y)$ with tangent vector $\mathbf{v}'(y) = (\varepsilon w'(y),1)$, which allows a normal vector to be identified, $\mathbf{n}(y) = (1,-\varepsilon w'(y))$. Note that $\mathbf{n}(y)$ is not required to have unit length as the magnitude of $\mathbf{n}(y)$ does not affect the interface condition (\ref{eq:int2}). Similarly we have chosen $\mathbf{n}(y)$ to point outwards from the first region but in the working that follows one could just as easily use the normal vector pointing inwards, as a negative sign also does not affect equation (\ref{eq:int2}). Combining the form of $\mathbf{n}(y)$ with the above expressions for the first derivatives yields the following expansion of the interface condition (\ref{eq:int2}):
\begin{multline}
\label{eq:int2_expansion}
D_{1}\left[\sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k}w(y)^{k}}{k!}\frac{\partial^{k+1}u_{1}^{(i)}}{\partial x^{k+1}}(\ell,y,t) - w'(y)\sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k+1}w(y)^{k}}{k!}\frac{\partial^{k+1}u_{1}^{(i)}}{\partial x^{k}\partial y}(\ell,y,t)\right]\!\\ = D_{2}\left[\sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k}w(y)^{k}}{k!}\frac{\partial^{k+1}u_{2}^{(i)}}{\partial x^{k+1}}(\ell,y,t) - w'(y)\sum_{k=0}^{\infty}\sum_{i=0}^{\infty}\frac{\varepsilon^{i+k+1}w(y)^{k}}{k!}\frac{\partial^{k+1}u_{2}^{(i)}}{\partial x^{k}\partial y}(\ell,y,t)\right]\!.
\end{multline}
Here, for all $n\in\mathbb{N}$, the $\mathcal{O}(\varepsilon^{n})$ terms on both sides of (\ref{eq:int2_expansion}) are identified when $i + k = n$ in the first double sum (or equivalently, $i = n-k$ for $k = 0,\hdots,n$) and when $i + k + 1 = n$ in the second double sum (or equivalently, $i = n-k-1$ for $k = 0,\hdots,n-1$). Thus matching the $\mathcal{O}(\varepsilon^{n})$ terms on both sides of (\ref{eq:int2_expansion}) yields:
\begin{multline*}
D_{1}\left[\sum_{k=0}^{n}\frac{w(y)^{k}}{k!}\frac{\partial^{k+1}u_{1}^{(n-k)}}{\partial x^{k+1}}(\ell,y,t) - w'(y)\sum_{k=0}^{n-1}\frac{w(y)^{k}}{k!}\frac{\partial^{k+1}u_{1}^{(n-k-1)}}{\partial x^{k}\partial y}(\ell,y,t)\right]\\ = D_{2}\left[\sum_{k=0}^{n}\frac{w(y)^{k}}{k!}\frac{\partial^{k+1}u_{2}^{(n-k)}}{\partial x^{k+1}}(\ell,y,t) - w'(y)\sum_{k=0}^{n-1}\frac{w(y)^{k}}{k!}\frac{\partial^{k+1}u_{2}^{(n-k-1)}}{\partial x^{k}\partial y}(\ell,y,t)\right]\!.
\end{multline*}
Hence, we have derived the following interface condition satisfied by $u_{1}^{(n)}$ and $u_{2}^{(n)}$ for all $n\in\mathbb{N}$:
\begin{gather*}
D_{1}\frac{\partial u_{1}^{(n)}}{\partial x}(\ell,y,t) = D_{2}\frac{\partial u_{2}^{(n)}}{\partial x}(\ell,y,t) + g^{(n)}(y,t),
\end{gather*}
where, like $f^{(n)}(y,t)$, the function $g^{(n)}(y,t)$ depends on derivatives of $u_{1}^{(0)},\hdots,u_{1}^{(n-1)}$ and $u_{2}^{(0)},\hdots,u_{2}^{(n-1)}$:
\begin{multline}
\label{eq:gn}
g^{(n)}(y,t) = \sum_{k=1}^{n}\frac{w(y)^{k}}{k!}\left[D_{2}\frac{\partial^{k+1}u_{2}^{(n-k)}}{\partial x^{k+1}}(\ell,y,t)-D_{1}\frac{\partial^{k+1}u_{1}^{(n-k)}}{\partial x^{k+1}}(\ell,y,t)\right]\\ + w'(y)\sum_{k=0}^{n-1}\frac{w(y)^{k}}{k!}\left[D_{1}\frac{\partial^{k+1}u_{1}^{(n-k-1)}}{\partial x^{k}\partial y}(\ell,y,t) - D_{2}\frac{\partial^{k+1}u_{2}^{(n-k-1)}}{\partial x^{k}\partial y}(\ell,y,t)\right]\!.
\end{multline}
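The flux-matching identity used to derive $g^{(n)}$ can be checked in the same symbolic fashion. The sketch below (ours; it assumes SymPy, with arbitrary polynomial stand-ins for $w(y)$ and the terms $u^{(i)}$) expands $\partial_{x}u - \varepsilon w'(y)\,\partial_{y}u$ along $x=\ell+\varepsilon w(y)$ and compares $\varepsilon^{n}$ coefficients with the two sums appearing in (\ref{eq:gn}):

```python
import sympy as sp

eps, x, y = sp.symbols('eps x y')
ell = sp.Rational(1, 2)
w = y**2 + 1                              # sample perturbation w(y)
u = [x**3*y + x, x**2 + y**3, x*y]        # stand-ins for u^{(0)}, u^{(1)}, u^{(2)}

U = sum(eps**i * ui for i, ui in enumerate(u))
# normal-derivative combination, evaluated on x = ell + eps*w(y):
flux = sp.expand((sp.diff(U, x) - eps*sp.diff(w, y)*sp.diff(U, y))
                 .subs(x, ell + eps*w))

for n in range(len(u)):
    rhs = sum(w**k/sp.factorial(k) * sp.diff(u[n - k], x, k + 1).subs(x, ell)
              for k in range(n + 1))
    mixed = 0
    for k in range(n):                    # the w'(y) correction sum
        term = sp.diff(u[n - k - 1], y)
        if k:
            term = sp.diff(term, x, k)
        mixed += w**k/sp.factorial(k) * term.subs(x, ell)
    rhs -= sp.diff(w, y) * mixed
    assert sp.expand(flux.coeff(eps, n) - rhs) == 0
```

With polynomial data the expansion terminates, so the coefficient comparison is exact and confirms the sign pattern of the two sums.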
In summary, $u_{1}^{(n)}$ and $u_{2}^{(n)}$ satisfy the following partial differential equations and interface conditions:
\begin{gather}
\label{eq:ordern_pde1}
\frac{\partial u_{1}^{(n)}}{\partial t} = D_{1}\Delta u_{1}^{(n)},\quad 0 < x < \ell,\enspace 0 < y < H,\enspace t > 0,\\
\label{eq:ordern_pde2}
\frac{\partial u_{2}^{(n)}}{\partial t} = D_{2}\Delta u_{2}^{(n)},\quad \ell < x < L,\enspace 0 < y < H,\enspace t > 0,\\
\label{eq:ordern_int1}
u_{1}^{(n)}(\ell,y,t) = u_{2}^{(n)}(\ell,y,t) + f^{(n)}(y,t),\quad 0 < y < H,\enspace t > 0,\\
\label{eq:ordern_int2}
D_{1}\frac{\partial u_{1}^{(n)}}{\partial x}(\ell,y,t) = D_{2}\frac{\partial u_{2}^{(n)}}{\partial x}(\ell,y,t) + g^{(n)}(y,t),\quad 0 < y < H,\enspace t > 0,
\end{gather}
for all $n\in\mathbb{N}$. Here, the \revision{interface conditions} (\ref{eq:ordern_int1}) and (\ref{eq:ordern_int2}) can be interpreted as continuity of solution and flux (in the $x$ direction) at the unperturbed interface ($x = \ell$) with correction terms $f^{(n)}(y,t)$ and $g^{(n)}(y,t)$ due to the perturbed interface. The above analysis has converted the problem on the perturbed domain into a sequence of problems on the unperturbed domain; the tradeoff is that the interface conditions (\ref{eq:ordern_int1}) and (\ref{eq:ordern_int2}) now involve non-homogeneities.
\section{Test Case}
\label{sec:test_case}
So far we have not specified boundary conditions at the external boundaries ($x = 0, L$ and $y = 0,H$). To demonstrate the asymptotic expansion solution for equations (\ref{eq:pde1})--(\ref{eq:int2}), we now consider the specific case of the following initial and boundary conditions, motivated by those featuring in mathematical models of heat transfer in skin \cite{mcinerney_2019,simpson_2017} with heat applied at $x = 0$, (possible) heat loss at $x = L$ and zero heat flux at $y = 0,H$:
\begin{gather}
\label{eq:ic1}
u_{1}(x,y,0) = 0,\quad 0 < x < \ell,\enspace 0 < y < H,\\
\label{eq:ic2}
u_{2}(x,y,0) = 0,\quad \ell < x < L,\enspace 0 < y < H,\\
\label{eq:bc1}
u_{1}(0,y,t) = c_{0}(t),\quad \frac{\partial u_{2}}{\partial x}(L,y,t) = \revision{q_{L}}(t),\quad 0 < y < H,\enspace t > 0,\\
\label{eq:bc2a}
\frac{\partial u_{1}}{\partial y}(x,0,t) = 0,\quad\frac{\partial u_{1}}{\partial y}(x,H,t) = 0,\quad 0 < x < \ell + \varepsilon w(y),\enspace t > 0,\\
\label{eq:bc2b}
\frac{\partial u_{2}}{\partial y}(x,0,t) = 0,\quad\frac{\partial u_{2}}{\partial y}(x,H,t) = 0,\quad \ell + \varepsilon w(y) < x < L,\enspace t > 0.
\end{gather}
Substituting the expansions (\ref{eq:ansatz1}) and (\ref{eq:ansatz2}) into the above initial and boundary conditions (\ref{eq:ic1})--(\ref{eq:bc2b}) and matching the $\mathcal{O}(\varepsilon^{n})$ terms yields the appropriate initial and boundary conditions for $u_{1}^{(n)}$ and $u_{2}^{(n)}$. Combining these conditions with equations (\ref{eq:ordern_pde1})--(\ref{eq:ordern_int2}) yields the following initial-boundary value problem for all $n\in\mathbb{N}$:
\begin{gather}
\label{eq:ordern2_pde1}
\frac{\partial u_{1}^{(n)}}{\partial t} = D_{1}\Delta u_{1}^{(n)},\quad 0 < x < \ell,\enspace 0 < y < H,\enspace t > 0,\\
\label{eq:ordern2_pde2}
\frac{\partial u_{2}^{(n)}}{\partial t} = D_{2}\Delta u_{2}^{(n)},\quad \ell < x < L,\enspace 0 < y < H,\enspace t > 0,\\
\label{eq:ordern2_ic1}
u_{1}^{(n)}(x,y,0) = 0,\quad 0 < x < \ell,\enspace 0 < y < H,\\
\label{eq:ordern2_ic2}
u_{2}^{(n)}(x,y,0) = 0,\quad \ell < x < L,\enspace 0 < y < H,\\
\label{eq:ordern2_bc1}
u_{1}^{(n)}(0,y,t) = \begin{cases} c_{0}(t), & \text{if $n = 0$,}\\ 0, & \text{if $n\in\mathbb{N}^{+}$,}\end{cases}\quad \frac{\partial u_{2}^{(n)}}{\partial x}(L,y,t) = \begin{cases} \revision{q_{L}}(t), & \text{if $n = 0$,}\\ 0, & \text{if $n\in\mathbb{N}^{+}$,}\end{cases}\quad 0 < y < H,\enspace t > 0,\\
\label{eq:ordern2_bc2}
\frac{\partial u_{1}^{(n)}}{\partial y}(x,0,t) = 0,\quad\frac{\partial u_{1}^{(n)}}{\partial y}(x,H,t) = 0,\quad 0 < x < \ell,\enspace t > 0,\\
\label{eq:ordern2_bc3}
\frac{\partial u_{2}^{(n)}}{\partial y}(x,0,t) = 0,\quad\frac{\partial u_{2}^{(n)}}{\partial y}(x,H,t) = 0,\quad \ell < x < L,\enspace t > 0,\\
\label{eq:ordern2_int1}
u_{1}^{(n)}(\ell,y,t) = u_{2}^{(n)}(\ell,y,t) + f^{(n)}(y,t),\quad 0 < y < H,\enspace t > 0,\\
\label{eq:ordern2_int2}
D_{1}\frac{\partial u_{1}^{(n)}}{\partial x}(\ell,y,t) = D_{2}\frac{\partial u_{2}^{(n)}}{\partial x}(\ell,y,t) + g^{(n)}(y,t),\quad 0 < y < H,\enspace t > 0,
\end{gather}
where $\mathbb{N}^{+} := \{1,2,\hdots\}$. We remark that the initial-boundary value problem for $u_{1}^{(n)}$ and $u_{2}^{(n)}$ depends on $u_{1}^{(0)},u_{2}^{(0)},\hdots,u_{1}^{(n-1)},u_{2}^{(n-1)}$ (through $f^{(n)}(y,t)$ (\ref{eq:fn}) and $g^{(n)}(y,t)$ (\ref{eq:gn})) and therefore each problem must be solved for $n = 0,1,2,\hdots$ sequentially.
In this work, we solve the initial-boundary value problem (\ref{eq:ordern2_pde1})--(\ref{eq:ordern2_int2}) using the Laplace transform. Let $U_{1}^{(n)}(x,y,s) = \mathcal{L}\{u_{1}^{(n)}(x,y,t)\}$ and $U_{2}^{(n)}(x,y,s) = \mathcal{L}\{u_{2}^{(n)}(x,y,t)\}$ be the Laplace transformations of $u_{1}^{(n)}(x,y,t)$ and $u_{2}^{(n)}(x,y,t)$ with respect to $t$, respectively, where $s$ is the transformation variable. Taking the Laplace transform of (\ref{eq:ordern2_pde1})--(\ref{eq:ordern2_int2}) yields the boundary value problem:
\begin{gather}
\label{eq:ordern_pde1_lt}
sU_{1}^{(n)} = D_{1}\Delta U_{1}^{(n)},\quad 0 < x < \ell,\enspace 0 < y < H,\\
\label{eq:ordern_pde2_lt}
sU_{2}^{(n)} = D_{2}\Delta U_{2}^{(n)},\quad \ell < x < L,\enspace 0 < y < H,\\
\label{eq:ordern_bc1_lt}
U_{1}^{(n)}(0,y,s) = \begin{cases} C_{0}(s), & \text{if $n = 0$,}\\ 0, & \text{if $n\in\mathbb{N}^{+}$,}\end{cases}\quad \frac{\partial U_{2}^{(n)}}{\partial x}(L,y,s) = \begin{cases} \revision{Q_{L}}(s), & \text{if $n = 0$,}\\ 0, & \text{if $n\in\mathbb{N}^{+}$,}\end{cases}\quad 0 < y < H,\\
\label{eq:ordern_bc2_lt}
\frac{\partial U_{1}^{(n)}}{\partial y}(x,0,s) = 0,\quad\frac{\partial U_{1}^{(n)}}{\partial y}(x,H,s) = 0,\quad 0 < x < \ell,\\
\label{eq:ordern_bc3_lt}
\frac{\partial U_{2}^{(n)}}{\partial y}(x,0,s) = 0,\quad\frac{\partial U_{2}^{(n)}}{\partial y}(x,H,s) = 0,\quad \ell < x < L,\\
\label{eq:ordern_int1_lt}
U_{1}^{(n)}(\ell,y,s) = U_{2}^{(n)}(\ell,y,s) + F^{(n)}(y,s),\quad 0 < y < H,\\
\label{eq:ordern_int2_lt}
D_{1}\frac{\partial U_{1}^{(n)}}{\partial x}(\ell,y,s) = D_{2}\frac{\partial U_{2}^{(n)}}{\partial x}(\ell,y,s) + G^{(n)}(y,s),\quad 0 < y < H,
\end{gather}
for all $n\in\mathbb{N}$, where $C_{0}(s) = \mathcal{L}\{c_{0}(t)\}$, $\revision{Q_{L}}(s) = \mathcal{L}\{\revision{q_{L}}(t)\}$ and $F^{(n)}(y,s)$ and $G^{(n)}(y,s)$ are the Laplace transformations of $f^{(n)}(y,t)$ (\ref{eq:fn}) and $g^{(n)}(y,t)$ (\ref{eq:gn}):
\begin{gather}
\label{eq:Fn}
F^{(n)}(y,s) = \sum_{k=1}^{n}\frac{w(y)^{k}}{k!}\left[\frac{\partial^{k}U_{2}^{(n-k)}}{\partial x^{k}}(\ell,y,s)-\frac{\partial^{k}U_{1}^{(n-k)}}{\partial x^{k}}(\ell,y,s)\right]\!,\\
\nonumber
G^{(n)}(y,s) = \sum_{k=1}^{n}\frac{w(y)^{k}}{k!}\left[D_{2}\frac{\partial^{k+1}U_{2}^{(n-k)}}{\partial x^{k+1}}(\ell,y,s)-D_{1}\frac{\partial^{k+1}U_{1}^{(n-k)}}{\partial x^{k+1}}(\ell,y,s)\right]\\
\label{eq:Gn}
\hspace*{7em} + w'(y)\sum_{k=0}^{n-1}\frac{w(y)^{k}}{k!}\left[D_{1}\frac{\partial^{k+1}U_{1}^{(n-k-1)}}{\partial x^{k}\partial y}(\ell,y,s) - D_{2}\frac{\partial^{k+1}U_{2}^{(n-k-1)}}{\partial x^{k}\partial y}(\ell,y,s)\right]\!.
\end{gather}
In the following subsections we obtain expressions for $U_{1}^{(n)}$ and $U_{2}^{(n)}$, considering the cases of $n = 0$ and $n\in\mathbb{N}^{+}$ separately.
\subsection{Leading order term}
\label{sec:leading_order}
Consider the boundary value problem (\ref{eq:ordern_pde1_lt})--(\ref{eq:ordern_int2_lt}) for $n = 0$. Due to the boundary conditions (\ref{eq:ordern2_bc2}) and (\ref{eq:ordern2_bc3}) and the fact that $f^{(0)}(y,t) = g^{(0)}(y,t) = 0$, both $u_{1}^{(0)}$ and $u_{2}^{(0)}$ are independent of $y$, so we write $u_{1}^{(0)}(x,y,t) \equiv u_{1}^{(0)}(x,t)$ and $u_{2}^{(0)}(x,y,t) \equiv u_{2}^{(0)}(x,t)$ and hence $U_{1}^{(0)}(x,y,s) \equiv U_{1}^{(0)}(x,s)$ and $U_{2}^{(0)}(x,y,s) \equiv U_{2}^{(0)}(x,s)$. It follows that the boundary value problem (\ref{eq:ordern_pde1_lt})--(\ref{eq:ordern_int2_lt}) for $n = 0$ reduces to the following two-layer one-dimensional problem:
\begin{gather}
\label{eq:order0_pde1}
sU_{1}^{(0)} = D_{1}\frac{\partial^{2} U_{1}^{(0)}}{\partial x^{2}},\quad 0 < x < \ell,\\
\label{eq:order0_pde2}
sU_{2}^{(0)} = D_{2}\frac{\partial^{2} U_{2}^{(0)}}{\partial x^{2}},\quad \ell < x < L,\\
\label{eq:order0_bc1}
U_{1}^{(0)}(0,s) = C_{0}(s),\quad \frac{\partial U_{2}^{(0)}}{\partial x}(L,s) = \revision{Q_{L}}(s),\\
\label{eq:order0_int1}
U_{1}^{(0)}(\ell,s) = U_{2}^{(0)}(\ell,s),\\
\label{eq:order0_int2}
D_{1}\frac{\partial U_{1}^{(0)}}{\partial x}(\ell,s) = D_{2}\frac{\partial U_{2}^{(0)}}{\partial x}(\ell,s).
\end{gather}
To solve (\ref{eq:order0_pde1})--(\ref{eq:order0_int2}), we reformulate the problem by setting
\begin{align*}
V^{(0)}(s) := D_{2}\frac{\partial U_{2}^{(0)}}{\partial x}(\ell,s),
\end{align*}
which gives standard boundary value problems on each layer:
\bigskip
\noindent\textit{Layer 1:}
\begin{gather}
\label{eq:order0_layer1_pde1}
sU_{1}^{(0)} = D_{1}\frac{\partial^{2} U_{1}^{(0)}}{\partial x^{2}},\quad 0 < x < \ell,\\
\label{eq:order0_layer1_bc1}
U_{1}^{(0)}(0,s) = C_{0}(s),\quad D_{1}\frac{\partial U_{1}^{(0)}}{\partial x}(\ell,s) = V^{(0)}(s).
\end{gather}
\textit{Layer 2:}
\begin{gather}
\label{eq:order0_layer2_pde1}
sU_{2}^{(0)} = D_{2}\frac{\partial^{2} U_{2}^{(0)}}{\partial x^{2}},\quad \ell < x < L,\\
\label{eq:order0_layer2_bc1}
D_{2}\frac{\partial U_{2}^{(0)}}{\partial x}(\ell,s) = V^{(0)}(s),\quad \frac{\partial U_{2}^{(0)}}{\partial x}(L,s) = \revision{Q_{L}}(s).
\end{gather}
The boundary value problems (\ref{eq:order0_layer1_pde1})--(\ref{eq:order0_layer1_bc1}) and (\ref{eq:order0_layer2_pde1})--(\ref{eq:order0_layer2_bc1}) involve second-order constant coefficient differential equations and thus can be solved using standard techniques to give:
\begin{align}
\label{eq:U10}
U_{1}^{(0)}(x,s) &= \left[\gamma_{11}(s) + \gamma_{12}(s)V^{(0)}(s)\right]\exp(\mu_{1}(s)x) + \left[\gamma_{13}(s) + \gamma_{14}(s)V^{(0)}(s)\right]\exp(-\mu_{1}(s)x),\\
\label{eq:U20}
U_{2}^{(0)}(x,s) &= \left[\gamma_{21}(s) + \gamma_{22}(s)V^{(0)}(s)\right]\exp(\mu_{2}(s)x) + \left[\gamma_{23}(s) + \gamma_{24}(s)V^{(0)}(s)\right]\exp(-\mu_{2}(s)x),
\end{align}
where all variables (except $V^{(0)}(s)$) are defined in the Appendix and $V^{(0)}(s)$ is identified by enforcing the interface condition (\ref{eq:order0_int1}), which was absent in the reformulated problems (\ref{eq:order0_layer1_pde1})--(\ref{eq:order0_layer1_bc1}) and (\ref{eq:order0_layer2_pde1})--(\ref{eq:order0_layer2_bc1}), yielding:
\begin{align*}
V^{(0)}(s) = \frac{\gamma_{11}(s)\exp(\mu_{1}(s)\ell) + \gamma_{13}(s)\exp(-\mu_{1}(s)\ell) - \gamma_{21}(s)\exp(\mu_{2}(s)\ell) - \gamma_{23}(s)\exp(-\mu_{2}(s)\ell)}{\gamma_{22}(s)\exp(\mu_{2}(s)\ell) + \gamma_{24}(s)\exp(-\mu_{2}(s)\ell) - \gamma_{12}(s)\exp(\mu_{1}(s)\ell) - \gamma_{14}(s)\exp(-\mu_{1}(s)\ell)}.
\end{align*}
In summary, with $V^{(0)}(s)$ now known, both $U_{1}^{(0)}(x,s)$ (\ref{eq:U10}) and $U_{2}^{(0)}(x,s)$ (\ref{eq:U20}) are fully identified.
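Since the constants $\gamma_{ij}(s)$ and $\mu_{j}(s)$ are relegated to the Appendix, a quick independent check of the leading-order Laplace-domain solution is to assemble the four boundary and interface conditions (\ref{eq:order0_bc1})--(\ref{eq:order0_int2}) directly as a $4\times 4$ linear system for the exponential coefficients. The sketch below is our own illustration (it assumes NumPy; the function and variable names are ours), not the paper's implementation:

```python
import numpy as np

def laplace_leading_order(s, D1, D2, ell, L, C0s, QLs):
    """Coefficients (a1, b1, a2, b2) of
       U1 = a1*exp(m1*x) + b1*exp(-m1*x) on (0, ell),
       U2 = a2*exp(m2*x) + b2*exp(-m2*x) on (ell, L),
    satisfying U1(0) = C0s, U2'(L) = QLs, U1(ell) = U2(ell) and
    D1*U1'(ell) = D2*U2'(ell), with m_j = sqrt(s/D_j)."""
    m1, m2 = np.sqrt(s/D1), np.sqrt(s/D2)
    e1p, e1m = np.exp(m1*ell), np.exp(-m1*ell)
    e2p, e2m = np.exp(m2*ell), np.exp(-m2*ell)
    A = np.array([
        [1.0, 1.0, 0.0, 0.0],                            # U1(0) = C0(s)
        [0.0, 0.0, m2*np.exp(m2*L), -m2*np.exp(-m2*L)],  # U2'(L) = Q_L(s)
        [e1p, e1m, -e2p, -e2m],                          # continuity at ell
        [D1*m1*e1p, -D1*m1*e1m, -D2*m2*e2p, D2*m2*e2m],  # flux continuity
    ])
    b = np.array([C0s, QLs, 0.0, 0.0])
    return np.linalg.solve(A, b)
```

As a consistency test, when $D_{1}=D_{2}$ and $Q_{L}=0$ the two layers merge and the solution reduces to the single-layer form $C_{0}(s)\cosh(\mu(L-x))/\cosh(\mu L)$ with $\mu=\sqrt{s/D_{1}}$.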
\subsection{Higher order terms}
Consider the boundary value problem (\ref{eq:ordern_pde1_lt})--(\ref{eq:ordern_int2_lt}) for $n\in\mathbb{N}^{+}$. To solve it, we again reformulate the problem by setting
\begin{align*}
V^{(n)}(y,s) = D_{2}\frac{\partial U_{2}^{(n)}}{\partial x}(\ell,y,s),
\end{align*}
which gives standard boundary value problems on each layer for all $n\in\mathbb{N}^{+}$:
\bigskip
\noindent\textit{Layer 1:}
\begin{gather}
\label{eq:ordern_layer1_pde1}
sU_{1}^{(n)} = D_{1}\Delta U_{1}^{(n)},\quad 0 < x < \ell,\enspace 0 < y < H,\\
\label{eq:ordern_layer1_bc1}
U_{1}^{(n)}(0,y,s) = 0,\quad D_{1}\frac{\partial U_{1}^{(n)}}{\partial x}(\ell,y,s) = V^{(n)}(y,s) + G^{(n)}(y,s),\quad 0 < y < H,\\
\label{eq:ordern_layer1_bc2}
\frac{\partial U_{1}^{(n)}}{\partial y}(x,0,s) = 0,\quad \frac{\partial U_{1}^{(n)}}{\partial y}(x,H,s) = 0,\quad 0 < x < \ell.
\end{gather}
\textit{Layer 2:}
\begin{gather}
\label{eq:ordern_layer2_pde1}
sU_{2}^{(n)} = D_{2}\Delta U_{2}^{(n)},\quad \ell < x < L,\enspace 0 < y < H,\\
\label{eq:ordern_layer2_bc1}
D_{2}\frac{\partial U_{2}^{(n)}}{\partial x}(\ell,y,s) = V^{(n)}(y,s),\quad \frac{\partial U_{2}^{(n)}}{\partial x}(L,y,s) = 0,\quad 0 < y < H,\\
\label{eq:ordern_layer2_bc2}
\frac{\partial U_{2}^{(n)}}{\partial y}(x,0,s) = 0,\quad \frac{\partial U_{2}^{(n)}}{\partial y}(x,H,s) = 0,\quad \ell < x < L.
\end{gather}
Both boundary value problems (\ref{eq:ordern_layer1_pde1})--(\ref{eq:ordern_layer1_bc2}) and (\ref{eq:ordern_layer2_pde1})--(\ref{eq:ordern_layer2_bc2}) can be solved using the standard techniques of separation of variables and eigenfunction expansion to give:
\begin{align}
\label{eq:U1n}
U_{1}^{(n)}(x,y,s) &= \sum_{m=0}^{\infty} \alpha_{m}^{(n)}(s)\sinh(\mu_{1,m}(s)x)\cos(\lambda_{m}y),\\
\label{eq:U2n}
U_{2}^{(n)}(x,y,s) &= \sum_{m=0}^{\infty} \beta_{m}^{(n)}(s)\cosh(\mu_{2,m}(s)[x-L])\cos(\lambda_{m}y),
\end{align}
where
\begin{gather*}
\alpha_{m}^{(n)}(s) = \widetilde{\alpha}_{m}^{(n)}(s)\widetilde{V}_{m}^{(n)}(s),\quad\beta_{m}^{(n)}(s) = \widetilde{\beta}_{m}^{(n)}(s)[\widetilde{V}_{m}^{(n)}(s) - \widetilde{G}_{m}^{(n)}(s)],\\\widetilde{V}_{m}^{(n)}(s) = \int_{0}^{H}V^{(n)}(y,s)\cos(\lambda_{m}y)\,\text{d}y,\quad \widetilde{G}_{m}^{(n)}(s) = \int_{0}^{H}G^{(n)}(y,s)\cos(\lambda_{m}y)\,\text{d}y,
\end{gather*}
and all remaining variables are defined in the Appendix. Here, we see that $V^{(n)}(y,s)$ is not explicitly required, only $\widetilde{V}_{m}^{(n)}(s)$ for $m\in\mathbb{N}$, the latter of which is identified by enforcing that $U_{1}^{(n)}$ (\ref{eq:U1n}) and $U_{2}^{(n)}$ (\ref{eq:U2n}) satisfy the interface condition (\ref{eq:ordern_int1_lt}) and using orthogonality of the eigenfunctions $\cos(\lambda_{m}y)$ to give:
\begin{align*}
\widetilde{V}_{m}^{(n)}(s) = \begin{cases} \dfrac{\frac{1}{H}\widetilde{F}_{m}^{(n)}(s) - \widetilde{\beta}_{m}^{(n)}(s)\widetilde{G}_{m}^{(n)}(s)\cosh(\mu_{2,m}(s)[\ell-L])}{\widetilde{\alpha}_{m}^{(n)}(s)\sinh(\mu_{1,m}(s)\ell) - \widetilde{\beta}_{m}^{(n)}(s)\cosh(\mu_{2,m}(s)[\ell-L])}, & \text{if $m = 0$},\\ \dfrac{\frac{2}{H}\widetilde{F}_{m}^{(n)}(s) - \widetilde{\beta}_{m}^{(n)}(s)\widetilde{G}_{m}^{(n)}(s)\cosh(\mu_{2,m}(s)[\ell-L])}{\widetilde{\alpha}_{m}^{(n)}(s)\sinh(\mu_{1,m}(s)\ell) - \widetilde{\beta}_{m}^{(n)}(s)\cosh(\mu_{2,m}(s)[\ell-L])}, & \text{if $m\in\mathbb{N}^{+}$},\end{cases}
\end{align*}
where
\begin{gather*}
\widetilde{F}_{m}^{(n)}(s) = \int_{0}^{H}F^{(n)}(y,s)\cos(\lambda_{m}y)\,\text{d}y.
\end{gather*}
In summary, with $\widetilde{V}_{m}^{(n)}(s)$ now known for all $m\in\mathbb{N}$, both $U_{1}^{(n)}$ (\ref{eq:U1n}) and $U_{2}^{(n)}$ (\ref{eq:U2n}) are fully identified. The last thing to address is computation of the integral expressions for $\widetilde{F}_{m}^{(n)}(s)$ and $\widetilde{G}_{m}^{(n)}(s)$, which involve the functions $F^{(n)}(y,s)$ (\ref{eq:Fn}) and $G^{(n)}(y,s)$ (\ref{eq:Gn}). The derivatives appearing in $F^{(n)}(y,s)$ (\ref{eq:Fn}) and $G^{(n)}(y,s)$ (\ref{eq:Gn}) are calculated exactly and given in the Appendix. While the integral expressions for $\widetilde{F}_{m}^{(n)}(s)$ and $\widetilde{G}_{m}^{(n)}(s)$ can be calculated exactly for specific choices of $w(y)$, we use MATLAB's \verb"integral" function to keep our code as general as possible.
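For context, $\widetilde{V}_{m}^{(n)}$, $\widetilde{F}_{m}^{(n)}$ and $\widetilde{G}_{m}^{(n)}$ are ordinary Fourier-cosine integrals over $[0,H]$. Assuming the Neumann eigenvalues $\lambda_{m} = m\pi/H$ (as the $\cos(\lambda_{m}y)$ eigenfunctions with the zero-flux conditions at $y = 0,H$ suggest; the precise definitions are in the Appendix), any standard quadrature suffices. A hedged Python stand-in for MATLAB's \verb"integral", using a composite Simpson rule and an arbitrary test integrand of our own choosing, is:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

def cosine_coefficient(f, m, H):
    """Approximate int_0^H f(y) cos(lambda_m y) dy with lambda_m = m*pi/H."""
    lam = m * math.pi / H
    return simpson(lambda y: f(y) * math.cos(lam * y), 0.0, H)

# Stand-in integrand f(y) = cos(3*pi*y/H): by orthogonality the
# coefficient is H/2 for m = 3 and 0 for every other m.
H = 1.0
f = lambda y: math.cos(3 * math.pi * y / H)
print(cosine_coefficient(f, 3, H))  # ~ 0.5 (= H/2)
```

The orthogonality of the $\cos(\lambda_{m}y)$ family, used above to isolate $\widetilde{V}_{m}^{(n)}(s)$, is exactly what this check exercises.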
\subsection{Inverse Laplace transform}
\label{sec:inverse_laplace_transform}
With $U_{1}^{(0)}$, $U_{2}^{(0)}$, $U_{1}^{(n)}$ for $n\in\mathbb{N}^{+}$, and $U_{2}^{(n)}$ for $n\in\mathbb{N}^{+}$ now known, the final step is to transform the solution from the Laplace domain back to the time domain. \revision{This yields the following perturbation solution of (\ref{eq:pde1})--(\ref{eq:int2}) and (\ref{eq:ic1})--(\ref{eq:bc2b}):}
\begin{align}
\label{eq:u1}
u_{1}(x,y,t) = \mathcal{L}^{-1}\left\{U_{1}^{(0)}(x,s)\right\} + \sum_{n=1}^{N-1} \varepsilon^{n}\mathcal{L}^{-1}\left\{U_{1}^{(n)}(x,y,s)\right\}\!,\\
\label{eq:u2}
u_{2}(x,y,t) = \mathcal{L}^{-1}\left\{U_{2}^{(0)}(x,s)\right\} + \sum_{n=1}^{N-1} \varepsilon^{n}\mathcal{L}^{-1}\left\{U_{2}^{(n)}(x,y,s)\right\}\!,
\end{align}
\revision{when truncating the expansions at a finite number of terms, $N$. Due to the complicated form of $U_{1}^{(n)}$ (\ref{eq:U1n}) and $U_{2}^{(n)}$ (\ref{eq:U2n}) and the fact that they depend recursively on the previous terms $U_{1}^{(0)},\hdots,U_{1}^{(n-1)}$ and $U_{2}^{(0)},\hdots,U_{2}^{(n-1)}$ (through $F^{(n)}(y,t)$ (\ref{eq:Fn}) and $G^{(n)}(y,t)$ (\ref{eq:Gn})), analytically inverting the Laplace transforms in (\ref{eq:u1}) and (\ref{eq:u2}) is very challenging.} To address this, we use a numerical inverse Laplace transform approximation by \citet{trefethen_2006}, which has frequently been used for other similar heterogeneous transport problems (e.g. \cite{carr_2016,ilic_2010,carr_2021}).
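For readers unfamiliar with numerical Laplace inversion: the method of \citet{trefethen_2006} relies on tabulated best rational approximations, so rather than reproduce it here we sketch the classical Gaver--Stehfest algorithm, a simpler real-valued alternative. This is a hedged stand-in, not the algorithm used in the paper, illustrated on a transform with a known inverse:

```python
import math

def stehfest(F, t, N=14):
    """Gaver-Stehfest numerical inverse Laplace transform (N even).

    Approximates f(t) from its Laplace transform F(s); adequate for
    smooth, non-oscillatory f, with roundoff limiting large N.
    """
    ln2 = math.log(2.0)
    total = 0.0
    for i in range(1, N + 1):
        c = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            c += (k ** (N // 2) * math.factorial(2 * k) /
                  (math.factorial(N // 2 - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2 * k - i)))
        total += (-1) ** (i + N // 2) * c * F(i * ln2 / t)
    return ln2 / t * total

# Known transform pair: F(s) = 1/(s+1)  <=>  f(t) = exp(-t).
print(stehfest(lambda s: 1.0 / (s + 1.0), 1.0))  # ~ exp(-1) = 0.36788
```

Both methods share the structure relevant here: the inverse at a given $t$ is a weighted sum of evaluations of $F(s)$ at a handful of points $s$, which is why only the Laplace-domain expressions (\ref{eq:U10})--(\ref{eq:U20}) and (\ref{eq:U1n})--(\ref{eq:U2n}) are needed.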
\revision{During our numerical investigations, we found that both $\mathcal{L}^{-1}\{U_{1}^{(n)}(x,y,s)\}$ and $\mathcal{L}^{-1}\{U_{2}^{(n)}(x,y,s)\}$ tend to increase in magnitude as $n$ increases. However, provided $\varepsilon$ is sufficiently small this is balanced out by the increasing powers of $\varepsilon$ in the expansions (\ref{eq:u1}) and (\ref{eq:u2}). It is also important to note here that the expansions (\ref{eq:u1}) and (\ref{eq:u2}) converge in the limit as $\varepsilon\rightarrow 0$ but not necessarily in the limit as $N\rightarrow\infty$. This means that for a fixed value of $\varepsilon$, increasing $N$ does not necessarily improve the accuracy of the perturbation solution. As we will see in the next section, the usefulness comes from the fact that expansions (\ref{eq:u1}) and (\ref{eq:u2}) produce accurate approximations to the solution for small values of $N$.}
\revision{\section{Results}}
\label{sec:results}
We now compare our semi-analytical perturbation solution (developed in Section \ref{sec:test_case}) to a numerical solution obtained using a standard finite volume spatial discretisation (described in detail in previous work \cite{pontrelli_2020,simpson_2021}). In all results, we compare both solutions at $t = 0.2$ for $D_{1} = 1$, $D_{2} = 0.01$, $L = H = 1$, $\ell = 0.5$, \revision{$c_{0}(t) = 1$, $q_{L}(t) = 0$} and four different combinations of $\varepsilon$ and $w(y)$. In all cases, the perturbation solution is computed using the first $30$ terms in the eigenfunction expansions (\ref{eq:U1n})--(\ref{eq:U2n}) and (\ref{eq:diffU1n})--(\ref{eq:diffU2n}). Both the perturbation and numerical solutions are evaluated at the vertices of an unstructured triangular mesh conforming to the perturbed domain (Figure \ref{fig:skin}(b)). Here, vertices are positioned along the interface at $x = \ell + \varepsilon w(y)$ and each triangular element is located entirely within either region $1$ or $2$. All meshes are generated using GMSH \cite{gmsh_2009} with refinement controlled by setting a mesh element size of 0.01, which yields a mesh consisting of approximately 12,000 nodes and 24,000 elements for all geometries tested. Complete details of our code implementation and experiments can be found in our MATLAB code available at \href{https://github.com/elliotcarr/Carr2022a}{https://github.com/elliotcarr/Carr2022a}.
Results in Figure \ref{fig:results1} show the comparison between the perturbation and numerical solutions when using the first five terms in the perturbation expansions (\ref{eq:u1})--(\ref{eq:u2}). From these plots, it is evident that the perturbation solution captures the solution behaviour remarkably well and is in good agreement with the numerical solution in all cases. Figures \ref{fig:results1}(a)--(c) show results for the unperturbed problem computed by setting $\varepsilon=w(y) = 0$. These results highlight the best-case scenario: \revision{the leading order term in the perturbation solution (section \ref{sec:leading_order}) provides an exact solution to the problem in the Laplace domain and thus any discrepancies between the perturbation and numerical solutions are fully explained by a combination of (i) the approximation error incurred from numerical inversion of the Laplace transforms $u_{1}^{(0)}(x,t) = \mathcal{L}^{-1}\{U_{1}^{(0)}(x,s)\}$ and $u_{2}^{(0)}(x,t) = \mathcal{L}^{-1}\{U_{2}^{(0)}(x,s)\}$ and (ii) the spatial/temporal discretisation error associated with the finite volume method}. Comparing Figures \ref{fig:results1}(f)(i)(l) to Figure \ref{fig:results1}(c), we see that differences between the perturbation and numerical solutions are comparable to those for the unperturbed domain, which is an encouraging sign for the accuracy of the higher order terms, $u_{1}^{(n)}$ and $u_{2}^{(n)}$ ($n = 1,\hdots,4$), in the asymptotic expansions.
\begin{figure}[!t]
\centering
\includegraphics[width=0.97\textwidth]{Figures/Case21.pdf}
\includegraphics[width=0.97\textwidth]{Figures/Case22.pdf}
\includegraphics[width=0.97\textwidth]{Figures/Case23.pdf}
\includegraphics[width=0.97\textwidth]{Figures/Case24.pdf}
\caption{Perturbation solution (first column -- (a)(d)(g)(j)), numerical solution (second column -- (b)(e)(h)(k)) and absolute difference between the perturbation and numerical solutions (third column -- (c)(f)(i)(l)) at $t = 0.2$ for four different choices of the interface $x = 0.5 + \varepsilon w(y)$ (a)--(c) $\varepsilon = 0$, $w(y) = 0$, (d)--(f) $\varepsilon = 0.05$, $w(y) = y$, (g)--(i) $\varepsilon = 0.05$, $w(y) = \sin(\pi y)$ and (j)--(l) $\varepsilon = 0.02$, $w(y) = \sin(7\pi y)$. The semi-analytical perturbation solution is computed using the first $30$ terms in the eigenfunction expansions (\ref{eq:U1n})--(\ref{eq:U2n}) and (\ref{eq:diffU1n})--(\ref{eq:diffU2n}) and the first five terms in the perturbation expansions (\ref{eq:u1})--(\ref{eq:u2}). Colormaps available from \cite{cobeldick_2021}.}
\label{fig:results1}
\end{figure}
As is the case for all perturbation methods, the accuracy of our perturbation solution depends on the value of $\varepsilon$ and the number of terms, $N$, taken in the expansions (\ref{eq:u1})--(\ref{eq:u2}). Results in Table \ref{tab:results} provide the maximum absolute difference between the perturbation and numerical solutions (across all vertices in the mesh) for all 45 combinations of $w(y) = y,\sin(\pi y),\sin(7\pi y)$, $\varepsilon = 0.01,0.02,0.05$ and $N = 1,2,3,4,5$ terms in the expansions (\ref{eq:u1})--(\ref{eq:u2}) \revision{with the specific case of $w(y) = \sin(7\pi y)$ and $N = 5$ shown visually in Figure \ref{fig:results2} for $\varepsilon = 0.01,0.02,0.05$.} As expected the match between the perturbation and numerical solutions \revision{improves when $\varepsilon$ is decreased but does not necessarily improve when $N$ is increased, a common feature of perturbation solutions \cite{holmes_2013}.} These results also highlight that care must be taken when using the perturbation solution as it can be unreliable if $\varepsilon$ is too large, as evident in \revision{Figure \ref{fig:results2}(g)--(i) where the accuracy of the solution has deteriorated around the interface. Finally, we} remark also that the perturbation and numerical solutions compared similarly well at later times, for example, a maximum absolute difference of \num{6.97e-03} was recorded at $t = 1$ for $w(y) = \sin(7\pi y)$, $\varepsilon = 0.02$ and $N = 5$ compared with \num{1.13e-02} at $t = 0.2$ (as per Table \ref{tab:results}).
\begin{table}[!t]
\def0.8{0.8}
\centering
\begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}cllll}
$N$ & $\varepsilon$ & $w(y) = y$ & $w(y) = \sin(\pi y)$ & $w(y) = \sin(7\pi y)$\\
\hline
& $0.01$ & \num{1.35e-01} & \num{1.23e-01} & \num{1.62e-01}\\
1 & $0.02$ & \num{2.29e-01} & \num{2.24e-01} & \num{3.35e-01}\\
& $0.05$ &\num{4.90e-01} & \num{4.20e-01} & \num{8.90e-01}\\
\hline
& $0.01$ & \num{9.49e-03} & \num{9.07e-03} & \num{2.96e-02}\\
2 & $0.02$ & \num{3.67e-02} & \num{3.50e-02} & \num{1.32e-01}\\
& $0.05$ & \num{1.97e-01} & \num{1.85e-01} & \num{1.47e+00}\\
\hline
& $0.01$ & \num{9.80e-04} & \num{8.93e-04} & \num{5.83e-03}\\
3 & $0.02$ & \num{2.88e-03} & \num{2.84e-03} & \num{4.03e-02}\\
& $0.05$ & \num{4.36e-02} & \num{4.05e-02} & \num{1.92e+00}\\
\hline
& $0.01$ & \num{7.17e-04} & \num{8.82e-04} & \num{5.41e-03}\\
4 & $0.02$ & \num{1.01e-03} & \num{1.42e-03} & \num{2.17e-02}\\
& $0.05$ & \num{7.22e-03} & \num{6.61e-03} & \num{3.27e+00}\\
\hline
& $0.01$ & \num{7.07e-04} & \num{8.82e-04} & \num{5.43e-03}\\
5 & $0.02$ & \num{8.11e-04} & \num{1.42e-03} & \num{1.13e-02}\\
& $0.05$ & \num{6.48e-03} & \num{3.90e-03} & \num{1.27e+01}\\
\hline
\end{tabular*}
\caption{Maximum absolute difference between the perturbation and numerical solutions (across all vertices in the mesh) at $t = 0.2$ for \revision{different} choices of $\varepsilon$, $w(y)$ and number of terms, $N$, taken in the perturbation expansions (\ref{eq:u1})--(\ref{eq:u2}). In all cases, the perturbation solution is computed using the first $30$ terms in the eigenfunction expansions (\ref{eq:U1n})--(\ref{eq:U2n}) and (\ref{eq:diffU1n})--(\ref{eq:diffU2n}). For reference, a maximum absolute difference of \num{6.81e-04} was recorded for the case of $\varepsilon = w(y) = 0$ shown in Figure \ref{fig:results1}(a)--(c).}
\label{tab:results}
\end{table}
\begin{figure}[p]
\centering
\includegraphics[width=0.97\textwidth]{Figures/Case31.pdf}
\includegraphics[width=0.97\textwidth]{Figures/Case32.pdf}
\includegraphics[width=0.97\textwidth]{Figures/Case33.pdf}
\caption{Perturbation solution (first column -- (a)(d)(g)), numerical solution (second column -- (b)(e)(h)) and absolute difference between the perturbation and numerical solutions (third column -- (c)(f)(i)) at $t = 0.2$ for $w(y) = \sin(7\pi y)$ and three different choices of the perturbation parameter (a)--(c) $\varepsilon = 0.01$ (d)--(f) $\varepsilon = 0.02$ (g)--(i) $\varepsilon = 0.05$. The semi-analytical perturbation solution is computed using the first $30$ terms in the eigenfunction expansions (\ref{eq:U1n})--(\ref{eq:U2n}) and (\ref{eq:diffU1n})--(\ref{eq:diffU2n}) and the first five terms in the perturbation expansions (\ref{eq:u1})--(\ref{eq:u2}). In (g) white shading indicates regions where the solution falls outside of the range $[0,1]$. Colormaps available from \cite{cobeldick_2021}.}
\label{fig:results2}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In summary, we have developed a perturbation solution for the problem of time-dependent diffusion across a perturbed interface separating two finite regions of distinct diffusivity. Analogous to similar problems on perturbed domains with constant diffusivity, our analysis shows that each term in the asymptotic expansion satisfies an initial-boundary value problem on the unperturbed domain subject to interface conditions involving the previously determined terms in the asymptotic expansion. Demonstration of the perturbation solution was carried out for a specific, practically-relevant set of initial and boundary conditions and several choices for the perturbed interface with reported results shown to be in good agreement with a standard numerical solution obtained via finite volume discretisation.
The developed solutions expand the suite of solutions for transient diffusion problems and provide analytical insight into important practical problems arising in heat conduction and groundwater contamination. Our perturbation method presented in Sections \ref{sec:solution_method} and \ref{sec:test_case} is quite general, allowing for different choices of $\varepsilon$, $w(y)$, $\ell$, $D_{1}$, $D_{2}$, $L$ and $H$ and arbitrary numbers of terms in the asymptotic and eigenfunction expansions. \revision{Although the solutions are semi-analytical due to the application of a numerical inverse Laplace transform, they retain the desirable property of being closed-form expressions that can be evaluated at any point in continuous space and time. This property distinguishes our semi-analytical solutions from standard numerical solutions that discretise the governing equations in space and time and require small temporal and spatial discretisation step sizes to ensure sufficient accuracy.}
The semi-analytical solution presented in Section \ref{sec:test_case} is limited to a specific set of boundary conditions; however, extension to other (perhaps more sophisticated) boundary conditions is fairly straightforward given that the initial-boundary value problem (for each term in the asymptotic expansion) applies on the unperturbed domain. Possible avenues for future work include perturbing the boundary at $x = 0$ or the boundary at $x = L$ instead of the interface, considering a three-layer problem with one or two perturbed interfaces, or treating more complex governing equations such as diffusion-decay equations.
\section*{Acknowledgements}
We acknowledge funding from Queensland University of Technology's (QUT) Vacation Research Experience Scheme (VRES), which provided DJO with a stipend to undertake this research over the 2020--2021 Australian summer. \revision{We thank Professor Mark McGuinness and one other anonymous referee for their helpful suggestions.}
% Source metadata: arXiv:2105.10116, ``Approximate analytical solution for transient heat and mass transfer across an irregular interface", Biological Physics (physics.bio-ph), https://arxiv.org/abs/2105.10116, retrieved 2022-01-12.
% Source metadata: arXiv:1808.06569, ``Splitter Theorems for Graph Immersions", https://arxiv.org/abs/1808.06569.
% Abstract: We establish splitter theorems for graph immersions for two families of graphs, $k$-edge-connected graphs, with $k$ even, and 3-edge-connected, internally 4-edge-connected graphs. As a corollary, we prove that every $3$-edge-connected, internally $4$-edge-connected graph on at least seven vertices that immerses $K_5$ also has $K_{3,3}$ as an immersion.
\section{Introduction}
\label{intro}
Throughout the paper, we use standard definitions and notation for graphs as in \cite{MR1367739}. Let $G$ be a graph with a certain connectivity. One natural question is whether there is a way to ``reduce" $G$ while preserving the same connectivity, and possibly also the presence of a particular graph ``contained" in $G$. Broadly speaking, in answering such questions, two types of theorems arise. In the first type, {\it chain theorems}, one tries to ``reduce" the graph down to some basic starting point, which is typically a particular small graph, or a small family of graphs. The second type consists of {\it splitter theorems}. Here, there is the extra information that another graph $H$ is properly ``contained" in $G$, and both have a certain connectivity. The idea is then to ``reduce" $G$ to a graph ``one step smaller", while preserving the connectivity and the ``containment" of $H$.
The best known such results are the ones in the world where the connectivity concerned is vertex-connectivity, the ``reduction" is an edge-contraction or edge-deletion, and the ``containment" relation is that of minor. In this realm, the first chain result is due to Tutte, who showed that if a graph $G$ is $2$-connected, then for every edge $e\in E(G)$, either $G\setminus e$ or $G/e$ is $2$-connected. The next result, also due to Tutte, is a classical reduction theorem of the chain variety. Here, a {\it wheel} is a graph formed by connecting a single vertex to all vertices of a cycle.
\begin{thm}[Tutte \cite{MR0140094}]
If $G$ is a simple $3$-connected graph, then there exists $e\in E(G)$ such that either $G\setminus e$ or $G/e$ is simple and $3$-connected, unless $G$ is a wheel.
\end{thm}
Another classical result of chain type is that every simple 3-connected graph other than $K_4$ has an edge whose contraction results in a $3$-connected graph, see \cite{MR1373655}.
There is a wide body of literature sharpening these results and extending them to other connectivity, see, for instance \cite{MR1892444, MR2467821}.
Reduction theorems of splitter variety for graph minors started with a result for $2$-connected graphs, independently discovered by Brylawski \cite{MR0309764}, and Seymour \cite{MR0439663}.
This result asserts that if $G, H$ are $2$-connected graphs, and $H$ is a proper minor of $G$, then there is an edge $e\in E(G)$ such that $G\setminus e$ or $G/e$ is $2$-connected, and has $H$ as a minor. The more famous splitter theorem is Seymour's Splitter Theorem for $3$-connected graphs, which asserts:
\begin{thm}[Seymour \cite{MR579077}]
Let $G, H$ be $3$-connected simple graphs, where $H$ is a proper minor of $G$, and $|E(H)|\ge 4$. Also, suppose if $H$ is a wheel, then $G$ has no larger wheel minor. Then $G$ has an edge $e$ such that either $G\setminus e$ or $G/e$ is simple and $3$-connected, and contains $H$ as a minor.
\end{thm}
There is an extremely wide body of literature extending these results to other connectivity and the realm of matroids, and binary matroids, see, for example, \cite{MR3641803,MR2900811,costalonga2018splitter}.
In this paper, however, we are not concerned with vertex-connectivity and minors, but rather the world of edge-connectivity and a less-explored type of containment, {\it immersion}. A pair of distinct edges $xy, yz$ with a common endpoint $y$ is said to be {\it split off at} $y$ if we delete these edges and add a new edge $xz$. We say a graph $G$ {\it immerses} $H$, or {\it has an $H$-immersion}, and write $G\succ_{im}H$, if a subgraph of $G$ can be transformed to a graph isomorphic to $H$ through a series of splitting pairs of edges.\footnote{This is sometimes called {\it weak immersion}.} Also, we say a vertex $v \in V(G)$ of even degree is {\it completely split} if $d(v)/2$ consecutive splits are performed at $v$, and then the resulting isolated vertex $v$ is deleted.
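Operationally, splitting off is simple multiset bookkeeping on the edges. A minimal sketch (the multigraph representation and function names are our own) is:

```python
from collections import Counter

def split_off(G, x, y, z):
    """Split off the pair of edges xy, yz at y: delete both, add xz.

    G is a Counter mapping sorted vertex pairs to edge multiplicities
    (a loop at v is stored as the pair (v, v))."""
    e1, e2 = tuple(sorted((x, y))), tuple(sorted((y, z)))
    if e1 == e2:
        if G[e1] < 2:
            raise ValueError("need two parallel edges to split off")
    elif G[e1] < 1 or G[e2] < 1:
        raise ValueError("edges to split off are missing")
    G[e1] -= 1
    G[e2] -= 1
    G[tuple(sorted((x, z)))] += 1
    for e in (e1, e2):
        if e in G and G[e] <= 0:
            del G[e]
    return G

def degree(G, v):
    # A loop at v contributes 2 to the degree of v.
    return sum(m * ((a == v) + (b == v)) for (a, b), m in G.items())

# Two parallel edges between 1 and 2, plus pendant edges 0-1 and 2-3.
G = Counter({(0, 1): 1, (1, 2): 2, (2, 3): 1})
split_off(G, 0, 1, 2)           # delete 0-1 and one 1-2 edge, add 0-2
print(degree(G, 1), G[(0, 2)])  # 1 1
```

Note the degree bookkeeping visible here: a split at $y$ lowers $d(y)$ by two and leaves all other degrees unchanged, which is why a vertex of even degree $d(v)$ can be completely split in exactly $d(v)/2$ steps.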
In the world of edge-connectivity, and immersions, there is a chain theorem due to Lov\'{a}sz (\cite{MR1265492}, Problem $6.53$, see also \cite{MR1373655}).
\begin{thm}[Lov\'{a}sz \cite{MR1265492}]
Suppose $G$ is $2k$-edge-connected. Then, by repeatedly applying complete splits and edge-deletions, it can be reduced to a graph on two vertices with $2k$ parallel edges between them.
\end{thm}
This theorem was later generalized by a significant theorem of Mader that is key in our proofs, and will be stated in Section \ref{kecon}. The goal of this paper is to establish two splitter theorems for immersions, the first of which is an analogue of the aforementioned result of Lov\'{a}sz.
\begin{thm}
\label{sp-thm1intro}
Suppose $G\ncong H$ are $2k$-edge-connected loopless graphs, and $G\succ _{im} H$. Then there exists an operation taking $G$ to $G'$ so that $G'$ is $2k$-edge-connected and $G'\succ _{im} H$, where an operation is either
\begin{itemize}
\item deleting an edge,
\item splitting at a vertex of degree $\geq 2k+2$,
\item completely splitting a $2k$-vertex,
\end{itemize}
each followed by iteratively deleting any loops, and suppressing vertices of degree $2$.
\end{thm}
In comparison with graph minors, the literature on splitter theorems for graph immersions is extremely sparse. Indeed, we only know of two significant papers concerning this, namely \cite{MR2216473,MR2646129}, where Ding and Kanno have proved a handful of splitter theorems for immersions in cubic graphs and $4$-regular graphs. In particular, they have shown the following (see \cite{MR2646129}, Theorem $9$):
\begin{thm}[Ding, Kanno \cite{MR2646129}]
\label{DingKanno}
Suppose $G\ncong H$ are $4$-edge-connected, $4$-regular loopless graphs, and $G\succ_{im} H$. Then there exists a vertex whose complete split takes $G$ to $G'$ so that $G'$ is $4$-edge-connected, $4$-regular, and $G'\succ _{im} H$.
\end{thm}
Our second theorem, stated below, is similar to the first one, but is for a different type of connectivity, and generalizes Theorem \ref{DingKanno}. Here, $G$ is said to be {\it internally $k$-edge-connected} if every edge-cut containing fewer than $k$ edges is the set of edges incident with a single vertex. Also, in the statement of the theorem, $Q_3$ denotes the graph of the cube, and $K_2^3$ denotes the graph on two vertices with three parallel edges between them.
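Both connectivity notions are easy to check by brute force on small graphs such as $Q_3$, which (being $3$-regular, $3$-edge-connected and bipartite) is internally $4$-edge-connected. A sketch of such a checker, enumerating all cuts (our own illustrative code, practical only for small graphs), is:

```python
from itertools import combinations

def cut(edges, X):
    """Size of the edge-cut delta(X): edges with exactly one end in X."""
    return sum((u in X) != (v in X) for u, v in edges)

def edge_conn_at_least(edges, V, k):
    """Every nonempty proper vertex subset has cut size >= k."""
    return all(cut(edges, set(X)) >= k
               for r in range(1, len(V)) for X in combinations(V, r))

def internally_edge_conn_at_least(edges, V, k):
    """Every nontrivial cut (both sides of size >= 2) has size >= k."""
    return all(cut(edges, set(X)) >= k
               for r in range(2, len(V) - 1) for X in combinations(V, r))

# The cube graph Q3: vertices are 3-bit strings, edges join strings
# differing in exactly one bit.
V = range(8)
Q3 = [(a, b) for a in V for b in V if a < b and bin(a ^ b).count("1") == 1]
print(edge_conn_at_least(Q3, V, 3), internally_edge_conn_at_least(Q3, V, 4))
# True True
```

The trivial $3$-edge-cuts around each vertex are exactly what the ``internally'' qualifier excuses: $Q_3$ fails plain $4$-edge-connectivity but passes the internal version.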
\begin{thm}
\label{sp-thm2intro}
Let $G \ncong H$ be $3$-edge-connected, internally $4$-edge-connected loopless graphs, with $G \succ _{im} H$. Further, assume $|V(H)|\ge 2$, and $(G, H) \ncong (Q_3, K_4), (Q_3, K_2^3)$. Then there exists an operation taking $G$ to $G'$ such that $G'$ is $3$-edge-connected, internally $4$-edge-connected, and $G' \succ _{im} H$, where an operation is either
\begin{itemize}
\item deleting an edge,
\item splitting at a vertex of degree $\geq 4$,
\end{itemize}
each followed by iteratively deleting any loops, and suppressing vertices of degree $2$.
\end{thm}
In the world of graph minors, an immediate consequence of Seymour's Splitter Theorem, first observed by Wagner \cite{MR1513158}, is that every $3$-connected graph on at least six vertices containing $K_5$ as a minor has a $K_{3,3}$-minor. This fact is then used to obtain a precise structural description of graphs with no $K_{3,3}$-minor. In parallel to this, and as an application of Theorem \ref{sp-thm2intro}, we will establish the following analogue of this result for immersions. The result will be a step towards understanding graphs with no $K_5$-immersion.
\begin{cor}
\label{corintro}
Suppose $G$ is $3$-edge-connected and internally $4$-edge-connected, with $G\succ_{im}K_5$. Then
\begin{enumerate}
\item
if $|V(G)|\ge 6$, then $G\succ_{im}K_{3,3}$ or $G\cong K_{2,2,2}$;
\item
if $|V(G)|\ge 7$, then $G\succ_{im} K_{3,3}$.
\end{enumerate}
\end{cor}
The rest of the paper is organized as follows: In Section \ref{kecon}, we state the preliminary definitions and key tools, and prove Theorem \ref{sp-thm1intro}. Section \ref{in4econ} is dedicated to the family of $3$-edge-connected, internally $4$-edge-connected graphs, and includes the proofs of Theorem \ref{sp-thm2intro} and Corollary \ref{corintro}.
\section{$k$-edge-connected graphs, $k$ even}
\label{kecon}
We will assume the graphs are undirected and finite, and may have loops or parallel edges. For $X\subset V(G)$, we use $\delta_G(X)$ to denote the edge-cut consisting of all edges of $G$ with exactly one endpoint in $X$, the number of which is called the {\it size} of this edge-cut, denoted by $d_G(X)$. When $G$ is connected we refer to both $X$ and $X^c (=V(G)\setminus X)$ as {\it sides} of the edge-cut $\delta (X)$. An edge-cut is called {\it trivial} if at least one side of the cut consists of only one vertex. Note that a graph is $k$-edge-connected (internally $k$-edge-connected) if every edge-cut (non-trivial edge-cut) has size $\ge k$. For distinct vertices $x, y \in V(G)$, we let $\lambda_G(x,y)$ denote the maximum size of a collection of pairwise edge-disjoint paths between $x$ and $y$. Whenever the graph concerned is clear from the context, we may drop the subscript $G$. In Section \ref{intro} the notion of graph immersions was introduced. Equivalently, we could say $H$ is immersed in $G$ if there is a one-to-one mapping $\phi:V(H) \rightarrow V(G)$ and a $\phi(u)-\phi(v)$-path $P_{uv}$ in $G$ corresponding to every edge $uv$ in $H$ so that the paths $P_{uv}$ are pairwise edge-disjoint. In this case, a vertex in $\phi(V(H))$ is called a {\it terminal} of the $H$-immersion.\footnote{It is worth mentioning that if $G\succ_{im}H$ and the paths $P_{uv}$ are internally disjoint from $\phi(V(H))$, it is standard in the literature to say $G$ {\it strongly immerses} $H$. This is in contrast with the notion of {\it weak immersion}, where the paths $P_{uv}$ are not necessarily internally disjoint from $\phi(V(H))$. However, we are only studying weak immersion and, for the sake of simplicity, refer to it as immersion.}
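The quantity $\lambda_{G}(x,y)$ is computable as a unit-capacity maximum flow, which by Menger's theorem (recalled below) also yields the minimum size of an edge-cut separating $x$ from $y$. A self-contained sketch using breadth-first augmenting paths (our own illustrative code, not from the paper) is:

```python
from collections import defaultdict, deque

def lam(edges, x, y):
    """Maximum number of pairwise edge-disjoint x-y paths in an
    undirected multigraph given as a list of vertex pairs."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v in edges:         # undirected edge: residual capacity
        cap[u][v] += 1         # in both directions
        cap[v][u] += 1
    flow = 0
    while True:
        parent = {x: None}     # BFS for an augmenting path
        q = deque([x])
        while q and y not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if y not in parent:
            return flow
        v = y                  # push one unit of flow back along the path
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

# K_4 is 3-edge-connected: lambda = 3 between any two vertices.
K4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(lam(K4, 0, 3))  # 3
```

Since each augmentation adds exactly one edge-disjoint path, the returned value matches Menger's min-cut characterisation.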
We proceed by listing a couple of facts and theorems which will feature in our proofs. The first is the observation below.
\begin{obs}
Suppose $G$ is a graph, and $X$ and $Y$ are distinct nonempty subsets of $V(G)$. Then, by counting the edges contributing to the edge-cuts, we have
$$d(X\cap Y)+d(X\cup Y)+2e(X^c\cap Y, X\cap Y^c)=d(X)+d(Y).$$
Observe that this also implies the inequality $$d(X\cap Y)+d(X\cup Y)\le d(X)+d(Y).$$
\end{obs}
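The identity in the observation is routine to verify by brute force on a small example; the following sketch checks it over all pairs of vertex subsets of an arbitrary small multigraph (chosen here purely for illustration):

```python
from itertools import chain, combinations

# Small multigraph as an edge list (with a parallel edge) on V = {0,...,4}.
V = set(range(5))
E = [(0, 1), (0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3), (2, 4)]

def d(X):
    """Size of the edge-cut delta(X): edges with exactly one end in X."""
    return sum((u in X) != (v in X) for u, v in E)

def e(A, B):
    """Number of edges with one end in A and the other in B (A, B disjoint)."""
    return sum((u in A and v in B) or (u in B and v in A) for u, v in E)

def subsets(S):
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

# d(X&Y) + d(X|Y) + 2 e(X^c & Y, X & Y^c) = d(X) + d(Y) for all X, Y.
ok = all(
    d(X & Y) + d(X | Y) + 2 * e((V - X) & Y, X & (V - Y)) == d(X) + d(Y)
    for X in map(set, subsets(V)) for Y in map(set, subsets(V))
)
print(ok)  # True
```

The identity follows by partitioning $V$ into $X\cap Y$, $X\cap Y^c$, $X^c\cap Y$, $X^c\cap Y^c$ and tallying each class of crossing edges on both sides; the edges between $X\cap Y^c$ and $X^c\cap Y$ are the only ones counted twice on the left.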
Another frequently used fact is the classical theorem of Menger. A proof may be found, for instance, in \cite{MR1367739}.
\begin{thm}[Menger]
\label{Menger}
Let $G$ be a graph, and $x, y$ distinct vertices of $G$. Then $\lambda_G(x,y)$ equals the minimum size of an edge-cut of $G$ separating $x$ from $y$.
\end{thm}
The next theorem, due to Mader, is an extremely powerful tool when working with immersions, and is also a key ingredient in our proofs.
\begin{thm}[Mader \cite{MR499117}, see also Frank \cite{MR1172364}]
\label{Mader}
Suppose for $s\in V(G)$ we have $d(s)\neq 3$, and $s$ is not incident with a cut-edge. Then there is a split at $s$ such that in the resulting graph $G'$, for any $x,y\in V(G')$ other than $s$, we have $\lambda_G(x,y)=\lambda_{G'}(x,y)$.
\end{thm}
Throughout the rest of this section, we assume $k$ is even. The main result of this section is Theorem \ref{sp-thm1intro} which is restated below for convenience.
\begin{thm}
\label{sp-thm1}
Suppose $G\ncong H$ are $k$-edge-connected loopless graphs, and $G\succ _{im} H$. Then there exists an operation taking $G$ to $G'$ so that $G'$ is $k$-edge-connected and $G'\succ _{im} H$, where an operation is either
\begin{itemize}
\item deleting an edge,
\item splitting at a vertex of degree $\geq k+2$,
\item completely splitting a $k$-vertex,
\end{itemize}
each followed by iteratively deleting any loops, and suppressing vertices of degree $2$.
\end{thm}
Note that in order to have a splitter theorem for the family of $k$-edge-connected graphs, we do need to embrace completely splitting a $k$-vertex as one of our operations, since as soon as we do a split at a $k$-vertex, the graph will have a trivial $(k - 2)$-edge-cut.
Theorem \ref{sp-thm1} will be proved through a series of lemmas. We will begin by introducing a few definitions.
\begin{defn}
A graph $G$ is called {\it nearly $k$-edge-connected} if either $G$ is $k$-edge-connected, or there exists a single vertex, $u$, called {\it the special vertex}, of even degree $<k$ so that every nonempty edge-cut in $G$ apart from $\delta (u)$ has size at least $k$.
\end{defn}
\begin{defn}
Suppose $G$ is (nearly) $k$-edge-connected, and $H$ is a $k$-edge-connected graph with $G\succ _{im} H$. We define a {\it good operation} to be a split, a complete split at a vertex of degree $k$ or at the special vertex, or the deletion of an edge of $G$, which preserves (nearly) $k$-edge-connectivity of $G$ (with the same special vertex) and an immersion of $H$ in the resulting graph.
\end{defn}
\noindent Note that the theorem can now be restated as follows: if any of the three operations takes a step from $G$ towards $H$, then there is a good operation.
Throughout the rest of this section, we will assume that $H$ is $k$-edge-connected, with $k$ even, and $G\succ _{im} H$.
\begin{lem}
\label{lem1}
Suppose $G$ is (nearly) $k$-edge-connected, and there exists $X \subset V(G)$ such that $d(X)=k$ and every $x \in X$ is of degree $k+1$. Then there exists an edge $e$ lying in $X$ such that $G \setminus e$ is (nearly) $k$-edge-connected.
\end{lem}
\begin{pf}
Choose $X'\subseteq X$ such that $d(X')=k$, and subject to this $X'$ is minimal. Since every $x \in X'$ has degree $k+1$, we have $|X'|\neq 1$, and $X'$ does not contain the special vertex (if one exists at all). Also $X'$ must be connected, so there exists an edge $e$ lying in $X'$. We will show that $G\setminus e$ is (nearly) $k$-edge-connected. For a contradiction, suppose $e$ is in a $k$-edge-cut, $\delta (Y)$.
Note that $d({X'}^c)=d(X')=k$ implies that ${X'}^c$ contains at least one vertex other than the special vertex. We may assume (by possibly replacing $Y$ by $Y^c$) that it lies in $Y^c$. As $G$ is (nearly) $k$-edge-connected, $d(X' \cap Y), d({X'}^c \cap Y^c)\geq k$. However, it follows from
$$k+k\leq d(X' \cap Y) +d({X'}^c \cap Y^c)\leq d(X')+d(Y)=k+k$$
that $d(X'\cap Y)=k$, which contradicts minimality of $X'$.
\end{pf}
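The key inequality in the proof above, $d(X'\cap Y)+d({X'}^c\cap Y^c)\le d(X')+d(Y)$, is the standard submodularity of the cut function (note that $d({X'}^c\cap Y^c)=d(X'\cup Y)$). As a sanity check, the following pure-Python sketch (our own illustration, with an ad hoc edge list for the triangular prism) verifies the inequality exhaustively over all pairs of vertex subsets.

```python
from itertools import combinations

def d(edges, S):
    """Size of the edge-cut delta(S)."""
    return sum((u in S) != (v in S) for u, v in edges)

# triangular prism: two triangles 012 and 345 joined by a perfect matching
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (0, 3), (1, 4), (2, 5)]
V = set(range(6))

subsets = [set(c) for r in range(7) for c in combinations(range(6), r)]
for X in subsets:
    for Y in subsets:
        # submodularity of the cut function:
        # d(X & Y) + d(X^c & Y^c) <= d(X) + d(Y)
        assert d(edges, X & Y) + d(edges, (V - X) & (V - Y)) <= d(edges, X) + d(edges, Y)
```

The exhaustive check over all $2^6\times 2^6$ pairs runs instantly; of course, the inequality holds for the cut function of any graph.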
{\noindent \textbf{Notation.}} If $G$ is a graph with $X \subset V(G)$, we will denote the graph obtained from identifying $X$ to a single node by $G.X$.
\begin{obs}
Suppose $G$ is a graph with $X \subset V(G)$ such that there exists an immersion of $H$ with all terminals in $X$. Then $G.X^c$ contains $H$ as an immersion.
\end{obs}
The following lemma enables us to handle $k$-edge-cuts in (nearly) $k$-edge-connected graphs:
\begin{lem}
\label{lem2}
If $G$ is a (nearly) $k$-edge-connected graph with a nontrivial edge-cut $\delta (X)$ of size $k$ such that some immersion of $H$ has no terminal in $X$, then there exists a good operation.
\end{lem}
\begin{pf}
If the special vertex is in $X$, we will split it using Mader's Theorem (Theorem \ref{Mader}). Else, if there exists a vertex $x \in X$ of degree $\neq k+1$, Mader's Theorem may be applied to either split $x$ (if $d(x)\geq k+2$) or completely split it (if $d(x)=k$). Call the graph resulting from applying Mader's Theorem $G'$. To see that the operation is good, observe that there remain $k$ edge-disjoint paths between any pair of nonspecial vertices, one in $X$ and the other in $X^c$ (as was the case in $G$). Therefore $G'$ immerses $G.X$, and thus immerses $H$.
Now suppose every vertex in $X$ is of degree $k+1$. Applying Lemma \ref{lem1}, we can delete an edge from $X$ preserving (nearly) $k$-edge-connectivity. Also, the same argument as above shows that the resulting graph still has $H$ as an immersion, thus the deletion is indeed a good operation.
\end{pf}
The next two lemmas, which concern a broader family of graphs, will later be helpful in dealing with $(k+1)$-edge-cuts in (nearly) $k$-edge-connected graphs.
\begin{lem}
\label{lem3}
Let $G$ be an internally $k$-edge-connected graph in which every vertex of degree $<k$ is of even degree. If $d(x)$ is odd, then there exists $y \in V(G)\setminus \{x\}$ such that $\lambda (x, y)\geq k+1$.
\end{lem}
\begin{pf}
We prove the statement by induction on $|V(G)|$. Note that by parity, there must exist another vertex of odd degree, $y$, in $G$. If every cut separating $x$ from $y$ is of size $\geq k+1$, by Menger's Theorem (Theorem \ref{Menger}) we are done. Otherwise, there exists a $k$-edge-cut $\delta (Y)$, with $y \in Y$, separating $x$ from $y$.
Note that degree properties imply that $|Y|\geq 2$, so the graph ${G'}=G.Y$, which satisfies the lemma's hypothesis, has fewer vertices than $G$.
Also $x$ is of odd degree in $G'$ as well, thus, by the induction hypothesis, there exists $y' \in V({G'})\setminus \{x\}$ such that $\lambda _{G'}(x, y')\geq k+1$. It follows, however, that $\lambda _{G}(x, y')\geq k+1$ as well, since $\lambda_G(x,y)=k$ implies that $G \succ_{im} {G'}$.
\end{pf}
\begin{lem}
\label{lem4}
Let $G$ be an internally $k$-edge-connected graph in which every vertex of degree $<k$ is of even degree.
If $\delta (X)$ is a $(k+1)$-edge-cut in $G$, there exist $x \in X, y \in X^c$ such that $\lambda(x,y)\geq k+1$.
\end{lem}
\begin{pf}
Let $G_1= G.X, G_2=G.X^c$, with $s, t$ being the nodes replacing $X, X^c$, respectively. Note that both $G_1, G_2$ satisfy Lemma \ref{lem3}'s hypothesis. Also, $s$ is a vertex of odd degree in $G_1$, so, by Lemma \ref{lem3}, there exists $y\in X^c$ such that $\lambda_{G_1}(s,y)\geq k+1$, thus $G \succ_{im} G_2$. It can be similarly argued that there exists $x\in X$ such that $\lambda_{G_2}(x,t)\geq k+1$, which together with $G \succ_{im} G_2$ shows that $\lambda_G(x,y)\geq k+1$.
\end{pf}
Having the lemma above in hand, we can now efficiently handle $(k+1)$-edge-cuts:
\begin{lem}
\label{lem5}
If $G$ is a (nearly) $k$-edge-connected graph with a nontrivial $(k+1)$-edge-cut $\delta (X)$ such that some immersion of $H$ has no terminal in $X$, then there exists a good operation.
\end{lem}
\begin{pf}
If the special vertex is in $X$, we will split it using Mader's Theorem. Else, if there exists a vertex in $X$ of degree $\neq k+1$, we will apply Mader's Theorem to either split or completely split it. We claim this operation is good. Let $G'$ be the resulting (nearly) $k$-edge-connected graph. First, note that $\delta (X)$ remains a $(k+1)$-edge-cut in $G'$, since doing a split changes the size of an edge-cut by an even number, and, by the edge-connectivity of $G'$, $d_{G'}(X) \geq k$. We may now apply Lemma \ref{lem4} to choose $x \in X$, $y \in X^c$ with $\lambda(x,y) \geq k+1$. Thus $G'$ immerses $G.X$, and therefore, immerses $H$.
Now suppose every vertex in $X$ is of degree $k+1$, and take an edge $e$ lying in $X$, which has to exist because $|X|\geq 2$ and $X$ must be connected. If $G\setminus e$ is (nearly) $k$-edge-connected, then the same argument as above shows that the deletion of $e$ is a good operation. So, we may now assume that $e$ is in a $k$-edge-cut, $\delta (Y)$.
\noindent \textbf{Remark.} Let $Z \subset V(G), Z=Z_1 \cup Z_2, Z_1 \cap Z_2=\emptyset$, and denote the number of edges from $Z_1$ to $Z_2$ by $e(Z_1, Z_2)$. Then we have
$$d(Z)=d(Z_1)+d(Z_2)-2e(Z_1, Z_2). \qquad \qquad (*)$$
Using $(*)$, by possibly replacing $Y$ with $Y^c$, we may assume that $d(X \cap Y^c)$ is even and $d(X \cap Y)$ is odd. Also, by $(*)$, we conclude that $d(X^c \cap Y)$ is odd, and so $X^c \cap Y$ is nonempty. Thus, both $X\cap Y^c$ and $X^c \cap Y$ contain a nonspecial vertex. We also have
$$2k+1=d(X)+d(Y)\geq d(X \cap Y^c) +d(X^c \cap Y) \geq 2k, $$
so by parity $d(X\cap Y^c)=k$. Therefore, $\delta (X \cap Y^c)$ is a nontrivial (as every vertex in $X$ is of degree $k+1$) $k$-edge-cut with no terminal of $H$ in $X \cap Y^c$. Applying Lemma \ref{lem2} we may conclude that a good operation exists.
\end{pf}
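The counting identity $(*)$ is elementary but used repeatedly. As an illustration (our own, not part of the paper; the cube $Q_3$ and the bit-trick edge-list encoding are our choices), the following pure-Python sketch verifies $d(Z)=d(Z_1)+d(Z_2)-2e(Z_1,Z_2)$ for every partition of a fixed $Z$ into $Z_1\cup Z_2$.

```python
from itertools import combinations

def d(edges, S):
    """Size of the edge-cut delta(S)."""
    return sum((u in S) != (v in S) for u, v in edges)

def e_between(edges, A, B):
    """Number of edges with one endpoint in A and the other in B (A, B disjoint)."""
    return sum((u in A and v in B) or (u in B and v in A) for u, v in edges)

# the cube Q3: vertices 0..7, i adjacent to j iff they differ in exactly one bit
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if bin(i ^ j).count("1") == 1]

Z = {0, 1, 2, 3}  # a face of the cube, so d(Z) = 4
for r in range(len(Z) + 1):
    for Z1 in map(set, combinations(sorted(Z), r)):
        Z2 = Z - Z1
        # identity (*): d(Z) = d(Z1) + d(Z2) - 2 e(Z1, Z2)
        assert d(edges, Z) == d(edges, Z1) + d(edges, Z2) - 2 * e_between(edges, Z1, Z2)
```

The identity holds because edges inside $Z_1$ or $Z_2$ contribute to neither side, edges between $Z_1$ and $Z_2$ are counted twice in $d(Z_1)+d(Z_2)$ and then subtracted, and edges leaving $Z$ are counted exactly once on each side.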
The next three lemmas concern the three operations allowed in stepping from $G$ towards $H$, where $G$ and $H$ are $k$-edge-connected graphs with $G\succ_{im} H$, and show that in each case we can take a step maintaining $k$-edge-connectivity.
\begin{lem}
\label{lem6}
If $G\succ_{im} H$ are $k$-edge-connected graphs, and there is a complete split of a $k$-vertex $u$ of $G$ preserving an $H$-immersion, then there is a good operation.
\end{lem}
\begin{pf}
Consider the complete split of $u$ as $\frac{k}{2}$ many splits at $u$, and choose a sequence of splits which, while preserving an $H$-immersion, results in the fewest number of loops. If $u$ can be completely split without ever creating a too-small edge-cut other than $\delta(u)$ along the way, we are done. Otherwise, we will stop doing these splits the first time the resulting graph ${G'}$ is about to lose nearly $k$-edge-connectivity (with $u$ being the special vertex).
In $G'$, therefore, there exists a subset $X \neq \{ u \} , \{ u\}^c$ of $V(G)$ for which $d_{G'}(X)\geq k$; doing the next split, however, makes it an edge-cut of size $<k$, so $d_{{G'}}(X)=k$ or $k+1$. Moreover, since completely splitting $u$ results in $d(X)<k$ and preserves an immersion of $H$, we may conclude that there is an immersion of $H$ with all terminals on one side of $\delta (X)$, say $X^c$.
If $\delta (X)$ is a nontrivial cut, Lemma \ref{lem2} or \ref{lem5} applied to $G'$ guarantees the existence of a good operation, which may or may not be a split at $u$. If it is not a split at $u$, undoing the splits that took $G$ to $G'$ recovers $k$-edge-connectivity.
Now suppose $\delta(X)$ is a trivial cut with $X=\{ v \}$. Then the next split at $u$ would create a loop at $v$. Note that there cannot be a vertex $w \in N_{G'}(u)\setminus \{v\}$, because if there were one, we could have split off $vuw$ instead: splitting $vuw$ creates no loop while maintaining an immersion of $H$, as splitting off $wvu$ in the graph obtained results in the same graph as splitting off $vuv$ would.
Therefore $N_{G'}(u)=\{v\}$, implying that in $G'$, $d(v)=d(\{u, v\}) +d(u)$. This, however, contradicts $d(X)=k$, or $k+1$, as $d(X)=d(v)=d(\{u, v\}) +d(u)\ge k+ d(u)\ge k+2$, where the inequalities hold because $G'$ is nearly $k$-edge-connected, and $u$ is of even degree. This completes the proof.
\end{pf}
\begin{lem}
\label{lem7}
If $G\succ_{im} H$ are $k$-edge-connected graphs, and there is an edge $e$ such that $G \setminus e$ has an $H$-immersion, then a good operation exists.
\end{lem}
\begin{pf}
Suppose $e$ is in a $k$-edge-cut. If it is incident with a $k$-vertex $u$, then, by the previous lemma, $u$ can be completely split while maintaining an $H$-immersion and $k$-edge-connectivity. Otherwise, $e$ is in a nontrivial $k$-edge-cut, with all terminals of $H$ on one side of the cut, so we can use Lemma \ref{lem2} to find a good operation.
\end{pf}
\begin{lem}
\label{lem8}
If $G\succ_{im} H$ are $k$-edge-connected graphs, and there is a split at a vertex $v$ preserving an $H$-immersion, then a good operation exists.
\end{lem}
\begin{pf}
Suppose splitting at $v$ makes an edge-cut $\delta (X)$ too small; then $d(X)=k$ or $k+1$. Also, all terminals of $H$ are on one side of the cut, say $X^c$. If $\delta (X)$ is a nontrivial edge-cut, Lemma \ref{lem2} or \ref{lem5} may be applied. If $|X|=1$ with $d(X)=k$, we apply Lemma \ref{lem6} to completely split the vertex in $X$, and if $d(X)=k+1$ we apply Lemma \ref{lem7} to delete an edge incident to it.
\end{pf}
The proof of Theorem \ref{sp-thm1} is now immediate:
{\noindent \textbf{Proof of Theorem \ref{sp-thm1}. }}Apply Lemmas \ref{lem6}, \ref{lem7}, and \ref{lem8}.
\hfill{$\square$}
\section{3-edge-connected, internally 4-edge-connected graphs}
\label{in4econ}
In this section we establish Theorem \ref{sp-thm2intro}. Later, as an application, we will see that if a $3$-edge-connected, internally $4$-edge-connected graph other than $K_{2,2,2}$ immerses $K_5$, it also has a $K_{3,3}$-immersion. First, we move towards proving Theorem \ref{sp-thm2intro}, which, for convenience is restated here.
\begin{thm}
\label{sp-thm2}
Let $G \ncong H$ be $3$-edge-connected, internally $4$-edge-connected loopless graphs, with $G \succ _{im} H$. Further, assume $|V(H)|\ge 2$, and $(G, H) \ncong (Q_3, K_4), (Q_3, K_2^3)$. Then there exists an operation taking $G$ to $G'$ such that $G'$ is $3$-edge-connected, internally $4$-edge-connected, and $G' \succ _{im} H$, where an operation is either
\begin{itemize}
\item deleting an edge,
\item splitting at a vertex of degree $\geq 4$,
\end{itemize}
each followed by iteratively deleting any loops, and suppressing vertices of degree $2$.
\end{thm}
As in the proof of Theorem \ref{sp-thm1}, we will consider each operation separately, and the proof of the theorem will then be immediate. First, we will adjust our notion of a good operation as follows:
\begin{defn}
Suppose $G, H$ are 3-edge-connected, internally 4-edge-connected loopless graphs, with $G \succ _{im} H$. We define a {\it good operation} to be either a split at a vertex of degree $\geq 4$, or a deletion of an edge from $G$, which preserves 3-edge-connectivity, internal 4-edge-connectivity, and an immersion of $H$ in the resulting graph.
\end{defn}
\begin{lem}
\label{lem9}
Suppose $G, H$ are as in Theorem $\ref{sp-thm2}$, and there is an edge $e$ such that $G \setminus e$ has an $H$-immersion. Then if $(G, H) \ncong (Q_3, K_4), (Q_3, K_2^3)$, a good operation exists.
\end{lem}
\begin{pf}
Since deletion of $e$ is followed by suppression of any resulting vertices of degree two, $G\setminus e$ is clearly $3$-edge-connected. If deletion of $e$ does not preserve internal 4-edge-connectivity, then $e$ must be contributing to some $4$-edge-cut, $\delta (X)$, in which each side has either at least three vertices, or has two vertices which are not both of degree 3. We call such a cut an {\it interesting cut}.
Note that $H$ too is internally 4-edge-connected, thus all but possibly one of the terminals of an immersion of $H$ lie on one side of this cut, say $X$. Let $X'$ be the maximal subset of $V(G)$ containing $X$ such that $\delta (X')$ is interesting. Suppose there is an edge $uv$ in ${X'}^c$ not contributing to any interesting edge-cut; then deleting $uv$ is a good operation, because $G \setminus uv$ is $3$-edge-connected and internally $4$-edge-connected, and $G \setminus uv$ has an $H$-immersion, because it immerses $(G\setminus e).{X}^c$.
We may now assume that $uv$ is in some interesting edge-cut $\delta (Y)$. Note that the maximality of $X'$ implies that $X' \cap Y, X' \cap Y^c \neq \emptyset $. Also, we claim that there cannot be edges contributing to both $\delta (X')$ and $\delta(Y)$. To prove the claim, suppose, to the contrary, that there are edges between, say, $X' \cap Y$ and ${X'}^c \cap Y^c$, i.e. $e \neq 0$ in Figure \ref{two4cutcross}.
\begin{figure}[htbp]
\centering
\includegraphics[height=5cm]{two4cutcross.pdf}\\
\caption{ Cuts $\delta (X')$, $\delta (Y)$ relative to each other}
\label{two4cutcross}
\end{figure}
Then it follows from
$$ 8=d(X')+d(Y) = d({X'}^c \cap Y) +d(X' \cap Y^c)+2e
\geq 3+3+2e$$
that if $e \neq 0$, it equals $1$, and, moreover, $d({X'}^c \cap Y)=d(X' \cap Y^c)=3$.
Using a similar argument, one can see that if, in addition to $e \neq 0$, there were also edges between $X'\cap Y^c$ and ${X'}^c\cap Y$,
then $d({X'}^c \cap Y^c)=3$. Thus, both ${X'}^c \cap Y^c$ and ${X'}^c \cap Y$ would consist of a single vertex of degree $3$, contradicting $\delta (X')$ being interesting. Therefore the number of edges contributing to both $\delta(X')$ and $\delta (Y)$ equals $e$.
We will now show that $e \neq 0$ results in a contradiction. Note that from $d({X'}^c \cap Y)=3$ we may conclude, without loss of generality, that $b \geq 2$. Now, by alternatively looking at the cuts $\delta ({X'}^c\cap Y), \delta (X'), \delta (X'\cap Y^c), \delta (Y) $, we see that if $b\ge 2$, then $c\le 1$, so $a \ge 2$, thus $d \le 1$. Therefore, $d(X'\cap Y)=c+d+e\le 3$, so $X'\cap Y$ consists of a single vertex of degree three. This, however, together with the earlier conclusion of $X'\cap Y^c$ consisting of a single vertex of degree three contradicts $\delta(X')$ being interesting. Therefore $e=0$, so there are no edges contributing to both $\delta(X')$ and $\delta(Y)$.
Now, we show that $a=b=c=d=2$. For a contradiction, we will assume that, say $a>2$, and, similar to the argument above, alternatively look at the cuts $ \delta (X'), \delta (X'\cap Y), \delta (Y) $. It then follows that $c\le 1$, so $d \ge 2$, thus $b\le 2$. So, in order for $d({X'}^c\cap Y)=b+c\ge 3$, we must have $b=2, c=1$. Also, we have $d(Y)=4=b+d$, so $d=2$, thus $d(X'\cap Y)=c+d=3$.
Hence, each ${X'}^c\cap Y$ and $X'\cap Y$ consist of a single vertex of degree three, which contradicts $\delta(Y)$ being interesting.
Therefore, $a=b=c=d=2$, and thus $\delta({X'}^{c} \cap Y), \delta({X'}^c \cap Y^c)$ are 4-edge-cuts. However, by maximality of $X'$, they cannot be interesting cuts. Thus each of ${X'}^{c} \cap Y, {X'}^c \cap Y^c$ consists of only one vertex, or two vertices both of degree 3.
We are now ready to prove that a good operation exists unless $(G, H) \cong (Q_3, K_4)$ or $(G, H) \cong (Q_3, K_2^3)$. Consider different possibilities for ${X'}^{c} \cap Y, {X'}^c \cap Y^c$:
\begin{itemize}
\item Both sets consist of one vertex, see Fig. \ref{onevx}$(a)$. Here, a good operation is to split off $wuv$. Note that the resulting graph immerses $H$, as it immerses $(G\setminus e).{X}^c$.
\item Only one set consists of one vertex. Then it is easy to verify that ${X'}^c$ should be as in Fig. \ref{onevx}$(b)$. Here, deleting $vw$ is a good operation.
\begin{figure}[htbp]
\centering
\includegraphics[height=4.5cm]{del1.pdf}\\
\caption{At least one of ${X'}^{c} \cap Y, {X'}^c \cap Y^c$ consists of only one vertex}
\label{onevx}
\end{figure}
\item Both sets have two vertices in them, see Fig. \ref{del31}.
\begin{figure}[htbp]
\centering
\includegraphics[height=4cm]{del31.pdf}\\
\caption{Both ${X'}^{c} \cap Y, {X'}^c \cap Y^c$ consist of two vertices}
\label{del31}
\end{figure}
Here the operation will be deleting $uw$ or $vz$; we claim at least one of these is a good operation unless $G \cong Q_3$. Suppose that deleting either of $uw$ and $vz$ destroys internal 4-edge-connectivity, so that both these edges contribute to some interesting cuts.
As before, it can be argued that the cuts look as in Fig. \ref{del32} with respect to each other.
\begin{figure}[htbp]
\centering
\includegraphics[height=4cm]{del32.pdf}\\
\caption{Both $uw$ and $vz$ are in interesting edge-cuts}
\label{del32}
\end{figure}
Now, ignoring $\{ u, v, w, z\}$ in Figures \ref{del31} and \ref{del32}, we can see that there exists a 2-edge-cut separating $\{ n_u, n_w\}$ from $\{n_v, n_z\}$, and another one separating $\{n_u, n_v\}$ from $\{ n_w, n_z\}$, implying that $n_u, n_v, n_w, n_z$ form a square, thus $G\cong Q_3$. It now only remains to notice that $K_4$ and $K_2^3$ are the only internally 4-edge-connected graphs that $Q_3$ immerses.
\end{itemize}
\end{pf}
Our next task is to deal with splits in $G$ that preserve an $H$-immersion, which will be done in Lemma \ref{lem10}. The following statement, which holds for a broader family of graphs than the ones we work with, features in the proof of Lemma \ref{lem10}.
\begin{lem}
\label{3econdel}
Suppose $H$ is a $3$-edge-connected graph, and $Y$ is a minimal subset of $V(H)$ such that $\delta(Y)$ is a nontrivial $3$-edge-cut in $H$. Then for every edge $e$ in $H[Y]$, $H\setminus e$ is internally $3$-edge-connected.
\end{lem}
\begin{pf}
For a contradiction, suppose an edge $e=yz$ in $H[Y]$ contributes to some nontrivial $3$-edge-cut $\delta(Z)$, where $z\in Z$. We will look into how $Y$ and $Z$ lie with respect to one another. Note both $Y\cap Z$ and $Y\cap Z^c$ are nonempty, as $z\in Y\cap Z$ and $y\in Y\cap Z^c$. Also, both $Y^c\cap Z$ and $Y^c\cap Z^c$ are nonempty: if, say, $Y^c\cap Z=\emptyset$, then $\delta(Y\cap Z)$ would be a nontrivial $3$-edge-cut, which contradicts the choice of $Y$, as $Y\cap Z\subsetneq Y$.
Now, since $H$ is $3$-edge-connected, we have $d(Y\cap Z),d(Y^c\cap Z^c)\ge 3$. It now follows from $d(Y\cap Z)+d(Y^c\cap Z^c)+2e(Y^c\cap Z, Y\cap Z^c)=d(Y)+d(Z)$ that $d(Y\cap Z)=d(Y^c\cap Z^c)=3$ and $e(Y^c\cap Z, Y\cap Z^c)=0$. Similarly, we obtain $d(Y\cap Z^c)=d(Y^c\cap Z)=3$ and $e(Y\cap Z, Y^c\cap Z^c)=0$. Now, since $d(Y)=3=e(Y\cap Z, Y^c\cap Z)+e(Y\cap Z^c,Y^c\cap Z^c)$, we have, say, $e(Y\cap Z, Y^c\cap Z)\le 1$. Similarly, it follows from $d(Z)=3$ that we have, say, $e(Y\cap Z, Y\cap Z^c)\le 1$. Hence, $d(Y\cap Z)\le 2$, a contradiction.
\end{pf}
\begin{lem}
\label{lem10}
Suppose $G, H$ are as in Theorem $\ref{sp-thm2}$, and there is a split at a vertex $v$ preserving an $H$-immersion. Then if $(G, H) \ncong (Q_3, K_4), (Q_3, K_2^3)$, a good operation exists.
\end{lem}
\begin{pf}
Let $uvw$ be the $2$-edge-path that is to be split. Note if $d(v)=3$, then deleting the edge incident to $v$ other than $vu,vw$ preserves the $H$-immersion. Hence, by Lemma \ref{lem9} we are done. Also, observe that if a split is done at a vertex of degree at least four, the resulting graph is $3$-edge-connected. Therefore, we only need to look into the case where splitting off $uvw$ destroys internal $4$-edge-connectivity. So, it must be the case that $uv, vw$ contribute to some 4- or 5-edge-cut $\delta (X)=\{ uv, wv, x_1y_1, x_2y_2 (, x_3y_3): u, w, x_i \in X \}$, where $|X|,|X^c|\ge 2$. We now split the analysis into cases depending on $d(X)$.
\begin{claim}
\label{splitdX4}
If $d(X)=4$, a good operation exists.
\end{claim}
Since $H$ is $3$-edge-connected, all terminals of $H$ lie on one side of the cut. Also, since $G$ is $3$-edge-connected, each side of the cut contains an edge completely lying in it, i.e. $E(G[X]), E(G[X^c])\neq \emptyset$.
First, suppose all terminals of $H$ are in $X$. Observe that if we can modify $X^c$ in a way that it preserves the connectivity of $y_1, y_2$ in $G[X^c]$, an $H$-immersion is present in the resulting graph. We propose to delete an edge $e\in E(G[X^c])$, and claim that deleting $e$ preserves the $H$-immersion. It suffices to show $e$ is not a cut-edge in $G[X^c]$ separating $y_1, y_2$. For a contradiction, suppose $e=\delta(Y)$ separates $y_1,y_2$ in $G[X^c]$, where $y_1\in Y$. We may also assume, without loss of generality, that $v\in Y$. Then $\delta_G(Y^c)$ would be a $2$-edge-cut in $G$, a contradiction. Therefore, we can delete $e$ using Lemma \ref{lem9}.
Next, suppose all terminals of $H$ are in $X^c$. Similar to the previous case, if we modify $X$ in a way that preserves the connectivity of $x_1, x_2$ in $G[X]$, an $H$-immersion is sure to exist in the resulting graph. Again, we propose to delete an edge $e\in E(G[X])$, and claim that deleting $e$ preserves the $H$-immersion. It suffices to show $e$ is not a cut-edge in $G[X]$ separating $x_1, x_2$. For a contradiction, suppose $e=\delta(Y)$ separates $x_1,x_2$ in $G[X]$, where $x_1\in Y$. Note $3$-edge-connectivity of $G$ implies that $\delta(Y)$ separates $u, w$ as well. We may assume, without loss of generality, that $u\in Y, w\in Y^c$. Then $d_G(Y)=d_G(Y^c)=3$, thus it follows from internal $4$-edge-connectivity of $G$ that $|Y|=|Y^c|=1$ and $Y=\{u=x_1\}, Y^c=\{w=x_2\}$. Therefore, $X$ consists of two vertices $u,w$ of degree three, and thus deleting $uw$ preserves the $H$-immersion.
\begin{claim}
If $d(X)=5$, a good operation exists.
\end{claim}
By the internal edge-connectivity of $H$, all terminals of $H$, but possibly one, lie on one side of the cut. First, suppose that most terminals of $H$ are in $X$. Observe that if $X^c$ is modified in a way that preserves the presence of three edge-disjoint paths from a vertex in it to $X$ not using $uv, vw$, the presence of an $H$-immersion is guaranteed. Next, suppose that most terminals of $H$ are in $X^c$. In this case, if we manage to modify $X$ in a way that preserves the presence of three edge-disjoint paths from a vertex in it to $X^c$ covering $\delta(X)\setminus \{uv,vw\}$, the presence of an $H$-immersion is guaranteed. We claim such modifications are possible.
Let $G'$ be the graph resulting from splitting off $uvw$, followed by suppressing $v$ in case $d_{G}(v)=4$. We denote the edge created by splitting $uvw$ by $e'$. Note that, by Claim \ref{splitdX4}, we may assume $G'$ is 3-edge-connected.
Take an arbitrary nontrivial $3$-edge-cut $\delta_{G'}(Y)$ in $G'$. Observe that $\delta_G(Y)$ must have been a $5$-edge-cut in $G$, to which both edges of the split $2$-path $uvw$ contributed. So, in particular, $e'$ lies either completely in $Y$ or completely in $Y^c$. Also, there must be an edge other than $e'$ in $G'[Y]$: indeed, $3$-edge-connectivity of $G$ implies $6\le \sum_{v\in Y} d_G(v)=d_G(Y)+2 e_G(G[Y])=5+2 e_G(G[Y])$, thus $e_G(G[Y])>0$, and so there is an edge $\neq e'$ in $G'[Y]$.
Now, let $Z$ denote the side of $\delta(X)$ containing most terminals of $H$ (so $Z=X$ or $Z=X^c$). We will show that there is an edge lying in $Z^c$ which we can delete while preserving an $H$-immersion. Since $\delta_{G'}(Z)$ is a nontrivial $3$-edge-cut, we may choose a minimal $Y\subseteq Z^c$ such that $\delta_{G'}(Y)$ is a nontrivial $3$-edge-cut.
We argued above that there exists an edge $e\neq e'$ in $G'[Y]$. We claim that the deletion of $e$ preserves the $H$-immersion: by Lemma \ref{3econdel}, $G'\setminus e$ is internally $3$-edge-connected. Now, $3$-edge-connectivity of $G'$ and $d_{G'}(Y)=3$ imply that $G'[Y]\setminus e$ has a vertex of degree at least three. Therefore, there exist in $G'\setminus e$ three edge-disjoint paths from such a vertex to $Z$. Observe that since this set of paths covers $\delta(Z)$, the deletion of $e$ from $G$ preserves the presence of an $H$-immersion. We can now use Lemma \ref{lem9} to delete $e$ from $G$.
\end{pf}
The proof of Theorem \ref{sp-thm2} is now immediate.
{\noindent \textbf{Proof of Theorem \ref{sp-thm2}. }}Apply Lemmas \ref{lem9} and \ref{lem10}.
\hfill{$\square$}
Having established Theorem \ref{sp-thm2}, we will now take advantage of it to prove Corollary \ref{corintro}. The idea is to examine $3$-edge-connected, internally $4$-edge-connected graphs ``one step bigger'', or perhaps ``a few steps bigger'', than $K_5$, and see if they immerse $K_{3,3}$. One subtlety here is that we are working with multigraphs, thus even graphs ``much bigger than'' $K_5$ may happen to be on five vertices, and thus not possess $K_{3,3}$-immersions. Therefore, we need some tool to limit the graphs necessary to examine. Given that $K_5$ itself is $4$-edge-connected, Lemma \ref{lem11} serves very well in doing so. First, however, we need the following definition.
\begin{defn}
We define a {\it good sequence} from $G$ to $H$ to be a sequence of graphs
$$G=G_l, G_{l-1}, \ldots , G_2, G_1, G_0\cong H$$
in which each $G_i$ is 3-edge-connected and internally 4-edge-connected, and $G_i$ results from applying an operation $o_{i+1}$ (as defined in the statement of Theorem \ref{sp-thm2}) to $G_{i+1}$.
\end{defn}
\begin{lem}
\label{lem11}
Let $G$ be $3$-edge-connected, internally $4$-edge-connected, and $H$ be $4$-edge-connected. Suppose there is a good sequence from $G$ to $H$, and choose a good sequence from $G$ to $H$
$$G=G_l, G_{l-1}, \ldots , G_2, G_1, G_0\cong H$$
such that $\min \{ k: |V(G_k)|>|V(H)| \}$ is as small as possible. Then either
\begin{itemize}
\item[(a)] $G_1$ is as in Fig. $\ref{gi}(a)$, with $v_1 \neq v_2$, $v_3 \neq v_4$, and the last operation, $o_1$, is to split off $v_1uv_2$, and $v_3uv_4$.
\begin{figure}[htbp]
\centering
\includegraphics[height=3.5cm]{gi.pdf}\\
\caption{The last graphs in the sequence}
\label{gi}
\end{figure}
\item[(b)] $G_1$ is as in Fig. $\ref{gi}(b)$, with $v_1 \neq v_2$, $v_3 \neq v_4$, and $o_1$ is deleting $uw$.
\item[(c)] $G_1$ is as in Fig. $\ref{gi}(c)$ and $o_1$ is to delete $uv_1$.
\item[(d)] $G_2$ is as in Fig. $\ref{gi}(c)$ and $o_2$ is deletion of $uv_1$ (thus forming an edge $v_2v_3$), and $o_1$ is deletion of $v_2v_3$, so $G_2\setminus u \cong H$.
\end{itemize}
\end{lem}
\begin{pf}
Let $G_k$ be the graph in the sequence which attains $\min \{ k: |V(G_k)|>|V(H)| \}$, so that $V(G_{k-1})=V(H)=\{ v_1, v_2, \ldots, v_{|H|} \}$. First, consider the case where $o_k$ is a split. Since this split reduces the number of vertices, it must be a split at a vertex $u$ of degree 4, see Fig. \ref{gi}$(a)$. Let $v_1v_2, v_3v_4$ be the edges resulting from splitting $u$. We claim that $G_{k-1}=H$, since if there was $k'<k$ so that $o_{k'}$ was
\begin{itemize}
\item splitting a 2-edge-path where both edges are present in $G_k$, or deleting an edge present in $G_k$, then it could have been done before $o_k$.
\item splitting a $v_1v_2v_i$ path, then we could have split $uv_2v_i$ instead.
\item splitting a 2-edge-path whose two edges are $v_1v_2$ and $v_3v_4$ (so that, say, $v_2=v_3$), both resulting from $o_k$, then we could have deleted one of the $uv_2$ edges instead.
\item deleting one of the edges, say $v_1v_2$, created by $o_k$, then we could have deleted $uv_1$. (It also implies that $v_1\neq v_2$, and $v_3 \neq v_4$.)
\end{itemize}
Note that in all the cases above the alternative operation would result in another good sequence, with smaller $\min \{ k: |V(G_k)|>|V(H)| \}$, contradicting our choice of the good sequence. Therefore the claim is proved, thus $k=1$, and $(a)$ occurs.
Now, consider the case where $o_k$ is a deletion of an edge $uw$. Since this deletion reduces the number of vertices, at least one of its endpoints is of degree 3. If both $u$ and $w$ are of degree 3 (see Fig. \ref{gi}$(b)$), the same argument as above shows that $k=1$, and thus $(b)$ happens.
Otherwise, only $u$ is of degree 3, and $o_k$ is deleting $uv_1$, see Fig. \ref{gi}$(c)$. As before, it could be argued that there cannot be a $k'<k$ with $o_{k'}$ being splitting a 2-edge-path with both edges present in $G_k$, or deleting an edge present in $G_k$. Also, $o_{k'}$ cannot be splitting $v_2v_3v_i$, since we could have split $uv_3v_i$ before $o_k$, obtaining a good sequence with smaller $\min \{ k: |V(G_k)|>|V(H)| \}$. However, it could be that $o_{k'}$ is deleting $v_2v_3$. Thus, if $v_2v_3$ is not to be deleted, we have $k=1$, and $(c)$ happens; else, $k=2$, and $o_{k-1}$ would be deleting $v_2v_3$, i.e. $(d)$ occurs.
\end{pf}
Now, we use this lemma to establish a result on $K_5$-immersions discussed in Section \ref{intro} and restated here.
\begin{cor}
\label{cor}
Suppose $G$ is $3$-edge-connected, and internally $4$-edge-connected, where $G\succ_{im}K_5$. Then
\begin{enumerate}
\item
if $|V(G)|\ge 6$, then $G\succ_{im}K_{3,3}$ or $G\cong$ the octahedron, where the octahedron is the graph in Fig. $\ref{Octahedron}$.
\item
if $|V(G)|\ge 7$, then $G\succ_{im} K_{3,3}$.
\end{enumerate}
\end{cor}
\begin{pf}
Observe that part $(2)$ is an immediate consequence of part $(1)$. We will therefore prove $(1)$.
Suppose $G\succ_{im}K_5$, and $|V(G)| >5$. By Theorem \ref{sp-thm2}, a good sequence from $G$ to $K_5$ exists. Thus, we can choose a good sequence
$$G=G_l, G_{l-1}, \ldots , G_2, G_1, G_0\cong K_5$$
such that $\min \{ k: |V(G_k)|>5 \}$ is as small as possible, and apply the previous lemma. It can easily be verified that if case $(b)$ or $(c)$ of the previous lemma occurs, then $G_1\succ_{im}K_{3,3}$, and if case $(d)$ happens, $G_2\succ_{im}K_{3,3}$; thus $G\succ_{im}K_{3,3}$.
So, suppose case $(a)$ of the previous lemma occurs. Again, it can easily be verified that if the two edges created by $o_1$ share an endpoint, then $G_1\succ_{im}K_{3,3}$, thus $G\succ_{im}K_{3,3}$. Otherwise, $K_{3, 3}$ is not immersed in $G_1$, as $G_1$ would be the octahedron, which, being planar, does not have $K_{3,3}$ as a subgraph. On the other hand, it has six vertices, all of degree 4, so an immersion of $K_{3,3}$ cannot be found by doing splits either.
\begin{figure}[htbp]
\centering
\includegraphics[height=3cm]{octahedron.pdf}\\
\caption{Octahedron}
\label{Octahedron}
\end{figure}
Therefore, if $G\cong$ the octahedron, $G\nsucc_{im}K_{3,3}$. However, if $G$ properly immerses the octahedron, then it immerses $K_{3,3}$ as well. To see this, note that the 6-vertex graphs from which the octahedron is obtained by deleting an edge or splitting a 2-edge-path all immerse $K_{3,3}$. On the other hand, if $|V(G)|>6$, we may again use Lemma \ref{lem11}, since the octahedron itself is 4-edge-connected.
To reduce the number of graphs we examine, it now helps to notice that we only need to consider the case where a vertex 7 of degree 4 gets split to create the edges $\{23, 15\}$ or $\{ 23, 14\}$. This is because in all other cases, the graph obtained by splitting the 2-paths 163, 264 would be one of the graphs we already looked at, all of which immerse $K_{3, 3}$.
If vertex 7 is split to create $\{23, 15\}$, an immersion of $K_{3,3}$ may be found after splitting the 2-path 173. Also, if vertex 7 is split to create $\{23, 14\}$, then $K_{3, 3}$ lies as a subgraph in $G$.
\end{pf}
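The planarity argument above admits a direct sanity check: since the octahedron has only six vertices, the absence of a $K_{3,3}$ subgraph amounts to inspecting the twenty $3{+}3$ splits of its vertex set. A minimal brute-force sketch (the vertex labeling is our assumption, with antipodal pairs $(0,3)$, $(1,4)$, $(2,5)$):

```python
from itertools import combinations

# Octahedron = K_{2,2,2}: vertices 0..5, all pairs adjacent except the
# three antipodal pairs (0,3), (1,4), (2,5).
V = list(range(6))
non_edges = {frozenset(p) for p in [(0, 3), (1, 4), (2, 5)]}
E = {frozenset(e) for e in combinations(V, 2)} - non_edges

def has_k33_subgraph(edges):
    """Does some 3+3 split of the six vertices carry all nine cross edges?"""
    for left in combinations(V, 3):
        right = [v for v in V if v not in left]
        if all(frozenset((u, v)) in edges for u in left for v in right):
            return True
    return False

# Every vertex has degree 4, yet no K_{3,3} subgraph exists: every 3+3 split
# separates some antipodal (non-adjacent) pair, since 3 is odd.
degrees = [sum(1 for e in E if v in e) for v in V]
print(degrees, has_k33_subgraph(E))
```

For contrast, feeding in the edge set of $K_{3,3}$ itself, `{frozenset((u, v)) for u in (0, 1, 2) for v in (3, 4, 5)}`, makes the check succeed.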
\bibliographystyle{plain}
% arXiv:1808.06569 -- Splitter Theorems for Graph Immersions (math.CO), 2018
% https://arxiv.org/abs/1909.06882
\title{Lagrange interpolation over division rings}
\begin{abstract}
For a division ring $\mathbb F$, the polynomials $f\in\mathbb F[z]$ can be evaluated ``on the left'' and ``on the right'', giving rise to left and right Lagrange interpolation problems. The problems containing interpolation conditions of the same type were considered in \cite{lam1}, where the solvability criterion was given in terms of polynomial independence of interpolation nodes. We establish the solvability criterion and describe all solutions of low degree (less than the number of interpolation conditions imposed) for the problem containing both ``left'' and ``right'' conditions.
\end{abstract}
\section{Introduction}
\setcounter{equation}{0}
Given distinct nodes $\alpha_1,\ldots,\alpha_n$ and target values $c_1,\ldots,c_n$ in a field $\mathbb F$,
the classical Lagrange interpolation problem consists of finding a polynomial $f\in\mathbb F[z]$ such that
\begin{equation}
f(\alpha_i)=c_i\quad\mbox{for}\quad i=1,\ldots,n.
\label{1.1}
\end{equation}
If we consider $P_{n}(\mathbb F):=\{g\in\mathbb F[z]: \, \deg g< n\}$ and $\mathbb F^n$ as $n$-dimensional vector spaces over
$\mathbb F$, then the linear operator $T: \, P_{n}(\mathbb F)\to\mathbb F^n$
defined by $Tf=(f(\alpha_1),\ldots,f(\alpha_n))$ is injective, since a nonzero $f\in\mathbb F[z]$
cannot have more zeros in $\mathbb F$ than $\deg f$. Since $\dim P_{n}(\mathbb F)=\dim\mathbb F^n$,
$T$ is also surjective, which leads us to the following observation.
\begin{remark}
Given any distinct $\alpha_1,\ldots,\alpha_n$ and any $c_1,\ldots,c_n$ in a field $\mathbb F$,
there is a unique polynomial $f_L\in P_{n}(\mathbb F)$ subject to conditions \eqref{1.1}.
\label{R:1.0}
\end{remark}
The explicit formula for that unique $f_L$ (the {\em Lagrange interpolation formula})
\begin{equation}
f_L(z)=\sum_{i=1}^n \frac{p_i(z)c_i}{p_i(\alpha_i)},\quad \mbox{where}\quad p_i(z)=\prod_{j\neq i}(z-\alpha_j)
\quad\mbox{for}\quad i=1,\ldots,n,
\label{1.2}
\end{equation}
is easily verified: the $i$-th term $f_i(z)=\frac{p_i(z)c_i}{p_i(\alpha_i)}\in P_{n}(\mathbb F)$
satisfies $f_i(\alpha_i)=c_i$ and $f_i(\alpha_j)=0$ for all $j\neq i$.
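Formula \eqref{1.2} can be checked concretely over $\mathbb Q$ with exact rational arithmetic. The following minimal sketch (the helper names `pmul` and `lagrange` are ours) builds each $p_i$ as a coefficient list and sums the terms of \eqref{1.2}:

```python
from fractions import Fraction

def pmul(f, g):
    """Multiply polynomials given as coefficient lists [f0, f1, ...]."""
    h = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

def lagrange(nodes, values):
    """Unique f with deg f < n and f(nodes[i]) = values[i], via formula (1.2)."""
    n = len(nodes)
    out = [Fraction(0)] * n
    for i in range(n):
        # p_i(z) = prod_{j != i} (z - a_j)
        p = [Fraction(1)]
        for j in range(n):
            if j != i:
                p = pmul(p, [Fraction(-nodes[j]), Fraction(1)])
        scale = Fraction(values[i]) / sum(c * nodes[i] ** k for k, c in enumerate(p))
        out = [o + c * scale for o, c in zip(out, p)]
    return out

# Interpolating the squares 1, 4, 9 at 1, 2, 3 recovers f(z) = z^2.
print(lagrange([1, 2, 3], [1, 4, 9]))
```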
\smallskip
We now turn to Lagrange interpolation over a {\em division ring} $\mathbb F$, which is different from the
commutative case in two regards. First, left and right evaluation functionals $f\mapsto f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)$ and
$f\mapsto f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)$ on $\mathbb F[z]$ (see formulas \eqref{2.2} below)
give rise to two different (left and right) Lagrange interpolation problems: {\em given sets
\begin{equation}
\Lambda=\{\alpha_1,\ldots,\alpha_n\}\quad\mbox{and}\quad
\Omega=\{\beta_1,\ldots,\beta_k\}
\label{1.7}
\end{equation}
of distinct elements in $\mathbb F$ along with given target values $c_1,\ldots,c_n$ and
$d_1,\ldots,d_k$ in $\mathbb F$, find a polynomial
$f\in\mathbb F[z]$ subject to left or right interpolation conditions
\begin{align}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)&=c_i\quad\mbox{for}\quad i=1,\ldots,n,\label{1.18}\\
f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)&=d_j\quad\mbox{for}\quad j=1,\ldots,k.\label{1.19}
\end{align}}
Evaluation functionals \eqref{2.2} also give rise to non-equivalent notions of left and right zeros;
consequently the solution sets of homogeneous problems \eqref{1.18} and \eqref{1.19} are respectively,
the right ideal generated by the left minimal polynomial of $\Lambda$ and the left ideal generated by
right minimal polynomial of $\Omega$.
\smallskip
Another distinction with the commutative case
was indicated in \cite{gm}: since any polynomial $f\in\mathbb F[z]$ having two distinct left (right) zeros in the
same conjugacy class of $\mathbb F$ actually has infinitely many zeros in this class, the target values
cannot be assigned arbitrarily (at least a priori) at more than two points within the same conjugacy class.
The latter has been clarified in \cite{lam1} by introducing the notion of left (right) {\em polynomial
independence} ($P$-independence; see Definition \ref{D:1.1}). Loosely speaking, a finite set $\Lambda\subset \mathbb F$
contains a maximal $P$-independent subset $\Lambda_0$, and the left values of each polynomial $f$ on
$\Lambda_0$ uniquely determine $f^{\boldsymbol{\mathfrak{e}_\ell}}$ on the whole $\Lambda$. This leads to consistency conditions
for target values which either indicate that the problem \eqref{1.18}
has no solutions, or allow us to disregard the interpolation conditions on $\Lambda\backslash\Lambda_0$, hence making the
problem \eqref{1.18} with left $P$-independent interpolation nodes a generic one. Similar observations hold true for
the right Lagrange problem \eqref{1.19}. The results concerning Lagrange problems with $P$-independent interpolation nodes
are the same (up to minor noncommutative adjustments) as in the commutative case. This material is briefly recalled in Section 2
in the form suitable for the subsequent analysis.
\smallskip
The main purpose of the present paper is to study the two-sided Lagrange problem that contains {\em both}
left and right interpolation conditions \eqref{1.18}, \eqref{1.19}. We do not assume that the sets \eqref{1.7}
are disjoint, so left and right target values can be assigned to the same interpolation node. Without loss of generality,
we will assume that the sets $\Lambda$ and $\Omega$ in \eqref{1.7} are respectively left and right $P$-independent,
so that the left and right subproblems are consistent. Still, the combined problem may be inconsistent,
and on the other hand, it may admit many low-degree solutions. In Section 3, we present a solvability criterion for
the two-sided problem \eqref{1.18}, \eqref{1.19} and establish a parametrization formula
(which is fairly explicit under the assumption that the interpolation nodes are algebraic over the center of $\mathbb F$)
producing all low-degree solutions. Two-sided polynomial independence and the two-sided Lagrange interpolation formula
are also discussed in Section 3.
\section{Background}
\setcounter{equation}{0}
In what follows, $\mathbb F$ is assumed to be a {\em division ring} with the center $Z_{\mathbb F}$, and for
each $\alpha\in\mathbb F$, we let $[\alpha]:=\{h\alpha h^{-1}: \, h\in\mathbb F\backslash\{0\}\}$ denote its conjugacy class.
\smallskip
We let $\mathbb F[z]$ denote the ring of polynomials in one formal variable $z$ which commutes with
coefficients from $\mathbb F$. Since the division algorithm holds in $\mathbb F[z]$ on
either side, any ideal (left or right) in $\mathbb F[z]$ is principal. We will write $\langle p\rangle_{\bf r}$
and $\langle p\rangle_{\boldsymbol\ell}$ for the right and the left ideal generated by $p\in\mathbb F[z]$
dropping the subscript if the ideal is two-sided.
Any two-sided ideal of $\mathbb F[z]$ is generated by a polynomial with coefficients in $Z_\mathbb F$
(see e.g., \cite[Proposition 2.2.2]{cohn3}); the converse is clear since $Z_{\mathbb F[z]}=Z_{\mathbb F}[z]$.
\smallskip
The intersection of two left (right) ideals is a left (right)
ideal; the {\em least right} and {\em left common multiples} ${\bf lrcm}(f,g)$ and ${\bf llcm}(f,g)$
of two monic polynomials $f,g\in\mathbb F[z]$ are defined as generators of the respective ideals
\begin{equation}
\langle f\rangle_{\bf r}\cap\langle g\rangle_{\bf r}=
\langle{\bf lrcm}(f,g)\rangle_{\bf r}\quad\mbox{and}\quad
\langle f\rangle_{\boldsymbol\ell}\cap\langle g\rangle_{\boldsymbol\ell}=
\langle{\bf llcm}(f,g)\rangle_{\boldsymbol\ell}.
\label{2.1}
\end{equation}
\subsection{Evaluation functionals}
Left and right evaluations of an $f\in\mathbb F[z]$ at $\alpha\in\mathbb F$ can be defined as
the remainders of $f$ when divided by $\boldsymbol{\rho}_\alpha(z)=z-\alpha$ on the left and on the
right, respectively. As is easily verified, for any $\alpha\in\mathbb F$ and $f\in\mathbb F[z]$,
\begin{equation}
f(z)=f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)+\boldsymbol{\rho}_\alpha\cdot(L_\alpha f)=f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)+(R_\alpha f)(z)\cdot\boldsymbol{\rho}_{\alpha}\qquad
(\boldsymbol{\rho}_\alpha(z):=z-\alpha),
\label{2.1u}
\end{equation}
where $f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)$ and $f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)$ are left and right evaluations of $f$ at $\alpha$:
\begin{equation}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)=\sum_{j=0}^m\alpha^j f_j\quad\mbox{and}\quad
f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)=\sum_{j=0}^m f_j\alpha^j\quad\mbox{if}\quad f(z)=\sum_{j=0}^m z^j f_j,
\label{2.2}
\end{equation}
and where $L_\alpha f$ and $R_\alpha f$ are the polynomials given by
\begin{equation}
(L_\alpha f)(z)=\sum_{i+j\le m-1}\alpha^if_{i+j+1}z^j,\quad
(R_\alpha f)(z)=\sum_{i+j\le m-1}z^j f_{i+j+1}\alpha^i .
\label{2.3}
\end{equation}
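These definitions are easy to test over the real quaternions $\mathbb H$, the running noncommutative example below, with exact rational components (the class `Q` and the helper names are ours, not a library API). The sketch computes $f^{\boldsymbol{\mathfrak{e}_\ell}}$, $f^{\boldsymbol{\mathfrak{e}_r}}$ and $L_\alpha f$, and verifies the first identity in \eqref{2.1u} coefficient by coefficient:

```python
from fractions import Fraction

class Q:
    """Rational quaternion a + bi + cj + dk (a minimal sketch)."""
    def __init__(self, a, b=0, c=0, d=0):
        self.v = tuple(Fraction(x) for x in (a, b, c, d))
    def __add__(s, o): return Q(*(x + y for x, y in zip(s.v, o.v)))
    def __sub__(s, o): return Q(*(x - y for x, y in zip(s.v, o.v)))
    def __mul__(s, o):
        a, b, c, d = s.v; e, f, g, h = o.v
        return Q(a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)
    def __eq__(s, o): return s.v == o.v

def eval_left(f, a):   # f^{e_l}(a) = sum a^j f_j
    s, p = Q(0), Q(1)
    for c in f:
        s, p = s + p * c, p * a
    return s

def eval_right(f, a):  # f^{e_r}(a) = sum f_j a^j
    s, p = Q(0), Q(1)
    for c in f:
        s, p = s + c * p, p * a
    return s

def L(f, a):
    """L_a f, i.e. the quotient in f(z) = f^{e_l}(a) + (z - a) (L_a f)(z)."""
    m = len(f) - 1
    q = [Q(0)] * m
    q[m - 1] = f[m]
    for j in range(m - 1, 0, -1):
        q[j - 1] = f[j] + a * q[j]
    return q

# f(z) = (1+i) + z j + z^2 (2+k), evaluated at alpha = i + j
f = [Q(1, 1), Q(0, 0, 1), Q(2, 0, 0, 1)]
alpha = Q(0, 1, 1)
q = L(f, alpha)
# reconstruct f^{e_l}(alpha) + (z - alpha) q(z), coefficient by coefficient
recon = [eval_left(f, alpha) - alpha * q[0]] + \
        [q[j - 1] - alpha * q[j] for j in range(1, len(q))] + [q[-1]]
print(recon == f, eval_left(f, alpha) != eval_right(f, alpha))
```

Note that the left and right values at $\alpha$ differ here, as expected in the noncommutative setting.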
\begin{remark}
{\rm The quantities introduced in \eqref{2.2} and \eqref{2.3} are related as follows:
\begin{align}
(L_\alpha f)^{\boldsymbol{\mathfrak{e}_r}}(\beta)=
\sum_{k=0}^{\deg f-1}\sum_{j=0}^k \alpha^jf_{k+1}\beta^{k-j}&=(R_\beta f)^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha),\label{4.3}\\
\alpha \cdot(L_\alpha f)^{\boldsymbol{\mathfrak{e}_r}}(\beta)-(L_\alpha f)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\cdot\beta&=f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)-f^{\boldsymbol{\mathfrak{e}_r}}(\beta),
\label{4.4}
\end{align}
for any $\alpha,\beta\in\mathbb F$ and $f\in\mathbb F[z]$.
Indeed, equalities \eqref{4.3} are immediate from \eqref{2.3}, whereas applying the right evaluation at $z=\beta$ to the first equality in
\eqref{2.1u} gives
$$
f^{\boldsymbol{\mathfrak{e}_r}}(\beta)=f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)+(L_\alpha f)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\cdot\beta-\alpha \cdot(L_\alpha f)^{\boldsymbol{\mathfrak{e}_r}}(\beta)
$$
which is equivalent to \eqref{4.4}.}
\label{R:4.4a}
\end{remark}
We next recall the product formulas for evaluations \eqref{2.2}. From the definitions \eqref{2.2}, one can see that
for any $f,g\in\mathbb F[z]$ and $\alpha\in\mathbb F$,
\begin{align}
(gf)^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)&=\sum \alpha^kg^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)f_k=\big(g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\cdot f\big)^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha),\label{2.3a}\\
(gf)^{\boldsymbol{\mathfrak{e}_r}}(\alpha)&=\sum g_k f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)\alpha^k=\big(g\cdot f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)\big)^{\boldsymbol{\mathfrak{e}_r}}(\alpha),\label{2.3b}
\end{align}
which imply
\begin{align}
(gf)^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)&=\left\{\begin{array}{ccc}
g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\cdot f^{\boldsymbol{\mathfrak{e}_\ell}}\left(g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)^{-1}\alpha
g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\right)&\mbox{if} &
g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\neq 0, \\
0 & \mbox{if} & g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)= 0,\end{array}\right.
\label{2.6}\\
(gf)^{\boldsymbol{\mathfrak{e}_r}}(\alpha)&=\left\{\begin{array}{ccc} g^{\boldsymbol{\mathfrak{e}_r}}\left(f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)\alpha
f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)^{-1}\right)\cdot
f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)&\mbox{if} & f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)\neq 0, \\
0 & \mbox{if} & f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)= 0.\end{array}\right.
\label{2.7}
\end{align}
Indeed, the top formula in \eqref{2.6} follows from \eqref{2.3a} and the computation
$$
\sum \alpha^kg^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)f_k=g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\sum (g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)^{-1}\alpha g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha))^kf_k
=g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\cdot f^{\boldsymbol{\mathfrak{e}_\ell}}\left(g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)^{-1}\alpha
g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\right).
$$
The top formula in \eqref{2.7} is justified similarly, while the bottom formulas in \eqref{2.6}, \eqref{2.7}
follow immediately from \eqref{2.3a}, \eqref{2.3b}.
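Continuing the quaternionic sketch (the class `Q` and helpers are ours, repeated so the snippet stays self-contained), one can check the product formula \eqref{2.6} in both its cases:

```python
from fractions import Fraction

class Q:
    """Rational quaternion a + bi + cj + dk (a minimal sketch)."""
    def __init__(self, a, b=0, c=0, d=0):
        self.v = tuple(Fraction(x) for x in (a, b, c, d))
    def __add__(s, o): return Q(*(x + y for x, y in zip(s.v, o.v)))
    def __sub__(s, o): return Q(*(x - y for x, y in zip(s.v, o.v)))
    def __mul__(s, o):
        a, b, c, d = s.v; e, f, g, h = o.v
        return Q(a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)
    def inv(s):
        a, b, c, d = s.v; n = a*a + b*b + c*c + d*d
        return Q(a/n, -b/n, -c/n, -d/n)
    def __eq__(s, o): return s.v == o.v

def eval_left(f, a):   # f^{e_l}(a) = sum a^j f_j
    s, p = Q(0), Q(1)
    for c in f:
        s, p = s + p * c, p * a
    return s

def pmul(f, g):        # z commutes with the coefficients
    h = [Q(0)] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = h[i + j] + fi * gj
    return h

g = [Q(1, 2), Q(0, 0, 3), Q(1)]
f = [Q(0, 1, 1), Q(2), Q(0, 0, 0, 1)]
alpha = Q(1, 1)
c = eval_left(g, alpha)                 # g^{e_l}(alpha), nonzero here
lhs = eval_left(pmul(g, f), alpha)      # (gf)^{e_l}(alpha)
rhs = c * eval_left(f, c.inv() * alpha * c)
rho = [Q(0) - alpha, Q(1)]              # rho_alpha has left zero alpha
print(lhs == rhs, eval_left(pmul(rho, f), alpha) == Q(0))
```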
\begin{proposition}
For any $\alpha\in\mathbb F$ and $f,g\in\mathbb F[z]$,
\begin{equation}
L_\alpha(gf)=\left\{\begin{array}{lcc}
(L_\alpha g)\cdot f+g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\cdot L_{\widetilde{\alpha}}f, &\mbox{if}& g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\neq 0,\\
(L_\alpha g)\cdot f,&\mbox{if}& g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)= 0,\end{array}\right.
\label{Lprod}
\end{equation}
\label{P:lprod}
where $L_\alpha$ is defined as in \eqref{2.3} and where $\widetilde{\alpha}:=g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)^{-1}\alpha g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)$.
\end{proposition}
\begin{proof}
If $g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)=0$, then $gf=\boldsymbol{\rho}_\alpha (L_\alpha g)\cdot f$, which proves the bottom formula in \eqref{Lprod}.
If $g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\neq 0$, we define $\widetilde{\alpha}$ as above and observe that
$\boldsymbol{\rho}_{\alpha}g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)= g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\boldsymbol{\rho}_{\widetilde{\alpha}}$. Now we have, on account of \eqref{2.6},
\begin{align*}
\boldsymbol{\rho}_{\alpha}\cdot L_\alpha(gf)=gf-(gf)^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)&=(g-g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha))\cdot f+
g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)(f-f^{\boldsymbol{\mathfrak{e}_\ell}}(\widetilde{\alpha}))\\
&=\boldsymbol{\rho}_{\alpha}\cdot (L_\alpha g)\cdot f+g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\boldsymbol{\rho}_{\widetilde{\alpha}}\cdot (L_{\widetilde{\alpha}}f)\\
&=\boldsymbol{\rho}_{\alpha}\cdot (L_\alpha g)\cdot f+\boldsymbol{\rho}_{\alpha}\cdot g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)\cdot(L_{\widetilde{\alpha}}f),
\end{align*}
which completes the proof of \eqref{Lprod}.
\end{proof}
\subsection{Polynomial independence}
An element $\alpha\in\mathbb F$ is called a {\em left (right) zero} of $f\in\mathbb F[z]$ if
$f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)=0$ (respectively, $f^{\boldsymbol{\mathfrak{e}_r}}(\alpha)=0$). We will denote by ${\mathcal Z}_{\boldsymbol\ell}(f)$
and ${\mathcal Z}_{{\boldsymbol r}}(f)$ the respective sets of left and right zeros of $f$
and observe from \eqref{2.1u} that
\begin{equation}
\alpha\in{\mathcal Z}_{\boldsymbol\ell}(f) \; \; \Longleftrightarrow \; f\in\langle \boldsymbol{\rho}_\alpha\rangle_{\bf r}
\quad\mbox{and}\quad \alpha\in{\mathcal Z}_{\boldsymbol r}(f) \; \; \Longleftrightarrow \;
f\in \langle \boldsymbol{\rho}_\alpha\rangle_{\boldsymbol\ell}.
\label{2.4}
\end{equation}
More generally, given an algebraic set $\Delta\subset\mathbb F$, the polynomials
\begin{equation}
P_{\Delta,{\boldsymbol\ell}}={\bf lrcm}\left(\boldsymbol{\rho}_{\alpha}: \, \alpha\in\Delta\right)\quad\mbox{and}
\quad P_{\Delta,{\bf r}}={\bf llcm}\left(\boldsymbol{\rho}_{\alpha}: \, \alpha\in\Delta\right)
\label{minpol}
\end{equation}
generate the ideals $\langle P_{\Delta,{\boldsymbol\ell}}\rangle_{\bf r}$ and
$\langle P_{\Delta,{\bf r}}\rangle_{\boldsymbol\ell}$ consisting of polynomials $f\in\mathbb F[z]$ such that
$f^{\boldsymbol{\mathfrak{e}_\ell}}\vert_\Delta=0$ and $f^{\boldsymbol{\mathfrak{e}_r}}\vert_\Delta=0$, respectively:
\begin{equation}
\Delta\subseteq{\mathcal Z}_{\boldsymbol\ell}(f) \; \; \Longleftrightarrow \;
f\in\langle P_{\Delta,{\boldsymbol\ell}}\rangle_{\bf r}
\quad\mbox{and}\quad \Delta\subseteq{\mathcal Z}_{\boldsymbol r}(f) \; \;
\Longleftrightarrow \; f\in \langle P_{\Delta,{\bf r}}\rangle_{\boldsymbol\ell}.
\label{2.4u}
\end{equation}
The polynomials $P_{\Delta,{\boldsymbol\ell}}$ and $P_{\Delta,{\bf r}}$ are called {\em left} and
{\em right minimal polynomials} of $\Delta$. In particular, it follows from \eqref{2.4u} that
\begin{equation}
\Delta\subseteq{\mathcal Z}_{\boldsymbol\ell}(P_{\Delta,{\boldsymbol\ell}})
\quad\mbox{and}\quad
\Delta\subseteq {\mathcal Z}_{\bf r}(P_{\Delta,{\bf r}});
\label{minpol1}
\end{equation}
both inclusions can be proper, by the Gordon--Motzkin theorem \cite{gm}.
It is clear from \eqref{minpol1} that the numbers $\deg P_{\Delta,{\boldsymbol\ell}}$ and
$\deg P_{\Delta,{\bf r}}$ cannot exceed the cardinality of $\Delta$.
\begin{definition}
{\rm A set $\Delta\subset\mathbb F$ is called} left polynomially independent {\rm if
$\deg P_{\Delta,{\boldsymbol\ell}}=|\Delta|$, and it is called}
right polynomially independent {\rm if $\deg P_{\Delta,{\bf r}}=|\Delta|$}.
\label{D:1.1}
\end{definition}
The notion of polynomial independence ($P$-independence) was introduced in \cite{lam1}; see also
\cite{lamler1}, \cite{lamler2} for later elaborations. On account of \eqref{minpol1}, the equality
$\deg P_{\Delta,{\boldsymbol\ell}}=|\Delta|$ means that the polynomials
$\left(\boldsymbol{\rho}_{\alpha}: \, \alpha\in\Delta\right)$ are left relatively prime, i.e., each one (say, $\boldsymbol{\rho}_\beta$)
is left coprime with the {\bf lrcm} of the others, i.e., with the left minimal polynomial
$P_{\Delta\backslash\{\beta\},{\boldsymbol\ell}}$ of the set $\Delta\backslash\{\beta\}$. Since $\beta$ is
the only zero of $\boldsymbol{\rho}_\beta$, the latter simply means that
$P_{\Delta\backslash\{\beta\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\beta)\neq 0$. We record this observation along
with its right counter-part.
\begin{remark}
An algebraic set $\Delta\subset\mathbb F$ is left (right) $P$-independent if and only if
\begin{equation}
P_{\Delta\backslash\{\beta\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\beta)\neq 0\qquad (\mbox{respectively, \; }
P_{\Delta\backslash\{\beta\},{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta)\neq 0)
\quad\mbox{for all}\quad \beta\in\Delta.
\label{pind}
\end{equation}
\label{R:1.1r}
\end{remark}
The following theorem characterizes $P$-independent sets in interpolation terms and provides two
(left and right) noncommutative counter-parts of Remark \ref{R:1.0}.
\begin{theorem}
(1) A set $\Lambda=\{\alpha_1,\ldots,\alpha_n\}\subset\mathbb F$ is left $P$-independent if and only if
the left problem \eqref{1.18} has a solution in $P_n(\mathbb F)$ for any $c_1,\ldots, c_n\in\mathbb F$.
In this case, a unique $f_{\boldsymbol\ell}\in P_n(\mathbb F)$ subject to conditions \eqref{1.18} is given by the formula
\begin{equation}
f_{\boldsymbol\ell}(z)=\sum_{i=1}^n p_i(z)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}c_i,\; \; \mbox{where}\; p_i=P_{\Lambda\backslash\{\alpha_i\},{\boldsymbol\ell}}:=
{\bf lrcm}\big(\boldsymbol{\rho}_{\alpha_j}: \, j\neq i\big).
\label{1.16}
\end{equation}
(2) A set $\Omega=\{\beta_1,\ldots,\beta_k\}\subset \mathbb F$ is right $P$-independent if and only if
the right problem \eqref{1.19} has a solution in $P_k(\mathbb F)$ for any $d_1,\ldots, d_k\in\mathbb F$.
In this case, a unique $f_{\bf r}\in P_k(\mathbb F)$ subject to conditions \eqref{1.19} is given by
\begin{equation}
f_{\bf r}(z)=\sum_{i=1}^k d_i q_i^{\boldsymbol{\mathfrak{e}_r}}(\beta_i)^{-1}q_i(z),\; \;\mbox{where}\; \; q_i=P_{\Omega\backslash\{\beta_i\},{\bf r}}
:={\bf llcm}\big(\boldsymbol{\rho}_{\beta_j}: \, j\neq i\big).
\label{1.16r}
\end{equation}
\label{T:1.1}
\end{theorem}
\begin{proof}
To argue as in the commutative case, we consider $P_n(\mathbb F)$ and $\mathbb F^n$ as right $\mathbb F$-modules over $\mathbb F$ and
define the right-linear operator $T: \, P_n(\mathbb F)\to \mathbb F^n$ by the formula
$Tf=(f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_1),\ldots,f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_n))$. Since $\dim_{_{\mathbb F}}P_n(\mathbb F)=\dim_{_{\mathbb F}}\mathbb F^n$, this operator is surjective
(i.e., the problem \eqref{1.18} has a solution in $P_n(\mathbb F)$ for any $c_1,\ldots, c_n\in\mathbb F$)
if and only if it is injective, i.e., {\em no nonzero polynomial of degree less than $n$ left-vanishes at $\Lambda$}.
The latter means that the left minimal polynomial $P_{\Lambda,{\boldsymbol\ell}}$ of $\Lambda$ is of degree at least $n$ (hence exactly $n$), i.e., that
the set $\Lambda$ is left $P$-independent.
\smallskip
Conversely, if $\Lambda$ is left $P$-independent, then $P_{\Lambda\backslash\{\alpha_i\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)\neq 0$ for all $i=1,\ldots,n$, by Remark
\ref{R:1.1r}. Then the formula \eqref{1.16} makes sense and defines a polynomial $f_{\boldsymbol\ell}\in P_n(\mathbb F)$ satisfying conditions
\eqref{1.18}. The uniqueness follows since the operator $T$ is injective. This completes the proof of part (1) of the theorem.
The proof of part (2) is similar once we consider
$P_k(\mathbb F)$ and $\mathbb F^k$ as left $\mathbb F$-modules over $\mathbb F$ and deal with the left linear operator
$T: \, P_k(\mathbb F)\to \mathbb F^k$ given by $Tf=(f^{\boldsymbol{\mathfrak{e}_r}}(\beta_1),\ldots,f^{\boldsymbol{\mathfrak{e}_r}}(\beta_k))$.
\end{proof}
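For interpolation nodes lying in pairwise distinct conjugacy classes of $\mathbb H$ (hence left $P$-independent), the polynomials $p_i$ in \eqref{1.16} can be built by iterated use of the product formula \eqref{2.6}: multiplying on the right by $\boldsymbol{\rho}_{\widetilde\beta}$ with $\widetilde\beta=p^{\boldsymbol{\mathfrak{e}_\ell}}(\beta)^{-1}\beta\,p^{\boldsymbol{\mathfrak{e}_\ell}}(\beta)$ preserves the existing left zeros of $p$ and adds the left zero $\beta$. A self-contained sketch (the class `Q` and all helper names are ours):

```python
from fractions import Fraction

class Q:
    """Rational quaternion a + bi + cj + dk (a minimal sketch)."""
    def __init__(self, a, b=0, c=0, d=0):
        self.v = tuple(Fraction(x) for x in (a, b, c, d))
    def __add__(s, o): return Q(*(x + y for x, y in zip(s.v, o.v)))
    def __sub__(s, o): return Q(*(x - y for x, y in zip(s.v, o.v)))
    def __mul__(s, o):
        a, b, c, d = s.v; e, f, g, h = o.v
        return Q(a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)
    def inv(s):
        a, b, c, d = s.v; n = a*a + b*b + c*c + d*d
        return Q(a/n, -b/n, -c/n, -d/n)
    def __eq__(s, o): return s.v == o.v

def eval_left(f, a):
    s, p = Q(0), Q(1)
    for c in f:
        s, p = s + p * c, p * a
    return s

def pmul(f, g):
    h = [Q(0)] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = h[i + j] + fi * gj
    return h

def left_min_poly(nodes):
    """lrcm of the rho_a, a in nodes (assumes the nodes are left P-independent)."""
    p = [Q(1)]
    for b in nodes:
        c = eval_left(p, b)               # nonzero by P-independence
        bt = c.inv() * b * c
        p = pmul(p, [Q(0) - bt, Q(1)])    # right factor rho_{bt} adds left zero b
    return p

def lagrange_left(nodes, values):
    """The unique f of degree < n with f^{e_l}(nodes[i]) = values[i], cf. (1.16)."""
    n = len(nodes)
    out = [Q(0)] * n
    for i in range(n):
        p = left_min_poly(nodes[:i] + nodes[i + 1:])
        u = eval_left(p, nodes[i]).inv() * values[i]
        out = [o + coef * u for o, coef in zip(out, p)]
    return out

# Nodes in three distinct conjugacy classes of H, arbitrary target values.
nodes = [Q(0, 1), Q(1, 0, 1), Q(0, 0, 0, 2)]       # i, 1+j, 2k
values = [Q(1), Q(0, 1), Q(1, 1, 1, 1)]
f = lagrange_left(nodes, values)
print(all(eval_left(f, a) == c for a, c in zip(nodes, values)))
```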
\subsection{Consistency of interpolation conditions} By \eqref{2.4u}, the solution sets of homogeneous problems \eqref{1.18} and \eqref{1.19} are the ideals
$\langle P_{\Lambda,{\boldsymbol\ell}}\rangle_{\bf r}$ and $\langle P_{\Omega,{\bf r}}\rangle_{\boldsymbol\ell}$. Combining the latter
with Theorem \ref{T:1.1} leads us to the following conclusion.
\begin{remark}
If the sets $\Lambda$ and $\Omega$ in \eqref{1.7} are respectively, left and right $P$-independent, then
all polynomials $f\in\mathbb F[z]$ satisfying conditions \eqref{1.18} and \eqref{1.19} are parametrized by respective formulas
\begin{equation}
f=f_{\boldsymbol\ell}+P_{\Lambda,{\boldsymbol\ell}}h\quad \mbox{and}\quad f=f_{\bf r}+gP_{\Omega,{\bf r}},\quad h,g\in\mathbb F[z]
\label{ap16}
\end{equation}
where $f_{\boldsymbol\ell}$ and $f_{\bf r}$ are defined in \eqref{1.16} and \eqref{1.16r}.
\label{R:1.3}
\end{remark}
Let us now consider the left interpolation problem
\begin{equation}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=c_i\quad\mbox{for}\quad i=1,\ldots,N
\label{ap1}
\end{equation}
where the set $\Delta=\{\alpha_1,\ldots,\alpha_N\}$ is not necessarily left $P$-independent.
If $\deg P_{\Delta,{\boldsymbol\ell}}=n<N$, we can find a left $P$-independent subset $\Lambda\subset\Delta$ consisting of exactly $n$ elements
(a {\em left $P$-basis} of $\Delta$) and having the same left minimal polynomial as $\Delta$, that is, $P_{\Lambda,{\boldsymbol\ell}}=P_{\Delta,{\boldsymbol\ell}}$.
Without loss of generality we may let $\Lambda=\{\alpha_1,\ldots, \alpha_n\}$.
\smallskip
By Remark \ref{R:1.3}, any polynomial $f$ satisfying conditions \eqref{ap1} (for $i=1,\ldots,n$) is of the form $f=f_{\boldsymbol\ell}+P_{\Delta,{\boldsymbol\ell}}h$
for some $h\in\mathbb F[z]$ and $f_{\boldsymbol\ell}$ given in \eqref{1.16}. Therefore
\begin{equation}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma)=f_{\boldsymbol\ell}^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma)\quad \mbox{for all}\quad \gamma\in
{\mathcal Z}_{\boldsymbol\ell}(P_{\Delta,{\boldsymbol\ell}}).
\label{ap20}
\end{equation}
By \eqref{minpol1}, we have in particular,
$f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_j)=f_{\boldsymbol\ell}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_j)$ for $j=1,\ldots,N$.
Combining the latter equalities with \eqref{ap1} (for $j>n)$ and the formula \eqref{1.16} for $f_{\boldsymbol\ell}$, we get
\begin{equation}
\sum_{i=1}^n p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_j)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}c_i=c_j\quad\mbox{for}\quad j=n+1,\ldots,N.
\label{ap2}
\end{equation}
Thus, if the problem \eqref{ap1} is solvable, then the Lagrange polynomial $f_{\boldsymbol\ell}$ is a solution. For this to happen, the
target value $c_j$ (for $j>n$) has to be equal to the actual value $f_{\boldsymbol\ell}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_j)$. If
at least one of the equalities \eqref{ap2} fails, then the problem \eqref{ap1} is inconsistent. Otherwise, any polynomial
$f\in\mathbb F[z]$ satisfying the first $n$ conditions in \eqref{ap1} will satisfy the remaining conditions
automatically. After removing the redundant conditions we get a reduced interpolation problem based on the
left $P$-independent set $\Lambda$ and with the same solution set as the original problem.
\smallskip
The same observations apply to the right-sided problem: a set $\Delta=\{\beta_1,\ldots, \beta_M\}$ with the right minimal
polynomial $P_{\Delta,{\bf r}}$ of degree $k$, can be rearranged so that its subset $\Omega=\{\beta_1,\ldots,\beta_k\}$
is right $P$-independent. Then the right interpolation problem
\begin{equation}
f^{\boldsymbol{\mathfrak{e}_r}}(\beta_i)=d_i\quad\mbox{for}\quad i=1,\ldots,M
\label{ap3}
\end{equation}
has a solution if and only if the following compatibility conditions are satisfied
\begin{equation}
\sum_{i=1}^k d_iq_i^{\boldsymbol{\mathfrak{e}_r}}(\beta_i)^{-1}q_i^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=d_j\quad\mbox{for}\quad j=k+1,\ldots,M,
\label{ap4}
\end{equation}
where $q_i$ are given in \eqref{1.16r}. If the latter equalities hold true, then the last $M-k$ conditions in \eqref{ap3} are redundant and
can be disregarded.
\begin{remark}
{\rm Since a set $\Delta$ is left (right) $P$-independent if (and clearly, only if) its intersection with each conjugacy class is,
it suffices to verify compatibility conditions \eqref{ap2} and \eqref{ap4} within each conjugacy class
having non-empty intersection with $\Delta$. In other words, the problems \eqref{ap1} and \eqref{ap3} are solvable if their subproblems
within each conjugacy class are.}
\label{R:2.3}
\end{remark}
In the subsequent analysis, we will make frequent use of polynomials over $Z_{\mathbb F}$, for which the notions of
left and right values, and consequently, the notions of left and right zeros coincide (see formulas \eqref{2.2}).
Without any ambiguity, we may write $g(\alpha)$ and ${\mathcal Z}(g)$ for the values and the
zero set of a central polynomial $g$. Besides, if $g\in Z_{\mathbb F}[z]$, then for each $\alpha$ and $\tau\neq 0$, we have
$g(\tau \alpha\tau^{-1})=\tau g(\alpha)\tau^{-1}$, so that ${\mathcal Z}(g)$ contains with each $\alpha$ the whole conjugacy class $[\alpha]$.
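Over $\mathbb H$, for instance, the conjugacy class of $\alpha=a+bi+cj+dk$ is the zero set of the central polynomial $z^2-2az+(a^2+b^2+c^2+d^2)$, so its left and right evaluations agree and vanish on the whole class. A quick exact-arithmetic check (the class `Q` and helpers are ours):

```python
from fractions import Fraction

class Q:
    """Rational quaternion a + bi + cj + dk (a minimal sketch)."""
    def __init__(self, a, b=0, c=0, d=0):
        self.v = tuple(Fraction(x) for x in (a, b, c, d))
    def __add__(s, o): return Q(*(x + y for x, y in zip(s.v, o.v)))
    def __mul__(s, o):
        a, b, c, d = s.v; e, f, g, h = o.v
        return Q(a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)
    def inv(s):
        a, b, c, d = s.v; n = a*a + b*b + c*c + d*d
        return Q(a/n, -b/n, -c/n, -d/n)
    def __eq__(s, o): return s.v == o.v

def eval_left(f, a):
    s, p = Q(0), Q(1)
    for c in f:
        s, p = s + p * c, p * a
    return s

def eval_right(f, a):
    s, p = Q(0), Q(1)
    for c in f:
        s, p = s + c * p, p * a
    return s

alpha = Q(1, 2, 0, 2)                      # a = 1, |alpha|^2 = 9
chi = [Q(9), Q(-2), Q(1)]                  # X(z) = z^2 - 2z + 9, central coefficients
ok = True
for h in [Q(1, 1), Q(0, 0, 2, 1), Q(3, 1, 4, 1)]:
    g = h * alpha * h.inv()                # conjugates of alpha
    ok = ok and eval_left(chi, g) == Q(0) == eval_right(chi, g)
print(ok)
```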
\subsection{Extension formulas}\label{sb} The formula \eqref{ap20} shows that given an algebraic set $\Delta$ with a
fixed left $P$-basis $\Lambda\subset \Delta$ and given any $f\in\mathbb F[z]$, the Lagrange polynomial $f_{\boldsymbol\ell}$
constructed from the left values of $f$ on $\Lambda$ provides a unique extension of $f$ from $\Delta$ (even from $\Lambda$)
to a possibly larger set ${\mathcal Z}_{\boldsymbol\ell}(P_{\Delta,{\boldsymbol\ell}})$
(the {\em left $P$-closure} of $\Delta$, in the terminology of \cite{lam1}).
On the other hand, if $\gamma\not\in {\mathcal Z}_{\boldsymbol\ell}(P_{\Delta,{\boldsymbol\ell}})$,
then the set $\Lambda\cup\{\gamma\}$ is left $P$-independent and the value $f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma)$ is independent of
$f^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{\Lambda}$ (and therefore, of $f^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{\Delta}$), by Theorem \ref{T:1.1}.
Thus, it makes sense to consider extensions of polynomials within conjugacy classes. Similar observations apply to right evaluations.
\smallskip
If $V$ is an {\em algebraic} conjugacy class in $\mathbb F$, its left and right minimal polynomials \eqref{minpol}
are equal to the same central polynomial which will be denoted by $\mathcal X_{_V}$. Thus,
$\mathcal X_{_V}=P_{V,\boldsymbol\ell}=P_{V,{\bf r}}$ and ${\mathcal Z}(\mathcal X_{_V})=V$.
\smallskip
For any polynomial to be uniquely extended from a given $\Delta\subset V$ to the whole $V$, we need $\Delta$ to contain a
left $P$-basis for $V$. Without loss of generality (and in order to use Lagrange interpolation formulas) we may assume that
$\Delta$ itself is a left $P$-basis for $V$. In this case, the restriction of $f^{\boldsymbol{\mathfrak{e}_\ell}}$ to a left $P$-basis of a conjugacy class $V$ uniquely determines not only
$f^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{_V}$ but also $f^{\boldsymbol{\mathfrak{e}_r}}\vert_{_V}$. Similarly, the restriction of $f^{\boldsymbol{\mathfrak{e}_r}}$ to a right $P$-basis of $V$
uniquely determines $f^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{_V}$ and $f^{\boldsymbol{\mathfrak{e}_r}}\vert_{_V}$. Details are furnished below.
\begin{lemma}
Let $\Delta=\{\gamma_1,\ldots,\gamma_m\}$ be a left $P$-basis for the conjugacy class $V$.
Then for any $f\in\mathbb F[z]$ and $\gamma\in V$,
\begin{align}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma)&=\sum_{i=1}^m
P_{\Delta\backslash\{\gamma_i\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma)P_{\Delta\backslash\{\gamma_i\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_i)^{-1}f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_i),\label{6.2r}\\
f^{\boldsymbol{\mathfrak{e}_r}}(\gamma)&=\sum_{i=1}^m \big(P_{\Delta\backslash\{\gamma_i\},{\boldsymbol\ell}}\cdot P_{\Delta\backslash\{\gamma_i\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_i)^{-1}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_i)\big)^{\boldsymbol{\mathfrak{e}_r}}(\gamma).
\label{6.3r}
\end{align}
\label{L:ext}
\end{lemma}
\begin{proof}
Since the set $\Delta$ is a left $P$-basis for $V$, the formulas \eqref{6.2r}, \eqref{6.3r} make sense and besides,
$P_{\Delta,{\boldsymbol\ell}}=P_{V,{\boldsymbol\ell}}=\mathcal {X}_{_V}$. Since the polynomial
$$
g(z)=f(z)-\sum_{i=1}^m P_{\Delta\backslash\{\gamma_i\},{\boldsymbol\ell}}(z)P_{\Delta\backslash\{\gamma_i\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_i)^{-1}f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_i)
$$
satisfies conditions $g^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_i)=0$ for $i=1,\ldots,m$ (i.e., $g^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{\Delta}=0$), it follows from
\eqref{2.4u} that $g\in\langle P_{\Delta,{\boldsymbol\ell}}\rangle_{\bf r}=\langle \mathcal {X}_{_V}\rangle$, and the latter
ideal is two-sided, since $\mathcal {X}_{_V}\in Z_{\mathbb F}[z]$.
Then $g^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma)=g^{\boldsymbol{\mathfrak{e}_r}}(\gamma)=0$ for all $\gamma\in {\mathcal Z}(\mathcal {X}_{_V})=V$, which implies
\eqref{6.2r} and \eqref{6.3r}.
\end{proof}
The right-sided version of Lemma \ref{L:ext} asserts that for a right $P$-basis $\Delta=\{\gamma_1,\ldots,\gamma_m\}$ of $V$,
any polynomial $f\in\mathbb F[z]$ and the right Lagrange polynomial constructed from $f^{\boldsymbol{\mathfrak{e}_r}}\vert_{\Delta}$
have the same left and right values at any $\gamma\in V$. We omit the precise formulation.
\begin{example}
{\rm If $\mathbb F=\mathbb H$, the skew field of real quaternions, any set $\Delta=\{\gamma_1,\gamma_2\}$ in a conjugacy class $V$
is a left and right $P$-basis for $V$. Adapting formulas \eqref{6.2r} and \eqref{6.3r} to this particular ``two-point" case, where
$P_{\Delta\backslash\{\gamma_1\},{\boldsymbol\ell}}=\boldsymbol{\rho}_{\gamma_2}$ and
$P_{\Delta\backslash\{\gamma_2\},{\boldsymbol\ell}}=\boldsymbol{\rho}_{\gamma_1}$, we conclude:}
for any $f\in\mathbb H[z]$ and any $\gamma_1,\gamma_2$ and $\gamma$ in the same conjugacy class,
\begin{align*}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma)&=(\gamma-\gamma_2)(\gamma_1-\gamma_2)^{-1}f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_1)+(\gamma-\gamma_1)
(\gamma_2-\gamma_1)^{-1}f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_2),\\
f^{\boldsymbol{\mathfrak{e}_r}}(\gamma)&=(\gamma_1-\gamma_2)^{-1}f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_1)\gamma-
\gamma_2(\gamma_1-\gamma_2)^{-1}f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_1)\notag\\
&\qquad +\gamma_1(\gamma_1-\gamma_2)^{-1}f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_2)-(\gamma_1-\gamma_2)^{-1}f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma_2)\gamma.
\end{align*}
{\rm Hence, the latter (well known) formulas turn out to be particular instances of the Lagrange interpolation formula.}
\label{E:1.1}
\end{example}
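The two-point formulas in the example above can be checked numerically. The following is a minimal Python sketch (an illustration, assuming quaternions are stored as tuples $(a,b,c,d)$ for $a+bi+cj+dk$ and the left-evaluation convention $f^{\boldsymbol{\mathfrak{e}_\ell}}(\gamma)=\sum_k\gamma^k a_k$ for $f(z)=\sum_k a_k z^k$); the points $\gamma_1=i$, $\gamma_2=j$, $\gamma=k$ lie in one conjugacy class, and the coefficients of $f$ are arbitrary test data.

```python
# A quaternion a + b*i + c*j + d*k is stored as the tuple (a, b, c, d).
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qadd(p, q): return tuple(x + y for x, y in zip(p, q))
def qsub(p, q): return tuple(x - y for x, y in zip(p, q))

def qinv(p):
    n = sum(x * x for x in p)
    return (p[0] / n, -p[1] / n, -p[2] / n, -p[3] / n)

def left_value(coeffs, g):
    # Left evaluation of f(z) = sum_k coeffs[k] z^k at g: sum_k g^k coeffs[k].
    acc, power = (0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)
    for a in coeffs:
        acc = qadd(acc, qmul(power, a))
        power = qmul(power, g)
    return acc

# gamma1 = i, gamma2 = j, gamma = k lie in one conjugacy class (zero real
# part, unit norm); the coefficients of f are arbitrary test data.
g1, g2, g = (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0)
f = [(1.0, 2.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0), (3.0, 0.0, 0.0, 1.0)]

# Two-point Lagrange formula for the left value at gamma:
lhs = left_value(f, g)
rhs = qadd(qmul(qmul(qsub(g, g2), qinv(qsub(g1, g2))), left_value(f, g1)),
           qmul(qmul(qsub(g, g1), qinv(qsub(g2, g1))), left_value(f, g2)))
assert max(abs(u - v) for u, v in zip(lhs, rhs)) < 1e-12
```

The check passes because, on a fixed conjugacy class, every power $\gamma^k$ equals $p_k+q_k\gamma$ with real $p_k,q_k$ depending only on the class, so the identity reduces to a computation with the two real components.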
\section{The two-sided problem}
\setcounter{equation}{0}
Assuming that the sets $\Lambda$ and $\Omega$ of interpolation nodes in \eqref{1.7} are, respectively, left and right $P$-independent, i.e.,
such that
\begin{equation}
\deg P_{\Lambda,\boldsymbol\ell}=n\quad\mbox{and}\quad\deg P_{\Omega,{\bf r}}=k,
\label{ap17}
\end{equation}
we now address the two-sided problem \eqref{1.18}, \eqref{1.19}. This problem can be approached from two directions. First, one can start with
the formula \eqref{ap16} describing all solutions to the left subproblem \eqref{1.18}, and then characterize all parameters
$h\in\mathbb F[z]$ such that $f=f_{\boldsymbol\ell}+P_{\Lambda,{\boldsymbol\ell}}h$ satisfies the right-sided conditions \eqref{1.19}.
The main difficulty here is that, according to \eqref{2.7},
$$
(P_{\Lambda,{\boldsymbol\ell}}h)^{\boldsymbol{\mathfrak{e}_r}}(\beta)=P_{\Lambda,{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_r}}\big(h^{\boldsymbol{\mathfrak{e}_r}}(\beta)\beta h^{\boldsymbol{\mathfrak{e}_r}}(\beta)^{-1}\big)\cdot h^{\boldsymbol{\mathfrak{e}_r}}(\beta),
$$
which does not allow us to separate $P_{\Lambda,{\boldsymbol\ell}}$ and $h$. Alternatively, we can start with a more restricted (but simpler) problem
obtained by imposing extra interpolation conditions, and then use the target values in these conditions as parameters describing solutions of the original problem.
More precisely, if the problem \eqref{1.18}, \eqref{1.19} admits a solution $f\in\mathbb F[z]$, then it follows from \eqref{4.4}
that the elements $\psi_{ij}=(L_{\alpha_i}f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)$ satisfy equalities
\begin{equation}
\alpha_i \psi_{ij}-\psi_{ij}\beta_j=c_i-d_j\quad \mbox{for all}\quad i=1,\ldots, n; \; j=1,\ldots,k.
\label{5.3b}
\end{equation}
Note that equalities \eqref{5.3b} can be equivalently written as a single matrix equality
\begin{equation}
\sbm{\alpha_1 && 0 \\ &\ddots& \\ 0&& \alpha_n}X-X\sbm{\beta_1 && 0 \\ &\ddots& \\ 0&& \beta_k}
=\sbm{c_1 \\ \vdots \\ c_n}\sbm{1 & \cdots & 1}-\sbm{1 \\ \vdots \\ 1}\sbm{d_1 & \cdots & d_k}
\label{ap19}
\end{equation}
satisfied by the matrix $X=[\psi_{ij}]\in\mathbb F^{n\times k}$.
\smallskip
We are going to use $\psi_{ij}$ as the prescribed target values for an unknown interpolant $f$, thus arriving at the following modified interpolation problem:
{\em given two sets $\Lambda$ and $\Omega$ as in \eqref{1.7}
along with prescribed $c_i$, $d_j$, $\psi_{ij}\in\mathbb F$, find an $f\in\mathbb F[z]$ such that
\begin{equation}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=c_i,\quad f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=d_j\quad\mbox{and}\quad (L_{\alpha_i} f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=\psi_{ij}
\label{5.3a}
\end{equation}
for $i=1,\ldots,n$ and $j=1,\ldots,k$.}
\smallskip
This modified problem is quite simple: if the necessary conditions \eqref{5.3b} are met, the problem admits a unique solution in
$P_{n+k}(\mathbb F)$, whereas the solution set of its homogeneous counterpart equals the product of the ideals
$\langle P_{\Lambda,\boldsymbol\ell}\rangle_{\bf r}$ and $\langle P_{\Omega,{\bf r}}\rangle_{\boldsymbol\ell}$.
Details are given in Propositions \ref{P:hom} and \ref{P:nhom} below.
\begin{proposition}
Given sets \eqref{1.7}, a polynomial $f\in\mathbb F[z]$ satisfies conditions
\begin{equation}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=0,\quad f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=0\quad\mbox{and}\quad
(L_{\alpha_i} f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=0
\label{5.3}
\end{equation}
for all $\alpha_i\in\Lambda$, $\beta_j\in\Omega$
if and only if it belongs to $\langle P_{\Lambda,\boldsymbol\ell}\rangle_{\bf r}\cdot \langle P_{\Omega,{\bf r}}\rangle_{\boldsymbol\ell}
= P_{\Lambda,\boldsymbol\ell}\cdot\mathbb F[z]\cdot P_{\Omega,{\bf r}}$.
\label{P:hom}
\end{proposition}
\begin{proof} For any $h\in\mathbb F[z]$, the polynomial $f=P_{\Lambda,\boldsymbol\ell}\cdot h\cdot P_{\Omega,\bf
r}$ satisfies conditions $f^{\boldsymbol{\mathfrak{e}_\ell}}\vert_\Lambda=0$ and $f^{\boldsymbol{\mathfrak{e}_r}}\vert_{\Omega}=0$, by formulas \eqref{2.3a}, \eqref{2.3b}
and the definitions \eqref{minpol} of minimal polynomials. Since $P_{\Lambda,\boldsymbol\ell}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=0$
for any $\alpha_i\in\Lambda$, we have
$$
L_{\alpha_i} f=L_{\alpha_i}(P_{\Lambda,\boldsymbol\ell}\cdot h\cdot P_{\Omega,\bf
r})=(L_{\alpha_i}P_{\Lambda,\boldsymbol\ell})\cdot h\cdot P_{\Omega,\bf r}
$$
and since $P_{\Omega,\bf r}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=0$ for any $\beta_j\in\Omega$,
evaluating the latter equality at $z=\beta_j$ on the right gives $(L_{\alpha_i} f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=0$.
Conversely, for fixed $\alpha_i$ and $\beta_j$, we have by \eqref{2.1u},
$$
f=f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)+\boldsymbol{\rho}_{\alpha_i}\cdot(L_{\alpha_i} f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)+\boldsymbol{\rho}_{\alpha_i} \cdot (R_{\beta_j}L_{\alpha_i}f)\cdot\boldsymbol{\rho}_{\beta_j}.
$$
If $f$ satisfies conditions \eqref{5.3}, we have $f=\boldsymbol{\rho}_{\alpha_i}h \boldsymbol{\rho}_{\beta_j}$ with $h=R_{\beta_j}L_{\alpha_i}f$. Hence,
$f$ belongs to $\langle \boldsymbol{\rho}_{\alpha_i}\rangle_{\bf r}
\cdot \langle\boldsymbol{\rho}_{\beta_j}\rangle_{\boldsymbol\ell}$ for all
$(\alpha_i,\beta_j)\in\Lambda\times \Omega$. By \eqref{minpol}, it then follows that for each fixed $\beta_j$, $R_{\beta_j} f$ belongs to
$\langle P_{\Lambda,\boldsymbol\ell}\rangle_{\bf r}$ so that $f$ belongs to
$\langle P_{\Lambda,\boldsymbol\ell}\rangle_{\bf r}\cdot \langle\boldsymbol{\rho}_{\beta_j}\rangle_{\boldsymbol\ell}$ for all $\beta_j\in\Omega$.
Using the same argument as above, we arrive at the desired conclusion.
\end{proof}
\begin{remark}
{\rm Conditions \eqref{5.3} are not independent: it follows from \eqref{4.4} that after dropping the left (or the right) conditions
in \eqref{5.3}, the remaining conditions still define the product-ideal $P_{\Lambda,\boldsymbol\ell}\cdot\mathbb F[z]\cdot P_{\Omega,{\bf r}}$.
It is of some interest to characterize the latter set in terms of (presumably, $n+k$) independent conditions. One possible choice is to take
all left conditions in \eqref{5.3} and certain $k$ linear combinations of the two-sided conditions. In more detail, if $[v_1 \; v_2 \; \ldots \; v_n]$
denotes the bottom row of the matrix $W^{-1}$, where $W=\big[\alpha_i^{j-1}\big]_{i,j=1}^n$
is the Vandermonde matrix associated with $\Lambda$ (it is invertible since $\Lambda$ is left $P$-independent; see \cite{lam1}), then, whenever
a polynomial $f$ satisfies conditions
$$
f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=0 \; \; (i=1,\ldots,n)\quad\mbox{and}\quad \sum_{i=1}^n v_i (L_{\alpha_i}f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=0 \; \; (j=1,\ldots,k),
$$
it belongs to $P_{\Lambda,\boldsymbol\ell}\cdot\mathbb F[z]\cdot P_{\Omega,{\bf r}}$.}
\label{R:3.31}
\end{remark}
\begin{proposition}
Under the assumptions \eqref{ap17}, the equalities \eqref{5.3b} are necessary and sufficient for
the problem \eqref{5.3a} to have a solution. In this case, the formula
\begin{equation}
f(z)=\sum_{i=1}^n p_i(z)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}c_i+P_{\Lambda,\boldsymbol\ell}(z)\cdot\sum_{i=1}^n\sum_{j=1}^k
p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\psi_{ij}q_j^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1}q_j(z),
\label{bap1}
\end{equation}
where $p_i=P_{\Lambda\backslash\{\alpha_i\},\boldsymbol\ell}$ and $q_j=P_{\Omega\backslash\{\beta_j\},{\bf r}}$,
defines a unique polynomial $f\in P_{n+k}(\mathbb F)$ satisfying conditions \eqref{5.3a}.
\label{P:nhom}
\end{proposition}
\begin{proof}
The necessity of \eqref{5.3b} follows from equality \eqref{4.4}. The polynomial $f$ in \eqref{bap1} is of the form
$f=f_{\boldsymbol\ell}+P_{\Lambda,{\boldsymbol\ell}}h$ (where $f_{\boldsymbol\ell}$ is the left Lagrange polynomial \eqref{1.16}) and hence,
it satisfies the left-sided conditions in \eqref{5.3a}, by Remark \ref{R:1.3}. It remains to show that if equalities \eqref{5.3b} hold, the
polynomial \eqref{bap1} satisfies the rest of the conditions in \eqref{5.3a}. Once the two-sided conditions in \eqref{5.3a} are confirmed, the
right-sided conditions follow automatically, by \eqref{4.4} and \eqref{5.3b}:
$$
f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)-\alpha_i \cdot(L_{\alpha_i} f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)+(L_{\alpha_i} f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=c_i-\alpha_i\psi_{ij}+\psi_{ij}\beta_j=d_j,
$$
for $j=1,\ldots,k$. To verify that $f$ satisfies the third condition in \eqref{5.3a} we first note that
\begin{equation}
P_{\Lambda,\boldsymbol\ell}=p_i\cdot \boldsymbol{\rho}_{\widetilde{\alpha}_i},\quad\mbox{where}\quad
\widetilde{\alpha}_i=p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\alpha_i p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)
\label{5.3c}
\end{equation}
for every $i=1,\ldots,n$. Indeed, the polynomial $g=p_i\cdot \boldsymbol{\rho}_{\widetilde{\alpha}_i}$ satisfies
$g^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_j)=0$ for $j\neq i$ (by the definition of $p_i=P_{\Lambda\backslash\{\alpha_i\},\boldsymbol\ell}$) and for $j=i$ (by the formula \eqref{2.6}).
Since $\deg g=n$, it follows that $g$ is the minimal polynomial of $\Lambda$, i.e., that $g=P_{\Lambda,\boldsymbol\ell}$.
By Proposition \ref{P:lprod}, we now have, for each $i,s=1,\ldots,n$,
\begin{equation}
L_{\alpha_s}P_{\Lambda,\boldsymbol\ell}=L_{\alpha_s}(p_i\cdot \boldsymbol{\rho}_{\widetilde{\alpha}_i})=
(L_{\alpha_s}p_i)\cdot \boldsymbol{\rho}_{\widetilde{\alpha}_i}+p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_s).
\label{Lproda}
\end{equation}
We also observe the identity
\begin{equation}
p_1(z)p_1^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_1)^{-1}+p_2(z)p_2^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_2)^{-1}+\ldots +p_n(z)p_n^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_n)^{-1}- 1\equiv 0.
\label{5.3d}
\end{equation}
Indeed, the polynomial on the left side is of degree less than $n$ and has left zeros at $\alpha_1,\ldots,\alpha_n$. Since
the set $\Lambda$ is left $P$-independent, \eqref{5.3d} follows. In particular, we conclude from \eqref{5.3d} that
\begin{equation}
L_{\alpha}\bigg(\sum_{i=1}^n p_i\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\bigg)=0\quad \mbox{for any} \quad \alpha\in\mathbb F.
\label{5.3e}
\end{equation}
For any fixed $\alpha_s\in\Lambda$ and $\beta_t\in\Omega$, we have for
$f$ of the form \eqref{bap1},
\begin{equation}
(L_{\alpha_s}f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)=\sum_{i=1}^n \big((L_{\alpha_s}p_i)\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}c_i\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)+\Phi
\label{ap2a}
\end{equation}
where we have set for short,
$$
\Phi=\sum_{i=1}^n\sum_{j=1}^k \big (L_{\alpha_s}\big(P_{\Lambda,\boldsymbol\ell}\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\psi_{ij}q_j^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1}q_j\big)\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t).
$$
Due to equalities $P_{\Lambda,\boldsymbol\ell}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_s)=0$ and $q_j^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)=P_{\Omega\backslash\{\beta_j\},{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)=0$
(for all $j\neq t$), the latter expression for $\Phi$ can be written as
$$
\Phi=\sum_{i=1}^n \big (\big(L_{\alpha_s}P_{\Lambda,\boldsymbol\ell}\big)\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\psi_{it}\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t).
$$
Substituting \eqref{Lproda} into the latter equality and taking into account that $p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_s)=0$ for all $i\neq s$, we have
\begin{align}
\Phi&=\sum_{i=1}^n \big (\big((L_{\alpha_s}p_i)\cdot \boldsymbol{\rho}_{\widetilde{\alpha}_i}+p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_s)\big)\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}
\psi_{it}\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)\notag\\
&=\sum_{i=1}^n \big ((L_{\alpha_s}p_i)\cdot \boldsymbol{\rho}_{\widetilde{\alpha}_i}\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\psi_{it}\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)
+\psi_{st}\notag\\
&=\sum_{i=1}^n \big ((L_{\alpha_s}p_i)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\cdot \boldsymbol{\rho}_{\alpha_i} \cdot\psi_{it}\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)+\psi_{st},
\label{ap3a}
\end{align}
where the last equality holds, by the definition of $\widetilde{\alpha}_i$ in \eqref{5.3c}. Since by \eqref{5.3b},
$$
\big(\boldsymbol{\rho}_{\alpha_i}\cdot \psi_{it}\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)=\psi_{it}\beta_t-\alpha_i\psi_{it}=d_t-c_i,
$$
we may invoke formula \eqref{2.3b} to write \eqref{ap3a} as
\begin{align*}
\Phi&=\sum_{i=1}^n \big ((L_{\alpha_s}p_i)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\cdot (\boldsymbol{\rho}_{\alpha_i} \cdot\psi_{it})^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)+\psi_{st}\\
&=\sum_{i=1}^n \big ((L_{\alpha_s}p_i)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\cdot (d_t-c_i)\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)+\psi_{st}.
\end{align*}
Substituting the latter expression for $\Phi$ into \eqref{ap2a} and making use of \eqref{5.3d} we get
\begin{align*}
(L_{\alpha_s}f)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)&=\sum_{i=1}^n \big ((L_{\alpha_s}p_i)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\cdot d_t\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)+\psi_{st}\\
&=\bigg(L_{\alpha_s}\bigg(\sum_{i=1}^n p_i\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\bigg)\cdot d_t\bigg)^{\boldsymbol{\mathfrak{e}_r}}(\beta_t)+\psi_{st}=\psi_{st}.
\end{align*}
The difference of two polynomials $f,g$ of degree less than $n+k$ and satisfying conditions \eqref{5.3a} is in
$P_{\Lambda,\boldsymbol\ell}\cdot\mathbb F[z]\cdot P_{\Omega,{\bf r}}$, by Proposition \ref{P:hom}. Therefore $f=g$ and the
uniqueness of $f\in P_{n+k}(\mathbb F)$ satisfying conditions \eqref{5.3a} follows.
\end{proof}
\begin{remark}
{\rm The formula \eqref{bap1} looks asymmetric with respect to the left and right interpolation subproblems. To remove this asymmetry, note
that the polynomial $f$ in \eqref{bap1} can be alternatively written in terms of the right Lagrange polynomial \eqref{1.16r} as
\begin{equation}
f(z)=\sum_{j=1}^k d_j q_j^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1}q_j(z)+\sum_{i=1}^n\sum_{j=1}^k p_i(z)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\psi_{ij}q_j^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1}P_{\Omega,{\bf r}}(z).
\label{ap1a}
\end{equation}
Verification of equality of right-hand side expressions in formulas \eqref{bap1} and \eqref{ap1a} relies on relations \eqref{5.3b} and is quite straightforward.}
\label{R:3.30}
\end{remark}
Upon interpreting the target values $\psi_{ij}\in\mathbb F$ in \eqref{5.3a} as unspecified parameters subject to {\em Sylvester equations} \eqref{5.3b}
we arrive at the following consequence of Proposition \ref{P:nhom}.
\begin{proposition}
Under assumptions \eqref{ap17}, the formula \eqref{bap1} (or \eqref{ap1a}) establishes a bijection between $nk$-tuples $\{\psi_{ij}\}$ of solutions
to the Sylvester equations \eqref{5.3b} (equivalently, solutions $X=[\psi_{ij}]\in\mathbb F^{n\times k}$ to the
matrix Sylvester equation \eqref{ap19}) and all polynomials $f\in P_{n+k}(\mathbb F)$ satisfying conditions \eqref{1.18}, \eqref{1.19}.
\label{P:nhoma}
\end{proposition}
Consequently, the problem \eqref{1.18}, \eqref{1.19} has a solution (a unique solution in $P_{n+k}(\mathbb F)$)
if and only if each equation in \eqref{ap19} has a solution in $\mathbb F$ (respectively, has a unique solution in $\mathbb F$). To proceed
further, we recall some needed results concerning the solvability in $\mathbb F$ of the scalar Sylvester equation
\begin{equation}
\alpha x-x\beta=\gamma, \qquad \alpha,\beta,\gamma\in\mathbb F;
\label{4.25f}
\end{equation}
the study of the latter equation in the context of general division rings goes back to \cite{jacob1} and \cite{johnson}
(see also \cite{cohn2}, \cite[Section 6]{lamler2}), and to Hamilton (see, e.g., \cite[p.~123]{tait}) in the case of real quaternions.
We will assume that $\alpha$ is algebraic over $Z_{\mathbb F}$. In this case, the conjugacy class $[\alpha]$ is algebraic and its
minimal polynomial $\mathcal X_{_{[\alpha]}}\in Z_{\mathbb F}[z]$ turns out to be the {\em minimal central polynomial} for $\alpha$ (as well as for any
$\beta\in[\alpha]$). Furthermore,
\begin{align}
\mathcal X_{_{[\alpha]}}=\boldsymbol{\rho}_\beta \cdot (L_\beta \mathcal X_{_{[\alpha]}})&=(R_\beta \mathcal X_{_{[\alpha]}})\cdot\boldsymbol{\rho}_\beta,\quad L_\beta \mathcal X_{_{[\alpha]}}=R_\beta \mathcal X_{_{[\alpha]}},\label{ma11}\\
\mathcal X_{_{[\alpha]}}^\prime(\beta)&=(L_\beta \mathcal X_{_{[\alpha]}})^{\boldsymbol{\mathfrak{e}_\ell}}(\beta)\neq 0\quad\mbox{for each}\quad \beta\in[\alpha].
\label{ma10}
\end{align}
Indeed, the first equality in \eqref{ma11} holds since each $\beta\in [\alpha]$ is a left and a right zero of $\mathcal X_{_{[\alpha]}}$, and the second equality holds since
$\mathcal X_{_{[\alpha]}}\in Z_{\mathbb F}[z]$. The product rule gives $\mathcal X_{_{[\alpha]}}^\prime=L_\beta \mathcal X_{_{[\alpha]}}+\boldsymbol{\rho}_\beta \cdot (L_\beta \mathcal X_{_{[\alpha]}})^\prime$, which, being
evaluated at $\beta$ on the left, implies the equality in \eqref{ma10}. Since the formal derivative $\mathcal X_{_{[\alpha]}}^\prime$ also belongs to $Z_{\mathbb F}[z]$ and
$\deg \mathcal X_{_{[\alpha]}}^\prime<\deg \mathcal X_{_{[\alpha]}}$, it follows from the minimality of $\mathcal X_{_{[\alpha]}}$ that $\mathcal X_{_{[\alpha]}}^\prime(\beta)\neq 0$ for any $\beta\in[\alpha]$.
\smallskip
Given a triple $(\alpha,\beta;\gamma)$ with $\alpha$ algebraic and $\deg \mathcal X_{_{[\alpha]}}=\kappa$, the element
\begin{equation}
\Psi_{\alpha,\beta}(\gamma):=\left\{\begin{array}{cc}
-\big(L_\alpha \mathcal X_{_{[\alpha]}}\gamma\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\cdot \mathcal X_{_{[\alpha]}}(\beta)^{-1}, & \mbox{if} \; \beta\not\in[\alpha],\\
{\displaystyle\sum_{j=1}^{\kappa-1}\sum_{i=0}^{j-1}\frac{(-1)^{i+j}}{(j+1)!}\cbm{j-1 \\ i}
\alpha^i\gamma\mathcal X_{_{[\alpha]}}^{(j+1)}(\beta)\beta^{j-i-1}\cdot \mathcal X_{_{[\alpha]}}^\prime(\beta)^{-1}},& \mbox{if} \; \beta\in[\alpha],
\end{array}\right.
\label{combo}
\end{equation}
is well defined, due to \eqref{ma10} and since ${\mathcal Z}(\mathcal X_{_{[\alpha]}})=[\alpha]$ (and hence, $\mathcal X_{_{[\alpha]}}(\beta)\neq 0$ for $\beta\not\in[\alpha]$).
\begin{proposition}
Let $\alpha\in\mathbb F$ be algebraic and let $\mathcal X_{_{[\alpha]}}\in Z_{\mathbb F}[z]$ be its minimal polynomial.
\smallskip
\noindent
{\rm (1)} If $\beta\not\in[\alpha]$, then for any $\gamma\in\mathbb F$, the equation \eqref{4.25f} has a unique solution
$x\in\mathbb F$, given by the top formula in \eqref{combo}: $x=\Psi_{\alpha,\beta}(\gamma)$.
\smallskip
\noindent
{\rm (2)} If $\beta\in[\alpha]$, then the equation \eqref{4.25f} has a solution in $\mathbb F$ if and only if
\begin{equation}
\big(L_\alpha \mathcal X_{_{[\alpha]}}\gamma\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta)=0.
\label{4.25ha}
\end{equation}
In this case, all solutions $x\in\mathbb F$ to the equation \eqref{4.25f} are given by
\begin{equation}
x=\Psi_{\alpha,\beta}(\gamma)+\varphi,
\label{gsyl}
\end{equation}
where $\Psi_{\alpha,\beta}(\gamma)$ is defined by the bottom formula in \eqref{combo} and
$\varphi$ is any intertwiner of $\alpha$ and $\beta$ (i.e., $\alpha\varphi=\varphi\beta$).
\label{P:klj}
\end{proposition}
\begin{proof}[Proof of (1)] To verify that $x=\Psi_{\alpha,\beta}(\gamma)$ of the form \eqref{combo} solves the equation
\eqref{4.25f}, we use \eqref{4.4} (with $f=\mathcal X_{_{[\alpha]}}\gamma$)
and relations $\beta \mathcal X_{_{[\alpha]}}(\beta)=\mathcal X_{_{[\alpha]}}(\beta)\beta$ and $\mathcal X_{_{[\alpha]}}(\alpha)=0$:
\begin{align*}
(\alpha x-x\beta)\mathcal X_{_{[\alpha]}}(\beta)&=-
\alpha(L_\alpha \mathcal X_{_{[\alpha]}}\gamma)^{\boldsymbol{\mathfrak{e}_r}}(\beta)+(L_\alpha \mathcal X_{_{[\alpha]}}\gamma)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\beta\\
&=(\mathcal X_{_{[\alpha]}}\gamma)^{\boldsymbol{\mathfrak{e}_r}}(\beta)-(\mathcal X_{_{[\alpha]}}\gamma)^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)=\gamma\mathcal X_{_{[\alpha]}}(\beta),
\end{align*}
which implies \eqref{4.25f}, as $\mathcal X_{_{[\alpha]}}(\beta)\neq 0$.
For the uniqueness part, let $x$ be any solution to the equation \eqref{4.25f}, or equivalently,
to the equation
$\; x\boldsymbol{\rho}_\beta-\boldsymbol{\rho}_\alpha x=\gamma$. Multiplying both sides in the latter equality by $L_\alpha \mathcal X_{_{[\alpha]}}$
on the left we get, on account of \eqref{ma11},
$$
(L_\alpha \mathcal X_{_{[\alpha]}}) x\boldsymbol{\rho}_\beta-\mathcal X_{_{[\alpha]}} x=(L_\alpha\mathcal X_{_{[\alpha]}})\gamma,
$$
which being evaluated at $\beta$ on the right gives (since $\mathcal X_{_{[\alpha]}}\in Z_{\mathbb F}[z]$)
\begin{equation}
-x\mathcal X_{_{[\alpha]}}(\beta)=\big((L_\alpha \mathcal X_{_{[\alpha]}})\gamma\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta)
\label{klj1}
\end{equation}
which uniquely defines $x$ via the top formula in \eqref{combo}.
\end{proof}
\begin{proof}[Proof of (2)] By part (1), equality \eqref{klj1} holds
for any solution $x$ (if one exists) of the equation \eqref{4.25f}.
If $\beta\in[\alpha]$, then $\mathcal X_{_{[\alpha]}}(\beta)=0$ and \eqref{klj1} amounts to \eqref{4.25ha},
which completes the proof of the ``only if" part.
To prove the ``if" part, we start with the general formula
$$
(L_\alpha f)(z)=\sum_{j=0}^{\deg f-1}\frac{(-1)^{j}}{(j+1)!}\, (z-\alpha)^{j}f^{(j+1)}(z)
$$
relating the left backward shift $L_\alpha$ of a polynomial $f$ with its formal derivatives.
Applying the latter formula to $f=\mathcal X_{_{[\alpha]}} \gamma$ gives
$$
L_\alpha (\mathcal X_{_{[\alpha]}}\gamma)=\sum_{j=0}^{\kappa-1}\frac{(-1)^{j}}{(j+1)!}\, \boldsymbol{\rho}_\alpha^j\mathcal X_{_{[\alpha]}}^{(j+1)}\gamma,\quad\mbox{where}\quad \kappa=\deg\mathcal X_{_{[\alpha]}}.
$$
Assuming that \eqref{4.25ha} is in force, we evaluate both sides of the last equality at $\beta$ on the right and arrive at
\begin{align}
0&=\sum_{j=0}^{\kappa-1}\frac{(-1)^{j}}{(j+1)!}\big(\boldsymbol{\rho}_\alpha^j\mathcal X_{_{[\alpha]}}^{(j+1)}\gamma\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\notag\\
&=(\mathcal X_{_{[\alpha]}}^\prime\gamma)^{\boldsymbol{\mathfrak{e}_r}}(\beta)+
\sum_{j=1}^{\kappa-1}\frac{(-1)^{j}}{(j+1)!}\big(\boldsymbol{\rho}_\alpha \boldsymbol{\rho}_\alpha^{j-1}\mathcal X_{_{[\alpha]}}^{(j+1)}\gamma\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta).
\label{klj3}
\end{align}
Since $\mathcal X_{_{[\alpha]}}^{(j)}\in Z_{\mathbb F}[z]$ for all $j\ge 0$ and since
$$
(\boldsymbol{\rho}_\alpha \, f)^{\boldsymbol{\mathfrak{e}_r}}(\beta)=f^{\boldsymbol{\mathfrak{e}_r}}(\beta)\beta-\alpha f^{\boldsymbol{\mathfrak{e}_r}}(\beta)\quad\mbox{for all}\quad f\in\mathbb F[z],
$$
we can write \eqref{klj3} equivalently as
$$
\gamma\mathcal X_{_{[\alpha]}}^\prime(\beta)=\sum_{j=1}^{\kappa-1}\frac{(-1)^{j}}{(j+1)!}\left(\alpha\big(\boldsymbol{\rho}_\alpha^{j-1}\mathcal X_{_{[\alpha]}}^{(j+1)}\gamma\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta)
-\big(\boldsymbol{\rho}_\alpha^{j-1}\mathcal X_{_{[\alpha]}}^{(j+1)}\gamma\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\beta\right).
$$
Since $\mathcal X_{_{[\alpha]}}^\prime(\beta)\neq 0$ and $\mathcal X_{_{[\alpha]}}^\prime(\beta)\beta=\beta \mathcal X_{_{[\alpha]}}^\prime(\beta)$,
we can divide both sides of the last equality by $\mathcal X_{_{[\alpha]}}^\prime(\beta)$
on the right and write the resulting equality as $\gamma=\alpha x_0-x_0\beta$, where
\begin{equation}
x_0=\sum_{j=1}^{\kappa-1}\frac{(-1)^{j}}{(j+1)!}\big(\boldsymbol{\rho}_\alpha^{j-1}\mathcal X_{_{[\alpha]}}^{(j+1)}\gamma\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\cdot \mathcal X_{_{[\alpha]}}^\prime(\beta)^{-1}.
\label{klm}
\end{equation}
The latter means that $x_0$ is a solution to the equation \eqref{4.25f}. A more detailed formula for $x_0$ as in \eqref{combo}
follows upon plugging in the equalities
$$
\big(\boldsymbol{\rho}_\alpha^{j-1}\mathcal X_{_{[\alpha]}}^{(j+1)}\gamma\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta)=\sum_{i=0}^{j-1}(-1)^{i}\cbm{j-1 \\ i}
\alpha^i\gamma \mathcal X_{_{[\alpha]}}^{(j+1)}(\beta)\beta^{j-i-1}.
$$
into the right side of \eqref{klm}. Combining $x_0$ with
the general solution $\varphi$ of the homogeneous Sylvester equation $\alpha x-x\beta=0$ gives \eqref{gsyl}.
\end{proof}
\begin{remark}
{\rm If $\beta$ is algebraic and $\alpha\not\sim\beta$, one can multiply the identity
$\; x\boldsymbol{\rho}_\beta-\boldsymbol{\rho}_\alpha x=\gamma$ by $R_\beta \mathcal X_{_{[\beta]}}=L_\beta \mathcal X_{_{[\beta]}}$ on the right
and evaluate the resulting identity at $\alpha$ on the left to get an alternative formula for $\Psi_{\alpha,\beta}(\gamma)$ in the case
$\alpha\not\in[\beta]$.}
\label{R:alt}
\end{remark}
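For real quaternions, the top branch of \eqref{combo} becomes fully explicit: the minimal central polynomial of a nonreal $\alpha\in\mathbb H$ is $\mathcal X_{_{[\alpha]}}(z)=z^2-2({\rm Re}\,\alpha)z+|\alpha|^2$, so that $L_\alpha\mathcal X_{_{[\alpha]}}=z-\bar\alpha$ and $\Psi_{\alpha,\beta}(\gamma)=-(\gamma\beta-\bar\alpha\gamma)\,\mathcal X_{_{[\alpha]}}(\beta)^{-1}$. A minimal Python sketch checking this quaternionic specialization of part (1) of Proposition \ref{P:klj} on one arbitrary non-conjugate test triple (quaternions stored as tuples $(a,b,c,d)$):

```python
# Solve alpha*x - x*beta = gamma over the quaternions when beta is not
# conjugate to alpha. Quaternion a + b*i + c*j + d*k <-> tuple (a, b, c, d).
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qadd(p, q): return tuple(x + y for x, y in zip(p, q))
def qsub(p, q): return tuple(x - y for x, y in zip(p, q))
def qscale(s, p): return tuple(s * x for x in p)
def qconj(p): return (p[0], -p[1], -p[2], -p[3])

def qinv(p):
    n = sum(x * x for x in p)
    return tuple(x / n for x in qconj(p))

def sylvester_solve(alpha, beta, gamma):
    # x = -(gamma*beta - conj(alpha)*gamma) * X(beta)^{-1}, where
    # X(z) = z^2 - 2*Re(alpha)*z + |alpha|^2 is the minimal central
    # polynomial of the conjugacy class of alpha (X(beta) != 0 since
    # beta is not conjugate to alpha).
    Xbeta = qadd(qsub(qmul(beta, beta), qscale(2.0 * alpha[0], beta)),
                 (sum(x * x for x in alpha), 0.0, 0.0, 0.0))
    num = qscale(-1.0, qsub(qmul(gamma, beta), qmul(qconj(alpha), gamma)))
    return qmul(num, qinv(Xbeta))

alpha = (1.0, 2.0, 0.0, 0.0)  # Re(alpha) = 1
beta = (0.0, 0.0, 3.0, 0.0)   # Re(beta) = 0, so beta is not conjugate to alpha
gamma = (1.0, 1.0, 1.0, 1.0)  # arbitrary right-hand side
x = sylvester_solve(alpha, beta, gamma)
residual = qsub(qsub(qmul(alpha, x), qmul(x, beta)), gamma)
assert max(abs(t) for t in residual) < 1e-12
```

The verification amounts to the identity $\alpha x-x\beta=\gamma(\beta^2-2({\rm Re}\,\alpha)\beta+|\alpha|^2)\mathcal X_{_{[\alpha]}}(\beta)^{-1}=\gamma$, which uses only $\alpha+\bar\alpha=2\,{\rm Re}\,\alpha$, $\alpha\bar\alpha=|\alpha|^2$, and the fact that $\mathcal X_{_{[\alpha]}}(\beta)$ commutes with $\beta$.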
\begin{remark}
{\rm The case where $\alpha$ and $\beta$ are both transcendental is more subtle. An example in \cite{berg} shows that
even with $\alpha\not\sim\beta$, the equation \eqref{4.25f} may have no solutions. We are not aware
of explicit solvability or uniqueness criteria for the transcendental case. For this reason, our further results on
the two-sided problem \eqref{1.18}, \eqref{1.19} are established under the (certainly restrictive) assumption that either
all left or all right interpolation nodes are algebraic.}
\label{R:alta}
\end{remark}
\begin{theorem}
Let us assume that the set $\Lambda=\{\alpha_1,\ldots,\alpha_n\}$ is algebraic over $Z_{\mathbb F}$ and left $P$-independent, whereas the set $\Omega=\{\beta_1,\ldots,\beta_k\}$
is right $P$-independent. The two-sided Lagrange problem \eqref{1.18}, \eqref{1.19} has a solution if and only if
\begin{equation}
\big(L_{\alpha_i}\mathcal {X}_{[\alpha_i]}c_i\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=\big(L_{\alpha_i}\mathcal {X}_{[\alpha_i]}d_j\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j), \quad \mbox{whenever}\quad \alpha_i\sim\beta_j.
\label{4.25hab}
\end{equation}
In this case, all $f\in P_{n+k}(\mathbb F)$ satisfying conditions \eqref{1.18}, \eqref{1.19} are given by the formula
\begin{align}
f(z)=\sum_{i=1}^n p_i(z)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}c_i&+P_{\Lambda,\boldsymbol\ell}(z)\cdot\sum_{i=1}^n\sum_{j=1}^k
p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\Psi_{\alpha_i,\beta_j}(c_i-d_j)q_j^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1}q_j(z)\notag\\
&+P_{\Lambda,\boldsymbol\ell}(z)\cdot\sum_{\alpha_i\sim\beta_j}
p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\varphi_{ij}q_j^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1}q_j(z),
\label{brap1}
\end{align}
where $p_i=P_{\Lambda\backslash\{\alpha_i\},\boldsymbol\ell}$, $q_j=P_{\Omega\backslash\{\beta_j\},{\bf r}}$, where $\Psi_{\alpha_i,\beta_j}(c_i-d_j)$
are defined via formulas \eqref{combo}, and where
$\varphi_{ij}$ is any intertwiner of $\alpha_i$ and $\beta_j$ (i.e., $\alpha_i\varphi_{ij}=\varphi_{ij}\beta_j$).
\label{T:4.1}
\end{theorem}
\begin{proof}
By Proposition \ref{P:nhoma}, the problem \eqref{1.18}, \eqref{1.19} has a solution if and only if each Sylvester equation in \eqref{5.3b} is solvable.
This is the case for each non-conjugate pair $\alpha_i\not\sim\beta_j$, by part (1) in Proposition \ref{P:klj}. If $\alpha_i\sim\beta_j$, then the corresponding
Sylvester equation in \eqref{5.3b} has a solution if and only if \eqref{4.25ha} holds with $\alpha=\alpha_i$, $\beta=\beta_j$ and $\gamma=c_i-d_j$, that is,
$$
\big(L_{\alpha_i}\mathcal {X}_{[\alpha_i]}(c_i-d_j)\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=0.
$$
Since $L_{\alpha_i}$ and the right-evaluation map $f\mapsto f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)$ are additive on $\mathbb F[z]$, the latter equality is equivalent to \eqref{4.25hab}. Again, by Proposition \ref{P:nhoma},
all $f\in P_{n+k}(\mathbb F)$ satisfying conditions \eqref{1.18}, \eqref{1.19} are given by the formula \eqref{brap1} where $\psi_{ij}$ is any solution
to the respective Sylvester equation \eqref{5.3b}. By Proposition \ref{P:klj}, $\psi_{ij}=\Psi_{\alpha_i,\beta_j}(c_i-d_j)+\varphi_{ij}$ where
$\alpha_i\varphi_{ij}=\varphi_{ij} \beta_j$. Combining the latter representations with \eqref{bap1} gives \eqref{brap1}. The third sum on the right side
is taken over the conjugate pairs $\alpha_i\sim\beta_j$ only, since for non-conjugate pairs $\alpha_i\not\sim\beta_j$ we have $\varphi_{ij}=0$, by part (1) in Proposition \ref{P:klj}.
\end{proof}
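In the simplest case $n=k=1$ with $\alpha\not\sim\beta$, the formula \eqref{brap1} reduces to $f(z)=c+(z-\alpha)\psi$ with $\psi=\Psi_{\alpha,\beta}(c-d)$, and the two interpolation conditions can be checked directly. A minimal Python sketch over the quaternions (tuples $(a,b,c,d)$; the data $\alpha,\beta,c,d$ are arbitrary, and the evaluation conventions $f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)=\sum_k\alpha^k a_k$, $f^{\boldsymbol{\mathfrak{e}_r}}(\beta)=\sum_k a_k\beta^k$ are assumed):

```python
# One-node two-sided interpolation over the quaternions: f of degree 1 with
# left value c at alpha and right value d at beta, alpha not conjugate to
# beta. Quaternion a + b*i + c*j + d*k <-> tuple (a, b, c, d).
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qadd(p, q): return tuple(x + y for x, y in zip(p, q))
def qsub(p, q): return tuple(x - y for x, y in zip(p, q))
def qscale(s, p): return tuple(s * x for x in p)
def qconj(p): return (p[0], -p[1], -p[2], -p[3])

def qinv(p):
    n = sum(x * x for x in p)
    return tuple(x / n for x in qconj(p))

def psi_solve(alpha, beta, gamma):
    # psi solves alpha*psi - psi*beta = gamma (explicit quaternionic form
    # of the top branch of the Psi formula, valid for alpha not ~ beta).
    Xbeta = qadd(qsub(qmul(beta, beta), qscale(2.0 * alpha[0], beta)),
                 (sum(x * x for x in alpha), 0.0, 0.0, 0.0))
    num = qscale(-1.0, qsub(qmul(gamma, beta), qmul(qconj(alpha), gamma)))
    return qmul(num, qinv(Xbeta))

alpha, beta = (1.0, 2.0, 0.0, 0.0), (0.0, 0.0, 3.0, 0.0)  # not conjugate
c, d = (1.0, 0.0, 1.0, 0.0), (0.0, 1.0, 0.0, 2.0)         # target values
psi = psi_solve(alpha, beta, qsub(c, d))

# f(z) = c + (z - alpha)*psi = b0 + b1*z with b0 = c - alpha*psi, b1 = psi.
b0, b1 = qsub(c, qmul(alpha, psi)), psi
left_at_alpha = qadd(b0, qmul(alpha, b1))   # f^{e_l}(alpha), should equal c
right_at_beta = qadd(b0, qmul(b1, beta))    # f^{e_r}(beta), should equal d
assert max(abs(u - v) for u, v in zip(left_at_alpha, c)) < 1e-12
assert max(abs(u - v) for u, v in zip(right_at_beta, d)) < 1e-12
```

The left condition holds identically ($b_0+\alpha b_1=c-\alpha\psi+\alpha\psi=c$), while the right condition is exactly the Sylvester relation $\alpha\psi-\psi\beta=c-d$.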
\begin{corollary}
Under the assumptions of Theorem \ref{T:4.1}, a polynomial $g\in P_{n+k}(\mathbb F)$
satisfies conditions $g^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{\Lambda}=0$ and $g^{\boldsymbol{\mathfrak{e}_r}}\vert_\Omega=0$ if and only if it is of the form
\begin{equation}
g(z)=P_{\Lambda,\boldsymbol\ell}(z)\cdot\sum_{\alpha_i\sim\beta_j}p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\varphi_{ij}q_j^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1}q_j(z)
\label{ap22}
\end{equation}
where $p_i=P_{\Lambda\backslash\{\alpha_i\},\boldsymbol\ell}$, $q_j=P_{\Omega\backslash\{\beta_j\},{\bf r}}$ and where $\varphi_{ij}$ is any
intertwiner of $\alpha_i$ and $\beta_j$.
\label{C:4.2}
\end{corollary}
The formula \eqref{ap22} follows upon letting $c_i=d_j=0$ for all $i,j$ in \eqref{brap1}. We next observe from the division algorithm that
adding the term $P_{\Lambda,\boldsymbol\ell}hP_{\Omega,{\bf r}}$ on the right side of \eqref{ap22}
and letting $h$ run through $\mathbb F[z]$ leads to a parametrization of the set of all polynomials $f\in\mathbb F[z]$ solving the homogeneous problem
\eqref{1.18}, \eqref{1.19}, that is, of the intersection
$\langle P_{\Lambda,{\boldsymbol\ell}}\rangle_{\bf r}\cap\langle P_{\Omega,{\bf r}}\rangle_{\boldsymbol\ell}$
of the two (left and right) ideals (a {\em quasi-ideal} of $\mathbb F[z]$, in the terminology of \cite{stein2}).
\subsection{Two-sided $P$-independence} The property of an algebraic set $\Delta\subset \mathbb F$ to be left (right) $P$-independent
can be characterized as follows (see the proof of Theorem \ref{T:1.1}): {\em there is no nonzero polynomial
$f\in P_{_{|\Delta|}}(\mathbb F)$ such that
$f^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{\Delta}=0$ (respectively, $f^{\boldsymbol{\mathfrak{e}_r}}\vert_{\Delta}=0$)}. Combining these characterizations, we arrive at the following definition.
\begin{definition}
The pair $(\Delta_{\boldsymbol\ell},\Delta_{\bf r})$ consisting of two algebraic sets $\Delta_{\boldsymbol\ell}$ and
$\Delta_{\bf r}$
is $P$-independent if there is no nonzero $g\in P_{_{|\Delta_{\boldsymbol\ell}|+|\Delta_{\bf r}|}}(\mathbb F)$
such that
\begin{equation}
g^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{\Delta_{\boldsymbol\ell}}=0\quad\mbox{and}\quad g^{\boldsymbol{\mathfrak{e}_r}}\vert_{\Delta_{\bf r}}=0.
\label{opana}
\end{equation}
\noindent
{\rm Since the minimal polynomials $P_{\Delta_{\boldsymbol\ell},\boldsymbol\ell}$ and $P_{\Delta_{\bf r},{\bf r}}$ of the (algebraic) sets $\Delta_{\boldsymbol\ell}$,
$\Delta_{\bf r}$ satisfy the inequalities $\deg P_{\Delta_{\boldsymbol\ell},\boldsymbol\ell}\le |\Delta_{\boldsymbol\ell}|$,
$\deg P_{\Delta_{\bf r},{\bf r}}\le |\Delta_{\bf r}|$, whereas their product
$g=P_{\Delta_{\boldsymbol\ell},\boldsymbol\ell}\cdot P_{\Delta_{\bf r}, {\bf r}}$ satisfies conditions \eqref{opana}, we conclude
(by Definition \ref{D:1.1}) that if the pair $(\Delta_{\boldsymbol\ell},\Delta_{\bf r})$ is $P$-independent (and hence,
$\deg(P_{\Delta_{\boldsymbol\ell},\boldsymbol\ell}\cdot P_{\Delta_{\bf r},{\bf r}})\ge
|\Delta_{\boldsymbol\ell}|+|\Delta_{\bf r}|$), then $\Delta_{\boldsymbol\ell}$ and $\Delta_{\bf r}$ are, respectively,
left and right $P$-independent. In the case where at least one of them is algebraic over $Z_{\mathbb F}$, we can say more.
Given a set $\Delta\subset\mathbb F$ we will denote by $\left[\Delta\right]:=
\bigcup_{\alpha\in\Delta}[\alpha]$ the minimal superset of $\Delta$ closed under conjugation.}
\label{D:1}
\end{definition}
\begin{proposition}
Let us assume that $\Delta_{\boldsymbol\ell}$ is algebraic over $Z_{\mathbb F}$. Then the pair $(\Delta_{\boldsymbol\ell},\Delta_{\bf r})$
is $P$-independent if and only if $\Delta_{\boldsymbol\ell}$ is left $P$-independent, $\Delta_{\bf r}$ is
right $P$-independent, and $\left[\Delta_{\boldsymbol\ell}\right]\cap \left[\Delta_{\bf r}\right]=\varnothing$.
\label{P:ap12}
\end{proposition}
\begin{proof}
As we have already observed, if $(\Delta_{\boldsymbol\ell},\Delta_{\bf r})$ is $P$-independent, then
$\Delta_{\boldsymbol\ell}$ and $\Delta_{\bf r}$ are left and right $P$-independent and therefore,
contain finitely many elements. Hence we may let $\Delta_{\boldsymbol\ell}=\Lambda$ and $\Delta_{\bf r}=\Omega$ as in \eqref{1.7}.
It remains to show that under the assumptions of Theorem \ref{T:4.1}, the pair $(\Lambda,\Omega)$ is $P$-independent if and only if
$\left[\Lambda\right]\cap \left[\Omega\right]=\varnothing$. The latter follows by Corollary \ref{C:4.2}. Indeed, the formula
\eqref{ap22} produces all polynomials $g\in P_{n+k}(\mathbb F)$ satisfying conditions \eqref{opana}. By Definition \ref{D:1},
the pair $(\Lambda,\Omega)$ is $P$-independent if and only if any $g$ of the form \eqref{ap22} is the zero polynomial, which means that
the only $x\in\mathbb F$ subject to $\alpha_i x=x\beta_j$ is $x=0$, i.e., that
$\alpha_i\not\sim\beta_j$ for all $\alpha_i\in\Lambda$ and $\beta_j\in\Omega$.
\end{proof}
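For a concrete (purely illustrative) check of the criterion at the end of this proof, take $\mathbb F$ to be the real quaternions, where $\alpha\sim\beta$ exactly when the two elements share real part and norm. The sketch below models quaternions as $2\times2$ complex matrices and tests whether the Sylvester-type map $x\mapsto \alpha x-x\beta$ has a nonzero kernel; the helper names \texttt{quat}, \texttt{coords} and \texttt{sylvester\_matrix} are ad hoc and not part of the paper's formalism.

```python
import numpy as np

def quat(a, b, c, d):
    # model the quaternion a + b*i + c*j + d*k as a 2x2 complex matrix
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

def coords(M):
    # recover the real coordinates (a, b, c, d) from the matrix model
    return np.array([M[0, 0].real, M[0, 0].imag, M[0, 1].real, M[0, 1].imag])

BASIS = [quat(1, 0, 0, 0), quat(0, 1, 0, 0), quat(0, 0, 1, 0), quat(0, 0, 0, 1)]

def sylvester_matrix(alpha, beta):
    # 4x4 real matrix of the linear map x -> alpha*x - x*beta on quaternions
    return np.column_stack([coords(alpha @ e - e @ beta) for e in BASIS])

alpha = quat(1, 2, 0, 0)                    # 1 + 2i
q = quat(0.5, -1, 2, 3)
beta_conj = np.linalg.inv(q) @ alpha @ q    # conjugate to alpha: q^{-1} alpha q
beta_far = quat(1, 3, 0, 0)                 # same real part, different norm

# x = q is a nonzero solution of alpha*x = x*beta_conj
assert np.allclose(alpha @ q, q @ beta_conj)

rank_conj = np.linalg.matrix_rank(sylvester_matrix(alpha, beta_conj))
rank_far = np.linalg.matrix_rank(sylvester_matrix(alpha, beta_far))
# conjugate nodes: 2-dimensional solution space; non-conjugate nodes: only x = 0
assert rank_conj == 2 and rank_far == 4
```

The rank drop in the conjugate case reflects the centralizer of a non-real quaternion being the two-dimensional real subalgebra spanned by $1$ and $\alpha$.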
The next statement can be considered as a two-sided analog of Theorem \ref{T:1.1}.
\begin{theorem}
Given two sets $\Lambda$ and $\Omega$ as in \eqref{1.7}, let us assume that $\Lambda$ is algebraic over $Z_{\mathbb F}$. Then
the problem \eqref{1.18}, \eqref{1.19} has a solution in $P_{n+k}(\mathbb F)$ for any $c_i,d_j\in \mathbb F$ if and only if the pair
$(\Lambda,\Omega)$ is $P$-independent.
In this case, a unique $f\in P_{n+k}(\mathbb F)$ subject to conditions \eqref{1.18}, \eqref{1.19} is given by
\begin{equation}
f(z)=\sum_{i=1}^n P_{{\Lambda}\backslash\{\alpha_i\},{\boldsymbol\ell}}(z)\cdot \rho_i\cdot
P_{\Omega,{\bf r}}(z)+\sum_{j=1}^k P_{\Lambda,{\boldsymbol\ell}}(z)\cdot \gamma_j\cdot P_{\Omega\backslash\{\beta_j\},{\bf r}}(z)
\label{ma1}
\end{equation}
where the elements $\rho_i,\gamma_j\in\mathbb F$ are defined by
\begin{align}
\rho_i&=-\sum_{j=1}^k P_{{\Lambda}\backslash\{\alpha_i\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}
\Psi_{\alpha_i,\beta_j}(c_i)
P_{\Omega\backslash\{\beta_j\},{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1},\label{ma2}\\
\gamma_j&=\sum_{i=1}^n P_{{\Lambda}\backslash\{\alpha_i\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}
\Psi_{\alpha_i,\beta_j}(d_j)
P_{\Omega\backslash\{\beta_j\},{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1},
\label{ma3}
\end{align}
whereas $\Psi_{\alpha_i,\beta_j}(c_i)$ and $\Psi_{\alpha_i,\beta_j}(d_j)$ are defined via the top formula in \eqref{combo}.
\label{T:ap2}
\end{theorem}
\begin{proof}
If the problem \eqref{1.18}, \eqref{1.19} has a solution for any choice of left and right target values, then $\Lambda$ is left $P$-independent
and $\Omega$ is right $P$-independent (by Theorem \ref{T:1.1}). To complete the proof of the ``only if'' statement,
it remains (due to Proposition \ref{P:ap12}) to show that $\left[\Lambda\right]\cap \left[\Omega\right]=\varnothing$.
To argue via contradiction, let us assume that $\alpha_i\sim\beta_j$ (for some $i,j$). Then, by condition \eqref{4.25hab} in Theorem \ref{T:4.1}, we have
$$
\big(L_{\alpha_i}\mathcal {X}_{[\alpha_i]}c\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=\big(L_{\alpha_i}\mathcal {X}_{[\alpha_i]}d\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\quad\mbox{for any}\quad c,d\in\mathbb F.
$$
Letting $c=0$, we then conclude, by formula \eqref{2.7}, that
$$
0=\big(L_{\alpha_i}\mathcal {X}_{[\alpha_i]}d\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=\big(L_{\alpha_i}\mathcal {X}_{[\alpha_i]}\big)^{\boldsymbol{\mathfrak{e}_r}}(d^{-1}\beta_j d)\cdot d\quad\mbox{for any}\quad d\in\mathbb F.
$$
The latter means that the polynomial $L_{\alpha_i}\mathcal {X}_{[\alpha_i]}$ takes zero right value at any element in the conjugacy class $[\alpha_i]$ and hence,
belongs to the ideal $\langle \mathcal {X}_{[\alpha_i]}\rangle$, which is impossible, as $\deg L_{\alpha_i}\mathcal {X}_{[\alpha_i]}<\deg \mathcal {X}_{[\alpha_i]}$ and
$L_{\alpha_i}\mathcal {X}_{[\alpha_i]}\not\equiv 0$. This completes the proof of the ``only if'' part. The converse implication follows from Theorems
\ref{T:1.1} and \ref{T:4.1}.
\smallskip
A unique low-degree solution to the problem \eqref{1.18}, \eqref{1.19} is given by the formula \eqref{brap1}, which, as we will now show,
can be written in the form \eqref{ma1}. To this end, we first observe that for each $j\in\{1,\ldots,k\}$,
Theorem \ref{T:4.1} applies to the interpolation problem
\begin{equation}
f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=d_j,\quad f^{\boldsymbol{\mathfrak{e}_r}}\vert_{\Omega\backslash\{\beta_j\}}=0,\quad f^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{\Lambda}=0.
\label{ma4}
\end{equation}
With the target values as above (that is, with $c_i=d_t=0$ for all $i$ and $t\neq j$), the formula \eqref{combo} gives
$\Psi_{\alpha_i,\beta_t}(c_i-d_t)=0$ for all $t\neq j$. Hence, the formula \eqref{brap1} takes the form
\begin{equation}
f_{{\bf r},j}(z)=P_{\Lambda,{\boldsymbol\ell}}(z)\cdot \gamma_j\cdot P_{\Omega\backslash\{\beta_j\},{\bf r}}(z),
\label{5.15}
\end{equation}
with $\gamma_j$ defined as in \eqref{ma3}. By Theorem \ref{T:4.1}, $f_{{\bf r},j}$ is a unique polynomial in $P_{n+k}(\mathbb F)$
satisfying conditions \eqref{ma4}. Similarly, by applying Theorem \ref{T:4.1} to the interpolation problem
\begin{equation}
f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=c_i,\quad f^{\boldsymbol{\mathfrak{e}_\ell}}\vert_{\Lambda\backslash\{\alpha_i\}}=0,\quad f^{\boldsymbol{\mathfrak{e}_r}}\vert_{\Omega}=0,
\label{ma5}
\end{equation}
and adapting the formula \eqref{ap1a} to the present case, we conclude that a unique polynomial in $P_{n+k}(\mathbb F)$
subject to conditions \eqref{ma5} is given by the formula
\begin{equation}
f_{{\boldsymbol \ell},i}(z)=P_{{\Lambda}\backslash\{\alpha_i\},{\boldsymbol\ell}}(z)\cdot \rho_i\cdot P_{\Omega,{\bf r}}(z),
\label{5.19}
\end{equation}
with $\rho_i$ defined as in \eqref{ma2}. Summing the polynomials \eqref{5.15} and \eqref{5.19} over all $j$ and $i$, we see that the formula
\eqref{ma1} defines a polynomial $f\in P_{n+k}(\mathbb F)$ satisfying conditions \eqref{1.18}, \eqref{1.19}.
By the uniqueness part in Theorem \ref{T:4.1}, this is the same polynomial as in \eqref{brap1}.
\end{proof}
\begin{remark}
{\rm A decomposition of a low-degree solution to the Lagrange problem into the sum of ``elementary''
polynomials, each of which satisfies one requisite interpolation condition and equals zero at all other interpolation nodes,
is commonly termed the Lagrange interpolation formula. For this reason, the formula \eqref{ma1} (rather than \eqref{brap1} or \eqref{ap1a}) can be referred
to as the {\em two-sided Lagrange interpolation formula}. Other examples (commutative, left, right) are provided by the respective formulas
\eqref{1.2}, \eqref{1.16}, \eqref{1.16r}.}
\label{R:bal}
\end{remark}
\subsection{Interpolation within an algebraic conjugacy class} Let us assume that the sets $\Lambda$ and
$\Omega$ in \eqref{1.7} are respectively, left and right $P$-independent, and moreover, that they are contained in the same algebraic conjugacy class $V$.
By Theorem \ref{T:4.1}, the problem \eqref{1.18}, \eqref{1.19} has a solution if and only if equalities \eqref{4.25hab} hold for all $i\in\{1,\ldots,n\}$
and $j\in\{1,\ldots,k\}$, in which case all $f\in P_{n+k}(\mathbb F)$ solving the problem are given by the formula \eqref{brap1}, where
$\Psi_{\alpha_i,\beta_j}(c_i-d_j)$ is defined by the bottom formula in \eqref{combo} for all $i,j$.
\begin{remark}
{\rm In the present case, the parametrization formula \eqref{brap1} cannot be written in the form of the Lagrange interpolation formula \eqref{ma1}
since the polynomials $f_{{\bf r},j}$ and $f_{{\boldsymbol\ell},i}$ solving the ``elementary'' interpolation problems \eqref{ma4} and \eqref{ma5} may not exist.
By the general criterion \eqref{4.25ha}, these polynomials do exist if and only if
\begin{equation}
\big(L_{\alpha_i}\mathcal {X}_{_{V}} c_i\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=\big(L_{\alpha_i}\mathcal {X}_{_{V}}d_j\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=0\quad
\mbox{for all} \quad \alpha_i\in\Lambda, \; \beta_j\in\Omega,
\label{jun4}
\end{equation}
that is, if and only if the elements $c_i\beta_j c_i^{-1}$ and $d_j\beta_j d_j^{-1}$ are
right zeros of the polynomial $L_{\alpha_i}\mathcal {X}_{_{V}}$ for all $i,j$ such that $c_i\neq 0$ and $d_j\neq 0$
(note that if one of the conditions \eqref{jun4} holds, then all the other conditions hold as well).
In this case, a particular solution $f\in P_{n+k}(\mathbb F)$ to the problem \eqref{1.18}, \eqref{1.19}
is given by the formulas \eqref{ma1}-\eqref{ma3}, where $\Psi_{\alpha_i,\beta_j}(c_i)$ and $\Psi_{\alpha_i,\beta_j}(d_j)$ are defined via the bottom
formula in \eqref{combo} rather than the top one.}
\label{R:jun2}
\end{remark}
Equalities \eqref{4.25hab} guarantee the consistency of interpolation conditions \eqref{1.18} and \eqref{1.19}. Via these equalities,
the left target values impose certain restrictions on the right ones (and vice versa), and in general, none of them can be eliminated as redundant.
The case where $\Lambda$ (or $\Omega$) is a $P$-basis for $V$ is more rigid.
\begin{proposition}
If $\Lambda$ is a left $P$-basis for $V$, then {\rm (1)} the $d_j$'s are uniquely determined from \eqref{4.25hab}, and {\rm (2)} any polynomial $f\in\mathbb F[z]$ satisfying
the left conditions \eqref{1.18} automatically satisfies the right conditions \eqref{1.19}. Similar statements hold if $\Omega$ is a right $P$-basis for $V$.
\label{P:aut}
\end{proposition}
\begin{proof} By part (2) in Proposition \ref{P:klj}, relations
\eqref{4.25hab} guarantee the existence of elements $\psi_{ij}\in\mathbb F$ subject to equations \eqref{5.3b}. Multiplying both sides of \eqref{5.3c}
by $p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)$ on the right and taking into account the definition
of $\widetilde{\alpha}_i$ in \eqref{5.3c}, we get
$$
p_i\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\boldsymbol{\rho}_{\alpha_i}=P_{\Lambda,{\boldsymbol\ell}}\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\quad\mbox{for}\quad i=1,\ldots, n.
$$
Since in the present case, $P_{\Lambda,{\boldsymbol\ell}}=\mathcal {X}_{_{V}}\in Z_{\mathbb F}[z]$ and $\beta_j\in V$, we have
$$
\big(p_i\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\psi_{ij}\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\psi_{ij}\cdot\mathcal {X}_{_{V}}(\beta_j)=0.
$$
Making use of the latter equalities along with \eqref{5.3d} and \eqref{4.25hab}, we get
\begin{align}
\sum_{i=1}^n \big( p_i\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}
c_i\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)
&=\sum_{i=1}^n \big(p_i \cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}
(d_j+\psi_{ij}\cdot\boldsymbol{\rho}_{\beta_j}-\boldsymbol{\rho}_{\alpha_i}\cdot\psi_{ij})\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\notag\\
&=d_j-\sum_{i=1}^n \big( p_i\cdot p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\psi_{ij}\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=d_j.\label{aut1}
\end{align}
By the formula \eqref{6.3r} in Lemma \ref{L:ext} (with $m=n$, $\gamma_i=\alpha_i$ and
$f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=c_i$ for $i=1,\ldots,n$), if $f$ satisfies conditions \eqref{1.18}, then $f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)$ is defined by the expression
on the left side of \eqref{aut1}, i.e., conditions $f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=d_j$ are satisfied automatically.
\end{proof}
Thus, if $\Lambda$ is a left $P$-basis for $V$, conditions \eqref{1.19} can be dismissed leaving us with a left-sided problem \eqref{1.18}.
More generally, if $\Lambda$ {\em contains} a left $P$-basis for a conjugacy class $V$, then all right sided conditions at $\beta_j\in V$ can be
dismissed as redundant. Similar observations apply to the case where $\Omega$ contains a right $P$-basis for some conjugacy class.
Note that without the above dismissal, one can still use the parametrization formula \eqref{brap1}, which now takes the form
$$
f(z)=\sum_{i=1}^n p_i(z)p_i^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}c_i+\mathcal {X}_{_{V}}(z)\cdot h(z),\qquad h\in P_{k}(\mathbb F),
$$
and produces all polynomials $f\in P_{n+k}(\mathbb F)$ subject to the left conditions \eqref{1.18}.
\subsection{Generalized Lagrange interpolation formula}
As we observed in Remark \ref{R:jun2}, a Lagrange interpolation formula of the form \eqref{ma1} may not exist if $[\Lambda]\cap[\Omega]\neq\varnothing$.
However, it is possible to decompose a low-degree solution to the problem into the sum of ``elementary" polynomials each one of which satisfies
the required interpolation conditions within one conjugacy class and vanishes at all interpolation nodes outside this class.
To be more precise, let $V_1,\ldots,V_m$ be all conjugacy classes in $\mathbb F$
having non-empty intersection with both $\Lambda$ and $\Omega$. Letting
$$
\Lambda_0=\Lambda\backslash \left[\Omega\right],\quad \Omega_0=\Omega\backslash \left[\Lambda\right],\quad
\Lambda_s:=V_s\cap \Lambda,\quad \Omega_s:=V_s\cap \Omega\quad (s=1,\ldots,m)
$$
we arrive at the partitions $\Lambda=\bigcup_{s=0}^m\Lambda_s\;$ and $\; \Omega=\bigcup_{s=0}^m\Omega_s$ of the sets $\Lambda$ and $\Omega$.
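These partitions are easy to compute in a concrete model. As an illustrative aside (assuming $\mathbb F$ is the quaternions, where a conjugacy class is determined by the pair $(\mathrm{Re}\,q,\,|q|^2)$), the sketch below keys each node on that invariant pair; the helpers \texttt{quat} and \texttt{class\_key} are ad hoc names.

```python
import numpy as np

def quat(a, b, c, d):
    # model the quaternion a + b*i + c*j + d*k as a 2x2 complex matrix
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

def class_key(M, ndigits=9):
    # conjugacy class invariant of a quaternion: (Re q, |q|^2) = (trace/2, det)
    return (round(M.trace().real / 2, ndigits),
            round(np.linalg.det(M).real, ndigits))

Lambda_nodes = [quat(1, 2, 0, 0), quat(0, 0, 1, 0), quat(2, 0, 0, 0)]
Omega_nodes  = [quat(1, 0, 2, 0), quat(0, 1, 0, 0), quat(5, 1, 1, 1)]

keys_L = {class_key(a) for a in Lambda_nodes}
keys_O = {class_key(b) for b in Omega_nodes}
shared = keys_L & keys_O                      # classes V_1, ..., V_m meeting both sets

Lambda0 = [a for a in Lambda_nodes if class_key(a) not in keys_O]   # Lambda \ [Omega]
Omega0  = [b for b in Omega_nodes if class_key(b) not in keys_L]    # Omega \ [Lambda]
parts = {k: ([a for a in Lambda_nodes if class_key(a) == k],
             [b for b in Omega_nodes if class_key(b) == k]) for k in shared}

# the pieces do partition both node sets
assert len(Lambda0) + sum(len(p[0]) for p in parts.values()) == len(Lambda_nodes)
assert len(Omega0) + sum(len(p[1]) for p in parts.values()) == len(Omega_nodes)
```

With these sample nodes, $1+2i$ and $1+2j$ fall into one shared class, $k$ and $i$ into another, while $2$ and $5+i+j+k$ land in $\Lambda_0$ and $\Omega_0$ respectively.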
By the {\em generalized Lagrange formula}, we mean
a representation of a low-degree solution $f\in P_{n+k}(\mathbb F)$ to the problem \eqref{1.18}, \eqref{1.19} in the form
\begin{equation}
f=\sum_{\alpha_i\in\Lambda_0} P_{{\Lambda}\backslash\{\alpha_i\},{\boldsymbol\ell}}\cdot \rho_i\cdot
P_{\Omega,{\bf r}}+\sum_{\beta_j\in\Omega_0} P_{\Lambda,{\boldsymbol\ell}}\cdot \gamma_j\cdot P_{\Omega\backslash\{\beta_j\},{\bf r}}
+\sum_{s=1}^m P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}\cdot g_s \cdot P_{\Omega\backslash \Omega_s,{\bf r}}
\label{ma54}
\end{equation}
for some $\rho_i,\gamma_j\in\mathbb F$ and $g_s\in P_{|\Lambda_s|+|\Omega_s|}(\mathbb F)$. Note that in the case $[\Lambda]\cap[\Omega]=\varnothing$, the
formula \eqref{ma54} amounts to \eqref{ma1}. The polynomials
\begin{equation}
f_{{\boldsymbol \ell},i}=P_{{\Lambda}\backslash\{\alpha_i\},{\boldsymbol\ell}}\cdot \rho_i\cdot P_{\Omega,{\bf r}}, \; \;
f_{{\bf r},j}=P_{\Lambda,{\boldsymbol\ell}}\cdot \gamma_j\cdot P_{\Omega\backslash\{\beta_j\},{\bf r}}, \; \;
f_{_{V_s}}=P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}\cdot g_s \cdot P_{\Omega\backslash \Omega_s,{\bf r}}
\label{ma56}
\end{equation}
on the right side of \eqref{ma54} clearly satisfy the following homogeneous conditions
\begin{align}
&f_{{\boldsymbol\ell},i}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)=0, \quad f_{{\boldsymbol\ell},i}^{\boldsymbol{\mathfrak{e}_r}}(\beta)=0\quad
\mbox{for all}\; \; \alpha\in\Lambda\backslash\{\alpha_i\}, \; \beta\in\Omega,\label{5.20u}\\
&f_{{\bf r},j}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)=0, \quad f_{{\bf r},j}^{\boldsymbol{\mathfrak{e}_r}}(\beta)=0\quad
\mbox{for all}\; \; \alpha\in\Lambda, \; \beta\in\Omega\backslash\{\beta_j\},\label{5.20v}\\
&f_{_{V_s}}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)=0, \quad f_{_{V_s}}^{\boldsymbol{\mathfrak{e}_r}}(\beta)=0\quad
\mbox{for all}\; \; \alpha\in\Lambda\backslash\Lambda_s, \; \beta\in\Omega\backslash\Omega_s.\notag
\end{align}
Therefore, for $f$ of the form \eqref{ma54}, we have
$$
f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=f_{{\boldsymbol\ell},i}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i) \; \; \mbox{for} \; \alpha_i\in\Lambda_0, \quad
f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=f_{{\bf r},j}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\; \; \mbox{for} \; \beta_j\in\Omega_0,\quad\mbox{and}
$$
$$
f^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=f_{_{V_s}}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i),\quad f^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=f_{_{V_s}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\quad\mbox{for}\quad \alpha_i,\beta_j\in V_s; \; \; s=1,\ldots,m.
$$
To make sure that $f$ of the form \eqref{ma54} satisfies interpolation conditions \eqref{1.18}, \eqref{1.19}, it remains to appropriately
specify the elements $\rho_i$, $\gamma_j$ and the polynomials $g_s$ in \eqref{ma54}. The elements $\rho_i$, $\gamma_j$ are defined uniquely
by formulas \eqref{5.18} and \eqref{5.14} below (which are alternative to formulas \eqref{ma2} and \eqref{ma3}), whereas $g_s$ is any polynomial
in $P_{|\Lambda_s|+|\Omega_s|}(\mathbb F)$ solving a two-sided problem \eqref{5.20y} below (at modified interpolation nodes \eqref{5.9u} in $V_s$ and
modified target values \eqref{5.22}, \eqref{5.21}). Here we will use an approach based on left and right $\lambda$-transforms introduced
and studied in \cite{lamler13, lamler2}. We assume that all interpolation nodes are algebraic and lay out some extra notation.
\smallskip
For a polynomial $g\in\mathbb F[z]$, we denote by $\mathfrak D_g\in Z_{\mathbb F}[z]$
the greatest central divisor of $g$, i.e., the generator of the smallest two-sided ideal containing
$\langle g\rangle_{\boldsymbol\ell}$ or $\langle g\rangle_{\bf r}$. We will denote by $\mathfrak Q_g$ a unique polynomial
such that
$$
g=\mathfrak D_g \mathfrak Q_g=\mathfrak Q_g\mathfrak D_g.
$$
A polynomial $g$ is called {\em bounded}
if there exists a central multiple of $g$, in which case we will denote by $\mathfrak M_g\in Z_{\mathbb F}[z]$ the {\em least
central multiple} of $g$ (the generator of the largest two-sided ideal contained in $\langle g\rangle_{\boldsymbol\ell}$
or in $\langle g\rangle_{\bf r}$). We will denote by $g^\diamondsuit$ a unique polynomial
such that
$$\mathfrak M_g=gg^\diamondsuit=g^\diamondsuit g.$$
From these definitions, it is readily seen that
\begin{equation}
g^\diamondsuit =(\mathfrak Q_g)^\diamondsuit,\quad (g^{\diamondsuit})^{\diamondsuit}=\mathfrak Q_g,\quad
\mathfrak D_g \mathfrak M_{g^\diamondsuit}=\mathfrak D_g \mathfrak Q_g(\mathfrak Q_g)^\diamondsuit=\mathfrak M_{g}.
\label{4.38}
\end{equation}
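To make these objects concrete (an illustrative aside over the quaternions, with ad hoc helper names): for $g(z)=z-\alpha$ with $\alpha$ non-real, the least central multiple is the characteristic polynomial $\mathfrak M_g(z)=z^2-2\,\mathrm{Re}(\alpha)\,z+|\alpha|^2$, and $g^\diamondsuit(z)=z-\bar\alpha$, since $gg^\diamondsuit=g^\diamondsuit g=\mathfrak M_g$. A quick numerical check:

```python
import numpy as np

def quat(a, b, c, d):
    # model the quaternion a + b*i + c*j + d*k as a 2x2 complex matrix
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

alpha = quat(1, 2, -1, 3)
t = alpha.trace().real               # 2*Re(alpha)
n = np.linalg.det(alpha).real        # |alpha|^2
alpha_bar = t * np.eye(2) - alpha    # quaternion conjugate of alpha

# coefficients of (z - alpha)(z - alpha_bar) with z central:
#   z^2 - (alpha + alpha_bar) z + alpha*alpha_bar
lin = -(alpha + alpha_bar)
const = alpha @ alpha_bar

# both lower coefficients are real scalars, so the product is a central polynomial
assert np.allclose(lin, -t * np.eye(2))
assert np.allclose(const, n * np.eye(2))
# the two factors commute: g*g_diamond = g_diamond*g = M_g
assert np.allclose(alpha @ alpha_bar, alpha_bar @ alpha)
```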
Following \cite{lamler2}, we associate with a given polynomial $h\in\mathbb F[z]$ and an element $\beta\in\mathbb F$ two self-maps of $\mathbb F\backslash\{0\}$
(left and right $\lambda_{h,\beta}$-transforms)
$$
\delta\mapsto (\delta h)^{\boldsymbol{\mathfrak{e}_\ell}}(\beta)=\delta\cdot h^{\boldsymbol{\mathfrak{e}_\ell}}(\delta^{-1}\beta\delta)\quad\mbox{and}\quad
\delta\mapsto (h\delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)=h^{\boldsymbol{\mathfrak{e}_r}}(\delta\beta\delta^{-1})\cdot\delta.
$$
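These transform formulas can be sanity-checked numerically. The illustrative sketch below works over the quaternions (modeled as $2\times2$ complex matrices) and uses ad hoc helpers \texttt{eval\_left}, \texttt{eval\_right} for the left and right evaluations $\sum_k\beta^k h_k$ and $\sum_k h_k\beta^k$ of $h(z)=\sum_k h_kz^k$:

```python
import numpy as np

def quat(a, b, c, d):
    # model the quaternion a + b*i + c*j + d*k as a 2x2 complex matrix
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

def eval_right(coeffs, x):
    # right evaluation of sum_k coeffs[k] z^k at x: sum_k coeffs[k] x^k
    val, p = np.zeros((2, 2), complex), np.eye(2)
    for c in coeffs:
        val, p = val + c @ p, p @ x
    return val

def eval_left(coeffs, x):
    # left evaluation: sum_k x^k coeffs[k]
    val, p = np.zeros((2, 2), complex), np.eye(2)
    for c in coeffs:
        val, p = val + p @ c, p @ x
    return val

h = [quat(1, 1, 0, 0), quat(0, 2, 1, 0), quat(3, 0, 0, -1)]   # h0 + h1 z + h2 z^2
delta = quat(0.5, 1, -2, 0.25)
beta = quat(1, -1, 2, 0)
dinv = np.linalg.inv(delta)

# (h*delta)^{e_r}(beta) = h^{e_r}(delta beta delta^{-1}) * delta
lhs_r = eval_right([c @ delta for c in h], beta)
rhs_r = eval_right(h, delta @ beta @ dinv) @ delta
assert np.allclose(lhs_r, rhs_r)

# (delta*h)^{e_l}(beta) = delta * h^{e_l}(delta^{-1} beta delta)
lhs_l = eval_left([delta @ c for c in h], beta)
rhs_l = delta @ eval_left(h, dinv @ beta @ delta)
assert np.allclose(lhs_l, rhs_l)
```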
The formulas for inverse transformations are presented in the next lemma.
\begin{lemma}
Given $\beta\in\mathbb F$ and bounded $h\in\mathbb F[z]$ such that $\mathfrak M_h(\beta)\neq 0$,
\begin{align}
&d=(h\delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\; \; \Leftrightarrow \; \;
\delta=(h^{\diamondsuit}d)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\cdot\mathfrak M_h(\beta)^{-1};
\label{5.12} \\
&d=(\delta h)^{\boldsymbol{\mathfrak{e}_\ell}}(\beta) \; \; \Leftrightarrow \; \;
\delta=\mathfrak M_h(\beta)^{-1}\cdot (dh^{\diamondsuit})^{\boldsymbol{\mathfrak{e}_\ell}}(\beta)
\label{5.13}
\end{align}
for any $d,\delta\in\mathbb F\backslash\{0\}$.
\label{L:7.1}
\end{lemma}
\begin{proof}
If $d=(h\delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)$, then by the formula \eqref{2.7}
(with $f=h^\diamondsuit$ and $g=h\delta$) we have
$$
(h^{\diamondsuit}d)^{\boldsymbol{\mathfrak{e}_r}}(\beta)=(h^\diamondsuit h\delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)=\delta\cdot\mathfrak M_h(\beta),
$$
where the second equality holds since $\mathfrak M_h\in Z_{\mathbb F}[z]$. Since $\mathfrak M_h(\beta)\neq 0$, the latter formula
implies the formula for $\delta$ in \eqref{5.12}, proving the
implication $\Rightarrow$ in \eqref{5.12}. For the reverse implication, write the second equality in \eqref{5.12} equivalently as
$$
\delta\cdot\mathfrak M_h(\beta)=(h^{\diamondsuit}d)^{\boldsymbol{\mathfrak{e}_r}}(\beta).
$$
We then apply the implication $\Rightarrow$ (just proven) to the latter equality (i.e.,
to $h^\diamondsuit$, $d$ and $\delta\cdot\mathfrak M_h(\beta)$ rather than $h$, $\delta$ and $d$) and then make use
of the second and the third relations in \eqref{4.38} to get
\begin{equation}
d=(h^{\diamondsuit\diamondsuit}\delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\cdot \mathfrak M_h(\beta)\cdot\mathfrak M_{h^\diamondsuit}(\beta)^{-1}
=(\mathfrak Q_h \delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)\cdot\mathfrak D_h(\beta).
\label{po21}
\end{equation}
Taking into account that $\mathfrak D_h\in Z_{\mathbb F}[z]$ and that $\mathfrak D_h(\beta)$ commutes with $\beta$,
we use the formula \eqref{2.7} to compute
\begin{align*}
(h\delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)=(\mathfrak Q_h \mathfrak D_h\delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)
&=\mathfrak Q_h^{\boldsymbol{\mathfrak{e}_r}}\big(\delta \mathfrak D_h(\beta)\beta \mathfrak D_h(\beta)^{-1}\delta^{-1}\big)\cdot \delta \cdot \mathfrak D_h(\beta)\\
&=\mathfrak Q_h^{\boldsymbol{\mathfrak{e}_r}}\big(\delta\beta\delta^{-1}\big)\cdot \delta \cdot \mathfrak D_h(\beta)=(\mathfrak Q_h\delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)
\cdot \mathfrak D_h(\beta),
\end{align*}
which together with \eqref{po21} implies $d=(h\delta)^{\boldsymbol{\mathfrak{e}_r}}(\beta)$, thus
completing the proof of \eqref{5.12}. The equivalence \eqref{5.13} is verified similarly.
\end{proof}
\noindent
We next apply Lemma \ref{L:7.1} to get the formulas for the elements $\rho_i, \gamma_j$ and to specify polynomials $g_s$ in \eqref{ma54}.
\begin{lemma}
{\rm (1)} If $\alpha_i\in\Lambda_0=\Lambda\backslash\left[\Omega\right]$, then the polynomial
$f_{{\boldsymbol \ell},i}=P_{{\Lambda}\backslash\{\alpha_i\},{\boldsymbol\ell}}\rho_i P_{\Omega,{\bf r}}$ satisfies
$f_{{\boldsymbol \ell},i}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=c_i$ if and only if
\begin{equation}
\rho_i=\left\{\begin{array}{ccc}P_{\Lambda\backslash\{\alpha_i\},{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}
\cdot \mathfrak M_{P_{\Omega,{\bf r}}}(\alpha_i)^{-1}\cdot
(c_i P_{\Omega,{\bf r}}^{\diamondsuit})^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i), & \mbox{if}& c_i\neq 0,\\
0,& \mbox{if}& c_i=0.\end{array}\right.\label{5.18}
\end{equation}
{\rm (2)} If $\beta_j\in \Omega_0=\Omega\backslash\left[\Lambda\right]$, then the polynomial
$f_{{\bf r},j}=P_{\Lambda,{\boldsymbol\ell}}\gamma_jP_{\Omega\backslash\{\beta_j\},{\bf r}}$
satisfies $f_{{\bf r},j}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=d_j$ if and only if
\begin{equation}
\gamma_j=\left\{\begin{array}{ccc} (P_{\Lambda,{\boldsymbol\ell}}^{\diamondsuit}d_j)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\cdot
\mathfrak M_{P_{\Lambda,{\boldsymbol\ell}}}(\beta_j)^{-1}\cdot P_{\Omega\backslash\{\beta_j\},{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1},&\mbox{if}& d_j\neq 0,\\
0,&\mbox{if}& d_j=0.\end{array}\right.
\label{5.14}
\end{equation}
\label{L:7.2}
\end{lemma}
\begin{proof} The case $c_i=\rho_i=0$ is obvious.
If $c_i\neq 0$, we apply the equivalence \eqref{5.13} to $h=P_{\Omega,{\bf r}}$,
$\delta=P_{\Lambda\backslash\{\alpha_i\},{\boldsymbol\ell}}\cdot\rho_i$, $d=c_i$ and $\beta=\alpha_i$ to conclude that
$c_i=f_{{\boldsymbol\ell},i}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=( P_{{\Lambda}\backslash\{\alpha_i\},{\boldsymbol\ell}}\cdot \rho_i\cdot
P_{\Omega,{\bf r}})^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)$
if and only if
$$
P_{\Lambda\backslash\{\alpha_i\},{\boldsymbol\ell}}\cdot\rho_i=
\mathfrak M_{P_{\Omega,{\bf r}}}(\alpha_i)^{-1}\cdot (c_iP_{\Omega,{\bf r}}^{\diamondsuit})^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i).
$$
The latter is equivalent to the top formula in \eqref{5.18}. The proof of part (2) relies on the equivalence \eqref{5.12} and is quite similar.
\end{proof}
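In the simplest instance of part (1) — $n=k=1$, so that $P_{\Lambda\backslash\{\alpha_i\},\boldsymbol\ell}\equiv 1$ and $P_{\Omega,{\bf r}}(z)=z-\beta$ — the formula \eqref{5.18} reduces to $\rho=\mathfrak M_{z-\beta}(\alpha)^{-1}\cdot(\alpha c-c\bar\beta)$, and the interpolation condition $f_{\boldsymbol\ell}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha)=c$ for $f_{\boldsymbol\ell}=\rho\,(z-\beta)$ becomes the Sylvester equation $\alpha\rho-\rho\beta=c$. This can be verified numerically over the quaternions (an illustrative sketch; the $2\times2$ complex model and helper \texttt{quat} are ad hoc):

```python
import numpy as np

def quat(a, b, c, d):
    # model the quaternion a + b*i + c*j + d*k as a 2x2 complex matrix
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

alpha = quat(0, 1, 0, 0)            # class invariants: Re = 0, |.|^2 = 1
beta  = quat(1, 0, 2, 2)            # class invariants: Re = 1, |.|^2 = 9 (disjoint class)
c     = quat(2, -1, 0.5, 3)         # prescribed left target value at alpha

t = beta.trace().real               # 2*Re(beta)
n = np.linalg.det(beta).real        # |beta|^2
beta_bar = t * np.eye(2) - beta     # quaternion conjugate, so (z-beta)^{diamond} = z - beta_bar

# least central multiple of z - beta evaluated at alpha: alpha^2 - t*alpha + n
M_at_alpha = alpha @ alpha - t * alpha + n * np.eye(2)

# formula (5.18) in this reduced case
rho = np.linalg.inv(M_at_alpha) @ (alpha @ c - c @ beta_bar)

# f(z) = rho*(z - beta) = z*rho - rho*beta; its left value at alpha is alpha*rho - rho*beta
f_left = alpha @ rho - rho @ beta
assert np.allclose(f_left, c)
```

Invertibility of \texttt{M\_at\_alpha} is exactly the requirement $[\alpha]\cap[\beta]=\varnothing$, i.e. $\mathfrak M_{z-\beta}(\alpha)\neq 0$.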
\begin{lemma}
Let $\alpha_i$ and $\beta_j$ belong to the conjugacy class $V_s$.
A polynomial $f_{_{V_s}}=P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}\cdot g_s \cdot P_{\Omega\backslash \Omega_s,{\bf r}}$ satisfies conditions
\begin{equation}
f_{_{V_s}}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=c_i\quad\mbox{and}\quad f_{_{V_s}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=d_j
\label{5.20uu}
\end{equation}
if and only if $g_s\in\mathbb F[z]$ is subject to
\begin{equation}
g_s^{\boldsymbol{\mathfrak{e}_\ell}}(\widetilde{\alpha}_i)=\rho_i\quad\mbox{and}\quad g_s^{\boldsymbol{\mathfrak{e}_r}}(\widetilde{\beta}_j)=\gamma_j
\label{5.20y}
\end{equation}
where $\widetilde{\alpha}_i, \widetilde{\beta}_j\in V_s$ are defined by
\begin{equation}
\widetilde{\alpha}_i=P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}\cdot\alpha_i\cdot
P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i),\quad
\widetilde{\beta}_j=P_{\Omega\backslash \Omega_s,{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\cdot\beta_j\cdot
P_{\Omega\backslash \Omega_s,{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1},
\label{5.9u}
\end{equation}
and where the elements $\rho_i$ and $\gamma_j$ are given by (compare with \eqref{5.14} and \eqref{5.18})
\begin{equation}
\rho_i=\left\{\begin{array}{ccc}P_{\Lambda\backslash \Lambda_s,{\boldsymbol\ell}}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)^{-1}
\cdot \mathfrak M_{P_{\Omega\backslash \Omega_s,{\bf r}}}(\alpha_i)^{-1}\cdot
(c_i P_{\Omega\backslash \Omega_s,{\bf r}}^{\diamondsuit})^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i ), & \mbox{if}& c_i\neq 0,\\
0,& \mbox{if}& c_i=0,\end{array}\right.
\label{5.22}
\end{equation}
\begin{equation}
\gamma_j=\left\{\begin{array}{ccc}(P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}^{\diamondsuit}d_j)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\cdot
\mathfrak M_{P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}}(\beta_j)^{-1}\cdot P_{\Omega\backslash \Omega_s,{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)^{-1},
&\mbox{if}& d_j\neq 0,\\ 0,&\mbox{if}& d_j=0.\end{array}\right.
\label{5.21}
\end{equation}
\label{L:5.6}
\end{lemma}
\begin{proof}
Since the polynomials $P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}$ and $P_{\Omega\backslash \Omega_s,{\bf r}}$
have no zeros in $V_s$, their values at $\alpha_i,\beta_j\in V_s$ are nonzero, and the formulas \eqref{5.9u}, \eqref{5.22},
\eqref{5.21} make sense. We next verify that $d_j$ and $c_i$ are recovered from \eqref{5.21} and \eqref{5.22} by
\begin{equation}
d_j=\left(P_{{\Lambda}\backslash \Lambda_s,{\boldsymbol \ell}}\cdot\gamma_j\cdot P_{\Omega\backslash \Omega_s,{\bf r}}\right)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j),
\qquad c_i=\left(P_{\Lambda\backslash \Lambda_s,{\boldsymbol \ell}}\cdot\rho_i\cdot P_{\Omega\backslash \Omega_s,{\bf
r}}\right)^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i).
\label{5.29}
\end{equation}
The trivial cases where $d_j=0$ and $c_i=0$ are clear. If $d_j\neq 0$, we have from \eqref{5.21},
$$
\gamma_j\cdot P_{\Omega\backslash \Omega_s,{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=
(P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}^{\diamondsuit}d_j)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\cdot
\mathfrak M_{P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}}(\beta_j)^{-1}
$$
and by implication $\; \Leftarrow\; $ in \eqref{5.12} and formula \eqref{2.7} we conclude
$$
d_j=\big(P_{\Lambda\backslash \Lambda_s,{\boldsymbol \ell}}\cdot\gamma_j\cdot P_{\Omega\backslash \Omega_s,{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)
=\left(P_{\Lambda\backslash \Lambda_s,{\boldsymbol \ell}}\cdot\gamma_j\cdot P_{\Omega\backslash \Omega_s,{\bf r}}\right)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j),
$$
which confirms the first equality in \eqref{5.29}. The second equality for $c_i\neq 0$ is verified in much the same way. On the other hand, for
$f_{_{V_s}}$ defined as in \eqref{ma56}, we have, by the formulas \eqref{2.3b} and \eqref{2.7} and by the definition \eqref{5.9u}
of $\widetilde{\beta}_j$,
\begin{align}
f_{_{V_s}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)&=\big(P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}\cdot g_s \cdot
P_{\Omega\backslash \Omega_s,{\bf r}}\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\notag\\
&=\big(P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}\cdot (g_s \cdot P_{\Omega\backslash \Omega_s,{\bf r}})^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\notag\\
&=\big(P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}\cdot g_s^{\boldsymbol{\mathfrak{e}_r}}(\widetilde{\beta}_j)\cdot P_{\Omega\backslash \Omega_s,{\bf r}}^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)
\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)\notag\\
&=\big(P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}\cdot g_s^{\boldsymbol{\mathfrak{e}_r}}(\widetilde{\beta}_j)\cdot
P_{\Omega\backslash \Omega_s,{\bf r}}\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j),\label{4.80}
\end{align}
and quite similarly,
\begin{equation}
f_{_{V_s}}^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=\big(P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}\cdot g_s \cdot
P_{\Omega\backslash \Omega_s,{\bf r}}\big)^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i)=\big(P_{\Lambda\backslash \Lambda_s,\boldsymbol\ell}\cdot g_s^{\boldsymbol{\mathfrak{e}_\ell}}(\widetilde{\alpha}_i)\cdot
P_{\Omega\backslash \Omega_s,{\bf r}}\big)^{\boldsymbol{\mathfrak{e}_\ell}}(\alpha_i).\label{4.81}
\end{equation}
Comparing \eqref{4.80}, \eqref{4.81} with the equalities \eqref{5.29}, we conclude that $f_{_{V_s}}$ of the form \eqref{ma56} satisfies
\eqref{5.20uu} if and only if $g_s$ is subject to conditions \eqref{5.20y}.
\end{proof}
\begin{remark}
{\rm By Theorem \ref{T:4.1}, the existence of a polynomial $f_{_{V_s}}$ satisfying conditions \eqref{5.20uu} is equivalent to the equality
$$
\big(L_{\alpha_i} \mathcal X_{_{V_s}} d_j\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j)=\big(L_{\alpha_i} \mathcal X_{_{V_s}} c_i\big)^{\boldsymbol{\mathfrak{e}_r}}(\beta_j),
$$
while the existence of a polynomial $g_s$ satisfying conditions \eqref{5.20y} is equivalent to
$$
\big(L_{\widetilde{\alpha}_i} \mathcal X_{_{V_s}} \gamma_j\big)^{\boldsymbol{\mathfrak{e}_r}}(\widetilde{\beta}_j)=\big(L_{\widetilde{\alpha}_i}
\mathcal X_{_{V_s}} \rho_i\big)^{\boldsymbol{\mathfrak{e}_r}}(\widetilde{\beta}_j).
$$
By Lemma \ref{L:5.6}, we conclude that the two latter equalities are equivalent.}
\label{R;last}
\end{remark}
Lemma \ref{L:5.6} clarifies the choice of $g_s$ in the formula \eqref{ma54}. We consider {\em all} interpolation conditions in the original problem
\eqref{1.18}, \eqref{1.19} within the conjugacy class $V_s$ and then take $g_s$ to be any solution of the associated problem \eqref{5.20y}
(with equally many interpolation conditions within the same conjugacy class). A parametrization of all such $g_s$ can be obtained via the general formula \eqref{brap1},
as explained in Section 3.2. Substituting these parametrizations for all $s=1,\ldots,m$ into \eqref{ma54}, one can get a slightly more structured generalized
Lagrange interpolation formula.
\bibliographystyle{amsplain}
| {
"timestamp": "2019-09-17T02:18:49",
"yymm": "1909",
"arxiv_id": "1909.06882",
"language": "en",
"url": "https://arxiv.org/abs/1909.06882",
"abstract": "For a division ring $\\mathbb F$, the polynomials $f\\in\\mathbb F$ can be evaluated \"on the left\" and \"on the right\" giving rise to left and right Lagrange interpolation problems. The problems containig interpolation conditions of the same type were considered in \\cite{lam1} where the solvability criterion was given in terms of polynomial independence of interpolation nodes. We establish the solvability criterion and describe all solutions of low degree (less than the number of interpolation conditions imposed) for the problem containing both \"left\" and \"right\" conditions.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Lagrange interpolation over division rings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9711290922181331,
"lm_q2_score": 0.731058584489497,
"lm_q1q2_score": 0.7099522595135587
} |
https://arxiv.org/abs/0810.1378 | Long-time behaviour of discretizations of the simple pendulum equation | We compare the performance of several discretizations of the simple pendulum equation in a series of numerical experiments. The stress is put on the long-time behaviour. We choose for the comparison numerical schemes which preserve the qualitative features of solutions (like periodicity). All these schemes are either symplectic maps or integrable (preserving the energy integral) maps, or both. We describe and explain systematic errors (produced by any method) in numerical computations of the period and the amplitude of oscillations. We propose a new numerical scheme which is a modification of the discrete gradient method. This discretization preserves (almost exactly) the period of small oscillations for any time step. | \section{Introduction}
A new but increasingly important direction in numerical analysis is
geometric numerical integration \cite{HLW,Is,IZ,MQ}.
Numerical methods within this approach are
tailored for specific equations rather than for large general classes of equations.
The aim is to preserve qualitative features, invariants and geometric properties
of studied equations, e.g.,
integrals of motion, long-time behaviour and sometimes even trajectories
(but it is difficult, sometimes even impossible, to preserve all properties
by a single numerical scheme).
``Although the apparent desirability of this practice might be obvious at first glance,
it nonetheless calls for a justification'' \cite{Is-multi}.
In this paper we perform a series of numerical experiments
comparing the performance of several standard and geometric
methods on the example of the simple pendulum equation. The
equation itself is very well known but its discrete counterparts
show many interesting and unexpected features, for instance the
appearance of chaotic behaviour for large time steps \cite{FA,Yo}.
We focus our attention on the stability and time step dependence
of the period and the amplitude for several discretizations of the
simple pendulum (assuming that the time step is sufficiently small).
We describe and explain small periodic oscillations of
the period and of the amplitude around their average values.
We confine our studies either to symplectic maps or to energy-preserving maps. It is well known that symplectic integrators
are very stable as far as the conservation of the energy is concerned. Since the beginning of the 1990s they have been
successfully used in the long-time integration of the solar system \cite{SW,WH1,Yo}, see also \cite{BFRB,GBB}.
The reason is that for any symplectic scheme of order $n$
the error of the Hamiltonian over an exponentially long time is of order $O(\varepsilon^n)$, where $\varepsilon$ is the constant step
of the integration \cite{BG,HLW,MPQ}.
Therefore, in studies of the long-time behaviour, symplectic
algorithms have a great advantage at the very beginning. Fortunately, the class of symplectic integrators includes
such well known and relatively simple numerical schemes as the standard leap-frog method and the implicit midpoint rule.
In this paper we compare these classical methods with new geometric methods which preserve the energy integral.
We also propose a new discretization (a modification of the discrete gradient method)
which has some advantages: it is almost exact for small oscillations (even for large
time steps) and keeps some outstanding properties of the discrete gradient method
(e.g., its precision in describing motions in the neighbourhood of
the separatrix).
\section{Symplectic discretizations of Newton equations}
We consider scalar autonomous Newton equations:
\begin{equation} \label{Newton}
\ddot \varphi = f (\varphi) \ ,
\end{equation}
which can be written as the following first order system
\begin{equation} \label{simham}
\dot \varphi = p \ , \quad
\dot p = f (\varphi) \ .
\end{equation}
The equations are integrable for any function $f = f (\varphi)$ (in this case by integrability we mean
the existence of the integral of motion, compare \cite{Suris}). The energy conservation law reads
\begin{equation} \label{ener-int}
\frac{1}{2} {\dot \varphi}^2 + V(\varphi) = E \ , \qquad f (\varphi ) = - \frac{d V (\varphi)}{d \varphi} \ ,
\end{equation}
where $E = {\rm const}$. The Hamiltonian is given by
\begin{equation} \label{H-V}
H (p, q) = \frac{p^2}{2} + V (q) \ .
\end{equation}
As an example to test quantitatively various numerical methods we will use
the simple pendulum equation
\begin{equation} \label{pendul}
\ddot \varphi = - k \sin\varphi \ .
\end{equation}
In this case the energy conservation law has the form
\begin{equation} \label{pend1}
\frac{1}{2} p^2 - k \cos\varphi = E \ .
\end{equation}
The constant $k$ is not important. It can be eliminated by a change of the variable $t$.
In the sequel (in any numerical computations) we assume $k =1$.
By the discretization of \rf{Newton} we mean an $\varepsilon$-family of difference equations (of the second order)
which in the continuum limit $\varepsilon \rightarrow 0$ yields \rf{Newton}. The initial conditions should be
discretized as well, i.e., we have to map $\varphi (0) \mapsto \varphi_0$, $\dot \varphi (0) \mapsto p_0$.
It is convenient to discretize \rf{simham} which automatically gives the discretization of $p$.
Thus we have an $\varepsilon$-dependent map $(\varphi_n, p_n) \mapsto (\varphi_{n+1}, p_{n+1})$. This map is called symplectic if for any $n$
\begin{equation}
d \varphi_{n+1} \wedge d p_{n+1} = d \varphi_n \wedge d p_n \ .
\end{equation}
The following lemmas give a convenient characterization of symplectic maps and we will apply them in the next sections.
\begin{lem} \label{PR}
The map \ $(\varphi_n, p_n) \mapsto (\varphi_{n+1}, p_{n+1})$,
implicitly defined by
\begin{equation} \label{mFG}
\varphi_{n+1} - \varphi_n = P (p_n, p_{n+1}, \varepsilon) \ , \quad p_{n+1} - p_n = R (\varphi_n, \varphi_{n+1}, \varepsilon) \ ,
\end{equation}
where $P$ and $R$ are differentiable functions, is symplectic if and only if
\begin{equation}
\frac{\partial P}{\partial p_n} \frac{\partial R}{\partial \varphi_n} =
\frac{\partial P}{\partial p_{n+1}} \frac{\partial R}{\partial \varphi_{n+1}} \neq 1 \ .
\end{equation}
\end{lem}
\noindent The proof is straightforward. Differentiating \rf{mFG} we get
\[ \begin{array}{l}
d \varphi_{n+1} - d \varphi_n = P,_1 d p_n + P,_2 d p_{n+1} \ , \\
d p_{n+1} - d p_n = R,_1 d \varphi_n + R,_2 d \varphi_{n+1} \ ,
\end{array} \]
(where the comma denotes partial differentiation). Then
\[ \begin{array}{l} \displaystyle
d \varphi_{n+1} = \frac{1 + P,_2 R,_1}{1 - P,_2 R,_2} \ d \varphi_n + \frac{P,_1 + P,_2}{1 - P,_2 R,_2} \ d p_n \ , \\[2ex]
\displaystyle
d p_{n+1} = \frac{R,_1 + R,_2}{1 - P,_2 R,_2} \ d \varphi_n + \frac{1 + P,_1 R,_2}{1 - P,_2 R,_2} \ d p_n \ ,
\end{array} \]
provided that $P,_2 R,_2 \neq 1$ (this condition means that the map defined by $P, R$
is non-degenerate). Therefore
\[ \displaystyle
d \varphi_{n+1} \wedge d p_{n+1} = \frac{1 - P,_1 R,_1}{1 - P,_2 R,_2} \ d \varphi_n \wedge d p_n \ .
\]
Hence the map is symplectic if $P,_1 R,_1 = P,_2 R,_2 \neq 1$ which ends the proof.
\begin{lem} \label{AB}
The map \ $(\varphi_n, p_n) \mapsto (\varphi_{n+1}, p_{n+1})$,
defined by
\begin{equation} \label{mAB}
\varphi_{n+1} - A (\varphi_n, \varepsilon) + \varphi_{n-1} = 0 \ , \quad p_n = \mu_0 (\varepsilon) \ \varphi_{n+1} + B (\varphi_n, \varepsilon) \ ,
\end{equation}
is symplectic for any differentiable functions $A, B$.
\end{lem}
\noindent In order to prove Lemma~\ref{AB} we compute
\[
d p_{n+1} = \mu_0 \ d \varphi_{n+2} + TB' d \varphi_{n+1} = \mu_0 \ TA' \ d \varphi_{n+1} - \mu_0 \ d \varphi_n + TB' \ d \varphi_{n+1} \ ,
\]
where the prime denotes the differentiation and $T$ denotes the shift. Therefore
\[
d \varphi_{n+1} \wedge d p_{n+1} = - \mu_0 \ d \varphi_{n+1} \wedge d \varphi_n \ .
\]
On the other hand \ $d \varphi_n \wedge d p_n = \mu_0 \ d \varphi_n \wedge d \varphi_{n+1}$, which ends the proof.
\section{Nonintegrable symplectic discretizations}
\label{sec-stand}
In this section we present some well known discretizations which preserve the symplectic structure of the
Newton equations (compare \cite{HLW}, p. 189-190) but have no integrals of motion.
\subsection{Standard discretization}
The standard discretization of the simple pendulum equation
\begin{equation} \label{standard}
\frac{\varphi_{n+1} - 2 \varphi_{n} + \varphi_{n-1}}{\varepsilon^2} = - k \sin \varphi_n
\end{equation}
is non-integrable \cite{Suris}. This discretization can be obtained by the application of
either leap-frog (St\"ormer-Verlet) scheme or one of the symplectic splitting methods.
It is interesting that we get the same discrete equation \rf{standard} but a different dependence
of $p_n$ on $\varphi_n, \varphi_{n+1}$ (compare \rf{PSV}, \rf{PSS}):
\begin{equation} \label{pedy3}
p_n = \frac{\varphi_{n+1}- \varphi_n}{\varepsilon} + c k \varepsilon \sin \varphi_n \ ,
\end{equation}
where $c = 0, \frac{1}{2}, 1$. By virtue of Lemma~\ref{AB} the standard discretizations are symplectic
(for any $c$).
\subsection{St\"ormer-Verlet (leap-frog) scheme}
The numerical integration scheme
\begin{equation} \left\{ \begin{array}{l} \label{SVzaba}
p_{n+\frac{1}{2} } = p_n + \frac{1}{2} \varepsilon f (\varphi_n) \ , \\[1ex]
\varphi_{n+1} = \varphi_n + \varepsilon p_{n+\frac{1}{2} } \ , \\[1ex]
p_{n+1} = p_{n+\frac{1}{2} }+ \frac{1}{2} \varepsilon f (\varphi_{n+1}) \ ,
\end{array} \right.
\end{equation}
is known as the St\"ormer-Verlet (or leap-frog) method (compare, e.g., \cite{HLW}).
Eliminating $p_{n + \frac{1}{2}}$, we can easily formulate the St\"ormer-Verlet scheme as a one-step method:
\begin{equation} \begin{array}{l} \label{SVonestep}
\varphi_{n+1} = \varphi_n + \varepsilon p_n + \frac{1}{2} \varepsilon^2 f (\varphi_n) \ , \\[3ex]
p_{n+1} = p_n + \frac{1}{2} \varepsilon \left( f (\varphi_n) + f \left( \varphi_n + \varepsilon p_n + \frac{1}{2} \varepsilon^2 f (\varphi_n) \right) \right) \ .
\end{array} \end{equation}
We can also formulate this method as
\begin{equation} \label{stanf}
\frac{\varphi_{n+1} - 2 \varphi_n + \varphi_{n-1}}{\varepsilon^2} = f (\varphi_n) \ ,
\end{equation}
\begin{equation} \label{PSV}
p_n = \frac{\varphi_{n+1}- \varphi_n}{\varepsilon} - \frac{\varepsilon}{2} f (\varphi_n) \ .
\end{equation}
In the simple pendulum case ($f (\varphi) = - k \sin\varphi$) we recognize in
the equations \rf{stanf}, \rf{PSV}
the standard discretization \rf{standard}, \rf{pedy3} with $c = 1/2$.
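As an illustration, the one-step form \rf{SVonestep} is straightforward to implement. The following Python sketch (function names are ours; $k=1$, so $f(\varphi)=-\sin\varphi$, and the initial data are those used later in the paper) iterates the scheme and monitors the energy error, which stays bounded, as expected for a symplectic method:

```python
import math

def leapfrog_step(phi, p, eps, f=lambda x: -math.sin(x)):
    """One Stormer-Verlet (leap-frog) step in the one-step form."""
    phi_new = phi + eps * p + 0.5 * eps**2 * f(phi)
    p_new = p + 0.5 * eps * (f(phi) + f(phi_new))
    return phi_new, p_new

def energy(phi, p):
    """Pendulum energy E = p^2/2 - cos(phi) (k = 1)."""
    return 0.5 * p**2 - math.cos(phi)

phi, p = 0.0, 1.8                 # initial data used in the paper
E0 = energy(phi, p)
drift = 0.0
for _ in range(10000):
    phi, p = leapfrog_step(phi, p, 0.1)
    drift = max(drift, abs(energy(phi, p) - E0))
# drift stays small and bounded (no secular growth)
```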
\subsection{Symplectic splitting methods}
The system \rf{simham} belongs to the class of
``partitioned systems'' which have the form
\begin{equation}
\dot \varphi = g (\varphi, p) \ , \quad \dot p = h (\varphi, p) \ ,
\end{equation}
where $g, h$ are given functions of two variables. We can discretize such systems
in one of the following two ways:
\begin{equation}
\varphi_{n+1} = \varphi_n + \varepsilon g (\varphi_n, p_{n+1} ) \ , \quad p_{n+1} = p_n + \varepsilon h (\varphi_n, p_{n+1} ) \ ,
\end{equation}
\begin{equation}
\varphi_{n+1} = \varphi_n + \varepsilon g (\varphi_{n+1}, p_n ) \ , \quad p_{n+1} = p_n + \varepsilon h (\varphi_{n+1}, p_n ) \ .
\end{equation}
Both these discretizations are called either symplectic Euler methods \cite{HLW} or symplectic splitting methods \cite{MQR2}.
In our case (see \rf{simham}) we have, respectively,
\begin{equation} \label{ss1}
\varphi_{n+1} = \varphi_n + \varepsilon p_{n+1} \ , \quad
p_{n+1} = p_n + \varepsilon f (\varphi_n ) \ ,
\end{equation}
\begin{equation} \label{ss0}
\varphi_{n+1} = \varphi_n + \varepsilon p_n \ , \quad
p_{n+1} = p_n + \varepsilon f ( \varphi_{n+1} ) \ .
\end{equation}
Finally, both \rf{ss1} and \rf{ss0} yield \rf{stanf}, but
instead of \rf{PSV} we have
\begin{equation} \label{PSS}
p_n = \frac{\varphi_{n+1}- \varphi_n}{\varepsilon} - \varepsilon f (\varphi_n) \quad {\rm or} \quad p_n = \frac{\varphi_{n+1}- \varphi_n}{\varepsilon} \ ,
\end{equation}
i.e., in the simple pendulum case we get \rf{pedy3} with $c=1$ and $c=0$, respectively.
\subsection{Implicit midpoint rule}
Any first order equation $\dot x = F (x)$ can be discretized using the implicit midpoint rule
(which coincides with the implicit 1-stage Gauss-Legendre-Runge-Kutta method, compare \cite{HLW}).
The first derivative is replaced by the difference quotient and the right hand side is evaluated
at midpoint $\frac{1}{2} (x_n + x_{n+1})$. In the case of the simplest Hamiltonian systems, given by \rf{simham},
we have:
\begin{equation} \begin{array}{l} \label{mid1}
\varphi_{k+1} = \varphi_k + \frac{1}{2} \varepsilon ( p_{k} + p_{k+1} ) \ , \\[2ex]
p_{k+1} = p_k + \varepsilon f (\frac{\varphi_k + \varphi_{k+1}}{2} ) \ .
\end{array} \end{equation}
In the special case of the simple pendulum we get
\begin{equation} \begin{array}{l}
\frac{\varphi_{k+1} - 2 \varphi_k + \varphi_{k-1}}{\varepsilon^2} = - \frac{1}{2} k \left( \sin (\frac{\varphi_{k+1}+ \varphi_k}{2} )
+ \sin \left( \frac{\varphi_k + \varphi_{k-1} }{2} \right) \right) \ , \\[2ex]
p_k = \frac{\varphi_{k+1} - \varphi_k}{\varepsilon} + \frac{1}{2} \varepsilon k \sin \frac{\varphi_{k+1} + \varphi_k}{2} \ .
\end{array} \end{equation}
The implicit midpoint rule has quite good properties: this is a symplectic, time-reversible method of order 2.
The symplecticity follows directly from Lemma~\ref{PR}. Indeed, \rf{mid1} implies $P,_1 = P,_2$ and $R,_1 = R,_2$.
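The implicit equations \rf{mid1} have to be solved at every step; for small $\varepsilon$ a simple fixed-point iteration suffices. A minimal Python sketch for the pendulum (function names, the predictor and the tolerances are ours) again shows a bounded energy error:

```python
import math

def midpoint_step(phi, p, eps, k=1.0, tol=1e-14, itmax=50):
    """One implicit midpoint step for the pendulum, solved by
    fixed-point iteration on (phi_{k+1}, p_{k+1})."""
    phi1, p1 = phi + eps * p, p        # explicit predictor
    for _ in range(itmax):
        phi_new = phi + 0.5 * eps * (p + p1)
        p_new = p - eps * k * math.sin(0.5 * (phi + phi_new))
        done = abs(phi_new - phi1) + abs(p_new - p1) < tol
        phi1, p1 = phi_new, p_new
        if done:
            break
    return phi1, p1

phi, p = 0.0, 1.8
E0 = 0.5 * p**2 - math.cos(phi)
err = 0.0
for _ in range(5000):
    phi, p = midpoint_step(phi, p, 0.1)
    err = max(err, abs(0.5 * p**2 - math.cos(phi) - E0))
# the energy error oscillates but remains bounded
```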
\section{Projection methods}
Non-integrable discretizations can be modified so as to preserve the energy integral ``by force'', i.e., by projecting
the result of every step onto the constant energy manifold. In principle, any one-step method can be converted into
the corresponding projection method. In this paper we apply these procedures
to the St\"ormer-Verlet (leap-frog) method.
Therefore, referring to the ``standard projection''
and the ``symmetric projection'' we always mean the standard (or symmetric) projection applied to the leap-frog scheme.
\subsection{Standard projection method}
We are given a first order equation $\dot x = F (x)$, $x \in {\mathbb R}^2$, a one-step
numerical method $x_{n+1} = \Phi_\varepsilon (x_n)$
(a discretization of the ODE), and a constraint $ g (x) = 0$ \ we would like to preserve.
The standard projection consists in computing $\tilde x_{n+1} := \Phi_\varepsilon (x_n)$, and then
orthogonally projecting $\tilde x_{n+1}$ on the manifold $g (x) = 0$, see \cite{HLW}.
This projection, denoted by $x_{n+1}$, yields the next step: $x_n \rightarrow x_{n+1}$. In other words, we define
\begin{equation} \label{iter1}
x_{n+1} = \tilde x_{n+1} + \lambda \nabla g (\tilde x_{n+1} )
\end{equation}
where $\lambda$ is such that $g (x_{n+1}) = 0$.
Applying this approach to the simple pendulum \rf{pendul} it is convenient to define $x$ as
\begin{equation}
x = \left( \varphi, \ \frac{\dot \varphi}{\omega} \right) \equiv (\varphi,\ p) \ ,
\end{equation}
where $\omega = \sqrt{k}$. The above definition of $p$ yields dimensionless components of $x$.
If $k=1$ (which is assumed throughout this paper), then this definition of $p$ coincides
with the previous one, see \rf{simham}. The constraint $g(x)=0$ is given by \rf{pend1}, i.e.,
\begin{equation}
g (x) = \frac{1}{2} p^2 - \cos\varphi - h \ .
\end{equation}
where $h = E/\omega^2 $.
The equation \rf{iter1} becomes
\begin{equation}
\varphi_{n+1} = \tilde\varphi_{n+1} + \lambda \sin\tilde\varphi_{n+1} \ , \quad
p_{n+1} = (1 + \lambda) \tilde p_{n+1}
\end{equation}
and $\lambda$ is computed from
\begin{equation} \label{constr2}
\frac{1}{2} (1 + \lambda)^2 \tilde p_{n+1}^2 - \cos (\tilde\varphi_{n+1} + \lambda \sin\tilde\varphi_{n+1} ) = h \ .
\end{equation}
In order to solve \rf{constr2}
we use Newton's iteration $\lambda_{j+1} = \lambda_j + \Delta \lambda_j $, where
\begin{equation}
\Delta \lambda_j = - \frac{\frac{1}{2} (1 + \lambda_j)^2 \tilde p_{n+1}^2 - \cos (\tilde\varphi_{n+1} +
\lambda_j \sin\tilde\varphi_{n+1} ) - h }{{{\tilde p}_{n+1}^2} + \sin^2\tilde\varphi_{n+1}} \ ,
\end{equation}
and it is sufficient and convenient to choose $\lambda_0 = 0$.
The approximate solution of \rf{constr2} is given by $\lambda = \lim_{j\rightarrow \infty} \lambda_j$.
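A minimal Python sketch of the standard projection applied to the leap-frog scheme (names are ours; the correction uses the chord iteration with the denominator frozen at $\lambda = 0$, as in the text). After every projected step the trajectory satisfies the constraint to round-off accuracy:

```python
import math

def leapfrog(phi, p, eps, f=lambda x: -math.sin(x)):
    """Basic leap-frog step (the underlying one-step method)."""
    phi1 = phi + eps * p + 0.5 * eps**2 * f(phi)
    return phi1, p + 0.5 * eps * (f(phi) + f(phi1))

def project(phi_t, p_t, h, tol=1e-13, itmax=50):
    """Project the predicted point onto g = p^2/2 - cos(phi) - h = 0
    along grad g at the predicted point; chord iteration with the
    denominator frozen at lambda = 0."""
    lam, denom = 0.0, p_t**2 + math.sin(phi_t)**2
    for _ in range(itmax):
        g = (0.5 * (1.0 + lam)**2 * p_t**2
             - math.cos(phi_t + lam * math.sin(phi_t)) - h)
        if abs(g) < tol:
            break
        lam -= g / denom
    return phi_t + lam * math.sin(phi_t), (1.0 + lam) * p_t

phi, p = 0.0, 1.8
h = 0.5 * p**2 - math.cos(phi)
for _ in range(1000):
    phi, p = project(*leapfrog(phi, p, 0.1), h)
res = abs(0.5 * p**2 - math.cos(phi) - h)   # constraint residual
```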
\subsection{Symmetric projection method}
A one-step algorithm $x_{n+1} = \Phi_\varepsilon (x_n)$ is called symmetric (or time-reversible) if $\Phi_{-\varepsilon} = \Phi_\varepsilon^{-1}$.
The equations of classical mechanics are time-reversible; therefore preserving this property is desirable
and is expected to improve numerical results. The symplectic splitting methods
are not time-reversible while the St\"ormer-Verlet method and implicit midpoint rule are symmetric.
The symmetry can be easily noticed in the form \rf{SVzaba} of the leap-frog method.
The symmetric projection method preserves the time-reversibility. The method is applied under similar assumptions as
standard projection (additionally we demand the time-reversibility of $\Phi_\varepsilon$) and consists of the following steps
\cite{AR,H-sym}:
\begin{equation} \begin{array}{l}
\hat x_n = x_n + \lambda \nabla g (x_n) \ , \\[2ex]
\tilde x_{n+1} = \Phi_\varepsilon (\hat x_n) \ , \\[2ex]
x_{n+1} = \tilde x_{n+1} + \lambda \nabla g (x_{n+1}) \ ,
\end{array} \end{equation}
where we assume $g (x_n) = 0$ and compute
the parameter $\lambda$ from the condition $g (x_{n+1} ) = 0$.
\section{Integrable discretizations}
Throughout this paper by integrability we mean the existence of an integral of motion.
The Newton equation \rf{Newton} has the energy integral \rf{ener-int}. Its discretization
is called integrable when it has an integral of motion as well. In the continuum limit this integral
becomes the energy integral, so it may be treated as a discrete analogue of the energy.
\subsection{Standard-like discretizations}
Standard-like discretizations are defined by \cite{Suris}
\begin{equation} \begin{array}{l} \label{stlike}
\varphi_{n+1} = \varphi_n + \varepsilon p_{n+1} \ , \\[2ex]
p_{n+1} = p_n + \varepsilon F (\varphi_n,\varepsilon)
\end{array} \end{equation}
where $F$ has to satisfy $F (\varphi_n, 0) = f (\varphi_n)$.
For a given $f$ there exist infinitely many functions $F$ satisfying this condition.
All such discretizations are symplectic, as can easily be seen by applying Lemma~\ref{PR} with $P,_1 = R,_2 = 0$.
Similarly as in Section~\ref{sec-stand} we obtain from \rf{stlike}:
\begin{equation} \begin{array}{l} \label{Fqn}
\varphi_{n+1} - 2 \varphi_n + \varphi_{n-1} = \varepsilon^2 F (\varphi_n, \varepsilon) \ ,
\\[3ex] \displaystyle
p_n = \frac{\varphi_{n+1} - \varphi_n}{\varepsilon} - \varepsilon F (\varphi_n, \varepsilon) = \frac{\varphi_n - \varphi_{n-1}}{\varepsilon} \ .
\end{array} \end{equation}
We are interested in integrable cases, i.e., in discretizations preserving the energy integral.
Suris found that two standard-like discretizations of the simple pendulum are integrable \cite{Sur89,Suris}:
\begin{equation} \label{Sur1}
\varphi_{n+1} - 2 \varphi_n + \varphi_{n-1} = - 2 \arctan \left( \frac{ k \varepsilon^2 \sin\varphi_n}{2 + k \varepsilon^2 \cos\varphi_n} \right) \ ,
\end{equation}
\begin{equation} \label{Sur2}
\varphi_{n+1} - 2 \varphi_n + \varphi_{n-1} = - 4 \arctan \left( \frac{ k \varepsilon^2 \sin\varphi_n}{4 + k \varepsilon^2 \cos\varphi_n} \right) \ .
\end{equation}
The equation \rf{Sur1}, referred to as the Suris1 scheme, has the integral of motion given by
\begin{equation} \label{E1-Sur1}
E_1 = \frac{1}{2} \left( \frac{2 \sin\frac{\varphi_{n+1}-\varphi_n}{2}}{ \varepsilon } \right)^2 - \frac{1}{2} k \left( \cos\varphi_n + \cos\varphi_{n+1} \right) \ ,
\end{equation}
or, in terms of $\varphi_n$ and $p_n$,
\begin{equation} \label{E-Sur1}
E_1 = \frac{1 - \cos\varepsilon p_n }{\varepsilon^2} - \frac{1}{2} k \left( \cos\varphi_n + \cos (\varphi_n - \varepsilon p_n) \right) \ .
\end{equation}
The equation \rf{Sur2}, referred to as the Suris2 scheme, has the following integral of motion
\begin{equation} \label{E2-Sur2}
E_2 = \frac{1}{2} \left( \frac{4 \sin\frac{\varphi_{n+1}-\varphi_n}{4}}{ \varepsilon } \right)^2 - k \cos { \frac{ \varphi_n + \varphi_{n+1} }{2} } \ ,
\end{equation}
which can be expressed in terms of $\varphi_n$ and $p_n$ as follows
\begin{equation} \label{E-Sur2}
E_2 = \frac{4}{\varepsilon^2} \left( 1 - \cos \frac{\varepsilon p_n}{2} \right) - k \cos ( \varphi_n - \frac{\varepsilon p_n}{2} ) \ .
\end{equation}
One can verify the preservation of these integrals by direct computation.
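Since the map \rf{Sur1} is explicit, the conservation of \rf{E1-Sur1} can also be checked numerically in a few lines. A Python sketch (function names and initial data are ours; the deviation of $E_1$ stays at round-off level):

```python
import math

def suris1(phi_prev, phi, eps, k=1.0):
    """Explicit Suris1 map: returns phi_{n+1} from phi_{n-1}, phi_n."""
    theta = math.atan2(k * eps**2 * math.sin(phi),
                       2.0 + k * eps**2 * math.cos(phi))
    return 2.0 * phi - phi_prev - 2.0 * theta

def E1(phi, phi_next, eps, k=1.0):
    """Integral of motion of the Suris1 scheme."""
    return (0.5 * (2.0 * math.sin(0.5 * (phi_next - phi)) / eps)**2
            - 0.5 * k * (math.cos(phi) + math.cos(phi_next)))

eps = 0.1
phi0, phi1 = 0.0, 0.18        # roughly eps * p_0 with p_0 = 1.8
e0 = E1(phi0, phi1, eps)
dev = 0.0
for _ in range(1000):
    phi0, phi1 = phi1, suris1(phi0, phi1, eps)
    dev = max(dev, abs(E1(phi0, phi1, eps) - e0))
# dev is at round-off level: E1 is conserved exactly by the map
```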
\subsection{Discrete gradient method}
The discrete gradient method \cite{MQR2,QC,QT} is a general and very powerful method to
generate numerical schemes preserving any number of integrals of motion and some other properties
\cite{MQR1}. However, this method in general is not symplectic.
In this paper we need to preserve one integral (the energy) and the system is Hamiltonian, compare \rf{H-V},
\begin{equation} \label{hameq}
\dot \varphi = \frac{\partial H}{\partial p} \ , \quad
\dot p = - \frac{\partial H}{\partial \varphi} \ .
\end{equation}
In this case the discrete gradient method reduces to the following simple scheme.
The left-hand sides of the formulas \rf{hameq} are discretized in the simplest way (as difference quotients), while the right-hand sides
are replaced by the so-called discrete (or average) gradients:
\begin{equation}
\frac{\varphi_{n+1} - \varphi_n}{\varepsilon} = \frac{\Delta H}{\Delta p}\ , \quad
\frac{p_{n+1} - p_n }{\varepsilon} = - \frac{\Delta H}{\Delta \varphi} \ .
\end{equation}
The discrete gradient $\bar \nabla H \equiv \left( \frac{\Delta H}{\Delta \varphi} , \ \frac{\Delta H}{\Delta p} \right)$
of a differentiable function $H ( \varphi, p)$ by definition (see \cite{MQR2}) satisfies the condition
\begin{equation}
H (\varphi_{n+1}, p_{n+1}) - H (\varphi_n, p_n) = \frac{\Delta H}{\Delta \varphi} ( \varphi_{n+1} - \varphi_n ) +
\frac{\Delta H}{\Delta p} ( p_{n+1} - p_n ) \ .
\end{equation}
The explicit form of $\bar \nabla H$ is, in general, not unique. One of the possibilities is the coordinate increment
discrete gradient \cite{IA}
\begin{equation}
\frac{\Delta H}{\Delta \varphi} = \frac{ H ( \varphi_{n+1}, p_n ) - H (\varphi_n, p_n) }{\varphi_{n+1} - \varphi_n } \ ,
\quad \frac{\Delta H}{\Delta p} = \frac{ H (\varphi_{n+1}, p_{n+1} ) - H ( \varphi_{n+1}, p_n) }{ p_{n+1} - p_n } \ .
\end{equation}
Other possibilities are, for instance, mean value discrete gradient \cite{MQR2} and midpoint discrete gradient \cite{Gon}.
All these definitions coincide in the case $H (\varphi, p) = T (p) + V (\varphi)$. In such case $\bar \nabla H = \bar \nabla T + \bar \nabla V$, where
\begin{equation}
\bar \nabla T = \frac{ T (p_{n+1}) - T (p_n )}{p_{n+1} - p_n} \ , \quad
\bar \nabla V = \frac{ V (\varphi_{n+1}) - V (\varphi_n )}{\varphi_{n+1} - \varphi_n} \ .
\end{equation}
Thus we obtain the discrete gradient scheme:
\begin{equation} \left\{ \begin{array}{l} \displaystyle \label{disgrad}
\frac{p_{n+1} + p_n }{2} = \frac{ \varphi_{n+1} - \varphi_n}{\varepsilon} \ , \\[2ex]
\displaystyle
\frac{ p_{n+1} - p_n }{\varepsilon} = - \frac{ V (\varphi_{n+1}) - V(\varphi_n) }{\varphi_{n+1} - \varphi_n} \ . \\[1ex]
\end{array} \right.
\end{equation}
This numerical scheme can also be obtained as a special case of the modified midpoint rule
\cite{LG}.
The system \rf{disgrad} can be rewritten
as the following second order equation for $\varphi_n$ plus the defining equation for $p_n$:
\begin{equation} \begin{array}{l} \label{pQ} \displaystyle
\frac{\varphi_{n+1} - 2 \varphi_n + \varphi_{n-1} }{\varepsilon^2} = - \frac{1}{2} \left( \frac{ V (\varphi_{n+1}) - V (\varphi_n) }{ \varphi_{n+1} - \varphi_n }
+ \frac{ V (\varphi_n) - V (\varphi_{n-1}) }{ \varphi_n - \varphi_{n-1} } \right) \\[4ex] \displaystyle
p_n = \frac{ \varphi_{n+1} - \varphi_n}{\varepsilon} + \frac{1}{2} \varepsilon \left( \frac{ V (\varphi_{n+1}) - V (\varphi_n )}{ \varphi_{n+1} - \varphi_n} \right) \ .
\end{array} \end{equation}
Substituting $ V(\varphi) = - k \cos\varphi$ we get the simple pendulum case. Multiplying both equations
\rf{disgrad} side by side, we easily prove that the system \rf{pQ} has the first integral
\begin{equation}
E = \frac{1}{2} p_n^2 + V (\varphi_n )
\end{equation}
which exactly coincides with the Hamiltonian \rf{H-V} evaluated at $\varphi_n$, $p_n$. Note that
the integrals of motion
\rf{E-Sur1}, \rf{E-Sur2} coincide with \rf{H-V} (where $V (\varphi) = - k \cos\varphi$) only approximately,
in the limit $\varepsilon \rightarrow 0$.
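A Python sketch of the discrete gradient scheme \rf{disgrad} for the pendulum ($V(\varphi) = -\cos\varphi$, $k=1$), with the implicit step solved by fixed-point iteration (function names, the predictor and the tolerances are ours). The exact preservation of the energy shows up as a deviation at the solver-tolerance level:

```python
import math

def V(x):
    """Pendulum potential, k = 1."""
    return -math.cos(x)

def dg_step(phi, p, eps, tol=1e-14, itmax=100):
    """One discrete gradient step, solved by fixed-point iteration;
    the difference quotient of V replaces V'."""
    phi1, p1 = phi + eps * p, p
    for _ in range(itmax):
        dphi = phi1 - phi
        slope = (V(phi1) - V(phi)) / dphi if abs(dphi) > 1e-15 else math.sin(phi)
        p_new = p - eps * slope
        phi_new = phi + 0.5 * eps * (p + p_new)
        done = abs(phi_new - phi1) + abs(p_new - p1) < tol
        phi1, p1 = phi_new, p_new
        if done:
            break
    return phi1, p1

phi, p = 0.0, 1.8
E0 = 0.5 * p**2 + V(phi)
dev = 0.0
for _ in range(2000):
    phi, p = dg_step(phi, p, 0.1)
    dev = max(dev, abs(0.5 * p**2 + V(phi) - E0))
# the energy is preserved up to the solver tolerance
```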
\section{A correction which preserves the period of small oscillations}
The classical harmonic oscillator equation $\ddot \varphi + \omega^2 \varphi = 0$ admits the exact discretization
(\cite{CR}, compare also \cite{Ag,Reid}), i.e., a discretization such that the solution $\varphi(t)$ evaluated at
$n \varepsilon$ equals $\varphi_n$ (for any $\varepsilon$, and any $n$):
\begin{equation} \begin{array}{l} \label{exact-osc}
\varphi_{n+1} - 2 \varphi_n \cos\varepsilon\omega + \varphi_{n-1} = 0 , \\[2ex]
\displaystyle p_n = \frac{\omega}{\sin\omega\varepsilon} \left( \varphi_{n+1} - \varphi_n \cos\omega\varepsilon \right) \ .
\end{array} \end{equation}
The energy is also exactly preserved, i.e.,
\begin{equation}
E = \frac{1}{2} p_n^2 + \frac{1}{2} \omega^2 \varphi_n^2
\end{equation}
does not depend on $n$ (which can be easily checked by direct calculation). The existence of the exact discretization
of the harmonic oscillator equation has been recently used to discretize the Kepler problem (preserving all integrals
of motion and trajectories) \cite{Ci-Kep}.
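The exactness of \rf{exact-osc} can be verified directly: substituting the exact solution $\varphi(t) = C\sin(\omega t + a)$, sampled at $t = n\varepsilon$, into the scheme gives vanishing residuals for an arbitrary, not necessarily small, step. A Python check (parameter values are ours):

```python
import math

# Substitute the exact solution phi(t) = C*sin(omega*t + a), sampled at
# t = n*eps, into the exact discretization; both residuals vanish even
# though eps is large.
omega, eps, C, a = 1.7, 0.8, 0.5, 0.3
phi = lambda n: C * math.sin(omega * eps * n + a)

# residual of phi_{n+1} - 2*cos(eps*omega)*phi_n + phi_{n-1} = 0
rec_err = max(abs(phi(n + 1) - 2.0 * phi(n) * math.cos(eps * omega) + phi(n - 1))
              for n in range(1, 50))
# p_n from the scheme versus the exact velocity C*omega*cos(omega*t_n + a)
p_err = max(abs(omega / math.sin(omega * eps)
                * (phi(n + 1) - phi(n) * math.cos(omega * eps))
                - C * omega * math.cos(omega * eps * n + a))
            for n in range(50))
```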
We consider the class of Newton equations \rf{Newton}. Let us confine ourselves to equations
which have a stable equilibrium at $\varphi = 0$, i.e., $f'(0) < 0$. Then
$V = V(\varphi)$ has a local minimum at $\varphi = 0$, i.e., $V'(0) = f (0) = 0$. We denote
\begin{equation} \label{omega0}
\omega_0 = \sqrt{V''(0)} \ .
\end{equation}
Thus
\begin{equation} \label{Tay}
V (\varphi) = V_0 + \frac{1}{2} \omega_0^2 \varphi^2 + \ldots \ ,
\end{equation}
and small oscillations around the equilibrium can be approximated by the classical harmonic
oscillator equation with $\omega = \omega_0$.
Do there exist discretizations which become exact in the limit $\varphi_n \approx 0$ (with $\varepsilon$ fixed)? Known discretizations,
including those presented in this paper, do not have this property. Fortunately, we found such a discretization
by modifying the discrete gradient method. It is sufficient to replace
$\varepsilon$ by a suitable function $\delta = \delta (\varepsilon)$ in the formulae \rf{disgrad}. The form of this function
will be obtained by comparison with the harmonic oscillator equation (in the limit $\varphi \approx 0$).
We linearize the equations \rf{pQ} (with $\varepsilon$ replaced by $\delta$) around $\varphi_n = 0$
(i.e., we take into account \rf{Tay}). Thus we get
\begin{equation} \begin{array}{l}
\displaystyle \frac{\varphi_{n+1} - 2 \varphi_n + \varphi_{n-1}}{\delta^2} = - \frac{\omega_0^2}{4} \left( \varphi_{n+1} + 2 \varphi_n + \varphi_{n-1} \right) \ , \\[3ex]
\displaystyle p_n = \frac{ \varphi_{n+1} - \varphi_n}{\delta} + \frac{1}{4} \omega_0^2 \delta \left( \varphi_{n+1} + \varphi_n \right) \ ,
\end{array} \end{equation}
which is equivalent to
\begin{equation} \begin{array}{l} \label{exact-?}
\displaystyle \varphi_{n+1} - 2 \left( \frac{4 - \omega_0^2 \delta^2}{4 + \omega_0^2 \delta^2} \right) \varphi_n + \varphi_{n-1} = 0 \ , \\[3ex]
\displaystyle p_n = \frac{4 + \omega_0^2 \delta^2}{4 \delta} \left( \varphi_{n+1} - \left( \frac{4 - \omega_0^2 \delta^2}{4 + \omega_0^2 \delta^2} \right) \varphi_n \right) \ .
\end{array} \end{equation}
We compare \rf{exact-osc} with \rf{exact-?}. Both systems coincide if and only if
\begin{equation} \label{komega}
\frac{4 - \omega_0^2 \delta^2}{4 + \omega_0^2 \delta^2} = \cos\varepsilon\omega \ , \quad \frac{4 + \omega_0^2 \delta^2}{4 \delta} = \frac{\omega}{\sin\varepsilon\omega} \ .
\end{equation}
Solving the system \rf{komega} we get
\begin{equation} \label{delta}
\omega =\omega_0 \ , \quad \delta = \frac{2}{\omega_0} \tan \left( \frac{\varepsilon \omega_0}{2} \right) \ .
\end{equation}
Therefore, we propose the following new discretization of the Newton equation \rf{Newton}, \rf{ener-int}
(modified discrete gradient scheme):
\begin{equation} \label{dis-delta}
\begin{array}{l} \displaystyle
\frac{\varphi_{n+1} - 2 \varphi_n + \varphi_{n-1} }{\delta^2} = - \frac{1}{2} \left( \frac{ V (\varphi_{n+1})
- V(\varphi_n) }{ \varphi_{n+1} - \varphi_n} + \frac{ V (\varphi_n) - V (\varphi_{n-1})}{\varphi_n - \varphi_{n-1}} \right) \\[4ex] \displaystyle
p_n = \frac{ \varphi_{n+1} - \varphi_n}{\delta} + \frac{1}{2} \delta \left( \frac{ V (\varphi_{n+1}) - V (\varphi_n) }{ \varphi_{n+1} - \varphi_n } \right)
\end{array} \end{equation}
where $\delta$ is defined by \rf{delta} (and $\omega_0$ is given by \rf{omega0}). This discretization becomes exact for small
oscillations for any fixed $\varepsilon$. It means that for $\varphi_n \approx 0$ the period and the amplitude of the approximate solution should be very close to
the exact values (even for large $\varepsilon$!). In the next sections we will verify this point
experimentally.
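The matching conditions \rf{komega} under the choice \rf{delta} reduce to the trigonometric identities $\cos 2x = (1-\tan^2 x)/(1+\tan^2 x)$ and $\sin 2x = 2\tan x/(1+\tan^2 x)$ with $x = \varepsilon\omega_0/2$, which can be confirmed numerically. A Python check (the sampled step sizes are ours):

```python
import math

# Matching conditions (komega) under the choice (delta):
# delta = (2/omega0)*tan(eps*omega0/2) turns both into identities.
omega0 = 1.0                      # sqrt(V''(0)) for the pendulum, k = 1
worst = 0.0
for eps in (0.1, 0.5, 1.0, 1.4):
    delta = 2.0 / omega0 * math.tan(0.5 * eps * omega0)
    d2 = (omega0 * delta)**2
    c1 = abs((4.0 - d2) / (4.0 + d2) - math.cos(eps * omega0))
    c2 = abs((4.0 + d2) / (4.0 * delta) - omega0 / math.sin(eps * omega0))
    worst = max(worst, c1, c2)
# worst is at round-off level for every eps
```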
\section{Numerical experiments}
We performed a number of numerical experiments applying the numerical
schemes presented above.
The initial data were parameterized by the velocity
$p_0$ while the initial position was always the same: $\varphi_0 = 0$.
In the continuous case \rf{pendul} we have 3 possibilities: oscillating motion ($|p_0| < 2$), rotating
motion ($|p_0| > 2$) and the motion along the separatrix ($p_0 = \pm 2$), from
$\varphi = 0$ to (asymptotically) $\varphi = \pm \pi$.
The (theoretical) amplitude $A_{th}$ for the oscillating motions can be easily computed
from the energy conservation law \rf{pend1} (where $k=1$, i.e., $\frac{1}{2} p_0^2 - 1 = -\cos A_{th}$):
\begin{equation}
2 \sin \frac{A_{th}}{2} = p_0 \ .
\end{equation}
In particular, we performed many numerical computations for the following initial data:
\begin{itemize}
\item $p_0 = 0.1$, then $A_{th} \approx 0.0318443 \pi \approx 0.1000417$ (small amplitude)
\item $p_0 = 1.8$, then $A_{th} \approx 0.712867 \pi \approx 2.239539$ (very large amplitude).
\end{itemize}
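These values follow directly from $2\sin(A_{th}/2) = p_0$, i.e., $A_{th} = 2\arcsin(p_0/2)$. A Python check (the function name is ours):

```python
import math

def amplitude(p0):
    """Theoretical amplitude from 2*sin(A_th/2) = p_0 (k = 1)."""
    return 2.0 * math.asin(0.5 * p0)

A_small = amplitude(0.1)     # ~ 0.0318443*pi ~ 0.1000417
A_large = amplitude(1.8)     # ~ 0.712867*pi  ~ 2.239539
```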
To estimate the actual amplitude of a given discrete simulation we
apply the following procedure: if $\varphi_m$ is a local maximum of the discrete trajectory
(i.e., $\varphi_m > \varphi_{m-1}$ and
$\varphi_m > \varphi_{m+1}$),
then we estimate the maximum of the approximated function by the maximum of the parabola best fitted to
the following five points: $\varphi_{m-2}, \varphi_{m-1}, \varphi_m, \varphi_{m+1}, \varphi_{m+2}$.
An analogous procedure is applied at the local minima (we take the absolute value of the obtained minimum).
Thus we obtain a sequence of the amplitudes, $A_N$. The index
$N$ is common for all extrema (maxima and minima), and on some figures we denote
it by $N_{1/2}$ (the number of half periods) to discern it from $N$ (the number of periods).
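For equally spaced samples the least-squares parabola through five points has a simple closed form, and its vertex gives the amplitude estimate. A Python sketch of this procedure (function name and test data are ours; the true extremum deliberately falls between grid points):

```python
import math

def parabola_peak(y):
    """Vertex value of the least-squares parabola through five
    consecutive samples y[m-2..m+2] (unit spacing)."""
    x = (-2.0, -1.0, 0.0, 1.0, 2.0)
    sy = sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sx2y = sum(xi * xi * yi for xi, yi in zip(x, y))
    c1 = sxy / 10.0                  # sum of x^2 is 10
    c2 = (sx2y - 2.0 * sy) / 14.0    # from n = 5, sum of x^4 is 34
    c0 = (sy - 10.0 * c2) / 5.0
    return c0 - c1 * c1 / (4.0 * c2)

# samples of A*cos around a maximum falling between grid points
A, eps, shift = 1.5, 0.1, 0.37
y = [A * math.cos(eps * (j - 2 - shift)) for j in range(5)]
A_est = parabola_peak(y)             # close to the true amplitude A
```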
Every numerical scheme used in the present paper yields a discrete trajectory with rather stable amplitude.
It is not constant but oscillates in a regular way around an average value:
\begin{equation}
A_N = A ( 1 + \alpha_N ) \ ,
\end{equation}
where both the average amplitude $A$ and relative (dimensionless)
oscillations $\alpha_N$ can depend on the time step $\varepsilon$ and on the initial
velocity $p_0$,
i.e., $A = A (p_0, \varepsilon)$ and
$\alpha_N = \alpha_N (p_0, \varepsilon) $. Of course, both $A$ and $\alpha_N$
differ for different numerical schemes.
In a similar way we estimated the period of discrete motions.
The exact periodicity ($\varphi_{k+n} = \varphi_k$ for some $k, n$) is a rare phenomenon and, of course,
we did not observe it.
To define the approximate period we fit a continuous curve to the discrete graph,
estimate zeros of this function, and compute the distance between the neighbouring zeros.
Suppose that $\varphi_{m} \varphi_{m+1} < 0$ \ for some $m$. It means that one of the zeros, say $z_N$, lies between
$\varphi_m$ and $\varphi_{m+1}$. We estimate it by
the zero of the cubic polynomial interpolating the points $\varphi_{m-1}$, $\varphi_{m}$,
$\varphi_{m+1}$, $\varphi_{m+2}$ (another natural, but less accurate, possibility would be the line joining
$\varphi_m$ and $\varphi_{m+1}$).
Then, denoting subsequent estimated zeros by $z_N$ ($N=1,2,3,\ldots$) and
$z_0 = \varphi_0 = 0$,
we define
\begin{equation}
T_N = z_{2N} - z_{2N-2} \ ,
\end{equation}
which we take as an estimate of the period.
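A Python sketch of the zero estimation (function names and test data are ours): the cubic through four consecutive samples is built in Lagrange form and its root between the middle two samples is located by bisection. Applied to samples of $\sin t$, the estimated zero is close to $\pi$:

```python
import math

def lagrange_cubic(ts, ys):
    """Cubic polynomial interpolating four points (Lagrange form)."""
    def poly(t):
        s = 0.0
        for i in range(4):
            li = 1.0
            for j in range(4):
                if j != i:
                    li *= (t - ts[j]) / (ts[i] - ts[j])
            s += ys[i] * li
        return s
    return poly

def zero_between(ts, ys):
    """Zero of the interpolating cubic between the middle two samples
    (assumes ys[1]*ys[2] < 0); located by bisection."""
    p, a, b = lagrange_cubic(ts, ys), ts[1], ts[2]
    for _ in range(60):
        m = 0.5 * (a + b)
        if p(a) * p(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

ts = [3.0, 3.1, 3.2, 3.3]
ys = [math.sin(t) for t in ts]   # sign change between ts[1] and ts[2]
z = zero_between(ts, ys)         # close to pi
```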
Our numerical experiments have shown that $T_N$ is not exactly constant but
oscillates with a relatively small amplitude. The average value of $T_N$
is constant with high accuracy (see the next section).
Therefore we have
\begin{equation}
T_N = T ( 1 + \tau_N ) \ ,
\end{equation}
where both the average period $T$ and relative (dimensionless)
oscillations $\tau_N$ can depend on the time step $\varepsilon$ and on the initial
velocity $p_0$,
i.e., $T = T (p_0, \varepsilon)$ and
$\tau_N = \tau_N (p_0, \varepsilon) $. Moreover, $T$ and $\tau_N$
essentially depend on the discretization (numerical scheme).
The amplitude of these small oscillations is defined in a natural way:
\begin{equation}
\tau (\varepsilon, p_0) := \max_N |\tau_N (\varepsilon, p_0) | \ , \qquad \alpha (\varepsilon, p_0) := \max_N |\alpha_N (\varepsilon, p_0) | \ .
\end{equation}
Fortunately, $|\tau_N|$ and $|\alpha_N|$ oscillate (as functions of $N$), with small amplitudes,
in a very regular way. Thus
we can estimate $\tau (\varepsilon, p_0)$ and $\alpha (\varepsilon, p_0)$ considering a series of, say, 40 local extrema of
$\tau_N$ and $\alpha_N$, and taking an average value.
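The averaging over local extrema mentioned above can be sketched as follows (an illustrative Python fragment, assuming the sequence oscillates regularly; the function name is ours):

```python
import numpy as np

def oscillation_amplitude(x, n_extrema=40):
    """Estimate max_N |x_N| for a regularly oscillating sequence x_N
    by averaging a series of local extrema of |x_N| (here: 40 of them)."""
    ax = np.abs(np.asarray(x))
    inner = ax[1:-1]
    # interior local maxima of |x_N|
    peaks = inner[(inner >= ax[:-2]) & (inner >= ax[2:])]
    return peaks[:n_extrema].mean()
```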
\section{Periodicity and stability}
Discrete trajectories generated by symplectic or integrable schemes considered in our paper
are stable for $\varepsilon$ which are not too large (for very large $\varepsilon$ one can observe
chaotic behaviour, \cite{FA,Yo}). We confine ourselves to sufficiently small $\varepsilon$, i.e.
$\varepsilon \leqslant 0.5$,
but sometimes (for $p_0 < 1.5$) we can take even $\varepsilon \approx 1$. In this region the motion is very stable
and both the average period $T$ and the average amplitude $A$ are well defined.
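The discretizations themselves are defined in the first part of the paper; for the reader's convenience, a textbook leap-frog (St\"ormer--Verlet) step for $\ddot\varphi = -\sin\varphi$, which we assume agrees with the leap-frog scheme used here, can be sketched as:

```python
import numpy as np

def leapfrog_pendulum(p0, eps, nsteps):
    """Stormer-Verlet (leap-frog) trajectory for phi'' = -sin(phi),
    started at phi = 0 with initial momentum p0."""
    phi = np.empty(nsteps + 1)
    p = np.empty(nsteps + 1)
    phi[0], p[0] = 0.0, p0
    for k in range(nsteps):
        p_half = p[k] - 0.5 * eps * np.sin(phi[k])            # half kick
        phi[k + 1] = phi[k] + eps * p_half                    # drift
        p[k + 1] = p_half - 0.5 * eps * np.sin(phi[k + 1])    # half kick
    return phi, p
```

Being symplectic, such a scheme keeps the energy $E = p^2/2 - \cos\varphi$ oscillating in a bounded band instead of drifting, which is the kind of stability exploited throughout this section.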
The average amplitude is computed simply as
\begin{equation}
A_{avg}(N,M) = \frac{1}{M} \sum_{j=0}^{M-1} |A_{N+j}| \ ,
\end{equation}
where we usually assume $M = 50$. The definition of the average period is similar.
In many cases we use the formula
\begin{equation} \label{TNM}
T_{avg} (N, M) = \frac{1}{M} \left( z_{N+ 2 M} - z_N \right) \ ,
\end{equation}
where the dependence (very essential!) on $\varepsilon$ and $p_0$ is omitted for the sake of brevity.
Note that $T_N \equiv T_{avg} (2N-2, 1)$.
When computing $T_{avg}$ one has to choose $M$; we usually take $M=20$.
Sometimes we denote $N \equiv N_0$ to point out that the average is taken over
indices greater than $N_0$.
Considering very long discrete evolutions (many thousands of periods) we use
another definition of the average period. Namely,
we average $T_{avg} (N, M)$ over some range of the parameter $M$ ($K < M \leqslant L$):
\begin{equation}
{\bar T}_{avg} (N,K,L) = \frac{1}{L-K} \sum_{M = K+1}^{L} T_{avg} (N,M) \ .
\end{equation}
Usually we assume $K = 100$, $L = 200$.
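In code, the three averages amount to a few lines (a sketch; `z` is the array of estimated zeros and `A` the array of amplitudes $A_N$):

```python
import numpy as np

def T_avg(z, N, M=20):
    """T_avg(N, M) = (z_{N+2M} - z_N) / M, cf. Eq. (TNM)."""
    return (z[N + 2 * M] - z[N]) / M

def Tbar_avg(z, N, K=100, L=200):
    """Average of T_avg(N, M) over K < M <= L (long-run definition)."""
    return np.mean([T_avg(z, N, M) for M in range(K + 1, L + 1)])

def A_avg(A, N, M=50):
    """Average amplitude over M consecutive values A_N, ..., A_{N+M-1}."""
    return np.mean(np.abs(A[N:N + M]))
```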
All discretizations considered in the present paper are characterized by very high stability
of the period and the amplitude.
One can hardly notice any dependence of $T_{avg}$ and $A_{avg}$ on $N$, even when
testing very large $N$
(like $10^3$, $10^5$ or $10^6$), and ${\bar T}_{avg}$ is even more stable.
As a typical example we present long-time behaviour of the Suris1 scheme, see Fig.~\ref{Suris1-195-02} and
Fig.~\ref{Suris1-195-02-long}, where we used the definition \rf{TNM} with $M=20$.
An interesting phenomenon is associated with changing $M$. The pictures for different $M$ usually
are very similar but the amplitude of oscillations becomes
smaller and smaller for larger $M$ (compare Fig.~\ref{Suris1-195-02-TN}, where $M=1$,
with Fig.~\ref{Suris1-195-02-long}, where $M=20$).
Table~\ref{stability} shows how stable the periods of the oscillations are.
Maximal $T_N$ is defined as $\max_{J+100 < N \leqslant J +200} T_N$ for either $J=0$ or $J=1.8 \cdot 10^6$.
Minimal values and the average are taken over the same range of values. The standard error of the average is
about $5.7 \cdot 10^{-8}$ (the maximal error is about $10^{-7}$). Therefore, the average period is practically constant for all studied
discretizations. The Suris1 scheme is exceptionally stable. In this case any variations of the period are well
within the error limits and we did not observe any dependence of $T_{avg} (N, M)$ on $N$.
Taking into account the observed stability of the period, throughout this paper
we identify the average period with $T \equiv T_{avg} (0, 20)$.
The observed stability of the period (for symplectic and integrable discretizations)
is in sharp contrast with the results given by standard
(non-symplectic and non-integrable) numerical methods. For instance, the most popular (explicit) 4th order Runge-Kutta scheme
yields a period noticeably decreasing in time (see Fig.~\ref{RK-195-02}). For small $N_0$ we get a reasonably good estimation
of the period (interpolating the discrete curve we get $T = 11.64602$ for $N_0=0$), which is quite close to the theoretical
value $T_{th} = 11.65758528$. From among our discretizations only the two gradient schemes produce
comparable (even a little bit better) results, namely the discrete gradient scheme yields $T= 11.64698$.
However, for larger $N_0$ the Runge-Kutta method yields
worse and worse estimation of the period (in fact this is an
exponential decrease, although very slow) while both gradient methods remain stable for very long time,
compare Table~\ref{stability}. In this particular case ($p_0 = 1.95$, $\varepsilon = 0.2$)
the error produced by the Runge-Kutta
method becomes greater than the errors of all methods considered in this paper beginning from
$N_0 \approx 2000$.
Numerical experiments show that the oscillations of the period and the amplitude are very small.
For $\varepsilon \rightarrow 0$ we have $\tau (\varepsilon, p_0) \rightarrow 0$, up to the
round-off error.
The largest values of $\tau (\varepsilon, p_0)$, obtained for both projection methods (for large $\varepsilon$
and small $p_0$), are of order 0.2. All other discretizations
yield oscillations smaller by one or two orders of magnitude
(even for large $\varepsilon$). A typical picture is given in Fig.~\ref{Tosc-18}, representing $\tau (\varepsilon, p_0)$ for
$p_0 = 1.8$.
\section{Why do the period and the amplitude oscillate in such a regular way?}
In a large range of parameters the oscillations $\tau_N$ are very regular and their amplitude
is greater than numerical errors by several orders of magnitude.
This phenomenon turns out to be caused mainly by systematic numerical by-effects.
Our explanation is associated with the above
procedure of estimating zeros.
In general, the period $T \equiv T_{avg}$ and $\varepsilon$ are incommensurable.
Therefore the relative position of $z_N$ between $\varphi_{m}$ and $\varphi_{m+1}$
depends on $N$.
We conjecture that the periodic phenomena one observes at Fig.~\ref{T-leapfrog-005-18}, Fig.~\ref{A-leapfrog-005-18},
Fig.~\ref{T-leapfrog-01-005}, Fig.~\ref{A-leapfrog-01-005}, Fig.~\ref{T-Suris1-01-005} and Fig.~\ref{A-Suris1-01-005}
are associated with the properties of the real number $T/\varepsilon$, namely, with the approximation of
$T/\varepsilon$ and $T/(2\varepsilon)$ by rational numbers.
We begin with simple definitions. Given $T, \varepsilon \in \mathbb R$ ($T > \varepsilon > 0$) and $K \in \mathbb N$ we define:
\begin{equation} \label{munu}
\mu_K := \frac{K T}{\varepsilon} - M_K \ , \quad \nu_K := \frac{K T}{2 \varepsilon} - L_K \ ,
\end{equation}
such that $- 0.5 < \mu_K \leqslant 0.5$, $- 0.5 < \nu_K \leqslant 0.5$ and $M_K, L_K \in \mathbb N$.
In other words, for a given $K$ we take $M_K$ such that $M_K/K$ is the best rational approximation
(with a given denominator $K$) of the real number $T/\varepsilon$, and $L_K/K$ is the best rational approximation
(with the denominator $K$) of $T/(2\varepsilon)$. For given $T, \varepsilon, K$ the formulas \rf{munu} define uniquely
$\mu_K$, $\nu_K$, $M_K$, $L_K$.
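Numerically, $\mu_K$ and $\nu_K$ are signed distances to the nearest integer and can be computed directly (Python sketch; the half-open convention $-0.5 < \mu_K \leqslant 0.5$ is enforced explicitly):

```python
import numpy as np

def mu_nu(T, eps, K):
    """Return (mu_K, nu_K) as defined in Eq. (munu): the signed distances
    of K*T/eps and K*T/(2*eps) to the nearest integers M_K and L_K."""
    def frac(x):
        r = x - np.floor(x + 0.5)        # distance to the nearest integer
        return 0.5 if r == -0.5 else r   # enforce -0.5 < r <= 0.5
    return frac(K * T / eps), frac(K * T / (2.0 * eps))
```

For the data of Example 1 ($T \approx 9.1254146$, $\varepsilon = 0.05$) this reproduces $\mu_2 \approx 0.017$ and, in agreement with item 4 of the lemma that follows, $\mu_2 = \nu_4$.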
The following lemma can be derived directly from the above definitions.
\begin{lem} \label{lemmunu}
Suppose that $T > \varepsilon > 0$ are given.
\begin{enumerate}
\item If \ $|\mu_K + \mu_J| < 0.5$, then $M_{K+J} = M_K +M_J$ and $\mu_{K+J} = \mu_K + \mu_J$.
\item If \ $|\nu_K + \nu_J| < 0.5$, then $L_{K+J} = L_K +L_J$ and $\nu_{K+J} = \nu_K + \nu_J$.
\item If \ $|\nu_K| < 0.25$, then $M_K = 2 L_K$ and $\mu_K = 2 \nu_K$.
\item If \ $K$ is even, then $M_{K/2} = L_K$ and $\mu_{K/2} = \nu_K$.
\end{enumerate}
\end{lem}
\begin{cor}
If $\nu_K \approx 0$, then $\mu_K \approx 0$ and, for $K$ even, also
$\mu_{K/2} \approx 0$.
\end{cor}
If $\mu_K \approx 0$, then the configuration
of $z_N$, $\varphi_m$, $\varphi_{m+1}$ practically repeats after every $K$
periods. Therefore it is natural to expect
some periodic recurrences
with the period $K T$. In particular, $\tau_{N+K} \approx \tau_N$ for any $N$.
To obtain a ``good'' approximation we usually demand at least $|\mu_K| < 0.01$.
Sometimes, especially for small $K$ (e.g., $K \leqslant 5$), interesting effects can be observed also for
larger $|\mu_K|$ (but, anyway, $|\mu_K| < 0.1$): the graph of the function $N \rightarrow T_N$
apparently splits into $K$ ``discrete curves'' ($T_N$ and $T_M$ belong to the same curve if $N = M \ ({\rm mod}\ K)$).
Similar considerations can be made for the oscillations $\alpha_N$ of the amplitude.
In this case the period is $T/2$ and ``good'' approximations correspond to $\nu_K \approx 0$.
\begin{ex}[\rm leap-frog scheme, $\varepsilon = 0.05$, $p_0 = 1.8$, $T \approx 9.1254146$] \label{Ex1}
We compute $T/\varepsilon \approx 182.508291$ and
easily check that
$\mu_2 \approx 0.017$, $\mu_{59} \approx - 0.011$, $\mu_{61} \approx 0.0058$, $\mu_{120} \approx - 0.0051$, $\mu_{181} \approx 0.00067$.
Fig.~\ref{T-leapfrog-005-18} confirms that the characteristic ``time scales'' responsible for
the pattern of the oscillations are indeed 2, 120, and 181.
The period 2 corresponds to oscillations between two
sinusoid-like curves. Namely, $T_N$ belongs to the first ``sinusoid''
for $N$ odd, and to the second ``sinusoid'' for $N$ even. Both discrete curves are periodic with the period 120.
Actually, the whole picture seems to have the translational symmetry with the period 60.
The difference between $T_{N+60}$ and $T_N$ is quite large (in this sense 60 is not a period, indeed),
however $T_N$ lies between $T_{N+59}$ and $T_{N+61}$.
The next period, 181, is more difficult to notice and
corresponds to more subtle effects, like the configuration of points near intersections of both
``sinusoids'', which approximately repeats every three
``sinusoid'' half-periods.
Similarly, we compute
$\nu_4 \approx 0.017$, $\nu_{59} \approx - 0.0054$, $\nu_{181} \approx 0.00034$ and $\nu_{240} \approx -0.0051$.
On Fig.~\ref{A-leapfrog-005-18} we recognize four discrete curves, periodic with the period 240.
The whole picture has the period 60 but looking closely at some details (e.g., at peaks or at intersections)
we can also notice another periodicity with the period 181.
Finally, we point out that all equalities suggested by Lemma~\ref{lemmunu} hold (e.g., $\mu_{61} = \mu_2 + \mu_{59}$,
$\nu_{240} = \nu_{59} + \nu_{181}$, $\mu_{59} = 2 \nu_{59}$, $\mu_2 = \nu_4$, etc.).
\end{ex}
\begin{ex}[\rm leap-frog scheme, $\varepsilon = 0.1$, $p_0 = 0.05$, $T \approx 6.28155042$] \label{Ex2}
$T/\varepsilon \approx 62.815504$ and we check that
$\mu_5 = 0.078$, $\mu_{11} = - 0.029$, $\mu_{27} = 0.019$, $\mu_{38} = - 0.011$, $\mu_{65} = 0.0078$,
$\mu_{103} = - 0.0031$.
Fig.~\ref{T-leapfrog-01-005} does not look as regular as Fig.~\ref{T-leapfrog-005-18}.
Note that the $\mu_K$
are now relatively large: the first $\mu_K$ smaller than $0.01$ has the index $K=65$ and the next one is $K=103$.
However, a
closer inspection reveals similar features in both figures. We have five sinusoid-like curves (periodic with the period
65). The distance between them is 13 but the difference between $T_{N+13}$ and $T_N$ is large. Note that the period
$103 \approx 8 \times 13$, so points of only every eighth ``sinusoid'' practically coincide.
The other periods ($K = 11, 27, 38$) can be derived from $103$ and $65$, namely: $38 = 103-65$, $27=65-38$, $11=38-27$.
They can be noticed on Fig.~\ref{T-leapfrog-01-005} as well. For instance, the lowest points ($T_N$ between 6.28155037 and 6.28155038)
have $N = 6, 17, 22, 33, 44, 49, 60, 71, 82, 87, 98$, the distances between them are
given by $\Delta N = 11, 5, 11, 11, 5, 11, 11, 11, 5, 11$
(note that $11 + 11 + 5 = 27$).
To explain regularities on Fig.~\ref{A-leapfrog-01-005} we compute
$\nu_5 = 0.039$, $\nu_{22} = - 0.029$, $\nu_{27} = 0.0093$, $\nu_{49} = - 0.020$,
$\nu_{76} = - 0.011$, $\nu_{103} = - 0.0015$, $\nu_{130} = 0.0078$ and also $\nu_{645} = 0.00010$.
In this case the structure is also quite complicated because we have
several candidates for periods. Some of them admit
a clear interpretation.
Joining every fifth point we get five sinusoidal curves
with the period 130.
Thus the distance between neighbouring ``sinusoids'' is 26 which is very close to the period 27.
The subsequent minima are at $N = 3, 25, 52, 79, 106, 128, 155, 182, 209$, therefore $\Delta N = 22, 27, 27, 27, 22,
27, 27, 27$ (note that $|\nu_{22}|$ is also relatively small). Looking at configurations of points near every minimum we can notice
a distinct periodicity with the period 103.
\end{ex}
\begin{ex}[\rm Suris1 scheme, $\varepsilon=0.1$, $p_0=0.05$, $T \approx 6.29723795$] \label{Ex3}
In this case the structure of Fig.~\ref{T-Suris1-01-005} is extremely simple (a single discrete curve).
It can be explained by the non-existence of any ``small'' periods. The smallest one,
distinctly seen in Fig.~\ref{T-Suris1-01-005}, is 36. Namely,
$\mu_{36} = 0.0057$, $\mu_{145} = 0.0050$, $\mu_{181} = 0.00069$. The period
181 is even more exact than the period 36 ($\mu_{181}$ is much smaller than $\mu_{36}$). Therefore
after every five basic periods ($181 \approx 5 \times 36$) the periodicity improves.
Fig.~\ref{A-Suris1-01-005} consists of two intersecting discrete curves (periodic with the period 72),
because $\nu_2 = - 0.028$ is relatively small and $\nu_{72} = 0.0057$. Actually the important point is that
$\nu_3 = 0.46$ is
much greater than $|\nu_2|$. Note that $\mu_2 = - 0.055$ is also not very large but
$\mu_3 = - 0.083$ is of the same order. The whole structure has the period 36 but (similarly as in Example~\ref{Ex1})
the difference between $A_{N+36}$ and $A_N$ is quite large, $A_N$ is close to $A_{N+35}$ and $A_{N+37}$ ($\nu_{35} = 0.017$,
$\nu_{37} = - 0.011$). Moreover, we have the period 181, quite accurate ($\nu_{181} = 0.00034$). This periodicity
can be noticed by looking at the minima or at points where the discrete curves ``intersect''.
\end{ex}
Similar remarks concern the case presented in Fig.~\ref{Suris1-195-02}, Fig.~\ref{Suris1-195-02-long}
and Fig.~\ref{Suris1-195-02-TN}, where
$T \approx 11.88884005$ and
$\mu_9 = - 0.0044$, $\mu_{448} = 0.0035$, $\mu_{457} = - 0.00093$.
The pattern on any of these figures consists of nine discrete curves and is periodic with the period
close to 457.
The behaviour described in the above examples is typical, and similar periodic phenomena can be observed
for other discretizations and for other choices of parameters, except for very small values of $\varepsilon$
(e.g., $\varepsilon \leqslant 0.01$), when the periodic oscillations are comparable to or smaller than the round-off error
(then the oscillations become chaotic with a very small amplitude).
\section{Numerical estimates of the amplitude and the period}
All discretizations considered in this paper are characterized by very good stability
of their trajectories. Therefore, such quantities as average period and
(in the case of oscillating motions) average amplitude are well defined for
every discretization (provided that $\varepsilon$ is not too large, it is sufficient to assume
$\varepsilon \leqslant 0.5$).
\subsection{Average amplitude}
Relative errors for the average amplitude are presented in Table~\ref{error-amp}
(for $\varepsilon = 0.02$ and $\varepsilon = 0.5$). They were computed as differences between the
numerical results and exact amplitudes given in terms of elliptic functions.
One can immediately see that in any case the best results are given
by both gradient schemes
(and the worst ones are given by Suris1 and Suris2 schemes).
The relative error of the leap-frog and Suris' methods practically
does not depend on $p_0$.
The accuracy of gradient methods
increases for larger $p_0$, both for $\varepsilon=0.02$ and $\varepsilon=0.5$.
For small $\varepsilon$ (e.g., $\varepsilon = 0.02$) also projection schemes yield very small
errors,
like $10^{-8}$ or $10^{-9}$ (similar to the gradient
methods). However, for some $p_0$ their accuracy is very high (e.g., for $p_0 =1.6$) while
for some other $p_0$ it is relatively worse (e.g., for $p_0 = 0.8$).
The implicit midpoint rule is comparable
to gradient methods but only for small $p_0$ (e.g.,
$p_0 < 0.1$). The leap-frog method, both Suris' discretizations
and (for $p_0 > 1.6$) the
implicit midpoint rule yield much larger errors (by 4 orders of magnitude).
For greater $\varepsilon$ (e.g., $\varepsilon = 0.5$) the differences between the studied methods
are much smaller (they differ at most by 2 orders of magnitude).
Gradient methods are most accurate.
The implicit midpoint rule has similar accuracy for $p_0 < 1.2$ while
projection methods are not much worse for $p_0 > 1.8$.
The leap-frog method and both Suris' methods have larger relative errors for any $p_0$.
We point out, however, that even those ``large'' errors are not so bad (only several percent)
with the exception of $p_0$ approaching $2$ (when these discretizations fail to
reproduce properly even the qualitative behaviour).
Fig.~\ref{Aavg-18} illustrates the dependence of the average amplitude on $\varepsilon$ for
$p_0 = 1.8$. Gradient methods and (especially for $\varepsilon < 0.3$) projection
methods are most accurate.
\subsection{Average period}
Relative errors for the average period are presented in Table~\ref{error-per}
and also in Table~\ref{error-sep} (in both cases for $\varepsilon = 0.02$ and $\varepsilon = 0.5$).
For $p_0 < 0.5$ all discretizations except the modified discrete gradient method
have similar
relative errors (Suris1 scheme is the worst among them).
The modified discrete gradient method is much better
(for $p_0 \approx 0$ its error is smaller by 4 orders of magnitude, at least),
compare Fig.~\ref{Tavg-01} ($p_0 = 0.1$) and Fig.~\ref{delta-error} ($p_0 = 0.02$).
Then, with increasing $p_0$, all discretizations come to have similar accuracy,
with two very interesting exceptions: the leap-frog and implicit midpoint schemes
have ``resonance values'' for which their accuracy is much better than the accuracy
of all other methods. Fig.~\ref{rezonans} shows how accurate the leap-frog scheme is for
$p_0 = 1.21$ and for practically any $\varepsilon$.
Two other discretizations are also shown, the implicit midpoint and the modified
discrete gradient, which are much worse (for this value of $p_0$) than the leap-frog
(the other discretizations are even
less accurate).
The implicit midpoint scheme has an analogous ``resonance value'', namely
$p_0 \approx 1.6$. It is worthwhile to point out that, surprisingly,
projections applied to the leap-frog method have a strong negative effect
on the accuracy of the average period for $0.8 < p_0 < 1.8$,
especially for larger $\varepsilon$ (e.g., $\varepsilon = 0.5$).
If $p_0$ approaches $2$, then both gradient methods become more accurate than other methods
(only for small $\varepsilon$ the projection methods are better). For
$p_0$ very close to this limiting value the accuracy of all methods decreases rapidly,
and the leap-frog method and both Suris' methods produce rotating motions instead of
oscillations, see Table~\ref{error-sep}. The closest neighbourhood of the separatrix
($p_0=2$) is discussed in more detail below. Here we remark only that, for $p_0$ slightly
greater than 2, the implicit
midpoint method fails to reproduce rotations
and has wrong qualitative behaviour (i.e., oscillations).
In the case of rotating motions the relative error of the average period is very similar
for all considered methods except the discrete gradient scheme which is better by one or
two orders of magnitude.
\section{Interesting special cases}
In this section we briefly present several points which seem encouraging for
further studies.
\subsection{Extrapolation $\varepsilon \rightarrow 0$}
For all studied discretizations we expect
\begin{equation} \label{limes}
\lim_{\varepsilon \rightarrow 0} T (\varepsilon, p_0) = T_{th} (p_0) \ ,
\quad \lim_{\varepsilon \rightarrow 0} A (\varepsilon, p_0) = A_{th} (p_0)
\end{equation}
where $T_{th} (p_0)$, $A_{th} (p_0)$ do not depend on the discretization and are equal
to theoretical values computed from the analytic formula (in terms of elliptic functions),
compare Fig.~\ref{Tavg-01} and Fig.~\ref{Aavg-18}.
Let us analyse quantitatively the case presented at Fig.~\ref{Tavg-01}
(the exact period is $T_{th} \approx 6.28711783$).
Fitting 3rd-order polynomials (very close to parabolas, in fact)
to twelve points ($\varepsilon = 0.01, 0.02, \ldots, 0.11, 0.12$) we get
{\small
\begin{equation} \begin{array}{l}
T = -0.03867\varepsilon^3 + 1.310512\varepsilon^2 - 0.0001050\varepsilon + 6.28711875
\quad ({\rm Suris1}) \ , \\
T = -0.00909\varepsilon^3 + 0.524053\varepsilon^2 - 0.0000247\varepsilon + 6.28711805
\quad ({\rm Suris2}) \ , \\
T = -0.00475\varepsilon^3 - 0.260242\varepsilon^2 - 0.0000130\varepsilon + 6.28711794
\quad ({\rm leap}{\rm -}{\rm frog}) \ . \\
\end{array} \end{equation}}
The last terms estimate the exact period quite well. Taking $10^{-7}$ as a unit we
compute their absolute errors as: $9.2$, $2.2$ and $0.9$, respectively. They
are comparable with
the errors at $\varepsilon = 0.001$ (given by $13.1$, $5.2$ and
$- 2.6$, respectively).
The errors at $\varepsilon=0.01$ (namely, $1307.2$, $523.3$ and $260.6$) are higher by two orders
of magnitude. The modified discrete gradient scheme (with the $\delta$-correction)
beats all other discretizations:
its error at $\varepsilon = 0.01$ is only $1.3$ (in the same units).
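The extrapolation amounts to a least-squares polynomial fit in $\varepsilon$ followed by reading off the constant term (a sketch; note that `numpy.polyfit` returns coefficients with the highest degree first):

```python
import numpy as np

def extrapolate_period(eps_values, T_values, deg=3):
    """Fit T(eps) by a polynomial of degree `deg` and return the constant
    term, i.e. the estimate of the limit T(eps -> 0)."""
    coeffs = np.polyfit(eps_values, T_values, deg)
    return coeffs[-1]   # constant term of the fitted polynomial
```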
\subsection{The neighbourhood of the separatrix}
The separatrix is a border between oscillating and rotational motions.
Table~\ref{error-sep} presents the values of the period for motions near the separatrix,
i.e., $p_0 \approx 2$.
This is certainly the range of parameters most difficult for
accurate numerical simulations. The gradient schemes and projection methods yield
satisfying results, especially for small $\varepsilon$, and are much better than
all other methods. For rotating motions very close to the separatrix even projection
methods (especially the symmetric projection) become less accurate and
only gradient methods yield relatively good
quantitative results, see Table~\ref{error-sep}.
The other discretizations produce wrong results (in the neighbourhood of the separatrix)
even qualitatively.
Namely, the leap-frog
and both Suris' schemes begin to simulate rotating motions for $p_0 < 2$ (e.g., for $p_0 =1.99$
if $\varepsilon = 0.5$, and for $p_0 = 1.99999$ if $\varepsilon = 0.02$), while the implicit midpoint rule
produces oscillating motions for $p_0 > 2$ (e.g., for $p_0 = 2.000001$ if $\varepsilon = 0.02$, and
for $p_0 = 2.001$ if $\varepsilon = 0.5$). Even in the case of good qualitative behaviour
these methods yield very large relative errors,
especially for larger $\varepsilon$ (for $\varepsilon = 0.5$ and $|p_0 - 2| \leqslant 0.001$ the leap-frog,
implicit midpoint and both Suris' schemes yield relative errors like $30\%-70\%$ and more).
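The qualitative distinction used here follows from the pendulum Hamiltonian $H = p^2/2 - \cos\varphi$: with $\varphi_0 = 0$, the separatrix energy $E = 1$ is reached exactly for $p_0 = 2$. A minimal sketch of the classification (function names are ours):

```python
import numpy as np

def exact_motion_type(p0):
    """Continuous pendulum started at phi = 0: separatrix at E = 1 (p0 = 2)."""
    E = 0.5 * p0 ** 2 - 1.0      # E = p0^2/2 - cos(0)
    if E < 1.0:
        return "oscillation"
    if E > 1.0:
        return "rotation"
    return "separatrix"

def discrete_motion_type(phi):
    """Classify a computed (unwrapped) trajectory: rotations pass phi = pi."""
    return "rotation" if np.max(np.abs(phi)) > np.pi else "oscillation"
```

Comparing `discrete_motion_type` of a computed trajectory with `exact_motion_type(p0)` detects the qualitative failures listed above (e.g., a scheme reporting rotation for $p_0$ slightly below 2).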
If $p_0 = 2$, then (in the continuous case) we have the motion along the separatrix, i.e.,
$\varphi \rightarrow \pi$ for $t \rightarrow \infty$. For larger $\varepsilon$ (e.g., $\varepsilon = 0.2$) this behaviour is not
reproduced by any discretization. Interesting results are given by both gradient schemes,
see Fig.~\ref{separatrix} ($\varepsilon = 0.2$).
The standard gradient scheme produces oscillations, but after three periods one rotation
is performed. The modified discrete gradient scheme gives a strange motion: first oscillations
(two periods), then backward rotation (3 periods), forward rotation and the return to
oscillations. This picture depends on $\varepsilon$ and the round-off error chosen. In any case,
for both gradient schemes, we
have a number of chaotic-looking switches between oscillations and rotations in both
directions. Qualitatively this behaviour may be considered as satisfying. It reflects
the fact that the equilibrium at $\varphi = \pi$ is unstable. At the same time, the
projective discretizations (quite good at qualitative
description of motions near the separatrix) produce relatively slow rotational motion
(similarly as the standard leap-frog method and both Suris schemes). However, for very small
$\varepsilon$ (e.g., $\varepsilon \leqslant 0.00025$) the symmetric projection method seems to have the proper
qualitative behaviour and is much better than other considered numerical schemes, see
Fig.~\ref{sep-all}.
\subsection{Advantages of the new method}
The discrete gradient method with $\delta$-correction turned out to
be very efficient as far as the numerical estimation of the period (for
relatively small amplitudes) is concerned. The range of these "small"
amplitudes is quite large, up to $\varphi \approx \pi/4$, which corresponds
to $p_0 < 0.8$. Thus it contains also the cases which cannot be
approximated by the linear oscillator.
Even for $p_0 \approx 0.8$ the new method is several times better than the
best of other considered schemes, and for smaller $p_0$ it becomes better even
by 4 orders of magnitude (e.g., for $p_0 = 0.02$ the errors of other
discretizations are greater by the factor at least $0.5 \cdot 10^{4}$,
see Table~\ref{error-per}).
Fig.~\ref{Tavg-01} ($p_0 = 0.1$) shows how precise the period given
by our new method is in comparison with the period given by other numerical schemes.
Similarly, Fig.~\ref{delta-error} presents the relative error
for $p_0 = 0.02$ and a large range of $\varepsilon$.
We see that even for $\varepsilon =1$ the relative error is only $10^{-5}$!
For small $\varepsilon$ the error is
$10^{-9}$ or less.
Our method works very well also for larger amplitudes, but for $p_0$ larger
than $1.4$ the discrete gradient method is better, and the leap-frog scheme
and implicit midpoint are unbeatable around their ``resonance'' amplitudes ($p_0 \approx 1.2$
and $p_0 \approx 1.6$, respectively). In the case $p_0 > 2$ the delta correction
has a negative influence on the accuracy of the gradient discretization (which is the
best for rotating motions). However, the accuracy of the modified
discrete gradient method is on the same level as the accuracy of all
other considered methods.
In the close neighbourhood of the separatrix
the modified discrete gradient scheme behaves similarly to the discrete gradient method and
its qualitative behaviour is perfect. What is more,
also the quantitative results are very good (compare Table~\ref{error-sep}).
Fig.~\ref{2E-06} compares the behaviour of our method with the leap-frog and
implicit midpoint schemes for $p_0 = 2.000001$. The points generated by the
modified discrete gradient method practically coincide with the exact solution
(the relative error of the period is 0.59\%), almost as good a result as that given by
the discrete gradient scheme (the error is 0.25\%). The leap-frog
scheme produces good qualitative behaviour but with the period two times smaller than
the exact one. The implicit midpoint scheme gives wrong qualitative result: oscillations
instead of rotation.
\section{Conclusions}
All methods considered in this paper are characterized by very high stability
of periodic motions they generate (provided that $\varepsilon$ is not too large).
The average period is practically constant (with the accuracy
close to $10^{-7}$ or better) for a very long time (we checked even several millions
of periods).
The period and the amplitude perform regular small
oscillations (they are relatively larger for both projection methods). The periodic
character of
these oscillations turns out to be of a systematic origin and we explained it
considering rational approximations (with possibly small denominators)
of the real number $T/\varepsilon$.
The main aim of this paper was the comparison of
several numerical schemes.
The standard leap-frog method, although non-integrable, is quite good when compared with
typical integrable discretizations. Its performance should be enhanced by use of projection
methods which impose the conservation of the energy integral. The projections work very well
for small values of the time step (e.g., the symmetric projection gives excellent results
simulating the motion along the separatrix),
while for larger time steps they produce relatively large
fluctuations of the period and the amplitude. In any case the projections produce much more accurate
values of the average amplitude. The average period is of the same order, or even worse
(in comparison to the standard leap-frog method).
Surprising resonances occur for
$p_0 \approx 1.21$ (for the leap-frog method) and $p_0 = 1.6$ (for the implicit midpoint rule).
In the neighbourhood of these ``resonance'' values these methods have exceptionally high
accuracy of the estimated period (practically for any $\varepsilon$),
much better than all other methods. It would be interesting to explain this phenomenon.
Discretizations found by Suris \cite{Sur89} are very stable but,
at the same time, they have relatively
large errors as compared to other numerical schemes. This is surprising because
these methods are both integrable and symplectic.
In this case the error (i.e., deviation from the exact solution) seems
to be of a systematic origin.
We plan to construct appropriate modifications of Suris' discretizations
in order to enhance their precision without destroying their stability.
The discrete gradient method is (for any $\varepsilon$ and any $p_0$) among the most accurate methods.
For rotating motions this is certainly the best method.
We proposed a modification of the discrete gradient method which proved to be quite
successful, especially when applied to simulate oscillating motions. Our new method is extremely
efficient for small oscillations. The relative error of the period computed by this method
is smaller by at least 4 orders of magnitude in comparison with other numerical schemes.
{\it Acknowledgements.} The authors are grateful to Prof. Grzegorz Sitarski for useful comments
and for turning our attention to Refs.~\cite{BFRB} and \cite{GBB}. The first author was partially supported
by the Polish Ministry of Science and Higher Education
(grant no.\ 1 P03B 017 28).
| {
"timestamp": "2008-10-08T10:20:53",
"yymm": "0810",
"arxiv_id": "0810.1378",
"language": "en",
"url": "https://arxiv.org/abs/0810.1378",
"abstract": "We compare the performance of several discretizations of the simple pendulum equation in a series of numerical experiments. The stress is put on the long-time behaviour. We choose for the comparison numerical schemes which preserve the qualitative features of solutions (like periodicity). All these schemes are either symplectic maps or integrable (preserving the energy integral) maps, or both. We describe and explain systematic errors (produced by any method) in numerical computations of the period and the amplitude of oscillations. We propose a new numerical scheme which is a modification of the discrete gradient method. This discretization preserves (almost exactly) the period of small oscillations for any time step.",
"subjects": "Computational Physics (physics.comp-ph)",
"title": "Long-time behaviour of discretizations of the simple pendulum equation"
} |
https://arxiv.org/abs/1311.7441 | Almost involutive Hopf algebras | We define the concept of \emph{companion automorphism} of a Hopf algebra $H$ as an automorphism $\sigma:H \rightarrow H$: $\sigma^2=S^2$ --where $S$ denotes the antipode--. A Hopf algebra is said to be \emph{almost involutive} (AI) if it admits a companion automorphism that can be viewed as a special additional symmetry. We present examples and study some of the basic properties and constructions of AI-Hopf algebras centering the attention in the finite dimensional case. In particular we show that within the family of Hopf algebras of dimension smaller or equal than 15, only in dimension eight and twelve, there are non almost involutive Hopf algebras. | \section{Introduction.}
We start by summarizing the contents of this paper.
\bigskip
\noindent
In Section \ref{section:defejemplos} we introduce the definition of an \emph{almost involutive Hopf algebra}, show that
Sweedler's Hopf algebra is an example and observe that compact quantum groups provide examples in the infinite
dimensional case. Then we show that some of the standard constructions, such as
that of a matched pair or, in particular, of the Drinfel'd double, when applied to almost involutive Hopf algebras yield almost involutive Hopf algebras.
Moreover, since the AI property is automatic when the order of the antipode squared is odd, we concentrate on the interesting case, namely when the order of the antipode squared is even.
\bigskip
\noindent
In Section \ref{section:prelim} we recall some aspects of the theory of integrals in finite dimensional Hopf algebras,
the modular function, the modular element, etc. These tools will be used in the rest of the paper.
\bigskip
\noindent
In Section \ref{section:examples} we present some examples and non-examples of almost involutive Hopf algebras, and show that,
except in dimensions 8 and 12 and there only for a few types, all Hopf algebras of dimension at most 15 are almost
involutive.
\bigskip
\noindent
In the Appendix, we present a --probably well known-- result on square roots of linear automorphisms of Hopf algebras that yields necessary
and sufficient conditions for a finite order automorphism to have a square root that is also a Hopf algebra automorphism. This general result can be
used in some examples; for instance, we use it to show that the Taft algebras are almost involutive.
\bigskip
\noindent
We finish this introduction with some commentaries about notations.
\bigskip
\noindent
We work over an algebraically closed field $\k$ of characteristic zero and adopt the usual Sweedler notation and the other conventions
of the theory of Hopf algebras as they appear, for example, in \cite{kn:mont}; e.g.\ $\mathcal{S}$ denotes the antipode and $\lambda$ an integral.
Also, a non zero element $g$ in $H$ is called a \emph{group-like} element if $\Delta(g)=g\otimes g$, and we
write $G(H)$ and $G(H^\vee)$ for the sets of group-like elements in the original Hopf algebra and in its dual. Moreover, if $x\in H$
is such that $\Delta(x)=x\otimes g+ h\otimes x$, where $g,h\in G(H)$, then $x$ is called a \emph{$(g,h)$-primitive} element, and we write $P_{g,h}$ for the space of $(g,h)$-primitive elements of $H$.
A Hopf algebra is \textit{pointed} if its only simple subcoalgebras are one dimensional.
\bigskip
\noindent
In general we concentrate on the case of finite dimensional Hopf algebras.
\section{Main definition and general examples.}\label{section:defejemplos}
\subsection{General considerations}
Recall that a Hopf algebra $H$ is called \textit{involutive} if $\mathcal{S}^2=\ensuremath{\mathrm{id}}$. Examples of involutive Hopf algebras are commutative, cocommutative or semisimple Hopf algebras over ${\mathbb C}$.
\begin{defi}
We say that a Hopf algebra $H$ is \textit{almost involutive}, or that it is an \textit{AI-Hopf algebra}, if there exists a Hopf algebra automorphism
$\sigma:H\to H$ such that $\mathcal{S}^2=\sigma^2$. An automorphism $\sigma$ as above is called a {\em companion automorphism} of $\mathcal{S}$, or simply a {\em companion automorphism} of $H$.
\end{defi}
It follows from Radford's formula --see \cite{kn:bbt}, \cite{kn:radfordbasic} and \cite{kn:schcordoba}-- that $\mathcal{S}^2$ is a diagonalizable
Hopf algebra automorphism of finite order; more precisely, $\operatorname{ord}(\mathcal{S}^2)\,|\, 2\operatorname{ord}(a)\operatorname{ord}(\alpha)$ --where $a$ is the modular element and $\alpha$ the modular function, see Section \ref{section:prelim}--.
It can also be proved that if $\{H_n\}$ is the coradical filtration of $H$, then
$\operatorname{ord}\big(\mathcal{S}^2|_{H_1}\big)=\operatorname{ord}\big(\mathcal{S}^2\big)$ --see \cite[Lemma 4]{kn:radfordsch}--.
The observation that follows shows that if the antipode squared is of finite order $m$
--in particular if $H$ is finite dimensional--, then the only case in which it is interesting to consider the AI condition is when $m$ is even.
\begin{obse} \label{obse:oddtrivial}
Let $H$ be a Hopf algebra such that the order of $\mathcal{S}^2$ is odd. Then $H$ is almost involutive.
Indeed, if $(\mathcal{S}^2)^{2k-1}=\operatorname{id}$, then $\mathcal{S}^2=(\mathcal{S}^{2k})^2$.
\end{obse}
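For a concrete instance of this observation: if $\operatorname{ord}(\mathcal{S}^2)=3$ (so $2k-1=3$ and $k=2$), then $\sigma=\mathcal{S}^4$ is a companion automorphism, since
\begin{equation*}
\sigma^2=\left(\mathcal{S}^{4}\right)^2=\mathcal{S}^{8}=\left(\mathcal{S}^2\right)^{3}\mathcal{S}^{2}=\mathcal{S}^2 .
\end{equation*}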
\begin{example}\label{exam:first}
\emph{Sweedler's Hopf algebra.}\label{example:sweedler}
As an illustration of an almost involutive Hopf algebra with the antipode squared of even order, we take Sweedler's Hopf algebra $H_4$. As an
algebra $H_4$ is given by generators and relations as $H_4=\langle g,x:\ g^2=1,\ x^2=0,\ xg+gx=0\rangle$.
The comultiplication is given by $\Delta(g)=g\otimes g$, $\Delta(x)=x\otimes g+1\otimes x$, and the antipode is defined by
$\mathcal{S}(g)=g$, $\mathcal{S}(x)=-xg$. Clearly, $\mathcal{S}^2(g)=g$ and $\mathcal{S}^2(x)=-x$ and $|\mathcal{S}^2|=2$.
\noindent
A direct verification shows that the map $\sigma$ defined by $\sigma:g \mapsto g$, $\sigma: x \mapsto ix$ and extended multiplicatively,
is a companion automorphism where $i$ is a square root of $-1$.
The map: $g \mapsto g,\, x \mapsto -ix$ is also a companion automorphism.
\end{example}
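A direct check on the generators confirms that $\sigma$ squares to $\mathcal{S}^2$:
\begin{equation*}
\sigma^2(g)=g=\mathcal{S}^2(g),\qquad \sigma^2(x)=\sigma(ix)=i^2x=-x=\mathcal{S}^2(x).
\end{equation*}
As $\sigma$ is multiplicative and $\{1,g,x,gx\}$ is a basis of $H_4$, this verifies the identity on all of $H_4$.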
The following result shows that almost involutive, non involutive Hopf algebras abound.
\begin{teo}
Let $H$ be a finite dimensional pointed Hopf algebra.
If $|G(H)|$ is odd, then $H$ is almost involutive.
In particular this happens if the dimension of $H$ is odd.
\end{teo}
\begin{proof}
We know that $\operatorname{ord}\big(\mathcal{S}^2\big)=\operatorname{ord}\big(\mathcal{S}^2|_{H_1}\big)$, where $\{H_n\}$ is the coradical filtration of $H$.
As $H$ is pointed, $H_1={\mathbb K} G(H)\oplus\bigoplus_{g,h\in G(H)}P'_{g,h}(H)$, where $P'_{g,h}(H)$ is an arbitrary linear complement of $\k(g-h) \subseteq P_{g,h}(H)$ --see \cite[Thm.\ 0.1]{kn:stefan}--.
It is easy to see that $\mathcal{S}^2|_{G(H)}=\ensuremath{\mathrm{id}}$ and if $x\in P_{g,h}$, then $\mathcal{S}^2(x)=gh^{-1}xg^{-1}h$.
This last formula implies $\mathcal{S}^{2n}(x)=\left(gh^{-1}\right)^nx\left(g^{-1}h\right)^n$, for all $n$.
Hence $\operatorname{ord}\big(\mathcal{S}^2\big)$ divides $|G(H)|$ and we can apply Observation \ref{obse:oddtrivial}.
The last assertion follows because $|G(H)|$ divides the dimension of $H$.
\end{proof}
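To make the divisibility step at the end of the proof explicit: since $gh^{-1}\in G(H)$, taking $n=|G(H)|$ in the iterated formula gives, for every $x\in P_{g,h}$,
\begin{equation*}
\mathcal{S}^{2|G(H)|}(x)=\left(gh^{-1}\right)^{|G(H)|}x\left(g^{-1}h\right)^{|G(H)|}=x ,
\end{equation*}
so the order of $\mathcal{S}^{2}$ restricted to $H_1$ divides $|G(H)|$.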
If $|G(H)|$ is even, then the above theorem is no longer true --see Examples \ref{ej:8} and \ref{ej:12} below--.
\bigbreak
Concerning the infinite dimensional situation, we mention that in \cite{kn:andres-walter-mariana}, the authors --together with M. Haim--
proved the following general theorem.
\begin{teo}\emph{[\cite{kn:andres-walter-mariana}, Theorem 8.7.]}
Let $H$ be a compact quantum group with antipode $\mathcal{S}$.
Then, $H$ admits a unique companion automorphism --that in this case is denoted as $\mathcal{S}_+$ instead of $\sigma$--, that is a positive operator with respect to the natural inner product of $H$,
and there is a linear functional $\beta: H \rightarrow \mathbb C$ such that $\mathcal{S}_+(x)=\sum \beta(x_1)x_2\beta^{-1}(x_3)$.
\end{teo}
In the mentioned paper the following explicit example is constructed.
\begin{example} See Woronowicz's \cite{kn:worosu2}. Let $\mu$ be a non zero real number such that $|\mu|<1$, and call
$\operatorname{SU}_\mu(2,\mathbb C)$ the algebra generated by $\{\alpha,\alpha^*, \gamma, \gamma^*\}$, subject to the following relations:
$\alpha^*\alpha+\mu^2 \gamma^*\gamma=1$, $\alpha \alpha^*+\mu^4 \gamma \gamma^*=1$, $\gamma^*\gamma=\gamma\gamma^*$,
$\mu\gamma\alpha=\alpha\gamma$, $\mu\gamma^*\alpha=\alpha\gamma^*$, $\mu^{-1}\gamma\alpha^*=\alpha^*\gamma$,
$\mu^{-1}\gamma^*\alpha^*=\alpha^*\gamma^*$. The star structure in $\operatorname{SU}_\mu(2,\mathbb C)$, is defined as being antimultiplicative,
involutive, conjugate linear and defined on the generators as shown. This algebra admits a natural compatible comultiplication and becomes a
\emph{compact quantum group}
--see \cite{kn:worosu2} for details-- with antipode: $\mathcal{S}(\alpha)=\alpha^*, \mathcal{S}(\alpha^*)= \alpha, \mathcal{S}(\gamma)
=-\mu \gamma, \mathcal{S}(\gamma^*)=-\mu^{-1} \gamma^*$. One shows that the companion automorphism $\mathcal{S}_+$ is given as:
$\mathcal{S}_+(\alpha)=\alpha$, $\mathcal{S}_+(\alpha^*)=\alpha^*$, $\mathcal{S}_+(\gamma)=|\mu|\gamma$, $\mathcal{S}_+(\gamma^*)=|\mu|^{-1}\gamma^*$.
\end{example}
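As a consistency check --using that $\mu$ is real, so $|\mu|^2=\mu^2$-- one verifies $\mathcal{S}_+^2=\mathcal{S}^2$ on the generators:
\begin{equation*}
\mathcal{S}_+^2(\gamma)=|\mu|^2\gamma=\mu^2\gamma=\mathcal{S}^2(\gamma),\qquad
\mathcal{S}_+^2(\gamma^*)=|\mu|^{-2}\gamma^*=\mu^{-2}\gamma^*=\mathcal{S}^2(\gamma^*),
\end{equation*}
while $\mathcal{S}_+^2(\alpha)=\alpha=\mathcal{S}^2(\alpha)$, and similarly for $\alpha^*$.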
\begin{obse}
\begin{enumerate}
\item Recall that if $\tau:H \rightarrow H$ is a Hopf algebra homomorphism, then it commutes with $\mathcal{S}$. In particular
in the situation above the antipode $\mathcal{S}$ and the companion automorphism $\sigma$ commute.
\item In general one cannot guarantee the uniqueness of $\sigma$. See the comment at the end of Example \ref{exam:first}.
\item More examples and non-examples will be provided in later sections; here we mention the following: involutive Hopf algebras and the
quantum enveloping algebra of $\mathfrak{sl} (2)$.
\end{enumerate}
\end{obse}
\subsection{Constructions of almost involutive Hopf algebras.} \label{subsection:const}\quad
\smallskip
\noindent
The following constructions always produce AI-Hopf algebras.
\begin{enumerate}
\item \emph{Duals and tensor products.} If $H$ is a finite dimensional AI-Hopf algebra, so is its dual. If $H$ and $K$ are AI-Hopf algebras so is $H \otimes K$.
\item \emph{Matched pairs.} Assume that we have a matched pair of Hopf algebras $(A,H,\triangleleft,\triangleright)$. If $A$ and $H$ are almost involutive with companion morphisms $\sigma_A$ and $\sigma_H$, so is the bicrossed product $A \bowtie H$ provided that the given companion automorphisms for $A$ and $H$ satisfy the following compatibility conditions:
\begin{enumerate}
\item $\sigma_A(x \triangleright a)= \sigma_H(x) \triangleright \sigma_A(a)$
\item $\sigma_H(x \triangleleft a)= \sigma_H(x) \triangleleft \sigma_A(a)$.
\end{enumerate}
This assertion follows easily from the fact that the antipode for the bicrossed product is simply $\mathcal{S}(ax)=\mathcal{S}_H(x)\mathcal{S}_A(a)$ for $a \in A, x \in H$ --see for example \cite{kn:andres-walter-mariana-2} or \cite{kn:Majid} for a detailed description of the structure of $A \bowtie H$--. Hence, if we write $\mathcal{S}_A^2=\sigma_A^2$ and $\mathcal{S}_H^2=\sigma_H^2$, the Hopf algebra morphism $\sigma(ax)=\sigma_A(a) \sigma_H(x)$ does the job.
\item \emph{Drinfel'd double.} In particular, if $H$ is almost involutive, so is its Drinfel'd double $D(H)$. This follows from the fact that the Drinfel'd double can be viewed as a bicrossed product (see \cite{kn:andres-walter-mariana-2} or \cite{kn:Majid}) $D(H)=H^{\vee\ensuremath{\mathrm{cop}}} \bowtie H$ where the structure of matched pair is given by
$H \stackrel{\triangleleft}{\leftarrow} H\otimes H^\vee\stackrel{\triangleright}{\to}H^\vee$ defined as:
\begin{equation*}
(x\triangleright\alpha)(y)=\sum \alpha\left(\mathcal{S}^{-1}(x_2) y x_1\right),\quad
x\triangleleft \alpha = \sum \alpha\left(\mathcal{S}^{-1}(x_3)x_1 \right) x_2
,\quad \forall \alpha\in H^\vee,\ x,y\in H.
\end{equation*}
In this situation one easily verifies that if $\sigma$ is a companion automorphism for $H$, its natural extension to $D(H)$ is a companion automorphism for the double.
\item \label{item:trivial} \emph{Trivial extensions.}
\begin{enumerate}
\item Consider a Hopf algebra $H$ that can be written $H=K \oplus M$
where $K$ and $M$ satisfy the following conditions:
\begin{enumerate}
\item $K$ is a sub Hopf algebra of $H$.
\item $M$ is a $K$--bimodule and a $K$--bicomodule, in other words the following holds:
\begin{enumerate}
\item $KM + MK \subseteq M$;
\item $\Delta(M) \subseteq K \otimes M + M\otimes K$;
\end{enumerate}
\item $M$ is invariant by $\mathcal{S}$.
\item The extension of $K$ by $M$ is trivial, in other words, $M^2=0$.
\end{enumerate}
If $m \in M$, we have that $\Delta(m)=\sum h_i \otimes m_i + n_i \otimes k_i$ with $h_i,k_i \in K$ and $m_i,n_i \in M$, and hence $m=\sum \varepsilon(m_i)h_i + \varepsilon(k_i)n_i$. If the $h_i$ are linearly independent, we deduce that $\varepsilon(m_i)=0$, and similarly for the $n_i$. Hence, if we write the expression $\Delta(m)=\sum h_i \otimes m_i + n_i \otimes k_i$ with the $h_i$ and the $k_i$ linearly independent, and apply $\varepsilon \otimes \varepsilon$ to it, we obtain that $\varepsilon(m)=0$. Hence $\varepsilon(M)=0$.
Assume that $K$ is almost involutive, and call $\sigma_K$ the corresponding companion automorphism. Suppose also that we can find a linear map $\sigma_M$, with the property that $\sigma_M^2=\mathcal{S}^2|_M$ and such that:
\begin{enumerate}
\item $\sigma_M(hm)=\sigma_K(h)\sigma_M(m)\,,\,\sigma_M(mh)=\sigma_M(m)\sigma_K(h)$.
\item If $\Delta(m)=\sum h_i \otimes m_i + n_i \otimes k_i \in K \otimes M + M \otimes K$, then $\Delta(\sigma_M(m))=\sum \sigma_K(h_i) \otimes \sigma_M(m_i) + \sigma_M(n_i) \otimes \sigma_K(k_i)$.
\end{enumerate}
In that situation the map $\sigma(h + m)=\sigma_K(h)+\sigma_M(m)$ is a companion automorphism of $H$. First, it is clear that $\sigma^2=\mathcal{S}^2$. Moreover, $\sigma((h+m)(k+n))=\sigma(hk+hn+mk)=\sigma_K(hk)+\sigma_M(hn+mk)=
\sigma_K(h)\sigma_K(k)+ \sigma_K(h)\sigma_M(n)+\sigma_M(m)\sigma_K(k)= (\sigma_K(h)+\sigma_M(m))(\sigma_K(k)+\sigma_M(n))=\sigma(h+m)\sigma(k+n)$.
\noindent
Also, a direct verification shows that the map $\sigma$ is an automorphism of coalgebras.
\item
Consider the particular case of a trivial extension such that $K=\left\{x\in H:\ \mathcal{S}^2(x)=x\right\}$.
Using the techniques developed in the Appendix --see Example \ref{example:general}-- one can prove that in this situation $H$ is an almost involutive Hopf algebra.
Sweedler's Hopf algebra is an example. Clearly, $H_4$ can be written in the above manner with: $K=\k+\k g$ and $M=\k x + \k gx$.
In this situation $\sigma_K=\operatorname{id}$ and $\sigma_M=i\operatorname{id}$ satisfies all the required properties. Other examples of the above situation will appear later --see Section \ref{section:examples}--.
\end{enumerate}
\end{enumerate}
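For instance, in Sweedler's algebra with $K=\k+\k g$ and $M=\k x+\k gx$ as above, the triviality condition $M^2=0$ is checked directly from the relations $x^2=0$ and $xg=-gx$:
\begin{equation*}
x(gx)=-gx^2=0,\qquad (gx)x=gx^2=0,\qquad (gx)(gx)=-g^2x^2=0 .
\end{equation*}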
\section{Basic results on finite dimensional Hopf algebras.} \label{section:prelim}
Here we recall basic results and constructions concerning Hopf algebras that will be used later in the study of some of the examples
--see \cite{kn:Dascalescu} and \cite{kn:radfordbasic} for proofs and details--. We concentrate on the basic properties of integrals,
modular function and modular element.
There exist $\lambda \in H^\vee$ and $\ell \in H$ satisfying $\lambda(\ell)=1$, and invertible elements $a\in G(H)$ and
$\alpha\in \text{Alg}(H,{\mathbb K})$ such that for all $x \in H$:
\begin{equation}\label{eq:intl}
\sum x_1\lambda(x_2)=\lambda(x)1,\quad
x\ell=\varepsilon(x)\ell,\quad
\sum \lambda(x_1)x_2=\lambda(x)a,\quad
\ell x=\alpha(x)\ell.
\end{equation}
The elements $\lambda$ and $\ell$ are \emph{left integrals}, and the elements $a$ and $\alpha$ are called the
\textit{modular element} and the \textit{modular function}, respectively.
\begin{enumerate}
\item
The modular function and the modular element are of finite order in $H^\vee$ and $H$ respectively, and hence $\alpha(a)$ is a root of unity of
order less than or equal to the dimension of $H$.
In fact $\operatorname{ord}(\alpha(a))|\gcd\{\operatorname{ord}(\alpha),\operatorname{ord}(a)\}$.
\item
The elements $\lambda \,{\scriptstyle \circ }\, \mathcal{S}= \rho$ and $\mathcal{S}^{-1}(\ell)=r$ are \emph{right integrals} such that $\rho(r)=1$. We have that for all $x \in H$:
\begin{equation}\label{eq:rint}
\sum \rho(x_1)x_2=\rho(x)1,\quad
rx=\varepsilon(x)r,\quad
\sum x_1\rho(x_2)=\rho(x)a^{-1},\quad
xr=\alpha^{-1}(x)r.
\end{equation}
\item
The elements $\lambda$ and $\ell$ ($\rho$ and $r$) are uniquely determined up to a non zero scalar.
\item
The Hopf algebra $H$ (or $H^\vee$) is unimodular, i.e. a left integral is also a right integral, if and only if $\alpha=\varepsilon$
(or $a = 1$) respectively.
\end{enumerate}
\begin{rems} The following holds --see \cite{kn:schcordoba}--:
\begin{enumerate}
\item
\begin{equation}\label{eq:antipode}
\mathcal{S}(x)=\sum \ell_1 \lambda(x\ell_2), \quad \forall x\in H .
\end{equation}
\item
From the formula \eqref{eq:antipode}, we can easily deduce that:
\begin{equation}\label{eq:lambdaS}
\mathcal{S}(\ell)= \sum \ell_1 \alpha(\ell_2)
\quad\text{and}\quad
\lambda\big(\mathcal{S}(x)\big) = \lambda(xa), \quad \forall x\in H .
\end{equation}
Indeed, applying $\lambda$ we have that: $\lambda(\mathcal{S}(x))=\sum \lambda(\ell_1) \lambda(x\ell_2)=\sum \lambda(x\lambda(\ell_1)\ell_2) = \lambda(xa)$. The other formula is the dual.
\item
In the situation above, we have that:
\begin{eqnarray} \label{eq:bilinear}
\mathcal{S}^2(\ell)= \alpha(a) \ell,\quad
\mathcal{S}^{2}(r)= \alpha(a) r, \quad
\lambda \,{\scriptstyle \circ }\, \mathcal{S}^2=\alpha(a) \lambda,\quad
\rho \,{\scriptstyle \circ }\, \mathcal{S}^2=\alpha(a) \rho .
\end{eqnarray}
\noindent
Indeed, by iteration of the formula \eqref{eq:lambdaS} we obtain that $\lambda\/\,{\scriptstyle \circ }\,\/\mathcal{S}^2 = a {\rightharpoonup} \lambda {\leftharpoonup} a^{-1}$.
Since $\lambda \,{\scriptstyle \circ }\, \mathcal{S}^2$ is another left integral, it has to be a scalar multiple of $\lambda$.
As $(a {\rightharpoonup} \lambda {\leftharpoonup} a^{-1})(\ell)=\lambda(a^{-1}\ell a)=\varepsilon(a^{-1}) \lambda(\ell) \alpha(a)=\alpha(a)$, we conclude that $\lambda \,{\scriptstyle \circ }\, \mathcal{S}^2=\alpha(a) \lambda$.
The other formul\ae\/ are proved similarly; the formul\ae\/ for right integrals are obtained by composition with $\mathcal{S}^{\pm 1}$.
Notice that both $\ell$ and $r$ are eigenvectors of $\mathcal{S}^2$ with eigenvalue $\alpha(a)$.
\item
In particular $\lambda(\mathcal{S}(\ell))=\alpha(a)$ and $\lambda(r)=1$.
Indeed, if we put $x=\ell$ in the equation \eqref{eq:lambdaS} we obtain that $\lambda(\mathcal{S}(\ell))= \lambda(\ell a)= \lambda\big(\alpha(a)\ell\big)=\alpha(a)$.
Moreover, applying the equality $\lambda \circ \mathcal{S}^2=\alpha(a) \lambda$ to the element $r=\mathcal{S}^{-1}(\ell)$ we deduce that: $\lambda(r)=1$.
\end{enumerate}
\end{rems}
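These formul\ae\/ can be tested in Sweedler's algebra $H_4$, where $\ell=(1+g)x$, $a=g$ and $\alpha(g)=-1$ --see the observation below--. Since $\mathcal{S}^2(x)=-x$ and $\mathcal{S}^2(g)=g$,
\begin{equation*}
\mathcal{S}^2(\ell)=(1+g)\,\mathcal{S}^2(x)=-(1+g)x=-\ell=\alpha(a)\,\ell ,
\end{equation*}
in accordance with \eqref{eq:bilinear}.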
Next we look at the behaviour of the elements above when transformed with a companion automorphism.
\begin{obse} Let $\sigma$ be a companion automorphism of $H$.
\begin{enumerate}
\item The following holds because $\sigma$ is a Hopf algebra map --the proof is omitted as it is standard--:
\begin{gather}
\label{eqn:first}
\sigma(\ell) =\lambda(\sigma(\ell))\ell ,\quad
\sigma(r)=\rho(\sigma(r))r,\quad
\lambda \circ \sigma = \lambda(\sigma(\ell))\lambda ,\quad
\rho \circ \sigma = \rho(\sigma(r))\rho ,
\\
\label{eq:alfasigma}
\sigma(a)=a,\quad
\sigma\left(a^{-1}\right)=a^{-1},\quad
\alpha \circ \sigma = \alpha ,\quad
\alpha^{-1} \circ \sigma = \alpha^{-1}.
\end{gather}
\item
From the definition of $\rho$ and $r$ and using that $\sigma$ is a Hopf algebra map we deduce $\rho(\sigma(r))=\lambda(\sigma(\ell))$.
Then using \eqref{eq:bilinear} and \eqref{eqn:first} we deduce $\lambda(\sigma(\ell))^2=\alpha(a)$.
Hence we have that:
\begin{align}\label{eq:erresigma}
\sigma(\ell)=r_\sigma \ell, \quad
\sigma(r)=r_\sigma r ,\quad
\lambda \circ \sigma= r_\sigma \lambda ,\quad
\rho \circ \sigma= r_\sigma \rho
\quad \text{with} \quad r_\sigma^2=\alpha(a).
\end{align}
\item
We consider Sweedler's algebra $H_4$.
The set $\{1,g,x,gx\}$ is a basis of $H_4$ and we will denote as $\{1^*,g^*,x^*,(gx)^*\}$ its dual basis.
We get
\[
\ell= (1+g)x;\quad r=x(1+g);\quad \lambda= x^*;\quad \rho=-(gx)^*;\quad a=g;\quad \alpha:H_4 \rightarrow {\mathbb K}\ \text{given by }\ \alpha(g)=-1,\ \alpha(x)=0.
\]
Recall that the map $\sigma:H_4 \rightarrow H_4$, defined as $\sigma(g)=g$ and $\sigma(x)=ix$ and extended multiplicatively, is a companion automorphism for $H_4$.
As $\sigma(\ell)=i\ell$, in this case $r_\sigma=i$.
\item
Call $E_{\sigma,\nu}$ the eigenspace of $\sigma$ corresponding to the eigenvalue $\nu$. Then:
\begin{gather}
\ell,r \in E_{\sigma,r_\sigma},\quad 1,a,a^{-1} \in E_{\sigma,1}, \nonumber\\
\bigoplus_{\nu \neq r_\sigma}E_{\sigma,\nu} \subset \operatorname{Ker}(\lambda), \nonumber \\
\label{eqn:fifth}
\bigoplus_{\nu \neq 1}E_{\sigma,\nu} \subset \operatorname{Ker}(\alpha)\cap \operatorname{Ker}\left(\alpha^{-1}\right)\cap \operatorname{Ker}(\varepsilon), \\
E_{\sigma,\nu} \ell = \ell E_{\sigma,\nu} =0,\quad \nu\ne 1. \nonumber
\end{gather}
First observe that the equations \eqref{eq:alfasigma} and \eqref{eq:erresigma} mean that $\ell,r \in E_{\sigma,\lambda(\sigma(\ell))}$ and $a,a^{-1} \in E_{\sigma,1}$, and it is clear that $1 \in E_{\sigma,1}$.
Moreover, if $x \in E_{\sigma,\nu}$, applying respectively $\lambda,\alpha,\alpha^{-1},\varepsilon$ to the equality $\sigma(x)=\nu x$, we deduce that
$r_\sigma\lambda(x)= \nu \lambda(x)$, $\alpha(x)=\nu \alpha(x)$, $\alpha^{-1}(x)=\nu \alpha^{-1}(x)$ and $\varepsilon(x)=\nu\varepsilon(x)$.
From the first of these equalities we deduce that if $\nu\ne r_\sigma$, then $\lambda(x)=0$, and similarly for the others.
For the last assertion: if $x \in E_{\sigma,\nu}$, from the equality $x\ell=\varepsilon(x)\ell$ and \eqref{eqn:fifth} we deduce that $x\ell=0$. Similarly we deduce that $\ell x=0$.
\end{enumerate}
\end{obse}
\section{Description of the almost involutive Hopf algebras up to dimension 15.}\label{section:examples}
In the following examples we will often have to deal with a Hopf algebra $H$ with a given group-like element $g$ and a $(g,1)$-primitive element $x$; in that situation $\mathcal{S}(g)=g^{-1}$, $\mathcal{S}(x)=-xg^{-1}$, $\mathcal{S}^2(g)=g$ and $\mathcal{S}^2(x)=gxg^{-1}$.
\begin{example} [Example 1, in \cite{kn:radfordbasic}] \label{example:rad}
Let $\omega\in\k$ be a primitive $n$-th root of unity and
\[
H=\left\langle g,x,y:\ g^n=1,\ x^n=0,\ y^n=0,\ gx-\omega^{-1}xg=0,\ gy-\omega yg=0,\ xy-\omega yx=0 \right\rangle.
\]
$H$ is a Hopf algebra when it is equipped with coalgebra structure given by $g$ being a group-like element and $x,y$ $(g,1)$-primitive elements.
The set $\{g^rx^py^s:\ 0\leq r,p,s\leq n-1\}$ is a basis of $H$, so $\ensuremath{\mathrm{dim}} H=n^3$.
We have $\mathcal{S}^2(g)=g$, $\mathcal{S}^2(x)=\omega^{-1} x$ and $\mathcal{S}^2(y)=\omega y$, hence the order of $\mathcal{S}^2$ is $n$.
Using the same method as before, but with more labour, one can prove that $H$ is an almost involutive Hopf algebra.
Here we can also obtain four companion automorphisms $\sigma$ by direct inspection, defining $\sigma(g)=g$, $\sigma(x)=\pm \nu^{-1} x$ and $\sigma(y)=\pm \nu y$, where $\nu\in\k$ is such that $\nu^2=\omega$.
\end{example}
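Each of these four choices is verified on the generators:
\begin{equation*}
\sigma^2(g)=g=\mathcal{S}^2(g),\qquad
\sigma^2(x)=\nu^{-2}x=\omega^{-1}x=\mathcal{S}^2(x),\qquad
\sigma^2(y)=\nu^{2}y=\omega y=\mathcal{S}^2(y).
\end{equation*}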
\begin{example}[Hopf algebras of dimension 8] \label{ej:8}
If $H$ is an 8-dimensional non-semisimple Hopf algebra, then Stefan shows in \cite{kn:stefan} that $H$ is isomorphic to one and only one of the algebras in the following list:
\[
A_{C_2},\quad A'_{C_4},\quad A''_{C_4},\quad A'''_{C_4,\omega},\quad A_{C_2\times C_2},\quad \left( A''_{C_4}\right)^*,
\]
where
\begin{enumerate}
\item \label{item:ac2}
$A_{C_2}=\left\langle g,x,y:\ g^2=1,\ x^2=0,\ y^2=0,\ gx + xg=0,\ gy+yg=0,\ xy+ yx=0 \right\rangle$, $g$ is a group-like element and $x,y$ are $(g,1)$-primitive elements.
\item \label{item:a1c4}
$A'_{C_4}=\left\langle g,x:\ g^4=1,\ x^2=0,\ gx + xg=0 \right\rangle$, $g$ is a group-like element and $x$ is a $(g,1)$-primitive element.
\item \label{item:a2c4}
$A''_{C_4}=\left\langle g,x:\ g^4=1,\ x^2=g^2-1,\ gx + xg=0 \right\rangle$, $g$ is a group-like element and $x$ is a $(g,1)$-primitive element.
\item \label{item:a3c4}
$A'''_{C_4,\omega}=\left\langle g,x:\ g^4=1,\ x^2=0,\ gx- \omega xg=0 \right\rangle$, $g$ is a group-like element, $x$ is a $(g,1)$-primitive element and $\omega\in{\mathbb K}$ is a primitive root of unity of order 4.
\item \label{item:a2c2c2}
$A_{C_2\times C_2}=\left\langle g,h,x:\ g^2=1,\ h^2=1,\ x^2=0,\ gx + xg=0,\ hx + xh=0,\ gh-hg=0 \right\rangle$, $g$ and $h$ are group-like elements and $x$ is a $(g,1)$-primitive element.
\end{enumerate}
The algebras $A_{C_2}, A'_{C_4}, A'''_{C_4,\omega}$ and $A_{C_2\times C_2}$ are almost involutive. We give the companion automorphism $\sigma$ by its values in the generators.
\begin{itemize}
\item
$A_{C_2}$, $\sigma(g)=g$, $\sigma(x)=ix$ and $\sigma(y)=iy$. Observe that this is the case $n=2$ of Example \ref{example:rad}.
\item
$A'_{C_4}$, $\sigma(g)=g$ and $\sigma(x)=ix$.
\item
$A'''_{C_4,\omega}$, $\sigma(g)=g$ and $\sigma(x)=\omega x$.
\item
$A_{C_2\times C_2}$, $\sigma(g)=g$, $\sigma(h)=h$ and $\sigma(x)=ix$.
\end{itemize}
Notice that, of these cases, the situations described in \eqref{item:a1c4}, \eqref{item:a3c4} and \eqref{item:a2c2c2} fit into the pattern of the results appearing in Subsection \ref{subsection:const}, \eqref{item:trivial}.
Now we consider the algebra $A''_{C_4}$.
The set $\left\{1,g,g^2,g^3,x,gx,g^2x,g^3x \right\}$ is a basis of $A''_{C_4}$ with decomposition $E_{\mathcal{S}^2,1}=\langle1,g,g^2,g^3\rangle_{\k}$, $E_{\mathcal{S}^2,-1}=\langle x,gx,g^2x,g^3x\rangle_{\k}$ and with the following normalized integrals:
\[
\ell=\left(1+g+g^2+g^3\right)x,\quad
r=\left(-1+g-g^2+g^3\right)x,\quad
\lambda=x^*,\quad
\rho=\left(g^3x\right)^*.
\]
The modular element is $a=g$ and the modular function $\alpha$ is defined by $\alpha(g)=-1$ and $\alpha(x)=0$.
Suppose there exists a companion automorphism $\sigma$ in $A''_{C_4}$. From \eqref{eq:alfasigma} we get $\sigma(g)=g$.
Then the condition $gx + xg=0$ implies $\sigma(x)=b_0x+b_1gx+b_2g^2x+b_3g^3x$, for some $b_0,b_1,b_2,b_3\in{\mathbb K}$.
Using \eqref{eq:erresigma} for $\lambda$ and $\rho$ we have $\sigma(x)=r_\sigma x+b_1gx+b_2g^2x$, where $r_\sigma^2=-1$.
Now using \eqref{eq:erresigma} for $\ell$ and $r$ we conclude $\sigma(x)=r_\sigma x$.
So $\sigma$ satisfies $\sigma(g)=g$ and $\sigma(x)=r_\sigma x$; but then $\sigma(x^2)=r_\sigma^2x^2=-x^2$ while $\sigma(g^2-1)=g^2-1$, so $\sigma$ cannot preserve the relation $x^2=g^2-1$.
Hence we have shown that $A''_{C_4}$ is not almost involutive. As the property of being almost involutive is preserved by duality, we deduce that $\left( A''_{C_4}\right)^*$ is not almost involutive either.
Note that $A''_{C_4}$ is pointed but $\left( A''_{C_4}\right)^*$ is not --see \cite{kn:stefan}--.
\end{example}
\begin{example}[Hopf algebras of dimension 12] \label{ej:12}
Let $H$ be a Hopf algebra of dimension $12$.
Natale shows in \cite{kn:natale} that if $H$ is non-semisimple, then $H$ or $H^\vee$ is pointed; she also shows that if $H$ is pointed, then it is isomorphic to one and only one of the algebras in the following list:
\begin{itemize}
\item
$A_{0}=\left\langle g,x:\ g^6=1,\ x^2=0,\ gx + xg=0 \right\rangle$, $g$ is a group-like element and $x$ is $(g,1)$-primitive.
\item
$A_{1}=\left\langle g,x:\ g^6=1,\ x^2=1-g^2,\ gx + xg=0 \right\rangle$, $g$ is a group-like element and $x$ is $(g,1)$-primitive.
\item
$B_{0}=\left\langle g,x:\ g^6=1,\ x^2=0,\ gx + xg=0 \right\rangle$, $g$ is a group-like element and $x$ is $\left(g^3,1\right)$-primitive.
\item
$B_{1}=\left\langle g,x:\ g^6=1,\ x^2=0,\ gx -\omega xg=0 \right\rangle$, $g$ is a group-like element, $x$ is $\left(g^3,1\right)$-primitive and $\omega\in{\mathbb K}$ is a primitive root of unity of order 6.
\end{itemize}
The Hopf algebras in this list satisfy the following: $A_0^*=B_1$, $B_0^*=B_0$ and $A_1^*$ is not pointed. Moreover, $A_0$, $B_0$ and $B_1$ are of the type appearing in Subsection \ref{subsection:const}, \eqref{item:trivial}.
The algebras in this list are analogous to the ones in dimension 8, so we can expect them to have similar properties. Indeed, $A_0$, $B_0$ and $B_1$ are almost involutive but $A_1$ --and hence its dual-- is not.
The proof that $A_0$, $B_0$ and $B_1$ are almost involutive follows a pattern similar to the eight-dimensional case.
For $A_1$, if it had a companion automorphism $\sigma$, then, as for $A''_{C_4}$, we would obtain that
$\sigma(g)=g$, and also prove the existence of scalars $a,b\in{\mathbb K}$ such that $\sigma(x)=r_\sigma x + a\left(gx-g^3x \right)+b\left(gx^2-g^4x \right)$.
Since $\sigma$ is a Hopf algebra map, $\mathcal{S}(\sigma(x))=\sigma(\mathcal{S}(x))$, and this relation implies $\sigma(x)=r_\sigma x$; we then obtain the same contradiction as for $A''_{C_4}$.
\end{example}
\begin{rema}
The Hopf algebras of dimension 13, 14 and 15 are semisimple --see \cite{kn:beattie-gaston}-- and those of dimension $n\leq 11$, $n\ne 8$, are semisimple or Taft algebras --see \cite{kn:stefan}--.
Semisimple Hopf algebras are involutive, and in Example \ref{example:taft} below we show that the Taft algebras are almost involutive.
Hence, among the Hopf algebras of dimension $n\leq 15$, the only dimensions in which non almost involutive examples appear are $n=8$ and $n=12$.
\end{rema}
\section{Appendix: square roots of finite order linear automorphisms.}
We start with some elementary considerations about the square root of a finite order linear automorphism $D:V \rightarrow V$ where $V$ is a finite dimensional vector space over a field $\mathbb K$.
We call $m=|D|$ the order of $D$.
Given such an $m$, we take $r \in \mathbb K$ with the property that its order is $|r|=2m$ if $m$ is even, and $|r|=m$ if $m$ is odd.
We set $q=r^2$. Notice that $|q|=m$.
Define $\mathcal E=\big\{0 \leq i \leq m-1: q^i \in \operatorname{Spec}(D)\big\}$; then $V=\bigoplus_{i \in \mathcal E}E_{D,q^i}$, where $E_{D,q^i}=\{x \in V: Dx=q^i x\}$.
Assume that $\sigma:V \rightarrow V$ is a linear automorphism of $V$ such that $\sigma^2=D$.
Any such $\sigma$ will satisfy that $\sigma^2|_{E_{D,q^i}}=q^i\operatorname{id}$ and hence, the minimal polynomial $m_{\sigma|_{E_{D,q^i}}} | (t^2-q^i)=t^2-r^{2i}=(t-r^i)(t+r^i)$.
Then for all $i \in \mathcal E$ we can find two subspaces $E_{\sigma,r^i},E_{\sigma,-r^i} \subseteq E_{D,q^i}$ --one of them could be $\{0\}$-- such that
$E_{D,q^i}= E_{\sigma,r^i} \oplus E_{\sigma,-r^i}$ and $\sigma|_{E_{\sigma,\pm r^i}}=\pm r^i\operatorname{id}$.
Conversely, if for every $i \in \mathcal E$ we are given an arbitrary direct sum decomposition of $E_{D,q^i}=V_{+,i} \oplus V_{-,i}$ as above,
then we can define an operator $\sigma: V\rightarrow V$, by requiring that its restriction to each of the summands are $\pm r^i\operatorname{id}$.
In other words, if we write $x \in E_{D,q^i}$ as $x=x_+ + x_-$ with $x_\pm \in V_{\pm,i}$, then $\sigma(x)=r^i x_{+} - r^i x_-$.
By construction $\sigma^2=D$ on $E_{D,q^i}$ for all $i\in \mathcal E$ and then $\sigma$ is a square root of $D$ on all of $V$. For such $\sigma$ we have that for all $i \in \mathcal E$: $E_{\sigma,\pm r^i}=V_{\pm,i}$.
Hence, to define a \emph{linear transformation} that is a square root of $D$, we have to choose, for each eigenspace $E_{D,q^i}$ of $D$ with $i \in \mathcal E$, a decomposition into two subspaces $E_{D,q^i}=V_{+,i} \oplus V_{-,i}$.
Given the decomposition, a square root is defined by the equations: $\sigma|_{V_{\pm,i}}=\pm r^i\operatorname{id}$.
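As an illustration of this recipe, take $D=\mathcal{S}^2$ on Sweedler's algebra $H_4$, so that $m=2$, $q=-1$ and $r=i$. Here
\begin{equation*}
E_{D,1}=\langle 1,g\rangle_{\k},\qquad E_{D,-1}=\langle x,gx\rangle_{\k},
\end{equation*}
and the choice $V_{+,0}=E_{D,1}$, $V_{-,0}=\{0\}$, $V_{+,1}=E_{D,-1}$, $V_{-,1}=\{0\}$ yields the square root $\sigma$ with $\sigma|_{E_{D,1}}=\operatorname{id}$ and $\sigma|_{E_{D,-1}}=i\operatorname{id}$, i.e.\ the companion automorphism of Example \ref{example:sweedler}.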
\subsection{The case of an automorphism of Hopf algebras}
Assume now that $H$ is a finite dimensional Hopf algebra and that $D:H \rightarrow H$ is an \emph{automorphism} of Hopf algebras of order $|D|=m$.
We want to find conditions on the pairs of subspaces $V_{+,i}$ and $V_{-,i}$ that guarantee that the $\sigma$ thus defined is a Hopf algebra automorphism. The elementary results that we present below are probably well known; we write them down for lack of an adequate reference.
\begin{obse} \label{obse:Dcase} Assume the situation above: $H$ is a finite dimensional Hopf algebra and $D$ a linear automorphism of finite order $m$.
\begin{enumerate}
\item $D$ is a morphism of algebras if and only if the following holds:
\begin{enumerate}
\item $1 \in E_{D,1}$;\\
\item For $i,j \in \mathcal E$, $E_{D,q^i}E_{D,q^j} \subseteq \begin{cases}E_{D,q^{i+j}}\quad &\text{if}\quad i+j<m;\\ E_{D,q^{i+j-m}} &\text{if}\quad i+j \geq m.\end{cases}$
\end{enumerate}
\item $D$ is a morphism of coalgebras if and only if the following holds:
\begin{enumerate}
\item If $0 \neq i \in \mathcal E$, then $\varepsilon(E_{D,q^i})=0$ ;
\item For $i \in \mathcal E$:
\[
\Delta(E_{D,q^i}) \subseteq \bigoplus_{\{a,b \in \mathcal E: a+b=i\}}(E_{D,q^a} \otimes E_{D,q^b}) \quad \oplus \bigoplus_{\{a,b \in \mathcal E: a+b=i+m\}}(E_{D,q^a} \otimes E_{D,q^b}).
\]
\end{enumerate}
\end{enumerate}
Observe that in the considerations above we used that if $a,b \in \mathcal E$, then $0 \leq a,b \leq m-1$, and hence $0 \leq a+b \leq 2m-2$.
\end{obse}
\begin{theo}\label{theo:algebraconditions}
Let $A$ be an algebra and let $D:A \rightarrow A$ be an automorphism of algebras of finite order $m$. Define $q,r$, $\mathcal E$ and $E_{D,q^i}$ --for $i \in \mathcal E$-- as above. For each $i \in \mathcal E$ take an arbitrary decomposition $E_{D,q^i}=V_{+,i}\oplus V_{-,i}$ and define a linear transformation $\sigma$ on $A$ by $\sigma|_{V_{\pm,i}}=\pm r^i\operatorname{id}$ for all $i \in \mathcal E$; then $\sigma^2=D$. Moreover, $\sigma$ is an automorphism of algebras if and only if the following conditions hold:
\begin{enumerate}
\item $1 \in V_{+,0}$\emph{;}
\item
\begin{enumerate}
\item If $0 \leq i+j \leq m-1$, then $V_{+,i} V_{+,j}+ V_{-,i}V_{-,j} \subseteq V_{+,i+j}$ and $V_{+,i} V_{-,j}+ V_{-,i}V_{+,j} \subseteq V_{-,i+j}$.
\item If $m \leq i+j \leq 2m-2$ then\emph{:}
\begin{enumerate}
\item If $m$ is even, then $V_{+,i} V_{+,j}+ V_{-,i}V_{-,j} \subseteq V_{-,i+j-m}$ and $V_{+,i} V_{-,j}+ V_{-,i}V_{+,j} \subseteq V_{+,i+j-m}$\emph{;}
\item If $m$ is odd, then $V_{+,i} V_{+,j}+ V_{-,i}V_{-,j} \subseteq V_{+,i+j-m}$ and $V_{+,i} V_{-,j}+ V_{-,i}V_{+,j} \subseteq V_{-,i+j-m}$.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{theo}
\begin{proof}
\begin{enumerate}
\item \emph{Conditions for the unit.} The unit element satisfies $1 \in E_{D,q^0}=E_{D,1}$; since we want $\sigma(1)=1$, in the decomposition $E_{D,1}=V_{+,0} \oplus V_{-,0}$ we must have $1 \in V_{+,0}$.
\item \emph{Multiplicativity.} It is enough to prove that $\sigma(xy)=\sigma(x)\sigma(y)$ for all $i,j \in \mathcal E$, $x \in V_{\pm,i}$ and $y \in V_{\pm,j}$. Since $D(xy)=q^{i+j}xy$, we have two alternatives:
\begin{enumerate}
\item If $xy=0$, then $\sigma(x)\sigma(y)=(\pm r^ix)(\pm r^jy)=0=\sigma(xy)$.
\item If $xy \neq 0$, then $q^{i+j}\in \operatorname{Spec}(D)$ and there exists $k \in \mathcal E$ with $i+j \equiv k \pmod m$.
Then: $k= \begin{cases}i+j\quad &\text{for}\,\, 0 \leq i+j \leq m-1;\\
i+j-m\quad &\text{for}\,\, m \leq i+j\leq 2m-2.\end{cases}$
As $xy \in E_{D,q^k}=V_{+,k} \oplus V_{-,k}$ for $k \in \mathcal E$, we can find $(xy)_\pm \in V_{\pm,k}$, such that:
$xy=(xy)_++(xy)_-\,,\, \sigma(xy)=r^k(xy)_+-r^k (xy)_{-}$.\\
We consider the following alternatives.\\
(A) $x \in V_{+,i}$ and $y \in V_{+,j}$, or $x \in V_{-,i}$ and $y \in V_{-,j}$.
In this case $\sigma(x)\sigma(y)=r^{i+j}xy=r^{i+j}(xy)_{+}+r^{i+j}(xy)_{-}$.
Now, if $i+j=k \leq m-1$ then $r^{i+j}=r^k$ and in accordance with the above formul\ae\ the multiplicativity holds if and only if $(xy)_{-}= 0$.
If $m \leq i+j=k+m$, then $r^{i+j}=r^kr^m=\begin{cases}-r^k &\quad \text{if $m$ is even}\\ \,\,\,\, r^k &\quad \text{if $m$ is odd}.\end{cases}$\\
Then, the multiplicativity holds if and only if
$\begin{cases}(xy)_+=0 &\,\text{if $m$ is even}\\ (xy)_-=0 &\,\text{if $m$ is odd}.\end{cases}$\\
(B) $x \in V_{+,i}$ and $y \in V_{-,j}$ or $x \in V_{-,i}$ and $y \in V_{+,j}$.
In this case $\sigma(x)\sigma(y)=-r^{i+j}xy=-r^{i+j}(xy)_+-r^{i+j}(xy)_-$. Now, if $i+j=k \leq m-1$ then $r^{i+j}=r^k$ and the multiplicativity holds if and only if $(xy)_+ = 0$.
If $m \leq i+j=k+m$, then $r^{i+j}=r^kr^m=\begin{cases}-r^k &\quad \text{if $m$ is even}\\ \,\,\,\,r^k &\quad \text{if $m$ is odd}.\end{cases}$\\
Hence, in this situation the multiplicativity holds if and only if
$\begin{cases}(xy)_-=0 &\,\text{if $m$ is even}\\ (xy)_+=0 &\,\text{if $m$ is odd}.\end{cases}$
\end{enumerate}
\end{enumerate}
\end{proof}
As we are dealing with finite dimensional objects, we may proceed by duality and obtain the following result:
\begin{theo}\label{theo:coalgebraconditions}
Let $C$ be a coalgebra and let $D:C \rightarrow C$ be an automorphism of coalgebras of finite order $m$.
Define as above, $q,r$, $\mathcal E$ and $E_{D,q^i}$ --for $i \in \mathcal E$--.
For each $i \in \mathcal E$ take an arbitrary decomposition $E_{D,q^i}=V_{+,i}\oplus V_{-,i}$ and define a linear transformation $\sigma$ on $C$ by $\sigma|_{V_{\pm,i}}=\pm r^i\operatorname{id}$; then $\sigma^2=D$.
Moreover, $\sigma$ is an automorphism of coalgebras if and only if the following conditions hold:
\begin{enumerate}
\item $\varepsilon(V_{-,0})=0$
\item
\[\Delta(V_{+,i}) \subseteq \bigoplus_{\{a,b\in \mathcal E:a+b=i\}}(V_{+,a} \otimes V_{+,b} + V_{-,a} \otimes V_{-,b}) \oplus \begin{cases}\bigoplus_{\{a,b\in \mathcal E:a+b=i+m\}}(V_{+,a} \otimes V_{-,b} + V_{-,a} \otimes V_{+,b})\,,\, \text{m even};\\\bigoplus_{\{a,b\in \mathcal E:a+b=i+m\}}(V_{+,a} \otimes V_{+,b} + V_{-,a} \otimes V_{-,b})\,,\, \text{m odd}.\end{cases}\]
\[\Delta(V_{-,i}) \subseteq \bigoplus_{\{a,b\in \mathcal E:a+b=i\}}(V_{+,a} \otimes V_{-,b} + V_{-,a} \otimes V_{+,b}) \oplus \begin{cases}\bigoplus_{\{a,b\in \mathcal E:a+b=i+m\}}(V_{+,a} \otimes V_{+,b} + V_{-,a} \otimes V_{-,b})\,,\, \text{m even};\\\bigoplus_{\{a,b\in \mathcal E:a+b=i+m\}}(V_{+,a} \otimes V_{-,b} + V_{-,a} \otimes V_{+,b})\,,\, \text{m odd}.\end{cases}
\]
\qed
\end{enumerate}
\end{theo}
\begin{coro} \label{coro:Hopf-conditions}
Let $H$ be a Hopf algebra and let $D:H \rightarrow H$ be an automorphism of Hopf algebras of finite order $m$.
Define as above, $q,r$, $\mathcal E$ and $E_{D,q^i}$ --for $i \in \mathcal E$--.
For each $i \in \mathcal E$ we take an arbitrary decomposition of $E_{D,q^i}=V_{+,i}\oplus V_{-,i}$ and define a linear transformation $\sigma$ on $H$ as: $\sigma|_{V_{\pm,i}}=\pm r^i\operatorname{id}$.
If the hypotheses of Theorems \ref{theo:algebraconditions} and \ref{theo:coalgebraconditions} are simultaneously satisfied, then $\sigma$ is a Hopf algebra automorphism and $\sigma^2=D$. \qed
\end{coro}
\subsection{A particular situation}
We consider the following special cases of the above Corollary \ref{coro:Hopf-conditions}.
Assume that the splitting of the eigenspaces $E_{D,q^i}$ is trivial: \[V_{+,i}=E_{D,q^i}\,\,\,\,\text{and}\quad V_{-,i}=0.\]
In this situation, and using the considerations of Observation \ref{obse:Dcase}, it is clear that some of the conditions of Corollary \ref{coro:Hopf-conditions} --i.e.
of Theorems \ref{theo:algebraconditions} and \ref{theo:coalgebraconditions}-- are automatically verified. In particular, the case of $m$ odd becomes conditionless.
Hence, we have the following particular result that in some cases provides an answer for the existence of a square root of a Hopf automorphism that is both multiplicative and comultiplicative.
\begin{coro}\label{coro:sqrpart}
Assume that we are in the situation above.
If $m$ is odd, then the square root of $D$ associated to the family of subspaces $V_{+,i}=E_{D,q^i}$ and $V_{-,i}=0$ is an automorphism of Hopf algebras.
If $m$ is even, it is an automorphism of Hopf algebras if and only if $E_{D,q^i} E_{D,q^j}=0 $ for all $i,j \in \mathcal E$ such that $m \leq i+j \leq 2m-2$ and
$\Delta(E_{D,q^i}) \subseteq \bigoplus_{\{a,b\in \mathcal E:a+b=i\}}(E_{D,q^a} \otimes E_{D,q^b})$ for all $i\in \mathcal E$.
\qed
\end{coro}
Observe that the case in which $m$ is odd has already been treated by an elementary argument in Observation \ref{obse:oddtrivial}.
\begin{example}[Taft algebra]\label{example:taft}
The Taft algebra $T_n$ is a generalization of Sweedler's algebra $H_4$.
Let $\omega\in\k$ be a primitive $n$-th root of unity and
\[
T_n=\left\langle g,x:\ g^n=1,\ x^n=0,\ gx-\omega xg=0\right\rangle.
\]
$T_n$ is a Hopf algebra with coalgebra structure given by $g$ being a group-like element and $x$ a $(g,1)$-primitive element.
\noindent
The set $\{g^rx^p:\ 0\leq r,p\leq n-1\}$ is a basis of $T_n$, so $\ensuremath{\mathrm{dim}} T_n=n^2$.
We have $\mathcal{S}^2(g)=g$ and $\mathcal{S}^2(x)=\omega x$, therefore the order of $\mathcal{S}^2$ is $n$.
\noindent
The eigenvalues of $\mathcal{S}^2$ are $\{1,\omega,\cdots,\omega^{n-1}\}$, and the corresponding eigenspaces are $E_{\mathcal{S}^2,\omega^i}=\{x^i,gx^i,\cdots,g^{n-1}x^i\}_{\mathbb K}$.
\noindent
With regard to the conditions of Corollary \ref{coro:sqrpart}, we have that $E_{\mathcal{S}^2,\omega^i}E_{\mathcal{S}^2,\omega^j}$ is spanned by $\{g^k x^{i+j}:k=0,\cdots,n-1\}$.
\noindent
Hence $E_{\mathcal{S}^2,\omega^i}E_{\mathcal{S}^2,\omega^j}=0$ for $i+j \geq n$ as required.
\noindent
Moreover, as $\Delta(g^kx^i)=(g^k \otimes g^k)(x \otimes g + 1 \otimes x)^i$, it is clear that $\Delta(E_{\mathcal{S}^2,\omega^i}) \subseteq \sum_{a+b=i} E_{\mathcal{S}^2,\omega^a} \otimes E_{\mathcal{S}^2,\omega^b}$.
\noindent
Hence, we have proved that $T_n$ is almost involutive.
\noindent
A direct verification shows that if we take $\nu\in\k$ with $\nu^2=\omega$, then the maps $\sigma_\pm$ defined by $\sigma_\pm(g)=g$ and $\sigma_\pm(x)=\pm \nu x$ are companion automorphisms.
\end{example}
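As an aside, the relations of $T_n$ -- and the fact that the assignment $\sigma_+(g)=g$, $\sigma_+(x)=\nu x$ respects them -- can be checked in a concrete matrix model. The diagonal/shift matrices below are an assumption chosen for illustration, not part of the paper's construction:

```python
import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)         # primitive n-th root of unity
nu = np.exp(1j * np.pi / n)            # nu**2 == omega

# Hypothetical matrix model: g diagonal, x the nilpotent subdiagonal shift.
g = np.diag([omega**k for k in range(n)])
x = np.diag(np.ones(n - 1), -1)        # x e_k = e_{k+1}, hence x**n = 0

I = np.eye(n)
assert np.allclose(np.linalg.matrix_power(g, n), I)    # g^n = 1
assert np.allclose(np.linalg.matrix_power(x, n), 0)    # x^n = 0
assert np.allclose(g @ x, omega * (x @ g))             # gx = omega xg

# The images sigma_+(g) = g and sigma_+(x) = nu*x satisfy the same
# relations, so sigma_+ extends to an algebra endomorphism, and
# sigma_+^2(x) = nu^2 x = omega x agrees with S^2(x).
assert np.allclose(g @ (nu * x), omega * ((nu * x) @ g))
assert abs(nu**2 - omega) < 1e-12
```

The same check with $-\nu$ in place of $\nu$ covers $\sigma_-$.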
\begin{example}\label{example:general}
\noindent
Assume that we have a trivial extension of a finite dimensional Hopf algebra, with the additional property that $K=E_1$. The spectral decomposition of $H$ with respect to $\mathcal{S}^2$ becomes:
\[
H= E_{\mathcal{S}^2,1} \oplus \bigoplus_{i \in \mathcal E_M}E_{\mathcal{S}^{2},q^i} \quad \text{with}\quad M=\bigoplus_{i \in \mathcal E_M}E_{\mathcal{S}^{2},q^i}.
\]
\noindent
In this case it is clear that the conditions of Corollary \ref{coro:sqrpart} are satisfied.
\noindent
Indeed, if we look at the condition regarding the product, the only case in which the sum of the exponents of the corresponding eigenvalues
of two eigenvectors may be larger than $m$ is when both exponents lie in $\mathcal E_M$. In this case, the condition $M^2=0$ guarantees that the product of the corresponding eigenspaces is trivial.
\noindent
An argument along the same lines and using the condition that $M$ is a $K$--bicomodule, shows that the condition regarding the coproduct in Corollary \ref{coro:sqrpart} is satisfied.
\end{example}
| {
"timestamp": "2013-12-02T02:12:27",
"yymm": "1311",
"arxiv_id": "1311.7441",
"language": "en",
"url": "https://arxiv.org/abs/1311.7441",
"abstract": "We define the concept of \\emph{companion automorphism} of a Hopf algebra $H$ as an automorphism $\\sigma:H \\rightarrow H$: $\\sigma^2=S^2$ --where $S$ denotes the antipode--. A Hopf algebra is said to be \\emph{almost involutive} (AI) if it admits a companion automorphism that can be viewed as a special additional symmetry. We present examples and study some of the basic properties and constructions of AI-Hopf algebras centering the attention in the finite dimensional case. In particular we show that within the family of Hopf algebras of dimension smaller or equal than 15, only in dimension eight and twelve, there are non almost involutive Hopf algebras.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Almost involutive Hopf algebras"
} |
https://arxiv.org/abs/2008.09405 | A Tipping Point for the Planarity of Small and Medium Sized Graphs | This paper presents an empirical study of the relationship between the density of small-medium sized random graphs and their planarity. It is well known that, when the number of vertices tends to infinite, there is a sharp transition between planarity and non-planarity for edge density d=0.5. However, this asymptotic property does not clarify what happens for graphs of reduced size. We show that an unexpectedly sharp transition is also exhibited by small and medium sized graphs. Also, we show that the same "tipping point" behavior can be observed for some restrictions or relaxations of planarity (we considered outerplanarity and near-planarity, respectively). | \section{Introduction}\label{se:intro}
Several popular Graph Drawing algorithms devised to draw graphs of small-medium size assume that the graph to be drawn is planar both in the static setting~\cite{t-hdg-63,fpp-hdpgg-90,s-epgg-90} and in the dynamic one~\cite{10.1007/s00454-018-0018-9,ddfpr-upm-tcs-20,bdfp-gssa-j-20}. Hence, to assess the practical applicability of such algorithms it is crucial to study the probability that a small-medium sized graph (say of about $100$--$200$ vertices) is planar.
In particular, it is interesting to consider how this probability varies as a function of the density of the graph. We might have that the probability of planarity changes smoothly or that it changes abruptly, exhibiting a tipping-point behaviour.
A \emph{tipping point} is a threshold that, when exceeded, leads to a sharp change in the state of a system. In sociology, for example, a tipping point is a time when most of the members of a group suddenly change their behavior by adopting a practice that before was considered rare. In climate study, a tipping point is a quick and irreversible change in the climate, triggered by some specific cause, like the growth of the global mean surface temperature. Even in graph theory, tipping points have been found.
As an example, in 1960 Erd\H{o}s and R\'enyi established that a random graph $G(n,m)$ with $n$ vertices and $m$ edges undergoes an abrupt change when the average vertex degree is equal to one, that is when $m \approx n/2$~\cite{er-erg-60}.
Namely, when $m = cn/2$ and $c < 1$, asymptotically almost surely the connected components are all of size $O(\log n)$, and are either trees or unicyclic graphs. Conversely, when $c > 1$, almost surely there is a unique giant component of size~$\Theta(n)$. The density $d=m/n=1/2$ is sometimes referred to as the \emph{critical density} or \emph{phase transition density}. See~\cite{b-rg-85,jlr-rg-00} for a discussion of these concepts.
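This threshold is easy to observe numerically. The sketch below uses networkx (an assumption: the library is not used in this paper) with a fixed seed, and measures the largest connected component on both sides of $c=1$:

```python
import networkx as nx

def largest_component_fraction(n, c, seed=0):
    """Largest connected component of a random graph with n vertices and
    m = c*n/2 edges (average degree roughly c), as a fraction of n."""
    G = nx.gnm_random_graph(n, int(c * n / 2), seed=seed)
    return max(len(comp) for comp in nx.connected_components(G)) / n

sub = largest_component_fraction(2000, 0.5)   # c < 1: all components tiny
sup = largest_component_fraction(2000, 2.0)   # c > 1: a giant component
```

With these parameters `sub` is a small fraction of $n$ while `sup` is a constant fraction, matching the asymptotic picture.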
In this paper we
investigate
whether the density plays a similar role for the planarity of small-medium sized graphs. Namely, when the density of such graphs increases, does the probability of planarity change smoothly or abruptly?
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[trim=0 -30 0 0,clip,width=0.23\columnwidth]{figures/example-1-20.pdf}}
\label{fig:function1}
\begin{picture}(0,0)
\put(-78,0){\scriptsize Number of vertices}
\put(-92,35){\hbox{\rotatebox{90}{\scriptsize Density}}}
\end{picture}
\hfil
\subfigure[]{\includegraphics[trim=0 -30 0 0,clip,width=0.23\columnwidth]{figures/example-2-08.pdf}}
\label{fig:function2}
\begin{picture}(0,0)
\put(-78,0){\scriptsize Number of vertices}
\end{picture}
\hfil
\subfigure[]{\includegraphics[trim=0 -30 0 0,clip,width=0.23\columnwidth]{figures/example-3-04.pdf}}
\label{fig:function3}
\begin{picture}(0,0)
\put(-78,0){\scriptsize Number of vertices}
\end{picture}
\hfil
\subfigure[]{\includegraphics[trim=0 -30 0 0,clip,width=0.23\columnwidth]{figures/example-4-10-0dot5-1-0dot5.pdf}}
\label{fig:function4}
\begin{picture}(0,0)
\put(-78,0){\scriptsize Number of vertices}
\end{picture}
\hfil
\caption{Function $\zeta(n,d)$ for $n \in [1,400]$ and $d \in [0,3]$ in four cases:
(a) $c_1$=$5$, $c_2$=$0.5$, $c_3$=$20$, $c_4$=$0.5$;
(b) $c_1$=$5$, $c_2$=$0.5$, $c_3$=$8$, $c_4$=$0.5$;
(c) $c_1$=$5$, $c_2$=$0.5$, $c_3$=$4$, $c_4$=$0.5$; and
(d) $c_1$=$10$, $c_2$=$0.5$, $c_3$=$1$, $c_4$=$0.5$.
}\label{fig:function}
\end{figure}
To answer this question one could think of using the result of {\L}uczak {\em et al.}~\cite{lpw-srgpp-94} who show that a random graph is almost surely non-planar if and only if the number of edges is $n/2 + O(n^{2/3})$. From the point of view of the density this means that a graph is almost surely non-planar if the density is $1/2 + O(n^{-1/3})$. However, the result shows only an asymptotic bound and does not clarify what happens for small-medium sized graphs.
Essentially, this means that, for $n \rightarrow \infty$, graphs with density greater than $1/2$ are almost surely non-planar and that the ``transition range'' of density within which the probability of planarity falls from $1$ to $0$ is $\Theta(n^{-1/3})$. This result has been confirmed in~\cite{nrr-pprgn-13}, where it is proved that a graph with infinitely many vertices and density $1/2$ has probability $\approx0.998$ of being planar. Again, this gives no hint about how large this transition range is in practice for small values of $n$.
For example, Fig.~\ref{fig:function} shows four plots for different values of the constants $c_1,\dots,c_4$ of the function $\zeta(n,d)$ which has both the asymptotic behaviors described in~\cite{lpw-srgpp-94}
\ifArxiv
(see Appendix~\ref{app:zeta}).
\else
(see~\cite{bdp-tppsm-20-arxiv}).
\fi
$$\zeta(n,d) = \frac{1}{2^{(d-(0.5+c_1/n^{c_2}))\cdot(c_3 + c_4 n^{1/3})}+1}$$
Depending on the values of $c_1,\dots,c_4$ the function shows quite different behaviours in the range $n \in [1,400]$.
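For reference, $\zeta$ can be evaluated directly; the sketch below merely transcribes the formula (the constants $c_1,\dots,c_4$ are free parameters, as in the plots):

```python
def zeta(n, d, c1, c2, c3, c4):
    """The interpolating function zeta(n, d) defined in the text."""
    return 1.0 / (2.0 ** ((d - (0.5 + c1 / n ** c2))
                          * (c3 + c4 * n ** (1.0 / 3.0))) + 1.0)

# At the "midpoint" density d = 0.5 + c1/n^c2 the function equals 1/2,
# and it decreases from ~1 toward ~0 as d grows past it.
mid = zeta(200, 0.5 + 5 / 200 ** 0.5, 5, 0.5, 20, 0.5)
```

Note that as $n \rightarrow \infty$ the midpoint tends to $0.5$ and the transition width shrinks like $n^{-1/3}$, matching the two asymptotic behaviors mentioned above.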
In this paper we adopt a pragmatic point of view. Namely, we are interested in investigating what the properties of a random graph of small-medium size $n$ are when its density increases.
In particular, we experimentally measured that, for each graph size $n \leq 400$, there is a value of density that marks a sharp transition from planar graphs to non-planar ones.
This behavior is shared also by restrictions or relaxations of planarity, such as outerplanarity, and near-planarity.
The paper is structured as follows. Section~\ref{se:methods} describes the methodology used for all experiments. Section~\ref{se:experiments} describes each experiment in detail. Our conclusions are given in Section~\ref{se:conclusions}.
\section{Experimental Setting}\label{se:methods}
All the experiments described in Section~\ref{se:experiments} are composed of three phases: generation of graphs; measurement; and analysis. In this section we describe the characteristics of the three phases common to all experiments.
\smallskip\noindent
\textbf{Generation of graphs.} In all experiments (except for near-planarity) we used graphs with a number $n$ of vertices that varies from $1$ to $400$, increasing at each step by one.
The density $d=\frac{m}{n}$, where $m$ is the number of edges, varies in a range that depends on the type of property that we are investigating. In fact, given a specific property, there always exists an interval of densities, that we call the \emph{\interval}, such that for a graph outside the \interval either the property is granted or the property is ruled out, while inside the \interval there are both graphs that have the property and graphs that do not.
This is the interval of densities that we aim to experimentally explore\footnote{For the smallest graphs we may not have all densities. For example, there is no graph with $5$ vertices and density greater than $2$.}.
For each combination of size $n$ and density $d$ we determined the number of edges $m = \textrm{Round}(n\cdot d)$ of the graphs to be generated, and generated $10,000$ random graphs with $n$ vertices and $m$ edges\footnote{Function $\textrm{Round}()$ rounds a value to the nearest integer, where $\textrm{Round}(0.5)=1.0$.}.
In particular, we used function \texttt{randomSimpleGraph} of the OGDF library~\cite{cgjkkm-ogdf-14} for uniformly-at-random generating labeled graphs with a given number of vertices and edges. All graphs were simple (no loops or multiple edges allowed).
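A hedged sketch of this generation/measurement loop, using networkx's gnm_random_graph in place of OGDF's randomSimpleGraph and far fewer samples than the $10{,}000$ per cell used in the experiments:

```python
import networkx as nx
import random

def measured_fraction(n, d, predicate, samples=200, seed=0):
    """Sampled fraction of uniform random simple graphs with n vertices and
    m = Round(n*d) edges that satisfy `predicate`."""
    rng = random.Random(seed)
    m = int(n * d + 0.5)   # half-up rounding, as specified in the paper
    hits = sum(predicate(nx.gnm_random_graph(n, m, seed=rng.randrange(2**32)))
               for _ in range(samples))
    return hits / samples

# Acyclicity as the property: a graph is acyclic iff it is a forest.
low = measured_fraction(30, 0.2, nx.is_forest)    # sparse: mostly acyclic
high = measured_fraction(30, 1.0, nx.is_forest)   # m = n: never acyclic
```

Swapping in a planarity or outerplanarity predicate reproduces the other measurements described below.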
\smallskip\noindent
\textbf{Measurement.}
For each combination of size and density we counted how many graphs have the desired property.
\smallskip\noindent
\textbf{Analysis.}
We used Wolfram Mathematica 12.0.0.0 for producing the plots that are in this paper. In particular, we used function \texttt{ListPlot3D} that joins points with flat polygons.
For the property of acyclicity it is also possible to compute the exact percentage of random graphs that are acyclic. This allowed us to compare the measured frequency distribution with its probability counterpart
\ifArxiv
(see Appendix~\ref{app:validation}).
\else
(see~\cite{bdp-tppsm-20-arxiv}).
\fi
We used Mathematica also for sampling contour lines of surfaces and for computing fitting functions of sets of value pairs.
\section{Experimental Results}\label{se:experiments}
In this section we report the results of the experiments to determine how density and size impact graph-theoretic properties of random graphs of small-medium size. Since the purpose of the experiments is to show that planarity exhibits a tipping point behavior when the density increases, we start our experiments with acyclicity, a property that notoriously does not have tipping points~\cite[p. 118]{b-rg-85}.
Then, we consider planarity, outerplanarity, and near-planarity, the main targets of our investigation.
\begin{figure}[tb]
\centering
\hfill
\subfigure[]{\includegraphics[trim=0 -20 0 0,clip,width=0.45\columnwidth]{figures/arboricity-from-above-400.jpg}
\label{fig:acyclic-from-above}
}
\begin{picture}(0,0)
\put(-120,0){Number of vertices}
\put(-173,80){\hbox{\rotatebox{90}{Density}}}
\end{picture}
\hfill
\subfigure[]{\includegraphics[trim=0 -20 0 0,clip,width=0.45\columnwidth]{figures/planarity-from-above-400.jpg}
\label{fig:planarity-from-above}
}
\begin{picture}(0,0)
\put(-120,0){Number of vertices}
\put(-173,80){\hbox{\rotatebox{90}{Density}}}
\end{picture}
\caption{(a) Measured fraction of random graphs that are acyclic. (b) Measured fraction of random graphs that are planar.}\label{fig:acyclic-and-planar}
\end{figure}
\smallskip\noindent\textbf{Acyclicity in Random Graphs.}
Simple graphs with fewer than three edges are acyclic. Conversely, since a forest has at most $n-1$ edges, when $m=n$ a graph has at least one cycle. Hence, the \interval of densities for acyclicity is $[\frac{3}{n},1-\frac{1}{n}]$. We used densities ranging from $0.0$ to $1.0$, with a step of~$0.05$, performing a total of $84 \times 10^6$ tests.
The plot in Fig.~\ref{fig:acyclic-from-above} shows the measured frequency of acyclic graphs as a function of density and size. It is apparent that the density is the main cause of the loss of acyclicity, while the size of the graph seems to have weaker effects. In particular, bigger graphs tend to lose acyclicity earlier than smaller graphs.
Overall, the percentage of acyclic graphs seems to decrease smoothly through the \interval of densities, without any quick transition or drop.
Acyclic graphs allow us to compare a case where the tipping point is absent with the cases discussed in the next sections where a tipping point is present.
Also, for acyclicity we were able to compute the actual probability of a graph of having this property and we used the comparison between experimental and theoretical values to validate the experimental pipeline
\ifArxiv
(see Appendix~\ref{app:validation}).
\else
(see~\cite{bdp-tppsm-20-arxiv}).
\fi
\smallskip\noindent\textbf{Planarity in Random Graphs.}
We now consider the property of the graph of being planar.
All graphs with fewer than $9$ edges are planar and there is no planar graph with more than $3n-6$ edges. Hence, the \interval of densities for planarity is $[\frac{9}{n},\frac{3n-6}{n}]$. For our experiments we used densities from $0.0$ to $3.0$, with a step of~$0.1$, performing a total of $124 \times 10^6$ planarity tests.
In order to test the generated graphs for planarity we first used the OGDF function \texttt{makeConnected} that adds the minimum number of edges to make the graph connected and then called a single planarity test on the obtained graph: it can be easily seen that the minimality of the added edges implies that the connected graph is planar if and only if the connected components of the original graph were all planar.
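The same trick is straightforward to reproduce outside OGDF. A sketch with networkx (an assumption: the paper's pipeline is OGDF-based), where the components are chained together by one edge each before a single planarity test:

```python
import networkx as nx

def is_planar_via_make_connected(G):
    """Add a minimal set of edges joining the connected components into one,
    then run a single planarity test; by minimality the added edges are
    bridges, so the result equals 'every component of G is planar'."""
    H = G.copy()
    reps = [next(iter(c)) for c in nx.connected_components(H)]
    for u, v in zip(reps, reps[1:]):     # chain component representatives
        H.add_edge(u, v)
    ok, _ = nx.check_planarity(H)
    return ok

# Two planar components vs. one non-planar component (K5).
G1 = nx.disjoint_union(nx.complete_graph(4), nx.complete_graph(4))
G2 = nx.disjoint_union(nx.complete_graph(4), nx.complete_graph(5))
```

Here `G1` passes the test and `G2` fails it, as expected.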
\begin{figure}[tbp]
\centering
\subfigure[]{\includegraphics[trim=0 -50 0 0,width=0.48\columnwidth]{figures/planarity-from-the-side-400.jpg}
\label{fig:planarity-from-the-side}
}
\begin{picture}(0,0)
\put(-120,20){\hbox{\rotatebox{-13}{\scriptsize Density}}}
\put(-47,15){\hbox{\rotatebox{60}{\scriptsize Number of vertices}}}
\end{picture}
\hfill\hfill
\subfigure[]{\includegraphics[trim=0 -30 0 0,clip,width=0.45\columnwidth]{figures/planarity-from-above-fitting-2.jpg}
\label{fig:planarity-fitting}
}
\begin{picture}(0,0)
\put(-120,5){Number of vertices}
\put(-170,80){\hbox{\rotatebox{90}{\scriptsize Density}}}
\put(-140,150){\tiny Fitting curve: }
\put(-140,140){\tiny $f_{50\%}= 0.5 + \frac{4.28796}{n^{0.80709}} + \frac{1.20455}{n^{1/3}}$}
\put(-140,37){\footnotesize Horizontal Asymptote at $0.5$}
\end{picture}
\caption{(a) View from the side of the same graph of Fig.~\ref{fig:planarity-from-above}. (b) The samples at height $50\%$ (red dots) and a possible fitting curve (solid blue line).}\label{fi:planarity}
\end{figure}
Figs.~\ref{fig:planarity-from-above} and~\ref{fig:planarity-from-the-side} show a plot of the frequency of planar graphs in random simple graphs as a function of density and size.
It is apparent that the percentage of planar graphs drops from $100\%$ to $0\%$ in a short range of density values.
As an example, for $n=200$ we have that the fraction of planar graphs drops from $99\%$ to $1\%$ in the interval of densities
$[0.598,0.915]$, which corresponds to $10.6\%$
of the significant interval. In contrast, for the same value of $n$, the fraction of acyclic graphs depicted in Fig.~\ref{fig:acyclic-from-above} drops from $99\%$ to $1\%$ in
$53\%$ of the significant interval.
The tipping point is strongly related to density and appears earlier in larger graphs.
\ifArxiv
Figure~\ref{fig:planarity-contours} in the Appendix shows a plot of $9$ equally spaced contour lines at height $10\%, 20\%, \dots, 90\%$.
\else
Figure~\ref{fig:planarity-contours} in~\cite{bdp-tppsm-20-arxiv} shows a plot of $9$ equally spaced contour lines at height $10\%, 20\%, \dots, 90\%$.
\fi
In order to quantitatively study the behavior of the plot we determined the sample points of the contour line at height $50\%$ and computed a fitting of such points. For the fitting, because of the results in \cite{lpw-srgpp-94}, we selected a function of type $d=1/2+c_1/n^{c_2}+c_3/n^{1/3}$. The result of the fitting is shown in Fig. \ref{fig:planarity-fitting}. Observe that the value of $c_2$ is consistent with the theory.
\begin{figure}[tbp]
\centering
\subfigure[]{\includegraphics[trim=0 -30 0 0,clip,width=0.45\columnwidth]{figures/planarity-from-above-fitting-low-high.jpg}
\label{fig:planarity-fitting-low-high}
}
\begin{picture}(0,0)
\put(-120,0){Number of vertices}
\put(-173,80){\hbox{\rotatebox{90}{Density}}}
\put(-136,150){\tiny $f_{1\%}=0.5 + \frac{7.84819}{n^{1.01034}} + \frac{2.20906}{n^{1/3}}$}
\put(-140,130){\tiny $f_{99\%}=0.5 + \frac{3.65264}{n^{0.68018}} - \frac{0.01296}{n^{1/3}}$}
\put(-140,37){\footnotesize Horizontal Asymptote at $0.5$}
\end{picture}
\hfil
\subfigure[]{\includegraphics[trim=0 -30 0 0,clip,width=0.45\columnwidth]{figures/planarity-from-above-delta-low-high.jpg}
\label{fig:planarity-delta}
}
\begin{picture}(0,0)
\put(-120,0){Number of vertices}
\put(-173,80){\hbox{\rotatebox{90}{Density}}}
\put(-105,50){\footnotesize $f_{1\%} - f_{99\%}$}
\end{picture}
\hfil
\caption{(a) The sample points of the contour lines at height $1\%$ and $99\%$ and the corresponding fitting curves. (b) Difference between the fitting curves in (a).}\label{fig:planarity-delta-pair}
\end{figure}
In order to evaluate the width of the transition range we determined the sample points of the contour lines at height $1\%$ and $99\%$ and computed two fittings, one for each set of such points. For both fittings, again, we selected a function of type $d=1/2+c_1/n^{c_2}+c_3/n^{1/3}$. The results are shown in Fig.~\ref{fig:planarity-fitting-low-high}. Observe how the difference between the two curves is very small (Fig.~\ref{fig:planarity-delta}).
Surprisingly, for random graphs of small-medium size the density at which the measured fraction of planar graphs drops is much smaller than one might have hoped: if you grow the density of a random graph of small-medium size, you very likely lose planarity well before you have any chance of obtaining connectivity ($d=1$). Practically speaking, if you are interested in graphs with density one, planarity is almost granted for numbers of vertices in the range $[1,40]$ but is almost absent above $100$ vertices.
For density $1.5$, instead, a random graph with more than $25$ vertices is very likely non-planar.
\smallskip\noindent\textbf{Outerplanarity in Random Graphs.}
An \emph{outerplanar} graph is a graph that admits a planar drawing where all vertices are on the external face. All graphs with fewer than $6$ edges are outerplanar --- the smallest non-outerplanar graphs being $K_4$ and $K_{2,3}$ --- and there is no outerplanar graph with more than $2n-3$ edges. Hence, the \interval of densities for outerplanarity is $[\frac{6}{n},\frac{2n-3}{n}]$. For our experiments we used densities from $0.0$ to $2.0$, with a step of~$0.1$.
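networkx offers no outerplanarity test, but a standard characterization yields one: a graph $G$ is outerplanar if and only if the graph obtained by adding a new apex vertex adjacent to every vertex of $G$ is planar (the apex forces all original vertices onto one face). A sketch (the helper name is ours):

```python
import networkx as nx

def is_outerplanar(G):
    """Outerplanarity via the apex-vertex characterization: G is
    outerplanar iff G plus a vertex adjacent to all of G is planar."""
    H = G.copy()
    apex = max(G.nodes, default=-1) + 1   # a fresh vertex label
    H.add_edges_from((apex, v) for v in G.nodes)
    ok, _ = nx.check_planarity(H)
    return ok

# The two smallest obstructions named above, plus an outerplanar cycle.
assert not is_outerplanar(nx.complete_graph(4))               # K4
assert not is_outerplanar(nx.complete_bipartite_graph(2, 3))  # K_{2,3}
assert is_outerplanar(nx.cycle_graph(5))
```

This reduction also makes outerplanarity testable in linear time, since planarity testing is.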
\begin{figure}[htbp]
\centering
\hfill
\subfigure[]{\includegraphics[trim=0 -30 0 0,clip,width=0.45\columnwidth]{figures/outerplanarity-from-above-400.jpg}
\label{fi:outerplanar}
}
\begin{picture}(0,0)
\put(-120,0){Number of vertices}
\put(-173,80){\hbox{\rotatebox{90}{Density}}}
\end{picture}
\hfill
\subfigure[]{\includegraphics[trim=0 -30 0 0,clip,width=0.45\columnwidth]{figures/near-planarity-from-above-200.jpg}
\label{fi:near-planar}
}
\begin{picture}(0,0)
\put(-120,0){Number of vertices}
\put(-173,80){\hbox{\rotatebox{90}{Density}}}
\end{picture}
\caption{(a) Measured fraction of random graphs that are outerplanar. (b) Measured fraction of random graphs that are near-planar.}\label{fi:outer-near-planar}
\end{figure}
Figure~\ref{fi:outerplanar} shows the fraction of outerplanar graphs as a function of the number of vertices and density.
\smallskip\noindent\textbf{Near-Planarity in Random Graphs.}
A \emph{near-planar} graph is a graph that can be made planar by removing (at most) one edge~\cite{cm-aoepg-13}. Near-planar graphs are also called \emph{skewness-$1$} or \emph{almost planar} graphs~\cite{dlm-sgdbp-19}. The smallest non-near-planar graph is $K_{3,4}$, with $12$ edges.
From the definition of near-planar graphs it follows that such graphs have at most $3n-6+1$ edges. Hence, the \interval of densities for near-planarity is $[\frac{14}{n},\frac{3n-5}{n}]$. In our experiments we used densities ranging from $0.0$ to $3.0$, increasing by $0.1$.
The recognition of near-planar graphs can be done in quadratic time: it suffices to test for planarity each graph obtained by removing a single edge.
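This edge-deletion test is a few lines with networkx as the planarity subroutine (a sketch; the paper's implementation is OGDF-based):

```python
import networkx as nx

def is_near_planar(G):
    """Skewness-at-most-1 test: G is near-planar iff it is planar or
    becomes planar after deleting some single edge. One planarity test
    per edge gives quadratic time overall with a linear-time test."""
    if nx.check_planarity(G)[0]:
        return True
    for e in list(G.edges):
        H = G.copy()
        H.remove_edge(*e)
        if nx.check_planarity(H)[0]:
            return True
    return False

assert is_near_planar(nx.complete_graph(5))                   # K5 minus any edge is planar
assert not is_near_planar(nx.complete_bipartite_graph(3, 4))  # smallest non-near-planar graph
```

The two assertions match the obstructions discussed above.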
Figure~\ref{fi:near-planar} shows the measured
fraction of random graphs that are near-planar as a function of the number of vertices (from $1$ to $200$) and the density.
Observe that the transition from near-planar graphs to non-near-planar ones is sharper than what we measured for planarity or outerplanarity, although it occurs at higher values of density.
\section{Conclusion and Future Work}\label{se:conclusions}
We reported empirical evidence of the existence of a tipping point for planarity in random graphs of small-medium size.
The same phenomenon appears to be present for restrictions and relaxations of planarity as outerplanarity and near-planarity.
It would be interesting to measure whether other popular families of `beyond planar' graphs, such as 1-planar or quasi-planar graphs, also feature the same abrupt transition in their distribution in random graphs. Unfortunately, testing 1-planarity is NP-complete~\cite{km-moihp-13} even for near-planar graphs~\cite{cm-aoepg-13} and, to our knowledge, no implementation of the FPT algorithm in~\cite{bce-pc1p-18} for testing 1-planarity is available. Also, no testing algorithm has been proposed for quasi-planarity.
Finally, we could consider other types of graphs, such as random bipartite, biconnected, or triconnected graphs, as well as other graph models like small-world graphs or scale-free graphs.
\subsection*{Acknowledgments}
We thank Carlo Batini for posing us the first question about rapid transitions of graph properties. Sometimes questions are more important than answers. We also thank the anonymous reviewer for pointing out that the smallest not near-planar graph in terms of number of edges is $K_{3,4}$.
\bibliographystyle{splncs04}
https://arxiv.org/abs/1401.4969 | Local and Parallel Finite Element Algorithm Based On Multilevel Discretization for Eigenvalue Problem | A local and parallel algorithm based on the multilevel discretization is proposed in this paper to solve the eigenvalue problem by the finite element method. With this new scheme, solving the eigenvalue problem in the finest grid is transferred to solutions of the eigenvalue problems on the coarsest mesh and a series of solutions of boundary value problems by using the local and parallel algorithm. The computational work in each processor can reach the optimal order. Therefore, this type of multilevel local and parallel method improves the overall efficiency of solving the eigenvalue problem. Some numerical experiments are presented to validate the efficiency of the new method.
\section{Introduction}
Solving large-scale eigenvalue problems is a fundamental task in modern science and engineering.
However, it remains very difficult to solve high-dimensional eigenvalue problems arising in the
physical and chemical sciences. Xu and Zhou \cite{XuZhou_Eigen} proposed a two-grid discretization method
to improve the efficiency of solving eigenvalue problems. With the two-grid method,
the solution of an eigenvalue problem on a fine mesh is reduced to the solution of
an eigenvalue problem on a coarse mesh (which depends on the fine mesh) and the
solution of the corresponding boundary value problem on the fine mesh \cite{XuZhou_Eigen}.
For more details, please refer to \cite{Xu_Two_Grid,Xu_Nonlinear}.
Combining the two-grid idea with the local and parallel finite element
technique \cite{XuZhou_FEM}, a local and parallel finite element method
for eigenvalue problems was given in \cite{XuZhou_Parallel} (see also \cite{DaiShenZhou}).
Recently, a new type of multilevel correction method for solving eigenvalue problems,
which can be implemented on multilevel grids, was proposed in \cite{LinXie}.
In the multilevel correction scheme, the solution of the eigenvalue problem
on the finest mesh is reduced to a series of solutions of the eigenvalue problem on a very coarse
mesh (independent of the finest mesh) and a series of solutions of boundary value problems
on the multilevel meshes. The multilevel correction method gives a way to
construct a multigrid scheme for eigenvalue problems \cite{LinXie_MultiGrid_Eigenvalue}.
In this paper, we propose a multilevel local and parallel scheme for the
eigenvalue problem based on the combination of
the multilevel correction method and the local and parallel technique. A special property of
this scheme is that the local and parallel computation can be performed arbitrarily many times,
and the mesh size of the original coarse triangulation is independent of the finest triangulation.
With this new method, solving the eigenvalue problem is no more difficult
than solving boundary value problems by the local and parallel algorithm,
since the main part of the computation in the multilevel local and parallel method
consists of solving boundary value problems.
The standard Galerkin finite element method for eigenvalue problems
has been extensively investigated; see, e.g., Babu\v{s}ka and Osborn
\cite{Babuska2,BabuskaOsborn}, Chatelin \cite{Chatelin} and the
references cited therein. There also exist analyses of the local and parallel
finite element method for boundary value problems and eigenvalue problems
\cite{DaiShenZhou,SchatzWahlbin,Wahlbin,XuZhou_FEM,XuZhou_Eigen,XuZhou_Parallel}.
Here we adopt some basic results from these papers for our analysis.
The corresponding error and computational work estimates of the proposed multilevel
local and parallel scheme for the eigenvalue problem are analyzed. Based
on this analysis, the new method attains optimal accuracy with optimal
computational work in each processor.
An outline of the paper goes as follows. In the next section,
basic results on local error estimates for the finite element method are introduced.
In Section 3, we introduce the finite element method for the eigenvalue problem
and the corresponding error estimates.
A local and parallel type of one correction step and the multilevel correction algorithm are given in Section 4.
The estimate of the computational work for the multilevel local and parallel algorithm
is presented in Section 5. In Section 6, two numerical examples are presented
to validate our theoretical analysis, and some concluding remarks are given in the last section.
\section{Discretization by finite element method}
In this section, we introduce some notation and error estimates of
the finite element approximation of linear elliptic problems.
The letter $C$ (with or without subscripts) denotes a generic
positive constant which may take different values at different occurrences throughout the paper.
For convenience, the symbols $\lesssim$, $\gtrsim$ and $\approx$
will be used in this paper. The relations $x_1\lesssim y_1$, $x_2\gtrsim y_2$
and $x_3\approx y_3$ mean that $x_1\leq C_1y_1$, $x_2 \geq c_2y_2$
and $c_3x_3\leq y_3\leq C_3x_3$ for some constants $C_1, c_2, c_3$
and $C_3$ that are independent of mesh sizes (see, e.g., \cite{Xu}).
We shall use the standard notation for Sobolev spaces $W^{s,p}(\Omega)$ and their
associated norms and seminorms (see, e.g., \cite{Adams}). For $p=2$, we denote
$H^s(\Omega)=W^{s,2}(\Omega)$ and $H_0^1(\Omega)=\{v\in H^1(\Omega):\ v|_{\partial\Omega}=0\}$,
where $v|_{\partial\Omega}=0$ is in the sense of trace, $\|\cdot\|_{s,\Omega}=\|\cdot\|_{s,2,\Omega}$.
For $G\subset D\subset \Omega$, the notation $G\subset\subset D$ means
that ${\rm dist}(\partial D\setminus\partial\Omega,\partial G\setminus\partial\Omega)>0$
(see Figure \ref{fig:DomainDG}). It is well known that
any $w\in H_0^1(\Omega_0)$, with $\Omega_0\subset\Omega$, can be naturally extended to a function in $H_0^1(\Omega)$
by setting it to zero outside of $\Omega_0$.
We indicate this fact by the (slightly abused) notation $H_0^1(\Omega_0)\subset H_0^1(\Omega)$.
\begin{figure}[htb]
\centering
\includegraphics[width=10cm,height=4.5cm]{DomainDG.ps}
\caption{$G\subset\subset D\subset\subset\Omega$}\label{fig:DomainDG}
\end{figure}
\subsection{Finite element space}
Now, let us define the finite element space.
First, we generate a shape-regular
decomposition $\mathcal{T}_h(\Omega)$ of the computational domain $\Omega\subset \mathcal{R}^d\
(d=2,3)$ into triangles or rectangles for $d=2$ (tetrahedra or
hexahedra for $d=3$). The diameter of a cell $K\in\mathcal{T}_h(\Omega)$
is denoted by $h_K$. The mesh size function is denoted by $h(x)$, whose value
is the diameter $h_K$ of the element $K$ containing $x$.
For generality, following \cite{XuZhou_FEM,XuZhou_Parallel},
we shall consider a class of finite element spaces that satisfy certain assumptions.
Now we describe such assumptions.
{\bf A.0}.\ There exists $\gamma>1$ such that
\begin{eqnarray*}\label{Mesh_Size_Condition}
h_{\Omega}^{\gamma}\lesssim h(x),\ \ \ \ \forall x\in\Omega,
\end{eqnarray*}
where $h_{\Omega}=\max_{x\in\Omega}h(x)$ is the
largest mesh size of $\mathcal{T}_h(\Omega)$.
Based on the triangulation $\mathcal{T}_h(\Omega)$, we define the finite element space
$V_h(\Omega)$ as follows
\begin{eqnarray*}\label{FES_Pk}
V_h(\Omega)=\big\{v\in C(\bar{\Omega}):\ v|_K\in \mathcal{P}_k,\ \ \forall K\in\mathcal{T}_h(\Omega)\big\},
\end{eqnarray*}
where $\mathcal{P}_k$ denotes the space of polynomials of degree not greater than a positive integer
$k$. Then we know $V_h(\Omega)\subset H^1(\Omega)$ and define
$V_{0h}(\Omega)=V_h(\Omega)\cap H_0^1(\Omega)$. Given $G\subset \Omega$,
we define $V_h(G)$ and $\mathcal{T}_h(G)$ to be the restriction
of $V_h(\Omega)$ and $\mathcal{T}_h(\Omega)$ to $G$, respectively, and
\begin{eqnarray*}\label{FES_G}
V_{0h}(G)=\big\{v\in V_h(\Omega):\ {\rm supp}v\subset\subset G\big\}.
\end{eqnarray*}
For any $G\subset \Omega$ mentioned in this paper, we assume that it aligns with the partition
$\mathcal{T}_h(\Omega)$.
It is known that the finite element space $V_h$ satisfies the following proposition (see, e.g., \cite{BrennerScott,CiarletLions,XuZhou_FEM,XuZhou_Parallel}).
\begin{proposition}\label{Proposition_Fractional_Norm}({\it Fractional Norm})
For any $G\subset \Omega$, we have
\begin{eqnarray}\label{Fractional_Norm}
\inf_{v\in V_{0h}(G)}\|w-v\|_{1,G}\lesssim \|w\|_{1/2,\partial G},\ \ \ \forall w\in V_h(\Omega).
\end{eqnarray}
\end{proposition}
\subsection{A linear elliptic problem}
In this subsection, we recall some basic properties of a second order elliptic boundary
value problem and its finite element discretization, which will be used in this paper.
The following results are presented in \cite{SchatzWahlbin,Wahlbin,XuZhou_FEM,XuZhou_Parallel}.
We consider the homogeneous boundary value problem
\begin{equation}\label{Liner_Elliptic}
\left\{
\begin{array}{rcl}
Lu&=&f,\ \ {\rm in}\ \Omega,\\
u&=&0,\ \ {\rm on}\ \partial\Omega.
\end{array}
\right.
\end{equation}
Here the linear second order elliptic operator $L:H_0^{1}(\Omega)\rightarrow H^{-1}(\Omega)$ is defined by
\begin{eqnarray*}
Lu=-{\rm div}(A\nabla u),
\end{eqnarray*}
where $A=(a_{ij})_{1\leq i,j\leq d}\in \mathcal{R}^{d\times d}$ is symmetric and uniformly
positive definite on $\Omega$ with $a_{ij}\in W^{1,\infty}(\Omega)$.
The weak form for (\ref{Liner_Elliptic}) is as follows:
Find $u\equiv L^{-1}f\in H_0^1(\Omega)$ such that
\begin{eqnarray}\label{Weak_Linear_Elliptic}
a(u, v) = (f, v), \ \ \ \forall v\in H_0^1(\Omega),
\end{eqnarray}
where $(\cdot,\cdot)$ is the standard inner-product of $L^2(\Omega)$ and
\begin{eqnarray*}
a(u,v)=\big(A\nabla u, \nabla v\big).
\end{eqnarray*}
It is well known that
\begin{eqnarray*}
\|w\|_{1,\Omega}^2\lesssim a(w,w),\ \ \ \forall w\in H_0^1(\Omega).
\end{eqnarray*}
We assume (c.f. \cite{Grisvard}) that the following regularity estimate holds for
the solution of (\ref{Liner_Elliptic}) or (\ref{Weak_Linear_Elliptic})
\begin{eqnarray*}\label{Regularity_Estimate}
\|u\|_{1+\alpha,\Omega}\lesssim \|f\|_{-1+\alpha,\Omega}
\end{eqnarray*}
for some $\alpha\in (0,1]$ depending on $\Omega$ and the coefficient of $L$.
For some $G\subset \Omega$, we need the following regularity assumption:
{\bf R(G)}.\ For any $f\in L^2(G)$, there exists a $w\in H_0^1(G)$ satisfying
\begin{eqnarray*}
a(v,w)=(f,v),\ \ \ \forall v\in H_0^1(G)
\end{eqnarray*}
and
\begin{eqnarray*}
\|w\|_{1+\alpha,G}\lesssim \|f\|_{-1+\alpha,G}.
\end{eqnarray*}
For the analysis, we define the Galerkin-Projection operator $P_h:\ H_0^1(\Omega)\rightarrow V_{0h}(\Omega)$ by
\begin{eqnarray}\label{Projection_Problem}
a(u-P_hu,v)=0,\ \ \ \ \forall v\in V_{0h}(\Omega)
\end{eqnarray}
which clearly satisfies
\begin{eqnarray}\label{Projection_Inequality}
\|P_hu\|_{1,\Omega}\lesssim \|u\|_{1,\Omega},\ \ \ \forall u\in H_0^1(\Omega).
\end{eqnarray}
Based on (\ref{Projection_Inequality}), the global priori error estimate
can be obtained from the approximate properties of the finite dimensional subspace $V_{0h}(\Omega)$
(cf. \cite{BrennerScott,CiarletLions}). For the following analysis, we introduce the following quantity:
\begin{eqnarray}
\rho_{\Omega}(h)&=&\sup_{f\in L^2(\Omega),\|f\|_{0,\Omega}=1}\inf_{v\in V_{0h}(\Omega)}\|L^{-1}f-v\|_{1,\Omega}.
\end{eqnarray}
Similarly, we can also define $\rho_G(h)$ if Assumption R(G) holds.
The following results can be found in \cite{BabuskaOsborn,BrennerScott,CiarletLions,XuZhou_Eigen,XuZhou_Parallel}.
\begin{proposition}
\begin{eqnarray*}
\|(I-P_h)L^{-1}f\|_{1,\Omega}&\lesssim&\rho_{\Omega}(h)\|f\|_{0,\Omega},\ \ \ \forall f\in L^2(\Omega),\\
\|u-P_hu\|_{0,\Omega}&\lesssim&\rho_{\Omega}(h)\|u-P_hu\|_{1,\Omega},\ \ \ \forall u\in H_0^1(\Omega).
\end{eqnarray*}
\end{proposition}
Now, we state an important and useful result about the local error estimates \cite{SchatzWahlbin,Wahlbin,XuZhou_Parallel}
which will be used in the following.
\begin{proposition}\label{Prop:Local_Estimate}
Suppose that $f\in H^{-1}(\Omega)$ and $G\subset\subset \Omega_0\subset\Omega$. If Assumption
A.0 holds and $w\in V_{h}(\Omega_0)$ satisfies
\begin{eqnarray*}
a(w,v)=(f,v),\ \ \ \ \forall v\in V_{0h}(\Omega_0),
\end{eqnarray*}
then we have the following estimate
\begin{eqnarray*}\label{Local_Estimate}
\|w\|_{1,G}\lesssim \|w\|_{0,\Omega_0}+\|f\|_{-1,\Omega_0}.
\end{eqnarray*}
\end{proposition}
\section{Error estimates for eigenvalue problems}
In this section, we introduce the concerned eigenvalue problem and the corresponding
finite element discretization.
In this paper, we consider the following eigenvalue problem:
Find $(\lambda, u )\in \mathcal{R}\times H^1_0(\Omega)$ such that
$b(u,u)=1$ and
\begin{eqnarray}\label{Weak_Eigenvalue_Problem}
a(u,v)&=&\lambda b(u,v),\quad \forall v\in H^1_0(\Omega),
\end{eqnarray}
where
\begin{eqnarray*}
b(u,u)=(u,u).
\end{eqnarray*}
For the eigenvalue $\lambda$, the following Rayleigh
quotient expression holds (see, e.g., \cite{Babuska2,BabuskaOsborn,XuZhou_Eigen})
\begin{eqnarray*}\label{Rayleigh_Quotient}
\lambda=\frac{a(u,u)}{b(u,u)}.
\end{eqnarray*}
From \cite{BabuskaOsborn,Chatelin}, we know the eigenvalue problem
(\ref{Weak_Eigenvalue_Problem}) has an eigenvalue sequence $\{\lambda_j \}:$
$$0<\lambda_1\leq \lambda_2\leq\cdots\leq\lambda_k\leq\cdots,\ \ \
\lim_{k\rightarrow\infty}\lambda_k=\infty,$$ and the associated
eigenfunctions
$$u_1,u_2,\cdots,u_k,\cdots,$$
where $b(u_i,u_j)=\delta_{ij}$. In the sequence $\{\lambda_j\}$, the
$\lambda_j$ are repeated according to their multiplicity.
Then we can define the discrete approximation for the exact eigenpair $(\lambda,u)$ of
(\ref{Weak_Eigenvalue_Problem}) based on the finite element space as:
Find $(\lambda_h, u_h)\in \mathcal{R}\times V_{0h}(\Omega)$ such that
$b(u_h,u_h)=1$ and
\begin{eqnarray}\label{Discrete_Weak_Eigen_Problem}
a(u_h,v_h)&=&\lambda_hb(u_h,v_h),\quad \forall v_h\in V_{0h}(\Omega).
\end{eqnarray}
From (\ref{Discrete_Weak_Eigen_Problem}), we know the following
Rayleigh quotient expression for $\lambda_h$ holds
(see, e.g., \cite{Babuska2,BabuskaOsborn,XuZhou_Eigen})
\begin{eqnarray*}\label{Discrete_Rayleigh_Quotient}
\lambda_h &=&\frac{a(u_h,u_h)}{b(u_h,u_h)}.
\end{eqnarray*}
Similarly, we know from \cite{BabuskaOsborn,Chatelin} the eigenvalue
problem (\ref{Discrete_Weak_Eigen_Problem}) has eigenvalues
$$0<\lambda_{1,h}\leq \lambda_{2,h}\leq\cdots\leq \lambda_{k,h}\leq\cdots\leq \lambda_{N_h,h},$$
and the corresponding eigenfunctions
$$u_{1,h}, u_{2,h},\cdots, u_{k,h}, \cdots, u_{N_h,h},$$
where $b(u_{i,h},u_{j,h})=\delta_{ij}, 1\leq i,j\leq N_h$ ($N_h$ is
the dimension of the finite element space $V_{0h}(\Omega)$).
From the minimum-maximum principle (see, e.g., \cite{Babuska2,BabuskaOsborn}),
the following upper bound result holds
$$\lambda_i\leq \lambda_{i,h}, \ \ \ i=1,2,\cdots, N_h.$$
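This min–max upper bound can be observed numerically. The sketch below uses our own illustrative one-dimensional model problem $-u''=\lambda u$ on $(0,1)$ with linear elements (not an example from the paper), whose exact eigenvalues are $\lambda_i=(i\pi)^2$, and checks $\lambda_i\leq\lambda_{i,h}$ for every discrete index.

```python
import numpy as np

# P1 stiffness and consistent mass matrices on a uniform mesh of (0,1).
n = 16
h = 1.0 / n
A = (1.0 / h) * (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))
M = (h / 6.0) * (4 * np.eye(n - 1) + np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))

# All discrete eigenvalues of A u = lam M u, in ascending order.
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
lam_h = np.linalg.eigvalsh(Linv @ A @ Linv.T)

# Exact eigenvalues of -u'' = lam*u on (0,1): lam_i = (i*pi)^2.
lam_exact = (np.arange(1, n) * np.pi) ** 2
print(np.all(lam_h >= lam_exact))   # True: lam_i <= lam_{i,h} for every i
```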
Let $M(\lambda_i)$ denote the eigenspace corresponding to the
eigenvalue $\lambda_i$ which is defined by
\begin{eqnarray}
M(\lambda_i)&=&\big\{w\in V: w\ {\rm is\ an\ eigenfunction\ of\
(\ref{Weak_Eigenvalue_Problem})\ corresponding\ to} \ \lambda_i\nonumber\\
&&\ \ \ {\rm and}\ \|w\|_b=1\big\},
\end{eqnarray}
where $\|w\|_b = \sqrt{b(w,w)}$.
Then we define
\begin{eqnarray}
\delta_h(\lambda_i)=\sup_{w\in M(\lambda_i)}\inf_{v\in
V_{0h}(\Omega)}\|w-v\|_1.
\end{eqnarray}
For the eigenpair approximations by the finite element method, there
exist the following error estimates.
\begin{proposition}(\cite[Lemma 3.7, (3.28b,3.29b)]{Babuska2}, \cite[P. 699]{BabuskaOsborn} and
\cite{Chatelin})\label{Prop:Eigen_Error_Estimate}
\noindent(i) For any eigenfunction approximation $u_{i,h}$ of
(\ref{Discrete_Weak_Eigen_Problem}) $(i = 1, 2, \cdots, N_h)$, there is an
eigenfunction $u_i$ of (\ref{Weak_Eigenvalue_Problem}) corresponding to
$\lambda_i$ such that $\|u_i\|_b = 1$ and
\begin{eqnarray*}\label{Eigenfunction_Error}
\|u_i-u_{i,h}\|_{1,\Omega}&\leq& C_i\delta_h(\lambda_i).
\end{eqnarray*}
Furthermore,
\begin{eqnarray*}\label{Eigenfunction_Error_Negative}
\|u_i- u_{i,h}\|_{0,\Omega} &\leq& C_i\rho_{\Omega}(h)\delta_h(\lambda_i).
\end{eqnarray*}
(ii) For each eigenvalue, we have
\begin{eqnarray*}
\lambda_i \leq \lambda_{i,h}\leq \lambda_i + C_i\delta_h^2(\lambda_i).
\end{eqnarray*}
Here and hereafter $C_i$ is some constant depending on $i$ but independent of the mesh size $h$.
\end{proposition}
To analyze our method, we introduce the error expansion of the
eigenvalue in terms of the Rayleigh quotient formula, which comes from
\cite{Babuska2,BabuskaOsborn,XuZhou_Eigen}.
\begin{proposition}\label{Prop:Rayleigh_Quotient_Error}
Assume $(\lambda,u)$ is an exact eigenpair of the eigenvalue problem
(\ref{Weak_Eigenvalue_Problem}) and $0\neq \psi\in H_0^1(\Omega)$. Let us define
\begin{eqnarray*}
\widehat{\lambda}=\frac{a(\psi,\psi)}{b(\psi,\psi)}.
\end{eqnarray*}
Then we have
\begin{eqnarray*}
\widehat{\lambda}-\lambda
&=&\frac{a(u-\psi,u-\psi)}{b(\psi,\psi)}-\lambda
\frac{b(u-\psi,u-\psi)}{b(\psi,\psi)}.
\end{eqnarray*}
\end{proposition}
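This identity is purely algebraic, so it can be verified on a finite-dimensional analogue in which $a(\cdot,\cdot)$ and $b(\cdot,\cdot)$ are induced by a symmetric matrix and a symmetric positive definite matrix. The following check is our own illustration, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random symmetric A and SPD B playing the roles of a(.,.) and b(.,.).
X = rng.standard_normal((n, n))
A = X + X.T
Y = rng.standard_normal((n, n))
B = Y @ Y.T + n * np.eye(n)

# A generalized eigenpair A u = lam B u via a symmetric transformation.
Lc = np.linalg.cholesky(B)
Linv = np.linalg.inv(Lc)
w, V = np.linalg.eigh(Linv @ A @ Linv.T)
lam, u = w[0], Linv.T @ V[:, 0]

psi = rng.standard_normal(n)        # arbitrary nonzero trial vector
a = lambda x, y: x @ A @ y
b = lambda x, y: x @ B @ y

# Left- and right-hand sides of the Rayleigh quotient expansion.
lam_hat = a(psi, psi) / b(psi, psi)
rhs = a(u - psi, u - psi) / b(psi, psi) - lam * b(u - psi, u - psi) / b(psi, psi)
print(abs(lam_hat - lam - rhs))     # ~ 0 up to rounding
```

Expanding the right-hand side and using $a(u,v)=\lambda b(u,v)$ shows the two sides agree exactly, which is what the check confirms numerically.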
\section{Multilevel local and Parallel algorithms}\label{sec:LPA}
In this section, we present a new multilevel parallel algorithm for the eigenvalue problem
based on the combination of the local and parallel finite element technique and the multilevel correction method.
First, we introduce a one-correction step with the local and parallel finite element scheme, and then
present a parallel multilevel method for the eigenvalue problem.
To describe the numerical scheme, we need some notation.
Given a coarsest triangulation $\mathcal{T}_H(\Omega)$, we first divide the domain $\Omega$ into a number of disjoint
subdomains $D_1$, $\cdots$, $D_m$ such that $\bigcup_{j=1}^m\bar{D}_j=\bar{\Omega}$ and $D_i\cap D_j=\emptyset$ for $i\neq j$
(see Figure \ref{fig:DomainGi}); then we enlarge each $D_j$ to obtain $\Omega_j$
that aligns with $\mathcal{T}_H(\Omega)$. We pick another sequence of subdomains
$G_j\subset\subset D_j\subset\Omega_j\subset \Omega$
and set (see Figure \ref{fig:DomainGi})
\begin{eqnarray*}
G_{m+1} = \Omega\setminus (\cup_{j=1}^m\bar{G_j}).
\end{eqnarray*}
\begin{figure}[htb]
\centering
\includegraphics[width=10cm,height=4.5cm]{DomainGi.ps}
\caption{$m=4$}\label{fig:DomainGi}
\end{figure}
In this paper we assume the domain decomposition satisfies the following property
\begin{equation}\label{Domain_Decomposition_Property}
\sum_{j=1}^{m}\|v\|_{\ell,\Omega_j}^2\lesssim \|v\|_{\ell,\Omega}^2
\end{equation}
for any $v\in H^{\ell}(\Omega)$ with $\ell=0,\ 1$.
\subsection{One correction step}
First, we present the one correction step, which improves the
accuracy of a given eigenvalue and eigenfunction approximation.
This correction step consists of solving an auxiliary boundary value problem
in the finer finite element space on each subdomain
and an eigenvalue problem in the coarsest finite element space.
For simplicity of notation, we set
$(\lambda,u)=(\lambda_i,u_i)\ (i=1,2,\cdots,k,\cdots)$ and
$(\lambda_h, u_h)=(\lambda_{i,h},u_{i,h})\ (i=1,2,\cdots,N_h)$ to
denote an eigenpair and its corresponding approximation of problems (\ref{Weak_Eigenvalue_Problem}) and
(\ref{Discrete_Weak_Eigen_Problem}), respectively. For clarity, we only describe the algorithm for
the case of a simple eigenvalue. The corresponding algorithm for the multiple eigenvalue case can be
given in a similar way as in \cite{Xie_Nonconforming}.
In order to perform the correction step, we build the original coarsest finite element space $V_{0H}(\Omega)$
on the background mesh $\mathcal{T}_H(\Omega)$. This coarsest finite element space $V_{0H}(\Omega)$ will be
used as the background space in our algorithm.
Assume we have obtained an eigenpair approximation
$(\lambda_{h_k},u_{h_k})\in\mathcal{R}\times V_{0h_k}(\Omega)$.
The one correction step will improve the accuracy of the
current eigenpair approximation $(\lambda_{h_k},u_{h_k})$.
Let $V_{0h_{k+1}}(\Omega)$ be a finer finite element space such that
$V_{0h_k}(\Omega)\subset V_{0h_{k+1}}(\Omega)$. Here we assume the
finite element spaces $V_{0h_k}(\Omega)$ and $V_{0h_{k+1}}(\Omega)$
are consistent with the domain decomposition and $V_{0H}(\Omega)\subset V_{0h_k}(\Omega)$.
Based on this finer finite element space $V_{0h_{k+1}}(\Omega)$, we define the following one correction step.
\begin{algorithm}\label{Algm:One_Step_Correction}
One Correction Step
We have a given eigenpair approximation $(\lambda_{h_k},u_{h_k})\in\mathcal{R}\times V_{0h_k}(\Omega)$.
\begin{enumerate}
\item Define the following auxiliary boundary value problem:
For each $j = 1,2,\cdots,m$, find ${e}_{h_{k+1}}^j \in V_{0h_{k+1}}(\Omega_j)$ such that
\begin{equation}\label{Aux_Problem}
a({e}_{h_{k+1}}^j,v_{h_{k+1}})=\lambda_{h_k}b(u_{h_k},v_{h_{k+1}})-a({u}_{h_k},v_{h_{k+1}}),\ \
\ \forall v_{h_{k+1}}\in V_{0h_{k+1}}(\Omega_j).
\end{equation}
Set $\widetilde{u}_{h_{k+1}}^j=u_{h_k}+e_{h_{k+1}}^j\in V_{h_{k+1}}(\Omega_j)$.
\item Construct $\widetilde{u}_{h_{k+1}}\in V_{0h_{k+1}}(\Omega)$ such that
$\widetilde{u}_{h_{k+1}} = \widetilde{u}_{h_{k+1}}^j$ in $G_j$ $(j = 1, \cdots, m)$
and $\widetilde{u}_{h_{k+1}} = \widetilde{u}_{h_{k+1}}^{m+1}$ in $G_{m+1}$ with
$\widetilde{u}_{h_{k+1}}^{m+1}$ being defined by solving the following problem:
Find $\widetilde{u}_{h_{k+1}}^{m+1}\in V_{h_{k+1}}(G_{m+1})$ such that
$\widetilde{u}_{h_{k+1}}^{m+1}|_{\partial G_j\cap \partial G_{m+1}}= \widetilde{u}_{h_{k+1}}^j$ $(j = 1, \cdots, m)$ and
\begin{equation}\label{G_m+1_Problem}
a(\widetilde{u}_{h_{k+1}}^{m+1}, v_{h_{k+1}}) = \lambda_{h_k} b(u_{h_k}, v_{h_{k+1}}),\ \ \ \forall v_{h_{k+1}}\in V_{0h_{k+1}}(G_{m+1}).
\end{equation}
\item Define a new finite element
space $V_{H,h_{k+1}}=V_{0H}(\Omega)+{\rm span}\{\widetilde{u}_{h_{k+1}}\}$ and solve
the following eigenvalue problem:
Find $(\lambda_{h_{k+1}},u_{h_{k+1}})\in\mathcal{R}\times V_{H,h_{k+1}}$ such
that $b(u_{h_{k+1}},u_{h_{k+1}})=1$ and
\begin{eqnarray}\label{Eigen_Augment_Problem}
a(u_{h_{k+1}},v_{H,h_{k+1}})&=&\lambda_{h_{k+1}} b(u_{h_{k+1}},v_{H,h_{k+1}}),\ \ \
\forall v_{H,h_{k+1}}\in V_{H,h_{k+1}}.
\end{eqnarray}
\end{enumerate}
We summarize the above three steps as
\begin{eqnarray*}
(\lambda_{h_{k+1}},u_{h_{k+1}})={\it
Correction}(V_{0H}(\Omega),\lambda_{h_k}, u_{h_k},V_{0h_{k+1}}(\Omega)),
\end{eqnarray*}
where $\lambda_{h_k}$
and $u_{h_k}$ are the given eigenvalue and eigenfunction approximation, respectively.
\end{algorithm}
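To make the structure of Steps 1 and 3 concrete, here is a simplified serial sketch for the one-dimensional model problem $-u''=\lambda u$ on $(0,1)$ with linear elements. The subdomain solves of Steps 1–2 are collapsed into a single global boundary value solve, so this illustrates the correction and the small augmented eigenproblem on $V_{0H}(\Omega)+{\rm span}\{\widetilde{u}_{h_{k+1}}\}$, not the parallelism; all names and mesh sizes are our own choices.

```python
import numpy as np

def assemble(n):
    """P1 stiffness/mass matrices for -u'' = lam*u on (0,1), zero BC."""
    h = 1.0 / n
    A = (1.0 / h) * (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))
    M = (h / 6.0) * (4 * np.eye(n - 1) + np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))
    return A, M

def prolong(nc, nf):
    """Nodal values of the coarse interior hat functions on the fine mesh."""
    xc = np.linspace(0, 1, nc + 1)
    xf = np.linspace(0, 1, nf + 1)
    P = np.zeros((nf - 1, nc - 1))
    for j in range(1, nc):
        hat = np.zeros(nc + 1)
        hat[j] = 1.0
        P[:, j - 1] = np.interp(xf, xc, hat)[1:-1]
    return P

nH, nk, nk1 = 4, 16, 64          # coarsest / current / next level
AH_on_f = prolong(nH, nk1)       # coarsest basis expressed on the finest mesh

# Current eigenpair approximation on level k (here: a direct solve).
Ak, Mk = assemble(nk)
Lk = np.linalg.cholesky(Mk)
Li = np.linalg.inv(Lk)
w, V = np.linalg.eigh(Li @ Ak @ Li.T)
lam_k, u_k = w[0], Li.T @ V[:, 0]

# Step 1 (globalized): solve a(u_tilde, v) = lam_k b(u_k, v) on level k+1.
Af, Mf = assemble(nk1)
u_k_fine = prolong(nk, nk1) @ u_k
u_tilde = np.linalg.solve(Af, lam_k * (Mf @ u_k_fine))

# Step 3: small eigenproblem on V_H + span{u_tilde}.
B = np.column_stack([AH_on_f, u_tilde])
Ar, Mr = B.T @ Af @ B, B.T @ Mf @ B
Lr = np.linalg.cholesky(Mr)
Lri = np.linalg.inv(Lr)
wr, _ = np.linalg.eigh(Lri @ Ar @ Lri.T)
lam_k1 = wr[0]
print(lam_k, lam_k1, np.pi**2)
```

Although the augmented eigenproblem has only $\dim V_{0H}+1$ unknowns (here four), the corrected eigenvalue reaches essentially fine-level accuracy, as the theorem predicts.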
\begin{theorem}\label{Thm:Error_Estimate_One_Step_Correction}
Assume the current eigenpair approximation
$(\lambda_{h_k},u_{h_k})\in\mathcal{R}\times V_{0h_k}(\Omega)$ has the
following error estimates
\begin{eqnarray}
\|u-u_{h_k}\|_{1,\Omega} &\lesssim &\varepsilon_{h_k}(\lambda),\label{Estimate_u_u_h_k}\\
\|u-u_{h_k}\|_{0,\Omega}&\lesssim&\rho_{\Omega}(H)\varepsilon_{h_k}(\lambda),\label{Estimate_u_u_h_k_zero}\\
|\lambda-\lambda_{h_k}|&\lesssim&\varepsilon_{h_k}^2(\lambda).\label{Estimate_lambda_lambda_h_k}
\end{eqnarray}
Then after one step correction, the resultant approximation
$(\lambda_{h_{k+1}},u_{h_{k+1}})\in\mathcal{R}\times V_{0h_{k+1}}(\Omega)$ has the
following error estimates
\begin{eqnarray}
\|u-u_{h_{k+1}}\|_{1,\Omega} &\lesssim &\varepsilon_{h_{k+1}}(\lambda),\label{Estimate_u_u_h_{k+1}}\\
\|u-u_{h_{k+1}}\|_{0,\Omega}&\lesssim&
\rho_{\Omega}(H)\varepsilon_{h_{k+1}}(\lambda),\label{Estimate_u_u_h_{k+1}_zero}\\
|\lambda-\lambda_{h_{k+1}}|&\lesssim&\varepsilon_{h_{k+1}}^2(\lambda),\label{Estimate_lambda_lambda_h_{k+1}}
\end{eqnarray}
where
$\varepsilon_{h_{k+1}}(\lambda):=\rho_{\Omega}(H)\varepsilon_{h_k}(\lambda)
+\varepsilon_{h_k}^2(\lambda)+\delta_{h_{k+1}}(\lambda)$.
\end{theorem}
\begin{proof}
We focus on estimating $\|u-\widetilde{u}_{h_{k+1}}\|_{1,\Omega}$. First, we have
\begin{equation}\label{equ:Key0}
\|u-\widetilde{u}_{h_{k+1}}\|_{1,\Omega}
\lesssim \|u-P_{h_{k+1}}u\|_{1,\Omega} + \|\widetilde{u}_{h_{k+1}}-P_{h_{k+1}}u\|_{1,\Omega},
\end{equation}
and
\begin{equation}\label{equ:Key1}
\|\widetilde{u}_{h_{k+1}}-P_{h_{k+1}}u\|_{1,\Omega}^2
=\sum_{j=1}^m\|\widetilde{u}_{h_{k+1}}^j-P_{h_{k+1}}u\|_{1,G_j}^2
+\|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u\|_{1,G_{m+1}}^2.
\end{equation}
From problems (\ref{Projection_Problem}), (\ref{Weak_Eigenvalue_Problem}) and
(\ref{Aux_Problem}), the following equation holds
\begin{equation*}
a(\widetilde{u}_{h_{k+1}}^j-P_{h_{k+1}}u,v)=b(\lambda_{h_k}u_{h_k}-\lambda u,v),\ \ \ \forall v\in V_{0h_{k+1}}(\Omega_{j}),
\end{equation*}
for $j=1,2,\cdots,m$.
According to Proposition \ref{Prop:Local_Estimate}, we have
\begin{eqnarray}\label{equ:Key2}
&&\|\widetilde{u}_{h_{k+1}}^j-P_{h_{k+1}}u\|_{1,G_j} \lesssim
\|\widetilde{u}_{h_{k+1}}^j-P_{h_{k+1}}u\|_{0,\Omega_j}+\|\lambda_{h_k}u_{h_k}-\lambda u\|_{-1,\Omega_j}\nonumber\\
&\lesssim & \|\widetilde{u}_{h_{k+1}}^j-u_{h_k}\|_{0,\Omega_j}+\|u_{h_k}-P_{h_{k+1}}u\|_{0,\Omega_j}
+\|\lambda_{h_k}u_{h_k}-\lambda u\|_{0,\Omega_j}.
\end{eqnarray}
We estimate the first term, i.e., $\|{e}_{h_{k+1}}^j\|_{0,\Omega_j}$, by
using the Aubin--Nitsche duality argument.
Given any $\phi\in L^2(\Omega_j)$, there exists $w^j\in H^1_0(\Omega_j)$ such that
\begin{eqnarray*}
a(v,w^j) &=& b(v,\phi),\ \ \ \forall v\in H^1_0(\Omega_j).
\end{eqnarray*}
Let $w^j_{h_{k+1}}\in V_{0h_{k+1}}(\Omega_j)$ and $w^j_{H}\in V_{0H}(\Omega_j)$ satisfy
\begin{eqnarray*}
a(v_{h_{k+1}},w^j_{h_{k+1}}) &=& a(v_{h_{k+1}},w^j),\ \ \ \forall v_{h_{k+1}}\in V_{0h_{k+1}}(\Omega_j),\\
a(v_{H},w^j_{H}) &=& a(v_{H},w^j),\ \ \ \ \ \ \forall v_{H}\in V_{0H}(\Omega_j).
\end{eqnarray*}
Then the following equations hold
\begin{eqnarray}\label{Equation_Nitsche}
&&b(\widetilde{u}_{h_{k+1}}^j-u_{h_k},\phi) = a(\widetilde{u}_{h_{k+1}}^j-u_{h_k},w^j_{h_{k+1}})\nonumber\\
&=& b(\lambda_{h_k}u_{h_k},w^j_{h_{k+1}})-a(u_{h_k},w^j_{h_{k+1}})\nonumber\\
&=& b(\lambda_{h_k}u_{h_k}-\lambda u,w^j_{h_{k+1}})+a(P_{h_{k+1}}u-u_{h_k},w^j_{h_{k+1}})\nonumber\\
&=& b(\lambda_{h_k}u_{h_k}-\lambda u,w^j_{h_{k+1}}-w^j_{H})+b(\lambda_{h_k}u_{h_k}-\lambda u,w^j_{H})\nonumber\\
&&\ \ \ \ \ \ +a(P_{h_{k+1}}{u}-u_{h_k},w^j_{h_{k+1}})\nonumber\\
&=& b(\lambda_{h_k}u_{h_k}-\lambda u,w^j_{h_{k+1}}-w^j_{H})+a(P_{h_{k+1}}{u}-u_{h_k},w^j_{h_{k+1}}-w^j_{H}),
\end{eqnarray}
where $V_{0H}(\Omega)\subset V_{0h_k}(\Omega)$ and (\ref{Projection_Problem}), (\ref{Weak_Eigenvalue_Problem}), (\ref{Discrete_Weak_Eigen_Problem}), (\ref{Aux_Problem}) are used in the last equation.
Combining (\ref{Equation_Nitsche}) and the following error estimates
\begin{eqnarray*}
\|w^j-w^j_{h_{k+1}}\|_{1,\Omega_j} \lesssim \rho_{\Omega_j}(h_{k+1}) \|\phi\|_{0,\Omega_j},\ \ \
\|w^j-w^j_{H}\|_{1,\Omega_j} \lesssim \rho_{\Omega_j}(H) \|\phi\|_{0,\Omega_j},
\end{eqnarray*}
we have
\begin{equation}\label{equ:Key5}
\|\widetilde{u}_{h_{k+1}}^j-u_{h_k}\|_{0,\Omega_j}
\lesssim\rho_{\Omega_j}(H)\big(\|u_{h_k}-P_{h_{k+1}}u\|_{1,\Omega_j}+\|\lambda_{h_k}u_{h_k}-\lambda u\|_{0,\Omega_j}\big).
\end{equation}
From (\ref{equ:Key2}) and (\ref{equ:Key5}), for $j = 1,2,\dots,m$, we have
\begin{eqnarray}\label{equ:Key3}
&&\|\widetilde{u}_{h_{k+1}}^j-P_{h_{k+1}}u\|_{1,G_j}\lesssim \rho_{\Omega_j}(H)\|u_{h_k}-P_{h_{k+1}}u\|_{1,\Omega_j}\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ +\|u_{h_k}-P_{h_{k+1}}u\|_{0,\Omega_j}+\|\lambda_{h_k}u_{h_k}-\lambda u\|_{0,\Omega_j}.
\end{eqnarray}
Now, we estimate $\|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u\|_{1,G_{m+1}}$.
From (\ref{Projection_Problem}), (\ref{Weak_Eigenvalue_Problem}) and (\ref{G_m+1_Problem}), we obtain
\begin{equation*}
a(\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u,v)=b(\lambda_{h_k}u_{h_k}-\lambda u,v),\ \ \ \forall v\in V_{0h_{k+1}}(G_{m+1}).
\end{equation*}
For any $v\in V_{0h_{k+1}}(G_{m+1})$, the following estimates hold
\begin{eqnarray}\label{Estimate_G_m+1}
&&\|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u\|_{1,G_{m+1}}^2\nonumber\\
&\lesssim& a(\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u,\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u-v)
+b(\lambda_{h_k}u_{h_k}-\lambda u,v)\nonumber\\
&\lesssim & \|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u\|_{1,G_{m+1}}
\inf_{\chi\in V_{0h_{k+1}}(G_{m+1})}\|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u-\chi\|_{1,G_{m+1}}\nonumber\\
&&+\|\lambda_{h_k} u_{h_k}-\lambda u\|_{-1,G_{m+1}}
\big(\|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u\|_{1,G_{m+1}}\nonumber\\
&& \ \ \ \ \ \
+\inf_{\chi\in V_{0h_{k+1}}(G_{m+1})}\|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u-\chi\|_{1,G_{m+1}}\big).
\end{eqnarray}
Combining (\ref{Estimate_G_m+1}) and the following estimate
\begin{eqnarray*}
\|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u\|_{1/2,\partial G_{m+1}}^2
&\lesssim& \sum_{j=1}^m\|\widetilde{u}_{h_{k+1}}^j-P_{h_{k+1}}u\|_{1/2,\partial G_{j}}^2\nonumber\\
&\lesssim& \sum_{j=1}^m\|\widetilde{u}_{h_{k+1}}^j-P_{h_{k+1}}u\|_{1,G_{j}}^2
\end{eqnarray*}
and Proposition \ref{Proposition_Fractional_Norm}, we have
\begin{eqnarray}\label{equ:Key4}
&&\|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u\|_{1,G_{m+1}}^2\nonumber\\
&\lesssim & \inf_{\chi\in V_{0h_{k+1}}(G_{m+1})}\|\widetilde{u}_{h_{k+1}}^{m+1}-P_{h_{k+1}}u-\chi\|_{1,G_{m+1}}^2
+\|\lambda_{h_k} u_{h_k}-\lambda u\|_{-1,G_{m+1}}^2\nonumber\\
&\lesssim & \|\widetilde{u}_{h_{k+1}}-P_{h_{k+1}}u\|_{1/2,\partial G_{m+1}}^2
+\|\lambda_{h_k} u_{h_k}-\lambda u\|_{0,G_{m+1}}^2\nonumber\\
&\lesssim &
\sum_{j=1}^m\|\widetilde{u}_{h_{k+1}}^j-P_{h_{k+1}}u\|_{1,G_{j}}^2
+\|\lambda_{h_k} u_{h_k}-\lambda u\|_{0,G_{m+1}}^2.
\end{eqnarray}
Combining (\ref{Domain_Decomposition_Property}), (\ref{equ:Key1}), (\ref{equ:Key3})
and (\ref{equ:Key4}) leads to
\begin{eqnarray*}
&&\|\widetilde{u}_{h_{k+1}}-P_{h_{k+1}}u\|_{1,\Omega}^2\nonumber\\
&\lesssim& \sum_{j=1}^m\rho_{\Omega_j}(H)^2\|u_{h_k}-P_{h_{k+1}}u\|_{1,\Omega_j}^2
+\sum_{j=1}^m\|{u}_{h_k}-P_{h_{k+1}}u\|_{0,\Omega_{j}}^2\nonumber\\
&&\ \ \ +\sum_{j=1}^m\|\lambda_{h_k} u_{h_k}-\lambda u\|_{0,\Omega_{j}}^2+\|\lambda_{h_k} u_{h_k}-\lambda u\|_{0,G_{m+1}}^2\nonumber\\
&\lesssim& \rho^2_{\Omega}(H)\|{u}_{h_k}-P_{h_{k+1}}u\|_{1,\Omega}^2+\|{u}_{h_k}-P_{h_{k+1}}u\|_{0,\Omega}^2
+\|\lambda_{h_k} u_{h_k}-\lambda u\|_{0,\Omega}^2\nonumber\\
&\lesssim&\rho^2_{\Omega}(H)\|{u}_{h_k}-u\|_{1,\Omega}^2
+\rho^2_{\Omega}(H)\|u-P_{h_{k+1}}u\|_{1,\Omega}^2+\|{u}_{h_k}-u\|_{0,\Omega}^2\nonumber\\
&&+\|u-P_{h_{k+1}}u\|_{0,\Omega}^2+|\lambda-\lambda_{h_k}|^2\|u\|_{0,\Omega}^2
+\lambda^2\|u_{h_k}- u\|_{0,\Omega}^2.
\end{eqnarray*}
Together with the error estimate of the finite element projection
\begin{eqnarray*}
\|u-P_{h_{k+1}}u\|_{1,\Omega} &\lesssim&\delta_{h_{k+1}}(\lambda)
\end{eqnarray*}
and (\ref{Estimate_lambda_lambda_h_k}), (\ref{equ:Key0}), we have
\begin{eqnarray}\label{Error_tilde_u_h_{k+1}}
\|u-\widetilde{u}_{h_{k+1}}\|_{1,\Omega}&\lesssim&\|u-P_{h_{k+1}}u\|_{1,\Omega}+
|\lambda-\lambda_{h_k}|+\|u-u_{h_k}\|_{0,\Omega}\nonumber\\
&&\ \ \ +\rho_{\Omega}(H)\|u-u_{h_k}\|_{1,\Omega}\nonumber\\
&\lesssim&\rho_{\Omega}(H)\varepsilon_{h_k}(\lambda)+\varepsilon_{h_k}^2(\lambda)
+\delta_{h_{k+1}}(\lambda).
\end{eqnarray}
From (\ref{Error_u_h_{k+1}}) and (\ref{Error_tilde_u_h_{k+1}}), we can obtain (\ref{Estimate_u_u_h_{k+1}}).
We now estimate the error of the eigenpair approximation
$(\lambda_{h_{k+1}},u_{h_{k+1}})$ of problem (\ref{Eigen_Augment_Problem}).
Based on the error estimate theory of eigenvalue problems by finite
element methods (see, e.g., Proposition \ref{Prop:Eigen_Error_Estimate} or
\cite[Theorem 9.1]{BabuskaOsborn}) and the definition of the space $V_{H,h_{k+1}}$,
the following estimates hold
\begin{eqnarray}\label{Error_u_h_{k+1}}
\|u-u_{h_{k+1}}\|_{1,\Omega}&\lesssim& \sup_{w\in M(\lambda)}\inf_{v\in
V_{H,h_{k+1}}}\|w-v\|_{1,\Omega}\lesssim \|u-\widetilde{u}_{h_{k+1}}\|_{1,\Omega},
\end{eqnarray}
and
\begin{eqnarray*}\label{Error_u_h_{k+1}_negative}
\|u-u_{h_{k+1}}\|_{0,\Omega}&\lesssim&\widetilde{\rho}_{\Omega}(H)\|u-u_{h_{k+1}}\|_{1,\Omega},
\end{eqnarray*}
where
\begin{eqnarray*}\label{Eta_a_H}
\widetilde{\rho}_{\Omega}(H)&=&\sup_{f\in V,\|f\|_{0,\Omega}=1}\inf_{v\in
V_{H,h_{k+1}}}\|L^{-1}f-v\|_{1,\Omega} \leq \rho_{\Omega}(H).
\end{eqnarray*}
So we obtain the desired results (\ref{Estimate_u_u_h_{k+1}}) and (\ref{Estimate_u_u_h_{k+1}_zero}), and
the estimate (\ref{Estimate_lambda_lambda_h_{k+1}}) follows from
Proposition \ref{Prop:Rayleigh_Quotient_Error} and (\ref{Estimate_u_u_h_{k+1}}).
\end{proof}
\subsection{Multilevel correction process}
Now we introduce a multilevel local and parallel
scheme based on the one correction step defined in Algorithm
\ref{Algm:One_Step_Correction}. This multilevel method attains the same optimal error
estimate as solving the eigenvalue problem directly in the finest
finite element space.
In order to carry out the multilevel local and parallel
scheme, we define a sequence of triangulations $\mathcal{T}_{h_k}(\Omega)$ of $\Omega$ as follows.
Suppose $\mathcal{T}_{h_1}(\Omega)$ is obtained from $\mathcal{T}_H(\Omega)$ by regular refinement,
and let $\mathcal{T}_{h_k}(\Omega)$ be obtained from $\mathcal{T}_{h_{k-1}}(\Omega)$ via
regular refinement (each element is subdivided into $\beta^d$ congruent elements) such that
$$h_k\approx\frac{1}{\beta}h_{k-1}\ \ \ \ \ {\rm for}\ k\geq 2.$$
Based on this sequence of meshes, we construct the corresponding linear finite element spaces such that
for each $j = 1,2,\cdots,m$
\begin{eqnarray*}
V_{0H}(\Omega_j)\subset V_{0h_1}(\Omega_j)\subset V_{0h_2}(\Omega_j)\subset\cdots\subset V_{0h_n}(\Omega_j)
\end{eqnarray*}
and the following relation of approximation errors holds
\begin{eqnarray}\label{Error_k_k_1}
\delta_{h_k}(\lambda)\approx\frac{1}{\beta}\delta_{h_{k-1}}(\lambda),\ \ \ k=2,\cdots,n.
\end{eqnarray}
\begin{remark}
The relation (\ref{Error_k_k_1}) is reasonable since we can choose
$\delta_{h_k}(\lambda)=h_k\ (k=1,\cdots,n)$. The upper bound
$\delta_{h_k}(\lambda)\lesssim h_k$ always holds, and recently the
lower bound $\delta_{h_k}(\lambda)\gtrsim h_k$ has also been obtained (cf. \cite{LinXieXu}).
\end{remark}
\begin{algorithm}\label{Algm:Multi_Correction}
Multilevel Correction Scheme
\begin{enumerate}
\item Solve the following eigenvalue problem in $V_{0h_1}(\Omega)$:
Find $(\lambda_{h_1},u_{h_1})\in \mathcal{R}\times V_{0h_1}(\Omega)$ such that
$b(u_{h_1},u_{h_1})=1$ and
\begin{equation*}
a(u_{h_1},v_{h_1})=\lambda_{h_1}b(u_{h_1},v_{h_1}),\ \ \ \ \forall v_{h_1}\in V_{0h_1}(\Omega).
\end{equation*}
\item Construct a series of finer finite element
spaces $V_{0h_2}(\Omega_j),\cdots,V_{0h_n}(\Omega_j)$
such that $\rho_{\Omega}(H)\gtrsim
\delta_{h_1}(\lambda)\geq \delta_{h_2}(\lambda)\geq\cdots\geq
\delta_{h_n}(\lambda)$ and (\ref{Error_k_k_1}) holds.
\item Do $k=1,\cdots,n-1$
\begin{itemize}
\item Obtain a new eigenpair approximation
$(\lambda_{h_{k+1}},u_{h_{k+1}})\in \mathcal{R}\times V_{0h_{k+1}}(\Omega)$
by Algorithm \ref{Algm:One_Step_Correction}
\begin{eqnarray*}
(\lambda_{h_{k+1}},u_{h_{k+1}})={\it Correction}(V_{0H}(\Omega),\lambda_{h_k},u_{h_k},V_{0h_{k+1}}(\Omega)).
\end{eqnarray*}
\end{itemize}
end Do
\end{enumerate}
Finally, we obtain an eigenpair approximation $(\lambda_{h_n},u_{h_n})\in\mathcal{R}\times V_{0h_n}(\Omega)$.
\end{algorithm}
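As an illustration only (not the authors' implementation), the control flow of Algorithm \ref{Algm:Multi_Correction} can be sketched as a small driver, where \texttt{solve\_eigen\_coarse} and \texttt{correction} are hypothetical placeholders standing in for Step 1 and for the call to Algorithm \ref{Algm:One_Step_Correction}:

```python
def multilevel_correction(solve_eigen_coarse, correction, n_levels):
    # Step 1: solve the eigenvalue problem on the coarsest space V_{0h_1}
    lam, u = solve_eigen_coarse()
    # Step 3: one correction step per level, k = 1, ..., n-1
    for k in range(1, n_levels):
        lam, u = correction(lam, u, level=k + 1)
    return lam, u

# toy stand-ins: the "correction" halves the eigenvalue error at each level,
# mimicking the contraction behind Theorem Thm:Multi_Correction
exact = 2.0
coarse_solve = lambda: (exact + 0.5, None)
halve_error = lambda lam, u, level: (exact + (lam - exact) / 2.0, u)
lam_n, _ = multilevel_correction(coarse_solve, halve_error, n_levels=6)
assert abs(lam_n - exact) < 0.02  # 0.5 / 2**5 = 0.015625
```

The stubs are purely illustrative; in practice each correction step solves local boundary value problems in parallel and one small eigenvalue problem in the coarse augmented space.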
\begin{theorem}\label{Thm:Multi_Correction}
After implementing Algorithm \ref{Algm:Multi_Correction}, there exists an eigenfunction
$u\in M(\lambda)$ such that the resultant
eigenpair approximation $(\lambda_{h_n},u_{h_n})$ has the following
error estimate
\begin{eqnarray}
\|u-u_{h_n}\|_{1,\Omega} &\lesssim&\delta_{h_n}(\lambda),\label{Multi_Correction_Err_fun1}\\
\|u-u_{h_n}\|_{0,\Omega}&\lesssim&\rho_{\Omega}(H)\delta_{h_n}(\lambda),\label{Multi_Correction_Err_fun0}\\
|\lambda-\lambda_{h_n}|&\lesssim&\delta_{h_n}^2(\lambda),\label{Multi_Correction_Err_eigen}
\end{eqnarray}
under the condition $C\beta\rho_{\Omega}(H)<1$ for some constant $C$.
\end{theorem}
\begin{proof}
Based on Proposition \ref{Prop:Eigen_Error_Estimate}, there exists an eigenfunction $u\in M(\lambda)$ such that
\begin{eqnarray}
|\lambda-\lambda_{h_1}| &\lesssim & \delta_{h_1}^2(\lambda),\label{Initial_Error_Eigenvalue}\\
\|u-u_{h_1}\|_{1,\Omega} &\lesssim& \delta_{h_1}(\lambda),\label{Initial_Error_Eigenfunc_1}\\
\|u-u_{h_1}\|_{0,\Omega}&\lesssim& \rho_{\Omega}(h_1)\delta_{h_1}(\lambda).\label{Initial_Error_Eigenfunc_0}
\end{eqnarray}
Let $\varepsilon_{h_1}(\lambda):=\delta_{h_1}(\lambda)$. From
(\ref{Initial_Error_Eigenvalue})-(\ref{Initial_Error_Eigenfunc_0}) and
Theorem \ref{Thm:Error_Estimate_One_Step_Correction}, we have
\begin{eqnarray*}
\varepsilon_{h_{k+1}}(\lambda)
&\lesssim&\rho_{\Omega}(H)\varepsilon_{h_k}(\lambda)
+\varepsilon_{h_k}^2(\lambda)+\delta_{h_{k+1}}(\lambda)\\
&\lesssim&\rho_{\Omega}(H)\varepsilon_{h_k}(\lambda)+\delta_{h_{k+1}}(\lambda),
\ \ \ \ {\rm for}\ 1\leq k\leq n-1.
\end{eqnarray*}
Here we used induction together with the condition $\rho_{\Omega}(H)\gtrsim\delta_{h_1}(\lambda)\geq
\delta_{h_2}(\lambda)\geq\cdots\geq \delta_{h_n}(\lambda)$.
Unwinding this recursive relation, we obtain
\begin{eqnarray}\label{varepsilon_h_n}
\varepsilon_{h_{n}}(\lambda)
&\lesssim&\rho_{\Omega}(H)\varepsilon_{h_{n-1}}(\lambda)+\delta_{h_{n}}(\lambda)\nonumber\\
&\lesssim&\rho^2_{\Omega}(H)\varepsilon_{h_{n-2}}(\lambda)+
\rho_{\Omega}(H)\delta_{h_{n-1}}(\lambda)+\delta_{h_{n}}(\lambda)\nonumber\\
&\lesssim&\sum\limits_{k=1}^{n}(\rho_{\Omega}(H))^{n-k}\delta_{h_k}(\lambda).
\end{eqnarray}
Based on the proof of Theorem \ref{Thm:Error_Estimate_One_Step_Correction}, (\ref{Error_k_k_1})
and (\ref{varepsilon_h_n}), the final eigenfunction approximation $u_{h_n}$ has the error estimate
\begin{eqnarray*}\label{Error_u_h_n_Multi_Correction}
\|u-u_{h_n}\|_{1,\Omega}&\lesssim&\varepsilon_{h_{n}}(\lambda)
\lesssim \sum_{k=1}^n(\rho_{\Omega}(H))^{n-k}\delta_{h_k}(\lambda)\\
&=&\sum_{k=1}^n\big(\beta\rho_{\Omega}(H)\big)^{n-k}\delta_{h_n}(\lambda)
\lesssim \frac{\delta_{h_n}(\lambda)}{1-\beta\rho_{\Omega}(H)}\\
&\lesssim&\delta_{h_n}(\lambda).
\end{eqnarray*}
The desired results (\ref{Multi_Correction_Err_fun0}) and
(\ref{Multi_Correction_Err_eigen}) can be proved in the same way as in the proof
of Theorem \ref{Thm:Error_Estimate_One_Step_Correction}.
\end{proof}
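The geometric-sum argument in (\ref{varepsilon_h_n}) can be checked numerically; the constants below are illustrative choices, not values from the paper:

```python
beta, rho, n = 2.0, 0.2, 8                   # requires beta*rho < 1
delta = [2.0 ** (-k) for k in range(n + 1)]  # delta_{h_k} ~ (1/beta)^k; index 0 unused
eps = delta[1]                               # eps_{h_1} = delta_{h_1}
for k in range(1, n):
    eps = rho * eps + delta[k + 1]           # eps_{k+1} <= rho*eps_k + delta_{k+1}
# summing the geometric series (beta*rho)^j gives eps_n <= delta_n / (1 - beta*rho)
assert eps <= delta[n] / (1 - beta * rho)
```

This mirrors the estimate $\varepsilon_{h_n}(\lambda)\lesssim\delta_{h_n}(\lambda)/(1-\beta\rho_{\Omega}(H))$ derived above.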
\section{Work estimate of algorithm}
In this section, we turn our attention to the estimate of the computational work
of Algorithm \ref{Algm:Multi_Correction}. We will show that with
Algorithm \ref{Algm:Multi_Correction}, solving the eigenvalue problem requires almost the
same work as solving the boundary value problem by the local and parallel finite element method.
First, we define the dimension of each level linear
finite element space as
\begin{equation*}
N_k^j:={\rm dim}V_{0h_k}(\Omega_j)\text{ and }N_k:={\rm dim}V_{0h_k}(\Omega),\ \
k=1,\cdots,n,\ j=1,\cdots,m+1.
\end{equation*}
Then we have
\begin{equation}\label{relation_dimension}
N_k^j \thickapprox\Big(\frac{1}{\beta}\Big)^{d(n-k)}N_n^j\ \ {\rm and}\ \
N_k^j\approx \frac{N_k}{m},\ \ \ k=1,\cdots, n.
\end{equation}
\begin{theorem}
Assume that solving the eigenvalue problem in the coarsest spaces $V_{0H}(\Omega)$ and $V_{0h_1}(\Omega)$ requires work
$\mathcal{O}(M_H)$ and $\mathcal{O}(M_{h_1})$, respectively,
and that solving the boundary value problem in $V_{h_k}(\Omega_j)$ and $V_{h_k}(G_{m+1})$ requires work
$\mathcal{O}(N_k^j)$ and $\mathcal{O}(N_k^{m+1})$, respectively,
for all $k=1,2,\cdots,n$ and $j = 1,2,\cdots,m$.
Then the work involved in Algorithm \ref{Algm:Multi_Correction} is
$\mathcal{O}(N_n/m+M_H\log N_n+M_{h_1})$ for each processor.
Furthermore, the complexity in each processor
will be $\mathcal{O}(N_n/m)$ provided $M_H\ll N_n/m$ and $M_{h_1}\leq N_n/m$.
\end{theorem}
\begin{proof}
Let $W_k$ denote the work performed by each processor in the one correction step on
the $k$-th finite element space $V_{h_k}$. Then, by the assumptions, we have
\begin{eqnarray}\label{work_k}
W_k&=&\mathcal{O}(N_k/m+M_H)\ \ \ \ {\rm for}\ k\geq 2.
\end{eqnarray}
Iterating (\ref{work_k}) and using the fact (\ref{relation_dimension}), we obtain
\begin{eqnarray}\label{Work_Estimate}
&&\text{The total work in any processor}\leq\sum_{k=1}^nW_k\nonumber\\
&=& \mathcal{O}\Big(M_{h_1}+\sum_{k=2}^n\big(N_k/m+M_H\big)\Big)\nonumber\\
&=&\mathcal{O}\Big(\sum_{k=2}^nN_k/m+(n-1)M_H+M_{h_1}\Big)\nonumber\\
&=&\mathcal{O}\Big(\sum_{k=2}^n\big(\frac{1}{\beta}\big)^{d(n-k)}N_n/m+(n-1)M_H+M_{h_1}\Big)\nonumber\\
&=&\mathcal{O}(N_n/m+M_H\log N_n+M_{h_1}).
\end{eqnarray}
This is the desired result $\mathcal{O}(N_n/m+M_H\log N_n+M_{h_1})$, and the
estimate $\mathcal{O}(N_n/m)$ follows from the conditions $M_H\ll N_n/m$ and $M_{h_1}\leq N_n/m$.
\end{proof}
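The geometric series behind (\ref{Work_Estimate}) can be checked directly; the parameter values below are illustrative only:

```python
beta, d, n, m = 2, 2, 10, 4            # illustrative values only
N_n = beta ** (d * n)                  # finest-level dimension (schematic)
# per-processor correction work: sum_{k=2}^{n} N_k/m with N_k ~ (1/beta)^{d(n-k)} N_n
work = sum((1.0 / beta) ** (d * (n - k)) * N_n / m for k in range(2, n + 1))
# the geometric series is bounded independently of n, so the total is O(N_n/m)
assert N_n / m <= work <= (N_n / m) / (1 - beta ** (-d))
```

The point is that the coarser levels together cost only a bounded multiple of the finest-level work $N_n/m$.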
\begin{remark}
The linear complexity $\mathcal{O}(N_k^j)$ and $\mathcal{O}(N_k^{m+1})$
can be achieved by the multigrid method (see, e.g., \cite{Bramble,BrambleZhang,Hackbush,McCormick,Xu}).
\end{remark}
\section{Numerical result}\label{Numerical_Result_Section}
In this section, we give two numerical examples to illustrate the
efficiency of the multilevel correction algorithm {(Algorithm \ref{Algm:Multi_Correction})}
proposed in this paper.
\begin{example}\label{Example_1}
In this example, the eigenvalue problem (\ref{Weak_Eigenvalue_Problem}) is solved on the square $\Omega=(-1,1)\times(-1,1)$ with $a(u,v) = \int_{\Omega}\nabla u\cdot\nabla v\mathrm{d}\Omega$
and ${b(u,v)} = \int_{\Omega}u v \mathrm{d}\Omega$.
\end{example}
As in Figure \ref{fig:ComputeDomain}, we first divide the domain $\Omega$ into
four disjoint subdomains $D_1$, $\cdots$, $D_4$ such that
$\bigcup_{j=1}^4\bar{D}_j=\bar{\Omega}$ and $D_i\cap D_j=\emptyset$ for $i\neq j$,
then enlarge each $D_j$ to obtain $\Omega_j$ such that
$G_j\subset\subset D_j \subset\Omega_j\subset \Omega$ for $j=1,2,3,4$, and
\begin{equation*}
G_{5} = \Omega\setminus (\cup_{j=1}^4\bar{G_j}).
\end{equation*}
\begin{figure}[htb]
\centering\includegraphics[width=8cm,height=4cm]{ComputeDomain.ps}
\caption{$\bigcup_{j=1}^4\bar{D}_j=\bar{\Omega}$, $G_{5} = \Omega\setminus (\cup_{j=1}^4\bar{G_j})$ }
\label{fig:ComputeDomain}
\end{figure}
The sequence of finite element spaces is constructed by
using linear or quadratic elements on the nested sequence of triangulations produced by
regular refinement with $\beta =2$ (connecting the midpoints of each edge).
Algorithm \ref{Algm:Multi_Correction} is applied to solve the eigenvalue problem. If the linear element is used,
then from Theorem \ref{Thm:Multi_Correction} we have the following error estimates for the eigenpair approximation
\begin{eqnarray*}
|\lambda_{h_n}-\lambda| \lesssim h_n^2,\ \ \|u_{h_n}-u\|_{1,\Omega} \lesssim h_n
\end{eqnarray*}
which means the multilevel correction method also attains the optimal convergence order.
The numerical results for the first five eigenvalues and the $1$-st and $4$-th eigenfunctions (which are simple)
obtained by the linear finite element method with five levels of grids are shown in
Tables \ref{tab:num_res_eva} and \ref{tab:num_res_eve}.
It is observed from Tables \ref{tab:num_res_eva} and \ref{tab:num_res_eve} that the
numerical results confirm the efficiency of the proposed algorithm.
\begin{table}[]
\centering
\caption{The errors for the first $5$ eigenvalue approximations}
\label{tab:num_res_eva}
\begin{tabular}{c|ccccc}
\hline
Eigenvalues & $|\lambda-\lambda_{h_1}|$ & $|\lambda-\lambda_{h_2}|$ & $|\lambda-\lambda_{h_3}|$ & $|\lambda-\lambda_{h_4}|$ & $|\lambda-\lambda_{h_5}|$\\
\hline
1-st & 0.073555 & 0.018534 & 0.004651 & 0.001164 & 0.000291 \\
Order & -- & 1.988649 & 1.994561 & 1.998450 & 2.000000 \\
\hline
2-nd & 0.426525 & 0.106936 & 0.026747 & 0.006689 & 0.001673 \\
Order & -- & 1.995883 & 1.999299 & 1.999515 & 1.999353 \\
\hline
3-rd & 0.426534 & 0.106939 & 0.026748 & 0.006689 & 0.001673 \\
Order & -- & 1.995873 & 1.999285 & 1.999569 & 1.999353 \\
\hline
4-th & 1.078632 & 0.267624 & 0.066859 & 0.016717 & 0.004180\\
Order & -- & 2.010923 & 2.001014 & 1.999806 & 1.999741 \\
\hline
5-th & 1.490468 & 0.385000 & 0.097106 & 0.024349 & 0.006093 \\
Order & -- & 1.952835 & 1.987226 & 1.995698 & 1.998638 \\
\hline
\end{tabular}
\end{table}
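With $\beta=2$, the ``Order'' rows are presumably computed as $\log_2$ of consecutive error ratios; a quick check on the first eigenvalue row of Table \ref{tab:num_res_eva}:

```python
import math

# first eigenvalue row of the table; with beta = 2 the "Order" entries
# should equal log_2 of consecutive error ratios
errs = [0.073555, 0.018534, 0.004651, 0.001164, 0.000291]
orders = [math.log2(e0 / e1) for e0, e1 in zip(errs, errs[1:])]
assert all(abs(o - 2.0) < 0.05 for o in orders)  # second-order convergence
```

The computed values $1.9886, 1.9946, 1.9984, 2.0000$ match the table entries.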
\begin{table}[]
\centering
\caption{The errors for the simple ($1$-st and $4$-th) eigenfunction approximations}\label{tab:num_res_eve}
\begin{tabular}{c|ccccc}
\hline
{\footnotesize Eigenfunctions} & {\footnotesize$\|u-u_{h_1}\|_{1,\Omega}$} & {\footnotesize$\|u-u_{h_2}\|_{1,\Omega}$} & {\footnotesize$\|u-u_{h_3}\|_{1,\Omega}$} &
{\footnotesize$\|u-u_{h_4}\|_{1,\Omega}$} & {\footnotesize$\|u-u_{h_5}\|_{1,\Omega}$}\\
\hline
1-st & 0.269991 & 0.135956 & 0.068195 & 0.034119 & 0.017064 \\
Order & -- & 0.989771 & 0.995402 & 0.999091 & 0.999619 \\
\hline
4-th & 1.025704 & 0.514925 & 0.259424 & 0.129254 & 0.064645 \\
Order & -- & 0.994180 & 0.989050 & 1.005103 & 0.999598 \\
\hline
\end{tabular}
\end{table}
Next we discuss the effect of $\delta$ and the coarsest mesh size $H$ on
the numerical results of Algorithm \ref{Algm:Multi_Correction}. Figure \ref{fig:P1HD}
shows the errors for different choices of $\delta$ and $H$ with the linear finite element method.
From Figure \ref{fig:P1HD}, we find that Algorithm \ref{Algm:Multi_Correction} obtains the optimal
convergence order when $H\leq 0.25$ and $\delta\geq 0.1$, which are mild requirements.
\begin{figure}[htb]
\centering
\includegraphics[width=5cm,height=5cm]{P1H05D.ps}
\includegraphics[width=5cm,height=5cm]{P1D05H.ps}
\caption{The error estimate for the first $5$ eigenvalue approximations by the linear
element: The left subfigure is for $H=0.5$ and $\delta= 0.05$, $0.1$, $0.2$. The right
subfigure is for $\delta=0.05$ and $H=0.5$, $0.25$, $0.125$}\label{fig:P1HD}
\end{figure}
The situation improves further when we use the quadratic finite element method (see Figure \ref{fig:P2HD}).
For the quadratic finite element, the convergence order (fourth order) is always optimal
even when $\delta$ is very small.
\begin{figure}[htb]
\centering
\includegraphics[width=5cm,height=5cm]{P2H25D.ps}
\includegraphics[width=5cm,height=5cm]{P2D05H.ps}
\caption{The error estimate for the first $5$ eigenvalue approximations by the quadratic
element: The left subfigure is for $H=0.25$ and $\delta=0.05$, $0.1$, $0.2$. The right
subfigure is for $\delta=0.05$ and $H=0.25$, $0.125$, $0.0625$}\label{fig:P2HD}
\end{figure}
\begin{example}\label{Example_2}
In the second example, we solve the eigenvalue problem (\ref{Weak_Eigenvalue_Problem})
using linear and quadratic element on the square $\Omega=(-1,1)\times(-1,1)$
with $a(u,v) = \int_{\Omega}A\nabla u\cdot\nabla v\mathrm{d}\Omega$,
$b(u,v) = \int_{\Omega}\phi u v\mathrm{d}\Omega$ and
\begin{equation*}
A = \begin{pmatrix} e^{1+x^2} & e^{xy} \\ e^{xy} & e^{1+y^2} \end{pmatrix} \ \
\text{ and } \ \ \phi = (1+x^2)(1+y^2).
\end{equation*}
\end{example}
Since the exact eigenvalues are not known, we use sufficiently accurate approximations
$[17.982932, 33.384973, 38.381968, 47.670103, 66.874113, 68.323961]$,
obtained by the extrapolation method, as the first $6$ ``exact'' eigenvalues to investigate the errors.
\begin{figure}[htb]
\centering
\includegraphics[width=5cm,height=5cm]{P1D1H1.ps}
\includegraphics[width=5cm,height=5cm]{P2D1H1.ps}
\caption{The error estimate for the first $6$ eigenvalue approximations with
$H=0.1$ and $\delta=0.1$: The left subfigure is for linear element and the right
subfigure is for quadratic element}\label{fig:P1P2}
\end{figure}
Figure \ref{fig:P1P2} shows the corresponding numerical results
for the first $6$ eigenvalues by the linear and quadratic finite element methods, respectively.
Here, we use four levels of grids for the numerical experiments. The results in Figure \ref{fig:P1P2}
again confirm the efficiency of the algorithm proposed in this paper.
\section{Concluding remarks}
In this paper, we propose a new type of multilevel local and parallel method based on multigrid discretization
to solve eigenvalue problems. The idea is to use the multilevel correction method
to transform the solution of the eigenvalue problem into a series of solutions of corresponding boundary value
problems with the local and parallel method. As indicated by the numerical examples, Algorithm
\ref{Algm:Multi_Correction} for the simple eigenvalue case can be extended to a corresponding version
for multiple eigenvalue cases; for more information, please refer to \cite{Xie_Nonconforming}.
Furthermore, the framework here can also be coupled with the adaptive refinement technique,
and the ideas can be extended to other types of linear and nonlinear eigenvalue problems.
These will be investigated in our future work.
| {
"timestamp": "2014-01-21T02:16:15",
"yymm": "1401",
"arxiv_id": "1401.4969",
"language": "en",
"url": "https://arxiv.org/abs/1401.4969",
"abstract": "A local and parallel algorithm based on the multilevel discretization is proposed in this paper to solve the eigenvalue problem by the finite element method. With this new scheme, solving the eigenvalue problem in the finest grid is transferred to solutions of the eigenvalue problems on the coarsest mesh and a series of solutions of boundary value problems by using the local and parallel algorithm. The computational work in each processor can reach the optimal order. Therefore, this type of multilevel local and parallel method improves the overall efficiency of solving the eigenvalue problem. Some numerical experiments are presented to validate the efficiency of the new method.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Local and Parallel Finite Element Algorithm Based On Multilevel Discretization for Eigenvalue Problem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.97112909472487,
"lm_q2_score": 0.7310585786300049,
"lm_q1q2_score": 0.7099522556558068
} |
https://arxiv.org/abs/1301.4157 | On the Product Rule for Classification Problems | We discuss theoretical aspects of the product rule for classification problems in supervised machine learning for the case of combining classifiers. We show that (1) the product rule arises from the MAP classifier supposing equivalent priors and conditional independence given a class; (2) under some conditions, the product rule is equivalent to minimizing the sum of the squared distances to the respective centers of the classes related with different features, such distances being weighted by the spread of the classes; (3) observing some hypothesis, the product rule is equivalent to concatenating the vectors of features. | \section{Introduction}\label{sec:introduction}
\vspace{-0.25cm}
With the advance of the Machine Learning field, and the discovery
of many different techniques,
the subject of \emph{combining multiple learners} \cite{Alpaydin2004} eventually
drew attention, in particular the problem of \emph{combining classifiers}.
Many different methods appeared, and soon they
were compared in terms of efficiency in solving problems.
The \emph{product rule} has been present in some of these works
(e.g., \cite{Alexandre2001,Kittler1998,Breukelen1998,Duin2000,Cicconet2010,Cicconet2010B,Li07}),
in contexts ranging from the accuracy of the different combination rules
to some analytical properties of the different methods.
In \cite{Breukelen1998} it was shown that,
in the context of handwritten digit recognition, the product rule performs better
for combining linear classifiers.
In general, however, the product rule does not stand out from competitors \cite{Duin2000}.
For the problem of combining audio and video signals in guitar-chord recognition,
the product rule is better than the sum rule \cite{Cicconet2010},
but on the problem of identity verification using face and voice profiles, the
sum rule wins \cite{Kittler1998}.
In the theoretical realm, \cite{Alexandre2001} shows that
for problems with two classes, the sum and product rules are equivalent
when using two classifiers and the sum of the estimates of the a posteriori probabilities
is equal to one. In \cite{Kittler1998}, the product rule is derived from the hypothesis of
conditional statistical independence between different representations of the data.
There are also some intuitive explanations for the choice of the product rule,
as for instance the fact that the product (``AND'' operator) is preferred over the
sum rule (``OR'' operator) because it enforces all qualities defined by the measures at once
\cite{Mertens2007}.
In this text, analytical properties of the product rule are further analyzed,
in the context of two or more classifiers.
We show that
(1)~the product rule arises from the MAP classifier
supposing equivalent priors and conditional independence given a class;
(2)~under some conditions, the product rule is equivalent to minimizing the sum
of the squared distances to the respective centers of the classes related with
different features, such distances being weighted by the spread of the classes;
(3)~observing some hypothesis, the product rule is equivalent to concatenating the vectors of features.
Our work extends the current theoretical understanding of the product rule provided by Alexandre \emph{et al.} \cite{Alexandre2001}
and Kittler \emph{et al.} \cite{Kittler1998}, as was done for the sum rule by Li and Zong \cite{Li07}.
\section{Theoretical Facts}\label{sec:tf}
\vspace{-0.25cm}
\begin{definition}\label{def:pr}
Let $X,Y$ be (continuous) random variables corresponding to $2$ distinct feature vectors, and
$C$ the (discrete) random variable corresponding to the class, whose output can be $c_1,...,c_K$.
For any $Z \in \{X,Y\}$ and $k \in \{1,\ldots,K\}$, let
$p_{Z,k}$ be a function that outputs the \emph{confidence} that the class
is $c_k$ considering that the features-variable is $Z$.
Supposing that the features are $X = x$ and $Y = y$,
the \emph{product rule} for classification will assign $C = c_{\hat{k}}$ provided
\begin{equation*}
p_{X,\hat{k}}(x) \cdot p_{Y,\hat{k}}(y) = \max_{k = 1,...,K}p_{X,k}(x) \cdot p_{Y,k}(y)\text{ .}
\end{equation*}
\end{definition}
In this definition and in the following results we use, for simplicity,
only two random variables, named $X$ and $Y$.
We could instead have used a set of $N$ random variables, say $X^1,...,X^N$,
but that would unnecessarily overload the notation.
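In code, the product rule is simply an argmax over per-class products of confidences; the following generic sketch (illustrative only, not from any particular library) works for any number of feature views:

```python
def product_rule(confidences):
    """confidences[z][k]: confidence of classifier z that the class is c_k."""
    K = len(confidences[0])
    scores = [1.0] * K
    for view in confidences:
        for k in range(K):
            scores[k] *= view[k]
    return max(range(K), key=lambda k: scores[k])  # index k-hat of the winning class

# two views, two classes: view 1 mildly favors c_1, view 2 strongly favors c_2
assert product_rule([[0.6, 0.4], [0.3, 0.7]]) == 1  # 0.18 vs 0.28
assert product_rule([[0.9, 0.1], [0.4, 0.6]]) == 0  # 0.36 vs 0.06
```

Note that a single near-zero confidence in any view vetoes a class, which is the ``AND''-like behavior mentioned in the introduction.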
\begin{definition}
Let $(X,Y)$ be the random variable obtained by concatenating
the features $X$ and $Y$, and
$p(\cdot| C = c_k)$
the density function for the variable $(X,Y)$ conditioned to $C = c_k$.
We will denote the value of this function at the point
$(x,y)$ by $p(X = x, Y = y | C = c_k)$.
Let $P(C = c_k)$ be the \emph{prior} probability that
the class is $C = c_k$.
Finally, let us define $p_{(X,Y),k}(x,y)$ as follows:
\begin{equation*}
p_{(X,Y),k}(x,y) = p(X=x,Y=y | C = c_k) \cdot P(C = c_k)\text{ .}
\end{equation*}
Given a sampled value $(X,Y) = (x,y)$,
the \emph{MAP} (Maximum a Posteriori) classifier will assign $C = c_{\hat{k}}$ provided
\begin{equation*}
p_{(X,Y),\hat{k}}(x,y) = \max_{k = 1,...,K}p_{(X,Y),k}(x,y)
\end{equation*}
\end{definition}
\begin{fact}\label{fact:map}
When using the MAP classifier,
the product rule arises under the hypotheses of (1)
conditional independence given the class and (2)
equal prior probabilities for the classes.
\end{fact}
\begin{proof}
The MAP classifier maximizes, over $k=1,\ldots,K$, the quantity
\begin{equation*}
p(X = x, Y = y | C = c_k) \cdot P(C = c_k)\text{ .}
\end{equation*}
\noindent
Now hypothesis 1 means
\begin{eqnarray*}
&p(X = x, Y = y | C = c_k) =& \\
&= p(X = x | C = c_k)\cdot p(Y = y | C = c_k)\text{ ,}&
\end{eqnarray*}
\noindent
and hypothesis 2 implies that $P(C = c_{\tilde{k}}) = P(C = c_{\hat{k}})$ for all $\tilde{k},\hat{k} = 1,...,K$.
Therefore
\begin{eqnarray*}
&\max_{k = 1,...,K}p_{(X,Y),k}(x,y) =&\\
&= \max_{k = 1,...,K}p(X = x | C = c_k)\cdot p(Y = y | C = c_k)\text{ ,}&
\end{eqnarray*}
which is the product rule (see definition~\ref{def:pr}) for
$p_{X,k}(x) = p(X = x | C = c_k)$ and $p_{Y,k}(y) = p(Y = y | C = c_k)$.
\end{proof}
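Fact \ref{fact:map} can be illustrated on a toy discrete model in which the joint likelihood factorizes and priors are equal (the probability tables below are invented for illustration):

```python
# class-conditional tables p(x|c) and p(y|c) for binary x, y and classes c in {0, 1}
px = {0: [0.7, 0.3], 1: [0.2, 0.8]}
py = {0: [0.6, 0.4], 1: [0.5, 0.5]}
prior = {0: 0.5, 1: 0.5}               # hypothesis 2: equal priors

def map_rule(x, y):                     # MAP with factorized joint (hypothesis 1)
    return max(px, key=lambda c: px[c][x] * py[c][y] * prior[c])

def product_rule_cls(x, y):             # product rule, no priors
    return max(px, key=lambda c: px[c][x] * py[c][y])

# under both hypotheses, the two rules agree on every input
assert all(map_rule(x, y) == product_rule_cls(x, y) for x in (0, 1) for y in (0, 1))
```

The equal prior is a common constant factor in all $K$ scores, so it cannot change the argmax.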
\begin{fact}
For each $Z \in \{X,Y\}$,
let $d_Z$ be the (finite) dimension of the variable $Z$,
$I_{d_Z}$ the identity matrix of dimensions $d_Z \times d_Z$,
and $\Sigma_{Z,k} = \sigma_{Z,k}^2I_{d_Z}$ (where $\sigma_{Z,k}$ is a positive number).
Also, for each $k=1,\ldots,K$, let $\mu_{Z,k}$ be a fixed point in $\mathbb{R}^{d_Z}$.
Defining confidence functions (see definition~\ref{def:pr})
\begin{eqnarray}
p_{X,k}(x) = e^{-\frac{1}{2}(x-\mu_{X,k})^\top\Sigma_{X,k}^{-1}(x-\mu_{X,k})}\text{ , and} \\
p_{Y,k}(y) = e^{-\frac{1}{2}(y-\mu_{Y,k})^\top\Sigma_{Y,k}^{-1}(y-\mu_{Y,k})}\text{ ,}
\end{eqnarray}
\noindent
the product rule is equivalent to
\begin{equation*}
\min_{k = 1,...,K} {\frac{1}{\sigma_{X,k}^2}\|x-\mu_{X,k}\|^2
+\frac{1}{\sigma_{Y,k}^2}\|y-\mu_{Y,k}\|^2}\text{ .}
\label{eq:prod_rule}
\end{equation*}
\noindent
That is, supposing gaussian-like classifiers with axis-aligned covariances,
the product rule tries to minimize the sum of the squared distances
to the respective ``centers'' of the classes for $X$ and $Y$,
such distances being weighted by the inverse of the ``spread'' of
the classes (an intuitively reasonable strategy, in fact).
\end{fact}
\begin{proof}
Under the mentioned hypothesis, we have
\begin{eqnarray*}
&\max_{k = 1,...,K}p_{X,k}(x) \cdot p_{Y,k}(y) =&\\
&= \max_{k = 1,...,K} e^{-\left(\frac{1}{2\sigma_{X,k}^2}\|x-\mu_{X,k}\|^2
+\frac{1}{2\sigma_{Y,k}^2}\|y-\mu_{Y,k}\|^2\right)}\text{ .}&
\end{eqnarray*}
Since $t\mapsto -2\log t$ is strictly decreasing, maximizing the second member of the above equality is equivalent to
\begin{eqnarray*}
&\min_{k = 1,...,K} {\frac{1}{\sigma_{X,k}^2}\|x-\mu_{X,k}\|^2
+\frac{1}{\sigma_{Y,k}^2}\|y-\mu_{Y,k}\|^2}\text{ .}&
\end{eqnarray*}
\end{proof}
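The equivalence of the two decision rules can be verified numerically on random inputs (all constants below are arbitrary illustrative choices):

```python
import math
import random

random.seed(0)
K = 3
sx = [0.5, 1.0, 2.0]   # sigma_{X,k}, invented values
sy = [1.5, 0.7, 1.0]   # sigma_{Y,k}
mu_x = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(K)]
mu_y = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(K)]
sq = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))

for _ in range(200):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    y = [random.gauss(0, 1), random.gauss(0, 1)]
    # product of the two confidence functions p_{X,k}(x) * p_{Y,k}(y)
    prod = [math.exp(-sq(x, mu_x[k]) / (2 * sx[k] ** 2))
            * math.exp(-sq(y, mu_y[k]) / (2 * sy[k] ** 2)) for k in range(K)]
    # weighted sum of squared distances from the statement of the fact
    dist = [sq(x, mu_x[k]) / sx[k] ** 2 + sq(y, mu_y[k]) / sy[k] ** 2
            for k in range(K)]
    assert max(range(K), key=lambda k: prod[k]) == min(range(K), key=lambda k: dist[k])
```

Since $\mathrm{prod}_k = e^{-\mathrm{dist}_k/2}$ and the exponential is strictly decreasing, the argmax and argmin necessarily coincide.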
\begin{fact}\label{fact:concatenation}
Let us now define confidence functions as follows:
\begin{equation*}
p_{X,k}(x) = \frac{1}{(2\pi)^{d_X}|\Sigma_{X,k}|^{1/2}}e^{-\frac{1}{2}(x-\mu_{X,k})^\top\Sigma_{X,k}^{-1}(x-\mu_{X,k})}\text{ , and} \label{eq:conc:1}
\end{equation*}
\begin{equation*}
p_{Y,k}(y) = \frac{1}{(2\pi)^{d_Y}|\Sigma_{Y,k}|^{1/2}}e^{-\frac{1}{2}(y-\mu_{Y,k})^\top\Sigma_{Y,k}^{-1}(y-\mu_{Y,k})}\text{ ,}\label{eq:conc:2}
\end{equation*}
where, for each $Z\in \{X,Y\}$,
$|\Sigma_{Z,k}|$ is the determinant of $\Sigma_{Z,k}$.
Let us suppose also that, conditioned on the class $c_k$, $X$ and $Y$ are uncorrelated; that is,
with $\Sigma_k$ the covariance of $(X,Y)|C = c_k$, we can write
\begin{equation*}
\Sigma_k =
\left[
\begin{array}{cc}
\Sigma_{X,k} & 0 \\
0 & \Sigma_{Y,k}
\end{array}
\right] \text{ ,}
\end{equation*}
where, for each $Z \in \{X,Y\}$, $\Sigma_{Z,k}$ is the covariance of $Z | C = c_k$.
Then, putting $\mu_k = (\mu_{X,k}, \mu_{Y,k})$, we have
\begin{eqnarray*}
&p_{X,k}(x)\cdot p_{Y,k}(y) =&\\
&= \frac{1}{(2\pi)^{d_X+d_Y}|\Sigma_k|^{1/2}}
e^{-\frac{1}{2}((x,y)-\mu_k)^\top\Sigma_k^{-1}((x,y)-\mu_k)} \text{ .}&
\end{eqnarray*}
That is, supposing gaussian classifiers, the product rule is equivalent to learning
using the concatenated vectors of features.
\end{fact}
\begin{proof}
The inverse of $\Sigma_k$ is
\begin{equation*}
\Sigma_k^{-1} =
\left[
\begin{array}{cc}
\Sigma_{X,k}^{-1} & 0 \\
0 & \Sigma_{Y,k}^{-1}
\end{array}
\right] \text{ .}
\end{equation*}
This way, the expression
\begin{equation*}
(x-\mu_{X,k})^\top\Sigma_{X,k}^{-1}(x-\mu_{X,k})+(y-\mu_{Y,k})^\top\Sigma_{Y,k}^{-1}(y-\mu_{Y,k})
\end{equation*}
reduces to
\begin{equation*}
((x,y)-\mu_k)^\top\Sigma_k^{-1}((x,y)-\mu_k) \text{ .}
\end{equation*}
Now
\begin{equation*}
\frac{1}{(2\pi)^{d_X}|\Sigma_{X,k}|^{1/2}} \cdot \frac{1}{(2\pi)^{d_Y}|\Sigma_{Y,k}|^{1/2}}
=
\frac{1}{(2\pi)^{d_X+d_Y}|\Sigma_k|^{1/2}} \text{ .}
\end{equation*}
Therefore
\begin{eqnarray*}
&p_{X,k}(x) \cdot p_{Y,k}(y) =&\\
&= \frac{1}{(2\pi)^{d_X+d_Y}|\Sigma_k|^{1/2}}
e^{-\frac{1}{2}((x,y)-\mu_k)^\top\Sigma_k^{-1}((x,y)-\mu_k)} \text{ .}&
\end{eqnarray*}
\end{proof}
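A direct numerical check of this identity for diagonal covariance blocks (the numbers are arbitrary; the normalizing constants follow those written in the text):

```python
import math

def conf(z, mu, var):
    # confidence function with diagonal covariance: exp(-q/2) / ((2*pi)^d |Sigma|^(1/2)),
    # where `var` lists the per-coordinate variances on the diagonal of Sigma
    d = len(z)
    q = sum((a - b) ** 2 / v for a, b, v in zip(z, mu, var))
    return math.exp(-0.5 * q) / ((2 * math.pi) ** d * math.sqrt(math.prod(var)))

x, mu_x, var_x = [0.3, -1.2], [0.0, 0.0], [0.8, 0.8]   # arbitrary values
y, mu_y, var_y = [2.0], [1.0], [1.5]
lhs = conf(x, mu_x, var_x) * conf(y, mu_y, var_y)
rhs = conf(x + y, mu_x + mu_y, var_x + var_y)  # concatenation, block-diagonal Sigma_k
assert abs(lhs - rhs) < 1e-15
```

The exponents add and the determinants multiply, exactly as in the proof above.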
\section{Discussion}
According to Fact~\ref{fact:map}, the product rule arises when maximizing
the posterior under the hypothesis of
equivalent priors and conditional independence given a class.
We have just seen (Fact~\ref{fact:concatenation}) that, supposing only uncorrelatedness
(which is weaker than independence), the product rule appears
as well. But in fact we have used gaussian classifiers,
i.e., we supposed the data was normally distributed.
This is in accordance with the fact that
normality and uncorrelatedness imply independence.
An important consequence of Fact~\ref{fact:concatenation} has to do with the \emph{curse of dimensionality}.
If there is strong evidence that the conditional joint distribution of $(X, Y)$ given any class $C = c_k$
is well approximated by a normal distribution, and that $X|C = c_k$ and $Y |C = c_k$ are uncorrelated, then the product rule is an interesting option, because we do not have to deal with a feature vector of dimension larger than the largest of the dimensions of the original descriptors. Besides, the product rule allows parallelization.
| {
"timestamp": "2013-01-18T02:02:51",
"yymm": "1301",
"arxiv_id": "1301.4157",
"language": "en",
"url": "https://arxiv.org/abs/1301.4157",
"abstract": "We discuss theoretical aspects of the product rule for classification problems in supervised machine learning for the case of combining classifiers. We show that (1) the product rule arises from the MAP classifier supposing equivalent priors and conditional independence given a class; (2) under some conditions, the product rule is equivalent to minimizing the sum of the squared distances to the respective centers of the classes related with different features, such distances being weighted by the spread of the classes; (3) observing some hypothesis, the product rule is equivalent to concatenating the vectors of features.",
"subjects": "Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)",
"title": "On the Product Rule for Classification Problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9711290955604489,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7099522505763406
} |
https://arxiv.org/abs/0712.0864 | A New Error Bound for Shifted Surface Spline Interpolation | A New Error Bound for shifted surface spline interpolation is presented. This error bound probably is the most powerful one up to now. | \section{Introduction}
In the theory of radial basis functions, it's well known that any conditionally positive definite radial function can form an interpolant for any set of scattered data. We make a simple sketch of this process as follows.
Suppose $h$ is a continuous function on $R^{n}$ which is strictly conditionally positive definite of order $m$. For any set of data points $(x_{j},f_{j}),j=1,\ldots,N$, where $X=\{ x_{1},\ldots ,x_{N}\}$ is a subset of $R^{n}$ and the $f_{j}'s$ are real or complex numbers, there is a unique function of the form
\begin{equation}
s(x)=p(x)+\sum_{j=1}^{N}c_{j}h(x-x_{j})
\end{equation}
where $p(x)$ is a polynomial in $P_{m-1}^{n}$ satisfying
\begin{equation}
\sum_{j=1}^{N}c_{j}q(x_{j})=0
\end{equation}
for all polynomials $q$ in $P_{m-1}^{n}$ and
\begin{equation}
p(x_{i})+\sum_{j=1}^{N}c_{j}h(x_{i}-x_{j})=f_{i},\ i=1,\ldots , N
\end{equation}
if $X$ is a determining set for $P_{m-1}^{n}$.
A complete treatment of this topic can be found in \cite{MN1} and many other papers.
The function $s(x)$ is called the $h$-spline interpolant of the data points and is of central importance in the theory of radial basis functions. In this paper $h$ always denotes a radial function in the sense that the value of $h(x)$ is completely determined by the norm $|x|$ of $x$. Here, $P_{m-1}^{n}$ denotes the class of those $n$-variable polynomials of degree not more than $m-1$.
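As a computational illustration of (1)--(3) (a sketch under assumptions, not code from the paper), take $n=1$, the cubic $h(r)=|r|^3$, which is conditionally positive definite of order $m=2$ on $\mathbb{R}$, and four invented data points; the tiny dense solver is included only to keep the example self-contained:

```python
def solve(A, rhs):
    # Gauss-Jordan elimination with partial pivoting (adequate for this tiny system)
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                fac = M[r][col] / M[col][col]
                M[r] = [a - fac * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

h = lambda r: abs(r) ** 3                  # conditionally positive definite of order 2
X = [0.0, 0.3, 0.7, 1.0]                   # data sites (a determining set for P_1^1)
f = [1.0, 2.0, 0.5, -1.0]                  # invented data values
poly = [lambda t: 1.0, lambda t: t]        # basis of P_{m-1}^n = P_1^1
N, q = len(X), len(poly)

# rows enforcing (3): interpolation; rows enforcing (2): moment conditions on the c_j
A = [[h(X[i] - X[j]) for j in range(N)] + [p(X[i]) for p in poly] for i in range(N)]
A += [[p(X[j]) for j in range(N)] + [0.0] * q for p in poly]
coef = solve(A, f + [0.0] * q)
c, b = coef[:N], coef[N:]
s = lambda t: (sum(b[l] * poly[l](t) for l in range(q))
               + sum(c[j] * h(t - X[j]) for j in range(N)))
assert all(abs(s(X[i]) - f[i]) < 1e-8 for i in range(N))  # s interpolates the data
```

The block linear system couples the RBF coefficients $c_j$ with the polynomial part $p$, and the moment conditions (2) make the square system uniquely solvable for distinct sites unisolvent for $P_{m-1}^{n}$.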
In this paper we are mainly interested in a radial function called shifted surface spline defined by
\begin{eqnarray}
h(x)& := &(-1)^{m}(|x|^{2}+c^{2})^{\frac{\lambda}{2}}log(|x|^{2}+c^{2})^{\frac{1}{2}},\ \lambda \in Z_{+},\ m=1+\frac{\lambda}{2},\ c>0, \nonumber \\
& & x\in R^{n},\ \lambda,n\ even,
\end{eqnarray}
where $|x|$ is the Euclidean norm of $x$, and $\lambda,c$ are constants. In fact, the definition of shifted surface spline covers odd dimensions. For odd dimensions, it's of the form
\begin{eqnarray}
h(x)& := & (-1)^{\lceil \lambda -\frac{n}{2} \rceil }(|x|^{2}+c^{2})^{\lambda -\frac{n}{2}},\ n\ odd,\ \lambda \in Z_{+}=\{ 1,2,3,\ldots \} \nonumber \\
& & and\ \lambda >\frac{n}{2}.
\end{eqnarray}
However, this is just the multiquadric, which we treat in another paper \cite{Lu4}, so we do not discuss it here. Instead, we focus on (4) and even dimensions only.
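For concreteness, the radial profile of (4) can be coded as a function of $r=|x|$ (an illustrative sketch; the default parameter values $\lambda=2$, $c=1$ are arbitrary choices):

```python
import math

def shifted_surface_spline(r, lam=2, c=1.0):
    """Radial profile of h in (4) for even lambda:
    (-1)^m (r^2 + c^2)^{lam/2} * log (r^2 + c^2)^{1/2}, with m = 1 + lam/2.
    Note log(t)^{1/2} = (1/2) log t."""
    m = 1 + lam // 2
    t = r * r + c * c
    return (-1) ** m * t ** (lam / 2) * 0.5 * math.log(t)
```

For example, with $\lambda=2$ and $c=1$ the profile vanishes at the origin (since $\log 1=0$) and equals $\log 2$ at $r=1$.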
\subsection{Polynomials and Simplices}
Let $E$ denote an n-dimensional simplex \cite{Fl} with vertices $v_{1},\ldots ,v_{n+1}$. If we adopt barycentric coordinates, then any point $x\in E$ can be written as a convex combination of the vertices:
$$x=\sum_{i=1}^{n+1}\lambda_{i}v_{i},\ \sum_{i=1}^{n+1}\lambda_{i}=1,\ \lambda_{i}\geq 0.$$
We define the ``{\bf equally spaced}'' points of degree k to be those points whose barycentric coordinates are of the form
$$(k_{1}/k,k_{2}/k,\ldots ,k_{n+1}/k),\ k_{i}\ nonnegative\ integers\ and\ k_{1}+\cdots +k_{n+1}=k.$$
It's easily seen that the number of such points is exactly $\dim P_{k}^{n}$, i.e., the dimension of $P_{k}^{n}$. In this section we use $N$ to denote $\dim P_{k}^{n}$.
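The equally spaced points and the counting claim can be illustrated as follows (the enumeration below is a sketch, not an efficient implementation); the count equals $\dim P_{k}^{n}=\binom{n+k}{k}$.

```python
from itertools import product
from math import comb

def equally_spaced_points(n, k):
    """Barycentric coordinates (k_1/k, ..., k_{n+1}/k) with nonnegative
    integers k_i summing to k: the equally spaced points of degree k."""
    return [tuple(ki / k for ki in combo)
            for combo in product(range(k + 1), repeat=n + 1)
            if sum(combo) == k]
```

Each tuple of coordinates sums to 1, and for $n=2$, $k=3$ the routine returns $\binom{5}{3}=10$ points.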
The above-defined equally spaced points can induce a polynomial interpolation process as follows. Let $x_{1},\ldots ,x_{N}$ be the equally spaced points in $E$ of degree k. The associated Lagrange polynomials $l_{i}$ of degree $k$ are defined by the condition $l_{i}(x_{j})=\delta_{ij},\ 1\leq i,j\leq N$. For any continuous map $f\in C(E),\ (\Pi_{k}f)(x):= \sum_{i=1}^{N}f(x_{i})l_{i}(x)$ is its interpolating polynomial. If both spaces are equipped with the supremum norm, the mapping
$$\Pi_{k}:C(E)\rightarrow P_{k}^{n}$$
has a well-known norm
$$\| \Pi_{k}\| =\max _{x}\sum_{i=1}^{N}|l_{i}(x)|$$
which is the maximum value of the famous Lebesgue function. It's easily seen that for any $p\in P_{k}^{n}$,
$$\| p\| _{\infty}:= \max _{x\in E}|p(x)| \leq \| \Pi_{k}\| \max _{1\leq i\leq N}|p(x_{i})|.$$
The next result is important in our construction of the error bound, and we cite it directly from \cite{Bo}.
\begin{lem}
For the above equally spaced points $\{ x_{1},\ldots ,x_{N}\}$, $\| \Pi_{k}\| \leq \left( \begin{array}{c}
2k-1 \\ k
\end{array} \right) $. Moreover, as $n\rightarrow \infty, \ \| \Pi_{k}\| \rightarrow \left( \begin{array}{c}
2k-1 \\ k
\end{array} \right) $.
\end{lem}
We also need the following lemma, which we prove in detail because it plays a crucial role in our development.
\begin{lem}
Let $Q\subseteq R^{n}$ be an n simplex in $R^{n}$ and $Y$ be the set of equally spaced points of degree $k$ in $Q$. Then, for any point $x$ in $Q$, there is a measure $\sigma$ supported on $Y$ such that
$$\int p(y)d\sigma(y)=p(x)$$
for all $p$ in $P_{k}^{n}$, and
$$\int d|\sigma |(y)\leq \left( \begin{array}{c}
2k-1 \\ k
\end{array} \right).$$
\end{lem}
{\bf Proof}. Let $Y=\{ y_{1},\ldots , y_{N}\} $ be the set of equally spaced points of degree $k$ in $Q$. Denote $P_{k}^{n}$ by $V$. For any $x\in Q$, let $\delta_{x}$ be the point-evaluation functional. Define $T:V\rightarrow T(V)\subseteq R^{N}$ by $T(v)=(\delta_{y_{i}}(v))_{y_{i}\in Y}$. Then $T$ is injective. Define $\tilde{\psi}$ on $T(V)$ by $\tilde{\psi}(w)=\delta_{x}(T^{-1}w)$. By the Hahn-Banach theorem, $\tilde{\psi}$ has a norm-preserving extension $\tilde{\psi}_{ext}$ to $R^{N}$. By the Riesz representation theorem, each linear functional on $R^{N}$ can be represented by the inner product with a fixed vector. Thus, there exists $z\in R^{N}$ with
$$\tilde{\psi}_{ext}(w)=\sum_{j=1}^{N}z_{j}w_{j}$$
and $\| z\| _{(R^{N})^{*}}=\| \tilde{\psi}_{ext}\| $. If we adopt the $l_{\infty}$-norm on $R^{N}$, the dual norm will be the $l_{1}$-norm. Thus $\| z\| _{(R^{N})^{*}}=\| z\| _{1}=\| \tilde{\psi}_{ext}\| =\| \tilde{\psi}\| =\| \delta_{x}T^{-1}\|$.
Now, for any $p\in V$, by setting $w=T(p)$, we have
$$\delta_{x}(p)=\delta_{x}(T^{-1}w)=\tilde{\psi}(w)=\tilde{\psi}_{ext}(w)=\sum_{j=1}^{N}z_{j}w_{j}=\sum_{j=1}^{N}z_{j}\delta_{y_{j}}(p).$$
This gives
\begin{equation}
p(x)=\sum_{j=1}^{N}z_{j}p(y_{j})
\end{equation}
where $|z_{1}|+\cdots +|z_{N}|=\| \delta_{x}T^{-1}\|$.
Note that
\begin{eqnarray*}
\| \delta_{x}T^{-1}\| & = & \sup _{\begin{array}{c}
w\in T(V) \\ w\neq 0
\end{array} }\frac{\| \delta_{x}T^{-1}(w)\| }{\| w\| _{R^{N}}}\\
& = & \sup _{\begin{array}{c}
w\in T(V) \\ w\neq 0
\end{array}}\frac{|\delta_{x}p|}{\| T(p)\| _{R^{N}}}\\
& \leq & \sup _{\begin{array}{c}
p\in V \\ p\neq 0
\end{array}}\frac{|p(x)|}{\max _{j=1,\ldots ,N}|p(y_{j})|}\\
& \leq & \sup _{\begin{array}{c}
p\in V \\ p\neq 0
\end{array}}\frac{\| \Pi_{k}\| \max _{j=1,\ldots ,N}|p(y_{j})|}{\max _{j=1,\ldots ,N}|p(y_{j})|}\\
& = & \| \Pi_{k}\| \\
& \leq & \left( \begin{array}{c}
2k-1 \\ k
\end{array} \right) .
\end{eqnarray*}
Therefore $|z_{1}|+\cdots +|z_{N}|\leq \left( \begin{array}{c}
2k-1 \\ k
\end{array} \right)$ and our lemma follows immediately from (6) by letting $\sigma(\{ y_{j}\} )=z_{j},\ j=1,\ldots ,N$. \hspace{10.6cm} $\sharp$
\subsection{Radial Functions and Borel Measures}
Our theory is based on a fundamental fact that any continuous conditionally positive definite radial function corresponds to a unique positive Borel measure. Before discussing this property in detail, we first clarify some symbols and definitions. In this paper ${\cal D}$ denotes the space of all compactly supported and infinitely differentiable complex-valued functions on $R^{n}$. For each function $\phi$ in ${\cal D}$, its Fourier transform is
$$\hat{\phi}(\xi)= \int e^{-i<x,\xi>}\phi(x)dx.$$
Then we have the following lemma which is introduced in \cite{GV} but modified by Madych and Nelson in \cite{MN2}.
\begin{lem}
For any continuous conditionally positive definite function $h$ on $R^{n}$ of order $m$, there exist a unique positive Borel measure $\mu$ on $R^{n}\sim \{ 0\}$ and constants $a_{r},\ |r|=2m$, such that for all $\psi\in {\cal D}$,
\begin{eqnarray}
\int h(x)\psi(x)dx & = & \int \{ \hat{\psi}(\xi)-\hat{\chi}(\xi)\sum_{|r|<2m}D^{r}\hat{\psi}(0)\frac{\xi^{r}}{r!}\} d\mu(\xi) \nonumber \\
& & +\sum_{|r|\leq 2m}D^{r}\hat{\psi}(0)\frac{a_{r}}{r!}
\end{eqnarray}
where for every choice of complex numbers $c_{\alpha},\ |\alpha|=m$,
$$\sum_{|\alpha|=m}\sum_{|\beta|=m}a_{\alpha+\beta}c_{\alpha}\overline{c_{\beta}}\geq 0.$$
Here $\chi$ is a function in ${\cal D}$ such that $1-\hat{\chi}(\xi)$ has a zero of order $2m+1$ at $\xi=0$; both of the integrals
$$\int_{0<|\xi|<1}|\xi|^{2m}d\mu(\xi),\ \int_{|\xi|\geq 1}d\mu(\xi)$$
are finite. The choice of $\chi$ affects the value of the coefficients $a_{r}$ for $|r|<2m$.
\end{lem}
\section{Main Result}
In order to show our main result, we need some lemmas, including Stirling's famous formula.\\
\\
{\bf Stirling's Formula}: $n!\sim \sqrt{2\pi n}(\frac{n}{e})^{n}$.\\
\\
The approximation is very reliable even for small $n$. For example, when $n=10$, the relative error is only $0.83\%$. The larger $n$ is, the better the approximation is. For further details, we refer the reader to \cite{GG} and \cite{GKP}.
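The quoted relative error at $n=10$ can be reproduced directly (an illustrative check):

```python
import math

def stirling(n):
    """Stirling approximation sqrt(2 pi n) (n/e)^n to n!."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# Relative error of the approximation at n = 10 (about 0.83%).
rel_err_10 = (math.factorial(10) - stirling(10)) / math.factorial(10)
```

The error decreases as $n$ grows, consistent with the remark above.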
\begin{lem}
For any positive integer k,
$$\frac{\sqrt{(2k)!}}{k!}\leq 2^{k}.$$
\end{lem}
{\bf Proof}. This inequality holds for $k=1$ obviously. We proceed by induction.
\begin{eqnarray*}
\frac{\sqrt{[2(k+1)]!}}{(k+1)!} & = & \frac{\sqrt{(2k+2)!}}{k!(k+1)}=\frac{\sqrt{(2k)!}}{k!}\cdot \frac{\sqrt{(2k+2)(2k+1)}}{k+1} \\
& \leq & \frac{\sqrt{(2k)!}}{k!}\cdot \frac{\sqrt{(2k+2)^{2}}}{k+1}\leq 2^{k}\cdot \frac{(2k+2)}{k+1}=2^{k+1}. \hspace{4cm}\ \ \ \ \ \ \ \ \ \sharp
\end{eqnarray*}
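The inequality of Lemma 2.1 is easy to check numerically (an illustrative check, not a substitute for the induction):

```python
import math

def lemma21_holds(k):
    """Check sqrt((2k)!)/k! <= 2^k, the inequality of Lemma 2.1."""
    return math.sqrt(math.factorial(2 * k)) / math.factorial(k) <= 2 ** k
```

A loop over, say, $1\leq k\leq 59$ (large enough to be convincing, small enough to avoid float overflow in the square root) confirms the bound.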
Now recall that the function $h$ defined in (4) is conditionally positive definite of order $m=1+\frac{\lambda}{2}$. This can be found in \cite{Dy} and many relevant papers. Its Fourier transform \cite{GS} is
\begin{equation}
\hat{h}(\theta)=l(\lambda,n)|\theta|^{-\lambda-n}\tilde{{\cal K}}_{\frac{n+\lambda}{2}}(c|\theta|)
\end{equation}
where $l(\lambda,n)>0$ is a constant depending on $\lambda$ and $n$, and $\tilde{{\cal K}}_{\nu}(t)=t^{\nu}{\cal K}_{\nu}(t)$, ${\cal K}_{\nu}(t)$ being the modified Bessel function of the second kind \cite{AS}. Then we have the following lemma.
\begin{lem}
Let $h$ be as in (4) and $m$ be its order of conditional positive definiteness. There exists a positive constant $\rho$ such that
\begin{equation}
\int_{R^{n}}|\xi|^{k}d\mu(\xi)\leq l(\lambda,n)\cdot \sqrt{\frac{\pi}{2}}\cdot n\cdot \alpha_{n}\cdot c^{\lambda-k}\cdot \Delta_{0}\cdot \rho^{k}\cdot k!
\end{equation}
for all integers $k\geq 2m+2$, where $\mu$ is defined in (7), $\alpha_{n}$ denotes the volume of the unit ball in $R^{n}$, $c$ is as in (4), and $\Delta_{0}$ is a positive constant.
\end{lem}
{\bf Proof}. We first transform the integral on the left-hand side of the inequality into a simpler form.
\begin{eqnarray}
& & \int_{R^{n}}|\xi|^{k}d\mu(\xi) \nonumber \\
& = & \int_{R^{n}}|\xi|^{k}l(\lambda,n)\tilde{{\cal K}}_{\frac{n+\lambda}{2}}(c|\xi|)|\xi|^{-\lambda-n}d\xi \nonumber \\
& = & l(\lambda,n)c^{\frac{n+\lambda}{2}}\int_{R^{n}}|\xi|^{k-\frac{n+\lambda}{2}}\cdot {\cal K}_{\frac{n+\lambda}{2}}(c|\xi|)d\xi \nonumber \\
& \sim & l(\lambda,n)c^{\frac{n+\lambda}{2}}\sqrt{\frac{\pi}{2}}\int_{R^{n}}|\xi |^{k-\frac{n+\lambda}{2}}\cdot \frac{1}{\sqrt{c|\xi|}\cdot e^{c|\xi|}}d\xi \nonumber \\
& = & l(\lambda,n)c^{\frac{n+\lambda}{2}}\cdot \sqrt{\frac{\pi}{2}}\cdot n\cdot \alpha_{n}\int_{0}^{\infty}r^{k-\frac{n+\lambda}{2}}\cdot \frac{r^{n-1}}{\sqrt{cr}\cdot e^{cr}}dr \nonumber \\
& = & l(\lambda,n)c^{\frac{n+\lambda}{2}}\sqrt{\frac{\pi}{2}}\cdot n\cdot \alpha_{n}\cdot \frac{1}{\sqrt{c}}\int _{0}^{\infty}\frac{r^{k+\frac{n-\lambda -3}{2}}}{e^{cr}}dr \nonumber \\
& = & l(\lambda,n)c^{\frac{n+\lambda}{2}}\sqrt{\frac{\pi}{2}}\cdot n\cdot \alpha_{n}\cdot \frac{1}{\sqrt{c}}\cdot \frac{1}{c^{k+\frac{n-\lambda -1}{2}}}\int_{0}^{\infty}\frac{r^{k+\frac{n-\lambda -3}{2}}}{e^{r}}dr \nonumber \\
& = & l(\lambda,n)\sqrt{\frac{\pi}{2}}\cdot n\cdot \alpha_{n}\cdot c^{\lambda-k}\int _{0}^{\infty}\frac{r^{k'}}{e^{r}}dr \ where\ k'=k+\frac{n-\lambda -3}{2}. \nonumber
\end{eqnarray}
Note that $k\geq 2m+2=4+\lambda$ implies $k'\geq \frac{n+\lambda+5}{2}>0$.
Now we divide the proof into three cases. Let $k''=\lceil k'\rceil $, the smallest integer greater than or equal to $k'$.
Case 1. Assume $k''>k$. Let $k''=k+s$. Then
$$\int_{0}^{\infty}\frac{r^{k'}}{e^{r}}dr\leq \int_{0}^{\infty}\frac{r^{k''}}{e^{r}}dr=k''!=(k+s)(k+s-1)\cdots (k+1)k!$$
and
$$\int_{0}^{\infty}\frac{r^{k'+1}}{e^{r}}dr\leq \int_{0}^{\infty}\frac{r^{k''+1}}{e^{r}}dr=(k''+1)!=(k+s+1)(k+s)\cdots (k+2)(k+1)k!.$$
Note that
$$\frac{(k+s+1)(k+s)\cdots (k+2)}{(k+s)(k+s-1)\cdots (k+1)}=\frac{k+s+1}{k+1}.$$
The condition $k\geq 2m+2$ implies that
$$\frac{k+s+1}{k+1}\leq \frac{2m+3+s}{2m+3}=1+\frac{s}{2m+3}.$$
Let $\rho=1+\frac{s}{2m+3}$. Then
$$\int_{0}^{\infty}\frac{r^{k''+1}}{e^{r}}dr\leq \Delta_{0}\cdot \rho^{k+1}\cdot (k+1)!$$
if $\int_{0}^{\infty}\frac{r^{k''}}{e^{r}}dr\leq \Delta_{0}\cdot \rho^{k}\cdot k!$. The smallest $k''$ is $k_{0}''=2m+2+s$ when $k=2m+2$. Now,
\begin{eqnarray}
\int_{0}^{\infty}\frac{r^{k_{0}''}}{e^{r}}dr & = & k_{0}''!=(2m+2+s)(2m+1+s)\cdots(2m+3)(2m+2)! \nonumber \\
& = & \frac{(2m+2+s)(2m+1+s)\cdots (2m+3)}{\rho^{2m+2}}\cdot \rho^{2m+2}\cdot (2m+2)! \nonumber \\
& = & \Delta_{0}\cdot \rho^{2m+2}\cdot(2m+2)! \nonumber \\
& & where\ \Delta_{0}=\frac{(2m+2+s)(2m+1+s)\cdots (2m+3)}{\rho^{2m+2}}. \nonumber
\end{eqnarray}
It follows that $\int_{0}^{\infty}\frac{r^{k'}}{e^{r}}dr\leq \Delta_{0}\cdot \rho^{k}\cdot k!$ for all $k\geq 2m+2$.
Case 2. Assume $k''<k$. Let $k''=k-s$ where $s>0$. Then
$$\int_{0}^{\infty}\frac{r^{k'}}{e^{r}}dr\leq \int_{0}^{\infty}\frac{r^{k''}}{e^{r}}dr=k''!=(k-s)!=\frac{1}{k(k-1)\cdots (k-s+1)}\cdot k!$$
and
\begin{eqnarray}
\int_{0}^{\infty}\frac{r^{k'+1}}{e^{r}}dr & \leq & \int_{0}^{\infty}\frac{r^{k''+1}}{e^{r}}dr \nonumber \\
& = & (k''+1)!=(k-s+1)!=\frac{1}{(k+1)k\cdots (k-s+2)}\cdot (k+1)!. \nonumber
\end{eqnarray}
Note that
\begin{eqnarray*}
& & \left\{ \frac{1}{(k+1)k\cdots (k-s+2)}/\frac{1}{k(k-1)\cdots (k-s+1)}\right\} \\
& = & \frac{k(k-1)\cdots (k- s+1)}{(k+1)k\cdots (k-s+2)} \\
& = & \frac{(k-s+1)}{k+1} \\
& \leq &1.
\end{eqnarray*}
Let $\rho =1$. Then
$$\int_{0}^{\infty}\frac{r^{k''+1}}{e^{r}}dr\leq \Delta_{0}\cdot \rho^{k+1}\cdot (k+1)!$$
if $\int_{0}^{\infty}\frac{r^{k''}}{e^{r}}dr\leq \Delta_{0}\cdot \rho^{k}\cdot k!$. The smallest $k$ is $k_{0}=2m+2$. Hence the smallest $k''$ is $k_{0}''=k_{0}-s=2m+2-s$. Now,
\begin{eqnarray}
\int_{0}^{\infty}\frac{r^{k_{0}''}}{e^{r}}dr & = & k_{0}''!=(2m+2-s)!=(k_{0}-s)! \nonumber \\
& = & \frac{1}{k_{0}(k_{0}-1)\cdots (k_{0}-s+1)}\cdot (k_{0}!) \nonumber \\
& = & \Delta_{0}\cdot \rho^{k_{0}}\cdot k_{0}! \ where\ \Delta_{0}=\frac{1}{(2m+2)(2m+1)\cdots (2m-s+3)}. \nonumber
\end{eqnarray}
It follows that $\int_{0}^{\infty}\frac{r^{k'}}{e^{r}}dr\leq \Delta_{0}\cdot \rho^{k}\cdot k!$ for all $k\geq 2m+2$.
Case 3. Assume $k''=k$. Then
$$\int_{0}^{\infty}\frac{r^{k'}}{e^{r}}dr\leq \int_{0}^{\infty}\frac{r^{k''}}{e^{r}}dr=k!\ \ and\ \ \int_{0}^{\infty}\frac{r^{k'+1}}{e^{r}}dr\leq (k+1)!.$$
Let $\rho=1$. Then $\int_{0}^{\infty}\frac{r^{k'}}{e^{r}}dr\leq \Delta_{0}\cdot \rho^{k}\cdot k!$ for all $k$ where $\Delta_{0}=1$.
The lemma is now an immediate result of the three cases. \hspace{5.7cm} $\sharp$ \\
\\
{\bf Remark}: For the convenience of the reader we should express the constants $\Delta_{0}$ and $\rho$ in a clear form. It's easily shown that\\
(a) $k''>k$ if and only if $n-\lambda>3$,\\
(b) $k''<k$ if and only if $n-\lambda\leq 1$, and\\
(c) $k''=k$ if and only if $1<n-\lambda \leq 3$,\\
where $k''$ and $k$ are as in the proof of the lemma. We thus have the following situations.\\
(a) $n-\lambda>3$. Let $s=\lceil \frac{n-\lambda-3}{2}\rceil $. Then
$$\rho=1+\frac{s}{2m+3}\ \ and\ \ \Delta_{0}=\frac{(2m+2+s)(2m+1+s)\cdots (2m+3)}{\rho^{2m+2}}.$$
(b) $n-\lambda\leq 1$. Let $s=-\lceil \frac{n-\lambda-3}{2}\rceil $. Then
$$\rho=1\ \ and\ \ \Delta_{0}=\frac{1}{(2m+2)(2m+1)\cdots (2m-s+3)}.$$
(c) $1<n-\lambda\leq 3$. We have
$$\rho=1\ \ and \ \ \Delta_{0}=1.$$
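The three cases can be checked numerically. The sketch below (an illustration, not part of the proof) computes $\rho$ and $\Delta_{0}$ from the formulas in this remark and verifies $\int_{0}^{\infty}r^{k'}e^{-r}dr=\Gamma(k'+1)\leq \Delta_{0}\cdot\rho^{k}\cdot k!$ for a range of $k\geq 2m+2$; the sample values of $n$ and $\lambda$ below are arbitrary.

```python
import math

def check_gamma_bound(n, lam, kmax=40):
    """Check Gamma(k'+1) <= Delta_0 * rho^k * k! for 2m+2 <= k <= kmax,
    where k' = k + (n - lam - 3)/2, m = 1 + lam/2, and rho, Delta_0 are
    taken from the three cases of the remark."""
    m = 1 + lam // 2
    d = n - lam
    if d > 3:                               # case (a)
        s = math.ceil((d - 3) / 2)
        rho = 1 + s / (2 * m + 3)
        delta0 = math.prod(range(2 * m + 3, 2 * m + 3 + s)) / rho ** (2 * m + 2)
    elif d <= 1:                            # case (b)
        s = -math.ceil((d - 3) / 2)
        rho = 1.0
        delta0 = 1 / math.prod(range(2 * m - s + 3, 2 * m + 3))
    else:                                   # case (c)
        rho, delta0 = 1.0, 1.0
    for k in range(2 * m + 2, kmax + 1):
        kp = k + (d - 3) / 2
        if math.gamma(kp + 1) > delta0 * rho ** k * math.factorial(k):
            return False
    return True
```

For example, $(n,\lambda)=(10,2)$ exercises case (a), $(2,2)$ case (b), and $(4,2)$ case (c).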
Before introducing our main theorem, we must introduce a function space called {\bf native space}, denoted by ${\bf {\cal C}_{h,m}}$, for each conditionally positive definite radial function $h$ of order $m$. If
$${\cal D}_{m}=\{ \phi\in {\cal D}: \ \int x^{\alpha}\phi(x)dx=0\ for\ all\ |\alpha|<m\}$$
then ${\cal C}_{h,m}$ is the class of those continuous functions $f$ which satisfy
\begin{equation}
\left| \int f(x)\phi(x)dx\right| \leq c(f)\left\{ \int h(x-y)\phi(x)\overline{\phi(y)}dxdy\right\} ^{1/2}
\end{equation}
for some constant $c(f)$ and all $\phi$ in ${\cal D}_{m}$. If $f\in {\cal C}_{h,m}$, let $\| f\| _{h}$ denote the smallest constant $c(f)$ for which (10) is true. Then $\| \cdot \| _{h}$ is a semi-norm and ${\cal C}_{h,m}$ is a semi-Hilbert space; in the case $m=0$ they are a norm and a Hilbert space, respectively. For further details, we refer the reader to \cite{MN1} and \cite{MN2}. This definition of the native space was introduced by Madych and Nelson, and characterized by Luh in \cite{Lu1} and \cite{Lu2}. Although there is an equivalent definition \cite{We} which is easier to handle, we still adopt Madych and Nelson's definition to show the author's respect for them.
Now we have come to the main theorem of this paper.
\begin{thm}
Let $h$ be as in (4), and $b_{0}$ be any positive number. Let $\Omega$ be any subset of $R^{n}$ satisfying the property that for any $x$ in $\Omega$, there is an $n$ simplex $E$ of diameter equal to $b_{0}$ with $x\in E\subseteq \Omega$. There are positive constants $\delta_{0},c_{1},\omega',\ 0<\omega'<1,$ for which the following is true: If $f\in {\cal C}_{h,m}$, the native space induced by $h$, and $s$ is the $h$-spline that interpolates $f$ on a subset $X$ of $R^{n}$, then
\begin{equation}
|f(x)-s(x)|\leq c_{1}\sqrt{\delta}(\omega')^{\frac{1}{\delta}}\cdot \| f\| _{h}
\end{equation}
for all $x$ in $\Omega$ and $0<\delta \leq \delta_{0}$, provided that $\Omega$ satisfies the property that for any $x$ in $\Omega$ and any number $r$ with $\frac{1}{3C}\leq r\leq b_{0}$, there is an $n$ simplex $Q$ with diameter $diamQ=r$ and $x\in Q\subseteq \Omega$, such that for any integer $k$ with $\frac{1}{3C\delta}\leq k\leq \frac{b_{0}}{\delta}$, there is in $Q$ an equally spaced set of centers from $X$ of degree $k-1$. (Once $\delta$ and $k$ are chosen, $Q$ is in fact an $n$ simplex of diameter $k\delta$, and $X$ can be chosen to consist only of the equally spaced points in $Q$ of degree $k-1$.) The number $C$ is defined by
$$C:=\max \left\{ 8\rho',\ \frac{2}{3b_{0}}\right\} ,\ \rho'=\frac{\rho}{c}$$
where $\rho$ and $c$ appear in Lemma 2.2 and (4), respectively. Here $\| f\| _{h}$ is the $h$-norm of $f$ in the native space. The numbers $\delta_{0},c_{1}$ and $\omega'$ are given by $\delta_{0}:=\frac{1}{3C(m+1)}$, where $m$ appears in (4); $c_{1}:=\sqrt{l(\lambda,n)}\cdot (\pi/2)^{1/4}\cdot \sqrt{n\alpha_{n}}\cdot c^{\lambda/2}\cdot \sqrt{\Delta_{0}}\sqrt{3C}\cdot \sqrt{(16\pi)^{-1}}$, where $\lambda$ is as in (4), $l(\lambda,n)$ appears in (8), $\alpha_{n}$ is the volume of the unit ball in $R^{n}$, and $\Delta_{0}$, together with the computation of $\rho$, is defined in Lemma 2.2 and the remark following its proof; $\omega':=(\frac{2}{3})^{1/(3C)}$.
\end{thm}
{\bf Proof}. Let $\delta_{0}$ and $C$ be as in the statement of the theorem. For any $0<\delta\leq \delta_{0}$, we have $0<\delta \leq \frac{1}{3C(m+1)}$ and $0<3C\delta\leq \frac{1}{m+1}$. Since $\frac{1}{m+1}<1$, there exists an integer $k$ such that
$$1\leq 3C\delta k\leq 2.$$
For such $k$, $\delta k\leq \frac{2}{3C}\leq b_{0},\ \frac{1}{3C\delta}\leq k\leq \frac{b_{0}}{\delta}$, and $8\rho'\delta k\leq \frac{2}{3}$.
For any $x\in \Omega$, choose arbitrarily an $n$ simplex $Q$ of diameter $diamQ=\delta k$ with vertices $v_{0},v_{1},\ldots ,v_{n}$ such that $x\in Q\subseteq \Omega$. Let $x_{1},\ldots ,x_{N}$ be the equally spaced points of degree $k-1$ on $Q$, where $N=\dim P_{k-1}^{n}$. By (9) and Lemma 2.1, whenever $k>m$,
\begin{eqnarray}
c_{k} & := & \left\{ \int_{R^{n}}\frac{|\xi|^{2k}}{(k!)^{2}}d\mu(\xi)\right\} ^{1/2} \nonumber \\
& \leq & \sqrt{l(\lambda,n)}\cdot (\pi/2)^{1/4}\cdot \sqrt{n\alpha_{n}}\cdot c^{\lambda/2}\cdot c^{-k}\cdot \sqrt{\Delta_{0}}\cdot (2\rho)^{k}.
\end{eqnarray}
Recall that Theorem 4.2 of \cite{MN2} implies that
\begin{equation}
|f(x)-s(x)|\leq c_{k}\| f\| _{h}\cdot \int_{R^{n}}|y-x|^{k}d|\sigma|(y)
\end{equation}
whenever $k>m$, and $\sigma$ is any measure supported on $X$ such that
\begin{equation}
\int_{R^{n}}p(y)d\sigma(y)=p(x)
\end{equation}
for all polynomials $p$ in $P_{k-1}^{n}$.
Let $\sigma$ be the measure supported on $\{ x_{1},\ldots ,x_{N}\}$ as mentioned in Lemma 1.2. We essentially need to bound only the quantity
$$I=c_{k}\int_{R^{n}}|y-x|^{k}d|\sigma|(y).$$
Thus for $k$ as above and $\Delta_{0}$ defined in Lemma 2.2, by Lemma 1.2,
\begin{eqnarray*}
I & \leq & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}c^{-k}\sqrt{\Delta_{0}}(2\rho)^{k}(\delta k)^{k}\left( \begin{array}{c}
2(k-1)-1 \\ k-1
\end{array}\right) \\
& \sim & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}c^{-k}\sqrt{\Delta_{0}}(2\rho)^{k}(\delta k)^{k}\frac{1}{\sqrt{\pi}}\frac{1}{\sqrt{k-1}}2^{2(k-1)}\ by\ Stirling's\ Formula\\
& \sim & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}c^{-k}\sqrt{\Delta_{0}}(2\rho)^{k}(\delta k)^{k}\frac{1}{\sqrt{\pi}}\frac{1}{\sqrt{k}}4^{k-1}\ when\ k\ is \ large\\
& = & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}\sqrt{\Delta_{0}}\frac{1}{\sqrt{\pi}}\frac{1}{\sqrt{k}}\left( \frac{2\rho \delta k}{c}\right) ^{k}\frac{4^{k}}{4}\\
& = & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}\sqrt{\Delta_{0}}\frac{1}{\sqrt{16\pi}}\frac{1}{\sqrt{k}}\left( \frac{8\rho \delta k}{c}\right) ^{k}\\
& \leq & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}\sqrt{\Delta_{0}}\frac{1}{\sqrt{16\pi}}\frac{1}{\sqrt{k}}\left( \frac{2}{3}\right) ^{k}\\
& \leq & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}\sqrt{\Delta_{0}}\frac{1}{\sqrt{16\pi}}\frac{1}{\sqrt{k}}\left( \frac{2}{3}\right) ^{\frac{1}{3C\delta}}\\
& = & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}\sqrt{\Delta_{0}}\frac{1}{\sqrt{16\pi}}\frac{1}{\sqrt{k}}\left[ \left( \frac{2}{3}\right) ^{\frac{1}{3C}}\right] ^{\frac{1}{\delta}}\\
& = & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}\sqrt{\Delta_{0}}\frac{1}{\sqrt{16\pi}}\frac{1}{\sqrt{k}}[\omega']^{\frac{1}{\delta}}\ where\ \omega'=\left( \frac{2}{3}\right) ^{\frac{1}{3C}}\\
& \leq & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}\sqrt{\Delta_{0}}\frac{1}{\sqrt{16\pi}}\sqrt{3C\delta}[\omega']^{\frac{1}{\delta}}\\
& = & \sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}\sqrt{\Delta_{0}}\frac{\sqrt{3C}}{\sqrt{16\pi}}\sqrt{\delta}[\omega']^{\frac{1}{\delta}}
\end{eqnarray*}
Our theorem thus follows from (13).\hspace{9.5cm} $\sharp$\\
\\
{\bf Remark}. In the preceding theorem we did not mention the well-known fill-distance. In fact $\delta$ is in spirit equivalent to the fill-distance $d(Q,X):=\sup_{y\in Q}\min _{1\leq i\leq N}\| y-x_{i}\|$, where $X=\{ x_{1},\ldots ,x_{N}\}$ is as in the proof. Note that $\delta\rightarrow 0$ if and only if $d(Q,X)\rightarrow 0$. However, we avoid using the fill-distance because in our approach the data points are not purely scattered. This may seem to be a drawback, but it poses no trouble either theoretically or practically: the equally spaced centers $x_{1},\ldots ,x_{N}$ in the simplex $Q$ are easily tractable.
\section{Comparison}
The exponential-type error bound for (4) was presented by Luh in \cite{Lu3} and is of the form
\begin{equation}
|f(x)-s(x)|\leq c_{1}\omega^{\frac{1}{\delta}}\| f\| _{h}
\end{equation}
where $c_{1}=\sqrt{l(\lambda,n)}(\pi/2)^{1/4}\sqrt{n\alpha_{n}}c^{\lambda/2}\sqrt{\Delta_{0}}$, $\delta$ is equivalent to the fill-distance and
$$\omega=\left( \frac{2}{3}\right) ^{\frac{1}{3C\gamma_{n}}}$$
where
$$C=\max \left\{ 2\rho'\sqrt{n}e^{2n\gamma_{n}},\ \frac{2}{3b_{0}}\right\} ,\ \rho'=\frac{\rho}{c}$$
where $\rho$ and $c$ are the same as in this paper, $b_{0}$ is the side length of a cube, and $\gamma_{n}$ is defined recursively by
$$\gamma_{1}=2,\ \gamma_{n}=2n(1+\gamma_{n-1})\ if\ n>1.$$
The constant $c_{1}$ is almost the same as the $c_{1}$ in (11). The number $b_{0}$ plays the same role as the $b_{0}$ of Theorem 2.3. However, $\gamma_{n}\rightarrow \infty$ rapidly as $n\rightarrow \infty$. This can be seen by
$$\gamma_{1}=2,\ \gamma_{2}=12,\ \gamma_{3}=78,\ \gamma_{4}=632,\ \gamma_{5}=6330,\cdots $$
The fast growth of $\gamma_{n}$ forces $e^{2n\gamma_{n}}$, and hence $C$, to grow rapidly as the dimension $n\rightarrow \infty$. This means that the crucial constant $\omega$ in (15) tends to 1 rapidly as $n\rightarrow \infty$, making the error bound (15) meaningless for high dimensions.
The advantages of our new approach are as follows: first, there is a factor $\sqrt{\delta}$ in (11) which contributes to the convergence rate of the error bound as $\delta\rightarrow 0$; second, the crucial constant $\omega'$ in (11) is only mildly dependent on the dimension $n$. Although $\omega'$ depends on $\rho$, which in turn depends on $n$, the situation is much better. In fact, $\omega'$ can be made completely independent of $n$ by changing $\lambda$ in (4) so as to keep $n-\lambda \leq 3$, as can be seen in the remark following Lemma 2.2. In other words, we have significantly improved the error bound (15).
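The recursion for $\gamma_{n}$ and the listed values can be reproduced with a few lines (an illustrative check):

```python
def gamma_seq(n_max):
    """gamma_1 = 2 and gamma_n = 2n(1 + gamma_{n-1}) for n > 1."""
    g = [2]
    for n in range(2, n_max + 1):
        g.append(2 * n * (1 + g[-1]))
    return g
```

The first five terms are 2, 12, 78, 632, 6330, matching the values above and showing the faster-than-exponential growth that degrades the old bound (15).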
| {
"timestamp": "2007-12-06T03:22:54",
"yymm": "0712",
"arxiv_id": "0712.0864",
"language": "en",
"url": "https://arxiv.org/abs/0712.0864",
"abstract": "A New Error Bound for shifted surface spline interpolation is presented. This error bound probably is the most powerful one up to now.",
"subjects": "Numerical Analysis (math.NA)",
"title": "A New Error Bound for Shifted Surface Spline Interpolation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.97112909472487,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7099522499654835
} |
https://arxiv.org/abs/2301.10803 | Evaluating Probabilistic Classifiers: The Triptych | Probability forecasts for binary outcomes, often referred to as probabilistic classifiers or confidence scores, are ubiquitous in science and society, and methods for evaluating and comparing them are in great demand. We propose and study a triptych of diagnostic graphics that focus on distinct and complementary aspects of forecast performance: The reliability diagram addresses calibration, the receiver operating characteristic (ROC) curve diagnoses discrimination ability, and the Murphy diagram visualizes overall predictive performance and value. A Murphy curve shows a forecast's mean elementary scores, including the widely used misclassification rate, and the area under a Murphy curve equals the mean Brier score. For a calibrated forecast, the reliability curve lies on the diagonal, and for competing calibrated forecasts, the ROC and Murphy curves share the same number of crossing points. We invoke the recently developed CORP (Consistent, Optimally binned, Reproducible, and Pool-Adjacent-Violators (PAV) algorithm based) approach to craft reliability diagrams and decompose a mean score into miscalibration (MCB), discrimination (DSC), and uncertainty (UNC) components. Plots of the DSC measure of discrimination ability versus the calibration metric MCB visualize classifier performance across multiple competitors. The proposed tools are illustrated in empirical examples from astrophysics, economics, and social science. | \section{Introduction} \label{sec:introduction}
Across science and society, probability forecasts for the occurrence of a binary outcome, also referred to as probabilistic classifiers or confidence scores, are widely used. Prominent examples include a patient's recovery or survival, weather events, solar flares, the designation of email as spam, credit approval, and recidivism of criminal defendants, to name but a few applications. Evidently, our ability to develop and improve probability forecasts depends on the availability of diagnostic tools for the assessment and comparison of predictive power. \vfill \pagebreak
While some applications call for the use of a single numerical performance measure, with forecast contests and leader boards being prime examples, the condensation of forecast quality into a single number prevents detailed analyses. As \citet{Janssens2020} notes,
\begin{quote}
\footnotesize
``Some prediction researchers prefer one metric or graph that captures the overall performance of prediction models. Others prefer one for each different aspect of performance, such as calibration, discrimination, predictive value (risks), and utility.''
\end{quote}
Not surprisingly, numerous types of diagnostic graphics for the evaluation of probability forecasts exist \citep{Murphy1992, Prati2011, Filho2021}, and practitioners may wonder which ones are to be preferred.
In this article we propose the use of a triptych of diagnostic graphics and provide theoretical support for our choices. The triptych consists of the reliability diagram in the recently proposed CORP (Consistent, Optimally binned, Reproducible, and Pool-Adjacent-Violators (PAV) algorithm based) form to assess calibration \citep{Dimitriadis2021}, the receiver operating characteristic (ROC) curve to judge discrimination ability \citep{Swets1973, Gneiting2022a}, and the Murphy diagram for the assessment of overall predictive performance and utility \citep{Ehm2016}. Figure \ref{fig:C1_triptych} illustrates the triptych for probabilistic classifiers from an astrophysical forecast challenge \citep{Leka2019a, Leka2019b} as introduced in Table \ref{tab:C1} and discussed in detail in Section \ref{sec:solar}.
\begin{table}[t]
\centering
\footnotesize
\caption{Probability forecasts for class C1.0+ solar flares at a prediction horizon of a day ahead from a joint test set within calendar years 2016 and 2017 \citep{Leka2019a, Leka2019b}: Acronym, source, mean Brier score, mean logarithmic (Log) score, and misclassification rate (MR). Details of the data example are discussed in Section \ref{sec:solar}.} \label{tab:C1}
\begin{tabular}{llrccc}
\toprule
\multicolumn{2}{c}{Probability Forecast} && \multicolumn{3}{c}{Mean Score} \\
\cmidrule{1-2} \cmidrule{4-6}
Acronym & Source && Brier & Log & MR \\
\cmidrule{1-2} \cmidrule{4-6}
NOAA & National Oceanic and Atmospheric Administration && 0.144 & 0.449 & 0.205 \\
SIDC & Royal Observatorium Belgium && 0.172 & 0.515 & 0.263 \\
ASSA & Korean Space Weather Agency && 0.184 & $\infty$ & 0.273 \\
MCSTAT & Trinity College Dublin && 0.193 & 0.587 & 0.275 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig01_triptych_C1Flares.pdf}
\vspace{-11mm}
\caption{Triptych of diagnostic graphics for evaluating and comparing the probability forecasts of class C1.0+ solar flares from Table \ref{tab:C1}: Murphy curve, reliability diagram, and ROC curve.} \label{fig:C1_triptych}
\end{figure}
Starting from the left, the Murphy diagram assesses overall predictive performance in terms of proper scoring rules. To provide background, a scoring rule assigns a score $\textsf{S}(x,y)$ to each pair of a probability forecast $x \in [0,1]$ and a binary outcome $y \in \{ 0, 1 \}$, where 1 stands for an event and 0 for a non-event. A scoring rule is proper if a forecaster minimizes the expected score by issuing a probability forecast that corresponds to her true belief, with the Brier score $\textsf{S}(x,y) = (x-y)^2$ and the logarithmic (Log) score $\textsf{S}(x,y) = -y \log x - (1-y) \log(1-x)$ being prominent examples \citep{Gneiting2007a}. Scores are then averaged over a test set, and the forecast with the smallest mean score is considered best. The widely used misclassification rate (MR) arises as a special case, namely, by assigning a score of 1 if the probability forecast is less than $\frac{1}{2}$ and the event realizes, or the forecast is greater than $\frac{1}{2}$ and the event does not realize, and assigning a score of 0 otherwise. Distinct proper scoring rules may yield distinct forecast rankings, so practitioners may wonder which one to use, and guidance is essential. In the case of a binary outcome, any proper scoring rule can be represented as a mixture over so-called elementary scoring rules, and so it suffices to consider only those. Fortunately, the family of the elementary scoring rules is linearly parameterized by a threshold or cost-loss parameter $\theta$. In a nutshell, a Murphy curve depicts the mean elementary score as a function of the threshold $\theta$, with lower scores being preferable. The height of the Murphy curve at $\theta = \frac{1}{2}$ equals the misclassification rate, and the area under the Murphy curve equals the mean Brier score. If a forecast has a Murphy curve below that of a competitor, then it is superior in terms of any proper scoring rule, and has superior economic utility to any decision maker.
For example, we see from the Murphy diagram in Figure \ref{fig:C1_triptych} that the NOAA forecast dominates the ASSA forecast, regardless of the use intended.
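To make the score definitions concrete, the following is a minimal numerical sketch with a hypothetical toy test set (not the solar-flare data). It computes the mean Brier score, the Log score, the misclassification rate, and a Murphy curve of mean elementary scores, using a factor-2 normalization under which the curve's height at $\theta=\frac{1}{2}$ equals the misclassification rate and its area equals the mean Brier score; the tie-handling at $x=\theta$ is an assumed convention.

```python
import numpy as np

def brier(x, y):
    return float(np.mean((x - y) ** 2))

def log_score(x, y):
    return float(np.mean(-y * np.log(x) - (1 - y) * np.log(1 - x)))

def misclassification(x, y):
    return float(np.mean((x >= 0.5) != (y == 1)))

def elementary_score(theta, x, y):
    """Mean elementary score at threshold theta: a cost of 2(1-theta) for
    a missed event (y = 1, x <= theta) and 2*theta for a false alarm
    (y = 0, x > theta); the factor 2 is the assumed normalization."""
    miss = np.where(y == 1, x <= theta, x > theta)
    cost = np.where(y == 1, 1 - theta, theta)
    return float(np.mean(2 * cost * miss))

# Hypothetical forecasts and binary outcomes.
x = np.array([0.9, 0.2, 0.6, 0.1])
y = np.array([1, 0, 0, 1])
thetas = np.linspace(0.0, 1.0, 2001)
murphy = np.array([elementary_score(t, x, y) for t in thetas])
# Trapezoidal area under the Murphy curve: matches the mean Brier score.
area = float(np.sum(0.5 * (murphy[1:] + murphy[:-1]) * np.diff(thetas)))
```

On this toy set the area under the Murphy curve agrees with the mean Brier score up to the discretization error of the $\theta$-grid.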
A probability forecast is calibrated if, conditional on any forecast value $p$, the event realizes in $100 \cdot p$ percent of the instances considered. Reliability diagrams visualize calibration, by plotting an estimate of the conditional event probability (\text{CEP}) as a function of the forecast value. While reliability curves close to the diagonal are compatible with assumptions of calibration, notable departures from the diagonal suggest miscalibration and can be interpreted diagnostically. We adopt the recently proposed CORP approach of \citet{Dimitriadis2021} for the estimation of CEPs by nonparametric isotonic regression, as illustrated in Figure \ref{fig:C1_triptych}, where the SIDC and MCSTAT forecasts exhibit overprediction, with estimates below the diagonal.
Receiver operating characteristic (ROC) curves visualize the discrimination ability of the forecasts --- that is, they judge to what extent the forecast values distinguish situations with lower or higher true event probabilities. Specifically, as one issues hard classifiers based on successively higher forecast thresholds, a ROC curve plots the hit rate (HR) on the ordinate against the false alarm rate (FAR) on the abscissa. As the ROC curve is invariant under strictly increasing transformations of the forecast values, it diagnoses discrimination ability only, while ignoring issues of calibration. Hit rates close to 1 and false alarm rates close to 0 are desirable, so ROC curves at upper left are indicative of superior discrimination ability. In the ROC curves in Figure \ref{fig:C1_triptych} the NOAA forecast shows the highest and the ASSA forecast the lowest discrimination ability. Featuring both excellent calibration and superior discrimination ability, the NOAA forecast also performs best in terms of scoring rules and economic utility, as evidenced by the Murphy curves.
The choice of the triptych graphics reflects theoretically supported, desirable properties. Reliability curves exclusively diagnose calibration, ROC curves assess discrimination ability only, and Murphy curves quantify overall predictive performance. Moreover, the novel Fact \ref{fact:crossingpoints} and Theorem \ref{thm:crossingpoints} below demonstrate that, under perfect calibration, Murphy curves and ROC curves yield congruent insights, as they share the same number of crossing points. Similarly, for forecasts with shared discrimination ability, Murphy curves assess calibration only.
Following the pioneering work of \citet{Murphy1973}, researchers have sought decompositions of mean scores into intuitively appealing components that reflect calibration and discrimination, respectively. We utilize the CORP decomposition of \citet{Dimitriadis2021}, which decomposes a mean score
\begin{align*}
\bar{\myS} = \text{MCB} - \text{DSC} + \text{UNC}
\end{align*}
into readily interpretable components that represent miscalibration (\text{MCB}), discrimination (\text{DSC}), and uncertainty (\text{UNC}), respectively. In contrast to earlier approaches, CORP reliability diagrams and CORP score components do not depend on user choices or tuning parameters, and they show appealing finite and large sample optimality properties. The mean score $\bar{\myS}$ equals a weighted area under the Murphy curve and serves as a summary measure of predictive performance, the {\text{MCB}} component quantifies deviations of the CORP reliability diagram from the diagonal and can be used as a calibration metric, and the {\text{DSC}} component serves as an appealing alternative to the widely used Area Under the ROC Curve (\text{AUC}) measure of discrimination ability.
If many competing forecasting methods are to be compared, the triptych graphics yield crowded displays. With such settings in mind, we propose a simple alternative, namely, {\text{MCB}}--{\text{DSC}} plots, that show, for each competitor involved, the {\text{DSC}} measure plotted against the {\text{MCB}} component, augmented by parallel contour lines that indicate an equal mean score. Due to their simplicity and the joint assessment of overall predictive ability, calibration, and discrimination, {\text{MCB}}--{\text{DSC}} plots visualize strengths and weaknesses of forecasting methods and facilitate the identification of methods of interest that can be analyzed in more detail via the triptych graphics. In Figure \ref{fig:MCB_DSC} we show Brier score {\text{MCB}}--{\text{DSC}} plots for probability forecasts from solar flare and social science forecast contests.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig02_MCBDSC_Illustration.pdf}
\caption{Brier score {\text{MCB}}--{\text{DSC}} plots for competitors in forecast contests for (a) class C1.0+ solar flares \citep{Leka2019a}, and (b) job training in the Fragile Families Challenge \citep{Salganik2020b}. The colors in panel (a) align with Figure \ref{fig:C1_triptych}; in panel (b) benchmark forecasts are represented in green. The green square at the origin represents the ex post best constant forecast, that is, the unconditional event frequency, and the thick green line separates forecasts that are better (above the line) and that are worse (below the line) than this baseline. Details of the data examples from astrophysics and social science are discussed in Sections \ref{sec:solar} and \ref{sec:FFC}, respectively.}
\label{fig:MCB_DSC}
\end{figure}
While there is a rich literature on the evaluation of probabilistic classifiers and associated graphical displays, as reviewed by \citet{Murphy1992}, \citet{Prati2011}, \citet{Richardson2012}, \citet{Alba2017}, \citet{Filho2021}, and \citet{Xenopoulos2023}, and variants of the joint triptych display feature in the extant literature, as in Figure 1 of \citet{Flach2017} and graphics in \citet{TaillardatMestre2020}, the original contributions of our work include the presentation of the triptych as an argumentatively complete set of displays, connections to the CORP approach of \citet{Dimitriadis2021}, the development of {\text{MCB}}--{\text{DSC}} plots, and novel theoretical results proved in Appendix \ref{app:proofs}.
The remainder of the article is organized as follows. Sections \ref{sec:scores}, \ref{sec:reliability}, and \ref{sec:ROC} concern the individual triptych displays, by discussing proper scoring rules and Murphy curves, CORP reliability diagrams and score decompositions, and ROC curves, respectively. Section \ref{sec:together} argues for the simultaneous use of the triptych displays, studies the connections between the individual displays, and discusses {\text{MCB}}--{\text{DSC}} plots. In particular, we show that for two competing forecasts that are both calibrated, Murphy curves and ROC curves yield congruent insights, as they share the same number of crossing points. In Section \ref{sec:empirical} we apply the proposed methods in case studies from astrophysics, economics, and social science. The paper closes in Section \ref{sec:discussion}, where we discuss open problems and directions for future research. Software for the implementation of the proposed tools is available in \textsf{R} \citep{R, replication_triptych}.
\section{Scoring rules assess overall predictive performance} \label{sec:scores}
Comparative assessments of overall forecast quality rely on proper scoring rules that encourage honest and careful forecasting \citep{Brier1950, Gneiting2007a}. A scoring rule is a function $\textsf{S}(x,y)$ that assigns a numerical score or penalty based on the probability forecast $x \in [0,1]$ and the binary outcome $y \in \{ 0, 1\}$, where 1 stands for an event and 0 for a non-event. Infinite penalties are permitted only if an outcome was declared to have probability zero and, thus, be impossible. Throughout the paper we assume that scoring rules are negatively oriented, so that smaller scores are preferable.
\subsection{Proper and strictly proper scoring rules} \label{sec:proper}
A scoring rule is proper if, given a Bernoulli random variable $Y$ with success probability $p$,
\begin{align} \label{eq:proper} \textstyle
\mathbb{E} \left[ \textsf{S}(p,Y) \right] \leq \mathbb{E} \left[ \textsf{S}(x,Y) \right]
\end{align}
for all forecast values $x$. It is strictly proper if, furthermore, equality in \eqref{eq:proper} implies that $x = p$, so that the true success probability is the unique minimizer of the expected score. The key benefit of propriety is the implicit enforcement of honest and careful forecasts: If a forecaster believes that an event has success probability $p$, then $p$ is her best forecast in terms of the expected score or penalty. In practice, for a given record
\begin{align} \label{eq:data}
(x_1, y_1), \ldots, (x_n, y_n)
\end{align}
of probability forecasts $x_1, \ldots, x_n$ and associated binary outcomes $y_1, \ldots, y_n$, the mean score
\begin{align} \label{eq:SX}
\bar{\myS} = \frac{1}{n} \sum_{i=1}^n \textsf{S}(x_i,y_i)
\end{align}
is used to rank competing forecasts. Evidently, the expression on the right-hand side of equation \eqref{eq:SX} corresponds to the expectation $\mathbb{E} \left[ \textsf{S}(X, Y) \right]$ when the tuple $(X, Y)$ of random quantities follows the joint empirical distribution of the record \eqref{eq:data}.
The most popular examples of strictly proper scoring rules are the Brier score (\textsf{BS}) and the Logarithmic score (\textsf{LogS}), defined by
\begin{align}
\textsf{BS}(x, y) = (x-y)^2 \quad \text{and} \quad \textsf{LogS}(x,y) = - y \log(x) - (1-y) \log(1-x)
\end{align}
for $x \in [0,1]$ and $y \in \{ 0, 1 \}$. The zero--one loss or score
\begin{align} \label{eq:MR} \textstyle
\textsf{S}_{\frac{1}{2}}(x, y) = \mathds{1} \! \left( x > \frac{1}{2}, y = 0 \right)
+ \mathds{1} \! \left( x < \frac{1}{2}, y = 1 \right)
+ \frac{1}{2} \, \mathds{1} \! \left( x = \frac{1}{2} \right)
\end{align}
is a prominent example of a scoring rule that is proper, but not strictly proper. When averaged in the form of \eqref{eq:SX}, it yields the widely reported misclassification rate. The zero--one loss arises as the special case $\theta = \frac{1}{2}$ of the general elementary scoring rule
\begin{align} \label{eq:elementary} \textstyle
\textsf{S}_\theta(x,y) = 2 \theta \, \mathds{1}(x > \theta, y = 0)
+ 2 (1-\theta) \, \mathds{1}(x < \theta, y = 1)
+ 2 \theta(1-\theta) \, \mathds{1}(x = \theta)
\end{align}
with decision threshold or cost--loss parameter $\theta \in (0,1)$, which is proper, but not strictly proper, as it only takes into account whether a predicted probability is smaller or larger than $\theta$, so cannot distinguish between forecasts that are on the same side of $\theta$. From an economic perspective, $\textsf{S}_\theta$ specifies the loss of a rational decision maker when the ratio of the monetary cost of a false alarm versus the cost of a missed event equals $\theta/(1-\theta)$; see \citet{Ehm2016} and references therein.\footnote{The economic interpretation applies to the left-continuous version of $\textsf{S}_\theta$ in eq.~(14) of \citet{Ehm2016}. Here we use the symmetric version in \eqref{eq:elementary}, which assigns a fixed penalty of $2 \theta(1-\theta)$ when $x = \theta$, independently of the binary outcome $y \in \{ 0, 1 \}$. Both versions are proper, but not strictly proper.} In turn, $\textsf{S}_\theta$ can be identified with the special case $t = c = \theta$ of the general, cost-weighted misclassification loss at decision threshold $t$ and cost proportion $c$, as studied in the machine learning literature \citep{Hand2009, HernandezOrallo2011, HernandezOrallo2012, HernandezOrallo2013}.
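To make the definitions concrete, the elementary score and its threshold-$\frac{1}{2}$ special case can be computed in a few lines of Python (a minimal sketch; the record below is invented for illustration):

```python
def elementary_score(x, y, theta):
    """Symmetric elementary score S_theta(x, y): probability forecast
    x in [0, 1], binary outcome y in {0, 1}, threshold theta in (0, 1)."""
    if x > theta:
        return 2.0 * theta if y == 0 else 0.0
    if x < theta:
        return 2.0 * (1.0 - theta) if y == 1 else 0.0
    return 2.0 * theta * (1.0 - theta)  # fixed penalty at x == theta

# Averaging the theta = 1/2 case over a record yields the
# misclassification rate (illustrative data).
record = [(0.9, 1), (0.2, 0), (0.6, 0), (0.4, 1)]
mr = sum(elementary_score(x, y, 0.5) for x, y in record) / len(record)
```

For this record the misclassification rate is $0.5$: the forecasts of $0.6$ for a non-event and $0.4$ for an event are each penalized, and the other two pairs are not.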
\subsection{Representations of proper scoring rules} \label{sec:mixture}
The special role of the elementary scoring functions $\textsf{S}_\theta$ from \eqref{eq:elementary} is highlighted in a mixture representation studied by \citet{Schervish1989}. Subject to technical conditions that are immaterial in practice, every proper scoring rule admits a representation of the form
\begin{align} \label{eq:Schervish}
\textsf{S}(x,y) = \int_0^1 \textsf{S}_\theta(x,y) \: \mathrm{d}H(\theta)
\end{align}
for forecast values $x \in [0,1]$ and outcomes $y \in \{ 0, 1 \}$, where $H$ is a measure that assigns non-negative weight to cost--loss parameters $\theta \in (0,1)$. If the assigned weight is positive almost everywhere, then the corresponding score is strictly proper. The elementary score $\textsf{S}_\eta$ arises when $H$ is a point measure that assigns mass one to $\eta \in (0,1)$ and no mass elsewhere, the Brier score emerges when the mixing measure $H$ is uniform, and the logarithmic score arises when $H$ has density proportional to $(\theta(1-\theta))^{-1}$. Hence, for the logarithmic score the mixing measure $H$ places infinite mass near the boundaries of the unit interval. It discourages predictions with forecast probabilities at or near 0 or 1, and may render a single (and, hence, a mean) score infinite, as for the ASSA forecast from Table \ref{tab:C1}. As Nobel laureate \citet[p.~51]{Selten1998} argued,
\begin{quote}
\footnotesize
``The use of the logarithmic scoring rule implies the value judgment that small differences between small probabilities should be taken very seriously and that wrongly describing something extremely improbable as having zero probability is an unforgivable sin. The author thinks that this value judgment is unacceptable.''
\end{quote}
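As a concrete instance of the mixture representation \eqref{eq:Schervish}, integrating the elementary scores under the uniform mixing measure recovers the Brier score:
\begin{align*}
\int_0^1 \textsf{S}_\theta(x,0) \: \mathrm{d}\theta = \int_0^x 2 \theta \: \mathrm{d}\theta = x^2
\quad \text{and} \quad
\int_0^1 \textsf{S}_\theta(x,1) \: \mathrm{d}\theta = \int_x^1 2 (1-\theta) \: \mathrm{d}\theta = (1-x)^2,
\end{align*}
so that in both cases the integral equals $(x-y)^2 = \textsf{BS}(x,y)$.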
The mixture representation \eqref{eq:Schervish} also is a powerful tool for the construction of proper scoring rules. For instance, \citet{Buja2005} introduce the flexible Beta family that arises when the mixing measure $H$ has density proportional to a Beta density. The members of the Beta family include the Brier score, the logarithmic score, and the H-measure of \citet{Hand2009}.
An alternative, essentially equivalent characterization is due to \citet{Savage1971}, who showed that, subject to technical conditions, any proper scoring rule allows a representation of the form
\begin{align} \label{eq:Savage}
\textsf{S}(x,y) = \phi(y) - \phi(x) - \phi'(x) (y-x),
\end{align}
where the function $\phi$ is convex with subgradient $\phi'$. A subgradient is a generalized version of the classical derivative, and whenever the latter exists the subgradient equals the derivative. In relation to the mixture representation from \eqref{eq:Schervish} it holds that $\mathrm{d}H(\theta) = \mathrm{d}\phi'(\theta) = \phi''(\theta) \, \mathrm{d}\theta$, with slight technical adaptations when $\phi$ is convex, but not strictly convex \citep{Gneiting2007a}. Under the Savage representation \eqref{eq:Savage} the Brier score arises when $\phi(t) = t^2$, the logarithmic score emerges when $\phi(t) = t \log(t) + (1-t) \log(1-t)$, and the elementary scoring rule $\textsf{S}_\theta$ arises under the convex, but not strictly convex, function $\phi(t) = 2 \max(\theta t, (1-\theta) (1-t))$.
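As a quick check of the Savage form \eqref{eq:Savage}, taking $\phi(t) = t^2$ with $\phi'(t) = 2t$ gives
\begin{align*}
\textsf{S}(x,y) = y^2 - x^2 - 2x(y-x) = y^2 - 2xy + x^2 = (x-y)^2,
\end{align*}
which is exactly the Brier score.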
In practice, it is not uncommon that different proper scoring rules yield distinct forecast rankings. For example, in Table \ref{tab:C1} the Brier score and the logarithmic score disagree in the ranking of the ASSA and MCSTAT forecasts. As there is no obvious reason for a specific strictly proper scoring rule to be preferred over any other, a natural question is which one to choose \citep{Merkle2013}.
\subsection{Murphy curves and Murphy diagrams} \label{sec:Murphy}
The mixture representation \eqref{eq:Schervish} allows for a compelling resolution of the quest for guidance in the choice of proper scoring rules. As the representation shows, any strictly proper scoring rule arises as a mixture over the family of the elementary scoring functions $\textsf{S}_\theta$ from \eqref{eq:elementary}. Thus, it suffices to consider the family of mean elementary scores,
\begin{align} \label{eq:SXtheta}
\bar{\myS}_\theta = \frac{1}{n}\sum_{i=1}^n \textsf{S}_\theta(x_i,y_i),
\end{align}
where $\theta \in (0,1)$. \citet{Ehm2016} proposed plots of the Murphy curve, that is, the graph of $\bar{\myS}_\theta$ as a function of the decision threshold or cost--loss parameter $\theta \in (0,1)$, which allows users to assess forecast performance with respect to all proper scoring rules simultaneously. In particular, the height of a Murphy curve at $\theta = \frac{1}{2}$ equals the misclassification rate. However, sole focus on the misclassification rate as a general measure of predictive performance is problematic, because any single $\textsf{S}_\theta$ represents a particular economic scenario, as noted in Section \ref{sec:proper}, and fails to be strictly proper. As a general measure, the area under a Murphy curve, which equals the mean Brier score, is better suited.
\citet{HernandezOrallo2011} had proposed the same tool under the name of Brier curve, and related displays had been studied by \citet{Murphy1992} and \citet{Drummond2006}, among other authors. A Murphy diagram then is a plot of Murphy curves for competing forecasts. If a forecast exhibits a lower mean score than another under all $\textsf{S}_\theta$, then it dominates the competitor in terms of any proper scoring rule $\textsf{S}$, as lower values under $\textsf{S}_\theta$ carry over to $\textsf{S}$ through integration in the mixture representation \eqref{eq:Schervish}.
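The Murphy curve \eqref{eq:SXtheta} is straightforward to compute on a grid of thresholds, and the area property is easy to verify numerically. The following Python sketch uses invented data:

```python
def elementary_score(x, y, theta):
    """Symmetric elementary score S_theta(x, y)."""
    if x > theta:
        return 2.0 * theta if y == 0 else 0.0
    if x < theta:
        return 2.0 * (1.0 - theta) if y == 1 else 0.0
    return 2.0 * theta * (1.0 - theta)

# illustrative record of (probability forecast, binary outcome) pairs
record = [(0.9, 1), (0.2, 0), (0.6, 0), (0.4, 1), (0.7, 1)]
n_grid = 20000
h = 1.0 / n_grid
murphy = []  # Murphy curve: (theta, mean elementary score)
for k in range(n_grid):
    theta = (k + 0.5) * h
    s_bar = sum(elementary_score(x, y, theta) for x, y in record) / len(record)
    murphy.append((theta, s_bar))

# area under the Murphy curve (midpoint rule) equals the mean Brier score
area = h * sum(s for _, s in murphy)
brier = sum((x - y) ** 2 for x, y in record) / len(record)
```

Here the area under the curve, obtained by the midpoint rule, reproduces the mean Brier score of the record up to numerical error.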
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig03_MurphyIllustration_C1Flares.pdf}
\vspace{-9mm}
\caption{Murphy curves for the probability forecasts of class C1.0+ solar flares from Table \ref{tab:C1}. Panel (b) shows the mean elementary score from \eqref{eq:SXtheta}; panels (a) and (c) show the difference between the mean elementary score for SIDC and NOAA, and for ASSA and MCSTAT, respectively.}
\label{fig:C1_Murphy}
\end{figure}
In Figure \ref{fig:C1_Murphy} we take a closer look at the Murphy diagram for the class C1.0+ solar flare forecasts. The panel at left compares the leading contenders from Table \ref{tab:C1}, the NOAA and the SIDC forecasts, and we note that NOAA has smaller $\bar{\myS}_\theta$ from \eqref{eq:SXtheta} nearly throughout, so that integration over $\bar{\myS}_\theta$ with respect to typically used mixing measures yields smaller mean scores. The panel at right compares the MCSTAT and ASSA forecasts, and we see that, while for decision thresholds $\theta$ up to about 0.30 the former has lower $\bar{\myS}_\theta$, the situation is reversed for $\theta$ greater than 0.50. Hence, it depends on the mixing measure $H$ in \eqref{eq:Schervish} whether a scoring rule prefers the MCSTAT or the ASSA forecast.
To summarize, Murphy diagrams assess overall predictive performance, with the evaluation being complete in terms of proper scoring rules and economic utility. Still, a more detailed assessment of the merits and deficiencies of competing forecasts is often desirable. For example, a forecast might be deficient by systematically over- or underpredicting, or by an inability to distinguish between instances of higher and lower true CEPs. In a nutshell, these two types of deficiencies correspond to lack of calibration and lack of discrimination, respectively. While Murphy diagrams rank forecasts with such deficiencies lower, they are not capable of diagnosing the form and extent of these issues. Reliability diagrams and ROC curves, to which we turn in the sequel, serve this purpose.
\section{Reliability diagrams assess calibration} \label{sec:reliability}
A crucial, desirable property of a probabilistic classifier is that, when looking back at a collection of forecasts and associated binary outcomes, whenever the forecast value $x$ was issued, the outcome ought to occur in about $100 \cdot x$ percent of the respective instances. To formalize this property, it is useful to think of the forecast and the outcome as random variables $X$ and $Y$, respectively, with joint distribution $\mathbb{Q}$. Then the probability forecast $X$ is calibrated if the conditional event probability,
\begin{align} \label{eq:CEP}
\text{CEP}(x) = \mathbb{Q} \left( Y=1 \mid X=x \right) = \mathbb{E} \left[ Y \mid X=x \right] \! ,
\end{align}
agrees with the forecast value $x$ for all relevant $x \in [0,1]$. As \citet[Theorem 2.11]{Gneiting2013} have shown, the condition in \eqref{eq:CEP} serves as a unified notion of calibration for binary outcomes.
\subsection{Reliability curves and reliability diagrams} \label{sec:reliabilitydiagrams}
Calibration is typically assessed graphically via reliability diagrams \citep{Murphy1977a, Murphy1992, Brocker2007} that plot an estimated version of the conditional event probability $\text{CEP}(x)$ against the forecast value $x$, with deviations from the diagonal suggesting lack of calibration. Classical approaches to estimating $\text{CEP}(x)$ rely on binning and counting and have been hampered by ad hoc implementation decisions, instability under unavoidable choices regarding binning, and inefficiency \citep{Dimitriadis2021, ArrietaIbarra2022, Roelofs2022}. To resolve these issues, \citet{Dimitriadis2021} introduced the CORP (Consistent, Optimally binned, Reproducible, and Pool-Adjacent-Violators (PAV) algorithm based) reliability diagram that plots an estimate of $\text{CEP}(x)$ obtained through nonparametric isotonic regression, subject to the regularizing constraint of isotonicity in $x$, as implemented via the PAV algorithm \citep{Ayer1955, deLeeuw2009, Jordan2022}.
While isotonic regression and the PAV algorithm have been well known as tools for re-calibration \citep{Zadrozny2002}, their usage in the construction of reliability diagrams is a recent development. For a given record of the form \eqref{eq:data} suppose without loss of generality that $x_1 \leq \cdots \leq x_n$, and let
\begin{align} \label{eq:xhat}
\hat{x}_1 \leq \cdots \leq \hat{x}_n
\end{align}
denote the re-calibrated values generated by the PAV algorithm. The CORP reliability diagram shows the piecewise linear curve that connects the points $(x_1, \hat{x}_1), \ldots, (x_n, \hat{x}_n)$. If the original forecast is calibrated, then $x_1 = \hat{x}_1, \ldots, x_n = \hat{x}_n$, and the reliability curve lies on the diagonal. Otherwise, systematic deviations from the diagonal suggest a lack of calibration.
In contrast to classical approaches that estimate a reliability curve by assigning forecast values to bins, and counting events per bin, which mandates user choices, the CORP approach does not require any tuning parameters or user intervention, benefits from the regularizing constraint of isotonicity, and has appealing finite sample optimality and asymptotic consistency properties \citep{Dimitriadis2021}. If desired, CORP reliability curves allow for an interpretation in terms of binning and counting, by identifying any horizontal segment with a bin, and interpreting the CEP as the corresponding empirical event frequency. Generally, we supplement the reliability curve with a histogram that depicts the unconditional distribution of the forecast values.
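The PAV step itself is short. The following pure-Python sketch (a minimal illustration, not the CORP software) pools adjacent violating blocks to obtain the nondecreasing fit:

```python
def pav(y_sorted):
    """Pool-Adjacent-Violators: nondecreasing least-squares fit to the
    outcomes, taken in order of increasing forecast value. The fitted
    values are the recalibrated CEP estimates x_hat_1 <= ... <= x_hat_n."""
    blocks = []  # each block stores [sum of outcomes, count]
    for v in y_sorted:
        blocks.append([float(v), 1])
        # pool while the means of the last two blocks violate monotonicity
        while len(blocks) > 1 and (blocks[-2][0] * blocks[-1][1]
                                   > blocks[-1][0] * blocks[-2][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

# outcomes ordered by increasing forecast value (illustrative data)
x_hat = pav([0, 1, 0, 0, 1, 1])
```

For this invented outcome sequence, the three middle observations are pooled into one block with CEP estimate $1/3$; the CORP reliability curve would connect the points $(x_i, \hat{x}_i)$, with horizontal segments corresponding to pooled blocks.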
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig04_ReliabilityDiagrams_C1Flares.pdf}
\vspace{-7mm}
\caption{CORP reliability diagrams for the probability forecasts of class C1.0+ solar flares from Table \ref{tab:C1}, along with (a) consistency bands around the diagonal, and (b) confidence bands, both at the 90 percent level.} \label{fig:C1_reliability}
\end{figure}
In Figure \ref{fig:C1_reliability}, we return to the solar flare forecasts from Table \ref{tab:C1} and Figure \ref{fig:C1_triptych}. At left the CORP reliability diagrams are supplemented by consistency bands. Segments of a reliability curve that lie outside the consistency band, which represents 90 percent of the reliability curves that arise under the assumption of a calibrated forecast, suggest that the lack of calibration should not be attributed to estimation noise alone. At right we show universally valid confidence bands \citep{Dimitriadis2022}. The NOAA and ASSA forecasts are well calibrated, with reliability curves mostly within the consistency bands, and the diagonal well within the confidence bands. In contrast, the SIDC and MCSTAT forecasts show substantial overprediction.
While consistency bands and confidence bands have complementary tasks \citep{Dimitriadis2021}, we use the former as a default, as they address the natural question of whether (or not) observed differences between a CORP reliability curve and the diagonal can reasonably be attributed to chance alone, under the assumption that the forecast probabilities are perfectly calibrated.
\subsection{Empirical score decomposition: Miscalibration (\text{MCB}), discrimination (\text{DSC}), and uncertainty (\text{UNC}) components} \label{sec:decomposition}
For several decades, researchers have sought decompositions of the mean score $\bar{\myS}$ from \eqref{eq:SX} into nonnegative components that allow for persuasive interpretation \citep{Murphy1973, Ferro2012}. Typically, a decomposition involves a reliability or miscalibration (\text{MCB}) term that indicates how much the predicted probabilities differ from the conditional event frequencies, a resolution or discrimination (\text{DSC}) term that measures a forecast's ability to distinguish between events and non-events, and an uncertainty (\text{UNC}) component that quantifies the inherent difficulty of the prediction problem, but does not depend on the forecast under consideration. While extant approaches lack stability under mandatory user decisions, particularly about binning, and may fail to provide an exact decomposition, or might yield components that fail to be nonnegative, the CORP approach yields a new type of decomposition that resolves these issues.
As before, for a given record \eqref{eq:data} suppose without loss of generality that $x_1 \leq \cdots \leq x_n$, and let $\hat{x}_1 \leq \cdots \leq \hat{x}_n$ denote the PAV re-calibrated values from \eqref{eq:xhat}, as plotted in the CORP reliability diagram. Furthermore, let $r = \bar{y} = \frac{1}{n} \sum_{i=1}^n y_i$ be the realized unconditional event frequency. With $\textsf{S}$ being any proper scoring rule, let
\begin{equation} \label{eq:SCR}
\bar{\myS}_{\textsf{C}} = \frac{1}{n} \sum_{i=1}^n \textsf{S}(\hat{x}_i,y_i) \quad \textrm{and} \quad \bar{\myS}_{\textsf{R}} = \frac{1}{n} \sum_{i=1}^n \textsf{S}(r,y_i)
\end{equation}
denote the mean score for the (re)Calibrated probabilities, and for the constant Reference forecast $r$, respectively. Then the mean score $\bar{\myS}$ from \eqref{eq:SX} decomposes as
\begin{equation} \label{eq:decomposition}
\bar{\myS} = \underbrace{\left( \hspace{0.3mm} \bar{\myS} - \bar{\myS}_{\textsf{C}} \right)}_\text{MCB}
- \underbrace{\left( \hspace{0.3mm} \bar{\myS}_{\textsf{R}} - \bar{\myS}_{\textsf{C}} \right)}_\text{DSC}
+ \underbrace{\bar{\myS}_{\textsf{R}}}_{\text{UNC}}.
\end{equation}
The miscalibration term $\text{MCB} = \bar{\myS} - \bar{\myS}_{\textsf{C}}$ equals the difference in the mean score of the original versus the (re)calibrated forecast. It expresses deviations of the CORP reliability curve from the diagonal in terms of the score under consideration. The discrimination component $\text{DSC} = \bar{\myS}_{\textsf{R}} - \bar{\myS}_{\textsf{C}}$ quantifies how much the (re)calibrated forecast improves upon the reference score $\bar{\myS}_{\textsf{R}}$ that is based on a calibrated, but constant forecast, and we note that, by construction, {\text{DSC}} is invariant under strictly increasing transformations of the forecast values. While small values of {\text{MCB}} are preferable, so are large values of the {\text{DSC}} component. The uncertainty term $\text{UNC} = \bar{\myS}_{\textsf{R}}$ is independent of the forecast at hand and provides a natural benchmark, as it equals the score of the (ex post) best constant forecast. In contrast to earlier types of decomposition, the CORP decomposition from \eqref{eq:SCR} and \eqref{eq:decomposition} is exact and guarantees that $\text{MCB} \geq 0$ with equality if the original forecast is calibrated, and $\text{DSC} \geq 0$ with equality if the (re)calibrated forecast is constant \citep[Theorem 1]{Dimitriadis2021}.
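Under the Brier score, the exactness of the decomposition \eqref{eq:decomposition} and the nonnegativity of its components are easy to verify numerically. The following Python sketch uses a minimal PAV implementation and invented data (an illustration, not the CORP software):

```python
def pav(y_sorted):
    """Pool-Adjacent-Violators fit to outcomes sorted by forecast value."""
    blocks = []  # each block stores [sum of outcomes, count]
    for v in y_sorted:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and (blocks[-2][0] * blocks[-1][1]
                                   > blocks[-1][0] * blocks[-2][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

def brier(x, y):
    return (x - y) ** 2

# record sorted by forecast value (illustrative data)
x = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
y = [0, 1, 0, 0, 1, 1]
n = len(x)

x_hat = pav(y)                       # (re)calibrated forecast values
r = sum(y) / n                       # unconditional event frequency

S_bar = sum(brier(xi, yi) for xi, yi in zip(x, y)) / n      # original
S_C = sum(brier(xh, yi) for xh, yi in zip(x_hat, y)) / n    # recalibrated
S_R = sum(brier(r, yi) for yi in y) / n                     # reference

MCB, DSC, UNC = S_bar - S_C, S_R - S_C, S_R  # S_bar == MCB - DSC + UNC
```

For this record the identity $\bar{\myS} = \text{MCB} - \text{DSC} + \text{UNC}$ holds exactly, and both components are nonnegative, in line with Theorem 1 of \citet{Dimitriadis2021}.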
\begin{table}[t]
\centering
\footnotesize
\caption{CORP decomposition of the mean score in \eqref{eq:SX} for the probability forecasts of class C1.0+ solar flares from Table \ref{tab:C1}, under the Brier score ($\text{UNC} = 0.211$), the logarithmic score ($\text{UNC} = 0.614$), and misclassification rate ($\text{UNC} = 0.303$).} \label{tab:C1_decom}
\begin{tabular}{l r ccc r ccc r ccc}
\toprule
Forecast && \multicolumn{3}{c}{Brier Score} && \multicolumn{3}{c}{Logarithmic Score} && \multicolumn{3}{c}{Misclassification Rate} \\
\cmidrule{3-5} \cmidrule{7-9} \cmidrule{11-13}
&& $\bar{\myS}$ & {\text{MCB}} & {\text{DSC}} && $\bar{\myS}$ & {\text{MCB}} & {\text{DSC}} && $\bar{\myS}$ & {\text{MCB}} & {\text{DSC}} \\
\midrule
NOAA && 0.144 & 0.006 & 0.073 && 0.449 & 0.027 & 0.191 && 0.205 & 0.004 & 0.102 \\
SIDC && 0.172 & 0.014 & 0.053 && 0.515 & 0.036 & 0.135 && 0.263 & 0.038 & 0.078 \\
ASSA && 0.184 & 0.007 & 0.035 && $\infty$ & $\infty$ & 0.085 && 0.273 & 0.006 & 0.036 \\
MCSTAT && 0.193 & 0.034 & 0.052 && 0.587 & 0.101 & 0.128 && 0.275 & 0.042 & 0.071 \\
\bottomrule
\end{tabular}
\end{table}
The CORP decomposition applies under any proper scoring rule $\textsf{S}$. When $\textsf{S}$ is the Brier score, it agrees with the classical decomposition of \citet{Murphy1973} in the special case where the bins reduce to unique forecast values with an associated nondecreasing sequence of conditional event frequencies \citep[Theorem 2]{Dimitriadis2021}. Under the misclassification rate, which arises from \eqref{eq:SX} with the zero--one score in \eqref{eq:MR}, the components admit appealing interpretations in terms of the original, the (re)calibrated, and the constant reference forecast being on the same or distinct side(s) of $\frac{1}{2}$. Table \ref{tab:C1_decom} shows the CORP decomposition of the mean Brier score, the mean logarithmic score, and the misclassification rate for the solar flare forecasts from Table \ref{tab:C1}. The {\text{MCB}} components confirm the visual appearance of the CORP reliability diagrams in Figures \ref{fig:C1_triptych} and \ref{fig:C1_reliability}. The NOAA forecast exhibits the least and MCSTAT the most pronounced lack of calibration. As discussed, the mean score $\bar{\myS}$ and the {\text{MCB}} component under the logarithmic score are infinite for the ASSA forecast. We defer consideration of the {\text{DSC}} components to Section \ref{sec:ROC}, where we focus attention on ROC curves.
\subsection{Calibration metrics and the Brier score {\text{MCB}} component} \label{sec:MCB}
Recently, there has been a surge of interest in calibration metrics in the machine learning literature. The widely used metric of the expected or estimated calibration error \citep[ECE:][]{Naeini2015, Guo2017} depends on binning and counting and thus is subject to the aforementioned types of instabilities \citep{Dimitriadis2021, Roelofs2022} and biases \citep{Brocker2012, Ferro2012}. In a recent review, \citet[p.~3]{ArrietaIbarra2022} summarize that
\begin{quote}
\footnotesize
``the classical empirical calibration errors based on binning vary significantly based on the choice of bins. The choice of bins is fairly arbitrary and enables the analyst to fudge results (whether purposefully or unintentionally).''
\end{quote}
To address these issues, \citet{Roelofs2022} use equal-mass bins and select the number of bins as large as possible while preserving isotonicity in the calibration curve. \citet{ArrietaIbarra2022} and \citet{Brocker2022} recommend graphical displays, calibration metrics, and tests that are based on cumulative differences between predicted and observed event probabilities. While approaches of this type have appealing mathematical properties, cumulative quantities lack intuition and ease of interpretation. Similar to the method proposed by \citet{Roelofs2022}, the CORP approach to reliability diagrams enforces isotonicity, but uses the PAV algorithm to select the number and the arrangement of the bins in fully automated, optimal ways \citep{Dimitriadis2021}. The CORP decomposition from \eqref{eq:SCR} and \eqref{eq:decomposition} is based on the CORP reliability curve, and when $\textsf{S}$ is the Brier score it yields an {\text{MCB}} component that reduces to a classical calibration metric under modest conditions \citep[Theorem 2]{Dimitriadis2021}. For these reasons, we propose the use of the Brier score {\text{MCB}} component as a calibration metric.
\section{Receiver operating characteristic (ROC) curves visualize discrimination ability} \label{sec:ROC}
While calibration is an important quality of probabilistic classifiers, a calibrated forecast is not necessarily powerful, as it may lack the ability to discriminate between instances of low and high event probability. ROC curves are key tools in the assessment of this ability \citep{Egan1975, Swets1973, Fawcett2006}. In a nutshell, ROC curves visualize potential predictive power, detached from considerations of calibration.
\subsection{ROC curves} \label{sec:ROCcurves}
To introduce ROC curves, suppose that we use the threshold $t$ to construct a hard classifier from the probability forecast $x$ in the usual way, by predicting an event ($y = 1$) if $x > t$, and predicting a non-event ($y = 0$) if $x \leq t$. For a record of the form \eqref{eq:data}, the resulting False Alarm Rate (FAR) and Hit Rate (HR) are given by
\begin{align}
\text{HR}(t) = \frac{\sum_{i=1}^n \mathds{1}(y_i = 1, x_i > t)}{\sum_{i=1}^n \mathds{1}(y_i = 1)}
\quad \text{and} \quad
\text{FAR}(t) = \frac{\sum_{i=1}^n \mathds{1}(y_i = 0, x_i > t)}{\sum_{i=1}^n \mathds{1}(y_i = 0)},
\end{align}
respectively. The ROC curve is the piecewise linear curve that connects the at most $n + 1$ unique points of the form $(\text{FAR}(t), \text{HR}(t))$ that arise as the threshold $t$ decreases.\footnote{Some researchers talk of ROC as relative operating characteristic \citep{Swets1973}, and the hit rate is also referred to as probability of detection, recall, sensitivity, or true positive rate. The false alarm rate is also known as probability of false detection, fall-out, or false positive rate. It equals one minus the specificity, selectivity, or true negative rate. See \url{https://en.wikipedia.org/wiki/Precision_and_recall\#Definition_(classification_context)}, accessed 27 No\-vember 2022. Moreover, some researchers define the ROC curve as a display that connects the points $(1-\text{HR}(t), 1-\text{FAR}(t))$ for $t \in [0,1]$, resulting in a curve that is mirrored at the anti-diagonal of the unit square, which maintains the interpretation that ROC curves at upper left are desirable \citep{HernandezOrallo2013}.} Informally, the threshold $t$ parameterizes the ROC curve, with the points (0,0) and (1,1) corresponding to $t \geq 1$ and $t < 0$, respectively.
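For concreteness, the construction above can be sketched in a few lines of code. This is a minimal illustration, not part of the replication software; the function name and interface are ours.

```python
import numpy as np

def roc_points(x, y):
    """Vertices (FAR(t), HR(t)) of the ROC curve for forecasts x and
    binary outcomes y, where an event is predicted when x > t.

    Thresholds sweep the unique forecast values in decreasing order;
    t >= max(x) yields (0, 0) and t < min(x) yields (1, 1)."""
    x, y = np.asarray(x, float), np.asarray(y, int)
    n_event, n_nonevent = (y == 1).sum(), (y == 0).sum()
    points = [(0.0, 0.0)]
    for t in np.unique(x)[::-1]:  # decreasing thresholds
        hr = ((y == 1) & (x > t)).sum() / n_event
        far = ((y == 0) & (x > t)).sum() / n_nonevent
        points.append((far, hr))
    points.append((1.0, 1.0))  # t below the smallest forecast value
    return sorted(set(points))  # at most n + 1 unique vertices
```

Connecting consecutive vertices by straight line segments yields the piecewise linear ROC curve.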
ROC curves assess the discrimination ability of forecasts and can be interpreted diagnostically \citep{Marzban2004}. If the empirical conditional distributions for data \eqref{eq:data} given an event ($y = 1$) and a non-event ($y = 0$) coincide, then the forecast is unable to distinguish between events and non-events, and its ROC curve lies on the diagonal. The larger the separation between these conditional distributions, the higher the discriminatory power of the forecast, and the further to the upper left the ROC curve. For a perfectly discriminating probabilistic classifier, there is a threshold value $t$ such that $y_i = 0$ if $x_i \leq t$ and $y_i = 1$ if $x_i > t$, and hence it exhibits an ideal ROC curve along the left and upper edges of the unit square. By construction, ROC curves are invariant under strictly increasing transformations of the forecast values.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{Fig05_A_ROC_Illustration_C1Flares.pdf}
\vspace{-2mm}
\includegraphics[width=0.56\linewidth]{Fig05_B_legend.pdf}
\caption{ROC curves for the probability forecasts of class C1.0+ solar flares from Table \ref{tab:C1}. Panel (a) shows the original ROC curves for the forecasts $x_1, \ldots, x_n$ from \eqref{eq:data}, panel (c) the concave ROC curves for the PAV re-calibrated forecasts $\hat{x}_1, \ldots, \hat{x}_n$ from \eqref{eq:xhat}. In panel (b) both the original and the concave ROC curves are shown for the NOAA and SIDC forecasts. The magnified details demonstrate that the original ROC curve morphs into its concave hull.}
\label{fig:C1_ROC}
\end{figure}
An often neglected but important consideration concerns the concavity of ROC curves. In practice, the original ROC curves constructed from empirical data \eqref{eq:data} almost inevitably fail to be concave, as illustrated on the forecasts from Table \ref{tab:C1} in Figure \ref{fig:C1_ROC}. This observation is explained by Theorems 3 and 4 of \citet{Gneiting2022a}, according to which a ROC curve is concave if, and only if, the conditional event probability is nondecreasing with the forecast value $x$, which for empirical data is hardly ever the case. ROC curves assess discrimination ability --- that is, potential predictive ability --- and while potential predictive ability is ignorant of calibration, it can only be assessed under the assumption of larger forecast values implying higher event probabilities. In this light, displays of nonconcave ROC curves have been harshly criticized, with researchers positing that they ``must be considered irrational'' and ``unethical when applied to medical decisions'' \citep{Pesce2010}.
Fortunately, there is a straightforward remedy. If one computes the ROC curve from the PAV transformed forecast values \eqref{eq:xhat} in lieu of the original forecasts from \eqref{eq:data}, the ROC curve morphs into its concave hull, that is, the smallest concave curve that lies to its upper left \citep{Fawcett2007}. The corrected, concave version of the ROC curve serves exclusively to compare discrimination ability, as differences in calibration get eliminated through re-calibration. In contrast, while original ROC curves focus on discrimination ability, conditional event frequencies that fail to be monotone generate confounding effects. In general, the transformation from the original probabilities $x_1 \leq \cdots \leq x_n$ to the PAV transformed, re-calibrated probabilities $\hat{x}_1 \leq \cdots \leq \hat{x}_n$ is monotonic, but not strictly monotonic, so a change in the ROC curve does not contradict the aforementioned invariance under strictly increasing transformations.
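The pool-adjacent-violators step can be sketched as follows. This is a minimal textbook implementation for illustration only; the CORP software in \textsf{R} is the reference implementation, and the function name here is ours.

```python
import numpy as np

def pav_calibrate(x, y):
    """Pool-adjacent-violators (PAV) re-calibration.

    Returns the isotonic fit: values nondecreasing in x, with each
    fitted value a conditional event frequency over a pooled block."""
    order = np.argsort(x, kind="stable")
    blocks = []  # each block holds [sum of outcomes, count]
    for yi in np.asarray(y, float)[order]:
        blocks.append([yi, 1.0])
        # pool while the previous block mean is >= the current block mean
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] >= blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = np.concatenate([np.full(int(c), s / c) for s, c in blocks])
    xhat = np.empty_like(fitted)
    xhat[order] = fitted  # undo the sorting permutation
    return xhat
```

Computing the ROC vertices from the re-calibrated values in lieu of the original forecasts produces the concave hull of the ROC curve.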
In summary, we strongly recommend the use of concave ROC curves, as computed from PAV transformed forecast values, in triptych graphics for empirical data. In Figure \ref{fig:C1_ROC} the left-hand panel illustrates the original versions of the ROC curves for the solar flare forecasts, the right-hand panel displays the concave ROC curves, as in the triptych graphics in Figure \ref{fig:C1_triptych}, and the middle panel illustrates the transition from the original curve to the concave hull. The NOAA forecast clearly discriminates the most and the ASSA forecast the least. The MCSTAT and SIDC forecasts exhibit roughly equal discrimination ability, with ROC curves that are nested in between the curves for the NOAA and ASSA forecasts.
\subsection{The area under the curve (\text{AUC}) measure and the Brier score discrimination (\text{DSC}) component} \label{sec:AUC}
Myriads of scientific papers have employed the Area Under the ROC Curve \citep[AUC:][]{Hanley1982, DeLong1988, Bradley1997, Marzban2004} measure to compare the predictive performance of probabilistic classifiers. AUC admits an appealing interpretation as the probability of a value drawn at random from the empirical distribution of forecast values for an event being higher than a value drawn from the distribution for a non-event. An {\text{AUC}} value of 1 signifies perfect discrimination ability; a value of $\frac{1}{2}$ indicates no discrimination, corresponding to the trivial ROC curve on the diagonal. A value smaller than $\frac{1}{2}$ implies that interchanging the predictions for 0 and 1 would improve forecast accuracy. As the ROC curve is invariant under strictly increasing transformations, so is {\text{AUC}}, and we note that {\text{AUC}} exclusively concerns discrimination ability, while ignoring (mis)calibration.
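The pairwise-comparison interpretation suggests a direct, if $O(n^2)$, computation. In the sketch below, which is our own illustration, ties between forecast values are counted half, matching the trapezoidal area under the piecewise linear ROC curve.

```python
import numpy as np

def auc_pairwise(x, y):
    """AUC as the probability that a forecast value drawn at random for
    an event exceeds one drawn for a non-event, ties counted half."""
    x, y = np.asarray(x, float), np.asarray(y, int)
    ev, ne = x[y == 1], x[y == 0]  # forecasts for events / non-events
    greater = (ev[:, None] > ne[None, :]).mean()
    ties = (ev[:, None] == ne[None, :]).mean()
    return greater + 0.5 * ties
```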
However, {\text{AUC}} is not suitable as an overall performance metric. To summarize arguments of \citet{Hand2009} in an informal way, and specializing them to calibrated forecasts, {\text{AUC}} assumes the form in \eqref{eq:Schervish} with a mixing measure $H$ that depends on the forecast values in complex ways. As argued by \citet{Hand2009}, such a dependence ``is absurd'', with \citet{Hand2022} recently adding that
\begin{quote}
\footnotesize
``The dependence of the {\text{AUC}} on the classifier itself means that it can lead to seriously misleading results, so it is concerning that it continues to be widely used.''
\end{quote}
Both {\text{AUC}} and the {\text{DSC}} measure from the CORP score decomposition in \eqref{eq:decomposition} are measures of discrimination ability. Under calibration, the {\text{MCB}} component vanishes, and as the {\text{UNC}} component is independent of the forecast, $\bar{\myS}$ equals the {\text{DSC}} component, except for an additive constant. Furthermore, if $\textsf{S}$ is the Brier score then $\text{DSC}$ reduces to a classical component, subject to conditions \citep[Theorem 2]{Dimitriadis2021}.
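The interplay of $\bar{\myS}$, {\text{MCB}}, {\text{DSC}}, and {\text{UNC}} can be made concrete in code. The following sketch computes the CORP decomposition of the mean Brier score from first principles; it is a minimal illustration under our own naming, with the software accompanying \citet{Dimitriadis2021} being the reference implementation.

```python
import numpy as np

def _pav(x, y):
    """Pool-adjacent-violators: isotonic conditional event frequencies."""
    order = np.argsort(x, kind="stable")
    blocks = []  # each block holds [sum of outcomes, count]
    for yi in np.asarray(y, float)[order]:
        blocks.append([yi, 1.0])
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] >= blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = np.concatenate([np.full(int(c), s / c) for s, c in blocks])
    out = np.empty_like(fitted)
    out[order] = fitted
    return out

def corp_brier_decomposition(x, y):
    """CORP decomposition of the mean Brier score,
    S_bar = MCB - DSC + UNC, with
      MCB = S_bar - S_bar_C   (miscalibration),
      DSC = S_bar_R - S_bar_C (discrimination),
      UNC = S_bar_R           (uncertainty),
    where S_bar_C scores the PAV re-calibrated forecast and S_bar_R the
    constant forecast at the unconditional event frequency."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    brier = lambda p: float(np.mean((p - y) ** 2))
    s_bar = brier(x)
    s_c = brier(_pav(x, y))
    s_r = brier(np.full_like(y, y.mean()))
    return s_bar - s_c, s_r - s_c, s_r  # MCB, DSC, UNC
```

Under calibration the isotonic fit reproduces the forecast, so {\text{MCB}} vanishes and $\bar{\myS}$ differs from $-\text{DSC}$ only by the constant {\text{UNC}}.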
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig06_AUC_Scatter.pdf}
\caption{Scatter plots of (a) {\text{AUC}} vs.~mean Brier score, and (b) {\text{AUC}} vs.~the Brier score {\text{DSC}} component for the forecasts from panel (b) of Figure \ref{fig:MCB_DSC} with mean Brier score less than 0.195. Benchmark forecasts are represented in green. While {\text{AUC}} and the Brier score {\text{DSC}} component are positively oriented, the mean Brier score is negatively oriented.}
\label{fig:AUC_DSC}
\end{figure}
These relationships suggest the use of the Brier score {\text{DSC}} component as an attractive alternative to {\text{AUC}} if a measure of discrimination ability is sought. For illustration, Figure \ref{fig:AUC_DSC} shows scatter plots of performance measures for the solar flare forecasts from Figure \ref{fig:MCB_DSC}. In contrast to the association between {\text{AUC}} and the mean Brier score itself, which is weak, the association between {\text{AUC}} and the Brier score {\text{DSC}} component is strong. While both {\text{AUC}} and {\text{DSC}} are invariant under strictly increasing classifier transformations, $\text{DSC} = \bar{\myS}_{\textsf{R}} - \bar{\myS}_{\textsf{C}}$ admits the appealing interpretation as (the negative of) the mean Brier score for the PAV (re)calibrated forecast, up to the constant $\bar{\myS}_{\textsf{R}}$. Evidently, this interpretation is retained when $\textsf{S}$ is the logarithmic score or any other proper scoring rule.
\section{Putting it together: The triptych graphics and {\text{MCB}}--{\text{DSC}} plots} \label{sec:together}
Considering the class C1.0+ solar flares example from the previous sections and Figures \ref{fig:C1_triptych}--\ref{fig:C1_ROC}, the NOAA forecast is superior in all facets. However, rankings of the other forecasts from Table \ref{tab:C1} depend on the criteria used: The ASSA forecast is well calibrated, but exhibits poor discrimination ability. The MCSTAT and SIDC forecasts show discrimination ability in between the NOAA and ASSA forecasts, but lack calibration. Murphy curves provide an overall assessment of predictive performance, covering both calibration and discrimination ability, and favor the NOAA forecast, followed by the SIDC forecast, whereas rankings of the MCSTAT and ASSA forecasts depend on the scoring rule used.
These considerations illustrate that the evaluation of predictive performance is multi-faceted and highly complex, even in the basic setting of probabilistic classifiers for binary outcomes. To address this challenge, we propose the use of a triptych of diagnostic graphics, consisting of Murphy curves, CORP reliability diagrams, and ROC curves (Figure \ref{fig:C1_triptych}). The triptych displays present a wealth of information in disentangled form: The CORP reliability diagram diagnoses calibration (only), the ROC curve in its concave form assesses discrimination ability (only), and Murphy curves consider economic utility and overall predictive performance, encompassing both calibration and discrimination.
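The Murphy-curve panel of the triptych can likewise be sketched from first principles, using the elementary scores for probability forecasts of binary outcomes \citep{Ehm2016}, here scaled so that the area under the curve equals the mean Brier score. The parameterization and function name below are our choices for illustration.

```python
import numpy as np

def murphy_curve(x, y, thetas):
    """Mean elementary score at each threshold theta.

    Elementary scores for probability forecasts of binary outcomes,
    scaled such that the area under the curve over theta in (0, 1)
    equals the mean Brier score."""
    x, y = np.asarray(x, float), np.asarray(y, int)
    curve = []
    for th in thetas:
        miss = (y == 1) & (x <= th)        # event occurred, not predicted
        false_alarm = (y == 0) & (x > th)  # no event, but predicted
        s = 2.0 * ((1.0 - th) * miss + th * false_alarm)
        curve.append(s.mean())
    return np.array(curve)
```

For the toy record $x = (0.2, 0.7)$, $y = (0, 1)$, the average of the curve over a fine midpoint grid approximates the mean Brier score $((0.2-0)^2 + (0.7-1)^2)/2 = 0.065$.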
\subsection{Theoretical guarantees} \label{sec:theory}
In the triptych graphics, information about overall predictive ability in the Murphy curves is disentangled into facets of calibration, as displayed in CORP reliability diagrams, and facets of discrimination ability, as visualized by ROC curves. We now summarize existing and new theoretical results that support this intuition and provide new insights about links between the displays, with technical details being available in Appendix \ref{app:proofs}. The findings are illustrated in idealized settings, where we consider the joint distribution of a pair $(X,Y)$ of random variables, with $X$ representing the probability forecast and $Y$ the binary outcome. The triptych graphics in these ideal settings derive from the population quantities, and can be interpreted as the triptych graphics that arise when a record of the form in \eqref{eq:data} is generated by ever larger samples from the joint distribution of $(X,Y)$. As the populations involved show nondecreasing CEPs, original and re-calibrated probabilities coincide, and yield the same, concave ROC curve.
We begin with a discussion of the role of calibration. As noted, any forecast can be re-calibrated ex post, by applying the PAV algorithm that converts the original forecast probabilities $x_1 \leq \cdots \leq x_n$ from \eqref{eq:data} into the calibrated probabilities $\hat{x}_1 \leq \cdots \leq \hat{x}_n$ from \eqref{eq:xhat}. The following stylized fact summarizes findings from \citet[Theorem 6.3]{Schervish1989} and \citet[Corollary 2]{Holzmann2014}.
\begin{fact} \label{fact:re-calibrated}
If a probability forecast fails to be calibrated, its re-calibrated version is superior in terms of Murphy curves.
\end{fact}
To illustrate Fact \ref{fact:re-calibrated} we consider the idealized Scenario A, where the binary outcome $Y$ has event probability $X_0$, which is uniformly distributed on the unit interval. We compare to the probability forecast $X_1 = \frac{3}{8} + \frac{1}{4} X_0$, which is a strictly increasing transformation of $X_0$. Part (a) of Figure \ref{fig:ABC} shows idealized triptych plots, where the reliability diagrams are augmented by density plots for the unconditional distribution of the forecast values. While $X_0$ and $X_1$ have the same discrimination ability and identical ROC curves, $X_1$ fails to be calibrated, whereas $X_0$ is calibrated. In fact, $X_0$ is the re-calibrated version of $X_1$, and thus is superior in terms of Murphy curves.
\begin{figure}[p]
\centering
\begin{subfigure}{\linewidth}
\caption{Scenario A} \label{fig:A}
\includegraphics[width=\linewidth]{Fig07_A_PopQuantities.pdf}
\end{subfigure}
\\
\begin{subfigure}{\linewidth}
\caption{Scenario B} \label{fig:B}
\includegraphics[width=\linewidth]{Fig07_B_PopQuantities.pdf}
\end{subfigure}
\\
\begin{subfigure}{\linewidth}
\caption{Scenario C} \label{fig:C}
\includegraphics[width=\linewidth]{Fig07_C_PopQuantities.pdf}
\end{subfigure}
\vspace{-3mm}
\caption{Triptych displays in idealized Scenarios A, B, and C} \label{fig:ABC}
\end{figure}
The next fact summarizes a crucial, novel finding. We provide a rigorous version as Theorem \ref{thm:crossingpoints} in Appendix \ref{app:proofs}, which also contains its proof.
\begin{fact} \label{fact:crossingpoints}
For two competing probability forecasts that are both calibrated, the number of crossing points of the ROC curves equals the number of crossing points of the Murphy curves.
\end{fact}
For illustration we turn to Scenario B, where the forecasts $X_1$ and $X_2$ are both calibrated. As before, let $X_0$ be uniformly distributed, and let the outcome $Y$ have true event probability $X_0$. We consider the probability forecasts
\begin{align} \label{eq:B}
X_1 = \begin{cases}
X_0 & \text{if } X_0 < \frac{1}{4}, \\
\frac{1}{2} & \text{if } \frac{1}{4} \leq X_0 \leq \frac{3}{4}, \\
X_0 & \text{if } X_0 > \frac{3}{4},
\end{cases}
\quad \text{and} \quad
X_2 = \begin{cases}
\frac{1}{8} & \text{if } X_0 < \frac{1}{4}, \\
X_0 & \text{if } \frac{1}{4} \leq X_0 \leq \frac{3}{4}, \\
\frac{7}{8} & \text{if } X_0 > \frac{3}{4},
\end{cases}
\end{align}
respectively. The triptych plots in part (b) of Figure \ref{fig:ABC} illustrate that the ROC and Murphy curves share the same number, namely two, of crossing points.
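The construction in \eqref{eq:B} is easy to simulate, and a Monte Carlo check confirms empirically that both forecasts are calibrated. The sketch below is ours; sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x0 = rng.uniform(size=n)       # X_0 uniform on the unit interval
y = rng.uniform(size=n) < x0   # outcome Y with event probability X_0

# forecasts X_1 and X_2 from equation (B)
x1 = np.where((x0 >= 0.25) & (x0 <= 0.75), 0.5, x0)
x2 = np.where(x0 < 0.25, 0.125, np.where(x0 > 0.75, 0.875, x0))

# conditional event frequencies match the forecast values
freq_x1 = y[x1 == 0.5].mean()    # close to 1/2
freq_x2 = y[x2 == 0.125].mean()  # close to 1/8
```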
In particular, Fact \ref{fact:crossingpoints} implies that if two competing probability forecasts are both calibrated, then there is a superiority relation in terms of ROC curves if, and only if, there is a superiority relation in terms of Murphy curves. The following fact sharpens this statement and relates to considerations of sharpness \citep{Gneiting2007}. Informally, a probability forecast is sharper than another if its forecast values are closer to the most confident values of 0 and 1, respectively. In Appendix \ref{app:proofs} we state and prove a rigorous version of the subsequent Fact \ref{fact:sharper} in Theorem \ref{thm:sharper}.
\begin{fact} \label{fact:sharper}
If two competing probability forecasts are both calibrated, and one of them is sharper than the other, then the sharper one is superior in terms of both ROC curves and Murphy curves, and vice versa.
\end{fact}
For an illustration in terms of nested information sets, which imply Murphy dominance as proved by \citet[Corollary 4]{Holzmann2014} and \citet[Proposition 3.1]{Kruger2021}, we consider Scenario C. Specifically, let the binary outcome $Y$ have true event probability $X_0 = \Phi( \, \sum_{i=1}^4 a_i \, )$, where $a_1, a_2, a_3$, and $a_4$, respectively, are independent standard normal variates, and $\Phi$ is the cumulative distribution function of the standard normal distribution. We consider the probability forecasts
\begin{align} \label{eq:C}
X_j = \Phi \left( \frac{1}{(j+1)^{1/2}} \sum_{i=1}^{4-j} a_i \right)
\end{align}
for $j = 0, 1, 2$, and $3$. So, there are four independent sources of information, represented by $a_1, a_2, a_3$, and $a_4$, and the forecast $X_j$ provides the correct specification of the event probability conditional on $4 - j$ sources being available. Thus the information sets are nested, the forecasts are calibrated, and they exhibit an increase in sharpness as $j$ decreases. The triptych graphs in part (c) of Figure \ref{fig:ABC}, which include density plots in the reliability diagrams, illustrate the increase in sharpness and the associated gain in terms of both ROC curves and Murphy curves. Pairwise comparisons between the forecasts illustrate the relationships guaranteed by Fact \ref{fact:sharper}.
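Scenario C can be simulated in the same spirit. The sketch below draws from the joint distribution and checks empirically that the forecast is calibrated on a bin of forecast values; the sample size, seed, bin, and the choice $j = 2$ are ours, and the normal CDF is evaluated via the error function.

```python
import numpy as np
from math import erf

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

rng = np.random.default_rng(7)
n = 100_000
a = rng.standard_normal((4, n))  # four independent information sources
x0 = phi(a.sum(axis=0))          # true event probability X_0
y = rng.uniform(size=n) < x0     # outcome Y with event probability X_0

# forecast X_j from equation (C) with j = 2: two sources available
x2 = phi(a[:2].sum(axis=0) / np.sqrt(3.0))

# conditional event frequency on a bin of forecast values
in_bin = (x2 > 0.45) & (x2 < 0.55)
```

Within the bin, the empirical event frequency agrees with the mean forecast value, consistent with calibration of $X_2$.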
\subsection{Visualizing classifier performance for many competitors simultaneously: {\text{MCB}}--{\text{DSC}} plots} \label{sec:MCB_DSC}
It is not uncommon that a multitude of competing forecasts are to be compared, with forecast contests being prime examples of such settings \citep{Leka2019a, Salganik2020b}. The consideration of all competing forecasts in the triptych graphics then results in overcrowded displays. However, the components of the CORP decomposition of a mean score $\bar{\myS}$ from \eqref{eq:decomposition} can serve as numerical summaries. For a succinct comparison that considers multiple facets of forecast performance we propose a simple tool, which we call an {\text{MCB}}--{\text{DSC}} plot, namely, a scatter plot of the miscalibration (\text{MCB}) versus the discrimination (\text{DSC}) component of the CORP decomposition, augmented with a set of parallel contour lines that according to \eqref{eq:decomposition} correspond to an equal mean score. Importantly, the joint consideration of the {\text{MCB}} and {\text{DSC}} components enables a comparison in terms of the mean score $\bar{\myS}$ as well, contrary to the joint use of the (traditional) Brier score reliability component and AUC in the extant literature, as exemplified in \citet[Figure 2]{Hewson2021}.
{\text{MCB}}--{\text{DSC}} plots admit appealing interpretations that apply under any choice of the underlying proper scoring rule $\textsf{S}$, as summarized now.
\begin{itemize}
\item
For any forecast method considered, the mean score $\bar{\myS}$ and the associated {\text{MCB}} and {\text{DSC}} components from \eqref{eq:decomposition} can be read off immediately. The {\text{UNC}} component depends on the outcomes only, is shared by all methods considered, and equals the label attached to the diagonal.
\item
The origin of the coordinate system in an {\text{MCB}}--{\text{DSC}} plot, where $\text{MCB} = \text{DSC} = 0$, corresponds to the best constant forecast, namely, the unconditional event frequency in the test set. As noted, the diagonal corresponds to its mean score, namely, $\bar{\myS}_{\textsf{R}} = \text{UNC}$. Forecasts that appear above the diagonal perform better than this reference, forecasts below the diagonal perform worse.
\item
The mean score $\bar{\myS}_{\textsf{C}} = \text{UNC} - \text{DSC}$ of the PAV-(re)calibrated forecast equals the negative of the $\text{DSC}$ component, up to a constant. This illustrates that the forecast with the largest {\text{DSC}} component has the greatest potential, provided (re)calibration is an option.
\end{itemize}
Figures \ref{fig:MCB_DSC} and \ref{fig:M1_MCB_DSC} show {\text{MCB}}--{\text{DSC}} plots for competing solar flare forecasts \citep{Leka2019a, Leka2019b}, and for a considerably larger number of forecasts for a binary outcome from the Fragile Families Challenge \citep{Salganik2020b}, at which we take a closer look in Section \ref{sec:FFC}. We focus on the Brier score decomposition, which is of particular appeal, as all terms involved are guaranteed to be finite, the mean Brier score $\bar{\myS}$ equals the area under the Murphy curve, and under modest conditions, the Brier score $\text{MCB}$ component reduces to a classical measure of deviations from the diagonal in a reliability diagram \citep{Dimitriadis2021}. Under the logarithmic score, the mean score $\bar{\myS}$ equals a weighted area under the Murphy curve, and both $\bar{\myS}$ and the $\text{MCB}$ component may become infinite. While {\text{MCB}}--{\text{DSC}} plots might mask details of the predictive performance, they are well suited to the task of selecting subsets of interesting forecasts that can be analyzed further by plotting triptych graphics, as exemplified in the subsequent suite of data examples.
\section{Empirical examples} \label{sec:empirical}
We illustrate the use of the triptych displays and {\text{MCB}}--{\text{DSC}} plots for probabilistic classifiers from the academic literature in astrophysics, economics, and the social sciences.
\subsection{Solar flares} \label{sec:solar}
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{Fig08_M1Flares_MCBDSC.pdf}
\caption{{\text{MCB}}--{\text{DSC}} plots for probability forecasts of class M1.0+ solar flares under (a) the Brier score and (b) the logarithmic score. Colors align with Figure \ref{fig:M1_triptych}. The green square at the origin represents the ex post best constant forecast, that is, the unconditional event frequency, and the thick green line separates forecasts that are better (above the line) and that are worse (below the line) than this baseline. Forecasts shown along the right margin in panel (b) have an infinite mean logarithmic score.} \label{fig:M1_MCB_DSC}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{Fig09_triptych_M1Flares.pdf}
\vspace{-10mm}
\caption{Triptych graphics for probability forecasts of class M1.0+ solar flares. Reliability curves are shown on (the smallest contiguous interval containing) the support of the forecast distribution.} \label{fig:M1_triptych}
\end{figure}
Solar flares are energetic phenomena with potentially disastrous effects on modern terrestrial communications systems. Owing to the increased availability of astrophysical data in real time, numerous forecasting systems for solar flares have been developed, and in a series of workshops a data repository for comparative evaluation has been created \citep{Barnes2016, Leka2019a, Leka2019b}.
We consider operational probability forecasts at a prediction horizon of a day ahead for solar flares of class C1.0+ and M1.0+, in exceedance of $10^{-6}$ and $10^{-5}$ Watts per square meter, respectively, as issued in calendar years 2016 and 2017. While \citet{Leka2019a} and \citet{Leka2019b} describe 11 and 19 competing forecasts for C1.0+ and M1.0+ flares, respectively, there are substantial amounts of missing data in the records. As fair comparisons require evaluation on a joint set of forecast situations, we restrict our analysis to test sets of 9 forecasts for C1.0+ flares on 577 days, as analyzed in Figures \ref{fig:C1_triptych} and \ref{fig:MCB_DSC}, and 17 forecasts for M1.0+ flares on 431 days. On these test sets, records are complete and flares have unconditional event frequencies of 30.3 and 3.5 percent, respectively.
Turning to M1.0+ flares, Figure \ref{fig:M1_MCB_DSC} shows {\text{MCB}}--{\text{DSC}} plots under the Brier score and the logarithmic score. Notably, under the logarithmic score most forecasts are outperformed by the best constant forecast. For the triptych graphics in Figure \ref{fig:M1_triptych} we select the NOAA forecast, which performs well under either scoring rule, and the NICT forecast, which is by far the best performing method in terms of the Brier score. Furthermore, we consider the MCSTAT forecast, which is poorly calibrated, and the ASSA forecast as a technique that lacks discrimination ability. Due to the low unconditional event probability, most forecast values are small. The NOAA forecast is of high quality in every regard. The NICT forecast is a hard classifier, that is, it issues forecast probabilities of 0 and 1 only. It performs best under most thresholds $\theta$ in the Murphy diagram, except at very high values. Not surprisingly, it is penalized by an infinite mean logarithmic score. The MCSTAT forecast exhibits good discrimination ability, but overpredicts, that is, the conditional event frequency is persistently lower than the forecast value, resulting in poor overall performance. These issues can be addressed by (re)calibration, as opposed to the lack of discrimination ability of the ASSA forecast, which cannot be remedied.
\subsection{Survey of Professional Forecasters (SPF) probability forecasts of economic recessions} \label{sec:GDP}
We proceed to study probability forecasts for US GDP recessions, that is, quarters with a negative real GDP growth rate. The data base of the Survey of Professional Forecasters \citep[SPF:][]{Croushore2019} includes probability forecasts for a GDP decline in the current and the following four quarters from the fourth quarter of 1968 through the third quarter of 2019.\footnote{SPF forecasts are available under \href{https://www.philadelphiafed.org/surveys-and-data/recess}{https://www.philadelphiafed.org/surveys-and-data/recess} and binary outcomes under \href{https://www.philadelphiafed.org/surveys-and-data/real-time-data-research/routput}{https://www.philadelphiafed.org/surveys-and-data/real-time-data-research/routput}. Data for the third quarter of 1975 are missing.} Following \citet{Lahiri2013}, we consider the mean over all individual SPF forecasts, which we denote SPF Consensus, and SPF forecaster \#65, who reports the second most frequently among the survey participants. \citet{Lahiri2013} study SPF probability forecasts through the first quarter of 2011 by evaluating calibration, assessing potential predictive ability through ROC curves, and reporting mean Brier and mean logarithmic scores. While their analysis is in the spirit of the triptych approach, it differs by necessity, as the methods proposed here depend on recent methodological advances \citep{Ehm2016, Dimitriadis2021} not yet available then.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig10_triptych_SPF_Consensus.pdf}
\vspace{-10mm}
\caption{Triptych graphics for SPF Consensus forecasts of US recessions at different prediction horizons} \label{fig:SPF_horizons}
\end{figure}
The triptych graphics in Figure \ref{fig:SPF_horizons} serve to compare SPF Consensus forecasts at prediction horizons ranging from current quarter nowcasts to four quarters ahead. The test set comprises 195 quarters between the second quarter of 1971 and the first quarter of 2019, with unconditional event frequency 0.16. We note the unsurprising yet drastic effects of the forecast horizon on the quality of the SPF Consensus forecast. While forecasts are reasonably well calibrated at all prediction horizons, the discrimination ability as shown by the ROC curves and, hence, overall predictive ability as visualized by the Murphy curves, deteriorate dramatically with the prediction horizon. Forecasts four quarters ahead have virtually no discrimination ability. Table \ref{tab:SPF_decom} concerns a test set of 61 quarters between 1972 and 2006, for which predictions by SPF forecaster \#65 are available, with unconditional event frequency 0.23. The SPF Consensus forecast outperforms SPF forecaster \#65 at all prediction horizons considered. The predictive performance of the individual forecaster falls markedly below that of the unconditional reference forecast at a prediction horizon of two quarters already. The SPF Consensus forecast maintains superior discrimination ability and overall performance at a prediction horizon of two quarters, and performs on par with the unconditional reference forecast at a prediction horizon of four quarters ahead.
\begin{table}[t]
\centering
\footnotesize
\caption{CORP decomposition of mean Brier score for probability forecasts of US recessions from SPF Consensus and SPF \#65. The {\text{UNC}} component equals the mean Brier score for the best constant forecast, namely, the unconditional event frequency in the test set, at 0.177.} \label{tab:SPF_decom}
\begin{tabular}{l r ccc r ccc r ccc}
\toprule
Forecast && \multicolumn{3}{c}{$h = 1$} && \multicolumn{3}{c}{$h = 2$} && \multicolumn{3}{c}{$h = 4$} \\
\cmidrule{3-5} \cmidrule{7-9} \cmidrule{11-13}
&& $\bar{\myS}$ & {\text{MCB}} & {\text{DSC}} && $\bar{\myS}$ & {\text{MCB}} & {\text{DSC}} && $\bar{\myS}$ & {\text{MCB}} & {\text{DSC}} \\
\midrule
SPF Consensus && 0.118 & 0.045 & 0.104 && 0.144 & 0.043 & 0.075 && 0.177 & 0.018 & 0.018 \\
SPF \#65 && 0.143 & 0.019 & 0.053 && 0.207 & 0.043 & 0.013 && 0.212 & 0.036 & 0.001 \\
\bottomrule
\end{tabular}
\medskip
\end{table}
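As a consistency check on Table \ref{tab:SPF_decom}, the decomposition identity $\bar{\myS} = \text{MCB} - \text{DSC} + \text{UNC}$ from \eqref{eq:decomposition} can be verified entry by entry; for the SPF Consensus forecast at $h = 1$, for instance,
\[
\text{MCB} - \text{DSC} + \text{UNC} = 0.045 - 0.104 + 0.177 = 0.118 = \bar{\myS},
\]
and likewise $0.036 - 0.001 + 0.177 = 0.212$ for SPF \#65 at $h = 4$.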
\subsection{Fragile Families Challenge} \label{sec:FFC}
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{Fig11_FFC_Eviction_MCBDSC.pdf}
\caption{{\text{MCB}}--{\text{DSC}} plots for probability forecasts of eviction from the Fragile Families Challenge under (a) the Brier score and (b) the logarithmic score. Benchmark forecasts are marked in green; the other colors align with Figure \ref{fig:FFC_triptych}. The green square at the origin represents the ex post best constant forecast, that is, the unconditional event frequency, and the thick green line separates forecasts that are better (above the line) and that are worse (below the line) than this baseline. Forecasts shown along the right margin in panel (b) have infinite mean logarithmic score. Various forecasts are not represented in the displays, due to trivial submissions \citep[Table S5]{Salganik2020b}, overlap in symbols or labels, or a particularly poor (but finite) mean score.} \label{fig:FFC_MCB_DSC}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{Fig12_Triptych_FFC_Eviction.pdf}
\vspace{-10mm}
\caption{Triptych graphics for probability forecasts of eviction from the Fragile Families Challenge.} \label{fig:FFC_triptych}
\end{figure}
The Fragile Families Challenge \citep{Salganik2020a, Salganik2020b, Salganik2021} is a scientific mass collaboration where teams supplied predictions for six (three binary and three real-valued) variables about life trajectories of children and families, based on a rich data set from the Fragile Families and Child Wellbeing Study \citep{Reichman2001}. \citet{Salganik2020b} posit in their abstract that
\begin{quote}
\footnotesize
``despite using a rich dataset and applying machine-learning methods optimized for prediction, the best predictions were not very accurate and were only slightly better than those from a simple benchmark model''.
\end{quote}
We use the triptych methodology to shed detailed light on this claim for one of the binary outcomes in the study, namely, eviction (from a family's home or apartment, for not paying rent or mortgage). For eviction, and also for the binary outcome job training (specifically, primary caregiver participation in job training) analyzed in panel (b) of Figure \ref{fig:MCB_DSC}, probability forecasts were sought for a holdout set of 1,103 families, with 160 teams providing valid contributions. In addition, the Challenge organizers supplied nine benchmark forecasts based on commonly used statistical and machine learning techniques. The unconditional event frequency in the holdout set is 0.059 for eviction and 0.246 for job training.
The substantial number of up to 169 forecasts to be compared discourages the immediate use of the triptych graphics. To enable the selection of methods of interest, Figure \ref{fig:FFC_MCB_DSC} shows {\text{MCB}}--{\text{DSC}} plots for the eviction data set under the Brier score and the logarithmic score, respectively. Benchmark forecasts are represented in green and cluster near the origin; that is, they are well calibrated but lack discrimination. Interestingly, a substantial number of competitors outperform the benchmarks of \citet{Salganik2020b} and the best constant forecast in terms of both scores, though the improvement is small. However, for the job training data set in panel (b) of Figure \ref{fig:MCB_DSC} none of the teams show predictive ability superior to the benchmark forecasts.
For the triptych graphics in Figure \ref{fig:FFC_triptych} we select mrdc as the best forecast in terms of the Brier score, bjgoode and Justajwu as the best discriminating forecasts with respect to the Brier score and the logarithmic score, respectively, and the baseline technique benchmark\_logit\_full of \citet{Salganik2020b}. The mrdc, bjgoode, and Justajwu forecasts outperform the baseline model in terms of discrimination ability, as depicted by the ROC curve. Due to the low unconditional event frequency, the forecasts take on values below 0.40 only, and we restrict the reliability diagrams and Murphy curves to this range. The baseline model is particularly well calibrated, which makes it competitive in terms of overall predictive performance, as demonstrated in the Murphy curves. However, (re)calibrated versions of the mrdc, bjgoode, and Justajwu forecasts are bound to outperform the benchmarks by notable margins.
\section{Discussion} \label{sec:discussion}
In this paper, we have proposed the joint use of a triptych of diagnostic graphics in the evaluation of probability forecasts, including the reliability diagram in the recently proposed CORP form to assess calibration, the concave variant of the receiver operating characteristic (ROC) curve to elucidate discrimination ability, and the Murphy curve for the overall assessment of predictive performance and economic utility. For a succinct overview of the performance of multiple forecasts we have introduced {\text{MCB}}--{\text{DSC}} plots that leverage the CORP decomposition of a mean proper score into miscalibration (\text{MCB}), discrimination (\text{DSC}), and uncertainty (\text{UNC}) components. Software for the implementation of these tools and for the replication of the results in the article is available in \textsf{R} \citep{R, replication_triptych}. An \textsf{R} package implementation is currently in development.
Our work builds on and supplements, and in a sense completes, extant software for the evaluation of probabilistic classifiers, or probabilistic forecasts in general, including but not limited to the \texttt{ROCR} \citep{Sing2005}, \texttt{pROC} \citep{Robin2011}, \texttt{Murphydiagram} \citep{Ehm2016}, \texttt{verification} \citep{R_verification}, and \texttt{reliabilitydiag} \citep{Dimitriadis2021} packages in \textsf{R}. Arguably, closest in spirit are the \texttt{classifierplots} \citep{R_classifierplots} package, which generates a ``grid of diagnostic plots'' that includes reliability diagrams and ROC curves, and the interactive \texttt{Calibrate} approach of \citet{Xenopoulos2023}. However, these packages do not use the CORP approach of \citet{Dimitriadis2021} for the generation of reliability diagrams and score decompositions, nor do they implement Murphy diagrams.
In view of the general theory of calibration and score decompositions developed by \citet{Gneiting2021}, the triptych approach to the diagnostic evaluation of probability forecasts might serve as a blueprint for evaluation strategies in similar settings, including but not limited to ordinary least squares regression, forecasts in the form of the expected value of a general real-valued outcome, quantile regression, and quantile forecasts. The case of quantiles has been studied by \citet{Gneiting2023}, whose toolbox includes variants of Murphy curves, CORP reliability diagrams, and the CORP score decomposition. The recently developed universal ROC (UROC) curve of \citet{Gneiting2022b} generalizes the ROC curve from the classical case of a binary outcome to a general real-valued outcome, and the UROC curve might join CORP reliability diagrams and Murphy curves to form triptych graphics in the above types of settings. While currently available implementations of the triptych graphics and {\text{MCB}}--{\text{DSC}} plots involve static graphics only, ever increasing numbers of competitors in forecast contests \citep{Salganik2020b, Makridakis2022} may warrant the development of interactive versions, where users can select competitors of interest in an {\text{MCB}}--{\text{DSC}} plot and generate the respective triptych graphics on the fly.
\section*{Acknowledgement}
Timo Dimitriadis, Tilmann Gneiting and Alexander Jordan gratefully acknowledge support by the Klaus Tschira Foundation. Timo Dimitriadis gratefully acknowledges financial support from the German Research Foundation (DFG) through grant number 502572912. The work of Peter Vogel was funded by DFG through grant number 257899354. We thank Kajal Lahiri, Johannes Resin, Stefan Trautmann, and participants at the 2019 and 2022 International Symposium on Forecasting in Thessaloniki and Oxford, respectively, COMPSTAT 2022 in Bologna, and CFE 2022 in London for comments and advice.
\title{Thresholding Greedy Pursuit for Sparse Recovery Problems}
\begin{abstract}
We study here sparse recovery problems in the presence of additive noise. We analyze a thresholding version of the CoSaMP algorithm, named Thresholding Greedy Pursuit (TGP). We demonstrate that an appropriate choice of the thresholding parameter, even without knowledge of the sparsity level of the signal and the strength of the noise, can result in exact recovery with no false discoveries as the dimension of the data increases to infinity.
\end{abstract}
\section{Introduction} \label{section1}
In this section, we introduce our algorithm and the associated theorems that provide theoretical guarantees. We also give an overview of sparse recovery algorithms and the related literature.
\subsection{Sparsity Promoting Optimization}
We are interested in finding sparse \textit{signals} $\bm{x} \in \mathbb C^K$ from \textit{measurements} $\bm{b} \in \mathbb C^{N}$ that are related by
\begin{equation}\label{eq:sparse}
\mathcal{A} \bm{x} +\bm{e} = \bm{b},
\end{equation}
where $\mathcal{A}\in \mathbb C^{N\times K}$ is the \textit{measurement matrix} and $\bm{e}$ is an unknown noise vector.
Typically, the system \eqref{eq:sparse} is \textit{underdetermined} because we can only gather a few measurements, so $N \ll K$.
When $N \ll K$, it is not possible to solve this system uniquely without additional a priori information. It is therefore usually assumed that the
signal vector $\bm{x}$ is $M$-sparse, which means it has $M$ nonzero entries, and $M \ll K$. In the noiseless case ($\bm{e} =0 $), one can find a solution using the Basis Pursuit~\cite{Claerbout1973}:
\begin{equation}\label{eq:basispursuit}
\text{Find } \argmin\|\bm{x}\|_1\quad \text{s.t. }\mathcal{A}\bm{x} = \bm{b}.
\end{equation}
The solution of the optimization problem~\eqref{eq:basispursuit} recovers the original signal $\bm{x}$ exactly under certain conditions on the measurement matrix $\mathcal{A}$ and sparsity level
$M$~\cite{ Candes2006, Chen2001, Elad2002, Feuer2003}.
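For moderate problem sizes, \eqref{eq:basispursuit} can be solved as a linear program via the standard splitting $\bm{x}=\bm{u}-\bm{v}$ with $\bm{u},\bm{v}\ge 0$. The following is a hedged sketch for real-valued data using \texttt{scipy}, not an implementation from the works cited above:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to A x = b (real-valued case) as an LP.

    Standard reformulation: write x = u - v with u, v >= 0, so that
    ||x||_1 = sum(u) + sum(v) and the constraint becomes A u - A v = b.
    """
    N, K = A.shape
    c = np.ones(2 * K)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # equality constraint A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    u, v = res.x[:K], res.x[K:]
    return u - v
```

For instance, with $\mathcal{A}=[\bm{e}_1,\bm{e}_2,(\bm{e}_1+\bm{e}_2)/1]$ in $\mathbb{R}^2$ and $\bm{b}=\bm{e}_1+\bm{e}_2$, the sparsest representation uses only the third column.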
In the presence of noise, exact recovery is no longer possible. However, again under certain assumptions on $\mathcal{A}$ and $\bm{x}$, we can still recover the support of $\bm{x}$ with
only the knowledge of $\mathcal{A}$ and $\bm{b}$. The idea is that since $\bm{x}$ is sparse, only a few columns of $\mathcal{A}$ are used to produce $\bm{b}$. Then we can effectively detect which columns are best to approximate $\bm{b}$.
A popular approach is a modification of \eqref{eq:basispursuit}, the Basis Pursuit Denoising (BPDN)~\cite{Chen1994} or the Least Absolute Shrinkage and Selection Operator (LASSO)~\cite{Tibshirani1996}.
It is an $l_2$-optimization method that promotes sparsity by penalizing the $l_1$-norm:
\begin{equation}\label{eq:lasso}
\text{Find }\bm{x}_{\lambda}\in \argmin\left(\frac{1}{2}\|\bm{b}-\mathcal{A}\bm{x}\|_2^2+\lambda\|\bm{x}\|_1 \right),\quad \lambda \ge 0.
\end{equation}
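For a given $\lambda$, a minimizer of \eqref{eq:lasso} can be computed by iterative soft thresholding \cite{Daubechies2004}. A minimal real-valued numpy sketch follows; the step size $1/L$ and the iteration count are standard illustrative choices, not taken from the text:

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft thresholding (ISTA) for the BPDN/LASSO objective
    0.5 * ||b - A x||_2^2 + lam * ||x||_1   (real-valued case)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (b - A @ x) / L        # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x
```

When $\mathcal{A}$ is the identity, the iteration reduces to a single soft-thresholding of $\bm{b}$ at level $\lambda$, which illustrates how larger $\lambda$ zeroes out more entries.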
Convexity ensures that there is
always a solution to~\eqref{eq:lasso}. The tuning/penalty parameter $\lambda$ is appropriately chosen to obtain the desired properties of the minimizer $\bm{x}_{\lambda}$.
As $\lambda$ increases, BPDN chooses a minimizer with fewer nonzero entries. On the other hand, a smaller $\lambda$ produces a solution closer to the least-squares
approximation. Given the level of noise one can choose $\lambda$ optimally so that the support of $\bm{x}_{\lambda}$ recovers as much of the support of the true solution
$\bm{x}$ as possible; see, for example, \cite{Fuchs2005,Tropp2006,Wainwright2009}. These guarantees depend on knowledge of the variance
of the noise $\bm{e}$ and also on the choice of $\lambda$. Ways to choose $\lambda$ include \textit{cross-validation} \cite{Tibshirani1996} and \textit{adaptive} selection \cite{Zou2006, Chichignoud2016}.
These methods may be computationally expensive and/or require additional estimates of, for example, the level of noise.
Therefore, there has been some research on designing algorithms with parameters that are independent of the level of noise. These algorithms may require other constraints; for example, the work of~\cite{Laska2009} assumes the noise to be sparse,
or the work of~\cite{Kueng2018} requires non-negativity of the signal.
We discuss here two algorithms that do not have such restrictions and are directly related to our work.
The first work is \textit{Square-Root LASSO} \cite{BELLONI2011} which is the following optimization problem
\begin{equation}\label{eq:sqrtlasso}
\text{Find }\bm{x}_{\lambda}\in \argmin\left(\|\bm{b}-\mathcal{A}\bm{x}\|_2+\lambda\|\bm{x}\|_1 \right),\quad \lambda \ge 0.
\end{equation}
The functionals in~\eqref{eq:sqrtlasso} and in~\eqref{eq:lasso}
differ in the exponent of the $l_2$-norm. This difference allows one to choose $\lambda$ independently of the noise level in the Square-Root LASSO approach~\cite{BELLONI2011}.
The authors called their method pivotal with respect to $\lambda$, because they do not need to know the level of noise to choose $\lambda$. Our algorithm is a greedy implementation
of~\eqref{eq:sqrtlasso}.
The second method is the \textit{Noise Collector}~\cite{Moscoso2020,Moscoso2020a}. The Noise Collector is the Basis pursuit applied to the following augmented linear system
\begin{equation}\label{eq:nc}
\text{Find $(\bm{x}_{\tau},\bm{\eta}_{\tau})\in \argmin (\tau\|\bm{x}\|_1+\|\bm{\eta}\|_1)$, subject to $\mathcal{A} \bm{x}+\mathcal{C} \bm{\eta} = \bm{b}$,}
\end{equation}
where $\mathcal{C}$ is the noise collector matrix, and $\tau$ is a tuning parameter.
If the columns of $\mathcal{C}$ are drawn at random and the parameter $\tau$ is chosen appropriately, then the noise collector will ``absorb'' \textit{all} the noise:
$\mathcal{C} \bm{\eta} \approx \bm{e}$ (and some signal) for any level of noise~\cite{Moscoso2020a}. Furthermore,
the Noise Collector method is also pivotal with respect to $\tau$. Our proofs are inspired by the proofs for the Noise Collector in~\cite{Moscoso2020a}.
The objective of this paper is to
propose a \textbf{Thresholding Greedy Pursuit} (TGP), a fast algorithm that finds solutions of~\eqref{eq:sqrtlasso}.
\begin{figure*}[ht] \centering
\captionsetup{width=.8\linewidth}
\subfloat{\includegraphics[width=0.5\textwidth]{sparse.eps}}
\caption{An illustration of sparse recovery problems: the system is underdetermined, as $\mathcal{A}$ has more columns than rows; $\bm{x}$ has several nonzero (blue) entries, while $\mathcal{A}$ and $\bm{b}$ are known and, in general, random.}
\end{figure*}
Similar to Noise Collector and Square-Root LASSO, TGP does not have parameters
that depend on the level of noise.
TGP is a \textbf{Greedy Pursuit}. Greedy Pursuits are an important category of sparse recovery algorithms.
Their idea is to build up an approximation one step at a time by making locally optimal choices at each step.
The representatives are Orthogonal Matching Pursuit (OMP)~\cite{Tropp2007} and Compressive Sampling Matching Pursuit (CoSaMP)~\cite{Needell2009}.
The advantages of these algorithms are speed and sampling efficiency.
TGP is a modification of CoSaMP. The first difference between CoSaMP and TGP
lies in the greedy selection criteria: CoSaMP chooses the largest entries of the proxy signal, TGP chooses all the entries that are above a certain threshold.
The second difference is in the choice of parameters. CoSaMP needs to know the sparsity level to perform optimally. TGP does not need it.
The paper is organized as follows. In Section \ref{section1}, we introduce our main results regarding the Thresholding Greedy Pursuit; we also give an overview of other sparse recovery algorithms. In Section \ref{section2}, we explain how the algorithm works and compare performance with its predecessor, CoSaMP. Finally, in Section \ref{section3}, we prove theoretical results about TGP and discuss future directions.
\subsection{Notations}
For any tall full-rank matrix $A$, denote its pseudoinverse $A^{\dag}:=(A^*A)^{-1}A^*$. For any set of indices $S$, denote $\mathcal{A}_S$ as the matrix with columns of $\mathcal{A}$ drawn from the set $S$.
Note that the matrix operator $\mathcal{A}_{S}\mathcal{A}_{S}^{\dag}$ represents the orthogonal projection onto the vector space spanned by columns of $\mathcal{A}$ indexed by the set $S$.
We also denote $\supp(\bm{x})$ as the set of indices of nonzero entries of a vector $\bm{x}$, and call that the support of this vector.
Each column of $\mathcal{A}$ is normalized to have unit $l_2$-norm.
Bold letters $\bm{a},\bm{b},\ldots$ are reserved for column vectors.
The length of the signal is $K=N^{\gamma}$ for some $\gamma \ge 1$.
The signal vector $\bm{x}$ is expressed as $\bm{x}:= (x_1,\ldots,x_K)$. For any set of indices $S$, denote the restriction to $S$ of $\bm{x}$ as $\bm{x}_S:=(x_k)_{k\in S}$.
We use $\| \bm{x} \|_p$ to denote the $l_p$-norm of a vector $\bm{x}$ and $\|A\|:=\max\limits_{\|\bm{x}\|_2=1}\|A\bm{x}\|_2$ to denote the operator norm of $A$.
The identity matrix is denoted by $I$ whose dimension can vary. For any $N>0$, we use $\unif(\mathbb S^{N-1})$ to denote the uniform distribution on the $N$-dimensional unit sphere. Denote $\langle \bm{u},\bm{v}\rangle:= \bar{\bm u}^T\bm v$ the complex inner product between two complex vectors $\bm{u}$ and $\bm{v}$. We also denote $\text{Re}(x)$ as the real part of a complex number $x$.
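The pseudoinverse and projection identities above are easy to check numerically. A small numpy sketch (matrix size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))        # tall matrix, full rank with probability one

A_dag = np.linalg.inv(A.T @ A) @ A.T   # pseudoinverse A^dag = (A^* A)^{-1} A^*
P = A @ A_dag                          # A_S A_S^dag: orthogonal projector onto
                                       # the span of the columns of A
```

The matrix `P` agrees with `np.linalg.pinv`, is idempotent ($P^2 = P$), and self-adjoint, confirming that $\mathcal{A}_{S}\mathcal{A}_{S}^{\dag}$ is the orthogonal projection onto the column span.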
\newpage
\subsection{Thresholding Greedy Pursuit Algorithm}
\begin{mdframed}
\textsf{Thresholding}\textsf{ Greedy Pursuit Algorithm}
\textbf{INPUT: }\text{measurement matrix $\mathcal{A}\in \mathbb C^{N\times K}$, measurement vector $\bm{b}\in \mathbb C^N$}, thresholding parameter $\tau>0$.
\textbf{OUTPUT: }\text{a set $\Omega$ of indices of columns of $\mathcal{A}$}.
\smallskip
\quad$\bm{x}^0 \leftarrow 0, \bm{b}^0 \leftarrow \bm{b}, \Omega^0 \leftarrow \supp(\bm{x}^0)$\hfill \textbf{(Initialization)}
\smallskip
\textbf{repeat }
\smallskip
\quad$\bm{x}^{n+1} \leftarrow \dfrac{\mathcal{A}^* \bm{b}^n}{\|\bm{b}^n\|_2}$\hfill \textbf{(Proxy Signal)}
\quad$\bm{x}^{n+1} \leftarrow \max(|\bm{x}^{{n+1}}|-\tau,0)$\hfill \textbf{(Thresholding)}
\quad$\text{If $\bm{x}^{n+1}=0$, then \textbf{break};}$ \hfill\textbf{(Stopping Criterion 1)}
\quad$\Omega^{n+1}\leftarrow\Omega^n\cup\supp(\bm{x}^{n+1})$ \hfill\textbf{(Support Merging)}
\quad$\bm{b}^{n+1}\leftarrow \bm{b}-\mathcal{A}_{\Omega^{n+1}}\mathcal{A}_{\Omega^{n+1}}^{\dag}\bm{b}$ \hfill\textbf{(Complement Projection)}
\quad$\text{If $\|\bm{b}^{n+1}\|_2=0$, then \textbf{break};}$ \hfill\textbf{(Stopping Criterion 2)}
\smallskip
\textbf{until }\text{stopped}
\end{mdframed}
The Thresholding Greedy Pursuit algorithm is an iterative algorithm that produces a sequence of index sets $\Omega^1,\Omega^2,\ldots$ that try to match the support of the signal vector $\bm{x}$ by projecting the measurement
vector onto the orthogonal complements of spaces spanned by selected columns of $\mathcal{A}$. The detection at each iteration is done by a proxy step and a thresholding procedure. The output
of the algorithm is a set of indices, which we denote by $\Omega$. The role of each step is as follows.
\begin{itemize}
\item \textbf{Initialization}: The first approximation to the support is the empty set.
\item \textbf{Proxy Signal}: Sparse recovery algorithms often rely on the fact that the columns of the measurement matrix are almost orthogonal, or at least not too collinear. If that is the case, large entries of the vector $\mathcal{A}^* \bm{b}=\mathcal{A}^*(\mathcal{A} \bm{x}+\bm{e})$ are likely to match large entries of the vector $\bm{x}$. As a consequence, at each iteration, large entries of the vector $\bm{x}^{n+1}=\mathcal{A}^*\bm{b}^n/\|\bm{b}^n\|_2$ are good candidates for where the true support lies. The next step shows how to retrieve these large entries.
\item \textbf{Thresholding}: This is the heart of the algorithm. This procedure will remove all the small entries of the proxy signal that are below a certain threshold $\tau$. This thresholding parameter is chosen so that the algorithm will not produce any false discoveries (locations that are detected by the algorithm but not in the true support; in other words, those are the indices that are in $\Omega$ but not in $\supp(\bm{x})$).
\item \textbf{Stopping Criterion 1}: The algorithm will stop if nothing new is detected in the thresholding step.
\item \textbf{Support Merging}: The new index set $\Omega^{n+1}$ is created by merging the old index set $\Omega^{n}$ with the new locations in $\supp(\bm{x}^{n+1})$.
\item \textbf{Complement Projection}: We remove the part of the measurement vector $\bm{b}$ that is explained by the locations already detected. This is done by projecting $\bm{b}$ onto the orthogonal complement of the space spanned by the columns in $\mathcal{A}_{\Omega^{n+1}}$.
\item \textbf{Stopping Criterion 2}: The algorithm stops if the whole support is detected, that is, if $\bm{b}^{n+1}=\bm{b}-\mathcal{A}_{\Omega^{n+1}}\mathcal{A}_{\Omega^{n+1}}^{\dag}\bm{b}=0$, in which case $\Omega^{n+1}$ is the exact support of $\bm{x}$.
\end{itemize}
The recovered signal with the support $\Omega$ is $\mathcal{A}_{\Omega}^{\dag}\bm{b}$, which is the solution to the $l_2$-approximation problem $\min\limits_{\bm{x}}\|\mathcal{A}_{\Omega}\bm{x}-\bm{b}\|_2$. The value of $\tau$ plays an important role in the performance of the algorithm. The next sections will show how we choose this parameter, independent of knowing the strength of the noise $\bm{e}$, in order to guarantee support detection with no false discoveries.
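The algorithm box above translates almost line-for-line into numpy. The following is a sketch, not a reference implementation: a tolerance `tol` replaces the exact zero test in Stopping Criterion 2, and a least-squares solve implements the projection $\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{b}$:

```python
import numpy as np

def tgp(A, b, tau, tol=1e-10):
    """Thresholding Greedy Pursuit: returns the detected index set Omega.

    A   : (N, K) measurement matrix with unit-norm columns
    b   : (N,) nonzero measurement vector
    tau : thresholding parameter
    tol : numerical stand-in for the exact zero test in Stopping Criterion 2
    """
    K = A.shape[1]
    omega = np.array([], dtype=int)              # Omega^0 = supp(x^0) = {}
    r = b.copy()                                 # b^0 = b
    for _ in range(K):                           # surely stops much earlier
        proxy = np.abs(A.conj().T @ r) / np.linalg.norm(r)   # proxy signal
        new = np.nonzero(np.maximum(proxy - tau, 0.0))[0]    # thresholding
        if new.size == 0:                        # Stopping Criterion 1
            break
        omega = np.union1d(omega, new)           # support merging
        coef, *_ = np.linalg.lstsq(A[:, omega], b, rcond=None)
        r = b - A[:, omega] @ coef               # complement projection
        if np.linalg.norm(r) <= tol:             # Stopping Criterion 2
            break
    return set(omega.tolist())
```

For instance, with an orthonormal $\mathcal{A}$ and no noise, all support entries clear the threshold at once and the residual vanishes after a single projection.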
\subsection{Main Theorems}\label{sec:maintheorem}
We are ready to state our main results.
\begin{theorem}\label{theorem:nophantom} (\textbf{No Phantom Signal}) Let $\mathcal{A}\in \mathbb{C}^{N\times K}$, $K=N^{\gamma}$.
Suppose there is no signal, that is $\bm{x}=0$ in~\eqref{eq:sparse}, and the noise $\bm{e}$ in~\eqref{eq:sparse} is such that
$\bm{e}/\| \bm{e} \|_{\ell_2}$ is uniformly distributed on $\mathbb{S}^{N-1}$. For any $\kappa>0$ there exists $c_0=\sqrt{2(\gamma+\kappa)}$ such that for any $\tau\ge c_0\sqrt{\log N}/\sqrt{N}$
the set $\Omega$, the output of the TGP algorithm, is empty with probability $1-2/N^{\kappa}$.
\end{theorem}
The above theorem guarantees the algorithm does not recover anything if the input is pure noise. The bound $ \tau \geq c_0\sqrt{\log N}/\sqrt{N}$ comes from estimating how large the inner product of random vectors in high
dimensions could be. We follow~\cite{Vershynin2018} in obtaining this and similar estimates. The next theorem ensures a zero False Discovery Rate even when there is some signal present; that is, the algorithm does not detect any entry outside of the support of the signal. In order to guarantee the zero False Discovery Rate we need to assume incoherence of the columns of $\mathcal{A}$. Define the mutual coherence parameter
\begin{equation}\label{inco}
\mu:= \max\limits_{1\le i<j\le K}|\langle \bm{a}_i,\bm{a}_j\rangle|.
\end{equation}
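Since the columns are normalized, $\mu$ is just the largest off-diagonal modulus of the Gram matrix $\mathcal{A}^*\mathcal{A}$, which gives a one-line numerical check:

```python
import numpy as np

def mutual_coherence(A):
    """Mutual coherence of A, assuming columns normalized to unit l2-norm."""
    G = np.abs(A.conj().T @ A)     # moduli of all pairwise inner products
    np.fill_diagonal(G, 0.0)       # ignore <a_i, a_i> = 1 on the diagonal
    return G.max()
```

For example, for the columns $\bm{e}_1$, $\bm{e}_2$, $(\bm{e}_1+\bm{e}_2)/\sqrt{2}$ in $\mathbb{R}^2$, the coherence is $1/\sqrt{2}$.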
\begin{theorem}(\textbf{No False Discoveries}) \label{theorem:nofalse}
Assume $\mathcal{A}$, $\gamma$, $\kappa$, $c_0$, $\tau$, $\bm{e}$, $\Omega$ are as in the previous theorem, and $\mu$ is given by~\eqref{inco}.
Suppose $\bm{x}$ is the $M$-sparse solution of~\eqref{eq:sparse}.
If $M\le 1/(4\mu)$,
then $\Omega \subset\supp(\bm{x})$ with probability $1-2/N^{\kappa}$.
\end{theorem}
The next theorem shows that, if the noise is not large, our method recovers the exact support of the signal.
\begin{theorem} (\textbf{Exact Recovery})\label{theorem:exact}
Assume $\mathcal{A}$, $\gamma$, $\kappa$, $c_0$, $\mu$, $\bm{e}$, $\Omega$ are as in the previous theorem, but
\begin{align}\label{mutualcondition}
M\le\min\left\{\frac{1}{4\mu},\frac{\sqrt{N}}{4c_0\sqrt{\log N}} \right\},
\end{align}
and
\begin{equation}\label{threspara}
\tau:= \sqrt{\frac{4}{3}\left(\frac{\mu}{4}+\frac{c_0^2\log N}{N} \right)}.
\end{equation}
If $\|\bm{e}\|_2\le 0.03 \sqrt{M} \min_{i \in \supp(\bm{x})} |x_i|$,
then $\Omega = \supp(\bm{x})$ with probability $1-2/N^{\kappa}$.
\end{theorem}
{\it Remark.} In the last theorem, the pessimistic constant $0.03 \sqrt{M}$ could be improved; see formulas~\eqref{eq:noisestrength} and~\eqref{eq:functionf}. We keep $0.03 \sqrt{M}$ for simplicity of presentation.
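For concreteness, the parameter choices \eqref{mutualcondition} and \eqref{threspara} can be evaluated directly. A small numpy sketch, where the values of $N$, $\gamma$, $\kappa$, and $\mu$ used in the example are illustrative placeholders, not taken from the text:

```python
import numpy as np

def tgp_tau(N, gamma, kappa, mu):
    """Thresholding parameter of the Exact Recovery theorem:
    c0^2 = 2*(gamma + kappa),  tau = sqrt(4/3 * (mu/4 + c0^2 log(N)/N))."""
    c0_sq = 2.0 * (gamma + kappa)
    return np.sqrt((4.0 / 3.0) * (mu / 4.0 + c0_sq * np.log(N) / N))

def max_sparsity(N, gamma, kappa, mu):
    """Largest sparsity level M allowed by condition (mutualcondition)."""
    c0 = np.sqrt(2.0 * (gamma + kappa))
    return min(1.0 / (4.0 * mu),
               np.sqrt(N) / (4.0 * c0 * np.sqrt(np.log(N))))
```

For instance, $N=1000$, $\gamma=\kappa=1$, and $\mu=0.01$ give $\tau\approx 0.20$ and a very small admissible sparsity level, reflecting how conservative the worst-case constants are.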
\subsection{Overview of Sparse Recovery Algorithms and Our Contributions}
Sparse recovery problems have found applications in the fields of compressed sensing \cite{Eldar2009}, signal denoising \cite{Gupta2016}, optical imaging \cite{Dileep2020}, machine learning \cite{Yang}, and more. The key idea that drives sparse recovery is that a high dimensional sparse signal can be inferred from a few linear observations. An intensive survey of many sparse recovery algorithms is presented in \cite{CrespoMarques2019}. They typically fall into three categories: \textbf{Combinatorial Algorithms}, \textbf{Convex Relaxation}, and \textbf{Greedy Pursuits}.
The first category requires a large number of structured samples of the signal for reconstruction via group testing. This includes Fourier sampling \cite{Gilbert2002,Gilbert2003}, chaining pursuit \cite{Gilbert2006} and HHS pursuit \cite{Gilbert2007}. The algorithms in this group are extremely fast but demand a huge number of samples that are not easy to obtain.
The second class, Convex Relaxation, mostly deals with the BPDN problem \cite{Chen1994,Tibshirani1996} defined by
\[
\min\left(\frac{1}{2}\|\bm{b}-\mathcal{A}\bm{x}\|_2^2+\lambda\|\bm{x}\|_1 \right).
\]
The solution to this minimization problem achieves two objectives at the same time: solving the linear algebra problem while maintaining a small $l_1$-norm. The techniques to solve the optimization problem include interior-point methods \cite{Candes2006}, projected gradient \cite{Figueiredo2007}, and iterative thresholding \cite{Daubechies2004}. These algorithms require a very small number of measurements, but they rely on choosing the parameter $\lambda$ carefully to achieve the best performance \cite{Homrighausen2018}.
Our algorithm falls into the last class of being ``greedy." Examples include OMP \cite{Tropp2007}, stagewise OMP \cite{Donoho2012}, regularized OMP \cite{Needell2009a}, and CoSaMP \cite{Needell2009}. These methods build up
an approximation by making locally optimal choices at each iteration. Their advantages are speed and modest sampling requirements. The use of greedy algorithms in signal processing goes back to \cite{Mallat1993}, who also coined the name \textit{matching pursuit} for one of them. \cite{Gilbert2002,Gilbert2002a} developed fast algorithms of a greedy nature for sparse approximation and
established novel rigorous guarantees for greedy methods. \cite{Tropp2007} then proposed a greedy iterative algorithm called \textit{orthogonal matching pursuit} (OMP) and proved that the algorithm is effective for compressive sampling. OMP takes a form similar to our description of TGP above. While TGP uses thresholding, OMP chooses the location of the column of $\mathcal{A}$ that yields the largest inner product of the form $|\langle \bm{a}_i,\bm{b}\rangle|$. \cite{Needell2009} built upon OMP to create an algorithm called \textit{compressive sampling matching pursuit} (CoSaMP). Instead of choosing the single largest component, CoSaMP identifies many large inner products at each iteration and creates the support from those. The analysis of CoSaMP is based on an important feature of the measurement matrix called the \textit{restricted isometry property} (RIP). This property was introduced by \cite{Candes2006,Candes2006a} in their work on convex relaxation methods.
RIP quantifies how well a matrix preserves the distance between signals, and it is crucial for the measurement matrix to have such a property in order to recover sparse signals. RIP is also important in another version of OMP called \textit{regularized OMP} (ROMP), developed by \cite{Needell2010}. By improving OMP, the authors established that under RIP the algorithm can also work with noisy data.
Convergence theory of TGP could be developed if one assumes a RIP condition instead of assuming smallness of $\mu$ in~\eqref{inco}.
We chose to work with the incoherence condition \eqref{inco} because RIP is computationally harder to check.
Moreover, deterministic matrices satisfying RIP are difficult to construct. The best result in this direction is the work of~\cite{Bourgain2011}. In addition, we are motivated by sparse recovery problems in imaging applications,
where the measurement matrix may not satisfy RIP.
Our Thresholding Greedy Pursuit algorithm was inspired by the Noise Collector algorithm~\cite{Moscoso2020,Moscoso2020a}.
We realized the analysis of the Noise Collector can be applied to the Square-Root LASSO \cite{BELLONI2011} as well.
Then, Anna Gilbert suggested looking at CoSaMP and investigating whether its greedy framework could also be covered by the analysis of the Noise Collector.
The TGP algorithm is the result of these three ingredients. TGP uses the conjugate gradient to update the measurement vector.
This type of update is not new. For example, this idea was used in an algorithm of~\cite{Donoho2012} called \textit{stagewise OMP} (StOMP).
Their method of choosing the thresholding parameter $\tau$ is based on the assumption that columns of $\mathcal{A}$ are normally distributed.
To the best of our knowledge, no rigorous results are available for StOMP.
The conjugate gradient update is also used by~\cite{Yang2015a} when columns of $\mathcal{A}$ satisfy RIP. Their analysis requires the knowledge of the strength of the noise in choosing its parameter.
On the other hand, we are interested in the use of sparse recovery in imaging applications, where the measurement matrix may not have normally distributed columns or satisfy RIP.
Our contributions are the three theorems above, which show that by carefully choosing the thresholding parameter we can rigorously guarantee exact recovery even with noisy data.
\section{Ideas of the Proofs and Performance of TGP} \label{section2}
In this section, we explain the main ideas to prove the main theorems. We then compare the performance of TGP and CoSaMP in various settings.
\subsection{An Outline of the Proofs}
This outline contains the ideas for all three theorems. For simplicity of presentation, assume that we want to prove Theorem~\ref{theorem:exact}, and that
the signal is nonzero only in its first entry. We then can write the measurement vector as
\[
\bm{b} = x_1\bm{a}_1+\bm{e}.
\]
The algorithm will detect $x_1$ in the first iteration if the following inequality holds:
\begin{equation}\label{geq}
\frac{|\langle \bm{a}_1,\bm{b}\rangle|}{\|\bm{b}\|_2} > \tau.
\end{equation}
However, we also must ensure that any other column of $\mathcal{A}$ is not detected. Thus we must have
\begin{equation}\label{leq}
\frac{|\langle \bm{a}_{i},\bm{b}\rangle|}{\|\bm{b}\|_2}\le \tau,
\end{equation}
for all $i \neq 1$.
The algorithm will perform correctly if we can choose $\tau$ so that both inequalities~\eqref{geq} and~\eqref{leq} are true. The condition \eqref{mutualcondition} will play a vital role in estimating the right value of $\tau$.
We have now found $x_1$. In the next iteration we want to remove dependence on $\bm{a}_1$ by projecting the measurement vector onto the space that is orthogonal to the vector space spanned by $\bm{a}_1$.
Hence, the new measurement vector that goes into the next iteration will be
\[
\bm{b}^{1}:= \bm{b}_{\text{new}}= \bm{b} - \langle \bm{a}_1,\bm{b}\rangle \bm{a}_1 = x_1\bm{a}_1+\bm{e}-\langle \bm{a}_1,x_1\bm{a}_1+\bm{e}\rangle\bm{a}_1= \bm{e} - \langle \bm{a}_1,\bm{e}\rangle\bm{a}_1.
\]
Now, if condition~\eqref{leq} holds for all $i$, then we stop.
Note that the term $x_1\bm{a}_1$ has disappeared from the new measurement vector $\bm{b}^{1}$. The remainder is essentially the noise vector $\bm{e}$, since the expression $\langle \bm{a}_1,\bm{e}\rangle\bm{a}_1$ can be
made small in a high-dimensional space. Therefore, condition~\eqref{leq} holds at the next iteration provided that
\begin{equation}\label{leq2}
\frac{|\langle \bm{a}_{i},\bm{e} \rangle|}{\|\bm{e} \|_2}\le \tau/2 \quad \mbox{ for all } i,
\end{equation}
in which case the algorithm stops, having detected all nonzero entries of $\bm{x}$.
We will then make use of the fact that $\bm{e}/\|\bm{e}\|_2$ is uniformly distributed on the unit sphere and
therefore it is essentially Gaussian. This will give us a lower bound estimate on the level of noise that we can handle.
If there is more than one nonzero entry of $\bm{x}$, we use induction to show that at each iteration at least one of the remaining nonzero entries $x_i$
is detected by the algorithm. More specifically, we prove that at the $n$-th iteration the projected measurement vector $\bm{b}^{n}:= \bm{b}_{\text{new}}$ satisfies
\begin{equation}\label{eq:exact}
|\langle \bm{a}_{i_n},\bm{b}^n\rangle|> \tau \|\bm{b}^n\|_2
\end{equation}
for at least some $i_n$ with $x_{i_n} \neq0$. This implies that we will always detect at least one new $x_{i} \neq0$. It also implies that the algorithm will always stop after at most $M$ iterations.
The details are provided in Section \ref{section3}.
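Putting the pieces together, the greedy loop sketched above can be written in a few lines of NumPy. This is a simplified reference implementation under our own naming and design choices; in particular, we use a dense least-squares solve where the paper's implementation uses conjugate gradients:

```python
import numpy as np

def tgp(A, b, tau, max_iter=None):
    """Sketch of Thresholding Greedy Pursuit: threshold the proxy A^* b_n,
    merge newly detected indices into the support, then project b onto the
    orthogonal complement of the span of the detected columns."""
    K = A.shape[1]
    support = np.zeros(K, dtype=bool)
    b_n = b.astype(float)
    for _ in range(max_iter or K):
        norm_bn = np.linalg.norm(b_n)
        if norm_bn < 1e-9:
            break                                # residual is (numerically) zero
        new = np.abs(A.T @ b_n) > tau * norm_bn  # thresholding step
        if not np.any(new & ~support):
            break                                # nothing new detected: stop
        support |= new                           # support merging
        coef = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
        b_n = b - A[:, support] @ coef           # complement projection
    xhat = np.zeros(K)
    if support.any():
        xhat[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
    return np.flatnonzero(support), xhat
```

On incoherent inputs the loop stops on its own, without being told $M$, which mirrors the stopping behavior proved in Section \ref{section3}.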
\subsection{Comparison with CoSaMP}
In this section we compare TGP with CoSaMP, its predecessor \cite{Needell2009}. CoSaMP takes a measurement matrix $\mathcal{A} \in \mathbb C^{N\times K}$ and measurements $\bm{b}\in \mathbb C^N$ as inputs. Each of its iterations has a computational complexity of order $O(NK)$. In comparison, we detail the computational complexity of TGP at each iteration:
\begin{itemize}
\item \textbf{Proxy Signal} step takes $NK$ flops to compute $\mathcal{A}^*\bm{b}^{n}$ and $O(N)$ flops to compute $\|\bm{b}^n\|_2$.
\item \textbf{Thresholding} step takes $K$ flops to sweep through $\bm{x}^{n+1}$.
\item \textbf{Support Merging} takes no more than $O(M)$ flops.
\item Inside the \textbf{Complement Projection} step, the Conjugate Gradient (CG) solver \cite{Golub2012} takes at most $\nu\cdot 2NK+O(K)$ flops, where $\nu$ is a fixed number of iterations for CG; one application of $\mathcal{A}$ then takes an additional $NK$ flops. We note that the condition $\max\limits_{i\neq j}|\langle \bm{a}_i,\bm{a}_j\rangle|\le 1/(4M)$ implies a small condition number for the matrix $\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega}$ for any $|\Omega|\le M$, which is beneficial for the CG solver:
\[
\kappa(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega}) = \frac{\lambda_{\max}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})}{\lambda_{\min}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})}\le \frac{5/4}{3/4}= \frac{5}{3} \approx 1.667.
\]
\end{itemize}
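As an illustration, the \textbf{Complement Projection} step could be realized with SciPy's CG solver roughly as follows; the sizes and the cap on CG iterations are illustrative choices of ours:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)
N, M = 400, 5
A_omega = rng.standard_normal((N, M))        # detected columns A_Omega
A_omega /= np.linalg.norm(A_omega, axis=0)
b = rng.standard_normal(N)

# Gram matrix A_Omega^* A_Omega: well conditioned under incoherence,
# so a handful of CG iterations suffices.
G = A_omega.T @ A_omega
y, info = cg(G, A_omega.T @ b, maxiter=10)

# Projection of b onto the orthogonal complement of the span of A_Omega.
b_proj = b - A_omega @ y
```

After convergence, the residual `b_proj` is (numerically) orthogonal to every detected column, which is exactly what the next TGP iteration requires.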
Overall, the operation count for each iteration of TGP amounts to $(2\nu+2)NK+O(K)$. We note that CoSaMP requires the user to specify how many iterations it will run; in particular, CoSaMP needs an estimate of the sparsity level $M$ to be efficient. In contrast, TGP is fully automatic and is guaranteed to stop after at most $M$ iterations, without even knowing $M$. Numerical experiments indicate that TGP often stops after at most 2 iterations while still achieving good performance.
The settings for the experiments are as follows. We generate a measurement matrix $\mathcal{A}\in \mathbb R^{1600\times 3200}$ whose entries are standard Gaussian random variables, and an $M$-sparse vector $\bm{x}\in\mathbb R^{3200}$ whose nonzero entries are randomly generated as $1+\chi$ where $\chi \sim N(0,1)$. We then input $\mathcal{A}$ and the noisy measurements $\bm{b} = \mathcal{A}\bm{x}+\bm{e}$ into both algorithms. We run both TGP and CoSaMP on the same data at different relative levels of noise $\delta = \|\bm{e}\|_2/\|\mathcal{A}\bm{x}\|_2$: $\delta=0$ (noiseless), $\delta=0.5$ (moderate noise), and $\delta=1$ (high noise). We vary $M$ from $1$ to $10$. For CoSaMP, at each $M$, we run the algorithm for $M$ iterations. For each $M$, we repeat the experiment 20 times by regenerating $\bm{x}$ and $\bm{e}$. We want our sparse recovery algorithm to be fast, to recover most of the support (or the whole support if the noise is not too large), and to have zero false discoveries. Therefore, we
measure the following parameters:
\begin{itemize}
\item \textit{Recovery Time}: the time it takes the algorithm to complete, measured in seconds (less is better).
\item \textit{Recovered Support:} the number of locations that are detected (more is better).
\item \textit{False Discoveries}: the number of locations that are detected but not in the true support (less is better).
\end{itemize}
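Given the true support and the detected support, the last two quantities reduce to simple set operations, e.g.:

```python
def support_metrics(true_support, detected):
    """Recovered-support size and number of false discoveries."""
    true_set, det_set = set(true_support), set(detected)
    recovered = len(true_set & det_set)   # detected AND in the true support
    false_disc = len(det_set - true_set)  # detected but NOT in the true support
    return recovered, false_disc
```

For example, `support_metrics([3, 7, 42], [3, 42, 99])` returns `(2, 1)`: two true locations found and one false discovery.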
The resulting numbers are recorded and averaged over the 20 runs for each algorithm. The plots for these numbers are presented in Figures \ref{fig:gauss0}, \ref{fig:gauss05}, and \ref{fig:gauss1}, with red lines for TGP and green lines for CoSaMP. The distinction between the two algorithms is clear. In every scenario, TGP runs faster than CoSaMP. In the noiseless case ($\delta=0$), TGP recovers the support exactly even at a high sparsity level. In the noisy case ($\delta>0$), even at high sparsity levels, CoSaMP starts having false discoveries while TGP does not and detects more of the support.
\begin{figure}[ht] \centering
\captionsetup{width=.8\linewidth}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta0Time.eps}} \subfloat{\includegraphics[width=0.333\textwidth]{N1600delta0Support.eps}}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta0False.eps}}
\caption{$\mathcal{A}$ is a Gaussian matrix, noise level is $\delta=0$.}
\label{fig:gauss0}
\end{figure}
\begin{figure}[ht] \centering
\captionsetup{width=.8\linewidth}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta05Time.eps}} \subfloat{\includegraphics[width=0.333\textwidth]{N1600delta05Support.eps}}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta05False.eps}}
\caption{$\mathcal{A}$ is a Gaussian matrix, noise level is $\delta=0.5$. }
\label{fig:gauss05}
\end{figure}
\newpage
\begin{figure}[ht] \centering
\captionsetup{width=.8\linewidth}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta1Time.eps}} \subfloat{\includegraphics[width=0.333\textwidth]{N1600delta1Support.eps}}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta1False.eps}}
\caption{$\mathcal{A}$ is a Gaussian matrix, noise level is $\delta=1$. }
\label{fig:gauss1}
\end{figure}
We achieve optimal results for TGP when we calibrate $\tau$ numerically to be the smallest constant so that Theorem \ref{theorem:nophantom} holds. That is, we find numerically the smallest $\tau$ such that the algorithm outputs the empty set
when it is fed with pure noise measurements. We remark that choosing such a $\tau$ does not require estimating the strength of the noise $\|\bm{e}\|_2$. We run TGP with the input $\bm{b}\in \mathbb R^{N}$ a Gaussian vector, and vary $\tau$ from 0 to 1 with an increment of 0.003 until the algorithm outputs only the empty set. For each $\tau$, we repeat the experiment 50 times by regenerating $\bm{b}$ and record the rate of success ($\text{the number of successes}/50$), where a run counts as a success if the algorithm outputs the empty support. We obtain the transition diagram for $\tau$ in Figure \ref{tautuning} (\textit{left}).
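In code, this calibration loop amounts to the following sketch; we use smaller sizes and fewer trials than in the actual experiments, for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, trials = 400, 800, 20

A = rng.standard_normal((N, K))
A /= np.linalg.norm(A, axis=0)

def success_rate(tau):
    """Fraction of pure-noise inputs on which nothing passes the threshold."""
    ok = 0
    for _ in range(trials):
        b = rng.standard_normal(N)   # pure noise measurement
        if not np.any(np.abs(A.T @ b) > tau * np.linalg.norm(b)):
            ok += 1
    return ok / trials

tau = 0.0
while success_rate(tau) < 1.0:       # raise tau until no phantom signal appears
    tau += 0.003
```

Note that the loop only looks at the first thresholding step on pure noise, so no knowledge of the noise strength (or of the signal) is needed, as remarked above.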
\begin{figure}[ht] \centering
\captionsetup{width=.8\linewidth}
\subfloat[$\mathcal{A}$ is a Gaussian matrix.]{\includegraphics[width=0.45\textwidth]{tautuninggaussian.eps}}
\subfloat[$\mathcal{A}$ is a Partial Fourier matrix]{\includegraphics[width=0.45\textwidth]{tautuningdft.eps}}
\caption{Transition diagrams of $\tau$ for the No Phantom Signal test. The abscissa is $\tau$ and the ordinate is the rate of success.}
\label{tautuning}
\end{figure}
We see that starting from $\tau=0.124$, the rate of success remains 1; that is, TGP outputs the empty set in all 50 experiments. Therefore, we choose $\tau=0.124$ as the thresholding parameter for TGP in the above experiments.
We also run the same experiments for the case of Partial Fourier matrix where $\mathcal{A}$ is a uniformly random set of $N=1600$ rows drawn from the $K \times K$ ($K=3200$) unitary discrete Fourier transform (DFT). In these experiments, we choose $\tau=0.086$ by calibrating the parameter using the same procedure as in the Gaussian case. The transition diagram for $\tau$, in this case, is in Figure \ref{tautuning} (\textit{right}). The performance plots are presented in Figures \ref{fig:fdt0}, \ref{fig:fdt05}, \ref{fig:fdt1}. TGP recovers slightly less of the support than CoSaMP but has no false discoveries even at high levels of sparsity and noise.
\begin{figure}[ht] \centering
\captionsetup{width=.8\linewidth}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta0Timedft.eps}} \subfloat{\includegraphics[width=0.333\textwidth]{N1600delta0Supportdft.eps}}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta0Falsedft.eps}}
\caption{$\mathcal{A}$ is a Partial Fourier matrix, noise level is $\delta=0$. }
\label{fig:fdt0}
\end{figure}
\begin{figure}[ht] \centering
\captionsetup{width=.8\linewidth}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta05Timedft.eps}} \subfloat{\includegraphics[width=0.333\textwidth]{N1600delta05Supportdft.eps}}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta05Falsedft.eps}}
\caption{$\mathcal{A}$ is a Partial Fourier matrix, noise level is $\delta=0.5$. }
\label{fig:fdt05}
\end{figure}
\newpage
\begin{figure}[ht] \centering
\captionsetup{width=.8\linewidth}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta1Timedft.eps}} \subfloat{\includegraphics[width=0.333\textwidth]{N1600delta1Supportdft.eps}}
\subfloat{\includegraphics[width=0.333\textwidth]{N1600delta1Falsedft.eps}}
\caption{$\mathcal{A}$ is a Partial Fourier matrix, noise level is $\delta=1$. }
\label{fig:fdt1}
\end{figure}
\section{Proofs of Main Results}\label{section3}
In this section, we prove the main theorems introduced in Section \ref{sec:maintheorem}. We then end with a discussion of related questions and future directions.
\subsection{No Phantom Signal}
The proof of Theorem \ref{theorem:nophantom} is as follows.
\begin{proof}
If we show that
\begin{equation}\label{thm1_1}
\max\limits_{1\le i\le K}|\langle \bm{a}_i,\bm{e}\rangle|\le \tau \|\bm{e}\|_2
\end{equation}
holds with probability $1-2/N^{\kappa}$, then after the \textbf{Thresholding} step, $\bm{x}^{1}$ is always the zero vector. Without loss of generality, we assume $\|\bm{e}\|_2=1$. By independence, we have that $\mathbb{P}\left(|\langle \bm{a}_i,\bm{e}\rangle|\ge t/\sqrt{N} \right)\le 2\exp(-t^2/2)$ for each $\bm{a}_i$. Here, we make use of the fact that uniformly distributed vectors in high dimension behave like Gaussian vectors \cite{Vershynin2018}. Consequently, the union bound gives
\[
\mathbb{P}\left(\max\limits_{1\le i\le K}|\langle \bm{a}_i,\bm{e}\rangle|\ge t/\sqrt{N}\right)\le 2K\exp(-t^2/2)\le 2N^{\gamma}\exp(-t^2/2).
\]
For $t=c_0\sqrt{\log N}$, the right-hand side becomes $2N^{\gamma-c_0^2/2}$. Therefore,
\begin{equation}
\mathbb{P}\left( \max\limits_{1\le i\le K}|\langle \bm{a}_i,\bm{e}\rangle|\le c_0\frac{\sqrt{\log N}}{\sqrt{N}}\right)\ge 1-\frac{2}{N^{c_0^2/2-\gamma}}\label{eq:rotational}.
\end{equation}
Choosing $c_0=\sqrt{2(\gamma+\kappa)}$, we deduce that inequality~\eqref{thm1_1} holds for $\tau\ge c_0\sqrt{\log N}/\sqrt{N}$.
\end{proof}
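A quick Monte-Carlo check of the scale in \eqref{eq:rotational}, using the illustrative constant $c_0=3$ and sizes of our choosing:

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 1000, 2000

A = rng.standard_normal((N, K))
A /= np.linalg.norm(A, axis=0)

e = rng.standard_normal(N)
e /= np.linalg.norm(e)               # unit-norm, rotation-invariant noise

max_corr = np.max(np.abs(A.T @ e))
bound = 3.0 * np.sqrt(np.log(N) / N) # c0 * sqrt(log N / N) with c0 = 3
```

Even maximized over $K=2N$ columns, the correlation with the noise stays below the $c_0\sqrt{\log N}/\sqrt{N}$ bound with room to spare, which is the phenomenon the proof exploits.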
\subsection{No False Discoveries}
The proof of Theorem \ref{theorem:nofalse} is as follows.
\begin{proof} Consider the event
\[
\mathcal{O} = \left\{ \max\limits_{1\le i\le K}|\langle \bm{a}_i,\bm{e}\rangle|\le c_0\frac{{\sqrt{\log N}}}{\sqrt{N}} \|\bm{e}\|_2 \right\}.
\]
According to \eqref{eq:rotational}, $\mathcal{O}$ holds with probability $1-2/N^{\kappa}$. Suppose the event $\mathcal{O}$ occurs; the following analysis is deterministic on this event.
Without loss of generality, let us assume that $\bm{b} = \sum_{i=1}^Mx_i\bm{a}_i+\bm{e}$. Our proof is by induction on the number of iterations. Consider the first iteration. We want to show that
\[
|\langle \bm{a}_j, \bm{b}\rangle| \le \tau \|\bm{b}\|_2, \mbox{ for all } j \not\in \supp(\bm{x}).
\]
Pick $j \not\in \supp(\bm{x})$, and let $P = [\bm{a}_1,\ldots,\bm{a}_M,\bm{e}/\|\bm{e}\|_2]$, that is, the matrix whose columns are $\bm{a}_1$, $\ldots$, $\bm{a}_M$, $\bm{e}/\|\bm{e}\|_2$. It suffices to show that the length of the orthogonal projection of $\bm{a}_j$
onto the vector space spanned by columns of $P$ never exceeds $\tau$. Indeed, if that is the case, then by Cauchy-Schwarz,
\[
|\langle \bm{a}_j,\bm{b}\rangle| = |\langle P(P^*P)^{-1}P^*\bm{a}_j,\bm{b}\rangle|\le\|P(P^*P)^{-1}P^*\bm{a}_j\|_2\|\bm{b}\|_2\le \tau \|\bm{b}\|_2.
\]
Thus it suffices to show
\begin{equation}\label{eq:form1}
\|P(P^*P)^{-1}P^*\bm{a}_j\|_2^2 \le \tau^2 \mbox{ for all } j \not\in \supp(\bm{x}).
\end{equation}
We have that
\[
\|P(P^*P)^{-1}P^*\bm{a}_j\|_2^2 = \langle P^*\bm{a}_j,(P^*P)^{-1}P^*\bm{a}_j \rangle \le \|(P^*P)^{-1}\|\|P^*\bm{a}_j\|_2^2.
\]
Using condition $M\le 1/(4\mu)$ and $|\langle \bm{a}_j,\bm{e}/\|\bm{e}\|_2\rangle|\le c_0\sqrt{\log N}/\sqrt{N}$, we have
\[
\|P^*\bm{a}_j\|_2^2 = \sum\limits_{i=1}^M|\langle \bm{a}_j,\bm{a}_i\rangle|^2 + |\langle\bm{a}_j,\bm{e}/\|\bm{e}\|_2\rangle|^2\le M\cdot \mu^2+\frac{c_0^2\log N}{N} \le \frac{\mu}{4}+\frac{c_0^2\log N}{N}.
\]
To estimate the operator norm $\|(P^*P)^{-1}\|$, we note that $P^*P-I$ is an $(M+1)\times(M+1)$ matrix whose diagonal entries $a_{ii}$ are zero and whose off-diagonal entries $a_{ik}$ ($i\neq k$) are no bigger than $1/(4M)$ in absolute value. By the Gershgorin circle theorem
(see, for example,~\cite{Golub2012}), we obtain
\[
\|P^*P-I\| \le M\cdot \frac{1}{4M}=\frac{1}{4}.
\]
This implies that the smallest eigenvalue of $P^*P$ is at least $1-1/4=3/4$. Hence, $\|(P^*P)^{-1}\|\le 4/3$. Therefore we obtain
\begin{equation}\label{eq:form}
\|P(P^*P)^{-1}P^*\bm{a}_j\|_2^2 \le \frac{4}{3}\cdot \left( \frac{\mu}{4}+\frac{c_0^2\log N}{N} \right)=\tau^2 \mbox{ for all } j \not\in \supp(\bm{x}).
\end{equation}
We proceed to the induction step. Suppose we have already recovered $\Omega^n \subset \supp(\bm{x})$ during the previous $n$ iterations and did not have any false discoveries. We now show that in the next iteration the algorithm will not make any false discoveries.
Denote, for brevity, $\Omega=\Omega^n$, and $\bm{b}=\bm{b}^n$. Suppose that $|\Omega|=k$ where $0\le k\le M$. It suffices to show that
\[
|\langle \bm{a}_j,(I-\mathcal{A}_{\Omega}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*)\bm{b}\rangle|\le \tau \|(I-\mathcal{A}_{\Omega}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*)\bm{b}\|_2 \mbox{ for all } j \not\in \supp(\bm{x}).
\]
We have the following orthogonal decomposition
\[
\bm{a}_j = (I- P(P^*P)^{-1}P^*)\bm{a}_j+P(P^*P)^{-1}P^*\bm{a}_j \mbox{ for any } j.
\]
Pick $j \not\in \supp(\bm{x})$, and observe that $(I - P(P^*P)^{-1}P^*)\bm{a}_j$ is orthogonal to $\bm{b}$. Moreover, since the image of the orthogonal projection matrix $(I-\mathcal{A}_{\Omega}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*)$ is the vector space that is orthogonal to the space spanned by columns of $\mathcal{A}_{\Omega}$, and columns of $\mathcal{A}_{\Omega}$ are also columns of $P$, we must have that
\[
(I - P(P^*P)^{-1}P^*)\bm{a}_j \perp (I-\mathcal{A}_{\Omega}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*)\bm{b}.
\]
From this, we obtain
\[
\langle \bm{a}_j, (I-\mathcal{A}_{\Omega}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*)\bm{b}\rangle = \langle P(P^*P)^{-1}P^*\bm{a}_j, (I-\mathcal{A}_{\Omega}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*)\bm{b}\rangle.
\]
Then, by Cauchy-Schwarz inequality, we have
\begin{align*}
| \langle P(P^*P)^{-1}P^*\bm{a}_j,(I-\mathcal{A}_{\Omega}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*)\bm{b}\rangle|&\le \|P(P^*P)^{-1}P^*\bm{a}_j\|_2 \|(I-\mathcal{A}_{\Omega}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*)\bm{b}\|_2\\&\le \tau \|(I-\mathcal{A}_{\Omega}(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*)\bm{b}\|_2.
\end{align*}
The proof is complete.
\end{proof}
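The Gershgorin estimate used in the proof ($\|P^*P-I\|\le 1/4$, hence $\|(P^*P)^{-1}\|\le 4/3$) is easy to verify numerically. In high dimension, random unit columns satisfy the incoherence requirement comfortably; the sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 5, 40_000                 # large N keeps pairwise coherence below 1/(4M)

P = rng.standard_normal((N, M + 1))
P /= np.linalg.norm(P, axis=0)   # M+1 unit-norm columns, as in the proof

G = P.T @ P
mu = np.max(np.abs(G - np.eye(M + 1)))   # largest off-diagonal magnitude
eigs = np.linalg.eigvalsh(G)

# Gershgorin: every eigenvalue of G lies within M * mu of 1, so for
# mu <= 1/(4M) the spectrum stays in [3/4, 5/4].
inv_norm = 1.0 / eigs.min()              # = ||(P^*P)^{-1}|| since G is SPD
```
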
\subsection{Exact Recovery }
The proof of Theorem \ref{theorem:exact} is as follows.
\begin{proof} As in the proof of Theorem \ref{theorem:nofalse}, we condition on the event $\mathcal{O}$.
\bigskip
The objective is to demonstrate that at least one nonzero entry of $\bm{x}$ is detected at every iteration. It suffices to look at the nonzero entry of $\bm{x}$ with the largest magnitude.
Consider the $n$-th iteration, and assume $\Omega:=\Omega^n$ is strictly contained in the support of $\bm{x}$. According to Theorem \ref{theorem:nofalse}, $|\Omega|\le M$. Let $\Omega^{c}=\supp(\bm{x})\backslash \Omega$, the set of undetected indices.
Without loss of generality, assume that the first entry $x_1$ is the nonzero entry with the largest magnitude among $\{x_k, k\in\Omega^c\}$. We want to show that
\begin{equation}\label{supportineq}
|\langle \bm{a}_1,\bm{b}^{n}\rangle|>\tau \|\bm{b}^{n}\|_2,
\end{equation}
so then the index $1$ will be included in $\Omega^{n+1}$.
Decompose $\bm{b}$ into $\bm{a}_1x_1 + \mathcal{A}_{\Omega^{c}\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}}+\mathcal{A}_{\Omega}\bm{x}_{\Omega}+\bm{e}.$ Notice that, by projecting $\bm{b}$ onto the orthogonal complement of the vector space spanned by $\{\bm{a}_k:k\in \Omega\}$, we have
\[
\bm{b}^{n}=\bm{b}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{b} = \bm{a}_1 x_1+\mathcal{A}_{\Omega^c\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}} -\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}(\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c})+(\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e}).
\]
For convenience, let us now set $\bm{v}:=\mathcal{A}_{\Omega^c\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}} -\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}(\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c})+(\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e})$. We then observe that
\begin{align}
|\langle \bm{a}_1,\bm{b}^n\rangle|^2-\tau^2 \|\bm{b}^n\|_2^2&= |\langle \bm{a}_1,\bm{a}_1x_1+\bm{v}\rangle|^2- \tau^2\|\bm{a}_1x_1+\bm{v}\|_2^2\nonumber\\
&= (1-\tau^2)\left|x_1+\langle \bm{a}_1,\bm{v}\rangle\right|^2+\tau^2|\langle \bm{a}_1,\bm{v}\rangle|^2- \tau^2\|\bm{v}\|_2^2 \nonumber\\
&\ge (1-\tau^2)\left|x_1+\langle \bm{a}_1,\bm{v}\rangle\right|^2- \tau^2\|\bm{v}\|_2^2.\nonumber
\end{align}
By triangle inequality, we have $|x_1+\langle \bm{a}_1,\bm{v}\rangle|\ge |x_1|-|\langle\bm{a}_1,\bm{v}\rangle|$. Therefore, it suffices to show that
\begin{equation}\label{eq:noiseless}
(1-\tau^2)\left(|x_1|-|\langle \bm{a}_1,\bm{v}\rangle|\right)^2>\tau^2\|\bm{v}\|_2^2.
\end{equation}
Let us estimate $|\langle \bm{a}_1,\bm{v}\rangle|$. We have that
\[
\begin{aligned}
|\langle \bm{a}_1,\bm{v}\rangle|\le |\langle \bm{a}_1,\mathcal{A}_{\Omega^c\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}}\rangle|+|\langle\bm{a}_1, \mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}(\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c})\rangle| + |\langle \bm{a}_1,\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e}\rangle|.
\end{aligned}
\]
We estimate each term on the right-hand side as follows.
\begin{itemize}
\item For $|\langle \bm{a}_1,\mathcal{A}_{\Omega^c\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}}\rangle|$:
By using condition \eqref{mutualcondition} that $\max\limits_{i\neq j}|\langle \bm{a}_i,\bm{a}_j\rangle|\le 1/(4M)$ and knowing $|x_1|$ being the largest among $\{|x_k|,k\in \Omega ^c\}$, we have that
\[|\langle \bm{a}_1,\mathcal{A}_{\Omega^c\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}}\rangle|\le \frac{(M-1)|x_1|}{4M} <\frac{|x_1|}{4}.\]
\item For $|\langle\bm{a}_1, \mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}(\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c})\rangle| $:
We have that
\[
\begin{aligned}
|\langle \bm{a}_1,\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}(\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c})\rangle|&=|\langle \mathcal{A}_{\Omega}^*\bm{a}_1,(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c}\rangle|\\
&\le \|\mathcal{A}_{\Omega}^*\bm{a}_1\|_2\|(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\|\|\mathcal{A}^*_{\Omega}\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c}\|_2.\\
\end{aligned}
\]
We will estimate each term in the product. The first term is as follows
\[
\|\mathcal{A}_{\Omega}^*\bm{a}_1\|_2=\sqrt{\sum\limits_{k\in \Omega}|\langle \bm{a}_k,\bm{a}_1\rangle|^2}\le \sqrt{M\cdot\frac{1}{16M^2}}=\frac{1}{4\sqrt{M}}.
\]
The second term is estimated by using the Gershgorin circle theorem, which yields $\|(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\|\le 4/3$. For the third term, by using incoherence again, each entry of the vector $\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c}$ is at most, in absolute value,
\[
\frac{1}{4M}(|x_1|+\ldots+|x_{|\Omega^c|}|)\le \frac{1}{4M}\cdot M|x_1|=\frac{|x_1|}{4}.
\]
Therefore, we have that
\begin{equation}
\|\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c}\|_2\le \frac{\sqrt{M}|x_1|}{4}.\label{eq:estimate}
\end{equation} Overall, we obtain the following bound
\[
|\langle\bm{a}_1,\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}(\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c})\rangle|\le \frac{1}{4\sqrt{M}}\cdot \frac{4}{3}\cdot \frac{\sqrt{M}|x_1|}{4}=\frac{|x_1|}{12}.
\]
\item For $|\langle \bm{a}_1,\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e}\rangle|$:
We will show that
\[
|\langle \bm{a}_1,\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e}\rangle|\le \frac{4}{3}\tau\|\bm{e}\|_2.
\]
If $|\Omega| = 0$, it is true since the event $\mathcal O$ holds. We then consider the case when $1\le |\Omega|\le M$. Suppose that $\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{a}_1 = \sum_{j\in \Omega}\xi_j \bm{a}_j$. Let $j^*\in \Omega$ be the index such that $|\xi_{j^*}| = \max_{j\in \Omega}|\xi_j|=\| \bm{\xi}\|_{\infty}$. By using $\max\limits_{i\neq j}|\langle \bm{a}_i,\bm{a}_j\rangle|\le 1/(4M)$, we have that
\[
\displaystyle\frac{1}{4M}\ge \left |\left\langle \bm{a}_1,\bm{a}_{j^*}\right\rangle\right |= |\langle \mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{a}_1,\bm{a}_{j^*}\rangle|= |\langle \sum_{j\in \Omega}\xi_j \bm{a}_j,\bm{a}_{j^*}\rangle|\ge \| \bm{\xi}\|_{\infty}\left(1-\frac{|\Omega|-1}{4M} \right)\ge \frac{3}{4}\| \bm{\xi}\|_{\infty}.
\]
Therefore, $\| \bm{\xi}\|_{\infty}\le 1/(3M)$, which implies that $\| \bm{\xi}\|_{1}\le M\cdot \|\bm{\xi}\|_{\infty}\le 1/3$. Consequently, we have
\begin{align*}
|\langle \bm{a}_1,\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e}\rangle|&= |\langle (I-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag})\bm{a}_1,\bm{e}\rangle| \\
&\le |\langle\bm{a}_1,\bm{e}\rangle|+ \sum_{j\in\Omega}|\xi_j||\langle \bm{a}_j,\bm{e}\rangle|\\
&\le \tau \|\bm{e}\|_2+\|\bm{\xi}\|_1\cdot \tau \|\bm{e}\|_2\\
&\le \frac{4}{3}\tau\|\bm{e}\|_2.
\end{align*}
\end{itemize}
Combining the above estimates, we obtain
\[
|x_1|-|\langle \bm{a}_1,\bm{v}\rangle|\ge |x_1|-\frac{|x_1|}{4}-\frac{|x_1|}{12}-\frac{4}{3}\tau\|\bm{e}\|_2=\frac{2|x_1|}{3}-\frac{4}{3}\tau\|\bm{e}\|_2.
\]
Therefore, the left-hand side of \eqref{eq:noiseless} is bigger than $(1-\tau^2)(2|x_1|/3-4\tau\|\bm{e}\|_2/3)^2$. Now, we look at the right-hand side. We have that
\[
\|\bm{v}\|_2\le \|\mathcal{A}_{\Omega^c\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}}\|_2+\|\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}(\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c})\|_2+\|\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e}\|_2.
\]
We estimate each term as follows
\begin{itemize}
\item For $\|\mathcal{A}_{\Omega^c\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}}\|_2$:
We have that
\begin{align*}
\|\mathcal{A}_{\Omega^c\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}}\|_2^2& = \sum\limits_{i\in \Omega^c\backslash\{1\}} |x_i|^2+2\sum\limits_{i<j,i,j\in \Omega^c\backslash\{1\}}\text{Re} (\langle \bm{a}_i,\bm{a}_j\rangle \bar{x}_ix_j)\\
&\le (M-1)|x_1|^2+2\cdot \frac{1}{4M}\cdot \frac{(M-1)(M-2)}{2}|x_1|^2\\
&=\left(\frac{5}{4}M-\frac{7}{4}+\frac{1}{2M}\right)|x_1|^2
\end{align*}
which implies
\[
\|\mathcal{A}_{\Omega^c\backslash\{1\}}\bm{x}_{\Omega^c\backslash\{1\}}\|_2\le \sqrt{\frac{5}{4}M-\frac{7}{4}+\frac{1}{2M}}|x_1|.
\]
\item For $\|\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}(\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c})\|_2$:
We have that
\[
\begin{aligned}
\|\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}(\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c})\|_2^2&=|\langle\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c},(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c} \rangle|\\
&\le\|\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c}\|_2\|(\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega})^{-1}\|\|\mathcal{A}_{\Omega}^*\mathcal{A}_{\Omega^c}\bm{x}_{\Omega^c}\|_2\\
&\le \frac{\sqrt{M}|x_1|}{4}\cdot \frac{4}{3}\cdot\frac{\sqrt{M}|x_1|}{4}\quad (\text{from }\eqref{eq:estimate})\\
&\le \frac{M|x_1|^2}{12} . \end{aligned}
\]
\item For $\|\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e}\|_2$:
Since $\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e}$ is an orthogonal projection of $\bm{e}$, we have that
\[
\|\bm{e}-\mathcal{A}_{\Omega}\mathcal{A}_{\Omega}^{\dag}\bm{e}\|_2\le \|\bm{e}\|_2.
\]
\end{itemize}
Overall, in order to show \eqref{eq:noiseless}, it suffices to show that
\[
(1-\tau^2)\left(\frac{2|x_1|}{3}-\frac{4}{3}\tau\|\bm{e}\|_2\right)^2>\tau^2\left[\left(\sqrt{\frac{5}{4}M-\frac{7}{4}+\frac{1}{2M}}+\frac{1}{\sqrt{12}}\sqrt{M}\right)|x_1|+\|\bm{e}\|_2\right]^2,
\]
or equivalently,
\begin{equation}\label{eq:noisestrength}
\underbrace {\left[ {\frac{2}{3}\sqrt {\frac{{1 - {\tau ^2}}}{{{\tau ^2}}}} - \left( {\sqrt {\frac{5}{4}M - \frac{7}{4} + \frac{1}{{2M}}} + \frac{1}{{\sqrt {12} }}\sqrt M } \right)} \right]}_F|{x_1}| > \underbrace {\left(\frac{4}{3}\sqrt {1 - {\tau ^2}} + 1\right)}_G\|\bm{e}\|_2.
\end{equation}
Define
\begin{equation}
f(M,\tau)=\frac{F}{G}. \label{eq:functionf}
\end{equation}
With $\tau \le 1/\sqrt{6M}$ (implied by \eqref{threspara} and \eqref{mutualcondition}), we note that $F>0$ for $M\ge 1$. Hence, if $\|\bm{e}\|_2 \le f(M,\tau)\min_{i\in \supp(\bm{x})}|x_i|$, then inequality \eqref{eq:noisestrength} holds. Consequently, inequality \eqref{eq:noiseless} holds, and thus \eqref{supportineq} holds. As noted at the beginning of the proof, the above deterministic analysis is conditioned on the event $\mathcal O$, which holds with probability $1-2/N^{\kappa}$. The proof is complete.
\end{proof}
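The noise-strength function $f(M,\tau)=F/G$ from \eqref{eq:noisestrength} is straightforward to evaluate directly. The sketch below checks that $F$ stays positive at the critical threshold $\tau=1/\sqrt{6M}$; the range of $M$ is an illustrative choice:

```python
import math

def noise_strength(M, tau):
    """f(M, tau) = F / G from the exact-recovery condition."""
    F = (2 / 3) * math.sqrt((1 - tau**2) / tau**2) - (
        math.sqrt(1.25 * M - 1.75 + 1 / (2 * M)) + math.sqrt(M / 12)
    )
    G = (4 / 3) * math.sqrt(1 - tau**2) + 1
    return F / G

# F > 0 for every sparsity level at the critical threshold tau = 1/sqrt(6M).
vals = [noise_strength(M, 1 / math.sqrt(6 * M)) for M in range(1, 51)]
```
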
\subsection{Discussion}
In this paper, we consider a sparse recovery algorithm named Thresholding Greedy Pursuit (TGP). The algorithm is based on CoSaMP with the addition of a thresholding procedure. The only assumption we need on the probability distribution of the noise is rotational invariance; no knowledge of its strength is needed. This assumption is reasonable in high dimension, and so we can borrow techniques from high-dimensional probability.
We analyze the performance of the TGP algorithm in the regime where $N$ tends to infinity. As mentioned in the proofs, the sparsity level $M$ is required to be of order at most $\sqrt{N}/\sqrt{\log N}$. For bigger $M$, the number of measurements $N$ needs to increase as well. In Figure \ref{performance}, we illustrate the performance of TGP for different sparsity levels $M$ and noise ratios $\|\bm{e}\|_2/\|\bm{b}_0\|_2$. In this experiment, the matrix $\mathcal{A}$ consists of normally distributed columns, and the signal $\bm{x}$ is uniformly 1 at its nonzero locations. Success in recovering the true support of the unknown corresponds to a value of one (the color yellow), and failure corresponds to a value of zero (the color blue). The small phase transition zone (the color green) contains intermediate values. The black lines are the graphs of $\sqrt{N}/\sqrt{M\log N}$, which represent the relative level of noise $\|\bm{e}\|_2/\|\bm{b}_0\|_2$ the algorithm can sustain. This can be seen from \eqref{eq:functionf}: when $x_1=1$, we get $\|\bm{e}\|_2\lesssim \sqrt{N}/\sqrt{\log N}$ since $\tau \approx \sqrt{\log N}/\sqrt{N}$.
\begin{figure*}[ht] \centering
\captionsetup{width=0.8\linewidth}
\subfloat{\includegraphics[width=0.333\textwidth]{phase400.eps}} \subfloat{\includegraphics[width=0.333\textwidth]{phase630.eps}}
\subfloat{\includegraphics[width=0.333\textwidth]{phase1000.eps}}
\caption{Algorithm performance for exact support recovery. Success corresponds to a value of one (yellow), and failure corresponds to a value of zero (blue). The small phase transition zone (green) contains intermediate values. The black lines are the theoretical estimate $\sqrt{N}/\sqrt{M\log N}$. Ordinate and abscissa are the sparsity $M$ and $\|\bm{e}\|_2/\|\bm{b}_0\|_2$. The data sizes are $N=400$ (left), $N=630$ (center), and $N=1000$ (right).}
\label{performance}
\end{figure*}
We also tested TGP in the case when some columns of the measurement matrix are close to collinear. In general, if a location in the support corresponds to one of those columns, TGP will return all of the columns that are nearby. This is reasonable, as the corresponding dot products will be approximately equal and the Thresholding step will identify all of them. This highlights the nature of the algorithm: the more we know about the measurement matrix $\mathcal{A}$, the better we can design the thresholding parameter $\tau$. A possible research direction is to ensure that no false negatives are present when there are clusters of columns that are close to each other.
The MATLAB codes for the TGP and for the comparison with CoSaMP are available at \url{https://github.com/randomwalk94/TGP}.
{\textbf{Acknowledgement.} We are grateful to Anna Gilbert for suggesting that we apply the conjugate gradient approach of CoSaMP to the design of an iterative algorithm to solve Square-Root LASSO.
This work was partially supported by NSF DMS-1813943 and AFOSR FA9550-20-1-0026.}
\vskip 0.2in
\bibliographystyle{alpha}
| {
"timestamp": "2021-03-23T01:38:14",
"yymm": "2103",
"arxiv_id": "2103.11893",
"language": "en",
"url": "https://arxiv.org/abs/2103.11893",
"abstract": "We study here sparse recovery problems in the presence of additive noise. We analyze a thresholding version of the CoSaMP algorithm, named Thresholding Greedy Pursuit (TGP). We demonstrate that an appropriate choice of thresholding parameter, even without the knowledge of sparsity level of the signal and strength of the noise, can result in exact recovery with no false discoveries as the dimension of the data increases to infinity.",
"subjects": "Signal Processing (eess.SP); Information Theory (cs.IT); Probability (math.PR)",
"title": "Thresholding Greedy Pursuit for Sparse Recovery Problems"
} |
https://arxiv.org/abs/2004.04593 | The Importance of Good Starting Solutions in the Minimum Sum of Squares Clustering Problem | The clustering problem has many applications in Machine Learning, Operations Research, and Statistics. We propose three algorithms to create starting solutions for improvement algorithms for this problem. We test the algorithms on 72 instances that were investigated in the literature. Forty eight of them are relatively easy to solve and we found the best known solution many times for all of them. Twenty four medium and large size instances are more challenging. We found five new best known solutions and matched the best known solution for 18 of the remaining 19 instances. | \section{Introduction}
{\color{black}We present three new approaches
that may be used to generate good starting solutions to clustering problems and then we} show that these
starting solutions vastly improve the performance of popular local searches (improvement algorithms)
used in clustering such as k-means \citep{L57,HW79,M67a}. In several benchmark cases, we were even able to improve
on
best-known results.
A set of $n$ points is given. The problem is to find $k$ cluster centers. Each point is assigned to its closest center. The objective is to minimize the total sum of squares of the distances to the closest cluster center. Let $d_{i}(X_j)$ be the Euclidean distance between given point $i$ and center of cluster $j$ located at $X_j$. The points are represented in $d$-dimensional space. The vector of unknown centers is ${\bf X}=\{X_1,\ldots,X_k\}$, and thus, the objective function to be minimized is:
\begin{equation}\label{probK}
F({\bf X})=\sum\limits_{i=1}^n \min\limits_{1\le j\le k}\left\{d_{i}^2(X_j)\right\}
\end{equation}
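For concreteness, the objective \eqref{probK} can be evaluated as follows; this is a NumPy sketch with variable names of our choosing:

```python
import numpy as np

def sse_objective(points, centers):
    """F(X): sum over the n points of the squared Euclidean distance
    to the closest of the k cluster centers."""
    # pairwise squared distances, shape (n, k)
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

pts = np.array([[0.0, 0.0], [0.0, 2.0], [10.0, 0.0]])
ctrs = np.array([[0.0, 1.0], [10.0, 0.0]])
# First two points are at squared distance 1 from (0, 1);
# the third point coincides with the second center.
```

Here `sse_objective(pts, ctrs)` returns `2.0`, the sum of the squared distances to each point's closest center.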
Recent publications on clustering problems minimizing the sum of squared distances between all given points and their cluster's center are \cite{A09}, {\color{black}\cite{AHL12},} \cite{BODX15}, \cite{PABM19} {\color{black} and \cite{GV19}.
\citet{GV19} recently suggested a complex hybrid genetic algorithm for the solution of the problem which obtains the best reported results to date. Our approach applies a multi-start local search using a set of specially designed high quality starting solutions. Despite the relative simplicity of our method, \citet{GV19} obtained a better solution in only one instance out of the 72 instances tested here. These results dramatically confirm the importance of `good' starting solutions for the minimum sum of squares clustering problem. This also corroborates the `less is more' philosophy adopted in some recent papers on heuristic design \citep[e.g., ][]{MTU16,BMT17}. In this case we find that a simple local search from diverse starting solutions of good quality can be as powerful as a sophisticated meta-heuristic based method.}
This problem has similarities to the $p$-median problem \citep{Das95,KH79med,DM15}, also called the multi-source Weber problem \citep{BHMT00,KS72}. The objective of the $p$-median problem is based on the distances rather than the squared distances and each point (or customer) has a positive weight, while in formulation (\ref{probK}) all weights are equal to 1.
\subsection{\label{widely}Widely Used Algorithms}
{\color{black}Algorithms similar to k-means include \citet{S85,L57,Forgy1965,David2007,BMV12,M67a,HW79}}.
\begin{description}
\item[{\color{black}Algorithm of Lloyd \citep{L57}:}]
The Lloyd algorithm is an iterative improvement heuristic which consists of repeating two steps, assignment and update, until there are no more changes in the assignment of points to clusters or the maximum number of iterations is reached. The initial set of $k$ centers $X=\{X_1,X_2,...,X_k\}$ and maximum number of iterations are inputs to the algorithm. The initial set of centers can be determined by random assignment of points to $k$ groups (as in the original Lloyd algorithm) or by an initialization method such as Forgy \citep{Forgy1965}, k-means++ \citep{David2007}, or scalable k-means++ \citep{BMV12}.
{\color{black}In k-means++, the starting solution consists of a subset of the original data points selected by a seeding algorithm, which spreads the initial centroids by choosing each new centroid with probability proportional to its squared distance from the nearest centroid already chosen.}
In the assignment step, the current set of centers is used to determine the Voronoi cells \citep{V08,OKSC00}, and each point $x_i$ is assigned to the nearest center in terms of the squared Euclidean distance:
\begin{equation}\label{voronoi}
V_\ell=\{x_i:d^2_i(X_\ell) \le d^2_i(X_j), \forall j, 1\le j \le k \}, \ell=1,\ldots k
\end{equation}
in such a manner that each point is assigned to exactly one Voronoi cell.
Next, in the update step, the new centers are determined by finding the centroid of points in each cell:
\begin{equation}\label{centroids}
X^\prime_\ell=\frac{1}{|V_\ell|}\sum_{x_i \in V_\ell}x_i.
\end{equation}
The two steps are repeated in the original order until there is no change in the assignment of points to clusters.
This algorithm is very similar to the location-allocation procedure (ALT) proposed by \citet{Co63,Co64} for the solution of the Euclidean p-median problem. However, for the squared Euclidean objective, the new center is given by the simple formula (\ref{centroids}), while for weighted Euclidean distances (not squared), finding the center of a cluster requires an iterative procedure such as those of \citet{W36,Dr13}, which require longer run times.
\item[{\color{black}Algorithm of MacQueen \citep{M67a}:}]
The initialization of the MacQueen algorithm is identical to Lloyd, i.e., the initial centers are used to assign points to clusters.
In the iteration phase, the MacQueen heuristic reassigns only those points that are nearer to a center different from the one to which they are currently assigned. Only the centers of the original and new clusters are recalculated after each change, which improves the efficiency of the heuristic compared to Lloyd.
The improvement step is repeated until there is no change in the assignment of points to clusters.
\item[{\color{black}Algorithm of Hartigan-Wong \citep{HW79}:}]
The initialization of the Hartigan-Wong algorithm is identical to MacQueen and Lloyd, and points are assigned to centers using the Voronoi assignment (\ref{voronoi}). However, the improvement step uses the within-group sum of squared Euclidean distances $\sum\limits_{x_i \in V_\ell} d^2_i(X_\ell)$, where $V_\ell$ is a cluster centered at $X_\ell$.
Specifically, for each previously-updated cluster $V_j$, the point $x_i \in V_j$ is reassigned to $V_\ell$ ($\ell \ne j$) if such a reassignment reduces the total within-group sum of squared distances for all clusters.
The improvement step is repeated until there is no change in the assignment of points to clusters.
\end{description}
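The assignment-update loop common to these heuristics can be sketched as follows. This is a hedged illustration of Lloyd's iteration with our own names, not the authors' implementation:

```python
def lloyd(points, centers, max_iter=100):
    """A minimal sketch of Lloyd's algorithm: repeat the assignment step
    (each point to its nearest center) and the update step (each center
    to the centroid of its cell) until assignments stop changing."""
    centers = [list(c) for c in centers]
    assign = None
    for _ in range(max_iter):
        new_assign = [
            min(range(len(centers)),
                key=lambda j: sum((pi - ci) ** 2
                                  for pi, ci in zip(p, centers[j])))
            for p in points
        ]
        if new_assign == assign:
            break  # no point changed cluster: a local optimum was reached
        assign = new_assign
        for j in range(len(centers)):
            cell = [p for p, a in zip(points, assign) if a == j]
            if cell:  # an empty cell keeps its old center
                centers[j] = [sum(col) / len(cell) for col in zip(*cell)]
    return centers, assign
```

MacQueen and Hartigan-Wong differ only in the improvement step, as described above.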
The paper is organized as follows. In Section \ref{sec2} we discuss the execution of basic operations applied in the paper. In Section \ref{sec3} the algorithms designed in this paper are described. In Section \ref{refine} a new improvement algorithm is detailed. In Section \ref{sec4} we report the results of the computational experiments, and we conclude the paper in Section \ref{sec5}.
\section{\label{sec2}Preliminary Analysis: Basic Operations}
In this paper we apply three basic operations: adding a point to a cluster, removing a point from a cluster, and combining two clusters into one. The objective function is separable across dimensions, so we show the calculation in one dimension. In $d$ dimensions the change in the value of the objective function is the sum of the changes in each dimension.
Let a cluster consist of $m$ points $x_i,i=1,\ldots,m$ with a mean {\color{black}$\bar x_m=\frac1m\sum\limits_{i=1}^mx_i$. Since $\sum\limits_{i=1}^mx_i=m\bar x_m$,}
the objective function is:
$$F_m=\sum\limits_{i=1}^m (x_i-\bar x_m)^2~=~{\color{black}\sum\limits_{i=1}^mx_i^2~-2\bar x_m\sum\limits_{i=1}^mx_i+~m\bar x_m^2~=~}\sum\limits_{i=1}^mx_i^2~-~m\bar x_m^2.$$
\subsection{Adding a point to a cluster}
\Theorem{\label{Th1}When a point $x_{m+1}$ is added to the cluster, the objective function is increased by $\frac{m}{m+1}(x_{m+1}-\bar x_m)^2$.}
\proof{
The new center is at
$$\bar x_{m+1}=\frac{m\bar x_m+x_{m+1}}{m+1}=\bar x_{m}+\frac{x_{m+1}-\bar x_m}{m+1}\equiv \bar x_m+\Delta x_m
$$
We get
\begin{eqnarray}
F_{m+1}&=&F_m+x_{m+1}^2 +m\bar x_m^2-(m+1)\bar x_{m+1}^2
\nonumber\\&=&F_m+\left[\bar x_{m}+(m+1)\Delta x_m\right]^2+m\bar x_m^2-(m+1)\left[\bar x_{m}+\Delta x_m\right]^2
=F_m+m(m+1)\left[\Delta x_m\right]^2\nonumber
\end{eqnarray}
It can be written as
\begin{equation}\label{add}
F_{m+1}=F_m+\frac{m}{m+1}(x_{m+1}-\bar x_m)^2
\end{equation}
which proves the theorem.
}
\subsection{Removing a point from a cluster}
\Theorem{\label{Th2}Suppose that $x_m$ is removed from a cluster. The reduction in the value of the objective function is: $\frac{m}{m-1}\left(x_{m}-\bar x_m\right)^2$.}
\proof{The new center is at
$$\bar x_{m-1}=\frac{m\bar x_m-x_{m}}{m-1}=\bar x_{m}-\frac{x_{m}-\bar x_m}{m-1}
$$
By equation (\ref{add}) for $m-1$
\begin{eqnarray}\label{sub}
F_{m-1}&=&F_m-\frac{m-1}{m}(x_{m}-\bar x_{m-1})^2\nonumber\\&=&F_m-\frac{m-1}{m}\left(x_{m}-\frac{m\bar x_m-x_{m}}{m-1}\right)^2
=F_m-\frac{m}{m-1}\left(x_{m}-\bar x_m\right)^2
\end{eqnarray}
which proves the theorem.
}
\subsection{Combining two clusters}
\Theorem{\label{Th3}Two clusters of sizes $m_1$ and $m_2$ with centers at $\bar x_{m_1}$ and $\bar x_{m_2}$ are combined into one cluster of size $m_1+m_2$. The increase in the value of the objective function is $\frac{m_1m_2}{m_1+m_2}\left[\bar x_{m_1}- \bar x_{m_2}\right]^2$.}
\proof{
The center of the combined cluster is at:
\begin{equation}\label{case3}
\bar x_{m_1+m_2}=\frac{m_1\bar x_{m_1}+m_2\bar x_{m_2}}{m_1+m_2}
\end{equation}
The objective function $F_{m_1+m_2}$ is:
\begin{eqnarray}
F_{m_1+m_2}&=&\sum\limits_{i=1}^{m_1+m_2}x_i^2~-~(m_1+m_2)\bar x_{m_1+m_2}^2
\nonumber\\&=&F_{m_1}+F_{m_2}+m_1\bar x_{m_1}^2+m_2\bar x_{m_2}^2-(m_1+m_2)\bar x_{m_1+m_2}^2
\nonumber\\&=&F_{m_1}+F_{m_2}+m_1\bar x_{m_1}^2+m_2\bar x_{m_2}^2-\frac{\left[m_1\bar x_{m_1}+m_2\bar x_{m_2}\right]^2}{m_1+m_2}
\nonumber\\&=&F_{m_1}+F_{m_2}+\frac{m_1m_2\bar x_{m_1}^2+m_1m_2\bar x_{m_2}^2-2m_1m_2\bar x_{m_1}\bar x_{m_2}}{m_1+m_2}
\nonumber\\&=&F_{m_1}+F_{m_2}+\frac{m_1m_2}{m_1+m_2}\left[\bar x_{m_1}- \bar x_{m_2}\right]^2\label{sum}
\end{eqnarray}
which proves the theorem.
}
\subsection{Multi-dimensional points}
The following theorem considers clustering of points in $d$ dimensions. \Theorem{\label{Th4}Once the clusters' centers and number of points in each cluster are saved in memory, the change in the value of the objective function by adding a point to a cluster, removing a point, or combining two clusters is calculated in $O(d)$.}
\proof{By Theorems \ref{Th1}-\ref{Th3} the calculation in each dimension is done in $O(1)$ and thus the $d$-dimensional calculation is done in $O(d)$ as a sum of $d$ terms.}
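The three incremental formulas are easy to check numerically against recomputation from scratch. A sketch in Python (the data points are arbitrary illustrative values; for $d$-dimensional points the per-dimension terms sum to the squared Euclidean norm):

```python
def sse(pts):
    """Within-cluster sum of squared distances to the centroid."""
    d = len(pts[0])
    centroid = [sum(p[t] for p in pts) / len(pts) for t in range(d)]
    return sum(sum((p[t] - centroid[t]) ** 2 for t in range(d)) for p in pts)

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

cluster = [(0.0, 1.0), (2.0, 3.0), (4.0, 0.0), (1.0, 5.0)]
m = len(cluster)
centroid = [sum(p[t] for p in cluster) / m for t in range(2)]

# Theorem 1: adding point x costs (m/(m+1)) * |x - centroid|^2
x_new = (6.0, 6.0)
inc = m / (m + 1) * dist2(x_new, centroid)

# Theorem 2: removing member x saves (m/(m-1)) * |x - centroid|^2
x_out = cluster[-1]
dec = m / (m - 1) * dist2(x_out, centroid)

# Theorem 3: merging clusters of sizes m1, m2 costs
# (m1*m2/(m1+m2)) * |centroid1 - centroid2|^2
other = [(10.0, 10.0), (12.0, 8.0)]
c2 = [sum(p[t] for p in other) / len(other) for t in range(2)]
comb = m * len(other) / (m + len(other)) * dist2(centroid, c2)
```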
\section{\label{sec3}Finding a Starting Solution}
We find initial sets of clusters that can serve as starting solutions for various improvement algorithms. The procedures are based on three algorithms proposed in recent papers for solving various multi-facility location problems, that can be easily extended to the clustering problem.
The first two algorithms described below can be applied without a random component yielding one starting solution. We introduce randomness into the procedures so that the algorithms can be repeated many times in a multi-start heuristic approach and add some diversification to the search. The randomness idea follows the ``Greedy Randomized Adaptive Search Procedure" (GRASP) suggested by \citet{FR95}. {\color{black}It is a greedy approach but in each iteration the move is randomly selected by some rule, rather than always selecting the best one.} For each of the first two algorithms, a different GRASP approach is used and details are provided for each in the appropriate sub-section.
\subsection{Merging \citep{BDMS12a}}
The merging algorithm is based on the START algorithm presented in \cite{DBMS13,BDMS12a} for the solution of the planar $p$-median problem, also known as the multi-source Weber problem. {\color{black} The START algorithm begins} with $n$ clusters, each consisting of one given point. We evaluate combining pairs of clusters and combine the pair which increases the value of the objective function the least, thereby reducing the number of clusters by one. The process continues until $k$ clusters remain.
In order to randomly generate starting solutions to be used in a multi-start approach we used the following GRASP approach. We randomly select a pair of clusters within a specified factor, $\alpha>1$, of the minimum increase. {\color{black} For $\alpha=1$ the move with the minimum increase is always selected. When $\alpha$ increases, more moves, which are not close to the minimum, can be selected.} To simplify the procedure we follow the approach proposed in \cite{Dr10b}. Set $\Delta$ to a large number. The list of increases in the value of the objective function is scanned. If a new best increase $\Delta$ is found, update $\Delta$, select the pair of clusters, and set $r=1$. If an increase less than $\alpha\Delta$ is found, set $r=r+1$ and replace the selected pair with probability $\frac1r$.
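The single-pass selection rule above can be sketched directly. This is a hedged illustration with our own names, assuming `increases[t]` holds the evaluated increase of candidate move $t$:

```python
import random

def grasp_select(increases, alpha=1.5, rng=random):
    """Single-pass GRASP selection: track the best increase seen so far;
    each candidate within a factor alpha of the best replaces the current
    selection with probability 1/r, where r counts such candidates since
    the last new best."""
    best = float("inf")
    chosen = None
    r = 0
    for t, delta in enumerate(increases):
        if delta < best:            # new best increase found
            best, chosen, r = delta, t, 1
        elif delta < alpha * best:  # near-best candidate
            r += 1
            if rng.random() < 1.0 / r:
                chosen = t
    return chosen
```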
The basic merging algorithm is:
\begin{enumerate}
\item $n$ clusters are defined, each containing one point. Set $m_i=1$
for $i=1,\ldots,n$.
\item Repeat the
following until the number of clusters is reduced to $k$.
\begin{enumerate}
\setlength{\itemsep}{1pt}
\item Find the pair $i<j$ for which the increase in the value of the objective function by Theorem \ref{Th3} is minimized (applying GRASP). If an increase of 0 is found, there is no need to continue the evaluation of pairs; skip to \ref{b}.
\item \label{b} Combine the selected clusters $\{i,j\}$, find the new center by {\color{black}equation} (\ref{case3}) and replace $m_i$ by $m_i+m_j$.
\item Remove clusters $i$ and $j$, and add the new cluster. The number of clusters is reduced by one.
\end{enumerate}
\end{enumerate}
The complexity of this algorithm is $O(n^3d)$. Only the number of points in each cluster and their centers need to be kept in memory; the list of points belonging to each cluster is not required for the procedure. The final result is a list of $k$ centers, and the clusters are found by assigning each point to its closest center. This configuration can serve as a starting solution to improvement algorithms such as k-means \citep{L57}, location-allocation \citep[ALT;][]{Co63,Co64}, {\color{black}its improved variant IALT \citep{BD12},} or the improvement algorithm detailed in Section \ref{sec42}.
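The basic merging loop can be sketched as follows (unoptimized and without the GRASP randomization; names are ours). It uses the Theorem \ref{Th3} increment and the weighted-mean center of equation (\ref{case3}):

```python
def merge_start(points, k):
    """Basic merging: start with n singleton clusters and repeatedly
    combine the pair whose merge increases the objective the least
    (Theorem 3), until k clusters remain.  Returns centers and sizes."""
    centers = [list(p) for p in points]
    sizes = [1] * len(points)
    while len(centers) > k:
        best_inc, bi, bj = float("inf"), -1, -1
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                d2 = sum((a - b) ** 2 for a, b in zip(centers[i], centers[j]))
                inc = sizes[i] * sizes[j] / (sizes[i] + sizes[j]) * d2
                if inc < best_inc:
                    best_inc, bi, bj = inc, i, j
        m = sizes[bi] + sizes[bj]
        centers[bi] = [(sizes[bi] * a + sizes[bj] * b) / m  # weighted mean
                       for a, b in zip(centers[bi], centers[bj])]
        sizes[bi] = m
        del centers[bj], sizes[bj]
    return centers, sizes
```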
The complexity can be reduced by storing, for each cluster, the minimum increase in the value of the objective function when it is combined with another cluster, together with the number of the cluster that yields this minimum. When two clusters are combined, the minimum increase is recalculated for all clusters related to the two merged clusters. In addition, the increase in the value of the objective function when combined with the newly-formed cluster is checked for all other clusters; if it is smaller than the minimum increase saved for a particular cluster, it replaces that cluster's stored minimum increase and partner. The number of related clusters is expected to be small; if it does not depend on $n$, the complexity is reduced to $O(n^2d)$.
The efficient merging algorithm used in the computational experiments is:
\begin{enumerate}
\item $n$ clusters are defined, each containing one demand point. Set $m_i=1$
for $i=1,\ldots,n$.
\item For each cluster, in order, calculate by Theorem \ref{Th3} the minimum increase in the value of the objective function $\Delta_i$ and the cluster $j(i)\ne i$ for which this minimum is obtained. If $\Delta_i=0$ is encountered for cluster $j$ (which must be $j>i$), combine clusters $i$ and $j$ creating a revised cluster $i$. Note that the center of the revised cluster is unchanged since $\Delta_i=0$. Replace cluster $j$ with the last cluster in the list and reduce the list of clusters by 1. Continue the procedure with the evaluation of $\Delta_i$ (from cluster \#1 upward) for the combined cluster $i$.
\item Repeat the following until the number of clusters is reduced to $k$.
\begin{enumerate}
\setlength{\itemsep}{1pt}
\item Find $i$ for which $\Delta_i$ is minimized (applying GRASP).
\item Combine clusters $i$ and $j(i)$, find the new center by {\color{black}equation} (\ref{case3}) and replace $m_i$ by $m_i+m_{j(i)}$.
\item Replace cluster $i$ by the combined cluster and remove cluster $j(i)$. The number of clusters is reduced by one.
\item For each cluster $\ell\ne i$:
\begin{enumerate}
\item Calculate the increase in the value of the objective function by combining cluster $\ell$ and the combined cluster $i$. If the increase is less than $\Delta_\ell$, update $\Delta_\ell$, set $j(\ell)=i$ and proceed to the next $\ell$.
\item If $j(\ell)=i \mbox{ or }j(i)$, recalculate $\Delta_\ell$ and $j(\ell)$.
\end{enumerate}
\item Find $\Delta_i$ and $j(i)$ for the combined cluster.
\end{enumerate}
\end{enumerate}
\subsubsection{Testing the factor $\alpha$ in GRASP}
In the computational experiments we tested ten problems using $\alpha=1.1, 1.5, 2$. Run time increases with larger values of $\alpha$ because more iterations are required by the improvement algorithm. In nine of the ten problems $\alpha=1.5$ and $\alpha=2$ provided comparable results and $\alpha=1.1$ provided somewhat poorer results. However, {\color{black}one data set (kegg)} exhibited a different behavior; for this problem $\alpha=2$ performed the worst. The comparison results are depicted in Table \ref{53}. See also Table \ref{1000} below.
\begin{table}[ht!]
\begin{center}
\caption{\label{53}Results for the {\color{black}kegg data set} by various approaches (100 runs)}
\medskip
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{|c|c||c|c|c||c|c|c||c|c|c||c|c|c|}
\hline
$k$&Best&&&&\multicolumn{3}{|c||}{Merging $\alpha=1.1$}&\multicolumn{3}{|c||}{Merging $\alpha=1.5$}&\multicolumn{3}{|c|}{Merging $\alpha=2.0$}\\
\cline{6-14}
&Known$\dagger$&(1)$^*$&(2)$^*$&(3)$^*$&$^*$&+&$\ddagger$&$^*$&+&$\ddagger$&$^*$&+&$\ddagger$\\
\hline
2& 1.13853E+09& 0.00& 0.00& 0.00& 0.00& 75& 25.1& 0.00& 24& 27.5& 0.00& 16& 30.7\\
5& 1.88367E+08& 0.00& 0.00& 0.00& 0.00& 100& 26.9& 0.00& 100& 30.5& 0.00& 96& 35.4\\
10& 6.05127E+07& 4.96& 0.00& 36.81& 0.00& 22& 33.9& 0.00& 16& 41.0& 0.00& 5& 50.9\\
15& 3.49362E+07& 0.53& 4.00& 98.23& 0.08& 1& 45.3& 0.00& 1& 56.6& 1.20& 1& 64.0\\
20& 2.47108E+07& 1.12& 13.74& 136.71& 0.03& 1& 45.8& 0.01& 1& 56.8& 1.39& 1& 66.0\\
25& 1.90899E+07& 1.27& 15.48& 190.95& 0.31& 1& 58.4& 0.53& 1& 82.6& 1.79& 1& 129.1\\
\hline
\multicolumn{14}{l}{$\dagger$ Best known solution \citep{GV19}.}\\
\multicolumn{14}{l}{$^*$ Percent above best known solution {\color{black} (relative error)}.}\\
\multicolumn{14}{l}{(1) Best solution of \citep{BODX15}.}\\
\multicolumn{14}{l}{(2) Best of the three procedures available in R using the ``++" starting solution.}\\
\multicolumn{14}{l}{(3) Best of the three procedures available in R from a random starting solution.}\\
\multicolumn{14}{l}{+ Number of times (out of 100) that the best solution found by the particular $\alpha$ was observed.}\\
\multicolumn{14}{l}{$\ddagger$ Time in seconds for one run.}\\
\end{tabular}
\end{center}
\end{table}
\subsection{Construction \citep{KGD19}}
\citet{KGD19} designed a construction algorithm to solve a clustering problem with a different objective. Rather than minimizing the sum of the squares of distances of all the points from their closest center, their objective is to minimize the sum of squares of distances between all pairs of points belonging to the same cluster. This is an appropriate objective if the points are, for example, sports teams and the clusters are divisions. All teams in a division play against the other teams in the same division, so the total travel distance by teams is minimized.
The first two phases generate a starting solution and the third phase is an improvement algorithm that can be replaced by any improvement algorithm. It can also serve as an improvement algorithm for other starting solutions.
For the ``GRASP" approach we propose to find the best move and the second best move. The best move is selected with probability $\frac23$ and the second best with probability $\frac13$. This random selection rule is indicated in the algorithms below as ``applying GRASP". If no random component is desired, the ``applying GRASP" operation should be ignored. Other GRASP approaches can be used as well.
\begin{description}
\item[Phase 1 (selecting one point for each cluster):]
~
\begin{itemize}
\item Randomly select two points. One is assigned to cluster \#1 and one to cluster \#2. (Note that we tested the selection of the farthest or second farthest two points and obtained inferior results.)
\item Select for the next cluster the point for which the minimum distance to the already selected points is the largest or second-largest (applying GRASP).
\item Continue until $k$ points define $k$ cluster centers.
\end{itemize}
\item[Phase 2 (adding additional points to clusters):]
~
\begin{itemize}
\item Check all unassigned points for addition to each of the clusters. Select the point whose addition to one of the clusters increases the objective function the least or second-least (applying GRASP) and add this point to that cluster.
\item Keep adding points to clusters until all the points are assigned to a cluster.
\end{itemize}
\item[Phase 3 (a descent algorithm):]
~
\begin{enumerate}
\item \label{st31} Evaluate all combinations of moving a point from their assigned cluster to another one.
\item If an improvement is found, perform the best move and go to Step \ref{st31}.
\item If no improvement is found, stop.
\end{enumerate}
\end{description}
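Phase 1 can be sketched as follows. This is a hedged illustration with our own names; the 2/3--1/3 rule implements the ``applying GRASP" selection described above:

```python
import random

def phase1_seeds(points, k, rng=random):
    """Phase 1 sketch: pick two random points, then repeatedly add the
    point whose minimum distance to the already selected seeds is the
    largest (probability 2/3) or second-largest (probability 1/3)."""
    idx = rng.sample(range(len(points)), 2)
    while len(idx) < k:
        cand = [i for i in range(len(points)) if i not in idx]

        def min_d2(i):
            # minimum squared distance from candidate i to the chosen seeds
            return min(sum((a - b) ** 2 for a, b in zip(points[i], points[s]))
                       for s in idx)

        cand.sort(key=min_d2)
        pick = cand[-1] if len(cand) == 1 or rng.random() < 2 / 3 else cand[-2]
        idx.append(pick)
    return [points[i] for i in idx]
```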
Phase 3 is similar to the improvement procedure in {\color{black} \citet{HW79}}.
The complexity of Phase 1 is $O(nk^2d)$. Phase 2 is repeated $n-k$ times and each time is of complexity $O(nkd)$ once we save for each cluster $j$, its center and its individual objective. The complexity of the first two phases (finding the starting solution) is $O(n^2kd)$ because evaluating the value of the objective function is of complexity $O(d)$ by Theorem~\ref{Th4}. Phase 3 is an improvement algorithm and can be replaced by other improvement algorithms, such as k-means. The complexity of each iteration is $O(nkd)$ by Theorem \ref{Th4}. In our computational experiments we used the procedure proposed in Section \ref{sec42} which includes Phase~3.
There are similarities between Phase 1 of the construction algorithm and the k-means++ algorithm \citep{David2007}. While Phase 1 selects the next seed from the farthest and second-farthest points, k-means++ selects a point at random among all points, with probability proportional to its squared distance to the closest cluster. k-means++ then applies an improvement algorithm, while our proposed construction algorithm has two additional phases.
\subsection{Separation \citep{BD19}}
This algorithm finds a starting solution by solving many smaller clustering problems.
Suppose that $n$ points exist in an area and $k$ cluster centers need to be found. We select $q<k$; for example, $q=\sqrt k$ rounded could work well. We then solve the problem using $q$ clusters by a heuristic or an optimal algorithm. Each of the $q$ centers has a subset of points allocated to it, and we treat these subsets as clusters. It should be noted that for two-dimensional problems there are straight lines separating the clusters as in a Voronoi diagram \citep{OKSC00}. For higher dimensions these lines are hyper-planes. This means that the plane is partitioned into polygons (polyhedra in higher dimensions) and a center is located ``near the center" of each polyhedron. All the points inside a polyhedron are closest to the center located in that polyhedron.
We then assign the $k$ centers among the $q$ polyhedra by the procedures described in \citet{BD19}. It is expected that many clusters will get about the same number of points, and hence, many sub-problems will have roughly $\frac k q$ clusters and $\frac nq$ points.
Once the allocation is complete, a solution is found and it can serve as a starting solution to a variety of heuristic algorithms.
We applied the construction algorithm for solving the small problems because smaller problems have significantly lower computation time. The merging algorithm requires similar run times for varying values of $k$ and the same $n$. Therefore, the first phase of finding $q$ clusters takes about the same time as solving the complete problem with $k$ centers, and there would be no time saving.
\subsubsection{Selecting $q$}
The complexity of the construction algorithm is $O(n^2kd)$. Assume that the time required for running the algorithm is $\beta n^2kd$ for some constant $\beta$. We first create $q$ clusters by solving the $q$-cluster problem; the time for this is $\beta n^2qd$. Then, once $q$ clusters are determined, we solve $k+q-1$ problems ($q$ of them of one cluster each, which is found in $O(nd)$ time), each having about $\frac nq$ points and up to $\frac kq$ clusters, for a total time of $k\beta\frac{n^2}{q^2}\frac kq d $. The total time is $\beta n^2qd+\beta\frac{n^2k^2}{q^3}d$. The variable term (dividing by $\beta n^2d$) is $q+\frac{k^2}{q^3}$, whose minimum is obtained for $q=\sqrt[4]{3}\sqrt k\approx 1.3\sqrt{k}$. Since many of the $k+q-1$ problems consist of fewer than $\frac k q$ clusters, this value should be rounded down to $q=\lfloor 1.3\sqrt k\rfloor$. For example, for $k=10$ we should choose $q=4$ and for $k=25$ we should choose $q=6$. Note that this procedure for selecting $q$ aims at an efficient running time of the algorithm, and not necessarily the best quality of solution.
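The rounding rule can be captured directly. A trivial helper (our name, not from the paper):

```python
import math

def select_q(k):
    """q = floor(1.3 * sqrt(k)): the rounded-down minimizer of the
    run-time model term q + k^2 / q^3."""
    return math.floor(1.3 * math.sqrt(k))
```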
\section{\label{refine}The Proposed Improvement Algorithm}
The k-means improvement procedure \citep{L57} is the same as the location-allocation improvement algorithm \citep{Co63,Co64}.
\begin{enumerate}
\item Select an initial set of $k$ centers.
\item \label{stp2} Allocate each point to its closest center forming $k$ clusters.
\item If the clusters did not change, stop. Otherwise, find the optimal location for the center of each cluster and go to Step \ref{stp2}.
\end{enumerate}
\subsection{\label{sec41}Comparing Phase 3 of the Construction Algorithm to k-means}
When k-means terminates, Phase 3 of the construction algorithm may find improved solutions. Let $d_i^2(X_j)$ be the squared Euclidean distance between point $i$ and $X_j$ (the center of cluster $j$), and let cluster $j$ have $m_j$ points. The location-allocation algorithm stops when every point is assigned to its closest center. Phase~3 may find a better solution when there exists a move of point $i$ from its closest cluster $j_1$ to another cluster $j_2$ that improves the objective function. Of course, $d_i^2(X_{j_1})<d_i^2(X_{j_2})$ because k-means terminated. However, by Theorems \ref{Th1} and \ref{Th2}, such a move improves the value of the objective function if:
\begin{equation}\label{ratio}\frac{m_{j_1}}{m_{j_1}-1}d_i^2(X_{j_1})>\frac{m_{j_2}}{m_{j_2}+1}d_i^2(X_{j_2})
~\rightarrow~d_i^2(X_{j_2})~<~\frac{1+\frac1{m_{j_2}}}{1-\frac1{m_{j_1}}}d_i^2(X_{j_1})
\end{equation}
For example, for clusters of sizes $m_1=m_2=5$, the squared distance to the center of the second cluster can be up to 50\% larger. The objective function will improve by such a move if the squared distance to the center of the second cluster is less than 1.5 times the squared distance to the center of the closest cluster.
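The move test of equation (\ref{ratio}) is cheap to check per candidate move. A minimal sketch (our names):

```python
def move_improves(d2_from, d2_to, m_from, m_to):
    """True if moving a point between clusters reduces the objective:
    removal saves m_from/(m_from-1) * d2_from (Theorem 2) while
    insertion costs m_to/(m_to+1) * d2_to (Theorem 1)."""
    return m_from / (m_from - 1) * d2_from > m_to / (m_to + 1) * d2_to

# For m_from = m_to = 5 the break-even factor is
# (1 + 1/5) / (1 - 1/5) = 1.5, as in the text.
```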
As an example, consider two squares of side 1 with vertices as the given points. The squares' close sides are $x$ units apart (see Figure \ref{two}). Two clusters are sought. There are two ``natural" cluster centers at the centers of the squares, each cluster with 4 points for an objective of 4. This is one of the possible final solutions of k-means. Each given point is closest to the center of its assigned cluster of 4 points.
\begin{figure}[ht!]
\setlength{\unitlength}{1in}
\centering \begin{picture}(3,1.2)
\put(0,0){\circle*{0.1}}
\put(1,1){\circle*{0.1}}
\put(0,1){\circle*{0.1}}
\put(1,0){\circle*{0.1}}
\put(1.366,0){\circle*{0.1}}
\put(1.366,1){\circle*{0.1}}
\put(2.366,0){\circle*{0.1}}
\put(2.366,1){\circle*{0.1}}
\put(1,1){\line(1,0){0.366}}
\put(1.13,1.05){$x$}
\put(1.366,1.1){$A$}
\put(1.366,0.1){$B$}
\put(2.366,1.1){$C$}
\put(-0.4,0){(0, 0)}
\end{picture}
\caption{The squares example. {\color{black}$x$ is the distance between the squares.}}
\label{two}
\end{figure}
The four vertices closest to the other square, for example {\color{black} vertices} $A$ and $B$, are at squared distance of $\frac14+(\frac12+x)^2=\frac12+x+x^2$ from the farther center. Suppose that in Phase 3 point $A$ is ``moved" to the cluster of the left square. The center of the left cluster is moved to $(0.6+\frac15 x,0.6)$ and the center of the right cluster is moved to $(\frac53 +x ,\frac13 )$ for a total objective of $2.4+\frac45x+\frac45x^2 +\frac43=3\frac8{15}+\frac15(1+2x)^2$. This objective is less than 4 for $x<\sqrt{\frac{7}{12}}-\frac12=0.2638$. Point $A$ is at squared distance of $\frac12$ from the right square center, and is at squared distance $\frac56$ from the left square center for the largest $x$. It is greater by a factor of $\frac53$, confirming the factor obtained by equation (\ref{ratio}) substituting $m_1=m_2=4$.

If we move point $B$ to the left cluster as well, to obtain clusters of 6 and 2 points, the ratio of the squared distances by equation (\ref{ratio}) with $m_1=3$, $m_2=5$ is $\frac95$, indicating that for some values of $x$ two clusters of 6 and 2 are even better. The objective of this configuration is $\frac{10}{3}+\frac43 (x+x^2)=3+\frac13(1+2x)^2$, which is less than 4 for $x<\frac12(\sqrt3-1)=0.366$. This configuration for the largest $x$ is depicted in Figure \ref{two}. It is better than the 5-3 configuration for $x$ as high as 0.5, practically for all values of $x$ which are of interest. The objective of adding point $C$ to the left cluster for a 7-1 configuration is greater than the 6-2 objective for any $x\ge 0$, and thus it is never optimal.
Note that there are many cluster partitions that are inferior terminal solutions of k-means for the two-squares problem. For example, the solution with clusters consisting of the top 4 points and the bottom 4 points has an objective of $4+4x+2x^2>4$, and cannot be improved by k-means.
We ran the eight-point problem with $x=0.25$ through the construction algorithm 1000 times. It found the optimal solution of $3\frac34$ from 873 of the 1000 starting solutions. For $x=0.3$ the optimal solution of $3\frac{64}{75}$ was found 856 times out of 1000, and for $x=0.35$ the optimal solution of $3\frac{289}{300}$ was found 840 times out of 1000. The algorithm was so fast that we had to run it a million times to obtain a measurable run time: about 0.86 microseconds per run.
This performance is compared in Table \ref{comp} with commonly used procedures detailed in the introduction. The merging algorithm clearly performed best and the construction algorithm was second-best. Hartigan-Wong performed better than the other commonly used procedures. Note that the four commonly used procedures started from the same 1000 starting solutions.
\begin{table}[htp!]
\begin{center}
\caption{\label{comp}Comparing results for the two squares problem}
\setlength{\tabcolsep}{7pt}
\medskip
\begin{tabular}{|l|c|c|c|}
\hline
Method&$x=0.25$&$x=0.30$&$x=0.35$\\
\cline{2-4}
&\multicolumn{3}{|c|}{Optimum$\dagger$ found in 1,000 runs}\\
\hline
Merging&1,000&1,000&1,000\\
Construction&873&856&840\\
Forgy&348&317&347\\
Hartigan-Wong&794&479&487\\
Lloyd&348&317&347\\
MacQueen&348&317&347\\
\hline
\multicolumn{4}{l}{$\dagger$ optimal value is $3+\frac13(1+2x)^2$}
\end{tabular}
\end{center}
\end{table}
It is interesting that the merging algorithm's starting solution is the optimal solution to the two-squares example. In the first two iterations the two pairs of close points (across the gap) are combined into two clusters. Then, in the next two iterations the two right points and the two left points are combined into two clusters, yielding after four iterations four clusters of two points each, forming a rhombus with centers at the midpoints of the four sides of the rectangle enclosing the two squares. In the fifth iteration the top and bottom clusters are combined, forming a cluster of 4 points at the center of the configuration, so that three centers lie on the central horizontal line. In the sixth iteration the central cluster is combined with one of the other two, forming two clusters of 6 and 2 points, which is the optimal solution.
\subsection{\label{sec42}The Improvement Algorithm Used in the Experiments}
One iteration of k-means and one iteration of Phase 3 have the same complexity of $O(nkd)$.
Since the k-means algorithm may move several points from one cluster to another in a single iteration, it requires fewer iterations and is therefore faster than Phase~3. However, as is shown in Section \ref{sec41}, Phase 3 may improve the terminal results of k-means. We therefore start by applying k-means for up to 10 iterations and then apply Phase 3 until it terminates.
\begin{table}[htp!]
\begin{center}
\caption{\label{prob}Test {\color{black}Data Sets}}
\setlength{\tabcolsep}{7pt}
\medskip
\begin{tabular}{|l||c|c|l|}
\hline
{\color{black}Name}& $n$&$d$&Source\\
\hline
\multicolumn{4}{|c|}{Small Problems}\\
\hline
{\color{black}ruspini75}&75&2&\cite{PABM19}\\
{\color{black}fisher}&150&4&\cite{BODX15,PABM19}\\
{\color{black}gr202}&202&2&\cite{PABM19}\\
{\color{black}gr666}&666&2&\cite{PABM19}\\
\hline
\multicolumn{4}{|c|}{Medium Size Problems}\\
\hline
{\color{black}tsplib1060}&1,060&2&\cite{BODX15,PABM19}\\
{\color{black}tsplib3038}&3,038&2&\cite{BODX15,PABM19}\\
{\color{black}pendigit}&10,992&16&\cite{BODX15}\\
\hline
\multicolumn{4}{|c|}{Large Problems}\\
\hline
{\color{black}letter}&20,000&16&\cite{BODX15}\\
{\color{black}kegg}&53,413&20&\cite{BODX15}\\
{\color{black}pla85900}&85,900&2&\cite{BODX15,PABM19}$\dagger$\\
\hline
\multicolumn{4}{l}{\small $\dagger$ Results in \citep{PABM19} for this problem are wrong.}\\\multicolumn{4}{l}{~~\small Only a partial data set was accidentally used.}
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht!]
\begin{center}
\caption{\label{p5}Results for Problems with $k\le 5$ by the Merging and Construction Methods}
\medskip
\setlength{\tabcolsep}{5pt}
\begin{tabular}{|c|c|c|c|c||c|c|c|c|c|}
\hline
Data&$k$&Best$\dagger$&\multicolumn{2}{|c||}{Time (sec.)$\ddagger$}&Data&$k$&Best$\dagger$&\multicolumn{2}{|c|}{Time (sec.)$\ddagger$}\\
\cline{4-5}
\cline{9-10}
Set&&Known&(1)&(2)&Set&&Known&(1)&(2)\\
\hline
{\color{black}ruspini75}& 2& 8.93378E+04& 0.0001& 0.0002& {\color{black}gr666}& 4& 6.13995E+05& 0.0067& 0.0108\\
{\color{black}ruspini75}& 3& 5.10634E+04& 0.0001& 0.0002& {\color{black}gr666}& 5& 4.85088E+05& 0.0086& 0.0108\\
{\color{black}ruspini75}& 4& 1.28810E+04& 0.0001& 0.0002& {\color{black}tsplib1060}& 2& 9.83195E+09& 0.0093& 0.0310\\
{\color{black}ruspini75}& 5& 1.01267E+04& 0.0001& 0.0002& {\color{black}tsplib1060}& 5& 3.79100E+09& 0.025& 0.032\\
{\color{black}fisher}& 2& 1.52348E+02& 0.0002& 0.0009& {\color{black}tsplib3038}& 2& 3.16880E+09& 0.079& 0.243\\
{\color{black}fisher}& 3& 7.88514E+01& 0.0003& 0.0009& {\color{black}tsplib3038}& 5& 1.19820E+09& 0.193& 0.248\\
{\color{black}fisher}& 4& 5.72285E+01& 0.0004& 0.0009& {\color{black}pendigit}& 2& 1.28119E+08& 1.51& 6.75\\
{\color{black}fisher}& 5& 4.64462E+01& 0.0005& 0.0009& {\color{black}pendigit}& 5& 7.53040E+07& 3.90& 6.59\\
{\color{black}gr202}& 2& 2.34374E+04& 0.0004& 0.0013& {\color{black}letter}& 2& 1.38189E+06& 7.38& 19.37\\
{\color{black}gr202}& 3& 1.53274E+04& 0.0005& 0.0013& {\color{black}letter}& 5& 1.07712E+06& 23.47& 21.71\\
{\color{black}gr202}& 4& 1.14556E+04& 0.0008& 0.0013& {\color{black}kegg}& 2& 1.13853E+09& 205.30& 27.47\\
{\color{black}gr202}& 5& 8.89490E+03& 0.0009& 0.0013& {\color{black}kegg}& 5& 1.88367E+08& 300.09& 30.52\\
{\color{black}gr666}& 2& 1.75401E+06& 0.0035& 0.0107& {\color{black}pla85900}& 2& 3.74908E+15& 84.51& 193.81\\
{\color{black}gr666}& 3& 7.72707E+05& 0.0052& 0.0107& {\color{black}pla85900}& 5& 1.33972E+15& 168.16& 200.15\\
\hline
\multicolumn{10}{l}{$\dagger$ Best known solution was found for all instances.}\\
\multicolumn{10}{l}{$\ddagger$ Time in seconds for one run.}\\
\multicolumn{10}{l}{(1) Run time by the construction method.}\\
\multicolumn{10}{l}{(2) Run time by the merging method using $\alpha=1.5$.}\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht!]
\begin{center}
\caption{\label{sm}Results for Small Problems with $k> 5$ (1000 runs)}
\medskip
\small
\setlength{\tabcolsep}{4pt}
\begin{tabular}{|c|c||c||c|r|c|r|c|c|c|c|}
\hline
Data&$k$&Best&\multicolumn{2}{|c|}{$\alpha=1.5\S$}&\multicolumn{2}{|c|}{$\alpha=2.0\S$}&\multicolumn{2}{|c|}{Construction}&\multicolumn{2}{|c|}{Separation}\\
\cline{4-11}
Set&&Known&$^*$&Time$\ddagger$&$^*$&Time$\ddagger$&$^*$&Time$\ddagger$&$^*$&Time$\ddagger$\\
\hline
{\color{black}ruspini75}& 6& 8.57541E+03& 0.00& 0.0002& 0.00& 0.0003& 0.00& 0.0001& 0.00& 0.0002\\
{\color{black}ruspini75}& 7& 7.12620E+03& 0.00& 0.0003& 0.00& 0.0003& 0.00& 0.0001& 0.00& 0.0002\\
{\color{black}ruspini75}& 8& 6.14964E+03& 0.00& 0.0002& 0.00& 0.0003& 0.00& 0.0002& 0.00& 0.0002\\
{\color{black}ruspini75}& 9& 5.18165E+03& 0.00& 0.0002& 0.00& 0.0003& 0.00& 0.0002& 0.00& 0.0002\\
{\color{black}ruspini75}& 10& 4.44628E+03& 0.00& 0.0003& 0.00& 0.0003& 0.00& 0.0002& 0.00& 0.0002\\
{\color{black}fisher}& 6& 3.90400E+01& 0.00& 0.0009& 0.00& 0.0012& 0.00& 0.0006& 0.00& 0.0007\\
{\color{black}fisher}& 7& 3.42982E+01& 0.00& 0.0009& 0.00& 0.0012& 0.00& 0.0006& 0.00& 0.0007\\
{\color{black}fisher}& 8& 2.99889E+01& 0.00& 0.0010& 0.00& 0.0012& 0.00& 0.0007& 0.00& 0.0007\\
{\color{black}fisher}& 9& 2.77861E+01& 0.00& 0.0010& 0.00& 0.0013& 0.00& 0.0008& 0.00& 0.0009\\
{\color{black}fisher}& 10& 2.58340E+01& 0.00& 0.0010& 0.00& 0.0013& 0.06& 0.0009& 0.00& 0.0008\\
{\color{black}gr202}& 6& 6.76488E+03& 0.00& 0.0013& 0.00& 0.0016& 0.00& 0.0011& 0.00& 0.0014\\
{\color{black}gr202}& 7& 5.81757E+03& 0.00& 0.0014& 0.00& 0.0016& 0.00& 0.0012& 0.00& 0.0015\\
{\color{black}gr202}& 8& 5.00610E+03& 0.00& 0.0014& 0.00& 0.0017& 0.03& 0.0013& 0.00& 0.0014\\
{\color{black}gr202}& 9& 4.37619E+03& 0.00& 0.0014& 0.00& 0.0017& 0.77& 0.0014& 0.00& 0.0019\\
{\color{black}gr202}& 10& 3.79449E+03& 0.00& 0.0015& 0.00& 0.0018& 0.00& 0.0016& 0.00& 0.0016\\
{\color{black}gr666}& 6& 3.82676E+05& 0.00& 0.0108& 0.00& 0.0133& 0.00& 0.0106& 0.00& 0.0126\\
{\color{black}gr666}& 7& 3.23283E+05& 0.00& 0.0109& 0.00& 0.0133& 0.00& 0.0124& 0.00& 0.0125\\
{\color{black}gr666}& 8& 2.85925E+05& 0.00& 0.0110& 0.00& 0.0134& 0.00& 0.0143& 0.00& 0.0123\\
{\color{black}gr666}& 9& 2.50989E+05& 0.00& 0.0110& 0.00& 0.0137& 0.13& 0.0163& 0.00& 0.0163\\
{\color{black}gr666}& 10& 2.24183E+05& 0.00& 0.0113& 0.00& 0.0140& 0.20& 0.0180& 0.00& 0.0131\\
\hline
\multicolumn{11}{l}{$\S$ Variant of the merging procedure}\\
\multicolumn{11}{l}{$\ddagger$ Time in seconds for one run.}\\
\multicolumn{11}{l}{$^*$ Percent above best known solution {\color{black} (relative error)}.}
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht!]
\begin{center}
\caption{\label{med}Results for Medium Problems with $k> 5$ (1000 runs)}
\medskip
\small
\setlength{\tabcolsep}{4pt}
\begin{tabular}{|c|c||c||c|c|r|c|r|c|c|c|c|}
\hline
Data&$k$&Best$\dagger$&Prior&\multicolumn{2}{|c|}{$\alpha=1.5\S$}&\multicolumn{2}{|c|}{$\alpha=2.0\S$}&\multicolumn{2}{|c|}{Construction}&\multicolumn{2}{|c|}{Separation}\\
\cline{5-12}
Set&&Known&BK$^*$&$^*$&Time$\ddagger$&$^*$&Time$\ddagger$&$^*$&Time$\ddagger$&$^*$&Time$\ddagger$\\
\hline
{\color{black}tsplib1060}& 10& 1.75484E+09& 0.00& 0.00& 0.03& 0.00& 0.04& 0.00& 0.05& 0.00& 0.03\\
{\color{black}tsplib1060}& 15& 1.12114E+09& 0.00& 0.00& 0.04& 0.01& 0.04& 0.02& 0.08& 0.02& 0.04\\
{\color{black}tsplib1060}& 20& 7.91790E+08& 0.00& 0.00& 0.04& 0.01& 0.04& 0.06& 0.10& 0.00& 0.05\\
{\color{black}tsplib1060}& 25& {\bf 6.06607E+08} & 0.02& 0.03& 0.04& 0.02& 0.04& 0.46& 0.12& 0.10& 0.06\\
{\color{black}tsplib3038}& 10& 5.60251E+08& 0.00& 0.00& 0.27& 0.00& 0.32& 0.00& 0.41& 0.00& 0.24\\
{\color{black}tsplib3038}& 15& 3.56041E+08& 0.00& 0.00& 0.30& 0.00& 0.37& 0.00& 0.67& 0.00& 0.33\\
{\color{black}tsplib3038}& 20& 2.66812E+08& 0.00& 0.01& 0.34& 0.03& 0.41& 0.01& 0.93& 0.00& 0.38\\
{\color{black}tsplib3038}& 25& {\bf 2.14475E+08} & 0.01& 0.02& 0.37& 0.02& 0.44& 0.11& 1.15& 0.00& 0.44\\
{\color{black}pendigit}& 10& 4.93015E+07& 0.00& 0.00& 6.97& 0.00& 10.31& 0.00& 8.03& 0.00& 5.48\\
{\color{black}pendigit}& 15& 3.90675E+07& 0.00& 0.00& 7.00& 0.00& 10.37& 0.00& 10.49& 0.00& 6.96\\
{\color{black}pendigit}& 20& 3.40194E+07& 0.00& 0.00& 7.26& 0.00& 10.77& 0.14& 14.66& 0.00& 8.30\\
{\color{black}pendigit}& 25& {\bf 2.99865E+07} & 0.17& 0.00& 7.38& 0.00& 10.95& 0.75& 18.74& 0.69& 8.93\\
\hline
\multicolumn{12}{l}{$\dagger$ Best known solution including the results in this paper. New ones displayed in boldface.}\\
\multicolumn{12}{l}{$\S$ Variant of the merging procedure}\\
\multicolumn{12}{l}{$\ddagger$ Time in seconds for one run.}\\
\multicolumn{12}{l}{$^*$ Percent above best known solution {\color{black} (relative error)}.}
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht!]
\begin{center}
\caption{\label{large}Results for Large Problems with $k> 5$ (100 runs)}
\medskip
\small
\setlength{\tabcolsep}{4pt}
\begin{tabular}{|c|c||c||c|c|r|c|r|c|c|c|c|}
\hline
Data&$k$&Best$\dagger$&Prior&\multicolumn{2}{|c|}{$\alpha=1.5\S$}&\multicolumn{2}{|c|}{$\alpha=2.0\S$}&\multicolumn{2}{|c|}{Construction}&\multicolumn{2}{|c|}{Separation}\\
\cline{5-12}
Set&&Known&BK$^*$&$^*$&Time$\ddagger$&$^*$&Time$\ddagger$&$^*$&Time$\ddagger$&$^*$&Time$\ddagger$\\
\hline
{\color{black}letter}& 10& 8.57503E+05& 0.00& 0.00& 22.9& 0.00& 34.4& 0.00& 37.1& 0.00& 29.0\\
{\color{black}letter}& 15& {\bf 7.43923E+05} & 0.09& 0.00& 25.1& 0.00& 38.1& 0.00& 57.2& 0.00& 36.0\\
{\color{black}letter}& 20& 6.72593E+05& 0.00& 0.16& 30.6& 0.00& 44.4& 0.20& 78.1& 0.01& 41.7\\
{\color{black}letter}& 25& {\bf 6.19572E+05} & 0.53& 0.00& 36.1& 0.00& 49.0& 0.50& 96.7& 0.13& 42.2\\
{\color{black}kegg}& 10& 6.05127E+07& 0.00& 0.00& 41.0& 0.00& 50.9& 4.96& 401.2& 0.00& 673.3\\
{\color{black}kegg}& 15& 3.49362E+07& 0.00& 0.00& 56.6& 1.20& 64.0& 11.79& 513.5& 10.34& 824.2\\
{\color{black}kegg}& 20& 2.47108E+07& 0.00& 0.01& 56.8& 1.39& 66.0& 13.09& 697.8& 12.15& 984.1\\
{\color{black}kegg}& 25& 1.90899E+07& 0.00& 0.53& 82.6& 1.79& 129.1& 29.53& 784.0& 15.60& 878.0\\
{\color{black}pla85900}& 10& 6.82941E+14& 0.00& 0.00& 251.6& 0.00& 288.7& 0.00& 379.0& 0.00& 202.2\\
{\color{black}pla85900}& 15& 4.60294E+14& 0.00& 0.00& 325.4& 0.00& 385.6& 0.00& 680.4& 0.00& 264.1\\
{\color{black}pla85900}& 20& 3.49811E+14& 0.00& 0.01& 437.4& 0.00& 509.6& 0.00& 984.5& 0.00& 336.9\\
{\color{black}pla85900}& 25& 2.82217E+14& 0.00& 0.00& 490.4& 0.02& 547.2& 0.00& 1180.2& 0.00& 377.8\\
\hline
\multicolumn{12}{l}{$\dagger$ Best known solution including the results in this paper. New ones displayed in boldface.}\\
\multicolumn{12}{l}{$\S$ Variant of the merging procedure}\\
\multicolumn{12}{l}{$\ddagger$ Time in seconds for one run.}\\
\multicolumn{12}{l}{$^*$ Percent above best known solution {\color{black} (relative error)}.}
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht!]
\begin{center}
\caption{\label{Rsml}Percent above best known solution by the procedures in R for small problems}
\medskip
\setlength{\tabcolsep}{4pt}
\begin{tabular}{|c|c||c||c|c||c|c||c|c|}
\hline
Data&$k$&Best&\multicolumn{2}{|c||}{H-W}&\multicolumn{2}{|c||}{Lloyd}&\multicolumn{2}{|c|}{MacQ}\\
\cline{4-9}
Set&&Known$\dagger$&Rand&++&Rand&++&Rand&++\\
\hline
{\color{black}ruspini75}& 6& 8.57541E+03& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}ruspini75}& 7& 7.12620E+03& 0.00& 0.00& 0.00& 1.34& 0.00& 1.34\\
{\color{black}ruspini75}& 8& 6.14964E+03& 0.00& 0.00& 0.00& 1.61& 0.00& 1.61\\
{\color{black}ruspini75}& 9& 5.18165E+03& 0.00& 2.17& 0.00& 3.78& 0.00& 5.56\\
{\color{black}ruspini75}& 10& 4.44628E+03& 0.00& 0.38& 1.25& 13.99& 0.48& 13.99\\
{\color{black}fisher}& 6& 3.90400E+01& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}fisher}& 7& 3.42982E+01& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}fisher}& 8& 2.99889E+01& 0.00& 0.00& 0.01& 0.01& 0.00& 0.01\\
{\color{black}fisher}& 9& 2.77861E+01& 0.00& 0.00& 0.01& 0.39& 0.01& 0.39\\
{\color{black}fisher}& 10& 2.58341E+01& 0.00& 0.06& 0.16& 0.78& 0.16& 0.78\\
{\color{black}gr202}& 6& 6.76488E+03& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}gr202}& 7& 5.81757E+03& 0.00& 0.00& 0.01& 0.02& 0.02& 0.12\\
{\color{black}gr202}& 8& 5.00610E+03& 0.00& 0.00& 0.03& 0.07& 0.03& 0.09\\
{\color{black}gr202}& 9& 4.37619E+03& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}gr202}& 10& 3.79449E+03& 0.00& 0.00& 0.00& 0.58& 0.29& 2.32\\
{\color{black}gr666}& 6& 3.82677E+05& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}gr666}& 7& 3.23284E+05& 0.00& 0.00& 0.00& 0.06& 0.00& 0.00\\
{\color{black}gr666}& 8& 2.85925E+05& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}gr666}& 9& 2.50989E+05& 0.00& 0.00& 0.00& 0.03& 0.00& 0.04\\
{\color{black}gr666}& 10& 2.24184E+05& 0.00& 0.41& 0.00& 0.14& 0.00& 0.41\\
\hline
\multicolumn{9}{l}{$\dagger$ Best known solution including the results in this paper.}\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht!]
\begin{center}
\caption{\label{Rlrg}Percent above best known solution by the procedures in R for medium and large problems}
\medskip
\setlength{\tabcolsep}{4pt}
\begin{tabular}{|c|c||c||c|c||c|c||c|c|}
\hline
Data&$k$&Best&\multicolumn{2}{|c||}{H-W}&\multicolumn{2}{|c||}{Lloyd}&\multicolumn{2}{|c|}{MacQ}\\
\cline{4-9}
Set&&Known$\dagger$&Rand&++&Rand&++&Rand&++\\
\hline
{\color{black}tsplib1060}& 10& 1.75484E+09& 0.00& 0.00& 0.00& 0.24& 0.00& 0.25\\
{\color{black}tsplib1060}& 15& 1.12114E+09& 0.00& 4.59& 0.07& 6.86& 0.04& 6.90\\
{\color{black}tsplib1060}& 20& 7.91790E+08& 0.05& 7.06& 0.05& 13.44& 0.21& 12.80\\
{\color{black}tsplib1060}& 25& 6.06607E+08& 0.06& 16.47& 0.37& 23.66& 0.48& 17.90\\
{\color{black}tsplib3038}& 10& 5.60251E+08& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}tsplib3038}& 15& 3.56041E+08& 0.00& 0.01& 0.00& 0.07& 0.00& 0.10\\
{\color{black}tsplib3038}& 20& 2.66812E+08& 0.01& 0.33& 0.03& 0.25& 0.05& 0.49\\
{\color{black}tsplib3038}& 25& 2.14475E+08& 0.04& 1.15& 0.06& 2.15& 0.05& 1.84\\
{\color{black}pendigit}& 10& 4.93015E+07& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}pendigit}& 15& 3.90675E+07& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}pendigit}& 20& 3.40194E+07& 0.09& 0.02& 0.09& 0.03& 0.01& 0.03\\
{\color{black}pendigit}& 25& 2.99865E+07& 0.17& 0.17& 0.05& 0.18& 0.17& 0.18\\
{\color{black}letter}& 10& 8.57503E+05& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}letter}& 15& 7.43923E+05& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}letter}& 20& 6.72593E+05& 0.00& 0.00& 0.00& 0.01& 0.00& 0.01\\
{\color{black}letter}& 25& 6.19572E+05& 0.00& 0.00& 0.10& 0.00& 0.00& 0.01\\
{\color{black}kegg}& 10& 6.05127E+07& 36.81& 0.00& 36.81& 0.00& 36.81& 0.00\\
{\color{black}kegg}& 15& 3.49362E+07& 98.23& 4.41& 98.23& 7.27& 98.23& 4.00\\
{\color{black}kegg}& 20& 2.47108E+07& 136.71& 13.74& 161.55& 19.13& 161.55& 16.17\\
{\color{black}kegg}& 25& 1.90899E+07& 190.95& 15.48& 208.02& 17.68& 208.02& 35.56\\
{\color{black}pla85900}& 10& 6.82941E+14& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}pla85900}& 15& 4.60294E+14& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\
{\color{black}pla85900}& 20& 3.49810E+14& 0.00& 0.02& 0.00& 0.00& 0.00& 0.00\\
{\color{black}pla85900}& 25& 2.82215E+14& 0.00& 0.14& 0.00& 0.00& 0.00& 0.00\\
\hline
\multicolumn{9}{l}{$\dagger$ Best known solution including the results in this paper.}\\
\end{tabular}
\end{center}
\end{table}
\section{\label{sec4}Computational Experiments}
We experimented with ten data sets and various numbers of clusters for each. Four problems are classified as small problems, three are medium size and three are large. The problems are listed in Table \ref{prob}. The construction, separation, and merging algorithms were
coded in Fortran using double precision arithmetic and were compiled by the Intel 11.1 Fortran Compiler using one thread with no parallel processing. They were run on a desktop with an Intel i7-6700 3.4GHz processor and 16GB RAM. Small and medium size problems were run 1000 times each in a multi-start approach. Large problems were run 100 times each.
{\color{black}The ``stats" package included in the base installation of ``R" version 3.5.3 contains the k-means function\footnote{{\color{black}The source code of the k-means function is available at\\ https://github.com/SurajGupta/r-source/blob/master/src/library/stats/R/kmeans.R.}}, which implements the three widely-used clustering algorithms described in Section \ref{widely}: Lloyd, MacQueen (MacQ), and Hartigan-Wong (H-W).}
They were run on a virtualized Windows environment with 16 vCPUs and 128GB of vRAM. No parallel processing was used in R. All algorithms (and starting solutions) were run as a single thread.
The physical server was a PowerEdge R720 with two Intel E5-2650 CPUs (8 cores each) and 128 GB RAM, using shared storage on an MD3620i via 10GB interfaces.
Thirty-six instances were tested for small size problems with $k=2,3,\ldots,10$ clusters. Thirty-six instances were tested for medium and large size problems with $k=2,5,10,15,20,25$ clusters, for a total of 72 instances. The best known (for small problems, the optimal) solutions were obtained for the 28 instances with $2\le k\le 5$. Such problems are too small for the separation algorithm, so they were tested by the merging and construction algorithms as well as R's implementations of the Lloyd, Hartigan-Wong (H-W), and MacQueen (MacQ) heuristics starting at random solutions, and their ``++'' starting solution procedure. Run times in seconds for the merging and construction algorithms are depicted in Table \ref{p5}.
We then tested the instances with $k>5$ clusters by the merging, construction, and separation algorithms, and by the R implementations of the Lloyd, Hartigan-Wong, and MacQueen heuristics starting at random solutions and with the ``++'' implementation. In Table \ref{sm} the results for the small size problems, for which the optimal solutions are known \citep{A09}, are reported for the merging, construction, and separation algorithms.
In Table \ref{med} the results for the three medium size problems are reported. Each instance was solved by two variants of the merging algorithm, the construction and separation algorithms, 1000 times each, and the best result reported. New best known solutions were found for three of the twelve instances.
In Table \ref{large} the results for the three large problems are reported. Each instance was solved by the merging, construction and separation algorithms, 100 times, and the best result reported. New best known solutions were found for two of the twelve instances.
In Tables \ref{Rsml} and \ref{Rlrg} we report the results of the three R algorithms as well as their special starting solution ``++'' \citep{David2007}, which requires an extra preparation step for the starting solutions. The algorithms were repeated from 1000 starting solutions.
While nine of the ten problems exhibit similar behavior, one large {\color{black}data set (kegg)} behaved differently and turned out to be a challenging problem. The merging procedure performed far better than all other approaches. For example, on the $k=25$ instance the best R approach found a solution 190\% above the best known one (see Table \ref{Rlrg}). When the special starting solution ``++'' \citep{David2007} was implemented in R, it was ``only'' 15\% higher. The construction algorithm was 29\% higher, the separation algorithm was 15\% higher, {\color{black}and the merging procedure was 0.31\% higher (Table \ref{1000})}. We also tested the merging procedure with $\alpha=1.1, 1.5, 2.0$. On the other nine problems $\alpha=1.5$ and $\alpha=2$ provided comparable results, while $\alpha=1.1$ provided somewhat poorer results, which are therefore not reported. For the {\color{black}kegg} instances, $\alpha=2$ performed the poorest while $\alpha=1.1$ performed best.
As a final experiment we solved the {\color{black} data set (kegg)} by the three variants of the merging procedure 1000 times in order to possibly get further improved best known solutions. The results are depicted in Table \ref{1000}. An improved solution was found for $k=20$.
\begin{table}[ht!]
\begin{center}
\caption{\label{1000}Results for the {\color{black} kegg data set} by merging variants (1000 runs)}
\medskip
\setlength{\tabcolsep}{4pt}
\begin{tabular}{|c|c||c|c|c||c|c|c||c|c|c|}
\hline
$k$&Best&\multicolumn{3}{|c||}{$\alpha=1.1$}&\multicolumn{3}{|c||}{$\alpha=1.5$}&\multicolumn{3}{|c|}{$\alpha=2.0$}\\
\cline{3-11}
&Known$\dagger$&$^*$&+&Time$\ddagger$&$^*$&+&Time$\ddagger$&$^*$&+&Time$\ddagger$\\
\hline
2& 1.13853E+09& 0.00& 680& 25.1& 0.00& 247& 27.5& 0.00& 109& 30.7\\
5& 1.88367E+08& 0.00& 1000& 27.0& 0.00& 994& 31.1& 0.00& 985& 35.1\\
10& 6.05127E+07& 0.00& 163& 34.7& 0.00& 125& 41.4& 0.00& 98& 48.0\\
15& 3.49362E+07& 0.00& 1& 41.9& 0.00& 2& 52.2& 0.00& 1& 62.7\\
20& 2.47108E+07& 0.00& 1& 46.1& 0.01& 1& 55.8& 0.29& 1& 73.8\\
25& 1.90899E+07& 0.31& 1& 63.6& 0.36& 1& 86.7& 0.47& 1& 122.5\\
\hline
\multicolumn{11}{l}{$\dagger$ Best known solution including the results in this paper.}\\
\multicolumn{11}{l}{$\ddagger$ Time in seconds for one run.}\\
\multicolumn{11}{l}{$^*$ Percent above best known solution {\color{black} (relative error)}.}\\
\multicolumn{11}{l}{+ Number of times (out of 1000) that the best solution was found.}
\end{tabular}
\end{center}
\end{table}
\subsection{Summary of the Computational Experiments}
We tested the merging, construction, and separation (which is based on the construction) procedures as well as six algorithms run in R: H-W, Lloyd, and MacQ based on random starting solutions, and their variants marked with ``++'' that require additional code to design special starting solutions.
These algorithms were tested on ten data sets, each with several instances, for a total of 72 instances. The four small problems (36 instances) were solved to optimality \citep[proven in][]{A09} by all approaches, so no further comparisons on these instances are suggested for future research. The six medium and large problems are challenging, especially the {\color{black}kegg} instances. Twenty-four instances, $k=10, 15, 20, 25$ for each problem, are compared below.
\begin{itemize}
\item Five new best known solutions were found by the algorithms proposed in this paper. Two of them were also found by the R implementations.
\item The best known solution was found by the algorithms proposed in this paper, within a standard number of runs, for all but one instance: {\color{black}tsplib1060} with $k=25$. \citet{BODX15,PABM19} report a solution of 6.06700E+08, while our best solution (found by the merging procedure using $\alpha=2$ run 1,000 times) is 6.06737E+08, which is 0.006\% higher. However, when we ran the merging process 10,000 times with $\alpha=2$ (requiring less than 7.5 minutes), we obtained a solution of 6.06607E+08, improving the best known solution by 0.015\%.
\item The best result of the 6 R algorithms failed to obtain the best known solution in 9 of the 24 medium and large instances with $k\ge 10$ (see Table \ref{Rlrg}). In several cases, the deviation from the best known solution was large.
\end{itemize}
\section{\label{sec5}Conclusions}
Three new algorithms for generating starting solutions for the clustering problem and a new improvement algorithm are proposed. We extensively tested these algorithms on ten widely researched data sets with varying numbers of clusters, for a total of 72 instances. Forty-eight of these instances are relatively easy, and all approaches, including standard approaches implemented in R, found the best known solutions. Twenty-four relatively large instances are more challenging. Five improved best known solutions were found by the three proposed algorithms, and two of them were matched by the R procedures. The best known solutions were not found by the R implementations in 9 of the 24 instances.
It is well known that in the planar $p$-median location problem, as well as in other Operations Research problems, starting with a good initial solution rather than a random one
significantly improves the final solution obtained with improvement algorithms.
This turns out to be true for the minimum sum of squared Euclidean distances clustering problem as well. The main contribution of this paper is finding good starting solutions for this clustering problem.
{\color{black} It would be interesting, as a topic for future research, to examine the performance of meta-heuristics using good starting solutions, such as the ones developed here, compared with random starting solutions. This should lead to multi-start meta-heuristic based algorithms designed with higher intensification that are more effective than current approaches. It is possible that when better starting solutions are used, fewer multi-start replications are required to yield the same quality solutions.}
\bigskip
\noindent{\bf Acknowledgment:} The authors would like to thank professor Adil Bagirov from the Federation University of Australia for providing the source data and detailed results of the experiments reported in \citet{BODX15}.
\renewcommand{\baselinestretch}{1}
\renewcommand{\arraystretch}{1}
\large
\normalsize
\bibliographystyle{apalike}
| {
"timestamp": "2020-04-10T02:14:48",
"yymm": "2004",
"arxiv_id": "2004.04593",
"language": "en",
"url": "https://arxiv.org/abs/2004.04593",
"abstract": "The clustering problem has many applications in Machine Learning, Operations Research, and Statistics. We propose three algorithms to create starting solutions for improvement algorithms for this problem. We test the algorithms on 72 instances that were investigated in the literature. Forty eight of them are relatively easy to solve and we found the best known solution many times for all of them. Twenty four medium and large size instances are more challenging. We found five new best known solutions and matched the best known solution for 18 of the remaining 19 instances.",
"subjects": "Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)",
"title": "The Importance of Good Starting Solutions in the Minimum Sum of Squares Clustering Problem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9711290913825541,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7099522475220548
} |
https://arxiv.org/abs/1908.10962 | Optimal transport mapping via input convex neural networks | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as gradient of a convex function naturally models a {\em discontinuous} transport mapping. | \subsection{High dimensional experiments}
\label{sec:high-dim}
\input{figure_high_dim.tex}
We consider the challenging task of learning optimal transport maps between high-dimensional distributions. In particular, we consider both synthetic and real-world high-dimensional datasets and provide a quantitative and qualitative illustration of the performance of our proposed approach.
\textbf{Gaussian to Gaussian.} Source distribution $Q=\mathcal{N}(0,I_d)$ and target distribution $P=\mathcal{N}(\mu,I_d)$, for some fixed $\mu\in\mathbb{R}^d$ and $d=784$. The mean vector is $\mu=\alpha (1,\ldots,1)^\top$ for some parameter $\alpha>0$. Because both distributions are Gaussian, the optimal transport map is explicitly known: $T^\ast(x)=x+\mu$, and hence $W_2^2(P,Q)=\norm{\mu}^2/2=\alpha^2 d/2$. In \prettyref{fig:w2-convergence}, we compare our estimated distance $\tilde{W}_2^2(P,Q)$, defined in~\eqref{eq:approxW2}, with the exact value $W_2^2(P,Q)$ as the training progresses, for various values of $\alpha \in \{1,5,10\}$. Intuitively, learning is more challenging when $\alpha$ is larger. Further, the error in learning the optimal transport map, quantified by the metric $\norm{\mu_{T(Q)}-\mu}^2$, where $\mu_{T(Q)}$ is the mean of the transported distribution $T_\#Q$, is reported in Table~\ref{tab:gauss_gauss_table}.
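The closed form above can be checked numerically; a minimal sketch follows (note the convention here uses the cost $\frac12\|x-y\|^2$, so $W_2^2=\|\mu\|^2/2$; the sample size is our choice):

```python
import numpy as np

rng = np.random.default_rng(0)
d, alpha, n = 784, 5.0, 5000
mu = alpha * np.ones(d)

# Closed form under cost 0.5*||x - y||^2:  W2^2 = ||mu||^2 / 2 = alpha^2 * d / 2
w2_exact = 0.5 * np.linalg.norm(mu) ** 2

# The optimal map is the shift T*(x) = x + mu, whose per-sample cost
# 0.5*||x - (x + mu)||^2 = 0.5*||mu||^2 is the same for every x
X = rng.standard_normal((n, d))    # samples from Q = N(0, I_d)
TX = X + mu                        # transported samples, distributed as P
w2_empirical = 0.5 * ((TX - X) ** 2).sum(axis=1).mean()

# Error metric used in the table: squared norm of the mean gap
err = np.linalg.norm(TX.mean(axis=0) - mu) ** 2
```

Because the shift map has constant cost, the empirical estimate matches the closed form exactly, and the mean-gap error shrinks at the usual $d/n$ Monte Carlo rate.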
\begin{table}[h]
\caption{The error between the mean of the transported distribution and that of the target distribution. The source and target are $784$-dim.\ Gaussians.}
\tiny
\label{tab:gauss_gauss_table}
\begin{center}
\begin{sc}
\begin{tabular}{cccc}
Metric & $\alpha=1$ & $\alpha=5$ & $\alpha=10$ \\
\midrule
$\norm{\mu_{T(Q)}-\mu}^2$ & $ 0.19\pm 0.015$ & $13.95 \pm 1.45$ & $29.05 \pm 5.16$ \\
\midrule
\hspace{-1.7em} $100 \cdot (\norm{\mu_{T(Q)}-\mu}/\norm{\mu})^2$ & $ 0.02\pm 0.001$ & $0.07 \pm 0.005$ & $0.04 \pm 0.006$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{center}
\end{table}
\textbf{High-dim.\ Gaussian to low-dim.\ mixture.} The source distribution $Q$ is the standard Gaussian $\mathcal{N}(0,I_d)$ with $d=784$, and the target distribution $P$ is a mixture of four Gaussians that lies in a two-dimensional subspace of the high-dimensional space $\mathbb{R}^d$; i.e.\xspace the first two components of the random vector $X\sim P$ form a mixture of four Gaussians, and the remaining components are zero. The projection of the learned optimal transport map onto the first four components is depicted in Figure~\ref{fig:mixture-high-dim}. As illustrated in the left panel of Figure~\ref{fig:mixture-high-dim}, our transport map correctly maps the source distribution to the mixture of four Gaussians in the first two components. It maps the remaining components to zero, as highlighted by the red blob at zero in the right panel.
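The target distribution in this experiment can be sketched as follows; the mixture centers, component variance, and uniform weights below are illustrative assumptions, as the text specifies only that the mixture lives in the first two coordinates:

```python
import numpy as np

def sample_target(n, d=784, centers=((4, 4), (4, -4), (-4, 4), (-4, -4)),
                  sigma=0.5, rng=None):
    # Mixture of four Gaussians supported on the first two coordinates of R^d;
    # every remaining coordinate is exactly zero.
    if rng is None:
        rng = np.random.default_rng(0)
    X = np.zeros((n, d))
    which = rng.integers(0, len(centers), size=n)   # uniform component choice
    X[:, :2] = np.asarray(centers, dtype=float)[which]
    X[:, :2] += sigma * rng.standard_normal((n, 2))
    return X

P = sample_target(1000)
```

Such a target has a disconnected, effectively two-dimensional support inside $\mathbb{R}^{784}$, which is exactly the regime where a gradient-of-convex-function map can be discontinuous.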
\textbf{MNIST $\{0,1,2,3,4\}$ to MNIST $\{5,6,7,8,9\}$.} We consider the standard MNIST dataset~\cite{mnist} with the goal of learning the optimal transport map from the set of images of the first five digits~$\{0,1,2,3,4\}$ to that of the last five digits~$\{5,6,7,8,9\}$. To achieve this, we embed the images into a space where the Euclidean norm $\|\cdot\|$ between the embedded images is meaningful. This is in alignment with the reported results in the literature for learning the $L_2$-optimal transport map~\citep[Sec. 4.1]{yang2019potential}. We consider embeddings into a $16$-dimensional latent feature space given by a pre-trained Variational Autoencoder (VAE), and run our algorithm on this feature space.
The results of the learned transport map are depicted in \prettyref{fig:all_high_dim}. Figure~\ref{fig:mnist-firstfive} presents samples from the source distribution and Figure~\ref{fig:mnist-lastfive} illustrates the source samples after transportation under the learned optimal transport map. We observe that the digits that look alike are coupled via the optimal transport map, e.g. $1\to9$, $2\to8$, and $4\to9$.
\textbf{Gaussian to MNIST.} The source is the $16$-dimensional standard Gaussian distribution, and the target is the set of $16$-dimensional latent embeddings of all the MNIST digits. MNIST-like samples generated by the learned optimal transport map are depicted in~\prettyref{fig:gauss-mnist}.
These experiments serve as a proof of concept that the algorithm scales to high-dimensional settings and real-world datasets. We believe that further improving the performance of the proposed algorithm requires careful tuning of hyper-parameters, which takes time to develop (as was the case for the initial WGAN) and is a subject of ongoing work.
\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{transported_16.pdf}
\caption{MNIST-like samples generated by the learned optimal transport map from a Gaussian source distribution in the feature space.}
\label{fig:gauss-mnist}
\end{figure}
\section{Introduction}
\label{sec:introduction}
Finding a mapping that transports mass from one distribution $Q$ to another distribution $P$
is an important task in various machine learning applications, such as deep generative models \cite{goodfellow2014generative,2013auto} and domain adaptation \cite{gopalan2011domain,ben2010theory}.
Among infinitely many transport maps $T$ that can map a random variable $X$ from $Q$ such that
$T(X)$ is distributed as $P$, several recent advances focus on
discovering some inductive bias to find a transport map with desirable properties.
Research in optimal transport has been leading such efforts,
in applications such as
color transfer \cite{ferradans2014regularized},
shape matching \cite{su2015optimal},
data assimilation \cite{reich2013nonparametric},
and Bayesian inference \cite{el2012bayesian}.
Searching for an optimal transport
encourages a mapping that minimizes the total cost
of transporting mass from $Q$ to $P$, as originally formulated in \citet{monge1781memoire},
and provides the inductive bias needed in many such applications.
However, finding the optimal transport map in general is a challenging task, especially in high dimensions where
efficient approaches are critical.
Algorithmic solutions are well-established
for discrete variables:
the optimal transport can be found as the solution to a linear program.
Building upon this mature area, typical approaches for general distributions use quantization,
which becomes intractable for the
high-dimensional variables we encounter in modern applications \cite{evans1999differential,benamou2000computational,papadakis2014optimal}.
\input{starting_figure.tex}
To this end, we propose a novel minimax optimization approach to search for the
optimal transport map under the quadratic cost (i.e.~the 2-Wasserstein metric).
A major challenge in a minimax formulation of optimal transport is
handling the constraints in the Kantorovich dual formulation \eqref{eq:dual_form}.
They require evaluating the dual functions at every point in the domain, which is not tractable.
A common heuristic is to sample some points and add the corresponding sampled constraints as regularizers.
Such regularization creates biases that hinder learning the true optimal transport.
Our key innovation is to depart from this common practice;
we instead eliminate the constraints by restricting our search to the set of all convex functions,
building upon the fundamental connection from Theorem \ref{thm:knott-brenier}.
This leads to a novel minimax formulation in \eqref{eq:max-min}.
Leveraging on recent advances in input convex neural networks,
we propose a new architecture and a training algorithm for solving this minimax optimization.
We establish the consistency of our proposed minimax formulation in Theorem~\ref{thm:our_optim_result}.
In particular, we show that the solution to this optimization problem yields the exact optimal transport map.
We provide stability analysis for the proposed estimator in \prettyref{thm:stability}.
Further, when used to train deep generative models,
our approach can be viewed as a novel framework to train
a generator that is modeled as a {\em gradient of a convex function}.
We provide a principled training rule based on the optimal transport theory.
This ensures that $(i)$ the generator converges to the optimal transport map, independent of how we initialize the neural network; and
$(ii)$ the generator represents sharp boundaries when the target has multiple disconnected supports.
The gradient of a neural network naturally represents discontinuous functions,
which is critical in mapping from a single connected support to disconnected supports.
To model convex functions, we leverage Input Convex Neural Networks (ICNNs),
a class of scalar-valued neural networks $f(x;\theta)$ such that the function $x \mapsto f(x;\theta) \in \mathbb{R}$ is convex.
These neural networks were introduced by \citet{amos2016input} to provide efficient inference and optimization procedures for structured prediction, data imputation and reinforcement learning tasks. In this paper, we show that ICNNs can be efficiently trained to learn the optimal transport map between two distributions $P$ and $Q$. To the best of our knowledge, this is the first such instance where ICNNs are leveraged for the well-known task of learning optimal transport maps in a {\em scalable} fashion. This framework opens up a new realm for understanding problems in optimal transport theory using parametric convex neural networks, both in theory and practice.
Figure~\ref{fig:checker-board-OT0} provides an example
where the optimal transport map has been learned via our proposed Algorithm~\ref{alg:W2}
from the orange distribution to the green distribution.
\textbf{Notation.} $\mathcal{P}(\mathcal{X})$ denotes the set of probability measures on a Polish space $\mathcal{X}$, and $\mathcal{B}(\mathcal{X})$ denotes the Borel subsets of $\mathcal{X}$. For $P \in \mathcal{P}(\mathcal{X})$ and $Q\in \mathcal{P}(\mathcal{Y})$, $P \otimes Q $ denotes the product measure on $\mathcal{X} \times \mathcal{Y}$. For measurable map $T:\mathcal{X} \to \mathcal{Y}$, $T_{\#} P$ denotes the push-forward of $P$ under $T$, i.e.\xspace $(T_{\#} P)(A)=P(T^{-1}(A)),~\forall A \in \mathcal{B}(\mathcal{Y})$.
$L^1(P) \triangleq \{f \text{ measurable}: \int |f|\,\mathrm{d} P<\infty\}$ denotes the set of integrable functions with respect to $P$. $\mathtt{CVX}(P)$ denotes the set of all convex functions in $L^1(P)$. $\mathrm{Id}:x\mapsto x$ denotes the identity function. $\langle \cdot,\cdot \rangle$ and $\|\cdot\|$ denote the Euclidean inner product and the $\ell_2$-norm, respectively.
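As a concrete illustration of the push-forward notation (a toy sketch, not part of the paper): if $Y\sim Q=\mathcal{N}(0,1)$ and $T(y)=2y+1$, then $T_{\#}Q=\mathcal{N}(1,4)$, which can be checked empirically.

```python
import numpy as np

# Empirical check of the push-forward T#Q for the affine map T(y) = 2y + 1.
# If Y ~ Q = N(0, 1), then T(Y) ~ T#Q = N(1, 4).
rng = np.random.default_rng(0)
Y = rng.standard_normal(100_000)      # samples from Q
pushed = 2.0 * Y + 1.0                # samples from T#Q

print(pushed.mean(), pushed.std())    # approx. 1.0 and 2.0
```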
\section{Conclusion}
We presented a novel minimax framework to learn the optimal transport map under the $W_2$-metric. Our framework is in contrast to regularization-based approaches, in which the constraint of the dual Kantorovich problem is replaced with a penalty term. Instead, we represent the dual functions with ICNNs, so that the constraint is automatically satisfied. Further, the transport map is expressed as the gradient of a convex function, which can represent discontinuous maps. We believe that our framework paves the way for bridging optimal transport theory and practice.
\section{Proof of Theorem~\ref{thm:our_optim_result}}
\label{app:proof_theorem}
Define $V_f(g) \triangleq \mathbb{E}_Q[\langle Y,\nabla g(Y) \rangle - f(\nabla g(Y))]$. The main step of the proof is to show that $\sup_{g \in \mathtt{CVX}(Q)} V_f(g) = \mathbb{E}_Q[f^*(Y)]$. Then the conclusion follows from \eqref{eq:dual_convex_form}. To prove this, note that for all $g \in \mathtt{CVX}(Q)$, we have
\[
\langle y, \nabla g(y) \rangle - f(\nabla g(y)) \leq \langle y, \nabla f^*(y) \rangle - f(\nabla f^*(y)) = f^*(y),
\]
for all $y \in \mathbb{R}^d$ such that $g$ and $f^*$ are differentiable at $y$. We now claim that both $g$ and $f^*$ are differentiable $Q$-almost everywhere (a.e.). If the claim is true, then upon taking the expectation w.r.t.\ $Q$:
\[V_f(g) \leq V_f(f^*) = \mathbb{E}_Q[f^*(Y)],\quad \forall g \in \mathtt{CVX}(Q)\]
and the inequality is achieved with $g=f^*$. Now we prove the claim as follows. Since $\int g \,\mathrm{d} Q < \infty$, we have $Q(g=\infty)=0$. Thus $Q(\text{Dom}(g))=1$, where $\text{Dom}(g)$ is the domain of the function $g$. Moreover, $Q(\text{Int}(\text{Dom}(g)))=1$, where $\text{Int}(\cdot)$ denotes the interior, because the boundary has $Q$-measure zero ($Q$ has a density). Since $g$ is convex, it is differentiable on $\text{Int}(\text{Dom}(g))$ except on a set of Lebesgue measure zero, which has $Q$-measure zero as well. Therefore, $g$ is $Q$-a.e.\ differentiable. Similar arguments hold for $f^*$.
\section{Proof of Theorem~\ref{thm:stability}}
\label{app:proof_stability}
The proof follows from the bounds
\begin{subequations}
\begin{align}
\|\nabla g - \nabla f^* \|^2_{L^2(Q)} \leq \frac{2}{\alpha} \epsilon_1\label{eq:bound1},\\
\|\nabla f^* - \nabla g_0 \|^2_{L^2(Q)} \leq \frac{2}{\alpha} \epsilon_2\label{eq:bound2},
\end{align}
\end{subequations}
and using the triangle inequality. The proof for the first bound is as follows.
If $f$ is $\alpha$-strongly convex, then $f^*$ is $\frac{1}{\alpha}$-smooth. By definition of smoothness,
\begin{equation*}
f^*(z) \leq f^*(y) + \langle \nabla f^*(y), z-y \rangle + \frac{1}{2\alpha}\|z-y\|^2 \triangleq h_y(z),\quad \forall y,z \in \mathbb{R}^d,
\end{equation*}
where $h_y(z)$ is defined to be the quadratic function of $z$ on the right-hand side of the inequality. From $f^*(z)\leq h_y(z)$ for all $z$, it follows for the convex conjugates that $f(x) = f^{**}(x) \geq h^*_y(x)$. As a result,
\begin{equation}
f(x) \geq h^*_y(x)= -f^*(y) + \langle y,x \rangle + \frac{\alpha}{2}\|x-\nabla f^*(y)\|^2,\quad \forall x,y\in \mathbb{R}^d.
\label{eq:inequality}
\end{equation}
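For completeness, the closed form of $h^\ast_y$ follows by direct computation: the supremum in $h^\ast_y(x)=\sup_z\{\langle x,z\rangle - h_y(z)\}$ is attained at $z^\ast = y + \alpha(x-\nabla f^\ast(y))$, since $h_y$ is quadratic in $z$, and substituting $z^\ast$ yields
\begin{align*}
h^\ast_y(x) &= \langle x, z^\ast\rangle - f^\ast(y) - \langle \nabla f^\ast(y), z^\ast - y\rangle - \frac{1}{2\alpha}\|z^\ast - y\|^2\\
&= \langle x, y\rangle - f^\ast(y) + \alpha\|x-\nabla f^\ast(y)\|^2 - \frac{\alpha}{2}\|x-\nabla f^\ast(y)\|^2\\
&= -f^\ast(y) + \langle y, x\rangle + \frac{\alpha}{2}\|x-\nabla f^\ast(y)\|^2.
\end{align*}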
We use this inequality to control the optimality gap $\epsilon_1(f,g)$:
\begin{align*}
\epsilon_1(f,g)&=\mathcal{V}(f,g) - \inf_{\tilde{g}} \mathcal{V} (f,\tilde{g}) \\&= \mathcal{V}(f,g)-\mathcal{V}(f,f^*) \\&= \mathbb{E}_Q [f^*(Y) - \langle Y,\nabla g(Y) \rangle + f(\nabla g(Y)) ]
\\&\geq \frac{\alpha}{2} \mathbb{E}_Q[\| \nabla g(Y) - \nabla f^*(Y)\|^2 ],
\end{align*}
where the last step follows from~\eqref{eq:inequality}, with $x=\nabla g(y)$. This concludes the proof of the bound~\eqref{eq:bound1}. It remains to prove~\eqref{eq:bound2}. To this end, note that the optimality gap $\epsilon_2(f)$ is given by
\begin{align*}
\epsilon_2(f)&=\mathcal{V}(f_0,g_0) - \inf_{\tilde g} \mathcal{V}(f,\tilde{g}) \\&= \mathcal{V}(f_0,f^*_0) - \mathcal{V}(f,f^*) \\&= - (\mathbb{E}_P[f_0(X)]+ \mathbb{E}_Q [f^*_0(Y)]) + (\mathbb{E}_P[f(X)]+ \mathbb{E}_Q [f^*(Y)])
\\&= - \mathbb{E}_Q [f_0(\nabla f^*_0(Y)) + f^*_0(Y)] + \mathbb{E}_Q [f(\nabla f^*_0(Y)) + f^*(Y)]
\\&= - \mathbb{E}_Q [\langle Y, \nabla f^*_0(Y)\rangle ] +\mathbb{E}_Q [f(\nabla f^*_0(Y)) + f^*(Y)]
\end{align*}
Here the fourth equality uses the push-forward property $(\nabla f^*_0)_{\#}Q=P$, and the fifth uses the Fenchel equality $f_0(\nabla f^*_0(y)) + f^*_0(y) = \langle y, \nabla f^*_0(y)\rangle$. Using the inequality~\eqref{eq:inequality} with $x=\nabla f^*_0(y)$ yields:
\begin{align*}
\epsilon_2(f)
&\geq \frac{\alpha}{2} \mathbb{E}_Q[\| \nabla f^*_0(Y) - \nabla f^*(Y)\|^2 ],
\end{align*}
which concludes~\eqref{eq:bound2}, since $g_0=f^*_0$.
\section{Experimental set-up}
\label{app:setup}
\subsection{Two-dimensional experiments}
\textbf{Datasets.} We use the following synthetic datasets: (i) Checkerboard, and (ii) mixture of eight Gaussians. For the Checkerboard dataset, the source distribution $Q$ is the law of the random variable $Y=X + Z$, where $X \sim \mathrm{Unif}(\{(0,0), (1,1),(1,-1),(-1,1),(-1,-1)\})$ and $Z \sim \mathrm{Unif}([-0.5,0.5] \times [-0.5, 0.5])$. Similarly, the target $P$ is the law of $Y=X + Z$, where $X \sim \mathrm{Unif}(\{(0,1), (0,-1),(1,0),(-1,0)\})$ and $Z \sim \mathrm{Unif}([-0.5,0.5] \times [-0.5, 0.5])$. Here $\mathrm{Unif}(B)$ denotes the uniform distribution over a set $B$. For the mixture of eight Gaussians dataset, we have $Q=\mathcal{N}(0,I_2)$, and $P$ is the law of $Y=X+Z$, where $X \sim \mathrm{Unif}(\{(1,0), (\frac{1} {\sqrt{2}},\frac{1} {\sqrt{2}}),(0,1),(\frac{-1} {\sqrt{2}},\frac{1} {\sqrt{2}}), (-1,0),(\frac{-1} {\sqrt{2}},\frac{-1} {\sqrt{2}}), (0,-1), (\frac{1} {\sqrt{2}},\frac{-1} {\sqrt{2}})\}) $ and $Z \sim \mathcal{N}(0,0.5I_2)$.
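The Checkerboard construction above can be sampled in a few lines of NumPy (an illustrative sketch; the function name is ours, not from the released code):

```python
import numpy as np

def sample_checkerboard(n, centers, rng):
    """Sample Y = X + Z with X uniform over `centers` and
    Z ~ Unif([-0.5, 0.5]^2), as in the Checkerboard construction."""
    centers = np.asarray(centers, dtype=float)
    X = centers[rng.integers(len(centers), size=n)]
    Z = rng.uniform(-0.5, 0.5, size=(n, 2))
    return X + Z

rng = np.random.default_rng(0)
# Source Q: five cells; target P: four interleaved cells.
Y_src = sample_checkerboard(2000, [(0, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)], rng)
Y_tgt = sample_checkerboard(2000, [(0, 1), (0, -1), (1, 0), (-1, 0)], rng)
```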
\textbf{Architecture details.}
For our Algorithm~\ref{alg:W2}, we parametrize both the convex functions $f$ and $g$ by ICNNs. Both ICNN networks have an equal number of nodes in all hidden layers,
followed by a final output layer.
For the first layer, we choose the {\em convex} activation function $\sigma_0$ to be
the square of the leaky ReLU, i.e., $\sigma_0(x) = \left(\max(\beta x,x) \right)^2$ with a small positive constant $\beta$.
For the remaining layers $l=1,\ldots,L-1$, we use the leaky ReLU $\sigma_l(x) = \max(\beta x,x)$
as the {\em monotonically non-decreasing and convex} activation function.
Note that assumptions (ii)-(iii) of the ICNN are thus satisfied.
In all of our experiments, we set the parameter $\beta = 0.2$. In some of the experiments, as explained below, we chose the CELU activation function, which also obeys the convexity assumptions.
For the three baselines, Barycentric-OT, W1-LP, and W2GAN, we use the implementations of \citet{leygonie2019adversarial}, made publicly available at \url{https://github.com/jshe/wasserstein-2}. For all these methods, we use the default settings of hyperparameters which were fixed to be the best values from the respective papers. Further, for a fair comparison we allow the number of parameters in each of these baselines to be larger than ours; in fact, for W2GAN and Barycentric-OT, the default number of neural network parameters is much larger than ours.
\textbf{Hyperparameters.} For reproducibility, we provide the details of the numerical experiments for each of the figures. For the Checkerboard dataset in \prettyref{fig:comparison} (same as \prettyref{fig:checker-board-OT0}),
we run Algorithm~\ref{alg:W2} with the following parameters: for both the ICNNs $f$ and $g$, we set the hidden size $m=64$, number of layers $L=4$, regularization constant $\lambda=1.0$, and leaky ReLU activation; for training, we use batch size $M=1024$, learning rate $10^{-4}$, generator iterations $K=10$, total number of iterations $T=10^5$, and the Adam optimizer with $\beta_1=0.5$ and $\beta_2=0.9$. For each of the baselines, the parameter values are as follows: (a) Barycentric-OT: $3$ neural networks ($1$ for the dual stage and the rest for the map step), each with $m=128, L=3, M=512, T=2\times 10^5$, and an $l_2$-entropy penalty; (b) W1-LP: both the discriminator and the generator neural networks with $m=128, L=3, K=5$, and $M=512, T=2\times 10^5$; and (c) W2GAN: $3$ neural networks ($1$ for the generator and the remaining two for the functions in the dual formulation \prettyref{eq:dual_form}), each with $m=128, L=3, K=5, M=512, T=2\times 10^5$. W2GAN also uses six additional regularization terms, which are set to the default values provided in the code. All these baselines use ReLU activation and the Adam optimizer with $\beta_1=0.9$ and $\beta_2=0.990$, with a learning rate of $0.0001$ for the generator parameters and $0.0005$ for the rest. For the mixture of eight Gaussians dataset, we use the same parameters except for batch size $M=256$, and all the baselines use the same parameters as in the above setting. Also, for the multiple trials in \prettyref{fig:W1-GAN} for W1-LP and W2GAN, we use the above parameters but with different random initializations of the neural network weights and biases.
\subsection{High dimensional experiments}
\label{app:high-dim-exp}
\textbf{Gaussian to Gaussian.} The source distribution is $Q=\mathcal{N}(0,I_d)$ and the target distribution is $P=\mathcal{N}(\mu,I_d)$, for a fixed $\mu\in\mathbb{R}^d$ and $d=784$. The mean vector is $\mu=\alpha (1,\ldots,1)^\top$ with $\alpha \in \{1,5,10\}$. For both the ICNNs $f$ and $g$, we have $d=784, m=1024, L=3$, leaky ReLU activation, batch size $M= 60$, $K=16$, $\lambda=0.1$, $T=40,000$, the Adam optimizer with $\beta_1=0.5$ and $\beta_2 =0.99$, and learning rate decay by a factor of $0.5$ every $2,000$ iterations. Note that in \prettyref{fig:w2-convergence}, $1$ epoch corresponds to $1000$ iterations.
\textbf{High-dim.\ Gaussian to low-dim.\ mixture.} The source distribution is $Q=\mathcal{N}(0,I_d)$ with $d=784$. The target distribution is a mixture of four Gaussians, $P=\sum_{i=1}^4\frac{1}{4}\mathcal{N}(\mu_i,\Sigma )$, where $\mu_i=(\pm 1.4,\pm 1.4, 0 , \ldots, 0) \in \mathbb{R}^{784}$ and $\Sigma=\text{diag}(0.2,0.2,0,\ldots,0)$. For both the ICNNs $f$ and $g$, we have $d=784, m=1024, L=3$, leaky ReLU activation, batch size $M=60$, $K=25$, $\lambda=0.01$, the Adam optimizer with $\beta_1=0.5$ and $\beta_2 =0.99$, and learning rate decay by a factor of $0.5$ every two epochs. The algorithm is run for $30$ epochs, where each epoch corresponds to $1000$ iterations.
\textbf{MNIST $\{0,1,2,3,4\}$ to MNIST $\{5,6,7,8,9\}$.} To obtain the latent embeddings of the MNIST dataset, we first train a VAE whose encoder and decoder each have $3$ hidden layers with $256$ neurons, with a $16$-dimensional latent vector. We then use the ICNNs $f$ and $g$ to learn the optimal transport between the embeddings of digits $\{0,1,2,3,4\}$ and those of $\{5,6,7,8,9\}$. For both these ICNNs we have $d=16, m=1024, L=3$, CELU activation, batch size $128$, $K=16$, $\lambda=1$, $T=100,000$, the Adam optimizer with $\beta_1=0.9$ and $\beta_2 =0.99$, and learning rate decay by a factor of $0.5$ every $4,000$ iterations.
\textbf{Gaussian to MNIST.} To obtain the latent embeddings for MNIST, we use the same pre-trained VAE model as above. We also use the same hyperparameter settings as in the ``MNIST $\{0,1,2,3,4\}$ to MNIST $\{5,6,7,8,9\}$'' experiment, with the only change being a batch size of $64$.
\section{Further discussion of related work}
\label{app:related-work-more}
\input{further_related_work}
\section{A novel minimax formulation to learn optimal transport}
\label{sec:formulation}
Our goal is to learn the optimal transport map $T^*$ from $Q$ to $P$,
from samples drawn from $P$ and $Q$, respectively.
We use the
fundamental connection between optimal transport and Kantorovich dual in Theorem~\ref{thm:knott-brenier},
to formulate learning $T^*$ as a problem of estimating $W_2^2(P,Q)$.
However, $W_2^2(P,Q)$ is notoriously hard to estimate.
The standard Kantorovich dual formulation in Eq.~\eqref{eq:dual_form} involves a
supremum over a set $\Phi_c$ defined by infinitely many constraints,
onto which even approximate projection is challenging.
To this end, we derive an alternative optimization formulation in Eq.~\eqref{eq:max-min},
inspired by the convexification trick \citep[Section 2.1.2]{villani2003topics}.
This allows us to eliminate the distance constraint of $\Phi_c$,
and instead constrain our search over all {\em convex functions}.
This constrained optimization can now be seamlessly integrated with recent advances in
designing deep neural architectures with convexity guarantees.
This leads to a novel minimax optimization
to learn the optimal transport.
We exploit the fundamental properties of $W_2^2(P,Q)$ and the corresponding optimal transport to
reparametrize the optimization formulation.
Note that for any $(f,g) \in \Phi_c$,
\begin{align*}
&f(x)+g(y) \leq \frac{1}{2}\norm{x-y}_2^2 \;\; \Longleftrightarrow\\ &\qth{\frac{1}{2}\|x\|_2^2 -f(x)}+\qth{\frac{1}{2}\|y\|_2^2 -g(y)} \geq \inner{x}{y}.
\end{align*}
Hence, renaming $\frac{1}{2}\|\cdot \|_2^2-f(\cdot)$ and $\frac{1}{2}\|\cdot\|_2^2-g(\cdot)$ as $f$ and $g$, respectively, and
substituting them in \prettyref{eq:dual_form} yields
\begin{align*}
W_2^2(P,Q) = C_{P,Q} -\inf_{(f,g) \in \tilde{\Phi}_c} \Big\{ \mathbb{E}_P[f(X)]+\mathbb{E}_Q[g(Y)] \Big\},
\end{align*}
where $C_{P,Q}=(1/2)\mathbb{E} [\norm{X}_2^2+\norm{Y}_2^2 ]$ is a constant independent of $(f,g)$ and $\tilde{\Phi}_c \triangleq \{ (f,g) \in L^1(P) \times L^1(Q): f(x)+g(y) \geq \inner{x}{y}, \quad \forall (x,y) ~ dP \otimes dQ ~\text{a.e.} \}$. While the above constrained optimization problem involves a pair of functions $(f,g)$, it can be transformed into the following form involving only a single convex function $f$, thanks to \citet[Theorem 2.9]{villani2003topics}:
\begin{align}\label{eq:dual_convex_form}
\hspace{-2pt}W_2^2(P,Q) \!= \!C_{P,Q}\!- \!\!\inf_{f \in \mathtt{CVX}(P)} \mathbb{E}_P[f(X)]\!+\!\mathbb{E}_Q[f^\ast(Y)],
\end{align}
where $f^\ast(y)=\sup_x \langle x,y\rangle - f(x)$ is the convex conjugate of $f(\cdot)$.
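As a quick sanity check on the conjugate (an illustrative toy, not part of the method): for $f(x)=\frac{\alpha}{2}x^2$ in one dimension, $f^\ast(y)=\frac{y^2}{2\alpha}$, which a brute-force maximization over a grid recovers.

```python
import numpy as np

# Brute-force the convex conjugate f*(y) = sup_x <x, y> - f(x)
# for f(x) = (alpha/2) x^2 in 1-D, where f*(y) = y^2 / (2 alpha).
alpha, y = 2.0, 3.0
xs = np.linspace(-10, 10, 200_001)            # grid over x
f_star_numeric = np.max(xs * y - 0.5 * alpha * xs**2)
f_star_exact = y**2 / (2 * alpha)
print(f_star_numeric, f_star_exact)           # both approx. 2.25
```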
The crucial tools behind our formulation are
the following celebrated results due to Knott-Smith and Brenier \cite{villani2003topics},
which relate the optimal solutions for the dual form in \prettyref{eq:dual_convex_form} and
the primal form in \prettyref{eq:kantor_relax}.
\begin{theorem}[{\citep[Theorem 2.12]{villani2003topics}}]
\label{thm:knott-brenier}
Let $P,Q$ be two probability distributions on $\mathbb{R}^d$ with finite second order moments. Then,
\begin{enumerate}
\item
(\textbf{Knott-Smith optimality criterion}) A coupling $\pi \in \Pi(P,Q)$ is optimal for the primal \prettyref{eq:kantor_relax} if and only if there exists a convex function $f \in \mathtt{CVX}(\mathbb{R}^d)$ such that $\mathrm{Supp}(\pi) \subset \mathrm{Graph}(\partial f)$; equivalently, $y \in \partial f(x)$ for $d\pi$-almost every $(x,y)$. Moreover, the pair $(f,f^\ast)$ achieves the minimum in the dual form \prettyref{eq:dual_convex_form}.
\item
(\textbf{Brenier's theorem}) If $Q$ admits a density with respect to the Lebesgue measure on $\mathbb{R}^d$, then there is a unique optimal coupling $\pi$ for the primal problem. In particular, the optimal coupling satisfies that
\begin{align*}
d\pi(x,y) = dQ(y) \delta_{x=\nabla f^\ast(y)},
\end{align*}
where the convex pair $(f,f^\ast) $ achieves the minimum in the dual problem \prettyref{eq:dual_convex_form}. Equivalently, $\pi=(\nabla f^\ast \times \mathrm{Id})_{\#}Q$.
\item
Under the above assumptions of Brenier's theorem, $\nabla f^\ast$ is the unique solution to the Monge transportation problem from $Q$ to $P$, i.e.\xspace
\begin{align*}
\mathbb{E}_Q \norm{\nabla f^\ast(Y)-Y}^2 = \inf_{T: T_{\#}Q=P}\mathbb{E}_Q \norm{T(Y)-Y}^2.
\end{align*}
\end{enumerate}
\end{theorem}
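Brenier's theorem can be illustrated on a toy Gaussian example (a hypothetical sketch, not from the paper): for $Q=\mathcal{N}(0,1)$ and $P=\mathcal{N}(\mu,\sigma^2)$ with $\sigma>0$, the convex potential $f^\ast(y)=\mu y+\frac{\sigma}{2}y^2$ has gradient $\nabla f^\ast(y)=\mu+\sigma y$, which pushes $Q$ forward onto $P$.

```python
import numpy as np

# Brenier map between 1-D Gaussians: Q = N(0,1), P = N(mu, sigma^2).
# The optimal map is the gradient of the convex f*(y) = mu*y + (sigma/2)*y^2.
mu, sigma = 3.0, 0.5
rng = np.random.default_rng(0)
Y = rng.standard_normal(200_000)   # Y ~ Q
X = mu + sigma * Y                 # X = grad f*(Y), hence X ~ (grad f*)#Q
print(X.mean(), X.std())           # approx. 3.0 and 0.5
```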
\begin{remark}\normalfont
Whenever $Q$ admits a density, we refer to $\nabla f^\ast$ as the optimal transport map.
\label{rem:main}
\end{remark}
Henceforth, throughout the paper we assume that the distribution $Q$ admits a density in $\mathbb{R}^d$.
Note that, in view of \prettyref{thm:knott-brenier}, any optimal pair $(f,f^\ast)$ for the dual formulation in \prettyref{eq:dual_convex_form} provides us with the optimal transport map $\nabla f^\ast$ pushing $Q$ forward onto $P$. However, the objective \prettyref{eq:dual_convex_form} is not amenable to standard stochastic optimization schemes due to the conjugate function $f^\ast$.
To this end, we propose a novel minimax formulation in the following theorem where we replace the conjugate with a new convex function.
\begin{theorem}
\label{thm:our_optim_result}
Whenever $Q$ admits a density in $\mathbb{R}^d$, we have
\begin{align}\label{eq:max-min}
&W_2^2(P,Q) = \sup_{\substack{f \in \mathtt{CVX}(P), \\ f^\ast \in L^1(Q)}} \inf_{g \in \mathtt{CVX}(Q) }~\mathcal{V}_{P,Q}(f,g) + C_{P,Q},
\end{align}
where
$\mathcal{V}_{P,Q}(f,g)$ is a functional of $f,g$ defined as
\begin{equation*}
\mathcal{V}_{P,Q}(f,g)= -\mathbb{E}_P[f(X)]-\mathbb{E}_Q[\inner{Y}{\nabla g(Y)}-f(\nabla g(Y))].
\end{equation*}
In addition, there exists an optimal pair $(f_0, g_0)$ achieving the supremum and infimum, respectively, where $\nabla g_0$ is the optimal transport map from $Q$ to $P$.
\end{theorem}
\vspace{-10pt}
\begin{proof}[Proof sketch]
The proof follows from the inequality $\langle y, \nabla g(y) \rangle - f(\nabla g(y)) \leq f^*(y)$ for all convex functions $g$, taking the expectation over $Q$, and observing that equality is achieved with $g=f^*$. The technical details appear in \prettyref{app:proof_theorem}.
\end{proof}
\begin{remark}\normalfont
\label{rem:relax}
For any convex function $f$, the function $g\in L^1(Q)$ that achieves the infimum in~\eqref{eq:max-min} is convex and equals $f^*$. Therefore, the constraint $g\in \mathtt{CVX}(Q)$ can be relaxed to $g\in L^1(Q)$ without changing the optimal value and optimizing functions. We numerically observe that the optimization algorithm performs better under this relaxation.
\end{remark}
Formulation~\eqref{eq:max-min} now provides a principled approach to learn the optimal transport mapping $\nabla g(\cdot)$
as a solution of a minimax optimization. Since the optimization involves the search over the space of convex functions, we utilize the recent advances in input convex neural networks (ICNNs) to parametrize them as discussed in the following section.
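As a sanity check of \eqref{eq:max-min} (an illustrative one-dimensional toy, not one of our experiments), let $Q=\mathcal{N}(0,1)$ and $P=\mathcal{N}(\mu,1)$, and restrict to quadratic potentials $f_c(x)=x^2/2-cx$ and $g_d(y)=y^2/2+dy$, so $\nabla g_d(y)=y+d$. Under the cost $\frac{1}{2}\|x-y\|^2$ appearing in the constraints above, the optimal value is $\mu^2/2$; since the Gaussian expectations in $\mathcal{V}_{P,Q}$ are available in closed form, a brute-force sup-inf over $(c,d)$ recovers it.

```python
import numpy as np

# Toy check of the minimax formulation (1-D, quadratic potentials).
# Q = N(0,1), P = N(mu,1); closed-form Gaussian expectations give
#   V(c, d)  = c*mu - mu^2/2 - 1 + d^2/2 - c*d,
#   C_{P,Q}  = (E|X|^2 + E|Y|^2)/2 = 1 + mu^2/2.
mu = 2.0
C_PQ = 1.0 + 0.5 * mu**2
cs = np.linspace(-5, 5, 1001)
ds = np.linspace(-5, 5, 1001)
V = (cs[:, None] * mu - 0.5 * mu**2 - 1.0
     + 0.5 * ds[None, :]**2 - cs[:, None] * ds[None, :])
W2_sq = np.max(np.min(V, axis=1)) + C_PQ   # sup_c inf_d V + C_{P,Q}
print(W2_sq)                               # approx. mu^2 / 2 = 2.0
```

The inner infimum is attained at $d=c$ (so that $g_d$ equals $f_c^\ast$ up to an additive constant), and the outer supremum at $c=\mu$; the learned map $y\mapsto y+\mu$ is exactly the optimal shift.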
\subsection{Minimax optimization over ICNNs}
\label{sec:icnn}
We propose using parametric models based on deep neural networks to approximate the set of convex functions.
Such networks are known as input convex neural networks \cite{amos2016input}, and we denote this class by $\mathtt{ICNN}(\mathbb{R}^d)$.
We propose estimating
the following approximate Wasserstein-$2$ distance, from samples:
\begin{align}
\tilde{W}_2^2(P,Q) \!= \!\sup_{f\in \mathtt{ICNN}(\mathbb{R}^d)} \inf_{g \in \mathtt{ICNN}(\mathbb{R}^d)}\mathcal{V}_{P,Q}(f,g)\! +\! C_{P,Q}.
\label{eq:approxW2}
\end{align}
ICNNs are a class of scalar-valued neural networks $f(x;\theta)$ such that the function $x \mapsto f(x;\theta) \in \mathbb{R}$ is convex.
The neural network architecture for an ICNN is as follows.
Given an input $x \in \mathbb{R}^d$, the mapping $x \mapsto f(x;\theta)$ is given by an $L$-layer feed-forward NN using the following equations for $l=0,1,\ldots, L-1$:
\begin{align*}
z_{l+1} = \sigma_l(W_l z_l + A_l x + b_l),\quad f(x;\theta)=z_L,
\end{align*}
where $\{W_l\}$ and $\{A_l\}$ are weight matrices (with the convention that $W_0=0$), $\{b_l\}$ are the bias terms, and $\sigma_l$ denotes the entry-wise activation function at layer $l$.
This is illustrated in Figure~\ref{fig:ICNN}.
We denote the total set of parameters by $\theta=(\{W_l\},\{A_l\},\{b_l\})$. It follows from \citet[Proposition 1]{amos2016input} that $f(x;\theta)$ is convex in $x$ provided
(i) all entries of the weights $W_l$ are non-negative;
(ii) activation function $\sigma_0$ is convex;
(iii) $\sigma_l$ is convex and non-decreasing, for $l=1,\ldots,L-1$.
While ICNNs are a specific parametric class of convex functions, it is important to understand whether this class is representationally rich enough. This is answered positively by \citet[Theorem 1]{chen2018optimal}: any convex function over a compact domain can be approximated in the sup norm by an ICNN to the desired accuracy. This justifies the choice of ICNNs as a suitable approximating class for convex functions.
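To make the construction concrete, the following is a minimal NumPy forward pass of a two-hidden-layer ICNN with random weights satisfying conditions (i)-(iii) (an illustrative sketch using softplus activations, which are convex and non-decreasing; our experiments instead use trained ICNNs with the activations described in \prettyref{app:setup}), together with a numerical midpoint-convexity check:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 8

def softplus(u):                        # convex and non-decreasing
    return np.log1p(np.exp(u))

# Parameters obeying the convexity conditions: W_l >= 0 entrywise
# (W_0 = 0 by convention); A_l and b_l are unconstrained.
A0, b0 = rng.normal(size=(m, d)), rng.normal(size=m)
W1 = np.abs(rng.normal(size=(m, m)))    # condition (i)
A1, b1 = rng.normal(size=(m, d)), rng.normal(size=m)
W2 = np.abs(rng.normal(size=(1, m)))
A2, b2 = rng.normal(size=(1, d)), rng.normal(size=1)

def icnn(x):
    z1 = softplus(A0 @ x + b0)              # sigma_0 convex: condition (ii)
    z2 = softplus(W1 @ z1 + A1 @ x + b1)    # sigma_1 convex, non-decreasing: (iii)
    return (W2 @ z2 + A2 @ x + b2).item()   # affine output layer

# Midpoint convexity: f((x + y)/2) <= (f(x) + f(y))/2 on random pairs.
ok = True
for _ in range(100):
    x, y = rng.normal(size=d), rng.normal(size=d)
    ok = ok and icnn(0.5 * (x + y)) <= 0.5 * icnn(x) + 0.5 * icnn(y) + 1e-9
print(ok)   # True
```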
\begin{figure}[t]
\centering
\includegraphics[width=0.95\hsize]{ICNN-architecture.pdf}
\caption{The input convex neural network (ICNN) architecture.}
\label{fig:ICNN}
\vspace{-0.3cm}
\end{figure}
\input{main_figure_compar.tex}
The proposed framework
for learning the optimal transport
provides a novel training method for deep generative models, where
$(a)$ the generator is modeled as
a gradient of a convex function and $(b)$
the minimax optimization in \eqref{eq:approxW2}
(and more concretely, Algorithm \ref{alg:W2}) provides the training methodology.
On the surface, Eq.~\eqref{eq:approxW2} resembles the minimax optimization of generative adversarial networks based on
Wasserstein-1 distance \cite{arjovsky2017wasserstein}, called WGAN.
However, there are several critical differences making our approach attractive.
First, because WGANs use the optimal transport distance only as a measure of closeness between distributions,
the learned generator map from the latent source to the target is arbitrary
and sensitive to the initialization (see Figure~\ref{fig:W1-GAN}) \cite{jacob2018w2gan}.
On the other hand, our proposed approach aims to find the {\em optimal} transport map and learns the same mapping regardless of
the initialization (see Figure~\ref{fig:checker-board-OT0}).
Secondly, in a WGAN architecture~\cite{arjovsky2017wasserstein,petzka2017regularization},
the transport map (which is the generator) is represented by a neural network, which is a continuous mapping.
Although a discontinuous map can be approximated arbitrarily closely by continuous
neural networks, such a construction requires large weights, making training unstable.
On the other hand, through our proposed method, by representing the transport map with
{\em gradient} of a neural network (equipped with ReLU type activation functions),
we obtain a naturally {\em discontinuous map}.
As a consequence, we obtain sharp transitions from one part of the support to another, whereas
GANs (including WGANs) suffer from spurious probability mass that is not present in the target.
This is illustrated in Section~\ref{sec:exp-reg-OT}.
The same holds for
regularization-based methods for learning optimal transport~\cite{genevay2016stochastic,seguy2017large,leygonie2019adversarial},
where the transport map is parametrized by continuous neural networks.
\begin{remark}\label{rem:compare}
\normalfont
In a recent work, \citet{taghvaei20192} proposed to solve the semi-dual optimization problem~\eqref{eq:dual_convex_form} by representing the function $f$ with an ICNN and learning it using a stochastic optimization algorithm. However, each step of this algorithm requires computing the conjugate $f^*$ for all samples in the batch by solving an inner convex optimization problem for each sample, which makes it slow and challenging to scale to large datasets. Further, it is memory intensive, as each inner optimization step requires a copy of all the samples in the dataset. In contrast, we represent the convex conjugate $f^\ast$ with an ICNN and present a novel minimax formulation to learn it in a scalable manner.
\end{remark}
\subsection{Stability analysis of the learned transport map}
\prettyref{thm:our_optim_result} establishes the consistency of our proposed optimization: if the objective \prettyref{eq:max-min} is solved exactly with a pair of functions $(f_0,g_0)$, then $\nabla g_0$ is the exact optimal transport map from $Q$ to $P$. In this section, we study the error in approximating the optimal transport map $\nabla g_0$, when the objective \prettyref{eq:max-min} is solved up to a small error. To this end, we build upon the recent results from \citet[Prop. 8]{hutter2019minimax} regarding the stability of optimal transport maps.
Recall that the optimization objective \prettyref{eq:max-min} involves a minimization and a maximization. For any pair $(f,g)$, let $\epsilon_1(f,g)$ denote the minimization gap and $\epsilon_2(f)$ denote the maximization gap, defined according to:
\begin{align}
\label{eq:geps}
\epsilon_1(f,g) &= \mathcal{V}(f,g) - \inf_{\tilde{g} \in \mathtt{CVX}(Q)} \mathcal{V}(f,\tilde{g}),\\
\epsilon_2(f) &= \sup_{\tilde{f}\in \mathtt{CVX}(P)} \inf_{\tilde{g} \in \mathtt{CVX}(Q)}\mathcal{V}(\tilde{f},\tilde{g}) - \inf_{\tilde{g} \in \mathtt{CVX}(Q)} \mathcal{V}(f,\tilde{g}). \nonumber
\end{align}
Then, the following theorem bounds the error between $\nabla g$ and the optimal transport map $\nabla g_0$ as a function of $\epsilon_1$ and $\epsilon_2$. We defer its proof to \prettyref{app:proof_stability}.
\begin{theorem}\label{thm:stability}
Consider the optimization problem~\eqref{eq:max-min}. Assume $Q$ admits a density and let $\nabla g_0(\cdot)$ denote the optimal transport map from $Q$ to $P$. Then for any pair $(f,g)$ such that $f$ is $\alpha$-strongly convex, we have
\begin{align*}
\|\nabla g - \nabla g_0 \|^2_{L^2(Q)} \leq\frac{2}{\alpha}(\epsilon_1(f,g)+\epsilon_2(f)),
\end{align*}
where $\epsilon_1$ and $\epsilon_2$ are defined in~\eqref{eq:geps}, and $\|\cdot\|_{L^2(Q)}$ denotes the $L^2$-norm with respect to measure $Q$.
\end{theorem}
\section{Experiments}
\label{sec:experiments}
In this section, we first qualitatively illustrate our proposed approach (see \prettyref{fig:comparison}) on the following two-dimensional synthetic datasets: (a) Checkerboard, and (b) mixture of eight Gaussians. We compare our method with the following three baselines: (i) Barycentric-OT \citep{seguy2017large}, (ii) W1-LP, the state-of-the-art Wasserstein GAN introduced by \citet{petzka2017regularization}, and (iii) W2GAN \citep{leygonie2019adversarial}. Note that while the goal of W1-LP is not to learn the optimal transport map, the generator obtained at the end of its training can be viewed as a transport map. For all these baselines, we use the publicly available implementations of \citet{leygonie2019adversarial}, which use the best set of parameters for each of these methods. In \prettyref{sec:exp-W1} and \prettyref{sec:exp-reg-OT}, we highlight, respectively, the robustness and the discontinuity of our transport maps as opposed to other approaches. Finally, in \prettyref{sec:high-dim}, we show the effectiveness of our approach on the challenging task of learning the optimal transport map on a variety of synthetic and real-world high-dimensional datasets. Full experimental details are provided in \prettyref{app:setup}.
\input{robustness_figure.tex}
\textbf{Training methodology.}
We utilize our minimax formulation in \prettyref{eq:approxW2} to learn the optimal transport map. We parametrize the convex functions $f$ and $g$ using the same ICNN architecture (Figure~\ref{fig:ICNN}). Recall that to ensure convexity, we need to restrict all the weights $W_\ell$ to be non-negative (assumption (i) of the ICNN). We enforce this strictly for $f$, since the minimization over $g$ can be unbounded, making the optimization unstable, whenever $f$ is non-convex. However,
we relax this constraint for $g$ (as permitted according to Remark~\ref{rem:relax}) and instead introduce a regularization term
\begin{equation}
R(\theta_g) =\lambda \sum_{ W_l \in \theta_g} \norm{\max (-W_l,0) }^2_F,
\end{equation}
where $\lambda>0$ is a regularization constant and the maximum is taken entry-wise for all the weight parameters $\{W_l\} \subset \theta_g$. We empirically observe that this relaxation makes the optimization converge faster.
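The penalty is straightforward to compute; the sketch below (NumPy, with an illustrative function name) evaluates $R$ for a single weight matrix:

```python
import numpy as np

def neg_weight_penalty(weights, lam=1.0):
    """R(theta_g) = lam * sum_l ||max(-W_l, 0)||_F^2:
    only the negative entries of each W_l are penalized."""
    return lam * sum(np.sum(np.maximum(-W, 0.0) ** 2) for W in weights)

W = np.array([[1.0, -2.0], [3.0, 0.0]])   # one negative entry: -2
penalty = neg_weight_penalty([W], lam=0.5)
print(penalty)                            # 0.5 * 2^2 = 2.0
```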
For both the maximization and minimization updates
in~\eqref{eq:approxW2}, we use Adam \cite{kingma2014adam}. At each iteration, we draw a batch of samples from $P$ and $Q$, denoted by $\{X_i\}_{i=1}^{M}$ and $\{Y_j\}_{j=1}^{M}$, respectively. Then, we use the following objective, which is an empirical counterpart of \eqref{eq:approxW2}:
\begin{equation}
\max_{\theta_f: W_\ell \geq 0,\forall\ell\in[L-1]} \min_{\theta_g} \;\; J(\theta_f,\theta_g) + R(\theta_g),
\label{eq:J-samples}
\end{equation}
where $\theta_f,\theta_g$ are the parameters of $f$ and $g$, respectively, $W_\ell\geq0$ is an entry-wise constraint, and
\begin{align*}
J(\theta_f,\theta_g) = \frac{1}{M}\sum_{i=1}^{M} f(\nabla g(Y_i)) - \langle Y_i, \nabla g(Y_i)\rangle -f(X_i) .
\end{align*}
This is summarized in Algorithm~\ref{alg:W2}. In the remainder of the paper, we interchangeably refer to Algorithm~\ref{alg:W2} as either `Our approach' or `Our algorithm'.
\begin{algorithm}
\caption{The numerical procedure to solve the optimization problem~\eqref{eq:J-samples}.}
\label{alg:W2}
\begin{algorithmic}
\STATE {\bfseries Input:} Source dist. $Q$, Target dist. $P$, Batch size $M$, Generator iterations $K$, Total iterations $T$
\FOR{$t=1,\ldots, T$}
\STATE Sample batch $\{X_i\}_{i=1}^{M} \sim P$
\FOR{$k=1 ,\ldots,K$}
\STATE Sample batch $\{Y_i\}_{i=1}^{M} \sim Q$
\STATE Update $\theta_g$ to minimize~\eqref{eq:J-samples} using Adam method
\ENDFOR
\STATE Update $\theta_f$ to maximize~\eqref{eq:J-samples} using Adam method
\STATE Projection: $w \leftarrow \max(w,0)$, for every entry $w$ of each weight matrix $W_\ell \in \theta_f$
\ENDFOR
\end{algorithmic}
\end{algorithm}
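To illustrate the alternating max-min scheme of Algorithm~\ref{alg:W2} in the simplest possible setting, the sketch below runs it on a one-dimensional quadratic family where the inner minimization over $g$ has a closed form (standing in for the $K$ inner Adam steps) and population moments replace sampled batches. The parametrization and all names are illustrative assumptions, not the paper's implementation:

```python
# Illustrative toy run of the alternating scheme in Algorithm 1 (not the
# paper's implementation).  Here Q = N(0,1), P = N(mu,1), with quadratic
# parametrizations f(x) = a/2 x^2 + d x where a >= 0 (the ICNN weight
# constraint) and g(y) = b/2 y^2 + c y, so the candidate transport map is
# grad g(y) = b y + c.  For this family the inner minimization over g has
# the closed form grad g = grad f*, i.e. b = 1/a and c = -d/a, which
# stands in for the K inner Adam steps; population moments replace batches.
def train_quadratic_ot(mu, lr=0.02, iters=5000):
    EY2, EX, EX2 = 1.0, mu, 1.0 + mu ** 2    # exact moments of Q and P
    a, d = 1.0, 0.0                          # parameters of f
    for _ in range(iters):
        b, c = 1.0 / a, -d / a               # exact inner solve: min over g
        # gradient ascent step on J with respect to the parameters of f
        grad_a = 0.5 * (b ** 2 * EY2 + c ** 2) - 0.5 * EX2
        grad_d = c - EX
        a = max(a + lr * grad_a, 1e-3)       # projection keeps f convex
        d += lr * grad_d
    b, c = 1.0 / a, -d / a
    return lambda y: b * y + c               # learned map grad g

# For Q = N(0,1) and P = N(mu,1) the optimal map is the shift y -> y + mu.
T = train_quadratic_ot(2.0)
```

At convergence $a\to 1$ and $d\to -\mu$, so the recovered map is $y\mapsto y+\mu$, the optimal shift between the two Gaussians.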
\begin{remark}
\normalfont Note that the regularization term $R(\theta_g)$ is {\em data-independent} and does not introduce any bias to the optimization problem. For any convex function $f$, the minimizer of the problem~\eqref{eq:J-samples} is still a convex function $g$ as discussed in Remark~\ref{rem:relax}. We use this regularization to guide the algorithm towards neural networks that are convex.
\end{remark}
\subsection{Learning the optimal transport map}
\label{sec:exp_map}
As highlighted in \prettyref{fig:checker-board-OT0} and \prettyref{fig:ourapproach}, we qualitatively observe that our proposed procedure indeed learns the optimal transport map on both the Checkerboard and Mixture of eight Gaussians datasets. In particular, our transport map is able to cut the continuous mass symmetrically and transport it to the nearest target support in both examples. Figure~\ref{fig:comparison} also illustrates the qualitative difference between our approach and the other approaches, in terms of non-optimality and the existence of trailing dots. The trailing dots arise from representing the transport map with continuous neural networks, as discussed in Section~\ref{sec:exp-reg-OT}.
\subsection{Robustness of learning transport maps}
\label{sec:exp-W1}
In this section, we numerically illustrate that the generators in W1-LP and W2GAN find arbitrary transport maps and are sensitive to initialization, as discussed in Section~\ref{sec:formulation}. This is in stark contrast with our proposed approach, which finds the {\em optimal} transport map independent of the initialization.
We consider the previous Checkerboard example (Figure~\ref{fig:checker-board-data}) and train W1-LP and W2GAN with different random initializations.
The resulting transport maps for two different random trials are depicted in \prettyref{fig:checker-board-W1-1} and \prettyref{fig:checker-board-W1-2} for W1-LP, and \prettyref{fig:checker-board-W2-1} and \prettyref{fig:checker-board-W2-2} for W2GAN.
In addition to the learned transport map being highly sensitive to initialization, the quality of the samples generated by the models thus trained is also sensitive. This is a major challenge in training GANs \cite{lin2018pacgan}.
\subsection{Learning discontinuous transport maps}
\label{sec:exp-reg-OT}
The power to represent a discontinuous transport mapping is
what fundamentally sets our proposed method apart from
the existing approaches, as discussed in Section~\ref{sec:formulation}.
Two prominent approaches for learning transport maps are
generative adversarial networks \cite{arjovsky2017wasserstein,petzka2017regularization}
and regularized optimal transport \cite{genevay2016stochastic,seguy2017large}.
In both cases, the transport map is modeled by a standard neural network with finite depth and width, which is a continuous function.
As a consequence,
continuous transport maps suffer from unintended and undesired spurious probability mass that
connects disjoint supports of the target probability distribution.
First, standard GANs including
the original GAN \cite{goodfellow2014generative} and
variants of WGAN \cite{arjovsky2017wasserstein,gulrajani2017improved,wei2018improving}
all suffer from spurious probability masses.
Even those designed to tackle such spurious probability masses, like PacGAN \cite{lin2018pacgan},
cannot overcome the barrier of continuous neural networks.
This suggests that a fundamental change in the architecture, like the one we propose, is necessary. \prettyref{fig:w1-lp} illustrates the same scenario for the transport map learned through the WGAN framework.
We can observe the trailing dots of spurious probability masses,
resulting from undesired continuity of the learned transport maps.
Similarly, regularization methods to approximate optimal transport maps, explained in Section~\ref{sec:background},
suffer from the same phenomenon.
Representing a transport map with an inherently continuous function class results in
spurious probability masses connecting disjoint supports. \prettyref{fig:bary_ot}, corresponding to Barycentric-OT, illustrates those trailing dots of spurious masses
for the transport map learned by the algorithm of \citet{seguy2017large}. We also observe a similar phenomenon with \citet{leygonie2019adversarial}, as illustrated in \prettyref{fig:w2gan}.
On the other hand,
we represent the transport map with the {\em gradient} of
a neural network (equipped with non-smooth ReLU type activation functions).
The resulting transport map can naturally represent discontinuous transport maps,
as illustrated in
Figure~\ref{fig:checker-board-OT} and \prettyref{fig:ourapproach}.
The vector field of the learned transport map in Figure~\ref{fig:g-vector-field} clearly shows
the discontinuity of the learned optimal transport.
\input{high_dim_experiments.tex}
\section{Background on optimal transport}
\label{sec:background}
Let $P$ and $Q$ be two probability distributions on $\mathbb{R}^d$ with finite second order moments.
\emph{Monge's optimal transportation problem} is to transport the probability mass under $Q$ to $P$ with the least amount of cost\footnote{In general, Monge's problem is defined in terms of a cost function $c(x,y)$. This paper is concerned with the quadratic cost function $c(x,y)=\frac{1}{2}\|x-y\|^2$ because of its nice geometrical properties and connection to convex analysis~\citep[Ch. 2]{villani2003topics}.}, i.e.\xspace
\begin{align}
\underset{T: T_{\#}Q=P} {\text{minimize}}\;\; \;\; \frac{1}{2} \mathbb{E}_{X\sim Q} \norm{X-T(X)}^2. \;
\label{eq:monge}
\end{align}
Any transport map $T$ achieving the minimum in \prettyref{eq:monge} is called an \emph{optimal transport map}.
An optimal transport map may not exist: the feasible set in the above optimization problem may itself be empty, for example when $Q$ is a Dirac distribution and $P$ is any non-Dirac distribution.
To resolve the existence issue of the Monge problem~\eqref{eq:monge},
Kantorovich introduced a relaxation of the problem,
\begin{align}
W_2^2(P,Q) \triangleq \inf_{\pi \in \Pi(P,Q)}~ \frac{1}{2} \mathbb{E}_{(X,Y) \sim \pi}\norm{X-Y}^2,
\label{eq:kantor_relax}
\end{align}
where $\Pi(P,Q)$ denotes the set of all joint probability distributions (or equivalently, couplings) whose first and second marginals are $P$ and $Q$, respectively. The optimal value in~\eqref{eq:kantor_relax} is the $2$-Wasserstein distance $W_2(\cdot,\cdot)$ squared. Any coupling $\pi$ achieving the infimum
is called the \emph{optimal coupling}. Optimization problem
\eqref{eq:kantor_relax} is also referred to as the {\em primal formulation} for $2$-Wasserstein distance.
Kantorovich also provided a dual formulation for \eqref{eq:kantor_relax}, known as the Kantorovich duality \citep[Theorem 1.3]{villani2003topics},
\begin{align}
W_2^2(P,Q) =\; \sup_{(f,g)\in \Phi_c} \mathbb{E}_P[f(X)]+\mathbb{E}_Q[g(Y)],
\label{eq:dual_form}
\end{align}
where $\Phi_c$ denotes the constrained space of functions, defined as $\Phi_c \triangleq \bigl\{(f,g)\in L^1(P)\times L^1(Q): ~f(x)+g(y)\leq \frac{1}{2}\norm{x-y}^2_2, \quad \forall (x,y) ~d P \otimes d Q~\text{a.e.}\bigr\}$.
The dual problem~\eqref{eq:dual_form} can be recast as a stochastic optimization problem by approximating the expectations with independent samples from $P$ and $Q$. However, there is no easy way to ensure the feasibility of the constraint $(f,g)\in \Phi_c$ along the gradient updates.
A common approach is to translate the optimization into a tractable form, while sacrificing the original goal of finding the optimal transport map.
Concretely, an entropic or a quadratic regularizer is added to the primal problem~\eqref{eq:kantor_relax}~\cite{cuturi2013sinkhorn,essid2018quadratically,peyre2019computational,blondel2017smooth}.
Then, the dual of the regularized primal problem is an unconstrained version of \eqref{eq:dual_form} with an additional penalty term.
The unconstrained problem can be numerically solved using the Sinkhorn algorithm in the discrete setting~\cite{cuturi2013sinkhorn}, or stochastic gradient methods with a suitable function representation in the continuous setting~\cite{genevay2016stochastic,seguy2017large}.
The optimal transport map can then be obtained from $f$ and $g$ using the first-order optimality conditions of the Fenchel-Rockafellar duality theorem~\cite{seguy2017large}, or by training a generator through an adversarial computational procedure~\cite{leygonie2019adversarial}.
In this paper, we take a different approach: we solve the dual problem without introducing a regularization. This builds upon~\cite{taghvaei20192}, where the use of ICNNs for approximating the Wasserstein distance and the optimal transport map was originally proposed. We bring the idea of~\cite{taghvaei20192} into practice by introducing a novel minimax optimization formulation. We describe our proposed method in Section~\ref{sec:formulation} and provide a detailed comparison in Remark~\ref{rem:compare}. A discussion of other related works \citep{lei2017geometric,guo2019mode, xie2019scalable, muzellec2019subspace,rabin2011wasserstein, korotin2019wasserstein} appears in \prettyref{app:related-work-more}.
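In the discrete setting, the entropic-regularization route described above reduces to the standard Sinkhorn scaling iteration. The following is an illustrative NumPy sketch of that baseline (not part of our method); the function name and parameters are assumptions for exposition:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=2000):
    """Entropy-regularized OT between discrete distributions a and b with
    cost matrix C, via the standard Sinkhorn scaling iteration (sketch)."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)         # match column marginals
        u = a / (K @ v)           # match row marginals
    return u[:, None] * K * v[None, :]   # transport plan

# Two point masses at 0 and 1 on each side, quadratic cost c(x,y) = |x-y|^2/2:
x = np.array([0.0, 1.0]); y = np.array([0.0, 1.0])
C = 0.5 * (x[:, None] - y[None, :]) ** 2
P = sinkhorn(np.array([0.5, 0.5]), np.array([0.5, 0.5]), C)
```

As $\varepsilon\to 0$ the plan concentrates on the optimal (here diagonal) coupling; for small $\varepsilon$ the returned plan already places almost all its mass on the matched pairs.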
\section{Proof of Theorem~\ref{thm:our_optim_result}}
\label{app:proof_theorem}
Define $V_f(g) \triangleq \mathbb{E}_Q[\langle Y,\nabla g(Y) \rangle - f(\nabla g(Y))]$. The main step of the proof is to show that $\sup_{g \in \mathtt{CVX}(Q)} V_f(g) = \mathbb{E}_Q[f^*(Y)]$. Then the conclusion follows from \eqref{eq:dual_convex_form}. To prove this, note that for all $g \in \mathtt{CVX}(Q)$, we have
\[
\langle y, \nabla g(y) \rangle - f(\nabla g(y)) \leq \langle y, \nabla f^*(y) \rangle - f(\nabla f^*(y)) = f^*(y),
\]
for all $y \in \mathbb{R}^d$ such that $g$ and $f^*$ are differentiable at $y$. We now claim that both $g$ and $f^*$ are differentiable $Q$-almost everywhere (a.e.). If the claim is true, then taking the expectation w.r.t.\ $Q$ yields:
\[V_f(g) \leq V_f(f^*) = \mathbb{E}_Q[f^*(Y)],\quad \forall g \in \mathtt{CVX}(Q)\]
and the inequality is achieved with $g=f^*$. Now we prove the claim as follows: since $\int g \,\mathrm{d} Q < \infty$, we have $Q(g=\infty)=0$. Thus $Q(\text{Dom}(g))=1$, where $\text{Dom}(g)$ is the domain of the function $g$. Moreover, $Q(\text{Int}(\text{Dom}(g)))=1$, where $\text{Int}(\cdot)$ denotes the interior, because the boundary has $Q$-measure zero ($Q$ has a density). Since $g$ is convex, it is differentiable on $\text{Int}(\text{Dom}(g))$ except at points of Lebesgue measure zero, which have $Q$-measure zero as well. Therefore, $g$ is $Q$-a.e.\ differentiable. Similar arguments hold for $f^*$.
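The pointwise inequality $\langle y, \nabla g(y)\rangle - f(\nabla g(y)) \leq f^*(y)$ used above can be checked numerically on quadratic pairs, for which the conjugate is available in closed form. This is only a sanity check under illustrative choices of $f$ and $g$, not part of the proof:

```python
import numpy as np

# Check  <y, grad g(y)> - f(grad g(y)) <= f*(y)  for quadratics, where all
# quantities are in closed form: f(x) = alpha/2 |x|^2, f*(y) = |y|^2/(2 alpha),
# and g(y) = c/2 |y|^2, so grad g(y) = c y (and grad f*(y) = y/alpha is c = 1/alpha).
rng = np.random.default_rng(0)
alpha = 2.0
Y = rng.normal(size=(1000, 3))

def lhs(c):
    Ty = c * Y                                                   # grad g(Y)
    return np.sum(Y * Ty, axis=-1) - 0.5 * alpha * np.sum(Ty ** 2, axis=-1)

f_star = np.sum(Y ** 2, axis=-1) / (2 * alpha)
# slack f*(y) - lhs(y) should be >= 0 for every c, and 0 at c = 1/alpha
slack = {c: np.min(f_star - lhs(c)) for c in (0.1, 1.0 / alpha, 2.0)}
```

Algebraically the slack equals $\frac{\alpha}{2}\|y\|^2 (c - 1/\alpha)^2$, which is non-negative and vanishes exactly when $\nabla g = \nabla f^*$, mirroring the equality case of the proof.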
\section{Proof of Theorem~\ref{thm:stability}}
\label{app:proof_stability}
The proof follows from the bounds
\begin{subequations}
\begin{align}
\|\nabla g - \nabla f^* \|^2_{L^2(Q)} \leq \frac{2}{\alpha} \epsilon_1\label{eq:bound1},\\
\|\nabla f^* - \nabla g_0 \|^2_{L^2(Q)} \leq \frac{2}{\alpha} \epsilon_2\label{eq:bound2},
\end{align}
\end{subequations}
and using the triangle inequality. The proof for the first bound is as follows.
If $f$ is $\alpha$-strongly convex, then $f^*$ is $\frac{1}{\alpha}$-smooth. By definition of smoothness,
\begin{equation*}
f^*(z) \leq f^*(y) + \langle \nabla f^*(y), z-y \rangle + \frac{1}{2\alpha}\|z-y\|^2 \triangleq h_y(z),\quad \forall y,z \in \mathbb{R}^d,
\end{equation*}
where $h_y(z)$ is defined to be the quadratic function of $z$ that appears on the right-hand side of the inequality. From $f^*(z)\leq h_y(z)$, it follows that the convex conjugate $f(x) \geq h^*_y(x)$. As a result,
\begin{equation}
f(x) \geq h^*_y(x)= -f^*(y) + \langle y,x \rangle + \frac{\alpha}{2}\|x-\nabla f^*(y)\|^2,\quad \forall x,y\in \mathbb{R}^d.
\label{eq:inequality}
\end{equation}
We use this inequality to control the optimality gap $\epsilon_1(f,g)$:
\begin{align*}
\epsilon_1(f,g)&=\mathcal{V}(f,g) - \inf_{\tilde{g}} \mathcal{V} (f,\tilde{g}) \\&= \mathcal{V}(f,g)-\mathcal{V}(f,f^*) \\&= \mathbb{E}_Q [f^*(Y) - \langle Y,\nabla g(Y) \rangle + f(\nabla g(Y)) ]
\\&\geq \frac{\alpha}{2} \mathbb{E}_Q[\| \nabla g(Y) - \nabla f^*(Y)\|^2 ],
\end{align*}
where the last step follows from~\eqref{eq:inequality}, with $x=\nabla g(y)$. This concludes the proof of the bound~\eqref{eq:bound1}. It remains to prove~\eqref{eq:bound2}. To this end, note that the optimality gap $\epsilon_2(f)$ is given by
\begin{align*}
\epsilon_2(f)&=\mathcal{V}(f_0,g_0) - \inf_{\tilde g} \mathcal{V}(f,\tilde{g}) \\&= \mathcal{V}(f_0,f^*_0) - \mathcal{V}(f,f^*) \\&= - (\mathbb{E}_P[f_0(X)]+ \mathbb{E}_Q [f^*_0(Y)]) + (\mathbb{E}_P[f(X)]+ \mathbb{E}_Q [f^*(Y)])
\\&= - \mathbb{E}_Q [f_0(\nabla f^*_0(Y)) + f^*_0(Y)] + \mathbb{E}_Q [f(\nabla f^*_0(Y)) + f^*(Y)]
\\&= - \mathbb{E}_Q [\langle Y, \nabla f^*_0(Y)\rangle ] +\mathbb{E}_Q [f(\nabla f^*_0(Y)) + f^*(Y)].
\end{align*}
Using the inequality~\eqref{eq:inequality} with $x=\nabla f^*_0(y)$ yields:
\begin{align*}
\epsilon_2(f)
&\geq \frac{\alpha}{2} \mathbb{E}_Q[\| \nabla f^*_0(Y) - \nabla f^*(Y)\|^2 ],
\end{align*}
which concludes the proof of~\eqref{eq:bound2}, since $g_0=f^*_0$.
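The bound~\eqref{eq:bound1} can likewise be sanity-checked numerically on one-dimensional quadratics, for which it in fact holds with equality. The choices of $f$, $g$, and $\alpha$ below are illustrative:

```python
import numpy as np

# Sanity check of  ||grad g - grad f*||^2_{L^2(Q)} <= (2/alpha) eps_1  for
# 1-D quadratics: f(x) = alpha/2 x^2 (alpha-strongly convex), so
# f*(y) = y^2/(2 alpha) and grad f*(y) = y/alpha; and g(y) = c/2 y^2.
# Here eps_1 = E_Q[ f*(Y) - Y grad g(Y) + f(grad g(Y)) ].
rng = np.random.default_rng(1)
alpha, c = 2.0, 0.8
Y = rng.normal(size=100_000)

grad_g, grad_fs = c * Y, Y / alpha
eps1 = np.mean(Y ** 2 / (2 * alpha) - Y * grad_g + 0.5 * alpha * grad_g ** 2)
gap = np.mean((grad_g - grad_fs) ** 2)
```

For this family a short computation gives $\epsilon_1 = \frac{\alpha}{2}\,\mathbb{E}_Q[(\nabla g - \nabla f^*)^2]$ exactly, so the bound is tight here.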
\section{Experimental set-up}
\label{app:setup}
\subsection{Two-dimensional experiments}
\textbf{Datasets.} We use the following synthetic datasets: (i) Checkerboard, and (ii) Mixture of eight Gaussians. For the Checkerboard dataset, the source distribution $Q$ is the law of the random variable $Y=X + Z$, where $X \sim \mathrm{Unif}(\{(0,0), (1,1),(1,-1),(-1,1),(-1,-1)\})$ and $Z \sim \mathrm{Unif}([-0.5,0.5] \times [-0.5, 0.5])$. Similarly, $P$ is the law of the random variable $Y=X + Z$, where $X \sim \mathrm{Unif}(\{(0,1), (0,-1),(1,0),(-1,0)\})$ and $Z \sim \mathrm{Unif}([-0.5,0.5] \times [-0.5, 0.5])$. Here $\mathrm{Unif}(B)$ denotes the uniform distribution over a set $B$. For the mixture of eight Gaussians dataset, we have $Q=\mathcal{N}(0,I_2)$ and $P$ is the law of the random variable $Y=X+Z$, where $X \sim \mathrm{Unif}(\{(1,0), (\frac{1} {\sqrt{2}},\frac{1} {\sqrt{2}}),(0,1),(\frac{-1} {\sqrt{2}},\frac{1} {\sqrt{2}}), (-1,0),(\frac{-1} {\sqrt{2}},\frac{-1} {\sqrt{2}}), (0,-1), (\frac{1} {\sqrt{2}},\frac{-1} {\sqrt{2}}) \}) $ and $Z \sim \mathcal{N}(0,0.5\,I_2)$.
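The two constructions above can be sketched as NumPy samplers; the function names and the seeding convention are illustrative, not the experiment code:

```python
import numpy as np

def sample_checkerboard(n, source=True, seed=0):
    """Samples from the Checkerboard source Q (source=True) or target P
    (source=False): a uniform choice of center plus Unif([-0.5, 0.5]^2) noise."""
    rng = np.random.default_rng(seed)
    centers_q = np.array([(0, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)], float)
    centers_p = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)], float)
    centers = centers_q if source else centers_p
    X = centers[rng.integers(len(centers), size=n)]
    Z = rng.uniform(-0.5, 0.5, size=(n, 2))
    return X + Z

def sample_eight_gaussians(n, seed=0):
    """Samples from the eight-Gaussians target P: a uniform choice among the
    eight unit-circle centers plus N(0, 0.5 I_2) noise."""
    rng = np.random.default_rng(seed)
    angles = np.pi / 4 * rng.integers(8, size=n)
    X = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return X + rng.normal(scale=np.sqrt(0.5), size=(n, 2))
```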
\textbf{Architecture details.}
For our Algorithm~\ref{alg:W2}, we parametrize both the convex functions $f$ and $g$ by ICNNs. Both ICNNs have an equal number of nodes in all hidden layers, followed by a final output layer. We choose the square of the leaky ReLU function, i.e.\ $\sigma_0(x) = \left(\max(\beta x,x) \right)^2$ with a small positive constant $\beta$, as the {\em convex} activation function $\sigma_0$ for the first layer. For the remaining layers, we use the leaky ReLU function $\sigma_l(x) = \max(\beta x,x)$, $l=1,\ldots,L-1$, as the {\em monotonically non-decreasing and convex} activation function. Note that assumptions (ii)-(iii) of the ICNN are thus satisfied. In all of our experiments, we set the parameter $\beta = 0.2$. In some of the experiments, as explained below, we use the SELU activation function instead, which also obeys the convexity assumptions.
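A minimal NumPy evaluation of such an ICNN, with the squared leaky ReLU first activation and a linear output layer, can be sketched as follows. This is an illustrative sketch of the forward pass only, not the training code:

```python
import numpy as np

def icnn_forward(x, A, W, b, beta=0.2):
    """Minimal ICNN evaluation in the spirit of the architecture above
    (an illustrative sketch, not the exact training code).

    A[0..L]: input-to-layer matrices; W[0..L-1]: non-negative hidden-to-hidden
    matrices; b[0..L]: biases.  sigma_0 is the squared leaky ReLU (convex),
    the hidden activations are leaky ReLU (convex, non-decreasing), and the
    output layer is linear, so x -> f(x) is convex whenever all W_l >= 0.
    """
    lrelu = lambda u: np.maximum(beta * u, u)
    z = lrelu(A[0] @ x + b[0]) ** 2                   # sigma_0
    for l in range(1, len(A)):
        pre = W[l - 1] @ z + A[l] @ x + b[l]
        z = pre if l == len(A) - 1 else lrelu(pre)    # linear output layer
    return float(z[0])
```

Convexity can be verified empirically via the midpoint inequality $f(\frac{x+y}{2}) \leq \frac{1}{2}(f(x)+f(y))$ on random pairs, which holds for any non-negative choice of the $W$ matrices.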
For the three baselines, Barycentric-OT, W1-LP, and W2GAN, we use the implementations of \citet{leygonie2019adversarial}, made publicly available at \url{https://github.com/jshe/wasserstein-2}. For all these methods, we use the default settings of hyperparameters which were fixed to be the best values from the respective papers. Further, for a fair comparison we allow the number of parameters in each of these baselines to be larger than ours; in fact, for W2GAN and Barycentric-OT, the default number of neural network parameters is much larger than ours.
\textbf{Hyperparameters.} For reproducibility, we provide the details of the numerical experiments for each of the figures. For the Checkerboard dataset in \prettyref{fig:comparison} (same as \prettyref{fig:checker-board-OT0}),
we run Algorithm~\ref{alg:W2} with the following parameters: for both the ICNNs $f$ and $g$, we set the hidden size $m=64$, number of layers $L=4$, regularization constant $\lambda=1.0$, and Leaky ReLU activation; for training, we use batch size $M=1024$, learning rate $10^{-4}$, generator iterations $K=10$, total number of iterations $T=10^5$, and the Adam optimizer with $\beta_1=0.5$ and $\beta_2=0.9$. For each of the baselines, the parameter values are as follows: (a) Barycentric-OT: $3$ neural networks ($1$ for the dual stage and the rest for the map step), each with $m=128, L=3, M=512, T=2\times 10^5$ and an $l_2$-entropy penalty, (b) W1-LP: both the discriminator and the generator neural networks with $m=128, L=3, K=5$, and $M=512, T=2\times 10^5$, and (c) W2GAN: $3$ neural networks ($1$ for the generator and the remaining two for the functions in the dual formulation \prettyref{eq:dual_form}), each with $m=128, L=3, K=5, M=512, T=2\times 10^5$. W2GAN also uses six additional regularization terms, which are set to the default values provided in the code. All these baselines use the ReLU activation and the Adam optimizer with $\beta_1=0.9$ and $\beta_2=0.990$, with a learning rate of $0.0001$ for the generator parameters and $0.0005$ for the rest. For the mixture of eight Gaussians dataset, we use the same parameters except for the batch size $M=256$, whereas all the baselines use the same parameters as above. Also, for the multiple trials in \prettyref{fig:W1-GAN} for W1-LP and W2GAN, we use the above parameters but with different random initializations of the neural network weights and biases.
\subsection{High dimensional experiments}
\label{app:high-dim-exp}
\textbf{Gaussian to Gaussian.} Source distribution $Q=\mathcal{N}(0,I_d)$ and target distribution $P=\mathcal{N}(\mu,I_d)$, for some fixed $\mu\in\mathbb{R}^d$ and $d=784$. The mean vector $\mu=\alpha (1,\ldots,1)^\top$ with $\alpha \in \{1,5,10\}$. For both the ICNNs $f$ and $g$, we have $d=784, m=1024, L=3$, Leaky ReLU activation, batch size $M=60$, $K=16$, $\lambda=0.1$, $T=40,000$, Adam optimizer with $\beta_1=0.5$ and $\beta_2 =0.99$, learning rate decay by a factor of $0.5$ for every $2,000$ iterations. Note that in \prettyref{fig:w2-convergence}, $1$ epoch corresponds to $1000$ iterations.
\textbf{High-dim.\ Gaussian to low-dim.\ mixture.} Source distribution $Q=\mathcal{N}(0,I_d)$ with $d=784$. The target distribution is a mixture of four Gaussians $P=\sum_{i=1}^4\frac{1}{4}\mathcal{N}(\mu_i,\Sigma )$, where $\mu_i=(\pm 1.4,\pm 1.4, 0 , \ldots, 0) \in \mathbb{R}^{784}$ and $\Sigma=\text{diag}(0.2,0.2,0,\ldots,0)$. For both the ICNNs $f$ and $g$, we have $d=784, m=1024, L=3$, Leaky ReLU activation, batch size $M=60$, $K=25$, $\lambda=0.01$, Adam optimizer with $\beta_1=0.5$ and $\beta_2 =0.99$, learning rate decay by a factor of $0.5$ for every two epochs. The algorithm is simulated for $30$ epochs, where each epoch corresponds to $1000$ iterations.
\textbf{MNIST $\{0,1,2,3,4\}$ to MNIST $\{5,6,7,8,9\}$.} To obtain the latent embeddings of the MNIST dataset, we first train a VAE with both the encoder and decoder having $3$ hidden layers with $256$ neurons and the size of the latent vector being $16$ dimensional. We then use ICNNs $f$ and $g$ to learn the optimal transport between the embeddings of digits $\{0,1,2,3,4\}$ and those of $\{5,6,7,8,9\}$. For both these ICNNs we have $d=16, m=1024, L=3$, CELU activation, batch size $M=128$, $K=16$, $\lambda=1$, $T=100,000$, Adam optimizer with $\beta_1=0.9$ and $\beta_2 =0.99$, learning rate decay by a factor of $0.5$ for every $4,000$ iterations.
\textbf{Gaussian to MNIST.} To obtain the latent embeddings for MNIST, we use the same pre-trained VAE models as above. We also use the same hyperparameter settings as in the ``MNIST $\{0,1,2,3,4\}$ to MNIST $\{5,6,7,8,9\}$'' experiment, with the only change being a batch size of $64$.
\section{Further discussion of related work}
\label{app:related-work-more}
\input{further_related_work}
\section{Conclusion}
We presented a novel minimax framework to learn the optimal transport map under the $W_2$ metric. Our framework is in contrast to regularization-based approaches, where the constraint of the dual Kantorovich problem is replaced with a penalty term. Instead, we represent the dual functions with ICNNs, so that the constraint is automatically satisfied. Further, the transport map is expressed as the gradient of a convex function, which is able to represent discontinuous maps. We believe that our framework paves the way for bridging optimal transport theory and practice.
\section{Experiments}
\label{sec:experiments}
In this section, first we qualitatively illustrate our proposed approach (see \prettyref{fig:comparison}) on the following two-dimensional synthetic datasets: (a) Checkerboard, (b) Mixture of eight Gaussians. We compare our method with the following three baselines: (i) Barycentric-OT \citep{seguy2017large}, (ii) W1-LP, which is the state-of-the-art Wasserstein GAN introduced by \cite{petzka2017regularization}, (iii) W2GAN \cite{leygonie2019adversarial}. Note that while the goal of W1-LP is not to learn the optimal transport map, the generator obtained at the end of its training can be viewed as a transport map. For all these baselines, we use the implementations (publicly available) of \citet{leygonie2019adversarial} which has the best set of parameters for each of these methods. In \prettyref{sec:exp-W1} and \prettyref{sec:exp-reg-OT}, we highlight the respective robustness and the discontinuity of our transport maps as opposed to other approaches. Finally, in \prettyref{sec:high-dim}, we show the effectiveness of our approach on the challenging task of learning the optimal transport map on a variety of synthetic and real world high-dimensional data. Full experimental details are provided in \prettyref{app:setup}.
\input{robustness_figure.tex}
{\textbf{f} Training methodology.}
We utilize our minimax formulation in \prettyref{eq:approxW2} to learn the optimal transport map. We parametrize the convex functions $f$ and $g$ using the same ICNN architecture (Figure~\ref{fig:ICNN}). Recall that to ensure convexity, we need to restrict all weights $W_\ell$'s to be non-negative (Assumption (i) in ICNN). We enforce it strictly for $f$, as the maximization over $g$ can be unbounded, making optimization unstable, whenever $f$ is non-convex. However,
we relax this constraint for $g$ (as permitted according to Remark~\ref{rem:relax}) and instead introduce a regularization term
\begin{equation}
R(\theta_g) =\lambda \sum_{ W_l \in \theta_g} \norm{\max (-W_l,0) }^2_F,
\end{equation}
where $\lambda>0$ is a regularization constant and the maximum is taken entry-wise for all the weight parameters $\{W_l\} \subset \theta_g$. We empirically observe that this relaxation makes the optimization converge faster.
For both the maximization and minimization updates
in~\eqref{eq:approxW2}, we use Adam \cite{kingma2014adam}. At each iteration, we draw a batch of samples from $P$ and $Q$ denoted by $\{X_i\}_{i=1}^{M}$ and $\{Y_j\}_{j=1}^{M}$ respectively. Then, we use the following objective for optimization which is an empirical counterpart of \eqref{eq:approxW2}:
\begin{equation}
\max_{\theta_f: W_\ell \geq 0,\forall\ell\in[L-1]} \min_{\theta_g} \;\; J(\theta_f,\theta_g) + R(\theta_g),
\label{eq:J-samples}
\end{equation}
where $\theta_f,\theta_g$ are the parameters of $f$ and $g$, respectively, $W_\ell\geq0$ is an entry-wise constraint, and
\begin{align*}
J(\theta_f,\theta_g) = \frac{1}{M}\sum_{i=1}^{M} f(\nabla g(Y_i)) - \langle Y_i, \nabla g(Y_i)\rangle -f(X_i) .
\end{align*}
This is summarized in Algorithm~\ref{alg:W2}. In the remainder of the paper, we interchangeably refer to Algorithm~\ref{alg:W2} as either `Our approach' or `Our algorithm'.
\begin{algorithm}
\caption{The numerical procedure to solve the optimization problem~\eqref{eq:J-samples}.}
\label{alg:W2}
\begin{algorithmic}
\STATE {\bfseries Input:} Source dist. $Q$, Target dist. $P$, Batch size $M$, Generator iterations $K$, Total iteratioins $T$
\FOR{$t=1,\ldots, T$}
\STATE Sample batch $\{X_i\}_{i=1}^{M} \sim P$
\FOR{$k=1 ,\ldots,K$}
\STATE Sample batch $\{Y_i\}_{i=1}^{M} \sim Q$
\STATE Update $\theta_g$ to minimize~\eqref{eq:J-samples} using Adam method
\ENDFOR
\STATE Update $\theta_f$ to maximize~\eqref{eq:J-samples} using Adam method
\STATE Projection: $ w \leftarrow \max(w,0)$, for all $w\in \{W^l\}\in \theta_f$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{remark}
\normalfont Note that the regularization term $R(\theta_g)$ is {\em data-independent} and does not introduce any bias to the optimization problem. For any convex function $f$, the minimizer of the problem~\eqref{eq:J-samples} is still a convex function $g$ as discussed in Remark~\ref{rem:relax}. We use this regularization to guide the algorithm towards neural networks that are convex.
\end{remark}
\subsection{Learning the optimal transport map}
\label{sec:exp_map}
As highlighted in \prettyref{fig:checker-board-OT0} and \prettyref{fig:ourapproach}, qualitatively, we observe that our proposed procedure indeed learns the optimal transport map on both the Checkerboard and Mixture of eight Gaussians datasets. In particular, our transport map is able to cut the continuous mass symmetrically and transport it to the nearest target support in both these examples. Also, Figure~\ref{fig:comparison} illustrates the qualitative difference of our approach compared to other approaches, in terms of non-optimality and existence of trailing dots. The existence of trailing dots is due to representing the transport map with continuous neural networks, discussed in Section~\ref{sec:exp-reg-OT}.
\subsection{Robustness of learning transport maps}
\label{sec:exp-W1}
In this section we numerically illustrate that the generator in W1-LP and W2GAN
finds arbitrary transport maps, and it is sensitive to initialization as
discussed in Section~\ref{sec:formulation}.
This is in stark contrast with our proposed approach which finds the {\em optimal} transport independent of
the initialization.
We consider the previous Checkerboard example (Figure~\ref{fig:checker-board-data}) and train W1-LP and W2GAN with different random initializations.
The resulting transport maps for two different random trials are depicted in \prettyref{fig:checker-board-W1-1} and \prettyref{fig:checker-board-W1-2} for W1-LP, and \prettyref{fig:checker-board-W2-1} and \prettyref{fig:checker-board-W2-2} for W2-GAN.
In addition to the learned transport map being very sensitive to initialization, the quality of the samples generated by the trained models is also sensitive. This is a major challenge in training GANs \cite{lin2018pacgan}.
\subsection{Learning discontinuous transport maps}
\label{sec:exp-reg-OT}
The power to represent a discontinuous transport mapping is
what fundamentally sets our proposed method apart from
the existing approaches, as discussed in Section~\ref{sec:formulation}.
Two prominent approaches for learning transport maps are
generative adversarial networks \cite{arjovsky2017wasserstein,petzka2017regularization}
and regularized optimal transport \cite{genevay2016stochastic,seguy2017large}.
In both cases, the transport map is modeled by a standard neural network with finite depth and width, which is a continuous function.
As a consequence,
continuous transport maps suffer from unintended and undesired spurious probability mass that
connects disjoint supports of the target probability distribution.
First, standard GANs including
the original GAN \cite{goodfellow2014generative} and
variants of WGAN \cite{arjovsky2017wasserstein,gulrajani2017improved,wei2018improving}
all suffer from spurious probability masses.
Even those designed to tackle such spurious probability masses, like PacGAN \cite{lin2018pacgan},
cannot overcome the barrier of continuous neural networks.
This suggests that a fundamental change in the architecture, like the one we propose, is necessary. \prettyref{fig:w1-lp} illustrates the same scenario for the transport map learned through the WGAN framework.
We can observe the trailing dots of spurious probability masses,
resulting from undesired continuity of the learned transport maps.
Similarly, regularization methods to approximate optimal transport maps, explained in Section~\ref{sec:background},
suffer from the same phenomenon.
Representing a transport map with an inherently continuous function class results in
spurious probability masses connecting disjoint supports. \prettyref{fig:bary_ot}, corresponding to Barycentric-OT, illustrates those trailing dots of spurious masses
for the transport map learned by the algorithm introduced in \citet{seguy2017large}. We also observe a similar phenomenon with \citet{leygonie2019adversarial}, as illustrated in \prettyref{fig:w2gan}.
On the other hand,
we represent the transport map with the {\em gradient} of
a neural network (equipped with non-smooth ReLU type activation functions).
The resulting transport map can naturally represent discontinuous transport maps,
as illustrated in
Figure~\ref{fig:checker-board-OT} and \prettyref{fig:ourapproach}.
The vector field of the learned transport map in Figure~\ref{fig:g-vector-field} clearly shows
the discontinuity of the learned optimal transport.
\input{high_dim_experiments.tex}
\subsection{High dimensional experiments}
\label{sec:high-dim}
\input{figure_high_dim.tex}
We consider the challenging task of learning optimal transport maps on high dimensional distributions. In particular, we consider both synthetic and real world high dimensional datasets and provide quantitative and qualitative illustration of the performance of our proposed approach.
\textbf{Gaussian to Gaussian.} Source distribution $Q=\mathcal{N}(0,I_d)$ and target distribution $P=\mathcal{N}(\mu,I_d)$, for some fixed $\mu\in\mathbb{R}^d$ and $d=784$. The mean vector is $\mu=\alpha (1,\ldots,1)^\top$ for some parameter $\alpha>0$. Because both distributions are Gaussian, the optimal transport map is explicitly known: $T^\ast(x)=x+\mu$ and hence $W_2^2(P,Q)=\norm{\mu}^2/2=\alpha^2 d/2$. In \prettyref{fig:w2-convergence}, we compare our estimated distance $\tilde{W}_2^2(P,Q)$, defined in~\eqref{eq:approxW2}, with the exact value $W_2^2(P,Q)$, as the training progresses for various values of $\alpha \in \{1,5,10\}$. Intuitively, learning is more challenging when $\alpha$ is larger. Further, the error in learning the optimal transport map, quantified with the metric $\norm{\mu_{T(Q)}-\mu}^2$, where $\mu_{T(Q)}$ is the mean of the transported distribution $T_\#Q$, is reported in Table~\ref{tab:gauss_gauss_table}.
\begin{table}[h]
\caption{The error between the mean of the transported distribution and that of the target distribution. The source and target are $784$-dim.\ Gaussians.}
\tiny
\label{tab:gauss_gauss_table}
\begin{center}
\begin{sc}
\begin{tabular}{cccc}
Metric & $\alpha=1$ & $\alpha=5$ & $\alpha=10$ \\
\midrule
$\norm{\mu_{T(Q)}-\mu}^2$ & $ 0.19\pm 0.015$ & $13.95 \pm 1.45$ & $29.05 \pm 5.16$ \\
\midrule
\hspace{-1.7em} $100 \cdot (\norm{\mu_{T(Q)}-\mu}/\norm{\mu})^2$ & $ 0.02\pm 0.001$ & $0.07 \pm 0.005$ & $0.04 \pm 0.006$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{center}
\end{table}
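The closed-form quantities in this experiment can be reproduced numerically by applying the known optimal map $T^\ast(x)=x+\mu$ to samples from the source; a minimal NumPy sketch (the sample size $n$ is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
d, alpha, n = 784, 5.0, 5000
mu = alpha * np.ones(d)

# Source Q = N(0, I_d); the optimal map to P = N(mu, I_d) is the shift
# T*(x) = x + mu, with cost W_2^2(P, Q) = ||mu||^2 / 2 = alpha^2 d / 2
# under the cost (1/2)||x - y||^2 used throughout the paper.
X = rng.standard_normal((n, d))
TX = X + mu

cost = 0.5 * np.mean(np.sum((TX - X) ** 2, axis=1))  # equals ||mu||^2 / 2
err = np.sum((TX.mean(axis=0) - mu) ** 2)            # metric reported in the table
```

For the exact map, `err` reduces to the squared norm of the empirical mean of $n$ standard Gaussian samples, so it concentrates around $d/n$.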
\textbf{High-dim.\ Gaussian to low-dim.\ mixture.} The source distribution $Q$ is the standard Gaussian $\mathcal{N}(0,I_d)$ with $d=784$, and the target distribution $P$ is a mixture of four Gaussians that lies in a two-dimensional subspace of the high-dimensional space $\mathbb{R}^d$, i.e.\xspace the first two components of the random vector $X\sim P$ follow a mixture of four Gaussians, and the rest of the components are zero. The projection of the learned optimal transport map onto the first four components is depicted in Figure~\ref{fig:mixture-high-dim}. As illustrated in the left panel of Figure~\ref{fig:mixture-high-dim}, our transport map correctly maps the source distribution to the mixture of four Gaussians in the first two components, and it maps the rest of the components to zero, as highlighted by a red blob at zero in the right panel.
\textbf{MNIST $\{0,1,2,3,4\}$ to MNIST $\{5,6,7,8,9\}$.} We consider the standard MNIST dataset~\cite{mnist} with the goal of learning the optimal transport map from the set of images corresponding to the first five digits~$\{0,1,2,3,4\}$ to the last five digits~$\{5,6,7,8,9\}$. To achieve this, we embed the images into a space where the Euclidean norm $\|\cdot\|$ between the embedded images is meaningful. This is in alignment with the reported results in the literature for learning the $L_2$-optimal transport map~\citep[Sec. 4.1]{yang2019potential}. We consider embeddings into a $16$-dimensional latent feature space given by a pre-trained Variational Autoencoder (VAE), and run our algorithm on this feature space.
The results of the learned transport map are depicted in \prettyref{fig:all_high_dim}. Figure~\ref{fig:mnist-firstfive} presents samples from the source distribution and Figure~\ref{fig:mnist-lastfive} illustrates the source samples after transportation under the learned optimal transport map. We observe that the digits that look alike are coupled via the optimal transport map, e.g. $1\to9$, $2\to8$, and $4\to9$.
\textbf{Gaussian to MNIST.} The source is a $16$-dimensional standard Gaussian distribution, and the target is the set of $16$-dimensional latent embeddings of all the MNIST digits. The MNIST-like samples generated from the learned optimal transport map are depicted in~\prettyref{fig:gauss-mnist}.
These experiments serve as a proof of concept that the algorithm scales to high-dimensional settings and real-world datasets. We believe that further improving the performance of the proposed algorithm requires careful tuning of hyper-parameters, which
takes time to develop (similar to the initial WGAN) and is a subject of ongoing work.
\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{transported_16.pdf}
\caption{MNIST-like samples generated by the learned optimal transport map from a Gaussian source distribution in feature space.}
\label{fig:gauss-mnist}
\end{figure}
\section{Introduction}
\label{sec:introduction}
Finding a mapping that transports mass from one distribution $Q$ to another distribution $P$
is an important task in various machine learning applications, such as deep generative models \cite{goodfellow2014generative,2013auto} and domain adaptation \cite{gopalan2011domain,ben2010theory}.
Among infinitely many transport maps $T$ that can map a random variable $X$ from $Q$ such that
$T(X)$ is distributed as $P$, several recent advances focus on
discovering some inductive bias to find a transport map with desirable properties.
Research in optimal transport has been leading such efforts,
in applications such as
color transfer \cite{ferradans2014regularized},
shape matching \cite{su2015optimal},
data assimilation \cite{reich2013nonparametric},
and Bayesian inference \cite{el2012bayesian}.
Searching for an optimal transport
encourages a mapping that minimizes the total cost
of transporting mass from $Q$ to $P$, as originally formulated in \citet{monge1781memoire},
and provides the inductive bias needed in many such applications.
However, finding the optimal transport map in general is a challenging task, especially in high dimensions where
efficient approaches are critical.
Algorithmic solutions are well-established
for discrete variables;
the optimal transport can be found as a solution to a linear program.
Building upon this mature area, typical approaches for general distributions use quantization,
which becomes intractable for the
high-dimensional variables we encounter in modern applications \cite{evans1999differential,benamou2000computational,papadakis2014optimal}.
\input{starting_figure.tex}
To this end, we propose a novel minimax optimization approach to search for the
optimal transport under the quadratic distance (i.e.~the 2-Wasserstein metric).
A major challenge in a minimax formulation of optimal transport is
that the constraints in the Kantorovich dual formulation \eqref{eq:dual_form} are notoriously difficult to enforce.
They require evaluating the functions at every point in the domain, which is not tractable.
A common straightforward heuristic is to sample some points and add the sampled constraints as regularizers.
Such regularizations create biases that hinder learning the true optimal transport.
Our key innovation is to depart from this common practice;
we instead eliminate the constraints by restricting our search to the set of all convex functions,
building upon the fundamental connection from Theorem \ref{thm:knott-brenier}.
This leads to a novel minimax formulation in \eqref{eq:max-min}.
Leveraging recent advances in input convex neural networks,
we propose a new architecture and a training algorithm for solving this minimax optimization.
We establish the consistency of our proposed minimax formulation in Theorem~\ref{thm:our_optim_result}.
In particular, we show that the solution to this optimization problem yields the exact optimal transport map.
We provide stability analysis for the proposed estimator in \prettyref{thm:stability}.
Further, when used to train deep generative models,
our approach can be viewed as a novel framework to train
a generator that is modeled as a {\em gradient of a convex function}.
We provide a principled training rule based on the optimal transport theory.
This ensures that $(i)$ the generator converges to the optimal transport, independent of how we initialize the neural network; and
$(ii)$ it represents sharp boundaries when the target has multiple disconnected supports.
The gradient of a neural network naturally represents discontinuous functions,
which is critical in mapping from a single connected support to disconnected supports.
To model convex functions, we leverage Input Convex Neural Networks (ICNNs),
a class of scalar-valued neural networks $f(x;\theta)$ such that the function $x \mapsto f(x;\theta) \in \mathbb{R}$ is convex.
These neural networks were introduced by \citet{amos2016input} to provide efficient inference and optimization procedures for structured prediction, data imputation and reinforcement learning tasks. In this paper, we show that ICNNs can be efficiently trained to learn the optimal transport map between two distributions $P$ and $Q$. To the best of our knowledge, this is the first such instance where ICNNs are leveraged for the well-known task of learning optimal transport maps in a {\em scalable} fashion. This framework opens up a new realm for understanding problems in optimal transport theory using parametric convex neural networks, both in theory and practice.
Figure~\ref{fig:checker-board-OT0} provides an example
where the optimal transport map has been learned via our proposed Algorithm~\ref{alg:W2}
from the orange distribution to the green distribution.
\textbf{Notation.} $\mathcal{P}(\mathcal{X})$ denotes the set of probability measures on a Polish space $\mathcal{X}$, and $\mathcal{B}(\mathcal{X})$ denotes the Borel subsets of $\mathcal{X}$. For $P \in \mathcal{P}(\mathcal{X})$ and $Q\in \mathcal{P}(\mathcal{Y})$, $P \otimes Q $ denotes the product measure on $\mathcal{X} \times \mathcal{Y}$. For measurable map $T:\mathcal{X} \to \mathcal{Y}$, $T_{\#} P$ denotes the push-forward of $P$ under $T$, i.e.\xspace $(T_{\#} P)(A)=P(T^{-1}(A)),~\forall A \in \mathcal{B}(\mathcal{Y})$.
$L^1(P) \triangleq \{f \text{ measurable}: \int |f|\,\mathrm{d} P<\infty\}$ denotes the set of integrable functions with respect to $P$. $\mathtt{CVX}(P)$ denotes the set of all convex functions in $L^1(P)$. $\mathrm{Id}:x\mapsto x$ denotes the identity function. $\langle \cdot,\cdot \rangle$ and $\|\cdot\|$ denote the inner product and $\ell_2$-Euclidean norm.
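For intuition, the push-forward can be checked empirically: if $X\sim P$, then $T(X)\sim T_{\#}P$. A minimal NumPy sketch in one dimension (the map $T$ and distributions are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# For P = N(0, 1) and the affine map T(x) = 2x + 3, the push-forward
# T_# P is N(3, 4); sampling X ~ P and applying T recovers its moments.
x = rng.standard_normal(200_000)
y = 2.0 * x + 3.0
```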
\section{A novel minimax formulation to learn optimal transport}
\label{sec:formulation}
Our goal is to learn the optimal transport map $T^*$ from $Q$ to $P$,
from samples drawn from $P$ and $Q$, respectively.
We use the
fundamental connection between optimal transport and Kantorovich dual in Theorem~\ref{thm:knott-brenier},
to formulate learning $T^*$ as a problem of estimating $W_2^2(P,Q)$.
However, $W_2^2(P,Q)$ is notoriously hard to estimate.
The standard Kantorovich dual formulation in Eq.~\eqref{eq:dual_form} involves a
supremum over a set $\Phi_c$ with infinite constraints,
which is challenging to even approximately project onto.
To this end, we derive an alternative optimization formulation in Eq.~\eqref{eq:max-min},
inspired by the convexification trick \citep[Section 2.1.2]{villani2003topics}.
This allows us to eliminate the distance constraint of $\Phi_c$,
and instead constrain our search over all {\em convex functions}.
This constrained optimization can now be seamlessly integrated with recent advances in
designing deep neural architectures with convexity guarantees.
This leads to a novel minimax optimization
to learn the optimal transport.
We exploit the fundamental properties of $W_2^2(P,Q)$ and the corresponding optimal transport to
reparametrize the optimization formulation.
Note that for any $(f,g) \in \Phi_c$,
\begin{align*}
&f(x)+g(y) \leq \frac{1}{2}\norm{x-y}_2^2 \;\; \Longleftrightarrow\\ &\qth{\frac{1}{2}\|x\|_2^2 -f(x)}+\qth{\frac{1}{2}\|y\|_2^2 -g(y)} \geq \inner{x}{y}.
\end{align*}
Hence reparametrizing $\frac{1}{2}\|\cdot \|_2^2-f(\cdot)$ and $\frac{1}{2}\|\cdot\|_2^2-g(\cdot)$ by $f$ and $g$ respectively, and
substituting them in \prettyref{eq:dual_form} yields
\begin{align*}
W_2^2(P,Q) = C_{P,Q} -\inf_{(f,g) \in \tilde{\Phi}_c} \Big\{ \mathbb{E}_P[f(X)]+\mathbb{E}_Q[g(Y)] \Big\},
\end{align*}
where $C_{P,Q}=(1/2)\mathbb{E} [\norm{X}_2^2+\norm{Y}_2^2 ]$ is a constant independent of $(f,g)$ and $\tilde{\Phi}_c \triangleq \{ (f,g) \in L^1(P) \times L^1(Q): f(x)+g(y) \geq \inner{x}{y}, \quad \forall (x,y) ~ dP \otimes dQ ~\text{a.e.} \}$. While the above constrained optimization problem involves a pair of functions $(f,g)$, it can be transformed into the following form involving only a single convex function $f$, thanks to \citet[Theorem 2.9]{villani2003topics}:
\begin{align}\label{eq:dual_convex_form}
\hspace{-2pt}W_2^2(P,Q) \!= \!C_{P,Q}\!- \!\!\inf_{f \in \mathtt{CVX}(P)} \mathbb{E}_P[f(X)]\!+\!\mathbb{E}_Q[f^\ast(Y)],
\end{align}
where $f^\ast(y)=\sup_x \langle x,y\rangle - f(x)$ is the convex conjugate of $f(\cdot)$.
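The conjugate can be approximated numerically by maximizing over a grid; a minimal one-dimensional sketch for the self-conjugate example $f(x)=x^2/2$, whose conjugate is $f^\ast(y)=y^2/2$:

```python
import numpy as np

# f*(y) = sup_x <x, y> - f(x); for f(x) = x^2/2 the supremum is attained
# at x = y, giving f*(y) = y^2/2. Approximate the sup over a fine grid.
xs = np.linspace(-10.0, 10.0, 20001)
fx = xs ** 2 / 2.0

def conjugate(y):
    return np.max(xs * y - fx)

ys = np.linspace(-3.0, 3.0, 13)
vals = np.array([conjugate(y) for y in ys])
```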
The crucial tools behind our formulation are
the following celebrated results due to Knott-Smith and Brenier \cite{villani2003topics},
which relate the optimal solutions for the dual form in \prettyref{eq:dual_convex_form} and
the primal form in \prettyref{eq:kantor_relax}.
\begin{theorem}[{\citep[Theorem 2.12]{villani2003topics}}]
\label{thm:knott-brenier}
Let $P,Q$ be two probability distributions on $\mathbb{R}^d$ with finite second order moments. Then,
\begin{enumerate}
\item
(\textbf{Knott-Smith optimality criterion}) A coupling $\pi \in \Pi(P,Q)$ is optimal for the primal \prettyref{eq:kantor_relax} if and only if there exists a convex function $f \in \mathtt{CVX}(\mathbb{R}^d)$ such that $\mathrm{Supp}(\pi) \subset \mathrm{Graph}(\partial f)$. Or equivalently, for all $d\pi$-almost $(x,y)$, $y \in \partial f(x)$. Moreover, the pair $(f,f^\ast)$ achieves the minimum in the dual form \prettyref{eq:dual_convex_form}.
\item
(\textbf{Brenier's theorem}) If $Q$ admits a density with respect to the Lebesgue measure on $\mathbb{R}^d$, then there is a unique optimal coupling $\pi$ for the primal problem. In particular, the optimal coupling satisfies that
\begin{align*}
d\pi(x,y) = dQ(y) \delta_{x=\nabla f^\ast(y)},
\end{align*}
where the convex pair $(f,f^\ast) $ achieves the minimum in the dual problem \prettyref{eq:dual_convex_form}. Equivalently, $\pi=(\nabla f^\ast \times \mathrm{Id})_{\#}Q$.
\item
Under the above assumptions of Brenier's theorem, $\nabla f^\ast$ is the unique solution to the Monge transportation problem from $Q$ to $P$, i.e.\xspace
\begin{align*}
\mathbb{E}_Q \norm{\nabla f^\ast(Y)-Y}^2 = \inf_{T: T_{\#}Q=P}\mathbb{E}_Q \norm{T(Y)-Y}^2.
\end{align*}
\end{enumerate}
\end{theorem}
\begin{remark}\normalfont
Whenever $Q$ admits a density, we refer to $\nabla f^\ast$ as the optimal transport map.
\label{rem:main}
\end{remark}
Henceforth, throughout the paper we assume that the distribution $Q$ admits a density in $\mathbb{R}^d$.
Note that in view of \prettyref{thm:knott-brenier}, any optimal pair $(f,f^\ast)$ from the dual formulation in \prettyref{eq:dual_convex_form} provides us an optimal transport map $\nabla f^\ast$ pushing forward $Q$ onto $P$. However, the objective \prettyref{eq:dual_convex_form} is not amenable to standard stochastic optimization schemes due to the conjugate function $f^\ast$.
To this end, we propose a novel minimax formulation in the following theorem where we replace the conjugate with a new convex function.
\begin{theorem}
\label{thm:our_optim_result}
Whenever $Q$ admits a density in $\mathbb{R}^d$, we have
\begin{align}\label{eq:max-min}
&W_2^2(P,Q) = \sup_{\substack{f \in \mathtt{CVX}(P), \\ f^\ast \in L^1(Q)}} \inf_{g \in \mathtt{CVX}(Q) }~\mathcal{V}_{P,Q}(f,g) + C_{P,Q},
\end{align}
where
$\mathcal{V}_{P,Q}(f,g)$ is a functional of $f,g$ defined as
\begin{equation*}
\mathcal{V}_{P,Q}(f,g)= -\mathbb{E}_P[f(X)]-\mathbb{E}_Q[\inner{Y}{\nabla g(Y)}-f(\nabla g(Y))].
\end{equation*}
In addition, there exists an optimal pair $(f_0, g_0)$ achieving the supremum and infimum respectively, where $\nabla g_0$ is the optimal transport map from $Q$ to $P$.
\end{theorem}
\vspace{-10pt}
\begin{proof}[Proof sketch]
The proof follows from the inequality $\langle y, \nabla g(y) \rangle - f(\nabla g(y)) \leq f^*(y)$ for all functions $g$, and then taking the expectation over $Q$, and observing that the equality is achieved with $g=f^*$. The technical details appear in \prettyref{app:proof_theorem}.
\end{proof}
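The inequality underlying the proof sketch can be verified numerically in one dimension; a sketch with the illustrative choice $f(x)=x^2$, for which $f^\ast(y)=y^2/4$ and $(f^\ast)'(y)=y/2$:

```python
import numpy as np

f = lambda x: x ** 2              # a convex f
f_star = lambda y: y ** 2 / 4.0   # its convex conjugate
g_grad = lambda y: y / 2.0        # gradient of g = f^*

y = np.linspace(-3.0, 3.0, 101)
# <y, grad g(y)> - f(grad g(y)) attains f*(y) when g = f^* ...
lhs = y * g_grad(y) - f(g_grad(y))

# ... and stays below f*(y) for any other g, e.g. grad g(y) = y/3,
# which gives 2 y^2 / 9 <= y^2 / 4.
other = y * (y / 3.0) - f(y / 3.0)
```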
\begin{remark}\normalfont
\label{rem:relax}
For any convex function $f$, the function $g\in L^1(Q)$ that achieves the infimum in~\eqref{eq:max-min} is convex and equals $f^*$. Therefore, the constraint $g\in \mathtt{CVX}(Q)$ can be relaxed to $g\in L^1(Q)$ without changing the optimal value and optimizing functions. We numerically observe that the optimization algorithm performs better under this relaxation.
\end{remark}
Formulation~\eqref{eq:max-min} now provides a principled approach to learn the optimal transport mapping $\nabla g(\cdot)$
as a solution of a minimax optimization. Since the optimization involves the search over the space of convex functions, we utilize the recent advances in input convex neural networks (ICNNs) to parametrize them as discussed in the following section.
\subsection{Minimax optimization over ICNNs}
\label{sec:icnn}
We propose using parametric models based on deep neural networks to approximate the set of convex functions.
These are known as input convex neural networks \cite{amos2016input}; we denote this class by $\mathtt{ICNN}(\mathbb{R}^d)$.
We propose estimating
the following approximate Wasserstein-$2$ distance, from samples:
\begin{align}
\tilde{W}_2^2(P,Q) \!= \!\sup_{f\in \mathtt{ICNN}(\mathbb{R}^d)} \inf_{g \in \mathtt{ICNN}(\mathbb{R}^d)}\mathcal{V}_{P,Q}(f,g)\! +\! C_{P,Q}.
\label{eq:approxW2}
\end{align}
ICNNs are a class of scalar-valued neural networks $f(x;\theta)$ such that the function $x \mapsto f(x;\theta) \in \mathbb{R}$ is convex.
The neural network architecture for an ICNN is as follows.
Given an input $x \in \mathbb{R}^d$, the mapping $x \mapsto f(x;\theta)$ is given by a $L$-layer feed-forward NN using the following equations for $l=0,1,\ldots, L-1$:
\begin{align*}
z_{l+1} = \sigma_l(W_l z_l + A_l x + b_l),\quad f(x;\theta)=z_L,
\end{align*}
where $\{W_l\}$, $\{A_l\}$ are weight matrices (with the convention that $W_0=0$), and $\{b_l\}$ are the bias terms. $\sigma_l$ denotes the entry-wise activation function at the layer $l$.
This is illustrated in Figure~\ref{fig:ICNN}.
We denote the total set of parameters by $\theta=(\{W_l\},\{A_l\},\{b_l\})$. It follows from \citet[Proposition 1]{amos2016input} that $f(x;\theta)$ is convex in $x$ provided
(i) all entries of the weights $W_l$ are non-negative;
(ii) activation function $\sigma_0$ is convex;
(iii) $\sigma_l$ is convex and non-decreasing, for $l=1,\ldots,L-1$.
While ICNNs are a specific parametric class of convex functions, it is important to understand whether this class is representationally rich enough. This is answered positively by \citet[Theorem 1]{chen2018optimal}. In particular, they show that any convex function over a compact domain can be approximated in sup norm by an ICNN to the desired accuracy. This justifies the choice of ICNNs as a suitable approximating class for convex functions.
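A minimal NumPy sketch of a two-layer ICNN satisfying conditions (i)--(iii); the layer sizes and the identity output activation are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 2, 8                                # input / hidden sizes (illustrative)
A0, b0 = rng.standard_normal((h, d)), rng.standard_normal(h)
W1 = np.abs(rng.standard_normal((1, h)))   # entries of W_l kept non-negative
A1, b1 = rng.standard_normal((1, d)), rng.standard_normal(1)
relu = lambda t: np.maximum(t, 0.0)        # convex and non-decreasing

def f(x):
    z1 = relu(A0 @ x + b0)                 # layer 0 (convention W_0 = 0)
    return float(W1 @ z1 + A1 @ x + b1)    # scalar output f(x; theta)

# Check convexity along random segments:
# f(t x + (1-t) y) <= t f(x) + (1-t) f(y).
rngc = np.random.default_rng(1)
ok = all(
    f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-9
    for x, y, t in ((rngc.standard_normal(d), rngc.standard_normal(d),
                     rngc.uniform()) for _ in range(200))
)
```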
\begin{figure}[t]
\centering
\includegraphics[width=0.95\hsize]{ICNN-architecture.pdf}
\caption{The input convex neural network (ICNN) architecture.}
\label{fig:ICNN}
\vspace{-0.3cm}
\end{figure}
\input{main_figure_compar.tex}
The proposed framework
for learning the optimal transport
provides a novel training method for deep generative models, where
$(a)$ the generator is modeled as
a gradient of a convex function and $(b)$
the minimax optimization in \eqref{eq:approxW2}
(and more concretely, Algorithm \ref{alg:W2}) provides the training methodology.
On the surface, Eq.~\eqref{eq:approxW2} resembles the minimax optimization of generative adversarial networks based on
Wasserstein-1 distance \cite{arjovsky2017wasserstein}, called WGAN.
However, there are several critical differences making our approach attractive.
First, because WGANs use optimal transportation distance only as a measure of distance,
the learned generator map from the latent source to the target is arbitrary
and sensitive to the initialization (see Figure~\ref{fig:W1-GAN}) \cite{jacob2018w2gan}.
On the other hand, our proposed approach aims to find the {\em optimal} transport map and learns the same mapping regardless of
the initialization (see Figure~\ref{fig:checker-board-OT0}).
Secondly, in a WGAN architecture~\cite{arjovsky2017wasserstein,petzka2017regularization},
the transport map (which is the generator) is represented with a neural network, which is a continuous mapping.
Although a discontinuous map can be approximated arbitrarily closely by continuous
neural networks, such a construction requires large weights, making training unstable.
On the other hand, through our proposed method, by representing the transport map with
{\em gradient} of a neural network (equipped with ReLU type activation functions),
we obtain a naturally {\em discontinuous map}.
As a consequence we obtain sharp transitions from one part of the support to the other, whereas
GANs (including WGANs) suffer from spurious probability masses that are not present in the target.
This is illustrated in Section~\ref{sec:exp-reg-OT}.
The same holds for
regularization-based methods for learning optimal transport~\cite{genevay2016stochastic,seguy2017large,leygonie2019adversarial},
where the transport map is parametrized by continuous neural nets.
\begin{remark}\label{rem:compare}
\normalfont
In a recent work, \citet{taghvaei20192} proposed to solve the semi-dual optimization problem~\eqref{eq:dual_convex_form} by representing the function $f$ with an ICNN and learning it using a stochastic optimization algorithm. However, each step of this algorithm requires computing the conjugate $f^*$ for all samples in the batch by solving an inner convex optimization problem for each sample, which makes it slow and challenging to scale to large datasets. Further, it is memory intensive, as each inner optimization step requires a copy of all the samples in the dataset. In contrast, we represent the convex conjugate $f^\ast$ using an ICNN and present a novel minimax formulation to learn it in a scalable manner.
\end{remark}
\subsection{Stability analysis of the learned transport map}
\prettyref{thm:our_optim_result} establishes the consistency of our proposed optimization: if the objective \prettyref{eq:max-min} is solved exactly with a pair of functions $(f_0,g_0)$, then $\nabla g_0$ is the exact optimal transport map from $Q$ to $P$. In this section, we study the error in approximating the optimal transport map $\nabla g_0$, when the objective \prettyref{eq:max-min} is solved up to a small error. To this end, we build upon the recent results from \citet[Prop. 8]{hutter2019minimax} regarding the stability of optimal transport maps.
Recall that the optimization objective \prettyref{eq:max-min} involves a minimization and a maximization. For any pair $(f,g)$, let $\epsilon_1(f,g)$ denote the minimization gap and $\epsilon_2(f)$ denote the maximization gap, defined according to:
\begin{align}
\label{eq:geps}
\epsilon_1(f,g) &= \mathcal{V}(f,g) - \inf_{\tilde{g} \in \mathtt{CVX}(Q)} \mathcal{V}(f,\tilde{g}),\\
\epsilon_2(f) &= \sup_{\tilde{f}\in \mathtt{CVX}(P)} \inf_{\tilde{g} \in \mathtt{CVX}(Q)}\mathcal{V}(\tilde{f},\tilde{g}) - \inf_{\tilde{g} \in \mathtt{CVX}(Q)} \mathcal{V}(f,\tilde{g}) \nonumber
\end{align}
Then, the following theorem bounds the error between $\nabla g$ and the optimal transport map $\nabla g_0$ as a function of $\epsilon_1$ and $\epsilon_2$. We defer its proof to \prettyref{app:proof_stability}.
\begin{theorem}\label{thm:stability}
Consider the optimization problem~\eqref{eq:max-min}. Assume $Q$ admits a density and let $\nabla g_0(\cdot)$ denote the optimal transport map from $Q$ to $P$. Then for any pair $(f,g)$ such that $f$ is $\alpha$-strongly convex, we have
\begin{align*}
\|\nabla g - \nabla g_0 \|^2_{L^2(Q)} \leq\frac{2}{\alpha}(\epsilon_1(f,g)+\epsilon_2(f)),
\end{align*}
where $\epsilon_1$ and $\epsilon_2$ are defined in~\eqref{eq:geps}, and $\|\cdot\|_{L^2(Q)}$ denotes the $L^2$-norm with respect to measure $Q$.
\end{theorem}
% arXiv:1908.10962 -- Optimal transport mapping via input convex neural networks
% arXiv:1709.09764 -- Rigidity of tilting modules in category O
% Abstract: In this note we give an overview of rigidity properties and Loewy lengths of tilting modules in the BGG category $\mathcal O$ associated to a reductive Lie algebra. These results are well-known to several specialists, but seem difficult to find in the existing literature.
\section*{Introduction}
In highest weight categories, tilting modules are the modules which have both a standard and costandard filtration, see \cite{Ringel}. In category $\mathcal O$ specifically, they are the self-dual modules with Verma filtration, see \cite{CI, ES}.
In each of the three `classical' Lie-theoretic settings (BGG category $\mathcal O$, quantum groups at roots of unity, modular representation theory of reductive groups), these modules appear with their own applications, see for instance \cite{MaTilt, MO} for category $\mathcal O$, \cite{AK} for quantum groups and \cite{RW} for modular representation theory. The level of understanding of the tilting modules seems to decrease along the above order.
The aim of this note is to write down some results which are known in category $\mathcal O$, but perhaps not written out in full yet, and might be useful for studying the other settings.
\section{The results}
In this section we state the main results; the proofs will follow in Section~\ref{SecProofs}, and any unexplained notation will be introduced in Section~\ref{SecNot}.
A filtration of a module is called semisimple if all subquotients are semisimple. A {\em Loewy filtration} of a module $M$ is a semisimple filtration of minimal length (we will only consider modules where this is finite). That minimal length is by definition the {\em Loewy length}, $\mathpzc{ll}(M)$. A module is {\em rigid} if it only has one Loewy filtration.
In the following theorem, the equivalence of (5) and (6) is a result of Stroppel in~\cite{Stroppel2} and the equivalence of (5), (7) and (8) (in the regular case) is a result of Irving in~\cite{Irving}.
\begin{thm}\label{Thm1}
For any integral dominant $\lambda\in\mathfrak{h}^\ast$ and $x\in W$ with~$y:=w_0x$, the following statements are equivalent:
\begin{enumerate}
\item $T(x\cdot\lambda)$ is rigid;
\item $T(x\cdot\lambda)$ has simple socle;
\item $\End_{\mathcal O}(T(x\cdot\lambda))$ is commutative;
\item The (dual) Verma flag of $T(x\cdot\lambda)$ is multiplicity free;
\item $(P(y\cdot\lambda):\Delta(\lambda))=[\Delta(\lambda):L(y\cdot\lambda)]=1$;
\item $\End_{\mathcal O}(P(y\cdot\lambda))$ is commutative;
\item $P(y\cdot\lambda)$ has simple socle;
\item $P(y\cdot\lambda)$ is rigid.
\end{enumerate}
\end{thm}
\begin{rem}
The proof of the equivalences
$$(2)\Leftrightarrow(3)\Leftrightarrow(4)\Leftrightarrow(5)\Leftrightarrow(6)\Leftrightarrow(7)$$
in Theorem~\ref{Thm1} does not use the Kazhdan-Lusztig conjecture, nor any results on Koszulity. Irving's proof that (8) implies (2)-(7) relies crucially on the validity of the Kazhdan-Lusztig conjecture. To prove that (1) implies (2)-(7), we furthermore use the existence of a positive grading (the Koszul grading) on $\mathcal O$. For the proof that (2)-(7) imply (1) and (8) we apply Koszulity. This can probably be avoided however, using arguments as in the proof of \cite[Corollary 7]{Irving}.
\end{rem}
It is known, see the following lemma, that $T(x\cdot\lambda)$ is actually a submodule of $P(y\cdot\lambda)$. Theorem~\ref{Thm1} thus implies that (non-)rigidity is directly inherited by these submodules. Note that in general, a submodule of a (non-)rigid module need not be (non-)rigid.
\begin{lemma}\label{LemTrace}
With notation as in Theorem~\ref{Thm1}, the module $T(x\cdot\lambda)$ is the trace of $P(w_0\cdot\lambda)$ in $P(y\cdot\lambda)$. In other words, with $H:=\Hom_{\mathcal O}(P(w_0\cdot\lambda),P(y\cdot\lambda))$, we have
$$T(x\cdot\lambda)\;\cong\;\sum_{f\in H}{\rm{im}}(f).$$
\end{lemma}
Blocks in category $\mathcal O$ are equivalent to finite dimensional Koszul algebras, see \cite{BGS}. Hence, the Loewy length $\mathpzc{ll}$ is bounded by the graded length $\mathpzc{gl}$, see Section~\ref{SecSocRad}. In the case of tilting modules we can say more.
\begin{prop}\label{PropLL}
For any tilting module in $\mathcal O$, the graded and Loewy length coincide. Concretely, for any integral dominant $\lambda\in\mathfrak{h}^\ast$ and $x$ with maximal length in $\{y\in W\,|\,y\cdot\lambda=x\cdot\lambda\}$, we have
$$\mathpzc{ll}( T(x\cdot\lambda))\;=\;\mathpzc{gl} ( T(x\cdot\lambda))\;=\; 2\ell(w_0x)+1.$$
\end{prop}
In \cite{Hazi}, it is proved that tilting modules for quantum groups possess a `balanced' Loewy filtration, see Section~\ref{SecSocRad}. Furthermore, a very elegant algorithm to determine the layers of this filtration is given, although of course it still relies on Kazhdan-Lusztig combinatorics. The existence of a Koszul grading on $\mathcal O$ along with some well known facts concerning tilting modules, imply that the analogue of that algorithm always yields the grading filtration (which is balanced by the self-duality of tilting modules) in category $\mathcal O$.
\begin{lemma}\label{LemHazi} The following algorithm of \cite{Hazi} yields the grading filtration for tilting modules in category $\mathcal O$.
\begin{enumerate}[a.]
\item Write the grading filtration of the Verma module $\Delta(\lambda)$. View this as a partial Loewy series for $T(\lambda)$ (namely the bottom layers). We will reflect Loewy layers about the ``middle'' Loewy layer in which $L(\lambda)$ appears.
\item Pick the highest ``unbalanced'' weight; that is, the largest $\mu<\lambda$ such that $L(\mu)$ appears below $L(\lambda)$ but there is no corresponding factor $L(\mu)$ in the reflected layer above $L(\lambda)$. \label{item:restart}
\item Add the grading filtration of $\Delta(\mu)$ to the partial Loewy series so that the head of $\Delta(\mu)$ is in the reflected Loewy layer above $L(\lambda)$.
\item Repeat from Step \ref{item:restart} until the Loewy series is balanced.
\end{enumerate}
\end{lemma}
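As an illustration in the smallest non-trivial case (added here for the reader's convenience; all facts below are standard for $\mathfrak{sl}_2$), let $\mathfrak{g}=\mathfrak{sl}_2$ and $\lambda=0$, so that $W=\{e,s\}$ and $w_0=s$. Step~a writes the grading filtration of $\Delta(0)$, with layers $L(0)$ and $L(s\cdot0)$, as the bottom of the series; Step~\ref{item:restart} picks $\mu=s\cdot0$, and Step~c places the head of $\Delta(s\cdot0)=L(s\cdot0)$ in the layer reflected above $L(0)$. This yields the balanced Loewy series
$$T(0)\colon\quad L(s\cdot0)\;\big|\;L(0)\;\big|\;L(s\cdot0)\qquad\mbox{(read from top to bottom),}$$
so $T(0)\cong P(s\cdot0)$ and $\mathpzc{ll}(T(0))=3=2\ell(w_0e)+1$, in accordance with Proposition~\ref{PropLL}.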
\section{Category $\mathcal O$, Ringel duality and Koszul grading}\label{SecNot}
We work over the ground field $\mathbb{C}$ of complex numbers.
\subsection{The Lie algebra}Let $\mathfrak{g}$ be a {\em reductive Lie algebra} with a fixed {\em triangular decomposition}
\begin{equation}\label{eq1}
\mathfrak{g}\;=\;\mathfrak{n}^-\oplus\mathfrak{h}\oplus\mathfrak{n}^+.
\end{equation}
Here $\mathfrak{h}$ is a fixed {\em Cartan subalgebra} and
$\mathfrak{b}=\mathfrak{h}\oplus\mathfrak{n}^+$ is the corresponding {\em Borel subalgebra}.
We denote by $W$ the associated Weyl group with longest element $w_0$.
The half of the sum of all positive roots is denoted by $\rho\in\mathfrak{h}^\ast$
and the $W$-invariant form on $\mathfrak{h}^*$ is denoted $\langle \cdot,\cdot\rangle$.
The dot action of $W$ on $\mathfrak{h}^\ast$ is denoted by $w\cdot\lambda=w(\lambda+\rho)-\rho$. The partial order $\le$ on $W$ is the Bruhat order, see \cite[Section~0.4]{Humphreys}, with convention that the identity element $e\in W$ is minimal.
Let~$\Lambda\subset\mathfrak{h}^\ast$ denote the set of
{\em integral weights}, that is weights which appear in finite dimensional
$\mathfrak{g}$-modules. The {\em dominant weights} form the subset
\begin{displaymath}
\Lambda^+=\{\lambda\in\Lambda\,|\,\langle \lambda+\rho,\alpha\rangle \ge 0,\;\;\mbox{for all $\alpha\in \Delta^+$}\}.
\end{displaymath}
For $\lambda\in \Lambda^+$, we denote by $W_\lambda\subset W$ its stabiliser subgroup under the
dot action and by $w_0^\lambda$ the longest element in~$W_\lambda$. The set of longest
representatives in~$W_\lambda\backslash W$ is denoted by $X_\lambda$.
\subsection{Category $\mathcal O$}
Consider the {\em BGG category $\mathcal{O}$} associated to the triangular decomposition~\eqref{eq1},
see \cite{BGG, Humphreys}. Simple objects in~$\mathcal{O}$ are, up to isomorphism,
{\em simple highest weight modules} $L(\mu)$, where $\mu\in\mathfrak{h}^\ast$. The module
$L(\mu)$ is the simple top of the {\em Verma module}
$\Delta(\mu)$ and has highest weight~$\mu$. The projective cover of $L(\mu)$ in~$\mathcal{O}$
is denoted $P(\mu)$. The injective envelope of $L(\mu)$ in~$\mathcal{O}$
is denoted $I(\mu)$.
Category $\mathcal O$ has the simple preserving duality functor $\vee$ of \cite[Section~3.2]{Humphreys}.
The dual Verma module with socle $L(\mu)$ is $\nabla(\mu):=\Delta(\mu)^\vee$. The exact subcategory of $\mathcal O$ of modules with a {\em Verma flag} (or standard filtration), see~\cite[Section~3.7]{Humphreys}, is denoted by $\mathcal O^\Delta$. We also have the category $\mathcal O^{\nabla}$ of modules with dual Verma flag.
By \cite[Theorem~3.7]{Humphreys}, we have
\begin{equation}\label{eqQH}(M:\nabla(\mu))\;=\;\dim\Hom_{\mathcal O}(\Delta(\mu),M),\end{equation}
for any $M\in \mathcal O^{\nabla}$ and $\mu\in\mathfrak{h}^\ast$.
We also have $P(\mu)\in\mathcal O^{\Delta}$ and the BGG reciprocity relation
$$(P(\mu):\Delta(\nu))=[\Delta(\nu):L(\mu)],$$
for all $\mu,\nu\in\mathfrak{h}^\ast$, see \cite[Theorem~3.11]{Humphreys}.
We will only consider the {\em integral part} $\mathcal{O}_\Lambda$ of $\mathcal{O}$ which contains all modules
with weights in~$\Lambda$. This is justified by \cite[Theorem~11]{SoergelD}.
The category $\mathcal{O}_\Lambda$ decomposes into {\em indecomposable} blocks
as follows:
\begin{displaymath}
\mathcal{O}_\Lambda\;=\;\bigoplus_{\lambda\in\Lambda^+}\mathcal{O}_\lambda,
\end{displaymath}
where $\mathcal{O}_\lambda$, for $\lambda\in\Lambda^+$, is the Serre subcategory of $\mathcal{O}$ generated by
all simples of the form~$L(x\cdot\lambda)$, where $x\in X_\lambda$. By \cite[Theorem~5.1]{Humphreys}, we have for all $\lambda\in \Lambda^+$ and $x,y\in X_\lambda$
\begin{equation}\label{eqBGG}
[\Delta(x\cdot\lambda):L(y\cdot\lambda)]\not=0\quad\Leftrightarrow\quad x\le y\quad\Leftrightarrow\quad \Delta(y\cdot\lambda)\subset \Delta(x\cdot\lambda).
\end{equation}
By \cite[Section~4.1]{Humphreys}, the socle of $\Delta(x\cdot\lambda)$, for any $x\in X_\lambda$, is $L(w_0\cdot\lambda)$. In particular, the socle of any module in $\mathcal O^\Delta_\lambda$ is a direct sum of modules isomorphic to $L(w_0\cdot\lambda)$.
Let $A_\lambda$ be the endomorphism algebra of the projective generator
$$P_\lambda:=\bigoplus_{x\in X_\lambda}P(x\cdot\lambda).$$ Then we have an equivalence of categories
$$\Hom_{\mathcal O_\lambda}(P_\lambda,-):\;\;\mathcal O_\lambda\;\tilde\to\;A_\lambda\mbox{-mod}.$$
We will use the same notation for the $\mathfrak{g}$-module $M\in \mathcal O_\lambda$ and the $A_\lambda$-module $\Hom_{\mathcal O_\lambda}(P_\lambda,M)$.
\subsection{Tilting modules}
For each $\mu\in \Lambda$, we have a unique indecomposable $\vee$-self-dual module $T(\mu)$ with a short exact sequence
\begin{equation}\label{DefT}0\to \Delta(\mu)\to T(\mu)\to K\to 0,\qquad\mbox{for some $K\in\mathcal O^\Delta$,}\end{equation}
see~\cite[Chapter~11]{Humphreys}. We refer to~$T(\mu)$ as the {\em tilting module} corresponding to~$\mu$. The weight~$\mu$ is also the highest weight for which $T(\mu)$ has a non-zero weight space and the weight space~$T(\mu)_\mu$ has dimension one.
The endomorphism algebra of
$$T_\lambda:=\bigoplus_{x\in X_\lambda} T(x\cdot\lambda)$$ is known as the {\em Ringel dual algebra} of $A_\lambda$. It follows from \cite{SoergelT} or \cite{prinjective} that $A_\lambda$ is in fact Ringel self-dual\footnote{Note that the Ringel self-duality in~\cite{SoergelT} is somewhat hidden, as it gives Ringel duality between $A_{\lambda}$ and $A_{-(w_0(\lambda+\rho)+\rho)}$. However, since we have $W_{-w_0(\lambda+\rho)-\rho}=w_0W_\lambda w_0$, it follows from \cite[Theorem~11]{SoergelD} that $A_\lambda$ and $A_{-(w_0(\lambda+\rho)+\rho)}$ are actually isomorphic.}. As a consequence, we have the following lemma.
\begin{lemma}\label{LemRingel}
For every $\lambda\in \Lambda^+$ and $x,y\in X_\lambda$, we have
\begin{enumerate}[(i)]
\item $\End_{\mathcal O}(T(x\cdot\lambda))\;\cong\;\End_{\mathcal O}(P(w_0xw_0^\lambda\cdot\lambda))$;
\item $(T(x\cdot\lambda):\Delta(y\cdot\lambda))=(T(x\cdot\lambda):\nabla(y\cdot\lambda))=(P(w_0xw_0^\lambda \cdot\lambda):\Delta(w_0yw_0^\lambda\cdot\lambda))$.
\end{enumerate}
\end{lemma}
\begin{proof}
The equation $(T(x\cdot\lambda):\Delta(y\cdot\lambda))=(T(x\cdot\lambda):\nabla(y\cdot\lambda))$ is an immediate consequence of the self-duality of tilting modules under $\vee$.
The other equalities follow from general principles of Ringel duality in~\cite[Section~6]{Ringel} applied to the case $\mathcal O_\lambda$ in~\cite{SoergelT, prinjective}.
The combinatorics of the corresponding equivalence of categories (where we identify $A_\lambda$-mod and $\mathcal O_\lambda$)
$$\Hom_{\mathcal O_\lambda}(T_\lambda,-):\mathcal O_\lambda^{\nabla}\to\mathcal O_\lambda^{\Delta},$$ which by construction maps tilting modules to projective modules, is for instance worked out in~\cite[Theorem~8.1]{dualities}.
Note that we can combine Soergel's combinatorial functor $\mathbb{V}_\lambda=\Hom_{\mathcal O_\lambda}(P(w_0\cdot\lambda),-)$, \cite[Struktursatz~9]{SoergelD} and Lemma~\ref{LemTrace} to give another proof of the isomorphism between the algebras $\End_{\mathcal O}(T(x\cdot\lambda))$ and $\End_{\mathcal O}(P(y\cdot\lambda))$.
\end{proof}
\subsection{Socle, radical and grading filtration}\label{SecSocRad}We review the two extremal Loewy filtrations, see also~\cite[Section~1.2]{Irving}. The {\em socle filtration},
$$0={\rm soc}_0(M)\subset {\rm soc}_1(M)\subset {\rm soc}_2(M)\subset\cdots\subset M,$$
is the filtration where ${\rm soc}_k(M)$ is the unique submodule of $M$ such that ${\rm soc}_k(M)/{\rm soc}_{k-1}(M)$ is the socle of $M/{\rm soc}_{k-1}(M)$.
The {\em radical filtration},
$$0\subset \cdots\subset {\rm rad}^2(M)\subset{\rm rad}^1(M)\subset {\rm rad}^0{M}=M,$$
is the filtration where ${\rm rad}^i(M)$ is the radical of ${\rm rad}^{i-1}(M)$.
With $d=\mathpzc{ll}(M)$, for any Loewy filtration
$$0=F_0M\subset F_1M\subset\cdots\subset F_{d-1}M\subset F_dM=M,\quad\;\;\mbox{we have}\qquad{\rm rad}^{d-i}(M)\subseteq F_iM\subseteq{\rm soc}_i(M).$$
Clearly, a module is rigid if and only if the socle and radical filtration coincide.
A Loewy filtration $F_\bullet M$ of a module $M$, with~$d=\mathpzc{ll}(M)$, is {\em balanced} if
$$F_iM/F_{i-1}M\;\cong\; F_{d-i+1}M/F_{d-i}M,\qquad \mbox{for all $1\le i\le d$.}$$
A finite dimensional algebra $A$ is `positively graded' if it has a grading $A=\bigoplus_{i\ge 0}A_i$, with~$A_0$ semisimple. Consider a finite dimensional graded module $M=\bigoplus_{i\in\mathbb{Z}}M_i$. Let $k$ be the minimal degree for which $M_{k}$ is non-zero. Then the filtration of $M$ with
$$F_iM=\bigoplus_{j< i+k}M_j,$$
is clearly a semisimple filtration and known as the {\em grading filtration}. The length of this filtration is the {\em graded length} $\mathpzc{gl}(M)$. By definition, $\mathpzc{ll}(M)\le \mathpzc{gl}(M)$.
\subsection{The Koszul grading}
In \cite[Theorem~1.1.3]{BGS}, it is proved that $A_\lambda$ has a Koszul grading.
The following lemma will therefore be relevant.
\begin{lemma}\cite[Proposition~2.4.1]{BGS}\label{LemBGS}
If a graded module $M$ over a Koszul algebra $A$ has simple socle (resp. simple top), then the socle filtration (resp. radical filtration) of $M$ coincides with the grading filtration.
In those cases, $\mathpzc{ll}(M)=\mathpzc{gl}(M)$.
Consequently, if $M$ has both simple top and socle, it is rigid.
\end{lemma}
For a $\mathbb{Z}$-graded vector space~$V$ and $i\in\mathbb{Z}$, we denote by $V\langle i\rangle$ the graded vector space which is equal to $V$ as an ungraded space, with grading
$$V\langle i\rangle_j= V_{j-i},\quad\mbox{for all $j\in\mathbb{Z}$}.$$
Projective and (dual) Verma modules admit graded lifts, see e.g.~\cite{BGS}. Furthermore, it is well-known (see e.g.~\cite{SHPO4}) that
\begin{equation}\label{glDelta}\mathpzc{ll} (\Delta(x\cdot\lambda))=\mathpzc{gl} (\Delta(x\cdot\lambda))=\ell(w_0x)+1,\quad\mbox{for all $x\in X_\lambda$.}\end{equation}
It is proved in~\cite{MO} that the tilting modules admit graded lifts, such that $\Delta(\mu)\hookrightarrow T(\mu)$ in \eqref{DefT} preserves the grading. We choose the normalisation of the grading such that $\Delta(\mu)$ has its top in degree $0$. Moreover, we have the following lemma.
\begin{lemma}\label{LemFiltT}
Consider $\lambda\in\Lambda^+$ and $x\in X_\lambda$. Every subquotient of any standard filtration of the graded module $T(x\cdot\lambda)$,
which is not isomorphic to~$\Delta(x\cdot\lambda)$, has the form $\Delta(y\cdot\lambda)\langle l\rangle$ with~$l<0$ and $y\in X_\lambda$ with~$y<x$.
\end{lemma}
\begin{proof}
The condition $y<x$ follows from the combination of equation~\eqref{eqBGG} and Lemma~\ref{LemRingel}(ii). The case $\lambda=0$ is \cite[Lemma~2.4]{MaTilt}.
The singular case can be obtained from the regular case by the combinatorics of graded (exact) translation out of the wall, see e.g.~\cite{Stroppel}, which implies
$$\theta_\lambda^{out}T(x\cdot\lambda)=T(xw_0^\lambda\cdot0)\langle \ell(w_0^\lambda)\rangle,
$$
see e.g.~\cite[equation~(8)]{SHPO4} and
\begin{displaymath}
\left(\theta_{\lambda}^{out}\Delta(y\cdot\lambda):
\Delta(yu\cdot0)\langle j\rangle\right)=\delta_{j,\ell(u)}\qquad \text{ for all }\quad u\in W_\lambda,
\end{displaymath}
with no other standard modules appearing in the filtration, see e.g.~\cite[Theorem~4.4]{dualities}.
\end{proof}
\section{Proofs}\label{SecProofs}
\begin{proof}[Proof of Theorem~\ref{Thm1}]
We choose $x\in X_\lambda$ and set $y:=w_0xw_0^\lambda\in X_\lambda$.
From Lemma~\ref{LemBGS}, we find (2)$\Rightarrow$(1) and (7)$\Rightarrow$(8). Note that, by self-duality, tilting modules have simple top if and only if they have simple socle. By definition projective covers have simple top.
The equivalence (3)$\Leftrightarrow$(6) follows from Lemma~\ref{LemRingel}(i).
The equivalence (5)$\Leftrightarrow$(6) is precisely \cite[Theorem~7.1]{Stroppel2}.
Now we prove (4)$\Leftrightarrow$(5). First we observe that $[\Delta(\lambda):L(y\cdot\lambda)]=1$ implies that $[\Delta(z\cdot\lambda):L(y\cdot\lambda)]\le 1$ for all $z\in X_\lambda$, as follows from $\Delta(z\cdot\lambda)\subset\Delta(w_0^\lambda\cdot\lambda)=\Delta(\lambda)$, see \eqref{eqBGG}. By BGG reciprocity, (5) is thus equivalent to
$$(P(y\cdot\lambda):\Delta(z\cdot\lambda))\le 1,\quad\mbox{for all $z\in X_\lambda$.}$$ By Lemma~\ref{LemRingel}(ii), the above is in turn equivalent to (4).
Now we prove (2)$\Leftrightarrow$(4). By the above paragraph, we know that (4) is equivalent to $(T(x\cdot\lambda):\nabla(w_0\cdot\lambda))=1$. Since $T(x\cdot\lambda)\in\mathcal O^{\Delta}_\lambda$, its socle consists of copies of $L(w_0\cdot\lambda)$. Furthermore, since $\Delta(w_0\cdot\lambda)=L(w_0\cdot\lambda)$, equation~\eqref{eqQH} implies that
$$\dim\Hom_{\mathcal O}(L(w_0\cdot\lambda),T(x\cdot\lambda))=(T(x\cdot\lambda):\nabla(w_0\cdot\lambda)).$$
This shows that (2) and (4) are equivalent.
Now we prove (7)$\Rightarrow$(5). Assume that $(P(y\cdot\lambda):\Delta(\lambda))=r>1$. It follows immediately from \cite[Theorem~6.5]{Humphreys} that we have a monomorphism
$$\Delta(\lambda)^{\oplus r}\hookrightarrow P(y\cdot\lambda),$$
with cokernel in $\mathcal O^{\Delta}$. In particular, $L(w_0\cdot\lambda)^{\oplus r}$ appears in the socle.
To prove that (8)$\Rightarrow$(5)$\Rightarrow$(7), we can just repeat the last two paragraphs of the proof of \cite[Corollary~7]{Irving}.
Finally we prove (1)$\Rightarrow$(2). By Lemma~\ref{LemFiltT} and equation~\eqref{glDelta}, we find that the first non-zero module in the grading filtration of $T(x\cdot\lambda)$ is $L(w_0\cdot\lambda)$ (as the socle of $\Delta(x\cdot\lambda)$) in degree~$\ell(w_0x)$. If the socle of $T(x\cdot\lambda)$ is not simple, the grading filtration and socle filtration thus differ, which concludes the proof.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{LemTrace}]
Note that $L(w_0\cdot0)\cong T(w_0\cdot0)$ is indeed the trace of $P(w_0\cdot0)$ in $\Delta(w_0\cdot0)$. For any $w\in W$, denote by $\theta_w$ the unique projective functor, see \cite{BG}, which maps $\Delta(0)\cong P(0)$ to $P(w\cdot0)$. From the fact that $\theta_w$ is left and right adjoint to $\theta_{w^{-1}}$ and the fact that $\theta_w P(w_0\cdot 0)$ is a direct sum of copies of $P(w_0\cdot0)$ it follows immediately that, if $K$ is the trace of $P(w_0\cdot0)$ in $M$, then $\theta_wK$ is the trace of $P(w_0\cdot0)$ in $\theta_wM$.
We thus find that $\theta_wL(w_0\cdot 0)$ is the trace of $P(w_0\cdot0)$ in $P(w\cdot0)$. The former is precisely $T(w_0w\cdot0)$, see e.g. \cite[Proposition~5.10]{dualities}. The statement for arbitrary integral dominant $\lambda$ follows similarly from the above and translation onto the wall.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{PropLL}]
We have
$$\mathpzc{ll} (T(x\cdot\lambda))\le\mathpzc{gl} (T(x\cdot\lambda))=2\ell(w_0x)+1,$$
see e.g. \cite{SHPO4}.
We will trace the unique simple constituent $L=L(x\cdot\lambda)$ in the semisimple filtrations of~$T=T(x\cdot\lambda)$. Consider an arbitrary semisimple filtration
$$0=T_0\subset T_1\subset T_2\subset \cdots\subset T_{k-1}\subset T_k=T.$$
Since $L$ is the top of the submodule $\Delta(x\cdot\lambda)\subset T$, see~\eqref{DefT}, this means that the $j$ for which $L$ is a submodule of $T_j/T_{j-1}$ satisfies
$$j\;\ge\;\mathpzc{ll}\Delta(x\cdot\lambda)\;=\; \ell(w_0x)+1. $$
Applying the duality functor $\vee$ to the filtration of $T=T^\vee$ yields a semisimple filtration
$$0=T'_0\subset T'_1\subset T'_2\subset \cdots\subset T'_{k-1}\subset T'_k=T,$$
with~$T'_i=(T/T_{k-i})^{\vee}$.
We can thus also conclude that
$$k-j+1\;\ge\; \ell(w_0x)+1.$$
In conclusion, we find
$k\ge 2\ell(w_0x)+1,$
which implies $\mathpzc{ll} (T(x\cdot\lambda))\ge 2\ell(w_0x)+1$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{LemHazi}]
This follows immediately from Lemma~\ref{LemFiltT}, equation~\eqref{eqBGG} and (graded) self-duality of tilting modules.
\end{proof}
\subsection*{Acknowledgement}
The author thanks Jim Humphreys for discussions, pointing out the gap in the literature concerning rigidity of tilting modules in $\mathcal O$ and bringing the paper \cite{Hazi} to his attention.
| {
"timestamp": "2017-10-11T02:02:05",
"yymm": "1709",
"arxiv_id": "1709.09764",
"language": "en",
"url": "https://arxiv.org/abs/1709.09764",
"abstract": "In this note we give an overview of rigidity properties and Loewy lengths of tilting modules in the BGG category O associated to a reductive Lie algebra. These results are well-known by several specialists, but seem difficult to find in the existing literature.",
"subjects": "Representation Theory (math.RT)",
"title": "Rigidity of tilting modules in category O"
} |
https://arxiv.org/abs/1910.11442

\title{The thresholding scheme for mean curvature flow and De Giorgi's ideas for minimizing movements}

\begin{abstract}
We consider the thresholding scheme and explore its connection to De Giorgi's ideas on gradient flows in metric spaces, here applied to mean curvature flow as the steepest descent of the interfacial area. The basis of our analysis is the observation by Esedo\u{g}lu and the second author that thresholding can be interpreted as a minimizing movements scheme for an energy that approximates the interfacial area. De Giorgi's framework provides an optimal energy dissipation relation for the scheme, in which we pass to the limit to derive a dissipation-based weak formulation of mean curvature flow. Although applicable in the general setting of arbitrary networks, here we restrict ourselves to the case of a single interface, which allows for a compact, self-contained presentation.
\end{abstract}

\section{Introduction and context}
The purpose of these notes is to draw a connection between De Giorgi's tools for
minimizing movements, that is, gradient flows in metric spaces, on the one
hand, and the very popular thresholding scheme for the flow of a hypersurface by its mean curvature,
on the other hand.
While we have developed this connection in the case of multiple phases
with surface energies and mobilities depending on the pair of phases,
as is relevant for grain growth in polycrystals, and when the notion
of viscosity solution is not available, we present our results
here in the simplest setting of two phases. Our
presentation is essentially self-contained.
\smallskip
What makes the evolution of the boundary $\partial\Omega$ of a set $\Omega$
by its mean curvature $H$ valuable for modeling in materials science
is that it is driven by the reduction of the (total) interfacial area of $\partial\Omega$,
which relies on the mean curvature $H$, the sum of the principal
curvatures, being the first variation of the interfacial area.
There is a more intimate connection between mean curvature flow (MCF) and the functional $E$ of interfacial area
of a configuration: MCF can formally be understood as a gradient flow of $E$.
We stress that a dynamical system that can be written as a gradient flow, that is,
a steepest descent in an energy landscape, does not just rely on the height function $E$,
but also on a notion of distance on configuration space, which is typically described by
a metric tensor $g$ in the sense of Riemannian geometry. In case of MCF, the tangent
space in some configuration $\Omega$ should be thought of as consisting of all normal
velocities $V$, i.e., functions on $\partial \Omega$, while the configuration-dependent metric tensor $g_\Omega$ is
given by the $L^2$-inner product on $\partial\Omega$.
\smallskip
Still formally, any gradient flow allows for a natural discretization in time.
Every step of the discretization comes in form of a variational problem, just involving
the functional $E$ and the induced distance $d$, cf.~(\ref{wg66}), but not the metric tensor $g$ and
the differential of $E$ -- it thus relies rather on the ``metric'', but not the
differential structure. Following De Giorgi, we call such a scheme a minimizing
movements scheme. We recall that, as in elementary differential geometry,
the induced distance $d$ on a Riemannian manifold $({\mathcal M},g)$ is defined
via $d^2(\chi_0,\chi_1)$ $:=\inf\{\int_0^1 g_{\chi_s}(\frac{d\chi}{ds},\frac{d\chi}{ds})ds\}$,
where the infimum is taken over all curves $[0,1]\ni s\mapsto\chi_s$ connecting
$\chi_0$ to $\chi_1$ (we use the letter $\chi$ because we think of a characteristic
function describing the configuration).
We note that in the Euclidean case, the Euler-Lagrange equation
of (\ref{wg66}) turns into the implicit Euler scheme for $\frac{d\chi}{dt}=-\mathop{\rm grad}E|_{\chi}$.
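Spelled out (an elementary computation added for convenience), in the case ${\mathcal M}=\mathbb{R}^N$ with smooth $E$, differentiating the functional in (\ref{wg66}) at its minimizer gives
$$0=\nabla_u\Big(\frac{1}{2h}|u-\chi^{n-1}|^2+E(u)\Big)\Big|_{u=\chi^n}
=\frac{\chi^n-\chi^{n-1}}{h}+\nabla E(\chi^n),$$
that is, the implicit Euler step $\chi^n=\chi^{n-1}-h\nabla E(\chi^n)$.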
\smallskip
However, this infinite-dimensional Riemannian structure making MCF
a gradient flow leads to a degenerate induced metric (i.e.~$d\equiv0$): It can be seen that the
infimum of $\int_0^1\int_{\partial\Omega_s}V_s^2 ds$
over all curves of configurations $[0,1]\ni s\mapsto\Omega_s$ with normal velocity
$[0,1]\ni s\mapsto V_s$,
connecting some given configurations $\Omega_0$ and $\Omega_1$, vanishes \cite{MichorMumford}.
\smallskip
Nonetheless, a minimizing movements scheme for MCF (predating the latter observation) has
been formulated by Almgren~et~al.~\cite{AlmgrenTaylorWang}, with $E(\Omega)$ being
the surface area of $\partial\Omega$ and with $d^2(\Omega_1,\Omega_0)$
$=4\int_{\Omega_1\triangle\Omega_0}{\rm dist}(\cdot,\partial\Omega_0)$.
Luckhaus~et~al.~\cite{LuckhausSturzenhecker} have established a (long-time) convergence
result for this scheme. This convergence result is conditional
in the sense that a condition like in (\ref{wg04}) has to be imposed.
\smallskip
Thresholding, cf.~(\ref{wg59}), is a very well-performing and widely used numerical scheme for MCF,
introduced by Osher~et~al.~\cite{BenceMerrimanOsher}. The convolution step,
which after spatial discretization can be carried out by the Fast Fourier Transform,
is also of low complexity. Right from the beginning, thresholding
has attracted the attention of analysts; since it obviously preserves the comparison principle
for MCF, it has been shown to converge to MCF in the sense of viscosity solutions
in the two-phase case \cite{Evans}.
\smallskip
Esedo\u{g}lu and the second author \cite{EsedogluOtto} realized that thresholding also respects the
gradient-flow structure of MCF, in the sense that it can be interpreted as a minimizing
movements scheme, cf.~Lemma \ref{Le2}. This was used in the multi-phase case
to extend thresholding to surface tensions and mobilities \cite{EsedogluSalvador}
that depend on the pair of grains, while keeping its low complexity.
It was also used by the present authors to
provide several types of convergence results; presently, all of them are
conditional in the sense of assumption (\ref{wg04}), in the tradition of
\cite{LuckhausSturzenhecker}.
\smallskip
The first result \cite{LauxOtto1} provided the same
limiting notion of solution for MCF as in \cite{LuckhausSturzenhecker}. However, this
weak notion of solution does not imply the dissipation inequality natural to a gradient flow.
It is Brakke's weak notion of solution for MCF that is based on a localization of
the dissipation inequality; in \cite{LauxOtto2}, we establish a (still conditional)
convergence result towards this inequality-based notion of solution.
\smallskip
For any gradient flow in a Riemannian context $({\mathcal M},g,E)$,
there is yet another notion of weak solution based on a single inequality,
namely $E(\chi(T))$ $+\int_0^T\frac{1}{2}g_{\chi}(\frac{d\chi}{dt},\frac{d\chi}{dt})
+\frac{1}{2}|{\rm grad} E_{|\chi}|^2dt$ $\le E(\chi(0))$. This elementary
observation is credited to De Giorgi; its appeal lies in the fact that
it is potentially more stable in limiting procedures because only lower semi-continuity
is needed (as provided by Propositions \ref{Pr2} and \ref{Pr3}).
The main result of this paper, Theorem \ref{Th}, precisely establishes
this inequality in the case of MCF, cf.~(\ref{wg62}).
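In the smooth Riemannian picture, De Giorgi's observation reduces to the following one-line computation (sketched here for orientation): along any curve, the chain rule, Cauchy-Schwarz and Young's inequality give
$$-\frac{d}{dt}E(\chi)=-g_{\chi}\Big({\rm grad}\,E|_{\chi},\frac{d\chi}{dt}\Big)
\le\frac{1}{2}g_{\chi}\Big(\frac{d\chi}{dt},\frac{d\chi}{dt}\Big)+\frac{1}{2}\big|{\rm grad}\,E|_{\chi}\big|^2,$$
with equality if and only if $\frac{d\chi}{dt}=-{\rm grad}\,E|_{\chi}$. Integrating in time, the reverse inequality -- which is exactly the single inequality above -- can therefore only hold along a solution of the gradient flow.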
\smallskip
One advantage of a minimizing movements scheme, cf.~(\ref{wg66}), lies in the fact that it
automatically comes with the a priori estimate
$E(\chi^N)+\sum_{n=1}^N\frac{1}{2h}d^2(\chi^n,\chi^{n-1})$ $\le E(\chi^0)$, which
is obtained by using $\chi^{n-1}$ as a competitor in (\ref{wg66}). In the limit $h\downarrow0$,
this inequality formally turns into
$E(\chi(T))$ $+\int_0^T\frac{1}{2}g_{\chi}(\frac{d\chi}{dt},\frac{d\chi}{dt})dt$ $\le E(\chi(0))$,
which misses the formally correct identity by a factor of $2$. On the level of the metric
structure, De Giorgi provides tools to capture the missing term
$\int_0^T\frac{1}{2}|{\rm grad} E_{|\chi}|^2dt$, see Lemma \ref{Le1}. We take
the proof from the monograph \cite{AmbrosioGigliSavare}.
\smallskip
As a consequence of these notions and tools of De Giorgi, our (conditional) convergence proof
for the thresholding scheme in fact is rather ``soft'', softer than \cite{LauxOtto1} which
relied on the notion of tilt excess and the fine structure of Caccioppoli sets,
and certainly softer than \cite{LuckhausSturzenhecker}
which relied on regularity theory for minimal surfaces. We believe that these tools
have a wider potential for geometric evolutions or non-linear PDE of gradient-flow type.
For the broader context and more references, we refer to \cite{LauxOtto1}.
\section{Main result and structure of proof}
Given an initial configuration, as described by its (Lebesgue-\-mea\-sur\-able)
characteristic function $\chi^0\colon\mathbb{R}^d\rightarrow\{0,1\}$,
and a time step size $h>0$,
the thresholding scheme iteratively produces configurations at time steps $n=1,2,\cdots$,
encoded by their characteristic functions $\chi^n$, via convolution and ``thresholding'':
\begin{align}\label{wg59}
\chi^n:=\left\{\begin{array}{cl}
1&\mbox{where}\;G_h*\chi^{n-1}>\frac{1}{2}\\
0&\mbox{else}\end{array}\right\},
\end{align}
where $G_h$ denotes the heat kernel at time $\frac{h}{2}$, that is,
\begin{align}\label{wg76}
G_h(z):=\frac{1}{\sqrt{2\pi h}^d}\exp(-\frac{|z|^2}{2h})
\end{align}
(like in stochastic analysis, we take $\frac{h}{2}$ so that $G_1$ is the standard Gaussian).
We interpolate piecewise constant in time:
\begin{align}\label{wg61}
\chi_h(t)=\chi^n\quad\mbox{for}\;t\in[nh,(n+1)h),
\quad\chi_h(t)=\chi^0\quad\mbox{for}\;t\le 0.
\end{align}
For simplicity, we pass from the whole space $\mathbb{R}^d$ to a torus as the spatial domain;
by rescaling, we may w.~l.~o.~g.~take the unit torus $[0,1)^d$.
We also restrict to the finite time horizon $T<\infty$ and $h\le 1$.
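To make the scheme concrete, here is a minimal NumPy sketch of one step (\ref{wg59}) on the unit torus. The discretization (a uniform periodic grid, convolution with $G_h$ realized as the Fourier multiplier $\exp(-2\pi^2h|k|^2)$) and all names are our own illustration, not taken from the paper.

```python
import numpy as np

def threshold_step(chi, h):
    """One step of the thresholding scheme (convolve with G_h, threshold at 1/2)
    on the unit torus [0,1)^d, discretized on a uniform n^d grid.

    chi : d-dimensional array with values in {0,1};  h : time-step size.
    G_h is the Gaussian of variance h, so on the torus it acts in Fourier
    space as the multiplier exp(-2*pi^2*h*|k|^2), k an integer frequency.
    """
    d, n = chi.ndim, chi.shape[0]
    # integer frequencies on the unit torus (fftfreq with spacing 1/n)
    freqs = np.meshgrid(*[np.fft.fftfreq(n, d=1.0 / n)] * d, indexing="ij")
    k2 = sum(f**2 for f in freqs)
    multiplier = np.exp(-2.0 * np.pi**2 * h * k2)
    # periodic convolution G_h * chi via FFT, then thresholding
    u = np.fft.ifftn(multiplier * np.fft.fftn(chi)).real
    return (u > 0.5).astype(float)
```

Consistently with the limit $V=-\frac{H}{2}$ derived below, a disc of radius $r$ then shrinks at the approximate rate $\dot r=-\frac{1}{2r}$, while a flat interface is stationary -- provided $\sqrt{h}$ is large compared to the grid spacing, since otherwise the discrete interface pins.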
\medskip
Our main result is the following convergence result, which is only a conditional one
since assumption (\ref{wg04}) on the energies $E_h$ defined below in (\ref{wg06}) presumably cannot be verified. It is the opposite
direction,
$c_0\int_{(0,T)\times[0,1)^d}|\nabla\chi|dt$
$\le\liminf_{h\downarrow0}$ $\int_0^T E_h(\chi_h(t))dt$,
that follows from (\ref{wg07}), see (\ref{wg84}) in Lemma \ref{Le3}.
Here and in the sequel, $|\nabla\chi|dt$ denotes the total variation of
the distribution $\nabla\chi$ in $(0,T)\times[0,1)^d$, provided the latter is a bounded measure.
This notation is justified since in the present case, $|\nabla\chi|dt$ is
(Lebesgue) equi-integrable in $t$ and thus admits a density $|\nabla\chi|$.
In the sequel, $\nu\in L^1(|\nabla\chi|dt)$ denotes the measure-theoretic normal
(characterized through the polar factorization $\nabla\chi=\nu|\nabla\chi|dt$
and $|\nu|=1$ $|\nabla\chi|dt$-almost everywhere).
\begin{theorem}\label{Th}
Given $\chi^0$ as above and such that $\nabla\chi^0$ is a bounded measure,
and a sequence $h\downarrow 0$; let $\chi_h$ be defined by (\ref{wg59}) and (\ref{wg61}).
Suppose that there exists a
$\chi\colon(0,T)\times[0,1)^d\rightarrow[0,1]$ such that
\begin{align}
\chi_h\rightharpoonup\chi\quad\mbox{in}\; L^1((0,T)\times[0,1)^d).\label{wg07}
\end{align}
Then we have $\chi\in\{0,1\}$ (Lebesgue)-a.~e. and $\nabla\chi$ is a bounded measure
which is equi-integrable in $t$.
If we assume in addition
\begin{align}
\limsup_{h\downarrow 0}\int_0^T E_h(\chi_h(t))dt\le
c_0\int_{(0,T)\times[0,1)^d}|\nabla\chi|dt,\label{wg04}
\end{align}
where $c_0:=\frac{1}{\sqrt{2\pi}}$,
then there exists $H\in L^2(|\nabla\chi|dt)$ with
\begin{align}\label{wg60}
\int_{(0,T)\times[0,1)^d}(\nabla\cdot\xi-\nu\cdot\nabla\xi\nu)|\nabla\chi|dt
=-\int_{(0,T)\times[0,1)^d} H\nu\cdot\xi|\nabla\chi|dt
\end{align}
for all $\xi\in C^\infty_0((0,T)\times[0,1)^d)^d$,
and $V$ $\in L^2$ $(|\nabla\chi|dt)$ with
\begin{align}\label{wg57bis}
\int_{[0,1)^d}\zeta(t=0)\chi^0dx&+\int_{(0,T)\times[0,1)^d}\partial_t\zeta\chi dxdt\nonumber\\
&+\int_{(0,T)\times[0,1)^d}\zeta V|\nabla\chi|dt=0
\end{align}
for all $\zeta\in C^\infty_0([0,T)\times[0,1)^d)$, such that
\begin{align}\label{wg62}
\limsup_{\tau\downarrow 0}&\frac{1}{\tau}
\int_{(T-\tau,T)\times[0,1)^d}|\nabla\chi|dt\nonumber\\
&+\int_{(0,T)\times[0,1)^d}\big(V^2+(\frac{H}{2})^2\big)|\nabla\chi|dt
\le\int_{[0,1)^d}|\nabla\chi^0|.
\end{align}
\end{theorem}
We note that in case of $\{\chi=1\}$ being smooth in time-space $[0,T]\times[0,1)^d$,
$|\nabla\chi|$ coincides with the surface measure, $\nu$ with the (inner) normal, and
$H$ and $V$ coincide with the mean curvature (with the convention
that convex sets have positively curved boundary) and normal velocity
(with the convention that growing sets have positive velocity), respectively. In
addition, (\ref{wg57bis}) yields that $\chi(t=0)=\chi^0$. Moreover,
expanding the square $V^2+(\frac{H}{2})^2$ $=(V+\frac{H}{2})^2-VH$,
and appealing to the classical formula $\frac{d}{dt}\int_{[0,1)^d}|\nabla\chi|$
$=\int_{[0,1)^d}VH|\nabla\chi|$ (which relies on the fact that mean curvature
describes the first variation of the surface area), we see that (\ref{wg62}) turns
into $\int_{(0,T)\times[0,1)^d}(V+\frac{H}{2})^2|\nabla\chi|dt\le 0$, and
thus MCF in form of $V=-\frac{H}{2}$ (the factor $\frac{1}{2}$ stems from
the normalization in (\ref{wg76})). Therefore, the inequality (\ref{wg62}) may
be considered a weak notion of MCF.
\medskip
In the sequel, we omit writing the time-space domain $(0,T)\times[0,1)^d$
when integrating the Lebesgue measure $dxdt$ or the limiting surface measure $|\nabla\chi|dt$.
However, the convolution $*$, for which we reserve the $z$-variable, is always w.~r.~t.~
$\mathbb{R}^d$.
\medskip
The next elementary lemma provides the necessary notions and results
on abstract minimizing movements schemes.
\begin{lemma}\label{Le1}
Let $({\mathcal M},d)$ be a compact metric space and $E\colon{\mathcal M}\rightarrow\mathbb{R}$
be continuous. Given $\chi^0\in{\mathcal M}$ and $h>0$ consider a sequence
$\{\chi^n\}_{n\in\mathbb{N}}$ satisfying
\begin{align}\label{wg66}
\chi^n\quad\mbox{minimizes}\quad\frac{1}{2h}d^2(u,\chi^{n-1})+E(u)\quad
\mbox{among all}\;u\in{\mathcal M}.
\end{align}
Then we have for all $t\in\mathbb{N}h$
\begin{align}\label{wg67}
\lefteqn{E(\chi(t))}\nonumber\\
&+\frac{1}{2}\int_0^t\big(\frac{1}{h^2}d^2(\chi(s+h),\chi(s))+|\partial E(u(s))|^2\big)ds
\le E(\chi^0).
\end{align}
Here $\chi(t)$ is the piecewise constant interpolation, cf.~(\ref{wg61}),
$u(t)$ is another interpolation (the ``variational interpolation'') satisfying
\begin{align}
\int_0^\infty\frac{1}{2h^2}d^2(u(t),\chi(t))dt&\le E(\chi^0)\label{wg69},\\
E(u(t))&\le E(\chi(t))\;\;\mbox{for all}\;t\ge 0,\label{wg70}
\end{align}
and $|\partial E(u)|$ is the ``metric slope'' defined through
\begin{align}\label{wg26}
|\partial E(u)|:=\limsup_{v:d(v,u)\rightarrow 0}\frac{(E(u)-E(v))_+}{d(v,u)}\in[0,\infty].
\end{align}
\end{lemma}
The next elementary but crucial lemma establishes that the thresholding scheme
is a minimizing movements scheme.
\begin{lemma}\label{Le2}
Expression (\ref{wg59}) satisfies (\ref{wg66}) provided we define
\begin{align}
{\mathcal M}&:=\{u\colon[0,1)^d\rightarrow[0,1]\;\mbox{measurable}\},\label{wg92}\\
E_h(u)&:=\frac{1}{\sqrt{h}}\int(1-u)\,G_h*u dx,\label{wg06}\\
d_h(u,u')&:=\big(2\sqrt{h}\int|G_\frac{h}{2}*(u-u')|^2dx\big)^\frac{1}{2}.\label{wg68}
\end{align}
Furthermore, $({\mathcal M},d_h)$ is a compact metric space and $E_h$ continuous.
\end{lemma}
We will mostly use (\ref{wg68}) in form of
\begin{align}\label{wg18}
\frac{1}{2h}d_h^2(u,u')=\frac{1}{\sqrt{h}}\int|G_\frac{h}{2}*(u-u')|^2dx.
\end{align}
\medskip
The first part of the next lemma provides compactness.
The second part contains the (only) way we use the convergence assumption (\ref{wg04});
loosely speaking, it ensures convergence of the (oriented) normal down to (spatial)
scales of $O(\sqrt{h})$. In particular, it rules out ghost interfaces.
Since it will also be used for the variational interpolation,
cf.~Lemma \ref{Le1}, it is formulated for a $[0,1]$-valued sequence $\{u_h\}_{h\downarrow0}$.
\begin{lemma}\label{Le3}
i) Consider a sequence $\{\chi_h\}_{h\downarrow0}$ of $\{0,1\}$-valued functions
on $(0,T)\times[0,1)^d$ that satisfies
\begin{align}\label{wg79}
{\rm ess\,sup}_{t\in(0,T)} E_h(\chi_h(t))
+\int_0^T\frac{1}{2h^2}d_h^2(\chi_h(t),\chi_h(t-h))dt\nonumber\\
\mbox{stays bounded as}\;h\downarrow0,
\end{align}
and that is piecewise constant in the sense of (\ref{wg61}).
Such a sequence is compact in $L^1((0,T)\times[0,1)^d)$; any (weak) limit
$\chi$ is such that $\nabla\chi$ is a bounded measure, equi-integrable in $t$, with
\begin{align}\label{wg84}
c_0\int|\nabla\chi|dt\le\liminf_{h\downarrow0}\int_0^TE_h(\chi_h(t))dt.
\end{align}
ii) Consider a sequence $\{u_h\}_{h\downarrow 0}$ of $[0,1]$-valued functions
on $(0,T)\times[0,1)^d$ and a $\{0,1\}$-valued function $\chi$ on
$(0,T)\times[0,1)^d$ that satisfies (\ref{wg07}) and (\ref{wg04})
(with $\chi_h$ replaced by $u_h$) and
\begin{align}\label{wg94}
{\rm ess\,sup}_{t\in(0,T)} E_h(u_h(t))\quad\mbox{stays bounded as}\;h\downarrow0.
\end{align}
Then, as measures on $(z,t,x)$-space, we have the weak convergences
\begin{align}
G_1(z)\frac{1}{\sqrt{h}}u_h(t,x)(1-u_h)(t,x-\sqrt{h}z)dx&dtdz\nonumber\\
\rightharpoonup G_1(z)(\nu\cdot z)_+|\nabla\chi|&dtdz,\label{wg11}\\
G_1(z)\frac{1}{\sqrt{h}}(1-u_h)(t,x)u_h(t,x-\sqrt{h}z)dx&dtdz\nonumber\\
\rightharpoonup G_1(z)(\nu\cdot z)_-|\nabla\chi|&dtdz.\label{wg12}
\end{align}
The test functions may even have polynomial growth in $z$.
\end{lemma}
The next two propositions are at the core and provide the link
between (\ref{wg67}) and (\ref{wg62}). Proposition \ref{Pr2} ensures that
the metric $d_h$, cf.~(\ref{wg68}), is strong enough to control
the right notion of energy of curves in configuration space.
Proposition \ref{Pr3} makes sure that it is not too strong so that
the metric slope $|\partial E_h|$, cf.~(\ref{wg26}), controls
the gradient of the limiting functional.
\begin{proposition}\label{Pr2}
Suppose that (\ref{wg07}) and the conclusion of
Lemma \ref{Le3} hold (with $u_h$ replaced by $\chi_h$).
Provided the l.~h.~s.~of (\ref{wg51}) is finite,
there exists $V\in L^2(|\nabla\chi|dt)$
that is the normal velocity in the sense of
\begin{align}\label{wg57}
\partial_t\chi=V|\nabla\chi|\quad\mbox{distributionally},
\end{align}
and that is dominated in the sense of
\begin{align}\label{wg51}
\liminf_{h\downarrow0}\int_0^T\frac{1}{2h^2}d_h^2(\chi_h(t+h),\chi_h(t))dt
\ge c_0\int V^2|\nabla\chi|dt.
\end{align}
\end{proposition}
\begin{proposition}\label{Pr3}
Suppose that the conclusions of Lemma \ref{Le3} ii) hold.
Then there exists $H\in L^2(|\nabla\chi|dt)$
that is the mean curvature in the sense of (\ref{wg60})
and that is dominated in the sense of
\begin{align}\label{wg74}
\liminf_{h\downarrow0}\int_0^T\frac{1}{2}|\partial E_h(u_h(t))|^2dt
\ge c_0\int (\frac{H}{2})^2|\nabla\chi|dt.
\end{align}
\end{proposition}
\section{Proofs}
We will repeatedly use the (parabolic) scaling of $G_h$, cf.~(\ref{wg76}),
\begin{align}\label{wg05}
G_h(z)=\frac{1}{\sqrt{h}^d}G_1(\frac{z}{\sqrt{h}})
\end{align}
and its semi-group property in form of
\begin{align}\label{wg38}
G_{h}*G_{h'}=G_{h+h'}\quad\mbox{in particular}\quad G_\frac{h}{2}*G_\frac{h}{2}=G_h.
\end{align}
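As a quick numerical aside (illustrative only, not part of the argument), the semi-group property can be checked in $d=1$ by discretizing the convolution: convolving centered Gaussians adds their variances.

```python
import math

# Illustrative check (d = 1) of the semi-group property G_h * G_{h'} = G_{h+h'}:
# compare a discretized convolution (G_h * G_{h'})(1) with G_{h+h'}(1).
def G(z, h):
    # heat kernel at time h, cf. the scaling G_h(z) = h^{-1/2} G_1(z / sqrt(h))
    return math.exp(-z * z / (2 * h)) / math.sqrt(2 * math.pi * h)

h, hp, dz = 0.3, 0.5, 1e-3
zs = [-10 + i * dz for i in range(20000)]
conv_at_1 = sum(G(1 - z, h) * G(z, hp) * dz for z in zs)  # (G_h * G_{h'})(1)
```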
The constant $c_0=\frac{1}{\sqrt{2\pi}}$ appears because of the identity
\begin{align}\label{wg03}
\int G_1(z)(z_1)_+dz=G^{d=1}_1(0)=c_0,
\end{align}
where $G^{d=1}_1(z_1):=\frac{1}{\sqrt{2\pi}}\exp(-\frac{z_1^2}{2})$
denotes the standard Gaussian in a single variable. Indeed, by the factorization
of the $d$-dimensional standard Gaussian into $G_1^{d=1}$ and the
$(d-1)$-dimensional one, and by the normalization of the latter, the
integral in (\ref{wg03}) reduces to $\int_0^\infty G_1^{d=1}(z_1)\,z_1dz_1$.
The formula then follows from writing $z_1 G_1^{d=1}=-\frac{d}{dz_1}G_1^{d=1}$.
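As a numerical aside (illustrative only), the value $c_0=\frac{1}{\sqrt{2\pi}}\approx0.3989$ in (\ref{wg03}) can be confirmed by a Riemann sum for the reduced one-dimensional integral:

```python
import math

# Riemann-sum check of \int_0^\infty G_1^{d=1}(z) z dz = G_1^{d=1}(0) = 1/sqrt(2 pi)
def g1(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

dz = 1e-4
integral = sum(i * dz * g1(i * dz) * dz for i in range(1, 200000))  # grid up to z = 20
c0 = 1 / math.sqrt(2 * math.pi)
```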
\medskip
{\sc Proof of Theorem \ref{Th}}.\nopagebreak
Note that Lemma \ref{Le2} allows us to make use of Lemma \ref{Le1},
so that we have (\ref{wg67}) with $(E,d,\chi,u)$ replaced by $(E_h,d_h,\chi_h,u_h)$.
We start with the l.~h.~s.~of (\ref{wg67}), for which we plainly have
\begin{align}\label{wg71}
E_h(\chi^0)\le c_0\int|\nabla\chi^0|.
\end{align}
Indeed, dropping the index $0$, this follows by making the l.~h.~s.~explicit
$\frac{1}{\sqrt{h}}\int G_h(z)(1-\chi)\chi(\cdot-z)dxdz$, which by (\ref{wg05})
and $\chi\in\{0,1\}$ coincides with
$\int G_1(z)\frac{1}{\sqrt{h}}(\chi-\chi(\cdot-\sqrt{h}z))_-dxdz$.
It remains to appeal to the mean-value inequality
$\int\frac{1}{\sqrt{h}}(\chi-\chi(\cdot-\sqrt{h}z))_-dx$ $\le\int(z\cdot\nu)_-|\nabla\chi|$
and to (\ref{wg03}).
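As an illustration (outside the formal argument), (\ref{wg71}) and the consistency of $E_h$ can be observed numerically in $d=1$: for $\chi^0$ the indicator of an interval on the torus, $\int|\nabla\chi^0|=2$ and $E_h(\chi^0)$ is close to $2c_0$ for small $h$. The convolution below is computed spectrally, assuming the Fourier multiplier $\exp(-h|k|^2/2)$ of $G_h$.

```python
import numpy as np

# Spectral computation of E_h(chi) = (1/sqrt(h)) \int (1-chi) G_h*chi dx on [0,1)
def E(chi, h, n):
    k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
    Gchi = np.real(np.fft.ifft(np.fft.fft(chi) * np.exp(-h * k**2 / 2)))
    return np.mean((1 - chi) * Gchi) / np.sqrt(h)

n = 8192
x = np.arange(n) / n
chi0 = ((x >= 0.3) & (x < 0.7)).astype(float)  # two interfaces: perimeter 2
c0 = 1 / np.sqrt(2 * np.pi)
```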
\medskip
Note that because of (\ref{wg67}) and (\ref{wg71}), (\ref{wg79}) is satisfied.
Hence we may apply Lemma \ref{Le3} i), which yields
$\chi\in\{0,1\}$ a.~e.~and that $\nabla\chi$
is a bounded measure which is equi-integrable in $t$.
By Lemma \ref{Le3} ii), in view
of the theorem's assumption (\ref{wg04}), we obtain (\ref{wg11}) \& (\ref{wg12})
with $u_h$ replaced by $\chi_h$, so that we may apply Proposition \ref{Pr2}.
We now argue that (\ref{wg69}) \& (\ref{wg70})
imply that (\ref{wg07}) \& (\ref{wg04})
hold with $\chi_h$ replaced by $u_h$, so that we may use Proposition \ref{Pr3} also for $u_h$.
Indeed, (\ref{wg04}) for $u_h$ follows immediately from (\ref{wg04}) for $\chi_h$
and (\ref{wg70}). We now turn to (\ref{wg07}); since the sequence
$\{u_h\}_{h\downarrow0}$ is $[0,1]$-valued, it always admits a subsequence with a
weak limit $u$, so that it remains to argue that $u=\chi$, w.~l.~o.~g.~assuming
that the entire sequence converges. We momentarily fix $h_0>0$ and note that by
(\ref{wg38}) together with Jensen's inequality we have for all $h\le 2 h_0$ that
\begin{align*}
\lefteqn{\int|G_{h_0}*(u_h-\chi_h)|^2dxdt\le\int|G_\frac{h}{2}*(u_h-\chi_h)|^2dxdt}\nonumber\\
&\stackrel{(\ref{wg18})}{=}\frac{1}{2\sqrt{h}}\int_0^Td_h^2(u_h(t),\chi_h(t))dt
\stackrel{(\ref{wg69})}{\le}h\sqrt{h}E_h(\chi^0)
\stackrel{(\ref{wg71})}{\le}c_0h\sqrt{h}\int|\nabla\chi^0|,
\end{align*}
so that by lower semicontinuity of the l.~h.~s.~under weak convergence we
obtain $\int|G_{h_0}*(u-\chi)|^2dxdt=0$. From letting $h_0$ tend to zero
we obtain the desired $u=\chi$.
\medskip
Momentarily setting $\rho(t)$ $:=E_h(\chi_h(t))$ $+\frac{1}{2}\int_0^t$
$\big(\frac{1}{h^2}d_h^2(\chi_h(s),\chi_h(s-h))$ $+|\partial E_h(u_h(s))|^2\big)ds$,
we note that by definition $\rho(t) =\rho(nh) + \delta(t)$, for
$t\in\big[nh, (n+1) h\big)$, where $\delta(t) := \frac12\int_{nh}^t \big(\frac{1}{h^2}d_h^2(\chi_h(s),\chi_h(s-h))$ $+|\partial E_h(u_h(s))|^2\big)ds $.
By (\ref{wg67}), $\int_0^T \delta(t)dt\le h E_h(\chi^0)$. Hence if we multiply (\ref{wg67}) in form of
$\rho(nh)\le E_h(\chi^0)$ with $\eta(nh)-\eta((n+1)h)$ for some
non-increasing $\eta\in C_0^\infty([0,T))$, we obtain
$\int_0^\infty(-\frac{d\eta}{dt})\rho dt$ $\le \big( \eta(0)+ h \sup \big| \frac{d \eta}{dt}\big| \big) E_h(\chi^0) $.
By an integration by parts and with the choice
$\eta(t) =\max\{\min\{\frac{T-t}{\tau},1\},0\}$, this turns into
\begin{align*}
\lefteqn{\frac{1}{\tau}\int_{T-\tau}^TE_h(\chi_h(t))dt}\nonumber\\
&+\frac{1}{2}\int_0^{T-\tau} \!\! \big(\tfrac{1}{h^2}d_h^2(\chi_h(t),\chi_h(t-h))
+|\partial E_h(u_h(t))|^2\big)dt\le \big(1+\tfrac{h}{\tau} \big) E_h(\chi^0).
\end{align*}
Passing to the limit $h\downarrow0$, for the first l.~h.~s.~term, we appeal to (\ref{wg84}) in Lemma \ref{Le3}
with $(0,T)$ replaced by $(T-\tau,T)$.
For the second l.~h.~s.~term, we apply (\ref{wg51}) (note that we may
extend the integral down to 0 because of the second item in (\ref{wg61}))
and (\ref{wg74}), both with $(0,T)$ replaced by $(0,T-\tau)$. For the r.~h.~s.~term,
we use (\ref{wg71}). Summing up, we obtain
\begin{align*}
\lefteqn{\frac{c_0}{\tau}\int_{(T-\tau,T)\times[0,1)^d}|\nabla\chi|dt}\nonumber\\
&+c_0\int_{(0,T-\tau)\times[0,1)^d}\big(V^2+(\frac{H}{2})^2\big)|\nabla\chi|dt
\le c_0\int|\nabla\chi^0|.
\end{align*}
Dividing by $c_0$ and letting $\tau\downarrow0$ yields (\ref{wg62}).
\medskip
Finally, we argue why (\ref{wg57}) is sufficient to infer (\ref{wg57bis}).
Indeed, by the trivial extension of $\chi_h$ to $t\le 0$, cf.~(\ref{wg61}),
the assumptions (\ref{wg07}) \& (\ref{wg04}) extend to $(-T,T)$, where
for (\ref{wg04}) we appeal to (\ref{wg71}). Likewise,
the l.~h.~s.~integral in (\ref{wg51}) extends to $(-T,T)$. Hence (\ref{wg57})
holds distributionally on $(-T,T)\times[0,1)^d$, which turns into (\ref{wg57bis})
because of $\chi=\chi^0$ for $t<0$.
\bigskip
{\sc Proof of Lemma \ref{Le1}}.
We reproduce the proof of \cite[Theorem 3.1.4 \& Lemma 3.1.3]{AmbrosioGigliSavare}.
We start with the definition of the variational interpolation $u$.
Since by assumption, $({\mathcal M},d)$ is compact and $E$ continuous,
for any $n\in\mathbb{N}$ and any $t\in((n-1)h,nh]$, there exists $u(t)$ that
minimizes
\begin{align}\label{wg96}
\frac{d^2(u,\chi^{n-1})}{2(t-(n-1)h)}+E(u)\quad\mbox{among}\;u\in{\mathcal M}.
\end{align}
W.~l.~o.~g.~we may assume that $u(nh)=\chi^n$, cf.~(\ref{wg66}),
so that $u$ is indeed an interpolation
of $\{\chi^n\}_{n\in\mathbb{N}}$. Since by comparison with $u=\chi^{n-1}$
we have $E(u(t))\le E(\chi^{n-1})$, (\ref{wg70}) follows immediately from
the way we defined the piecewise constant interpolation, cf.~(\ref{wg61}).
\medskip
Fixing $n\in\mathbb{N}$ and introducing
\begin{align}\label{wg99}
e(t):=\min_{u\in{\mathcal M}}\big(\frac{d^2(u,\chi^{n-1})}{2(t-(n-1)h)}+E(u)\big),
\quad(n-1)h<t\le nh,
\end{align}
we now establish the two crucial inequalities
\begin{align}\label{wg98}
\lefteqn{\frac{d^2(u(s),\chi^{n-1})}{2(s-(n-1)h)(t-(n-1)h)}
\le\frac{e(s)-e(t)}{t-s}}\nonumber\\
&\le\frac{d^2(u(t),\chi^{n-1})}{2(s-(n-1)h)(t-(n-1)h)},
\quad(n-1)h<s<t\le nh.
\end{align}
For notational simplicity we consider the case $n=1$; for any $s,t>0$ we have
by definitions (\ref{wg96}) and (\ref{wg99})
\begin{align*}
e(s)&=\frac{1}{2s}d^2(u(s),\chi^0)+E(u(s))\\
&\le \frac{1}{2s}d^2(u(t),\chi^0)+E(u(t))
=(\frac{1}{2s}-\frac{1}{2t})d^2(u(t),\chi^0)+e(t).
\end{align*}
Writing $\frac{1}{2s}-\frac{1}{2t}=\frac{t-s}{2st}$, this gives the upper bound
in (\ref{wg98}) after division by $t-s>0$.
Exchanging the roles of $s$ and $t$, we likewise get the lower one.
\medskip
We now argue that
\begin{align}\label{wg97}
|\partial E(u(t))|\le\frac{d(u(t),\chi^{n-1})}{t-(n-1)h}
\quad\mbox{for}\;t\in((n-1)h,nh].
\end{align}
Again, for notational simplicity we consider $n=1$ and give ourselves
a $v\in{\mathcal M}$. By the characterizing property (\ref{wg96})
of $u(t)$ we have $\frac{1}{2t}d^2(u(t),\chi^0)+E(u(t))$
$\le\frac{1}{2t}d^2(v,\chi^0)+E(v)$, so that
$E(u(t))-E(v)$ $\le\frac{1}{2t}(d(v,\chi^0)-d(u(t),\chi^0))(d(v,\chi^0)+d(u(t),\chi^0))$.
By the triangle inequality, this implies
\begin{align*}
E(u(t))-E(v)\le d(v,u(t))\frac{1}{t}\big(d(u(t),\chi^0)+\frac{1}{2}d(v,u(t))\big),
\end{align*}
so that (\ref{wg97}) follows from definition (\ref{wg26}) of the metric slope.
\medskip
We now may conclude on (\ref{wg67}). By telescoping and
according to the piecewise constant interpolation,
it is sufficient to establish
\begin{align*}
E(\chi^n)+\frac{1}{2h}d^2(\chi^n,\chi^{n-1})+\int_{(n-1)h}^{nh}\frac{1}{2}|\partial E(u(s))|^2ds
\le E(\chi^{n-1}),
\end{align*}
which according to (\ref{wg97}) follows from
\begin{align*}
E(\chi^n)+\frac{1}{2h}d^2(\chi^n,\chi^{n-1})
+\int_{(n-1)h}^{nh}\frac{d^2(u(s),\chi^{n-1})}{2(s-(n-1)h)^2}ds\le E(\chi^{n-1}),
\end{align*}
and with help of (\ref{wg99}) may be rewritten as
\begin{align}\label{so01}
e(nh)+\int_{(n-1)h}^{nh}\frac{d^2(u(s),\chi^{n-1})}{2(s-(n-1)h)^2}ds
\le E(\chi^{n-1}).
\end{align}
Here comes the argument for (\ref{so01}): We first learn from (\ref{wg98}) that
\begin{align}\label{so02}
((n-1)h,nh]\ni s\mapsto d^2(u(s),\chi^{n-1})\quad\mbox{is monotone increasing}
\end{align}
and thus continuous outside of a countable set of $s$'s. We then learn that
$e$ is locally Lipschitz continuous on $((n-1)h,nh]$ and differentiable
where (\ref{so02}) is continuous. In particular, at (Lebesgue)
almost every time point $s$ we have $\frac{de}{dt}(s)$ $=-\frac{d^2(u(s),\chi^{n-1})}{2(s-(n-1)h)^2}$.
Integrating this relationship from some $t\in((n-1)h,nh]$ to $nh$ we obtain
$e(nh)$ $+\int_t^{nh}\frac{d^2(u(s),\chi^{n-1})}{2(s-(n-1)h)^2}ds$ $\le e(t)$.
Using the obvious $e(t)\le E(\chi^{n-1})$, cf.~(\ref{wg99}), and letting $t\downarrow(n-1)h$
we obtain (\ref{so01}) by monotone convergence.
\medskip
We finally turn to (\ref{wg69}). According to (\ref{so02}) we have
$d^2(u(t),\chi^{n-1})$ $\le d^2(\chi^n,\chi^{n-1})$ for $t\in((n-1)h,nh]$ and thus
$\int_0^\infty d^2(u(t),\chi(t))dt$ $\le\int_0^\infty d^2(\chi(t),\chi(t-h))dt$,
so that (\ref{wg69}) follows from (\ref{wg67}).
\bigskip
{\sc Proof of Lemma \ref{Le2}}.
By the definitions (\ref{wg06}) and (\ref{wg18}), the latter in
conjunction with (\ref{wg38}), we have
\begin{align*}
\frac{1}{2h}d_h^2(u,\chi^{n-1})+E_h(u)=
\langle u-\chi^{n-1},u-\chi^{n-1}\rangle +\langle 1-u,u\rangle,
\end{align*}
where we momentarily introduced the bilinear form $\langle u,u'\rangle$
$:=\frac{1}{\sqrt{h}}\int u$ $ G_h*u'dx$. Since this form is symmetric, we may rewrite
the r.~h.~s.~as $\langle u,1-2\chi^{n-1}\rangle$ $+\langle\chi^{n-1},\chi^{n-1}\rangle$,
so that
\begin{align*}
\frac{1}{2h}d_h^2(u,\chi^{n-1})+E_h(u)=
\frac{1}{\sqrt{h}}\int u (1-2G_h*\chi^{n-1})+C,
\end{align*}
where $C:=\langle\chi^{n-1},\chi^{n-1}\rangle$ does not depend on $u$.
It is now obvious that (\ref{wg59}) minimizes $\frac{1}{2h}d_h^2(u,\chi^{n-1})+E_h(u)$
among all $u\in{\mathcal M}$, cf.~(\ref{wg92}).
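As an illustration (outside the proof), the following one-dimensional sketch assumes that the thresholding update (\ref{wg59}) takes its usual form $\chi^n={\mathbf 1}_{\{G_h*\chi^{n-1}\ge\frac{1}{2}\}}$ and checks numerically that it minimizes the $u$-dependent part of the functional among competitors in ${\mathcal M}$:

```python
import numpy as np

# One thresholding step chi1 = 1_{G_h * chi0 >= 1/2} minimizes the linear
# functional u -> \int u (1 - 2 G_h * chi0) dx pointwise among u in [0,1].
def gauss_convolve(u, h):
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=1.0 / u.size)
    return np.real(np.fft.ifft(np.fft.fft(u) * np.exp(-h * k**2 / 2)))

n = 256
x = np.arange(n) / n
chi0 = (np.abs(x - 0.5) < 0.2).astype(float)
h = 1e-3
chi1 = (gauss_convolve(chi0, h) >= 0.5).astype(float)

def objective(u):  # u-dependent part, up to the constant prefactor 1/sqrt(h)
    return np.mean(u * (1 - 2 * gauss_convolve(chi0, h)))
```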
\medskip
It remains to argue that the metric space $({\mathcal M},d_h)$ is compact
and $E_h$ continuous. Both follow from the fact that $d_h$ metrizes
weak convergence on ${\mathcal M}\subset L^2([0,1)^d)$. The latter can be
seen as follows: In terms of Fourier series, we have
$\frac{1}{2\sqrt{h}}d_h^2(u,u')$
$=\sum_{k\in2\pi\mathbb{Z}^d}\exp(-\frac{h|k|^2}{2})$ $|{\mathcal F}(u-u')|^2(k)$,
and note $|{\mathcal F}(u-u')|^2(k)$ $\le\int(u-u')^2dx$ $\le 1$.
Hence by dominated convergence,
$d_h(u_n,u)\rightarrow 0$ is equivalent to ${\mathcal F}(u_n-u)(k)\rightarrow 0$
for all $k\in2\pi\mathbb{Z}^d$, which by the $L^2$-boundedness of $\{u_n\}_{n\uparrow\infty}$
$\subset{\mathcal M}$ is equivalent to weak convergence.
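As a discrete sanity check (illustrative only, $d=1$), assuming the Fourier multiplier $\exp(-\frac{h}{2}\frac{|k|^2}{2})$ of $G_\frac{h}{2}$, Parseval's identity reproduces the representation $\frac{1}{2\sqrt{h}}d_h^2(u,u')=\sum_k\exp(-\frac{h|k|^2}{2})|{\mathcal F}(u-u')|^2(k)$:

```python
import numpy as np

# Compare d_h^2 = 2 sqrt(h) \int |G_{h/2}*(u-u')|^2 dx with its Fourier-series form.
n = 512
rng = np.random.default_rng(1)
u, up = rng.random(n), rng.random(n)
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
h = 1e-3
du_hat = np.fft.fft(u - up) / n  # Fourier series coefficients of u - u'
conv = np.fft.ifft(n * du_hat * np.exp(-(h / 2) * k**2 / 2))  # G_{h/2}*(u-u')
d2_conv = 2 * np.sqrt(h) * np.mean(np.abs(conv)**2)
d2_fourier = 2 * np.sqrt(h) * np.sum(np.exp(-h * k**2 / 2) * np.abs(du_hat)**2)
```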
\bigskip
{\sc Proof of Lemma \ref{Le3}}.
\newcounter{Le3S}
\refstepcounter{Le3S}
{\sc Step} \arabic{Le3S}.\label{Le3S1}\refstepcounter{Le3S}
Some useful inequalities on $E_h$ and $d_h$.
We claim for any $[0,1]$-valued function $u$ of space:
\begin{align}
\int|u-G_h*u|dx
&\le 2\sqrt{h}E_h(u),\label{wg85}\\
E_{h_0}(u)&\le E_{h}(u)\quad\mbox{for}\;h_0\in\mathbb{N}^2h,\label{wg86}
\end{align}
which we claim combine to
\begin{align}\label{wg93}
\int(u-G_{h_0}*u)^2dx \le 4\sqrt{h_0}E_{h}(u)\quad\mbox{for all}\;h_0\ge h.
\end{align}
We also claim for any pair of $\{0,1\}$-valued functions $\chi,\chi'$ of space
\begin{align}
\int|\chi-\chi'|dx&\le \frac{1}{2\sqrt{h}}d_h^2(\chi,\chi')
+2\sqrt{h}\big(E_h(\chi)+E_h(\chi')\big).\label{wg83}
\end{align}
We first tackle (\ref{wg85}); by Jensen's inequality in form of
$|u-G_h*u|(x)$ $\le\int G_h(z)|u(x)-u(x-z)|dz$, and by the definition (\ref{wg06})
of $E_h$ which together with the symmetry of $G_h*$ yields
$2\sqrt{h}E_h(u)$ $=\int((1-u)G_h*u$ $+uG_h*(1-u))dx$, (\ref{wg85}) follows
from the elementary inequality
\begin{align}\label{wg95}
|u-u'|\le (1-u)u'+u(1-u')\quad\mbox{for}\;u,u'\in[0,1].
\end{align}
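For completeness, we note that (\ref{wg95}) follows by distinguishing cases: in case $u\ge u'$ we have
\begin{align*}
(1-u)u'+u(1-u')-|u-u'|=2u'(1-u)\ge0,
\end{align*}
while the case $u\le u'$ follows by exchanging the roles of $u$ and $u'$.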
\medskip
We now turn to (\ref{wg86}) which we (iteratively) establish in the more general form of
\begin{align}\label{wg91}
\sqrt{h_0}E_{h_0}\le\sqrt{h}E_{h}
+\sqrt{h'}E_{h'}\quad\mbox{provided}\;\sqrt{h_0}=\sqrt{h}+\sqrt{h'}.
\end{align}
Indeed, by definition (\ref{wg06}) and the scaling (\ref{wg05})
we have $\sqrt{h}E_h(u)$ $=\int G_1(z)(1-u)(x)u(x-\sqrt{h}z)dxdz$
and by a change of variables in $x$, $\sqrt{h'}E_{h'}(u)$
$=\int G_1(z)(1-u)(x-\sqrt{h}z)u(x-(\sqrt{h}+\sqrt{h'})z)dxdz$.
Hence (\ref{wg91}) follows from the elementary inequality
\begin{align*}
(1-u)u''\le (1-u)u'+(1-u')u''\quad\mbox{for}\;u,u',u''\in[0,1].
\end{align*}
This is equivalent to $u'(u+u''-1)$ $\le uu''$, which because of $u'\in[0,1]$ and $uu''\ge0$
follows from $u+u''-1\le uu''$. The latter is equivalent to $u''(1-u)\le 1-u$,
which holds because of $u,u''\in[0,1]$.
\medskip
We now turn to the upgrade (\ref{wg93}). We first observe that the
l.~h.~s.~is monotone increasing in $h_0$, as can be seen by the Fourier representation
$\sum_{k}(1-\exp(-\frac{h_0|k|^2}{2}))^2|{\mathcal F} u(k)|^2$, where ${\mathcal F}u(k)$,
$k\in2\pi\mathbb{Z}^d$, denotes the Fourier series of $u$ and $\exp(-\frac{h_0|k|^2}{2})$
is the Fourier transform of $G_{h_0}$. Now given $h_0\ge h$,
we write $\sqrt{h_0}=N\sqrt{h}-s$ with $N\in\mathbb{N}$ and $s\in[0,\sqrt{h})$. By the
above monotonicity we have
$\int(u-G_{h_0}*u)^2$ $\le\int(u-G_{N^2h}*u)^2$ $\le\int|u-G_{N^2h}*u|$,
to which we first apply (\ref{wg85})
with $h$ replaced by $N^2h$ and second apply (\ref{wg86}) with $N^2h$ playing the role of $h_0$.
Hence we end up with $\int(u-G_{h_0}*u)^2$ $\le 2N\sqrt{h}E_h(u)$, which because
of $N\sqrt{h}$ $\le\sqrt{h_0}+\sqrt{h}$ turns into (\ref{wg93}).
\medskip
We finally address (\ref{wg83}), which according to the definitions (\ref{wg06}) \& (\ref{wg68})
of $E_h$ and $d_h$ and the simple estimate (\ref{wg85}) follows from integrating the following
inequality in $x$, appealing to the symmetry of $G_h*$,
\begin{align*}
|\chi-\chi'|\le(\chi-\chi')G_h*(\chi-\chi')
&+|\chi -G_h*\chi|+|\chi' -G_h*\chi'|.
\end{align*}
Writing $|\chi-\chi'| = (\chi-\chi')^2 = (\chi-\chi')G_h*(\chi-\chi') +(\chi-\chi') (\chi-G_h*\chi) +(\chi'-\chi) (\chi'-G_h*\chi')$, we see that the inequality relies on $(\chi-\chi') (\chi-G_h*\chi) \le |\chi-G_h*\chi|$ and on the same inequality with the roles of $\chi$ and $\chi'$ exchanged.
\medskip
{\sc Step} \arabic{Le3S}.\label{Le3S1bis}\refstepcounter{Le3S}
Modulus of continuity in time:
We claim that for every $\{0,1\}$-valued function $\chi$ of time-space
that is piecewise constant in time, cf.~(\ref{wg61}), and any time shift $s\in[0,1]$ we have
\begin{align}\label{wg89}
I(s)\le C_0\left\{\begin{array}{ccc}
\frac{s}{\sqrt{h}}&\mbox{for}&s\le h\\
2\sqrt{h} &\mbox{for}&h\le s\le\sqrt{h}\\
4s &\mbox{for}&\sqrt{h}\le s
\end{array}\right\}\le 4C_0\sqrt{s},
\end{align}
where in this step of the proof, we use the abbreviation
\begin{align}
I(s)&:=\int_{(s,T)\times[0,1)^d}|\chi(t,x)-\chi(t-s,x)|dxdt,\nonumber\\
C_0&:=\int_h^T\frac{1}{2h^2}d_h^2(\chi(t),\chi(t-h))dt + 4\int_0^TE_h(\chi(t))dt.\label{wg90}
\end{align}
Indeed, for $s\ge h$, we use (\ref{wg83}) with $(\chi,\chi')$ $=(\chi(t),\chi(t-s))$
and integrate in $t\in(s,T)$ to obtain
\begin{align}\label{wg87}
I(s)\le\int_s^T\frac{1}{2\sqrt{h}}d_h^2(\chi,\chi(\cdot-s))dt
+4\sqrt{h}\int_0^TE_h(\chi)dt
\end{align}
where we write $\chi(\cdot-s)$ for the time-shifted function $(t,x)\mapsto\chi(t-s,x)$.
We first use this to treat (\ref{wg89}) in case of $s\le h$: By the piecewise constant interpolation,
cf.~(\ref{wg61}), we have $I(s)\le \frac{s}{h}I(h)$,
into which we insert (\ref{wg87}) for $s=h$ in form of $I(h)\le C_0\sqrt{h}$.
We now treat (\ref{wg89}) in case of $h\le s\le\sqrt{h}$, and first restrict
ourselves to $s=Nh$ with $N\in\mathbb{N}$
in order to use the triangle inequality for $d_h$ in form of
$d_h^2(\chi,\chi(\cdot-s))$ $\le N\sum_{n=1}^Nd_h^2(\chi(\cdot-(n-1)h),\chi(\cdot-nh))$
so that we obtain from (\ref{wg87})
\begin{align}\label{wg88}
I(s)&\le(\frac{s}{h})^2\int_h^T\frac{1}{2\sqrt{h}}d_h^2(\chi,\chi(\cdot-h))dt
+4\sqrt{h}\int_0^TE_h(\chi)dt\nonumber\\
&\stackrel{(\ref{wg90})}{\le} C_0\max\{\frac{s^2}{\sqrt{h}},\sqrt{h}\}= C_0\sqrt{h}.
\end{align}
For the unrestricted range of $h\le s\le\sqrt{h}$, we write $s=Nh+s'$ with $N\in\mathbb{N}$ and
$s'\in[0,h)$, use $I(s)\le I(Nh)+I(s')$, and
appeal to (\ref{wg88}) for the first contribution and to (\ref{wg89}) in the
previously treated case of $s'\le h$ for the second contribution.
Finally, for (\ref{wg89}) in the remaining case of $s\ge\sqrt{h}$, we write $s=N\sqrt{h}+s'$
with $N\in\mathbb{N}$ and $s'\in[0,\sqrt{h})$, use $I(s)\le N I(\sqrt{h})+I(s')$, and
appeal to the previously treated case of (\ref{wg89}) for both terms.
\medskip
{\sc Step} \arabic{Le3S}.\label{Le3S3}\refstepcounter{Le3S}
Proof of the compactness statement. From (\ref{wg93}) and thanks
to the first part of our assumed bound (\ref{wg79}), we learn that
$G_{h_0}*\chi_h$ is close to $\chi_h$ in $L^\infty((0,T), L^2([0,1)^d))$
(and thus in $L^1((0,T)\times[0,1)^d)$), as $h_0\downarrow 0$,
uniformly in $h\downarrow0$. Hence it remains to argue for fixed $h_0>0$
that $\{G_{h_0}*\chi_h\}_{h\downarrow0}$ is compact in $L^1$. Because of
the convolution in space, and of the equi-integrability following
from $G_{h_0}*\chi_h$ $\in [0,1]$,
this follows from a modulus of continuity in time in $L^1$
that is uniform in $h\downarrow0$. Thanks to our assumption (\ref{wg79}),
this holds for $\chi_h$ itself by Step \ref{Le3S1bis}; it transmits to $G_{h_0}*\chi_h$
by Jensen's inequality.
\medskip
{\sc Step} \arabic{Le3S}.\label{Le3S3bis}\refstepcounter{Le3S}
Before establishing the exact inequality (\ref{wg84}), which will be done
at the end of Step \ref{Le3S5},
it is convenient to first argue that $\nabla\chi$ is a bounded measure,
equi-integrable in $t$, under the mere assumption (\ref{wg94}).
We focus on $\partial_1\chi$, give
ourselves a $\zeta\in C^\infty_0((0,T)\times[0,1)^d)$ and note that
as a consequence of (\ref{wg03}) we have
\begin{align*}
-c_0\partial_1\zeta
&=\lim_{h\downarrow0}\frac{1}{\sqrt{h}}\int_{z_1>0}(\zeta-\zeta(\cdot+\sqrt{h}z))G_1(z)dz\nonumber\\
&\stackrel{(\ref{wg05})}{=}\lim_{h\downarrow0}
\frac{1}{\sqrt{h}}\int_{z_1>0}(\zeta-\zeta(\cdot+z))G_h(z)dz
\end{align*}
uniformly in time-space, where we write $\zeta(\cdot+z)$ for the space-shifted function
$(t,x)\mapsto\zeta(t,x+z)$.
Hence it follows from (\ref{wg07}) that
\begin{align*}
-c_0\int\partial_1\zeta\chi dxdt
=\lim_{h\downarrow0}\int_{z_1>0}\zeta \frac{1}{\sqrt{h}}(\chi_h-\chi_h(\cdot-z))G_h(z) dxdtdz
\end{align*}
and thus
\begin{align*}
c_0\big|\int\partial_1\zeta\chi dxdt\big|
\le\int_0^T\sup_x|\zeta|dt\liminf_{h\downarrow0}
\mathop{\rm ess\,sup}_t\frac{1}{\sqrt{h}}\int|\chi_h-G_h*\chi_h|dx.
\end{align*}
According to (\ref{wg85}), the second r.~h.~s.~factor is estimated by
$2\mathop{\rm ess\,sup}_t$ $ E_h(\chi_h)$, the $\liminf_{h\downarrow0}$ of which is bounded
by our assumption (\ref{wg94}).
\medskip
{\sc Step} \arabic{Le3S}.\label{Le3S4}\refstepcounter{Le3S}
Turning to (\ref{wg11}) \& (\ref{wg12}), it is convenient to have
the following equi-integrability of the non-negative function
\begin{align*}
\rho_h(z,t,x):=G_1(z)\frac{1}{\sqrt{h}}\big(&(1-u_h)(t,x)u_h(t,x-\sqrt{h}z)\nonumber\\
&+u_h(t,x)(1-u_h)(t,x-\sqrt{h}z)\big)
\end{align*}
in the (non-compact) variables $t$ and $z$ in the sense of
\begin{align*}
\int\exp(\frac{3|z|^2}{8})\rho_h(z,t,x)dxdz\le 2^{d+2}E_h(u_h(t)),
\end{align*}
cf.~(\ref{wg94}). Indeed, we first observe that $G_4(z)$
$=\frac{1}{2^d}\exp(\frac{3|z|^2}{8})G_1(z)$, so that by the scaling (\ref{wg05})
we obtain $\int\exp(\frac{3|z|^2}{8})\rho_h dxdz$ $=\frac{2^d}{\sqrt{h}}$
$\int\big((1-u_h)$ $G_{4h}*u_h$ $+u_hG_{4h}*(1-u_h)\big)dx$, so that by symmetry of $G_{4h}$
and the definition (\ref{wg06}) of $E_{4h}$,
$\int\exp(\frac{3|z|^2}{8})\rho_h dxdz$ $=2^{d+2}E_{4h}(u_h)$,
so that it remains to appeal to (\ref{wg86}).
\medskip
{\sc Step} \arabic{Le3S}.\label{Le3S5}\refstepcounter{Le3S}
Proof of (\ref{wg11}) \& (\ref{wg12}).
We focus on (\ref{wg11}); according to Step \ref{Le3S4}, it is sufficient
to treat bounded continuous test functions $\zeta(z,t,x)$, which by linearity
and splitting into positive and negative part we may assume to be $[0,1]$-valued.
The statement splits into the local lower bound
\begin{align}\label{wg01}
\liminf_{h\downarrow 0}&
\int\zeta G_1(z) \frac{1}{\sqrt{h}}u_h(1-u_h)(\cdot-\sqrt{h}z)dxdtdz\nonumber\\
&\ge\int\zeta G_1(z)(\nu\cdot z)_+|\nabla\chi|dtdz
\end{align}
and the global upper bound
\begin{align}\label{wg02}
\limsup_{h\downarrow 0}&
\int G_1(z)\frac{1}{\sqrt{h}}u_h(1-u_h)(\cdot-\sqrt{h}z)dxdtdz\nonumber\\
&\le\int G_1(z)(\nu\cdot z)_+|\nabla\chi|dtdz.
\end{align}
Indeed, splitting a given test function $\zeta=1-(1-\zeta)$,
appealing to linearity and using (\ref{wg01})
with $\zeta$ replaced by $1-\zeta\in[0,1]$, we obtain also
the local upper bound.
\medskip
We note that the global upper bound (\ref{wg02}) is nothing but
our assumption (\ref{wg04}) (with $\chi_h$ replaced by $u_h$):
For the l.~h.~s.~this follows from
the scaling (\ref{wg05}), the symmetry of $G_h*$, and
the definition (\ref{wg06}) of $E_h$. For the r.~h.~s.~this follows from (\ref{wg03}).
\medskip
Hence it remains to establish (\ref{wg01}) by a typical l.~s.~c.~argument:
By Fatou's Lemma, it is enough to establish for fixed $z$
\begin{align*}
\liminf_{h\downarrow 0}
\int\zeta\frac{1}{\sqrt{h}}u_h(1-u_h)(\cdot-\sqrt{h}z)dxdt
\ge\int\zeta(\nu\cdot z)_+|\nabla\chi|dt.
\end{align*}
Since by Step \ref{Le3S3bis}, we already know that $\nabla\chi$ is a bounded
measure, we have $(\nu\cdot z)_+|\nabla\chi|=(\partial_z\chi)_+$, where $\partial_z:=z\cdot\nabla$
denotes the directional derivative. Hence by the definition
of the positive part of a measure, and by the equi-integrability in $t$ established
in Step \ref{Le3S4}, it is enough to establish for any
non-negative $\zeta\in C_0^\infty((0,T)\times[0,1)^d)$
\begin{align*}
\liminf_{h\downarrow 0}
\int\zeta\frac{1}{\sqrt{h}}u_h(1-u_h)(\cdot-\sqrt{h}z)dxdt
\ge-\int\partial_z\zeta \chi dxdt.
\end{align*}
Since by assumption (\ref{wg07}), the r.~h.~s.~is the limit of
\begin{align*}
\int\frac{1}{\sqrt{h}}(\zeta-\zeta(\cdot+\sqrt{h}z)) u_h
=\int\zeta\frac{1}{\sqrt{h}}(u_h-u_h(\cdot-\sqrt{h}z)),
\end{align*}
the statement follows from the elementary inequality
$u-u'$ $\le u(1-u')$ that is valid for any $u,u'\in[0,1]$.
\medskip
We note that this argument for (\ref{wg01}) did not involve the extra assumption
(\ref{wg04}) and thus applies also under the assumptions of part i)
of the lemma. Statement (\ref{wg01}) applied to $\zeta=1$ yields (\ref{wg84})
(for the same reason, given above, that (\ref{wg02}) is a mere reformulation of (\ref{wg04})).
\bigskip
{\sc Proof of Proposition \ref{Pr2}}.\nopagebreak
Note that by definition (\ref{wg18}) of $d_h$ and (\ref{wg38}) we have
\begin{align*}
\lefteqn{\int_h^T\frac{1}{2h^2}d_h^2(\chi_h(t),\chi_h(t-h))dt}\nonumber\\
&=\int_{(h,T)\times[0,1)^d}\frac{1}{h\sqrt{h}}|G_\frac{h}{2}*(\chi_h-\chi_h(\cdot-h))|^2dxdt,
\end{align*}
which motivates introducing the non-negative (bounded) ``dissipation measure''
on $(0,T)\times[0,1)^d$ (after possibly passing to a subsequence)
\begin{align}\label{wg35}
\mu:=\lim_{h\downarrow 0}\frac{1}{h\sqrt{h}}|G_\frac{h}{2}*(\chi_h-\chi_h(\cdot-h))|^2.
\end{align}
In fact, we shall establish (\ref{wg51}) in the localized form of
\begin{align}\label{wg53}
c_0V^2|\nabla\chi|dt\le\mu.
\end{align}
\medskip
We now give an outline of the proof, which amounts to a better quantification
of Step \ref{Le3S1bis} in the proof of Lemma \ref{Le3}.
In deriving this estimate on the distribution $\partial_t\chi$,
we work on an intermediate time scale, which we fix to be an (eventually small) fraction
of the characteristic spatial scale:
\begin{align}\label{wg44}
\tau:=\alpha\sqrt{h}\quad\mbox{for some fixed}\;\alpha\in(0,\infty).
\end{align}
We consider the corresponding increments and their positive and negative parts
\begin{align}\label{wg40}
\delta\chi:=\chi_h-\chi_h(\cdot-\tau)\quad\mbox{and}\quad
\delta\chi_\pm:=\max\{0,\pm\delta\chi\}.
\end{align}
Using $|\delta\chi| = \delta\chi_+ + \delta\chi_-$ we then write
\begin{align}\label{wg54}
\frac{1}{\sqrt{h}}|\delta\chi|=\frac{1}{\sqrt{h}}\big(&
\delta\chi_+ G_h*\delta\chi_+ +\delta\chi_+ G_h*(1-\delta\chi_+)\nonumber\\
+&\delta\chi_- G_h*\delta\chi_- +\delta\chi_- G_h*(1-\delta\chi_-)\big).
\end{align}
In Step \ref{Pr1S1} we will show, in the sense of closeness between distributions,
\begin{align*}
\frac{1}{\sqrt{h}}|\delta\chi|\approx\frac{1}{\sqrt{h}}\big(
\delta\chi G_h*\delta\chi +\delta\chi_+ G_h*(1-\delta\chi_+)
+\delta\chi_- G_h*(1-\delta\chi_-)\big).
\end{align*}
The main idea is to estimate the first r.~h.~s.~by the dissipation measure $\mu$
(in Step \ref{Pr1S2}) and to estimate the two last contributions by
the surface measure $|\nabla\chi|dt$ (in Step \ref{Pr1S3}).
However, this just suffices to show that $\partial_t\chi$
is absolutely continuous w.~r.~t. $|\nabla\chi|dt$, as is carried out at
the beginning of Step \ref{Pr1S4} on the level of $O(\alpha)$ as $\alpha\downarrow0$.
In order to retrieve (\ref{wg53}), we need the finer estimate in Step \ref{Pr1S3}
and look at level $O(\alpha^2)$.
\medskip
\newcounter{Pr1S}
\refstepcounter{Pr1S}
{\sc Step} \arabic{Pr1S}.\label{Pr1S1}\refstepcounter{Pr1S}
We claim that the mixed term vanishes distributionally:
\begin{align}\label{wg33}
\lim_{h\downarrow0}\frac{1}{\sqrt{h}}(\delta\chi_+G_h*\delta\chi_-+\delta\chi_-G_h*\delta\chi_+)
=0.
\end{align}
Spelling out the $z$-integral, we want to show that the distributional limit of
\begin{align}\label{wg34}
\int G_h(z)\frac{1}{\sqrt{h}}\big(\delta\chi_+\delta\chi_-(\cdot-z)
+\delta\chi_-\delta\chi_+(\cdot-z)\big)dz
\end{align}
vanishes. Fixing some unit vector $\nu_0$, we split the expression into
\begin{align}\label{wg31}
\int_{\nu_0\cdot z\ge 0} G_h(z)\frac{1}{\sqrt{h}}\big(\delta\chi_+\delta\chi_-(\cdot-z)
+\delta\chi_-\delta\chi_+(\cdot-z)\big)dz
\end{align}
and the analogous expression on $\{\nu_0\cdot z\le 0\}$.
We note that by definition of (\ref{wg40}),
we have $\delta\chi_+=\chi_h(1-\chi_h)(\cdot-\tau)$ and
$\delta\chi_-=\chi_h(\cdot-\tau)(1-\chi_h)$ and thus
$\delta\chi_+\delta\chi_-(\cdot-z)$
$=\chi_h(1-\chi_h)(\cdot-\tau)\chi_h(\cdot-\tau-z)(1-\chi_h)(\cdot-z)$
$\le(1-\chi_h)(\cdot-\tau)\chi_h(\cdot-\tau-z)$ and likewise
$\delta\chi_-\delta\chi_+(\cdot-z)=\chi_h(\cdot-\tau)(1-\chi_h)\chi_h(\cdot-z)(1-\chi_h)(\cdot-\tau-z)$
$\le(1-\chi_h)\chi_h(\cdot-z)$. Here, with a slight abuse of notation, $\chi(\cdot-\tau-z) (t,x) = \chi(t-\tau, x-z)$. Hence the distributional limit of (\ref{wg31})
is dominated by the one of
\begin{align*}
\int_{\nu_0\cdot z\ge 0} G_h(z)\frac{1}{\sqrt{h}}\big(
&(1-\chi_h)(\cdot-\tau)\chi_h(\cdot-\tau-z)\nonumber\\
+&(1-\chi_h)\chi_h(\cdot-z)\big)dz.
\end{align*}
Since the first contribution differs from the second one just by a (vanishing) time shift,
which we may put into the continuous test function,
the distributional limit is equal to the weak limit of
\begin{align*}
2\int_{\nu_0\cdot z\ge 0} G_h(z)\frac{1}{\sqrt{h}}(1-\chi_h)\chi_h(\cdot-z)dz,
\end{align*}
provided the latter exists.
Appealing to the scaling (\ref{wg05}),
according to (\ref{wg12}) in Lemma \ref{Le3} which we test with the
characteristic function of the closed set $\{\nu_0\cdot z\ge 0\}$,
the weak limit of this term is dominated by the measure
$2\int_{\nu_0\cdot z\ge0}G_1(z)(\nu\cdot z)_-|\nabla\chi|dtdz$.
Treating the contribution of $\{\nu_0\cdot z\le0\}$ in a similar way (exchanging the roles
of $\chi$ and $1-\chi$), the weak limit of that contribution is dominated by
$2\int_{\nu_0\cdot z\le0}G_1(z)(\nu\cdot z)_+|\nabla\chi|dtdz$.
\medskip
Hence we have shown that the weak limit $\lambda\ge 0$ of (\ref{wg34})
satisfies
\begin{align}\label{wg63}
\lambda\le 2\Big(\int_{\nu_0\cdot z\ge0}G_1(\nu\cdot z)_-dz
+\int_{\nu_0\cdot z\le0}G_1(\nu\cdot z)_+dz\Big)|\nabla\chi|dt.
\end{align}
In particular, we have $\lambda\le 4c_0|\nabla\chi|dt$, cf.~(\ref{wg03}), so that
there is a $\theta\in L^1(|\nabla\chi|dt)$ with $\lambda=\theta|\nabla\chi|dt$,
which allows us to rewrite (\ref{wg63}) as
\begin{align*}
\theta\le2\int_{\nu_0\cdot z\ge0}G_1(\nu\cdot z)_-dz
+2\int_{\nu_0\cdot z\le0}G_1(\nu\cdot z)_+dz\\
|\nabla\chi|dt-\mbox{a.~e.}\;\mbox{and for all}\;\nu_0\in\mathbb{R}^d.
\end{align*}
An elementary separability argument allows us to exchange the order of the two quantifiers,
so that we may choose $\nu_0=\nu$, obtaining $\theta\le0$ $|\nabla\chi|dt$-a.~e.~
and thus $\lambda\le 0$, yielding (\ref{wg33}).
\medskip
{\sc Step} \arabic{Pr1S}.\label{Pr1S2}\refstepcounter{Pr1S}
We claim that in a distributional sense
\begin{align}\label{wg37}
\limsup_{h\downarrow 0}\frac{1}{\sqrt{h}}\delta\chi G_h*\delta\chi\le\alpha^2\mu.
\end{align}
Indeed, by definition (\ref{wg35}), we may split this into
\begin{align}
\lim_{h\downarrow 0}\frac{1}{\sqrt{h}}
\big(\delta\chi G_h*\delta\chi-|G_\frac{h}{2}*\delta\chi|^2\big)=0\quad\mbox{and}\label{wg36bis}\\
\limsup_{h\downarrow 0}\Big(\frac{1}{\sqrt{h}}|G_\frac{h}{2}*\delta\chi|^2
-\alpha^2\frac{1}{h\sqrt{h}}|G_\frac{h}{2}*(\chi_h-\chi_h(\cdot-h))|^2\Big)
\le 0.\label{wg36}
\end{align}
We start with (\ref{wg36}) and assume w.~l.~o.~g.~that $\tau=Nh$ for some $N\in\mathbb{N}$
so that by telescoping
and the Cauchy-Schwarz inequality in $n$
\begin{align*}
\lefteqn{\frac{1}{\sqrt{h}}|G_\frac{h}{2}*(\chi_h-\chi_h(\cdot-\tau))|^2}\nonumber\\
&\le N\sum_{n=0}^{N-1}\frac{1}{\sqrt{h}}|G_\frac{h}{2}*(\chi_h(\cdot-nh)-\chi_h(\cdot-(n+1)h))|^2.
\end{align*}
Appealing to $N=\frac{\alpha}{\sqrt{h}}$, the r.~h.~s.~may be rewritten as
\begin{align*}
\alpha^2\frac{1}{N}\sum_{n=0}^{N-1}
\frac{1}{h\sqrt{h}}|G_\frac{h}{2}*(\chi_h(\cdot-nh)-\chi_h(\cdot-(n+1)h))|^2.
\end{align*}
Note that this is an average of time shifts of $\alpha^2
\frac{1}{h\sqrt{h}}|G_\frac{h}{2}*(\chi_h-\chi_h(\cdot-h))|^2$;
because of $Nh=O(\sqrt{h})=o(1)$ all these time shifts are small,
so that the (non-negative) expression has the same (bounded) weak limit as
$\alpha^2\frac{1}{h\sqrt{h}}|G_\frac{h}{2}*(\chi_h-\chi_h(\cdot-h))|^2$ itself.
This yields (\ref{wg36}).
\medskip
We now turn to (\ref{wg36bis}). By the semi-group property (\ref{wg38})
and the symmetry of $G_\frac{h}{2}*$,
we have for a smooth test function $\zeta$
\begin{align*}
\int\zeta\big(\delta\chi G_h*\delta\chi-|G_\frac{h}{2}*\delta\chi|^2\big)
=-\int[\zeta,G_\frac{h}{2}*]\delta\chi\,G_\frac{h}{2}*\delta\chi,
\end{align*}
where $[\zeta,G_\frac{h}{2}*]$ denotes the commutator of multiplying
with $\zeta$ and convolving with $G_\frac{h}{2}$.
Hence by the boundedness of $\frac{1}{\sqrt{h}}|G_\frac{h}{2}*\delta\chi|^2$
as a sequence of measures, which follows from (\ref{wg36}), it is enough
to establish
\begin{align*}
\lim_{h\downarrow 0}\frac{1}{\sqrt{h}}\int|[\zeta,G_\frac{h}{2}*]\delta\chi|^2=0.
\end{align*}
We spell out the integrand:
\begin{align*}\big([\zeta,G_\frac{h}{2}*]\delta\chi\big)(t,x)
=\int G_\frac{h}{2}(z)(\zeta(t,x)-\zeta(t,x-z))\delta\chi(t,x-z)dz,
\end{align*}
so that $|[\zeta,G_\frac{h}{2}*]\delta\chi|(t,x)$
$\le\sup|\nabla\zeta|\int G_\frac{h}{2}(z)|z||\delta\chi(t,x-z)|dz$
and thus
\begin{align*}
\frac{1}{\sqrt{h}}\int|[\zeta,G_{\frac{h}{2}}*]\delta\chi|^2dx
\le\frac{1}{\sqrt{h}}\big(\sup|\nabla\zeta|\int G_\frac{h}{2}(z)|z|dz\big)^2\int|\delta\chi|^2dxdt.
\end{align*}
By the scaling (\ref{wg05}), the r.~h.~s.~is $O(\sqrt{h})$ and thus vanishing.
\medskip
{\sc Step} \arabic{Pr1S}.\label{Pr1S3}\refstepcounter{Pr1S}
For given unit vector $\nu_0$ and $V_0\in(0,\infty)$ we claim that in a distributional sense
\begin{align}\label{wg55}
\lefteqn{\limsup_{h\downarrow0}\frac{1}{\sqrt{h}}\Big(
\delta\chi_+\,G_h*(1-\delta\chi_+)+\delta\chi_-\,G_h*(1-\delta\chi_-)}\nonumber\\
&-2\int_{z\cdot\nu_0>\alpha V_0}G_1(z)dz|\delta\chi|\Big)
\le2\int_{0\le z\cdot\nu_0\le \alpha V_0}G_1(z)|z\cdot\nu||\nabla\chi|dt.
\end{align}
We split this into three steps: 1) The l.~h.~s.~may be substituted according to
\begin{align}\label{wg47}
\lim_{h\downarrow 0}
\frac{1}{\sqrt{h}}\Big(&
\delta\chi_\pm\,G_h*(1-\delta\chi_\pm)\nonumber\\
&-\int_{z\cdot\nu_0\ge0}G_h(z)|\delta\chi_\pm-\delta\chi_\pm(\cdot-z)|dz\Big)=0.
\end{align}
2) The integrand, which is a second-order difference, satisfies the two inequalities
\begin{align}\label{wg39}
\lefteqn{|\delta\chi_+-\delta\chi_+(\cdot-z)|+|\delta\chi_--\delta\chi_-(\cdot-z)|}\nonumber\\
&\le\left\{\begin{array}{c}
|\chi_h-\chi_h(\cdot-z)|+|\chi_h(\cdot-\tau)-\chi_h(\cdot-\tau-z)|\\
|\delta\chi|+|\delta\chi(\cdot-z)|
\end{array}\right\},
\end{align}
where the first inequality is ``space-like'' and we use it on the set
$\{0\le z\cdot\nu_0\le\tau V_0\}$, while the second one is ``time-like''
and we use it on the complement $\{z\cdot\nu_0>\tau V_0\}$.
3) We finally argue that
\begin{align}
\limsup_{h\downarrow0}&\frac{1}{\sqrt{h}}\int_{0\le z\cdot\nu_0\le\tau V_0}G_h(z)
\big(|\chi_h-\chi_h(\cdot-z)|+|\chi_h(\cdot-\tau)\nonumber\\
&-\chi_h(\cdot-z-\tau)|\big)dz
\le2\int_{0\le z\cdot\nu_0\le\alpha V_0}G_1(z)|z\cdot\nu||\nabla\chi|dtdz,\label{wg43}\\
\lim_{h\downarrow0}&\frac{1}{\sqrt{h}}\Big(\int_{z\cdot\nu_0>\tau V_0}G_h(z)
\big(|\delta\chi|+|\delta\chi(\cdot-z)|\big)dz\nonumber\\
&-2\int_{z\cdot\nu_0>\alpha V_0} G_1(z)dz
|\delta\chi|\Big)=0.\label{wg42}
\end{align}
\medskip
We start with (\ref{wg39}); the second inequality is obvious by the
triangle inequality in form of $|\delta\chi_\pm-\delta\chi_\pm(\cdot-z)|$
$\le\delta\chi_\pm+\delta\chi_\pm(\cdot-z)$ and by $\delta\chi_++\delta\chi_-=|\delta\chi|$.
In view of the definition (\ref{wg40}), the first inequality in (\ref{wg39})
follows from the elementary inequality
\begin{align}\label{wg41}
|(a-a')_+-(b-b')_+|&+|(a-a')_--(b-b')_-|\nonumber\\&\le|a-b|+|a'-b'|,
\end{align}
which can be seen by distinguishing two cases: Case 1): $(a-a')(b-b')\ge 0$,
w.~l.~o.~g. by symmetry $(a,a',b,b')$ $\leadsto(-a,-a',-b,-b')$ we may assume $a-a',b-b'\ge 0$,
in which case (\ref{wg41}) turns into the obvious $|(a-a')-(b-b')|$ $\le|a-b|+|a'-b'|$.
Case 2): $(a-a')(b-b')\le 0$, by the same symmetry
we may assume $a-a'\ge0\ge b-b'$, in which
case (\ref{wg41}) turns into the obvious $(a-a')+(b'-b)\le|a-b|+|a'-b'|$.
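As a quick sanity check, the elementary inequality (\ref{wg41}) can be confirmed on random data; the sampling range and count below are arbitrary.

```python
import random

def pos(t):  # positive part (t)_+
    return max(t, 0.0)

random.seed(0)
for _ in range(100_000):
    a, ap, b, bp = (random.uniform(-5.0, 5.0) for _ in range(4))
    # |(a-a')_+ - (b-b')_+| + |(a-a')_- - (b-b')_-|  <=  |a-b| + |a'-b'|,
    # where (x)_- = max(-x, 0) = pos(-x)
    lhs = abs(pos(a - ap) - pos(b - bp)) + abs(pos(ap - a) - pos(bp - b))
    assert lhs <= abs(a - b) + abs(ap - bp) + 1e-12
```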
\medskip
We now argue for (\ref{wg43}) \& (\ref{wg42}). Putting the vanishing shift $\tau$ in the time
variable on the continuous test function, (\ref{wg43}) follows once we argue that
\begin{align}\label{wg46}
\limsup_{h\downarrow0}&\frac{1}{\sqrt{h}}\int_{0\le z\cdot\nu_0\le\tau V_0}G_h(z)
|\chi_h-\chi_h(\cdot-z)|dz\nonumber\\
&\le\int_{0\le z\cdot\nu_0\le\alpha V_0}G_1(z)|z\cdot\nu||\nabla\chi|dtdz,
\end{align}
which because of non-negativity implies that the l.~h.~s.~admits a limit
as a measure on $(0,T)\times [0,1)^d$.
Appealing to scaling (\ref{wg05})
in form of $G_h(z)dz=G_1(\frac{z}{\sqrt{h}})d\frac{z}{\sqrt{h}}$ and noting
that $z\cdot\nu\le\tau V_0$ is equivalent to $\frac{z}{\sqrt{h}}\cdot\nu\le\alpha V_0$,
cf.~(\ref{wg44}), (\ref{wg46}) follows from taking the sum of (\ref{wg11}) and (\ref{wg12}),
appealing to (\ref{wg95}),
testing with the characteristic function of the closed set $\{0\le z\cdot\nu_0\le\alpha V_0\}$,
and integrating out $z$.
\medskip
We now turn to (\ref{wg42}) and note that it follows from
\begin{align*}
\lim_{h\downarrow0}\frac{1}{\sqrt{h}}
\int_{z\cdot\nu_0>\tau V_0}G_h(z)(|\delta\chi(\cdot-z)|-|\delta\chi|)dz=0,
\end{align*}
which testing with $\zeta\in C_0^\infty((0,T)\times[0,1)^d)$ assumes the form
\begin{align*}
\lim_{h\downarrow0}\frac{1}{\sqrt{h}}
\int_{z\cdot\nu_0>\tau V_0}G_h(z)(\zeta(\cdot+z)-\zeta)|\delta\chi|dxdtdz=0.
\end{align*}
This holds, since the integral can be estimated by
\begin{align}\label{wg49}
\lefteqn{\sup|\nabla\zeta|\int\frac{|z|}{\sqrt{h}}G_h(z)|\delta\chi|dxdtdz}\nonumber\\
&\stackrel{(\ref{wg05}),(\ref{wg40})}{=}
\sup|\nabla\zeta|\int|z|G_1(z)dz\int|\chi_h-\chi_h(\cdot-\tau)|dxdt,
\end{align}
and the last contribution vanishes in the limit since $\tau$ does, and since
$\{\chi_h\}_{h\downarrow0}$ is compact
in $L^1((0,T)\times[0,1)^d)$, cf.~part i) of Lemma \ref{Le3}.
\medskip
We finally address (\ref{wg47}) and focus on the $+$-part. We first argue that
\begin{align}\label{wg50}
\lim_{h\downarrow 0}\frac{1}{\sqrt{h}}\big(\delta\chi_+ G_h*(1-\delta\chi_+)
-(1-\delta\chi_+)G_h*\delta\chi_+\big)=0.
\end{align}
Spelling out the $z$-integral, and using that $G_h$ is even, this follows from
\begin{align}\label{wg64}
\lefteqn{\lim_{h\downarrow 0}\frac{1}{\sqrt{h}}
\int G_h(z)\delta\chi_+(1-\delta\chi_+)(\cdot-z)dz}\nonumber\\
&=\lim_{h\downarrow 0}\frac{1}{\sqrt{h}}\int G_h(z)\delta\chi_+(\cdot+z)(1-\delta\chi_+)dz.
\end{align}
Since the second function differs from the first just by a spatial shift of $z$,
the limits coincide provided the l.~h.~s.~limit is finite, which by the non-negativity
of the functions follows if their integral remains bounded.
The l.~h.~s.~integral indeed remains bounded since $\delta\chi_+(1-\delta\chi_+)(\cdot-z)$
$\le\chi_h(1-\chi_h)(\cdot-z)+(1-\chi_h)(\cdot-\tau)\chi_h(\cdot-\tau-z)$, which follows from $\delta \chi_+ = \chi_h(1-\chi_h)(\cdot-\tau)$,
and since the integral of the first summand remains bounded by (\ref{wg11}),
whereas the integral of the second summand is oblivious to the (vanishing) time shift
and thus remains bounded by (\ref{wg12}).
\medskip
Equipped with (\ref{wg50}),
we now may substitute $\frac{1}{\sqrt{h}}\delta\chi_+ G_h*(1-\delta\chi_+)$
by $\frac{1}{2}\frac{1}{\sqrt{h}}\big(\delta\chi_+ G_h*(1-\delta\chi_+)
+(1-\delta\chi_+) G_h*\delta\chi_+\big)$, which we may write as
\begin{align*}
\frac{1}{2\sqrt{h}}\int G_h(z)|\delta\chi_+-\delta\chi_+(\cdot-z)|dz,
\end{align*}
where we used for any $a,b\in\{0,1\}$ that $a(1-b)+(1-a)b=|a-b|$. Hence in
order to obtain (\ref{wg47}), we need the two function sequences
\begin{align*}
\frac{1}{\sqrt{h}}\int_{\pm z\cdot\nu_0\ge 0} G_h(z)|\delta\chi_+-\delta\chi_+(\cdot-z)|dz
\end{align*}
to have the same limit. Again by the evenness of $G_h$, these two functions only
differ by a spatial shift $z$. The same argument as for (\ref{wg64})
shows that the limits agree.
\medskip
{\sc Step} \arabic{Pr1S}.\label{Pr1S4}\refstepcounter{Pr1S}
Conclusion. We start from the identity (\ref{wg54})
in form of
\begin{align*}
\frac{1}{\sqrt{h}}|\delta\chi|&=\frac{1}{\sqrt{h}}\big(
\delta\chi G_h*\delta\chi\nonumber\\
&+\delta\chi_+G_h*\delta\chi_-+\delta\chi_-G_h*\delta\chi_+\nonumber\\
&+\delta\chi_+ G_h*(1-\delta\chi_+)+\delta\chi_- G_h*(1-\delta\chi_-)\big),
\end{align*}
or rather, using $2\int_{z\cdot\nu_0\ge 0}G_1dz=1$,
\begin{align*}
\lefteqn{2\int_{0\le z\cdot\nu_0\le \alpha V_0}G_1dz\frac{1}{\sqrt{h}}|\delta\chi|
=\frac{1}{\sqrt{h}}\big(
\delta\chi G_h*\delta\chi}\nonumber\\
&+\delta\chi_+G_h*\delta\chi_-+\delta\chi_-G_h*\delta\chi_+\nonumber\\
&+\delta\chi_+ G_h*(1-\delta\chi_+)+\delta\chi_- G_h*(1-\delta\chi_-)
-2\int_{z\cdot\nu_0>\alpha V_0}G_1dz|\delta\chi|\big).
\end{align*}
We note that by an elementary lower-semi-continuity argument
based on the definitions (\ref{wg44}) and (\ref{wg40}), we have
the distributional inequality
$\alpha|\partial_t\chi|\le\liminf_{h\downarrow0}\frac{1}{\sqrt{h}}|\delta\chi|$,
provided the r.~h.~s.~is a finite measure.
Hence we obtain from (\ref{wg33}), (\ref{wg37}), and (\ref{wg55}) the distributional inequality
\begin{align}\label{wg58}
2\alpha\int_{0\le z\cdot\nu_0\le \alpha V_0}G_1dz|\partial_t\chi|
\le &\alpha^2\mu\notag\\
&+
2\int_{0\le z\cdot\nu_0\le \alpha V_0}G_1|z\cdot\nu|dz|\nabla\chi|dt,
\end{align}
which in particular shows that $\partial_t\chi$ is a measure.
Letting $V_0\uparrow\infty$ and appealing to (\ref{wg03}),
this yields in particular $\alpha|\partial_t\chi|$ $\le\alpha^2\mu+4c_0|\nabla\chi|dt$
which we divide by $\alpha$:
\begin{align*}
|\partial_t\chi|\le\alpha\mu+\frac{4}{\alpha}c_0|\nabla\chi|dt.
\end{align*}
Letting $\alpha\downarrow 0$, we learn from the latter estimate
that null sets of $|\nabla\chi|dt$ are null sets of $\partial_t\chi$, so
that there exists $V\in L^1(|\nabla\chi|dt)$ such that (\ref{wg57}) holds.
\medskip
Since $|\partial_t\chi|=|V||\nabla\chi|dt$ is absolutely continuous
w.~r.~t.~to $|\nabla\chi|dt$, (\ref{wg58}) even holds with $\mu$ replaced
by its absolutely continuous part $\mu'$ w.~r.~t.~$|\nabla\chi|dt$.
Writing $\mu'=\theta|\nabla\chi|dt$ with $\theta\in L^1(|\nabla\chi|dt)$,
(\ref{wg58}) assumes the form
\begin{align*}
2\alpha\int_{0\le z\cdot\nu_0\le\alpha V_0}G_1dz|V|
\le\alpha^2\theta+
2\int_{0\le z\cdot\nu_0\le\alpha V_0}G_1|z\cdot\nu|dz\quad|\nabla\chi|dt\mbox{-a.~e.}.
\end{align*}
As in Step \ref{Pr1S1}, a separability argument now allows us to choose $\nu_0=\nu$,
so that by radial symmetry of $G_1$, the above assumes the form
\begin{align*}
2\alpha\int_{0\le z_1\le\alpha V_0}G_1dz|V|
\le\alpha^2\theta+
2\int_{0\le z_1\le\alpha V_0}G_1 z_1dz\quad|\nabla\chi|dt\mbox{-a.~e.}
\end{align*}
Dividing by $\alpha^2$ and momentarily writing $\alpha':=\alpha V_0$, this turns into
\begin{align*}
2\frac{V_0}{\alpha'}\int_{0\le z_1\le\alpha'}G_1dz|V|
\le\theta+2\frac{V_0^2}{{\alpha'}^2}\int_{0\le z_1\le\alpha'}G_1 z_1dz\quad|\nabla\chi|dt\mbox{-a.~e.}
\end{align*}
We now appeal to the limiting relations (which follow from factorizing $G_1$ into
the $(d-1)$-dimensional standard Gaussian and $G^{d=1}_1$)
\begin{align*}
\lim_{\alpha'\downarrow 0}\frac{1}{\alpha'}\int_{0\le z_1\le\alpha'}G_1dz&=G_1^{d=1}(0)=c_0,\\
\lim_{\alpha'\downarrow 0}\frac{1}{{\alpha'}^2}\int_{0\le z_1\le\alpha'}G_1z_1dz
&=\frac{1}{2}G_1^{d=1}(0)=\frac{c_0}{2},
\end{align*}
to see that the above turns into
\begin{align*}
2c_0V_0|V|\le\theta+c_0V_0^2\quad|\nabla\chi|dt\mbox{-a.~e.}
\end{align*}
Again, by a separability argument for $V_0$, we may assume $V_0=|V|$
so that the above yields (\ref{wg53}) in form of $c_0V^2\le\theta$.
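The two limiting relations used above are easy to confirm numerically in one dimension (the $(d-1)$ transverse Gaussian factors integrate to one). The sketch assumes $G_1^{d=1}$ is the standard Gaussian density, so that $c_0=1/\sqrt{2\pi}$; the value of $\alpha'$ and the quadrature resolution are illustrative.

```python
import math

def g(z):  # one-dimensional standard Gaussian G_1^{d=1}
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

c0 = g(0.0)                  # c_0 = G_1^{d=1}(0) = 1/sqrt(2*pi)
alpha = 1e-3                 # a small alpha'
n = 1000
dz = alpha / n
zs = [(k + 0.5) * dz for k in range(n)]   # midpoint rule on [0, alpha']

I0 = sum(g(z) for z in zs) * dz           # integral of G_1 over {0 <= z_1 <= alpha'}
I1 = sum(g(z) * z for z in zs) * dz       # integral of G_1 * z_1 over the same set

assert abs(I0 / alpha - c0) < 1e-4        # (1/alpha') I0 -> c_0
assert abs(I1 / alpha**2 - c0 / 2) < 1e-4 # (1/alpha'^2) I1 -> c_0/2
```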
\bigskip
{\sc Proof of Proposition \ref{Pr3}}.
\newcounter{Pr3S}
\refstepcounter{Pr3S}
{\sc Step} \arabic{Pr3S}.\label{Pr3S1}\refstepcounter{Pr3S} Metric slope and functional derivative.
We claim the following relation between the metric slope $|\partial E(u)|$ of a functional
$E$ on ${\mathcal M}$, cf.~(\ref{wg92}),
at a configuration $u$, and its first variation $\delta E(u).\xi$
in direction of a smooth vector field $\xi$:
\begin{align}\label{wg25}
\frac{1}{2}|\partial E(u)|^2\ge\delta E(u).\xi-\frac{1}{2}\big(\delta d(u,\cdot)(u).\xi\big)^2.
\end{align}
As usual, first variation $\delta$ is defined by considering the curve $s\mapsto u_s$ of
configurations characterized via the transport equation (to be interpreted distributionally
or solved explicitly with help of the flow map $\Phi_s$ via $u_s\circ\Phi_s=u$)
\begin{align}\label{wg27}
\frac{\partial u_s}{\partial s}+\xi\cdot\nabla u_s=0\quad\mbox{and}\quad u_{s=0}=u,
\end{align}
and setting
\begin{align}\label{wg28}
\delta E(u).\xi:=\frac{d}{ds}\Big|_{s=0}E(u_s)\quad\mbox{and}\quad
\delta d(u,\cdot)(u).\xi:=\frac{d}{ds}\Big|_{s=0}d(u,u_s),
\end{align}
with the understanding that both derivatives exist (and define linear functionals in $\xi$
which is the case for $E=E_h$ and $d(u,\cdot)=d_h(u,\cdot)$,
cf.~Steps \ref{Pr3S2} and \ref{Pr3S3}).
Inequality (\ref{wg25})
is then a direct consequence of the definition $|\partial E(u)|$, cf.~(\ref{wg26}),
which yields
\begin{align*}
|\partial E(u)|&\ge\limsup_{s\downarrow 0}\frac{(E(u)-E(u_s))_+}{d(u,u_s)}\nonumber\\
&\ge\frac{\lim_{s\downarrow 0}\frac{1}{s}(E(u_s)-E(u))}
{\lim_{s\downarrow 0}\frac{1}{s}d(u,u_s)}
=\frac{\delta E(u).\xi}{\delta d(u,\cdot)(u).\xi},
\end{align*}
and Young's inequality.
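Spelled out, the last step reads: the displayed bound gives
$\delta E(u).\xi\le|\partial E(u)|\,\delta d(u,\cdot)(u).\xi$, so that by Young's
inequality
\begin{align*}
\delta E(u).\xi\le\frac{1}{2}|\partial E(u)|^2
+\frac{1}{2}\big(\delta d(u,\cdot)(u).\xi\big)^2,
\end{align*}
which is (\ref{wg25}) rearranged.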
\medskip
{\sc Step} \arabic{Pr3S}.\label{Pr3S2}\refstepcounter{Pr3S}
Representation of $\delta E_h(u).\xi$; we claim:
\begin{align}\label{wg10}
\delta E_h(u).\xi=\frac{1}{\sqrt{h}}\int\Big(
&\nabla\cdot\xi\big((1-u)G_h*u+uG_h*(1-u)\big)\nonumber\\
+&u[\xi,\nabla G_h*](1-u)\Big),
\end{align}
where $[\xi,\nabla G_h*]=\sum_{i=1}^d[\xi_i,\partial_iG_h*]$
denotes the commutator of multiplying with
$\xi$ and convolving with $\nabla G_h$.
In checking this formula we may by approximation assume that $u$ is smooth;
by definition (\ref{wg27}) \& (\ref{wg28}) of $\delta$ we obtain from
the definition (\ref{wg06}) of $E_h$
\begin{align*}
\delta E_h(u).\xi=\frac{1}{\sqrt{h}}
\int\big((\xi\cdot\nabla u) G_h*u-(1-u)G_h*(\xi\cdot\nabla u)\big),
\end{align*}
which by the symmetry of $G_h*$ we rewrite as
\begin{align*}
\delta E_h(u).\xi=-\frac{1}{\sqrt{h}}
\int\big(\xi\cdot\nabla(1-u)\,G_h*u+\xi\cdot\nabla u\,G_h*(1-u)\big).
\end{align*}
We write $\xi\cdot\nabla u=\nabla\cdot(u\xi)-u\nabla\cdot\xi$
and $\xi\cdot\nabla(1-u)=\nabla\cdot((1-u)\xi)-(1-u)\nabla\cdot\xi$,
so that (\ref{wg10}) reduces to the identity
\begin{align*}
-\int\big(\nabla\cdot((1-u)\xi)\,G_h*u&+\nabla\cdot(u\xi)\,G_h*(1-u)\big)\nonumber\\
&=\int u[\xi,\nabla G_h*](1-u),
\end{align*}
which follows from integration by parts and the anti-symmetry of $\nabla G_h*$.
\medskip
{\sc Step} \arabic{Pr3S}.\label{Pr3S3}\refstepcounter{Pr3S}
Representation of $\delta d_h(u,\cdot)(u).\xi$:
\begin{align}\label{wg19}
\lefteqn{\frac{1}{2}\big(\delta d_h(u,\cdot)(u).\xi\big)^2}\nonumber\\
&=\sqrt{h}\int\Big(u\xi\cdot\nabla^2G_h*((1-u)\xi)
-u\xi\cdot\nabla G_h*((1-u)\nabla\cdot\xi)\nonumber\\
&+u\nabla\cdot\xi\,\nabla G_h*((1-u)\xi)-u\nabla\cdot\xi\,G_h*((1-u)\nabla\cdot\xi)\Big).
\end{align}
In the notation of Step \ref{Pr3S2},
$\frac{1}{2}\big(\delta d_h(u,\cdot)(u).\xi\big)^2 =\frac{1}{2}\big(\frac{d}{ds}\big|_{s=0}d_h(u,u_s)\big)^2$
$=\frac{1}{4}\frac{d^2}{ds^2}\big|_{s=0}d_h^2(u,u_s)$, so that by definition
(\ref{wg18}) of $d_h$ and by (\ref{wg27}), we have
$\frac{1}{2}\big(\delta d_h(u,\cdot)(u).\xi\big)^2$
$=\sqrt{h}\int\frac{\partial u_s}{\partial s}\big|_{s=0} \,G_h*\frac{\partial u_s}{\partial s}\big|_{s=0} $
$=\sqrt{h}\int\xi\cdot\nabla u\,G_h*(\xi\cdot\nabla u)$. Rewriting the second factor
$\xi\cdot\nabla u$
$=\nabla\cdot(u\xi)-u\,\nabla\cdot\xi$ and using the symmetry of $G_h*$,
an integration by parts, and $-\nabla u=\nabla(1-u)$, we obtain
\begin{align*}
\lefteqn{\frac{1}{2}\big(\delta d_h(u,\cdot)(u).\xi\big)^2}\nonumber\\
&=\sqrt{h}\int\big(u\xi\cdot\nabla G_h*(\xi\cdot\nabla(1-u))
+u\nabla\cdot\xi\,G_h*(\xi\cdot\nabla(1-u))\big).
\end{align*}
We write $\xi\cdot\nabla(1-u)$ $=\nabla\cdot((1-u)\xi)-(1-u)\nabla\cdot\xi$
to obtain (\ref{wg19}).
\medskip
{\sc Step} \arabic{Pr3S}.\label{Pr3S3bis}\refstepcounter{Pr3S}
Passage to the limit in $\delta E_h$; we claim that
\begin{align}\label{wg29}
\lim_{h\downarrow 0}\int_0^T\delta E_h(u_h).\xi dt
=c_0\int(\nabla\cdot\xi-\nu\cdot\nabla\xi\nu)|\nabla\chi|dt.
\end{align}
According to (\ref{wg10}), we may split into two statements. The first statement is
\begin{align*}
\lim_{h\downarrow0}\frac{1}{\sqrt{h}}\int\nabla\cdot\xi\big((1-u_h)\,G_h*u_h
+u_h\,G_h*(1-u_h)\big)\nonumber\\
=2 c_0\int\nabla\cdot\xi|\nabla\chi|dt,
\end{align*}
which is an immediate consequence of testing (\ref{wg11}) \& (\ref{wg12})
with $\nabla\cdot\xi$, appealing to the scaling (\ref{wg05}) and
to the formula (\ref{wg03}). The second statement is
\begin{align}\label{wg13}
\lefteqn{\lim_{h\downarrow0}\frac{1}{\sqrt{h}}\int u_h[\xi,\nabla G_h*](1-u_h)}\nonumber\\
&=-c_0\int(\nu\cdot\nabla\xi\nu+\nabla\cdot\xi)|\nabla\chi|dt,
\end{align}
for which we now give the argument. Spelling out
\begin{align*}
\lefteqn{\big([\xi,\nabla G_h*](1-u_h)\big)(t,x)}\nonumber\\
&=\int(\xi(t,x)-\xi(t,x-z))\cdot\nabla G_h(z)(1-u_h)(t,x-z)dz,
\end{align*}
we see that
\begin{align}\label{wg15}
\big|&\big([\xi,\nabla G_h*](1-u_h)\big)(t,x)\nonumber\\
&-\int\nabla G_h(z)\cdot\nabla\xi(t,x) z(1-u_h)(t,x-z)dz\big|\nonumber\\
&\le\frac{1}{2}\sup|\nabla^2\xi| \int |z|^2|\nabla G_h(z)|(1-u_h)(t,x-z)dz.
\end{align}
Appealing to (\ref{wg05}) in form of $\nabla G_h(z)dz$
$=\frac{1}{\sqrt{h}}\nabla G_1(\frac{z}{\sqrt{h}})d\frac{z}{\sqrt{h}}$, we learn that the limit
of the contribution of the main term can be computed by
testing (\ref{wg11}) with $\frac{\nabla G_1(z)\cdot\nabla\xi(t,x) z}{G_1(z)}$
$=-z\cdot\nabla\xi(t,x)z$ (which is of polynomial growth in $z$); it assumes the value
\begin{align*}
\int\nabla G_1\cdot\nabla\xi z(\nu\cdot z)_+ |\nabla\chi|dtdz,
\end{align*}
which yields (\ref{wg13}) by formula (\ref{wg14}) below.
The contribution of the r.~h.~s.~error term in (\ref{wg15}) is vanishing of
$O(\sqrt{h})$, as follows from appealing to (\ref{wg11})
tested with $\frac{|\nabla G_1(z)||z|^2}{G_1(z)}$ $=|z|^3$.
\medskip
{\sc Step} \arabic{Pr3S}.\label{Pr3S5}\refstepcounter{Pr3S}
Passage to the limit in $\delta d_h$; we claim
\begin{align}\label{wg30}
\lim_{h\downarrow 0}\frac{1}{2}\int_0^T\big(\delta d_h(u_h,\cdot)(u_h).\xi\big)^2dt
=c_0\int(\xi\cdot\nu)^2|\nabla\chi|dt.
\end{align}
According to (\ref{wg19}), this statement may be split into a leading-order statement
\begin{align}
\lim_{h\downarrow 0}\sqrt{h}\int u_h\xi\cdot\nabla^2G_h*((1-u_h)\xi) dxdt
&=c_0\int(\xi\cdot\nu)^2|\nabla\chi|dt,\label{wg20}
\end{align}
and the higher-order statements
\begin{align}\label{wg24}
\int u_h\xi\cdot\nabla G_h*((1-u_h)\nabla\cdot\xi)dxdt
&=O(1),\\
\int u_h\nabla\cdot\xi\,\nabla G_h*((1-u_h)\xi) dxdt
&=O(1),\nonumber\\
\frac{1}{\sqrt{h}}\int u_h\nabla\cdot\xi\,G_h*((1-u_h)\nabla\cdot\xi)dxdt
&=O(1).\nonumber
\end{align}
The statement (\ref{wg20}) itself splits into the main part
\begin{align*}
\lim_{h\downarrow 0}\sqrt{h}\int \xi\cdot(u_h\nabla^2G_h*(1-u_h))\xi dxdt
=c_0\int(\xi\cdot\nu)^2|\nabla\chi|dt
\end{align*}
and the higher-order commutator
\begin{align}\label{wg23}
\int u_h\xi\cdot[\xi,\nabla^2G_h]*(1-u_h) dxdt=O(1).
\end{align}
The main part follows from appealing to (\ref{wg05})
in form of $\sqrt{h}\nabla^2G_h(z) dz$
$=\frac{1}{\sqrt{h}}\nabla^2 G_1(\frac{z}{\sqrt{h}}) d\frac{z}{\sqrt{h}}$,
testing (\ref{wg11}) with $\frac{\xi(t,x)\cdot\nabla^2 G_1(z)\xi(t,x)}{G_1(z)}$
$=(\xi(t,x)\cdot z)^2-|\xi(t,x)|^2$,
and appealing to formula (\ref{wg21}).
The estimate of the error term (\ref{wg23}) follows from estimating the integrand by
$\sup|\xi|\sup|\nabla\xi|$ $\int |z||\nabla^2G_h(z)|\,u_h(t,x)(1-u_h)(t,x-z) dz$,
and then using the scaling (\ref{wg05})
further by
\begin{align*}
\sup|\xi|\sup|\nabla\xi|\,\int |z||\nabla^2 G_1(z)|\,
\frac{1}{\sqrt{h}}u_h(t,x)(1-u_h)(t,x-\sqrt{h}z) dz,
\end{align*}
so that another application of (\ref{wg11}) yields (\ref{wg23}).
Statement (\ref{wg24}) and the other two higher-order estimates
follow along the same lines: For instance, the integrand in (\ref{wg24})
is $\le\sup|\xi|\sup|\nabla\cdot\xi|$ $\int|\nabla G_h(z)|$ $\,u_h(t,x)$ $(1-u_h)(t,x-z) dz$.
By rescaling, $\int|\nabla G_h(z)|$ $\,u_h(t,x)$ $(1-u_h)(t,x-z)$ $dz$
$=\int|z|G_1(z)\,\frac{1}{\sqrt{h}}u_h(t,x)(1-u_h)(t,x-\sqrt{h}z) dz$,
which is $O(1)$ by (\ref{wg11}).
\medskip
{\sc Step} \arabic{Pr3S}.\label{Pr3S6}\refstepcounter{Pr3S}
Conclusion. By Riesz' representation theorem in $L^2(|\nabla\chi|dt)$ and an approximation argument
in the (arbitrary) smooth vector field $\xi$, the statement
of Proposition \ref{Pr3} is a consequence of
\begin{align*}
\liminf_{h\downarrow0}\int_0^T\frac{1}{2}|\partial E_h(u_h)|^2dt\ge
c_0\int\big(\nabla\cdot\xi-\nu\cdot\nabla\xi\nu-(\xi\cdot\nu)^2\big)|\nabla\chi|dt.
\end{align*}
The latter follows starting from the inequality (\ref{wg25}) for $(E,d,u)=(E_h,d_h,u_h(t))$,
integrating in $t\in(0,T)$, and appealing to (\ref{wg29}) and (\ref{wg30}) to
pass to the limit on the r.~h.~s.
\medskip
{\sc Step} \arabic{Pr3S}.\label{Pr3S8}\refstepcounter{Pr3S}
Two formulas: For any unit vector $\nu$, any matrix $A$, and any vector $\xi$, we have
\begin{align}
-\int\nabla G_1(z)\cdot A z\,(\nu\cdot z)_+dz&=c_0(\nu\cdot A\nu+{\rm tr}\,A),\label{wg14}\\
\int\xi\cdot\nabla^2G_1(z)\xi\,(\nu\cdot z)_+dz&=c_0(\xi\cdot\nu)^2\label{wg21}.
\end{align}
Since $\nabla G_1(z)=-zG_1(z)$, for the first formula
we may assume that $A$ is symmetric; by linearity
we may assume that $A=e\otimes e$ for some unit vector $e$; by radial symmetry of $G_1$,
it thus remains to show
\begin{align*}
-\int\partial_1 G_1 z_1(\nu\cdot z)_+dz=c_0(\nu_1^2+1),
\end{align*}
which by one integration by parts, taking into account (\ref{wg03}), reduces to
\begin{align*}
\int_{\nu\cdot z>0} G_1 z_1dz=c_0\nu_1,
\end{align*}
which in view of $G_1 z_1=-\partial_1G_1=-\nabla\cdot(G_1 e_1)$ by the divergence theorem
reduces to
\begin{align}\label{wg22}
\int_{\nu\cdot z=0} G_1=c_0,
\end{align}
which follows since $c_0=G_1^{d=1}(0)$.
We now turn to (\ref{wg21}). By radial symmetry of $G_1$ and homogeneity, it suffices to
show
\begin{align}
\int\partial_1^2G_1(\nu\cdot z)_+dz=c_0\nu_1^2,
\end{align}
which by two integrations by parts reduces to (\ref{wg22}).
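If one takes $G_1$ to be the standard Gaussian, so that $\nabla G_1(z)=-zG_1(z)$, $\nabla^2G_1(z)=(z\otimes z-{\rm id})G_1(z)$ and $c_0=1/\sqrt{2\pi}$, both formulas can be checked by direct quadrature in $d=2$; the grid resolution and the test data $\nu$, $A$, $\xi$ below are illustrative choices.

```python
import numpy as np

x = np.linspace(-6, 6, 601)
dz = x[1] - x[0]
Z1, Z2 = np.meshgrid(x, x, indexing="ij")
G1 = np.exp(-(Z1**2 + Z2**2) / 2) / (2 * np.pi)  # standard Gaussian in d=2
c0 = 1 / np.sqrt(2 * np.pi)                      # = G_1^{d=1}(0)

nu = np.array([0.6, 0.8])                        # a unit vector
A = np.array([[1.3, 0.4], [0.4, -0.7]])          # a symmetric matrix
xi = np.array([0.5, -1.2])                       # an arbitrary vector
plus = np.maximum(nu[0] * Z1 + nu[1] * Z2, 0.0)  # (nu . z)_+

# (wg14): -int grad G_1 . Az (nu.z)_+ dz = int (z.Az) (nu.z)_+ G_1 dz
# (z.Az written out for symmetric A)
zAz = A[0, 0] * Z1**2 + 2 * A[0, 1] * Z1 * Z2 + A[1, 1] * Z2**2
lhs14 = np.sum(zAz * plus * G1) * dz**2
assert abs(lhs14 - c0 * (nu @ A @ nu + np.trace(A))) < 5e-3

# (wg21): int xi . hess G_1 xi (nu.z)_+ dz = int ((xi.z)^2 - |xi|^2) (nu.z)_+ G_1 dz
xz = xi[0] * Z1 + xi[1] * Z2
lhs21 = np.sum((xz**2 - xi @ xi) * plus * G1) * dz**2
assert abs(lhs21 - c0 * (xi @ nu)**2) < 5e-3
```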
\bibliographystyle{plain}
| {
"timestamp": "2019-10-28T01:04:29",
"yymm": "1910",
"arxiv_id": "1910.11442",
"language": "en",
"url": "https://arxiv.org/abs/1910.11442",
    "abstract": "We consider the thresholding scheme and explore its connection to De Giorgi's ideas on gradient flows in metric spaces; here applied to mean curvature flow as the steepest descent of the interfacial area. The basis of our analysis is the observation by Esedoglu and the second author that thresholding can be interpreted as a minimizing movements scheme for an energy that approximates the interfacial area. De Giorgi's framework provides an optimal energy dissipation relation for the scheme in which we pass to the limit to derive a dissipation-based weak formulation of mean curvature flow. Although applicable in the general setting of arbitrary networks, here we restrict ourselves to the case of a single interface, which allows for a compact, self-contained presentation.",
"subjects": "Analysis of PDEs (math.AP)",
    "title": "The thresholding scheme for mean curvature flow and de Giorgi's ideas for minimizing movements"
} |
https://arxiv.org/abs/1108.2452 | Sequential Auctions and Externalities | In many settings agents participate in multiple different auctions that are not necessarily implemented simultaneously. Future opportunities affect strategic considerations of the players in each auction, introducing externalities. Motivated by this consideration, we study a setting of a market of buyers and sellers, where each seller holds one item, bidders have combinatorial valuations and sellers hold item auctions sequentially. Our results are qualitatively different from those of simultaneous auctions, proving that simultaneity is a crucial aspect of previous work. We prove that if sellers hold sequential first price auctions then for unit-demand bidders (matching market) every subgame perfect equilibrium achieves at least half of the optimal social welfare, while for submodular bidders or when second price auctions are used, the social welfare can be arbitrarily worse than the optimal. We also show that a first price sequential auction for buying or selling a base of a matroid is always efficient, and implements the VCG outcome. An important tool in our analysis is studying first and second price auctions with externalities (bidders have valuations for each possible winner outcome), which can be of independent interest. We show that a Pure Nash Equilibrium always exists in a first price auction with externalities. |
\section{Iterated elimination of dominated strategies}\label{appendix:refinements}
First we define precisely the concept of a strategy profile that survives
iterated elimination of weakly dominated strategies. Then we characterize such
profiles for the first-price auction with externalities using a
graph-theoretical argument.
\begin{Definition}
Given an $n$-player game define by strategy sets $S_1, \hdots, S_n$ and
utilities $u_i: S_1 \times \hdots \times S_n \rightarrow \R$ we define a
\emph{valid procedure for eliminating weakly-dominated strategies} as a
sequence $\{ S_i^t \}$ such that for each $t$ there is $i$ such that $S_j^t =
S_j^{t-1}$ for $j \neq i$, $S_i^t \subseteq S_i^{t-1}$ and for all $s_i \in
S_i^{t-1} \setminus S_i^{t}$ there is $s'_i \in S_i^t$ such that
$u_i(s'_i, s_{-i}) \geq u_i(s_i, s_{-i})$ for all $s_{-i} \in \prod_{j \neq i}
S_j^t$ and the inequality is strict for at least one $s_{-i}$. We say that a
strategy profile $s$ survives iterated elimination of weakly-dominated strategies if for
any valid procedure $\{ S_i^t \}$, $s_i \in \cap_t S_i^t$.
\end{Definition}
The concept above is very strong as different elimination
procedures can lead to elimination of different strategies.
This can possibly lead to no strategy (at least no Nash
equilibrium) surviving iterated elimination of weakly-dominated strategies.
We show that the first price auction game has equilibria that
satisfy this strong definition, which makes these
equilibria a very robust prediction.
As a warm up, consider the first price auction without externalities, i.e.,
$v_i^i = v_i$ and $v_i^j = 0$ for $j \neq i$ with $v_1 \geq v_2 \geq \hdots$.
It is easy to see that the set of
strategies surviving any iterated elimination procedure is $[0, v_i)$ for
player $i>1$ and $[0,v_2]$ for player $1$. Bidding $b_i > v_i$ is clearly
dominated by bidding $v_i$. By the definition, bidding $v_i$ is dominated by
bidding any value smaller then $v_i$, since by bidding $v_i$, the player can
never get positive utility. After we eliminate $b_i \geq v_i$ for all the
players, it is easy to see that $b_1 = v_2$ dominates any bid $b_1 > v_2$,
since player $1$ wins anyway (since all the other players have eliminated their
strategies $b_i \geq v_i$). The natural equilibrium to expect in this
case is player $1$ getting the item
for price $v_2$, which is a result of $b_1 = v_2+$
and $b_2 = v_2$. However, $b_2 = v_2$ is eliminated for player $2$, but any
strategy arbitrarily close to $b_2 = v_2$ is not.
This motivates us to pass to the topological closure when discussing iterative
elimination of weakly dominated strategies for first price auctions:
\begin{Definition}
In a first-price auction with externalities,
a bid $b_i$ for player $i$ is \emph{compatible with iterated elimination of
weakly dominated strategies}, if $b_i$ is in the topological closure of the set
of bids that survive any procedure of elimination. In other words, for each
$\delta > 0$ there is a bid $b'_i$ that survives any procedure of elimination
such that $\abs{b_i - b'_i} < \delta$.
\end{Definition}
Now, we are ready to characterize the set of Nash equilibria that are
compatible with iterated elimination. In order to do that, we define an
\emph{overbidding-graph} in the following way: for each price $p$, consider a
directed graph $G_p$ on $n$ nodes
such that there is an edge from $i$ to $j$ if $v_j^j - p > v_j^i$, i.e., if
player $i$ were getting the item at price $p$, player $j$ would rather overbid
him and take the item. Now, notice that the graph $G_{p+\epsilon}$ is a
subgraph of $G_p$.
Let's assume that all nodes have positive in-degree and
out-degree in $G_0$. If there are nodes with zero in-degree, simply remove the
players that have in-degree zero in $G_0$ (which means that they can't possibly
want the item, i.e., bidding zero is a dominant strategy for them). If there are
players with zero out-degree, then the problem is trivial, since there are nodes
to whom we can give the item and get an equilibrium with zero price.
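As an illustration, the graph $G_p$ can be built directly from the valuation matrix. The sketch below is only illustrative; `v[i][j]` stands for $v_i^j$:

```python
def overbidding_graph(v, p):
    """Overbidding graph G_p: directed edge (i, j) is present iff
    v[j][j] - p > v[j][i], i.e. if player i were winning the item at
    price p, player j would strictly prefer to overbid and take it."""
    n = len(v)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and v[j][j] - p > v[j][i]}
```

Since the edge condition only becomes harder to satisfy as $p$ grows, $G_{p+\epsilon}$ is automatically a subgraph of $G_p$.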
\begin{theorem}
The strategies for player $i$
that survive iterated elimination of weakly dominated strategies are $S_i = [0,
\tau_i)$ where $\tau_i$ can be computed by the following algorithm: begin with
$p=0$ and $V=[n]$. In each step, if there is a node $i \in V$ of in-degree zero
in $G_p[V]$ (i.e., $G_p$ defined on the nodes $V$), then set $\tau_i = p$ and
remove $i$ from $V$ and recurse. If there is no such node, increase the value of
$p$ until some node's in-degree becomes zero.
\end{theorem}
\begin{proof}
Consider that the players are numbered such that $\tau_1 \leq \tau_2 \leq
\hdots \leq \tau_n$. Now, we will prove by induction that no element of $[0,
\tau_i)$ can be eliminated from the strategy set of player $ j \geq i$ by
recursive elimination of weakly dominated strategies. And that there is one
procedure that eliminates all bids $b \geq \tau_i$ for player $i$ strategy set.
For the base case, suppose there is some process of iterated elimination that
removes some strategy $b \in [0, \tau_1)$ for some player $i$. Consider the first time
this happens in the process, and say that the strategy that eliminates $b$ is some
$b'$. If $b' < b$, consider the profile for the other players where everyone
plays some value between $b'$ and $b$, and given that player $i$ has
positive in-degree in $G_{b}$, suppose that the highest bid is
submitted by a player $j$ such that $(j,i)$ is an edge of $G_{b}$.
Then clearly $b$ generates strictly higher utility than $b'$. Now, suppose $b' >
b$; then $b$ performs strictly better than $b'$ in the profile where all the
other players bid zero. Now, notice that all the bids $b_1 > \tau_1$ for player
$1$ are dominated and bidding $b_1 = \tau_1$ is dominated by playing any
smaller bid.
Now the induction step is along the same lines: We know that no elimination
procedure can eliminate bids in $[0,\tau_k)$ for player $k$, $k<i$. Now,
suppose there is some procedure in which we are able to eliminate some bid $b
\in [0, \tau_i )$ for some player $j \geq i$. Then again, consider the first
time it happens and let $b'$ be the bid that dominates $b$. We analyze again two
cases. If $b' < b$, consider a profile where the other players $j' \geq i, j'
\neq j$ bid between $b'$ and $b$, where the highest bidder is a player $k$ such
that the edge $(k,j)$ is in $G_b$. It is easy to see that $b$ outperforms $b'$
for this profile. If $b' > b$, we can use the same argument as in the base case.
Also, given that the strategies $b_j \geq \tau_j$ were already eliminated for
players $j < i$, clearly $b_i > \tau_i$ is dominated by $\tau_i$.
\end{proof}
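The algorithm from the theorem admits a compact implementation, using the observation that node $i$ has in-degree zero in $G_p[V]$ exactly when $p \geq v_i^i - \min_{j \in V\setminus\{i\}} v_i^j$. The sketch below is a hypothetical rendering, with `v[i][j]` standing for $v_i^j$:

```python
def thresholds(v):
    """Compute tau[i] by the elimination algorithm: while V is nonempty,
    remove a node of in-degree zero in G_p[V] (recording tau = current p),
    raising p just enough when no such node exists."""
    n = len(v)
    V = set(range(n))
    tau = [0.0] * n
    p = 0.0

    def clearing_price(i):
        # smallest p at which node i's in-degree in G_p[V] is zero
        others = [v[i][j] for j in V if j != i]
        return v[i][i] - min(others) if others else 0.0

    while V:
        p = max(p, min(clearing_price(i) for i in V))
        i = min((k for k in V if clearing_price(k) <= p), key=clearing_price)
        tau[i] = p
        V.remove(i)
    return tau
```

On the no-externalities instance from the warm-up ($v_i^i = v_i$, $v_i^j = 0$), this yields $\tau_1 = v_2$ and $\tau_i = v_i$ for $i > 1$.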
\begin{corollary}
The bids $b_i \in [0, \tau_i]$ are exactly the bids that are compatible with
iterative elimination of weakly dominated strategies for the first price
auction with externalities.
\end{corollary}
Now, given the result above, it is simple to prove that there are Nash
equilibria that are compatible with iterated elimination. Consider the
algorithm used to calculate $\tau_i$, and say that at a point $(p, V)$ of its
execution the active edges are the edges in $G_p[V]$. During the execution,
we can keep track of the in-degree and out-degree of each node with respect to
active edges. Those naturally decrease with the execution of the algorithm.
Since in each step some edges become inactive, there is at least one node whose
out-degree becomes zero before or at the same time as its in-degree
becomes zero. So, for the corresponding player $i$, there is a price $p$ such
that $\tau_i \geq p$ and there is an edge $(j,i)$ in $G_{p'}[V']$, where $p',V'$
is the state of the algorithm just before the out-degree of $i$ became zero.
So, clearly $\tau_j \geq p$. Now, it is easy to see that the strategy profile
$b_i = p+$, $b_j = p$ and $b_k = 0$ for all $k \neq i,j$ is a Nash equilibrium
and it is compatible with iterative elimination.
In fact, the reasoning above allows us to fully characterize and enumerate all
outcomes that are a Nash equilibrium compatible with iterated elimination:
\begin{theorem}
The outcome of player $i$ winning the item for price $p$ can be expressed as a
Nash equilibrium that is compatible with iterated elimination iff $p \leq
\tau_i$, player $i$ has out-degree zero in $G_p$ and there is some player $j$
with $\tau_j \geq p$ such that the edge $(j,i)$ is in $G_{p'}$ for all $p' < p$.
\end{theorem}
\comment{
\begin{theorem}
The set of Nash equilibria that are compatible with iterated elimination of
weakly dominated strategies correspond to the outcomes where player $i$ gets
the item for price $p$ where $p \leq \tau_i$, the
out-degree of $i$ is zero in $G_p$ and in-degree of $i$ is positive
in $G_{p'}$, $p' < p$. Moreover, at least one such equilibrium always exists.
\end{theorem}
As a sanity check one can check that for the first-price auction without
externalities, where $v_1 \geq v_2 \geq \hdots$ we have $\tau_1 = v_2$ and
$\tau_i = v_i$ for $i > 1$. For any $p < v_2$ all nodes have out-degree at
least one, but for $p = v_2$, the node corresponding to player $1$ has
out-degree zero, so he gets the item for $v_2$. Notice that this is the only
equilibrium predicted by the above theorem, since for prices $p$ larger than
$v_2$, we have $p > \tau_1$. Now, let's prove the theorem with externalities:
\begin{proof}
It is easy to see that those three conditions are necessary. Now, let's prove
sufficiency: given a player $i$ and a price $p$ with those characteristics,
there is some player $j$ for which the edge $(j,i)$ is in $G_{p'}$ for all $p' <
p$.
Consider the equilibrium where $b_i = p+$, $b_j = p$ and every other player
bid zero. Clearly this is a Nash equilibrium. Now, we need to argue that $p <
\tau_j$. Notice that $\tau_j \geq p$ since for all $G_{p'}$ there is an edge
from $(i,j)$, so $j$ has positive in-degree coming from a node that is still in
$V$ in the execution of the algorithm that assigns $\tau$ values.
Now, we need to argue that all
\end{proof}
}
\comment{
Let $p$ be the smallest price for which there is a node in
$G_p$ with zero out-degree, but had positive in-degree in $G_{p-\epsilon}$.
Let this be node $i$ (it is simple to see such node exist). Now, consider the
equilibrium where player $i$ bids $p+$ and for the rest of the players $j \neq
i$, they bid $p$ if they have positive in-degree in $G_{p-\epsilon}$ and bid
$\epsilon/2$ if they have zero in-degree in $G_{p-\epsilon}$. The players with
zero in-degree in $G_{p-\epsilon}$ clearly prefer not to have the item for this
price.
Now, we need to argue that this profile is not eliminated by any iterated
elimination procedure (see appendix \ref{appendix:refinements} for a
definition). Clearly, $\epsilon/2$ is not eliminated by any iterative
elimination. We can see this by the following argument: consider any
elimination procedure and think the first time in this procedure a bid $b$
between $0$ and $\epsilon$ is eliminated. Now, this clearly is a best response to a
profile where everyone else bids between $0$ and $b$ (which hasn't been
eliminated yet).
Now, we use the fact that all players bidding $p$ or $p+$ and
positive in- and out-degree in $G_{p-\epsilon}$ are not playing strategies that
can be eliminated by an iterated procedure. Let $I$ be this set of players.
In order to see that no bid between $p-\epsilon$ and $p$ can be eliminated for
those players, suppose by contradiction that there is a procedure that
eliminates such a bid and look at the first time such a bid is eliminated. Say
$b$ is the bid eliminated and let $b'$ be a bid that dominates it. If $b' < b$,
then clearly $b$ is a better response to a profile where all other players in
$I$ bid between $\max \{p-\epsilon, b'\}$ and $b$. If not, then because it is a
first price auction, $b$ is a better response to a profile where the other
players in $I$ play between $p-\epsilon$ and $b$.
}
\section{Formal definition of extensive form games}\label{appendix:extensive}
We provide in this section a formal mathematical description of the concepts
described in Section \ref{sequential-item-auctions}: we can represent an
\textbf{extensive-form game} via a game-tree, where nodes of
the tree correspond to different histories of play. At each stage of the
game, players make simultaneous moves, that can depend on the history of play so
far. So a player's strategy in an extensive form game is a strategy for each
possible history, i.e., each node of the tree. More formally,
\begin{itemize}
\item Let $N$ denote the set of players, and let $n=\abs{N}$
\item A $k$-stage game is represented by a directed game tree $\mathcal{T} =
(V,E)$ of $k+1$ levels. Let $V^t$ be the nodes in level $t$, where $V^t$ denotes
possible partial histories at the start of stage $t$. So $V^1$ contains only
the root and $V^{k+1}$ contains all the leaves, i.e., the outcomes of the game.
Note that the tree can be infinite, if for example some player has an infinite
strategy set.
\item For each $v\in V \setminus V^{k+1}$ and $i \in N$, a strategy set $S_i(v)$ is
the set of possible strategies of player $i$.
\item For each $v \in V$, the out-going edges of $v$ correspond to strategy
profiles $s(v) \in \times_i S_i(v)$, the outcomes of this stage when players
play strategies $s(v)=(s_1(v),\ldots,s_n(v))$.
\item For each $i \in N$, we have the utility function $u_i : V^{k+1}
\rightarrow \R$, which denotes the utility of the outcome corresponding to node
$v \in V^{k+1}$ for player $i$.
\end{itemize}
The \textbf{pure strategy} of a player consists of choosing $s_i(v) \in S_i(v)$
for each node $v \in V$, i.e. a function $s_i:V \rightarrow \cup_v S_i(v)$ such
that $s_i(v) \in S_i(v)$. In other words, it is a strategy choice for each
round, given the history of play so far, which is encoded by node $v$. A
strategy profile is an $n$-tuple $s = (s_1, \hdots, s_n)$. It defines the
\textbf{actual history of play } $h = (h_1, h_2, \hdots, h_{k})$, where $h_1 =
s(r)$ is the strategy profile played at the root, and $h_i$ is the strategy
profile played at the node that corresponds to history $h_1, \hdots,
h_{i-1}$. Notice that $h$ corresponds to a leaf of the tree, which allows us to
define the utility of player $i$ for a strategy profile:
$$u_i(s) = u_i(h(s))$$
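As a small illustration of these definitions, the history of play induced by a pure strategy profile can be computed by descending the tree. The encoding below is hypothetical: the tree maps a (node, stage-profile) pair to the child node, and each strategy maps a node to an action:

```python
def play(tree, strategies, k, root="root"):
    """Descend the game tree for k stages: at each node every player's
    strategy is queried simultaneously, and the resulting profile selects
    the outgoing edge. Returns the leaf reached (the outcome) together
    with the actual history of play."""
    node, history = root, []
    for _ in range(k):
        profile = tuple(s(node) for s in strategies)
        history.append(profile)
        node = tree[(node, profile)]
    return node, history
```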
\comment{
A \textbf{mixed strategy} corresponds to choosing $\sigma_i(v) \in
\Delta(S_i(v))$ for each $v \in V$, where $\Delta(X)$ is the set of
distributions over the elements of $X$ \footnote{One could alternatively define
a mixed strategy as a distribution over pure strategies, but those definitions
are equivalent, according to Kuhn's Theorem \cite{kuhn53}.}. A mixed strategy
profile is defined by $\sigma = (\sigma_1, \hdots, \sigma_n)$. Now, one can
define history of play in the same way: the difference now is that $h$ is a
random variable. We define:
$$u_i(\sigma) = \E u_i(h(\sigma))$$
}
We use \textbf{subgame perfect equilibrium} ($\spe$) as our main solution
concept.
A subgame of a sequential game is the game resulting after fixing some initial
history of play, i.e., starting the game from a node $v$ of the game tree.
Let $u^v_i(s)$ denote the utility that $i$ gets from playing $s$
starting from node $v$ in the tree.
We say that a profile $s$ is an \spe{} if it is a Nash equilibrium for each subgame
of the game, that is, for all nodes $v$ we have:
$$\forall s'_i: u_i^v(s_i, s_{-i}) \geq u_i^v(s'_i, s_{-i}).$$
Given a node $v$ in the game tree and fixing $s_i(v')$ for all $v'$ below $v$,
we can define the induced normal-form game at node $v$ by $s$ as the game with
strategy space $\times_i S_i(v)$ in which the utility of player $i$ under a
profile $\tilde{s}(v) \in \times_i S_i(v)$ is $u_i^v(\tilde{s})$, where under
$\tilde{s}$ each player $j$ plays $\tilde{s}_j(v)$ at node $v$ and follows
$s_j(v')$ at all nodes $v'$ below $v$. Kuhn's Theorem states that $s$ is a
subgame perfect equilibrium iff $s(v)$ is a Nash equilibrium of the induced
normal-form game at node $v$ for all $v$.
\comment{
We say that an $\spe$ is an \textbf{undominated subgame perfect equilibrium} if
the equilibrium selected in the induced game in each node of the tree is in
undominated strategies. I.e., conditioned on profiles selected in nodes below
$v$, no player $i$ employs a strategy $s_i(v)$ such that there is $s'_i(v)$ in
node $v$ such that for all possible choices of $s_{-i}(v)$, $s'_i(v)$ generates
at least as much utility as $s_i(v)$ and outperforms it for some $s'_i(v)$.
Notice this definition is equivalent to apply a single round of elimination of
weakly-dominated strategies in the induced subgame. An \textbf{iteratively
undominated subgame perfect equilibrium} is one that in each induced node-game
the equilibrium survives any order of elimination of dominated strategies.
}
The main tool we will use to analyse those games is the \textbf{price of
anarchy}. Consider a welfare function defined on the leaves of the tree, i.e., $W
: V^{k+1} \rightarrow \R$. Given a strategy profile $s$ and its induced
history $h(s)$, the social welfare of this game play is $W(s) = W(h(s)) = \sum_{i
\in N} u_i(h(s))$. We define the optimal welfare as $W^* = \max_{v \in V^{k+1}}
W(v)$, and the pure Price of Anarchy ($\poa$) as:
$$\poa = \max_{s \in E} \frac{W^*}{W(s)}$$
where $E$ is the set of all subgame perfect equilibria.
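Spelled out in code, with a hypothetical interface in which a welfare function maps a leaf to its welfare and each equilibrium is represented by the leaf it induces:

```python
def price_of_anarchy(welfare, leaves, equilibrium_leaves):
    """Pure price of anarchy: the optimal welfare over all leaves divided
    by the welfare of the worst equilibrium outcome (max of the ratio)."""
    w_star = max(welfare(v) for v in leaves)
    return max(w_star / welfare(h) for h in equilibrium_leaves)
```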
The sequential auctions we study are $m$-stage games, and the strategy space at
each node $v$ for player $i$ is a bid $b_i(v) \in [0, \infty)$. In other words,
the strategy of each player in this game is a function that maps the bid
profiles on the first $k-1$ items to his bid on the $k$-th item. Their utility
is the total value they get for the bundle they acquired minus the price paid.
The welfare is the sum of the values of all players.
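The sequential first-price auction just described can be simulated directly. The sketch below makes simplifying assumptions (full bid profiles become public after each round, and ties break toward the lowest index); the interface names are illustrative:

```python
def sequential_first_price(strategies, values, m):
    """Run m sequential first-price auctions. strategies[i](history, t)
    returns player i's bid on item t given the public history of past bid
    profiles; values[i] maps the set of items won to that bundle's value."""
    n = len(strategies)
    won = [set() for _ in range(n)]
    paid = [0.0] * n
    history = []
    for t in range(m):
        bids = [strategies[i](history, t) for i in range(n)]
        winner = max(range(n), key=lambda i: bids[i])  # tie: lowest index
        won[winner].add(t)
        paid[winner] += bids[winner]
        history.append(tuple(bids))
    utilities = [values[i](won[i]) - paid[i] for i in range(n)]
    return won, utilities
```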
\section{Non-Existence of SPE in Multi-Item Auctions}\label{appendix:multi-unit}
We give an example of a multi-item sequential auction with no $\spe$ in pure
strategies. The example has $4$ players and $5$ items. The first two items
$X_1,X_2$ are auctioned simultaneously first and the remaining items are
auctioned sequentially afterwards in the order $W,Y,Z$. Players $1$ and $4$ are
single minded. Player $1$ has value $v$ only for item $Z$ and player $4$ has
value $\frac{2\delta}{3}+\epsilon$ only for item $W$. Players $2$ and $3$ have
coverage submodular valuations that are depicted in Figure \ref{fig_non_vals}.
One can check that the following allocation and prices constitute a Walrasian
equilibrium of the above instance: $A_1=\emptyset, A_2 = \{X_1,Z\},
A_3=\{X_2,Y\}, A_4=\{W\}, p_{X_1}=p_{X_2}=\delta/3,
p_{Y}=v+\delta/6,p_{W}=2\delta/3$. However, we will show that there is no
subgame perfect equilibrium in pure strategies.
\begin{figure}
\centering
\includegraphics{coverage_nonexist1.mps}
\caption{Valuations $v_2$ and $v_3$.}
\label{fig_non_vals}
\end{figure}
We will show that the subgame perfect equilibrium in the last three auctions
is always unique given the outcome of the first two-item auction, and is such
that player $1$ effectively has a huge value for winning both $X_1,X_2$ and almost $0$
otherwise, while player $3$ has a huge value for winning any one of $X_1$ or $X_2$ and
almost $0$ otherwise. Thus, ignoring players $2$ and $4$, since they have
negligible value for $X_1,X_2$, we observe that the first two-item auction is an
example of an AND and an OR bidder, which is well known not to have Walrasian
equilibria, and hence no pure Nash equilibria in the first-price item auction.
So we examine what happens after any outcome of the first two-item auction:
\begin{itemize}
\item Case 1: Player $1$ won both $X_1,X_2$.
In this case player $2$ has a value of $v+\frac{2\delta}{3}$ for $Y$ and a value
of $v+\delta/2$ for $Z$ given that he loses $Y$. In the $Z$ auction player $2$
will bid $v$. Hence, player $2$ will gain a profit of $\delta/2$ from the $Z$
auction if he loses $Y$. Moreover, the value of player $3$ for $W$ is
$\delta+\delta/3$.
\begin{itemize}
\item Case 1a: Player $3$ won $W$.
In this case the value of $3$ for $Y$ is $v-\delta/2$. Hence, the game
played at the $Y$ auction is the following (we ignore player $4$):
$$[v_i^j] =
\begin{bmatrix} 0 & v-\frac{\delta}{2} & 0\\
\delta/2 & v+\frac{2\delta}{3} & \delta/2\\
0 & 0 & v-\delta/2 \end{bmatrix} $$
Thus player $2$ wants to win for a price of at most
$v+\frac{2\delta}{3}-\frac{\delta}{2}$. Player $3$ will bid $v-\delta/2$ and
player $2$ will win. In the last auction player $2$ will just bid $\delta/2$.
Hence, player $1$ will get utility $v-\delta/2$, player $2$ utility
$\frac{\delta}{2}+\frac{2\delta}{3}$ and player $3$ utility $0$.
\item Case 1b: Player $3$ lost $W$.
In this case the value of $3$ for $Y$ is $v+\delta/2$ and the
game played is:
$$[v_i^j] =
\begin{bmatrix} 0 & v-\frac{\delta}{2} & 0\\
\delta/2 & v+\frac{2\delta}{3} & \delta/2\\
0 & 0 & v+\delta/2 \end{bmatrix} $$
Thus player $3$ now wants to win for a value at most
$v+\delta/2$ and player $2$ for a value at most
$v+\frac{2\delta}{3}-\frac{\delta}{2}$. Hence, in the unique no-overbidding
equilibrium player $3$ will win. Therefore, player $1$ will get utility
$0$, player $2$ utility $\delta/2$ and player $3$ utility $\frac{\delta}{3}$.
\end{itemize}
Thus we see that at auction $W$ the following game is played:
$$[v_i^j] =
\begin{bmatrix} 0 & 0 & v-\frac{\delta}{2} & 0\\
\frac{\delta}{2} & \frac{\delta}{2} &
\frac{\delta}{2}+\frac{2\delta}{3} & \frac{\delta}{2}\\
\frac{\delta}{3} & \frac{\delta}{3} & \delta+\frac{\delta}{3} &
\frac{\delta}{3} \\
0 & 0 & 0 & \frac{2\delta}{3}+\epsilon \end{bmatrix} $$
Players $1$ and $2$ will bid $0$ and player $4$ wants to win for at most
$\frac{2\delta}{3}+\epsilon$. Player $3$ wants to win for at most $\delta$.
Hence, in the unique equilibrium player $3$ will win $W$. Consequently, player
$2$ will win $Y$ and player $1$ will win $Z$. Thus, player $1$ will get utility
$v-\frac{\delta}{2}$, player $2$ utility $\frac{2\delta}{3}$, player $3$
utility $\frac{2\delta}{3}-\epsilon$ and player $4$ utility $0$.
\item Case 2: Player $3$ won at least one of $X_1$ or $X_2$.
In this case player $2$ has a value of at least $v+\frac{\delta}{3}$ and at
most $v+\frac{2\delta}{3}$ for $Y$ and a value of $v+\delta/2$ for $Z$ given
that he loses $Y$. In the $Z$ auction player $2$ will bid $v$. Hence, player $2$
will gain a profit of $\delta/2$ from the $Z$ auction if he loses $Y$. Moreover,
the value of player $3$ for $W$ is
$\delta$.
\begin{itemize}
\item Case 2a: Player $3$ won $W$.
In this case the value of $3$ for $Y$ is $v-\delta/2$. Hence, the game
played at the $Y$ auction is the following (we ignore player $4$):
$$[v_i^j] =
\begin{bmatrix} 0 & v-\frac{\delta}{2} & 0\\
\delta/2 & v+\frac{\delta}{3} \text{~or~} v+\frac{2\delta}{3} &
\delta/2\\
0 & 0 & v-\delta/2 \end{bmatrix} $$
Thus player $2$ wants to win for a price of at most
$v+\frac{\delta}{3}-\frac{\delta}{2}$. Player $3$ will bid $v-\delta/2$ and
player $2$ will win. In the last auction player $2$ will just bid $\delta/2$.
Hence, player $1$ will get utility $v-\delta/2$, player $2$ utility
$\geq \frac{\delta}{2}+\frac{\delta}{3}$ and player $3$ utility $0$.
\item Case 2b: Player $3$ lost $W$.
In this case the value of $3$ for $Y$ is $v+\delta/2$ and the
game played is:
$$[v_i^j] =
\begin{bmatrix} 0 & v-\frac{\delta}{2} & 0\\
\delta/2 & v+\frac{\delta}{3} \text{~or~} v+\frac{2\delta}{3} &
\delta/2\\
0 & 0 & v+\delta/2 \end{bmatrix} $$
Thus player $3$ now wants to win for a value at most
$v+\delta/2$ and player $2$ for a value at most
$v+\frac{2\delta}{3}-\frac{\delta}{2}$. Hence, in the unique no-overbidding
equilibrium player $3$ will win. Therefore, player $1$ will get utility
$0$, player $2$ utility $\delta/2$ and player $3$ utility at least
$\frac{\delta}{3}$.
\end{itemize}
Thus we see that at auction $W$ the following game is played:
$$ \begin{bmatrix} 0 & 0 & v-\frac{\delta}{2} & 0\\
\frac{\delta}{2} & \frac{\delta}{2} &
\frac{\delta}{2}+\frac{\delta}{3} \text{~or~}
\frac{\delta}{2}+\frac{2\delta}{3} & \frac{\delta}{2}\\
\frac{\delta}{3} \text{~or~} \frac{2\delta}{3} &
\frac{\delta}{3} \text{~or~} \frac{2\delta}{3} &
\delta &
\frac{\delta}{3} \text{~or~} \frac{2\delta}{3} \\
0 & 0 & 0 & \frac{2\delta}{3}+\epsilon \end{bmatrix} $$
Players $1$ and $2$ will bid $0$ and player $4$ wants to win for at most
$\frac{2\delta}{3}+\epsilon$. Player $3$ wants to win for at most
$\delta-\frac{\delta}{3}$.
Hence, in the unique equilibrium player $4$ will win $W$. Consequently,
player $3$ will win $Y$ and player $2$ will win $Z$. Thus, player $1$ will get
utility $0$, player $2$ utility $\delta/2$, player $3$
utility at least $\frac{\delta}{3}$ and at most $\frac{2\delta}{3}$ and player
$4$ utility at least $\epsilon$ and at most $\frac{\delta}{3}+\epsilon$
(according to whether player $2$ won one of $X_1,X_2$ or not).
\item Case 3: Player $3$ didn't win any of $X_1,X_2$ and player $2$ won some of
$X_1,X_2$.
In this case we just need to observe that $2$ expects a profit of at most
$\delta/2$ from $Z$ hence he will set a price of at least $v-\delta/2$ at the
$Y$ auction. Thus player $3$ expects to get utility at most $2\delta$ from the
$Y$ and $W$ auctions.
\end{itemize}
Now we examine the existence of equilibrium in the two-item auction. Both
players $2$ and $4$ get utilities at most $2\delta$ from the $Y,W$ and $Z$
auctions and have at most $\delta$ value for $X_1$ and $X_2$. Thus they will
bid at most $3\delta$.
On the other hand player $1$ has a utility of $v-\delta/2$ from subsequent
auctions if he wins both items and utility $0$ if player $3$ wins some of them.
Moreover, player $3$ has a utility at most $2\delta$ from subsequent auctions
in any outcome, but has a value of $2v/3+\frac{\delta}{3}$ for winning some of
$X_1$ or $X_2$. Hence, player $3$ is willing to win some of $X_1$ or $X_2$ at a
price of $2v/3-2\delta$. Since we assume that $\delta\rightarrow 0$ we can
ignore players $2$ and $4$ in the first auction.
If player $1$ wins both items and both at a price smaller than
$2v/3-2\delta$ then player $3$ has a profitable deviation to bid higher than
that at one auction and outbid $1$. Thus if player $1$ wins both items he must
be paying at least $4v/3-4\delta$ which is much more than the utility he
receives. Hence, this cannot happen.
Thus player $3$ must be winning some auction. If that is true, then player $2$
receives $0$ utility in any possible outcome, and since he has no direct value
for $X_1$ or $X_2$ he doesn't want to win any of the auctions. Moreover, if
player $3$ bids more than $2\delta$ in both auctions and wins both auctions then
he has a profitable deviation to bid $0$ in one of them since given that he wins
one item his marginal valuation for the second is $0$. Thus in equilibrium
player $3$ will bid less than $2\delta$ in one of the two auctions. Moreover,
he is bidding at most $2v/3+2\delta$ in the auction he is winning. However, in
that case player $1$ has a profitable deviation of marginally outbidding player
$3$ in both auctions. Hence, player $3$ winning some auction cannot happen
either at equilibrium, and therefore no pure
Nash equilibrium can exist in the first round.
\input{external-second-price.tex}
\end{appendix}
\section{Auctions with Externalities}\label{sec:external-payoffs}
In this section we consider a \emph{single-item auction with externalities}, and
analyze a simple first-price auction for this case. Variations
on the same concept of externalities can be found in Jehiel and Moldovanu
\cite{JehielMoldovanuNonparticipation}, Funk \cite{Funk} and in Bae et al.
\cite{Bae2008}; the last is also motivated by sequential
auctions, but considers auctions with two players only.
\comment{
Funk \cite{Funk} also gives an example of an equilibrium in undominated
strategies that leads to an unnatural outcome. In the example the unnatural
outcome would be eliminated by restricting to strategies that survive the
iterated elimination of dominated strategies. Instead, Funk introduces a new
concept of locally undominated strategy, and shows that first price auction with
externalities always has a pure Nash equilibrium using locally undominated
strategies.}
Here we show that a pure Nash equilibrium exists using strategies that survive
the iterated elimination of dominated strategies.
The single-item auction with
externalities will be used as the main building block in the study of
sequential auctions.
\begin{Definition}
A \textbf{first-price single-item auction with externalities} with $n$
players is a game such that the type of player $i$ is a vector $[v_i^1,
v_i^2, \hdots, v_i^n]$ and the strategy set is $[0, \infty)$. Given
bids $b=(b_1, \hdots, b_n)$ the first price auction allocates the item to the
highest bidder, breaking ties using some arbitrary fixed rule, and makes the
bidder pay his bid value. If player $i$ is allocated the item, then
the experienced utilities are $u_i = v_i^i - b_i$ and $u_j = v_j^i$ for all $j \neq
i$. \end{Definition}
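The allocation and utility rule in the definition is easy to encode. In the sketch below, `v[i][j]` stands for $v_i^j$ and the fixed tie-breaking rule is taken, for concreteness, to favor the lowest index:

```python
def first_price_externalities(v, bids):
    """First-price single-item auction with externalities: the highest
    bidder wins and pays their own bid; every other player experiences the
    externality value of the winner's victory."""
    n = len(bids)
    winner = max(range(n), key=lambda i: bids[i])  # tie: lowest index
    utilities = [v[i][i] - bids[i] if i == winner else v[i][winner]
                 for i in range(n)]
    return winner, utilities
```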
For technical reasons, we allow a player to bid $x$ and $x+$ for each real
number $x\geq 0$. The bid $b_i = x+$ means a bid that is infinitesimally larger
than $x$. This is essential for the existence of equilibrium in first-price
auctions. Alternatively, one could consider limits of $\epsilon$-Nash equilibria
(see Hassidim et al \cite{Hassidim11} for a discussion).
Next, we present a natural constructive proof of the existence of
equilibrium that survives iterated elimination of weakly-dominated strategies.
The formal definition of iterated elimination of weakly-dominated
strategies that we use is given in Appendix \ref{appendix:refinements}. Our proof is based on
ascending auctions.
\begin{theorem} \label{thm:external-exists}
Each instance of the first-price single-item auction with externalities has
a pure Nash equilibrium that survives iterated elimination of weakly-dominated
strategies. \end{theorem}
\begin{proof}
For simplicity we assume that all $v_i^j$ are multiples of $\epsilon$. Further, we may
assume without loss of generality that $v_i^j \ge 0$ and $\min_j v_i^j=0$. We
say that the item is toxic if $v_i^i < v_i^j$ for every player $i$ and every $j \neq i$. If the item is
toxic, then $b_i = 0$ for all players and player $1$ getting the item is an
equilibrium. If not, assume $v_1^1 \geq v_1^2$.
Let $\langle i,j,p \rangle$ denote the state of the game where player $i$ wins
for price $p$ and $j$ is the price setter, i.e., $b_i = p+$, $b_j = p$, $b_k =
0$ for $k \neq i,j$. The idea of the proof is to define a sequence of states
which have the following invariant property: $p \leq \gamma_i$, $p \leq
\gamma_j$ and $v_i^i -p \geq v_i^j$, where $\gamma_i = \max_j v_i^i - v_i^j$.
\comment{We call the first two
conditions \emph{non-overbidding}.} We will define the sequence, such that
states don't appear twice on the sequence (so it can't cycle) and when the
sequence stops, we will have reached an equilibrium.
Start from the state $\langle 1,2,0 \rangle$, which clearly satisfies the
conditions. Now, suppose we are in state $\langle i,j,p \rangle$. If there is no $k$
such that $v_k^k - p > v_k^i$, then we are at an equilibrium satisfying all
conditions. If there is such a $k$, move to the state $\langle k,i,p \rangle$ if
this state
hasn't appeared before in the sequence, and otherwise, move to $\langle
k,i,p+\epsilon \rangle$. We need to check that the new states satisfy the
invariant conditions: first $v_k^k - (p+\epsilon) \geq v_k^i$. Now, we need to
check the two first conditions: $p+\epsilon \leq v_k^k - v_k^i \leq \gamma_k$.
Now, the
fact that $i$ is not overbidding is trivial for $\langle k,i,p \rangle$, since
$i$ wasn't overbidding in $\langle i,j,p \rangle$. If $\langle k,i,p \rangle$
already appeared in this sequence, it means that $i$ took the item from player
$j$, so $v_i^i - p > v_i^j$, hence $p < v_i^i - v_i^j \leq \gamma_i$ and so
$p+\epsilon \leq \gamma_i$.
Now, notice this sequence can't cycle, and prices are bounded by valuations, so
it must stop somewhere and there we have an equilibrium. To show the existence
of an equilibrium surviving iterative elmination of weakly dominated
strategies, we need a more careful argument: we refer to appendix
\ref{appendix:refinements} for a proof.
\end{proof}
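The state sequence from the proof can be traced mechanically. The sketch below is illustrative only: it assumes, as in the proof, that all values `v[i][j]` ($= v_i^j$) are multiples of `eps`, and it returns the final state $\langle i,j,p \rangle$, encoding $b_i = p+$, $b_j = p$ and zero bids for everyone else:

```python
def ascending_states(v, eps):
    """Follow the proof's sequence of states <i, j, p>: while some player k
    would overbid the current winner i (v[k][k] - p > v[k][i]), move to
    <k, i, p>, or to <k, i, p + eps> if that state was already visited."""
    i, j, p = 0, 1, 0  # start from <1, 2, 0> (0-indexed here)
    seen = {(i, j, p)}
    while True:
        challengers = [k for k in range(len(v))
                       if k != i and v[k][k] - p > v[k][i]]
        if not challengers:
            return i, j, p  # no one wants to overbid: equilibrium state
        i, j = challengers[0], i
        if (i, j, p) in seen:
            p += eps
        seen.add((i, j, p))
```

On a two-player instance without externalities the sequence terminates with player $1$ winning at the second-highest value, matching the usual first-price intuition.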
It is not hard to see that any equilibrium of the \emph{first-price} auction
with externalities is also an equilibrium in the \emph{second-price} version.
The second-price equilibria that are equivalent to
first-price equilibria (producing the same price and the same allocation)
are exactly those that are \textbf{envy-free}, i.e., no player
would rather be the winner at the price the winner is actually paying. So, an
alternative way of looking at our results for first price is to see them as
outcomes of second-price when we believe that envy-free equilibria are selected
at each stage.
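To make the envy-freeness condition concrete, here is a minimal Python sketch (the matrix convention `V[j][i]`, giving player $j$'s value when player $i$ wins, mirrors the externality model used above and is our own encoding):

```python
def is_envy_free(V, winner, price):
    """Check envy-freeness of a second-price outcome with externalities.

    V[j][i] is the value to player j when player i wins the item.
    The outcome (winner, price) is envy-free when no player j would
    rather be the winner at the price the winner actually pays:
        V[j][j] - price <= V[j][winner]   for all j != winner.
    """
    n = len(V)
    return all(V[j][j] - price <= V[j][winner]
               for j in range(n) if j != winner)


# Two players: each values winning at 3 and values the rival winning at 0.
V = [[3, 0],
     [0, 3]]
print(is_envy_free(V, winner=0, price=3))  # True: the loser cannot gain at price 3
print(is_envy_free(V, winner=0, price=1))  # False: the loser envies (3 - 1 > 0)
```

The same check, applied at each stage, is how one would test whether a second-price equilibrium corresponds to a first-price one.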
\section{Sequential Item Auctions}\label{sequential-item-auctions}
Assume there are $n$ players and $m$ items and each
player has a monotone (free-disposal) combinatorial valuation $v_i: 2^{[m]}
\rightarrow \R_+$. We will consider sequential auctions. First assume that at each time step only a single item is being auctioned off: item $t$ is auctioned in step $t$.
We define the sequential first (second) price auction for this case as
follows: in time step $t = 1 \hdots m$ we ask for bids $b_i(t)$ from each agent for the item being considered in this step,
and run a first (second) price auction to sell this item. Generally, we assume that after
each round, the bids of each agent become common knowledge, or at least the winner and the winning price become public knowledge. The agents can
naturally choose their bid in time $t$ as a function of the past history.
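The single-item-per-round protocol can be sketched as a short simulation (a hedged illustration, not part of the paper's formal model: we assume additive valuations, ties broken toward the lowest-index bidder, and a `(bids, winner, price)` history encoding):

```python
def sequential_first_price(values, policies):
    """Run m single-item first-price auctions in sequence.

    values[i][t]: player i's (additive, for illustration) value for item t.
    policies[i]:  adaptive bidding policy, called as policies[i](t, history),
                  where history lists (bids, winner, price) of past rounds,
                  reflecting that bids become common knowledge after a round.
    Ties are broken toward the lowest-index bidder (an assumed rule).
    """
    n, m = len(values), len(values[0])
    history, utility = [], [0.0] * n
    for t in range(m):
        bids = [policies[i](t, list(history)) for i in range(n)]
        winner = max(range(n), key=lambda i: (bids[i], -i))
        price = bids[winner]  # first price: the winner pays his own bid
        utility[winner] += values[winner][t] - price
        history.append((tuple(bids), winner, price))
    return history, utility


# Truthful (myopic) policies, purely for illustration -- no equilibrium claim.
values = [[2.0, 1.0], [1.0, 2.0]]
policies = [lambda t, h, i=i: values[i][t] for i in range(2)]
history, utility = sequential_first_price(values, policies)
print([w for _, w, _ in history])  # [0, 1]
```

Replacing `price = bids[winner]` with the second-highest bid gives the second-price variant of the same protocol.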
We will also consider the natural extension of these games, when each round can
have multiple items on sale, bidders submit bids for each item on sale, and we
run a separate first (second) price auction for each item.
This setting is captured by \textbf{extensive form games} (see \versions{the
Appendix of the full version}{Appendix \ref{appendix:extensive} } for a formal
definition and
\cite{fudenberg91} for a more comprehensive treatment). The strategy of each
player is an adaptive bidding policy: the policy specifies what a player bids
when the $t^{th}$ item (or items) is auctioned, depending on the bids and outcomes
of the previous $t-1$ items. More formally a strategy for player $i$ is a bidding function
$\beta_i(\cdot)$ that associates a bid $\beta_i(\{b_j^\tau\}_{j,\tau <
t})\in \R_+$ with each sequence of previous bidding
profiles $\{b_j^\tau\}_{j,\tau < t}$.
Utilities are calculated in the natural way: utility for the set of items won,
minus the sum of the payments from each round. In each round the player with
largest bid wins the item and pays the first (second) price. We are interested
in the \textbf{subgame perfect equilibria} ($\spe$) of this game: this means
that the
profile of bidding policies is a Nash equilibrium of the original
game, and if we arbitrarily fix what happens in the first $t$ rounds, the policy
profile remains a Nash equilibrium of the induced game.
Our goal is to measure the \textbf{Price of Anarchy}, which is the worst
possible ratio between the optimal welfare achievable (by allocating the items
optimally) and the welfare in a subgame perfect equilibrium. Again, we invite
the reader to \versions {the Appendix of the full version}{Appendix \ref{appendix:extensive}} for formal definitions.
\subsection{First Price Auctions: existence of pure equilibria}\label{sec:first-price}
First we show that sequential first price single item auctions have pure equilibria for all valuations.
\begin{theorem}
A sequential first price auction in which a single item is auctioned each round has a $\spe$ that doesn't use dominated strategies, and in which the bids at each node of the game tree depend only on who got the items in the previous rounds.
\end{theorem}
We use backwards induction, and apply our result on the existence of Pure Nash
Equilibria in first price auctions with externalities to show the theorem.
Given the outcomes of the game from stage $k+1$ onwards, we define a game with externalities for stage $k$; by Theorem \ref{thm:external-exists} this game has a pure Nash equilibrium. It is interesting to
notice that we have existence of a pure
equilibrium for arbitrary combinatorial valuations. In contrast, the
simultaneous item bidding auctions don't always possess a pure equilibrium, even for subadditive bidders (\cite{Hassidim11}).
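The backward-induction step in this argument can be sketched in code (a hypothetical helper; the names and the `cont` encoding of continuation utilities are our own):

```python
def stage_externality_matrix(item_values, cont):
    """Build the single-item game with externalities for one stage.

    item_values[i]: player i's marginal value for the current item.
    cont[i][w]:     player i's continuation utility in later rounds when
                    player w wins the current item (obtained from the
                    subgames already solved by backward induction).
    Returns V with V[i][w] = total value to i of the outcome 'w wins',
    which is exactly the input of a first price auction with externalities.
    """
    n = len(item_values)
    return [[(item_values[i] if w == i else 0.0) + cont[i][w]
             for w in range(n)] for i in range(n)]


# Toy numbers: winning now is worth 5 to player 0 and 4 to player 1, but
# player 1 expects a continuation utility of 3 if player 0 wins now.
V = stage_externality_matrix([5.0, 4.0], [[0.0, 0.0], [3.0, 0.0]])
print(V)  # [[5.0, 0.0], [3.0, 4.0]]
```

Solving each such stage game for a pure equilibrium, from the last round backwards, is the construction used in the existence proof.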
\comment{
We show that in sharp contrast to second price, sequential first price auctions
have a small price of anarchy. The main reason is that equilibria are envy-free,
i.e., if some player is winning some item for a certain price $p$, any other
player is able to take this item for the same price $p$ that the winner is currently paying.
Consider for example players with additive valuations and let's look at the
non-overbidding equilibria, in the sense that players don't overbid in the
``auction with external payoffs'' induced in each node -- notice that it is
still possible that a player bids more than his marginal value for some item. In
this setting, only one thing can happen for the last item: it is sold
to the player who values it the most, at the second highest price (that is the
only first-price equilibrium where players don't overbid). Now, since the last
auction is determined regardless of what happens before, there is only one possible
equilibrium for the second-to-last item, which is again that the player who values
it the most wins at the second highest price, and so on. Therefore, the only
subgame perfect equilibrium is efficient.
}
In the remainder of this section we consider three classes of valuations:
additive, unit-demand and submodular. For additive valuations, the sequential
first-price auction always produces the optimal outcome. This is in contrast to
second price auctions, as we show in \versions{the Appendix of the full version}{Appendix \ref{sec:second-price}}.
In the next two subsections we consider unit-demand bidders, and prove a bound of 2 for the Price of Anarchy, and then show that for submodular valuations the price of anarchy is unbounded (while in the simultaneous case, the price of anarchy is bounded by 2 \cite{Hassidim11}).
\comment{
We present our results for the case of one item being auctioned at each
timestep. However, all results carry over to the case where in each timestep a
subset of items is auctioned simultaneously -- i.e. an intermediate model
between our fully sequential model and the fully simultaneous model of
\cite{Hassidim11}. However, if multiple items are auctioned in a round, a
subgame perfect equilibrium might fail to exist, as shown in \versions{Appendix of the full version}{ appendix
\ref{appendix:multi-unit}}.
}
\subsection{First Price Auction for Unit-Demand Bidders}\label{subsec:multi_unit_auctions}
We assume that there is free disposal, and hence say that a player $i$ is unit-demand if, for a bundle $S \subseteq [m]$, $v_i(S) = \max_{j \in S} v_{ij}$, where $v_{ij}$ is the valuation of player $i$ for item $j$.
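In code, a unit-demand valuation with free disposal is simply (a trivial sketch; `v_row[j]` plays the role of $v_{ij}$):

```python
def unit_demand_value(v_row, bundle):
    """v_i(S) = max_{j in S} v_{ij}; the empty bundle is worth 0.
    Free disposal makes this monotone: adding items never decreases value."""
    return max((v_row[j] for j in bundle), default=0.0)


v = [3.0, 5.0, 1.0]
print(unit_demand_value(v, {0, 2}))     # 3.0
print(unit_demand_value(v, {0, 1, 2}))  # 5.0 -- monotone in the bundle
```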
To see that inefficient allocations are possible, consider the example given in
Figure \ref{fig1}. There is a sequential first price auction of three items
among four players. Player $b$ prefers to lose the first item, anticipating
that he might get a similar item for a cheaper price later. This gives an example
where the Price of Anarchy is $3/2$. Notice that this is the only equilibrium using
non-dominated strategies.
\versions{
\begin{figure}
\centering
\includegraphics{unit-demand-example_solo1.mps}
\caption{Sequential Multi-unit Auction generating $\poa$ $3/2$: there
are $4$ players $\{a,b,c,d\}$ and three items that are auctioned first $A$, then
$B$ and then $C$. The optimal allocation is $b\rightarrow A$, $c\rightarrow C$,
$d \rightarrow B$ with value $3\alpha-\epsilon$. There is a $\spe$ that has
value $2\alpha+\epsilon$. In the limit when $\epsilon$ goes to $0$ we get
$\poa=3/2$.
}
\label{fig1}
\end{figure}
}{
\begin{figure}[h]
\centering
\includegraphics{unit-demand-example1.mps}
\caption{Sequential Multi-unit Auction generating $\poa$ $3/2$: there
are $4$ players $\{a,b,c,d\}$ and three items that are auctioned first $A$, then
$B$ and then $C$. The optimal allocation is $b\rightarrow A$, $c\rightarrow C$,
$d \rightarrow B$ with value $3\alpha-\epsilon$. There is a $\spe$ that has
value $2\alpha+\epsilon$. In the limit when $\epsilon$ goes to $0$ we get
$\poa=3/2$.
}
\label{fig1}
\end{figure}
}
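A quick numeric check of the welfare numbers quoted in the caption of Figure \ref{fig1} confirms the limiting ratio:

```python
def poa_ratio(alpha, eps):
    """Ratio of the optimal welfare (3*alpha - eps) to the SPE welfare
    (2*alpha + eps) from the example of Figure 1."""
    return (3 * alpha - eps) / (2 * alpha + eps)


# As eps -> 0 the ratio tends to 3/2.
for eps in (0.1, 0.001, 1e-6):
    print(round(poa_ratio(1.0, eps), 4))
```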
\begin{theorem}\label{thm:unit-demand}
For unit-demand bidders, the $\poa$ of pure subgame perfect equilibria of Sequential First Price Auctions of individual items is bounded by $2$, while for mixed equilibria it is at most $4$.
\end{theorem}
\begin{proof}
Consider the optimal allocation and a subgame perfect equilibrium, and let
$\opt$ denote the social value of the optimum, and $\spe$ the social value of the subgame perfect equilibrium. Let $N$ be the set of players allocated in the optimum.
For each $i\in N$, let $j^*(i)$ be the item allocated to $i$ in the
optimal allocation, let $j(i)$ be the item allocated to him in the
subgame perfect equilibrium, and let
$v_{i,j(i)}$ be player $i$'s value for this item (if player $i$ got more
than one item, let $j(i)$ be his most valuable one). If player $i$ wasn't
allocated at all, let $v_{i,j(i)}$ be zero. Let $p(j)$ be the price for which
item $j$ was sold in equilibrium. Consider three possibilities:
\begin{enumerate}
\item $i$ gets $j^*(i)$, then clearly $v_{i,j(i)} \geq v_{i,j^*(i)}$
\item $i$ gets $j(i)$ after $j^*(i)$ or doesn't get allocated at all, then
$v_{i,j(i)} \geq v_{i,j^*(i)} - p(j^*(i))$, otherwise
he could have improved his utility by winning $j^*(i)$
\item $i$ gets $j(i)$ before $j^*(i)$, then either $v_{i,j(i)} \geq
v_{i,j^*(i)}$ or he can't improve his utility by getting $j^*(i)$, so it
must be the case that his marginal gain from $j^*(i)$ was smaller than
the maximum bid in $j^*(i)$, i.e. $p(j^*(i)) \geq v_{i,j^*(i)} - v_{i,j(i)}$
\end{enumerate}
Therefore, in all the cases, we got $p(j^*(i)) \geq v_{i,j^*(i)} - v_{i,j(i)}$.
Summing for all players $i\in N$, we get:
$$\opt = \sum_i v_{i,j^*(i)} \leq \sum_i \left( v_{i,j(i)} + p(j^*(i)) \right) \leq 2\spe$$
where the last inequality is due to individual rationality of the players.
\versions{The bound of 4 for mixed equilibria is proved in the full version.
}
{Next we prove the bound of 4 for the mixed case.
We focus on a player $i$ and let $j=j^*(i)$ denote the item assigned to $i$ in the optimal matching.
In the case of mixed Nash equilibria, the price $p(j)$ is a random variable, as well as $A_i$ the set of items player $i$ wins in the auction. Consider a node $n$ of the extensive form game, where $j$ is up for auction, i.e., a possible history of play up to $j$ being auctioned. Let $P^{n-}_i$ be the expected value of the total price $i$ paid till this point in the game, and let
$P^n(j)=\expect{}{p(j)|n}$ be the expected price for item $j$ at this node $n$, and note that $P(j)=\expect{}{p(j)}=\expect{}{P^n(j)}$, where the right expectation is over the induced distribution on nodes $n$ where $j$ is being auctioned.
By deviating to offer price $2 P^n(j)$ at every node $n$ where $j$ comes up for auction, and then dropping out of the auction, player $i$ gets utility at least $1/2(v_i(j)-2P^n(j))-P^{n-}_i$: by Markov's inequality he wins item $j$ with probability at least $1/2$, and he has paid $P^{n-}_i$ up to this point. Using the Nash inequality we get
$$\expect{}{v_i(A_i)}-P_i \ge \expect{}{1/2(v_i(j)-2P^n(j))-P^{n-}_i},$$
where $P_i$ is the expected payment of player $i$, and the expectation on the right hand side is over the induced distribution on the nodes of the game tree where $j$ is being auctioned.
Note that the proposed deviation does not affect the play before item $j$ is auctioned, so the expected payment $P_i$ of player $i$ is at least the expected value of $P^{n-}_i$ over the nodes $n$, and the expected value of $P^n(j)$ over the nodes $n$ is the expected price $P(j)$ of item $j$. Using these we get
$$\expect{}{v_i(A_i)}\ge \frac{1}{2}v_i(j)-P(j).$$
Now summing over all players, and using that $\sum_j P(j) \le \sum_i \expect{}{v_i(A_i)}$ due to individual rationality, we get the claimed bound of 4.}
\end{proof}
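The pure-strategy argument can be sanity-checked mechanically on small instances: compute the optimal matching by brute force and compare it with the welfare of a candidate equilibrium outcome. The sketch below uses hypothetical numbers and only verifies the factor-2 bound for that instance; it does not verify that the outcome is an actual $\spe$:

```python
from itertools import permutations

def opt_matching_welfare(V):
    """Brute-force optimal welfare for unit-demand players: V[i][j] = v_{ij}.
    Padding the item list with None lets players remain unallocated."""
    n, m = len(V), len(V[0])
    best = 0.0
    for perm in permutations(list(range(m)) + [None] * n, n):
        best = max(best, sum(V[i][j] for i, j in enumerate(perm)
                             if j is not None))
    return best


def within_factor_two(V, eq_alloc):
    """eq_alloc[i]: item player i wins in a candidate equilibrium (or None)."""
    spe = sum(V[i][j] for i, j in enumerate(eq_alloc) if j is not None)
    return opt_matching_welfare(V) <= 2 * spe


# Hypothetical instance: candidate equilibrium welfare 6 versus optimum 7.
V = [[4.0, 3.0], [3.0, 3.0]]
print(opt_matching_welfare(V))       # 7.0
print(within_factor_two(V, [1, 0]))  # True: 7 <= 2 * 6
```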
The proof naturally extends to sequential auctions when in each round multiple items are being auctioned.
We can also generalize the above positive result to any
class of valuation functions that satisfy the property that the optimal
matching allocation is close to the optimal allocation.
\begin{theorem}
\label{thm:approx-unit}
Let $\opt_M$ be the optimal matching allocation and $\opt$ the optimal
allocation of a Sequential First Price Auction. If $\opt \leq \gamma \opt_M$
then the $\spoa$ is at most $2\gamma$ for pure equilibria and at most $4\gamma$ for mixed Nash, even
if each round multiple items are auctioned in parallel (using separate first price auctions).
\end{theorem}
\versions{}{
\begin{proof}
Let $j^*(i)$ be the item of bidder $i$ in the optimal matching allocation and
$A_i$ his allocated set of items in the $\spe$. Let $A_i^-$ be the items that
bidder $i$ wins prior or concurrent to the auction of $j^*(i)$ and $A_i^+$ the ones that he
wins after. Consider a bidder $i$ that has not won his item in the optimal matching allocation. Bidder $i$
could have won this item when it appeared by bidding above its current
price $p_{j^*(i)}$ and then abandoning all subsequent auctions. Hence:
$$ v_i(A_i^- \cup \{j^*(i)\})- p_{j^*(i)}-\sum_{j\in A_i^-}p_j \leq
v_i(A_i)-\sum_{j\in A_i}p_j $$
$$ v_i(A_i^- \cup \{j^*(i)\})-p_{j^*(i)} \leq v_i(A_i)-\sum_{j\in A_i^+}p_j \leq
v_i(A_i) $$
$$v_i(j^*(i))-p_{j^*(i)} \leq v_i(A_i)$$
If a player did acquire his item in the optimal matching allocation then the above inequality certainly
holds. Hence, summing up over all players we get:
\begin{equation*}
\begin{split}
\opt_M = & \sum_i v_i(j^*(i)) \leq \sum_i v_i(A_i) +\sum_i
p_{j^*(i)} \\
\leq & \spe + \sum_{j} p_j = \spe + \sum_i \sum_{j\in A_i}p_j \\
\leq & \spe + \sum_i v_i(A_i) = 2\spe
\end{split}
\end{equation*}
which in turn implies: $$\opt\leq \gamma \opt_M\leq 2\gamma \spe$$
The bound of $4\gamma$ for the mixed case is proved along the same lines as the mixed proof of
Theorem \ref{thm:unit-demand}.
\end{proof}
The above general result can be applied to several natural classes of bidder
valuations. For example, we can derive the following corollary for
multi-unit auctions with submodular bidders: a bidder is said
to be uniformly submodular if his valuation is a submodular function of the
number of items he has acquired, and not of the exact set of items. Thus a
uniformly submodular valuation is defined by a sequence of decreasing marginals
$v_i^1,\ldots,v_i^m$.
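Such a valuation can be sketched as follows (the decreasing-marginals assertion is our own sanity check):

```python
def uniformly_submodular_value(marginals, k):
    """Value of any bundle of k items when the valuation depends only on the
    number of items held: the sum of the first k marginals v^1 >= ... >= v^m."""
    assert all(a >= b for a, b in zip(marginals, marginals[1:])), \
        "marginals must be decreasing"
    return sum(marginals[:k])


m = [5.0, 3.0, 2.0]
print(uniformly_submodular_value(m, 2))  # 8.0
```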
\begin{corollary}
If bidders have uniformly submodular valuations and $\forall i,j:
|v_i^1-v_j^1|\leq \delta \max(v_i^1,v_j^1)$ ($\delta<1$) and there are more
bidders than items then the $\spoa$ of a Sequential First Price Auction is at
most $2/(1-\delta)$.
\end{corollary}
\comment{
\begin{theorem}For unit-demand bidders, the $\poa$ of any mixed strategy subgame
perfect equilibrium of Sequential First Price Auctions of individual items is
bounded by $4$.\end{theorem}
\begin{proof}
A mixed strategy \spe induces a probability distribution $D$ on
bidding histories of the game $h$ and hence to outcomes. Moreover, it induces a
probability distribution $D_j$ on the nodes of the game tree for the auction of
item $j$, each node representing a different bidding history. On each such node
$t_j$ the mixed strategies of the players induce a probability distribution
$\mathcal{P}(t_j)$ on the price of item $j$. The total expected price of an item
$j$ is
$\expect{h\sim D}{p_j(h)} = \expect{t_j \sim D_j}{\expect{p_j
\sim \mathcal{P}(t_j)}{p_j}}$
Consider player $i$ bidding $2\expect{p_{j^*(i)} \sim
\mathcal{P}(t_{j^*(i)})}{p_{j^*(i)}}$ on all nodes $t_{j^*(i)}$ of
the game tree for item $j^*(i)$ while keeping the same mixed strategy for all
previous items and dropping out from all subsequent items. This induces a new
probability distribution on bidding histories $D'$. By Markov's Inequality, the
probability of him winning $j^*(i)$ at each of the item's nodes is at least
$1/2$. Thus for each possible history in the support of $D'$ player $i$ wins
$j^*(i)$ with probability at least $1/2$. Due to the free disposal assumption,
the expected utility of the player by switching to this strategy is at least:
\begin{equation*}
\begin{split}
c \geq \expect{t_j \sim D'_{j^*(i)}}{(v_i(j^*(i)) -
2\expect{p \sim \mathcal{P}_{j^*(i)}}{p})\prob_{p \sim
\mathcal{P}_{j^*(i)}}[b_i(t_j) \geq p]} \\
\geq
\expect{t_j \sim D'_j}{\frac{v_i(j^*(i))}{2} - \expect{p \sim
\mathcal{P}_{j^*(i)}}{p}}
\end{split}
\end{equation*}
Moreover, since the strategies of all the players remain the same up to item
$j$ we have $D'_j = D_j$. Hence:
$$\expect{h \sim D'}{u_i(h)} \geq \expect{t_j \sim D_j}{\frac{v_i(j^*(i))}{2} -
\expect{p \sim \mathcal{P}_{j^*(i)}}{p}} = \frac{v_i(j^*(i))}{2} -
\expect{h \sim D}{p_{j^*(i)}(h)}$$
From the nash condition we have, $\expect{h \sim D'}{u_i(h)} \leq \expect{h \sim
D}{u_i(h)}$. Hence:
$$\frac{v_i(j^*(i))}{2} - \expect{h \sim D}{p_{j^*(i)}(h)} \leq
\expect{h\sim D}{v_i(A_i(h))-\sum_{j\in A_i(h)}p_j(h)} \leq \expect{h\sim
D}{v_i(A_i(h))}$$
Summing over all players we get:
\begin{equation*}
\begin{split}
\opt = \sum_i v_i(j^*(i)) \leq 2 \sum_i\expect{h\sim D}{v_i(A_i(h))} + 2
\sum_i \expect{h \sim D}{p_{j^*(i)}(h)} \\
\leq 2\spe +2 \expect{h \sim D}{\sum_i p_{j^*(i)}(h)} \leq
2\spe +2 \expect{h \sim D}{\sum_i \sum_{j \in A_i(h)} p_j}
\end{split}
\end{equation*}
From individual rationality we have that:
$$\expect{h \sim D}{u_i(h)} \geq 0 \Rightarrow \expect{h \sim D}{v_i(A_i(h))}
\geq \expect{h \sim D}{\sum_{j \in A_i(h)} p_j(h)}$$
Combining with above inequality we get:
\begin{equation*}
\opt \leq 2\spe + 2 \sum_i \expect{h \sim D}{\sum_{j \in A_i(h)} p_j} \leq
2\spe + 2 \sum_i \expect{h \sim D}{v_i(A_i(h))} \leq 4\spe
\end{equation*}
\end{proof}
}
\subsection{First Price Auctions, Submodular Bidders}\label{subsec:submodular}
In sharp contrast to the simultaneous item bidding auction, where both first
and second price have a good price of anarchy whenever pure Nash equilibria exist
\cite{Christodoulou2008, Bhawalkar, Hassidim11}, we show that for certain
submodular valuations, no welfare guarantee is possible in the sequential case.
While there are multiple equilibria in such auctions, in our example the natural
equilibrium is arbitrarily worse than the optimal
allocation.
\comment{
The base of the bad example is an equilibrium selection gadget
$\mathcal{G}_i$ that will also be used in the bad second-price examples.
Specifically we use two such gadgets as depicted in Figure \ref{fig3}.
\begin{figure}[h]
\centering
\includegraphics{submodular1.mps}
\caption{Gadgets used in high $\spoa$ instance of Sequential First Price
Auctions with submodular bidders.}
\label{fig3}
\end{figure}
Again we denote with $\spe_1$ the subgame perfect equilibrium that favors
player $b_i$ in gadget $\mathcal{G}_i$. Thus each player $b_i$ has a utility
increase of $2-\epsilon$ between $\spe_1$ and $\spe_2$.
Apart from the above gadgets $b_1$ and $b_2$ have an additive utility on $k$
other items auctioned before the auctions of the two gadgets. The value of
$b_1$ for each of the $k$ items is $1+\epsilon$ and the value of $b_2$ is $1$.
Moreover, there is an extra bidder $d$ that has additive valuation on the $k$
items with value $\epsilon$ for each item.
Now we describe a $\spe$ with unbounded $\spoa$: If both players $b_1$ and
$b_2$ lose in the $k$ auctions to player $d$ then $\spe_1$ is implemented in
both gadgets, otherwise $\spe_2$ is implemented. We claim that it is a $\spe$
for bidders $b_1$ and $b_2$ to lose all $k$ auctions. In the case when they
lose all auctions they have a utility of $2$. If any of $b_1$ or $b_2$ win
some of the $k$ auctions, then there is no point for these two players to lose
the remaining of the $k$ auctions since in any case $\spe_2$ will be
implemented in the gadgets. Hence, in the remaining of the $k$ auctions they
will bid truthfully. Thus $b_1$ will win the remaining auctions at a price of
$1$ giving him utility at most $k\epsilon$. If $b_1$ was the one to win the
first auction he will also gain utility $1-\epsilon$ from that auction leading
to a total utility of at most $1+k\epsilon$ which is less than $2$. If $b_2$
was the one to win the first auction he just gets a utility of $1-\epsilon$
from that auction and no subsequent utility which again is less than $2$.
}
\begin{theorem}\label{submodular_thm}
For submodular players, the Price of Anarchy of the sequential first-price
auction is unbounded.
\end{theorem}
\versions{
The example with unbounded Price of Anarchy are discussed in the full version.}{}
The intuition is that there is a misalignment between social welfare
and the players' utilities. A player might not want an item for which he has high
value but has to pay a high price. In the sequential setting, a bidder may
prefer to let a
smaller-value player win, because of the benefits she can derive from his decreased value for future items: she can buy future items at a smaller price, or divert a competitor and hence decrease the price.
\versions{}{
\begin{proof}
Consider four players and $k+3$ items, where two
of the players have additive valuations and two of them have coverage functions
as valuations. Call the items $\{I_1, \hdots, I_k, Y, Z_1, Z_2\}$ and let
players $1,2$ have additive valuations. Their valuations are represented by the
following table:
\begin{center}
\begin{tabular}{ c || c | c | c | c | c | c }
& $I_1$ & $\hdots$ & $I_k$ & $Y$ & $Z_1$ & $Z_2$ \\
\hline
$1$ & $1+\epsilon$ & $\hdots$ & $1+\epsilon$ & $0$ & $2-k\delta/2$ &
$0$ \\
$2$ & $1$ & $\hdots$ & $1$ & $0$ & $0$ & $2-k\delta/2$ \\
\end{tabular}
\end{center}
The valuations of players $3$ and $4$ are given by the coverage
functions defined in Figure \ref{fig_vals}: each item corresponds to a set
in the picture. If the player gets a set of items, his valuation for those items
(sets)
is the sum of the values of the elements covered by the sets corresponding to
the items.
\begin{figure}
\centering
\includegraphics{coverage_v41.mps}
\caption{Valuations $v_3$ and $v_4$.}
\label{fig_vals}
\end{figure}
In the optimal allocation, player $1$ gets all the items $I_1, \hdots, I_k$,
player $3$ gets $Y$ and player $4$ gets $Z_1, Z_2$. The resulting social welfare
is $k + 8 + k\epsilon - \delta/2$. We will show that there is a subgame perfect
equilibrium such that player $3$ wins all the items $I_1, \hdots, I_k$, even
though it has little value for them, resulting in a social welfare of
approximately 8 only.
The intuition is the following: at the end
of the auction, player $4$ has to decide whether he goes for item $Y$ or for
items
$Z_1, Z_2$. If he goes for item $Y$, he competes with player $3$ and afterwards
lets players $1$ and $2$ win items $Z_1, Z_2$ for free. This decision of player
$4$
depends on the outcomes of the first $k$ auctions. In
particular, we show that if all items $I_1, \hdots, I_k$ go to either $3$ or
$4$, then player $4$ will go for item $Y$, otherwise, he will go for items $Z_1,
Z_2$. If either players $1$ or $2$ acquire any of the items $I_1, \hdots,
I_k$,
they will be guaranteed to lose
item $Z_1, Z_2$, and therefore both will start bidding truthfully on all
subsequent $I_i$ auctions, deriving very little utility. In equilibrium
agent $3$ gets all items $I_1, \hdots, I_k$, resulting in a social welfare of
approximately $8$ only.
In the remainder of this section, we provide a more formal analysis: We
begin by examining what happens in the last three auctions of $Y,Z_1$ and $Z_2$
according to what happened in the first $k$ auctions. Let $k_{1,2},k_3,k_4$ be
the number of items won by the corresponding players in the first $k$ auctions.
\begin{itemize}
\item Case 1: $k_{1,2}=0$. Thus $k_3 = k-k_4$. Player $3$ has a value of
$4-\frac{\delta}{2}-k\delta+(k-k_3)\delta = 4-\frac{\delta}{2}-(k-k_4)\delta$
for item $Y$. Player $4$ has a value of $4$ for $Y$ and a value of
$2-k\frac{\delta}{2}+(k-k_4)\frac{\delta}{2}$ for each of $Z_1$ and $Z_2$. Thus
if player $4$ loses auction $Y$ he will get a utility of $(k-k_4)\delta$ from
the auctions of $Z_1$ and $Z_2$ since players $1$ and $2$ will bid
$2-k\delta/2$. Thus at auction $Y$ player $4$ is willing to win for a price of
at most $4-(k-k_4)\delta$ and player $3$ will bid
$4-\frac{\delta}{2}-(k-k_4)\delta$.
Thus, player $4$ will win $Y$ and will only bid $(k-k_4)\frac{\delta}{2}$ in
each of $Z_1,Z_2$. Therefore, in this case we get that the utilities of all the
players from the last three auctions are:
$u_1=u_2=2-(2k-k_4)\frac{\delta}{2},u_3=0,u_4=\frac{\delta}{2}+(k-k_4)\delta$
\item Case 2: $k_{1,2}>0$. Player $3$ has a value of
$4-\frac{\delta}{2}-k\delta+(k-k_3)\delta$ for $Y$. Since $k_{1,2}\geq 1$, we
have $k-k_3=k_4+k_{1,2}\geq k_4+1$, hence the value of $3$ for $Y$ is at least
$4+\frac{\delta}{2}-(k-k_4)\delta$. Player $4$ has a value of $4$ for $Y$
and a value of $2-k\frac{\delta}{2}+(k-k_4)\frac{\delta}{2}$ for each of $Z_1$
and $Z_2$. Hence, again player $4$ wants to win at auction $Y$ for at most
$4-\frac{\delta}{2}-(k-k_4)\delta$, hence he will lose to $3$ and will go on to
win both $Z_1$ and $Z_2$.
Thus the utilities of all the
players from the last three auctions are:
$u_1=u_2=0,u_3=\delta,u_4=(k-k_4)\delta$
\end{itemize}
We show by induction on $i$ that as long as players $1$ and $2$ haven't won any
of the $k-i$ items auctioned so far then they will bid $0$ in the remaining $i$
items and one of players $3$ or $4$ will win marginally with zero profit.
For $i=1$ since both $1$ and $2$ haven't won any previous item, by losing the
$k$'th item we know by the above analysis that they both get utility of
$\approx 2$, while if any of them wins then they get utility of $0$.
The external auction that is played at the $k$'th item is represented by the
following $[v^i_j]$ matrix:
$$ \begin{bmatrix} 1+\epsilon & 0 & \approx 2 & \approx 2\\
0 & 1 & \approx 2 & \approx 2\\
\delta & \delta & \delta & 0 \\
(k-k_4)\delta & (k-k_4)\delta &
\frac{\delta}{2}+(k-k_4)\delta &
\frac{3\delta}{2}+(k-k_4)\delta \end{bmatrix} $$
It is easy to observe that the following bidding profile is an equilibrium of
the above game that doesn't involve any weakly dominated strategies:
$b_1=b_2=0,b_3=\delta,b_4=\delta+$. Thus, player $4$ will marginally win with
no profit from the current auction (alternatively we could have player $3$ win
with no profit).
Now we prove the induction step. Assume that the claim holds for $i-1$. We know
that if either player $1$ or $2$ wins the $(k-i)$-th item then whatever they do in
subsequent auctions, from the case 2 of the analysis, player $4$ will go for
$Z_1$ and $Z_2$ and they will get $0$ utility in the last $3$ auctions. Hence,
in the $i-1$ subsequent auctions they will bid truthfully, making player $1$
win marginally at zero profit every auction. On the other hand if they lose, by
the induction hypothesis they will lose all subsequent auctions leading them to
utility of $\approx 2$. Moreover, players $3$ and $4$ have exactly the same
utilities as in the base case, since they never acquire any utility from the
first $k$ auctions. Thus the external auction played at the $(k-i)$-th item is
exactly the same as the auction of the base case and hence has the same bidding
equilibrium.
Thus in the above $\spe$ players $1$ and $2$ let some of the players $3$ and
$4$ win all the first $k$ items. This leads to an unbounded $\spoa$.
\end{proof}
}
\section{Introduction}
The first and second price auctions for a single item, and their corresponding strategically
equivalent ascending versions are some of the most commonly used auctions for
selling items. Their popularity has been mainly due to the fact that they
combine simplicity with efficiency: the auctions have
simple rules and in many settings lead to efficient allocation.
The simplicity and efficiency tradeoff is more difficult when auctioning
multiple items, especially so when items to be auctioned are possibly owned by
different sellers. The most well-known auction is the truthful VCG auction,
which is efficient, but is not simple: it requires coordination among sellers,
requires the sellers to agree on how to divide the revenue,
and in many situations requires solving computationally hard problems. In light
of these issues, it is important to design simple auctions with good
performance; it is also important to understand the properties of simple auction designs used
in practice.
Several recent papers have studied properties of simple item-bidding auctions,
such as using simultaneous second price auctions for each item
\cite{Christodoulou2008,Bhawalkar}, or simultaneous first price auction
\cite{Hassidim11,Immorlica}. Each of these papers studies the case when all the items are
auctioned simultaneously, a property essential for all of their results.
The simplest, most natural, and most common way to auction items by different owners
is to run individual single
item auctions (e.g., sell each item separately on eBay). No common auction environment runs simultaneous auctions (first price or second price) for large sets of items.
To evaluate to what
extent simultaneity is important for the good properties of the above simple
auctions \cite{Christodoulou2008,Bhawalkar,Hassidim11}, it is important to
understand the sequential versions of item bidding auctions.
There is a large body of work on online auctions (see \cite{Parkes} for a
survey), where players have to make strategic decisions without having any
information about the future. In many auctions participants have information
about future events, and engage in strategic thinking about the upcoming
auctions. Here we take the opposite view from online auctions, and study the
full information version of this game, when the players have full information
about all upcoming auctions.
Driven by this motivation we study sequential simple auctions from an
efficiency perspective. Sequential auctions are very different from their
simultaneous counterparts. For example, it may not be dominated to bid above the
actual value of an item, as the outcome of this auction can have a large effect
on the player's utility in future auctions, beyond the value of this item. We focus on two
different economic settings. In the first set of results we study the case of a
market of buyers and sellers, where each seller holds one item and each bidder
has a combinatorial valuation on the items. In the second setting we study the
case of procuring a base of a matroid from a set of bidders, each controlling an
element of the ground set.
For item auctions, we show that a subgame perfect equilibrium always exists, and study the quality of the resulting outcome. While the equilibrium is not unique in most cases, in some important classes of games the quality of any equilibrium is close to that of the optimal solution.
We show that in the widely studied case of matching markets
\cite{Shapley1971,Demange1986,Kranton2001}, i.e. when bidders are unit-demand,
the social welfare achieved by a subgame perfect equilibrium of a sequential
first price auction is at least half of the optimal.
Thus in a unit-demand setting
sequential implementation causes at most a factor of 2 loss in social welfare.
On the other hand, we also show that the welfare loss due to sequential implementation is unbounded
when bidders have arbitrary submodular valuations, or when the second price
auction is used, hence in these cases the simultaneity of the auctions is
essential for achieving the positive results
\cite{Christodoulou2008,Bhawalkar,Hassidim11}.
For the setting of auctioning a base of a matroid to bidders that control a
unique element of the ground set, we show that a natural sequential first price
auction has a unique outcome, achieving the same allocation and prices as $\vcg$.
An important building block in all of our results is a single item first or second price
auction in a setting with externalities, i.e., when players have different valuations for different
winner outcomes. Many economic settings might give rise to such
externalities among the bidders of an auction:
\begin{itemize}
\item We are motivated by externalities that arise in sequential auctions. In this setting, bidders might know of future auctions, and realize that their surplus in a future auction depends on who
wins the current one. Such considerations introduce externalities, as players can have different expected future utilities depending on the winner of the current auction.
To illustrate how externalities arise in sequential auctions, consider the following example
of a sequential auction with two items and two players, with
valuations $v_1(1) = v_1(2) = 5, v_1(1,2) = 10$ for player 1, and $v_2(1) = v_2(2) = v_2(1,2)
= 4$ for player 2. Player 1 has the higher value for both items. However, if she wins item $1$ then she has
to pay 4 for the second item, while allowing player 2 to win the first item
results in a price of 0 for the second item. We can summarize this by saying
that her value for winning the first item is 6 (value 5 for the first item
itself, plus an additional expected value of 1 for subsequently winning the
second item at a price of 4), while her value for player 2 winning the first item is 5
(for subsequently winning the second item for free).
\item Bidders might want to signal information through their bids so as to
threaten or inform other bidders and hence affect future outcomes. This is the cause of the inefficiency in our example of a sequential second price auction. Such phenomena
have been observed at Federal Communication Commission (FCC) auctions where
players were using the lower digits of their bids to send signals discouraging
other bidders to bid on a particular license (see chapter 1 of Milgrom
\cite{MilgromBook}).
\item If bidders are competitors or
collaborators in a market then it makes a difference whether your friend or
your enemy wins. One very vivid portrayal of such an externality is an auction
for nuclear weapons \cite{Jehiel96}.
\end{itemize}
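The backward-induction computation in the two-item example above can be sketched in a few lines. This is our illustrative sketch (the function and variable names are ours, not the paper's), using the shorthand that in a first price auction the higher-value bidder wins at a price equal to the other bidder's marginal value.

```python
# Sketch of the two-item, two-player example: player 1 values each item at 5
# (additive), player 2 values any nonempty bundle at 4 (unit demand).

def second_auction(winner_of_first):
    """Equilibrium of the first price auction for item 2, given who won
    item 1. Returns (winner, price)."""
    m1 = 5                                  # player 1's marginal value for item 2
    m2 = 4 if winner_of_first == 1 else 0   # player 2's marginal value for item 2
    return (1, m2) if m1 > m2 else (2, m1)

# Player 1's continuation utility after each possible first-auction outcome:
u_after_1_wins = 5 - second_auction(1)[1]   # wins item 2 paying 4 -> utility 1
u_after_2_wins = 5 - second_auction(2)[1]   # wins item 2 for free -> utility 5

# Player 1's effective values in the first auction, continuation included:
value_of_winning = 5 + u_after_1_wins       # 6, as computed in the text
value_if_2_wins = u_after_2_wins            # 5: the externality
```

The difference between these two effective values is exactly the externality player 1 experiences from the identity of the first item's winner.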
The properties of such an auction are of independent interest outside of the
sequential auction scope.
\subsection{Our Results }
\quad
\noindent \textbf{Existence of equilibrium.} In section
\ref{sec:external-payoffs} we show that the first price single item auction
always has a pure Nash equilibrium that survives iterated elimination of
dominated strategies, even in the presence of
arbitrary externalities, strengthening the results in
\cite{JehielMoldovanuNonparticipation, Funk}.
\comment{First price auctions with externalities have been previously studies by
Jehiel and Moldovanu \cite{JehielMoldovanuNonparticipation} who showed that such
first price auctions have pure Nash equilibria, if bidders are allowed to use
dominated strategies. Funk \cite{Funk} proved that pure equilibria also exists
in undominated strategies. Our equilibrium construction also survives iterated
elimination of dominated strategies.}
In section \ref{sequential-item-auctions} we use such auctions with externalities to show
that sequential first price auctions have pure subgame perfect Nash equilibria
for any combinatorial valuations. This is in contrast to simultaneous first
price auctions that may not have pure equilibria even for very simple
valuations.
\textbf{Quality of outcomes in sequential first price item auctions.} Next we
study the quality of outcome of sequential first price auctions in various
settings. Our main result is that with unit demand bidders the price of
anarchy is bounded by 2. In contrast, when valuations are submodular, the price
of anarchy can be arbitrarily high. This differentiates sequential auctions from simultaneous
auctions, where pure Nash equilibria are socially optimal \cite{Hassidim11}.
These results extend to sequential auctions
where multiple items are sold at each stage using independent first price auctions. Further, the efficiency
guarantee degrades smoothly as we move away from unit demand. Moreover, the results
also carry over to mixed strategies with an additional factor-of-$2$ loss.
Unfortunately, the existence of pure equilibria is only guaranteed when
auctioning one item at a time.
\textbf{Sequential second price auctions.}
Our positive results depend crucially on using the first price auction format.
In the appendix, we show that sequential second price auctions can lead to
unbounded inefficiency even for additive valuations, while for additive valuations
sequential first price, as well as simultaneous first or second price, auctions all lead to
efficient outcomes.
\textbf{Sequential Auctions for selling a basis of a matroid.} In section
\ref{sec:matroid} we consider matroid auctions, where bidders each
control a
unique element of the ground set, and we show that a natural sequential first
price
auction achieves the same allocation and price outcome as $\vcg$. Specifically,
motivated by the greedy spanning tree algorithm, we propose the following
sequential auction: at each iteration, pick a cocircuit of the matroid that doesn't
intersect previous winners and run a first price auction among the bidders in
the cocircuit. This auction is a more distributed alternative to the ascending auction
proposed by Bikhchandani et al. \cite{Bikhch2010}. For the interesting case of a
procurement auction for buying a spanning tree of a network from local
constructors, our mechanism takes the form of a
geographically local and simple auction.
We also study the case where bidders control several elements of the ground set
but have a unit-demand valuation. This problem is a common generalization of the matroid problem, and
the unit demand auction problem considered in the previous section. We show that the bound of 2 for the
price of anarchy of any subgame perfect equilibrium extends also to this generalization.
\subsection{Related Work}
\quad
\noindent \textbf{Externalities.} The fact that one player might influence another
without direct competition has long been studied in the economics literature.
The textbook model is due to Meade \cite{Meade} in 1952, and the concept has
been studied in various contexts. To name a few recent ones: \cite{Jehiel96}
study it in the context of weapon sales,
\cite{ghosh08,gomes09} in the context of AdAuctions, and
\cite{Krysta10} in the context of combinatorial auctions. Our externalities
model is due to Jehiel and Moldovanu \cite{JehielMoldovanuNonparticipation}.
They show that a pure Nash
equilibrium exists in a full information game of first price auction with
externalities, but use dominated strategies in their proof. Funk \cite{Funk}
shows the existence of an equilibrium after one round of elimination of
dominated strategies, but argues that this refinement alone is not enough for
ruling out unnatural equilibria, and gives a very compelling example of this
fact. Iterative elimination of dominated strategies would
eliminate the unnatural equilibria in his example, but instead of analyzing it,
Funk analyzes a different concept: locally undominated strategies, which he
defines in the paper. We show the existence of an equilibrium surviving any
iterated elimination of dominated strategies; our proof is based on a
natural ascending price auction argument, that provides more intuition
on the structure of the game.
Our first price auction equilibrium is also an equilibrium of a second price auction. Previous work studying second price auctions includes
Jehiel and Moldovanu \cite{JehielMoldovanu00}, who study a simple case of
second price auctions with two buyers and externalities between the buyers,
derive equilibrium bidding strategies, and point out the various effects caused
by positive and negative externalities; in \cite{JehielMoldovanu06} the
same authors study a simple case of second price auctions with two types of
buyers.
\comment{
There is a long line of research in the economics
literature on auctions with externalities, see for example \cite{Greenwood91,
Jehiel96, Jehiel99}. It has also been studied in the context of AdAuctions
\cite{ghosh08,gomes09} and Combinatorial Auction \cite{Krysta10}. Variations of
our the single shot auctions with external payoffs appeared in \cite{Bae2009}
and \cite{Jehiel96}.
}
\textbf{Sequential Auctions.} A lot of the work in the economic
literature studies the behavior of prices in sequential auctions of identical
items where bidders participate in all auctions. Weber \cite{Weber2000} and
Milgrom and Weber \cite{Milgrom1982a} analyze first and second price sequential
auctions with unit-demand bidders in the Bayesian model of incomplete
information and show that in the unique symmetric Bayesian equilibrium the
prices have an upward drift. Their prediction was later refuted by
empirical evidence (see e.g. \cite{Ashenfelter1989}) showing a declining
price phenomenon. Several attempts to describe this ``declining price anomaly''
have since appeared, such as McAfee and Vincent \cite{McAfee1993}, who
attribute it to risk-averse bidders. Although we study full information games
with pure strategy outcomes, we still observe declining price phenomena in
our sequential auction models without relying on risk aversion.
Boutilier et al. \cite{Boutilier99} study first price auctions in a setting with uncertainty; they give a dynamic programming algorithm for finding optimal auction strategies assuming the distribution of other bids is stationary in each stage, and show experimentally that good quality solutions emerge when all players use this algorithm repeatedly.
The multi-unit demand case has been studied under the complete
information model as well. Several papers (e.g. \cite{Gale2001,Rodriguez2009})
study the
case of two bidders, showing that there is a unique
subgame perfect equilibrium that survives the iterated elimination of weakly
dominated strategies, which is not the case for more than two bidders. Bae et
al. \cite{Bae2009,Bae2008}
study the case of sequential second price auctions of identical items to two
bidders with concave valuations on homogeneous items. They show that the unique
outcome that
survives the iterated elimination of weakly dominated strategies is inefficient,
but achieves a social welfare at least $1-e^{-1}$ of the optimal. Here we
consider more than two bidders, which makes our analysis more challenging,
as the uniqueness argument of the Bae et al. \cite{Bae2009,Bae2008} papers
depends heavily on having only two players: when there are only two players, the
externalities that naturally arise due to the sequential nature of the auction
can be modeled by a standard auction with no externalities using modified
valuations.
\textbf{Item Auctions.} Recent work from the
Algorithmic Game Theory community tries to propose the study of outcomes of
simple mechanisms for multi-item auctions.
Christodoulou, Kovacs and Schapira \cite{Christodoulou2008} and
Bhawalkar and Roughgarden \cite{Bhawalkar} study the case of running
simultaneous second price item
auctions for combinatorial auction settings. Christodoulou
et al. \cite{Christodoulou2008} prove that for bidders with submodular
valuations and incomplete information the Bayes-Nash Price of Anarchy is $2$.
Bhawalkar and Roughgarden \cite{Bhawalkar} study the more general case of
bidders with
subadditive valuations and show that under complete information the Price of
Anarchy of any Pure Nash Equilibrium is $2$ and under incomplete information the
Price of Anarchy of any Bayes-Nash Equilibrium is at most logarithmic in the
number of items.
Hassidim et al. \cite{Hassidim11} and Immorlica et al. \cite{Immorlica} study the case of simultaneous first price auctions and show that the pure Nash equilibria of the game correspond exactly to the Walrasian equilibria. Hassidim et al. \cite{Hassidim11} also show that mixed Nash equilibria have a price of anarchy of 2 for submodular bidders and logarithmic, in the number of items, for subadditive valuations.
\textbf{Unit Demand Bidders.} Auctions with unit demand bidders correspond to the classical matching problem in optimization. They have been studied extensively also in the context of auctions, starting with the classical papers of Shapley and Shubik \cite{Shapley1971} and
Demange, Gale, and Sotomayor \cite{Demange1986}.
The most natural application of unit demand bidders is the case of a buyer-seller network market.
A different interesting application, where a sequential auction is also natural, is
scheduling jobs with deadlines. Suppose we have a set of jobs with different
start and end times (that are commonly
known), and each has a private valuation for getting the job done, not known to the auctioneer.
Running an auction for each time slot sequentially is natural since, for example, it
doesn't require a job to participate in an auction before its start time.
\textbf{Matroid Auctions.} The most recent and related work on Matroid Auctions
is that of Bikhchandani et al \cite{Bikhch2010} who propose a centralized
ascending auction for selling bases of a Matroid that results in the \vcg
outcome. In their model each bidder has a valuation
for several elements of the matroid and the auctioneer is interested in selling
a base. Kranton and Minehart \cite{Kranton2001}
studied the case of a buyer-seller bipartite network market where each buyer has
a private valuation and unit-demand. They also propose an ascending auction that
simulates the \vcg outcome. Their setting can be viewed as a matroid auction,
where the matroid is the set of matchable bidders in the bipartite network.
Under this perspective their ascending auction is a special case of that of
Bikhchandani et al. \cite{Bikhch2010}. We study a sequential version of this
matroid basis auction game, but consider only the case when bidders are interested in a
specific element of the matroid, and show that the sequential auction also
implements the VCG outcome.
\section{Matroid Basics}
\label{apdx:matroid}
For completeness, we summarize some definitions regarding matroids and review notation.
A Matroid $\mathcal{M}$ is a pair
$(\mathcal{E}_M,\mathcal{I}_M)$, where $\mathcal{E}_M$ is a ground set of
elements and $\mathcal{I}_M$ is a set of subsets of $\mathcal{E}_M$ with the
following properties: (1) $\emptyset \in \mathcal{I}_M$, (2) If $A\in
\mathcal{I}_M$ and $B\subset A$ then $B\in \mathcal{I}_M$, (3) If $A,B \in
\mathcal{I}_M$ and $|A|>|B|$ then $\exists e\in A-B$ such that $B+e \in
\mathcal{I}_M$. The subsets of $\mathcal{I}_M$ are the independent subsets of
the matroid and the rest are called dependent.
The rank of a set $S\subset \mathcal{E}_M$, denoted as
$r_{\mathcal{M}}(S)$, is the cardinality of the maximum independent subset of
$S$. A base of a matroid is a maximum cardinality independent set and the set of
bases of a matroid $\mathcal{M}$ is denoted with $\mathcal{B}_{\mathcal{M}}$.
An important example of a matroid is the graphical matroid on the edges of a
graph, where a set $S$ of edges is independent if it doesn't contain a cycle,
and bases of this matroid are the spanning trees.
A circuit of a matroid $\mathcal{M}$ is a minimal dependent set,
and we denote with $\mathcal{C}(\mathcal{M})$ the set of circuits of $\mathcal{M}$.
Circuits in graphical matroids are exactly the cycles of the graph. A cocircuit is a minimal set that intersects every base of $\mathcal{M}$. Cocircuits in graphical matroids correspond to cuts of the graph.
\begin{Definition}[Contraction]
Given a matroid $\matroid$ and a set $X\subset \mathcal{E}_M$ the contraction
of $\mathcal{M}$ by $X$, denoted $\mathcal{M}/X$, is the matroid defined on
ground set $\mathcal{E}_M-X$ with $\mathcal{I}_{M/X}=\{S\subseteq
\mathcal{E}_M-X: S\cup X \in \mathcal{I}_M\}$.
\end{Definition}
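As a toy illustration of this definition (our sketch; the uniform matroid and oracle names are hypothetical, not from the paper), contraction composes naturally with an independence oracle:

```python
# M is the uniform matroid U_{2,4} on ground set {a,b,c,d}: a set is
# independent iff it has at most 2 elements.
ground = {'a', 'b', 'c', 'd'}

def indep(S):
    return set(S) <= ground and len(set(S)) <= 2

def contract(indep, X):
    """Independence oracle of M/X: a set S disjoint from X is independent
    in M/X iff S ∪ X is independent in M."""
    X = set(X)
    return lambda S: not (set(S) & X) and indep(set(S) | X)

indep_ca = contract(indep, {'a'})
# In U_{2,4}/{a}, singletons are independent but pairs are dependent.
```

The oracle view makes it clear that contraction only shifts the "budget" of independence by the contracted set.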
If we are given weights for each element of the ground set of a matroid
$\mathcal{M}$ then it is natural to define the following optimization problem:
Find the base $\opt(\mathcal{M})\in \mathcal{B}_{\mathcal{M}}$ that
has minimum/maximum total weight (we might sometimes abuse notation and denote
with $\opt$ both the set and its total weight). A well known algorithm for
solving the above
optimization problem is the following (see \cite{Lawler1976}): At each iteration
consider a cocircuit that doesn't intersect the elements already picked in
previous iterations, then add its minimum/maximum element to the current
solution.
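On the graphical matroid, one concrete instance of this cocircuit greedy is the familiar cut-based spanning tree algorithm. The sketch below is ours (maximization variant) and uses as cocircuits the cuts separating the partial tree from the remaining vertices:

```python
# Cocircuit greedy on the graphical matroid: repeatedly take the cut
# (cocircuit) between reached and unreached vertices, and add its
# maximum-weight edge. Assumes a connected graph on vertices 0..n-1.

def greedy_max_base(n, weighted_edges):
    """weighted_edges: list of (u, v, w). Returns a maximum-weight
    spanning tree as a list of chosen edges."""
    reached = {0}
    chosen = []
    while len(reached) < n:
        # The current cocircuit: edges with exactly one endpoint reached.
        cut = [e for e in weighted_edges
               if (e[0] in reached) != (e[1] in reached)]
        best = max(cut, key=lambda e: e[2])   # greedy pick in the cocircuit
        chosen.append(best)
        reached.update(best[:2])
    return chosen
```

On a triangle with edge weights 5, 4, 2 this selects the two heaviest edges, for total weight 9.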
Matroids are also natural structures for auctions. Assume that we
want to auction (sell) a base of a matroid $\mathcal{M}$ defined on the
bidders, i.e. each bidder is an element of the ground set and the weight of an
element is the private valuation of the corresponding bidder, and we can sell (or service) any independent set. The auction we study in the next section is motivated by the above algorithm, where we sequentially run second price auctions at each step. The most well known mechanism for auctioning items to a set of bidders is
the Vickrey-Clarke-Groves Mechanism ($\vcg$). The
\vcg mechanism selects the optimal basis $\opt(\mathcal{M})$. It is not hard to show by
the properties of matroids that the \vcg price of a player $i\in
\opt(\mathcal{M})$, denoted as $\vcg_i(\mathcal{M})$, is the valuation of the
highest bidder $j(i)$ that can be exchanged with $i$ in $\opt(\mathcal{M})$,
i.e. $\vcg_i(\mathcal{M})=\max\{v_j: \opt(\mathcal{M})-i+j\in \mathcal{I}_M\}$. Alternatively,
this price is the maximum, over all circuits of the matroid that contain $i$,
of the minimum-value other bidder in the circuit: $\vcg_i(\mathcal{M}) = \max_{C\in
\mathcal{C}(\mathcal{M}): i\in C}\min_{i\neq j\in C} v_j$. To unify notation we
say that $\vcg_i(\mathcal{M})=\infty$ for a bidder $i\notin
\opt(\mathcal{M})$, although the actual price assigned by the \vcg mechanism is
$0$.
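The exchange formula can be checked by brute force on a small instance. In the following sketch (ours, with hypothetical bidder values) the ground set is the edge set of a triangle, every pair of edges is a base, and the unique circuit consists of all three edges:

```python
from itertools import combinations

# Graphical matroid of a triangle; values are hypothetical bidder valuations.
edges = {'a': (0, 1), 'b': (1, 2), 'c': (0, 2)}
value = {'a': 5, 'b': 4, 'c': 2}

def independent(S):
    """An edge set is independent iff it is acyclic (union-find check)."""
    parent = {u: u for e in S for u in edges[e]}
    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u
    for e in S:
        ru, rv = find(edges[e][0]), find(edges[e][1])
        if ru == rv:
            return False
        parent[ru] = rv
    return True

# Rank of the triangle matroid is 2: bases are the independent pairs.
bases = [set(S) for S in combinations(edges, 2) if independent(S)]
opt = max(bases, key=lambda B: sum(value[e] for e in B))

def vcg_price(i):
    """max{ v_j : opt - i + j is a base }, for a winner i in opt."""
    return max(value[j] for j in edges
               if j not in opt and (opt - {i}) | {j} in bases)
```

Here `opt` is `{a, b}` and both winners pay 2, the minimum value in the unique circuit `{a, b, c}`, matching the circuit form of the formula.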
\section{Matroid Auctions}
\label{sec:matroid}
In this section we first consider a matroid auction where each matroid element
is associated with a separate bidder, then in Section
\ref{subsec:sequential_matroid} we consider a problem that generalizes matroid
auctions and item auctions with unit demand bidders.
\subsection{Sequential Matroid Auctions}
\label{subsec:sequential_matroid}
Suppose that a telecommunications company wants to build a
spanning tree over a set of nodes. At each possible link of the network
there is a distinct set of local constructors that can build the link. Each
constructor has a private cost for building the link, so the company has to
hold a procurement auction to award contracts for building the edges of a
minimum-cost spanning tree.
In this section, we show that running a sequential first price auction
yields an outcome equivalent to that of the $\vcg$ auction, in a distributed and
asynchronous fashion.
A version of the well-known greedy algorithm for this optimization problem
considers cuts of the graph sequentially and includes
the minimum-cost edge of each cut considered.
Our sequential auction is motivated by this greedy algorithm:
we run a sequence of first price auctions among the edges in a cut. More
formally, at each stage of the auction, we consider a cut where no edge was
included so far, and hold a first price sealed bid auction, whose winner is
contracted.
More generally, we can run the same auction on any matroid, not just the
graphical matroid considered above. The goal of the procurement auction is to
select a minimum cost matroid basis, and at each stage we run a sealed bid first
price auction for selecting an element in a co-circuit.
Alternately, we can also consider the analogous auction for selling some service to a basis of a matroid. As before, the bidders correspond to elements of a matroid. Their private value $v_i$ is their value for the service. Due to some conflicts, not all subsets of the bidders can be selected. We assume that feasible subsets form a matroid, and hence the efficient selection chooses the basis of maximum value. As before, it may be simpler to implement smaller regional auctions. Our method sequentially runs first price auctions for adding a bidder from a
co-circuit. For the special case of the dual of the graphical matroid, this problem corresponds to the following. Suppose that a telecommunications company, due to some mergers, ended up with
a network that has cycles. The company thus decides to
sell off its redundant edges
so that it ends up owning just a spanning tree. The sequential auction we propose
runs a sequence of first price auctions, each time selecting an edge of a cycle in the network for sale. If more than one bidder is interested in an
edge we can simply think of it as replacing that edge with a path of edges, each
controlled by a single individual.
The main result of this section is that the above sequential auction implements
the $\vcg$ outcome both for procurement and direct auctions.
To unify the presentation with the other sections, we will focus here on direct
auction. In the final subsection, we will consider a common generalization of
the unit-demand auction and this matroid auction. In the procurement version, we
make the small technical assumption that every cocircuit of the matroid contains at
least two elements; otherwise the $\vcg$ price of a player could be
infinite. Such an assumption
was also made in previous work on matroid auctions \cite{Bikhch2010}.
\begin{theorem} In a sequential first price auction among players in the
co-circuit of a matroid (as described above), subgame perfect equilibria in
undominated strategies emulate the $\vcg$ outcome (same allocations and prices).
\label{thm:matroid_spe_opt}
\end{theorem}
\comment{
For completeness, we summarize definitions regarding matroids in \versions{the Appendix of the full version}{Appendix \ref{apdx:matroid}}. Here we summarize the notation we need.
Let $\mathcal M$ denote a matroid on ground set $\mathcal{E}_M$, we use $\mathcal{I}_M$ to denote set of independent sets of $\mathcal M$, and $\mathcal{C}(\mathcal{M})$ the set of circuits of $\mathcal M$. For
a subset $S\subset \mathcal{E}_M$ we use $r_{\mathcal{M}}(S)$ to denote the
rank of the set $S$, and use $\mathcal{M}/S$ to denote the matroid with set $S$ contracted.
}
The proof of Theorem \ref{thm:matroid_spe_opt} is based on an induction on
matroids of lower
rank. After a few stages of the sequential game, we have selected a subset of
elements $X$. Notice that the resulting subgame is exactly a matroid
basis auction game in the contracted matroid $\mathcal{M}/X$. To understand such
subgames, we first prove a lemma \versions{(whose proof appears in the full version)}{}
that relates the \vcg prices of a player in a sequence of
contracted matroids.
\begin{lemma}
Let $\mathcal{M}$ be a matroid, and consider a player
$i^*\in \opt(\mathcal{M})$. Consider a co-circuit $D$, and assume our auction
selects an element $k \neq i^*$, and let $\mathcal{M'}$ be the matroid that
results from
contracting $k$. Then $\vcg_{i^*}(\mathcal{M'}) \geq
\vcg_{i^*}(\mathcal{M})$ and the two are equal if $k\in \opt(\mathcal{M})$.
\label{lem:vcg_prices}
\end{lemma}
\versions{}{
\begin{proof}
First we show that the VCG prices do not change when contracting an element
from the optimum. From matroid properties it holds that for any set $X$:
$\opt(\mathcal{M}/X)\subset \opt(\mathcal{M})$. In this case
$\opt(\mathcal{M'})=\opt(\mathcal{M}/\{k\})\subset \opt(\mathcal{M})$, which
directly implies that $\opt(\mathcal{M'})=\opt(\mathcal{M})-\{k\}$.
Hence, $\{j: \opt(\mathcal{M})-i^*+j \in
\mathcal{I}_{M}\}=\{j: \opt(\mathcal{M}')-i^*+j\in
\mathcal{I}_{M}'\}$, and thus,
$\vcg_{i^*}(\mathcal{M})=\vcg_{i^*}(\mathcal{M'})$.
As mentioned in the previous section, the \vcg price can also be defined as
the maximum, over all circuits of the matroid that contain $i^*$,
of the minimum-value other bidder in the circuit: $\vcg_{i^*}(\mathcal{M}) = \max_{C\in
\mathcal{C}(\mathcal{M}): i^*\in C}\min_{i^* \neq j\in C} v_j$. Let $C$ be the circuit
that attains this maximum in $\mathcal{M}$. The element $i^*$ is dependent on the
set $C \setminus \{i^*\}$ in $\mathcal{M}$, and as a result $i^*$ is dependent
on the set $C \setminus \{i^*,k\}$ in $\mathcal{M'}$; hence there is a circuit
$C'\subset C$ in $\mathcal{M'}$ with $i^*\in C'$. This proves that the \vcg
price can only increase due to contracting an element.
\end{proof}
}
\versions{Next we sketch the proof of the theorem, see the full version for the details.}{Now we are ready to prove Theorem \ref{thm:matroid_spe_opt}.}
\versions{\begin{proofof}{Theorem
\ref{thm:matroid_spe_opt} (sketch)}}{\begin{proofof}{Theorem
\ref{thm:matroid_spe_opt}}}
For clarity, we assume that the players' values for being allocated are all distinct.
We will prove the theorem by induction on the rank of the matroid.
Let $\mathcal{M}$ be our initial matroid prior to some auction. Notice that
for any outcome of the current auction the corresponding subgame
is exactly a sequential matroid auction on a contracted
matroid $\mathcal{M'}$.
\versions{}{
The proposed auction for rank 1 matroids is exactly a standard (no external payoffs) first price auction.
}Let $D$ be the co-circuit auctioned.
Using the induction hypothesis, we can write the induced game on this node of the game tree exactly.
For $i\in D-\opt(\mathcal{M})$, if he doesn't win the current auction then by the induction hypothesis he is not going to win any of the subsequent auctions, and hence $v_i^i = v_i$ and $v_i^j = 0$ for $j \neq i$.
Now consider a player $i \in \opt(\mathcal{M}) \cap D$. Again $v_i^i = v_i$, and for $j \neq i$ we have that $v_i^j = v_i - \vcg_i(\mathcal{M} / j)$ if $i \in \opt(\mathcal{M} / j)$ and $v_i^j = 0$ otherwise.
We claim that in all equilibria of this game where no player bids $b_i > \gamma_i := \max_j v_i^i - v_i^j$ (notice that bidding above $\gamma_i$ is a dominated strategy), a player $i \in \opt(\mathcal{M})$ wins at his $\vcg$ price. Suppose some player $k \notin \opt(\mathcal{M})$ wins the auction. Then there is some player $j \in \opt(\mathcal{M}) \setminus \opt(\mathcal{M}/k)$. This is not an equilibrium, as player $j$ has $v_j > v_k$, and hence $j$ could overbid $k$ and get the item, since the price is $p \leq v_k < v_j$.
\versions{To establish the induction hypothesis, in the full version we will show that}{Next we claim that} the winner $i \in \opt(\mathcal{M})$ gets the item at his $\vcg$ price, and that the winner is the player $i \in D$ with the highest $\vcg$ price.
\versions{}{
Suppose he gets the item at some price strictly smaller than his $\vcg$ price. We show that there is some player $t$ such that $v_t^t - \vcg_i(\mathcal{M}) \geq v_t^i$, i.e., a player who would profit from overbidding. Let $C$ and $j$ be the circuit and element that define the $\vcg$ price of $i$ in:
$$\vcg_{i}(\mathcal{M}) = \max_{C\in \mathcal{C}(\mathcal{M}): i\in C}\min_{i \neq j\in C} v_j$$
Now, since $\abs{C \cap D} \geq 2$, there is some $t \neq i$ with $t \in C \cap D$. Notice that $v_t \geq \vcg_i(\mathcal{M})$, so if $t \notin \opt(\mathcal{M})$ he would overbid $i$ and get the item. If $t \in \opt(\mathcal{M})$, then $\vcg_t(\mathcal{M}) \geq \vcg_i(\mathcal{M})$, so again he would prefer to overbid $i$ and get the item. This also shows that it is impossible in equilibrium for a player $i$ whose $\vcg$ price is not the maximum to win at a price at most his $\vcg$ price.
At last, suppose the winner gets the item for some price $p$ above his $\vcg$ price. Then $b_i = p+$ and there is some player $j \in D$ such that $b_j = p$. It cannot be that $j \notin \opt(\mathcal{M})$, since then his value $v_j$ could not be higher than the maximum $\vcg$ price.
So it must be that $j \in \opt(\mathcal{M})$; but then player $i$ can improve his utility by decreasing his bid, letting $j$ win, and then winning for $\vcg_i(\mathcal{M}/j) =\vcg_i(\mathcal{M}) < p$ (by Lemma \ref{lem:vcg_prices}).
}
\end{proofof}
The above optimality result tells us that $\vcg$ can be implemented in a
distributed and asynchronous way. Although the auctions happen locally, the
final price of each auction (the $\vcg$ price) is a global property. It should
be noted, nevertheless, that this is a common feature in network games in
general. The previous theorem concerns the state of the game after
equilibrium is reached. If one considers a certain (local) dynamics and believes
it will eventually settle in an equilibrium, the $\vcg$ outcome is the only
possible such stable state.
\subsection{Unit-demand matroid auction}
In this section we sketch a common generalization of the auction for unit demand
bidders from section \ref{subsec:multi_unit_auctions} and the matroid auction of
section \ref{subsec:sequential_matroid}.
Suppose that the items
considered form the ground set of some matroid
$\mathcal{M}=([m],\mathcal{I}_\mathcal{M})$ and the auctioneer wants to sell an
independent set of this matroid, while buyers remain unit-demand and are only
interested in buying a single item.
We define the Sequential Matroid Auction with Unit-Demand Bidders to be the
game induced if in the above setting we run the Sequential First Price Auction
on co-circuits of the matroid as defined in previous section.
\comment{
We consider the following sequential
auction: at each moment pick a co-circuit that doesn't intersect any of the
previously allocated items and run a first price auction among all the players
connected to some of these items. The winner pays his bid and is
allowed to choose any element in that co-circuit that he is connected to. We
refer to such a game as a Sequential Matroid Auction with Unit-Demand Bidders.
}
\begin{theorem}
\label{thm:matroid-unit-demand}
The price of anarchy of a subgame
perfect equilibrium of any Sequential Matroid Auction with Unit-Demand
Bidders is $2$.
\end{theorem}
To adapt our proof for the auction with unit-demand bidders to the more general
Theorem \ref{thm:matroid-unit-demand} we define the notion of the
\textbf{participation graph} $\mathcal{P}(B)$ of a base $B$ to be a bipartite
graph between the nodes in the base and the auctions that took place. An edge
exists between an element of the base and an auction if that element
participated in the auction. Now the proof is a combination of the proof of
Theorem \ref{thm:unit-demand} and of the following lemma.
\begin{lemma} For any base $B$ of the matroid $\mathcal{M}$, $\mathcal{P}(B)$
contains a perfect matching.
\label{lem:exists_perf_match}
\end{lemma}
\versions{}{
\begin{proof}
We will prove that, given any $k$-element independent set, there were at least
$k$ auctions in which at least one of those elements participated. The lemma
then follows by applying Hall's theorem.
Let $I_k=\{x_1,\ldots,x_k\}$ be such an independent set of the matroid. Let
$\mathcal{A}_{-k}=\{A_1,\ldots,A_t\}$ be the set of auctions (co-circuits) that
contain no element of $I_k$ ordered in the way they took place in the game and
$\mathcal{A}_{k}$ its complement. Let $a_1,\ldots,a_t$ be the winners of the
auctions in $\mathcal{A}_{-k}$. Let $r(\mathcal{M})$ be the rank of the
matroid.
Since $I_k$ is an independent set, it is a subset of some basis and by the
properties of co-circuits: for any $x_i\in I_k$ there exists a co-circuit $X_i$
that contains $x_i$ and no other $x_j$. The sequence of elements
$(x_1,\ldots,x_k,a_1,\ldots,a_t)$ and co-circuits
$(X_1,\ldots,X_k,A_1,\ldots,A_t)$ have the property that each element belongs to
its corresponding co-circuit and no co-circuit contains any previous element.
Hence, the set $\{x_1,\ldots,x_k,a_1,\ldots,a_t\}$ is an independent set and
therefore
$t+k\leq r(\mathcal{M})$.
Since the total number of auctions is $r(\mathcal{M})$,
$|\mathcal{A}_{k}|\geq k$.
\end{proof}
}
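As an illustration (ours, not part of the paper), Hall's condition for the participation graph can be checked by brute force on small instances: a bipartite graph between base elements and auctions has a perfect matching iff every subset $S$ of elements has at least $|S|$ neighbouring auctions. The element and auction names below are hypothetical.

```python
from itertools import combinations

# Brute-force Hall's condition.  `participated[e]` is the set of auctions
# (co-circuits) that base element e took part in; a perfect matching exists
# iff every subset S of elements satisfies |N(S)| >= |S|.
def satisfies_hall(elements, participated):
    for r in range(1, len(elements) + 1):
        for subset in combinations(elements, r):
            neighbours = set().union(*(participated[e] for e in subset))
            if len(neighbours) < len(subset):
                return False
    return True

# Toy base {x1, x2, x3} with three auctions: Hall's condition holds.
good = {"x1": {1}, "x2": {1, 2}, "x3": {2, 3}}
# Here x1 and x2 only ever appeared in auction 1, so no perfect matching.
bad = {"x1": {1}, "x2": {1}, "x3": {2, 3}}
```

The exhaustive subset check is exponential and only meant to make the statement of the lemma concrete; a real implementation would run Hopcroft–Karp instead.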
\versions{}{Now the proof of Theorem \ref{thm:matroid-unit-demand}.}
\begin{proofof}{Theorem \ref{thm:matroid-unit-demand} (sketch)}
Using the last lemma, there is a bijection between the elements
allocated in the efficient outcome and the co-circuits auctioned. For a player
$i$ that is assigned an item $j^*(i)$ in the efficient outcome, let $A(i)$
be the auction (co-circuit) matched with $j^*(i)$ in the above
bijection. Now if in the proof of Theorem
\ref{thm:unit-demand} we replace any reasoning about the auction of item
$j^*(i)$ with the auction $A(i)$, we can extend the arguments and
prove that $p(A(i)) \geq v_{i,j^*(i)} - v_{i,j(i)}$, where $p(A(i))$ is the value of
the bid that won auction $A(i)$. Summing these inequalities over the auctions
completes the proof.
\end{proofof}
\section{Sequential Multi-Item Auctions}
Here we consider the case where more than one item is auctioned at a time. We
show that even when two items are auctioned at a time and bidders have
unit-demand valuations, there might not exist a $\spe$ in pure strategies.
First we start with an example that uses gadget $\mathcal{G}_i$ and shows that
even for unit-demand bidders, it might be that for some choice of subgame
perfect equilibria of subgames, a stage game might not have a pure Nash
equilibrium. Apart from $\mathcal{G}_i$ we also add two
auctions that are auctioned simultaneously in the first round and the auctions
of the gadget are auctioned sequentially after the first round. If player $b$
wins both auctions of the first round then we implement $\spe_1$ in
$\mathcal{G}_i$ otherwise $\spe_2$ is implemented. Players have no value for
the auctions of the first round. Thus in the first round the utility of player
$b$ is $2$ for winning both items and $0$ for winning some of them, while the
utility of player $c$ is $2$ for winning any of the two items. This is a
standard example of an $AND$ and an $OR$ player that has no Walrasian
equilibrium and hence no First Price Item Auction Equilibrium. Hence, the first
round has no equilibrium in pure strategies.
However, it might be that by restricting the type of subgame perfect equilibria
that arise in subsequent rounds for different outcomes of the current round,
a pure Nash equilibrium exists. For example, in the instance above, if we
restrict only one type of equilibrium to arise in the gadget, then we can
construct an $\spe$. However, we show that this is not true in general.
In particular, in the appendix we give an example with submodular bidders where
at most $2$ items are auctioned at a time and no $\spe$ exists.
\section{First vs Second Price Auction}\label{sec:preliminaries}
Consider a single-item auction with $n$ players with valuations $v_1,
\hdots, v_n$ in the full information setting and where the strategy space of a
player is to report some bid $b_i \in [0, \infty)$. To avoid technical
difficulties in the first price auction, suppose a player can place, for any
real number $x$, a bid $b = x+$, meaning infinitesimally larger than $x$.
Alternatively, one could consider limits of $\epsilon$-Nash equilibria (see
\cite{Hassidim11}).
Both first and second price mechanisms allocate the item to the player
with the highest bid, but they differ in the payment rule: in the first price
auction, the winning bidder pays his bid, while in the second price auction, the
winning bidder pays the second highest bid. Supposing for simplicity that all
players have different valuations and $v_1 > v_2 > \hdots > v_n$, it is simple
to characterize the Nash equilibria of this auction:
\begin{enumerate}
\item \textit{First price auction}: For any number $x \in [v_2, v_1]$ there is
an equilibrium where $b_1 = x+$ and $b_i = x$ for some $i \neq 1$ and $b_j \leq
x$ for all $j \neq 1$. I.e. player $1$ always wins the item for some price
between $v_2$ and $v_1$.
\item \textit{Second price auction}: For any number $x \in [v_2, v_1]$, player
$1$ bids $x$ and the other players bid $b_i < x$. In this case, player $1$ gets
the item for some price between $0$ and $v_1$. The second price auction has,
nevertheless, non-efficient equilibria, where player $i > 1$ bids $b_i \geq v_1$
and all players $j \neq i$ bid $b_j \leq v_i$.
\end{enumerate}
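A minimal sketch (ours, not from the paper) of the payoff arithmetic behind the two characterizations. A bid $x+$ is modeled as the pair $(x, 1)$ and a plain bid $x$ as $(x, 0)$, so that tuple comparison breaks ties in favour of the ``$+$'' bidder; the numeric valuations are hypothetical.

```python
# Single-item auction under first- or second-price payments.
# bids[i] = (amount, plus_flag); values[i] = player i's valuation.
def single_item_auction(bids, values, rule):
    n = len(bids)
    winner = max(range(n), key=lambda i: bids[i])       # lexicographic: "x+" beats x
    highest_other = max(bids[i][0] for i in range(n) if i != winner)
    price = bids[winner][0] if rule == "first" else highest_other
    return winner, price, values[winner] - price        # (winner, price, utility)

values = [10.0, 6.0, 3.0]   # v1 > v2 > v3; index 0 is the highest-value player
# First price equilibrium at x = v2: b_1 = v2+, b_2 = v2 -> index 0 wins, pays 6
eq_first = single_item_auction([(6.0, 1), (6.0, 0), (0.0, 0)], values, "first")
# Second price: index 0 wins but pays only the second-highest bid, here 5
eq_second = single_item_auction([(8.0, 0), (5.0, 0), (2.0, 0)], values, "second")
```

Sweeping the second-highest bid through $[0, v_2]$ in the second-price call reproduces the price range discussed below.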
Bidding above one's own valuation is a blatantly dominated strategy and it is a
common assumption in much of the literature that players don't overbid. This
leaves us with a single outcome for the first price auction, where $b_1 = v_2+$
and $b_2 = v_2$, and the highest valued player gets the item by the second
highest valuation. In the second price auction, the first player bidding $b_1
\in [v_2, v_1]$ and each other player bidding $b_i \in [0,v_i]$ is an
equilibrium. So, in all outcomes, the highest value player wins the item but
the price can be anything between $0$ and $v_2$.
It is a standard argument that in the second price auction the equilibrium where
each player reports his valuation truthfully is selected, since reporting the
truth is a weakly dominant strategy -- and therefore in both auctions, the
highest-valued player would win the item for the second highest value. We
explicitly avoid this type of argument, for two
reasons: (i) Price of Anarchy bounds that don't invoke elimination of dominated
strategies are more robust; (ii) players might employ dominated strategies for
reasons external to this auction. Consider two sequential auctions of identical
items and two players that have equal value $1$ for each item and $2$ for both
items. Since the valuations are additive, one can see the auctions are
independent. However, there is an (subgame perfect) equilibrium of this game
where player $1$ bids $1$ on the first auction and if player $2$ bids zero in
the first auction, he bids $0$ in the second, otherwise he bids $1$. Player $2$
plays zero in the first auction and one in the second auction. It is easy to see
that this is a subgame perfect equilibrium and both players are playing
dominated strategies -- and nevertheless, both are better off compared to the
equilibrium where they play truthfully in both auctions.
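The payoff arithmetic of this example can be sketched as follows (ours, not from the paper; it assumes second-price payments, the reading under which ``truthful'' play is well defined, with ties going to player $0$):

```python
# Two-player second-price auction: winner pays the other player's bid.
def second_price(bids, values):
    w = 0 if bids[0] >= bids[1] else 1
    return w, values[w] - bids[1 - w]   # (winner, winner's utility)

v = [1.0, 1.0]   # each item is worth 1 to each player (additive)
# Signaling path: bids (1, 0) in auction 1, then (0, 1) in auction 2,
# so each player wins one item for free and gets utility 1.
u1 = second_price([1.0, 0.0], v)[1]
u2 = second_price([0.0, 1.0], v)[1]
# Truthful play: both bid their value 1 in both auctions -> zero utility.
t1 = second_price([1.0, 1.0], v)[1]
```

Under the signaling equilibrium each player earns $1$ per auction won, versus $0$ under truthful play, which is exactly why both prefer the dominated strategies.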
So, we restrict ourselves to non-overbidding equilibria, but allow other types
of weakly-dominated strategies, since they can be useful for a player's
high-level strategy -- signaling, for example.
\section{Second vs First price in Sequential Auctions}
\label{sec:second-price}
In order to stress how essential the design decision of adopting first price instead of second price is in sequential auctions\footnote{or alternatively, how crucial the envy-free assumption in second-price auctions is}, we present two examples that show how sequential second price auctions fail to provide any welfare guarantee even for elementary valuations. It is important to notice that this happens even though we restrict ourselves to equilibria where no player overbids in any game induced at a node of the game-tree. The second example is even stronger: even if we restrict our attention to equilibria that survive iterative elimination of weakly dominated strategies in all induced games, still no welfare guarantee is possible.
\subsection{Additive Valuations}
Consider a sequential auction of $m$ items among $n$ players using a sequential second price auction, where each player has an additive valuation $v_i:2^{[m]} \rightarrow \R_+$, i.e., $v_i(S) = \sum_{j \in S} v_i(\{j\})$. It is tempting to believe that this is equivalent to $m$ independent Vickrey auctions. Using $\spe$ as a solution concept, however, allows the possibility of signaling.
Consider the following example with $3$ players, where the Price of Anarchy is
infinite, which happens due to a miscoordination of the players. Consider $t+2$
items $\{A_1, \hdots, A_t, B, C\}$ and valuations given by the following table:
\begin{center}
\begin{tabular}{ c || c | c | c | c | c | c }
& $A_1$ & $A_2$ & $\hdots$ & $A_t$ & $B$ & $C$ \\
\hline
$1$ & $1$ & $1$ & $\hdots$ & $1$ & $0$ & $1$ \\
$2$ & $1-\epsilon$ & $1-\epsilon$ & $\hdots$ & $1-\epsilon$ & $1$ & $1-\epsilon$ \\
$3$ & $\delta$ & $\delta$ & $\hdots$ & $\delta$ & $1-\epsilon$ & $0$ \\
\end{tabular}
\end{center}
Now, notice that in each subtree, it is an equilibrium if everyone plays truthfully in the entire subtree, and notice that under this equilibrium players get only very small utility. Now, consider the outcome where player $3$ gets items $A_1 \hdots A_t$, player $2$ gets item $B$ and player $1$ gets item $C$. This outcome has social welfare $SW = 2 + t\delta$ while $Opt = t + 2$. Now we argue that there is an $\spe$ that produces this outcome, showing therefore that the Price of Anarchy is unbounded.
In the game tree, in the path corresponding to the equilibrium described above, consider the winner bidding truthfully and all other bidding zero. Now, in all other decision nodes of the tree outside that path, let everyone bid truthfully. It is easy to check that this is a $\spe$ according to the definition above.
Notice that this is a feature of second price. For example, in the last auction, player $3$ could not have gotten this item for free in the first price version, since player $2$ would have overbid him and gotten it instead. Second price auctions have the flaw that a player can win an item for some price $p$, while another player who wants to take the item may need to pay $p' > p$; this may make (as in the example) the equilibrium non-envy-free.
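The welfare gap in the table above can be checked numerically. This is a sketch of ours with hypothetical parameter values for $t$, $\epsilon$ and $\delta$; the inefficient outcome assigns the $A_i$'s to player $3$, and the efficient one assigns them to player $1$.

```python
t, eps, delta = 100, 0.01, 0.001
items = [f"A{i}" for i in range(1, t + 1)] + ["B", "C"]

# Valuation table from the example (rows are players 1, 2, 3).
v = {
    1: {it: (0.0 if it == "B" else 1.0) for it in items},
    2: {it: (1.0 if it == "B" else 1 - eps) for it in items},
    3: {it: (delta if it.startswith("A") else (1 - eps if it == "B" else 0.0))
        for it in items},
}

def welfare(assignment):
    # Additive valuations: sum each winner's value for the items he receives.
    return sum(v[p][it] for it, p in assignment.items())

# Inefficient SPE outcome: player 3 takes all A_i, player 2 takes B, player 1 takes C.
spe = {**{it: 3 for it in items if it.startswith("A")}, "B": 2, "C": 1}
# Efficient outcome: player 1 takes all A_i and C, player 2 takes B.
opt = {**{it: 1 for it in items if it.startswith("A")}, "B": 2, "C": 1}
```

With these parameters the ratio $\texttt{welfare(opt)}/\texttt{welfare(spe)} = (t+2)/(2+t\delta)$ is already close to $50$, and it grows without bound as $t \to \infty$ and $\delta \to 0$.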
\subsection{Unit-demand players}
In this section we present a unit-demand sequential second price instance that
exhibits arbitrarily high $\spoa$. The instance we present involves signaling
behaviour from the players. Moreover, the second price nature of the auction
enables players to signal for a zero price and as much as they want, a
combination that has devastating effects on the efficiency.
\begin{figure}[h]
\centering
\includegraphics{second-price-example1.mps}
\caption{Sequential Second Price instance with high $\spoa$. Auctions happen
from left to right. If a bidder is not connected to an item that implies $0$
value. If a bidder has a single value then that is his value for any of
the items he is connected to. Dashed lines mean $0$ value but imply that the
bidder might bid at that auction despite the $0$ value.}
\label{fig2}
\end{figure}
The instance depicted in Fig. \ref{fig2} is an auction with $2k+2$ items
auctioned in the order $A_1, B_1, A_2, B_2, \hdots, A_k, B_k, A_*, B_*$ and
$k+3$ players called $1,2,\hdots,k,a,b,c$. The main component of the
instance
is gadget $\mathcal{G}_{*}$, which comprises the last two auctions of the
game. As a subgame, $\mathcal{G}_{*}$ has two possible subgame perfect
equilibria: In the first equilibrium, which we denote
$\spe_1$, $b$ wins $A_*$ at price $1$ and $c$ wins $B_*$ at price $0$. In the
second, which we denote $\spe_2$, $c$ wins $A_*$ and $b$ wins $B_*$. Hence, player $b$'s utility
is $1$ unit higher in $\spe_1$.
In what follows we construct a $\spe$ of the whole instance that survives
iterated elimination of weakly dominated strategies and exhibits unbounded
price of anarchy. We describe what happens in the last $2$ external auctions
$A_k,B_k$. If player $b$ or $c$ wins auction $A_k$ at a $0$ price, then in the
last two auctions $\spe_1$ is implemented. If player $k$ wins auction $A_k$, then
$\spe_2$ is implemented. If player $k$ loses but sets a positive price, then
if either $b$ or $c$ wins auction $B_k$, $\spe_1$ is implemented; otherwise
$\spe_2$. Now using backwards induction we see that player $c$ has no
incentive to bid at any of $A_k,B_k$. Moreover, if player $b$ wins $A_k$ at any
price then at $B_k$ he has a utility of $2$ for winning and $1$ for losing.
Thus at $B_k$ he bids $1$ and player $k$ bids $\delta$. Thus, at $A_k$ player
$b$ has a utility of $1-\epsilon$ for winning at any price. Hence, he will bid
$1-\delta>1-\epsilon$. Now, player $k$ knows that he is going to lose at $A_k$,
and if he sets a positive price he is going to also lose at $B_k$. On the other
hand if he sets a price of $0$ at $A_k$ then none of $b,c$ have any incentive
to outbid him on $B_k$ which will give him a utility of $\delta$. Thus, player
$k$ will bid $0$ on $A_k$. We can copy this behaviour by adding several
auctions $A_i,B_i$ happening before $A_*,B_*$. At each of these auctions player
$b$ is going to be winning auction $A_i$ at a price of $0$ and the
corresponding player $i$ will be winning auction $B_i$. This leads to a $\spoa=
\frac{k(1-\epsilon)+4}{k\delta+4}=O(\frac{1-\epsilon}{\delta})$ which can
be arbitrarily high.
| {
"timestamp": "2011-12-30T02:04:53",
"yymm": "1108",
"arxiv_id": "1108.2452",
"language": "en",
"url": "https://arxiv.org/abs/1108.2452",
"abstract": "In many settings agents participate in multiple different auctions that are not necessarily implemented simultaneously. Future opportunities affect strategic considerations of the players in each auction, introducing externalities. Motivated by this consideration, we study a setting of a market of buyers and sellers, where each seller holds one item, bidders have combinatorial valuations and sellers hold item auctions sequentially.Our results are qualitatively different from those of simultaneous auctions, proving that simultaneity is a crucial aspect of previous work. We prove that if sellers hold sequential first price auctions then for unit-demand bidders (matching market) every subgame perfect equilibrium achieves at least half of the optimal social welfare, while for submodular bidders or when second price auctions are used, the social welfare can be arbitrarily worse than the optimal. We also show that a first price sequential auction for buying or selling a base of a matroid is always efficient, and implements the VCG outcome.An important tool in our analysis is studying first and second price auctions with externalities (bidders have valuations for each possible winner outcome), which can be of independent interest. We show that a Pure Nash Equilibrium always exists in a first price auction with externalities.",
"subjects": "Computer Science and Game Theory (cs.GT)",
"title": "Sequential Auctions and Externalities"
} |
https://arxiv.org/abs/q-alg/9605025 | Quantum Lie algebras; their existence, uniqueness and $q$-antisymmetry | Quantum Lie algebras are generalizations of Lie algebras which have the quantum parameter h built into their structure. They have been defined concretely as certain submodules of the quantized enveloping algebras. On them the quantum Lie bracket is given by the quantum adjoint action.Here we define for any finite-dimensional simple complex Lie algebra g an abstract quantum Lie algebra g_h independent of any concrete realization. Its h-dependent structure constants are given in terms of inverse quantum Clebsch-Gordan coefficients. We then show that all concrete quantum Lie algebras are isomorphic to an abstract quantum Lie algebra g_h.In this way we prove two important properties of quantum Lie algebras: 1) all quantum Lie algebras associated to the same g are isomorphic, 2) the quantum Lie bracket of any quantum Lie algebra is $q$-antisymmetric. We also describe a construction of quantum Lie algebras which establishes their existence. | \section{Introduction}
Lie algebras play an important role in the description of many classical
physical theories. This is particularly pronounced in integrable
models which are described entirely in terms of Lie algebraic data.
However, when quantizing a classical theory the Lie algebraic
description seems to be destroyed by quantum corrections.
It is conceivable that in some cases the Lie algebraic structure of the
theory is deformed rather than destroyed. The quantum theory may be
describable by a quantum generalization of a Lie algebra which has
higher order terms in $\hbar$ built into its structure. These
speculations were prompted by the beautiful structure found in
affine Toda quantum field theories \cite{toda}. They are the
physical motivation for this work on quantum Lie algebras.
As a preliminary step towards physical applications it is necessary to identify
the natural quantum generalizations of Lie algebras and to study
their properties. Quantum generalizations $U_h(\lie)$ of the enveloping algebras
$U(\lie)$ of Lie algebras $\lie$ have been known since
the work of Drinfeld \cite{Dri85} and Jimbo \cite{Jim85} and they have
been found to play a central role in quantum integrable models. This
has led us in \cite{qlie} to define quantum Lie algebras $\mathfrak{L}_h(\lie)$
as certain submodules of $U_h(\lie)$, modelling the way in which ordinary
Lie algebras are naturally embedded in $U(\lie)$.
Explicit examples of quantum Lie algebras were constructed in \cite{qlie} using
symbolic computer calculations, in particular for
$\mathfrak{L}_h(\mathfrak{sl}_3)$,
$\mathfrak{L}_h(\mathfrak{sl}_4)$,
$\mathfrak{L}_h(\mathfrak{sp}_4)$ and
$\mathfrak{L}_h(G_2)$.
It was found empirically that in these quantum Lie algebras the
quantum Lie products satisfy an intriguing generalization of the
classical antisymmetry property. They are $q$-antisymmetric.
This can be exhibited already in the simple example of
$\mathfrak{L}_h(\mathfrak{sl}_2)$. This quantum Lie algebra is
spanned by three generators $X^+_h,X^-_h$ and $H_h$ with the quantum
Lie product relations
\begin{align}\label{sl2q}
[X^+_h,X^-_h]_h&=H_h,&
[X^-_h,X^+_h]_h&=-H_h,\nonumber\\
[H_h,X^\pm_h]_h&=\pm 2 q^{\pm 1} X^\pm_h,&
[X^\pm_h,H_h]_h&=\mp 2 q^{\mp 1} X^\pm_h\nonumber\\
[H_h,H_h]_h&=2(q-q^{-1}) H_h,&
[X^\pm_h,X^\pm_h]_h&=0.
\end{align}
Here $q=e^h$ is the quantum parameter. Clearly for $q=1$ the above reduces
to the ordinary $\mathfrak{sl}_2$ Lie algebra. For $q\neq 1$ the
Lie product is antisymmetric if the interchange of the factors is
accompanied by $q\to q^{-1}$.
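Concretely (a check we add for illustration), interchanging the two arguments and substituting $q\to q^{-1}$ in the structure constants reverses the sign of each bracket in (\ref{sl2q}):

```latex
\begin{align*}
[X^\pm_h,H_h]_h &= \mp 2 q^{\mp 1} X^\pm_h
 = -\left.\bigl(\pm 2 q^{\pm 1}\bigr)\right|_{q\to q^{-1}} X^\pm_h
 = -\left.[H_h,X^\pm_h]_h\right|_{q\to q^{-1}},\\
[H_h,H_h]_h &= 2(q-q^{-1})\,H_h
 = -\,2(q^{-1}-q)\,H_h
 = -\left.[H_h,H_h]_h\right|_{q\to q^{-1}}.
\end{align*}
```

The remaining relations of (\ref{sl2q}) are either $q$-independent or follow in the same way.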
To convincingly establish that the quantum Lie algebras $\mathfrak{L}_h(\lie)$ defined
in \cite{qlie} are the natural quantum generalizations of Lie algebras,
three questions in particular should be answered:
\begin{enumerate}
\vspace{-1mm}
\item Do the $\mathfrak{L}_h(\lie)$ exist for all $\lie$?
\vspace{-1mm}
\item Are all $\mathfrak{L}_h(\lie)$ associated to the same $\lie$ isomorphic?
\vspace{-1mm}
\item Do all $\mathfrak{L}_h(\lie)$ have $q$-antisymmetric quantum Lie products?
\vspace{-1mm}
\end{enumerate}
These questions will be answered in the affirmative in this paper.
The paper is organized as follows:
\secref{sec:prelim} contains preliminaries about quantized enveloping
algebras $U_h(\lie)$ and defines the concept of $q$-conjugation. In
\secref{sec:ab} we give a new definition of quantum Lie algebras $\qlie$
which is independent of any realization as submodules of $U_h(\lie)$.
We study the properties of the $\qlie$. In \secref{sec:qlie} we recall
the definition of the quantum Lie algebras $\mathfrak{L}_h(\lie)$ and then show that all
$\mathfrak{L}_h(\lie)$ are isomorphic to $\qlie$. It is in this way that we arrive
in \thmref{theorem} at the answers to
questions 2) and 3) above. In \secref{sec:constr} we describe a
construction for quantum Lie algebras $\mathfrak{L}_h(\lie)$ for any
finite-dimensional simple complex
Lie algebra $\lie$, thus establishing their existence.
There are many natural questions about quantum Lie algebras which we do
not address in this paper. These are questions of representations,
of the enveloping algebras, of exponentiation to quantum groups,
of applications to physics and many more which we hope will be
addressed in the future.
We do not wish to reserve the term {\it quantum Lie algebra} only
for the particular algebras defined in this paper. Rather
we view the algebras $\qlie$ and $\mathfrak{L}_h(\lie)$ which are defined in
Definitions \ref{def:ab} and \ref{def:qlie} in terms of $U_h(\lie)$ as
particular examples of a more general concept
of quantum Lie algebras. What a quantum Lie algebra should be in
general is not yet known, i.e., there are not yet any satisfactory
axioms for quantum Lie algebras. Finding such an axiomatic
definition is an important problem. We hope that our study of
the quantum Lie algebras arising from $U_h(\lie)$ will help to provide the
ideas needed to formulate the axioms. In particular we expect that
the $q$-antisymmetry of the product discovered here will be an
important ingredient.
There has been an important earlier approach to the subject of
quantum Lie algebras. It was initiated by Woronowicz in his work
on bicovariant differential calculi on quantum groups \cite{Wor89}.
He defined a quantum Lie product on the dual space to the space
of left-invariant one-forms. This has been developed further by
several groups \cite{difcalc}. These quantum Lie algebras are
$n^2$-dimensional where
$n$ is the dimension of the defining representation
of $\lie$ and thus they do not have the same dimension
as the classical Lie algebra except for $\lie=\mathfrak{gl}_{n}$. It has
never been shown how to project them onto quantum Lie algebras of
the correct dimension. Only recently Sudbery \cite{Sud95}
has defined quantum
Lie algebras for $\lie=\mathfrak{sl}_{n}$ which have the correct dimension $n^2-1$.
These are isomorphic to our $(\sln)_h(0)$ (set $s=-1, t=0$ in
\propref{slnstruc}). Sch\"uler and Schm\"udgen \cite{Sch} have defined
$n^2-1$ dimensional quantum Lie algebras for $\mathfrak{sl}_{n}$ using left-covariant
differential calculi.
In \cite{diffcalc} we explained how our quantum Lie algebras lead to
bicovariant differential calculi of the correct dimension.
Up-to-date information on quantum Lie algebras
can be found on the World Wide Web at
http://www.mth.kcl.ac.uk/$\sim$delius/q-lie.html
\section{Preliminaries}\label{sec:prelim}
We recall the definition of quantized enveloping algebras
$U_h(\lie)$ \cite{Dri85,Jim85,Cha94} in order to fix our notation.
$U_h(\lie)$ is an algebra over ${\mathbb C}[[h]] $, the ring of formal power
series in an indeterminate $h$. In applications of
quantum groups in physics, the parameter $h$ does not need to be
identified with Planck's constant. In general it will depend on a
dimensionless combination
of coupling constants and Planck's constant. We use the notation
$q=e^h$.
The formal power series in $h$ form only a ring, not a field.
It is not possible to divide by an element of ${\mathbb C}[[h]]$ unless the power
series contains a term of order $h^0$. We will have to work with
modules over this ring, rather than with vector spaces over a field as
would be more familiar to physicists like ourselves.
However ${\mathbb C}[[h]]$ is a principal ideal domain and
thus many of the usual results of linear algebra continue to hold
\cite{Cur62}.
In the physics literature on quantum groups it is quite common to
treat $q$ not as an
indeterminate but as a complex (or real) number. It is our opinion that
in doing so, physicists lose much of the potential power of quantum
groups. Keeping $h$ as an indeterminate in the formalism will, when
applied to quantum mechanical systems, lead to deeper insight.
\begin{definition}
Let $\lie$ be a finite-dimensional
simple complex Lie algebra with symmetrizable Cartan matrix
$a_{ij}$. The \dem{quantized enveloping algebra}
$U_h(\lie)$ is the unital associative algebra over ${\mathbb C}[[h]]$ (completed in
the $h$-adic topology) with
generators $x_i^+,\ x_i^-,\ h_i$,
$1 \le i \le \text{rank}(\lie)$ and relations
\footnote{Our $x_i^{\pm}$ are related to the $X_i^\pm$ of
\cite{Cha94} by $x_i^+= q_i^{-h_i/2} X_i^+$ and $x_i^-= X_i^- q_i^{h_i/2}$
and it uses the opposite Hopf-algebra structure.}
\begin{gather}\label{uqrel}
h_i h_j = h_j h_i,~~~~~
h_i x_j^\pm - x_j^\pm h_i = \pm a_{ij} x_j^\pm , \nonumber\\
x_i^+ x_j^- -x_j^- x_i^+ = \delta_{ij}
\ \frac{q_i^{h_i} - q_i^{-h_i}}{q_i- q_i^{-1}},\\
\sum_{k=0}^{1-a_{ij}} (-1)^k
\left[ \begin{array}{c} 1-a_{ij} \\ k \end{array} \right]_{q_i}
(x_i^\pm)^k x_j^\pm (x_i^\pm)^{1-a_{ij}-k} = 0 \qquad i \not= j.\nonumber
\end{gather}
Here $\left[\begin{array}{c}a\\b\end{array}\right]_q$ are the
q-binomial coefficients.
We have defined $q_i = e^{d_i h}$ where $d_i$ are the
coprime integers such that $d_i a_{ij}$ is a symmetric matrix.
\end{definition}
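As a numerical illustration (ours, not from the paper; it evaluates at a numeric $q$ rather than treating $h$ as an indeterminate), the symmetric $q$-integers and the $q$-binomial coefficients appearing in the Serre relations can be computed as follows:

```python
# Symmetric q-integer [n]_q = (q^n - q^{-n})/(q - q^{-1})
#                           = q^(n-1) + q^(n-3) + ... + q^(1-n).
def q_int(n, q):
    return sum(q ** (n - 1 - 2 * k) for k in range(n))

def q_factorial(n, q):
    out = 1.0
    for k in range(1, n + 1):
        out *= q_int(k, q)
    return out

# q-binomial coefficient [a choose b]_q in the symmetric convention above.
def q_binomial(a, b, q):
    return q_factorial(a, q) / (q_factorial(b, q) * q_factorial(a - b, q))
```

At $q=1$ these reduce to the ordinary binomial coefficients, and since each $[n]_q$ is invariant under $q \to q^{-1}$, so is every $q$-binomial coefficient.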
The Hopf algebra structure of $U_h(\lie)$ is given by the comultiplication
$\Delta:U_h(\lie)\toU_h(\lie)\hat{\ot}\,U_h(\lie)$ ($\hat{\ot}\,$ denotes the tensor product over ${\mathbb C}[[h]]$,
completed in the $h$-adic topology when necessary) defined by
\footnote{Interchanging $q$ and $q^{-1}$ gives an alternative Hopf
algebra structure, which is the one chosen in \cite{qlie,Cha94}.}
\begin{align}
\Delta(h_i) &= h_i \hat{\ot}\, 1 + 1 \hat{\ot}\, h_i, \\
\Delta(x_i^\pm) &= x_i^\pm \hat{\ot}\, q_i^{-h_i/2} +
q_i^{h_i/2} \hat{\ot}\, x_i^\pm,
\end{align}
and the antipode $S$ and counit $\epsilon$ defined by
\begin{equation}\label{antipode}
S(h_i)= - h_i, ~~~
S(x_i^\pm) = - q_i^{\mp 1}\,x_i^\pm ,~~~
\epsilon(h_i) = \epsilon(x_i^\pm) = 0.
\end{equation}
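As an illustrative consistency check (ours, not in the original text), the antipode axiom $m\circ(S\hat{\ot}\,\mathrm{id})\circ\Delta=\eta\circ\epsilon$ can be verified on the generator $x_i^+$, using $q_i^{-h_i/2}x_i^+ = q_i^{-1}\,x_i^+\,q_i^{-h_i/2}$, which follows from the commutation relation $h_i x_i^+ - x_i^+ h_i = 2 x_i^+$:

```latex
\begin{equation*}
m(S\hat{\ot}\,\mathrm{id})\Delta(x_i^+)
 = S(x_i^+)\,q_i^{-h_i/2} + S(q_i^{h_i/2})\,x_i^+
 = -q_i^{-1}\,x_i^+\,q_i^{-h_i/2} + q_i^{-h_i/2}\,x_i^+
 = 0 = \epsilon(x_i^+)\,1 .
\end{equation*}
```

The check on $x_i^-$ and $h_i$ proceeds in the same way.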
$U_h(\lie)$ is quasitriangular with universal $R$-matrix $R\inU_h(\lie)\hat{\ot}\,U_h(\lie)$.
The adjoint action of $U_h(\lie)$ on itself is given,
using Sweedler's notation \cite{Swe69}, by
\begin{equation}\label{adjoint}
\ad{x}{y}=\sum x_{(1)}\,y\,S(x_{(2)}),~~~~~x,y\inU_h(\lie).
\end{equation}
If the Dynkin diagram of $\lie$ has a symmetry $\tau$ which maps node
$i$ into node $\tau(i)$ then $U_h(\lie)$ has a Hopf-algebra automorphism
defined by
$\tau(x^\pm_i)=x^\pm_{\tau(i)},\ \tau(h_i)=h_{\tau(i)}$.
Such $\tau$ are referred to as diagram automorphisms and
except for rescalings of the $x^\pm_i$ they are the only
Hopf-algebra automorphisms of $U_h(\lie)$.
\begin{proposition}[Drinfel'd \cite{Dri90}]
\label{thm:dri}
There exists an algebra isomorphism
\newline
$\driso:U_h(\lie)\to U(\lie)[[h]]$
such that $\driso\equiv\text{id}\ (\text{mod }h)$ and
$\driso(h_i)=h_i$.
\end{proposition}
\begin{note}
This is not a Hopf-algebra isomorphism, however.
\end{note}
\begin{proposition}\label{thm:rep}
Denote by $(V^\mu,\pi^\mu)$ the $U(\lie)$-representation with highest
weight $\mu$, carrier space $V^\mu$ and representation map $\pi^\mu$.
Let $\{(V^\mu,\pi^\mu)\}_{\mu\in D_+}$
be the set of all finite-dimensional irreducible representations of $U(\lie)$.
$D_+$ is the set of dominant weights. Let $m_\lambda^{\mu\nu}$ denote
the multiplicities in the decomposition
of tensor product representations into irreducible $U(\lie)$ representations
\begin{equation}\label{clasdec}
V^\mu\otimes V^\nu=\bigoplus_{\lambda\in D_+}\,m_\lambda^{\mu\nu}\,V^\lambda.
\end{equation}
Then
\begin{enumerate}
\item $\{(V^\mu[[h]],\pi^\mu\circ\driso)\}_{\mu\in D_+}$ is the
set of all indecomposable representations of $U_h(\lie)$ which are
finite-dimensional, i.e., topologically free and of finite rank.
Here $\driso$ is the isomorphism of \propref{thm:dri}.
\item The decomposition of $U_h(\lie)$ tensor product representations into
indecomposable $U_h(\lie)$ representations is described by the classical
multiplicities $m_\lambda^{\mu\nu}$
\begin{equation}\label{quantdec}
V^\mu[[h]]\hat{\ot}\, V^\nu[[h]]=
\bigoplus_{\lambda\in D_+}\,m_\lambda^{\mu\nu}\,V^\lambda[[h]].
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
1. is from Drinfel'd \cite{Dri90}. It follows immediately from the
isomorphism property of $\driso$ and from the fact that the finite
dimensional representations of $U(\lie)$ have no non-trivial
deformations.
2. The decomposition can be achieved by the same method as classically.
A careful analysis shows that working over ${\mathbb C}[[h]]$ does not lead to
complications. The reason is
that all expressions appearing have a non-vanishing classical term.
\end{proof}
\begin{note}
The $U_h(\lie)$ modules $V[[h]]$ are not irreducible. Their submodules
are of the form $c\,V[[h]]$ with $c\in{\mathbb C}[[h]]$ not invertible.
In this setting Schur's lemma takes the following form:
\begin{lemma}[Schur's lemma]\label{schur}
Let $V[[h]]$ and $W[[h]]$ be two finite-dimen\-sio\-nal indecomposable
$U_h(\lie)$-modules and let $f:V[[h]]\rightarrow W[[h]]$
be a $U_h(\lie)$-module homomorphism. If $f\neq 0$, then $f=c\,g$ with
$c\in{\mathbb C}[[h]]$ and $g$ an isomorphism.
\end{lemma}
\end{note}
A central concept in the theory of quantum Lie algebras \cite{qlie} is
$q$-conjuga\-tion which in ${\mathbb C}[[h]]$ maps
$h\mapsto -h$, i.e. $q\mapsto q^{-1}$.
\begin{definition}
\end{definition}
\vspace{-5mm}
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item \dem{$q$-conjugation}
\mbox{$\sim: {\mathbb C}[[h]] \rightarrow{\mathbb C}[[h]]$}, $a\mapsto\t{a}$ is the
${\mathbb C}$-linear ring
automorphism defined by $\t{h}=-h$.
\vspace{-1mm}
\item Let $M,N$ be ${\mathbb C}[[h]]$-modules. An additive map
$\phi:M\rightarrow N$ is said to be \dem{$q$-linear} if
$\phi(\lambda \,a)=\t{\lambda}\,\phi(a),~\forall a\in M, \lambda\in{\mathbb C}[[h]]$.
\vspace{-1mm}
\item A \dem{$q$-conjugation on a ${\mathbb C}[[h]]$ module $M$} is a
$q$-linear involutive map $\qconj{}:M\rightarrow M$ with
$\qconj{}=\text{id}\ (\text{mod }h)$.
\end{enumerate}
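Explicitly, on ${\mathbb C}[[h]]$ the $q$-conjugation acts coefficient-wise
with alternating signs,
\begin{equation*}
\t{\left(\sum_{n\geq 0} a_n h^n\right)}=\sum_{n\geq 0}(-1)^n a_n h^n,
\qquad a_n\in{\mathbb C},
\end{equation*}
so that in particular $\t{q}=q^{-1}$ for $q=e^h$.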
Note the analogy between the concepts of $q$-conjugation and complex
conjugation and between $q$-linear maps and anti-linear maps.
\begin{remark}
If $M$ is a finite-dimensional ${\mathbb C}[[h]]$-module then a $q$-conjugation
$\qconj{}$ on $M$ is uniquely specified by giving a basis $\{b_i\}$
which is invariant. Then the $q$-conjugation takes the form
$\qconj{(\sum_i \lambda_i b_i)}=\sum_i\t{\lambda}_i b_i$.
Conversely, for any $q$-conjugation on $M$
there exists an invariant basis. It can be constructed from an arbitrary
basis by adding correction terms order by order in $h$.
\end{remark}
The unique $q$-linear algebra
auto\-mor\-phism \mbox{$\sim: U_h(\lie) \rightarrow U_h(\lie)$} which
extends $q$-conjugation on ${\mathbb C}[[h]]$
by acting as the identity on the generators $x_i^\pm$ and $h_i$
is a $q$-conjugation on $U_h(\lie)$.
It exists because the relations \eqref{uqrel}
are invariant under $q\mapsto q^{-1}$.
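For instance, in the standard presentation the relation
\begin{equation*}
[x_i^+,x_j^-]=\d_{ij}\,\frac{q^{h_i}-q^{-h_i}}{q-q^{-1}}
\end{equation*}
(here written for the simply-laced case) is manifestly invariant under
$q\mapsto q^{-1}$, because numerator and denominator both change sign.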
We choose the isomorphism $\driso$ in \propref{thm:dri} such that
$\sim\circ\,\driso=\driso\,\circ\sim$. This
$q$-conjugation is a coalgebra $q$-antiautomorphism of $U_h(\lie)$, i.e.,
$\epsilon\, \circ \sim = \sim \circ \,\epsilon,~~
\Delta\, \circ \sim = \sim \circ\, \Delta^T$ and it satisfies
$S \,\circ \sim = \sim \circ \,S^{-1}$. The map $\sim$ was introduced
already in \cite{Dri90}.
If in physical applications
$q$ were identified with a combination of a coupling constant and Planck's
constant, then $q$-conjugation would correspond to the strong-weak coupling
duality\footnote{
In some applications of quantum groups the relation between $q$ and
the coupling constant is not linear but exponential and then
$q$-conjugation is not related to strong-weak duality}.
It has been observed in several quantum field theories that
such a duality transformation can form a symmetry of the theory.
Affine Toda field theories in two dimensions \cite{toda} as well
as supersymmetric Yang-Mills theory in four dimensions provide
examples of this phenomenon. It is thus very desirable to have an
algebraic structure, in which $q$-conjugation is incorporated.
We hope that the study of this structure will one day enhance our
understanding of the origin of strong-weak coupling duality in physics.
\section{Quantum Lie algebras $\qlie$}\label{sec:ab}
The quantized enveloping algebra $U_h(\lie)$ is an infinite dimensional algebra.
It is our aim to associate to it in a natural way a finite dimensional
algebra which would be the quantum analog of the Lie algebra.
Here our approach is based on the observation that classically a
Lie algebra $\lie$ is also the carrier space of the adjoint
representation $\adsc$ of $U(\lie)$. The superscript $0$ is to remind
us that this is the classical adjoint representation. It is defined by
$\adc{a}{b}=[a,b]\ \forall a,b\in\lie$. It follows from the Jacobi identity
\begin{equation}
[a,[b,c]]=[[a,b],c]+[b,[a,c]]~~~ \forall a,b,c\in\lie
\end{equation}
that
\begin{equation}\label{succ}
(\adsc\,x)\circ \lb=\lb\circ(\adsc_2\,x),~~~~
\forall x\in U(\lie),
\end{equation}
where $(\adsc_2\,x)=(\adsc\,\otimes \adsc)\,\Delta(x)$
is the tensor product representation carried by $\lie\otimes\lie$.
Equation \eqref{succ} states that the Lie product $\lb$ of $\lie$ is
a $U(\lie)$-module homomorphism from $\lie\otimes\lie$ to $\lie$.
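Indeed, for a primitive element $x\in\lie\subset U(\lie)$, for which
$\Delta(x)=x\otimes 1+1\otimes x$, equation \eqref{succ} applied to
$b\otimes c$ reads
\begin{equation*}
\adc{x}{[b,c]}=[\adc{x}{b},c]+[b,\adc{x}{c}],
\end{equation*}
which is exactly the Jacobi identity above.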
Because of \propref{thm:rep} we know that $\lie[[\hhh]]$
is an indecomposable module of $U_h(\lie)$.
Let us denote the representation of $U_h(\lie)$ on $\lie[[\hhh]]$ by
$\adsh$. Note that at this point there is no relation between the
representation $\adsh$ of $U_h(\lie)$ on $\lie[[\hhh]]{}$ and the adjoint
action $\ads$ of $U_h(\lie)$ on $U_h(\lie)$ defined in \eqref{adjoint}.
Generalizing the above classical observation
we obtain a natural definition for a quantum Lie algebra
\footnote{As Ding has informed us, he and Frenkel have been
pursuing similar ideas for some time. See also their paper
\cite{Din94} in which the utility of defining algebraic structures
using $U_h(\lie)$ module homomorphisms is stressed.}.
\begin{definition}\label{def:ab}
Let $\qlb{}:\lie[[\hhh]]{}\hat{\ot}\,\lie[[\hhh]]{}\to\lie[[\hhh]]{}$
be a $U_h(\lie)$-module homomorphism which satisfies
$\qlb=\lb\ (\text{mod }h)$. $\qlb$ gives $\lie[[\hhh]]{}$ the structure of a non-associative
algebra over ${\mathbb C}[[h]]$.
We call this algebra
$\qlie=(\lie[[h]],\qlb{})$ a \dem{quantum Lie algebra}
and the product $\qlb$ a \dem{quantum Lie product}.
\end{definition}
For each Lie algebra $\lie$ this definition potentially gives many
different quantum Lie algebras $\qlie$, one for each choice
of the homomorphism $\qlb{}$.
This would be unsatisfactory were it not for the fact that such
a $U_h(\lie)$-module homomorphism is almost unique.
\begin{proposition}\label{thm:iso}
For a given $\lie\neq\mathfrak{sl}_{n>2}$ the quantum Lie algebra $\qlie$ is unique
(up to a rescaling of the product by an invertible element of ${\mathbb C}[[h]]$).
For $\lie=\mathfrak{sl}_{n}$ with $n>2$ there is a family of quantum
Lie algebras $(\sln)_h(\pa)$ depending on a parameter $\pa\in{\mathbb C}((h))$
(see \propref{slnstruc}).
\end{proposition}
\begin{proof}
The idea of the proof is simple: For $\lie\neq\mathfrak{sl}_{n>2}$
the adjoint representation appears in the tensor product of two
adjoint representations with unit multiplicity.
This is an empirical fact. Thus the homomorphism
$\qlb$ from $\lie[[\hhh]]\hat{\ot}\,\lie[[\hhh]]$ into $\lie[[\hhh]]$ with the requirement that
$\qlb\ (\text{mod }h)=\lb$ is unique by the weak form of Schur's lemma.
In the case $\lie=\mathfrak{sl}_{n}$ with $n>2$ however,
the adjoint representation appears
with multiplicity two in the tensor product. Any module arising from
a linear combination of the highest weight vectors of
two adjoint modules is also an adjoint module and this leads to
a one-parameter family of non-isomorphic weak quantum Lie algebras
$(\sln)_h(\pa)$.
We find it helpful to be more explicit here than necessary and to explain
how the homomorphism $\qlb$ is obtained from inverse Clebsch-Gordan
coefficients. We begin with $\lie\neq\mathfrak{sl}_{n>2}$ and with the classical
situation.
Let $\{v_a\}$ be a basis for $\lie$ which contains a highest weight
vector $v_0$, i.e.,
\begin{equation}\label{highest}
\adc{x^+_i}{v_0}=0,~~~~
\adc{h_i}{v_0}=\psi(h_i)v_0,~~~
\forall i,
\end{equation}
where $\psi$ is the highest root of $\lie$. Let $P_a(x^-)$ be the
polynomials in the $x^-_i$ such that $v_a=\adc{P_a(x^-)}{v_0}$.
The adjoint representation matrices $\pi$ in this basis are defined by
\begin{equation}\label{admat}
\adc{x}{v_a}=v_b\,\pi^b{}_a(x).
\end{equation}
In this paper we use the summation convention according to which
repeated indices are summed over their range.
$\lie\otimes\lie$ contains a highest weight state $\hat{v}_0$ such that
\begin{equation}\label{highest2}
\adtc{x^+_i}{\hat{v}_0}=0,~~~~
\adtc{h_i}{\hat{v}_0}=\psi(h_i)\hat{v}_0,~~~
\forall i.
\end{equation}
For $\lie\neq\mathfrak{sl}_{n>2}$ this state is unique up to rescaling.
The vectors
\begin{equation}\label{basis2}
\hat{v}_a=\adtc{P_a(x^-)}{\hat{v}_0}=K_a{}^{bc}\,v_b\otimes v_c
\end{equation}
form a basis for $\lie$ inside $\lie\otimes\lie$ such that
\begin{equation}\label{admat2}
\adtc{x}{\hat{v}_a}=\hat{v}_b\,\pi^b{}_a(x)
\end{equation}
with the same representation matrices $\pi$ as in \eqref{admat}.
Thus the map
\begin{equation}\label{beta}
\beta:v_a\mapsto\hat{v}_a=K_a{}^{bc}\,v_b\otimes v_c
\end{equation}
is a $U(\lie)$-module homomorphism $\beta:\lie\to\lie\otimes\lie$.
The coefficients $K_a{}^{bc}$ are called the Clebsch-Gordan
coefficients.
$\lie$ and Im$(\beta)$ are irreducible modules and thus by Schur's
lemma the homomorphism $\beta$ is invertible on its image.
Define $\lb:\lie\otimes\lie\to\lie$
to be zero on the module complement of the image of $\beta$ and
on the image of $\beta$ define
$\lb=\beta^{-1}$. Then $\lb$ is the $U(\lie)$ homomorphism
from $\lie\otimes\lie$ to $\lie$, unique up to rescaling.
It is the Lie product of $\lie$.
On the basis it is given by
\begin{equation}\label{cs}
[v_a,v_b]=f_{ab}{}^c \,v_c,~~~~
\text{where }K_a{}^{bc}f_{bc}{}^d=\d_a{}^d.
\end{equation}
Thus the structure constants are given by the inverse Clebsch-Gordan
coefficients.
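To illustrate this for $\lie=\mathfrak{sl}_2$ with generators $x^\pm,h$: the
highest weight vector of the adjoint module is $v_0=x^+$ with $\psi(h)=2$,
and a short computation with $\Delta(x^+)=x^+\otimes 1+1\otimes x^+$ shows
that, up to rescaling,
\begin{equation*}
\hat{v}_0=x^+\otimes h-h\otimes x^+
\end{equation*}
is the unique highest weight vector of weight $\psi$ in
$\lie\otimes\lie$; its antisymmetry reflects the antisymmetry of the
classical Lie product.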
We turn to the quantum case. Let $\hat{v}_0$ be a highest weight state
inside $\lie[[\hhh]]\hat{\ot}\,\lie[[\hhh]]$ satisfying the analog of \eqref{highest2}
\begin{equation}\label{highest3}
\adth{x^+_i}{\hat{v}_0}=0,~~~~
\adth{h_i}{\hat{v}_0}=\psi(h_i)\hat{v}_0,~~~
\forall i,
\end{equation}
where $\adsh$ is the deformed adjoint representation $\adsh=\adsc\circ\driso$
and $\hat{v}_0\ (\text{mod }h)\neq 0$.
$\hat{v}_0$ generates the $U_h(\lie)$ module $\lie[[\hhh]]$ inside $\lie[[\hhh]]\hat{\ot}\,\lie[[\hhh]]$.
$\hat{v}_0$ must be unique up to rescaling, otherwise $\lie[[\hhh]]$ would
appear with multiplicity greater than one in $\lie[[\hhh]]\hat{\ot}\,\lie[[\hhh]]$.
We construct a basis $\{\hat{v}_a\}$ as in \eqref{basis2} using
$P_a(x^-)\in U_h(\lie)$ with the
same polynomials $P_a$ as in \eqref{basis2}. This leads to quantum
Clebsch-Gordan
coefficients $K_a{}^{bc}(h)\in{\mathbb C}[[h]]$. We obtain a $U_h(\lie)$-module
homomorphism $\beta:\lie[[\hhh]]\to\lie[[\hhh]]\hat{\ot}\,\lie[[\hhh]]$ as in \eqref{beta}.
$\beta$ is invertible by the weak form of Schur's lemma. A homomorphism
$\qlb:\lie[[\hhh]]\hat{\ot}\,\lie[[\hhh]]\to\lie[[\hhh]]$ is obtained as above \eqref{cs}
\begin{equation}\label{qs}
[v_a,v_b]_h=f_{ab}{}^c(h) \,v_c,~~~~
\text{where }K_a{}^{bc}(h)f_{bc}{}^d(h)=\d_a{}^d.
\end{equation}
Up to rescaling it is the unique such homomorphism with the
property that $\qlb\ (\text{mod }h)\neq 0$.
We now turn to $\lie=\mathfrak{sl}_{n}$ with $n>2$ and again begin by considering
the classical situation. There are two linearly
independent highest weight vectors $\hat{v}^{(+)}_0$ and $\hat{v}^{(-)}_0$
in $\lie\otimes\lie$ which satisfy \eqref{highest2}. They can be chosen so
that
\begin{equation}\label{parity}
\sigma\,\hat{v}_0^{(\pm)}=\pm\,\hat{v}_0^{(\pm)},
\end{equation}
where $\sigma$ is the bilinear map acting as $\sigma\,(v_a\otimes v_b)=
v_b\otimes v_a$. Expressed differently, the Clebsch-Gordan coefficients
$K^{(\pm)}_a{}^{bc}$ defined as in \eqref{basis2} satisfy
$K^{(\pm)}_a{}^{bc}=\pm K^{(\pm)}_a{}^{cb}$. Any linear combination
of $\hat{v}^{(+)}_0$ and $\hat{v}^{(-)}_0$ is a highest weight state
and leads to a homomorphism as described above but clearly only
$\hat{v}^{(-)}_0$ leads to an {\em{antisymmetric}} Lie product.
In the quantum case too there are two linearly independent highest weight
states satisfying \eqref{highest3}.
We can choose any linear combination and thus have
a one-parameter family of $\hat{v}_0(\pa)=K_0{}^{bc}(\pa,h)\,
(v_b\hat{\ot}\, v_c)$. We impose $\hat{v}_0(\pa)\ (\text{mod }h)\neq 0$ as before.
In this way we obtain the family
$(\sln)_h(\pa)$ of quantum Lie algebras.
We will give these explicitly in \propref{slnstruc}. Certain
values for $\pa$ will lead to a $q$-antisymmetric quantum Lie product
(see \propref{thm:anti}).
\end{proof}
Some important properties of $\lie$ carry over immediately to $\qlie$.
Define root subspaces $\lie^{(\a)}$ of $\lie$ by
\begin{equation}
\lie^{(\a)}=\{ x\in\lie|\adc{h_i}{x}=\a(h_i)\,x\ \forall i\}.
\end{equation}
$\lie$ possesses a gradation
\begin{equation}
\lie=\bigoplus_{\a\in R\cup\{0\}}\lie^{(\a)}
,\qquad\qquad
[\lie^{(\a)},\lie^{(\b)}]\subset\lie^{(\a+\b)},
\end{equation}
where $R$ is the set of non-zero roots of $\lie$.
\begin{proposition}\label{thm:grad}
A quantum Lie algebra $\qlie$ possesses a gradation
\begin{equation}\label{gradation}
\qlie=\bigoplus_{\a\in R\cup\{0\}}\lie^{(\a)}[[h]],\qquad\qquad
\left[\lie^{(\a)}[[h]],\lie^{(\b)}[[h]]\right]_h{}
\subset\lie^{(\a+\b)}[[h]].
\end{equation}
\end{proposition}
\begin{proof}
According to \propref{thm:dri} the algebra isomorphism
$\driso:U_h(\lie)\to U(\lie)[[h]]$ leaves the $h_i$ invariant and
thus
\begin{equation}
\lie^{(\a)}[[h]]=\{ x\in\lie[[\hhh]]|\adh{h_i}{x}=\a(h_i)\,x\ \forall i\}.
\end{equation}
Let $X_\a\in\lie^{(\a)}[[h]]$ and $X_\b\in\lie^{(\b)}[[h]]$.
From the homomorphism property of $\qlb$ and the coproduct
$\Delta(h_i)=h_i\hat{\ot}\, 1+1\hat{\ot}\, h_i$ it follows that
\begin{align}
\adh{h_i}{[X_\a,X_\b]_h}&=\left[\adh{h_i}{X_\a},X_\b\right]_h
+\left[X_\a,\adh{h_i}{X_\b}\right]_h
\nonumber\\
&=\left(\a(h_i)+\b(h_i)\right)\,[X_\a,X_\b]_h
\end{align}
and thus $[X_\a,X_\b]_h\in\lie^{(\a+\b)}[[h]]$.
\end{proof}
Choosing basis vectors $X_\a\in\lie^{(\a)}$ and $H_i\in\lie^{(0)}$
\propref{thm:grad} implies that the quantum Lie product relations are of
the form
\begin{alignat}{2}\label{qliestruc}
{[}H_i,X_\a]_h{}&=l_\a(H_i)\,X_\a,~~~&
{[}X_\a,H_i]_h{}&=-r_\a(H_i)\,X_\a,\nonumber\\
{[}H_i,H_j]_h{}&=f_{ij}{}^k\,H_k,&
{[}X_\a,X_{-\a}]_h{}&=g_\a{}^k\,H_k,\\
{[}X_\a,X_\b]_h{}&=N_{\a\b}\,X_{\a+\b}
&\text{ for } \a+\b\in R, &~~~0\text{ otherwise}.\nonumber
\end{alignat}
This is similar in form to the Lie product relations of ordinary
Lie algebras. The most important differences are
\begin{enumerate}
\item The structure constants are now elements of ${\mathbb C}[[h]]$, i.e., they
depend explicitly on the quantum parameter.
\item $[H_i,H_j]_h{}$ does not have to be zero. Thus the grade zero
subalgebra $\lie^{(0)}[[h]]$ of $\qlie$ is not abelian. We will nevertheless
refer to it as the quantum Cartan subalgebra.
\item Each classical root $\a$ splits up
into a ``left'' root $l_\a$ and a ``right'' root $r_\a$. Classically
they are forced to be equal because of the antisymmetry of the Lie
product.
\end{enumerate}
The quantum Clebsch-Gordan coefficients which describe
the homomorphism $\qlb{}: \qlie\hat{\ot}\,\qlie\to\qlie$ can be
calculated directly by decomposing the tensor product representation.
This is however very tedious in general.
In \cite{qgln} it was done for $(\sln)_h$ in an indirect way by
using the R-matrix of $U_q(\mathfrak{sl}_{n})$. The method is based on realizing the
quantum Lie algebra as a particular submodule of $U_h(\lie)$ as explained
in \secref{s:concrete}. The particular submodule used in \cite{qgln} gives
the quantum Lie algebra $(\sln)_h(\pa=1)$
but the method can be extended and gives the following result.
\begin{proposition}\label{slnstruc}
The parameter $\pa\in{\mathbb C}((h))$ of $(\sln)_h(\pa)$ is a fraction $\pa=t/s$ with
$s,t\in{\mathbb C}[[h]]$ and with the restriction that $(s+t)^{-1}\in{\mathbb C}[[h]]$.
The Lie product relations for $(\sln)_h(\pa)$ are
\begin{gather}
\label{stb}
[H_k,X_{ij}]_h{}=l_{ij}(H_k)\,X_{ij},~~~~~~
[X_{ij},H_k]_h{}=-r_{ij}(H_k)\,X_{ij},\nonumber\\{}
[H_i,H_j]_h{}=f_{ij}{}^k\,H_k,~~~~~~
[X_{ij},X_{ji}]_h{}=g_{ij}{}^k\,H_k,\\{}
[X_{ij},X_{kl}]_h{}=\d_{jk}\d_{i\neq l}N_{ijl}\,X_{il}-
\d_{il}\d_{j\neq k}M_{kij}\,X_{kj},\nonumber
\end{gather}
where $\{X_{ij}\}_{i\neq j,\,i,j=1\cdots n}\cup\{H_i\}_{i=1\cdots n-1}$ is a
basis and the structure constants are explicitly given by
\begin{align}
l_{ij}(H_k)&=
(q^{1-k}\d_{ki}-q^{-1-k}\d_{k,i-1})(s+t\, q^{n})\nonumber\\
&\quad\quad-(q^{k-1}\d_{kj}-q^{k+1}\d_{k,j-1})(s+t\,q^{-n}),
\\
r_{ij}(H_k)&=-l_{ji}(H_k),\label{lr}
\\
f_{ij}{}^k&=
\d_{ij}\,\left(\d_{ki}\left(s\,(q^{k+1}-q^{-k-1})+
t\,(q^{n+1-i}-q^{-n-1+i})\right)\right.
\nonumber\\
&\qquad\qquad \left.+s\,\d_{k<i}\,(q+q^{-1})(q^k-q^{-k})\right.
\nonumber\\
&\qquad\qquad \left.+t\,\d_{k>i}\,(q+q^{-1})(q^{n-k}-q^{-n+k})\right)
\nonumber\\
&\qquad+\d_{i,j-1}\left(s\,\d_{k\leq i}\,(q^{-k}-q^k)
+t\,\d_{k>i}\,(q^{k-n}-q^{-k+n})\right)
\nonumber\\
&\qquad+\d_{j,i-1}\left(s\,\d_{k\leq j}\,(q^{-k}-q^k)
+t\,\d_{k>j}\,(q^{k-n}-q^{-k+n})\right),
\\
g_{ij}{}^k&=q^{i-j}\left(
s\,\left(q^k\,\d_{k<j}-q^{-k}\,\d_{k<i}\right)+
t\,\left(q^{n-k}\d_{k\geq i}-q^{k-n}\d_{k\geq j}\right)
\right)
\nonumber\\
N_{ijl}&=q^{1/2-j}\left(s+t\,q^n\right),~~~~~~~
M_{kij}=q^{i-1/2}\left(s+t\,q^{-n}\right)
\end{align}
(We use a generalized Kronecker delta notation, e.g., $\d_{i\leq j}=1$
if $i\leq j$, $0$ otherwise.)
\end{proposition}
The restriction that if $\pa$ is written as $\pa=t/s$ then $s+t$ has
to be invertible comes from the requirement that the quantum Lie
product should not vanish modulo $h$.
For details of the calculation leading to the above formulae we refer
the reader to \cite{qgln}.
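As a consistency check one may take the classical limit $q\to 1$: all
$f_{ij}{}^k$ vanish, so the quantum Cartan subalgebra becomes abelian, and
the left and right roots coincide,
\begin{equation*}
l_{ij}(H_k)\big|_{q=1}=r_{ij}(H_k)\big|_{q=1}
=(s+t)\left(\d_{ki}-\d_{k,i-1}-\d_{kj}+\d_{k,j-1}\right),
\end{equation*}
which, after normalizing $s+t=1$, is the classical root of $\mathfrak{sl}_{n}$
acting on $X_{ij}$. Similarly both $N_{ijl}$ and $M_{kij}$ reduce to $s+t$.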
The Lie algebra $\mathfrak{sl}_{n}$ with $n>2$ possesses an automorphism which is due to
the symmetry of the Dynkin diagram. It would be natural to require
that this automorphism survives also at the quantum level.
By inspecting the above Lie product relations we find
\begin{proposition}\label{thm:t}
The quantum Lie algebra $(\sln)_h(\pa)$ possesses the Dynkin diagram
automorphism
\begin{equation}
\tau(X_{ij})=-X_{n+1-j,n+1-i},~~~~~
\tau(H_i)=H_{n-i}
\end{equation}
iff $\pa=1$.
\end{proposition}
This is the reason why in \cite{qgln} we focused our attention on
the case of $\pa=1$.
The most basic property of a Lie product is its antisymmetry. In
quantum Lie algebras this has found an interesting generalization.
\begin{proposition}\label{thm:anti}
The quantum Lie product of $\qlie$ for $\lie\neq\mathfrak{sl}_{n>2}$ and of $(\sln)_h(\pa)$
with $\tilde{\pa}=\pa$ is \dem{$q$-antisymmetric}, i.e.,
there exists a $q$-conjugation $\qconj{}:\qlie\to\qlie$ consistent with the
gradation \eqref{gradation} such that
\begin{equation}
\qconj{[a,b]_h}{}=-[\qconj{b},\qconj{a}]_h{}.
\end{equation}
Thus, choosing the basis in \eqref{qliestruc} so that
$\qconj{X_\a}=X_\a$, $\qconj{H_i}=H_i$, the structure constants satisfy
\begin{equation}
r_\a=\t{l}_\a,~~~f_{ij}{}^k=-\t{f}_{ji}{}^k,~~~
g_\a{}^k=-\t{g}_{-\a}{}^k,~~~N_{\a\b}=-\t{N}_{\b\a}.
\end{equation}
\end{proposition}
\begin{proof}
For $(\sln)_h$ the statement can be verified directly from the expressions
in \propref{slnstruc}. For $\lie\neq\mathfrak{sl}_{n}$ we use the same notation as
in the proof of \propref{thm:iso}. The adjoint representation
appears with multiplicity one in the tensor product and thus we know that
the highest weight state $\hat{v}_0=K_0{}^{ab}(h)\,v_a\otimes v_b$ in
$\lie[[\hhh]]\hat{\ot}\,\lie[[\hhh]]$ satisfying \eqref{highest3} is unique up to rescaling.
$\t{\hat{v}}_0^T=K_0{}^{ba}(-h)\,v_a\otimes v_b$ also satisfies the highest
weight condition \eqref{highest3}.
\begin{align}
\adth{x_i^+}{\t{\hat{v}}_0^T}&=
\left((\adsh\otimes\adsh)\Delta(x_i^+)\right)\,\t{\hat{v}}_0^T
\nonumber\\
&=\,\sim\left[\left((\adsh\otimes\adsh)\Delta^T(x_i^+)\right)\,
\hat{v}_0^T\right]
\nonumber\\
&=\,\sim\left[\adth{x_i^+}{\hat{v}_0}\right]^T
\nonumber\\
&=0.
\end{align}
We used that $\sim\circ\,(\adsh\,x)=(\adsh\,\t{x})\,\circ\sim$
(which follows from $\sim\circ\,\driso=\driso\,\circ\sim$),
that $\t{v}_a=v_a$ and that $\sim\circ\,\Delta=\Delta^T\,\circ\sim$.
Thus $\hat{v}^\prime_0=\frac{1}{2}(\hat{v}_0-\t{\hat{v}}_0^T)$ is a
highest weight state (proportional to $\hat{v}_0$ by uniqueness).
It is non-zero because it is non-zero classically.
Following a similar calculation to the above one finds that it leads
to Clebsch-Gordan coefficients
$K^\prime_a{}^{bc}(h)=\frac{1}{2}\left(K_a{}^{bc}(h)-
K_a{}^{cb}(-h)\right)$.
These are manifestly $q$-antisymmetric. Following through the
construction of the structure constants one finds
$f^\prime_{ab}{}^c(h)=-f^\prime_{ba}{}^c(-h)$.
\end{proof}
\section{Quantum Lie algebras $\mathfrak{L}_h(\lie)$ inside $U_h(\lie)$}\label{s:concrete}
\label{sec:qlie}
In \defref{def:ab} quantum Lie algebras are defined abstractly, i.e.,
independently of any specific realization.
In \cite{qlie} quantum Lie algebras were defined
as concrete objects, namely as certain submodules of the quantized
enveloping algebras $U_h(\lie)$. This definition is based on the
observation that an ordinary Lie algebra $\lie$ can be naturally
viewed as a subspace of its enveloping algebra $U(\lie)$ with the Lie
product on this subspace given by the adjoint action of $U(\lie)$.
Thus it is natural to define a quantum Lie algebra as an analogous
submodule of the quantized enveloping algebra $U_h(\lie)$ with the quantum
Lie product given by the adjoint action of $U_h(\lie)$. Before we can
state the precise definition we need some preliminaries.
The Cartan involution $\theta:U_h(\lie)\to U_h(\lie)$
is given by the same formulas as in the
classical case:
$\theta(x_i^\pm)=x_i^\mp,\ \theta(h_i)= -h_i$.
It is an algebra automorphism and a coalgebra antiautomorphism, i.e.,
$\Delta\circ\theta=(\theta\hat{\ot}\,\theta)\circ\Delta^T$ and
$S\circ \theta=\theta\circ S^{-1}$.
We define a tilded Cartan involution by composing the Cartan
involution with $q$-conjugation, i.e.,
$\t{\theta}=\sim\circ\theta$. Similarly we define a tilded
antipode as $\t{S}=\sim\circ S$.
With respect to the adjoint action defined in \eqref{adjoint} they
satisfy
$\ad{\t{\theta}(a)}{\t{\theta}(b)}=\t{\theta}(\ad{a}{b})$ and
$\ad{\t{S}(a)}{\t{S}(b)}=\t{S}(\ad{S^{-1}(a)}{b})$ for all $a,b\in U_h(\lie)$.
\begin{definition}\label{def:qlie}
A \dem{quantum Lie algebra $\mathfrak{L}_h(\lie)$ inside $U_h(\lie)$}
is a finite-dimen\-sio\-nal
indecomposable $\text{ad}$-submodule
of $U_h(\lie)$ endowed with the
\dem{quantum Lie product}
$[a,b]_h{}=\ad{a}{b}$
such that
\begin{enumerate}
\vspace{-2mm}
\item $\mathfrak{L}_h(\lie)$ is a deformation of $\lie$, i.e., there is an
algebra isomorphism
$\mathfrak{L}_h(\lie)\cong\lie\ (\text{mod }h)$.
\vspace{-2mm}
\item
$\mathfrak{L}_h(\lie)$ is invariant under $\t{\theta}$, $\t{S}$ and any diagram
automorphism $\tau$.
\end{enumerate}
\vspace{-2mm}
A \dem{weak quantum Lie algebra $\wqlie$} is defined similarly but
without the requirement 2.
\end{definition}
The existence of a Cartan involution and
an antipode on $\mathfrak{L}_h(\lie)$ plays an important role in the investigations into
the general structure of quantum Lie algebras in \cite{qlie}. In particular
it allows the definition of a quantum Killing form.
The invariance under the diagram automorphisms $\tau$ is less important
but is clearly a natural condition to impose.
It is shown in \cite{qlie} that given any weak quantum Lie algebra $\wqlie$
inside $U_h(\lie)$, one can always construct a true quantum Lie
algebra $\mathfrak{L}_h(\lie)$ which satisfies property 2 as well. Thus this
extra requirement is not too strong.
We now come to the relation between the abstract quantum Lie algebras
$\qlie$ of \defref{def:ab} and the concrete weak quantum Lie algebras
$\wqlie$ of \defref{def:qlie}.
\begin{proposition}\label{thm:lq}
All weak quantum Lie algebras
$\wqlie$ inside $U_h(\lie)$ are isomorphic to the quantum Lie algebra
$\qlie$ as algebras
(or to $(\sln)_h(\pa)$ for
some $\pa$ in the case of $\lie=\mathfrak{sl}_{n}$).
\end{proposition}
\begin{proof}
By definition $\wqlie$ is a finite-dimensional, indecomposable $U_h(\lie)$
module. Condition
1 of the definition implies that the representation of $U_h(\lie)$ carried by
this module is a deformation of the representation of $U(\lie)$
carried by $\lie$. There is only one such deformation, namely the
adjoint representation $\adsh$ carried by $\lie[[\hhh]]$. Thus $\wqlie$
is isomorphic to $\lie[[\hhh]]{}$ as a $U_h(\lie)$ module.
The identity
\begin{equation}
\sum\ad{\ad{x_{(1)}}{a}}{(\ad{x_{(2)}}{b})}=\ad{x}{(\ad{a}{b})}
\end{equation}
can be rewritten, using that on $\wqlie\subset U_h(\lie)$
one has $[a,b]_h=\ad{a}{b}=\adh{a}{b}$, as
\begin{equation}
(\adsh\,x)\circ \qlb=\qlb\circ(\adsh_2\,x),~~~~
\forall x\in U_h(\lie).
\end{equation}
This states that the quantum Lie product on $\wqlie$ is
a $U_h(\lie)$-module homomorphism
and thus is a quantum Lie product in the sense of \defref{def:ab}.
\end{proof}
\begin{remark} One should not confuse the adjoint {\em{action}} $\ads$
with the adjoint {\em{representation}} $\adsh$. The adjoint action
$\ads$ is defined using the coproduct and the antipode as
\[
\ad{x}{y}=x_{(1)}yS(x_{(2)})~~~~\forall x,y\in U_h(\lie).
\]
The adjoint
representation $\adsh$ is defined using the algebra isomorphism
$\driso:U_h(\lie)\to U(\lie)[[h]]$ of \propref{thm:dri} as
\[
\adh{x}{a}=\adc{\driso(x)}{a}~~~~\forall
x\in U_h(\lie),\ a\in\lie[[\hhh]].
\]
Thus the adjoint action is determined by the $h$-deformed Hopf-algebra
structure whereas the adjoint representation is determined by only
the $h$-deformed algebra structure. From this point of view it is
surprising that the two ever coincide. But the weak quantum Lie
algebras $\wqlie$ are exactly those embeddings of $\lie[[\hhh]]$ into
$U_h(\lie)$ on which $\ads$ and $\adsh$ coincide, and we will establish
their existence in the next section.
\end{remark}
\propref{thm:lq} allows us
to answer two important questions about the
concrete quantum Lie algebras $\mathfrak{L}_h(\lie)$ inside $U_h(\lie)$ which were left
unanswered in \cite{qlie}.
\begin{theorem}\label{theorem}
Let $\lie$ be any finite-dimensional simple complex Lie algebra.
\begin{enumerate}
\item
All quantum Lie
algebras $\mathfrak{L}_h(\lie)$ are isomorphic as algebras.
\item
All quantum Lie algebras $\mathfrak{L}_h(\lie)$ have $q$-antisymmetric Lie products.
\end{enumerate}
\end{theorem}
\begin{proof}
1. For $\lie\neq\mathfrak{sl}_{n>2}$ this is obvious from \propref{thm:lq} and the
uniqueness of $\qlie$ according to \propref{thm:iso}. For $\lie=\mathfrak{sl}_{n>2}$
the requirement of $\tau$-invariance in \defref{def:qlie} implies through
\propref{thm:t} that $\mathfrak{L}_h(\sln)$ can be isomorphic only to $(\sln)_h(\pa=1)$.
2. This is obvious because $\qlie$ and $(\sln)_h(\pa=1)$ have
$q$-antisymmetric Lie products according to \propref{thm:anti}.
\end{proof}
\section{Construction of quantum Lie algebras $\mathfrak{L}_h(\lie)$}\label{sec:constr}
There is a general method for the construction of weak quantum Lie
algebras $\wqlie$ and quantum Lie algebras $\mathfrak{L}_h(\lie)$
inside $U_h(\lie)$. The method was presented in \cite{qgln}
for $\lie=\mathfrak{sl}_{n}$ but it works for any finite-dimensional
simple complex Lie algebra $\lie$ as we will discuss here.
We begin with a lemma giving a construction of ad-submodules of
$U_h(\lie)$.
\begin{lemma}\label{lem}
Let $A$ be any element of $U_h(\lie)\hat{\ot}\,U_h(\lie)$ satisfying
$A\,\Delta(x)=\Delta(x)\,A,~\forall x\in U_h(\lie)$. Let $V[[h]]$
be any finite-dimensional indecomposable $U_h(\lie)$ module
and let $\pi_{ij}$ be the corresponding
representation matrices. Then the elements
\begin{equation}
A_{ij}=(\pi_{ij}\otimes\text{id})\,A\in U_h(\lie)
\end{equation}
span an ad-submodule of $U_h(\lie)$ which is isomorphic to a
submodule of\hfill\newline
$V[[h]]^*\hat{\ot}\, V[[h]]$, i.e.,
\begin{equation}
\ad{x}{A_{ij}}=A_{kl}\,\pi_{ki}^*(x_{(1)})\,\pi_{lj}(x_{(2)}),~~~
\forall x\in U_h(\lie).
\end{equation}
Here $\pi^*$ denotes the dual (contragredient) representation to $\pi$
defined by
\begin{equation}\label{dualrep}
\pi_{ki}^*(x)=\pi_{ik}(S(x)).
\end{equation}
\end{lemma}
\begin{proof}
We first calculate
\begin{align}
x\,A_{ij}&=\left(\pi_{ij}\otimes\text{id}\right)(1\otimes x)\ A
\nonumber\\
&=\left(\pi_{ij}\otimes\text{id}\right)(S(x_{(1)})\otimes 1)
\ A\ (x_{(2)}\otimes x_{(3)})
\\
&=\pi_{ik}(S(x_{(1)}))\,A_{kl}\,\pi_{lj}(x_{(2)})\,x_{(3)}.
\nonumber
\end{align}
Then, using \eqref{dualrep}
\begin{align}
\ad{x}{A_{ij}}&=x_{(1)}\,A_{ij}\,S(x_{(2)})
\nonumber\\
&=A_{kl}\,\pi^*_{ki}(x_{(1)})\,\pi_{lj}(x_{(2)})\,x_{(3)}\,
S(x_{(4)})
\\
&=A_{kl}\,\pi^*_{ki}(x_{(1)})\,\pi_{lj}(x_{(2)}).
\nonumber
\end{align}
In the last step the antipode axiom $x_{(1)}\,S(x_{(2)})=\epsilon(x)\,1$
was used.
\end{proof}
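Note that \eqref{dualrep} indeed defines a representation: since the
antipode $S$ is an algebra antihomomorphism,
\begin{equation*}
\pi^*_{ki}(xy)=\pi_{ik}(S(y)S(x))=\pi_{il}(S(y))\,\pi_{lk}(S(x))
=\pi^*_{kl}(x)\,\pi^*_{li}(y),
\end{equation*}
i.e., $\pi^*(xy)=\pi^*(x)\,\pi^*(y)$ as matrices.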
This lemma can be applied to construct weak quantum Lie algebras.
\begin{proposition}
Let $A=h^{-1}\left(R^T R-1\right)$ where $R$ is the universal
R-matrix of $U_h(\lie)$ and $R^T$ the same with the tensor factors
interchanged (i.e., if $R=\sum a_i\otimes b_i$ then
$R^T=\sum b_i\otimes a_i$).
Let $\{e_i\}$ be a basis for the $U_h(\lie)$ module $V[[h]]$
and let $\pi_{ij}$ be the corresponding representation
matrices. Choose a basis $\{v_a\}$ for the adjoint representation
$\lie[[h]]$ of $U_h(\lie)$ and let
$K:\lie[[h]]\to V[[h]]^*\hat{\ot}\, V[[h]],
v_a\mapsto\hat{v}_a=K_a{}^{ij}\,(e^*_i\otimes e_j)$ be a
$U_h(\lie)$-module homomorphism, i.e., the $K_a{}^{ij}$ are
quantum Clebsch-Gordan coefficients.
Then the elements
\begin{equation}
A_a=K_a{}^{ij}\left(\pi_{ij}\otimes\text{id}\right)\,A\,\in U_h(\lie)
\end{equation}
span a weak quantum Lie algebra $\wqlie=\mbox{span}_{{\mathbb C}[[h]]}
\{A_a\}$.
\end{proposition}
\begin{proof}
The expression $A=h^{-1}\left(R^T R-1\right)$ is well defined
because $R=1 \ (\text{mod }h)$. It follows from the defining property
$R\,\Delta(x)=\Delta^T(x)\,R\ \forall x\in U_h(\lie)$
of the R-matrix that
$A\,\Delta(x)=\Delta(x)\,A,~~\forall x\in U_h(\lie)$. It is then clear
from \lemref{lem} that the $A_a$ span an ad-submodule
of $U_h(\lie)$. It follows from the definition of the Clebsch-Gordan
coefficients $K_a{}^{ij}$ that this ad-submodule is either
isomorphic to the adjoint representation or zero.
$R$ satisfies $R=1+h\,r+{\cal O}(h^2)$ where $r\in\lie\otimes\lie$
is the classical $r$-matrix. Thus $A=r+r^T \ (\text{mod }h)\in\lie\otimes\lie$
and $A_a\ (\text{mod }h)\in\lie$. It follows that $\mbox{span}_{{\mathbb C}[[h]]}
\{A_a\}=\lie\ (\text{mod }h)$.
\end{proof}
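To lowest order in $h$ this construction reproduces the classical
embedding: $A=r+r^T\ (\text{mod }h)$, and for the standard R-matrix
$r+r^T$ is, up to normalization, the invariant symmetric tensor
$t=x_\mu\otimes x^\mu\in\lie\otimes\lie$ built from bases dual with
respect to the Killing form, so that
\begin{equation*}
A_a\ (\text{mod }h)=K_a{}^{ij}\,\pi_{ij}(x_\mu)\,x^\mu\in\lie .
\end{equation*}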
Using the fact, established in \cite{qlie}, that given a weak
quantum Lie algebra $\wqlie$ one can always construct a
true quantum Lie algebra $\mathfrak{L}_h(\lie)$, we arrive at the announced
existence result.
\begin{theorem}
For any finite-dimensional simple complex Lie algebra $\lie$
there exists at least one quantum Lie algebra $\mathfrak{L}_h(\lie)$ inside $U_h(\lie)$.
\end{theorem}
\noindent{\bf Thanks:}
We thank Andrew Pressley, Vyjayanthi Chari, Manfred Scheunert and Chris
Gardner for discussions and helpful comments.
| {
"timestamp": "1996-10-10T21:36:38",
"yymm": "9605",
"arxiv_id": "q-alg/9605025",
"language": "en",
"url": "https://arxiv.org/abs/q-alg/9605025",
"abstract": "Quantum Lie algebras are generalizations of Lie algebras which have the quantum parameter h built into their structure. They have been defined concretely as certain submodules of the quantized enveloping algebras. On them the quantum Lie bracket is given by the quantum adjoint action.Here we define for any finite-dimensional simple complex Lie algebra g an abstract quantum Lie algebra g_h independent of any concrete realization. Its h-dependent structure constants are given in terms of inverse quantum Clebsch-Gordan coefficients. We then show that all concrete quantum Lie algebras are isomorphic to an abstract quantum Lie algebra g_h.In this way we prove two important properties of quantum Lie algebras: 1) all quantum Lie algebras associated to the same g are isomorphic, 2) the quantum Lie bracket of any quantum Lie algebra is $q$-antisymmetric. We also describe a construction of quantum Lie algebras which establishes their existence.",
"subjects": "Quantum Algebra (math.QA)",
"title": "Quantum Lie algebras; their existence, uniqueness and $q$-antisymmetry"
} |
https://arxiv.org/abs/1608.01305 | Elephant Random Walks and their connection to Pólya-type urns | In this paper, we explain the connection between the Elephant Random Walk (ERW) and an urn model à la Pólya and derive functional limit theorems for the former. The ERW model was introduced by Schütz and Trimper [2004] to study memory effects in a one-dimensional discrete-time random walk with a complete memory of its past. The influence of the memory is measured in terms of a parameter $p$ between zero and one. In the past years, a considerable effort has been undertaken to understand the large-scale behavior of the ERW, depending on the choice of $p$. Here, we use known results on urns to explicitly solve the ERW in all memory regimes. The method works as well for ERWs in higher dimensions and is widely applicable to related models. | \section{Introduction}
Random walks and, more generally, diffusion processes are widely used in
theoretical physics to describe phenomena of traveling motion and mass
transport. Due to the fractal structure of nature and space and temporal
long-range correlations in particle movements (see,
e.g.,~\cite{Ma,MaNe,MeKl,ScMo,Wa}), often so-called anomalous diffusions
appear, where the mean square displacement of a particle is no longer a
linear function of time, but rather given by a power law.
A simple model exhibiting anomalous diffusion is the so-called Elephant
Random Walk (ERW) introduced by Sch\"utz and Trimper~\cite{ScTr} in 2004,
which is the topic of this paper. The ERW model is a one-dimensional
discrete-time nearest neighbor random walk on $\mathbb{Z}$, which remembers its
full history and chooses its next step as follows: First, it selects
randomly a step from the past, and then, with probability $p\in[0,1]$, it
repeats what it did at the remembered time, whereas with the complementary
probability $1-p$, it makes a step in the opposite direction. We refer to
the next section for the precise definition. The memory parameter
$p\in[0,1]$ allows one to model the walker's willingness to repeat its past
behavior. When $p=1/2$, the memory has no effect on the movement: the
model becomes Markovian.
The ERW model and some variations thereof have drawn a lot of attention in
recent years, see,
e.g.,~\cite{AlArCrSiSiVi,BoRC,HaKuLi,Ha1,Ha2,KuHaLi,Kue,PaEs,ScTr,Se,SiCrScViTr}
to mention just a few. One of the key questions concerns the influence of
the memory on the long-time behavior. Various results and predictions have
been obtained, e.g., in~\cite{PaEs,ScTr,SiCrScViTr}. In this note, we
explicitly determine the long-time behavior of the ERW model in all regimes
$p\in[0,1]$. We obtain central limit theorems for the full process of the
ERW, with a scaling depending on the choice of $p$. In the regime
$p\leq 3/4$, the limiting process turns out to be Gaussian (with explicit
parameters). In the superdiffusive case $p>3/4$, the limit is non-Gaussian,
as already predicted in~\cite{SiCrScViTr, PaEs}. We point out that our
limit theorems are stronger than finite-dimensional convergence of
the ERW. In particular, they imply convergence of continuous functionals of
the walker.
Our method uses a connection to P\'olya-type urns that was already known
before in the literature, see, e.g., the works of Harris~\cite{Ha1,Ha2} and
also the survey of Pemantle~\cite{Pe} on related random processes with
reinforcement. Being robust and simple, the method is neither limited to
one-dimensional models nor to the specific ERW model, but rather
widely applicable to other random walks with memory. A bit more precisely,
given what is known from the theory of urns, we will see that the
asymptotic behavior of such models is essentially determined by the
spectral decomposition of the (replacement) matrix of the corresponding
urn.
Since the ERW is arguably the most natural and simplest model of a
one-dimensional random walk with a complete memory, we concentrate in this
note on the basic ERW and leave it mostly to the reader to adapt the method
to other walks with memory. However, we outline some possible extensions in
Section~\ref{sec:extensions}.
The rest of this paper is structured as follows. After having introduced the
exact ERW model in the following section, we describe in
Section~\ref{sec:urn} a particular discrete-time urn model containing balls
of two colors, where step by step a new ball is added. We then show in
Section~\ref{sec:results} how known limit results on the composition of
the urn can be transferred into statements about the position of the ERW
when time goes to infinity. In Section~\ref{sec:extensions}, we discuss
various extensions, and in the last part, we summarize our findings.
We finally mention that independently of us and at the same time as ours, a
work of Coletti, Gava and Sch\"utz~\cite{CoGaSc} appeared on the arXiv,
with related results on the ERW but using a different approach.
\section{The model}
Let us now introduce the exact model, in the way it was first defined
in~\cite{ScTr}. The ERW is a one-dimensional random walk $(S_n,n\in\mathbb{N}_0)$
on the integers
starting, say, at zero at time zero, $S_0=0$. At time $n\geq 1$, the
position of the walk is given by
$$
S_n = S_{n-1}+\sigma_n,
$$
where $\sigma_n$, $n\in\mathbb{N}=\{1,2,\ldots\}$, are random variables taking
values in $\{\pm 1\}$, which are specified as follows. Firstly, $\sigma_1$
takes value $1$ with some probability $q\in [0,1]$ and value $-1$ with
probability $1-q$. Accordingly, the first step of the ERW goes to the right
[left] with probability $q$ $[1-q]$. At any later time $n\geq 2$, we choose
a number $n'$ uniformly at random among the previous times $\{1,\ldots,n-1\}$
and set
$$
\sigma_n= \left\{\begin{array}{l@{\quad\mbox{with probability }}l}
+\sigma_{n'} & p\\
-\sigma_{n'} &1-p
\end{array}\right.,
$$
where $p\in [0,1]$ is a memory parameter which is inherent to the
model. Note that the case $p=1/2$ corresponds to simple symmetric random
walk: there is no memory effect. Moreover, we remark that
$S_n=\sigma_1+\ldots+\sigma_n$. We implicitly agree that the various random
choices made in this construction are independent from each other.
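The step rule above translates directly into a simulation; the following sketch (illustrative only, function name ours) stores the full step history, as the walker must:

```python
import random

def erw(n, p, q=0.5, rng=None):
    """Simulate n steps of the Elephant Random Walk; returns the path S_0, ..., S_n."""
    rng = rng or random.Random()
    # First step: +1 with probability q, -1 with probability 1-q.
    sigma = [1 if rng.random() < q else -1]
    path = [0, sigma[0]]
    for _ in range(2, n + 1):
        remembered = rng.choice(sigma)                 # uniform past step sigma_{n'}
        step = remembered if rng.random() < p else -remembered
        sigma.append(step)
        path.append(path[-1] + step)
    return path
```

In the degenerate case $p=1$ the walker repeats its first step forever, so $S_n=nS_1$ with probability one.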
In~\cite{ScTr}, the question of how the memory of the history influences
the position of the walker at large times was investigated. In particular,
writing $\langle \cdot\rangle$ for the expectation operator, it was shown
that the mean displacement of the ERW satisfies for $n\gg 1$
\begin{equation}
\label{eq:mean}
\langle S_n\rangle \sim \frac{(2q-1)}{\Gamma(2p)}n^{2p-1},
\end{equation}
while for the second moment, it was proved that
\begin{equation}
\label{eq:meansquare}
\langle S_n^2\rangle \sim \left\{\begin{array}{l@{\quad\mbox{for }}l}
\frac{n}{3-4p} & 0\leq p<3/4\\
n\ln n&p=3/4\\
\frac{n^{4p-2}}{(4p-3)\Gamma(4p-2)}&3/4<p\leq 1
\end{array}.\right.
\end{equation}
The last display entails a transition from a diffusive
($0\leq p<3/4$) to a superdiffusive ($3/4<p\leq 1$) regime, with the ERW
behaving marginally superdiffusively at $p=3/4$ itself. Using an approximation
by a non-Markovian Fokker-Planck equation, the random walk propagator of
the ERW model was reported in~\cite{ScTr} to be Gaussian in all regimes
(with a time-dependent diffusion constant), an observation which was later
revised in~\cite{SiCrScViTr} for the superdiffusive regime $p>3/4$, where
a more precise analysis showed that the random walk propagator is in fact
non-Gaussian. Here, the term {\it propagator} refers to the probability
density of the usual continuum limit. See also~\cite{PaEs} for a related
work confirming that the Fokker-Planck approximations do not yield adequate
results for the ERW model, at least not in the superdiffusive regime. The
statistics in the regime $1/2<p\leq 3/4$ were left open
in~\cite{SiCrScViTr}.
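As a quick numerical check of~\eqref{eq:mean} (an illustrative sketch, not part of the original derivation): averaging $\sigma_{n+1}$ over the uniformly remembered time gives the exact recursion $\langle S_{n+1}\rangle=\big(1+\frac{2p-1}{n}\big)\langle S_n\rangle$ with $\langle S_1\rangle=2q-1$, which can be iterated and compared with the asymptotic formula:

```python
import math

def mean_displacement(n, p, q):
    """Exact <S_n> from the recursion <S_{k+1}> = (1 + (2p-1)/k) <S_k>, <S_1> = 2q - 1."""
    m = 2 * q - 1
    for k in range(1, n):
        m *= 1 + (2 * p - 1) / k
    return m

def asymptotic_mean(n, p, q):
    """Leading-order prediction (2q-1) n^(2p-1) / Gamma(2p)."""
    return (2 * q - 1) * n ** (2 * p - 1) / math.gamma(2 * p)
```

For $p=3/4$, $q=1$, the ratio of the exact to the asymptotic mean is already very close to $1$ at $n=10^4$; for $p=1/2$ both expressions equal $2q-1$ exactly.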
It is the main purpose of this note to affirm the observation
of~\cite{SiCrScViTr} in the superdiffusive regime and clarify the behavior
in the remaining regimes, by explicitly calculating the large-scale
behavior of the ERW model using a connection to P\'olya-type urns, which we
explain next.
\section{The connection to P\'olya-type urns}
\label{sec:urn}
Imagine a discrete-time urn with balls of two colors, black and red,
say. The composition of the urn at time $n\in\mathbb{N}$ is given by
a vector $X_n=(X_n^1,X_n^2)$, where the first component $X_n^1$ counts the
number of black balls at time $n$, and the second component $X_n^2$ the
number of red balls. We restrict ourselves to starting compositions
$X_1=\xi$ for some (possibly random) vector $\xi=(\xi^1,\xi^2)$ taking
values in $\{(1,0),\,(0,1)\}$ almost surely. The urn now evolves according
to the following dynamics: At time $n=2,3,\ldots$, we draw a ball uniformly
at random, observe its color, put it back into the urn and add with
probability $p$ a ball of the same color, and with probability $1-p$ a ball
of the opposite color. Then we update $X_n$, so that $X_n$ describes the
composition of the urn after the $(n-1)$th drawing.
The connection to the ERW model is remarkably
simple: If $(S_n,n\in\mathbb{N}_0)$ denotes the ERW started
from $S_0=0$ and such that $S_1=\xi^1-\xi^2$, then
\begin{equation}
\label{eq:ERW-urn}
(S_n,n\in\mathbb{N}) =_d (X_n^1-X_n^2,n\in\mathbb{N}),
\end{equation}
where $=_d$ refers to equality in law. In other words, the difference
between the number of black and red balls in the above urn evolves like an
ERW whose first step equals $\xi^1-\xi^2$.
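Pathwise, drawing a uniform ball mirrors remembering a uniform past step, and the color of the added ball encodes the new step $\sigma_n$. A small illustrative simulation of the urn (function name ours, not from the paper):

```python
import random

def two_color_urn(n, p, start=(1, 0), rng=None):
    """Evolve the two-color urn up to time n; returns the composition (black, red) = X_n."""
    rng = rng or random.Random()
    black, red = start
    for _ in range(2, n + 1):
        drew_black = rng.random() < black / (black + red)  # draw a uniform ball
        same = rng.random() < p                            # add same color with prob. p
        if drew_black == same:
            black += 1
        else:
            red += 1
    return black, red
```

Note that the total number of balls at time $n$ is always $n$, and for $p=1$ the urn started from a single black ball stays monochromatic, matching the degenerate ERW $S_n=nS_1$.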
The urn described above fits into a broader setting of so-called
generalized Friedman's or P\'olya urns, see~\cite{Be1,Be2,Fr,Sa} for first
results (with deterministic replacement rules). Athreya and
Karlin~\cite{AtKa} proved an embedding of urn schemes into continuous-time
multitype Markov branching processes, which includes the treatment of
generalized Friedman's urn processes with randomized replacement rules, as
in our case. These techniques were further developed by Janson
in~\cite{Ja}, which serves as the main reference for this paper. Many
results on urns can also be found in Mahmoud's book~\cite{Mah}, which is
however more combinatorial in nature.
Key quantities that govern the long-time behavior of the urn
process are the eigenvalues and eigenvectors of the so-called mean
replacement matrix. In our case, it is given by
\begin{equation}
\label{eq:A}
A=\begin{pmatrix}p&1-p\\1-p&p\end{pmatrix}.
\end{equation}
The eigenvalues of $A$ are $\lambda_1=1$, $\lambda_2=2p-1$, and
corresponding right and left eigenvectors are $v_1=\frac{1}{2}(1, 1)'$,
$v_2=\frac{1}{2}(1, -1)'$, $u_1=(1,1)$, $u_2=(1,-1)$, where we write $v'$
for the transpose of $v$. Here, as in $(2.2)$ and $(2.3)$ of~\cite{Ja}, we
have chosen $v_1,v_2$ and $u_1,u_2$ such that $u_1v'_1 = u_2v'_2=1$ and the
$L^1$-norm of $v_1$, $v_2$ is equal to one.
It is well-known, see, e.g.~\cite{AtKa, ChPoSa,KeSt,Ja}, that the
asymptotics of the urn depends on the position of $\lambda_2/\lambda_1$
with respect to $1/2$ (in the situation of a more general urn, assuming
that the largest eigenvalue $\lambda^\ast$ is positive and simple, one has
to check whether there is an eigenvalue different from $\lambda^\ast$ with
real part $>\lambda^\ast/2$). This already explains on a formal level why
for the ERW model, a phase transition occurs at $p=3/4$.
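This spectral data is easy to verify numerically; the following sketch (illustrative) checks the eigenvalues of $A$ and the left eigenvector $u_2=(1,-1)$:

```python
import numpy as np

def replacement_matrix(p):
    """Mean replacement matrix A of the two-color urn."""
    return np.array([[p, 1 - p], [1 - p, p]])

p = 0.6
# Expected spectrum: lambda_1 = 1, lambda_2 = 2p - 1;
# the ratio lambda_2 / lambda_1 crosses 1/2 exactly at p = 3/4.
eigvals = np.sort(np.linalg.eigvals(replacement_matrix(p)).real)[::-1]
```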
\section{Results and proofs for the standard ERW model}
\label{sec:results}
The paper of Janson~\cite{Ja} contains an exhaustive and very broad
treatment of urn schemes and corresponding functional limit theorems. For
our purpose, it is most convenient to adapt the general results from
there and to translate them into the setting of the ERW model, {\it
via}~\eqref{eq:ERW-urn}.
\subsection{The diffusive case ($0\leq p<3/4$)}
Our first convergence result deals with a distributional convergence of
processes, which holds in the Skorokhod space $D([0,\infty))$ of
right-continuous functions with left-hand limits. We simply recall that
distributional convergence in $D([0,\infty))$ to a process without
discontinuities at fixed times is stronger than finite-dimensional
distributional convergence, and point at~\cite{Bi} for more background.
\begin{theorem}
\label{thm:el-diffusive}
Let $0\leq p<3/4$. Then, for $n$ tending to
infinity, we have the distributional convergence in $D([0,\infty))$
$$
\left(\frac{S_{\lfloor tn\rfloor}}{\sqrt{n}}, t\geq 0\right)\Longrightarrow
(W_t,t\geq 0),
$$
where $W=(W_t,t\geq 0)$ is a continuous $\mathbb{R}$-valued Gaussian process
specified by $W_0=0$, $\langle W_t\rangle=0$ for all $t\geq 0$, and
$$
\langle W_sW_t\rangle=\frac{s}{3-4p}\left(\frac{t}{s}\right)^{2p-1},\quad 0<s\leq t.
$$
\end{theorem}
We observe that when $p=1/2$, $W$ is a standard Brownian motion. Of course,
this we already know from Donsker's invariance principle, since in this
case, the ERW behaves as a simple symmetric (Bernoulli) random walk on
$\mathbb{Z}$, except possibly for the first step.
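The reduction at $p=1/2$ can be read off directly: $3-4p=1$ and $(t/s)^{2p-1}=1$, so $\langle W_sW_t\rangle=s=\min(s,t)$, the Brownian covariance. A one-line sanity check (illustrative):

```python
def cov_W(s, t, p):
    """Covariance <W_s W_t> from the theorem above, for 0 < s <= t and p < 3/4."""
    assert 0 < s <= t
    return s / (3 - 4 * p) * (t / s) ** (2 * p - 1)
```

Setting $s=t$ recovers the variance $t/(3-4p)$, in line with~\eqref{eq:meansquare}.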
\begin{proof}
We apply Theorem 3.31(i) of~\cite{Ja},
which shows that $$(n^{-1/2}(X_{\lfloor tn\rfloor}-tn\lambda_1v_1), t\geq
0)$$ converges in distribution towards a continuous $\mathbb{R}^2$-valued Gaussian
process $V=(V_t,t\geq 0)$ with $V_0=0$ and $\langle V_t\rangle = 0$ for all
$t\geq 0$. In our case, we have $\lambda_1=1$, and the covariance structure
of $V$ is specified more precisely in Remark $5.7$ of~\cite{Ja}. Display $(5.6)$
there shows that
$$
\langle V_s{V_t}'\rangle=s\Sigma_I{\rm e}^{\ln(t/s)A},\quad 0<s\leq t,
$$
with $\Sigma_I$ a $2\times 2$-matrix defined under $(2.15)$
of~\cite{Ja}. An explicit calculation gives
$$\Sigma_I = \frac{1}{4(3-4p)}\begin{pmatrix}1&-1\\-1&1\end{pmatrix},$$
and the matrix exponential reads in our case
$$
{\rm e}^{\ln(t/s)A}=P\begin{pmatrix}\frac{t}{s}&0\\0&\left(\frac{t}{s}\right)^{2p-1}\end{pmatrix}P^{-1},\quad\hbox{with
} P=\frac{1}{2}\begin{pmatrix}1&1\\1&-1\end{pmatrix}.
$$
Together, we obtain for $0<s\leq t$,
$$
\langle V_s{V_t}'\rangle =
\frac{s}{4(3-4p)}\left(\frac{t}{s}\right)^{2p-1}\begin{pmatrix}1&-1\\-1&1\end{pmatrix}.
$$
By definition of $S_m$ and the continuous mapping theorem, we then deduce
that $(n^ {-1/2}S_{\lfloor tn\rfloor}, t\geq 0)$ converges in law in
$D([0,\infty))$ to a process $W=(W_t,t\geq 0)$ given by $W_t=V_t^1-V_t^2$
almost surely, where for $i=1,2$, $V^i$ denotes the $i$th component of
$V$. This proves our claim.
\end{proof}
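The matrix identities used in this proof can be verified numerically; in the sketch below (illustrative; the Taylor-series \texttt{mat\_exp} is a stand-in for a library matrix exponential), we recompute ${\rm e}^{\ln(t/s)A}$ and the resulting covariance:

```python
import numpy as np

def mat_exp(M, terms=60):
    """Taylor-series matrix exponential, accurate enough for the small matrices here."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

p, s, t = 0.6, 1.0, 4.0
A = np.array([[p, 1 - p], [1 - p, p]])
P = 0.5 * np.array([[1.0, 1.0], [1.0, -1.0]])
D = np.diag([t / s, (t / s) ** (2 * p - 1)])
E = mat_exp(np.log(t / s) * A)                      # e^{ln(t/s) A}
Sigma_I = np.array([[1.0, -1.0], [-1.0, 1.0]]) / (4 * (3 - 4 * p))
cov = s * Sigma_I @ E                               # <V_s V_t'>
```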
Note that the covariance structure of the limit $W$ does not fit the
asserted effective diffusion coefficient in~\cite{ScTr}, cf. Display $(27)$
there. But the asymptotic behavior of the ERW mean square displacement
derived in~\cite{ScTr} (see Display~\eqref{eq:meansquare} above) is in
agreement with the second moment of $W$.
Moreover, we note that the initial steps of the ERW do not influence its
long-time behavior. Indeed, this can easily be derived from the fact that
the above urn admits the same Gaussian limit when starting from more
general configurations $\xi=(\xi^1,\xi^2)\in\mathbb{N}_0^ 2$ with
$\langle|\xi|^2\rangle <\infty$ and $\xi\neq (0,0)$. Specifying for example
to the deterministic initial configuration $\xi =(k_1,k_2)$ for some
$k_1,k_2\in\mathbb{N}$, the difference process $(X_n^1-X_n^2,n=1,2,\ldots)$ can be
seen as an ERW observed from time $k=k_1+k_2$ on when conditioned to be at
position $k_1-k_2$ at time $k$. Applying~\cite[Theorem 3.31(i)]{Ja} to the
urn when starting from configuration $\xi=(k_1,k_2)$, we deduce that the
first $k$ steps do not influence the limiting behavior.
\subsection{The critical case ($p=3/4$)}
In the borderline case $p=3/4$, part (ii)
of~\cite[Theorem 3.31]{Ja} applies.
\begin{theorem}
\label{thm:el-critical}
Let $p=3/4$. Then, for $n$ tending to
infinity, we have the distributional convergence in $D([0,\infty))$
$$
\left(\frac{S_{\lfloor n^t\rfloor}}{\sqrt{\ln n}\,n^{t/2}}, t\geq
0\right)\Longrightarrow (B_t,t\geq 0),
$$
where $B=(B_t,t\geq 0)$ is a standard
one-dimensional Brownian motion.
\end{theorem}
The function space $D([0,\infty))$ is defined as in the diffusive case
discussed above.
\begin{proof}
According to Theorem 3.31(ii) of~\cite{Ja},
$$((\ln n)^{-1/2}n^{-t/2}(X_{\lfloor n^t\rfloor}-n^t\lambda_1v_1), t\geq
0)$$
converges in law towards a continuous $\mathbb{R}^2$-valued Gaussian process
$V=(V_t,t\geq 0)$ with $V_0=0$ and mean $\langle V_t\rangle=0$ for all
$t\geq 0$. The covariance structure of $V$ is given by expression $(3.27)$
of~\cite{Ja}, which simplifies in our case to
$$
\langle V_s{V_t}'\rangle=\frac{s}{4}\begin{pmatrix}1&-1\\-1&1\end{pmatrix},\quad
0<s\leq t.
$$
As above, the claim now follows from the continuous mapping theorem.
\end{proof}
Again, the asymptotics~\eqref{eq:meansquare} for the second moment of the
ERW obtained in~\cite{ScTr} match the limit. With the same arguments
as in the diffusive case, one deduces moreover that the first steps of the
walker have no influence on the long-time behavior.
\subsection{The superdiffusive case ($3/4<p\leq 1$)}
In this regime, we can make use of Theorems 3.24 and 3.26
in~\cite{Ja}.
\begin{theorem}
\label{thm:el-superdiffusive}
Set $\alpha=2p-1 \in(1/2,1]$. Then, for $n$ tending to infinity, we have
the almost sure convergence
$$
\left(\frac{S_{\lfloor tn\rfloor}}{n^\alpha}, t\geq 0\right)\longrightarrow
(t^\alpha Y, t\geq 0),
$$
where $Y$ is some $\mathbb{R}$-valued random variable different from
zero.
\end{theorem}
Below the proof of the theorem, we give some information on the limiting
variable $Y$.
\begin{proof}We note that in the notation of~\cite[Theorem 3.24]{Ja}, we
have $\Lambda'_{\textup{III}}=\{2p-1\}$. We are therefore in the setting
of the last part of the cited theorem and get
that $$(n^{-\alpha}(X_{\lfloor tn\rfloor}-tn\lambda_1v_1), t\geq 0)$$
converges almost surely to $(t^\alpha\hat{W}, t\geq 0)$, where
$\hat{W}=(\hat{W}^1, \hat{W}^2)$ is some non-zero random vector lying in
the eigenspace $E_{\lambda_2}$ of $A$, i.e.,
$\hat{W}\in\{v\in \mathbb{R}^2: v= \lambda (1,-1)\textup{ for some
}\lambda\in\mathbb{R}\backslash\{0\}\}$.
Since $Y=\hat{W}^1-\hat{W}^2$ almost surely, the claim follows.
\end{proof}
In contrast to the regimes discussed in the two previous sections, the
distribution of $Y$ does depend on the law of the initial step of the
ERW. For example, in the degenerate case $p=1$, $Y$ has the same
distribution as $S_1=\xi^1-\xi^2$ (in fact,
$S_{\lfloor tn\rfloor}=\lfloor tn\rfloor S_1$ for all $t\geq 0$ with
probability one). In this regard, see also the remarks in~\cite{Ja} above
Theorem 3.9.
By looking at the skewness and kurtosis of the position of the walker for
large $n$, it was already observed in~\cite{SiCrScViTr} that the law of the
limit $Y$ cannot be Gaussian, not even when starting from the symmetric
initial condition $\mathbb{P}(\xi=(1,0))=\mathbb{P}(\xi=(0,1))=1/2$. See also~\cite{PaEs}
for a similar observation.
Moreover, we point at Theorem 3.26 of~\cite{Ja}, which can be used to
(recursively) calculate the moments of $Y$. Let us
for simplicity assume that $\xi=(1,0)$. Then, using additionally~\cite[Theorem
3.10]{Ja}, one finds for the first two moments
$$
\langle Y\rangle=\frac{1}{\Gamma(2p)},\quad \langle Y^2\rangle=\frac{1}{(4p-3)\Gamma(4p-2)},
$$
as we should have expected from~\eqref{eq:mean}
and~\eqref{eq:meansquare}. For higher moments, see the remark
below Theorem 3.1 of~\cite{Ja}. We however mention that even in the case of
an urn with deterministic replacement rules, there is in general no closed
form for the moments of the limiting variable. See~\cite{ChPoSa} and
further references therein for more on this.
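For instance, at $p=1$ these formulas give $\langle Y\rangle=\langle Y^2\rangle=1$, consistent with $Y\equiv S_1=1$ for $\xi=(1,0)$, while for $3/4<p<1$ the variance is strictly positive, so $Y$ is non-degenerate. A small check (illustrative):

```python
import math

def moments_Y(p):
    """First two moments of the a.s. limit Y for the start xi = (1,0), 3/4 < p <= 1."""
    m1 = 1.0 / math.gamma(2 * p)
    m2 = 1.0 / ((4 * p - 3) * math.gamma(4 * p - 2))
    return m1, m2
```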
\section{Extensions}
\label{sec:extensions}
It is the purpose of this section to exemplify that the approach {\it via}
P\'olya-type urns is robust and allows extensions and modifications of the
ERW model in various directions. We leave it to the reader to perform the
exact calculations and rather hint at the urn model one should consider.
\subsection{Higher dimensions}
Let us first explain how to obtain limit results for an ERW in higher
dimensions. In dimension $d\geq 1$, one should simply consider an urn with
$2d$ different colors. More specifically, in $d=2$, one might want to study
the urn $X_n=(X_n^1,X_n^2,X_n^3,X_n^4)$, $n\in\mathbb{N}$, with mean replacement matrix
$$
A_2=\begin{pmatrix}p&(1-p)/3&(1-p)/3&(1-p)/3\\
(1-p)/3&p&(1-p)/3&(1-p)/3\\
(1-p)/3&(1-p)/3&p&(1-p)/3\\
(1-p)/3&(1-p)/3&(1-p)/3&p
\end{pmatrix}.
$$
The corresponding nearest neighbor ERW on $\mathbb{Z}^2$ is given by
$$
S_n=(X_n^1-X_n^2)e'_1 + (X_n^3-X_n^4)e'_2,
$$
with $e_1=(1,0)$ and $e_2=(0,1)$. Starting from $X_1=(1,0,0,0)$, say, this
means that the ERW first visits $(1,0)$. Then, at any later time $n\geq 2$,
the walker chooses a time $n'$ uniformly at random among the previous times
$1,\ldots,n-1$ and decides with probability $p$ to perform a step in the
same direction as at time $n'$, and with probability $(1-p)/3$ each to
perform a step in one of the three other coordinate directions.
The expression for $S_n$ in the display above can again be analyzed with the results of
Janson~\cite{Ja}. In particular, since the eigenvalues of $A_2$ are given
by $\lambda_1=1$ and $\lambda_2=\lambda_3=\lambda_4=(4p-1)/3$, according to
the remarks before Section~\ref{sec:results} a phase transition from
diffusive to superdiffusive behavior occurs at $p=5/8$.
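The spectrum of $A_2$ and the location of the transition (where $\lambda_2/\lambda_1=1/2$) can be checked numerically (illustrative sketch):

```python
import numpy as np

def A2(p):
    """Mean replacement matrix of the four-color urn for the ERW on Z^2."""
    off = (1 - p) / 3
    return np.full((4, 4), off) + (p - off) * np.eye(4)

p = 0.7
# Expected: eigenvalue 1 (simple) and (4p-1)/3 with multiplicity three.
spec = np.sort(np.linalg.eigvals(A2(p)).real)[::-1]
```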
\subsection{ERW with reinforced memory}
In a different direction, one might want to model an ERW which has a
reinforced memory, for example in the sense that the more often a
particular time from the past is remembered, the more likely it is to
remember this time again. From the point of view of neural networks, this
is certainly a reasonable and desirable assumption on the model. More
concretely, one might want to study a random walk with memory where the
remembered time $n'$ at the $n$th step is not chosen uniformly at random
among the previous times $\{1,\ldots,n-1\}$, but rather with probability
proportional to a weight that takes into account the number of times $n'$
has been remembered before. In this regard, it is interesting to point at
the connection observed in~\cite{Kue} between the ERW model and so-called
random (uniform) recursive trees, which can naturally be used to model the
memory of the walker. The memory tree of an ERW with a reinforced memory
would correspond to a so-called preferential attachment tree, see,
e.g.,~\cite{DoMeSa}. In terms of a two-color urn, one might want to
consider a ``reinforced'' mean replacement matrix, for example
$$
B=\begin{pmatrix}a+p&1-p\\1-p&a+p\end{pmatrix},
$$
where $a\in\mathbb{N}_0$ is an additional parameter measuring the strength of the
reinforcement. Here, when a ball is drawn, one puts it back into the urn with
$a$ additional balls of the same color. In addition, one tosses a coin with
probability $p$ for head and probability $1-p$ for tail. If head shows up,
one adds another ball of the same color, whereas in case of tail, one puts
a ball of the opposite color into the urn. Note that the case $a=0$
corresponds to the uniform ERW model discussed above.
Again, this urn model fits into the general framework of urns treated
in~\cite{Ja}. The eigenvalues of $B$ are given by $\lambda_1=a+1$ and
$\lambda_2=a+2p-1$. Hence, provided $a<3$, a phase transition for the urn
occurs at $p_a=(3-a)/4.$
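Solving $\lambda_2/\lambda_1=(a+2p-1)/(a+1)=1/2$ for $p$ indeed gives $p_a=(3-a)/4$; a quick numerical check (illustrative):

```python
import numpy as np

def B_matrix(a, p):
    """Reinforced mean replacement matrix B."""
    return np.array([[a + p, 1 - p], [1 - p, a + p]])

def p_critical(a):
    """Solve lambda_2 / lambda_1 = (a + 2p - 1) / (a + 1) = 1/2 for p (requires a < 3)."""
    return (3 - a) / 4

a, p = 1, 0.4
spec = np.sort(np.linalg.eigvals(B_matrix(a, p)).real)[::-1]
```

Note that $a=0$ recovers $p_0=3/4$, the transition point of the original ERW.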
As above, let us now assume that the starting configuration of the urn is
given by a (possibly random) vector $\xi$ taking values in
$\{(1,0),\,(0,1)\}$. Regarding the corresponding random walk model
$S=(S_n,n\in\mathbb{N}_0)$ (we use the same notation as for the original ERW),
there is a little subtlety here: Most naturally, from time $1$ on, $S$
should not be defined as the difference $(X_n^1-X_n^2,n\in\mathbb{N})$ of black and
red balls as before, but rather as the difference of black and red balls
which were put into the urn as a consequence of the coin tosses, plus the
initial difference $\xi^1-\xi^2$. In other words, one should not take into
account the $a$ additional balls of the same color which are put into the
urn at every draw for determining the position of the walker. In
particular, if $p=1/2$, except for the first step, $S$ behaves again like a
simple symmetric random walk (but note that $p_a<1/2$ if $a\geq 2$!). If
$p\neq 1/2$, the behavior of the walk $S$ can be traced back to the
composition of the urn $((X_n^1,X_n^2),n\in\mathbb{N})$. Namely, writing
$\Delta_n=S_{n+1}-S_n$ for the increment of the walker at time $n$, one
finds for its conditional mean given $X_n$,
$$
\langle\Delta_n\rangle =(2p-1)\left(\frac{2X_n^1}{(a+1)n-a}-1\right).
$$
As to the urn, one can apply the results of~\cite{Ja} cited above to
obtain functional limit theorems, more precisely~\cite[Theorem 3.31(i)]{Ja}
in case $p<p_a$,~\cite[Theorem 3.31(ii)]{Ja} in case $p=p_a$,
and~\cite[Theorem 3.24]{Ja} in case $p>p_a$. Usual diffusion approximation
now yields corresponding results for the walker $S$ when $p\neq 1/2$, namely diffusive
behavior if $p<p_a$, marginally superdiffusive behavior if $p=p_a$ (with
the same rescaling as in Theorem~\ref{thm:el-critical}), and superdiffusive
behavior if $p>p_a$ (with the same rescaling as in
Theorem~\ref{thm:el-superdiffusive}).
\subsection{Modified ERW of Harbola, Kumar
and Lindenberg~\cite{HaKuLi} }
Harbola, Kumar and Lindenberg~\cite{HaKuLi} proposed a modified ERW
representing a minimal one-parameter model of a random walk with memory,
which gives rise to all three possible types of behavior (superdiffusive,
diffusive and subdiffusive). Again, $p\in [0,1]$ is a memory parameter
which is inherent to the model.
In contrast to the original ERW, the random walker moves only to the right,
but it may also stay still. More precisely, the modified ERW
$(S_n,n\in\mathbb{N}_0)$ starts at $S_0=0$, and then, at time $n\geq 1$, the
position of the walker is given by
$$
S_n = S_{n-1}+\sigma_n,
$$
with $\sigma_n$, $n\in\mathbb{N}$, being $\{0,1\}$-valued random variables with the
following law. Firstly, for concreteness, we assume that the first step goes
deterministically to the right, $\mathbb{P}(\sigma_1=1)=1$ (this is a slight
simplification compared to the model considered in~\cite{HaKuLi}). At any
later time $n\geq 2$, we choose a number $n'$ uniformly at random among the
previous times $\{1,\ldots,n-1\}$. If $\sigma_{n'}=1$, i.e., the walker moved
to the right at time $n'$, we set
$$
\sigma_n= \left\{\begin{array}{l@{\quad\mbox{with probability }}l}
1 & p\\
0&1-p
\end{array}\right..
$$
If $\sigma_{n'}=0$, i.e., the walker stood still at time $n'$, we set
$\sigma_n=0$, so that the walker does again not move at time $n$.
In the notation of Janson~\cite{Ja}, the mean replacement matrix of the
corresponding two-color urn (black balls for moving to the right, red balls
for standing still) is
$$
C=\begin{pmatrix}p&0\\1-p&1\end{pmatrix},
$$
where the first (second) column of $C$ is the expected change when a black
(red) ball is drawn. We stress that often in the literature (e.g., in~\cite{Ma}),
rather the transpose $C'$ is considered as the mean replacement matrix.
In words, the dynamics of the urn process is described as follows: Starting
from some non-trivial initial condition at time $n=1$, we draw at time
$n=2,3,\ldots$, a ball uniformly at random, observe its color and put it
back into the urn. If we drew a black ball, we add with probability $p$
another black ball and with the complementary probability $1-p$ a red ball to the urn,
whereas if the observed color was red, we add deterministically another red
ball to the urn.
Note that if we start the urn model with one single black ball, the position $S_n$ of
the modified ERW at time $n$ is given by the number of black balls at time
$n$.
The eigenvalues of the above matrix $C$ are $\lambda_1=1$ and
$\lambda_2=p$. Here, the results of~\cite{Ja} are not applicable, since
$\lambda_1$ does not belong to the dominating class: Indeed, when starting
the urn process from a single red ball, the dynamics adds only red balls to
the urn, and never a black ball. Such random triangular urn schemes were
however treated by Aguech~\cite{Ag}, generalizing results of
Janson~\cite{Ja2} for triangular urns with deterministic replacement. In
particular,~\cite[Theorem 2(a)]{Ag} shows that the right rescaling for the
number of black balls at time $n$ is $n^p$ (there is no recentering), and
one has almost sure convergence as $n$ tends to infinity to a non-trivial
(non-Gaussian) limit. This is in accordance with the results of Harbola,
Kumar and Lindenberg~\cite{HaKuLi}, proving that in this random walk model,
subdiffusive (if $p<1/2$), diffusive (if $p=1/2$) and superdiffusive (if
$p>1/2$) behavior does occur.
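Since $C$ is triangular, its eigenvalues sit on the diagonal; a one-line numerical confirmation (illustrative):

```python
import numpy as np

def C_matrix(p):
    """Triangular mean replacement matrix of the modified-ERW urn."""
    return np.array([[p, 0.0], [1.0 - p, 1.0]])

p = 0.3
spec = sorted(np.linalg.eigvals(C_matrix(p)).real)  # expected: [p, 1]
```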
A slightly more complicated model of a random walker moving to the left,
right and staying put, which also exhibits all three types of behavior, was
presented by the same authors earlier in~\cite{KuHaLi}. There, one should
consider an urn with balls of three different colors: one corresponding to
a movement to the right, one corresponding to a movement to the left, and
one for staying at the same place.
\section{Conclusion}
In this note we have explicitly determined the long-time behavior of the
one-dimensional ERW model introduced in 2004 by Sch\"utz and
Trimper~\cite{ScTr}. We used a simple connection to P\'olya-type urns and
relied on previously established limit results for the latter. The
ERW belongs to the class of models describing anomalous diffusion and is
one of the few non-Markovian models which turns out to be explicitly
solvable. However, as we exemplified in this note, (variants of) the ERW
model, or, more generally, processes with reinforcement can sometimes be
reformulated in terms of urn models, which have been studied in the
mathematical literature for a long time and are still objects of active research. In
particular, results on urns often lead to a deeper understanding of the
corresponding random walk model.
| {
"timestamp": "2016-09-07T02:00:43",
"yymm": "1608",
"arxiv_id": "1608.01305",
"language": "en",
"url": "https://arxiv.org/abs/1608.01305",
"abstract": "In this paper, we explain the connection between the Elephant Random Walk (ERW) and an urn model à la Pólya and derive functional limit theorems for the former. The ERW model was introduced by Schütz and Trimper [2004] to study memory effects in a one-dimensional discrete-time random walk with a complete memory of its past. The influence of the memory is measured in terms of a parameter $p$ between zero and one. In the past years, a considerable effort has been undertaken to understand the large-scale behavior of the ERW, depending on the choice of $p$. Here, we use known results on urns to explicitly solve the ERW in all memory regimes. The method works as well for ERWs in higher dimensions and is widely applicable to related models.",
"subjects": "Statistical Mechanics (cond-mat.stat-mech); Probability (math.PR)",
"title": "Elephant Random Walks and their connection to Pólya-type urns",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462222582661,
"lm_q2_score": 0.7185944046238981,
"lm_q1q2_score": 0.7099326273841081
} |
https://arxiv.org/abs/1703.04520 | Lipschitz Normal Embeddings in the Space of Matrices | The germ of an algebraic variety is naturally equipped with two different metrics up to bilipschitz equivalence. The inner metric and the outer metric. One calls a germ of a variety Lipschitz normally embedded if the two metrics are bilipschitz equivalent. In this article we prove Lipschitz normal embeddedness of some algebraic subsets of the space of matrices. These include the space $m \times n$ matrices, symmetric matrices and skew-symmetric matrices of rank equal to a given number and their closures, and the upper triangular matrices with determinant $0$. We also make a short discussion about generalizing these results to determinantal varieties in real and complex spaces. | \section{Introduction}
If $(X,0)$ is the germ of an algebraic (analytic) variety over $\mathbbm{K} =
\mathbb{R}$ or $\mathbb{C}$, then one
can define two natural metrics on it. Both are defined by choosing an
embedding of $(X,0)$ into $(\mathbbm{K}^N,0)$. The first is the \emph{outer
metric}, where the distance between two points $x,y\in X$ is given by
$d_{out}(x,y) := \norm{x-y}_{\mathbbm{K}^N}$, i.e. the restriction of the
Euclidean metric to $(X,0)$. The other is the \emph{inner metric},
where the distance is defined as
\begin{align}
d_{in}(x,y) := \inf_{\gamma}
\big{\{} length_{\mathbbm{K}^N}(\gamma)\ \big{\vert}\ \morf{\gamma}{[0,1]}{X} \text{
rectifiable, } \gamma(0) = x,\
\gamma(1) = y \big{\}}.\label{innerdefn}
\end{align}
Both of these metrics are independent of the
choice of the embedding up to bilipschitz equivalence. The outer metric
determines the inner metric, and it is clear that $d_{out}(x,y) \leq
d_{in} (x,y)$. The other direction is in general not true, and one says
that $(X,0)$ is \emph{Lipschitz normally embedded} if the inner and
outer metrics are bilipschitz equivalent. \emph{Bilipschitz geometry}
is the study of the bilipschitz equivalence classes of these two
metrics.
The study of bilipschitz geometry of complex spaces
started with Pham and Teissier who studied the case of curves in
\cite{phamteissier}. It then lay dormant for a long time until Birbrair
and Fernandes began studying the case of complex surfaces
\cite{birbrairfernandes}. Among important recent results are the
complete classification of the inner metrics of surfaces by Birbrair,
Neumann and Pichon \cite{thickthin}, the proof that Zariski
equisingularity is equivalent to bilipschitz triviality in the case of
surfaces by Neumann and Pichon \cite{zariski} and the proof that
outer Lipschitz regularity implies smoothness by Birbrair,
Fernandes, L\^{e} and Sampaio
\cite{lipschitzregularity}.
Understanding the geometry of the model varieties in the space of
matrices is an important step in understanding determinantal
singularities in real and complex spaces. We will also give a brief
discussion of this.
Determinantal singularities are also an area
that has been studied for a long time and has recently seen renewed
interest. They can be seen as a generalization of isolated complete
intersection singularities (ICIS for short), and the recent
results have mainly been in the study of invariants coming from their
deformation theory. In \cite{ebelingguseinzade} \'Ebeling and
Guse{\u\i}n-Zade defined the index of a $1$-form, and the Milnor
number has been defined in various different ways by Ruas and da
Silva Pereira \cite{cedinhamiriam}, Damon and Pike \cite{damonpike}
and Nu\~no-Ballesteros, Or\'efice-Okamoto and Tomazella
\cite{NunoOreficeOkamotoTomazella}. Their
deformation theory has also been studied by Gaffney and Rangachev
\cite{gaffenyrangachev} and Fr\"uhbis-Kr\"uger and Zach
\cite{fruhbiskrugerzach}.
In January 2016 Asaf Shachar asked the following question on
Mathoverflow.org (\url{http://mathoverflow.net/questions/222162}): Is
the Lie group $\operatorname{GL}_n^+(\mathbb{R})$ Lipschitz normally
embedded, where $\operatorname{GL}_n^+(\mathbb{R})$ is the group of $n\times n$ matrices
with positive determinant? A positive answer was given by the first
author and Katz, Katz and Liokumovich
in \cite{kerneretc}. They first prove it for $X_{n-1}$, the
set of $n\times n$ matrices of rank $n-1$, and for its closure $\overline{X_{n-1}}$, the set of matrices with determinant
equal to zero. Then they replace the segments of the straight line
between two points of $\operatorname{GL}_n^+(\mathbb{R})$ that pass through $\operatorname{GL}_n^-(\mathbb{R})$ with a curve
arbitrarily close to $\overline{X_{n-1}}$. Their proof relies on topological
arguments, and some results on conical stratifications of MacPherson
and Procesi \cite{macphersonprocesi}. In this article we give an
alternative proof relying only on linear algebra and simple
trigonometry, which also works for $m\times n$ matrices of rank equal to $t\leq \min\{m,n\}$ and their closures. (A first version of this proof appeared
in \cite{lnedeterminantalsing}). We also prove the Lipschitz normal
embeddedness of the symmetric
and skew-symmetric matrices of rank equal to a given $t$ and their closures, the
upper triangular matrices which have determinant $0$, and the
intersections with linear subspaces transversal to the rank stratification.
This article is organized as follows. In section \ref{preliminaries}
we discuss the basic notions of Lipschitz normal embeddings and give
some results concerning when a space is Lipschitz normally
embedded. In section \ref{geometryofmatrices} we describe the basic
properties of the bilipschitz geometry of the spaces of matrices we
consider. In section \ref{modelcase} we prove that the set ${X_{t}}$ of
matrices, symmetric matrices and skew-symmetric matrices of rank equal
to a given $t$ and their corresponding closures $\overline{X_{t}}$ are Lipschitz normally embedded, and that the same is
true if $V$ is a linear subspace transverse to the rank
stratification. We prove that the space of upper triangular matrices
with determinant $0$ is Lipschitz normally embedded in section
\ref{uppertriangularcase}. Finally in section \ref{secgeneralcase} we
discuss some of the difficulties in extending these results to the setting
of general determinantal singularities.
\section{Preliminaries on bilipschitz geometry }\label{preliminaries}
In this section we discuss some properties of Lipschitz normal
embeddings.
\begin{Definition}
We say that $X$ is \emph{Lipschitz normally embedded} if there exists
$K\geq 1$ such that for all $x,y\in X$,
\begin{align}
d_{in}(x,y)\leq Kd_{out}(x,y).\label{lneeq}
\end{align}
We call a $K$ that satisfies the inequality \emph{a bilipschitz
constant of} $X$.
\end{Definition}
A trivial example of a Lipschitz normally embedded set is $\mathbb{C}^n$. For
an example of a space that is not Lipschitz normally embedded,
consider the plane curve given by $x^3-y^2=0$. Then
$d_{out}((t^2,t^3),(t^2,-t^3))=2\num{t}^{3}$, but
$d_{in}((t^2,t^3),(t^2,-t^3))= 2\num{t}^2+ o(t^2)$. This implies that
$\tfrac{d_{in}((t^2,t^3),(t^2,-t^3))}{d_{out}((t^2,t^3),(t^2,-t^3))}$
is unbounded as $t\to 0$, hence there cannot exist a $K$
satisfying \eqref{lneeq}.
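The inner-distance estimate can be checked directly (a sketch, assuming that the shortest path between the two points passes through the singular point at the origin): parametrizing the curve by $\gamma(s)=(s^2,s^3)$, so that $\norm{\gamma'(s)}=\sqrt{4s^2+9s^4}$,

```latex
d_{in}\big((t^2,t^3),(t^2,-t^3)\big)
  = 2\int_0^{\num{t}} \norm{\gamma'(s)}\,ds
  = 2\int_0^{\num{t}} 2s\sqrt{1+\tfrac{9}{4}s^2}\,ds
  = 2\num{t}^2 + o(t^2).
```

The leading term comes from $\int_0^{\num{t}} 2s\,ds = \num{t}^2$; the correction from the square root is of higher order.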
Pham and Teissier \cite{phamteissier} show that in general the outer
geometry of a complex plane curve is equivalent to its embedded
topological type, and the inner geometry is equivalent to the abstract
topological type. Hence a plane curve is Lipschitz normally embedded
if and only if it is a union of smooth curves intersecting
transversely. See also Fernandes \cite{fernandesplanecurve} and
Neumann and Pichon \cite{NeumannPichonBilpischitzGeometryOfCurves}.
In the cases of higher dimension the question of which singularities
are Lipschitz normally embedded becomes much more complicated. It is no
longer only rather trivial singularities that are Lipschitz normally
embedded; for example, in the case of surfaces the second author,
together with Neumann and Pichon, shows that rational surface
singularities are Lipschitz normally embedded if and only if they are
minimal \cite{normallyembedded}. As we will later see, singularities
in the space of matrices give examples of non-trivial Lipschitz normally
embedded singularities in arbitrary dimensions.
\begin{Remark}
A couple of remarks about notation. Throughout the article $\mathbbm{K}$ will
always denote $\mathbb{R}$ or $\mathbb{C}$. We will often be talking about different
inner distances of two points $x,y\in \mathbbm{K}^N$, when we consider $x,y$ as
lying in different subspaces, hence $d_{in}^V(x,y)$ is the inner
distance between $x$ and $y$ measured using the inner metric on the
subspace $V\subset\mathbbm{K}^N$. When we are using different outer metrics we
also denote the outer distance measured in $V$ by $d_{out}^V(x,y)$.
\end{Remark}
First we explore the relationship between being Lipschitz
normally embedded locally and globally.
\begin{Definition}
A space $X$ is \emph{locally Lipschitz normally embedded} at $x\in X$
if there is an open neighbourhood $U$ of $x$, such that
$U$ is Lipschitz normally embedded. We say that $X$ is locally
Lipschitz normally embedded if this condition holds for all $x\in X$.
\end{Definition}
It is clear that being Lipschitz normally embedded implies being
locally Lipschitz normally embedded. In the other direction we have:
\begin{Proposition}\label{localglobal}
Let $X$ be a connected, compact locally Lipschitz normally embedded
space. Then $X$ is Lipschitz normally embedded.
\end{Proposition}
\begin{proof}
For each $x\in X$ let $U_x$ be a Lipschitz normally embedded
neighbourhood of $x$, and let $K_x$ be a bilipschitz constant. This implies
that if $y\in X$ is very close to $x$, then $d_{in}(x,y)\leq K_x
d_{out}(x,y)$. Consider the map
\begin{align*}
\morf{f(x,y):= \frac{d_{in}(x,y)}{d_{out}(x,y)}}{ X\times X}{\mathbb{R}}.
\end{align*}
Let $U\subset X\times X$ be a small open tubular neighbourhood of the
diagonal $\Delta$. By compactness finitely many of the $U_x$ cover $X$,
so $f$ is bounded on $U$ by the maximum of the corresponding constants
$K_x$ (shrinking $U$ if necessary). Moreover, $f$ is continuous on the
compact set $(X\times X)\setminus U$, hence bounded there. Thus $f$ is
globally bounded, which gives the required constant.
\end{proof}
A simple consequence of this is the following.
\begin{Corollary}
Let $M$ be a connected compact manifold, then $M$ is Lipschitz
normally embedded.
\end{Corollary}
We will next give some results about when spaces constructed from
Lipschitz normally embedded spaces are themselves Lipschitz normally
embedded. First is the case of product spaces.
\begin{Proposition}\label{product}
Let $X\subset \mathbbm{K}^n$ and $Y\subset \mathbbm{K}^m$ and let $Z = X\times Y
\subset \mathbbm{K}^{n+m}$. $Z$ is Lipschitz normally embedded if and only if
$X$ and $Y$ are Lipschitz normally embedded.
\end{Proposition}
\begin{proof}
First we prove the ``if'' direction.
Let $(x_1,y_1),(x_2,y_2)\in X\times Y$. We need to show that
\begin{align*}
d_{in}^{X\times Y}((x_1,y_1)(x_2,y_2))\leq K d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2)).
\end{align*}
Let $K_X$ be the constant such that
$d_{in}^X(a,b) \leq K_X d_{out}^X(a,b)$ for all $a,b\in X$, and let $K_Y$
be the constant such that $d_{in}^Y(a,b) \leq K_Y d_{out}^Y(a,b)$ for all
$a,b\in Y$. We get, using the triangle inequality, that
\begin{align*}
d_{in}^{X\times Y}((x_1,y_1)(x_2,y_2))\leq d_{in}^{X\times
Y}((x_1,y_1)(x_1,y_2))+ d_{in}^{X\times Y}((x_1,y_2)(x_2,y_2)).
\end{align*}
Now the points $(x_1,y_1)$ and $(x_1,y_2)$ both lie in the slice
$\{x_1\}\times Y$ and hence $d_{in}^{X\times
Y}((x_1,y_1)(x_1,y_2)) \leq d_{in}^{Y}(y_1,y_2)$ and likewise we have
$d_{in}^{X\times
Y}((x_1,y_2)(x_2,y_2)) \leq d_{in}^{X}(x_1,x_2)$. This then implies that
\begin{align*}
d_{in}^{X\times Y}((x_1,y_1)(x_2,y_2))\leq K_Y d_{out}^{Y}(y_1,y_2)+
K_X d_{out}^{X}(x_1,x_2),
\end{align*}
where we use that $X$ and $Y$ are Lipschitz normally embedded. Now it
is clear that $d_{out}^{X\times Y}((x_1,y_1)(x_1,y_2)) =
d_{out}^{Y}(y_1,y_2)$ and $d_{out}^{X\times Y}((x_1,y_2)(x_2,y_2)) =
d_{out}^{X}(x_1,x_2)$. Also, since
\begin{align*}
d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2))^2=d_{out}^{Y}(y_1,y_2)^2+
d_{out}^{X}(x_1,x_2)^2
\end{align*}
by definition of the product metric, we have
that
\begin{align*}
&d_{out}^{X\times Y}((x_1,y_1)(x_1,y_2)) \leq d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2)) \text{ and } \\ &d_{out}^{X\times
Y}((x_1,y_2)(x_2,y_2)) \leq d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2)).
\end{align*}
It then follows that
\begin{align*}
d_{in}^{X\times Y}((x_1,y_1)(x_2,y_2))\leq (K_Y + K_X) d_{out}^{X\times
Y}((x_1,y_1)(x_2,y_2)).
\end{align*}
For the other direction, let $p,q \in X$ and consider any path
$\morf{\gamma}{ [0,1] }{Z}$ such that $\gamma(0) = (p,0)$ and
$\gamma(1) = (q,0)$. Now $\gamma(t)= \big(\gamma_X(t),\gamma_Y(t)\big)$ where
$\morf{\gamma_X}{ [0,1] }{X}$ and $\morf{\gamma_Y}{ [0,1] }{Y}$ are
paths and $\gamma_X(0) = p$ and $\gamma_X(1) =q$. Now $l(\gamma)\geq
l(\gamma_X)$, hence
\begin{align*}
d_{in}^X(p,q) \leq d_{in}^Z((p,0),(q,0)).
\end{align*}
Since $Z$ is Lipschitz normally embedded, there exists a $K\geq 1$ such that
$d_{in}^Z(z_1,z_2)\leq K d_{out}^Z(z_1,z_2)$ for all $z_1,z_2\in Z$. We
also have that $d_{out}^Z((p,0),(q,0))= d_{out}^X (p,q)$, since $X$ is
embedded in $Z$ as $X\times \{ 0\}$. Hence
\begin{align*}
d_{in}^X(p,q) \leq K
d_{out}^X (p,q).
\end{align*}
The argument for $Y$ being Lipschitz normally
embedded is the same exchanging $X$ with $Y$.
\end{proof}
\begin{Proposition}\label{bilipschizttrivial}
Let $X =\cup X_r \subset \mathbbm{K}^n$ be a locally Lipschitz stratification
(see Parusi\'nski \cite{parusinski} Definition 1.1), and
assume that $X$ is Lipschitz normally embedded. Let $V$ be a $C^1$
manifold and let $x\in V\cap X$, $x\in X_r$. Assume that there exists an open
neighbourhood $U$ of $x$ such that for all $y\in U\cap X$, $y\in
X_{r(y)}$, we have that $V$ is transverse to $X_{r(y)}$ at $y$. Then
$V\cap X$ is locally Lipschitz normally embedded at $x$.
\end{Proposition}
\begin{proof}
Since $V$ is transverse to $X_{r(y)}$ at all $y\in U\cap X$, we can
(maybe by shrinking $U$) choose a map $\morf{\rho}{U}{X_r\cap U}$
which is a proper submersion restricted to each stratum, such that
$\rho^{-1}(x)=V\cap U$. By
the Lipschitz isotopy lemma (Theorem 1.9 in \cite{parusinski}) there
exists a bilipschitz trivialization $\morf{\phi}{U}{U_S\times U_T}
$ of $X$, where $U_S\subset \mathbbm{K}^{\dim(X_r)}$ and $U_T\subset
\mathbbm{K}^{\operatorname{codim}(X_r)}$, such that the following diagram commutes:
\begin{align*}
\xymatrix{
X\cap U \ar[rr]^{\phi} \ar[rd]^{\rho} && \rho^{-1}(x)\times (X_r\cap
U) \ar[dl]_{\pi} \\
& X_r\cap U,
}
\end{align*}
where $\pi$ is just the projection to the second factor. Now $\phi$ is
a bilipschitz map so $\rho^{-1}(x)\times (X_r\cap U)$ is Lipschitz
normally embedded since $X\cap U$ is. Then we have by Proposition
\ref{product} that $\rho^{-1}(x)=V\cap U$ is Lipschitz normally
embedded.
If $\dim(V)>\operatorname{codim}(X_r)$, set $V_S=V\cap (X_r\cap U)$; then
$V\cap U$ is bilipschitz equivalent to $V_T\times V_S$. Now
$\dim(V_T)=\operatorname{codim}(X_r)$, so we can choose $\rho$ as above such that
$\rho^{-1}(x)=V_T$. Hence $V_T\cap X$ is Lipschitz normally embedded, and
since $V_S$ is $C^1$ equivalent to $\mathbbm{K}^{\dim(V)-\operatorname{codim}(X_r)}$ it is
also Lipschitz normally embedded. Thus $V\cap (X\cap U)$ is
Lipschitz normally embedded by Proposition \ref{product}, since it is
bilipschitz equivalent to $(V_T\cap X)\times V_S$.
\end{proof}
Another case we will need later is the case of cones.
\begin{Proposition}\label{cone}
Let $X\subset \mathbbm{K}^n$ be the cone over $M\subset S$ with cone point
the origin of $\mathbbm{K}^n$, where $S=S^{n-1}$ if $\mathbbm{K}=\mathbb{R}$ and $S=S^{2n-1}$ if
$\mathbbm{K}=\mathbb{C}$. Then the following conditions hold:
\begin{enumerate}[(a)]
\item If $M$ is Lipschitz normally embedded then $X$ is Lipschitz
normally embedded.\label{b}
\item If $X$ is Lipschitz normally embedded and $M$ is compact, then
each of the connected components of $M$ is Lipschitz normally
embedded.\label{a}
\end{enumerate}
\end{Proposition}
\begin{proof}
We first prove \eqref{b}. Since $M$ is Lipschitz
normally embedded with bilipschitz constant $K_M$ the same is true for
$r\cdot M=rM$, where $r\in\mathbb{R}^+$.
Let $x,y\in X$. We can assume that $0\leq \norm{x} \leq \norm{y}$. If
$x=0$ then $d_{in}^{X}(x,y)=d_{out}(x,y)$ since the straight
line through $0$ and $y$ is in $X$ because $X$ is conical.
If $\norm{x}=\norm{y}=r$, then $x$ and $y$ are both in $rM$, and hence
\begin{align*}
d_{in}^X(x,y) \leq d_{in}^{rM}(x,y) \leq K_Md_{out}(x,y).
\end{align*}
Now if $0<\norm{x}<\norm{y}$ let
$y'=\tfrac{y}{\norm{y}}\norm{x}$. Then $d_{in}^X(y,y')=d_{out}(y,y')$
since they both lie on the same straight line through the origin. If
$r=\norm{x}$, then $x,y'\in rM$. Hence like before $d_{in}^X(x,y')\leq
K_Md_{out}(x,y')$. Now $y'$ is the point
closest to $y$ in $rM$. Hence all of $rM$ lies on
the other side of the affine hyperplane through $y'$ orthogonal to the
line $\overline{yy'}$ from $y$ to $y'$. Hence the angle between
$\overline{yy'}$ and the line $\overline{y'x}$ between $y'$ and $x$ is
more than $\tfrac{\pi}{2}$. Therefore, the Euclidean distance from $y$ to
$x$ is larger than each of $l(\overline{yy'})$ and
$l(\overline{y'x})$. This gives us:
\begin{align*}
d_{in}^X(x,y) &\leq d_{in}^X(x,y')+ d_{in}^X(y',y) \leq
K_Md_{out}(x,y')+d_{out}(y',y)\\ &\leq (K_M+1)d_{out}(x,y).
\end{align*}
To prove \eqref{a}, assume that $X$ is Lipschitz normally
embedded, but a connected component $M'\subset M$ is not Lipschitz
normally embedded.
Since $M'$ is compact we can assume that $M'$ is not locally
Lipschitz normally embedded at some point by Proposition
\ref{localglobal}. So let $p\in M'$ be a point such that $M'$ is not
Lipschitz normally embedded in a small open neighbourhood $U\subset M'$
of $p$. By
Proposition \ref{product} we have that $U\times (-\epsilon,\epsilon)$ is not
Lipschitz normally embedded, where $0<\epsilon$ is much smaller than
the distance from $M$ to the origin. Now the quotient map
$\morf{c}{M\times [0,\infty)}{X}$ induces an outer (and therefore also
inner) bilipschitz equivalence of $U\times (-\epsilon,\epsilon)$ with
$c\big(U\times (-\epsilon,\epsilon)\big)$. Since both $U$ and
$\epsilon$ can be chosen to be arbitrarily small, we have that there
does not exist any small open neighbourhood of $p\in X$ that is
Lipschitz normally embedded, contradicting that $X$ is Lipschitz
normally embedded. Hence $X$ being Lipschitz normally embedded implies
that $M'$ is Lipschitz normally embedded.
\end{proof}
\begin{Remark}
\eqref{b} holds under the weaker hypothesis that $M$ has a finite
number of connected components each one being Lipschitz normally
embedded, and such that for each pair of connected components $X$ and
$Y$ we have $d_{out}(X,Y):=\inf_{x\in X,y\in Y}\{d_{out}(x,y)\}>0$.
If the number of connected components of $M$ is not finite, then the
result may fail as seen below.
(In particular, it is not enough to ask that $M$ is locally compact, locally path-connected and locally Lipschitz normally embedded.)
\begin{itemize}
\item Let $M=\mathop\cup\limits^\infty_{n=1}\{e^{\frac{\pi i}{n}}\}\subset S^1$. Thus $M$ is non-connected, non-compact, but (trivially) locally path-connected, locally compact and locally Lipschitz normally embedded. But $Cone(M)\subset\mathbb{R}^2$ is not locally Lipschitz normally embedded at the origin.
\end{itemize}
\end{Remark}
A consequence of Proposition \ref{cone} is the following.
\begin{Corollary}
Let $(X,0)$ be the germ of a real or complex homogeneous variety with an isolated
singularity, then $(X,0)$ is Lipschitz normally embedded.
\end{Corollary}
We conclude this section with a useful lemma.
Let $\morf{\phi}{\mathbb{R}^N}{\mathbb{R}^N}$ be a diffeomorphism. For each $x \in \mathbb{R}^N$ consider the (non-degenerate) Jacobian matrix $\frac{d\phi}{dx}$. Let $\{\lambda_i(x)\}$ be its singular values and set $\lambda_{max}(x)=\max_i\lambda_i(x)$, $\lambda_{min}(x)=\min_i\lambda_i(x)$. Define
$$\lambda_{max}:= \sup_{x\in \mathbb{R}^N} \lambda_{max}(x) \leq \infty,\qquad \lambda_{min}:= \inf_{x\in \mathbb{R}^N} \lambda_{min}(x) \geq 0.$$
\begin{Lemma}\label{changediffeo}
For a diffeomorphism $\phi$ as above suppose $0 < \lambda_{min}$ and $\lambda_{max} < \infty.$ Let $X \subset \mathbb{R}^N$ be any path-connected subset. Then for any $x, y \in X$ the following holds:
$\lambda_{min}\, \cdot \,d_{in}^X(x,y)\leq d_{in}^{\phi(X)}(\phi(x), \phi(y))\leq \lambda_{max} \, \cdot \, d_{in}^X(x,y).$
\end{Lemma}
\begin{proof}
For fixed $x,y \in X$ and any $\epsilon>0$ choose a rectifiable path $\gamma \subset X$ connecting $x,y$ and satisfying $length(\gamma) < d_{in}^{X}(x,y) + \epsilon.$ Then
$\phi(\gamma)\subset \phi(X)$ connects $\phi(x)$ and $\phi(y).$ It remains to compare
$length (\gamma)=\int\limits^1_0 ||\dot \gamma(t)||\,dt$ to
$$length (\phi(\gamma))=\int\limits^1_0 ||\tfrac{d}{dt}\phi(\gamma(t))||\,dt=
\int\limits^1_0 ||\tfrac{d\phi}{dx}\cdot\dot \gamma(t)||\,dt.$$
Note that $\lambda_{min}\, \cdot \, ||\dot \gamma(t)||
\leq ||\frac{d\phi}{dx}\cdot\dot \gamma(t)|| \leq \lambda_{max}\, \cdot \, ||\dot \gamma(t)||.$ Thus the bounds follow, since $\epsilon$ was arbitrary.
\end{proof}
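A minimal instance of the lemma (a sketch; $C$ denotes an arbitrary fixed invertible matrix, not an object from the text): for the linear diffeomorphism $\phi(x)=Cx$ the Jacobian is $C$ at every point, so the constants are the extreme singular values $\sigma_{min}(C)$, $\sigma_{max}(C)$ of $C$, giving

```latex
\sigma_{min}(C)\cdot d_{in}^{X}(x,y)
  \;\leq\; d_{in}^{\,C\cdot X}(Cx,Cy)
  \;\leq\; \sigma_{max}(C)\cdot d_{in}^{X}(x,y)
  \qquad \text{for all } x,y\in X.
```

This is the form in which the lemma is applied to the $\operatorname{GL}_m\times \operatorname{GL}_n$-action in Corollary \ref{chageofcoordinates} below.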
\section{Geometry in the space of matrices}\label{geometryofmatrices}
Let $\mathbbm{K}=\mathbb{R}$ or $\mathbb{C}$ and take the vector space of $m\times n$ matrices
over $\mathbbm{K}$, $Mat_{m\times n}(\mathbbm{K})$, $1\leq m\le n$. We use
the standard inner product on $Mat_{m\times n}(\mathbbm{K})$, $\langle A,B
\rangle :=trace(A\overline{B^t})$, and the corresponding metric on
$Mat_{m\times n}(\mathbbm{K})\approx \mathbbm{K}^{mn}$.
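For example (a direct computation with this inner product), for the rank-one matrices $A=\bigl[\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\bigr]$ and $B=\bigl[\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\bigr]$ in $Mat_{2\times 2}(\mathbb{R})$:

```latex
d_{out}(A,B)=\sqrt{\operatorname{trace}\Big((A-B)\cdot(A-B)^t\Big)}
            =\sqrt{\operatorname{trace}\begin{bmatrix}1&0\\0&1\end{bmatrix}}
            =\sqrt{2}.
```

This is just the Euclidean distance under the identification $Mat_{2\times 2}(\mathbb{R})\approx\mathbb{R}^4$.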
For any subset $X\subseteq Mat_{m\times n}(\mathbbm{K})$ consider the stratification by rank, $X_r:=X\cap\{A\in Mat_{m\times n}(\mathbbm{K})|\ rank(A)=r\}$. The strata $X_r$ are connected when $\mathbbm{K}=\mathbb{C}$; however, when $\mathbbm{K}= \mathbb{R}$ they may have several connected components.
Besides the outer metric,
\begin{align*}
d_{out}(A,B)=\sqrt{\operatorname{trace}\Big((A-B)\cdot\overline{(A-B)^t}\Big)},
\end{align*}
the sets $X_r$ have the inner metric, $d^{X_r}_{in}(A,B)$,
as defined in Equation \eqref{innerdefn} in the introduction. Similarly for the closures,
$\{\overline{X_r}\}$,
one has $d^{\overline{X_r}}_{in}(A,B)$.
Note that for some linear subspaces of $ Mat_{m\times n}(\mathbbm{K})$ the rank
stratification is not Lipschitz normally embedded as we will see in
Example \ref{degenerationofcusps}.
\subsection{The relevant group actions}\label{Sec.Group.Actions}
We use the action of two groups on $Mat_{m\times n}(\mathbbm{K})$ and on the
strata $\{X_r\}$.
\begin{itemize}
\item Consider the group $U(m)=\{V \mid V\cdot\overline{V^t} =
{1\hspace{-0.1cm}\rm I}_{m\times m}\} \subset Mat_{m\times m}(\mathbb{C})$ and similarly
$U(n)\subset Mat_{n\times n}(\mathbb{C})$. Their product acts, $U(m)\times
U(n)\circlearrowright Mat_{m\times n}(\mathbb{C})$, by $A\to V_lAV_r$. This
group action is isometric, because $\langle V_lAV_r,V_lBV_r \rangle =trace(V_lAV_r\cdot
\overline{(V_lBV_r)^t})=\langle A,B\rangle$. For $\mathbbm{K}=\mathbb{R}$ one takes the group $O(n)$.
The group $U(n)$ is connected, thus if
$A\stackrel{U(n)}{\sim}B$ then there exists a path from $A$ to $B$,
given by the $U(n)$-action. The group $O(n)$ has two connected
components, in some cases we use the component $SO(n)$.
Given a matrix $A\in X_r$, we can use the $U(n)$ (resp. $SO(n)$)
action to bring the left and right kernels, $ker_l(A)$,
$ker_r(A)$, to the form $(\underbrace{0,\dots,0}_{r},*,\dots,*)$. With
this assumption $A$ becomes block-diagonal, hence we have that
\begin{align*}
A\sim \begin{bmatrix} A_{inv} & \mathbb{O} \\ \mathbb{O} &
\mathbb{O}_{(m-r)\times(n-r)}\end{bmatrix},
\end{align*}
here $A_{inv}\in Mat_{r\times r}(\mathbbm{K})$ is invertible.
\item
Consider the group $\operatorname{GL}_m\subset Mat_{m\times m}(\mathbbm{K})$ and
similarly $\operatorname{GL}_n\subset Mat_{n\times n}(\mathbbm{K})$. The product acts,
$\operatorname{GL}_m\times \operatorname{GL}_n\circlearrowright Mat_{m\times n}(\mathbbm{K})$, by
$A\to V_lAV_r$. This group action is not isometric. However,
for any fixed pair $(V_l,V_r)$ the map $A\to V_lAV_r$ is
bilipschitz as we see in Corollary \ref{chageofcoordinates} below.
Moreover, the action preserves all the strata $\{X_r\}$ and
acts on them transitively, e.g. any matrix $A\in X_r$ is
equivalent to the canonical form,
\begin{align*}
A\sim \begin{bmatrix}
{1\hspace{-0.1cm}\rm I}_{r\times r} &\mathbb{O}\\ \mathbb{O} &
\mathbb{O}_{(m-r)\times(n-r)} \end{bmatrix}.
\end{align*}
Therefore the
tangent space to any of $X_r$, at any point $A$, can be
computed as the tangent space to the orbit of $A$ under this
group action.
\end{itemize}
The next result is an easy corollary of Lemma \ref{changediffeo}.
\begin{Corollary}\label{chageofcoordinates}
Let $V\subset Mat_{m\times n}(\mathbbm{K})$ and $(C_l,C_r) \in \operatorname{GL}_m\times\operatorname{GL}_n.$ Then the map
$A\to C_lAC_r$
is a bilipschitz map from $V$ to $W=C_lVC_r$. In
particular, if $A,B\in V$ satisfy $d_{in}(A,B)\leq Kd_{out}(A,B)$,
then $d_{in}(C_lAC_r, C_lBC_r) \leq K' d_{out}(C_lAC_r,C_lBC_r)$ for some constant $K'$ depending only on $K$, $C_l$ and $C_r$.
\end{Corollary}
\subsection{Connected components of the
strata}\label{Sec.Connected.Components}
We first remark that in both cases $\mathbbm{K}=\mathbb{R}$ or $\mathbb{C}$, the sets
$\overline{X}_r$ are connected for all $r$ if $X$ is a linear subspace.
Let $\mathbbm{K}=\mathbb{C}$ and $X$ be one of $Mat_{m\times n}(\mathbb{C})$, $Mat^{sym}_{m\times m}(\mathbb{C})$,
$Mat^{skew-sym}_{m\times m}(\mathbb{C})$ or triangular matrices. Then all the
strata $X_r$ are connected. Indeed, they are all irreducible algebraic
varieties and thus $dim_{\mathbb{C}}(\overline{X_r})
-dim_\mathbb{C}(\overline{X}_{r-1})\ge1$, i.e. the complements are of real
codimension $\ge 2$, so removing them does not disconnect the (irreducible, hence connected) closures.
For $\mathbbm{K}=\mathbb{R}$ the strata can have several connected components.
\begin{itemize}
\item Let $X=Mat_{m\times n}(\mathbb{R})$, for $r=m=n$ we have the classical decomposition $X_n=\operatorname{GL}^+_n(\mathbb{R})\amalg \operatorname{GL}^-_n(\mathbb{R})$.
We prove that for $r<m$ the strata $X_{r}$ are connected. Indeed, given any $A\in X_r$ bring it to the block-diagonal form,
$A\stackrel{SO(m)\times SO(n)}{\sim}A_{inv}\oplus\mathbb{O}$, as above. Here $A_{inv}$ is invertible and is defined up to $SO(r)\times SO(r)$
transformation. Thus for any $A,B\in X_r$ it is enough to connect $A_{inv}\oplus\mathbb{O}$ to $B_{inv}\oplus\mathbb{O}$.
If $det(A_{inv}B_{inv})>0$
then the two matrices are connected just inside $\operatorname{GL}_r(\mathbb{R})$. To address the case $det(A_{inv}B_{inv})<0$, it is enough to connect
$A_{inv}\oplus\mathbb{O}$ to some $\widetilde{A}_{inv}\oplus\mathbb{O}$, with $det(A_{inv}\widetilde{A}_{inv})<0$.
We choose
\begin{align*}
\widetilde{A}_{inv}=\begin{bmatrix}
{1\hspace{-0.1cm}\rm I}_{(r-1)\times(r-1)}&\mathbb{O}\\\mathbb{O}&-1_{1\times
1}\end{bmatrix}\cdot A_{inv}
\end{align*}
and construct the needed path as follows.
Choose any path $(x(t),y(t))$ from $(1,0)$ to $(-1,0)$ inside $\mathbb{R}^2\setminus\{(0,0)\}$, e.g. a half-circle. Let $V(t)\in \operatorname{GL}^+_2(\mathbb{R})$ be a
matrix family inducing this path, i.e. $\Bigl[\begin{smallmatrix}
x(t)\\y(t)\end{smallmatrix}\Bigr]=V(t)\Bigl[\begin{smallmatrix} 1\\0\end{smallmatrix}\Bigr]$, $V(0)={1\hspace{-0.1cm}\rm I}$ and $V(1)=\Bigl[\begin{smallmatrix} -1&0\\0&-1\end{smallmatrix}\Bigr]$.
Accordingly consider the path
\begin{align*}
A(t)=\begin{bmatrix} {1\hspace{-0.1cm}\rm I}_{(r-1)\times(r-1)}&\mathbb{O}&\mathbb{O}\\\mathbb{O}& V(t)&\mathbb{O}\\\mathbb{O}&\mathbb{O}&\mathbb{O}\end{bmatrix}\cdot A
\end{align*}
By the construction $A(t)$ lies inside $X_r$ and connects
$A_{inv}\oplus\mathbb{O}$ to $\widetilde{A}_{inv}\oplus\mathbb{O}$. For $m<n$ all the
strata are connected by a similar argument.
\item
For $X=Mat^{sym}_{n\times n}(\mathbb{R})$ and any $A\in X_r$ we have $A\stackrel{SO(n)}{\sim}A_{inv}\oplus\mathbb{O}$, as before. Then use $SO(r)$
to diagonalize $A_{inv}$. The signs of the eigenvalues are preserved in continuous deformations inside $X_r$.
Therefore the decomposition into
the connected components is $X_r=\mathop\amalg\limits_{r_++r_-=r}\mathcal{U}_{r_+,r_-}$, where $\mathcal{U}_{r_+,r_-}\subset Mat^{sym}_{r\times r}(\mathbb{R})$
is the subset of matrices of signature $(r_+,0,r_-)$.
\item
For $X=Mat^{skew-sym}_{n\times n}(\mathbb{R})$ recall that the rank of a skew-symmetric matrix is always even, thus $X_{2r+1}=\varnothing$ and we work only with $X_{2r}$.
We prove that for $2r<n$ the stratum $X_{2r}$ is connected, while for even $n$ the stratum $X_n$ has two connected components.
Suppose $n$ is even; then the canonical form under the $SO(n)$ action
is $\mathop\oplus\limits_i\begin{pmatrix} 0&\lambda_i\\-\lambda_i&0\end{pmatrix}$, and one can bring any matrix to this form in a continuous way, because $SO(n)$ is connected.
Furthermore, if all $\lambda_i$ are non-zero,
then we can assume $\lambda_{i}>0$ for all but possibly the last index $i$. Indeed, the negative
$\{\lambda_i\}$ can be turned into positive in pairs by the $SO(n)$
transformation
\begin{align*}
&\begin{bmatrix}1&0\\0&-1\\&&1&0\\&&0&-1 \end{bmatrix} \begin{bmatrix}
0&\lambda_i&0&0\\-\lambda_i&0&0&0\\0&0&0&\lambda_j\\0&0&-\lambda_j&0\end{bmatrix}
\begin{bmatrix} 1&0\\0&-1\\&&1&0\\&&0&-1 \end{bmatrix} \\
&=\begin{bmatrix} 0&-\lambda_i&0&0\\\lambda_i&0&0&0\\0&0&0&-\lambda_j\\0&0&\lambda_j&0
\end{bmatrix}.
\end{align*}
(Note again that $SO(n)$ is connected.) Thus any canonical form is connected (inside $X_n$) to either $\mathop\oplus\limits_i \Bigl[\begin{smallmatrix} 0& 1\\ -1&0\end{smallmatrix}\Bigr]$ or to
$(\mathop\oplus\limits_i \Bigl[\begin{smallmatrix} 0&1\\-1&0\end{smallmatrix}\Bigr])\oplus \Bigl[\begin{smallmatrix} 0&-1\\1&0\end{smallmatrix}\Bigr]$. Finally we remark that the Pfaffian polynomial of a skew-symmetric matrix, $Pf(A),$ is continuous under deformations of $A$ and $Pf|_{\mathcal U_{even}}>0,$ while $Pf|_{\mathcal U_{odd}}<0.$ Thus there are two connected components.
Therefore $X_{n}=\mathcal{U}_{even} \amalg \mathcal{U}_{odd}.$
For $X_{2r}$, with $2r<n$, we first use the equivalence $A\to V^tAV$, $V\in SO(n)$, to bring $A$ to the form $A_{inv}\oplus\mathbb{O}$,
with $A_{inv}\in Mat_{2r\times 2r}^{skew-sym}(\mathbb{R})$,
as in paragraph \ref{Sec.Group.Actions}. Then, as in the case of $X_n$, we bring $A_{inv}$ to either $\mathop\oplus\limits_i \Bigl[\begin{smallmatrix} 0& 1\\ -1&0\end{smallmatrix}\Bigr]$ or
$(\mathop\oplus\limits_i \Bigl[\begin{smallmatrix}
0&1\\-1&0\end{smallmatrix}\Bigr])\oplus \Bigl[\begin{smallmatrix}
0&-1\\1&0\end{smallmatrix}\Bigr]$. As $2r<n$, it remains to connect
\begin{align*}
\begin{bmatrix} 0&1&0\\-1&0&0\\0&0&0\end{bmatrix} \text{ to }
\begin{bmatrix} 0&-1&0\\1&0&0\\0&0&0\end{bmatrix}.
\end{align*}
This is done as in the case of $Mat_{m\times n}(\mathbb{R})$. We fix a matrix family, $V(s)\in GL_2^+(\mathbb{R})$,
that connects $(1,0)$ to $(-1,0)$ and consider the path
\begin{align}
\begin{bmatrix}1&\mathbb{O}_{1\times 2}\\\mathbb{O}_{2\times
1}&V(s)\end{bmatrix}\begin{bmatrix}
0&1&0\\-1&0&0\\0&0&0\end{bmatrix}\begin{bmatrix}1&\mathbb{O}\\\mathbb{O}&V(s)\end{bmatrix}^t=
\begin{bmatrix} 0 & v_{11} & v_{21}\\
-v_{11} & 0 & 0\\
-v_{21} & 0 & 0
\end{bmatrix}.
\end{align}
\end{itemize}
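A concrete choice for the family $V(t)$ used in both constructions above (one possible sketch; any continuous family in $\operatorname{GL}^+_2(\mathbb{R})$ whose first column traces a path from $(1,0)$ to $(-1,0)$ avoiding the origin works) is the rotation family

```latex
V(t)=\begin{bmatrix} \cos(\pi t) & -\sin(\pi t)\\ \sin(\pi t) & \cos(\pi t)\end{bmatrix}
\in SO(2)\subset \operatorname{GL}^+_2(\mathbb{R}),
\qquad
V(t)\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}\cos(\pi t)\\ \sin(\pi t)\end{bmatrix},
```

so the induced path $(x(t),y(t))$ is exactly the upper half-circle from $(1,0)$ to $(-1,0)$.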
\subsection{The local structure of $\overline{X_r}$ and ``controlled path-connectedness''}
In this section $\mathbbm{K}\in\{\mathbb{R},\mathbb{C}\}$ and we always consider small
neighbourhoods of spaces near some points. We freely use the germ
notation, e.g. $(\mathbbm{K}^n,\mathbb{O})$ denotes
a small neighbourhood of $\mathbbm{K}^n$ near the origin (i.e. near the zero matrix), $(\overline{X_r},A)$ denotes a small neighbourhood of
the matrix $A$ in $\overline{X_r}$, while $T_AX_r$
denotes the tangent space of $X_r$ at the point $A\in X_r$.
Sometimes to keep track of the size we denote the strata by $X^{(m\times n)}_r$.
\begin{Lemma}\label{loacalalmostlne}
\begin{enumerate}[1.]
\item Let $X=Mat_{m\times n}(\mathbbm{K})$,
fix some $A\in\overline{X^{m\times n}_r}$, with $\operatorname{rank}(A)=r_0\le
r$. Then
\begin{align*}
(\overline{X^{m\times n}_r},A)\approx (\mathbbm{K}^{mr_0+nr_0-r^2_0},\mathbb{O})\times
(\overline{X^{(m-r_0)\times (n-r_0)}_{r-r_0}},\mathbb{O}),
\end{align*}
where the homeomorphism is almost metric preserving, i.e. the metric
distortion can be assumed small if the germ representatives are
small.\label{321}
\item Similarly, for $X=Mat^{sym}_{m\times m}(\mathbbm{K})$ one has:
\begin{align*}
(\overline{X^{m\times m}_{r}},A)\approx
(\mathbbm{K}^{mr_0-\bin{r_0}{2}},0)\times(\overline{X^{(m-r_0)\times
(m-r_0)}_{r-r_0}},0),
\end{align*}
while for
$X=Mat^{skew-sym}_{m\times m}(\mathbbm{K})$ one has:
\begin{align*}
(\overline{X^{m\times m}_{r}},A)\approx
(\mathbbm{K}^{mr_0-\bin{r_0+1}{2}},0)\times(\overline{X^{(m-r_0)\times
(m-r_0)}_{r-r_0}},0).
\end{align*}\label{322}
\end{enumerate}
\end{Lemma}
\begin{proof}
\eqref{321}. Using the linear isometries $U(m)\times U(n)$ we can assume the left/right kernels of $A$ are of the form $(\underbrace{0,\dots,0}_{r_0},*,\dots,*)$,
see paragraph \ref{Sec.Group.Actions}. Therefore $A=A_{inv}\oplus \mathbb{O}_{(m-r_0)\times(n-r_0)}$, here
$A_{inv}\in Mat_{r_0\times r_0}(\mathbbm{K})$ is invertible.
As the action $\operatorname{GL}_m\times \operatorname{GL}_n\circlearrowright X_{r_0}$ is
transitive (and smooth) we write down the tangent space
$T_AX_{r_0}$ as the tangent to the orbit using
the calculation of the tangent space given in \cite{arbarellocornalbagriffiths}:
\begin{align*}
T_AX_{r_0}=Span_\mathbb{R}(V_lA,AV_r)_{\substack{V_l\in Mat_{m\times m}(\mathbbm{K})\\V_r\in Mat_{n\times n}(\mathbbm{K})}}=
Span_\mathbbm{K}\Big(\begin{bmatrix} *&*\\ *&\mathbb{O}_{(m-r_0)\times(n-r_0)}\end{bmatrix}\Big).
\end{align*}
As the stratum $X_{r_0}$ is smooth (at any of its points), it can be rectified locally near $A$ to its tangent space. Namely, there exists a homeomorphism,
$(Mat_{m\times n}(\mathbbm{K}),A)\approx (\mathbbm{K}^{mr_0+nr_0-r^2_0},0)\times (\mathbbm{K}^{(m-r_0)(n-r_0)},0)$, that sends $(X_{r_0},A)$
to $(T_AX_{r_0},0)\times\{0\}=(\mathbbm{K}^{mr_0+nr_0-r^2_0},0)\times \{0\}$.
This homeomorphism is assured by the implicit function theorem and can be chosen ``almost metric preserving''. More precisely, for any $\epsilon>0$ the distortion
of the distances will be less than $\epsilon$ provided we choose a small enough neighbourhood of $A$ in $X_{r_0}$.
Restricting this homeomorphism to $(\overline{X^{m\times n}_r},A)$ we get the statement.
\eqref{322}. The proof is essentially the same, except that here one uses the action
$A\to V^t AV$, $V\in U(n)$.
\end{proof}
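As an aside, the dimension count $mr_0+nr_0-r_0^2$ for the smooth stratum $X_{r_0}$ appearing in the lemma can be checked numerically. The following sketch (not part of the argument; it assumes NumPy and uses a randomly chosen, hence generic, point) computes the rank of the differential of the parametrization $(U,V)\mapsto UV$, whose image is the tangent space $T_AX_{r_0}$:

```python
import numpy as np

# Sanity check: the stratum X_r of rank-r matrices in Mat_{m x n}(R) has
# dimension mr + nr - r^2.  We compute the rank of the differential of the
# parametrization (U, V) -> U V at a generic point, where U is m x r and
# V is r x n; the image of this differential is the tangent space T_{UV}X_r.
rng = np.random.default_rng(0)
m, n, r = 4, 5, 2
U = rng.standard_normal((m, r))
V = rng.standard_normal((r, n))

# d(UV) = dU.V + U.dV, assembled column-by-column into an (mn) x (mr+nr) matrix
cols = []
for dU_flat in np.eye(m * r):
    dU = dU_flat.reshape(m, r)
    cols.append((dU @ V).ravel())
for dV_flat in np.eye(r * n):
    dV = dV_flat.reshape(r, n)
    cols.append((U @ dV).ravel())
jac = np.array(cols).T

dim_stratum = np.linalg.matrix_rank(jac)
assert dim_stratum == m * r + n * r - r**2  # 4*2 + 5*2 - 2^2 = 14
```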
\begin{Lemma}\label{Thm.Path.Connected.Controlled}
\begin{enumerate}[1.]
\item Let $X=Mat_{m\times n}(\mathbbm{K})$. For any $r\le m\le n$ the connected
components of $X_r$ are ``controlled path-connected'' near any point
of $\overline{X_r}$ in the following sense:
for any $A\in \overline{X_r}$ and any $\epsilon>0$ there exists
$\delta=\delta(A,\epsilon)$ such that any points of the ball, $P,Q\in
Ball_\delta(A)\cap X_r$, belonging to the same connected component of
$X_r$, are connected (inside $Ball_\delta(A)\cap X_r$) by a path of
length $<\epsilon$.\label{331}
\item Similarly for the spaces of (skew-)symmetric matrices, $X=Mat^{sym}_{m\times m}(\mathbbm{K})$ or $X=Mat^{skew-sym}_{m\times m}(\mathbbm{K})$, their strata are
controlled path connected at any point.\label{332}
\end{enumerate}
\end{Lemma}
\begin{proof}
\eqref{331}. Let $\operatorname{rank}(A)=r_0\le r$, by the last lemma there exist
homeomorphisms as on the diagram.
\begin{align*}
\begin{matrix} (Mat_{m\times n}(\mathbbm{K}),A)&\stackrel{\phi}{\isom}&(\mathbbm{K}^{mr_0+nr_0-r^2_0},\mathbb{O})\times Mat_{(m-r_0)\times (n-r_0)}(\mathbbm{K})
\\\cup&&\cup
\\(\overline{X_r},A)&\isom& (\mathbbm{K}^{mr_0+nr_0-r^2_0},\mathbb{O})\times(\overline{X^{(m-r_0)\times (n-r_0)}_{r-r_0}},\mathbb{O})
\\\cup&&\cup
\\(X_r,A)&\isom&(\mathbbm{K}^{mr_0+nr_0-r^2_0},\mathbb{O})\times(X^{(m-r_0)\times (n-r_0)}_{r-r_0},\mathbb{O}).
\end{matrix}
\end{align*}
Here in the last row we denote by $(X_r,A)$ a small
neighbourhood of $X_r$ near $A$, even though $A\not\in X_r$. Similarly for $(X^{(m-r_0)\times (n-r_0)}_{r-r_0},\mathbb{O})$.
While $\phi$ does not preserve the distances, the distortions are small for small representatives, therefore it is
enough to prove the statement for the presentation on the right.
Write the coordinates of $P,Q$ for this splitting, $P\rightsquigarrow(P_1,P_2)$, $Q\rightsquigarrow(Q_1,Q_2)$,
where $P_1,Q_1\in (\mathbbm{K}^{mr_0+nr_0-r^2_0},\mathbb{O})$, while $P_2,Q_2\in (X^{(m-r_0)\times (n-r_0)}_{r-r_0},\mathbb{O})$.
Now take the paths $(tP_1,P_2)$, $(tQ_1,Q_2)$, where $t\in [0,1]$. Both paths lie inside
$(\mathbbm{K}^{mr_0+nr_0-r^2_0},\mathbb{O})\times X^{(m-r_0)\times (n-r_0)}_{r-r_0}$, thus their pre-images under $\phi$ lie inside $X_r$. And the lengths of
both paths are small for $\delta$ small. Therefore it remains to check the points $(0,P_2)$, $(0,Q_2)$, i.e. to connect them by
a short path that lies inside $\{0\}\times X^{(m-r_0)\times (n-r_0)}_{r-r_0}$.
By this transition we have reduced the problem from the case $P,Q,A\in \overline{X^{m\times n}_{r}}$ to the
case, $P_2,Q_2,\mathbb{O}\in \overline{X^{(m-r_0)\times (n-r_0)}_{r-r_0}}$.
Note that $0<r-r_0\le m-r_0\le n-r_0$. Note that $P_2,Q_2$ still lie in the same connected component of $X^{(m-r_0)\times (n-r_0)}_{r-r_0}$, as the paths
are in $X_r$.
Thus we have to prove:
for any $\epsilon>0$ there exists $\delta=\delta(\epsilon)$ such that any points $P,Q\in Ball_\delta(\mathbb{O})\cap X_r\subset Mat_{m\times n}(\mathbbm{K})$ are connected
(inside $X_r$) by a path of length $<\epsilon$.
Alternatively: {\em any point $P\in Ball_\delta(\mathbb{O})\cap X_r$ is connected to
the special point $\delta\cdot{1\hspace{-0.1cm}\rm I}_{r\times r}\oplus\mathbb{O}_{(m-r)\times (n-r)}$ by a path of length $<\epsilon$.} This latter statement is immediate:
apply Gaussian elimination on rows and columns (via $\operatorname{GL}_m\times \operatorname{GL}_n$) to get a path of bounded length.
\eqref{332}. For the (skew-)symmetric case the proof is essentially the same, except that the special point is now
$\delta\cdot{1\hspace{-0.1cm}\rm I}\oplus (-\delta\cdot{1\hspace{-0.1cm}\rm I})\oplus\mathbb{O}_{(m-r)\times (m-r)}$ (the sizes depend on the signature) and
instead of Gaussian elimination one uses the action $A\to V^t AV$.
\end{proof}
\section{Lipschitz normality of linear subspaces of the space of
matrices}\label{modelcase}
\subsection{Lipschitz normality for the closures $\overline{X_r}$}
\begin{Theorem}\label{Thm.Lipshitz.Normality.Closures.of.Strata}
Let $\mathbbm{K}\in\{\mathbb{R},\mathbb{C}\}$ and $X$ be one of the spaces $Mat_{m\times n}(\mathbbm{K})$, $Mat^{sym}_{n\times n}(\mathbbm{K})$, $Mat^{skew-sym}_{n\times n}(\mathbbm{K})$.
For any $1\le r\le m\le n$ and $A,B\in\overline{X_r}$ one has:
$\frac{d_{in}^{\overline{X_r}}(A,B)}{2\sqrt{2}} \le d_{out}(A,B)\le
d^{\overline{X_r}}_{in}(A,B)$.
\end{Theorem}
\begin{proof}
The inequality on the right is immediate; we prove the one on the left.
We use the group action, $U(m)\times U(n)\circlearrowright Mat_{m\times n}(\mathbbm{K})$, by $A\to UAV$,
and $U(n)\circlearrowright Mat^{sym}_{n\times n}(\mathbbm{K})$,
$Mat^{skew-sym}_{n\times n}(\mathbbm{K})$, by $A\to U^tAU$, to bring $A$ to the
form
\begin{align*}
\begin{bmatrix} A_1&\mathbb{O}_{r\times (n-r)}\\\mathbb{O}_{(m-r)\times
r}&\mathbb{O}_{(m-r)\times(n-r)}\end{bmatrix}.
\end{align*}
Here $A_1\in Mat_{r\times r}(\mathbbm{K})$, $Mat^{sym}_{r\times r}(\mathbbm{K})$,
$Mat^{skew-sym}_{r\times r}(\mathbbm{K})$. This action preserves $X_r$, $\overline{X_r}$ and the inner/outer distances. Therefore we can assume $A$ in this form.
Present $B$ accordingly: $\Bigl[\begin{smallmatrix} B_1&B_2\\B_3&B_4\end{smallmatrix}\Bigr]$. Then:
\begin{align*} d_{out}(A,B)=\sqrt{||A_1-B_1||^2+||B_2||^2+||B_3||^2+||B_4||^2}.
\end{align*}
This is the distance along the straight segment. We will replace this straight segment by a two-part path, lying inside $\overline{X_r}$,
whose total length is less than $2\sqrt{2}\,d_{out}(A,B)$.
Consider the path $B(t)=\Bigl[\begin{smallmatrix} B_1&tB_2\\ tB_3&t^2B_4\end{smallmatrix}\Bigr]$ for $t\in[0,1]$. We claim: $B(t)\in \overline{X_r}$ for any $t\in[0,1]$. Indeed, scaling a particular
row/column does not increase the rank. And in the (skew-)symmetric case $B(t)$ remains (skew-)symmetric.
Therefore we get an algebraic curve (inside $\overline{X_r}$) that connects $B=B(1)$ to $B(0)=\Bigl[\begin{smallmatrix} B_1&\mathbb{O}\\\mathbb{O}&\mathbb{O}\end{smallmatrix}\Bigr]$. The length of this path is:
$\int\limits^1_0 \sqrt{||B_2||^2+||B_3||^2+4t^2||B_4||^2}dt$.
It remains to move from $B(0)$ to $A$.
In this case the straight segment $\overline{B(0),A}$ lies inside $\overline{X_r}$. In total we get:
\begin{align*}
d^{\overline{X_r}}_{in}(A,B)\le \int\limits^1_0
\sqrt{||B_2||^2+||B_3||^2+4t^2||B_4||^2}dt+||A_1-B_1||.
\end{align*}
Now we use the bounds
\begin{align*}
\int\limits^1_0 \sqrt{||B_2||^2+||B_3||^2+4t^2||B_4||^2}dt<
2\sqrt{||B_2||^2+||B_3||^2+||B_4||^2}
\end{align*}
and $x+y\le \sqrt{2(x^2+y^2)}$ to get:
\begin{align*}
d^{\overline{X_r}}_{in}(A,B) &<2
\sqrt{||B_2||^2+||B_3||^2+||B_4||^2}+||A_1-B_1||\\
&\le 2\sqrt{2}
\sqrt{||A_1-B_1||^2+||B_2||^2+||B_3||^2+||B_4||^2} =2\sqrt{2} \cdot
d_{out}(A,B).
\end{align*}
\end{proof}
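The path estimate in this proof can be illustrated numerically. The sketch below (assuming NumPy; randomly generated data, not part of the proof) builds $A=A_1\oplus\mathbb{O}$ and a random $B$ of rank $\le r$, follows the curve $B(t)$ and then the straight segment to $A$, and checks that the total length stays below $2\sqrt{2}\,d_{out}(A,B)$:

```python
import numpy as np

# Illustration of the proof's path: B(t) = [[B1, tB2], [tB3, t^2 B4]] followed
# by the straight segment from B(0) to A = A1 (+) 0 stays in the closure of
# X_r, and its total length is < 2*sqrt(2)*d_out(A, B).
rng = np.random.default_rng(1)
m, n, r = 5, 6, 2

A1 = rng.standard_normal((r, r))
A = np.zeros((m, n)); A[:r, :r] = A1                         # canonical form
B = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank <= r

B1, B2 = B[:r, :r], B[:r, r:]
B3, B4 = B[r:, :r], B[r:, r:]

def Bt(t):
    return np.block([[B1, t * B2], [t * B3, t**2 * B4]])

# scaling rows/columns does not increase the rank, so B(t) stays in X_r-bar
assert all(np.linalg.matrix_rank(Bt(t)) <= r for t in np.linspace(0, 1, 11))

# discretized length of t -> B(t), plus the final straight segment
ts = np.linspace(0, 1, 2001)
curve_len = sum(np.linalg.norm(Bt(ts[i + 1]) - Bt(ts[i]))
                for i in range(len(ts) - 1))
total = curve_len + np.linalg.norm(A1 - B1)

d_out = np.linalg.norm(A - B)
assert total < 2 * np.sqrt(2) * d_out
```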
\begin{Remark} The constant $2\sqrt{2}$ is certainly not the best one. For example, for $X=Mat_{m\times n}(\mathbbm{K})$ one can prove
$d_{in}^{\overline{X_r}}(A,B)\le \sqrt{2}\,d_{out}(A,B)$ by first going along the straight segment $\Bigl[\begin{smallmatrix} B_1&tB_2\\B_3&tB_4\end{smallmatrix}\Bigr]$,
thus bringing $B$ to the form $\Bigl[\begin{smallmatrix}
B_1&\mathbb{O}\\B_3&\mathbb{O}\end{smallmatrix}\Bigr]$, and then going along
the straight segment
$\Bigl[\begin{smallmatrix}
tA_1+(1-t)B_1&\mathbb{O}\\(1-t)B_3&\mathbb{O}\end{smallmatrix}\Bigr]$.
Probably one can get even better bounds by using the appropriate metric on the
Grassmannians of linear subspaces, $Gr(\mathbbm{K}^{m-r},\mathbbm{K}^m)$, $Gr(\mathbbm{K}^{n-r},\mathbbm{K}^n)$, or the Stiefel manifolds.
\end{Remark}
\subsection{Lipschitz normality for connected components of $X_r$}
\begin{Theorem}
Let $\mathbbm{K}\in\{\mathbb{R},\mathbb{C}\}$ and $X$ be one of the spaces $Mat_{m\times n}(\mathbbm{K})$, $Mat^{sym}_{n\times n}(\mathbbm{K})$, $Mat^{skew-sym}_{n\times n}(\mathbbm{K})$.
Suppose $A,B$ belong to the same connected component of $X_r$, for some $r\le m$.
Then $\frac{d_{in}^{X_r}(A,B)}{2\sqrt{2}} \le d_{out}(A,B)\le d^{X_r}_{in}(A,B)$.
\end{Theorem}
\begin{proof}
The inequality on the right is obvious; we prove the one on the left.
{\bf Step 1.} (Reduction to the case of $X_n$.)
As in the proof for $\overline{X_r}$ we apply the action of
$U(m)\times U(n)$, or $U(n)$ in the (skew-)symmetric case, to bring
$A$ to the form $\Bigl[\begin{smallmatrix} A_1&\mathbb{O}\\\mathbb{O}&\mathbb{O}\end{smallmatrix}\Bigr]$. Accordingly $B$ is
brought to $\Bigl[\begin{smallmatrix} B_1& *\\ *& *\end{smallmatrix}\Bigr]$. It might happen that $\operatorname{rank}(B_1)<r$.
To avoid this we can take arbitrarily small but generic deformation of
$B$ inside $X_r$. (For example, apply the group action that adds to
the first $r$ rows/columns a small but generic linear combination of
all the other rows/columns.)
Now, as $\operatorname{rank}(B_1)=r$, we can take the path $B(t)=\Bigl[\begin{smallmatrix} B_1&t *\\t
*&t^2 *\end{smallmatrix}\Bigr]$, as in the proof for $\overline{X_r}$. As in that proof
the length of this path is less than $2\cdot\sqrt{(\dots)}$.
We arrive at $\Bigl[\begin{smallmatrix} B_1&\mathbb{O}\\\mathbb{O}&\mathbb{O}\end{smallmatrix}\Bigr]$ and it remains to
connect the matrices $\Bigl[\begin{smallmatrix} A_1&\mathbb{O}\\\mathbb{O}&\mathbb{O}\end{smallmatrix}\Bigr]$, $\Bigl[\begin{smallmatrix}
B_1&\mathbb{O}\\\mathbb{O}&\mathbb{O}\end{smallmatrix}\Bigr]$ inside $X_r$ by a path of the total length
$\le 2d_{out}(A_1,B_1)+\epsilon$. In particular, the initial
question has been reduced to the stratum $X_n$ of square
matrices. Note also: as the path $B(t)$ was fully inside $X_r$, the
points $A_1,B_1$ lie in the same connected component of $X_r$.
{\bf Step 2.} Let $A,B\in X_n$ where $X=Mat_{n\times n}(\mathbbm{K})$,
$Mat^{sym}_{n\times n}(\mathbbm{K})$ or $Mat^{skew-sym}_{n\times n}(\mathbbm{K})$.
(For skew-symmetric matrices this implies: $n$ is even.)
\underline{Let $\mathbbm{K}=\mathbb{C}$.} Then all the strata are connected. Consider
the straight segment $[A,B]\subset X$. Its endpoints lie in $X_n$, thus,
by algebraicity of the strata, it intersects $\overline{X_{n-1}}$ in a
finite number of points which is at most $\deg
(\overline{X_{n-1}})$. Now, by the controlled path connectedness
(Lemma \ref{Thm.Path.Connected.Controlled}), we can deform the path
slightly at each of these points to push it into the stratum
$X_n$. Hence we get a path inside $X_n$ of $\text{length}\le
d_{out}(A,B)+\epsilon$. Together with the path $B(t)$ of step 1 this
finishes the proof.
\
\underline{Suppose $\mathbbm{K}=\mathbb{R}$.} Let $\mathcal{U}\subset X_n$ be the prescribed
connected component. We construct the needed path from $A$ to $B$
inside $\mathcal{U}$.
{\em The idea of construction.} In the case of $\overline{X_r}$ the straight edge $[A,B]$ was replaced by a straight edge $[A,B(0)]$ and an
algebraic curve from $B(0)$ to $B=B(1)$, such that
\begin{align*}
length\big(A,B(0)\big)+length\big(B(0),B(1)\big)\le 2\sqrt{2}
d_{out}\big(A,B\big),
\end{align*}
see the proof of Theorem \ref{Thm.Lipshitz.Normality.Closures.of.Strata}. For $\mathcal{U}$ we use the same idea, but we need to split into more paths
to stay inside $\mathcal{U}$. In this way we produce several straight edges, $[A,A_1]$, $[A_1,A_2]$,\dots, $[A_{k-1},A_k],[A_k,B_k]$, and algebraic curves, $(B_k,B_{k-1})$,
$(B_{k-1},B_{k-2})$,\dots, $(B_1,B)$ such that
\begin{align*}
length[A,A_1]&+\cdots+length[A_{k-1},A_k]+length[A_k,B_k]+\\
&+length(B_k,B_{k-1})+\cdots+length(B_1,B)< 2\sqrt{2} d_{out}(A,B)+\epsilon.
\end{align*}
For $X=Mat_{n\times n}(\mathbb{R})$ or $Mat^{skew-sym}_{n\times n}(\mathbb{R})$ it is enough to take $k=1$, but for $Mat^{sym}_{n\times n}(\mathbb{R})$ the number $k$ can
be $\lfloor \frac{n}{2}\rfloor$.
All these paths lie in $\overline{\mathcal{U}}$ and each of them has some points in $\mathcal{U}$, thus (by algebraicity of $\overline{X_r}$) each of the paths lies in $\mathcal{U}$,
except for a finite number of points. At each such point we use the controlled-path-connectedness,
lemma \ref{Thm.Path.Connected.Controlled}, to (slightly) deform the path into $\mathcal{U}$.
\
{\em The construction.}
Fix $A,B\in\mathcal{U}\subset X_n$. The edge $[A,B]$ does not necessarily lie inside $\overline{\mathcal{U}}$, thus (unlike the case $\mathbbm{K}=\mathbb{C}$) it cannot be pushed back into $\mathcal{U}$ by
a small deformation. Split the edge $[A,B]$ into the intervals $[A,A_1)$, $[A_1,B_1]$, $(B_1,B]$, where $[A,A_1)\subset\mathcal{U}$,
$(B_1,B]\subset\mathcal{U}$ and $A_1,B_1\in\overline{\mathcal{U}}\setminus\mathcal{U}$. Thus $A_1,B_1\in \overline{X_{n-1}}$, and (after a small-but-generic deformation of $A,B$ inside $\mathcal{U}$)
we can assume $A_1,B_1\in X_{n-1}$. (In the case of $X=Mat^{skew-sym}_{n\times n}(\mathbb{R})$ the rank drops by two, thus $A_1,B_1\in X_{n-2}$.)
As in the case of $\overline{X_r}$, we can assume (using the $O(n)\times O(n)$ action)
$A_1=\Bigl[\begin{smallmatrix} \widetilde{A}_1&\mathbb{O}\\\mathbb{O}&\mathbb{O}\end{smallmatrix}\Bigr]$, where $\widetilde{A}_1$ is invertible. As in the case of $\overline{X_r}$, we take the algebraic curve
$B_1(t)=\Bigl[\begin{smallmatrix} \widetilde{B}_1& t*\\ t*& t^2*\end{smallmatrix}\Bigr]$, for $t\in[0,1]$. And we can assume $\widetilde{B}_1$ invertible, so this curve lies inside $X_{n-1}$.
(For skew-symmetric matrices the curve lies inside $X_{n-2}$.)
It remains to connect $A_1$ to $B_1(0)$, inside $\overline{\mathcal{U}}$, and to (slightly) deform this path into $\mathcal{U}$.
\begin{itemize}
\item {\em \underline{The case $\mathcal{U}=\operatorname{GL}^+_n(\mathbb{R})\subset X=Mat_{n\times n}(\mathbb{R})$.}} Take the path
\[\begin{bmatrix} t\widetilde{A}_1+(1-t)\widetilde{B}_1&\mathbb{O}\\\mathbb{O}&\epsilon(t)\end{bmatrix},
\]
with a continuous function $[0,1]\stackrel{\epsilon(t)}{\to}\mathbb{R}$ that satisfies:
\begin{align*}
&\epsilon(t)\cdot \det\Big(t\widetilde{A}_1+(1-t)\widetilde{B}_1\Big)\ge0,\\ &\epsilon(t)=0\ \text{iff}\ \det\Big(t\widetilde{A}_1+(1-t)\widetilde{B}_1\Big)=0,\\ &\text{and} \
|\epsilon(t)|\ll1 \ \text{for any} \ t.
\end{align*}
This path lies inside $\mathcal{U}$ except for a finite number of points, where $\det\big(t\widetilde{A}_1+(1-t)\widetilde{B}_1\big)=0$. Now, by
the controlled path-connectedness, lemma \ref{Thm.Path.Connected.Controlled}, we can deform the path slightly at each of these points into $\mathcal{U}$.
Thus we have
connected (the small deformations of) $A_1,B_1(0)$, inside $\mathcal{U}$, by a path of total length at most $d_{out}(A_1,B_1(0))+\epsilon$.
Together with $B(t)$ this provides the needed path from $A$ to $B$ inside $\mathcal{U}$.
The case $\mathcal{U}=\operatorname{GL}^-_n(\mathbb{R})$ is similar.
\item {\em \underline{The case $X=Mat^{skew-sym}_{n\times n}(\mathbb{R})$,}}
here $\mathcal{U} \subset X_n$ is prescribed by the parity of the negative
values among $\{\lambda_i\}$, see paragraph \ref{Sec.Connected.Components}. Take the path
\[\begin{bmatrix} t\widetilde{A}_1+(1-t)\widetilde{B}_1&\mathbb{O}&\mathbb{O}\\\mathbb{O}&0&\epsilon(t)\\\mathbb{O}&-\epsilon(t)&0\end{bmatrix},
\]
where a continuous function $[0,1]\stackrel{\epsilon(t)}{\to}\mathbb{R}$ satisfies:
\begin{align*}
\epsilon(t)=0\ \text{iff}\ \det\Big(t\widetilde{A}_1+(1-t)\widetilde{B}_1\Big)=0,\quad |\epsilon(t)|\ll1 \ \text{for any} \ t,
\end{align*}
and the sign of $\epsilon(t)$ is chosen in such a way that (whenever $\det\Big(t\widetilde{A}_1+(1-t)\widetilde{B}_1\Big)\neq0$) the total number of
negative values among $\{\lambda_i\}$ is the one prescribed by $\mathcal{U}$. This path lies inside $\mathcal{U}$, except for a finite number of points where
$\det\Big(t\widetilde{A}_1+(1-t)\widetilde{B}_1\Big)=0$. Now use the controlled path-connectedness and proceed as in the case $\mathcal{U}=\operatorname{GL}_n^+(\mathbb{R})$.
\item {\em \underline{The case $X=Mat^{sym}_{n\times n}(\mathbb{R})$}, here $\mathcal{U}=\mathcal{U}_{n_+,n_-}$ is the set of symmetric matrices of signature $(n_+,0,n_-)$.}
Suppose that at all points of the edge $[A_1,B_1]$ one has $n_+(t)\le n_+$, $n_-(t)\le n_-$. Then we take the path
\[\begin{bmatrix} t\widetilde{A}_1+(1-t)\widetilde{B}_1&\mathbb{O}\\\mathbb{O}&\epsilon(t)\end{bmatrix},
\]
and argue as above.
In general, on the edge $[\widetilde{A}_1,\widetilde{B}_1]$ there might be points where one of the conditions $n_+(t)\le n_+$, $n_-(t)\le n_-$ is violated,
and this cannot be corrected by just one factor $\epsilon(t)$.
Thus we use the reduction on the size of matrix. Split the edge $[A_1,B_1]$ into $[A_1,A_2)$, $[A_2,B_2]$, $(B_2,B_1]$,
where $[A_1,A_2)\subset\overline{\mathcal{U}}\cap X_{n-1}$,
$[B_1,B_2)\subset\overline{\mathcal{U}}\cap X_{n-1}$, and $A_2,B_2\in\overline{\mathcal{U}}\cap \overline{X_{n-2}}$. Push the paths $[A_1,A_2)$, $[B_1,B_2)$ slightly into $\mathcal{U}$
by $\epsilon(t)$-addition as before. Apply the general method to the edge $[A_2,B_2]$, i.e. by $O(n)\times O(n)$ bring $A_2$ to the canonical form,
then degenerate the corresponding blocks in $B_2$ to zero-blocks. The curve $B_2(1)\rightsquigarrow B_2(0)$ is pushed into $\mathcal{U}$ by
$\begin{bmatrix} \widetilde{B}_2(t)&&\\&\epsilon_1(t)&0\\&0&\epsilon_2(t)\end{bmatrix}$, where $\epsilon_i(t)$ are small corrections as above. Now repeat the process for the edge $[A_2,B_2(0)]$, etc.
After at most $\lfloor\frac{n}{2}\rfloor$ steps we get to some $A_k,B_k$ of rank $\le \lfloor\frac{n}{2}\rfloor$. For them we can take the ``corrected'' path
\[\begin{bmatrix} t\widetilde{A}_k+(1-t)\widetilde{B}_k&\\&\epsilon_1(t)\\&&\ddots\\&&&\epsilon_k(t)\end{bmatrix},
\]
that lies in $\mathcal{U}$, except for a finite number of points. Now use the controlled path-connectedness.
\end{itemize}\end{proof}
\subsection{Lipschitz normality for transversal intersections with $\overline{X_r}$}
For more general linear subspaces of the space of matrices we have the
following result.
\begin{Proposition}\label{linearEIDS}
Let $V\subset X = Mat_{m\times n}(\mathbbm{K})$ be a linear subspace. Assume that
$V$ intersects $X_s$ transversely for all $s\neq 0$. Then $Y:= V\cap
\overline{X}_r$ is Lipschitz normally embedded.
\end{Proposition}
\begin{proof}
First notice that the stratification of $\overline{X}_r$ is a
locally Lipschitz stratification, since it is locally analytically
trivial along any stratum. Also by
Theorem \ref{Thm.Lipshitz.Normality.Closures.of.Strata}
$\overline{X}_r$ is Lipschitz normally embedded.
Since $V$ is linear, $Y = V\cap \overline{X}_r$ is a cone over its
link, with vertex $0$.
Since $V$ is transverse to all the strata of $\overline{X}_r$ away from
$0,$ the stratification of $\overline{X}_r$ induces a
locally Lipschitz trivial stratification on $Y\setminus\{0\}$.
By Proposition \ref{bilipschizttrivial} $Y\setminus \{0\}$ is locally
Lipschitz normally embedded. The sphere $S$ is transverse to all the
strata of $Y\setminus\{0\},$ where $S\subset Mat_{m\times n}$ is the
sphere of radius $1$ of real codimension $1$ (i.e.\ if $\mathbbm{K}=\mathbb{R}$ then
$S=S^{mn-1}$ and if $\mathbbm{K}=\mathbb{C}$ then $S=S^{2mn-1}$). Hence we can again use Proposition
\ref{bilipschizttrivial} to conclude that the link $M:=Y\cap S$ is
locally Lipschitz normally embedded. Then by Proposition \ref{localglobal}
$M$ is Lipschitz normally embedded, and $Y$ is Lipschitz normally
embedded by Proposition \ref{cone} since it is a cone over $M$.
\end{proof}
It is not true for all linear subspaces $V$ that $V \cap \overline{X}_r$
is Lipschitz normally embedded, as the next example shows.
\begin{Example}\label{degenerationofcusps}
Let $V\subset Mat_{3\times 3}(\mathbb{C})$ be the linear subspace given as the
image of the following map $\morf{F}{\mathbb{C}^3}{Mat_{3\times 3}(\mathbb{C})}$:
\begin{align*}
F(x,y,z)=\left(
\begin{array}{@{} c c c @{}}
x & 0 & z \\
y & x & 0 \\
0 & y & x
\end{array}
\right).
\end{align*}
Let $Y:= V \cap \overline X_2,$ where $\overline X_2$ is the set of matrices in $Mat_{3\times 3}(\mathbb{C})$ with zero determinant, which is Lipschitz normally
embedded by Theorem \ref{Thm.Lipshitz.Normality.Closures.of.Strata}. Hence
one would expect $Y$ to be a nice space. On the other hand,
${Y}=V(x^3-y^2z)$, hence it is a family of cusps degenerating to a
line. But ${Y}$ being
Lipschitz normally embedded would imply that the cusp $x^3-y^2=0$
is Lipschitz normally embedded by
Proposition \ref{product}, since each non-zero point on the $z$-axis has a
neighbourhood which is a product of the cusp and the $z$-axis. But the
cusp is not Lipschitz normally embedded by the work of Pham and
Teissier \cite{phamteissier}. Hence
${Y}$ is not Lipschitz normally embedded.
\end{Example}
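As a quick consistency check of this example (not part of the text; it assumes NumPy), one can verify numerically that $\det F(x,y,z)=x^3+y^2z$, which agrees with $Y=V(x^3-y^2z)$ after the harmless substitution $z\mapsto -z$; each fixed $z_0\neq 0$ then gives a cusp in the $(x,y)$-plane:

```python
import numpy as np

# Numerical check: det F(x, y, z) = x^3 + y^2*z for the 3x3 matrix of the
# example (matching Y = V(x^3 - y^2 z) up to the substitution z -> -z).
rng = np.random.default_rng(3)
for _ in range(100):
    x, y, z = rng.standard_normal(3)
    F = np.array([[x, 0, z],
                  [y, x, 0],
                  [0, y, x]])
    assert abs(np.linalg.det(F) - (x**3 + y**2 * z)) < 1e-9
```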
The proof of Proposition \ref{linearEIDS} uses the matrix structure of
$Mat_{m\times n}(\mathbbm{K})$, and the naive generalization to more general
varieties does not hold.
\begin{itemize}
\item The statement ``if $X,Y\subset \mathbb{R}^N$ are two
manifolds intersecting transversally then $X\cap\overline{Y}$ is
Lipschitz normally embedded''
does not hold because of the obvious counterexample:
$X=\{z=0\}\subset\mathbb{R}^3$, $\overline{Y}=\{y^2=x^3\}\subset\mathbb{R}^3$,
$Y=\overline{Y}\setminus\{x=0=y\}$.
\item The statement
``if $X,Y\subset \mathbb{R}^N$ are two
manifolds intersecting transversally, with $\overline{Y}$ Lipschitz
normally embedded, then $X\cap\overline{Y}$ is Lipschitz normally embedded''
does not hold either.
E.g.\ let $\overline{Y}=\{x^2+y^2=z^k\}\subset\mathbb{R}^3$ and $X=\{x=y\}$.
Then $X\cap \overline{Y}$ is not Lipschitz normally embedded.
\item Consider the embedding
$\mathbb{R}^2\stackrel{j}{\hookrightarrow}X:=Mat_{2\times 2}(\mathbb{R})$ by
\begin{align*}
(x,y)\to\begin{pmatrix} y&x^{k-1}\\x&y\end{pmatrix}.
\end{align*}
Then $j(\mathbb{R}^2)$ intersects
$X_2$ and $X_1$ transversally. But
$j(\mathbb{R}^2)\cap\overline{X_1}\approx\{y^2=x^k\}\subset\mathbb{R}^2$ is not
Lipschitz normally embedded at the origin. So the linearity of $V$ is also
important. We will look more closely at the case when $V$ is not linear in
Section \ref{secgeneralcase}.
\end{itemize}
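The determinant computation behind the last bullet is easy to confirm numerically. The sketch below (assuming NumPy; not part of the text) checks that the pullback of the determinant along $j$ is $y^2-x^k$, so $j(\mathbb{R}^2)\cap\overline{X_1}$ is the curve $\{y^2=x^k\}$:

```python
import numpy as np

# Check: det [[y, x^(k-1)], [x, y]] = y^2 - x^k, so j(R^2) meets the closure
# of X_1 in the curve {y^2 = x^k} (not Lipschitz normally embedded at 0 for
# odd k >= 3, by Pham-Teissier).
rng = np.random.default_rng(4)
for k in [3, 4, 5]:
    for _ in range(100):
        x, y = rng.standard_normal(2)
        M = np.array([[y, x**(k - 1)], [x, y]])
        assert abs(np.linalg.det(M) - (y**2 - x**k)) < 1e-9
```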
\section{Lipschitz normality of collections of affine subspaces in $\mathbb{R}^N$}\label{uppertriangularcase}
Fix a (possibly infinite) collection $\{L_i\}$ of affine subspaces in $\mathbb{R}^N$, of (varying) positive dimensions. The union $\cup L_i$ is not
always Lipschitz normally embedded (of course we assume $\cup L_i$ to be connected).
\begin{Example} The subset $\{x(y^2-1)=0\}\subset\mathbb{R}^2$ is not Lipschitz normally embedded because
$d_{out}\big((t,1),(t,-1)\big)=2$, while
$d_{in}\big((t,1),(t,-1)\big)=2+2t$ for $t\ge0$; thus the ratio is unbounded as $t\to\infty$. \end{Example}
In this example the collection contains two non-intersecting lines. We prove that in
many cases this is the only obstruction to being Lipschitz normally embedded.
\subsection{}
As a preparation we recall the definition of the angle between two (intersecting) affine subspaces $L_i,L_j\subset\mathbb{R}^N$. (All the metrics here are outer.)
\begin{itemize}
\item Suppose the intersection is just one point, $L_i\cap L_j=\{0\}$. Define the angle, $\alpha_{L_i,L_j}\in[0,\frac{\pi}{2}]$, via the law of cosines:
\begin{align*}
\cos(\alpha_{L_i,L_j})=\underset{\substack{x\in L_i\setminus\{0\}\\y\in
L_j\setminus\{0\}}}{\sup}\frac{d^2(x,0)+d^2(y,0)-d^2(x,y)}{2d(x,0)d(y,0)}.
\end{align*}
If $L_i,L_j$ are lines this gives the classical definition; in particular it is independent of the choice of $x,y$.
\item
If $\dim(L_i\cap L_j)>0$ fix a point $0\in L_i\cap L_j$ and take the orthogonal complement at $0$:
$(L_i\cap L_j)\oplus \mathcal{N}=\mathbb{R}^N$. Then we define $\alpha_{L_i,L_j}:=\alpha_{(L_i\cap \mathcal{N}),(L_j\cap \mathcal{N})}$. If $\dim(L_i)=\dim(L_j)$ and $\dim(L_i\cap L_j)=\dim(L_i)-1$
then $L_i\cap\mathcal{N}$, $L_j\cap \mathcal{N}$ are lines and we get the classical definition.
\end{itemize}
By its definition $\alpha_{L_i,L_j}\in[0,\frac{\pi}{2}]$.
\begin{Lemma}
If $L_i\not\subseteq L_j$ and $L_j\not\subseteq L_i$ then $\alpha_{L_i,L_j}\neq0$.
\end{Lemma}
\begin{proof}
We can assume $L_i\cap L_j$ is just one point and move this point to the origin.
In the definition of $\cos(\alpha_{L_i,L_j})$ apply the homogeneous scaling of $\mathbb{R}^N$ to get: $0<\epsilon\le |x|,|y|\le1$. As $x\not\in L_j$ and $y\not\in L_i$ the points
$x,0,y$ are not on one line, thus:
$f(x,y)=\frac{d^2(x,0)+d^2(y,0)-d^2(x,y)}{2d(x,0)d(y,0)}<1$. As $f(x,y)$ is a continuous function on the compact domain,
$(L_i\cap \{\epsilon\le |x|\le1\} )\times (L_j\cap \{\epsilon\le |y|\le1\} )$, it attains its maximum. Therefore
$\cos(\alpha_{L_i,L_j})=\underset{\substack{x\in L_i\setminus\{0\}\\y\in L_j\setminus\{0\}}}{\sup} f(x,y)<1$.
\end{proof}
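For two lines through the origin this definition of the angle can be checked numerically: the supremum of $\frac{d^2(x,0)+d^2(y,0)-d^2(x,y)}{2d(x,0)d(y,0)}=\frac{\langle x,y\rangle}{|x||y|}$ equals the cosine of the classical acute angle. A sketch (assuming NumPy; not part of the proof):

```python
import numpy as np

# For lines L_i = span(u), L_j = span(v), the sup in the definition of
# cos(alpha) equals |<u, v>| / (|u||v|), i.e. cos of the acute angle.
theta = 0.7
u = np.array([1.0, 0.0])
v = np.array([np.cos(theta), np.sin(theta)])

best = -1.0
for s in np.linspace(-1, 1, 20):       # 20 points: 0 is not on this grid
    for w in np.linspace(-1, 1, 20):
        x, y = s * u, w * v
        f = ((x @ x + y @ y - (x - y) @ (x - y))
             / (2 * np.linalg.norm(x) * np.linalg.norm(y)))
        best = max(best, f)

# the sup is attained when s, w have the same sign
assert abs(best - abs(u @ v)) < 1e-9   # cos(alpha) = cos(0.7)
```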
\subsection{}
Now we use the angle $\alpha_{L_i,L_j}$ to get the optimal Lipschitz constant.
\begin{Proposition}
Let $X=\cup L_i\subset\mathbb{R}^N$ be the union of affine subspaces. Suppose
$L_i\not\subseteq L_j$ for any $i\neq j$ and the subspaces intersect
pairwise, $L_i\cap L_j\neq\varnothing$.
\begin{enumerate}
\item For any $x,y\in X$ one has:
\begin{align*}\frac{d^{(X)}_{in}(x,y)}{d_{out}(x,y)}
\leq \sup\limits_{i \neq j}
\frac{1}{\sin(\frac{\alpha_{L_i,L_j}}{2})}.
\end{align*}\label{521}
\item If the collection is finite then the bound is asymptotically
sharp, i.e. there exist sequences $\{x_n\}$, $\{y_n\}$ satisfying:
\begin{align*}
\frac{d^{(X)}_{in}(x_n,y_n)}{d_{out}(x_n,y_n)}\to\sup\limits_{i \neq j}
\frac{1}{\sin(\frac{\alpha_{L_i,L_j}}{2})}.
\end{align*}\label{522}
\end{enumerate}
\end{Proposition}
\begin{proof}
\eqref{521} Let $x\in L_i$, $y\in L_j$; the non-trivial case is $i\neq
j$. Fix some $0\in L_i\cap L_j$ and use the law of sines for the
triangle $Conv(x,y,0)$, whose angles at the vertices $x,y,0$ we denote by $\alpha_x,\alpha_y,\alpha_0$:
$\frac{d(0,x)}{\sin(\alpha_y)} = \frac{d(0,y)}{\sin(\alpha_x)} =
\frac{d(x,y)}{\sin(\alpha_{0})}$. Then one has:
\begin{align*}
d^{(X)}_{in}(x,y) &\le
d(x,0)+d(0,y) =\frac{d_{out}(x,y)(\sin(\alpha_y)+\sin(\alpha_x))}{\sin(\alpha_0)}=
\\ &=
d_{out}(x,y)\frac{2\sin\frac{\alpha_x+\alpha_y}{2}\cos(\frac{\alpha_x-\alpha_y}{2})}{\sin(\alpha_0)}\le
2d_{out}(x,y)\frac{\cos\frac{\alpha_0}{2}}{\sin(\alpha_0)}.
\end{align*}
Finally, $\frac{2\cos\frac{\alpha_0}{2}}{\sin(\alpha_0)}=\frac{1}{\sin\frac{\alpha_0}{2}}$ and $\alpha_0\ge\alpha_{L_i,L_j}$, hence $d^{(X)}_{in}(x,y)\le d_{out}(x,y)\frac{1}{\sin\frac{\alpha_{L_i,L_j}}{2}}$. This gives the bound.
\
\eqref{522} To prove the asymptotic sharpness note that for
$|x_n|,|y_n|\to \infty$, the main contribution to
$d^{(X)}_{in}(x_n,y_n)$ comes from the paths inside $L_i,L_j$,
while the possible corrections from other affine spaces become negligible.
\end{proof}
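The bound of part \eqref{521} is easy to probe numerically in the simplest case of two lines through the origin, where the inner distance is realized by the path through $0$. A sketch (assuming NumPy; random data, not part of the proof):

```python
import numpy as np

# Two lines through the origin meeting at angle alpha: for x in L_i, y in L_j
# the inner distance is |x| + |y| (through 0), and the proposition bounds
# d_in/d_out by 1/sin(alpha/2).
alpha = 0.5
u = np.array([1.0, 0.0])
v = np.array([np.cos(alpha), np.sin(alpha)])

rng = np.random.default_rng(2)
bound = 1.0 / np.sin(alpha / 2)
worst = 0.0
for _ in range(1000):
    s, w = rng.uniform(0.1, 10, size=2)
    x, y = s * u, w * v
    d_in = np.linalg.norm(x) + np.linalg.norm(y)   # path through the origin
    d_out = np.linalg.norm(x - y)
    worst = max(worst, d_in / d_out)
    assert d_in / d_out <= bound + 1e-9

# with only two lines the bound is attained already for |x| = |y|
# (isosceles triangle); the proposition states asymptotic sharpness in general
x, y = u, v
ratio = (np.linalg.norm(x) + np.linalg.norm(y)) / np.linalg.norm(x - y)
assert abs(ratio - bound) < 1e-9
```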
We remark that though the statement does not assume finiteness of the
collection $\{L_i\}$, it is not very useful in the infinite case, as there
$\sup\limits_{i \neq j} \frac{1}{\sin(\frac{\alpha_{L_i,L_j}}{2})}$ can easily go to infinity.
In this way one can produce many non-Cohen-Macaulay singularities
which are still Lipschitz normally embedded.
\begin{Example}\label{Ex.Triangular.Lip.Norm.corank=1}
Suppose for some $X\subset Mat_{m\times n}(\mathbbm{K})$ the closure $\overline{X_r}$ is a union of linear subspaces. (They
all intersect, as $\mathbb{O}$ belongs to each of them.) Then $\overline{X_r}$ is Lipschitz normally embedded. For example, let $X$ be the subspace of (upper/lower)
triangular matrices in $Mat_{m\times m}(\mathbb{R})$; then $\overline{X_{m-1}}$ is Lipschitz normally embedded. As all the $L_i$ are pairwise orthogonal in this case, the optimal
Lipschitz constant is $1/\sin(\frac{\pi}{4})=\sqrt{2}$.
\end{Example}
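The structure used in this example, namely that the degeneracy locus of triangular matrices is a union of coordinate hyperplanes $\{a_{ii}=0\}$, follows from $\det$ being the product of the diagonal entries; a quick numerical confirmation (assuming NumPy; not part of the text):

```python
import numpy as np

# For upper triangular matrices det = product of the diagonal, so
# {rank <= m-1} is the union of the hyperplanes {a_ii = 0}.
rng = np.random.default_rng(5)
m = 5
for _ in range(100):
    A = np.triu(rng.standard_normal((m, m)))
    assert abs(np.linalg.det(A) - np.prod(np.diag(A))) < 1e-9

# killing one diagonal entry indeed drops the rank
A = np.triu(rng.standard_normal((m, m)))
A[2, 2] = 0.0
assert np.linalg.matrix_rank(A) <= m - 1
```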
\section{The case of determinantal singularities}\label{secgeneralcase}
In this section we discuss Lipschitz normal embeddings
of determinantal singularities. The spaces of matrices we worked with
in the previous sections can be seen as special cases of determinantal
singularities. In this section we assume that $X=
Mat_{m\times n}(\mathbbm{K})$, hence $\overline{X}_r$ is the set of matrices of rank less
than or equal to $r$. One could also work with $Mat^{sym}_{n\times n}(\mathbbm{K})$ or
$Mat^{skew-sym}_{n\times n}(\mathbbm{K})$, but for simplicity we will restrict
our discussion to $Mat_{m\times n}(\mathbbm{K})$.
Let $\morf{F}{(\mathbbm{K}^N,0)}{(Mat_{m\times n}(\mathbbm{K}),0)}$ be an analytic
map germ. Then $Y:=F^{-1}(\overline{X}_r)$ is a \emph{determinantal variety of
type} $(m,n,r+1)$ if $\operatorname{codim} (Y)=\operatorname{codim} (\overline{X}_r)$; here we
assume that $r<\min\{m,n\}$. Following Ebeling and
Guse{\u\i}n-Zade \cite{ebelingguseinzade} a determinantal singularity $Y=F^{-1}(\overline{X}_r)$ has an \emph{essentially isolated singularity} at the origin
(EIDS for
short) if there is a neighbourhood $U$ of the origin, such
that $F|_{U\setminus \{0\}}$ is transversal to the stratification of
$X_r.$ That is, for every $x \in U\setminus \{0\}$ with $\operatorname{rank}(F(x))=s$,
$0\leq s \leq r,$ the map $F$ is transversal to $X_s$ at $x.$ Any ICIS
is an EIDS of type $(m,1,1)$.
With the notion of determinantal singularities
Proposition \ref{linearEIDS} becomes the following:
\begin{Theorem}\label{EIDS2}
Let $Y$ be an EIDS defined by a linear map-germ
$\morf{F}{\mathbbm{K}^N}{Mat_{m\times n}}$. Then $Y$ is Lipschitz normally embedded.
\end{Theorem}
\begin{proof}
If $F$ is injective then this is just a reformulation of Proposition
\ref{linearEIDS}.
So assume that $F$ is not injective; then we can decompose $\mathbbm{K}^N$ as
$\ker (F)\oplus V$, where $F$ induces an isomorphism from $V$ to $\operatorname{Im}
(F)$. Hence $Y=F^{-1}(\overline{X_r})$ is isomorphic to $\ker (F)\oplus
(\operatorname{Im} (F)\cap \overline{X_r}).$ Now $\ker(F)$ is a linear space and
hence Lipschitz normally embedded and $\operatorname{Im} (F)\cap \overline{X_r}$
is Lipschitz normally embedded by Proposition \ref{linearEIDS}, hence $Y$ is
Lipschitz normally embedded by Proposition \ref{product}.
\end{proof}
We can make a more general statement than Theorem \ref{EIDS2}. Take the group $G= \mathcal R \times \mathcal H$ acting on the space of map-germs
$F: (\mathbbm{K}^N,0) \to ( Mat_{m\times n}(\mathbbm{K}),0)$, where $\mathcal R$ is the group
of germs of diffeomorphisms of
$(\mathbbm{K}^N,0)$ and $\mathcal H$ is the group $GL_m(\mathcal O_N) \times GL_n(\mathcal O_N)$, given
by invertible matrices with entries in $\mathcal O_N$ (see for
instance Fr\"uhbis-Kr\"uger and Neumer's
\cite{fruhbiskrugerneumer}). As a consequence of Theorem
\ref{EIDS2} and Lemma \ref{changediffeo} we can state the
following:
\begin{Corollary} If $\morf{F}{(\mathbbm{K}^N,0)}{(Mat_{m\times n}(\mathbbm{K}),0)}$ is
$G$-equivalent to a linear EIDS, then the conclusion of Theorem \ref{EIDS2} holds.
\end{Corollary}
Whether a determinantal singularity is Lipschitz normally embedded is
in general a more difficult question than for singularities in the
space of matrices. One cannot in general expect a determinantal
singularity to be Lipschitz normally embedded; the easiest way to see
this is to note that all ICIS are determinantal, and that there are
many ICIS that are not Lipschitz normally embedded. For example
among the simple complex surface singularities $A_n$, $D_n$, $E_6$, $E_7$ and
$E_8$ only the $A_n$'s are Lipschitz normally embedded. Since the
structure of determinantal singularities does not give us any new
tools to study ICIS, we will probably not be able to say when an ICIS
is Lipschitz normally embedded. Since $F^{-1}(\overline{X}_0)$
is often an ICIS, we probably have to assume it is Lipschitz normally
embedded to say
anything about whether $F^{-1}(\overline{X}_t)$ is Lipschitz normally
embedded. But before we discuss such assumptions further, we will see
what went wrong in our Example \ref{degenerationofcusps} and give
some more examples of determinantal singularities that are Lipschitz
normally embedded and some that are not.
In Example \ref{degenerationofcusps},
$Y_0:= F^{-1}(\overline{X}_0)$ is a point and $Y_1:= F^{-1}(\overline{X}_1)$ is
a line, so both $Y_0$ and $Y_1$ are Lipschitz normally embedded. So it
does not in general follow that if $Y_i$ is Lipschitz normally
embedded then $Y_{i+1}$ is. Now the singularity in Example
\ref{degenerationofcusps} is not an EIDS since $F^{-1}(\overline X_1)$
does not have the expected dimension (the expected dimension is
$-1$). In the next example we will see that EIDS is not enough either.
\begin{Example}[Simple Cohen-Macaulay codimension 2 surface singularities]\label{scmc2ss}
In \cite{fruhbiskrugerneumer} Fr\"uhbis-Kr\"uger and Neumer classify
simple complex Cohen-Macaulay codimension 2 singularities. They are all EIDS of
type $(3,2,2)$, and the surfaces correspond to the
rational triple points classified by Tjurina \cite{tjurina}. We will
take a closer look at two such families. First we have the family
given by the matrices:
\begin{align*}
\left(
\begin{matrix}
z & y+w^l & w^m \\
w^k & y & x
\end{matrix}
\right).
\end{align*}
This family corresponds to the family of triple points in
\cite{tjurina} called $A_{k-1,l-1,m-1}$. Tjurina shows that the dual
resolution graphs of their minimal resolutions are:
$$
\xymatrix@R=6pt@C=24pt@M=0pt@W=0pt@H=0pt{
&&\\
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[rr] &
{\hbox to 0pt{\hss$\underbrace{\hbox to 80pt{}}$\hss}}&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\lineto[r] &
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-3}{8pt}\lineto[r]\lineto[d] &
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[rr] &
{\hbox to 0pt{\hss$\underbrace{\hbox to 80pt{}}$\hss}}&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\\
&{k-1}&& \righttag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[dddd] &&{l-1}\\
&&&&\\
&&&&\blefttag{\quad}{m-1\begin{cases} \quad \\
\ \\ \ \end{cases}}{10pt} & \\
&&&&\\
&&& \righttag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt} & .\\
&&}$$
Using Remark 2.3 of \cite{spivakovsky} we see that these singularities
are minimal, and hence by the result of \cite{normallyembedded} we get
that they are Lipschitz normally embedded.
The second family is given by the matrices:
\begin{align*}
\left(
\begin{matrix}
z & y+w^l & xw \\
w^k & x & y
\end{matrix}
\right).
\end{align*}
Tjurina calls this family $B_{2l,k-1}$ and gives the dual resolution
graphs of their minimal resolutions as:
$$
\xymatrix@R=6pt@C=24pt@M=0pt@W=0pt@H=0pt{
&&&& \overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt} &\\
&&&&&\\
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[rr] &
{\hbox to 0pt{\hss$\underbrace{\hbox to 65pt{}}$\hss}}&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\lineto[r] &
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-3 \hspace{7pt}}{8pt}\lineto[r] &
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2 \hspace{20pt}}{8pt}\lineto[r] \lineto[uu]&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\dashto[rr]
&{\hbox to 0pt{\hss$\underbrace{\hbox to 80pt{}}$\hss}}&
\overtag{\lower.2pt\hbox to 3.5pt{\hss$\circ$\hss}}{-2}{8pt}\\
&2l&&& &&k-3.& \\
&&}$$
Following Spivakovsky, this is not a minimal singularity, and since it
is rational according to Tjurina, it is not Lipschitz normally embedded
by the result of \cite{normallyembedded}.
These two families do not look very different but one is Lipschitz
normally embedded and the other is not. We can do the same for all simple
Cohen-Macaulay codimension 2 surfaces, and using the results in
\cite{normallyembedded}, that rational surface singularities are
Lipschitz normally embedded if and only if they are minimal, we get
that only the family $A_{k-1,l-1,m-1}$ is Lipschitz normally
embedded. This is similar to the case of codimension 1, since only the
$A_n$ singularities are Lipschitz normally embedded among the simple
singularities.
\end{Example}
So, as we see in Example \ref{scmc2ss}, being an EIDS with Lipschitz
normally embedded singular set is not enough to ensure that the variety is
Lipschitz normally embedded. One should notice that the varieties
in Examples \ref{degenerationofcusps} and \ref{scmc2ss} are both
defined by maps $\morf{F}{\mathbb{C}^N}{Mat_{m\times n}}$ where $N<mn$. This
means that one should think of the singularity as a section of
$\overline{X}_t$, but being a subspace of a Lipschitz normally embedded space
does not imply being Lipschitz normally embedded.
If $N\geq mn$ then
one can think about the singularity being a fibration over $\overline{X}_t$,
and as we saw in Proposition \ref{product} products of Lipschitz
normally embedded spaces are Lipschitz normally embedded. Now in this
case $Y_0=F^{-1}(\overline{X}_0)$ is an ICIS if $Y$ is an EIDS, which means that we
probably cannot say anything general about whether it is Lipschitz
normally embedded or not. So the natural assumptions are
that $Y$ is an EIDS and that $Y_0$ is Lipschitz normally embedded.
\section*{Acknowledgements}
The second and third author would like to thank Walter Neumann and
Anne Pichon for
first letting us know
about Asuf Shachar's question on MathOverflow.org, and for helpful
comments about the manuscript. We would also like to thank Lev Birbrair for
sending us an early version of the paper by Katz, Katz, Kerner and
Liokumovich \cite{kerneretc}, and encouraging us to work on
the problem. We also thank Nguyen Xuan Viet Nhan for help with the proof of
Proposition \ref{bilipschizttrivial}. The first author was supported by the grant FP7-People-MCA-CIG, 334347, the second author was supported by FAPESP grant
2015/08026-4 and the third author was partially supported by FAPESP
grant 2014/00304-2 and CNPq grant 306306/2015-8.
% https://arxiv.org/abs/1110.4896
\title{Strengthened Brooks Theorem for digraphs of girth three}
\begin{abstract}
Brooks' Theorem states that a connected graph $G$ of maximum degree $\Delta$
has chromatic number at most $\Delta$, unless $G$ is an odd cycle or a
complete graph. A result of Johansson (1996) shows that if $G$ is
triangle-free, then the chromatic number drops to $O(\Delta / \log \Delta)$.
In this paper, we derive a weak analog for the chromatic number of digraphs.
We show that every (loopless) digraph $D$ without directed cycles of length
two has chromatic number $\chi(D) \leq (1-e^{-13}) \tilde{\Delta}$, where
$\tilde{\Delta}$ is the maximum geometric mean of the out-degree and
in-degree of a vertex in $D$, when $\tilde{\Delta}$ is sufficiently large.
As a corollary it is proved that there exists an absolute constant
$\alpha < 1$ such that $\chi(D) \leq \alpha (\tilde{\Delta} + 1)$ for every
$\tilde{\Delta} > 2$.
\end{abstract}
\section{Introduction}
Brooks' Theorem states that if $G$ is a connected graph with
maximum degree $\Delta$, then $\chi(G) \leq \Delta + 1$, where
equality is attained only for odd cycles and complete graphs. The
presence of triangles has significant influence on the chromatic
number of a graph. A result of Johansson \cite{J1996} states that
if $G$ is triangle-free, then $\chi(G) = O \left(\Delta / \log
\Delta \right)$. In this note, we study the chromatic number of
digraphs \cite{BFJKM2004}, \cite{M2003}, \cite{N1982} and show
that Brooks' Theorem for digraphs can also be improved when we
forbid directed cycles of length 2.
\subsection*{Digraph colorings and the Brooks Theorem}
Let $D$ be a (loopless) digraph. A vertex set $A \subset V(D)$ is
called \DEF{acyclic} if the induced subdigraph $D[A]$ has no
directed cycles. A \DEF{$k$-coloring} of $D$ is a partition of
$V(D)$ into $k$ acyclic sets. The minimum integer $k$ for which
there exists a $k$-coloring of $D$ is the \DEF{chromatic number}
$\chi(D)$ of the digraph $D$. The above definition of the chromatic
number of a digraph was first introduced by Neumann-Lara
\cite{N1982}. The same notion was independently introduced much
later by the second author when considering the circular chromatic
number of weighted (directed or undirected) graphs \cite{M2003}.
The chromatic number of digraphs was further investigated by Bokal
et al.~\cite{BFJKM2004}. The notion of chromatic number of a
digraph shares many properties with the notion of the chromatic
number of undirected graphs. Note that if $G$ is an undirected
graph, and $D$ is the digraph obtained from $G$ by replacing each
edge with the pair of oppositely directed arcs joining the same
pair of vertices, then $\chi(D) = \chi(G)$ since any two adjacent
vertices in $D$ induce a directed cycle of length two. Another
useful observation is that a $k$-coloring of a graph $G$ is a
$k$-coloring of a digraph $D$, where $D$ is a digraph obtained
from assigning arbitrary orientations to the edges of $G$. Mohar
\cite{M2010} provides some further evidence for the close
relationship between the chromatic number of a digraph and the
usual chromatic number. For digraphs, a version of Brooks' theorem
was proved in \cite{M2010}. Note that a digraph $D$ is
\DEF{$k$-critical} if $\chi(D) = k$, and $\chi(H) < k$ for every
proper subdigraph $H$ of $D$.
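The acyclicity condition in these definitions is easy to check algorithmically. The following sketch (our own illustration, not part of the paper; digraphs are represented as lists of arcs) tests whether a partition of the vertices is a $k$-coloring, i.e.\ whether every color class induces an acyclic subdigraph:

```python
def is_acyclic(vertices, arcs):
    """Kahn's algorithm: the subdigraph induced on `vertices` is acyclic
    iff all its vertices can be removed in topological order."""
    vs = set(vertices)
    indeg = {v: 0 for v in vs}
    out = {v: [] for v in vs}
    for u, w in arcs:
        if u in vs and w in vs:  # keep only induced arcs
            out[u].append(w)
            indeg[w] += 1
    stack = [v for v in vs if indeg[v] == 0]
    seen = 0
    while stack:
        v = stack.pop()
        seen += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return seen == len(vs)

def is_digraph_coloring(arcs, coloring):
    """A coloring (vertex -> color) is valid iff every color class is acyclic."""
    classes = {}
    for v, c in coloring.items():
        classes.setdefault(c, set()).add(v)
    return all(is_acyclic(cls, arcs) for cls in classes.values())
```

For the directed triangle $0\to 1\to 2\to 0$, the partition $\{0,1\},\{2\}$ is a valid $2$-coloring, while putting all three vertices in one class is not.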
\begin{theorem}[\cite{M2010}]
\label{th:0}
Suppose that $D$ is a $k$-critical digraph in which for every
vertex $v \in V(D)$, $d^{+}(v) = d^{-}(v) = k-1$. Then one of the
following cases occurs:
\begin{enumerate}
\item $k=2$ and $D$ is a directed cycle of length $n \geq 2.$
\item $k=3$ and $D$ is a bidirected cycle of odd length $n \geq 3$.
\item $D$ is a bidirected complete graph of order $k \geq 4$.
\end{enumerate}
\end{theorem}
A tight upper bound on the chromatic number of a digraph was first
given by Neumann-Lara \cite{N1982}.
\begin{theorem}[\cite{N1982}]
\label{th:01}
Let $D$ be a digraph and denote by $\Delta_o$ and $\Delta_i$ the
maximum out-degree and in-degree of $D$, respectively. Then
$$\chi(D) \leq \min \{\Delta_o,\Delta_i\} + 1.$$
\end{theorem}
In this note, we study improvements of this result using the
following substitute for the maximum degree. If $D$ is a digraph,
we let
$$
\tilde{\Delta} = \tilde{\Delta}(D) =
\max \{\sqrt{d^{+}(v)d^{-}(v)} \mid v \in V(D)\}
$$
be the maximum geometric mean of the in-degree and
out-degree of the vertices. Observe that $\tilde{\Delta} \leq
\frac{1}{2}(\Delta_o + \Delta_i)$, by the arithmetic-geometric
mean inequality (where $\Delta_o$ and $\Delta_i$ are as in Theorem
\ref{th:01}). We show that when $\tilde{\Delta}$ is large (roughly
$\tilde{\Delta} \geq 10^{10}$), then every digraph $D$ without digons has
$\chi(D) \leq \alpha \tilde{\Delta}$, for some absolute constant $\alpha
< 1$. We do not make an attempt to optimize $\alpha$, but show
that $\alpha = 1 - e^{-13}$ suffices. To improve the value of
$\alpha$ significantly, a new approach may be required.
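For concreteness, $\tilde{\Delta}$ can be computed directly from an arc list (a minimal sketch of our own; isolated vertices are omitted from this representation):

```python
import math

def tilde_delta(arcs):
    """Maximum geometric mean sqrt(d+(v) * d-(v)) over the vertices
    appearing in an arc list."""
    outdeg, indeg, verts = {}, {}, set()
    for u, w in arcs:
        outdeg[u] = outdeg.get(u, 0) + 1
        indeg[w] = indeg.get(w, 0) + 1
        verts.update((u, w))
    return max(math.sqrt(outdeg.get(v, 0) * indeg.get(v, 0)) for v in verts)
```

For a directed cycle every vertex has $d^{+}=d^{-}=1$, so $\tilde{\Delta}=1$; unbalanced degrees make $\tilde{\Delta}$ strictly smaller than $\frac{1}{2}(\Delta_o+\Delta_i)$.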
It may be true that the following analog of Johansson's result
holds for digon-free digraphs, as conjectured by McDiarmid and
Mohar \cite{MM2002}.
\begin{conjecture} \label{conj:1}
Every digraph $D$ without digons has $\chi(D) =
O(\frac{\tilde{\Delta}}{\log \tilde{\Delta}})$.
\end{conjecture}
If true, this result would be asymptotically best possible in view
of random tournaments of order $n$, whose chromatic number is
$\Omega(\frac{n}{\log n})$ while
$\tilde{\Delta} > \left( \frac{1}{2} - o(1) \right)n$, as shown by
Erd\H{o}s et al.~\cite{EGK1991}.
We also believe that the following conjecture of Reed generalizes
to digraphs without digons.
\begin{conjecture}[\cite{R1998}] \label{conj:2}
Let $\Delta$ be the maximum degree of (an undirected) graph $G$,
and let $\omega$ be the size of the largest clique. Then
$$\chi(G) \leq \left \lceil \frac{\Delta + 1 + \omega}{2} \right \rceil.$$
\end{conjecture}
If we define $\omega = 1$ for digraphs without digons, we can pose
the following conjecture for digraphs.
\begin{conjecture} \label{conj:3}
Let $D$ be a $\Delta$-regular digraph without digons. Then
$$\chi(D) \leq \left \lceil \frac{\Delta}{2} \right \rceil + 1.$$
\end{conjecture}
Conjecture \ref{conj:3} is trivial for $\Delta = 1$, and follows
from Lemma \ref{L:4} for $\Delta = 2, 3$. We believe that the
conjecture is also true for non-regular digraphs with $\Delta$
replaced by $\tilde{\Delta}$.
\subsection*{Basic definitions and notation}
We end this section by introducing some terminology that we will
be using throughout the paper. The notation is standard and we
refer the reader to \cite{BG2001} for an extensive treatment of
digraphs. All digraphs in this paper are \DEF{simple}, i.e. there
are no loops or multiple arcs in the same direction. We use $xy$
to denote the arc joining vertices $x$ and $y$, where $x$ is the
\DEF{initial vertex} and $y$ is the \DEF{terminal vertex} of the
arc $xy$. We denote by $A(D)$ the set of arcs of the digraph $D$.
For $v \in V(D)$ and $e \in A(D)$, we denote by $D-v$ and $D-e$
the subdigraph of $D$ obtained by deleting $v$ and the subdigraph
obtained by removing $e$, respectively. We let $d^{+}_D (v)$ and
$d^{-}_D(v)$ denote the \DEF{out-degree} (the number of arcs whose
initial vertex is $v$) and the \DEF{in-degree} (the number of arcs
whose terminal vertex is $v$) of $v$ in $D$, respectively. The
subscript $D$ may be omitted if it is clear from the context. A
vertex $v$ is said to be \DEF{Eulerian} if $d^{+}(v) = d^{-}(v)$.
The digraph $D$ is \DEF{Eulerian} if every vertex in $D$ is
Eulerian. A digraph $D$ is \DEF{$\Delta$-regular} if $d^{+}(v) =
d^{-}(v) = \Delta$ for all $v \in V(D)$. We say that $u$ is an
\DEF{out-neighbor} (\DEF{in-neighbor}) of $v$ if $vu$ ($uv$) is an
arc. We denote by $N^{+}(v)$ and $N^{-}(v)$ the set of
out-neighbors and in-neighbors of $v$, respectively. The
\DEF{neighborhood} of $v$, denoted by $N(v)$, is defined as $N(v)=
N^{+}(v) \cup N^{-}(v)$. Every undirected graph $G$ determines a
\DEF{bidirected} digraph $D(G)$ that is obtained from $G$ by
replacing each edge with two oppositely directed edges joining the
same pair of vertices. If $D$ is a digraph, we let $G(D)$ be the
\DEF{underlying undirected graph} obtained from $D$ by
``forgetting'' all orientations. A digraph $D$ is said to be
\DEF{(weakly) connected} if $G(D)$ is connected. The \DEF{blocks}
of a digraph $D$ are the maximal subdigraphs $D'$ of $D$ whose
underlying undirected graph $G(D')$ is 2-connected. A \DEF{cycle}
in a digraph $D$ is a cycle in $G(D)$ that does not use parallel
edges. A \DEF{directed cycle} in $D$ is a subdigraph forming a
directed closed walk in $D$ whose vertices are all distinct. A
directed cycle consisting of exactly two vertices is called a
\DEF{digon}.
The rest of the paper is organized as follows. In Section 2, we
improve Brooks' bound for digraphs that have sufficiently large
degrees. In Section 3, we consider the problem for arbitrary
degrees.
\section{Strengthening Brooks' Theorem for large $\tilde{\Delta}$}
The main result in this section is the following theorem.
\begin{comment}
\begin{theorem}
Let $D$ be an Eulerian digraph with no digons (2-cycles). Let $\Delta = \max_{v \in V(D)}d^{+}(v)$.
Then there is an absolute constant $\Delta_0$ such that every Eulerian digraph
with $\Delta \geq \Delta_0$ has $\chi(D) \leq \alpha \Delta$, where $\alpha < 1$ is an absolute
constant.
\end{theorem}
\end{comment}
\begin{theorem} \label{th:1}
There is an absolute constant $\Delta_1$ such that every
digon-free digraph $D$ with $\tilde{\Delta} = \tilde{\Delta}(D) \geq \Delta_1$
has $\chi(D) \leq \left(1- e^{-13} \right) \tilde{\Delta}$.
\end{theorem}
\begin{comment}
First, we show that we may assume that $D$ is regular (i.e. $d^{+}(v) = d^{-}(v) = \Delta$).
We apply the following procedure. Let $D'$ be the digraph $D$ with the arcs reversed. Now,
for every $v$ in $V(D)$ not satisfying $d^{+}(v) = d^{-}(v) = \Delta$, we
add the arc $(v, v')$ if $d^{+}(v) \leq d^{-}(v)$, and the arc $(v',v)$
if $d^{+}(v) > d^{-}(v)$. Let $D_1$ be the
resulting digraph. Repeating this process, we will eventually
get a digraph $D_k$ such that $D_k$ is regular with out-degree
$\Delta$ and $D$ is a subdigraph of $D_k$. This implies that
$\chi(D) \leq \chi(D_k)$, therefore, it is sufficient
to prove the theorem for regular digraphs.
\end{comment}
The rest of this section is the proof of Theorem \ref{th:1}. The
proof is a modification of an argument found in Molloy and Reed
\cite{MR2002} for usual coloring of undirected graphs. We first
note the following simple lemma.
\begin{lemma} \label{L:1}
Let $D$ be a digraph with maximum out-degree $\Delta_o$, and
suppose we have a partial proper coloring of $D$ with at most
$\Delta_o + 1 - r$ colors. Suppose that for every vertex $v$ there
are at least $r$ colors that appear on vertices in $N^{+}(v)$ at
least twice. Then $D$ is $(\Delta_o + 1 - r)$-colorable.
\end{lemma}
\begin{proof}
Since at least $r$ colors appear twice on $N^{+}(v)$, at most
$d^{+}(v) - r \leq \Delta_o - r$ distinct colors are used on
$N^{+}(v)$, so some color is unused on the out-neighborhood of every
uncolored vertex $v$. Assigning such a color to $v$ creates no
monochromatic out-arc at $v$, and hence no monochromatic directed
cycle through $v$. Thus, one can greedily ``extend'' the partial
coloring.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:1}]
We may assume that $c_1 \tilde{\Delta} < d^{+}(v) < c_2\tilde{\Delta}$
and $c_1 \tilde{\Delta} < d^{-}(v) < c_2\tilde{\Delta}$
for each $v \in V(D)$, where $c_1 = 1 - \frac{1}{3}e^{-11}$
and $c_2 = 1 + \frac{1}{3}e^{-11}$. If not, we
remove all the vertices $v$ not satisfying the above inequality
and obtain a coloring of the remaining digraph with $\left(
1-e^{-13} \right) \tilde{\Delta}$ colors. Indeed, if a vertex $v$ does
not satisfy the above condition, then either one of $d^{+}(v)$ or
$d^{-}(v)$ is at most $c_1 \tilde{\Delta}$, or one of them exceeds
$c_2 \tilde{\Delta}$, in which case the other is at most
$\frac{1}{c_2} \tilde{\Delta}$ since $d^{+}(v)d^{-}(v) \leq \tilde{\Delta}^2$.
Note that $1 - e^{-13} > \max \{c_1, 1/c_2\}$. This ensures that some
color either does not appear on the in-neighborhood or does not appear
on the out-neighborhood of $v$, allowing us to greedily complete the
coloring on the removed vertices.
The core of the proof is probabilistic. We color the vertices
of $D$ randomly with $C$ colors, $C = \lfloor \tilde{\Delta}/2 \rfloor$.
That is, for each vertex $v$ we assign $v$ a color from
$\{1,2,..., C\}$ uniformly at random. After the random coloring,
we uncolor all the vertices that are in a monochromatic directed
path of length at least 2. Clearly, this results in a proper
partial coloring of $D$ since $D$ has no digons. For each vertex
$v$, we are interested in the number of colors which are assigned
to at least two out-neighbors of $v$ and are retained by at least
two of these vertices. For analysis, it is better to define a
slightly simpler random variable. Let $v \in V(D)$. For each color
$i$, $1 \leq i \leq C$, let $O_i$ be the set of out-neighbors of
$v$ that have color $i$ assigned to them in the first phase. Let
$X_v$ be the number of colors $i$ for which $|O_i| \geq 2$ and
such that all vertices in $O_i$ retain their color after the
uncoloring process.
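The two rounds of this procedure can be sketched as follows (our own illustration, not the paper's analysis; we uncolor every vertex lying on a monochromatic directed path with two arcs, which matches the description above since every monochromatic directed path of length at least $2$ covers its vertices by such subpaths):

```python
import random

def random_partial_coloring(vertices, arcs, C, rng=random):
    """Phase 1: assign each vertex a uniform color in {0, ..., C-1}.
    Phase 2: uncolor all vertices on monochromatic 2-arc directed paths."""
    color = {v: rng.randrange(C) for v in vertices}
    out = {v: [] for v in vertices}
    for u, w in arcs:
        out[u].append(w)
    bad = set()
    for u in vertices:
        for v in out[u]:
            for w in out[v]:
                if color[u] == color[v] == color[w]:
                    bad.update((u, v, w))
    # In a digon-free digraph every directed cycle has length >= 3 and
    # hence contains a 2-arc path, so the surviving classes are acyclic.
    return {v: c for v, c in color.items() if v not in bad}
```

With a single color, a directed triangle is fully uncolored, while a lone arc survives (its color class induces no directed cycle).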
For every vertex $v$, we let $A_v$ be the event that $X_v$ is less
than $\frac{1}{2}e^{-11}\tilde{\Delta} + 1$. We will show that with
positive probability none of the events $A_v$ occur. Then Lemma
\ref{L:1} will imply that $\chi(D) \leq (c_2-\frac{1}{2}e^{-11})
\tilde{\Delta} \leq (1 - e^{-13})\tilde{\Delta}$, finishing the proof. We will
use the symmetric version of the Lov\'{a}sz Local Lemma (see for
example \cite{AS1992}). Note that the color assigned initially to
a vertex $u$ can affect $X_v$ only if $u$ and $v$ are joined by a
path of length at most 3. Thus, $A_v$ is mutually independent of
all except at most $ (2c_2\tilde{\Delta}) + (2c_2\tilde{\Delta})^{2} +
(2c_2\tilde{\Delta})^3 + (2c_2\tilde{\Delta})^4 + (2c_2\tilde{\Delta})^5 +
(2c_2\tilde{\Delta})^6 \leq 100 \tilde{\Delta}^{6}$ other events $A_w$.
Therefore, by the symmetric version of the Local Lemma, it
suffices to show that for each event $A_v$, $4 \cdot 100 \tilde{\Delta}^6
\mathbb P[A_v] < 1$. We will show that $\mathbb P[A_v] < \tilde{\Delta}^{-7}$, which gives
$400 \tilde{\Delta}^6 \mathbb P[A_v] < 400/\tilde{\Delta} < 1$ for $\tilde{\Delta} > 400$. We do
this by proving the following two lemmas.
\begin{lemma} \label{L:2}
$\mathbb E[X_v] \geq e^{-11}\tilde{\Delta} - 1$.
\end{lemma}
\begin{proof}
Let $X'_v$ be the random variable denoting the number of colors
that are assigned to exactly two out-neighbors of $v$ and are
retained by both of these vertices. Clearly, $X_v \geq X'_v$ and
therefore it suffices to consider $\mathbb E[X'_v]$.
Note that color $i$ will be counted by $X'_v$ if two vertices $u,w
\in N^{+}(v)$ are colored $i$ and no other vertex in $S = N(u)
\cup N^{+}(v) \cup N(w)$ is assigned color $i$. This will give us
a lower bound on $\mathbb E[X'_v]$. There are $C$ choices for color $i$
and at least $\binom{c_1\tilde{\Delta}}{2}$ choices for the set
$\{u,w\}$. The probability that no vertex in $S$ gets color $i$ is
at least $(1- \frac{1}{C})^{|S|} \geq (1- \frac{1}{C})^{5c_2
\tilde{\Delta}}$. Therefore, by linearity of expectation, we can
estimate:
\begin{eqnarray*}
\mathbb E[X'_v] &\geq& C \binom{c_1\tilde{\Delta}}{2} \left( \frac{1}{C}
\right)^2
\left(1- \frac{1}{C}\right)^{5c_2 \tilde{\Delta}}\\
&\geq& c_1(c_1\tilde{\Delta} - 1) \exp(-5c_2\tilde{\Delta} / C - 1/C) \\
&\geq& \frac{\tilde{\Delta}}{e^{11}}- 1
\end{eqnarray*}
for $\tilde{\Delta}$ sufficiently large.
\end{proof}
\begin{lemma} \label{L:3}
$\mathbb P \left[ | X_v - \mathbb E[X_v] | > \log \tilde{\Delta} \sqrt{\mathbb E[X_v]} \,
\right] < \tilde{\Delta}^{-7}$.
\end{lemma}
\begin{proof}
Let $AT_v$ be the random variable counting the number of colors
assigned to at least two out-neighbors of $v$, and $Del_v$ the
random variable that counts the number of colors assigned to at
least two out-neighbors of $v$ but removed from at least one of
them. Clearly, $X_v = AT_v - Del_v$ and therefore it suffices to
show that each of $AT_v$ and $Del_v$ are sufficiently concentrated
around their means. We will show that for $t = \frac{1}{2} \log
\tilde{\Delta} \sqrt{\mathbb E[X_v]}$ the following estimates hold:
\medskip
Claim 1: $\mathbb P \left[|AT_v - \mathbb E[AT_v]| > t \right] < 2
e^{-t^2/(8 \tilde{\Delta})}$.
\medskip
Claim 2: $\mathbb P \left[|Del_v - \mathbb E[Del_v]| > t \right]
< 4 e^{-t^2/(100 \tilde{\Delta})}$.
\medskip
\noindent
The two above inequalities yield that, for $\tilde{\Delta}$ sufficiently
large,
\begin{eqnarray*}
\mathbb P[ | X_v - \mathbb E[X_v] | > \log \tilde{\Delta} \sqrt{\mathbb E[X_v]}] &\leq& 2
e^{-\frac{t^2}{8 \tilde{\Delta}}} + 4 e^{-\frac{t^2}{100 \tilde{\Delta}}}\\
&\leq& \tilde{\Delta}^{- \log \tilde{\Delta}}\\
&<& \tilde{\Delta}^{-7},
\end{eqnarray*}
as we require. So, it remains to establish both claims.
To prove Claim 1, we use a version of Azuma's inequality found in
\cite{MR2002}, called the Simple Concentration Bound.
\begin{theorem}[Simple Concentration Bound] \label{th:1.5}
Let $X$ be a random variable determined by $n$ independent trials
$T_1,..., T_n$, and satisfying the property that changing the
outcome of any single trial can affect $X$ by at most $c$. Then
$$\mathbb P[|X-\mathbb E[X]| > t] \leq 2e^{-\frac{t^2}{2c^2n}}. $$
\end{theorem}
Note that $AT_v$ depends only on the colors assigned to the
out-neighbors of $v$, and that each random choice can affect $AT_v$
by at most 1. Therefore, we can take $c=1$ in
the Simple Concentration Bound for $X=AT_v$.
Since the choice of random color assignments are made
independently over the vertices and since $d^{+}(v) \leq c_2
\tilde{\Delta}$, we immediately have the first claim.
For Claim 2, we use the following variant of Talagrand's
Inequality (see \cite{MR2002}).
\begin{theorem}[Talagrand's Inequality] \label{th:2}
Let $X$ be a nonnegative random variable, not equal to 0, which
is determined by $n$ independent trials, $T_1,\dots,T_n$, and
satisfies the following conditions for some $c,r > 0$:
\begin{enumerate}
\item Changing the outcome of any single trial can affect $X$ by
at most $c$. \item For any $s$, if $X \geq s$, there are at most
$rs$ trials whose exposure certifies that $X \geq s$.
\end{enumerate}
Then for any $0 \leq \lambda \leq \mathbb E[X]$,
$$ \mathbb P \left[|X-\mathbb E[X]| > \lambda + 60c \sqrt{r\mathbb E[X]} \, \right]
\leq 4e^{-\frac{\lambda^2}{8c^2r\mathbb E[X]}}.
$$
\end{theorem}
We apply Talagrand's inequality to the random variable $Del_v$.
Note that we can take $c=1$ since any single random
color assignment can affect $Del_v$ by at most 1. Now, suppose
that $Del_v \geq s$. One can certify that $Del_v \geq s$ by
exposing, for each of the $s$ colors $i$, two random color
assignments in $N^{+}(v)$ that certify that at least two vertices
got color $i$, and exposing at most two other color assignments
which show that at least one vertex colored $i$ lost its color.
Therefore, $Del_v \geq s$ can be certified by exposing $4s$ random
choices, and hence we may take $r=4$ in Talagrand's inequality.
Note that $t= \frac{1}{2} \log \tilde{\Delta} \sqrt{\mathbb E[X_v]}
\gg 60c \sqrt{r\mathbb E[Del_v]}$ since $\mathbb E[X_v] \geq \tilde{\Delta}/e^{11} - 1$ and
$\mathbb E[Del_v] \leq c_2 \tilde{\Delta}$. Now, taking $\lambda$ in
Talagrand's inequality to be $\lambda = \frac{1}{2}t$, we obtain
that $\mathbb P[|Del_v -\mathbb E[Del_v]| > t] \leq \mathbb P[|Del_v-\mathbb E[Del_v]| >
\lambda + 60c \sqrt{r\mathbb E[Del_v]}\,]$. Therefore, provided that $\lambda
\leq \mathbb E[Del_v]$, we have confirmed Claim 2.
It is sufficient to show that $\mathbb E[Del_v] = \Omega (\tilde{\Delta})$,
since $\lambda = O(\log \tilde{\Delta} \sqrt{\tilde{\Delta}})$. The probability
that \emph{exactly} two vertices in $N^{+}(v)$ are assigned a
particular color $c$ is at least $\frac{c_1\tilde{\Delta}^2}{2} C^{-2}
(1-1/C)^{c_2\tilde{\Delta}} \approx 2e^{-10}$, a constant. It remains to
show that the probability that at least one of these vertices
loses its color is also (at least) a constant. We use Janson's
Inequality (see \cite{AS1992}). Let $u$ be one of the two vertices
colored $c$. We only compute the probability that $u$ gets
uncolored. We may assume that the other vertex colored $c$ is not
a neighbor of $u$ since this will only increase the probability.
We show that with large probability there exists a monochromatic
directed path of length at least 2 starting at $u$. Let $ \Omega =
N^{+}(u) \cup N^{++}(u)$, where $N^{++}(u)$ is the second
out-neighborhood of $u$. Each vertex in $\Omega$ is colored $c$
with probability $\frac{2}{\tilde{\Delta}}$. Enumerate all the directed
paths of length 2 starting at $u$ and let $P_i$ be the $i^{th}$
path. Clearly, there are at least $(c_1 \tilde{\Delta})^2$ such paths
$P_i$. Let $A_i$ be the set of vertices of $P_i$, and denote by
$B_i$ the event that all vertices in $A_i$ receive the same color.
Then, clearly $\mathbb P[B_i] = \frac{1}{(\lfloor \tilde{\Delta}/2 \rfloor)^2}
\geq \frac{4}{\tilde{\Delta}^2}$. Then, $\mu = \sum \mathbb P[B_i] \geq
\frac{4}{\tilde{\Delta}^2} \cdot (c_1\tilde{\Delta})^2 = 4c_1^2$. Now, if
$\delta = \sum_{i,j : A_i \cap A_j \neq \emptyset}\mathbb P[B_i \cap
B_j]$ in Janson's Inequality satisfies $\delta < \mu$, then
applying Janson's Inequality, with the sets $A_i$ and events
$B_i$, we obtain that the probability that none of the events
$B_i$ occur is at most $e^{-1}$, and hence the probability that
$u$ does not retain its color is at least $1-e^{-1}$, as required.
Now, assume that $\delta \geq \mu$. The following gives an upper
bound on $\delta$:
\begin{eqnarray*}
\delta &=& \sum_{i,j : A_i \cap A_j \neq \emptyset}\mathbb P[B_i \cap B_j]
~=~ \sum_{i, j: A_i \cap A_j \neq \emptyset} \frac{1}{(\lfloor \tilde{\Delta}/2 \rfloor)^3} \\
&\leq& (c_2\tilde{\Delta})^2 \cdot 2c_2\tilde{\Delta} \cdot
\frac{8}{(\tilde{\Delta}-2)^3} < 32,
\end{eqnarray*}
for $\tilde{\Delta} \geq 100$. Now, we apply Extended Janson's Inequality
(again see \cite{AS1992}). This inequality now implies that the
probability that none of the events $B_i$ occur is at most
$e^{-c_1^2/4}$, a constant. Therefore, by linearity of expectation
$\mathbb E[Del_v] = \Omega(\tilde{\Delta})$.
\end{proof}
Clearly, since $\mathbb E[X_v] \leq c_2 \tilde{\Delta}$, Lemmas \ref{L:2} and
\ref{L:3} imply that $\mathbb P[A_v] < \tilde{\Delta}^{-7}$. This completes the
proof of Theorem \ref{th:1}.
\end{proof}
\section{Brooks' Theorem for small $\tilde{\Delta}$}
The bound in Theorem \ref{th:1} is only useful for large
$\tilde{\Delta}$. Rough estimates suggest that $\tilde{\Delta}$ needs to be at
least in the order of $10^{10}$. The above approach is unlikely to
improve this bound significantly with a more detailed analysis.
In this section, we improve Brooks' Theorem for all values of
$\tilde{\Delta}$. We achieve this by using a result on list colorings
found in \cite{HM2010}. List coloring of digraphs is defined
analogously to list coloring of undirected graphs. A precise
definition is given below.
Let ${\cal C}$ be a finite set of colors. Given a digraph $D$, let $L:
v\mapsto L(v)\subseteq {\cal C}$ be a \DEF{list-assignment} for $D$,
which assigns to each vertex $v\in V(D)$ a set of colors. The set
$L(v)$ is called the \DEF{list} (or the set of \DEF{admissible
colors}) for $v$. We say $D$ is \DEF{$L$-colorable} if there is an
\DEF{$L$-coloring} of $D$, i.e., each vertex $v$ is assigned a
color from $L(v)$ such that every color class induces an acyclic
subdigraph in $D$. $D$ is said to be \DEF{$k$-choosable} if $D$ is
$L$-colorable for every list-assignment $L$ with $|L(v)| \geq k$
for each $v \in V(D)$. We denote by $\chi_l(D)$ the smallest
integer $k$ for which $D$ is $k$-choosable.
The result characterizes the structure of non-$L$-colorable
digraphs whose list sizes are one less than required by Brooks'
condition.
\begin{theorem}[\cite{HM2010}] \label{th:4}
Let $D$ be a connected digraph, and $L$ an assignment of colors to
the vertices of $D$ such that $|L(v)| \geq d^{+}(v)$ if $d^{+}(v)
= d^{-}(v)$ and $|L(v)| \geq \min \{d^{+}(v), d^{-}(v)\} + 1$
otherwise. Suppose that $D$ is not $L$-colorable. Then $D$ is
Eulerian, $|L(v)|=d^{+}(v)$ for each $v \in V(D)$, and every block
of $D$ is one of the following:
\begin{enumerate}
\item[\rm (a)] a directed cycle (possibly a digon),
\item[\rm (b)] an odd bidirected cycle, or
\item[\rm (c)] a bidirected complete digraph.
\end{enumerate}
\end{theorem}
Now, we can state the next result of this section.
\begin{lemma} \label{L:4}
Let $D$ be a connected digraph without digons, and let $\tilde{\Delta} =
\tilde{\Delta}(D)$. If $\tilde{\Delta} > 1$, then $\chi_l(D) \leq \lceil \tilde{\Delta}
\rceil$.
\end{lemma}
\begin{proof}
We apply Theorem \ref{th:4} with all lists $L(v)$, $v \in V(D)$
having cardinality $\lceil \tilde{\Delta} \rceil$. It is clear that the
conditions of Theorem \ref{th:4} are satisfied for every Eulerian
vertex $v$. It is easy to verify that the conditions are also
satisfied for non-Eulerian vertices. Now, if $D$ is not
$L$-colorable, then by Theorem \ref{th:4}, $D$ is Eulerian and
$d^{+}(v) = \lceil \tilde{\Delta} \rceil$ for every vertex $v$. This
implies that $D$ is $\lceil \tilde{\Delta} \rceil$-regular. Now, the
conclusion of Theorem \ref{th:4} implies that $D$ consists of a
single block of type (a), (b) or (c). This means that either $D$
is a directed cycle (and hence $\tilde{\Delta} = 1$), or $D$ contains a
digon, a contradiction. This completes the proof.
\end{proof}
We can now prove the main result of this section, which improves
Brooks' bound for all digraphs without digons.
\begin{theorem} \label{th:5}
Let $D$ be a connected digraph without digons, and let $\tilde{\Delta} =
\tilde{\Delta}(D)$. If $\tilde{\Delta} > 1$, then $\chi(D) \leq \alpha (\tilde{\Delta}
+ 1) $ for some absolute constant $\alpha < 1$.
\end{theorem}
\begin{proof}
We define $\alpha = \max \left \{ \frac{\Delta_1}{\Delta_1 + 1},
1-e^{-13} \right \}$, where $\Delta_1$ is the constant in the
statement of Theorem \ref{th:1}. Now, if $\tilde{\Delta} < \Delta_1$ then
by Lemma \ref{L:4}, it follows that $\chi(D) \leq \lceil \tilde{\Delta}
\rceil \leq \alpha (\tilde{\Delta} + 1)$. If $\tilde{\Delta} \geq \Delta_1$,
then by Theorem \ref{th:1} we obtain that $\chi(D) \leq \left(1-
e^{-13} \right) \tilde{\Delta} \leq \alpha (\tilde{\Delta} + 1)$, as required.
\end{proof}
An interesting question to consider is the tightness of the bound
of Lemma \ref{L:4}. It is easy to see that the bound is tight for
$\lceil \tilde{\Delta} \rceil = 2$ by considering, for example, a
directed cycle with an additional chord or a digraph consisting of
two directed triangles sharing a common vertex. The digraph in
Figure~\ref{fig:2} shows that the bound is also tight for $\lceil \tilde{\Delta}
\rceil = 3$. It is easy to verify that, up to symmetry, the
coloring outlined in the figure is the unique 2-coloring. Now,
adding an additional vertex whose three out-neighbors are the
vertices of the middle triangle and whose three in-neighbors are the
remaining vertices, we obtain a 3-regular digraph that requires
three colors.
Another example of a digon-free 3-regular digraph on 7 vertices
requiring three colors is the following. Take the Fano Plane and
label its points by $1,2,\ldots,7$. For every line of the Fano plane
containing points $a, b, c$, take a directed cycle through $a,b,c$
(with either orientation). There is a unique directed 3-cycle
through any two vertices because every two points lie on exactly
one line. This shows that the Fano plane digraphs are not isomorphic
to the digraph from the previous paragraph.
Finally, it is easy to verify that the resulting digraph requires
three colors.
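The Fano plane construction is small enough to check by brute force. The following sketch is our illustration, not code from the paper; the line set is one standard labelling of the Fano plane, and one of the two possible orientations per line is fixed arbitrarily. It verifies that the resulting digraph is digon-free, 3-regular, and has dichromatic number three.

```python
from itertools import product

# One standard labelling of the seven lines of the Fano plane
# (an assumption; any labelling works).
LINES = [(1, 2, 3), (1, 4, 5), (1, 6, 7),
         (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6)]

def fano_digraph():
    """Orient every line as a directed triangle (one fixed choice of the
    two possible orientations per line)."""
    arcs = set()
    for a, b, c in LINES:
        arcs |= {(a, b), (b, c), (c, a)}
    return arcs

def is_acyclic(vertices, arcs):
    """Kahn's algorithm on the subdigraph induced by `vertices`."""
    verts = set(vertices)
    sub = [(u, v) for (u, v) in arcs if u in verts and v in verts]
    indeg = {v: 0 for v in verts}
    for _, v in sub:
        indeg[v] += 1
    stack = [v for v in verts if indeg[v] == 0]
    removed = 0
    while stack:
        u = stack.pop()
        removed += 1
        for x, y in sub:
            if x == u:
                indeg[y] -= 1
                if indeg[y] == 0:
                    stack.append(y)
    return removed == len(verts)

def dichromatic_number(n, arcs):
    """Smallest k such that {1,...,n} splits into k acyclic color classes."""
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if all(is_acyclic([v for v in range(1, n + 1)
                               if coloring[v - 1] == c], arcs)
                   for c in range(k)):
                return k
```

Since any two points lie on exactly one line, each ordered pair of points carries at most one arc, so the digraph is digon-free regardless of the orientation choices.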
\begin{figure}[htb]
\centering
\includegraphics[height=5cm]{Delta3.pdf}
\caption{Constructing a $3$-regular digraph $D$ with $\chi(D) = 3$.}
\label{fig:2}
\end{figure}
Note that the digraphs in the above examples are 3-regular
tournaments on 7 vertices. It is not hard to check that
every tournament on 9 vertices has $\lceil \tilde{\Delta} \rceil =4$, and
yet is $3$-colorable. In general, we pose the following problem.
\begin{question} \label{Q:1}
What is the smallest integer $\Delta_0$ such that every digraph
$D$ without digons with $\lceil \tilde{\Delta}(D) \rceil = \Delta_0$
satisfies $\chi(D) \leq \Delta_0 - 1$?
\end{question}
\bigskip
Note that this is a weak version of Conjecture \ref{conj:3}. By
Theorem \ref{th:1}, $\Delta_0$ exists. However, we believe that
$\Delta_0$ is small, possibly equal to 4. The following
proposition shows that the above holds for every $\lceil \tilde{\Delta}
\rceil \geq \Delta_0$.
\begin{proposition} \label{prop:1}
Let $\Delta_0$ be defined as in Question \ref{Q:1}. Then every
digon-free digraph $D$ with $\lceil \tilde{\Delta}(D) \rceil \geq
\Delta_0$ satisfies $\chi(D) \leq \lceil \tilde{\Delta}(D) \rceil -1$.
\end{proposition}
\begin{proof}
The proof is by induction on $\lceil \tilde{\Delta} \rceil$. If $\lceil
\tilde{\Delta} \rceil = \Delta_0$ this holds by the definition of
$\Delta_0$. Otherwise, let $U$ be a maximal acyclic subset of $D$.
Then $\lceil \tilde{\Delta}(D-U) \rceil \leq \lceil \tilde{\Delta}(D) \rceil - 1$
for otherwise $U$ would not be maximal. Since we can color $U$ by a
single color, we can apply the induction hypothesis to complete
the proof.
\end{proof}
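The peeling argument in the proof of Proposition \ref{prop:1} can be sketched algorithmically; the code below is our illustration, not part of the paper, with digraphs given as arc lists. It repeatedly extracts an inclusion-maximal acyclic vertex set and assigns it a fresh color.

```python
def is_acyclic(vertices, arcs):
    """Kahn's algorithm on the subdigraph induced by `vertices`."""
    verts = set(vertices)
    sub = [(u, v) for (u, v) in arcs if u in verts and v in verts]
    indeg = {v: 0 for v in verts}
    for _, v in sub:
        indeg[v] += 1
    stack = [v for v in verts if indeg[v] == 0]
    removed = 0
    while stack:
        u = stack.pop()
        removed += 1
        for x, y in sub:
            if x == u:
                indeg[y] -= 1
                if indeg[y] == 0:
                    stack.append(y)
    return removed == len(verts)

def maximal_acyclic_set(vertices, arcs):
    """Greedily grow an inclusion-maximal set inducing an acyclic
    subdigraph; a rejected vertex never becomes addable, since a
    superset of a cyclic-inducing set still induces a cycle."""
    U = []
    for v in vertices:
        if is_acyclic(U + [v], arcs):
            U.append(v)
    return U

def peeling_coloring(vertices, arcs):
    """Color a digraph by repeatedly peeling off a maximal acyclic set,
    as in the induction step of the proposition."""
    remaining = list(vertices)
    coloring, color = {}, 0
    while remaining:
        U = maximal_acyclic_set(remaining, arcs)
        for v in U:
            coloring[v] = color
        remaining = [v for v in remaining if v not in U]
        color += 1
    return coloring
```

On a directed triangle this peels off two vertices, then the third, producing a proper 2-coloring.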
As a corollary we get:
\begin{corollary}
There exists a positive constant $\alpha < 1$ such that for every
digon-free digraph $D$ with $\lceil \tilde{\Delta}(D) \rceil \geq
\Delta_0$, $\chi(D) \leq \alpha \lceil \tilde{\Delta} \rceil$.
\end{corollary}
\begin{proof}
Let $\alpha = \max \left \{ \frac{\lceil \Delta_1 \rceil}{\lceil
\Delta_1 \rceil + 1}, 1-e^{-13} \right \}$, where $\Delta_1$ is
the constant in the statement of Theorem \ref{th:1}. Now, applying
Theorem \ref{th:1} or Proposition \ref{prop:1} gives the result.
\end{proof}
| {
"timestamp": "2011-10-25T02:00:14",
"yymm": "1110",
"arxiv_id": "1110.4896",
"language": "en",
"url": "https://arxiv.org/abs/1110.4896",
"abstract": "Brooks' Theorem states that a connected graph $G$ of maximum degree $\\Delta$ has chromatic number at most $\\Delta$, unless $G$ is an odd cycle or a complete graph. A result of Johansson (1996) shows that if $G$ is triangle-free, then the chromatic number drops to $O(\\Delta / \\log \\Delta)$. In this paper, we derive a weak analog for the chromatic number of digraphs. We show that every (loopless) digraph $D$ without directed cycles of length two has chromatic number $\\chi(D) \\leq (1-e^{-13}) \\tilde{\\Delta}$, where $\\tilde{\\Delta}$ is the maximum geometric mean of the out-degree and in-degree of a vertex in $D$, when $\\tilde{\\Delta}$ is sufficiently large. As a corollary it is proved that there exists an absolute constant $\\alpha < 1$ such that $\\chi(D) \\leq \\alpha (\\tilde{\\Delta} + 1)$ for every $\\tilde{\\Delta} > 2$.",
"subjects": "Combinatorics (math.CO)",
"title": "Strengthened Brooks Theorem for digraphs of girth three",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462215484651,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7099326209201704
} |
https://arxiv.org/abs/2201.09691 | Multidimensional Manhattan Preferences | A preference profile with $m$ alternatives and $n$ voters is $d$-Manhattan (resp. $d$-Euclidean) if both the alternatives and the voters can be placed into the $d$-dimensional space such that between each pair of alternatives, every voter prefers the one which has a shorter Manhattan (resp. Euclidean) distance to the voter. Following Bogomolnaia and Laslier [Journal of Mathematical Economics, 2007] and Chen and Grottke [Social Choice and Welfare, 2021] who look at $d$-Euclidean preference profiles, we study which preference profiles are $d$-Manhattan depending on the values $m$ and $n$.First, we show that each preference profile with $m$ alternatives and $n$ voters is $d$-Manhattan whenever $d$ $\geq$ min($n$, $m$-$1$). Second, for $d = 2$, we show that the smallest non $d$-Manhattan preference profile has either three voters and six alternatives, or four voters and five alternatives, or five voters and four alternatives. This is more complex than the case with $d$-Euclidean preferences (see [Bogomolnaia and Laslier, 2007] and [Bulteau and Chen, 2020]. | \section{Introduction}\label{sec:intro}
Modelling voters' linear preferences as geometric distances is an approach popular in many research fields such as economics~\cite{Hotelling1929,Downs1957,Eguia2011},
political and social sciences~\cite{Sto1963,Poo1989,Ene1990,BoLa2006},
and psychology~\cite{Coombs1964,BorGroMai2018}.
The idea is to consider the alternatives and voters as points in the $d$-dimensional space such that
\begin{align*}%
\text{for each two alternatives, each voter prefers the one which is \emph{closer} to her.}\footnotemark~~\tag{CLOSE}\label{eq:closeness}
\end{align*}\footnotetext{Throughout the paper, we use ``she'' to refer to a voter.} %
If the proximity is measured via the Euclidean distance, preference profiles obeying \eqref{eq:closeness} are called \dEuclid.
While the \dEuclid{} model seems to be canonical, in real life, however, the shortest path between two points may be Manhattan rather than Euclidean.
For instance, in urban geography, the alternatives (e.g., a shop or a supermarket) and the voters (e.g., individuals) are often located on grid-like streets.
That is, the distance between an alternative and a voter is more likely to be measured according to the Manhattan distance (aka.\ Taxicab distance), i.e., the sum of the absolute differences of the coordinates of the facility and the individual.
Similarly to the Euclidean preference notion, we call a preference profile \myemph{\dManhattan} if there exists an embedding for the voters and the alternatives which satisfies the condition~\eqref{eq:closeness} under the Manhattan distance.
Besides the previously mentioned literature, Euclidean and Manhattan preferences have been studied for a wide range of applications such as facility location~\cite{LarSad1983}, group decision making~\cite{SSL2007}, and voting and committee elections~\cite{EckKla2010}.
Due to their practical relevance, \citet{BoLa2006} studied how restrictive the assumption of Euclidean preferences is.
They showed that an arbitrary preference profile with $n$ voters and $m$ alternatives is \dEuclid{} if and only if $d\ge \min(n,m-1)$.
For $d=1$, their smallest counter-example of a non-\dEuclid[1] profile consists of either $3$ voters and $3$ alternatives or $2$ voters and $4$~alternatives, which is tight according to \cite{ChenGrottke2021}.
For $d=2$, their smallest counter-example of a non-\dEuclid[2] profile consists of either $4$ voters and $4$ alternatives or $3$ voters and $8$~alternatives, which is also tight by \citet{BulChe2020}.
To the best of our knowledge, there is no such kind of characterization result on the \dManhattan{} preferences.
In this paper, we aim to close this gap and study how restrictive the assumption that a preference profile is \dManhattan{} is.
First, we prove that, similarly to the Euclidean preferences case, a preference profile with $m$ alternatives and $n$ voters is \dManhattan\ if $d\ge \min(m-1,n)$ (\cref{thm:d=n->dManhattan} and \cref{thm:d=m-1->Manhattan}).
Then, focusing on the two-dimensional case, we investigate how restricted two-dimensional \dManhattan{} preferences are.
More precisely, we seek to determine tight bounds on the smallest number of either alternatives or voters of a non-\dManhattan profile.
We find that the result is not comparable with the one for the \dEuclid{} case.
More precisely, we show that an arbitrary preference profile is \dManhattan[2] if and only if it
either has at most three alternatives (\cref{thm:d=m-1->Manhattan} and \cref{thm:no-n5-m4}),
or at most two voters (\cref{thm:d=n->dManhattan}),
or at most three voters and at most five alternatives (\cref{thm:no-n3-m6} and \cref{prop:n3-m5+n4-m4}),
or at most four voters and at most four alternatives (\cref{thm:no-n4-m5} and \cref{prop:n3-m5+n4-m4}).
The paper is organized as follows.
\cref{sec:defi} introduces necessary definitions and notations.
In \cref{sec:Manhattan-positive},
we examine positive findings.
In \cref{sec:Manhattan-negative},
we examine the negative findings.
In \cref{sec:experiments} we present our experimental results.
We conclude with a few future research directions in \cref{sec:conclude}.
\section{Preliminaries}\label{sec:defi}
Given a non-negative integer~$t$, we use \myemph{$[t]$} to denote the set~$\{1,2,\ldots,t\}$.
Let $\vect{x}$ denote a vector of length~$d$ or a point in a $d$-dimensional space, and
let $i$ denote an index $i\in [d]$.
We use \myemph{$\vect{x}[i]$} to refer to the $i^{\text{th}}$~value in~$\vect{x}$.
Given three values~$x,y,z$, we say that $y$ is \myemph{\textsf{between}{}} $x$ and $z$ if either $x \le y \le z$ or $z \le y \le x$ holds. %
Let ${\cal A}\coloneqq \{1,\ldots,m\}$ be a set of alternatives. %
A \myemph{preference order}~$\ensuremath{\succ}$ of ${\cal A}$ is a linear order of~${\cal A}$; a linear order is a binary relation which is total, irreflexive, and transitive.
For two distinct alternatives~$a$ and $b$, the relation \myemph{$a\ensuremath{\succ} b$} means that $a$ is preferred to (or in other words, ranked higher than) $b$ in~$\ensuremath{\succ}$.
An alternative~$c$ is \myemph{the most-preferred alternative in $\ensuremath{\succ}$}
if for any alternative~$b\in {\cal A} \setminus \{c\}$ it holds that $c \ensuremath{\succ} b$.
Let $\ensuremath{\succ}$ be a preference order over~${\cal A}$.
We use \myemph{$\succeq$} to denote the binary relation which includes~$\ensuremath{\succ}$ and preserves the reflexivity,
i.e., $\succeq \coloneqq \succ\!\cup \{(a,a)\mid a\in {\cal A}\}$.
For a subset~$B\subseteq {\cal A}$ of alternatives and an alternative~$c$ not in $B$,
we use \myemph{$B\ensuremath{\succ} c$} to denote that in the preference order $\ensuremath{\succ}$ each~$b\in B$ is preferred to~$c$, i.e., $b \ensuremath{\succ} c$.
A \myemph{preference profile}~${\cal P}$ specifies the preference orders of a number of voters over a set of alternatives.
Formally, \myemph{${\cal P} \coloneqq ({\cal A}, {\cal V}, {\cal R} \coloneqq (\ensuremath{\succ}_1, \ldots, \ensuremath{\succ}_n))$},
where ${\cal A}$ denotes the set of $m$ alternatives,
${\cal V}$ denotes the set of $n$~voters,
and ${\cal R}$ is a collection of $n$ preference orders
such that each voter~$v_i\in {\cal V}$ ranks the alternatives according to the preference order~$\ensuremath{\succ}_i$ on ${\cal A}$.
Throughout the paper, if not explicitly stated otherwise, we assume ${\cal P}$ is a preference profile of the form~$({\cal A},{\cal V},{\cal R})$.
For notational convenience, for each alternative~$a\in {\cal A}$ and each voter~$v_i\in {\cal V}$, let \myemph{$\ensuremath{\mathsf{rk}}_{i}(a)$} denote the rank of alternative~$a$ in the preference order~$\ensuremath{\succ}_i$, which is the number of alternatives which are preferred to~$a$ by voter~$v_i$, i.e., $\ensuremath{\mathsf{rk}}_{i}(a)=|\{b \in {\cal A} \mid b\ensuremath{\succ}_i a\}|$.
Given two points~$\ensuremath{\vect{p}}, \ensuremath{\vect{q}}$ in the $d$-dimensional space~$\ensuremath{\mathds{R}^{d}}$, we write
\myemph{$\Edis{\ensuremath{\vect{p}}-\ensuremath{\vect{q}}}$} to denote the Euclidean distance of $\ensuremath{\vect{p}}$ and $\ensuremath{\vect{q}}$, i.e.,
$\Edis{\ensuremath{\vect{p}}-\ensuremath{\vect{q}}} = \sqrt{\sum_{i=1}^{d}(\ensuremath{\vect{p}}[i]-\ensuremath{\vect{q}}[i])^2}$,
and we write
\myemph{$\Mdis{\ensuremath{\vect{p}}-\ensuremath{\vect{q}}}$} to denote the Manhattan distance of $\ensuremath{\vect{p}}$ and $\ensuremath{\vect{q}}$, i.e.,
$\Mdis{\ensuremath{\vect{p}}-\ensuremath{\vect{q}}} = \sum_{i=1}^{d}|\ensuremath{\vect{p}}[i]-\ensuremath{\vect{q}}[i]|$.
For $d=2$, the Manhattan distance of two points is equal to the length of a path between them on a rectilinear grid.
Hence, under Manhattan distances, a \myemph{circle} is a square rotated at a $45^{\circ}$ angle from the coordinate axes. %
The intersection of two Manhattan-circles can range from two points to two segments as depicted in \cref{fig:L1unit}.
\begin{figure}[t!]
\centering
\begin{tikzpicture}
\def .25 {.1}
\def .25 {.1}
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {-4/-2/u/u/voterV/{below left}/-1/-4, 6/2/v/v/voterW/{above right}/-1/-1} {
\node[\typ] at (\x*.25,\y*.25) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\node[alter,red,fill=red] at (2*.25,-6*.25) (x) {};
\node[alter,red,fill=red] at (-2*.25,6*.25) (y) {};
\drawcircle{x}{u}{colorV}
\drawcircle{x}{v}{colorW}
\node[alter,red,fill=red] at (x) {};
\node[alter,red,fill=red] at (y) {};
\end{tikzpicture}
\begin{tikzpicture}
\def .25 {.12}
\def .25 {.12}
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {-4/-2/u/u/voterV/{below left}/-1/-4, 6/2/v/v/voterW/{above right}/-1/-1} {
\node[\typ] at (\x*.25,\y*.25) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\coordinate (x) at (0*.25,-4*.25);
\coordinate (y) at (-4*.25,4*.25);
%
\drawcircle{x}{u}{colorV}
\drawcircle{x}{v}{colorW}
\gettikzxy{(v)}{\vx}{\vy}
\gettikzxy{(x)}{\xx}{.25}
\node[alter,red,fill=red] at (x) {};
\pgfmathsetlengthmacro\disRX{abs(\xx-\vx)+abs(.25-\vy)}
\path[draw,red,ultra thick] (y) -- (\vx-\disRX,\vy);
\end{tikzpicture}
\begin{tikzpicture}
\def .25 {.14}
\def .25 {.14}
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {-4/-2/u/u/voterV/{below left}/-1/-4, 6/2/v/v/voterW/{above right}/-1/-1} {
\node[\typ] at (\x*.25,\y*.25) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\coordinate (x) at (0*.25,-2*.25);
\coordinate (y) at (-4*.25,2*.25);
%
\drawcircle{x}{u}{colorV}
\drawcircle{x}{v}{colorW}
%
%
\path[draw,red,ultra thick] (y) -- (x);
\end{tikzpicture}
\begin{tikzpicture}
\def .25 {.14}
\def .25 {.14}
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {-4/-2/u/u/voterV/{below left}/-1/-4, 2/-2/v/v/voterW/{above right}/-1/-1} {
\node[\typ] at (\x*.25,\y*.25) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\coordinate (x) at (-4*.25,-6*.25);
\coordinate (y) at (-4*.25,2*.25);
%
\drawcircle{x}{u}{colorV}
\drawcircle{x}{v}{colorW}
\gettikzxy{(v)}{\vx}{\vy}
%
\pgfmathsetlengthmacro\disRX{abs(\xx-\vx)+abs(.25-\vy)}
\path[draw,red,ultra thick] (y) -- (\vx-\disRX,\vy);
\path[draw,red,ultra thick] (x) -- (\vx-\disRX,\vy);
\end{tikzpicture}
%
\caption{The intersection of two circles (in red) in the Manhattan distance in \ensuremath{\mathbb{R}^{2}} can be two points, one point and one line segment, one line segment, or two line segments.}
\label{fig:L1unit}
\end{figure}
\subsection{Basic geometric notation}
Throughout this paper,
we use lower case letters in boldface to denote points in a space.
Given two points~$\ensuremath{\vect{p}}$ and $\ensuremath{\vect{q}}$, we introduce the following notions:
Let \myemph{$\ensuremath{\mathsf{BB}}(\ensuremath{\vect{p}},\ensuremath{\vect{q}})$} denote the set of points which are contained in the (smallest) rectilinear bounding box of points~$\ensuremath{\vect{p}}$ and $\ensuremath{\vect{q}}$, i.e.,
$\ensuremath{\mathsf{BB}}(\ensuremath{\vect{p}},\ensuremath{\vect{q}})\coloneqq \{\ensuremath{\vect{x}}\in \ensuremath{\mathds{R}^{d}} \mid \min\{\ensuremath{\vect{p}}[i],\ensuremath{\vect{q}}[i]\} \le \ensuremath{\vect{x}}[i] \le \max\{\ensuremath{\vect{p}}[i],\ensuremath{\vect{q}}[i]\} \text{ for all } i\in [d]\}$. %
In a $d$-dimensional space, a bisector of two points under Manhattan distances can itself be a $d$-dimensional object, while a bisector under Euclidean distances is always $(d-1)$-dimensional.
See the right-most figure in \cref{fig:L1bisect} for an illustration.
\paragraph{The two-dimensional case.}
In the two-dimensional space, the vertical line and the horizontal line crossing any point divide the space into four non-disjoint quadrants: the north-east, south-east, north-west, and south-west quadrants.
Hence, given a point~$\ensuremath{\vect{p}}$, we use $\ensuremath{\mathsf{NE}}(\ensuremath{\vect{p}})$, $\ensuremath{\mathsf{SE}}(\ensuremath{\vect{p}})$, $\ensuremath{\mathsf{NW}}(\ensuremath{\vect{p}})$, and $\ensuremath{\mathsf{SW}}(\ensuremath{\vect{p}})$ to denote these four quadrants.
Formally,
\myemph{$\ensuremath{\mathsf{NE}}(\ensuremath{\vect{p}})\coloneqq\{\ensuremath{\vect{x}} \in \ensuremath{\mathds{R}}^2\mid \ensuremath{\vect{x}}[1]\ge \ensuremath{\vect{p}}[1] \wedge \ensuremath{\vect{x}}[2]\ge \ensuremath{\vect{p}}[2]\}$},
\myemph{$\ensuremath{\mathsf{SE}}(\ensuremath{\vect{p}})\coloneqq\{\ensuremath{\vect{x}} \in \ensuremath{\mathds{R}}^2\mid \ensuremath{\vect{x}}[1]\ge \ensuremath{\vect{p}}[1] \wedge \ensuremath{\vect{x}}[2]\le \ensuremath{\vect{p}}[2]\}$},
\myemph{$\ensuremath{\mathsf{NW}}(\ensuremath{\vect{p}})\coloneqq\{\ensuremath{\vect{x}} \in \ensuremath{\mathds{R}}^2\mid \ensuremath{\vect{x}}[1]\le \ensuremath{\vect{p}}[1] \wedge \ensuremath{\vect{x}}[2]\ge \ensuremath{\vect{p}}[2]\}$}, and
\myemph{$\ensuremath{\mathsf{SW}}(\ensuremath{\vect{p}})\coloneqq\{\ensuremath{\vect{x}} \in \ensuremath{\mathds{R}}^2\mid \ensuremath{\vect{x}}[1]\le \ensuremath{\vect{p}}[1] \wedge \ensuremath{\vect{x}}[2]\le \ensuremath{\vect{p}}[2]\}$}. %
As a convention, we use $x$ and $y$ to refer to the first dimension and the second dimension in the two-dimensional space, respectively.
\begin{figure}
\centering
\begin{tikzpicture}
\def .25 {.25}
\def .25 {.25}
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {-4/0/u/u/voterV/{below left}/0/-4, 6/0/v/v/voterW/{above right}/-1/-1} {
\node[\typ] at (\x*.25,\y*.25) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\coordinate (y) at (1*.25,2*.25);
\coordinate (x) at (1*.25,-2*.25);
\gettikzxy{(v)}{\vx}{\vy}
\gettikzxy{(u)}{\ux}{\uy}
\gettikzxy{(x)}{\xx}{.25}
\gettikzxy{(y)}{\yx}{\yy}
%
\draw[gray,dashed] (v) -- (\ux, \vy);
\draw[gray,dashed] (u) -- (\ux, \vy);
\draw[gray,dashed] (v) -- (\vx, \uy);
\draw[gray,dashed] (u) -- (\vx, \uy);
\draw[thick,darkgreen] (\xx,.25-30) -- (x) -- (y) -- (\yx,\yy+30);
\end{tikzpicture}
\qquad~\qquad
\begin{tikzpicture}
\def .25 {.25}
\def .25 {.25}
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {-4/-2/u/u/voterV/{below left}/-1/-4, 6/2/v/v/voterW/{above right}/-1/-1} {
\node[\typ] at (\x*.25,\y*.25) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\coordinate (y) at (-1*.25,2*.25);
\coordinate (x) at (3*.25,-2*.25);
\gettikzxy{(v)}{\vx}{\vy}
\gettikzxy{(u)}{\ux}{\uy}
\gettikzxy{(x)}{\xx}{.25}
\gettikzxy{(y)}{\yx}{\yy}
%
\draw[gray,dashed] (v) -- (\ux, \vy);
\draw[gray,dashed] (u) -- (\ux, \vy);
\draw[gray,dashed] (v) -- (\vx, \uy);
\draw[gray,dashed] (u) -- (\vx, \uy);
\draw[thick,darkgreen] (\xx,.25-30) -- (x) -- (y) -- (\yx,\yy+30);
\end{tikzpicture}
\qquad~\qquad
\begin{tikzpicture}
\def .25 {.25}
\def .25 {.25}
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {-4/-2/u/u/voterV/{below left}/-1/-4, 0/2/v/v/voterW/{above right}/-1/-1} {
\node[\typ] at (\x*.25,\y*.25) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\coordinate (y) at (-4*.25,2*.25);
\coordinate (x) at (0*.25,-2*.25);
\gettikzxy{(v)}{\vx}{\vy}
\gettikzxy{(u)}{\ux}{\uy}
\gettikzxy{(x)}{\xx}{.25}
\gettikzxy{(y)}{\yx}{\yy}
%
\draw[gray,dashed] (v) -- (\ux, \vy);
\draw[gray,dashed] (u) -- (\ux, \vy);
\draw[gray,dashed] (v) -- (\vx, \uy);
\draw[gray,dashed] (u) -- (\vx, \uy);
%
\draw[darkgreen,thick] (\xx+30,.25) -- (x);
\draw[darkgreen,thick] (\yx-30,\yy) -- (y);
\draw[darkgreen,thick] (\xx,.25-30) -- (x) -- (y) -- (\yx,\yy+30);
\draw[fill=darkgreen!50,draw=none] (\xx+30,.25) -- (x) -- (\xx,.25-30) -- (\xx+30,.25-30) -- cycle;
\draw[fill=darkgreen!50,draw=none] (\yx-30,\yy) -- (y) -- (\yx,\yy+30) -- (\yx-30,\yy+30) -- cycle;
\end{tikzpicture}
%
\caption{The bisector (in green) between two points in Manhattan distances. The green lines and areas extend to infinity.}
\label{fig:L1bisect}
\end{figure}
\subsection{Embeddings}
Generally speaking, the Euclidean (resp.\ Manhattan) representation models the preferences of voters over the alternatives
using the Euclidean (resp.\ Manhattan) distance between an alternative and a voter.
A shorter distance indicates a stronger preference.
\begin{definition}[\dEuclid{} and \dManhattan{} embeddings]\label{def:embeddings}
Let ${\cal P} \coloneqq ({\cal A}, {\cal V}\coloneqq\{v_1, \ldots, v_n\}, {\cal R}\coloneqq(\ensuremath{\succ}_1, \ldots, \ensuremath{\succ}_n))$ be a preference profile.
Let~$E\colon {\cal A} \cup {\cal V} \to \ensuremath{\mathds{R}^{d}}$ be an embedding of the alternatives and the voters into the $d$-dimensional space. %
A voter~$v_i \in {\cal V}$ %
is \myemph{\dEuclid{}} with respect to~$E$
if for each two distinct alternatives~$a, b \in {\cal A}$
voter~$v_i$ strictly prefers the one closer to her,
that is,
\begin{align*}
a \ensuremath{\succ}_i b \text{ if and only if } \Edis{E(a)-E(v_i)} < \Edis{E(b)-E(v_i)}.
\end{align*}
Similarly, $v_i$ is \myemph{\dManhattan{} with respect to~$E$}
if for each two distinct alternatives~$a, b \in {\cal A}$
voter~$v_i$ strictly prefers the one closer to her,
that is,
\begin{align*}
a \ensuremath{\succ}_i b \text{ if and only if } \Mdis{E(a)-E(v_i)} < \Mdis{E(b)-E(v_i)}.
\end{align*}
An embedding~$E$ of the alternatives and voters is a \myemph{\dEuclid{}} (resp.\ \myemph{\dManhattan{}}) embedding of profile~${\cal P}$
if each voter in~${\cal V}$ is
\dEuclid{} (resp.\ \dManhattan{}) with respect to~$E$.
%
A preference profile is \myemph{\dEuclid} (resp.\ \myemph{\dManhattan}) if it admits a \dEuclid{} (resp.\ \dManhattan{}) embedding.
%
%
%
%
%
%
%
%
%
%
\end{definition}
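The Manhattan part of \cref{def:embeddings} can be checked mechanically for a concrete embedding. The sketch below is our illustration, not code from the paper; the encoding of rankings as lists with the best alternative first is an assumption. It verifies condition \eqref{eq:closeness} under the Manhattan distance.

```python
def manhattan(p, q):
    """Manhattan (L1) distance between two points."""
    return sum(abs(x - y) for x, y in zip(p, q))

def is_manhattan_embedding(prefs, E):
    """Check the d-Manhattan condition: for every voter and every pair of
    alternatives a, b with a ranked above b, a must be strictly closer to
    the voter.  `prefs` maps a voter to her ranking (best alternative
    first); `E` maps voters and alternatives to points."""
    for voter, ranking in prefs.items():
        for i, a in enumerate(ranking):
            for b in ranking[i + 1:]:
                if not manhattan(E[a], E[voter]) < manhattan(E[b], E[voter]):
                    return False
    return True
```

For instance, a voter at the origin with alternatives at Manhattan distances $1 < 3 < 5$ is realized exactly by the ranking listing them in that order.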
To characterize necessary conditions for \dManhattan[2] profiles, we need to define several notions which describe the relative orders of the points in the $x$- and $y$-coordinates.
\begin{figure}[t!]\centering
\begin{tikzpicture}
\drawgridA
%
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {2/2/u/u/voter/{above left}/-2/-1, 3/3/v/v/voter/above right/-1/-1, 4/4/w/w/voter/above right/-1/-1} {
\node[\typ] at (\x\y) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\drawreg
\node[below = 1ex of 31] {(I)};
\end{tikzpicture}\qquad
\begin{tikzpicture}
%
\drawgridA
%
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {2/3/u/u/voter/{above left}/-2/-1, 4/2/v/v/voter/above right/-1/-1, 3/4/w/w/voter/above right/-1/-1} {
\node[\typ] at (\x\y) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\begin{pgfonlayer}{background}
\foreach \s / \t in {12/52,13/53,14/54,21/25,31/35,41/45} {
\path[draw,lines] (\s) edge (\t);
}
\drawreg
%
%
%
%
%
%
%
%
%
%
\end{pgfonlayer}
\node[below = 1ex of 31] {(O)};
\end{tikzpicture}
\caption{Two possible embeddings illustrating the properties in \cref{def:3voters-configurations} (the numbering will be used in some proofs). (I) stands for ``inside of'' and (O) for ``outside of''.}\label{fig:config-v3}
\end{figure}
\begin{definition}[\textsf{BE}- and \textsf{EX}-properties]\label{def:3voters-configurations}
Let ${\cal P}$ be a preference profile containing at least three voters called $u,v,w$ and let $E$ be an embedding for~${\cal P}$.
\begin{itemize}
\item We say that $E$ satisfies the \myemph{$(v,u,w)$-\textsf{BE}}-property if
$E(v)\in \ensuremath{\mathsf{BB}}(E(u),E(w))$~
(see \cref{fig:config-v3}(I)). %
For brevity's sake, by symmetry, we sometimes omit voters~$u$ and $w$ and just speak of the \myemph{$v$-\textsf{BE}}-property if $u,v,w$ are the only voters contained in~${\cal P}$.
\item We say that $E$ satisfies the \myemph{$(v,u,w)$-\textsf{EX}}-property if
either ``$E(u)[1]$ is \textsf{between}\ $E(v)[1]$ and $E(w)[1]$ and $E(w)[2]$ is \textsf{between}\ $E(v)[2]$ and $E(u)[2]$''
or
``$E(w)[1]$ is \textsf{between}\ $E(v)[1]$ and $E(u)[1]$, and $E(u)[2]$ is \textsf{between}\ $E(v)[2]$ and $E(w)[2]$''%
%
%
%
~(see \cref{fig:config-v3}(O)).
Once again, we sometimes omit voters~$u$ and $w$ and just call it the \myemph{$v$-\textsf{EX}}-property if $u,v,w$ are the only voters contained in~${\cal P}$.
\end{itemize}
\end{definition}
Note that there are four possible embeddings which satisfy the $(v,u,w)$-\textsf{BE}-property while there are eight possible embeddings which satisfy the $(v,u,w)$-\textsf{EX}-property.
However, each of these embeddings satisfying the $(v,u,w)$-\textsf{BE}-property (resp.\ the $(v,u,w)$-\textsf{EX}-property) forbids certain types of preference profiles, specified below.
The following two configurations describe preferences whose existence precludes an embedding from satisfying one of the properties defined in \cref{def:3voters-configurations},
as we will show in \cref{lem:bet-property,lem:ext-property}.
\begin{definition}[\textsf{BE}-configurations]\label{def:3voters-forbidden-profiles-B}
A preference profile~${\cal P}$ with three voters~$u,v,w$ and three alternatives~$a,b,x$ is
a \myemph{$(v,u,w)$-\textsf{BE}-configuration}
if the following holds:
\begin{align*}
u\colon b \succ_u x \succ_u a,\quad
v\colon a \succ_v x \succ_v b,\quad
w\colon b \succ_w x \succ_w a.
\end{align*}
\end{definition}
\begin{definition}[\textsf{EX}-configurations]\label{def:3voters-forbidden-profiles-E}
A preference profile~${\cal P}$ with three voters~$u,v,w$ and six alternatives~$x, a,b,c,d,e$ ($c,d,e$ not necessarily distinct) is
a \myemph{$(v,u,w)$-\textsf{EX}-configuration}
if the following holds:
\begin{align*}
\begin{array}{llll}
u\colon & a \succ_u x \succ_u b, & c\succ_u x, & d\succ_u x\\
v\colon &\{a,b\} \succ_v x, & & x \succ_v \{d,e\},\\
w\colon & b \succ_w x \succ_w a, & c\succ_w x, & e\succ_w x.
\end{array}
\end{align*}
\end{definition}
\begin{example}\label{ex:non-bet-ext}
\setcounter{betcounter}{1}
\setcounter{extcounter}{2}
Consider the following two preference profiles~${\cal Q}_{\thebetcounter}$
and ${\cal Q}_{\theextcounter}$:
\begin{align*}
\begin{array}{l@{\,}l@{\qquad}l@{\,}l}
{\cal Q}_{\thebetcounter}\colon & & {\cal Q}_{\theextcounter}\colon\\
v_1\colon & 1 \succ_1 2 \succ_1 3, &v_1 \colon & \{1,2\} \succ_1 3 \succ_1 4, \\
v_2\colon & 3 \succ_2 2 \succ_2 1, &v_2 \colon & \{1,4\} \succ_2 3 \succ_2 2, \\
v_3\colon & 3 \succ_3 2 \succ_3 1,&v_3 \colon & \{2, 4\} \succ_3 3 \succ_3 1.
\end{array}
\end{align*}
One can verify that ${\cal Q}_{\thebetcounter}$ is a $(v_1,v_2,v_3)$-\textsf{BE}-configuration.
Further, ${\cal Q}_{\theextcounter}$ is a $(v_1,v_2,v_3)$\mbox{-,} $(v_2,v_1,v_3)$\mbox{-,} and $(v_3,v_1,v_2)$-\textsf{EX}-configuration.
One can verify this by setting $(a,b,x,c,d,e)=(1,2,3,4,4,4)$, $(a,b,x,c,d,e)=(1,4,3,2,2,2)$, $(a,b,x,c,d,e)=(2,4,3,1,1,1)$, respectively.
\end{example}
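Whether a three-voter profile contains a $(v,u,w)$-\textsf{BE}-configuration can be decided by brute force over ordered triples of alternatives. The sketch below is our illustration, not code from the paper; rankings are lists with the best alternative first. It finds the witness for ${\cal Q}_1$ from \cref{ex:non-bet-ext}.

```python
from itertools import permutations

def prefers(ranking, a, b):
    """True if alternative a is ranked above b (rankings are lists,
    best alternative first)."""
    return ranking.index(a) < ranking.index(b)

def find_be_configuration(pref_v, pref_u, pref_w, alternatives):
    """Search for alternatives (a, b, x) witnessing a (v,u,w)-BE-
    configuration:  u: b > x > a,  v: a > x > b,  w: b > x > a."""
    for a, b, x in permutations(alternatives, 3):
        if (prefers(pref_u, b, x) and prefers(pref_u, x, a) and
                prefers(pref_v, a, x) and prefers(pref_v, x, b) and
                prefers(pref_w, b, x) and prefers(pref_w, x, a)):
            return (a, b, x)
    return None
```

On ${\cal Q}_1$ (voter $v_1$: $1 \succ 2 \succ 3$; voters $v_2, v_3$: $3 \succ 2 \succ 1$) the search returns the witness $(a,b,x) = (1,3,2)$, while three identical voters admit no such witness.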
\section{Manhattan Preferences: Positive Results}\label{sec:Manhattan-positive}
In this section, we show that for sufficiently high dimension~$d$, i.e., $d\ge \min(n, m-1)$, a preference profile with $n$ voters and $m$~alternatives is always \dManhattan.
\begin{theorem}\label{thm:d=n->dManhattan}
{Every preference profile with $n$~voters is \dManhattan[n].}
\end{theorem}
\begin{proof}
Let ${\cal P}=({\cal A}, {\cal V}, (\ensuremath{\succ}_i)_{i\in [n]})$ be a preference profile with~$m$ alternatives and $n$~voters~${\cal V}$ such that ${\cal A}=\{1,2,\ldots,m\}$.
The idea is to first embed the voters from~${\cal V}$ onto $n$ carefully selected vertices of an $n$-dimensional hypercube, and then embed the alternatives such that each coordinate of an alternative reflects the preferences of a specific voter.
More precisely, define an embedding $E\colon {\cal A}\cup {\cal V} \to \ensuremath{\mathds{R}}^{n}$ such that
for each voter~$v_i\in {\cal V}$ and each coordinate~$z\in [n]$, we have $E(v_i)[z]\coloneqq -m$ if $z=i$, and $E(v_i)[z]\coloneqq 0$ otherwise.
It remains to specify the embedding of the alternatives.
To ease notation, for each alternative~$j\in {\cal A}$, let \myemph{$\ensuremath{\mathsf{mk}}_j$} denote the maximum rank of~$j$ over all voters, i.e.,
\myemph{$\ensuremath{\mathsf{mk}}_j\coloneqq \max_{v_i\in {\cal V}}\ensuremath{\mathsf{rk}}_i(j)$}.
Further, let \myemph{$\ensuremath{\hat{n}}_j$} denote the index of the voter who has maximum rank over~$j$; if there are two or more such voters, then we fix an arbitrary one.
That is, \myemph{$v_{\ensuremath{\hat{n}}_j}\coloneqq \argmax_{v_i\in {\cal V}}\ensuremath{\mathsf{rk}}_i(j)$}.
Then, the embedding of each alternative~$j\in {\cal A}$ is defined as follows:
\begin{align*}
\forall z\in [n]\colon E(j)[z]
\coloneqq
\begin{cases}
\ensuremath{\mathsf{rk}}_z(j) - \ensuremath{\mathsf{mk}}_j, & \text{ if } z \neq \ensuremath{\hat{n}}_j,\\
M + 2\ensuremath{\mathsf{rk}}_z(j) + \sum\limits_{k\in [n]} (\ensuremath{\mathsf{rk}}_{k}(j)-\ensuremath{\mathsf{mk}}_j), & \text{ otherwise.}
\end{cases}
\end{align*}
Herein, $M$ is a large but fixed value chosen such that the expression in the second case of the above definition is non-negative.
For instance, we can set $M\coloneqq n\cdot m$.
Notice that by definition, the following holds for each alternative~$j\in {\cal A}$.
\begin{align}
-m \le \ensuremath{\mathsf{rk}}_z(j) - \ensuremath{\mathsf{mk}}_j & \le 0, \quad \text{ and } \label{eq:d=n-1term}\\
M + 2\ensuremath{\mathsf{rk}}_z(j) +\sum\limits_{k\in [n]} (\ensuremath{\mathsf{rk}}_{k}(j)-\ensuremath{\mathsf{mk}}_j) &\ge M - n \cdot m \ge 0.\label{eq:d=n-2term}
\end{align}
In other words, it holds that
\begin{align}
|E(j)[i] - E(v_i)[i]| & = m+E(j)[i], \text{ and } \label{eq:d=n-diff}\\
\nonumber \Mdis{E(j)} = \sum_{z\in [n]}|E(j)[z]| & \stackrel{\eqref{eq:d=n-1term},\eqref{eq:d=n-2term}}{=} M + 2\ensuremath{\mathsf{rk}}_{\ensuremath{\hat{n}}_j}(j) + \sum_{k\in [n]}(\ensuremath{\mathsf{rk}}_k(j)-\ensuremath{\mathsf{mk}}_j) + \sum_{z\in [n]\setminus \{\ensuremath{\hat{n}}_j\}}\!\!-(\ensuremath{\mathsf{rk}}_z(j)-\ensuremath{\mathsf{mk}}_j)\\
& \stackrel{\ensuremath{\mathsf{mk}}_j=\ensuremath{\mathsf{rk}}_{\ensuremath{\hat{n}}_j}(j)}{=} M + 2\ensuremath{\mathsf{rk}}_{\ensuremath{\hat{n}}_j}(j). \label{eq:d=n-3term}
\end{align}
Now, in order to prove that this embedding is \dManhattan[n], we show that the Manhattan distance between an arbitrary voter~$v_i$ and an arbitrary alternative~$j$ is linear in the rank value~$\ensuremath{\mathsf{rk}}_i(j)$.
By definition, this distance is:
\begin{align}
\nonumber \Mdis{E(v_i)-E(j)} & = \sum\limits_{k\in [n]} |E(j)[k]-E(v_i)[k]|= |E(j)[i]-E(v_i)[i]| + \sum\limits_{k\in [n]\setminus\{i\}} |E(j)[k]|\\
& \stackrel{\eqref{eq:d=n-diff}}{=} m + E(j)[i] + \Mdis{E(j)} - |E(j)[i]|.
\label{eq:d=n-distance}
\end{align}
We distinguish between two cases.
\begin{description}
\item[Case 1:] $i \neq \ensuremath{\hat{n}}_j$.
Then, by definition, it follows that
\allowdisplaybreaks
\begin{align*}
\Mdis{E(v_i)-E(j)} & \stackrel{\eqref{eq:d=n-distance}}{=} m + E(j)[i] + \Mdis{E(j)} - |E(j)[i]|\\
& \stackrel{\eqref{eq:d=n-1term},\eqref{eq:d=n-3term}}{=} m + 2(\ensuremath{\mathsf{rk}}_i(j)-\ensuremath{\mathsf{mk}}_j) + M + 2\ensuremath{\mathsf{rk}}_{\ensuremath{\hat{n}}_j}(j) \\
& \stackrel{\ensuremath{\mathsf{rk}}_{\ensuremath{\hat{n}}_j}(j)=\ensuremath{\mathsf{mk}}_j}{=} m + M + 2\ensuremath{\mathsf{rk}}_i(j).
\end{align*}
\item[Case 2:] $i = \ensuremath{\hat{n}}_j$.
Then, by definition, it follows that
\allowdisplaybreaks
\begin{align*}
\Mdis{E(v_i)-E(j)} & \stackrel{\eqref{eq:d=n-distance}}{=} m + E(j)[i] + \Mdis{E(j)} - |E(j)[i]|\\
& \stackrel{\eqref{eq:d=n-2term}}{=} m + \Mdis{E(j)} \\
& \stackrel{\eqref{eq:d=n-3term}}{=} m + M + 2\ensuremath{\mathsf{rk}}_{\ensuremath{\hat{n}}_j}(j).
\end{align*}
\end{description}
In both cases, we obtain that $\Mdis{E(v_i)-E(j)} = m+M+2\ensuremath{\mathsf{rk}}_{i}(j)$, which is linear in the ranks, as desired.
\end{proof}
By \cref{thm:d=n->dManhattan}, we obtain that any profile with two voters is \dManhattan[2].
The following example provides an illustration.
\begin{example}\label{ex:d=n-Manhattan}
\stepcounter{myprofilecounter}
Consider a profile~${\cal P}_{\themyprofilecounter}$ with two voters and five alternatives.
\begin{align*}
\begin{array}{l@{\,}ll}
v_1\colon & 1 \succ 2 \succ 3 \succ 4 \succ 5,\\
v_2\colon & 5 \succ 4 \succ 3 \succ 1 \succ 2.
\end{array}
\end{align*}
By the proof of \cref{thm:d=n->dManhattan}, the maximum ranks and the voters attaining them are as follows (recall that ranks are 0-indexed, i.e., a voter's most preferred alternative has rank~$0$):
\begin{center}
\begin{tabular}{l|ccccc}
$j$ & $1$ & $2$ & $3$ & $4$ & $5$ \\\hline
$\ensuremath{\mathsf{mk}}_j$ & $3$ & $4$ & $2$ & $3$ & $4$ \\
$\ensuremath{\hat{n}}_j$ & $2$ & $2$ & $1$ & $1$ & $1$\\
\end{tabular}
\end{center}
The embedding of the voters and alternatives (according to the proof of \cref{thm:d=n->dManhattan}) is as follows, where we let $M\coloneqq n\cdot m = 10$.
\begin{center}
\begin{tabular}{l|ccccccc}
$x\in {\cal V}\cup {\cal A}$ & $v_1$ & $v_2$ & $1$ & $2$ & $3$ & $4$ & $5$ \\\hline
$E(x)[1]$ & $-5$ & $0$ & $-3$ & $-3$ & $14$ & $14$ & $14$\\
$E(x)[2]$ & $0$ & $-5$ & $13$ & $15$ & $0$ & $-2$ & $-4$\\
\end{tabular}
\end{center}
\cref{fig:d=n-Manhattan} depicts the corresponding embedding.
\stepcounter{myprofilecounter}
\end{example}
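The construction in the proof of \cref{thm:d=n->dManhattan} is purely mechanical, so it can be checked by a short program. The following Python sketch (our own illustration; names such as \texttt{embed\_n} are not from any accompanying artifact) computes the embedding with 0-indexed ranks and reproduces the table of \cref{ex:d=n-Manhattan}:

```python
def embed_n(prefs):
    """Embed voters and alternatives per the n-dimensional construction.

    prefs[i] is voter i's preference order (best first); ranks are 0-indexed.
    Voter i sits at -m on coordinate i and 0 elsewhere; alternative j gets
    rk_z(j) - mk_j on coordinate z, except on coordinate hat_n_j, where it
    gets M + 2*rk(j) + sum_k (rk_k(j) - mk_j), with M = n*m.
    """
    n, m = len(prefs), len(prefs[0])
    M = n * m
    rk = [{a: pos for pos, a in enumerate(order)} for order in prefs]
    voters = [[-m if z == i else 0 for z in range(n)] for i in range(n)]
    alts = {}
    for j in prefs[0]:
        mk = max(r[j] for r in rk)                    # mk_j
        hat = max(range(n), key=lambda i: rk[i][j])   # one voter attaining mk_j
        alts[j] = [M + 2 * rk[z][j] + sum(r[j] - mk for r in rk)
                   if z == hat else rk[z][j] - mk
                   for z in range(n)]
    return voters, alts

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

voters, alts = embed_n([[1, 2, 3, 4, 5], [5, 4, 3, 1, 2]])
```

On the profile above this yields $E(v_1)=(-5,0)$, $E(v_2)=(0,-5)$, and, e.g., $E(1)=(-3,13)$, matching the table, and every voter--alternative distance equals $m+M+2\,\ensuremath{\mathsf{rk}}_i(j)$, as derived in the proof.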
\begin{figure}[t!]
\centering
\begin{tikzpicture}[scale=0.25]
\tkzInit[xmax=20,ymax=20,xmin=-6,ymin=-6]
\tkzGrid[color=gray!20]
\exPosiT
\begin{pgfonlayer}{background}
\path[draw, thick] (-6,0) -- (20,0);
\path[draw, thick] (0,-6) -- (0,20);
\end{pgfonlayer}
\end{tikzpicture}~~
\begin{tikzpicture}[scale=0.25]
\tkzInit[xmax=20,ymax=20,xmin=-6,ymin=-6]
\tkzGrid[color=gray!20]
\exPosiT
\path[draw, thick] (-6,0) -- (20,0);
\path[draw, thick] (0,-6) -- (0,20);
\coordinate (start) at (-1,0);
\coordinate (end) at (0,-6);
\coordinate (ss) at (25,0);
\coordinate (ee) at (0,20);
\begin{pgfonlayer}{background}
\foreach \v / \co in {v1/lineA} {
\foreach \c in {a1,a2,a3,a4,a5} {
\drawcircleD{\c}{\v}{\co}{start}{end}{ss}{ee}
}
}
\end{pgfonlayer}
\coordinate (start) at (-6,0);
\coordinate (end) at (0,-1);
\coordinate (ss) at (20,0);
\coordinate (ee) at (0,25);
\begin{pgfonlayer}{background}
\foreach \v / \co in {v2/lineB} {
\foreach \c in {a1,a2,a3,a4,a5} {
\drawcircleD{\c}{\v}{\co}{start}{end}{ss}{ee}
}
}
\end{pgfonlayer}
\end{tikzpicture}
\caption{Illustration of a \dManhattan[2] embedding for \cref{ex:d=n-Manhattan}.} \label{fig:d=n-Manhattan}
\end{figure}
\begin{theorem}\label{thm:d=m-1->Manhattan}
Every preference profile with $m+1$ alternatives is \dManhattan[m].
\end{theorem}
\begin{proof}
Let ${\cal P}=({\cal A}, {\cal V}, (\ensuremath{\succ}_i)_{i\in [n]})$ be a preference profile with~$m+1$ alternatives and $n$~voters~${\cal V}$ such that ${\cal A}=\{1,2,\ldots,m+1\}$.
The idea is to first embed the alternatives from~${\cal A}$ onto $m+1$ carefully selected vertices of an $m$-dimensional hypercube, and then embed the voters such that the \dManhattan[m] distances from each voter to the alternatives increase as the preferences decrease.
More precisely, define an embedding $E\colon {\cal A}\cup {\cal V} \to \mathds{N}_0^m$ such that
alternative~$m+1$ is embedded at the origin, i.e., $E(m+1)=(0)_{z\in [m]}$.
For each alternative~$j\in [m]$ and each coordinate~$z\in [m]$, we have $E(j)[z]\coloneqq 2m$ if $z=j$, and $E(j)[z]\coloneqq 0$ otherwise.
Then, the embedding of each voter~$v_i\in {\cal V}$ is defined as follows:
\begin{align*}
\forall j\in [m]\colon E(v_i)[j]
\coloneqq
\begin{cases}
2m - \ensuremath{\mathsf{rk}}_i(j), & \text{ if } \ensuremath{\mathsf{rk}}_i(j) < \ensuremath{\mathsf{rk}}_i(m+1),\\
m -\ensuremath{\mathsf{rk}}_i(j), & \text{ if } \ensuremath{\mathsf{rk}}_i(j) > \ensuremath{\mathsf{rk}}_i(m+1).
\end{cases}
\end{align*}
Observe that $0\le E(v_i)[j] \le 2m$.
Before we show that $E$ is a Manhattan embedding for~${\cal P}$, let us establish a simple formula for the Manhattan distance between a voter and an alternative.
\begin{claim}\label{clm:m-Manhattan}
For each voter~$v_i\in {\cal V}$ and each alternative~$j\in {\cal A}$, we have
\begin{align*}
\Mdis{E(v_i)-E(j)} = \begin{cases}
\Mdis{E(v_i)} + 2(m-E(v_i)[j]), & \text{ if } j \neq m+1,\\
\Mdis{E(v_i)}, & \text{ otherwise. }
\end{cases}
\end{align*}
\end{claim}
\begin{proof}\renewcommand{\qedsymbol}{(of
\cref{clm:m-Manhattan})~$\diamond$}
The case with $j=m+1$ is straightforward since alternative~$m+1$ is embedded at the origin.
The proof for $j\neq m+1$ is also straightforward by a direct application of the definition: \begin{align*}
\Mdis{E(v_i)-E(j)} = \sum_{z\in [m]} |E(v_i)[z]-E(j)[z]|
& = \left(\sum_{z\in [m]\setminus \{j\}}|E(v_i)[z]|\right) + |E(v_i)[j]-E(j)[j]|\\
& = \left(\sum_{z\in [m]\setminus \{j\}}|E(v_i)[z]|\right) + (2m-E(v_i)[j])\\
& = \Mdis{E(v_i)} + 2(m-E(v_i)[j]).
\end{align*}
\end{proof}
Now, we proceed with the proof.
Consider an arbitrary voter~$v_i\in {\cal V}$ and let $j,k\in [m+1]$ be two consecutive alternatives in the preference order~$\succ_i$ such that $\ensuremath{\mathsf{rk}}_i(j) = \ensuremath{\mathsf{rk}}_i(k)-1$.
We aim to show that $\Mdis{E(v_i)-E(j)} < \Mdis{E(v_i)-E(k)}$, and we distinguish between three cases.
\begin{description}
\item[Case 1:] $\ensuremath{\mathsf{rk}}_i(k) < \ensuremath{\mathsf{rk}}_i(m+1)$ or $\ensuremath{\mathsf{rk}}_i(j) > \ensuremath{\mathsf{rk}}_i(m+1)$.
Then, by \cref{clm:m-Manhattan} and by definition, it follows that
\begin{align*}
\Mdis{E(v_i)-E(j)} - \Mdis{E(v_i)-E(k)} & = 2(E(v_i)[k]-E(v_i)[j]) = 2 (\ensuremath{\mathsf{rk}}_i(j) - \ensuremath{\mathsf{rk}}_i(k)) < 0,
\end{align*}
as desired.
\item[Case 2:] $\ensuremath{\mathsf{rk}}_i(k) = \ensuremath{\mathsf{rk}}_i(m+1)$, i.e., $k=m+1$ and $E(v_i)[j]=2m-\ensuremath{\mathsf{rk}}_i(j)$.
Then, by \cref{clm:m-Manhattan} and by definition, it follows that
\begin{align*}
\Mdis{E(v_i)-E(j)} - \Mdis{E(v_i)-E(k)} & = 2(m - E(v_i)[j]) =2\ensuremath{\mathsf{rk}}_i(j) - 2m < 0.
\end{align*}
Note that the last inequality holds since $\ensuremath{\mathsf{rk}}_i(j)=\ensuremath{\mathsf{rk}}_i(k)-1 < m$.
\item[Case 3:] $\ensuremath{\mathsf{rk}}_i(j) = \ensuremath{\mathsf{rk}}_i(m+1)$, i.e., $j=m+1$ and $E(v_i)[k]=m-\ensuremath{\mathsf{rk}}_i(k)$.
Then, by \cref{clm:m-Manhattan} and by definition, it follows that
\begin{align*}
\Mdis{E(v_i)-E(j)} - \Mdis{E(v_i)-E(k)} & = -2(m - E(v_i)[k]) = - 2\ensuremath{\mathsf{rk}}_i(k) < 0.
\end{align*}
Note that the last inequality holds since $\ensuremath{\mathsf{rk}}_i(k)=\ensuremath{\mathsf{rk}}_i(j)+1 > 0$.
\end{description}
Since in all cases, we show that $\Mdis{E(v_i)-E(j)} - \Mdis{E(v_i)-E(k)} < 0$,
$E$ is indeed a Manhattan embedding for~${\cal P}$.
\end{proof}
By \cref{thm:d=m-1->Manhattan}, we obtain that any profile for $3$~alternatives is \dManhattan[2].
The following example illustrates how the voters and alternatives are embedded according to the proof of \cref{thm:d=m-1->Manhattan}.
\begin{example} \label{ex:yes-v6-m3}
The following preference profile~${\cal P}_{\themyprofilecounter}$ with $6$~voters and $3$~alternatives is \dManhattan[2].
\begin{align*}
\begin{array}{l@{\,}lll@{\,}l}
v_1\colon & 1 \succ 2 \succ 3, &\qquad & v_2 \colon & 1 \succ 3 \succ 2, \\
v_3\colon & 2 \succ 1 \succ 3, && v_4\colon & 2 \succ 3 \succ 1,\\
v_5\colon & 3 \succ 1 \succ 2, && v_6\colon & 3 \succ 2 \succ 1. \\
\end{array}
\end{align*}
\noindent One can check that the following embedding~$E$ with\\
\begin{tabular}{@{}l@{\;}l@{\;}l@{\;}l@{\;}l@{\;}l@{}}
$E(1) = (4,0)$, & $E(2) = (0,4)$, & $E(3) = (0,0)$, \\
$E(v_1) = (4,3)$, & $E(v_2)=(4,0)$, & $E(v_3)=(3,4)$, &
$E(v_4)=(0,4)$, & $E(v_5)=(1, 0)$, & $E(v_6)=(0, 1)$\\
\end{tabular}
\noindent is a \dManhattan[2] embedding for~${\cal P}_{\themyprofilecounter}$.
\stepcounter{myprofilecounter}
\cref{fig:d=m+1-Manhattan} depicts the corresponding embedding.
\end{example}
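The $m$-dimensional construction from the proof of \cref{thm:d=m-1->Manhattan} can be checked the same way. The following Python sketch (again our own illustration, with 0-indexed ranks) reproduces the embedding of \cref{ex:yes-v6-m3}:

```python
def embed_m1(prefs, m):
    """m-dimensional embedding for m+1 alternatives named 1..m+1.

    Alternative m+1 goes to the origin, alternative j <= m to 2m on
    coordinate j; voter i gets 2m - rk_i(j) on coordinate j if she prefers
    j to alternative m+1, and m - rk_i(j) otherwise (ranks 0-indexed).
    """
    alts = {m + 1: [0] * m}
    for j in range(1, m + 1):
        alts[j] = [2 * m if z == j else 0 for z in range(1, m + 1)]
    voters = []
    for order in prefs:
        rk = {a: pos for pos, a in enumerate(order)}
        voters.append([2 * m - rk[j] if rk[j] < rk[m + 1] else m - rk[j]
                       for j in range(1, m + 1)])
    return voters, alts

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

prefs = [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
voters, alts = embed_m1(prefs, m=2)
```

For every voter, the Manhattan distances to the alternatives strictly increase along her preference order, so the computed points indeed form a \dManhattan[2] embedding.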
\begin{figure}[t!]
\centering
\begin{tikzpicture}[scale=0.6]
\tkzInit[xmax=8,ymax=8,xmin=-1,ymin=-1]
\tkzGrid[color=gray!20]
\exPosi
\end{tikzpicture}~~~
\begin{tikzpicture}[scale=0.6]
\tkzInit[xmax=8,ymax=8,xmin=-1,ymin=-1]
\tkzGrid[color=gray!20]
\exPosi
\coordinate (start) at (-1,0);
\coordinate (end) at (0,-2);
\coordinate (ss) at (6,0);
\coordinate (ee) at (0,7);
\begin{pgfonlayer}{background}
\foreach \v / \co in {v6/colorF} {
\foreach \c in {a1,a2,a3} {
\drawcircleD{\c}{\v}{\co}{start}{end}{ss}{ee}
}
}
\end{pgfonlayer}
\coordinate (start) at (-5,0);
\coordinate (end) at (5,-1);
\coordinate (ss) at (4,0);
\coordinate (ee) at (0,8);
\begin{pgfonlayer}{background}
\foreach \v / \co in {v2/lineB} {
\foreach \c in {a2,a3} {
\drawcircleD{\c}{\v}{\co}{start}{end}{ss}{ee}
}
}
\end{pgfonlayer}
\end{tikzpicture}
\caption{Illustration of a \dManhattan[2] embedding for \cref{ex:yes-v6-m3}.}\label{fig:d=m+1-Manhattan}
\end{figure}
\section{Manhattan Preferences: Negative Results}\label{sec:Manhattan-negative}
In this section, we prove results regarding minimally non-\dManhattan[2] preferences.
We show that for $n\in \{3,4,5\}$ voters, the smallest non-\dManhattan[2] preference profile has $9-n$ alternatives.
The first negative result is for the case with three voters.
\begin{theorem}\label{thm:no-n3-m6}
There exists a non-\dManhattan[2] preference profile with three voters and six alternatives.
\end{theorem}
The second negative result deals with four voters.
\begin{theorem}\label{thm:no-n4-m5}
There exists a non-\dManhattan[2] preference profile with four voters and five alternatives.
\end{theorem}
Finally, the last negative result is about the case where there are five voters.
\begin{theorem}\label{thm:no-n5-m4}
There exists a non-\dManhattan[2] preference profile with five voters and four alternatives.
\end{theorem}
Before proving the aforementioned results,
we first establish some technical but useful statements for \dManhattan[2] preference profiles in \cref{subsec:technical}.
Then, we prove the main results in \cref{subsec:n3-m6,subsec:n4-m5,subsec:n5-m4}.
\subsection{Technical results}\label{subsec:technical}
\begin{lemma}\label{lem:two-votes}
Let ${\cal P}$ be a \dManhattan[2] preference profile and let $E$ be a \dManhattan[2] embedding for~${\cal P}$.
For any two voters~$r,s$ and two alternatives~$x,y$ the following holds:
\begin{compactenum}[(i)]
\item \label{lem:not-inside} If $r,s\colon y \succ x$,
then $E(x)\notin \ensuremath{\mathsf{BB}}(E(r), E(s))$.
\item \label{lem:not-outside-corner} If $r\colon x \succ y$ and $s\colon y\succ x$,
then $E(s)\notin \ensuremath{\mathsf{BB}}(E(r), E(x))$.
\end{compactenum}
\end{lemma}
\begin{proof}
Let ${\cal P}$, $E$, $r,s$, and $x,y$ be as defined.
Both statements follow from simple calculations and the triangle inequality of the Manhattan distance.
For Statement~\eqref{lem:not-inside}, suppose, towards a contradiction, that $r,s\colon y \succ x$ and $E(x)\in \ensuremath{\mathsf{BB}}(E(r), E(s))$.
Since $E(x)$ lies in the bounding box of $E(r)$ and $E(s)$, the Manhattan distance is additive, that is, $\Mdis{E(r)-E(x)}+\Mdis{E(x)-E(s)} = \Mdis{E(r)-E(s)}$.
By the preferences of voters~$r$ and $s$ we infer the following:
\begin{align*}
\Mdis{E(s)-E(y)}+\Mdis{E(r)-E(y)} < \Mdis{E(s)-E(x)}+\Mdis{E(r)-E(x)} = \Mdis{E(r)-E(s)},
\end{align*}
a contradiction to the triangle inequality of $\Mdis{\cdot}$.
For Statement~\eqref{lem:not-outside-corner}, suppose, towards a contradiction, that $r\colon x \succ y$ and $s\colon y\succ x$ and $E(s)\in \ensuremath{\mathsf{BB}}(E(r), E(x))$.
Since $E(s)$ lies in the bounding box of $E(r)$ and $E(x)$, the Manhattan distance is additive, that is, $\Mdis{E(r)-E(x)}=\Mdis{E(r)-E(s)} + \Mdis{E(s)-E(x)}$.
By the preferences of voters~$r$ and $s$ we infer the following:
\begin{align*}
\Mdis{E(r)-E(s)}+\Mdis{E(s)-E(y)} & < \Mdis{E(r)-E(s)}+\Mdis{E(s)-E(x)} \\
& = \Mdis{E(r)-E(x)} \\
& < \Mdis{E(r)-E(y)},
\end{align*}
a contradiction to the triangle inequality of $\Mdis{\cdot}$.
\end{proof}
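Both statements of \cref{lem:two-votes} rest on the fact that the Manhattan distance is additive exactly on bounding boxes: the triangle inequality is tight for a point~$p$ between $a$ and $b$ if and only if $p\in \ensuremath{\mathsf{BB}}(a,b)$. A minimal Python check of this equivalence over a grid of points (our own illustration, not part of the paper):

```python
def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def in_bb(p, a, b):
    """p lies in the axis-parallel bounding box spanned by a and b."""
    return all(min(x, y) <= z <= max(x, y) for z, x, y in zip(p, a, b))

# The triangle inequality is tight precisely inside the bounding box:
a, b = (0, 0), (4, 2)
grid = [(x, y) for x in range(-2, 7) for y in range(-2, 5)]
ok = all((manhattan(a, p) + manhattan(p, b) == manhattan(a, b)) == in_bb(p, a, b)
         for p in grid)
```

This equivalence holds coordinate-wise: each term $|a_z-p_z|+|p_z-b_z|$ already dominates $|a_z-b_z|$, with equality exactly when $p_z$ lies between $a_z$ and $b_z$.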
The following observation summarizes how the voters' preferences constrain the coordinate differences of the embedded alternatives.
\begin{observation}\label{obs:pref-relation}
Let ${\cal P}$ be a \dManhattan[2] preference profile and let $E$ be a \dManhattan[2] embedding for~${\cal P}$.
For each voter~$s$ and any two alternatives~$x,y$ with $s\colon x\ensuremath{\succ} y$,
the following holds:
\begin{enumerate}[(i)]
\item\label{obs:pref-NE} If $E(y)\in \ensuremath{\mathsf{NE}}(E(s))$, then $E(y)[1]+E(y)[2]>E(x)[1]+E(x)[2]$.
\item\label{obs:pref-NW} If $E(y)\in \ensuremath{\mathsf{NW}}(E(s))$, then $-E(y)[1]+E(y)[2]>-E(x)[1]+E(x)[2]$.
\item\label{obs:pref-SE} If $E(y)\in \ensuremath{\mathsf{SE}}(E(s))$, then $E(y)[1]-E(y)[2]>E(x)[1]-E(x)[2]$.
\item\label{obs:pref-SW} If $E(y)\in \ensuremath{\mathsf{SW}}(E(s))$, then $-E(y)[1]-E(y)[2]>-E(x)[1]-E(x)[2]$.
\end{enumerate}
\end{observation}
\begin{proof}
All proofs are straightforward by invoking the definition of a Manhattan embedding.
We only show the first statement.
Let ${\cal P},E,s,x,y$ be as defined. For brevity's sake, let $\ensuremath{\vect{s}},\ensuremath{\vect{x}}$, and $\ensuremath{\vect{y}}$ denote the points~$E(s)$, $E(x)$, and $E(y)$, respectively.
Assume that $\ensuremath{\vect{y}}\in \ensuremath{\mathsf{NE}}(\ensuremath{\vect{s}})$.
Then, by the Manhattan property and the fact that $s\colon x\ensuremath{\succ} y$, it follows that
\begin{align*}
&&(\ensuremath{\vect{y}}[1]-\ensuremath{\vect{s}}[1])+(\ensuremath{\vect{y}}[2]-\ensuremath{\vect{s}}[2]) & = \Mdis{\ensuremath{\vect{y}}-\ensuremath{\vect{s}}} > \Mdis{\ensuremath{\vect{x}}-\ensuremath{\vect{s}}} = |\ensuremath{\vect{x}}[1]-\ensuremath{\vect{s}}[1]| + |\ensuremath{\vect{x}}[2]-\ensuremath{\vect{s}}[2]|\\
& & &\ge (\ensuremath{\vect{x}}[1]-\ensuremath{\vect{s}}[1]) + (\ensuremath{\vect{x}}[2]-\ensuremath{\vect{s}}[2])\\
\Rightarrow && \ensuremath{\vect{y}}[1] + \ensuremath{\vect{y}}[2] & > \ensuremath{\vect{x}}[1]+\ensuremath{\vect{x}}[2],
\end{align*}
as desired.
\end{proof}
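The computation above reflects that, within a fixed quadrant of a voter, the Manhattan distance reduces to a linear functional of the coordinates; for instance, for points in the north-east quadrant of~$s$ it equals $(y[1]+y[2])-(s[1]+s[2])$. A small numeric sanity check of statement~\eqref{obs:pref-NE} (our own illustration):

```python
def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

s = (1, 1)
# For every point p in the north-east quadrant of s, the Manhattan distance
# collapses to the linear functional (p1 + p2) - (s1 + s2).
ne = [(x, y) for x in range(1, 6) for y in range(1, 6)]
assert all(manhattan(s, p) == (p[0] + p[1]) - (s[0] + s[1]) for p in ne)

# Hence, if s prefers x to y (strictly smaller distance to x) and y lies in
# NE(s), then y1 + y2 > x1 + x2, no matter where x is embedded.
grid = [(x, y) for x in range(-4, 6) for y in range(-4, 6)]
ok = all(p[0] + p[1] > q[0] + q[1]
         for p in ne for q in grid if manhattan(s, q) < manhattan(s, p))
```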
The next technical lemma excludes the possibility that two alternatives are embedded in the same quadrant region of certain voters.
\begin{lemma}\label{lem:bet-property-Ntogether}
Let ${\cal P}$ be a \dManhattan[2] profile
and let $E$ be a \dManhattan[2] embedding for~${\cal P}$.
Let $r,s,t$ and $x,y$ be three voters and two alternatives in~${\cal P}$, respectively.
The following holds.
\begin{enumerate}[(i)]
\item\label{lem:Ntogether1} If $r\colon x\succ y$ and $s\colon y\succ x$, then
for each $\Pi\in \{\ensuremath{\mathsf{NE}},\ensuremath{\mathsf{NW}},\ensuremath{\mathsf{SE}},\ensuremath{\mathsf{SW}}\}$ it holds that
if $E(x)\in \Pi(E(s))$, then $E(y)\notin \Pi(E(r))$.
\item\label{lem:Ntogether2} %
If $r,t\colon x\succ y$, $s\colon y \succ x$, $E(r)[1]\le E(s)[1]\le E(t)[1]$, and $E(r)[2]\le E(s)[2]\le E(t)[2]$,
then the following holds.
\begin{itemize}[--]
\item If $E(x)\in \ensuremath{\mathsf{NW}}(E(s))$, then $E(y)\notin \ensuremath{\mathsf{NW}}(E(s))$.
\item If $E(x)\in \ensuremath{\mathsf{SE}}(E(s))$, then $E(y)\notin \ensuremath{\mathsf{SE}}(E(s))$.
\end{itemize}
\end{enumerate}
\end{lemma}
\begin{proof}
Let ${\cal P},E,r,s,t,x,y$ be as defined. See \cref{fig:config-v3}(I) (replacing $u$, $v$, $w$ with $r$, $s$, and $t$, respectively) for an illustration of the embedding of $r,s,t$.
For brevity's sake, let $\ensuremath{\vect{r}},\ensuremath{\vect{s}},\ensuremath{\vect{t}},\ensuremath{\vect{x}}$, and $\ensuremath{\vect{y}}$ denote the points $E(r)$, $E(s)$, $E(t)$, $E(x)$, and $E(y)$, respectively.
The four cases of the first statement are analogous and straightforward to verify using \cref{obs:pref-relation}; hence, we only show the case~$\Pi=\ensuremath{\mathsf{NW}}$.
Suppose, towards a contradiction, that $\ensuremath{\vect{x}} \in \ensuremath{\mathsf{NW}}(\ensuremath{\vect{s}})$ and $\ensuremath{\vect{y}} \in \ensuremath{\mathsf{NW}}(\ensuremath{\vect{r}})$.
Then, since $r\colon x\ensuremath{\succ} y$ and $s\colon y\ensuremath{\succ} x$, by \cref{obs:pref-relation}\eqref{obs:pref-NW},
it follows that
\begin{align*}
\ensuremath{\vect{y}}[2]-\ensuremath{\vect{y}}[1] > \ensuremath{\vect{x}}[2]-\ensuremath{\vect{x}}[1] \text{ and } \ensuremath{\vect{x}}[2]-\ensuremath{\vect{x}}[1] > \ensuremath{\vect{y}}[2]-\ensuremath{\vect{y}}[1],
\end{align*}
a contradiction.
For the second statement, the two parts are symmetric. Hence, we only show the first part.
Suppose, towards a contradiction, that $\ensuremath{\vect{x}}, \ensuremath{\vect{y}} \in \ensuremath{\mathsf{NW}}(\ensuremath{\vect{s}})$. %
Since $r,t\colon x \succ y$, $s\colon y \succ x$, and $\ensuremath{\vect{x}}\in \ensuremath{\mathsf{NW}}(\ensuremath{\vect{s}})$, by the first statement,
it follows that $\ensuremath{\vect{y}} \notin \ensuremath{\mathsf{NW}}(\ensuremath{\vect{r}})\cup \ensuremath{\mathsf{NW}}(\ensuremath{\vect{t}})$.
However, since $\ensuremath{\vect{y}}\in \ensuremath{\mathsf{NW}}(\ensuremath{\vect{s}})$, it follows that $\ensuremath{\vect{y}}\in \ensuremath{\mathsf{BB}}(\ensuremath{\vect{r}},\ensuremath{\vect{t}})$,
a contradiction to \cref{lem:two-votes}\eqref{lem:not-inside}.
\end{proof}
\begin{lemma}\label{lem:bet-property}
If a preference profile contains a $(v,u,w)$-\textsf{BE}-configuration, then no \dManhattan[2] embedding satisfies the $(v,u,w)$-\textsf{BE}-property.
\end{lemma}
\begin{proof}
Suppose, towards a contradiction, that a preference profile, called~${\cal P}$, contains a $(v,u,w)$-\textsf{BE}-configuration,
and there exists a \dManhattan[2] embedding~$E$ for profile~${\cal P}$ which satisfies the $(v,u,w)$-\textsf{BE}-property,
for three voters~$u,v,w$.
Let $a,b,x$ be the three alternatives defined in the $(v,u,w)$-\textsf{BE}-configuration~(see \cref{def:3voters-forbidden-profiles-B}).
For brevity's sake, let $\ensuremath{\vect{u}}, {\ensuremath{\color{red!50!black}\vect{v}}}, \ensuremath{\vect{w}},\ensuremath{\vect{a}},\ensuremath{\vect{b}},\ensuremath{\vect{x}}$ denote $E(u),E(v),E(w),E(a),E(b),E(x)$, respectively.
By symmetry and by the preferences of $u$ and $w$, the embedding~$E$ corresponds to~\cref{fig:config-v3}(I).
Since there are three voters, we can divide the two-dimensional plane into $16$ regions by drawing a vertical and a horizontal line through each voter's embedded point.
We enumerate these regions as in~\cref{fig:config-v3}(I) and use $R_i$ to refer to region~$i$, $i\in [16]$.
%
First, using \cref{lem:two-votes}\eqref{lem:not-inside} (setting $(r,s,y)\coloneqq (u,w,b)$),
we infer that alternative~$x$ cannot be embedded in~$R_6$, $R_7$, $R_{10}$, or $R_{11}$.
Moreover, using \cref{lem:two-votes}\eqref{lem:not-outside-corner} (setting $(r,s,y)\coloneqq (u,v,a)$),
we infer that alternative~$x$ cannot be embedded in $R_3$, $R_4$, $R_7$, or $R_8$.
Similarly, using \cref{lem:two-votes}\eqref{lem:not-outside-corner} (setting $(r,s,y)\coloneqq (w,v,b)$),
we infer that alternative~$x$ cannot be embedded in~$R_9$, $R_{10}$, $R_{13}$, or $R_{14}$.
This implies that $x$ is in one of the regions~$R_1$, $R_2$, $R_5$, $R_{12}$, $R_{15}$, or $R_{16}$.
By exchanging the two coordinates and the roles of $u$ and $w$ and the roles of $a$ and $b$, we know that if $E$ embeds alternative~$x$ in~$R_5$ (resp.\ $R_1$ or $R_2$),
then there exists another Manhattan embedding which embeds~$x$ in~$R_{15}$ (resp.\ $R_{16}$ or $R_{12}$),
and vice versa.
Hence, without loss of generality, assume that $E$ embeds~$x$ in~$R_1$, $R_2$, or $R_5$.
Note that this implies that $\ensuremath{\vect{x}} \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
Similarly, using \cref{lem:two-votes}\eqref{lem:not-outside-corner} (wrt.\ voters~$u$ and $v$, and voters~$w$ and $v$, respectively) we infer that %
$\ensuremath{\vect{b}}\notin \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}})\cup \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
Since $\ensuremath{\vect{x}} \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})$, by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether2}~(wrt.\ alternatives~$x$ and $b$), it follows that $\ensuremath{\vect{b}}\notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
This implies that $\ensuremath{\vect{b}}\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
Let us consider alternative~$a$.
On the one hand, since $u,w\colon x\succ a$ and $v\colon a \succ x$,
by \cref{lem:two-votes},
it follows that $\ensuremath{\vect{a}} \notin \ensuremath{\mathsf{BB}}(\ensuremath{\vect{u}},\ensuremath{\vect{w}})\cup \ensuremath{\mathsf{NE}}(\ensuremath{\vect{w}})\cup \ensuremath{\mathsf{SW}}(\ensuremath{\vect{u}})$.
Altogether, it follows that $\ensuremath{\vect{a}} \in \ensuremath{\mathsf{SE}}(\ensuremath{\vect{w}})\cup\ensuremath{\mathsf{NW}}(\ensuremath{\vect{w}})\cup \ensuremath{\mathsf{SE}}(\ensuremath{\vect{u}})\cup \ensuremath{\mathsf{NW}}(\ensuremath{\vect{u}})$.
On the other hand, since $v\colon a \succ b$, $u, w\colon b \succ a$, and $\ensuremath{\vect{b}} \in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}})$,
by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that $\ensuremath{\vect{a}} \notin \ensuremath{\mathsf{SE}}(\ensuremath{\vect{u}})\cup \ensuremath{\mathsf{SE}}(\ensuremath{\vect{w}})$.
Analogously, since $v\colon a \succ x$, $u, w\colon x \succ a$, and $\ensuremath{\vect{x}} \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})$,
by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that $\ensuremath{\vect{a}} \notin \ensuremath{\mathsf{NW}}(\ensuremath{\vect{u}})\cup \ensuremath{\mathsf{NW}}(\ensuremath{\vect{w}})$.
This results in having no place to embed alternative~$a$, a contradiction.
\end{proof}
\begin{lemma}\label{lem:ext-property}
If a preference profile contains a $(v,u,w)$-\textsf{EX}-configuration, then no \dManhattan[2] embedding satisfies the $(v,u,w)$-\textsf{EX}-property.
\end{lemma}
\begin{proof}
Suppose, for the sake of contradiction, that a preference profile, called~${\cal P}$,
contains a $(v,u,w)$-\textsf{EX}-configuration,
and there exists a \dManhattan[2] embedding, called~$E$, for profile~${\cal P}$ which satisfies the $(v,u,w)$-\textsf{EX}-property, for three voters~$v,u,w$.
Let $x,a,b,c,d,e$ be the six alternatives defined in the $(v,u,w)$-\textsf{EX}-configuration~(see \cref{def:3voters-forbidden-profiles-E}).
For brevity's sake, let $\ensuremath{\vect{u}}, {\ensuremath{\color{red!50!black}\vect{v}}}, \ensuremath{\vect{w}},\ensuremath{\vect{a}},\ensuremath{\vect{b}},\ensuremath{\vect{x}},{\ensuremath{\color{blue}\vect{c}}},\ensuremath{\vect{d}},\ensuremath{\vect{e}}$ denote $E(u)$, $E(v)$, $E(w)$, $E(a)$, $E(b)$, $E(x)$, $E(c)$, $E(d)$, $E(e)$, respectively.
Observe that the preferences of $u$ and $w$ are symmetric in the sense that if we exchange the roles of $a$ and $b$, and also the roles of~$d$ and $e$, then we arrive at a new $(v,u,w)$-\textsf{EX}-configuration for~${\cal P}$.
Hence, up to rotation and mirroring,
embedding~$E$ corresponds to~\cref{fig:config-v3}(O).
Since there are three voters, we can divide the two-dimensional plane into $16$ regions by drawing a vertical and a horizontal line through each voter's embedded point.
We enumerate these regions as in~\cref{fig:config-v3}(O) and use $R_i$ to refer to region~$i$, $i\in [16]$.
First, using \cref{lem:two-votes}\eqref{lem:not-inside} (setting $(r,s,y)\coloneqq (u,w,c)$),
we infer that alternative~$x$ cannot be embedded in~$R_6$.
Analogously, repeatedly using \cref{lem:two-votes}\eqref{lem:not-inside} (setting $(r,s,y)\coloneqq (u,v,a)$ and $(r,s,y)\coloneqq (v,w,b)$, respectively), we infer that $x$ cannot be embedded in regions~$R_7$, $R_{10}$ or $R_{11}$.
Further, using \cref{lem:two-votes}\eqref{lem:not-outside-corner} (setting $(r,s,y)\coloneqq (v,u,d)$),
we infer that alternative~$x$ cannot be in regions~$R_1$ and $R_5$.
Again, using \cref{lem:two-votes}\eqref{lem:not-outside-corner} repeatedly (setting $(r,s,y)\coloneqq (v,w,e)$, $(r,s,y)\coloneqq (u,w,b)$, $(r,s,y)\coloneqq (w,u,a)$, and $(r,s,y)\coloneqq (u,v,b)$, respectively),
we further infer that alternative~$x$ cannot be in regions~$R_1$--$R_4$, $R_9$, $R_{13}$, and $R_{16}$.
This implies that $x$ is in one of the regions~$R_8$, $R_{12}$, $R_{14}$, or $R_{15}$.
We aim to show that none of the regions is possible for~$x$.
To this end, since ${\ensuremath{\color{red!50!black}\vect{v}}}\in \ensuremath{\mathsf{SE}}(u)\cap \ensuremath{\mathsf{SE}}(w)$,
by \cref{lem:two-votes}\eqref{lem:not-outside-corner}, we observe that
\begin{align}\label{eq:ext-property-ab}
\ensuremath{\vect{a}}[2]\le \ensuremath{\vect{w}}[2] \text{ and } \ensuremath{\vect{b}}[1]\ge \ensuremath{\vect{u}}[1].
\end{align}
If $E$ embeds $x$ in regions~$R_{14}$--$R_{15}$, then
\begin{align}
\ensuremath{\vect{x}} \in \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}})\cap \ensuremath{\mathsf{SE}}(\ensuremath{\vect{u}}).\label{eq:ext-property-xvu}
\end{align}
Since $v\colon a\succ x$, $w\colon x \succ a$,
by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that
$\ensuremath{\vect{a}} \notin \ensuremath{\mathsf{SW}}(\ensuremath{\vect{w}})$.
By \eqref{eq:ext-property-ab}, it follows that $\ensuremath{\vect{a}} \in \ensuremath{\mathsf{SE}}(\ensuremath{\vect{w}})$.
Since $w\colon x \succ a$ and $u\colon a \succ x$,
by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that $\ensuremath{\vect{x}} \notin \ensuremath{\mathsf{SE}}(\ensuremath{\vect{u}})$, a contradiction to~\eqref{eq:ext-property-xvu}.
Analogously, we obtain a contradiction if $x$ is embedded in region~$R_8$ or $R_{12}$, this time focusing on voter~$u$ and alternative~$b$: in this case, we have
\begin{align}
\ensuremath{\vect{x}} \in \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}})\cap \ensuremath{\mathsf{SE}}(\ensuremath{\vect{w}}).\label{eq:ext-property-xvw}
\end{align}
Since $v\colon b\succ x$ and $u\colon x \succ b$,
by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that
$\ensuremath{\vect{b}} \notin \ensuremath{\mathsf{NE}}(\ensuremath{\vect{u}})$.
By \eqref{eq:ext-property-ab}, it follows that $\ensuremath{\vect{b}} \in \ensuremath{\mathsf{SE}}(\ensuremath{\vect{u}})$.
Since $u\colon x \succ b$ and $w\colon b \succ x$, by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1}, it follows that $\ensuremath{\vect{x}} \notin \ensuremath{\mathsf{SE}}(\ensuremath{\vect{w}})$, a contradiction to \eqref{eq:ext-property-xvw}.
\end{proof}
\subsection{Proof of \cref{thm:no-n3-m6}}\label{subsec:n3-m6}
Using \cref{lem:bet-property,lem:ext-property}, we can prove \cref{thm:no-n3-m6} with the help of \cref{ex:no-n3-m6} below.
\begin{example} \label{ex:no-n3-m6}
\setcounter{vthreecounter}{\themyprofilecounter}
The following preference profile~${\cal P}_{\thevthreecounter}$ with three voters and six alternatives is not \dManhattan[2].
\stepcounter{myprofilecounter}
\begin{align*}
\begin{array}{llll}
v_1\colon & 1 \succ 2 \succ 3 \succ 4 \succ 5 \succ 6,\\
v_2\colon & 1 \succ 4 \succ 6 \succ 3 \succ 5 \succ 2,\\
v_3\colon & 6 \succ 5 \succ 2 \succ 3 \succ 1 \succ 4.
\end{array}
\end{align*}
\end{example}
\begin{proof}[Proof of \cref{thm:no-n3-m6}]
To show this, we consider profile~${\cal P}_{\thevthreecounter}$ given in \cref{ex:no-n3-m6}.
Suppose, towards a contradiction, that $E$ is a \dManhattan[2] embedding for~${\cal P}_{\thevthreecounter}$.
Since each embedding for three voters must satisfy one of the two properties in \cref{def:3voters-configurations},
we distinguish between two cases: either there exists a voter whose embedding is inside the bounding box of the other two, or there is no such voter.
\begin{description}
\item[Case 1:] There exists a voter~$v_i$, $i\in [3]$, such that $E$ satisfies the $v_i$-\textsf{BE}-property.
Since ${\cal P}_{\thevthreecounter}$ contains a $(v_1,v_2,v_3)$-\textsf{BE}-configuration with $a=2,b=6,x=5$,
by \cref{lem:bet-property} it follows that $E$ does not satisfy the $v_1$-\textsf{BE}-property.
Analogously, since ${\cal P}_{\thevthreecounter}$ contains a $(v_2,v_1,v_3)$-\textsf{BE}-configuration with~$a=2,b=4,x=3$, and a
$(v_3,v_1,v_2)$-\textsf{BE}-configuration with~$a=1,b=5,x=3$,
$E$ satisfies neither the $v_2$-\textsf{BE}-property nor the $v_3$-\textsf{BE}-property.
\item[Case 2:] There exists a voter~$v_i$, $i\in [3]$, such that $E$ satisfies the $v_i$-\textsf{EX}-property.
Now, consider the subprofile~${\cal P}'$ restricted to the alternatives~$1,2,3,6$.
We claim that this subprofile contains an \textsf{EX}-configuration, which by \cref{lem:ext-property} precludes the existence of such a voter~$v_i$ with the $v_i$-\textsf{EX}-property:
First, since ${\cal P}'$ contains a $(v_3,v_1,v_2)$-\textsf{EX}-configuration~(setting $(u,v,w)\coloneqq (v_1,v_3,v_2)$ and $(x,a,b,c,d,e)=(3,2,6,1,1,1)$),
by \cref{lem:ext-property}, it follows that $E$ does not satisfy the $v_3$-\textsf{EX}-property.
In fact, ${\cal P}'$ also contains a $v_2$-\textsf{EX}-configuration~(setting $(u,v,w)\coloneqq (v_1,v_2,v_3)$ and $(x,a,b,c,d,e)=(3,1,6,2,2,2)$) and a $v_1$-\textsf{EX}-configuration~(setting $(u,v,w)\coloneqq (v_2,v_1,v_3)$ and $(x,a,b,c,d,e)=(3,1,2,6,6,6)$).
By \cref{lem:ext-property}, it follows that $E$ satisfies neither the $v_2$-\textsf{EX}-property nor the $v_1$-\textsf{EX}-property.
\end{description}
Summarizing, both cases yield a contradiction, so no such embedding~$E$ exists.
\end{proof}
\subsection{Proof of \cref{thm:no-n4-m5}}\label{subsec:n4-m5}
The proof will be based on the following example.
\begin{example}
\setcounter{vfourcounter}{\themyprofilecounter}
Any profile~${\cal P}_{\themyprofilecounter}$ satisfying the following
is not \dManhattan[2].
\begin{align*}
\begin{array}{l@{\,}ll}
v_1\colon & \{1, 2\} \succ 3 \succ 4 \succ 5, \\
v_2 \colon & \{1, 2\} \succ 3 \succ 5 \succ 4, \\
v_3\colon & 1 \succ 4 \succ 5 \succ 3 \succ 2, \\
v_4 \colon & 2 \succ 4 \succ 5 \succ 3 \succ 1.
\end{array}
\end{align*}
\stepcounter{myprofilecounter}
\label{ex:no-n4-m5}
\end{example}
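Profiles such as the one above are convenient to sanity-check mechanically: given candidate coordinates for the voters and alternatives, one can test whether the Manhattan distances realize every ranking. The following Python sketch illustrates such a check on a small toy instance (all names and the encoding are our own illustration, not part of the formal development; rankings are lists of indifference classes, most-preferred first):

```python
def manhattan(p, q):
    """Manhattan (L1) distance between two points in the plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def is_manhattan_embedding(profile, voters, alts):
    """Check whether the coordinates realize every ranking.

    profile: {voter: [class_1, class_2, ...]} where each class is a set of
             alternatives and class_1 contains the most-preferred ones.
    voters, alts: {name: (x, y)} coordinate maps.
    """
    for v, classes in profile.items():
        dists = [sorted(manhattan(voters[v], alts[a]) for a in cls)
                 for cls in classes]
        for better, worse in zip(dists, dists[1:]):
            # every alternative in a better class must be strictly closer
            if better[-1] >= worse[0]:
                return False
    return True

# A tiny realizable toy profile (two voters, three alternatives):
voters = {"u": (0, 0), "w": (4, 4)}
alts = {1: (1, 0), 2: (2, 2), 3: (4, 3)}
good = {"u": [{1}, {2}, {3}], "w": [{3}, {2}, {1}]}
bad = {"u": [{3}, {2}, {1}], "w": [{3}, {2}, {1}]}
print(is_manhattan_embedding(good, voters, alts))  # True
print(is_manhattan_embedding(bad, voters, alts))   # False
```

Such a checker certifies a given embedding; it does not by itself prove non-embeddability, which is what the case analyses below establish.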
\begin{proof}[Proof of \cref{thm:no-n4-m5}]
To prove the statement, we show that profile~${\cal P}_{\thevfourcounter}$ given in \cref{ex:no-n4-m5} is not \dManhattan[2].
Suppose, towards a contradiction, that ${\cal P}_{\thevfourcounter}$ is \dManhattan[2] and $E$ is a \dManhattan[2] embedding for~${\cal P}_{\thevfourcounter}$.
For brevity's sake, we will use ${\ensuremath{\color{red!50!black}\vect{v}}}_i$ and ${\ensuremath{\color{blue}\vect{c}}}_j$, $i\in [4], j\in [5]$, to refer to the embedded points of voter~$v_i$ and alternative~$j$, respectively.
Note that since ${\cal P}_{\thevfourcounter}$ has four voters, there are $4!\cdot 4!=576$ possible combinatorially different embeddings of the voters.
We observe, however, that only two groups of voter embeddings are relevant.
First, we claim that for all $v \in \{v_1,v_2\}$ and $\{u,w\}=\{v_3,v_4\}$, $E$ satisfies neither the $(v,u,w)$-\textsf{EX}-property, nor the $(u,v,w)$-\textsf{EX}-property, nor the $(w,u,v)$-\textsf{EX}-property:
If we restrict the preference profile to voters~$v,u,w$ and alternatives~$\{1,2,3,4\}$,
then we obtain a profile which is equivalent to profile~${\cal Q}_{\theextcounter}$ from \cref{ex:non-bet-ext}.
Since ${\cal Q}_{\theextcounter}$ is an \textsf{EX}-configuration and, by \cref{lem:ext-property}, violates the \textsf{EX}-property,
it follows that
\begin{align}
\label{n4-m5:BET-134} & {\ensuremath{\color{red!50!black}\vect{v}}}_1\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_3,{\ensuremath{\color{red!50!black}\vect{v}}}_4) \vee {\ensuremath{\color{red!50!black}\vect{v}}}_3\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_4)\vee {\ensuremath{\color{red!50!black}\vect{v}}}_4\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_3)&\text{ and }\\
\label{n4-m5:BET-234} &{\ensuremath{\color{red!50!black}\vect{v}}}_2\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_3,{\ensuremath{\color{red!50!black}\vect{v}}}_4)\vee {\ensuremath{\color{red!50!black}\vect{v}}}_3\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_2,{\ensuremath{\color{red!50!black}\vect{v}}}_4)\vee {\ensuremath{\color{red!50!black}\vect{v}}}_4\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_2,{\ensuremath{\color{red!50!black}\vect{v}}}_3).&
\end{align}
Second, it is straightforward to verify that, by \cref{lem:bet-property}, embedding~$E$ violates the $(v_2,v_3,v_4)$-\textsf{BE}-property~(wrt.\ alternatives~$\{3,5,4\}$),
the $(v_3,v_1,v_2)$-\textsf{BE}-property (wrt.\ alternatives~$\{2,3,5\}$), and the $(v_4,v_1,v_2)$-\textsf{BE}-property (wrt.\ alternatives~$\{1,3,5\}$).
Together with \eqref{n4-m5:BET-134}--\eqref{n4-m5:BET-234}, this implies:
\begin{align}
\label{n4-m5:NOTBET-132} & {\ensuremath{\color{red!50!black}\vect{v}}}_3\notin \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_2) & \text{ and} \\
\label{n4-m5:BETT-234} &{\ensuremath{\color{red!50!black}\vect{v}}}_3\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_2,{\ensuremath{\color{red!50!black}\vect{v}}}_4) \text{ or } {\ensuremath{\color{red!50!black}\vect{v}}}_4\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_2,{\ensuremath{\color{red!50!black}\vect{v}}}_3). &
\end{align}
Based on \eqref{n4-m5:BETT-234}, we distinguish between two cases.
\begin{description}
\item[Case 1:] ${\ensuremath{\color{red!50!black}\vect{v}}}_3\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_2,{\ensuremath{\color{red!50!black}\vect{v}}}_4)$. By \eqref{n4-m5:BET-134} and \eqref{n4-m5:NOTBET-132}, we can further infer that
${\ensuremath{\color{red!50!black}\vect{v}}}_3\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_4)$. %
Without loss of generality, assume that
${\ensuremath{\color{red!50!black}\vect{v}}}_2[1]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[1] \le {\ensuremath{\color{red!50!black}\vect{v}}}_4[1]$, ${\ensuremath{\color{red!50!black}\vect{v}}}_2[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[2] \le {\ensuremath{\color{red!50!black}\vect{v}}}_4[2]$,
and
${\ensuremath{\color{red!50!black}\vect{v}}}_1[1]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[1] \le {\ensuremath{\color{red!50!black}\vect{v}}}_4[1]$, ${\ensuremath{\color{red!50!black}\vect{v}}}_1[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[2] \le {\ensuremath{\color{red!50!black}\vect{v}}}_4[2]$.
See \cref{fig:no-n4-m5} for an illustration; note that the relative positions of $v_1$ and $v_2$ need not be exactly as depicted, but both are embedded to the southwest of $v_3$.
As in previous proofs,
we divide the two-dimensional space into $16$ subspaces by drawing a vertical and a horizontal line through the embedded point of each voter~$v_z$, $z\in \{2,3,4\}$.
We enumerate these regions as in \cref{fig:no-n4-m5} and use $R_i$ to refer to region~$i$, $i\in [16]$.
Let us consider alternative~$2$ and its embedded point~${\ensuremath{\color{blue}\vect{c}}}_2$.
By \cref{lem:two-votes}\eqref{lem:not-outside-corner} (wrt.\ $(v_2,v_3,2,3)$), we infer that
alternative~$2$ is not embedded to the northeast of $v_3$, i.e.,
${\ensuremath{\color{blue}\vect{c}}}_2\notin \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
Again, by \cref{lem:two-votes}\eqref{lem:not-outside-corner} (wrt.\ $(v_4,v_3,2,3)$), we infer that
${\ensuremath{\color{blue}\vect{c}}}_2\notin \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
%
That is, ${\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
By symmetry, we can assume that ${\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
%
Using similar reasoning and considering the preferences over $\{x,2\}$ for $x\in \{3,4,5\}$,
we obtain that ${\ensuremath{\color{blue}\vect{c}}}_3,{\ensuremath{\color{blue}\vect{c}}}_4,{\ensuremath{\color{blue}\vect{c}}}_5 \notin \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_2,{\ensuremath{\color{red!50!black}\vect{v}}}_4)\cup \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_4)\cup \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$.
Further, since ${\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$, by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_3,{\ensuremath{\color{blue}\vect{c}}}_4,{\ensuremath{\color{blue}\vect{c}}}_5 \notin \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_4)\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$.
This implies that ${\ensuremath{\color{blue}\vect{c}}}_3,{\ensuremath{\color{blue}\vect{c}}}_4,{\ensuremath{\color{blue}\vect{c}}}_5\in R_1\cup R_2\cup R_3 \cup R_5 \cup R_9$.
Since $v_3\colon 1\succ \{4,5,3\}$ and $v_4\colon \{4,5,3\} \succ 1$,
by \cref{lem:two-votes}\eqref{lem:not-outside-corner},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_3,{\ensuremath{\color{blue}\vect{c}}}_4,{\ensuremath{\color{blue}\vect{c}}}_5\notin \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
That is,
\begin{align}
%
& {\ensuremath{\color{blue}\vect{c}}}_3, {\ensuremath{\color{blue}\vect{c}}}_4, {\ensuremath{\color{blue}\vect{c}}}_5 \in R_1\cup R_2 \cup R_3\cup R_5.& \label{lem:n4-m5-alter-4}
\end{align}
Since $v_2 \colon \{3,5\} \succ 4$ and $v_3 \colon 4 \succ \{5,3\}$,
by \cref{lem:two-votes}\eqref{lem:not-outside-corner},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_3,{\ensuremath{\color{blue}\vect{c}}}_5\notin \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
By \eqref{lem:n4-m5-alter-4},
we have that
\begin{align}
& {\ensuremath{\color{blue}\vect{c}}}_3,{\ensuremath{\color{blue}\vect{c}}}_5 \in R_1\cup R_2 \cup R_5\subseteq \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3). &\label{lem:n4-m5-alter-35}
\end{align}
%
%
%
%
%
%
%
%
%
%
%
%
Since $v_3\colon 4 \succ 3$ and $v_2 \colon 3 \succ 4$, by \eqref{lem:n4-m5-alter-35}
and \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_4\notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$.
By \eqref{lem:n4-m5-alter-4},
this implies that
\begin{align}
& {\ensuremath{\color{blue}\vect{c}}}_4\in R_2\cup R_3\subseteq \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2).& \label{lem:n4-m5-c4}
\end{align}
By \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1} regarding $v_1\colon 4 \succ 5$ and $v_2\colon 5 \succ 4$,
it follows that ${\ensuremath{\color{blue}\vect{c}}}_5\notin \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_1)$.
Since ${\ensuremath{\color{red!50!black}\vect{v}}}_1\in \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$, this implies that ${\ensuremath{\color{blue}\vect{c}}}_5 \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_1)$.
However, this contradicts \eqref{lem:n4-m5-alter-35} and \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1} (setting $(r,s,x,y)=(v_3,v_1,5,3)$).
\item[Case 2:] ${\ensuremath{\color{red!50!black}\vect{v}}}_4\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_2,{\ensuremath{\color{red!50!black}\vect{v}}}_3)$. Note that this case is analogous to the first one, obtained by exchanging the roles of alternatives~$1$ and $2$ and the roles of voters~$v_3$ and $v_4$, respectively.
\end{description}
Altogether, both cases yield a contradiction; hence ${\cal P}_{\thevfourcounter}$ is not \dManhattan[2].
\end{proof}
\begin{figure}
\centering
\begin{tikzpicture}
\drawgridA
\drawreg
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {2/2/v2/{v_2}/voterV/above left/-1/-1, 3/3/v3/{v_3}/voterW/above left/-1/-2, 4/4/v4/{v_4}/voterW/above left/-1/-2} {
\node[\typ, fill=black,draw=black] at (\x\y) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\node[voterV,fill=black,draw=black, below right = 10pt and 5pt of 12] (v1) {} ;
\node[right = -1pt of v1] {$v_1$} ;
\end{tikzpicture}
\caption{Illustration of a possible embedding for the proof of \cref{thm:no-n4-m5}.}
\label{fig:no-n4-m5}
\end{figure}
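The case analysis above repeatedly places embedded points into the quadrants $\ensuremath{\mathsf{NE}}, \ensuremath{\mathsf{NW}}, \ensuremath{\mathsf{SE}}, \ensuremath{\mathsf{SW}}$ of a voter and into bounding boxes $\ensuremath{\mathsf{BB}}$. As an informal Python sketch of these predicates (treating all regions as closed purely for illustration; the paper's own definitions govern boundary cases, and all names here are ours):

```python
def in_NE(q, p):
    """q lies in the (closed) north-east quadrant of p."""
    return q[0] >= p[0] and q[1] >= p[1]

def in_SW(q, p):
    return q[0] <= p[0] and q[1] <= p[1]

def in_NW(q, p):
    return q[0] <= p[0] and q[1] >= p[1]

def in_SE(q, p):
    return q[0] >= p[0] and q[1] <= p[1]

def in_BB(q, a, b):
    """q lies in the bounding box spanned by a and b."""
    return (min(a[0], b[0]) <= q[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= q[1] <= max(a[1], b[1]))

# The configuration of Case 1 (v2 southwest of v3 southwest of v4):
v2, v3, v4 = (2, 2), (3, 3), (4, 4)
print(in_BB(v3, v2, v4))   # True: v3 in BB(v2, v4)
print(in_SW(v2, v3))       # True: v2 southwest of v3
print(in_NE((5, 2), v3))   # False: (5, 2) is southeast of v3
```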
\subsection{Proof of \cref{thm:no-n5-m4}}\label{subsec:n5-m4}
\begin{example}
\setcounter{vfivecounter}{\themyprofilecounter}
Any profile~${\cal P}_{\themyprofilecounter}$ satisfying the following
is not \dManhattan[2].
\begin{align*}
\begin{array}{l@{\,}ll}
v_1\colon & 1 \succ 2 \succ 3 \succ 4,\\
v_2\colon & 1 \succ 4 \succ 3 \succ 2,\\
v_3\colon & \{2, 4\} \succ 3 \succ 1,\\
v_4\colon & 3 \succ 2 \succ 1 \succ 4,\\
v_5\colon & 3 \succ 4 \succ 1 \succ 2.
\end{array}
\end{align*}
\stepcounter{myprofilecounter}
\label{ex:no-n5-m4}
\end{example}
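Case 1 of the proof below exploits a symmetry of this profile: relabeling alternatives $2\mapsto 4$ and $4\mapsto 2$ yields the same profile with the roles of $v_1,v_2$ and of $v_4,v_5$ exchanged. This can be checked mechanically; a minimal Python sketch (the encoding as lists of indifference classes is our own illustration):

```python
# Profile from the example; rankings as lists of indifference classes,
# most-preferred first.
P = {
    "v1": [{1}, {2}, {3}, {4}],
    "v2": [{1}, {4}, {3}, {2}],
    "v3": [{2, 4}, {3}, {1}],
    "v4": [{3}, {2}, {1}, {4}],
    "v5": [{3}, {4}, {1}, {2}],
}

swap = {1: 1, 2: 4, 3: 3, 4: 2}  # relabel alternatives 2 <-> 4

def relabel(ranking):
    return [{swap[a] for a in cls} for cls in ranking]

relabeled = {v: relabel(r) for v, r in P.items()}

# Relabeling 2 <-> 4 maps the profile onto itself with v1 <-> v2
# and v4 <-> v5 exchanged.
pairing = {"v1": "v2", "v2": "v1", "v3": "v3", "v4": "v5", "v5": "v4"}
print(all(relabeled[v] == P[pairing[v]] for v in P))  # True
```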
Before we proceed with the proof of \cref{thm:no-n5-m4},
we show a technical but useful lemma.
\begin{lemma}\label{clm:n5-m4}
Let ${\cal P}$ be a preference profile with four voters~$u,v,w,r$ and four alternatives~$a,b,c,d$ satisfying the following:
\begin{align*}
\begin{array}{l@{\,}ll}
u\colon & \{a,b\} \succ c \succ d,\\
v\colon & \{b,d\} \succ c \succ a,\\
w\colon & \{a,d\} \succ c \succ b,\\
r\colon & c \succ \{a,b\} \succ d.\\
\end{array}
\end{align*}
Let $E$ be an embedding of ${\cal P}$ such that $E(v)\in \ensuremath{\mathsf{BB}}(E(u), E(w))$.
If $E$ is \dManhattan[2] for~${\cal P}$, then $E(v)\in \ensuremath{\mathsf{BB}}(E(r), E(w))$.
\end{lemma}
\begin{proof}%
%
Let ${\cal P},u,v,w,r,a,b,c,d,E$ be as defined.
For brevity's sake, we use $\ensuremath{\vect{u}},{\ensuremath{\color{red!50!black}\vect{v}}},\ensuremath{\vect{w}},\ensuremath{\vect{r}},\ensuremath{\vect{a}},\ensuremath{\vect{b}},{\ensuremath{\color{blue}\vect{c}}},\ensuremath{\vect{d}}$ to refer to the embedded points of $u,v,w,r,a,b,c,d$, respectively.
Without loss of generality assume that $\ensuremath{\vect{u}}[1]\le {\ensuremath{\color{red!50!black}\vect{v}}}[1] \le \ensuremath{\vect{w}}[1]$ and $\ensuremath{\vect{u}}[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}[2] \le \ensuremath{\vect{w}}[2]$.
We divide the two-dimensional space into 16 subspaces, enumerate these regions as in~\cref{fig:config-v3}(I), and use $R_i$ to refer to region~$i$, $i\in [16]$.
To prove the statement, we need to show that $\ensuremath{\vect{r}}\in \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}})$, i.e., $\ensuremath{\vect{r}}\notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}})\cup \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
Hence, we assume towards a contradiction that $\ensuremath{\vect{r}}\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}})\cup \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}})$
and show that then $E$ is not \dManhattan[2] for~${\cal P}$.
Before we proceed, we establish where the individual alternatives can be embedded.
First, by the preferences of $u$ and $w$ regarding $c$ and $a$,
and by \cref{lem:two-votes}\eqref{lem:not-inside}, we obtain that
${\ensuremath{\color{blue}\vect{c}}}\notin \ensuremath{\mathsf{BB}}(\ensuremath{\vect{u}},\ensuremath{\vect{w}})$.
Further, by the preferences of $u$ and $v$ regarding $c$ and $d$ and by \cref{lem:two-votes}\eqref{lem:not-outside-corner},
we infer that
${\ensuremath{\color{blue}\vect{c}}}\notin \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
Analogously, due to the preferences of $v$ and $w$ regarding $b$ and $c$, we have that ${\ensuremath{\color{blue}\vect{c}}}\notin \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
Together, we infer that ${\ensuremath{\color{blue}\vect{c}}} \in R_1\cup R_2\cup R_5\cup R_{12}\cup R_{15}\cup R_{16}$.
By symmetry, assume that ${\ensuremath{\color{blue}\vect{c}}} \in R_1\cup R_2 \cup R_5$, implying that ${\ensuremath{\color{blue}\vect{c}}} \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
Similarly, we can obtain that $\ensuremath{\vect{a}} \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
By the preferences of $u,v,w$ regarding $c$ and $a$ and by \cref{lem:bet-property-Ntogether},
we infer that $\ensuremath{\vect{a}} \in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
Now, we distinguish between three cases regarding the relative position of voter~$r$.
\begin{description}
\item[Case 1:] $\ensuremath{\vect{r}} \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})$. Since $r\colon a \succ d$ and $v\colon d \succ a$, by \cref{lem:two-votes}\eqref{lem:not-outside-corner}, it follows that $\ensuremath{\vect{a}}\notin \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}})$, a contradiction to our assumption.
\item[Case 2:] $\ensuremath{\vect{r}} \in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}})$. This case is analogous to the first one, considering $c$ and $d$ instead. Since $r\colon c \succ d$ and $v\colon d \succ c$, by \cref{lem:two-votes}\eqref{lem:not-outside-corner}, it follows that ${\ensuremath{\color{blue}\vect{c}}}\notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})$, again a contradiction to our assumption.
\item[Case 3:] $\ensuremath{\vect{r}}\in \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}})$.
Let us consider alternative~$d$.
By the preferences of $u$ and $r$, and by \cref{lem:two-votes}\eqref{lem:not-inside},
we obtain that $\ensuremath{\vect{d}}\notin \ensuremath{\mathsf{BB}}(\ensuremath{\vect{u}},\ensuremath{\vect{r}})$.
By \cref{lem:two-votes}\eqref{lem:not-outside-corner} (considering the preferences of $u$ and $r$ regarding $c$ and $d$),
we further infer that $\ensuremath{\vect{d}} \notin \ensuremath{\mathsf{NE}}(\ensuremath{\vect{r}}) \cup \ensuremath{\mathsf{SW}}(\ensuremath{\vect{u}})$.
Moreover, by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1}--\eqref{lem:Ntogether2}
(considering the preferences of $u,v,r$ regarding $c$ and $d$) and since ${\ensuremath{\color{blue}\vect{c}}} \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})$,
we infer that $\ensuremath{\vect{d}}\notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}})\cup \ensuremath{\mathsf{NW}}(\ensuremath{\vect{u}})\cup \ensuremath{\mathsf{NW}}(\ensuremath{\vect{r}})$.
Hence,
\begin{align*}
\ensuremath{\vect{d}} \in \ensuremath{\mathsf{SE}}(\ensuremath{\vect{u}})\cup \ensuremath{\mathsf{SE}}(\ensuremath{\vect{r}}).%
\end{align*}
However, this is a contradiction: Since $v\colon d \succ a$, $u,r\colon a \succ d$, and $\ensuremath{\vect{a}} \in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}})$, by
\cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that $\ensuremath{\vect{d}} \notin \ensuremath{\mathsf{SE}}(\ensuremath{\vect{u}})\cup \ensuremath{\mathsf{SE}}(\ensuremath{\vect{r}})$.
%
%
%
%
\end{description}
Summarizing, this implies that $\ensuremath{\vect{r}}\in \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}})$, and hence ${\ensuremath{\color{red!50!black}\vect{v}}} \in \ensuremath{\mathsf{BB}}(\ensuremath{\vect{r}},\ensuremath{\vect{w}})$.
\end{proof}
\begin{proof}[Proof of \cref{thm:no-n5-m4}]
To prove the statement, we show that profile~${\cal P}_{\thevfivecounter}$ given in \cref{ex:no-n5-m4} is not \dManhattan[2].
Suppose, towards a contradiction, that ${\cal P}_{\thevfivecounter}$ is \dManhattan[2] and $E$ is a \dManhattan[2] embedding for~${\cal P}_{\thevfivecounter}$.
For brevity's sake, we use ${\ensuremath{\color{red!50!black}\vect{v}}}_i$ and ${\ensuremath{\color{blue}\vect{c}}}_j$, $i\in [5], j\in [4]$ to refer to the embedded points of voter~$v_i$ and alternative~$j$, respectively.
First, we observe that one of $v_1,v_2,v_3$ is embedded inside the bounding box defined by the other two, since
the subprofile of ${\cal P}_{\thevfivecounter}$ restricted to voters~$v_1,v_2$, and $v_3$ is equivalent to profile~${\cal Q}_{\theextcounter}$ which, by \cref{lem:ext-property}, violates the \textsf{EX}-property (for each of $v_1$, $v_2$, and $v_3$, respectively).
Hence, we distinguish between two cases.
\begin{description}
\item[Case 1:] ${\ensuremath{\color{red!50!black}\vect{v}}}_2\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_3)$ or ${\ensuremath{\color{red!50!black}\vect{v}}}_1\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_2,{\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
Note that these two subcases are equivalent in the sense that if we exchange the roles of alternatives~$2$ and $4$, i.e.,
$1\mapsto 1$, $3 \mapsto 3$, $2\mapsto 4$, and $4 \mapsto 2$,
we obtain an equivalent (in terms of the Manhattan property) preference profile where the roles of voters $v_1$ and $v_2$ (resp.\ $v_4$ and $v_5$) are exchanged.
Hence, it suffices to consider the case of ${\ensuremath{\color{red!50!black}\vect{v}}}_2\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
Without loss of generality, assume that ${\ensuremath{\color{red!50!black}\vect{v}}}_1[1]\le {\ensuremath{\color{red!50!black}\vect{v}}}_2[1]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[1]$ and ${\ensuremath{\color{red!50!black}\vect{v}}}_1[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}_2[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[2]$ (see \cref{fig:no-n5-m4-case1}).
Then, by \cref{clm:n5-m4} (setting $u=v_1$, $v=v_2$, $w=v_3$, and $r=v_4$),
we obtain that ${\ensuremath{\color{red!50!black}\vect{v}}}_2\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_4,{\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
This implies that ${\ensuremath{\color{red!50!black}\vect{v}}}_4[1]\le {\ensuremath{\color{red!50!black}\vect{v}}}_2[1]$ and ${\ensuremath{\color{red!50!black}\vect{v}}}_4[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}_2[2]$.
%
\begin{figure}
\centering
\begin{subfigure}[b]{.28\textwidth}
\centering
\begin{tikzpicture}
\drawgridB
\drawregNN
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {2/2/v4/{v_4}/voterV/above right/-1/-2, 3/3/v2/{v_2}/voterW/above right/-1/-1, 4/4/v3/{v_3}/voterW/above right/-1/-1} {
\node[\typ, fill=black,draw=black] at (\x\y) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\node[voterV,fill=black,draw=black, below left = 5pt and 2pt of 22] (v1) {} ;
\node[left = -1pt of v1] {$v_1$};
\end{tikzpicture}
\caption{}\label{fig:no-n5-m4-case1}
\end{subfigure}
\begin{subfigure}[b]{.28\textwidth}
\centering
\begin{tikzpicture}
\drawgridB
\drawregNN
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {2/2/v4/{v_4}/voterV/above right/-1/-2, 3/3/v3/{v_3}/voterW/above right/-1/-1, 4/4/v5/{v_5}/voterW/above right/-1/-1} {
\node[\typ, fill=black,draw=black] at (\x\y) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\node[voterV,fill=black,draw=black, below right = 10pt and 20pt of 13] (v1) {} ;
\node[below = 0pt of v1] {$v_1$} ;
\node[voterV,fill=black,draw=black, above right = 10pt and 12pt of 34] (v2) {} ;
\node[right = 0pt of v2] {$v_2$} ;
\end{tikzpicture}
\caption{}\label{fig:no-n5-m4-case2}
\end{subfigure}
\begin{subfigure}[b]{.3\textwidth}
\centering
\begin{tikzpicture}
\drawgridU
\drawregU
\foreach \x / \y / \n / \nn / \typ / \p / \dx / \dy in {2/3/v1/{v_1}/voterV/below right/-1/-2, 3/2/v4/{v_4}/voterV/below right/-1/-2, 4/4/v3/{v_3}/voterW/below right/-1/-2, 6/5/v5/{v_5}/voterW/above right/-1/-1,5/6/v2/{v_2}/voterW/above right/-1/-1} {
\node[\typ, fill=black,draw=black] at (\x\y) (\n) {};
\node[\p = \dx pt and \dy pt of \n] {$\nn$};
}
\gettikzxy{(v1)}{\vx}{\vy}
\gettikzxy{(v4)}{\wx}{\wy}
\node[alter] at ($(\vx*0.5+\wx*0.5,\vy*0.5+\wy*0.5)$) (c2) {};
\node[right = 0pt of c2] {${\ensuremath{\color{blue}\vect{c}}}_2$};
\gettikzxy{(v2)}{\vvx}{\vvy}
\gettikzxy{(v5)}{\wwx}{\wwy}
\node[alter] at ($(\vvx*0.6+\wwx*0.4,\vvy*0.4+\wwy*0.6)$) (c4) {};
\node[right = 0pt of c4] {${\ensuremath{\color{blue}\vect{c}}}_4$};
\node[alter] at ($(\vvx*0.2+\wwx*0.8,\vy*0.2+\wy*0.7)$) (c3) {};
\node[left = 0pt of c3] {${\ensuremath{\color{blue}\vect{c}}}_3$};
\node[alter] at ($(\vx*0.8+\wx*0.2,\vvy*0.8+\wwy*0.2)$) (c1) {};
\node[right = 0pt of c1] {${\ensuremath{\color{blue}\vect{c}}}_1$};
\end{tikzpicture}
\caption{}\label{fig:no-n5-m4-case2-refined}
\end{subfigure}
\caption{Illustrations of possible embeddings for the proof of \cref{thm:no-n5-m4}.
\eqref{fig:no-n5-m4-case1}: ${\ensuremath{\color{red!50!black}\vect{v}}}_2\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_3)$ and ${\ensuremath{\color{red!50!black}\vect{v}}}_2\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_4,{\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
\eqref{fig:no-n5-m4-case2}: ${\ensuremath{\color{red!50!black}\vect{v}}}_3\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_2)\cup \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_4,{\ensuremath{\color{red!50!black}\vect{v}}}_2)\cup \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_5,{\ensuremath{\color{red!50!black}\vect{v}}}_1)$.
\eqref{fig:no-n5-m4-case2-refined}: ${\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_4, {\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$, ${\ensuremath{\color{red!50!black}\vect{v}}}_2, {\ensuremath{\color{red!50!black}\vect{v}}}_5,{\ensuremath{\color{blue}\vect{c}}}_4\in \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$ such that ${\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_1)$, ${\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_4)$, ${\ensuremath{\color{blue}\vect{c}}}_4\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$, ${\ensuremath{\color{blue}\vect{c}}}_4\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_5)$.}
\label{fig:no-n5-m4}
\end{figure}
By the preferences of $v_4,v_2,v_3$ regarding alternatives~$2$ and $1$, and by \cref{lem:two-votes},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$ and
${\ensuremath{\color{blue}\vect{c}}}_1\notin \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_3,{\ensuremath{\color{red!50!black}\vect{v}}}_4)\cup \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cup \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_4)$.
Similarly, regarding the preferences over~$3$ and $1$, it follows that ${\ensuremath{\color{blue}\vect{c}}}_3\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$.
By \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether2} (considering the preferences of $v_1,v_2$ and $v_3$ regarding alternatives~$2$ and $3$),
we further infer that either ${\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$ and ${\ensuremath{\color{blue}\vect{c}}}_3\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$, or ${\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$ and ${\ensuremath{\color{blue}\vect{c}}}_3\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$.
Without loss of generality, assume that ${\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$ and ${\ensuremath{\color{blue}\vect{c}}}_3\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$.
By the preferences of $v_3$ and $v_2$ (resp.\ $v_4$ and $v_2$) regarding $1$ and $3$
and by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_1\notin \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$ (resp.\ ${\ensuremath{\color{blue}\vect{c}}}_1\notin \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_4)$).
By prior reasoning, we have that ${\ensuremath{\color{blue}\vect{c}}}_1\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cup \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_4)$.
However, this is a contradiction due to the preferences of $v_4$ and $v_2$ (resp.\ $v_3$ and $v_2$) regarding $1$ and $2$:
By \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether2},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_1\notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cup \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_4)$.
\item[Case 2:] ${\ensuremath{\color{red!50!black}\vect{v}}}_3\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_2)$.
Without loss of generality, assume that ${\ensuremath{\color{red!50!black}\vect{v}}}_1[1]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[1]\le {\ensuremath{\color{red!50!black}\vect{v}}}_2[1]$ and ${\ensuremath{\color{red!50!black}\vect{v}}}_1[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}_2[2]$; see \cref{fig:no-n5-m4-case2}.
Then, by \cref{clm:n5-m4} (setting $(u,v,w,r)=(v_1,v_3,v_2,v_4)$ and $(u,v,w,r)=(v_2,v_3,v_1,v_5)$, respectively),
we obtain that ${\ensuremath{\color{red!50!black}\vect{v}}}_3\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_4,{\ensuremath{\color{red!50!black}\vect{v}}}_2)$ and ${\ensuremath{\color{red!50!black}\vect{v}}}_3\in \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_5,{\ensuremath{\color{red!50!black}\vect{v}}}_1)$.
This implies that
\begin{align}
{\ensuremath{\color{red!50!black}\vect{v}}}_4[1]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[1] \text{ and }{\ensuremath{\color{red!50!black}\vect{v}}}_4[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[2]\label{eq:n5-m4-v4-v3}
\end{align}
and
\begin{align}
{\ensuremath{\color{red!50!black}\vect{v}}}_5[1]\ge {\ensuremath{\color{red!50!black}\vect{v}}}_3[1] \text{ and }{\ensuremath{\color{red!50!black}\vect{v}}}_5[2]\ge {\ensuremath{\color{red!50!black}\vect{v}}}_3[2].\label{eq:n5-m4-v5-v3}
\end{align}
%
By \cref{lem:two-votes}\eqref{lem:not-outside-corner} (setting $(r,s,x,y)=(v_4,v_3,3,2)$ and $(r,s,x,y)=(v_2,v_3,3,2)$),
we infer that ${\ensuremath{\color{blue}\vect{c}}}_3\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
Similarly, using the preferences of $v_1,v_3,v_2$ regarding $\{1,4\}$,
we infer that
\begin{align}
{\ensuremath{\color{blue}\vect{c}}}_1\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3).\label{eq:n5-m4-c1}
\end{align}
By symmetry, assume that ${\ensuremath{\color{blue}\vect{c}}}_1 \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
Then, by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether2} (setting $(r,s,t,x,y)=(v_1,v_3,v_2,1,3)$),
we obtain that ${\ensuremath{\color{blue}\vect{c}}}_3 \notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
By \eqref{eq:n5-m4-c1}, we infer that ${\ensuremath{\color{blue}\vect{c}}}_3 \in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
Next, we specify exactly the relative positions between alternative~$2$ (resp.\ $4$) and voters~$v_1$ and~$v_4$ (resp.\ $v_2$ and $v_5$).
Since $v_3\colon \{2,4\} \succ 3$ and $v_4,v_5\colon 3 \succ \{2,4\}$, and ${\ensuremath{\color{blue}\vect{c}}}_3\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$,
by \cref{lem:two-votes} (resp.\ \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1}--\eqref{lem:Ntogether2}),
it follows that
${\ensuremath{\color{blue}\vect{c}}}_2,{\ensuremath{\color{blue}\vect{c}}}_4\notin \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_4,{\ensuremath{\color{red!50!black}\vect{v}}}_5) \cup \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_5)\cup \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_4)$ (resp.\ ${\ensuremath{\color{blue}\vect{c}}}_2,{\ensuremath{\color{blue}\vect{c}}}_4\notin \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_4)\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_5)\cup \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$).
Analogously, since $v_3\colon 2 \succ 1$, $v_1,v_5\colon 1 \succ 2$, and ${\ensuremath{\color{blue}\vect{c}}}_1 \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$,
by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1}--\eqref{lem:Ntogether2},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_2\notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_1)\cup \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_5) \cup \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
Summarizing, it follows that
\begin{align}
{\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cap \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_4).\label{eq:n5-m4-c2-v3v5}
\end{align}
Consequently, since $v_3\colon 2 \succ 1$, $v_1,v_2\colon 1\succ 2$, and ${\ensuremath{\color{blue}\vect{c}}}_1 \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$, by \cref{lem:two-votes} (resp.\ \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1}),
it follows that ${\ensuremath{\color{blue}\vect{c}}}_2 \notin \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_3)\cup \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_1)$
(resp.\ ${\ensuremath{\color{blue}\vect{c}}}_2\notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_1)$).
Together with \eqref{eq:n5-m4-c2-v3v5}, we have that
\begin{align}
{\ensuremath{\color{blue}\vect{c}}}_2\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_1)\cap \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cap \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_4).\label{eq:n5-m4-c2-v143}
\end{align}
Similarly, since $v_3\colon 4\succ 1$, $v_1,v_2\colon 1\succ 4$, and ${\ensuremath{\color{blue}\vect{c}}}_1 \in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$,
by \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1}--\eqref{lem:Ntogether2},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_4\notin \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_1)\cup \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_2) \cup \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_4) \cup \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$.
This implies that
\begin{align}
{\ensuremath{\color{blue}\vect{c}}}_4\in \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cap \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_5).\label{eq:n5-m4-c4-v3v5}
\end{align}
Since $v_3\colon 4\succ 1$ and $v_1,v_2\colon 1 \succ 4$, by \cref{lem:two-votes} and \cref{lem:bet-property-Ntogether}\eqref{lem:Ntogether1},
it follows that ${\ensuremath{\color{blue}\vect{c}}}_4\notin \ensuremath{\mathsf{BB}}({\ensuremath{\color{red!50!black}\vect{v}}}_1,{\ensuremath{\color{red!50!black}\vect{v}}}_2)\cup \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)\cup \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_2)$.
Together with \eqref{eq:n5-m4-c4-v3v5}, we infer that
\begin{align}
{\ensuremath{\color{blue}\vect{c}}}_4\in \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)\cap \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_5)\cap \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_2).\label{eq:n5-m4-c4-v253}
\end{align}
See \cref{fig:no-n5-m4-case2-refined} for an illustration.
In the remainder of the proof, we will derive several inequalities, which are mutually inconsistent.
The main idea is that since $v_2$ and $v_4$ (which are on the opposite ``diagonal'' of $v_3$) both have $1\succ 4$ and $3 \succ 2$,
it is necessary that the bisector between alternatives~$1$ and~$4$ and the one between alternatives~$2$ and $3$ ``cross'' twice.
Similarly, due to $v_1$ and $v_5$, the bisector between alternatives~$1$ and~$2$ and the one between alternatives~$3$ and~$4$ ``cross'' twice.
This is, however, impossible.
First, let us consider the preferences over~$3$ and~$4$.
Since $v_5\colon 3 \succ 4$, by \eqref{eq:n5-m4-c4-v253}, we infer that
\begin{align}
\nonumber & &({\ensuremath{\color{red!50!black}\vect{v}}}_5[1]-{\ensuremath{\color{blue}\vect{c}}}_4[1])+({\ensuremath{\color{blue}\vect{c}}}_4[2]-{\ensuremath{\color{red!50!black}\vect{v}}}_5[2]) & > |{\ensuremath{\color{red!50!black}\vect{v}}}_5[1]-{\ensuremath{\color{blue}\vect{c}}}_3[1]| + |{\ensuremath{\color{red!50!black}\vect{v}}}_5[2]-{\ensuremath{\color{blue}\vect{c}}}_3[2]|\\
\Rightarrow & &-{\ensuremath{\color{blue}\vect{c}}}_4[1]+{\ensuremath{\color{blue}\vect{c}}}_4[2] & > -{\ensuremath{\color{blue}\vect{c}}}_3[1]-{\ensuremath{\color{blue}\vect{c}}}_3[2] + 2\cdot {\ensuremath{\color{red!50!black}\vect{v}}}_5[2]. \label{eq:n5-m4-v5-c34}
\end{align}
Since $v_3\colon 4 \succ 3$, by the assumption that ${\ensuremath{\color{blue}\vect{c}}}_3\in \ensuremath{\mathsf{SE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$ and by \eqref{eq:n5-m4-c4-v253}, we infer that
\begin{align}
\nonumber & &({\ensuremath{\color{blue}\vect{c}}}_3[1]-{\ensuremath{\color{red!50!black}\vect{v}}}_3[1])+({\ensuremath{\color{red!50!black}\vect{v}}}_3[2]-{\ensuremath{\color{blue}\vect{c}}}_3[2]) & > ({\ensuremath{\color{blue}\vect{c}}}_4[1]-{\ensuremath{\color{red!50!black}\vect{v}}}_3[1]) + ({\ensuremath{\color{blue}\vect{c}}}_4[2]-{\ensuremath{\color{red!50!black}\vect{v}}}_3[2])\\
\Rightarrow & &{\ensuremath{\color{blue}\vect{c}}}_3[1]-{\ensuremath{\color{blue}\vect{c}}}_3[2] & > {\ensuremath{\color{blue}\vect{c}}}_4[1]+{\ensuremath{\color{blue}\vect{c}}}_4[2] - 2\cdot {\ensuremath{\color{red!50!black}\vect{v}}}_3[2].\label{eq:n5-m4-v3-c34}
\end{align}
Now, let us consider the preferences for $v_2$ and $v_4$ over the pairs~$\{2,3\}$ and $\{1,4\}$, respectively.
Since $v_2, v_4\colon 3 \succ 2$ and ${\ensuremath{\color{red!50!black}\vect{v}}}_2\in \ensuremath{\mathsf{NE}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$, by~\eqref{eq:n5-m4-c2-v143}, we infer that
\begin{align}
\nonumber & & ({\ensuremath{\color{red!50!black}\vect{v}}}_2[1]-{\ensuremath{\color{blue}\vect{c}}}_2[1]) + ({\ensuremath{\color{red!50!black}\vect{v}}}_2[2]-{\ensuremath{\color{blue}\vect{c}}}_2[2]) & > |{\ensuremath{\color{blue}\vect{c}}}_3[1]-{\ensuremath{\color{red!50!black}\vect{v}}}_2[1]| + |{\ensuremath{\color{red!50!black}\vect{v}}}_2[2]-{\ensuremath{\color{blue}\vect{c}}}_3[2]|\\
\Rightarrow & &2\cdot {\ensuremath{\color{red!50!black}\vect{v}}}_2[1] - {\ensuremath{\color{blue}\vect{c}}}_2[1]-{\ensuremath{\color{blue}\vect{c}}}_2[2] & > {\ensuremath{\color{blue}\vect{c}}}_3[1]-{\ensuremath{\color{blue}\vect{c}}}_3[2],\label{eq:n5-m4-v2-c32}\\
\nonumber & & ({\ensuremath{\color{red!50!black}\vect{v}}}_4[1]-{\ensuremath{\color{blue}\vect{c}}}_2[1]) + ({\ensuremath{\color{blue}\vect{c}}}_2[2]-{\ensuremath{\color{red!50!black}\vect{v}}}_4[2]) & > |{\ensuremath{\color{blue}\vect{c}}}_3[1]-{\ensuremath{\color{red!50!black}\vect{v}}}_4[1]| + |{\ensuremath{\color{blue}\vect{c}}}_3[2]-{\ensuremath{\color{red!50!black}\vect{v}}}_4[2]|\\
\Rightarrow & &2\cdot {\ensuremath{\color{red!50!black}\vect{v}}}_4[1] - {\ensuremath{\color{blue}\vect{c}}}_2[1]+{\ensuremath{\color{blue}\vect{c}}}_2[2] & > {\ensuremath{\color{blue}\vect{c}}}_3[1]+{\ensuremath{\color{blue}\vect{c}}}_3[2].\label{eq:n5-m4-v4-c32}
\end{align}
Since $v_2, v_4\colon 1 \succ 4$ and ${\ensuremath{\color{red!50!black}\vect{v}}}_4\in \ensuremath{\mathsf{SW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$, by~\eqref{eq:n5-m4-c4-v253}, we infer that
\begin{align}
\nonumber & & ({\ensuremath{\color{blue}\vect{c}}}_4[1]-{\ensuremath{\color{red!50!black}\vect{v}}}_2[1]) + ({\ensuremath{\color{red!50!black}\vect{v}}}_2[2]-{\ensuremath{\color{blue}\vect{c}}}_4[2]) & > |{\ensuremath{\color{red!50!black}\vect{v}}}_2[1]-{\ensuremath{\color{blue}\vect{c}}}_1[1]| + |{\ensuremath{\color{red!50!black}\vect{v}}}_2[2]-{\ensuremath{\color{blue}\vect{c}}}_1[2]|\\
\Rightarrow & &{\ensuremath{\color{blue}\vect{c}}}_4[1]-{\ensuremath{\color{blue}\vect{c}}}_4[2] & > -{\ensuremath{\color{blue}\vect{c}}}_1[1]-{\ensuremath{\color{blue}\vect{c}}}_1[2] + 2\cdot {\ensuremath{\color{red!50!black}\vect{v}}}_{2}[1],\label{eq:n5-m4-v2-c14}\\
\nonumber & & ({\ensuremath{\color{blue}\vect{c}}}_4[1]-{\ensuremath{\color{red!50!black}\vect{v}}}_4[1]) + ({\ensuremath{\color{blue}\vect{c}}}_4[2]-{\ensuremath{\color{red!50!black}\vect{v}}}_4[2]) & > |{\ensuremath{\color{red!50!black}\vect{v}}}_4[1]-{\ensuremath{\color{blue}\vect{c}}}_1[1]| + |{\ensuremath{\color{blue}\vect{c}}}_1[2]-{\ensuremath{\color{red!50!black}\vect{v}}}_4[2]|\\
\Rightarrow & & {\ensuremath{\color{blue}\vect{c}}}_4[1]+{\ensuremath{\color{blue}\vect{c}}}_4[2] & > -{\ensuremath{\color{blue}\vect{c}}}_1[1]+{\ensuremath{\color{blue}\vect{c}}}_1[2] + 2\cdot {\ensuremath{\color{red!50!black}\vect{v}}}_{4}[1].\label{eq:n5-m4-v4-c14}
\end{align}
Adding inequalities~\eqref{eq:n5-m4-v5-c34}--\eqref{eq:n5-m4-v4-c14} yields
\begin{align}
2\cdot ({\ensuremath{\color{blue}\vect{c}}}_1[1] - {\ensuremath{\color{blue}\vect{c}}}_2[1]) > 2({\ensuremath{\color{red!50!black}\vect{v}}}_5[2]-{\ensuremath{\color{red!50!black}\vect{v}}}_3[2]). \label{eq:n5-m4-med}
\end{align}
However, this contradicts the preferences of $v_1\colon 1\succ 2$ and $v_3\colon 2 \succ 1$ as these, by~\eqref{eq:n5-m4-c2-v143} and the assumption~${\ensuremath{\color{blue}\vect{c}}}_1\in \ensuremath{\mathsf{NW}}({\ensuremath{\color{red!50!black}\vect{v}}}_3)$, imply that
\begin{align}
& & ({\ensuremath{\color{blue}\vect{c}}}_2[1]-{\ensuremath{\color{red!50!black}\vect{v}}}_1[1]) + ({\ensuremath{\color{red!50!black}\vect{v}}}_1[2]-{\ensuremath{\color{blue}\vect{c}}}_2[2]) & > |{\ensuremath{\color{blue}\vect{c}}}_1[1]-{\ensuremath{\color{red!50!black}\vect{v}}}_1[1]| + |{\ensuremath{\color{blue}\vect{c}}}_1[2]-{\ensuremath{\color{red!50!black}\vect{v}}}_1[2]|\nonumber\\
\Rightarrow & & {\ensuremath{\color{blue}\vect{c}}}_2[1]-{\ensuremath{\color{blue}\vect{c}}}_2[2] & > {\ensuremath{\color{blue}\vect{c}}}_1[1]+{\ensuremath{\color{blue}\vect{c}}}_1[2] - 2\cdot {\ensuremath{\color{red!50!black}\vect{v}}}_{1}[2], \label{eq:n5-m4-v1-12}\\
& & ({\ensuremath{\color{red!50!black}\vect{v}}}_3[1]-{\ensuremath{\color{blue}\vect{c}}}_1[1]) + ({\ensuremath{\color{blue}\vect{c}}}_1[2]-{\ensuremath{\color{red!50!black}\vect{v}}}_3[2]) & > ({\ensuremath{\color{red!50!black}\vect{v}}}_3[1]-{\ensuremath{\color{blue}\vect{c}}}_2[1]) + ({\ensuremath{\color{red!50!black}\vect{v}}}_3[2]-{\ensuremath{\color{blue}\vect{c}}}_2[2])\nonumber\\
\Rightarrow & & -{\ensuremath{\color{blue}\vect{c}}}_1[1]+{\ensuremath{\color{blue}\vect{c}}}_1[2] & > -{\ensuremath{\color{blue}\vect{c}}}_2[1]-{\ensuremath{\color{blue}\vect{c}}}_2[2] + 2\cdot {\ensuremath{\color{red!50!black}\vect{v}}}_{3}[2]. \label{eq:n5-m4-v3-12}
\end{align}
Adding \eqref{eq:n5-m4-med}--\eqref{eq:n5-m4-v3-12} yields
${\ensuremath{\color{red!50!black}\vect{v}}}_1[2] > {\ensuremath{\color{red!50!black}\vect{v}}}_5[2]$, a contradiction to the assumption that ${\ensuremath{\color{red!50!black}\vect{v}}}_1[2]\le {\ensuremath{\color{red!50!black}\vect{v}}}_3[2]$ combined with inequality~\eqref{eq:n5-m4-v5-v3}.
\end{description}
In summary, we have shown that it is not possible to find a Manhattan embedding for profile~${\cal P}_{\thevfivecounter}$.
\end{proof}
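The two summation steps concluding the preceding proof are pure bookkeeping over linear forms, and they can be double-checked mechanically in the spirit of the computer-assisted verification of \cref{sec:experiments}. The following sketch (ours, not part of the proof) encodes each inequality as the linear form ``left-hand side minus right-hand side'', with, e.g., \texttt{c1\_1} standing for ${\ensuremath{\color{blue}\vect{c}}}_1[1]$ and \texttt{v5\_2} for ${\ensuremath{\color{red!50!black}\vect{v}}}_5[2]$, and sums the coefficient vectors:

```python
from collections import Counter

def form(**coef):
    # a linear form sum(coef[name] * name), stored as {name: coefficient}
    return Counter(coef)

def sum_forms(*forms):
    # summing strict inequalities "f > 0" yields "(f1 + ... + fk) > 0"
    total = Counter()
    for f in forms:
        total.update(f)  # adds coefficients key-wise, negatives included
    return Counter({k: v for k, v in total.items() if v != 0})

# Inequalities (LHS - RHS) from the proof, in the order they are derived.
ineqs = [
    form(c4_1=-1, c4_2=1, c3_1=1, c3_2=1, v5_2=-2),   # eq:n5-m4-v5-c34
    form(c3_1=1, c3_2=-1, c4_1=-1, c4_2=-1, v3_2=2),  # eq:n5-m4-v3-c34
    form(v2_1=2, c2_1=-1, c2_2=-1, c3_1=-1, c3_2=1),  # eq:n5-m4-v2-c32
    form(v4_1=2, c2_1=-1, c2_2=1, c3_1=-1, c3_2=-1),  # eq:n5-m4-v4-c32
    form(c4_1=1, c4_2=-1, c1_1=1, c1_2=1, v2_1=-2),   # eq:n5-m4-v2-c14
    form(c4_1=1, c4_2=1, c1_1=1, c1_2=-1, v4_1=-2),   # eq:n5-m4-v4-c14
]

med = sum_forms(*ineqs)
# expected: 2*c1[1] - 2*c2[1] - 2*v5[2] + 2*v3[2] > 0, i.e. eq:n5-m4-med
print(dict(med))

v1_12 = form(c2_1=1, c2_2=-1, c1_1=-1, c1_2=-1, v1_2=2)  # eq:n5-m4-v1-12
v3_12 = form(c1_1=-1, c1_2=1, c2_1=1, c2_2=1, v3_2=-2)   # eq:n5-m4-v3-12
final = sum_forms(med, v1_12, v3_12)
print(dict(final))  # 2*v1[2] - 2*v5[2] > 0, i.e. v1[2] > v5[2]
```

All other terms cancel, which is exactly the two cancellations claimed in the proof.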
\section{Experimental Results}\label{sec:experiments}
In this section, we discuss our experimental results.
\begin{proposition}\label{prop:n3-m5+n4-m4}
If $(n,m)=(3,5)$ or $(n,m)=(4,4)$, then
each preference profile with at most $n$ voters and at most $m$ alternatives is \dManhattan[2].
\end{proposition}
\begin{proof}
Since the Manhattan property is monotone, to show the statement, we only need to look at preference profiles which have either three voters and five alternatives, or four voters and four alternatives.
We achieve this by using a computer program, employing the CPLEX solver, that exhaustively searches through all possible profiles with either three voters and five alternatives, or four voters and four alternatives, and provides a \dManhattan[2] embedding for each of them.
Since the CPLEX solver accepts constraints on the absolute value of the difference between any two variables, our computer program is a simple one-to-one translation of the \dManhattan constraints given in \cref{def:embeddings}.
Following a similar approach to the work of Chen and Grottke~\cite{ChenGrottke2021}, we applied some optimizations to significantly shrink the search space over all preference profiles: we only consider preference profiles with distinct preference orders and we assume that one of the preference orders is~$1\succ 2 \succ \ldots \succ m$.
Hence, the number of relevant preference profiles with $n$ voters and $m$ alternatives is $\binom{m!-1}{n-1}$.
For $(n,m)=(3,5)$ and $(n,m)=(4,4)$, we need to iterate through $7021$ and $1771$ preference profiles, respectively.
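The two counts follow directly from the formula $\binom{m!-1}{n-1}$; as a quick sanity check, here is a standalone sketch (ours, not the actual CPLEX-based enumeration program):

```python
from math import comb, factorial

def num_relevant_profiles(n, m):
    # one preference order is fixed to 1 > 2 > ... > m; the remaining
    # n-1 distinct orders are chosen from the other m! - 1 orders
    return comb(factorial(m) - 1, n - 1)

print(num_relevant_profiles(3, 5))  # 7021
print(num_relevant_profiles(4, 4))  # 1771
```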
We implemented a program which, for each of these produced profiles, uses the IBM ILOG CPLEX optimization software package to check and find a \dManhattan[2] embedding.
The verification is done by going through each voter's preference order and checking the condition given in \cref{def:embeddings}.
All generated profiles, together with their \dManhattan[2] embeddings and the distances used for the verification, are available online at \url{https://owncloud.tuwien.ac.at/index.php/s/s6t1vymDOx4EfU9}.
\end{proof}
\section{Conclusion}\label{sec:conclude}
Motivated by the question of how restricted \dManhattan{} preferences are, we initiated the study of the smallest dimension sufficient for a preference profile to be \dManhattan.
This work opens up several future research directions.
One future research direction concerns the characterization of \dManhattan{} preference profiles through forbidden subprofiles.
Such work has been done for other restricted preference domains, such as single-peakedness~\cite{BH11}, single-crossingness~\cite{BCW12} and 1-Euclideanness~\cite{ChePruWoe2017}.
Another research direction would be to look into the computational complexity of determining whether a given preference profile is \dManhattan.
To this end, let us mention that \dEuclid[1] preference profiles cannot be characterized via finitely many finite forbidden subprofiles~\cite{ChePruWoe2017}, but they can be recognized in polynomial time~\cite{DoiFal1994,Knoblauch2010,ElkFal2014}.
For $d\ge 2$, recognizing \dEuclid\ preference profiles becomes notoriously hard (beyond NP)~\cite{Peters2017}.
This is in stark contrast to recognizing \dManhattan\ preferences, which is in~NP. %
Finally, it would be interesting to see whether assuming \dManhattan{} preferences can lower the complexity of some computationally hard social choice problems.
\bibliographystyle{abbrvnat}
| {
"timestamp": "2022-01-25T02:41:04",
"yymm": "2201",
"arxiv_id": "2201.09691",
"language": "en",
"url": "https://arxiv.org/abs/2201.09691",
"abstract": "A preference profile with $m$ alternatives and $n$ voters is $d$-Manhattan (resp. $d$-Euclidean) if both the alternatives and the voters can be placed into the $d$-dimensional space such that between each pair of alternatives, every voter prefers the one which has a shorter Manhattan (resp. Euclidean) distance to the voter. Following Bogomolnaia and Laslier [Journal of Mathematical Economics, 2007] and Chen and Grottke [Social Choice and Welfare, 2021] who look at $d$-Euclidean preference profiles, we study which preference profiles are $d$-Manhattan depending on the values $m$ and $n$.First, we show that each preference profile with $m$ alternatives and $n$ voters is $d$-Manhattan whenever $d$ $\\geq$ min($n$, $m$-$1$). Second, for $d = 2$, we show that the smallest non $d$-Manhattan preference profile has either three voters and six alternatives, or four voters and five alternatives, or five voters and four alternatives. This is more complex than the case with $d$-Euclidean preferences (see [Bogomolnaia and Laslier, 2007] and [Bulteau and Chen, 2020].",
"subjects": "Multiagent Systems (cs.MA); Theoretical Economics (econ.TH); Combinatorics (math.CO)",
"title": "Multidimensional Manhattan Preferences"
} |
https://arxiv.org/abs/1011.2991 | Parity conjectures for elliptic curves over global fields of positive characteristic | We prove the $p$-parity conjecture for elliptic curves over global fields of characteristic $p > 3$. We also present partial results on the $\ell$-parity conjecture for primes $\ell \neq p$. | \section{Introduction}
Let $K$ be a global field and let $E$ be an elliptic curve defined over $K$. The conjecture of Birch and Swinnerton-Dyer asserts that the rank of the Mordell-Weil group $E(K)$ is equal to the order of vanishing of the Hasse-Weil $L$-function $L(E/K,s)$ at $s=1$. A weaker question is to ask whether these two integers have at least the same parity. This seems more approachable because the parity of the order of vanishing on the analytic side can be expressed in more algebraic terms through local root numbers -- at least when the $L$-function is known to have an analytic continuation. Let $w(E/K)\in \{\pm 1\}$ be the global root number of $E$ over $K$, which is equal to the product of local root numbers $\prod_v w(E/K_v)$ as $v$ runs over all places of $K$. The local terms $w(E/K_v)$ were defined by Deligne without reference to the $L$-function; see~\cite{rohrlich_wd} for the definition.
So we can formulate the following conjecture.
\begin{fullparityconj}
We have $(-1)^{\rank E(K)} = w(E/K)$.
\end{fullparityconj}
This conjecture is unproven except for specific cases. We will focus on the following easier question. Let $\Sha(E/K)$ be the Tate-Shafarevich group defined as the kernel of the localisation maps
$H^1(K,E) \to \prod_{v} H^1(K_v,E)$ in Galois cohomology.
For a prime $\ell$, the $\ell$-primary Selmer group $\Sel_{\ell^{\infty}}(E/K)$ fits into an exact sequence
\begin{equation}\label{mwselsha_eq}
0 \to E(K)\otimes {}^{\mathbb{Q}_{\ell}}\!/\!{}_{\mathbb{Z}_{\ell}} \to \Sel_{\ell^{\infty}}(E/K) \to \Sha(E/K)[\ell^{\infty}] \to 0.
\end{equation}
If the characteristic of $K$ is prime to $\ell$, we may define it as the preimage of $\Sha(E/K)[\ell^{\infty}]$ under the map $H^1(K,E[\ell^\infty])\to H^1(K,E)[\ell^{\infty}]$. If the characteristic is equal to $\ell$, then one should use flat instead of Galois cohomology; see section~\ref{selmer_sec} for definitions. The theorem of Mordell-Weil shows that the dual of $\Sel_{\ell^{\infty}}(E/K)$ is a finitely generated $\mathbb{Z}_{\ell}$-module, whose rank we will denote by $r_{\ell}$. In particular, \eqref{mwselsha_eq} is a short exact sequence of $\mathbb{Z}_{\ell}$-modules of finite cotype for any prime $\ell$. Since it is conjectured that $\rank E(K) = r_{\ell}$, we can make the following conjecture, which seems more approachable as it links two algebraically defined terms.
\begin{ellparityconj}
Let $\ell$ be a prime.
We have $(-1)^{r_{\ell}} = w(E/K)$.
\end{ellparityconj}
These conjectures have attracted much attention in recent years and the $\ell$-parity conjecture is now known in many cases, in particular when the ground field is $K=\mathbb{Q}$ by work of the Dokchitser brothers~\cite{dok_isogeny, dok_nonab, dok_reg, dok_modsquares, dok_09}, Kim~\cite{kim}, Mazur and Rubin~\cite{mazur_rubin}, Nekov\'{a}\v{r}~\cite{nekovar2,nekovar3,nekovar4}, Coates, Fukaya, Kato, and Sujatha~\cite{cfks} and others.
In this article, we restrict our attention to the case of positive characteristic. So, we suppose from now on that $K$ is a global field of characteristic $p>3$ with constant field $\mathbb{F}_q$.
The main result of this article is the following theorem.
\begin{thm}\label{pparity_thm}
The $p$-parity conjecture holds for any elliptic curve $E$ over a global field $K$ of characteristic $p>3$.
\end{thm}
The proof consists of two steps: first a local calculation linking the local root number to local data on the Frobenius isogeny on $E$, carried out in section~\ref{local_sec}; followed by the use of global duality in section~\ref{duality_sec}. Luckily, we do not have to treat all individual cases of bad reduction for the local considerations, since we are able to use a theorem of Ulmer~\cite{ulmer_gnv} to reduce to the semistable case. This is done in section~\ref{red_ss_sec}.
The proof closely follows the arguments in~\cite{dok_isogeny} and Fisher's appendix of~\cite{dok_nonab}. We repeat it here in detail, both for completeness and to make the reader aware of a few subtleties; for instance, note that the Frobenius isogeny $F$ and its dual $V$ do not play interchangeable roles.
The hardest part concerns the global duality. The relevant dualities that we need for our conclusion have never appeared in the literature and we are forced to prove them. We think that it is worthwhile to include in section~\ref{pparity_sec} a general formula for the parity of the corank of the $p$-primary Selmer group and a local formula for the root number in section~\ref{rootno_sec}.
Global dualities first appeared in Cassels' work~\cite{cassels} on the invariance under isogenies of the conjecture of Birch and Swinnerton-Dyer. It should be noted that one could use our duality statements to prove this invariance in the case of characteristic $p>0$, but there is no need to do so. In fact, it is known by~\cite{kato_trihan} that the conjecture of Birch and Swinnerton-Dyer is equivalent to the finiteness of the Tate-Shafarevich group -- and it is clear that the latter question is invariant under isogeny.
The second main result of this paper is a proof of the $\ell$-parity conjecture when $\ell\neq p$ in some cases. Write $\mu_\ell$ for the $\ell$-th roots of unity.
\begin{thm}\label{ellparity_thm}
Let $E/K$ be an elliptic curve and let $\ell> 2$ be a prime different from $p$.
Furthermore assume that
\begin{enumerate}
\item $a=[K(\mu_{\ell}):K]$ is even, and
\item the analytic rank of $E$ does not grow by more than $1$ in the constant quadratic extension $K\cdot \mathbb{F}_{q^2}/K$.
\end{enumerate}
Then the $\ell$-parity conjecture holds for $E/K$.
\end{thm}
The proof will be presented in section~\ref{ell_sec}. Its main ingredients are the non-vanishing results of Ulmer in~\cite{ulmer_gnv} and the techniques for proving the parity conjectures from representation theoretic considerations as explained in~\cite{dok_modsquares, dok_sd}.
Although the conditions will be fulfilled for many curves, the methods in this paper fail to give a complete proof of the $\ell$-parity conjecture. See the remarks at the beginning of section~\ref{ell_sec} and the more detailed section~\ref{fail_sec} for an explanation of why we are not able to extend the proof any further.
\subsection{Notations}
The constant field of the global field $K$ of characteristic $p>3$ is the finite field $\mathbb{F}_q$ for some power $q$ of $p$.
Let $C$ be a smooth, geometrically connected, projective curve over $\mathbb{F}_q$ with function field $K$. Let $E/K$ be an elliptic curve, which we will assume to be non-isotrivial (i.e. the $j$-invariant of $E$ is transcendental over $\mathbb{F}_q$). We fix a Weierstrass equation
\begin{equation}\label{w_eq}
E \colon\quad y^2\, =\, x^3\, +\, A\,x\, +\, B
\end{equation}
with $A$ and $B$ in $K$ and the corresponding invariant differential $\omega = \tfrac{dx}{2y}$. By $F\colon E \to E'$ we denote the Frobenius isogeny of degree $p$ whose dual $V\colon E'\to E$ is the Verschiebung.
If $f\colon A\to B$ is a homomorphism of abelian groups, we write
\begin{equation*}
z(f) = \frac{\# \coker(f)}{\# \ker(f)}
\end{equation*}
provided that the kernel and the cokernel of $f$ are finite. For any abelian group (or group scheme) $A$ and integer $m$, we denote by $A[m]$ the $m$-torsion part of it; and, for any prime $\ell$, the $\ell$-primary part will be denoted by $A[\ell^{\infty}]$.
The Pontryagin dual of an abelian group $A$ is written $A^{\vee}$. If the Pontryagin dual of $A$ is a finitely generated $\mathbb{Z}_{\ell}$-module for some prime $\ell$, then we write $\divi(A)$ for its maximal divisible subgroup and let $A_{\divi}$ denote the quotient of $A$ by $\divi(A)$.
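As a quick illustration of the quantity $z(f)$ (our example, not from the main text): for a homomorphism $f\colon A\to B$ of \emph{finite} abelian groups, $\#\coker(f) = \#B/\#\mathrm{im}(f)$ and $\#\ker(f) = \#A/\#\mathrm{im}(f)$, so $z(f) = \#B/\#A$ independently of $f$; in particular $z(f)=1$ for any endomorphism of a finite group. The quantity only becomes interesting for maps of infinite groups such as $V\colon E'(K)\to E(K)$ considered in section~\ref{local_sec}. A minimal sketch for multiplication by $m$ on $\mathbb{Z}/n\mathbb{Z}$:

```python
from math import gcd

def z_mult(m, n):
    # z(f) = #coker(f) / #ker(f) for f: x -> m*x on Z/nZ
    kernel = sum(1 for x in range(n) if (m * x) % n == 0)
    image = {(m * x) % n for x in range(n)}
    cokernel = n // len(image)
    return cokernel / kernel

# kernel and cokernel both have gcd(m, n) elements, so z(f) = 1
print(z_mult(6, 8), gcd(6, 8))  # 1.0 2
```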
\section{Reduction to the semistable case}\label{red_ss_sec}
Before we start, we should mention that the conjecture of Birch and Swinnerton-Dyer is known for isotrivial curves $E$ by the work of Milne~\cite{milne_isotrivial}. So for the rest of the paper we will assume that $E$ is not isotrivial, as otherwise the parity conjectures are known. In particular, it follows from this assumption that $E/K$ is ordinary. The parity conjecture is also known in the following cases:
\begin{prop}\label{an_rk0&1}
Let $A/K$ be an abelian variety over a function field of characteristic $p>0$ and let $\ell$ be a prime ($\ell=p$ is allowed). The analytic rank of $A/K$ is always greater than or equal to the $\ell$-corank of the Selmer group. If the analytic rank of $A/K$ is zero, then the conjecture of Birch and Swinnerton-Dyer holds. If the analytic rank is $1$, then it coincides with the $\mathbb{Z}_\ell$-corank of the $\ell$-primary Selmer group.
\end{prop}
Note that if we restricted ourselves to elliptic curves and to the case $\ell \neq p$, then this result could already be deduced from the work of Artin and Tate~\cite{tate}.
\begin{proof} By~3.2 in~\cite{kato_trihan}, the Hasse-Weil $L$-function of $A/K$ can be expressed as an alternating product of characteristic polynomials of some operators $\phi^i_\ell$ acting on a finite dimensional $\mathbb{Q}_\ell$-vector space $H^i_{\mathbb{Q}_\ell}$, with $i=0,1,2$. Then by~3.5.1 in~\cite{kato_trihan}, the order at $s=1$ of the Hasse-Weil $L$-function can be interpreted as the multiplicity of the eigenvalue 1 for
the operator $\phi^1_\ell$ on $H^1_{\mathbb{Q}_\ell}$. Following the notations of~3.5 in~\cite{kato_trihan}, let $I_{3,\ell}$ denote the part of $H^1_{\mathbb{Q}_\ell}$ on which the operator $\id-\phi^1_\ell$ acts nilpotently and let $I_{2,\ell}$ denote the kernel of $\id-\phi^1_\ell$, such that we have the inclusions:
$$I_{2,\ell}\subset I_{3,\ell}\subset H^1_{\mathbb{Q}_\ell}.$$
Since, by~3.5.1 in~\cite{kato_trihan}, the operator $\id-\phi^i_\ell$ is an isomorphism for $i=0,2$, it follows that the analytic rank of $A/K$ is equal to the dimension of $I_{3,\ell}$. On the other hand, it follows from~3.5.5 and~3.5.6 in~\cite{kato_trihan} that the $\ell$-corank of the Selmer group of $A/K$ is the dimension of $I_{2,\ell}$, so we deduce that the analytic rank of $A/K$ is always greater than or equal to the $\ell$-corank of the Selmer group of $A/K$. If the analytic rank of $A/K$ is trivial, so is the dimension of $I_{3,\ell}$. This implies that the dimension of $I_{2,\ell}$ is zero and, by~3.5.6 in~\cite{kato_trihan}, we conclude that the Mordell-Weil group is also of rank zero. We then conclude the proof of the assertion thanks to the main result~1.8 of~\cite{kato_trihan}. If the analytic rank of $A/K$ is one, then $\phi^1_\ell$ acts as the identity on $I_{3,\ell}$, and therefore $I_{2,\ell}=I_{3,\ell}$ and the second assertion immediately follows.
\end{proof}
The following proposition will be used at several places to reduce the conjecture to easier situations.
\begin{prop}\label{red_prop}
Let $E/K$ be a non-isotrivial curve and $L/K$ a separable extension. Let $\ell$ be a prime. Assume one of the following three
conditions:
\begin{enumerate}
\item\label{odd_cond} $\ell\neq p$ and the extension $L/K$ is a Galois extension of odd degree.
\item\label{zero_cond} The analytic rank of $E$ does not grow in $L/K$.
\item\label{one_cond} $\ell\neq p$ and the analytic rank of $E$ does not grow by more than $1$ in $L/K$.
\end{enumerate}
Then the $\ell$-parity conjecture for $E/K$ holds if and only if the $\ell$-parity conjecture for $E/L$ is known.
\end{prop}
\begin{proof}
If condition~(\ref{odd_cond}) holds then the conclusion follows directly from Theorem~1.3 in~\cite{dok_sd}. Note already here that the complete paper~\cite{dok_sd} and its proofs hold in our situation as long as $\ell\neq p$.
Suppose now as in condition~(\ref{zero_cond}) that the analytic rank does not grow in $L/K$. Denote by $A/K$ the Weil restriction of $E$ under $L/K$ and by $B/K$ the quotient of $A$ by the natural image of $E$ in it. Since
\begin{equation*}
L(E/L,s) = L(A/K,s) = L(E/K,s)\cdot L(B/K,s)
\end{equation*}
we see that the analytic rank of $B/K$ is zero and therefore, by Proposition~\ref{an_rk0&1}, the full Birch and Swinnerton-Dyer conjecture holds. In particular, the Mordell-Weil rank of $B/K$ is zero and its Selmer group is a finite group. Moreover, we have an exact sequence
\begin{equation}\label{sels_ses}
\Sel_{\ell^{\infty}}(E/K) \to \Sel_{\ell^{\infty}}(A/K) \to \Sel_{\ell^{\infty}}(B/K),
\end{equation}
and the kernel of the first map lies in $B(K)[\ell^{\infty}]$, which is a finite group. Hence we conclude that
$r_{\ell}$ is equal to the corank of $\Sel_{\ell^{\infty}}(A/K)$ and, by Proposition~3.1 in~\cite{mazur_rubin}, this is the same as the corank of $\Sel_{\ell^{\infty}}(E/L)$. So we are able to deduce the $\ell$-parity for $E/K$ from the $\ell$-parity for $E/L$.
Finally, suppose that $\ell\neq p$ and that the analytic rank grows exactly by $1$; so we are under condition~(\ref{one_cond}). Then we know by Proposition~\ref{an_rk0&1} that the corank of $\Sel_{\ell^{\infty}}(B/K)$ is at most $1$. We wish to exclude the possibility that it is $0$, so assume for the moment that $\Sel_{\ell^{\infty}}(B/K)$ is finite. But this means that $\Sha(B/K)[\ell^{\infty}]$ is finite and hence the full conjecture of Birch and Swinnerton-Dyer holds by~\cite{kato_trihan} again. So we reach a contradiction since we would have $0 = \rank B(K) = \ord_{s=1} L(B/K,s) = 1$. Hence we have shown that the corank of $\Sel_{\ell^{\infty}}(B/K)$ is $1$.
Note that the left-hand map in~\eqref{sels_ses} still has finite kernel. We will now show that the right-hand map has finite cokernel, too. Let $\Sigma$ be the finite set of places of $K$ of bad reduction for $E$. Write $G_{\Sigma}(K)$ for the Galois group of the maximal separable extension of $K$ which is unramified outside $\Sigma$. Note that from the
definition of the Selmer group, we find the following diagram with exact rows and columns
\begin{equation*}
\xymatrix@C-8pt{
0\ar[d] & 0\ar[d] & \\
\Sel_{\ell^{\infty}}(A/K)\ar[d]\ar[r] &
\Sel_{\ell^{\infty}}(B/K) \ar[d]& \\
H^1\bigl(G_{\Sigma}(L),A[\ell^{\infty}]\bigr)\ar[d]\ar[r] &
H^1\bigl(G_{\Sigma}(K),B[\ell^{\infty}]\bigr)\ar[d]\ar[r] &
H^2\bigl(G_{\Sigma}(K),E[\ell^{\infty}]\bigr)\ar[r]^{r} &
H^2\bigl(G_{\Sigma}(K),A[\ell^{\infty}]\bigr) \\
\bigoplus_{v\in \Sigma} H^1(K_v,A)[\ell^{\infty}] \ar[r] &
\bigoplus_{v \in \Sigma} H^1(K_v,B)[\ell^{\infty}]
}
\end{equation*}
We know that the bottom groups are finite as they are dual to $\varprojlim A(K_v)/\ell^k$ and $\varprojlim B(K_v)/\ell^k$ respectively. Hence we see from the snake lemma that we only have to prove that the kernel of $r$ is finite. Shapiro's lemma shows that $H^2(G_{\Sigma}(K), A[\ell^\infty])$ is isomorphic to $H^2(G_{\Sigma}(L),E[\ell^\infty])$ and hence the map $r$ is simply the restriction map. As its kernel will only get larger when increasing $L$, we may assume that $L/K$ is Galois. Then the kernel of the restriction is contained in the part of $H^2(G_{\Sigma}(K),E[\ell^{\infty}])$ that is killed by $[L:K]$. Hence it is finite, because $H^2(G_{\Sigma}(K),E[\ell^{\infty}])$ is a discrete abelian group with finite $\mathbb{Z}_{\ell}$-corank.
Therefore, we conclude again that the corank of the $\ell$-primary Selmer group increased by exactly $1$ in $L/K$.
\end{proof}
\begin{cor}\label{toss_cor}
If the $\ell$-parity conjecture holds for all semistable elliptic curves, then it holds for all elliptic curves.
\end{cor}
\begin{proof}
Theorem~11.1 in~\cite{ulmer_gnv} proves that there is a separable extension $L/K$ such that the reductions of $E$ become semistable and the analytic rank does not grow in $L/K$.
\end{proof}
The same argument also reduces the full parity conjecture to the semistable case.
\section{Local computations}\label{local_sec}
The following computations are purely local, and we change notation for this section. Let $K$ be a local field of characteristic $p>3$ with residue field $\mathbb{F}_q$. The ring of integers is written $\mathcal{O}$, the maximal ideal $\mathfrak{m}$, and the normalised valuation $v$. The elliptic curve $E/K$ is given by the equation~\eqref{w_eq}. By changing the equation, if necessary, we may suppose for this section that $A$ and $B$ are in $\mathcal{O}$.
Define $L$ to be the minimal extension of $K$ such that $E'(L)[p] \cong\cyclic{p}$, or equivalently that $E[F]$ is isomorphic to $\mu_p$ as a group scheme over $L$. There is a representation
\begin{equation*}
\rho\colon \Gal(L/K)\to \Aut(E'(L)[p]) \cong (\cyclic{p})^{\times}
\end{equation*}
which shows that $[L:K]$ divides $p-1$. Define $\bigl(\frac{-1}{L/K}\bigr)\in\{\pm 1\}$ to be the image of $-1$ under the composition of the reciprocity map and $\rho$
\begin{equation*}
K^{\times} \to \Gal(L/K)\to (\cyclic{p})^{\times}.
\end{equation*}
So $\bigl(\frac{-1}{L/K}\bigr) = +1$ if and only if $-1$ is a norm from $L^{\times}$ to $K^{\times}$.
We will also consider $z_{\sss V} = z\bigl(V\colon E'(K) \to E(K)\bigr)$, which is a certain power of $p$. Put
\begin{equation*}
\sigma = \sigma(E/K) = \begin{cases} +1 \quad&\text{ if $z_V$ is a square and} \\ -1 &\text{ otherwise.} \end{cases}
\end{equation*}
It is important to note that we cannot define $z\bigl(F\colon E(K) \to E'(K)\bigr)$ since its cokernel will never be finite.
Finally, as in the introduction, we let $w = w(E/K)$ be the local root number of $E$ over $K$, as defined by Deligne and well explained in~\cite{rohrlich_wd}.
The aim of this section is to show the following theorem.
\begin{thm}\label{local_thm}
Let $K$ be a local field of characteristic $p>3$.
For any non-isotrivial elliptic curve $E/K$ whose reduction is not additive and potentially good, we have $w(E/K) = \bigl(\frac{-1}{L/K}\bigr) \cdot \sigma(E/K)$.
\end{thm}
We will prove this theorem by treating each type of reduction separately. In the last section of this paper, we will prove this local theorem without the assumption on the reduction using global methods.
See Conjecture~5.3 in~\cite{dok_09} for the analogue in characteristic zero. In particular, the following computations show that the analogy should match places above $p$ in characteristic zero with supersingular places in characteristic $p$.
Recall the definition of the Hasse invariant $\alpha = A(E,\omega)$ associated to the given integral equation~\eqref{w_eq}.
Write $\mathcal{F}$ for the formal group of $E$ over $\mathcal{O}$, and similarly $\mathcal{F}'$ for the formal group for the isogenous curve $E'$ given by the
integral equation
\begin{equation*}
E'\colon \quad y'^2 \, = \, x'^3\, + \,A^p\, x'\,+\,B^p.
\end{equation*}
Choose $t=-\frac{x'}{y'}$ as the parameter of the formal group $\mathcal{F}'$.
Then the homomorphism $V_1$ of formal groups induced by the Verschiebung $V$ is of the form
\begin{equation*}
\xymatrix@R=3mm{
V_1 \colon \mathcal{F}'(\mathfrak{m})\ar[r] & \mathcal{F}(\mathfrak{m}) \\
t \ar@{|->}[r] & \alpha \cdot G(t) + H(t^p)
}
\end{equation*}
for some $G(t) = t+\cdots \in \mathcal{O}[\![t]\!]$ and $H(t) = u\cdot t +\cdots \in \mathcal{O}[\![t]\!]$ with $u$ in $\mathcal{O}^{\times}$. See section~12.4 in~\cite{katz_mazur} for other descriptions of the Hasse invariant $\alpha$.
We now begin the proof of Theorem~\ref{local_thm}. For the computation of the local root number $w$, we can simply refer to Proposition~19 in~\cite{rohrlich_wd}, where we find that $w=-1$ if the reduction is split multiplicative and $w=+1$ if it is good or non-split multiplicative.
\subsection{Good reduction}
\begin{prop}\label{good_prop}
Suppose $E/K$ has good reduction. Then $w=+1$. The quantities
$\sigma$ and $\bigl(\frac{-1}{L/K}\bigr)$ are $+1$ if and only if $q^{v(\alpha)}$ is a square.
In particular, if the reduction is ordinary then $\sigma=\bigl(\frac{-1}{L/K}\bigr)=+1$.
\end{prop}
\begin{proof}
We may suppose that the equation~\eqref{w_eq} is minimal, i.e. that it has good reduction. We then have the diagram
\begin{equation*}
\xymatrix@C=10mm{
0 \ar[r] & \mathcal{F}'(\mathfrak{m}) \ar[r] \ar[d]_{V_1} & E'(K) \ar[r] \ar[d]_{V} & \tilde{E'}(\mathbb{F}_q) \ar[r]\ar[d] & 0 \\
0 \ar[r] & \mathcal{F}(\mathfrak{m}) \ar[r] & E(K) \ar[r] & \tilde{E}(\mathbb{F}_q) \ar[r] & 0
}
\end{equation*}
where $\tilde{E}$ denotes the reduction of $E$. The isogenous curves $\tilde{E}$ and $\tilde{E'}$ over $\mathbb{F}_q$ have the same number of points, so the kernel and cokernel of the right-hand vertical map have the same size. Hence $z_{\sss V} = z(V_1)$.
For any $N\geqslant 1$,
\begin{equation*}
\frac{\mathcal{F}(\mathfrak{m}^N)}{\mathcal{F}(\mathfrak{m}^{N+1})} \cong \frac{\mathfrak{m}^N}{\mathfrak{m}^{N+1}} \cong \frac{\mathcal{F}'(\mathfrak{m}^N)}{\mathcal{F}'(\mathfrak{m}^{N+1})}
\end{equation*}
and so the same argument shows that $z_{\sss V} = z(V_1) = z\bigl(V_N\colon \mathcal{F}'(\mathfrak{m}^{N}) \to \mathcal{F}(\mathfrak{m}^{N}) \bigr)$.
We claim that if $N>v(\alpha)$ then $V_N$ maps $\mathcal{F}'(\mathfrak{m}^N)$ bijectively onto $\mathcal{F}(\mathfrak{m}^{N+v(\alpha)})$. If $t$ has valuation at least $N$, then the valuation of $\alpha\,t$ is smaller than the valuation of $u\cdot t^p$, since $v(\alpha)+v(t) < p\, v(t)$ whenever $v(t)\geqslant N > v(\alpha)$. Therefore $v(V_N(t)) = v(\alpha) + v(t)$. This shows that $V_N$ maps $\mathcal{F}'(\mathfrak{m}^N)$ injectively to $\mathcal{F}(\mathfrak{m}^{N+v(\alpha)})$. In particular, the kernel of $V_N$ is trivial.
Let $s$ have valuation $v(s) \geqslant N+v(\alpha)$. Put $t_0 = s/\alpha$. Then $t_0$ is close to a zero of $g(t) = V_N(t) - s$. Namely $g(t_0) = \alpha \,a\, t_0^2 +\cdots +u\, t_0^p+\cdots$ has valuation at least $2\,v(s)-v(\alpha) \geqslant 2\,N+v(\alpha) > 2\,v(\alpha)$, if we write $G(t) = t+a\,t^2+\cdots$ for some $a\in\mathcal{O}$. Since $g'(t_0) = \alpha + 2\,\alpha\, a \, t_0 + \cdots$ has valuation $v(\alpha)$, Hensel's lemma shows that there is a $t$ close to $t_0$ such that $g(t) = 0$, i.e. such that $V_N(t) = s$.
We conclude that the cokernel of $V_N$ is equal to the index of $\mathcal{F}(\mathfrak{m}^{N+v(\alpha)})$ in $\mathcal{F}(\mathfrak{m}^N)$. Hence $z_{\sss V} = z(V_N) = q^{v(\alpha)}$. In particular $z_{\sss V}=1$ if the reduction is ordinary, i.e. when $\alpha$ is a unit in $\mathcal{O}$.
Let $e_{\sss L/K}$ be the ramification index of $L/K$. If the reduction is good ordinary, then the inertia group acts trivially on $T_p E'$, which is a $\mathbb{Z}_p$-module of rank 1. Hence $L/K$ is unramified and we have immediately that $\bigl(\frac{-1}{L/K}\bigr)=+1$.
\begin{lem}
The parity of ${v(\alpha)}$ is equal to the parity of $\frac{p-1}{e_{\sss L/K}}$.
\end{lem}
\begin{proof}
If $E$ has good ordinary reduction, then ${v(\alpha)}=0$, $e_{L/K}=1$ and $p-1$ is even, so that the assertion is true. If $E$ has good supersingular reduction, then since $E'(L)[p]$ contains a non-trivial point $P=(x'_P,y'_P)$, but the reduction does not contain a point of order $p$, there exists a $t_P = -x'_P/y'_P$ in the maximal ideal $\mathfrak{m}_L$ of $L$ such that $V_1(t_P) = 0$. From $V_1 (t_P) = \alpha\,t_P +\cdots + u\, t_P^p +\cdots$, we see that the valuations of $\alpha\, t_P$ and $u\, t_P^p$ must be equal, so that the two terms can cancel. Hence $v_L(\alpha) = e_{\sss L/K} \cdot v(\alpha) = (p-1)\cdot v_L(t_P)$, where $v_L$ denotes the normalised valuation in $L$. So if $v_L(t_P)$ is odd, we have proved the assertion.
Assume that $v_L(t_P)$ is even. Then $v(\alpha)$ is also even and we have to show that $\frac{p-1}{e_{\sss L/K}}$ is even. The extension $L/K(x'_P)$ is generated by $t_P$, whose square belongs to $K(x'_P)$; so this extension is either unramified quadratic or trivial. If $L=K(x'_P)$, then $\Gal(L/K)$ acts on the set $\{x'_P\mid O\neq P\in E'(L)[p]\}$ and hence $[L:K]$ divides $\frac{p-1}{2}$, so $\frac{p-1}{[L:K]}$ is even. Otherwise, if $L$ is an unramified quadratic extension of $K(x'_P)$, then $e_{\sss L/K} = e_{K(x'_P)/K}$ and $p-1$ is divisible by $[L:K] = 2\, e_{\sss L/K}\, f_{K(x'_P)/K}$. So $\frac{p-1}{e_{\sss L/K}}$ is even.
\end{proof}
Now we can conclude the proof of Proposition~\ref{good_prop}. Lemma~12 in~\cite{dok_isogeny}, whose proof is valid even if the characteristic of $K$ is not zero, says that
$\bigl(\frac{-1}{L/K}\bigr)=+1$ if and only if $q$ is a square or if $\frac{p-1}{e_{L/K}}$ is even. The previous lemma suffices now to conclude.
\end{proof}
In the good supersingular case, $L/K$ may or may not be totally ramified. We illustrate this with two examples.
We take $p=5$, $w>0$ any integer, and the curve $E$ given by the minimal Weierstrass equation
\begin{equation*}
y^2\ = \ x^3 \ + \ T^w \cdot x \ + \ 1
\end{equation*}
over $\mathcal{O}=\mathbb{F}_5[\![T]\!]$. The Hasse invariant is $\alpha = 2\cdot T^w$. The reduction is good, but supersingular. The division polynomial $f_V$ associated to the isogeny $V$ can be computed to be equal to
\begin{equation*}
f_V(x) = 2\, T^w\,x^2+4\,T^{2w}\,x + (4+3\,T^{3w}+T^{6w})\,.
\end{equation*}
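The value of the Hasse invariant can be verified directly: one standard description of $\alpha$ for a curve $y^2=f(x)$ (cf.~the descriptions in section~12.4 of~\cite{katz_mazur}) is the coefficient of $x^{p-1}$ in $f(x)^{(p-1)/2}$. Here
\begin{equation*}
\bigl(x^3+T^w\, x+1\bigr)^{2} \;=\; x^6\,+\,2\,T^w\,x^4\,+\,2\,x^3\,+\,T^{2w}\,x^2\,+\,2\,T^w\,x\,+\,1,
\end{equation*}
and the coefficient of $x^{4}$ is indeed $\alpha = 2\cdot T^w$.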
First we take the case where $w=2\,m$ is even. Then we can make the substitution $X = T^m \cdot x$ to get
\begin{equation*}
f_V(x) = 2\, X^2+4\,T^{3m}\,X + (4+3\,T^{6m}+T^{12m})\,.
\end{equation*}
We see that $K(x_P)$ is a quadratic unramified extension of $K$. The quantity $\frac{p-1}{e_{\sss L/K}}$ will certainly be even.
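Indeed, reducing the displayed equation modulo $T$ gives
\begin{equation*}
2\,X^2+4\;\equiv\; 0, \qquad\text{that is}\qquad X^2\;\equiv\;3 \pmod{5},
\end{equation*}
and $3$ is not a square in $\mathbb{F}_5$ (the squares modulo $5$ are $0$, $1$ and $4$); so the reduction of $X$ generates the residue extension $\mathbb{F}_{25}/\mathbb{F}_5$, and by Hensel's lemma $K(x_P)/K$ is unramified of degree $2$.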
Now, take $w=2\,m-1$ to be odd with $m>1$. This time the substitution $X=T^m\cdot x$ gets us to
\begin{equation*}
T\cdot f_V(x) = 2\, X^2+4\,T^{3m-1}\,X + T\cdot (4+3\,T^{6m-3}+T^{12m-6})\,.
\end{equation*}
Therefore $K(x_P)/K$ will be a ramified extension of degree $2$. The valuation of $x_P$ in $K(x_P)$ is odd, so we have to make a further extension $L/K(x_P)$, again ramified of degree $2$, to obtain a $p$-torsion point in $E'(L)$. So $e_{\sss L/K} =4$ and $\frac{p-1}{e_{\sss L/K}}$ is odd.
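These last claims can be read off the Newton polygon of the displayed equation with respect to the valuation of $K$: the coefficient of $X^2$ is a unit, the constant term has valuation $1$, and the coefficient of $X$ has strictly larger valuation since $m>1$; so both roots satisfy $v(X_P)=\tfrac{1}{2}$, confirming that $K(x_P)/K$ is ramified of degree $2$. Moreover, writing $v_{K(x_P)}$ for the normalised valuation of $K(x_P)$,
\begin{equation*}
v_{K(x_P)}(x_P) \;=\; v_{K(x_P)}\bigl(T^{-m}\, X_P\bigr) \;=\; 1-2\,m,
\end{equation*}
which is odd, as used above.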
\subsection{Split multiplicative}
\begin{prop}\label{split_prop}
Suppose $E/K$ has split multiplicative reduction. Then $w(E/K) = -1$, $\bigl(\frac{-1}{L/K}\bigr) = +1$, and $\sigma(E/K) = -1$.
\end{prop}
\begin{proof}
Let $q_{\sss E}\in K^{\times}$ be the parameter of the Tate curve which is isomorphic to $E$ over $K$. Then the isogenous curve $E'$ has parameter ${q_{\sss E}}^p$ and the Verschiebung
\begin{equation*}
\xymatrix{V\colon \frac{ K^{\times} }{ ({q_{\sss E}}^p)^{\mathbb{Z}} } \ar[r] & \frac{ K^{\times} }{ (q_{\sss E})^{\mathbb{Z}} } }
\end{equation*}
is induced by the identity on $K^{\times}$. Hence $V$ has a kernel with $p$ elements and is surjective, so $z_{\sss V} = \frac{1}{p}$ and $\sigma = -1$.
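In detail: the identity on $K^{\times}$ induces an isomorphism
\begin{equation*}
\ker(V) \;\cong\; (q_{\sss E})^{\mathbb{Z}}\big/({q_{\sss E}}^{p})^{\mathbb{Z}} \;\cong\; \cyclic{p},
\end{equation*}
while every class in $K^{\times}/(q_{\sss E})^{\mathbb{Z}}$ clearly lifts to $K^{\times}/({q_{\sss E}}^{p})^{\mathbb{Z}}$; so the kernel has order $p$, the cokernel is trivial, and $z_{\sss V}=\frac{1}{p}$.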
Since $E'$ already has a $p$-torsion point over $K$, we have $L=K$ and $\bigl(\frac{-1}{L/K}\bigr)=+1$.
\end{proof}
\subsection{Non-split multiplicative}
\begin{prop}\label{nonsplit_prop}
Suppose $E/K$ has non-split multiplicative reduction. Then $$w(E/K) =\bigl(\frac{-1}{L/K}\bigr)=\sigma(E/K) = +1.$$
\end{prop}
\begin{proof}
There is a quadratic extension $K'$ over which $E$ has split multiplicative reduction. So either $L=K$ or $L=K'$. Let $E^{\dagger}$ be the quadratic twist of $E$ by $K'/K$. Up to $2$-torsion groups, we have $E(K')= E(K)\oplus E^{\dagger}(K)$. Since $E^{\dagger}$ has split multiplicative reduction over $K$, there is a $p$-torsion point in $E^{\dagger}(K)$. So $L=K'$.
From the previous subsection, we know that the values of $z_{\sss V}$ for $E/L$ and for $E^{\dagger}/K$ are both equal to $\frac{1}{p}$. So by the above decomposition of $E(K')$ up to $2$-torsion, we get that $z_{\sss V}$ for $E/K$ is $1$. So $\sigma = +1$. Since $L/K$ is unramified, $ \bigl(\frac{-1}{L/K}\bigr) = +1$.
\end{proof}
\subsection{Additive potentially multiplicative}
\begin{prop}
Suppose $E/K$ has additive, potentially multiplicative reduction.
Let $\chi\colon K^{\times} \to \{\pm 1\}$ be the character associated to the quadratic ramified extension over which $E$ has split multiplicative reduction.
Then $w(E/K) =\bigl(\frac{-1}{L/K}\bigr)=\chi(-1)$ and $\sigma(E/K) = +1$.
\end{prop}
\begin{proof}
The root number is computed by Rohrlich~\cite[19.ii]{rohrlich_wd}. The proof that $\sigma = +1$ is the same as in the non-split multiplicative case. The formula $ \bigl(\frac{-1}{L/K}\bigr) = \chi(-1)$ is clear, too.
\end{proof}
\section{Selmer groups}\label{selmer_sec}
We return to the global situation; we wish to give a proper definition, using flat cohomology, of the Selmer groups involved in $p$-descent in characteristic $p$.
From now on, $K$ is again a global field with field of constants $\mathbb{F}_q$ and $E/K$ is a semistable, non-isotrivial elliptic curve.
We denote by $\mathcal{E}$ the N\'eron model of $E/K$ over $C$ and $\mathcal{E}^0$ its connected component containing the identity.
Let $U$ be a dense open subset of $C$ such that $\mathcal{E}$ has good reduction on $U$. The group schemes $\mathcal{E}$ and $\mathcal{E}^0$ coincide over $U$ and we define for any $v\not\in U$ the group of connected components $\Phi_v$ in the fibre above $v$. So we have the following short exact sequence:
\begin{equation*}
0 \to \mathcal{E}^0 \to \mathcal{E} \to \bigoplus_{v\not \in U} \Phi_v \to 0.
\end{equation*}
Following 2.2 in~\cite{kato_trihan}, the discrete $p^{\infty}$-Selmer group of $E/K$ is defined as
\begin{equation*}
\Sel_{p^\infty}(E/K) := \ker \Bigl[ H_{\fl}^1 (K,E[p^\infty])\rightarrow
\prod_{v} H_{\fl}^1 (K_v,E) \Bigr]
\end{equation*}
where $E[p^{\infty}]$ is the $p$-divisible group associated to $E$
and $H_{\fl}$ stands for flat cohomology.
It is known that $\Sel_{p^{\infty}}(E/K)$ fits into the following exact sequence:
\begin{equation}\label{ses1_seq}
0 \to E(K)\otimes {}^{\mathbb{Q}_p}\! /\!{}_{\mathbb{Z}_p} \to \Sel_{p^\infty}(E/K) \to \Sha(E/K)[p^\infty] \to 0 .
\end{equation}
This follows from the fact that the Tate-Shafarevich group can also be computed using flat cohomology as
the kernel of $H_{\fl}^1 (K,E)\to \prod_v H_{\fl}^1 (K_v,E)$ since for the elliptic curve $E$ over $K$ or $K_v$, the \'etale and flat cohomology groups coincide (see Theorem~3.9 in~\cite{Milne80}).
Note also that the dual of $\Sel_{p^{\infty}}(E/K)$ is a finitely generated $\mathbb{Z}_p$-module by the theorem of Mordell-Weil and the finiteness of $\Sha(E/K)[p]$ (see e.g.~\cite{ulmer_pdescent}).
Let $\phi\colon E\to E'$ be an isogeny of elliptic curves. The map $\phi$ induces a short exact sequence of sheaves in the flat topology
\begin{equation}\label{ses2_seq}
\xymatrix@1{
0 \ar[r] & E[\phi] \ar[r] & E \ar[r]^{\phi} & E' \ar[r] & 0.
}
\end{equation}
The Selmer group $\Sel_\phi(E/K)$ is defined to be the set of elements in $H_{\fl}^1(K,E[\phi])$ whose restrictions to $H_{\fl}^1(K_v,E[\phi])$ lie in the image of the connecting homomorphism $E(K_v)\to H_{\fl}^1(K_v,E[\phi])$ for all $v$. If $U$ is any open subset of $C$ where $E$ has good reduction, we can also describe $\Sel_\phi(E/K)$ as the kernel of the composed map
\begin{equation*}
\xymatrix@1{
H_{\fl}^1(U,\mathcal{E}[\phi]) \ar[r] &
\prod_{v\not\in U} H_{\fl}^1(K_v,E[\phi]) \ar[r] &
\prod_{v\not\in U} H_{\fl}^1(K_v,E)[\phi].
}
\end{equation*}
Passing to cohomology, the short exact sequence~\eqref{ses2_seq} induces the short exact sequence of finite groups
\begin{equation}\label{selsha_seq}
\xymatrix@1{
0 \ar[r] & E'(K)/\phi(E(K)) \ar[r] & \Sel_\phi (E/K) \ar[r] & \Sha(E/K)[\phi] \ar[r] & 0,
}
\end{equation}
where $\Sha(E/K)[\phi]$ is the kernel of the induced map $\phi_{\text{\russesmall{Sh}}} \colon \Sha(E/K) \to \Sha(E'/K)$.
\section{Global Euler characteristics}\label{duality_sec}
We prove in the next three sections a few results on global dualities for the $p$-primary part of the Tate-Shafarevich group in characteristic $p$ using flat cohomology. The main reference will be~\cite{milne}, but we wish to point the reader to related results in~\cite{GT} and~\cite{Go}.
Note that most results in these three sections do not need any condition on the reduction. Also, except where mentioned, the characteristic $p$ can be any prime.
We give a short review of the Oort-Tate classification of finite flat group schemes of order $p$ (see~\cite{oort_tate} for details). Let $X$ be a scheme of characteristic $p>0$. The data of a finite flat group scheme $N$ of order $p$ over $X$ is equivalent to the data of an invertible sheaf $\mathcal{L}$, a section $a\in H^0(X,\mathcal{L}^{\otimes(p-1)})$ and a section $b\in H^0(X,\mathcal{L}^{\otimes(1-p)})$ such that $a\otimes b=0$. We use the notation $N_{\mathcal{L},a,b}$. If $N$ is of height one, then $a=0$. The Cartier dual of $N_{\mathcal{L},a,b}$ is $N_{\mathcal{L}^{-1},b,a}$.
For a scheme $S$ of characteristic $p>0$ and a finite flat group scheme $N/S$ we define the Euler characteristic of $N/S$ as
\begin{equation*}
\chi(S,N) := \prod_i\Bigl(\#H_{\fl}^i(S,N)\Bigr)^{(-1)^i}
\end{equation*}
whenever the groups $H_{\fl}^i(S,N)$ are finite.
\begin{lem}\label{globalchi_lem}
Assume that the prime $p$ is odd. Let $N$ be a finite flat group scheme of order $p$ over $C$. Assume that the Cartier dual $N^D$ of $N$ is of height 1. Then the groups $H_{\fl}^i(C,N)$ are finite and $\chi(C,N)$ is a square in $\mathbb{Q}^{\times}$.
\end{lem}
\begin{proof}
The cohomology is finite by Lemma~III.8.9 in~\cite{milne} since $N$ is finite flat.
If $N^D$ has height $1$ then $N$ corresponds to a group scheme $N_{\mathcal{L},a,b}$ with $b=0$.
Now we follow the explanation after problem~III.8.10 in~\cite{milne}.
Since $N$ is the dual of a group scheme of height 1, there is a sequence
\begin{equation*}
\xymatrix@R-4ex@C+1ex{0\ar[r]& N \ar[r] & \mathcal{L}\ar[r] & \mathcal{L}^{\otimes p}\ar[r] &0,\\
& & z \ar @{|->} [r] & z^{\otimes p}-a\otimes z & }
\end{equation*}
which is exact by the definition of $N_{\mathcal{L},a,b}$. See also Example~III.5.4 in~\cite{milne}.
Hence we have that $\chi(C,N) = q^{\chi(\mathcal{L})-\chi(\mathcal{L}^{\otimes p})}$. Using Riemann-Roch, we get
\begin{align*}
\chi(\mathcal{L}) &= \deg(\mathcal{L}) + 1 - g\\
\chi(\mathcal{L}^{\otimes p}) &= p\cdot \deg(\mathcal{L}) + 1 - g
\end{align*}
and therefore we find the formula
\begin{equation*}
\chi(C,N) = q^{(1-p)\deg \mathcal{L}}.
\end{equation*}
So the lemma follows from the fact that $p$ is odd.
\end{proof}
For any place $v$ in $K$, we denote by $\vert\cdot\vert_v$ the normalised absolute value of the completion $K_v$. In particular the absolute value of a uniformiser is $q_v^{-1}$ where $q_v$ denotes the number of elements in the residue field.
\begin{lem}\label{localchi_lem}
Let $N=N_{\mathcal{L},a,b}$ be a finite flat group scheme of order $p>2$ over the ring $O_v$ of integers in $K_v$. Assume that $N_{K_v}$ is \'etale. Then the Euler characteristic of $N$ is well-defined and we have
\begin{equation*}
\chi(O_v,N)\equiv\vert a\vert_v^{-1}
\end{equation*}
modulo squares in $\mathbb{Q}^{\times}$.
\end{lem}
\begin{proof}
The invertible sheaf $\mathcal{L}$ is $c^{-1}\cdot O_v$ for some $c\in K_v^{\times}$. Then by~III.0.9.(c) in~\cite{milne}, we have $N_{\mathcal{L},a,b}\cong N_{O_v,ac^{p-1},bc^{1-p}}$. Using Remark~III.7.6 and the Example after Theorem~III.1.19 on page~244 of~\cite{milne}, we have $\chi(O_v,N)=\vert a\cdot c^{p-1}\vert_v^{-1}\equiv \vert a\vert_v^{-1}$ modulo squares in $\mathbb{Q}^{\times}$.
\end{proof}
For a scheme $S$ of characteristic $p>0$ and a scheme $X/S$, we denote by $X'$ the fibre product $X\times_S S$ where the map $S\to S$ in this product is the absolute Frobenius of $S$. By the universal property of the fibre product, we have a map $F\colon X\to X'$ called the relative Frobenius. If moreover $X/S$ is a flat group scheme, then there exists a map $V\colon X'\to X$ called the Verschiebung such that $V\circ F$ and $F\circ V$ induce $[p]$, the multiplication by $p$ (see~\cite{SGA3}, VII). In particular, $F\colon E\to E'$ is a $p$-isogeny of elliptic curves which extends to the N\'eron models of $E$ and $E'$ by its universal property. Since the N\'eron model of $E'$ is $\mathcal{E}'$, this map is just the relative Frobenius $F\colon \mathcal{E}\to \mathcal{E}'$.
Over the field $K$, or more generally over any open subset $U$ in $C$ where $E$ has good reduction, Proposition~2.1 of~\cite{ulmer_pdescent} shows that $E[F]=N_{\underline{\omega}^{-1},0,\alpha}$ and $E[V]=N_{\underline{\omega},\alpha,0}$, where $\alpha$ is the Hasse invariant of $E$ and where $\underline{\omega}$ is the invertible sheaf $\pi_{*} \Omega^1_{E/K}$ with $\pi\colon E \to \Spec(K)$ being the structure morphism.
\begin{prop}\label{global_chi_prop}
Let $E/K$ be a non-isotrivial elliptic curve. There exists a dense open subset $U$ of $C$ such that $E$ has everywhere good ordinary reduction and $\chi(U,\mathcal{E}[F])$ is a well-defined square in $\mathbb{Q}^{\times}$.
\end{prop}
\begin{proof}
By the Oort-Tate classification, $E[F]/K$ is isomorphic to $N_{\underline{\omega}^{-1},0,\alpha}$. By Proposition B.4 in~\cite{milne} and its proof, it extends to a finite flat group scheme ${\mathcal N}/C$ of order $p$ of the form $N_{\mathcal{O}_C(W),0,\alpha}$ for some Weil divisor $W\leqslant 0$ such that $(\alpha)\geqslant W$. Let $U_1$ be a dense open subset of $C$ over which $\mathcal{E}$ has good reduction. As in the proof of Theorem~III.8.2 in~\cite{milne} on page~291, we replace $U_1$ by a smaller open set $U_2$, over which $\mathcal{N}\vert_{U_2}\simeq\mathcal{E}[F]\vert_{U_2}$. Finally, we set $U$ equal to the open subset of $U_2$ where we have removed all places $v$ for which $E/K$ has good supersingular reduction.
Write $\mathcal{N}^{D}$ for the Cartier dual of $\mathcal{N}$.
By Proposition~III.0.4.(c) and Remark~III.0.6.(b) in~\cite{milne}, we have a long exact sequence
\begin{equation*}
\xymatrix@1{\cdots \ar[r]& H_{\fl,c}^i\bigl(U,\mathcal{N}^{D} \bigr)\ar[r] &
H_{\fl}^i\bigl(C, \mathcal{N}^D\bigr)\ar[r] &
\prod_{v\not\in U} H_{\fl}^i\bigl(O_v, \mathcal{N}^D\bigr)\ar[r]&
\cdots.}
\end{equation*}
Global duality (Theorem~III.8.2 in~\cite{milne}) shows that
\begin{equation*}
H_{\fl,c}^i\bigl(U,\mathcal{N}^{D} \bigr) = H_{\fl,c}^i\bigl(U, \mathcal{E}'[V]\bigr) \quad\text{ is dual to}\quad
H_{\fl}^i\bigl(U, \mathcal{E}[F]\bigr).
\end{equation*}
By the multiplicative property of the Euler characteristic, we get
\begin{equation*}
\chi(U,\mathcal{E}[F])=\frac{\chi(C,\mathcal{N}^D)}{\prod_{v\not\in U}\chi(O_v,\mathcal{N}^D)}.
\end{equation*}
Since $\mathcal{N}^{D} = N_{\mathcal{O}_C(-W), \alpha, 0}$ is finite flat of order $p$ over $C$,
Lemma~\ref{globalchi_lem} shows that $\chi(C,\mathcal{N}^D)$ is a square.
Furthermore, Lemma~\ref{localchi_lem} yields
\begin{equation*}
\chi(U,\mathcal{E}[F]) \equiv \prod_{v\not\in U}\chi(O_v,\mathcal{N}^D)^{-1} \equiv \prod_{v\not\in U}\vert\alpha\vert_v \pmod{\square}.
\end{equation*}
Since the places of $U$ are places of good ordinary reduction where $\vert\alpha\vert_v$ is a square by Proposition~\ref{good_prop}, we have, using the product formula,
\begin{equation*}
\chi(U,\mathcal{E}[F]) \equiv \prod_{v\not\in U}\vert\alpha\vert_v^{-1}\equiv\prod_{v}\vert\alpha\vert_v^{-1}=1\pmod{\square}.
\qedhere
\end{equation*}
\end{proof}
\section{The Cassels-Tate pairing}\label{cassels-tate}
Recall that there exists a pairing (see the proof of Theorem II.5.6 in~\cite{milne}) called the Cassels-Tate pairing
\begin{equation*}
\langle\!\langle\cdot,\cdot\rangle\!\rangle \colon \Sha(E/K)\times \Sha(E/K)\to {}^{\mathbb{Q}}\!/\!{}_{\mathbb{Z}}.
\end{equation*}
As claimed in Proposition~III.9.5 in~\cite{milne}, its left and right kernels are the divisible part $\divi (\Sha(E/K))$ of the Tate-Shafarevich group. We call the reader's attention to the fact that the initial proof in~\cite{milne} is wrong, as noticed by D. Harari and T. Szamuely in~\cite{HS}. The first correct published proofs that the Cassels-Tate pairing of~\cite{milne}, Theorem II.5.6(a), annihilates only the maximal divisible subgroups appear in~\cite{HS} (for prime-to-$p$ primary components) and in~\cite{Go} (for $p$-primary components) when the 1-motive considered in these references is taken to be $(0\to E)$. This pairing is alternating and hence the order of $\Sha(E/K)_{\divi}$ is a square. This last fact is not always true for general abelian varieties.
\begin{lem}\label{adjoint_lem}
Let $\phi\colon E\to E'$ be an isogeny of elliptic curves and $\hat \phi$ the dual isogeny. Then the induced map $\phi_{\text{\russesmall{Sh}}}\colon \Sha(E/K)\to\Sha(E'/K)$ and $\hat{\phi}_{\text{\russesmall{Sh}}}\colon\Sha(E'/K)\to\Sha(E/K)$ are adjoints with respect to the Cassels-Tate pairings, i.e.
$$\langle\!\langle\phi_{\text{\russesmall{Sh}}}(\eta),\xi\rangle\!\rangle_{E'}=\langle\!\langle \eta,\hat\phi_{\text{\russesmall{Sh}}}(\xi)\rangle\!\rangle_{E}$$
for every $\eta\in \Sha(E/K)$ and $\xi\in \Sha(E'/K)$.
\end{lem}
\begin{proof}
The proof is analogous to the proof in the number field case (see Remark~I.6.10 in~\cite{milne} or \S 2 of~\cite{cassels}) and is deduced from the functoriality of the local pairings in flat cohomology.
\end{proof}
\begin{prop}\label{orthongal_prop}
The orthogonal complement of $\Sha(E'/K)[V]$ in $\Sha(E'/K)[p^{\infty}]$ under the Cassels-Tate pairing
$$\Sha(E'/K)[p^\infty]\times \Sha(E'/K)[p^\infty]\to {}^{\mathbb{Q}}\!/\!{}_{\mathbb{Z}}$$ is the image of $F\colon \Sha(E/K)[p^{\infty}]\to \Sha(E'/K)[p^{\infty}]$.
\end{prop}
\begin{proof} Note that the proposition follows immediately from the previous lemma if the pairing is perfect. Otherwise, by Lemma~\ref{adjoint_lem}, it is immediate that $F\bigl(\Sha(E/K)[p^{\infty}]\bigr)$ is contained in the orthogonal complement of $\Sha(E'/K)[V]$. Let $\xi$ be an element in $\Sha(E'/K)[p^{\infty}]$ orthogonal to the kernel of $V$.
Let $D'$ denote the maximal divisible subgroup of $\Sha(E'/K)[p^\infty]$ and $D$ the maximal divisible subgroup of $\Sha(E/K)[p^\infty]$. Then the Cassels-Tate pairings induce perfect pairings on the quotients $\Sha(E'/K)[p^{\infty}]/D'$ and $\Sha(E/K)[p^{\infty}]/D$. Since $V$ and $F$ map divisible elements to divisible elements, they induce maps between these quotients.
\begin{equation*}
\xymatrix{%
0 \ar[r] & D' \ar[r] \ar@<1ex>[d]^{V}& \Sha(E'/K)[p^{\infty}] \ar[r]\ar@<1ex>[d]^{V} & \Sha(E'/K)[p^{\infty}]/D' \ar[r]\ar@<1ex>[d]^{V} & 0 \\
0 \ar[r] & D \ar[r] \ar@<1ex>[u]^{F}& \Sha(E/K)[p^{\infty}] \ar[r]\ar@<1ex>[u]^{F} & \Sha(E/K)[p^{\infty}]/D \ar[r]\ar@<1ex>[u]^{F} & 0 }
\end{equation*}
The element $\xi + D'$ in the quotient $\Sha(E'/K)[p^{\infty}]/D'$ is orthogonal to the kernel of $V$. Since the pairing is perfect there, we have an element $\eta$ in $\Sha(E/K)[p^{\infty}]$ such that $F$ maps $\eta+D$ to $\xi+D'$ in the quotients. Hence $F(\eta) = \xi + \delta$ for some $\delta \in D'$.
But since the map $F\circ V = [p]$ is surjective on $D'$, the map $F$ maps $D$ onto $D'$. Hence $\delta$ is in the image of $F$ and so is $\xi$.
\end{proof}
The short exact sequence of finite flat group schemes
\begin{equation}\label{ses5_seq}
0 \to E[F]\to E[p] \to E'[V] \to 0,
\end{equation}
induces, when passing to flat cohomology, the top row of the following exact commutative diagram.
\begin{equation*}
\xymatrix@R=4mm{
\dots\ar[r] & E'(K)[V]\ar[r] & H_{\fl}^1(K,E[F]) \ar[r]\ar[d] & H_{\fl}^1(K,E[p])\ar[r]^F \ar[d] & H_{\fl}^1(K,E'[V]) \ar[d] \\
& 0 \ar[r] & \prod_v H_{\fl}^1(K_v,E)[F] \ar[r] &\prod_v H_{\fl}^1(K_v,E)[p] \ar[r]^F &\prod_v H_{\fl}^1(K_v,E')[V]
}
\end{equation*}
From the above diagram, we obtain an exact sequence
\begin{equation}\label{ses6_seq}
\xymatrix@R=3mm{
0 \ar[r] & E(K)[F] \ar[r] & E(K)[p] \ar[r]^{F} & E'(K)[V] \ar[r] & &\\
\ar[r] & \Sel_F(E/K)\ar[r] & \Sel_p(E/K) \ar[r]^{F} & \Sel_V(E'/K) \ar[r] & T \ar[r] & 0,
}
\end{equation}
where $T$ is the cokernel of the map induced by $F$ on the Selmer groups.
Parallel to this, we have a long exact (kernel-cokernel) sequence
\begin{equation}\label{kcok_seq}
\xymatrix@R=3mm{
0 \ar[r] & E(K)[F] \ar[r] & E(K)[p] \ar[r]^{F} & E'(K)[V] \ar[r] &\\
\ar[r] & E'(K)/F(E(K)) \ar[r]^V & E(K)/p E(K) \ar[r] & E(K)/V(E'(K)) \ar[r] & 0,
}
\end{equation}
We may quotient the exact sequence~\eqref{ses6_seq} by the exact sequence~\eqref{kcok_seq}, using the Kummer maps from the short exact sequence~\eqref{selsha_seq}. We obtain an alternative description of $T$ by an exact sequence.
\begin{equation}\label{ses7_seq}
\xymatrix@1{
0 \ar[r] & \Sha(E/K)[F]\ar[r] & \Sha(E/K)[p] \ar[r]^{F} & \Sha(E'/K)[V] \ar[r] & T\ar[r] & 0.
}
\end{equation}
\begin{cor}\label{T_square_cor}
Let $E/K$ be an elliptic curve. The order of $T$ is a square. In other words,
\begin{equation*}
\#\Sha(E/K)[F]\cdot \#\Sha(E'/K)[V] \equiv \#\Sha(E/K)[p] \pmod{\square}.
\end{equation*}
\end{cor}
\begin{proof}
By restriction the Cassels-Tate pairing induces a pairing on $\Sha(E'/K)[V]$ with values in $\cyclic{p}$. By the previous proposition the right and left kernels of this pairing are equal to the intersection of $F\bigl(\Sha(E/K)[p^{\infty}]\bigr)$ and $\Sha(E'/K)[V]$, which is equal to $F\bigl(\Sha(E/K)[p]\bigr)$. So the pairing induces a non-degenerate alternating pairing on $T$; hence the order of $T$ is a square.
\end{proof}
\begin{lem}\label{rp_lem}
We have
\begin{equation*}
p^{r_p} \equiv \frac{\#E(K)[F] \cdot \#\Sel_V(E'/K)}{\#E'(K)[V] \cdot \#\Sel_F(E/K)} \pmod{\square}.
\end{equation*}
\end{lem}
Of course, we have $\# E(K)[F] = 1$, but we include it here so as to make the formula resemble the symmetric formula in the classical case, as in Fisher's appendix to~\cite{dok_nonab}.
\begin{proof}
By the short exact sequence~\eqref{ses1_seq}, $r_p=r+ \corank_{\mathbb{Z}_p}\Sha(E/K)[p^\infty]$, where $r=\rank_{\mathbb{Z}}(E(K))$ and $r_p$ is the $\mathbb{Z}_p$-rank of the dual of $\Sel_{p^\infty}(E/K)$.
Now, since $\Sha(E/K)[p^\infty]$ is cofinitely generated as a $\mathbb{Z}_p$-module, we have
\begin{equation*}
\dim_{\mathbb{F}_p} \Sha(E/K)[p] = \corank_{\mathbb{Z}_p} \bigl(\divi \Sha(E/K) [p^\infty]\bigr) + \dim_{\mathbb{F}_p} \bigl( \Sha(E/K)_{\divi} [p] \bigr).
\end{equation*}
As noticed at the beginning of section~\ref{cassels-tate}, $\#\Sha(E/K)_{\divi}$ (and therefore $\#\Sha(E/K)_{\divi}[p]$) is a square. We deduce that
\begin{equation*}
r_p \equiv r+\dim_{\mathbb{F}_p} \Sha(E/K)[p] \pmod{2}.
\end{equation*}
On the other hand, the short exact sequence~\eqref{selsha_seq} applied to $[p]$ implies that
\begin{equation*}
\dim_{\mathbb{F}_p} \Sel_p(E/K) = r + \dim_{\mathbb{F}_p} E(K)[p] + \dim_{\mathbb{F}_p}\Sha(E/K)[p],
\end{equation*}
since $E(K)/pE(K) \simeq E(K)[p]\oplus (\cyclic{p})^r$. So we get the formula
\begin{equation*}
r_p\equiv \dim_{\mathbb{F}_p}E(K)[p] +\dim_{\mathbb{F}_p}\Sel_p(E/K) \pmod{2}.
\end{equation*}
The assertion results then from the exact sequence~\eqref{ses6_seq} and Corollary~\ref{T_square_cor}.
\end{proof}
\section{Global duality}
\begin{prop}\label{global_duality_prop}
Let $E/K$ be a non-isotrivial elliptic curve and let $U$ be an open subset of $C$ over which $E$ has good reduction.
Then we have
\begin{equation*}
\frac{\#E(K)[F] \cdot \#\Sel_V(E'/K)}{\#E'(K)[V] \cdot \#\Sel_F(E/K)} = \frac{1}{\chi(U,\mathcal{E}[F])}\cdot \prod_{v\not \in U} z(V_{E'(K_v)}).
\end{equation*}
\end{prop}
We insist once more that the roles of $F$ and $V$ here are not interchangeable, e.g. the terms $z(F_{E(K_v)})$ in the product would not be finite.
\begin{proof}
The long exact sequence for flat cohomology deduced from the definition of $H_{\fl,c}^i$ in Proposition~III.0.4.(a) in~\cite{milne} reads
\begin{equation*}
\xymatrix@1{\cdots \ar[r]& H_{\fl,c}^i(U,\cdot)\ar[r]&H_{\fl}^i(U,\cdot)\ar[r]& \bigoplus_{v\not\in U} H_{\fl}^i(K_v,\cdot)\ar[r]& H_{\fl,c}^{i+1}(U,\cdot)\ar[r]&\cdots}
\end{equation*}
The global duality in Theorem~III.8.2 of~\cite{milne} implies that the group $H_{\fl,c}^i(U, \mathcal{E}[F])$ is dual to $H_{\fl}^{3-i}(U,\mathcal{E}'[V])$ since $\mathcal{E}[F]$ is finite and flat over $U$. We find the following long exact sequence
\begin{equation*}
\xymatrix@R-1ex{%
& H_{\fl}^1\bigl(U,\mathcal{E}[F]\bigr) \ar[r]& \bigoplus_{v \not \in U} H_{\fl}^1\bigl(K_v, E[F]\bigr) \ar[r]& H_{\fl}^1\bigl(U,\mathcal{E}'[V]\bigr)^{\vee} \ar[r]& \\
\ar[r] & H_{\fl}^2\bigl(U,\mathcal{E}[F]\bigr) \ar[r]& \bigoplus_{v \not \in U} H_{\fl}^2\bigl(K_v, E[F]\bigr) \ar[r]& H_{\fl}^0\bigl(U,\mathcal{E}'[V]\bigr)^{\vee} \ar[r]& 0
}
\end{equation*}
Local duality as in Theorem~III.6.10 in~\cite{milne} shows that $H_{\fl}^2(K_v,E[F])$ is dual to $E'(K_v)[V]$.
Our aim is to replace the local term $H_{\fl}^1(K_v,E[F])$ by the cokernel of the map from $E'(K_v)/F(E(K_v))$. By local duality (Theorem~III.7.8 in~\cite{milne} and the functoriality of biextensions), this term is dual to $H_{\fl}^1(K_v,E')[V]$. So we will quotient the term $H_{\fl}^1\bigl(U,\mathcal{E}'[V]\bigr)^{\vee}$ by the image of the map on the right hand side in the following commutative diagram.
\begin{equation}\label{useless_eq}
\xymatrix@R=9mm{
& \bigoplus_{v \not\in U} {}^{E'(K_v)}\!/\!{}_{F(E(K_v))} \ar[r]^{\cong} \ar[d] & \bigoplus_{v \not \in U} \bigl(H_{\fl}^1(K_v,E')[V]\bigr)^{\vee} \ar[d] & \\
H_{\fl}^1(U,\mathcal{E}[F]) \ar[r] & \bigoplus_{v \not \in U} H_{\fl}^1\bigl(K_v, E[F]\bigr) \ar[r]& H_{\fl}^1\bigl(U,\mathcal{E}'[V]\bigr)^{\vee} \ar[r] & \dots
}
\end{equation}
Because of the exact Kummer sequence
\begin{equation*}
\xymatrix@1{ 0\ar[r] & {}^{E'(K_v)}\!/\!{}_{F(E(K_v))} \ar[r] & H_{\fl}^1(K_v,E[F]) \ar[r] & H_{\fl}^1(K_v,E)[F] \ar[r] & 0, }
\end{equation*}
the cokernel of the map on the left in~\eqref{useless_eq} is $\bigoplus_{v\not\in U}H_{\fl}^1(K_v,E)[F]$, which, again by local duality, is dual to $\bigoplus_{v \not \in U} E(K_v)/V(E'(K_v))$. By definition the cokernel of the map on the right in~\eqref{useless_eq} is the dual of the Selmer group $\Sel_V(E'/K)$.
Putting all these results together, we obtain the long exact sequence
\begin{equation*}
\xymatrix@R-2ex{%
0\ar[r] & \Sel_F(E/K) \ar[r] & H_{\fl}^1\bigl(U,\mathcal{E}[F]\bigr) \ar[r]& \bigoplus_{v \not \in U} \Bigl({}^{E(K_v)}\!/\!{}_{V(E'(K_v))}\Bigr)^{\vee} \ar[r] & \\
\ar[r] & \Sel_V(E'/K)^{\vee} \ar[r] & H_{\fl}^2\bigl(U,\mathcal{E}[F]\bigr) \ar[r]& \bigoplus_{v \not \in U} \Bigl(E'(K_v)[V]\Bigr)^{\vee} \ar[r] & \\
\ar[r] & \Bigl(E(K)[V]\Bigr)^{\vee} \ar[r]& 0
}
\end{equation*}
Since all other terms in the sequence are finite, the groups $H_{\fl}^i(U,\mathcal{E}[F])$ are finite, too. The alternating product of their orders gives the result.
\end{proof}
If $E/K_v$ is a non-isotrivial, semistable elliptic curve then one can show that the group scheme $\mathcal{E}[F]$ is finite and flat. So the result of Proposition~\ref{global_duality_prop} can be extended to any open subset $U$ such that $E$ has semistable reduction over all places in $U$. In particular $U$ can be taken to be equal to $C$, if $E/K$ is semistable.
\section{The proof of the $p$-parity}\label{pparity_sec}
We now pass to the proof of Theorem~\ref{pparity_thm} and return to our running assumptions: $K$ has characteristic $p>3$ and $E/K$ is not isotrivial. We first present the main results coming from global duality and from the local computations, and then put them together; both statements are interesting in their own right.
\begin{thm}\label{parity_corank_thm}
Let $E/K$ be a non-isotrivial elliptic curve. We have
$$
p^{r_p} \equiv \prod_v z\Bigl( V_{E'(K_v)}\colon E'(K_v)\to E(K_v)\Bigr) \pmod{\square}
$$
where the product runs over all places $v$ in $K$.
\end{thm}
\begin{proof}
Proposition~\ref{global_chi_prop} provides us with an open subset $U$ in $C$ such that $E$ has good ordinary reduction at all places in $U$.
It follows from Lemma~\ref{rp_lem}, Proposition~\ref{global_duality_prop}, and Proposition~\ref{global_chi_prop} that
\begin{align*}
p^{r_p} &\equiv \frac{\#E(K)[F] \cdot \#\Sel_V(E'/K)}{\#E'(K)[V] \cdot \#\Sel_F(E/K)} \pmod{\square}\\
&= \frac{1}{\chi(U,\mathcal{E}[F])}\cdot \prod_{v\not \in U} z\Bigl( V_{E'(K_v)}\colon E'(K_v)\to E(K_v)\Bigr) \\
& \equiv \prod_{v\not \in U} z\Bigl( V_{E'(K_v)}\colon E'(K_v)\to E(K_v)\Bigr) \pmod{\square}
\end{align*}
Finally from Proposition~\ref{good_prop}, we know that $z(V_{E'(K_v)})$ is a square for all places $v \in U$ as $E$ has good ordinary reduction there.
\end{proof}
Next, we collect from section~\ref{local_sec} the following result.
\begin{prop}\label{rootno_prop}
Let $E/K$ be a semistable elliptic curve. Then the root number is $w(E/K) = (-1)^s$ where $s$ is the number of split
multiplicative primes for $E/K$. Furthermore $s$ has the same parity as the $p$-adic valuation of
\begin{equation*}
\prod_{v} \frac{ c_v(E/K) }{c_v(E'/K)}
\end{equation*}
where $c_v$ are Tamagawa numbers.
\end{prop}
\begin{proof}
From Theorem~\ref{local_thm} we deduce that
\begin{equation*}
w(E/K) = \prod_v w(E/K_v) = \prod_v \sigma(E/K_v) \cdot \Bigl(\frac{-1}{L_w/K_v}\Bigr) = \prod_v \sigma(E/K_v)
\end{equation*}
by the product formula for the norm symbols $\prod_v \bigl(\frac{-1}{L_w/K_v}\bigr)$, where $L$ is the extension of $K$ over which $E[F]=\mu_p$ and $w$ is any place above $v$. Using Propositions~\ref{good_prop}, \ref{split_prop}, and~\ref{nonsplit_prop}, we see that $\sigma(E/K_v)$ is $-1$ if and only if $E$ has split multiplicative reduction at $v$.
If the reduction at $v$ is split multiplicative, then we have $c_v(E'/K) = p\cdot c_v(E/K)$ since the parameters in the Tate parametrisation satisfy $q_{E'} = {q_E}^p$. If the reduction is non-split multiplicative
then the Tamagawa numbers can only be 1 or 2.
\end{proof}
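The parity bookkeeping in the proposition can be illustrated by a toy computation. The reduction types, Tamagawa numbers, and the value $p=5$ below are made-up sample data; the only point is that each split multiplicative place contributes one factor of $p^{-1}$ to the product $\prod_v c_v(E/K)/c_v(E'/K)$, so the $p$-adic valuation of the product has the same parity as the number $s$ of split multiplicative places.

```python
from fractions import Fraction

p = 5  # hypothetical characteristic, p > 3

def tamagawa_ratio(reduction, c_v):
    # c_v(E)/c_v(E'): at a split multiplicative place c_v(E') = p * c_v(E),
    # since the Tate parameters satisfy q_{E'} = q_E^p; at all other places
    # the ratio has p-adic valuation 0 (non-split Tamagawa numbers are 1 or 2).
    if reduction == "split":
        return Fraction(c_v, p * c_v)
    return Fraction(1)

def v_p(x):
    # p-adic valuation of a nonzero rational
    num, den, val = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        val += 1
    while den % p == 0:
        den //= p
        val -= 1
    return val

# made-up local data: (reduction type, c_v(E))
places = [("split", 3), ("nonsplit", 2), ("split", 1), ("good", 1)]
s = sum(1 for t, _ in places if t == "split")
product = Fraction(1)
for t, c in places:
    product *= tamagawa_ratio(t, c)

root_number = (-1) ** s
print(root_number, v_p(product) % 2, s % 2)
```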
Note that we could have used the known modularity and the Atkin-Lehner operators to prove this statement without the computations in section~\ref{local_sec}, provided $E$ has at least one place of split multiplicative reduction.
\begin{proof}[Proof of Theorem~\ref{pparity_thm}]
First, we use Corollary~\ref{toss_cor} which allows us to assume that $E/K$ is semistable.
Then by the previous Proposition~\ref{rootno_prop} we have $w(E/K) = \prod_v \sigma(E/K_v)$, and Theorem~\ref{parity_corank_thm} implies
that $(-1)^{r_p}= \prod_v \sigma(E/K_v)$.
\end{proof}
\section{Local Root Number Formula}\label{rootno_sec}
We now prove Theorem~\ref{local_thm} without any hypothesis on the reduction. We use, without repeating the definitions, the notations from section~\ref{local_sec}.
\begin{thm}\label{local2_thm}
Let $K$ be a local field of characteristic $p>3$.
For any non-isotrivial elliptic curve $E/K$, we have $w(E/K) = \bigl(\frac{-1}{L/K}\bigr) \cdot \sigma(E/K)$.
\end{thm}
As mentioned in section~\ref{local_sec}, this answers positively a conjecture in~\cite{dok_09} for the isogeny $V$. The theorem could certainly be shown by local computations only, but these would be very tedious for additive potentially supersingular reduction. We avoid them here by using a global argument, similar in spirit to the proof of Theorem~5.7 in~\cite{dok_09}.
\begin{proof}
By Theorem~\ref{local_thm}, we may assume that $E/K$ has additive, potentially good reduction. Let $n\geqslant 12$.
We can find a minimal integral equation $y^2 = x^3 +A\,x + B$ for $E/K$. Choose a global field $\mathcal{K}$ of characteristic $p$ with a place $v_0$ such that $\mathcal{K}_{v_0} = K$.
Choose another place $v_1 \neq v_0$ in $\mathcal{K}$ and choose a large even integer $N$ such that $(N-1)\cdot \deg(v_1) > 2g - 1 + n\deg(v_0)$, where $g$ is the genus of $\mathcal{K}$. For a divisor $D$ on the projective smooth curve $\mathcal{C}$ corresponding to $\mathcal{K}$, we write $L(D)$ for the Riemann-Roch space
$H^0(\mathcal{C},\mathcal{O}_{\mathcal{C}}(D))$.
The inequality on $N$ guarantees that the dimensions of the Riemann-Roch spaces in the exact sequence
\begin{equation*}
\xymatrix@1{ 0 \ar[r]& L\bigl(N (v_1) - n (v_0) \bigr) \ar[r]& L\bigl(N (v_1) \bigr) \ar[r] & \mathcal{O}_{v_0}/\mathfrak{m}_{v_0}^n \ar[r]& 0 }
\end{equation*}
are positive -- e.g. the smaller space has dimension $N\deg(v_1) - n\deg(v_0) + 1 - g > g + \deg(v_1)$. Choose an element $a$ in $L(N (v_1))$ which maps to $A+\mathfrak{m}_{v_0}^n$ on the right. We can even impose that it does not lie in $L((N-1)(v_1))$, since this is a subspace of codimension $\deg(v_1)>0$ in $L(N(v_1))$. Then $a$ has a single pole of order $N$ at $v_1$ and satisfies $v_0(A-a) \geq n$.
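The numerology behind this choice can be sanity-checked. Once $(N-1)\deg(v_1) > 2g-1+n\deg(v_0)$, the divisor $N(v_1)-n(v_0)$ has degree $>2g-2$, so Riemann-Roch computes $\dim L(D)$ exactly, and the smaller space has dimension exceeding $g+\deg(v_1)$; indeed that last inequality is literally a restatement of the imposed one. The sample values of $g$, $n$, and the degrees below are made up for illustration.

```python
def riemann_roch_dim(deg_D, g):
    # dim L(D) = deg(D) + 1 - g whenever deg(D) > 2g - 2
    assert deg_D > 2 * g - 2
    return deg_D + 1 - g

g, n = 2, 12
deg_v0, deg_v1 = 1, 1
N = 20  # even, with (N-1)*deg_v1 = 19 > 2g - 1 + n*deg_v0 = 15

assert N % 2 == 0
assert (N - 1) * deg_v1 > 2 * g - 1 + n * deg_v0

# dimension of L(N(v1) - n(v0)), the smaller Riemann-Roch space
dim_small = riemann_roch_dim(N * deg_v1 - n * deg_v0, g)
print(dim_small)
assert dim_small > g + deg_v1
```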
Next, we use that $N$ is even and we choose an element $b$ in $\mathcal{K}$ such that $v_1(b) = -\tfrac{3}{2}N$, and $v_0(B-b)\geq n$. We can also impose that
$v_1(4 a^3 + 27 b^2) > -3N$. Furthermore we impose that the zeroes of $b$ are distinct from the zeroes of $a$; this excludes at worst $N\deg(v_1)$ subspaces of codimension $1$ in $L\bigl(\tfrac{3N}{2}(v_1)\bigr)$.
Let $\mathcal{E}/\mathcal{K}$ be the elliptic curve given by $y^2=x^3 +a\,x +b$.
By the congruences on $a$ and $b$ at $v_0$ and the continuity of Tate's algorithm, the reduction of $\mathcal{E}$ at $v_0$ is additive, potentially good.
At the place $v_1$ the valuation of the $j$-invariant $j(\mathcal{E}) = 2^8\cdot 3^3\cdot a^3 /(4 \,a^3+27\, b^2)$ will be negative by our choices. Hence the reduction is either multiplicative or potentially multiplicative. For any other place $v$ with $v(a)>0$, we have $v(4 \,a^3+27\, b^2) = 0$ and hence the curve has good reduction at $v$, and for any other place $v$ with $v(a) = 0$, either the reduction is good or $v(j(\mathcal{E}))< 0$.
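The valuation count at $v_1$ behind the last claim is short enough to spell out numerically. With $v_1(a)=-N$ we get $v_1(a^3)=-3N$, the constant $2^8\cdot 3^3$ is a unit since $p>3$, and the imposed condition $v_1(4a^3+27b^2)>-3N$ forces $v_1(j(\mathcal{E}))<0$. The values of $N$ and of $v_1(4a^3+27b^2)$ below are sample data, not from the paper.

```python
N = 20
v1_a = -N
v1_numerator = 3 * v1_a        # v1(a^3) = -3N; the constant factor has valuation 0
v1_denominator = -3 * N + 1    # any value > -3N is allowed by the choice of b
v1_j = v1_numerator - v1_denominator
print(v1_j)
assert v1_denominator > -3 * N
assert v1_j < 0
```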
Therefore we have constructed an elliptic curve $\mathcal{E}/\mathcal{K}$ with a single place $v_0$ of additive, potentially good reduction.
So for all other places Theorem~\ref{local_thm} applies.
Let $\mathcal{L}$ be the extension of $\mathcal{K}$ such that $E[F]\cong\mu_p$ over $\mathcal{L}$.
Now we use the results of Theorem~\ref{parity_corank_thm} and the proven $p$-parity in Theorem~\ref{pparity_thm} to compute
\begin{align*}
w(\mathcal{E}/K) &= \frac{w(\mathcal{E}/\mathcal{K})}{\prod_{v\neq v_0} w(\mathcal{E}/\mathcal{K}_{v})}
= \frac{(-1)^{r_p}}{
\prod_{v\neq v_0}\bigl(\frac{-1}{\mathcal{L}_w/\mathcal{K}_v}\bigr)
\sigma(\mathcal{E}/\mathcal{K}_{v})} \\
& = \frac{\bigl(\frac{-1}{\mathcal{L}_{w_0}/K}\bigr)}{
\prod_{\textnormal{all }v} \bigl(\frac{-1}{\mathcal{L}_w/\mathcal{K}_v}\bigr)}
\cdot\frac{\prod_{\textnormal{all }v} \sigma(\mathcal{E}/\mathcal{K}_{v})}{
\prod_{v\neq v_0}\sigma(\mathcal{E}/\mathcal{K}_{v})}
= \Bigl(\frac{-1}{\mathcal{L}_{w_0}/K}\Bigr)\cdot\sigma(\mathcal{E}/K).
\end{align*}
Once again we used the product formula for the norm symbol. Now we argue that the three terms are all continuous in the topology of $K$ as $a$ and $b$ vary. For the local root number this is exactly the statement of Proposition~4.2 in~\cite{helfgott}. The field $\mathcal{L}_{w_0}$ and the order of the kernel $E'(K)[V]$ of the Verschiebung are locally constant because they are defined by continuously varying separable polynomials. Finally, the order of the cokernel of $V\colon E'(K)\to E(K)$ is locally constant: the group of connected components, the reduction, and the induced map $V$ on them do not change, and on the formal group the cokernel is determined by the valuation of the Hasse invariant (which again is a polynomial in $a$ and $b$), by the argument in the proof of Proposition~\ref{good_prop}.
Since all three terms take values $\pm 1$, they will, for large enough $n$, be equal to the corresponding values for $E$.
\end{proof}
\section{On the $\ell$-parity conjecture}\label{ell_sec}
We switch now to investigating the $\ell$-parity conjecture when $\ell\neq p$. As mentioned in the introduction, we have only a partial result in this case.
Recall that $p>3$ is a prime and that $K$ is a global field of characteristic $p$ with constant field $\mathbb{F}_q$.
For any $n$ and any extension $L$ of $K$, we denote by $L_n$ the field $L\cdot \mathbb{F}_{q^n}$.
The aim of this section is to show the following partial result (given as Theorem~\ref{ellparity_thm} in the introduction).
\begin{thm}\label{ell_thm}
Let $E/K$ be an elliptic curve and let $\ell$ be an odd prime different from $p$.
Furthermore assume that
\begin{enumerate}
\item\label{a_as} $a=[K(\mu_{\ell}):K]$ is even and that
\item\label{b_as} the analytic rank of $E$ does not grow by more than $1$ in the extension $K_2/K$.
\end{enumerate}
Then the $\ell$-parity conjecture holds for $E/K$.
\end{thm}
Note first that we believe that~(\ref{a_as}) holds for roughly two thirds of the primes $\ell$, since $a$ is also the order of $q$ in the group $(\cyclic{\ell})^{\times}$. The second condition should hold quite often, as it says that the analytic rank of the twist of $E$ by the unramified quadratic character is at most $1$. So, for instance, if $b$ as in Lemma~11.3.1 in~\cite{ulmer_gnv} is odd, then~(\ref{b_as}) holds.
See the next section for a discussion about why we were not able to extend the proof here to any situation without these hypotheses.
\begin{proof}
Corollary~\ref{toss_cor} allows us to assume that $E$ is semistable, and we may assume that $E$ is not isotrivial, since for isotrivial curves even BSD is known.
First we use a non-vanishing result to produce from the analytic information a useful extension of $K$, which we will link to the algebraic side later.
We write $\mathfrak{n}$ for the conductor of $E/K$.
The degree of $\mathfrak{n}$ is linked to the degree of the polynomial $L(E/K,T)$ in $T=q^{-s}$ by the formula of Grothendieck-Ogg-Shafarevich (as used in formula~(5.1) of~\cite{ulmer_analogies}):
\begin{equation*}
\deg (\mathfrak{n}) = \deg \bigl(L(E/K,T)\bigr) - 2(2g_{K} -2).
\end{equation*}
We can factor the polynomial as
\begin{equation*}
L(E/K,T) = (1-qT)^r \cdot (1+qT)^{r'}\cdot \prod_{i} (1-\alpha_i T)(1-\bar\alpha_i T)
\end{equation*}
where the $\alpha_i$ are non-real complex numbers of absolute value $q$. By definition $r$ is the analytic rank of $E/K$, and $r+r'$ is the analytic rank of $E/K_2$, since the latter is the number of inverse zeroes $\alpha$ of $L(E/K,T)$ such that $\alpha^2 = q^2$. So we get
\begin{equation}\label{nran_eq}
\sum_{v \text{ bad}} \deg(v) = \deg(\mathfrak{n})\equiv \deg \bigl(L(E/K,T)\bigr) \equiv r+r' = \ord_{s=1} L(E/K_2,s) \pmod{2}.
\end{equation}
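The factorisation argument can be mirrored in a toy computation. The inverse zeroes of $L(E/K,T)$ are $q$ (with multiplicity $r$), $-q$ (multiplicity $r'$), and non-real conjugate pairs $\alpha_i,\bar\alpha_i$ with $|\alpha_i|=q$; counting the ones with $\alpha^2=q^2$ gives $r+r'$, and the degree of $L$ has the same parity since the non-real inverse zeroes come in pairs. The numbers below are made up.

```python
q, r, r_prime = 4, 1, 2
pairs = [complex(0, 4), complex(0, -4)]  # one non-real conjugate pair, |alpha| = q
inverse_zeroes = [q] * r + [-q] * r_prime + pairs

# analytic rank over K_2 = number of inverse zeroes with alpha^2 = q^2
rank_K2 = sum(1 for a in inverse_zeroes if abs(a * a - q * q) < 1e-9)
deg_L = len(inverse_zeroes)
print(rank_K2, deg_L)
assert rank_K2 == r + r_prime
assert deg_L % 2 == (r + r_prime) % 2
```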
We are now going to use Theorem~5.2 in~\cite{ulmer_gnv} to construct suitable extensions of $K$. The argument is very similar to the proof of Step~2 in~11.4.2 of~\cite{ulmer_gnv}. The following is a very special case of this very general and powerful theorem.
\begin{thm}[Ulmer]\label{ulmer_main_thm}
Let $K$ be a global field of characteristic $p>3$, let $S$ be a finite non-empty set of places in $K$, let $\ell\neq p$ be an odd prime, and let $E/K$ be a semistable elliptic curve. Assume that $a=[K(\mu_{\ell}):K]$ is even and suppose that the sum of the degrees of the bad places not belonging to $S$ is even. Then there exists an integer $n$ coprime to $a$ and an element $z\in K_n^{\times}$ such that the extension $K_n(\sqrt[\ell]{z})/K_n$ is totally ramified at all places above $S$ and unramified at all bad places not in $S$ and such that the analytic rank of $E$ does not grow in it.
\end{thm}
\begin{proof}
All notations and results in this proof refer to~\cite{ulmer_gnv}.
We use Theorem~5.2.(1) with $F=K$, $\alpha_n=q^n$, $d=\ell$, $S_r = S$ and $\rho$ the symplectically self-dual representation of weight $w=1$ attached to $E$ on the Tate module $V_{\ell}(E)$ as in section~11. We can choose the sets $S_s$ and $S_i$ arbitrarily as long as we make sure that $S$, $S_s$, and $S_i$ are disjoint.
The conditions (especially from his section~3.1) are satisfied.
Let $o$ be an orbit in $(\cyclic{\ell})^{\times}$ for the multiplication by $q$. Then $d_{o}=\ell$ and $a_{o} = a$. So by~5.2.(1) we can conclude the existence of $n$ and $z$ such that $L(\rho\otimes\sigma_{o,z},K_n,T)$ does not have $\alpha_n$ as an inverse root, unless we are in one of the exceptional cases (i) to (iv) in~5.1.1.1. Now, (iv) cannot hold because $\rho$ is not orthogonally self-dual, and (i) and (ii) are impossible because $d=\ell$ is odd. However, all the conditions in (iii) are satisfied apart from maybe condition~4.2.3.1. (In particular, we know that $-o=o$ because $a$ is even.)
We now have to show that the hypothesis on $S$ imposes that the condition~4.2.3.1 fails. Since $E$ is semistable, the local exponent of the conductor $\cond_v(\rho)$ is $1$. Let $v$ be a bad place in $S$ and $\chi_v$ be a totally ramified character of the decomposition group $D_v$ which has exact order $\ell$. Then the conductor $\cond_v(\rho\otimes\chi_v)=2$ again because $E$ has multiplicative reduction at $v$. So the first condition in~4.2.3.1 saying that this has constant parity as $\chi_v$ varies is always fulfilled. In order to make the condition~4.2.3.1 fail, we must have that
\begin{equation*}
\sum_{\text{bad }v \in S} \cond_v(\rho\otimes\chi_v)\deg(v) + \sum_{\text{bad } v \not\in S} \cond_v(\rho) \deg(v)
\end{equation*}
is even. That is exactly what the hypothesis in the theorem imposes.
\end{proof}
\begin{lem}\label{ellext_lem}
To prove Theorem~\ref{ell_thm},
we may assume that there exists a non-constant Kummer extension $L/K$ of degree $\ell$
in which the analytic rank does not grow and such that
\begin{itemize}
\item if the analytic rank of $E/K_2$ is even then no place of bad reduction ramifies in $L/K$, or
\item if the analytic rank of $E/K_2$ is odd then exactly one place of bad reduction ramifies. Moreover, in the latter case, the degree of this place is odd.
\end{itemize}
\end{lem}
\begin{proof}
If the analytic rank is even we choose the finite non-empty set of places $S$ to be disjoint from the set of bad places.
If the analytic rank is odd, then the congruence~\eqref{nran_eq} shows that there is at least one bad place $v$ of odd degree. So we choose $S$ to contain this as the only bad place.
Then~\eqref{nran_eq} shows that the hypothesis in Theorem~\ref{ulmer_main_thm} with the above choice for $S$ holds. So we have an integer $n$ and an element $z \in K_n^{\times}$. Now we use the first item in Proposition~\ref{red_prop} to replace $K$ by its odd Galois extension $K_n$. So $L=K(\sqrt[\ell]{z})$ is the requested extension.
\end{proof}
We now come to the algebraic part of the argument. Using the previous two lemmata, we now have a Kummer extension $L/K$ of degree $\ell$ in which the analytic rank does not grow. The Galois closure of $L/K$ is $L_a$, which contains $L_2$. We have the following picture of extensions
\begin{equation*}
\xymatrix@R-1ex@C+1.5ex{
&& L_a \ar@{-}[dddll]_{\langle\sigma\rangle}^{\ell}
\ar@{-}[rd]^{\langle\tau^2\rangle}_{a/2}
\ar@{-}@/^3pc/[rrdd]_{a}^{\langle\tau\rangle}&& \\
&&& L_2 \ar@{.}[dddll]^{\ell} \ar@{-}[rd]_{2} & \\
&&&& L \ar@{.}[dddll]^{\ell} \\
K_a \ar@{-}[rd]_{a/2} &&&&\\
& K_2 \ar@{-}[rd]_{2}&&& \\
&& K &&
}
\end{equation*}
The dotted lines are non-Galois extensions. We have written the degree under each inclusion. The Galois group $G=\Gal(L_a/K)$ is a meta-cyclic group generated by elements $\sigma$ and $\tau$ of order $\ell$ and $a$ respectively, with $L=(L_a)^{\tau}$. We have
\begin{equation*}
G = \bigl\langle \sigma,\tau\bigl\vert \tau^a=\sigma^{\ell} = 1, \tau\sigma\tau^{-1} = \sigma^q\bigr\rangle.
\end{equation*}
We list the irreducible $\mathbb{Q}_{\ell}[G]$-modules. By $\mathbbm{1}$ we denote the trivial representation. Fix a primitive character $\chi\colon\langle\tau\rangle \cong \cyclic{a}\cdot\tau\to \mathbb{Q}_{\ell}^{\times}$ that we can view as a character of $G$ by setting $\chi(\sigma)=1$. (Note that $a$ divides $\ell-1$, so $\chi$ is indeed realisable over $\mathbb{Q}_{\ell}$.) The non-trivial $1$-dimensional representations of $G$ are exactly the $\chi^i$ for $1\leqslant i \leqslant a-1$. There is only one non-trivial irreducible $\mathbb{Q}_{\ell}[\langle\sigma\rangle]$-module. It is of degree $\ell-1$. We can represent it as $\rho = \mathbb{Q}_{\ell}[\xi]$ where $\xi$ is a primitive $\ell$\textsuperscript{th} root of unity and $\sigma$ acts on $\rho$ by multiplication with $\xi$. (Over $\bar\mathbb{Q}_{\ell}$ it would split into the $\ell-1$ non-trivial characters of $\langle\sigma\rangle\cong\cyclic{\ell}$.)
We make $\rho$ into a $G$-module by defining the $\mathbb{Q}_{\ell}$-linear action of $\tau$ by $\tau(\xi^j) = \xi^{qj}$ for all $0\leqslant j\leqslant \ell-2$. It is easy to see that $\rho$ is an irreducible $\mathbb{Q}_\ell[G]$-module of degree $\ell-1$ and in fact it is the only higher dimensional irreducible $\mathbb{Q}_\ell[G]$-module. (Note that $\rho\otimes\overline\mathbb{Q}_{\ell}$ decomposes into $\tfrac{\ell-1}{a}$ irreducibles of degree $a$ corresponding to the orbits of the multiplication by $q$ on $(\cyclic{\ell})^{\times}$.) We have
\begin{equation*}
\mathbb{Q}_{\ell}[G] = \mathbbm{1} \oplus\bigoplus_{i=1}^{a-1} \chi^i \oplus \rho^a.
\end{equation*}
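The metacyclic group and the dimension count above can be modelled concretely. Below, elements of $G$ are written as pairs $(i,j)$ standing for $\sigma^i\tau^j$; the sample values $\ell=5$, $q=2$ give $a=\mathrm{ord}(q \bmod \ell)=4$, even as in assumption~(1) of the theorem. The checks confirm the defining relations, $|G|=a\ell$, and that the degrees in the decomposition of $\mathbb{Q}_{\ell}[G]$ add up to $|G|$.

```python
l, q = 5, 2
a = next(k for k in range(1, l) if pow(q, k, l) == 1)  # order of q in (Z/l)^*

def mul(x, y):
    # (sigma^i tau^j)(sigma^m tau^n) = sigma^(i + q^j * m) tau^(j + n)
    (i, j), (m, n) = x, y
    return ((i + pow(q, j, l) * m) % l, (j + n) % a)

def power(x, n):
    y = (0, 0)
    for _ in range(n):
        y = mul(y, x)
    return y

sigma, tau = (1, 0), (0, 1)
assert power(tau, a) == (0, 0) and power(sigma, l) == (0, 0)
# the commutation relation tau sigma tau^{-1} = sigma^q
assert mul(mul(tau, sigma), power(tau, a - 1)) == power(sigma, q)

G = {(i, j) for i in range(l) for j in range(a)}
assert len(G) == a * l
# dimension count: trivial rep + (a-1) characters chi^i + a copies of rho
# of degree l-1 account for the whole group algebra
assert 1 + (a - 1) * 1 + a * (l - 1) == a * l
print(a, len(G))
```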
For convenience we will denote $\chi^{a/2}$ by $\varepsilon$. The fixed field of the kernel of $\varepsilon$ is $K_2$.
To state the next lemma, we need to introduce the corrected product of Tamagawa numbers.
Fix the invariant $1$-form $\omega$ on $E/K$ corresponding to the fixed Weierstrass equation. For each place $v$, write $c_v(E/K)$ for the Tamagawa number and define
\begin{equation*}
C_v(E/K,\omega) = c_v(E/K)\cdot \Biggl\vert \frac{\omega}{\omega^{o}_{v}}\Biggr\vert_v
\end{equation*}
where $\omega^o_v$ is a N\'eron differential for $E/K_v$. The global product over all places $v$ of $K$
\begin{equation*}
C(E/K) = \prod_v C_v(E/K,\omega)
\end{equation*}
does not depend on the choice of $\omega$, by the product formula.
For any irreducible $\mathbb{Q}_\ell[G]$-module $\psi$, write $m_{\psi}$ for the multiplicity of the $\psi$-part of the $\ell$-primary Selmer group $\Sel_{\ell^{\infty}}(E/L_2)$.
\begin{lem}
We have
\begin{equation*}
m_{\mathbbm{1}} + m_{\varepsilon} + m_{\rho} \equiv \ord_{\ell} \Biggl( \frac{C(E/L_2)}{C(E/K_2)} \Biggr) \pmod{2}.
\end{equation*}
\end{lem}
\begin{proof}
We are interested in the following relation between permutation representations (in the terminology of Dokchitsers' work, say~2.3 in~\cite{dok_modsquares})
\begin{equation*}
\Theta = 2 \cdot G + \langle \tau^2 \rangle - 2 \cdot \langle \tau\rangle - \langle \sigma,\tau^2\rangle
\end{equation*}
corresponding to the equality of $L$-functions
\begin{equation*}
L(E/K,s)^2 \cdot L(E/L_2,s) = L(E,\mathbbm{1},s)^3 \cdot L(E,\varepsilon,s)\cdot L(E,\rho,s)^2
= L(E/K_2,s) \cdot L(E/L,s)^2.
\end{equation*}
It can be seen that the regulator constants (as defined in~2.11 of~\cite{dok_modsquares}) satisfy
\begin{equation*}
C_{\Theta}(\mathbbm{1})\equiv C_{\Theta}(\varepsilon)\equiv C_{\Theta}(\rho)\equiv\ell\pmod{\square}
\end{equation*}
in $\mathbb{Q}^{\times}$ modulo squares. For $\mathbbm{1}$ and $\varepsilon$ this is straightforward; for $\rho$ it is best to use
Theorem 4.(4) of~\cite{dok_reg} with $D=\langle\tau\rangle$, implying that
\begin{equation*}
C_{\Theta}(\rho)\cdot C_{\Theta}(\mathbbm{1}) = C_{\Theta}\Bigl(\mathbb{Q}_\ell[G/\langle\tau\rangle]\Bigr) = 1.
\end{equation*}
So $S_{\Theta} = \{\mathbbm{1},\varepsilon,\rho\}$ in Dokchitsers' notation in~\cite{dok_sd}.
In short, everything looks just as if $L_2/K$ were a dihedral extension (which it is not unless $a=2$). For $a=2$ this is computed in Example~1 in~\cite{dok_reg} and Example~4.5 in~\cite{dok_modsquares} and Example~3.5 in~\cite{dok_sd}. For $a=\ell-1$, this is Example~2.20 in~\cite{dok_modsquares} and Example~3.6 in~\cite{dok_sd}.
Now, Theorem~1.6 in~\cite{dok_sd} shows that
\begin{equation*}
m_{\mathbbm{1}} + m_{\varepsilon} + m_{\rho} \equiv \ord_{\ell} \Biggl( \frac{C(E/K)^2\cdot C(E/L_2)}{C(E/L)^2\cdot C(E/K_2)} \Biggr) \pmod{2}
\end{equation*}
which proves the lemma.
\end{proof}
\begin{lem}\label{fracc_lem}
Suppose that no bad place ramifies in $L/K$, then
the $\ell$-adic valuation of the integer $C(E/L_2)/C(E/K_2)$ is even.
If there is only one bad place that ramifies in $L/K$ and this place is of odd degree, then the $\ell$-adic valuation of $C(E/L_2)/C(E/K_2)$ is odd.
\end{lem}
The more general statement for $a=2$ can be found in Remark 4.18 in~\cite{dok_modsquares}.
\begin{proof}
Let $v$ be a place of $K_2$. Write $y$ for $\omega/\omega^o_v$. Then
\begin{equation*}
\frac{\prod_{w\mid v} C_w(E/L_2,\omega)}{C_v(E/K_2,\omega)}
= \frac{\prod_{w\mid v} c_w(E/L_2)}{c_v(E/K_2)}\cdot \frac{\prod_{w\mid v} \vert y \vert_w}{\vert y \vert_v}
\equiv \frac{\prod_{w\mid v} c_w(E/L_2)}{c_v(E/K_2)} \pmod{\square}
\end{equation*}
because $\prod_{w\mid v} \vert y \vert_w / \vert y \vert_v = \vert y \vert_v^{\ell-1}$ is a square.
If the place $v$ is unramified, then the type of reduction and the Tamagawa number do not change and we have
\begin{equation*}
\frac{\prod_{w\mid v} c_w(E/L_2)}{c_v(E/K_2)} =
\begin{cases}
c_v(E/K_2)^{\ell-1} \quad\ & \text{ if $v$ decomposes in $L_2/K_2$ and}\\
1 & \text{ if $v$ is inert.}
\end{cases}
\end{equation*}
In either case it is a square. If the reduction is good at $v$ then $c_w(E/L_2) = c_v(E/K_2) = 1$. This proves the first case.
Suppose now $v$ is a place in $K_2$ which lies above a place of odd degree in $K$ and which ramifies in $L_2/K_2$. Then the place is inert in $K_2/K$ and hence the reduction of $E/K_2$ at $v$ is necessarily split multiplicative. Let $q$ be the Tate parameter of $E$ at $v$. Then $c_v(E/K_2) = v(q)$ and $c_w(E/L_2) = w(q) = \ell\cdot v(q)$ for the place $w$ above $v$. So the quotient is $\ell$ which has odd $\ell$-adic valuation. This proves the second statement.
\end{proof}
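The case analysis of the lemma amounts to simple bookkeeping of $\ell$-adic valuations, which can be mirrored in a toy computation. A place of $K_2$ contributes $(\ell-1)\cdot\ord_{\ell}(c_v)$ if it decomposes (an even number, since $\ell-1$ is even), $0$ if it is inert or of good reduction, and exactly $1$ if it is ramified of split multiplicative reduction, where $c_w=\ell\cdot c_v$. The place data below are made-up samples.

```python
l = 5

def ord_l(n):
    # l-adic valuation of a positive integer
    v = 0
    while n % l == 0:
        n //= l
        v += 1
    return v

def contribution(kind, c_v):
    # contribution of one place of K_2 to ord_l( C(E/L_2)/C(E/K_2) )
    if kind == "decomposed":       # prod_w c_w / c_v = c_v^(l-1)
        return (l - 1) * ord_l(c_v)
    if kind == "inert":            # c_w = c_v
        return 0
    if kind == "ramified_split":   # c_w = l * c_v via the Tate parameter
        return 1
    raise ValueError(kind)

# first case of the lemma: no ramified bad place, so the valuation is even
unram = [("decomposed", 5), ("inert", 3), ("decomposed", 2)]
assert sum(contribution(k, c) for k, c in unram) % 2 == 0

# second case: exactly one ramified split multiplicative place, so it is odd
ram = unram + [("ramified_split", 5)]
assert sum(contribution(k, c) for k, c in ram) % 2 == 1
print("ok")
```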
Finally we can finish the proof of Theorem~\ref{ell_thm}. By construction, we have $\ord_{s=1} L(E,\rho,s) = 0$. As in the proof of Proposition~\ref{red_prop}, this implies that $m_{\rho} = 0$; in fact $L(E,\rho,s)$ is $L(B/K,s)$ for the extension $L/K$. So the last two lemmata show that $m_{\mathbbm{1}}+m_{\varepsilon}$, which is the corank of the $\ell$-primary Selmer group $\Sel_{\ell^{\infty}}(E/K_2)$, has the same parity as the analytic rank of $E/K_2$. This proves the $\ell$-parity conjecture for $E/K_2$. Assumption~(\ref{b_as}) and Proposition~\ref{red_prop} prove that the $\ell$-parity holds over $K$, too.
\end{proof}
\section{Failure to extend}\label{fail_sec}
Although it is unusual to write about unsuccessful attempts in a mathematical article, we include in this last section a short explanation of why we were unable to extend the proof in the previous section. We hope this might be the starting point for a complete proof of the $\ell$-parity conjecture. We try to outline here the missing non-vanishing result for $L$-functions, which might be accessible using automorphic methods.
The main ingredient for proving Theorem~\ref{ell_thm} was the existence of a Kummer extension of degree $\ell$ in which the analytic rank does not grow. Moreover this extension was linked in a ``non-commutative way'' to an even abelian extension. The machinery using representation theory set up by Tim and Vladimir Dokchitser is then sufficient to prove the parity.
First, if condition~(\ref{b_as}) in Theorem~\ref{ell_thm} does not hold but condition~(\ref{a_as}) still holds, then there is no hope that a Kummer extension will do.
In order to obtain a Galois extension of $K$ from a Kummer extension, we need to pass through the extension $K_2/K$. But without any control on the growth of the analytic rank in this quadratic extension, we do not know how to prove the $\ell$-parity over $K$. With some extra work, one can conclude that the $\ell$-parity conjecture holds for $E/K_2$. In this case, we would need a non-vanishing result for an extension of $K$ of degree dividing $\ell$ which is not a Kummer extension.
Suppose now that condition~(\ref{a_as}) does not hold. Then we would need to find the ``dihedral'' extension somewhere else. Proposition~\ref{ell_non_prop} below formulates this in a positive way.
\begin{prop}\label{ell_non_prop}
Suppose $E/K$ is semistable and non-isotrivial.
Let $F/K$ be a quadratic extension such that the analytic rank of $E$ grows by at most $1$ in $F/K$ and the analytic rank of $E/F$ is even.
Assume
\begin{enumerate}
\item $a=[F(\mu_{\ell}):F]$ is odd and
\item there exists an odd $n\geqslant 1$ and a $z\in F_n^{\times}$ with the property that $L=F_n(\sqrt[\ell]{z})$ is an extension of degree $\ell$ of $F_n$ such that $L_a/K_{an}$ is a dihedral extension, no bad place of $E/F_n$ ramifies in $L/F_n$, and the analytic rank of $E$ does not grow in $L/F_n$.
\end{enumerate}
Then the $\ell$-parity conjecture holds for $E/K$.
\end{prop}
Remark that there is a large supply of quadratic extensions $F/K$ by Theorem~11.2 in~\cite{ulmer_gnv}.
The main problem here seems to be finding the extension $L_a/F_{an}$. Theorem~5.2 in~\cite{ulmer_gnv} provides us with many extensions that satisfy all the properties, except that we cannot guarantee that $L_a/K_{an}$ is dihedral. We initially hoped that Ulmer's proof could be adapted to enforce that $L/K_{n}$ is dihedral. In the notations of~\cite{ulmer_gnv}, we may sketch the problem. Let $D$ be a divisor of large degree as in section 6.2. Then the density (as $n$ grows) of elements in the Riemann-Roch space $H^1(\mathcal{C}\times \mathbb{F}_{q^n}, \mathcal{O}(D))$ which give rise to a dihedral extension of $F_n$ with respect to the fixed quadratic extension $F/K$ tends to 0 very quickly. So we would need to modify the parameter space $X$, and it is not clear how to find a nice variety parametrising such dihedral extensions.
\begin{proof}
This is very similar to the proof of Theorem~\ref{ell_thm}. By Proposition~\ref{red_prop}, we may assume that $a=1$ and $n=1$. So $L/K$ is a dihedral extension with group $G$. Let $\rho$ be the irreducible $\mathbb{Q}_{\ell}[G]$-module of degree $\ell-1$ and let $\varepsilon$ be the character corresponding to the quadratic extension $F/K$. The usual relation of induced representations $\Theta$ for $G$, as in Example~3.5 in~\cite{dok_sd}, yields the congruence
\begin{equation*}
m_{\mathbbm{1}} + m_{\varepsilon} + m_{\rho} \equiv \ord_{\ell}\Bigl(\frac{C(E/L)}{C(E/F)}\Bigr) \pmod{2}.
\end{equation*}
We have $m_{\rho} = 0$ and, from Lemma~\ref{fracc_lem}, we know that the assumption that no bad place ramifies in $L/K$ implies that $\frac{C(E/L)}{C(E/F)}$ has even $\ell$-adic valuation. This implies that $m_{\mathbbm{1}} + m_{\varepsilon}$ is even, i.e. the $\ell$-parity is valid for $E/F$. Proposition~\ref{red_prop} then implies that the $\ell$-parity also holds for $E/K$.
\end{proof}
\section*{Acknowledgements}
We express our gratitude to Tim and Vladimir Dokchitser, Jean Gillibert, Christian Liedtke, James S. Milne, Takeshi Saito, Ki-Seng Tan, Douglas Ulmer, and the anonymous referee for useful comments on the preliminary versions of this paper.
\bibliographystyle{amsalpha}
| {
"timestamp": "2010-12-15T02:02:30",
"yymm": "1011",
"arxiv_id": "1011.2991",
"language": "en",
"url": "https://arxiv.org/abs/1011.2991",
"abstract": "We prove the $p$-parity conjecture for elliptic curves over global fields of characteristic $p > 3$. We also present partial results on the $\\ell$-parity conjecture for primes $\\ell \\neq p$.",
"subjects": "Number Theory (math.NT)",
"title": "Parity conjectures for elliptic curves over global fields of positive characteristic"
} |
https://arxiv.org/abs/0804.0224 | Characterization of the critical values of branching random walks on weighted graphs through infinite-type branching processes | We study the branching random walk on weighted graphs; site-breeding and edge-breeding branching random walks on graphs are seen as particular cases. We describe the strong critical value in terms of a geometrical parameter of the graph. We characterize the weak critical value and relate it to another geometrical parameter. We prove that, at the strong critical value, the process dies out locally almost surely; while, at the weak critical value, global survival and global extinction are both possible. | \section{Introduction}\label{sec:intro}
\setcounter{equation}{0}
We consider
the branching random walk (briefly BRW) as a continuous-time process
where particles live on an at most countable set $X$ (the set of sites).
Each particle lives on a site and, independently of the others, has a random lifespan; during its life it breeds at random intervals and sends its
offspring to randomly chosen sites.
More precisely each particle has an exponentially distributed lifespan
with mean 1. To a particle living at site $x$, for any $y\in X$, there corresponds a Poisson clock of rate $\lambda k_{xy}$: when the clock
rings, a new particle is born in $y$ (where $(k_{xy})_{x,y\in X}$
is a matrix with nonnegative entries and $\lambda>0$), provided that the
particle at $x$ is still alive.
This approach unifies the two main points of view which may be found in
the literature: the \textit{site-breeding} BRW and the most widely used \textit{edge-breeding}
BRW. Indeed in the first case there is a constant reproduction rate $\lambda$
at each site and the offspring is sent accordingly to a probability
distribution on $X$ (thus $(k_{xy})_{x,y\in X}$ is a stochastic matrix).
Examples can be found in \cite{cf:BZ}, \cite{cf:HuLalley} and \cite{cf:Stacey03}
(where it is called \textit{modified} BRW).
In the edge-breeding model, $X$ is a graph and to each (oriented)
edge one associates a reproduction rate $\lambda$
(thus $(k_{xy})_{x,y\in X}$ is the adjacency matrix of the graph).
Some examples are in \cite{cf:BZ}, \cite{cf:Ligg2}, \cite{cf:Pem},
\cite{cf:PemStac1} and \cite{cf:Stacey03}.
On regular graphs (see for instance \cite{cf:Ligg1} and \cite{cf:MadrasSchi}), the site-breeding model employing the
transition matrix of the simple random walk is equivalent, up to a multiplicative constant, to the edge-breeding one.
We consider the BRW with initial configuration given by a single
particle at a fixed site $x$: there are two kinds of survival:
\begin{enumerate}[$(i)$]
\item
\textit{weak} (or global) \textit{survival} -- the total number of particles is positive at each time;
\item
\textit{strong} (or local) \textit{survival} -- the number of particles at site $x$ is not eventually $0$.
\end{enumerate}
Let us denote by $\lambda_w(x)$ (resp.~$\lambda_s(x)$) the infimum of the values
of $\lambda$ such that there is weak (resp.~strong)
survival with positive probability. Clearly $\lambda_w(x) \le \lambda_s(x)$
and these values do not depend on $x$ when $K$ is irreducible
(see Section~\ref{subsec:graphs}).
For the edge-breeding BRW on a connected graph, in \cite{cf:PemStac1} it was proved
that $\lambda_s=1/M_s$ where $M_s$ is a geometrical parameter of the
graph. This result can be extended to the BRW on weighted graphs
(Theorem~\ref{th:pemantleimproved}). To our knowledge the
behavior of the BRW at $\lambda=\lambda_s(x)$ was previously unknown:
we prove that there is almost sure extinction in Theorem~\ref{th:critb}
(we proved the same result for BRW on multigraphs in \cite{cf:BZ}).
More challenging is the characterization of the weak critical parameter
$\lambda_w(x)$ and the study of the weak critical behavior.
Following the ideas which lead to the characterization of $\lambda_s(x)$
one naturally guesses that $\lambda_w(x)=1/M_w(x)$ (see Section~\ref{subsec:graphs} for the definition).
Indeed in \cite{cf:BZ} we proved that in the irreducible case, $\lambda_w\ge1/M_w$ and we gave sufficient conditions for equality
(for instance all site-breeding BRWs satisfy these conditions).
In this paper we use a different approach which allows us to characterize
$\lambda_w(x)$ in terms of the existence of solutions of certain
infinite-dimensional linear systems (Theorem~\ref{th:equiv1});
in particular we show that $\lambda_w(x)$ is related to the
so-called Collatz-Wielandt numbers of some linear operator
(see \cite{cf:FN1}, \cite{cf:FN2} and \cite{cf:Marek1} for the
definition).
Thanks to this characterization, we prove a stronger lower bound,
$\lambda_w(x)\ge1/M_w^-(x)$ and give sufficient conditions for equality
(Remark~\ref{finite} and Propositions~\ref{th:fgraph} and \ref{th:condU}).
We show (Example~\ref{exm:2}) that it may be that $\lambda_w(x)=
1/M_w^-(x)\neq1/M_w(x)$. As for the critical behavior, Example~\ref{exm:4}
is a BRW which globally survives at $\lambda_w(x)$ (while, for instance,
on finite weighted graphs the BRW dies out at $\lambda_w(x)$;
this is a particular case of Theorem~\ref{th:critg}).
The question whether $\lambda_w(x)=1/M_w^-(x)$ always holds
is, as far as we know, still open.
The basic idea behind the study of $\lambda_w(x)$ relies on the
comparison between the BRW and an infinite-type branching process
(briefly IBP).
It is well known that the probability of extinction of a
Galton-Watson branching process is the smallest positive fixed point
of a certain generating function.
In Section~\ref{sec:GBP} we prove some results on IBPs
by studying an infinite-dimensional generating function and its
fixed points.
The paper is organized as follows: Sections~\ref{subsec:graphs}
and \ref{subsec:genfun} introduce the basic definitions
(among which the definition of weighted graph and of the geometrical
parameters of the graph).
In Section~\ref{sec:fixed} we prove some results on fixed points
for monotone functions in partially ordered sets.
In Section~\ref{sec:GBP} we define IBPs and associate
in a ``canonical'' way an IBP to a given BRW.
Section~\ref{sec:critical} is devoted to the study of
the critical values $\lambda_s(x)$ and $\lambda_w(x)$
(Section~\ref{subsec:critical}) and of the strong and weak critical
behaviors (Section~\ref{subsec:criticalb}).
Finally in Section~\ref{sec:examples} we give some examples of IBPs and BRWs.
\section{Basic definitions and preliminaries}\label{sec:def}
\subsection{Weighted graphs}\label{subsec:graphs}
Let us consider $(X,K)$ where $X$ is a countable (or finite) set and $K=(k_{xy})_{x,y \in X}$ is
a matrix of nonnegative \textit{weights} (that is, $k_{xy} \ge 0$) such that $\sup_{x \in X} \sum_{y \in X}
k_{xy} = M< \infty$. We denote by $(X,K)$ the \textit{weighted graph} with set of edges
$E(X):=\{(x,y) \in X \times X: k_{xy}>0 \}$, where to each edge $(x,y)$ we associate
the weight $k_{xy}$.
We say that $K$ is \textit{irreducible} if
$(X,E(X))$ is a connected graph.
We define recursively
$k^n_{xy}:=\sum_{w\in X} k^{n-1}_{x w} k_{w y}$
(where $k^0_{xy}:=\delta_{xy}$); moreover we set
$T^n_x:=\sum_{y \in X} k^n_{xy}$ and
$\phi^n_{xy}:=\sum_{x_1,\ldots,x_{n-1} \in X \setminus\{y\}} k_{x x_1} k_{x_1 x_2} \cdots k_{x_{n-1} y}$;
by definition
$\phi^0_{xy}:=0$ for all $x,y \in X$.
Clearly $k^n_{xy}$ is the total weight of all paths of length $n$ from $x$ to $y$,
$T^n_x$ is the total weight of all paths of length $n$ from $x$, while
$\phi^n_{xy}$ is the analog of $k^n_{xy}$ regarding only paths reaching
$y$ for the first time at the $n$-th step.
For $k^n_{xy}$ and $T_x^n$ the following recursive
relations hold for all $n,m \geq 0$
\[
k^{n+m}_{xy}=\sum_{w \in X} k^n_{xw} k^m_{wy};
\qquad
\begin{cases}
T_x^{n+m}=\sum_{w \in X} k^n_{xw} T_w^m \\
\\
T_x^0=1\\
\end{cases}
\]
and, for all $n \ge 1$,
\[
k_{xy}^n=\sum_{i=0}^n \phi_{xy}^i k^{n-i}_{yy}.
\]
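These path-weight recursions can be sanity-checked numerically by viewing $K$ as a matrix, so that $k^n_{xy}$ is simply an entry of the $n$-th matrix power. A minimal sketch (the weight matrix below is an illustrative assumption, not taken from the text):

```python
import numpy as np

# Illustrative weighted graph on X = {0, 1, 2}; K[x, y] = k_{xy} (assumed weights).
K = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])

def k_n(n):
    """k^n_{xy}: total weight of all paths of length n, i.e. the n-th matrix power."""
    return np.linalg.matrix_power(K, n)

def T_n(x, n):
    """T^n_x: total weight of all paths of length n starting at x."""
    return k_n(n)[x].sum()

# Semigroup relation k^{n+m}_{xy} = sum_w k^n_{xw} k^m_{wy} ...
n, m = 3, 4
assert np.allclose(k_n(n + m), k_n(n) @ k_n(m))
# ... and the recursion T_x^{n+m} = sum_w k^n_{xw} T_w^m with T_x^0 = 1.
assert np.isclose(T_n(0, n + m), sum(k_n(n)[0, w] * T_n(w, m) for w in range(3)))
```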
Whenever, given $x,y \in X$, there exists $n \in \mathbb N$ such that $k^n_{xy}>0$ we
write $x \to y$; if $x \to y$ and $y \to x$ then
we write $x \leftrightarrow y$.
This is an equivalence relation; let us denote by $[x]$ the equivalence class of $x$ (usually
called \textit{irreducible class}).
We observe that the summations involved in $k^n_{xx}$ could be equivalently restricted
to sites in $[x]$, moreover $\lambda_s(x)$ depends only on
$[x]$.
Similarly one can prove that $\lambda_w(x)$ depends only on $[x]$.
We introduce the following
geometrical parameters
\[
M_s(x,y;X):=\limsup_{n} (k^n_{xy})^{1/n}, \qquad
M_w(x;X):=\limsup_{n} (T_x^n)^{1/n}, \qquad
M^-_w(x;X):=\liminf_{n} (T_x^n)^{1/n}.
\]
In the rest of the paper, whenever there is no ambiguity, we will omit
the dependence on $X$.
Moreover, we write $M_s(x):=M_s(x,x)$;
supermultiplicative arguments imply that
$M_s(x) =\lim_{n} (k^{dn}_{xx})^{1/dn}$ for some $d \in \mathbb N$ hence,
for all $x \in X$, we have that
$M_s(x) \le M^-_w(x) \le M_w(x)$.
It is easy to show that the above
quantities are constant within an irreducible class; hence in the
irreducible case the dependence on $x,y$ will be omitted.
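In the finite irreducible case these parameters all coincide with the Perron-Frobenius eigenvalue of $K$ (see Remark~\ref{finite} below); a quick numerical sketch with an assumed $2\times 2$ weight matrix:

```python
import numpy as np

# Assumed irreducible weights; the Perron root of this K is 2.
K = np.array([[0.0, 2.0], [1.0, 1.0]])
rho = max(abs(np.linalg.eigvals(K)))      # Perron-Frobenius eigenvalue

n = 200
Kn = np.linalg.matrix_power(K, n)
Ms_est = Kn[0, 0] ** (1.0 / n)            # approximates M_s(0) = lim (k^n_{00})^{1/n}
Mw_est = Kn[0].sum() ** (1.0 / n)         # approximates M_w(0) = lim (T_0^n)^{1/n}
assert abs(Ms_est - rho) < 0.05 and abs(Mw_est - rho) < 0.05
```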
\subsection{Generating functions}\label{subsec:genfun}
Let us consider the following generating functions
\[
\begin{split}
\Gamma(x,y|\lambda)&:=\sum_{n =0}^\infty k^n_{xy} \lambda^n,
\qquad \Theta(x|\lambda):=\sum_{n =0}^\infty T_x^n \lambda^n,
\qquad \Phi(x,y|\lambda):=\sum_{n =1}^\infty \phi_{xy}^n \lambda^n;
\end{split}
\]
note that the radii of convergence of $\Gamma(x,y|\lambda)$ and $\Theta(x|\lambda)$ are
$1/M_s(x,y)$ and $1/M_w(x)$ respectively.
The following relation holds
\begin{equation}\label{eq:HTheta}
\Gamma(x,y|\lambda)=\Phi(x,y|\lambda)\Gamma(y,y|\lambda)+\delta_{xy}, \quad \forall
\lambda: |\lambda|< \min(1/M_s(x,y),1/M_s(y)).
\end{equation}
Since
\begin{equation}\label{eq:genfun1}
\Gamma(x,x|\lambda)=\frac{1}{1-\Phi(x,x|\lambda)},
\qquad \forall \lambda \in \mathbb C: |\lambda|< 1/M_s(x),
\end{equation}
we have that $1/M_s(x)=\max\{ \lambda \geq 0 :\Phi(x,x|\lambda)\leq 1\}$
for all $x \in X$ (see Section 2.2 of \cite{cf:BZ} for details).
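The renewal identity~\eqref{eq:genfun1} can be verified numerically on a finite graph: the first-passage weights $\phi^n_{xx}$ can be computed with a taboo matrix $B$ (equal to $K$ with the column of $x$ zeroed, so intermediate visits to $x$ are forbidden), giving $\phi^n_{xx}=(B^{n-1}K)_{xx}$ for $n\ge1$. A sketch with assumed weights and truncated series:

```python
import numpy as np

# Assumed irreducible weights; M_s(0) = 2, so any lam < 1/2 is inside both radii.
K = np.array([[0.0, 2.0], [1.0, 1.0]])
x, lam, N = 0, 0.3, 60

B = K.copy()
B[:, x] = 0.0                             # taboo matrix: forbids entering x mid-path
Gamma, Phi = 1.0, 0.0                     # n = 0 terms: k^0_{xx} = 1, phi^0_{xx} = 0
Bn = np.eye(2)                            # B^0
for n in range(1, N):
    phi_n = (Bn @ K)[x, x]                # phi^n_{xx}: weight of first-return paths
    Phi += phi_n * lam ** n
    Gamma += np.linalg.matrix_power(K, n)[x, x] * lam ** n
    Bn = Bn @ B

# Gamma(x,x|lam) = 1/(1 - Phi(x,x|lam)), up to the (tiny) truncation error.
assert Phi < 1 and abs(Gamma - 1.0 / (1.0 - Phi)) < 1e-6
```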
\subsection{Fixed points in partially ordered sets}\label{sec:fixed}
Let $(Q, \ge)$ be a partially ordered set and $W:Q \mapsto Q$ be a nondecreasing function,
that is, $x\ge y$ implies $W(x)\ge W(y)$.
Let us denote by $(-\infty, y]$ and $[y, +\infty)$ the \textit{intervals} $\{w \in Q: w \le y\}$ and
$\{w \in Q: w \ge y\}$ respectively.
We consider a topology $\tau$ on $Q$ such that
all the intervals $(-\infty, y]$ and $[y, +\infty)$ are closed.
\begin{pro}
\label{teo:monotone}
Let $W:Q \mapsto Q$ be a nondecreasing function.
\begin{enumerate}
\item[(a)]
If $q \ge W(q)$ then $W((-\infty,q]) \subseteq (-\infty,q]$.
If $q \le W(q)$ then $W([q,+\infty)) \subseteq [q,+\infty)$.
\end{enumerate}
Moreover let us suppose that $q_0 \in Q$ satisfies $W(q_0) \ge q_0$
(resp.~$W(q_0) \le q_0$)
and define the sequence $\{q_n\}_{n \in \mathbb N}$
recursively by $q_{n+1}=W(q_n)$, for all $n \in \mathbb N$.
The following hold.
\begin{enumerate}
\item[(b)]
The sequence
is nondecreasing (resp.~nonincreasing).
\item[(c)]
If the sequence has a cluster point $q$
and $y$ is such that $y \ge q_0$, $y \ge W(y)$ (resp.~$y \le q_0$, $y \le W(y)$) then
$q \le y$ (resp.~$q \ge y$).
\item[(d)]
Every cluster point $q$ of
$\{q_n\}_{n \in \mathbb N}$ satisfies $q \ge q_0$ (resp.~$q \le q_0$).
If $W$ is continuous then
there is at most one cluster point $q$ and
\[
\begin{split}
W(q)&=q \quad \text{ and } \quad
(-\infty,q] =
\bigcap_{y \ge q_0:W(y) \le y} (-\infty, y]
=\bigcap_{y \ge q_0:W(y)=y} (-\infty, y] \\
\Big(\text{resp.~}W(q)&=q \quad \text{ and } \quad
(-\infty,q] =
\bigcup_{y \le q_0:W(y) \ge y} (-\infty, y]
=\bigcup_{y \le q_0:W(y)=y} (-\infty, y]\,\Big).
\end{split}
\]
\end{enumerate}
\end{pro}
\begin{proof}
\begin{enumerate}[(a)]
\item
Note that if $y \in (-\infty,q]$ then
$W(y) \le W(q) \le q$. The second assertion is proved analogously.
\item
This is easily proved by induction on $n$.
\item
By induction on $n$ we have $q_n \in (-\infty,y]$ which is closed
by assumption, thus $q \in (-\infty,y]$. The second assertion is proved analogously.
\item
The first claim follows since $[q_0,+\infty)$ is closed.
Continuity implies that for every cluster point $W(q)=q$.
Moreover
if $q$ and $\widetilde q$ are two cluster points then since
$q_0\le \widetilde q$
then by (c)
$q \le \widetilde q$ and similarly $\widetilde q \le q$ whence $q= \widetilde q$.
By (c)
$(-\infty, q]=\bigcap_{y \ge q_0:W(y) \le y} (-\infty, y]$.
Moreover since $W(q)=q$
\[
(-\infty, q] \supseteq
\bigcap_{y \ge q_0:W(y) = y} (-\infty, y] \supseteq
\bigcap_{y \ge q_0:W(y) \le y} (-\infty, y]
\]
whence the claim. The proof of the second claim is analogous.
\end{enumerate}
\end{proof}
\begin{cor}
\label{cor:monotone}
Let $Q$ have a smallest element $\mathbf 0$ (resp.~a largest element $\mathbf 1$),
$W:Q \mapsto Q$ be a continuous nondecreasing function.
If $\{q_n\}_{n \in \mathbb N}$ is recursively defined by
\begin{equation}
\label{eq:qn}
\begin{cases}
q_{n+1}=W(q_n) \\
q_0=\mathbf 0 \quad \text{ (resp.~}q_0=\mathbf 1 \text{)}. \\
\end{cases}
\end{equation}
then $\{q_n\}_{n \in \mathbb N}$ has at most one cluster point $q$; moreover $q$
is the smallest (resp.~largest) fixed point of $W$ and for any $y \in Q$, we have that
$q<y$ (resp.~$q>y$) if and only if there exists $y^\prime < y$ (resp.~$y^\prime > y$) such that $W(y^\prime) \le y^\prime$
(resp.~$W(y^\prime) \ge y^\prime$).
\end{cor}
\begin{proof}
Clearly $W(\mathbf 0) \ge \mathbf 0$ hence (according to the previous proposition)
the sequence
$\{q_n\}_{n \in \mathbb N}$ is nondecreasing and
since $q_0=\mathbf 0 \le y$ for all $y \in Q$, there is at most one cluster point
$q$, and it is the smallest
fixed point of $W$. If $q<y$ then take $y^\prime=q$; on the other hand if
there exists $y^\prime < y$ such that $W(y^\prime) \le y^\prime$ then $q\le y^\prime <y$.
The proof of the second claim follows analogously.
\end{proof}
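A one-type illustration of the corollary: for a Galton-Watson process with a hypothetical offspring law (no children with probability $1-p$, two children with probability $p$), iterating the generating function from the bottom element $0$ converges to its smallest fixed point $\min(1,(1-p)/p)$.

```python
# Hypothetical one-type offspring law: 0 children w.p. 1 - p, 2 children w.p. p.
p = 0.75
W = lambda s: (1 - p) + p * s * s    # generating function, nondecreasing on [0, 1]

q = 0.0                              # q_0 = bottom element of [0, 1]
for _ in range(200):
    q = W(q)                         # q_{n+1} = W(q_n): a nondecreasing sequence

# The limit is the smallest fixed point of W: min(1, (1 - p)/p) = 1/3 here.
assert abs(q - (1 - p) / p) < 1e-6
```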
\section{Infinite-type branching processes}\label{sec:GBP}
Let $X$ be a set which is at most countable. Each element of this set
represents a different type of particle of a (possibly) infinite-type branching process.
Given $f \in \Psi:=\{g \in \mathbb N^X:
S(g):=\sum_{x \in X} g(x)< +\infty\}$, at the end of its life
a particle of type $x$ gives birth to $f(y)$ children of type $y$ (for all $y \in X$) with
probability $\mu_x(f)$ where $\{\mu_x\}_{x \in X}$ is a family of probability
distributions on the (countable) measurable space $(\Psi,2^\Psi)$.
To the family $\{\mu_x\}_{x \in X}$ we associate a generating function $G:[0,1]^X \to [0,1]^X$
which can be considered as an infinite dimensional power series. More precisely,
for all $z \in [0,1]^X$ the function $G(z) \in [0,1]^X$ is defined as follows
\begin{equation}
\label{eq:genfun}
G(z|x):= \sum_{f \in \Psi} \mu_x(f) \prod_{y \in X} z(y)^{f(y)}.
\end{equation}
Note that $G$ is continuous with respect to the \textit{pointwise convergence topology}
(or \textit{product topology}) on $[0,1]^X$.
Indeed,
every $f \in \Psi$ is finitely supported, hence
$\prod_{y \in X} z(y)^{f(y)}$ is
a finite product and
$z \mapsto \prod_{y \in X} z(y)^{f(y)}$
is continuous.
The continuity of $G$ follows from Weierstrass criterion for uniform convergence,
since
$\sup_{z \in [0,1]^X}
\mu_x(f) \prod_{y \in X} z(y)^{f(y)}
= \mu_x(f)$ which is summable (with respect to $f \in \Psi$).
The set $[0,1]^X$ is partially ordered by $z \ge z^\prime$ if and only if $z(x) \ge z^\prime(x)$ for all $x \in X$;
by $z > z^\prime$ we mean that $z \ge z^\prime$ and $z \not = z^\prime$.
We denote by $\mathbf 0$ and $\mathbf 1$
the smallest and largest element of $[0,1]^X$ respectively, that is $\mathbf 0(x):=0$ and $\mathbf 1(x):=1$ for every $x \in X$.
The topological (partially ordered) space $[0,1]^X$ is compact and every monotone sequence has a cluster point, moreover
all the intervals $(-\infty,z] \equiv [\mathbf 0, z]$ and $[z,+\infty) \equiv [z,\mathbf 1]$
are closed sets whence all the hypotheses of Corollary~\ref{cor:monotone}
are satisfied. Let us note that $G(\mathbf 1)=\mathbf 1$
and $G$ is nondecreasing.
From now on we suppose that $\mu_x(\mathbf 0)>0$ for some $x \in X$
in order to avoid a trivial case of almost sure survival.
Let $q_n(x)$ be
the probability of extinction before or at the $n$-th generation starting from a single
initial particle of type $x$; and let $q(x)$ be the probability of extinction
at any time starting from the same configuration.
Note that $q_n$ and $q$ can be viewed as
elements of $[0,1]^X$. Clearly $q_0=\mathbf 0$ and
\[\begin{split}
q_{n+1}(x) &= \sum_{f \in \Psi} \mu_x(f) \prod_{y \in X} q_n(y)^{f(y)}= G(q_n|x);\\
q(x)&=\lim_{n \to \infty} q_n(x).
\end{split}
\]
According to Proposition~\ref{teo:monotone} and Corollary~\ref{cor:monotone},
$q$ is the smallest fixed point of $G$ and $q< \mathbf 1$ if and only if
\begin{equation}
\label{eq:ineq}
G(y) \le y \qquad \text{for some } y < \mathbf 1.
\end{equation}
Hence, if $y$ satisfies~\eqref{eq:ineq}
then $y(x)$ is an upper bound for $q(x)$.
Conversely if we define
\begin{equation}
\label{eq:H1}
H(v):=\mathbf1-G(\mathbf1-v)
\end{equation}
then $H$ is nondecreasing and continuous; moreover if
\begin{equation}
\label{eq:vn}
\begin{cases}
v_{n+1}=H(v_n) \\
v_0=\mathbf 1. \\
\end{cases}
\end{equation}
then $\{v_n\}_{n\in\mathbb N}$ is nonincreasing and has a unique cluster point
$v:=\lim_{n \to \infty} v_n=\mathbf1-q$.
Clearly $v_n(x)$ can be interpreted as the probability of survival up to
the $n$-th generation for the BRW starting with one particle
on $x$ ($v(x)$ being the probability of surviving forever).
Moreover $v > \mathbf 0$ if and only if $H(y) \ge y$ for
some $y \ge \mathbf 0$.
Note that in this case $y(x)$ is a lower bound for
$v(x)$.
Let $G_n$ and $H_n$ be the $n$-th iterates
of $G$ and $H$; $H_n(v)=\mathbf1-G_n(\mathbf1-v)$
and they are continuous and nondecreasing.
\begin{rem}
\label{rem:irrid}
Let us consider the graph $(X,E_\mu)$ where $E_\mu:=\{(x,y) \in X^2: \exists f \in \Psi, f(y)>0, \mu_x(f)>0\}$.
We call the IBP \textit{irreducible} if and only if the graph
$(X,E_\mu)$ is connected. It is easy to show that for the extinction probabilities $q$ of
an irreducible IBP we have
$q<\mathbf 1$ (that is $v>\mathbf 0$) if and only if $q(x)< 1$ for all $x \in X$
(that is $v(x)>0$ for all $x \in X$).
\end{rem}
\subsection{Infinite-type branching processes associated to branching random walks}
\label{subsec:ibrw}
In order to study the weak behavior of the BRW,
we associate a discrete-time branching process to the (continuous-time)
BRW in such a way that
they both survive or both die at the same time.
Each particle of the BRW living on a site $x$ will be given
the label $x$ which represents its type.
We suppose that the BRW starts from a single particle in
a vertex $x_0$; if there are several particles we repeat
this construction for each initial particle.
The IBP is constructed as follows: the 0th generation
is one particle of type $x_0$; the 1st generation of the IBP
is the collection of the children of this particle (ever born):
this collection is almost surely
finite, say, $r_1$ particles in the vertex $x_1$, $\ldots$,
$r_m$ particles in $x_m$. Thus from the point of view of the IBP
the 1st generation is the collection of
$r_1$ particles of type $x_1$, $\ldots$,
$r_m$ particles of type $x_m$.
Take one particle of type $x_1$ in the 1-st generation and collect all its children,
repeat this
for all the particles in the 1st generation: the set of all
these new particles is the 2nd generation.
Proceeding
in the same way we construct the 3rd generation and so on.
Clearly the progeny of the IBP is the same as the progeny
of the BRW hence the latter is finite (i.e.~the BRW dies out)
if and only if the former is finite (i.e.~the IBP dies out).
The probabilities
of extinction of the IBP
(that is, the smallest fixed point of the
generating function), regarded as an element of $[0,1]^X$, coincide with
the probabilities of extinction
of the BRW.
Let us compute the generating function of this IBP.
Roughly speaking, the probability for a particle of type $x$ of having
$f(y)$ children of type $y$ for all $y \in X$ (where $f \in \Psi$)
is the probability that, for all $y \in X$, a Poisson clock
of rate $\lambda k_{xy}$ rings $f(y)$ times before the death
of the original particle (i.e.~a clock of rate 1).
Elementary computations show that
\[
\mu_x(f)= \frac{
S(f)! \prod_{y \in X} (\lambda k_{xy})^{f(y)}}
{(1+\lambda \sum_{y \in X} k_{xy})^{
S(f)
+1}\prod_{y \in X} f(y)!}.
\]
Recalling~\eqref{eq:genfun} we have
\begin{equation}
\label{eq:G-BRW}
\begin{split}
G^\lambda(z|x)&=\sum_{f \in \Psi}
\frac{
S(f)! \prod_{y \in X} (\lambda k_{xy})^{f(y)}}
{(1+\lambda \sum_{y \in X} k_{xy})^{
S(f)+1}\prod_{y \in X} f(y)!}
\prod_{y \in X} z(y)^{f(y)} \\
&= \frac{1}{1+\lambda \sum_{y \in X} k_{xy}}
\sum_{i=0}^{+\infty} \sum_{f:
S(f)
=i}
\frac{i!}{\prod_{y \in X} f(y)!}
\frac{1}{
(1+\lambda \sum_{y \in X} k_{xy})^i
}
\prod_{y \in X} \Big ( \lambda k_{xy}z(y)
\Big )^{f(y)} \\
&=
\frac{1}{1+\lambda \sum_{y \in X} k_{xy}}
\sum_{i=0}^{+\infty}
\Big ( \frac{\lambda \sum_{y \in X} k_{xy}z(y) }
{1+\lambda \sum_{y \in X} k_{xy}} \Big )^i
=
\frac{1}{1+\lambda \sum_{y \in X} k_{xy}(1-z(y))}.
\end{split}
\end{equation}
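The closed form of $G^\lambda$ can be double-checked by truncating the sum~\eqref{eq:genfun} over $f$; the sketch below uses an assumed two-site weight matrix. (The derivation above shows that $S(f)$ is geometrically distributed with success probability $1/(1+\lambda\sum_y k_{xy})$, which is why the truncated tail is negligible.)

```python
import math, itertools

# Assumed two-site weights k_{xy} and parameters; truncate the sum over f at
# f(y) < 40 and compare with the closed form just derived.
k = [[0.5, 1.0], [2.0, 0.0]]
lam, x = 0.4, 0
z = [0.3, 0.8]

def mu(x, f):
    """mu_x(f): probability that a type-x particle has f[y] children of type y."""
    S = sum(f)
    tot = 1.0 + lam * sum(k[x])
    num = math.factorial(S) * math.prod((lam * k[x][y]) ** f[y] for y in range(2))
    den = tot ** (S + 1) * math.prod(math.factorial(f[y]) for y in range(2))
    return num / den

G_trunc = sum(mu(x, f) * z[0] ** f[0] * z[1] ** f[1]
              for f in itertools.product(range(40), repeat=2))
G_closed = 1.0 / (1.0 + lam * sum(k[x][y] * (1.0 - z[y]) for y in range(2)))
assert abs(G_trunc - G_closed) < 1e-8
```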
We note that
the quantity $\lambda k_{xy}$
can be interpreted as the expected number of offspring of type $y$ of
a particle of type $x$.
Clearly in this case
\[
H^\lambda(v|x)=
\frac{\lambda \sum_{y \in X} k_{xy}v(y)}{1+\lambda \sum_{y \in X} k_{xy}v(y)}.
\]
If we define the bounded linear operator $K:l^\infty(X) \mapsto l^\infty(X)$ as
$Kv(x):=\sum_{y \in X} k_{xy}v(y)$ then
\begin{equation}
\label{eq:H-BRW}
H^\lambda(v)= \frac{\lambda Kv}{\mathbf1+\lambda Kv},
\end{equation}
hence the functions $H^\lambda_n$ and $G^\lambda_n$ from $[0,1]^X$ into itself are nondecreasing and continuous with respect
to $\|\cdot\|_\infty$ for every $n \ge 1$.
In particular each iterate $H^\lambda_n$ can be extended to the positive cone
$l^\infty_+(X):=\{v \in l^\infty(X): v \ge \mathbf 0\}$.
We observe that the operator $K$ preserves $l^\infty_+(X)$.
When there is no ambiguity, we will drop the dependence on $\lambda$ in these functions.
From now on, if not stated otherwise, it will be tacitly understood that
$G$ and $H$ are defined by equations~\eqref{eq:G-BRW} and~\eqref{eq:H-BRW}
respectively.
It is easy to show that $K$ is irreducible (as stated in
Section~\ref{subsec:graphs}) if and only if the corresponding
IBP is irreducible in the sense of Remark~\ref{rem:irrid}.
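On a finite irreducible graph the iteration~\eqref{eq:vn} for the $H^\lambda$ of~\eqref{eq:H-BRW} can be run directly to approximate the survival probabilities $v=\mathbf1-q$; a sketch with assumed weights, for which $\lambda_w=\lambda_s=1/\rho(K)=1/2$ (cf.~Remark~\ref{finite}):

```python
import numpy as np

# Assumed irreducible weights with Perron root 2, so the critical value is 1/2.
K = np.array([[0.0, 2.0], [1.0, 1.0]])

def survival(lam, n_iter=5000):
    """Iterate v_{n+1} = H^lam(v_n) from v_0 = 1; the limit is v = 1 - q."""
    v = np.ones(2)
    for _ in range(n_iter):
        u = lam * (K @ v)
        v = u / (1.0 + u)            # H^lam(v) = lam*Kv / (1 + lam*Kv), entrywise
    return v

assert np.all(survival(0.4) < 1e-6)   # subcritical: almost sure global extinction
assert np.all(survival(0.8) > 0.1)    # supercritical: positive survival probability
```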
\section{The critical values and the critical behaviors}\label{sec:critical}
\subsection{The critical values}\label{subsec:critical}
In \cite{cf:PemStac1} it was proved that, in the irreducible case,
$\lambda_s=1/M_s$ for any
graph.
In \cite{cf:BZ} we used a different approach to extend this result to multigraphs;
the same arguments hold for weighted graphs (we repeat the proof for
completeness).
This approach allows us to study the critical behavior when $\lambda=\lambda_s(x)$
(see Theorem~\ref{th:critb}).
We observe that in the proof of the following theorem to the BRW we associate a particular branching process
which is not the one introduced in Section~\ref{subsec:ibrw}.
The proof relies on the concept of (reproduction) \textit{trail}: see \cite{cf:PemStac1}
for the definition.
\begin{teo}\label{th:pemantleimproved}
For every weighted graph $(X,K)$ we have that $\lambda_s(x)=1/M_s(x)$.
\end{teo}
\begin{proof}
Fix $x \in X$,
consider a path $\Pi:=\{x=x_0, x_1, \ldots, x_n=x\}$ and
define its number of cycles $\mathbb{L}(\Pi):=|\{i=1,
\ldots,n:x_i=x\}|$; the expected number of trails along such a path
is $\lambda^n \prod_{i=0}^{n-1} k_{x_i x_{i+1}}$
(i.e. the expected number of particles ever born at $x$, descending from the original particle
at $x$ and whose genealogy is described by the path $\Pi$ -- their mothers were at $x_{n-1}$, their
grandmothers at $x_{n-2}$ and so on).
Disregarding the original time scale, to the BRW there
corresponds a Galton-Watson branching process: given any particle $p$ in $x$
(corresponding to a trail with $n$ cycles), define its children
as all the particles whose trail is a prolongation of the trail of $p$ and is associated
with a spatial path with $n+1$ cycles.
Hence a particle is of the $k$-th generation if and only if the
corresponding trail has $k$ cycles; moreover it has one (and only one)
parent in the $(k-1)$-th generation. Since each particle behaves
independently of the others then the process is markovian. Thus the BRW
survives strongly if and only if this branching process does.
The expected number of children of the branching process is the sum
over $n$ of the expected number of trails of length
$n$ and one cycle, that is $\sum_{n=1}^\infty \phi_{x,x}^n\lambda^n=\Phi(x,x|\lambda)$.
Thus we have a.s.~local extinction if and only if $\Phi(x,x|\lambda)\leq 1$, that is,
$\lambda \leq 1/M_s(x)$.
\end{proof}
We turn our attention to the weak critical parameter $\lambda_w(x)$, which, by Corollary~\ref{cor:monotone}, may be characterized in terms of the function $H^\lambda$
(defined by equation~\eqref{eq:H-BRW}):
\begin{equation}
\label{eq:lambdaw}
\begin{split}
\lambda_w(x)&=\inf \{\lambda \in \mathbb R: \exists v \in l^\infty_+(X), v(x) > 0 , H^\lambda(v) \ge v\} \\
&=
\inf \Big \{\lambda \in \mathbb R: \exists v \in [0,1]^X, v(x)> 0 , \lambda Kv \ge \frac{v}{1-v} \Big \}.
\end{split}
\end{equation}
Our goal is to give other characterizations of $\lambda_w(x)$.
Theorem~\ref{th:equiv1} shows that, for every $n \ge 1$
\begin{equation}
\label{eq:lambdaw3}
\begin{split}
\lambda_w(x)&=\inf \{\lambda \in \mathbb R: \exists v \in l^\infty_+(X), v(x) > 0 , H^\lambda_n(v) \ge v\}\\
&=\inf \{\lambda \in \mathbb R: \exists v \in l^\infty_+(X), v(x)> 0, \lambda^n K^n v \ge v \};
\end{split}
\end{equation}
thus, by taking $n=1$ in the previous equation,
$\lambda_w(x)=\inf \{\undertilde r\,\!_K(v): v \in l^\infty(X), v(x)=1\}$ where
$\undertilde r\,\!_K(v)$ is the
\textit{lower
Collatz-Wielandt number of} $v$ (see \cite{cf:FN1}, \cite{cf:FN2} and \cite{cf:Marek1}).
We note that equation~\eqref{eq:lambdaw3} is particularly useful to compute
the value of $\lambda_w$ (indeed solving the linear inequality therein is easier than
solving the nonlinear inequality in~\eqref{eq:lambdaw}). Unfortunately
the critical (global) survival of the BRW (with one initial particle at $x$)
is equivalent to the existence of
a solution of $\lambda_w(x) Kv\ge v/(1-v)$ with $v(x)>0$ (see Example~\ref{exm:4}).
The existence of a solution of $\lambda_w(x) Kv\ge v$ does not imply critical survival.
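In the finite irreducible case the linear characterization in~\eqref{eq:lambdaw3} reduces to Perron-Frobenius theory: $\lambda Kv\ge v$ admits a solution $v>\mathbf 0$ exactly when $\lambda\ge1/\rho(K)$, where $\rho(K)$ is the Perron root, and the Collatz-Wielandt sandwich $\min_x Kv(x)/v(x)\le\rho(K)\le\max_x Kv(x)/v(x)$ holds for every $v>\mathbf 0$. A numerical sketch (assumed weights):

```python
import numpy as np

# Assumed irreducible weights; Perron root rho(K) = 2 with right eigenvector (1, 1).
K = np.array([[0.0, 2.0], [1.0, 1.0]])
rho = 2.0
v = np.ones(2)
assert np.allclose(K @ v, rho * v)                 # K v = rho v
assert np.all((1.0 / rho) * (K @ v) >= v - 1e-12)  # lambda = 1/rho solves lambda Kv >= v

# Collatz-Wielandt sandwich: min(Kw/w) <= rho <= max(Kw/w) for every positive w.
rng = np.random.default_rng(1)
for _ in range(100):
    w = rng.random(2) + 0.1
    r = K @ w / w
    assert r.min() <= rho + 1e-12 and r.max() >= rho - 1e-12
```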
\begin{teo}
\label{th:equiv1}
Let $(X,K)$ be a weighted graph and let $x \in X$.
\begin{enumerate}[(a)]
\item If $\lambda \le \lambda_w(x)$ and $v \in [0,1]^X$ is such that $\lambda Kv \ge v/(1-v)$
then $\inf_{y:x \to y,v(y) > 0} v(y) = 0$.
\item For all $n \in \mathbb N$, $n \ge 1$ we have
\[
\lambda_w(x)
=\inf \{\lambda \in \mathbb R: \exists v \in l^\infty_+(X), v(x) > 0 \text{ such that } H_n^\lambda(v) \ge v\}.
\]
\item For all $n \in \mathbb N$, $n \ge 1$ we have
\[
\lambda_w(x)=
\inf \{\lambda \in \mathbb R: \exists v \in l^\infty_+(X), v(x)> 0 \text{ such that } \lambda^n K^n v \ge v \}.
\]
\end{enumerate}
\end{teo}
\begin{proof}
\begin{enumerate}[(a)]
\item
Let $X^\prime :=\{y \in X: x \to y\}$ and consider $v^\prime(y):=v(y)\, {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{X^\prime}(y)$, then
$\lambda K v^\prime \ge v^\prime/(1-v^\prime)$ (since $\lambda Kv^\prime(y)=\lambda Kv(y)$ for
all $y \in X^\prime$). Thus we may suppose, without loss of generality, that $X^\prime=X$.
For all $t \in [0,1]$ we have
$\lambda K(tv) \ge \frac{tv}{1-tv} \frac{1-tv}{1-v}$ and
$v \mapsto \frac{1-tv}{1-v}$ is nondecreasing.
By contradiction, suppose that $\inf_{y\in X:\,v(y)>0}v(y)=\delta>0$, hence
$\frac{1-tv}{1-v} \ge \frac{1-t\delta}{1-\delta} \mathbf 1$ and
$(\lambda\frac{1-\delta}{1-t\delta}) K(tv) \ge \frac{tv}{1-tv}$
thus, for all $t \in (0,1)$, $\lambda> \lambda\frac{1-\delta}{1-t\delta} \ge \lambda_w(x)$.
\item
Define $\lambda_n(x):=\inf \{\lambda \in \mathbb R: \exists v \in l^\infty_+(X), v(x) > 0 , H_n^\lambda(v) \ge v\}$.
Clearly $H^\lambda(v) \ge v$ implies $H_n^\lambda(v) \ge v$, thus $\lambda_w(x) \ge \lambda_n(x)$.
If $\lambda >\lambda_n(x)$ then by Corollary~\ref{cor:monotone}
the sequence $\{\widetilde v_i\}_{i\in\mathbb N}$ defined by $\widetilde v_0=\mathbf 1$,
$\widetilde v_{i+1}=H_n^\lambda(\widetilde v_i)$ converges monotonically to some $v > \mathbf 0$, namely
$\widetilde v_i \downarrow v$. But $\widetilde v_i=v_{ni}$ (for all $i \in \mathbb N$)
where the nonincreasing sequence $\{v_j\}_{j\in\mathbb N}$ is defined by
equation~\eqref{eq:vn}, whence $v_j \downarrow v$. By \eqref{eq:lambdaw},
since
$H^\lambda(v) =v$, we get $\lambda\ge\lambda_w(x)$.
\item
Define now $\lambda_n:=\inf \{\lambda \in \mathbb R: \exists v \in l^\infty_+(X), v(x)> 0 \text{ such that } \lambda^n K^n v \ge v \}$.
We prove that
$\lambda_{n} \le \lambda_w(x)$ for all $n\ge1$. Indeed, if $\lambda>\lambda_w(x)$,
then there exists $\tilde v$ such that $\lambda K\tilde v\ge\frac{\tilde v}{1-\tilde v}\ge\tilde v$.
By induction on $n$, $\lambda^nK^n\tilde v\ge\tilde v$,
thus, for all $n$, $\lambda\ge\lambda_n$ which implies $\lambda_n\le\lambda_w(x)$.
On the other hand, if $\lambda >\lambda_n$ then there exists
$\lambda^\prime \in [\lambda_n, \lambda)$ such that
$(\lambda^\prime)^n K^n v \ge v$ for some $v \in l^\infty_+(X)$ such that $v(x)>0$.
If $\varepsilon=\lambda/\lambda^\prime-1$
and $\delta>0$ is such that $\|\lambda K H^\lambda_{n-1}(\delta^\prime v)\|_\infty \le \varepsilon$ for all $\delta^\prime \in (0, \delta]$
(which is possible since $H^\lambda_n$ is continuous and
$H^\lambda_n(\mathbf 0)=\mathbf 0$) then we have that
$H^\lambda_n(\delta^\prime v) \ge (\lambda/(1+\varepsilon)) K H^\lambda_{n-1}(\delta^\prime v)$. By induction
on $n$ and since $K$ is a positive operator
there exists $\widetilde \delta>0$ such that
$H^\lambda_n(\widetilde \delta v) \ge (\lambda/(1+\varepsilon))^n K^n H^\lambda_0(\widetilde \delta v)=(\lambda^\prime)^n K^n (\widetilde \delta v)
\ge \widetilde \delta v$
whence $\lambda \ge \lambda_w(x)$ by (b) and this implies $\lambda_n \ge \lambda_w(x)$.
\end{enumerate}
\end{proof}
\noindent The following theorem improves Lemma 3.2 of \cite{cf:BZ}.
\begin{teo}
\label{th:weak}
For every weighted graph $(X,K)$ we have that $\lambda_w(x) \ge 1/M^-_w(x)$.
\end{teo}
\begin{proof}
Let $\lambda<1/M^-_w(x)$.
If there exists $v\in l^\infty_+(X)$
such that $\lambda Kv \ge \frac{v}{1-v}\ge v$,
then for all $n\in\mathbb N$ we have $\lambda^n K^nv \ge v$.
Thus $\|v\|_\infty \lambda^n \sum_{y \in X} k^n_{xy} \ge \lambda^n K^nv(x) \ge v(x)$,
but, since $\lambda \liminf_n \sqrt[n]{\sum_{y \in X} k^n_{xy}}<1$, we have that $\liminf_{n} \lambda^n \sum_{y \in X} k^n_{xy}=0$,
whence $v(x)=0$. By~\eqref{eq:lambdaw}, $\lambda\le\lambda_w(x)$.
\end{proof}
\begin{rem}
\label{finite}
Let us focus on the particular case where $X$ is finite. Clearly
if $K$ is irreducible, then
$\lambda_w=\lambda_s=1/M_w=1/M_s=1/M_w^-$
and these parameters do not depend
on the site of the initial particle.
If $X$ is finite, but $K$ is not irreducible,
it may happen that $\lambda_w(x)\neq\lambda_s(x)$
and also $\lambda_w(x)\neq\lambda_w(y)$
(although $\lambda_w(x)\leq\lambda_w(y)$ for all $y$ such that
$x\to y$).
Moreover in the finite case
$\lambda_w(x,[x])=1/M_w^-(x,[x])$ (where by adding $[x]$ we
consider the parameters corresponding to the process
restricted to $[x]$, namely $([x],K|_{[x]\times [x]})$):
the proof is the same as in Proposition 2.2
of \cite{cf:BZ}.
From this and the fact that the BRW starting from
one particle in $x$ survives globally if and only if it survives (locally and globally) in at least one irreducible class,
it follows that $\lambda_w(x,X)=\min\{1/M_w^-(y,[y]):x \to y \}$.
By induction on the number of equivalence classes, it is not difficult to
prove that $M_w^-(x,X)= \max\{M_w^-(y,[y]):x \to y \}$
which proves, for finite weighted graphs, that
$\lambda_w(x,X)=1/M_w^-(x)$.
As for the critical behavior,
the $\lambda_w(x)$-BRW dies out (globally, thus locally)
almost surely. Indeed if
$\lambda_w(x)<\lambda_w(y,[y])=\lambda_s(y)$ it cannot
survive confined to $[y]$. If $\lambda_w(x)=\lambda_w(y,[y])$ then
according to \textit{(a)} of Theorem~\ref{th:equiv1} the
probabilities of survival $v$ for the process confined to $[y]$
satisfy $\inf_{z \in [y]} v(z)=0$. Since $[y]$ is finite and irreducible,
this means that $v(z)=0$ for all $z \in [y]$.
\end{rem}
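A concrete instance of the remark (with assumed weights): take $X=\{0,1\}$, $k_{00}=k_{01}=1$, $k_{11}=2$, so the irreducible classes are $\{0\}$ and $\{1\}$. Then $k^n_{00}=1$ while $T_0^n=2^n$, whence $\lambda_s(0)=1/M_s(0)=1$ but $\lambda_w(0)=1/M_w^-(0)=1/2$.

```python
import numpy as np

# Assumed reducible weights: classes {0} and {1}, with k_00 = k_01 = 1, k_11 = 2.
K = np.array([[1.0, 1.0], [0.0, 2.0]])

n = 50
Kn = np.linalg.matrix_power(K, n)           # K^n = [[1, 2^n - 1], [0, 2^n]]
assert abs(Kn[0, 0] - 1.0) < 1e-9           # k^n_00 = 1: M_s(0) = 1, lambda_s(0) = 1
T0 = Kn[0].sum()                            # T_0^n = 2^n exactly for this K
assert abs(T0 ** (1.0 / n) - 2.0) < 1e-9    # M_w^-(0, X) = 2, so lambda_w(0) = 1/2
```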
We say that $(X,K)$ is \textit{locally isomorphic} to $(Y,\widetilde K)$ if and only if
there exists a surjective map $f:X \to Y$ such that
$\sum_{z \in f^{-1}(y)} k_{xz}=\widetilde k_{f(x)y}$
for all
$x \in X$ and $y \in Y$.
An $(X,K)$ which is locally isomorphic to some
$(Y,\widetilde K)$ ``inherits'' its $M_w^-$s and $\lambda_w$s
(in a sense which is clear in the proof of the following
proposition).
\begin{pro}\label{th:fgraph}
If $Y$ is a finite set and $(X,K)$ is locally
isomorphic to $(Y, \widetilde K)$
then $\lambda_w(x)=1/M_w^-(x)$.
\end{pro}
\begin{proof}
The definition of the map $f$ immediately implies that $\sum_{z \in X} k^n_{xz}
= \sum_{y \in Y} \widetilde k^n_{f(x)y}$, hence $M_w^-(x,X)=
M_w^-(f(x),Y)$. Moreover
$\lambda_w(x,X)= \lambda_w(f(x),Y)$. Indeed it is easy to prove that
$\lambda_w(x,X) \ge \lambda_w(f(x),Y)$. Conversely, if $\lambda>\lambda_w(f(x),Y)$
and $\widetilde v$ is such that $\lambda \widetilde K \widetilde v \ge \widetilde v$
then we define $v(x):=\widetilde v(f(x))$. Clearly $\widetilde K \widetilde v (f(x))=
Kv(x)$ hence $\lambda \ge \lambda_w(x,X)$. Remark~\ref{finite} yields the conclusion.
\end{proof}
Examples of BRWs $(X,K)$ which are locally isomorphic to some finite $(Y,\widetilde K)$ are BRWs where $\sum_{z\in X}k_{xz}$
does not depend on $x$ (in this case $Y=\{y\}$ is a singleton and
$\widetilde k_{yy}=\sum_{z\in X}k_{xz}$).
Another example is given
by \textit{quasi-transitive} BRWs, that is, there exists a finite $X_0\subset X$ such that for any $x\in X$ there is
a bijective map $\gamma_x:X\to X$ satisfying $\gamma_x^{-1}(x)
\in X_0$ and $k_{yz}=k_{\gamma_x y\,\gamma_x z}$ for all
$y,z$ (in this case $Y=X_0$ and
$\widetilde k_{wz}=\sum_{y:y=\gamma_y(z)}k_{wy}$).
Let us consider now the irreducible case;
since $\lambda_w(x)$ and $M_w^-(x)$ do not depend on $x$ let us write $\lambda_w$ and $M_w^-$ instead.
Note that the characterization of $\lambda_w$ can be written as
\begin{equation}
\label{eq:lambdaw2}
\begin{split}
\lambda_w&=\inf \{\lambda \in \mathbb R: \exists v \in l^\infty(X), v > \mathbf 0 , H^\lambda(v) \ge v\} \\
&=
\inf \Big \{\lambda \in \mathbb R: \exists v \in l^\infty(X), v> \mathbf 0 , \lambda^n K^nv \ge v \Big \}\\
&=\inf \{\lambda \in \mathbb R: \exists v \in l^\infty(X), v > \mathbf0 , H^\lambda_n(v) \ge v\},
\end{split}
\end{equation}
where the requirement $v > \mathbf0$ seems less restrictive
than $v(x)>0$ for all $x\in X$, which is the one we would
expect in view of equation~\eqref{eq:lambdaw}. Nevertheless
by Remark~\ref{rem:irrid} it follows that if there exists
$v > \mathbf0$ satisfying one of the inequalities in
\eqref{eq:lambdaw}, then there exists a solution
$v^\prime$ of the same inequality with $v^\prime(x)>0$ for all $x\in X$.
\begin{pro}\label{th:condU}
Let $(X,K)$ be an irreducible weighted graph.
If for all $\varepsilon>0$ there exists $N$ such
that $\sum_{y \in X} k^{N}_{xy} \ge (M_w^--\varepsilon)^{N}$,
for all $x\in X$, then $\lambda_w =1/M_w^-$.
\end{pro}
\begin{proof}
Let $\lambda>1/M_w^-$. Choose $\varepsilon$ such that
$\lambda(M_w^--\varepsilon)>1$. Then $\lambda^NK^N\mathbf1(x)=\lambda^N
\sum_{y\in X}k^N_{xy}\ge (\lambda(M_w^--\varepsilon))^N>1$. Hence by
Theorem~\ref{th:equiv1} $\lambda>\lambda_w$.
Theorem~\ref{th:weak} yields the conclusion.
\end{proof}
We note that if $(X,K)$ is irreducible and satisfies the hypothesis of Proposition~\ref{th:fgraph},
then Proposition~\ref{th:condU} provides an alternative proof of $\lambda_w=1/M_w^-$.
For an example (which is not locally isomorphic to a finite weighted graph),
where one can use Proposition~\ref{th:condU}, see Example 3 in \cite{cf:BZ}
(which is a BRW on a particular radial tree).
\subsection{The critical behavior}
\label{subsec:criticalb}
\begin{teo}\label{th:critb}
For each weighted graph $(X,K)$ if
$\lambda=\lambda_s(x)$ then the $\lambda$-BRW starting from one particle at $x \in X$ dies out locally almost surely.
\end{teo}
\begin{proof}
Recall that (see the proof of Theorem~\ref{th:pemantleimproved})
the $\lambda(x)$-BRW survives locally if and only if the Galton-Watson branching process
with expected number of children $\Phi(x,x|\lambda)$ does. Since
$\Phi(x,x|1/M_s)\leq 1$ and $\lambda_s(x)=1/M_s$ then there is
a.s.~local extinction at $\lambda_s(x)$.
\end{proof}
\begin{teo}\label{th:critg}
If $Y$ is a finite set and $(X,K)$ is locally isomorphic to $(Y, \widetilde K)$
then the $\lambda_w(x)$-BRW starting from one particle at $x \in X$ dies out
globally almost surely.
\end{teo}
\begin{proof}
By reasoning as in Proposition~3.7 of \cite{cf:BZ} it is clear that the $\lambda_w(x)$-BRW
on $X$ dies out if and only if the $\lambda_w(x)$-BRW
on $Y$ does.
Remark~\ref{finite} yields the conclusion.
\end{proof}
\section{Examples}
\label{sec:examples}
We start by giving an example of an irreducible IBP
where the colony survives with positive probability even though the expected
number of children of each particle is less than 1.
\begin{exm}
\label{exm:1}
Let $X=\mathbb N$, $\{p_n\}_{n\in\mathbb N}$ be a sequence in $[0,1)$ and suppose that a particle of type $n\ge1$ at the end of its life
has one child of type $n+1$ with
probability $1-p_n$, one child of type $n-1$ with probability $p_n/2$
(if $n=0$ then it has one child of type $0$ with probability $p_0/2$)
and no children with probability $p_n/2$.
The generating function $G$ can be explicitly computed
\[
G(z|n)=
\begin{cases}
\frac{p_n}{2} + \frac{p_n}{2} z(n-1)+ (1-p_n) z(n+1) & n \ge 1 \\
\frac{p_0}{2} + \frac{p_0}{2} z(0) + (1-p_0) z(1) & n=0.\\
\end{cases}
\]
This process clearly dominates the (reducible) one where
a particle of type $n$ at the end of its life
has one child of type $n+1$ with
probability $1-p_n$ and no children with probability $p_n$.
The latter process has generating function
$\widetilde G(z|n)=p_n + (1-p_n)z(n+1) $.
By induction it is easy to show that the probabilities of extinction before or at generation
$n$ of the second process
are $q_{n}(j)=1-\prod_{i=j}^{j+n-1} (1-p_i)$ for all $n \ge 1$; hence it survives with positive probability,
that is $q_n(0) \not\to 1$ as $n \to \infty$, if
and only if $\sum_{i=0}^\infty p_i < +\infty$.
\end{exm}
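The closed form for $q_n(j)$ and the survival criterion can be illustrated numerically; the two sequences $\{p_i\}$ below (one summable, one divergent) are our own illustrative choices, not from the text.

```python
def q(n, j, p):
    """Closed form: extinction by generation n starting from type j."""
    prod = 1.0
    for i in range(j, j + n):
        prod *= 1.0 - p(i)
    return 1.0 - prod

def q_rec(n, j, p):
    """Recursion q_n(j) = p_j + (1 - p_j) q_{n-1}(j+1), with q_0 = 0."""
    if n == 0:
        return 0.0
    return p(j) + (1.0 - p(j)) * q_rec(n - 1, j + 1, p)

# Illustrative choices (not from the text): a summable and a divergent sequence.
summable  = lambda i: 1.0 / (i + 2) ** 2   # sum p_i < infinity  -> survival
divergent = lambda i: 1.0 / (i + 2)        # sum p_i = infinity  -> extinction

# the closed form agrees with the recursion
assert abs(q(20, 0, summable) - q_rec(20, 0, summable)) < 1e-12
print(q(5000, 0, summable), q(5000, 0, divergent))  # ~0.50 and ~0.9998
```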
In all the following examples, $X=\mathbb N$ and $k_{ij}=0$ whenever $|i-j|>1$. Although
this looks quite restrictive, one quickly realizes that many BRWs are
locally isomorphic to BRWs of this kind. For instance,
every BRW on a homogeneous tree of degree $m$ (with $k_{ij}=1$ on each edge)
is locally isomorphic to the BRW on $\mathbb N$ with $k_{0\,1}:=m$, $k_{n\, n+1}:=m-1$,
$k_{n\, n-1}:=1$ and $0$ otherwise. More generally any \textit{radial BRW on a radial tree} is
locally isomorphic to a BRW on $\mathbb N$. Indeed a general radial BRW on a radial tree is constructed
as follows: let us consider two positive real sequences $\{k_n^+\}_{n\in\mathbb N}$,
$\{k_n^-\}_{n\in\mathbb N}$ and a positive integer valued sequence $\{a_n\}_{n\in\mathbb N}$. By construction, the root
of the tree is some vertex $o$ which has $a_0$ neighbors and the rates are $k_{o x}:=k_0^+$,
$k_{x o}:=k_0^-$ for all neighbors $x$. Each vertex $x$ at distance $1$ from $o$ has
$1+a_1$ neighbors (one is $o$) and
we set $k_{x y}:=k^+_1$ and $k_{y x}:=k^-_1$ for all its $a_1$ neighbors $y$ at distance $2$
from $o$. Now each vertex at distance $2$ from $o$ has $1+a_2$ neighbors,
an outward rate $k_2^+$ and an inward rate $k_2^-$ and so on. This BRW is clearly locally
isomorphic to (hence it has the same global behavior as) the BRW on $\mathbb N$ with
$k_{n\, n+1}:=a_n k_n^+$, $k_{n+1\, n}:= k_n^-$ and $0$ otherwise.
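The local-isomorphism identity $\sum_{z \in f^{-1}(y)} k_{xz}=\widetilde k_{f(x)y}$ can be verified on a small truncated radial tree; the branching numbers and rates below are hypothetical, chosen only for illustration.

```python
# Hypothetical radial data (for illustration): branching numbers and rates.
a  = [3, 2, 4]          # a_0, a_1, a_2
kp = [1.5, 2.0, 0.5]    # outward rates k_n^+
km = [1.0, 0.5, 2.0]    # inward rates  k_n^-
depth = len(a)

# Vertices of the truncated radial tree are paths from the root o = ().
vertices = [()]
for d in range(depth):
    vertices += [v + (i,) for v in vertices if len(v) == d for i in range(a[d])]

def k(x, y):
    """Rate from x to y on the tree (0 unless x and y are neighbours)."""
    if len(y) == len(x) + 1 and y[:len(x)] == x:
        return kp[len(x)]           # outward edge at distance len(x) from o
    if len(x) == len(y) + 1 and x[:len(y)] == y:
        return km[len(y)]           # inward edge
    return 0.0

def ktilde(m, n):
    """Image BRW on N: ktilde_{n,n+1} = a_n k_n^+, ktilde_{n+1,n} = k_n^-."""
    if n == m + 1 and m < depth:
        return a[m] * kp[m]
    if m == n + 1 and n < depth:
        return km[n]
    return 0.0

# f maps a vertex to its distance from o; the local-isomorphism identity is
# sum_{z in f^{-1}(n)} k(x, z) = ktilde(f(x), n) for every interior x.
ok = all(abs(sum(k(x, z) for z in vertices if len(z) == n)
             - ktilde(len(x), n)) < 1e-12
         for x in vertices if len(x) < depth
         for n in range(depth + 1))
print(ok)  # True
```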
The next one is an example of a BRW on $\mathbb N$ which is not irreducible and where $\lambda_w>1/M_w$.
This answers
an open question raised in \cite{cf:BZ}.
\begin{exm}
\label{exm:2}
Let $\{k_n\}_{n \in \mathbb N}$ be a bounded sequence of positive real numbers and
let us consider the BRW on
$\mathbb N$ with rates $k_{ij}:=k_i$ if $i=j-1$ and $0$ otherwise.
By using Equations~\eqref{eq:vn} and~\eqref{eq:H-BRW} one can show
that $v_n(i)=\lambda^n \beta_{i+n} /(1+\sum_{r=1}^{n} \lambda^r
\beta_{i+n}/\beta_{i+n-r})$ where $\beta_n:=\prod_{i=0}^{n-1} k_i$.
In order to prove that
$\lambda_w(i)=1/\liminf_n \sqrt[n]{\beta_{n+i}/\beta_i}=1/M_w^-(i)$ (which
does not depend on $i$, though the BRW is not irreducible)
one may either study the behavior of $\{v_n\}_{n\in\mathbb N}$ above or, which is
simpler, use Theorem~\ref{th:equiv1}.
Indeed, without loss of generality,
we just need to prove that
for all $\lambda > 1/\liminf_n \sqrt[n]{\beta_n}$ it is
possible to solve the inequality
$\lambda K v \ge v$ for some $v \in l^\infty(X)$, $v> \mathbf 0$.
One can easily check that $ v(n):=1/(\lambda^n \beta_n)$ is a solution;
since $\lambda > 1/\liminf_n \sqrt[n]{\beta_n}$ we have that
$\lim_n v(n) =0$ and then $v \in l^\infty(X)$.
Note that $\lambda_w=1/M_w^-$ which may be different from $1/M_w$
with the following choice of the rates.
Our goal is to define big intervals of consecutive vertices where $k_{i\, i+1}=1$, followed
by bigger intervals of vertices where $k_{i\, i+1}=2$ and so on. The result is a BRW where
$M_w=\limsup_n \sqrt[n]{\beta_n}=2$ while
$M_w^- = \liminf_n \sqrt[n] \beta_n=1$.
Define $a_n:=\lceil \log 2/\log (1+1/n) \rceil$, $b_n:=\lceil \log 2/(\log 2 - \log (2-1/n)) \rceil$ and
$\{c_n\}_{n\ge1}$ recursively by $c_1=1$, $c_{2r}=a_{2r}c_{2r-1}$, $c_{2r+1}=b_{2r+1}c_{2r}$ (for all $r \ge 1$).
Let $k_{i }$ be equal to $1$ if $i \in (c_{2r-1}, c_{2r}]$ (for some $r \in \mathbb N$) and
equal to $2$ if $ i \in (c_{2r}, c_{2r+1}]$ (for some $r \in \mathbb N$).
Clearly $\sqrt[n]{\beta_n} \in [1,2]$ for all $n \in \mathbb N$ and it is easy to check that, for all $r \ge 1$,
$\sqrt[c_{2r+1}]{\beta_{c_{2r+1}}} > 2-1/(2r+1)$ and $\sqrt[c_{2r}]{\beta_{c_{2r}}} \le 1+1/(2r)$, whence
$2=\limsup_n \sqrt[n] \beta_n>\liminf_n \sqrt[n] \beta_n=1$.
\end{exm}
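The construction above can be checked numerically; this is an illustrative sketch in which we adopt the convention $k_i=1$ for the finitely many indices $i\le c_1$ not covered by the intervals (this does not affect the limits).

```python
import math

def a(n):
    return math.ceil(math.log(2) / math.log(1 + 1 / n))

def b(n):
    return math.ceil(math.log(2) / (math.log(2) - math.log(2 - 1 / n)))

# c_1 = 1, c_{2r} = a_{2r} c_{2r-1}, c_{2r+1} = b_{2r+1} c_{2r}
c = [None, 1]
for m in range(2, 8):
    c.append(a(m) * c[-1] if m % 2 == 0 else b(m) * c[-1])

def k(i):
    """k_i = 1 on (c_{2r-1}, c_{2r}], k_i = 2 on (c_{2r}, c_{2r+1}].
    Convention (an assumption of this sketch): k_i = 1 for the finitely
    many indices i <= c_1 not covered by the intervals."""
    for r in range(1, len(c) - 1):
        if c[r] < i <= c[r + 1]:
            return 1 if r % 2 == 1 else 2
    return 1

# log beta_n = sum_{i<n} log k_i; check the two displayed inequalities.
log_beta = [0.0]
for i in range(c[7]):
    log_beta.append(log_beta[-1] + math.log(k(i)))

root = lambda n: math.exp(log_beta[n] / n)
assert root(c[4]) <= 1 + 1 / 4 and root(c[6]) <= 1 + 1 / 6   # along c_{2r}
assert root(c[5]) > 2 - 1 / 5 and root(c[7]) > 2 - 1 / 7     # along c_{2r+1}
print([round(root(c[m]), 3) for m in range(2, 8)])
```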
Although this BRW is not irreducible, it is clear that a slight modification
(that is, adding a small backward rate as in the following example) does not
significantly modify the behavior of the process and allows one to construct an irreducible
example with the same property.
Finally, the last example shows that the weak critical
survival is possible (while, according to Theorem~\ref{th:critb}, any strong critical BRW
dies out locally).
\begin{exm}
\label{exm:4}
Let $X:=\mathbb N$ and $K$ be defined by
$k_{0\, 1}:=2$, $k_{n\, n+1}:=(1+1/n)^2$, $k_{n+1\, n}:= 1/3^{n+1}$ and $0$ otherwise.
Hence the inequality $\lambda Kv \ge v/(1-v)$ becomes
\[
\begin{cases}
2 \lambda v(1) \ge v(0)/(1-v(0)) \\
\lambda(v(n+1) (1+1/n)^2+v(n-1)/3^n) \ge v(n)/(1-v(n)).\\
\end{cases}
\]
Clearly $v(0)=1/2$ and $v(n):=1/(n+1)$ (for all $n \ge 1$) is a solution for all
$\lambda \ge 1$.
If $\lambda <1$ then one can prove by induction that a solution must satisfy
$v(n+1)/v(n) \ge \frac1\lambda\left(\frac{n}{n+1}\right)^2\left(
1-\frac{1}{2^n}\right)$ for all $n \ge 2$. Thus
$v(n+1)/v(n)$ is eventually larger than $1+\varepsilon$ for some $\varepsilon>0$,
hence either $v=\mathbf 0$ or $\lim_n v(n)= +\infty$.
This implies that $\lambda_w=1$ and there is global survival if $\lambda=\lambda_w$.
\end{exm}
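As a quick numerical check of the displayed inequalities (not part of the argument), one can verify that the stated $v$ satisfies $\lambda Kv \ge v/(1-v)$ at $\lambda=1$:

```python
# v(0) = 1/2, v(n) = 1/(n+1): check lambda * K v >= v/(1-v) at lambda = 1.
lam = 1.0
v = lambda n: 0.5 if n == 0 else 1.0 / (n + 1)

# n = 0: 2*lam*v(1) >= v(0)/(1 - v(0)), which holds with equality (1 >= 1)
assert 2 * lam * v(1) >= v(0) / (1 - v(0)) - 1e-12

# n >= 1: lam*( v(n+1)(1 + 1/n)^2 + v(n-1)/3^n ) >= v(n)/(1 - v(n)) = 1/n
for n in range(1, 500):
    lhs = lam * (v(n + 1) * (1 + 1 / n) ** 2 + v(n - 1) / 3 ** n)
    assert lhs >= 1 / n
print("inequalities verified")
```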
% Source: arXiv:0804.0224 (math.PR), https://arxiv.org/abs/0804.0224
% "Characterization of the critical values of branching random walks on weighted graphs through infinite-type branching processes"
% Source: arXiv:2112.14943, "Non-jumping Turán densities of hypergraphs", https://arxiv.org/abs/2112.14943
\begin{abstract}
A real number $\alpha\in [0, 1)$ is a jump for an integer $r\ge 2$ if there exists $c>0$ such that no number in $(\alpha , \alpha + c)$ can be the Turán density of a family of $r$-uniform graphs. A classical result of Erd\H os and Stone \cite{ES} implies that every number in $[0, 1)$ is a jump for $r=2$. Erd\H os \cite{E64} also showed that every number in $[0, r!/r^r)$ is a jump for $r\ge 3$ and asked whether every number in $[0, 1)$ is a jump for $r\ge 3$. Frankl and Rödl \cite{FR84} gave a negative answer by exhibiting a sequence of non-jumps for every $r\ge 3$. After this, Erd\H os modified the question: is $\frac{r!}{r^r}$ a jump for $r\ge 3$, and what is the smallest non-jump? Frankl, Peng, Rödl and Talbot \cite{FPRT} showed that ${5r!\over 2r^r}$ is a non-jump for $r\ge 3$. Baber and Talbot \cite{BT0} showed that every $\alpha\in[0.2299, 0.2316)\cup [0.2871, \frac{8}{27})$ is a jump for $r=3$. Pikhurko \cite{Pikhurko2} showed that the set of all possible Turán densities of $r$-uniform graphs has the cardinality of the continuum for $r\ge 3$. However, whether $\frac{r!}{r^r}$ is a jump for $r\ge 3$ remains open, and $\frac{5r!}{2r^r}$ has remained the smallest known non-jump for $r\ge 3$. In this paper, we give a smaller non-jump by showing that ${54r!\over 25r^r}$ is a non-jump for $r\ge 3$. Furthermore, we give infinitely many irrational non-jumps for every $r\ge 3$.
\end{abstract}
\section{Introduction}
For a finite set $V$ and a positive integer $r$ we denote by ${{V \choose r}}$ the family of all $r$-subsets of $V$. An {\em $r$-uniform graph} ({\em $r$-graph}) $G$ is a set $V(G)$ of vertices together with a set $E(G) \subseteq {V(G) \choose r}$ of edges. The {\em density} of $G$ is defined by $d(G) = {\vert E(G)\vert\over \vert{V(G) \choose r}\vert}$.
For a family ${\cal F}$ of $r$-graphs, an $r$-graph $G$ is called $\cal F$-free if it does not contain an isomorphic copy of any $r$-graph of $\cal F$. For a fixed positive integer $n$ and a family of $r$-graphs $\cal F$, the {\em Tur\'an number} of $\cal F$, denoted by $ex(n,\cal F)$, is the maximum number of edges in an $\cal F$-free $r$-graph on $n$ vertices. An averaging argument in \cite{KNS} by Katona, Nemetz, and Simonovits shows that the sequence ${ex(n, \cal F)\over {n\choose r}}$ is non-increasing. Hence $\lim_{n\to\infty}{ex(n, \cal F)\over {n\choose r}}$ exists. The {\em Tur\'{a}n density} of $\cal F$ is defined as $$\pi(\cal F)=\lim_{n\rightarrow\infty}{ex(n, \cal F) \over {n \choose r }}.$$
If $\cal F$ consists of a single $r$-graph $F$, we simply write $ex(n, \{F\})$ and $\pi(\{F\})$ as $ex(n, F)$ and $\pi(F)$. Denote
$$\Pi_{\infty}^{r}=\{ \pi(\cal F): \cal F {\rm \ is \ a \ family \ of \ } r{\rm-uniform \ graphs} \}, $$
$$\Pi_{fin}^{r}=\{\pi(\cal F): \cal F {\rm \ is \ a \ finite \ family \ of \ } r{\rm-uniform \ graphs} \},$$
and
$$\Pi_{t}^{r}=\{ \pi(\cal F): \cal F {\rm \ is \ a \ family \ of \ } r{\rm-uniform \ graphs \ and }\ \vert \cal F \vert\le t \}. $$
Clearly,
$$\Pi_{1}^{r}\subseteq \Pi_{2}^{r}\subseteq \cdots \subseteq\Pi_{fin}^{r}\subseteq \Pi_{\infty}^{r}.$$
Finding good estimates of Tur\'an densities of hypergraphs ($r\ge 3$) is believed to be one of the most challenging problems in extremal combinatorics. The following concept concerns the accumulation points of the set $\Pi_{\infty}^{r}$.
\begin{defi}
A real number $\alpha\in [0, 1)$ is a {\em jump} for an integer $r\ge 2$ if there exists a constant $c>0$ such that for any $\epsilon>0$ and any integer $m \ge r$, there exists an integer $n_0(\epsilon, m)$ such that any $r$-uniform graph with $n > n_0(\epsilon, m)$ vertices and density $\ge \alpha + \epsilon$ contains a subgraph with $m$ vertices and density $\ge \alpha + c$.
\end{defi}
The notion of a jump is closely related to the structure of the set of Tur\'an densities. It was shown in \cite{FR84} that $\alpha$ is a jump for $r$ if and only if there exists $c>0$ such that $\Pi_{\infty}^r \cap (\alpha, \alpha+c)=\emptyset$. So every non-jump is an accumulation point of $\Pi_{\infty}^{r}$.
For 2-graphs, Erd\H{o}s-Stone-Simonovits \cite{ESi,ES} determined the Tur\'an numbers of all non-bipartite graphs asymptotically. Their result implies that $$\Pi_{\infty}^{2}=\Pi_{fin}^{2}=\Pi_{1}^{2}=\{0, {1 \over 2}, {2 \over 3}, ..., {l-1 \over l}, ...\}.$$
This implies that every $\alpha\in [ 0, 1)$ is a jump for $r=2$. For $r\geq 3$, Erd\H{o}s \cite{E64} proved that every $\alpha\in [0, r!/r^r)$ is a jump. Furthermore, Erd\H{o}s proposed the {\it jumping constant conjecture}: every $\alpha\in [0, 1)$ is a jump for every integer $r \geq 2$. In \cite{FR84}, Frankl and R\"{o}dl disproved this conjecture by showing that $\displaystyle{1-\frac{1}{l^{r-1}}}$ is not a jump for $r$ if $r\ge 3$ and $l>2r$. However, it is still unknown in general whether a given number is a jump for $r\ge 3$. A well-known open question of Erd\H{o}s is whether $r!/r^r$ is a jump for $r\ge 3$, and what the smallest non-jump is. Another question, raised in \cite{FPRT}, is whether there is an interval of non-jumps for some $r\ge 3$. Both questions seem to be very challenging. Frankl, Peng, R\"{o}dl and Talbot \cite{FPRT} showed that ${5r! \over 2 r^r}$ is a non-jump for $r\ge 3$. Baber and Talbot \cite{BT0} showed that for $r=3$ every $\alpha\in[0.2299, 0.2316)\cup [0.2871, \frac{8}{27})$ is a jump. Pikhurko \cite{Pikhurko2} showed that $\Pi_{\infty}^r$ has the cardinality of the continuum for $r\ge 3$. However, whether $\frac{r!}{r^r}$ is a jump remains open. Regarding the first question, we determine a non-jump smaller than ${5r! \over 2 r^r}$ for $r\ge 3$.
\begin{theo}\label{theo}
$\frac{12}{25}$ is not a jump for $r=3$.
\end{theo}
In \cite{jumpgeneral}, a way to generate non-jumps for every $p\ge r$ based on a non-jump for $r$ was given. The following result was shown there.
\begin{theo}\label{resultg}\cite{jumpgeneral}
Let $p\ge r\ge 3$ be positive integers. If $\alpha\cdot {r! \over r^r} $ is a non-jump for $r$, then $\alpha \cdot {p! \over p^p}$ is a non-jump for $p$.
\end{theo}
Combining Theorems \ref{theo} and \ref{resultg}, we will get
\begin{coro}
${54r! \over 25r^r}$ is a non-jump for $r\ge 3$.
\end{coro}
Chung and Graham \cite{FG} conjectured that every element in $\Pi_{fin}^{r}$ is a rational number. Baber and Talbot \cite{BT}, and Pikhurko \cite{Pikhurko2}, disproved this conjecture independently by showing that there is an irrational number in $\Pi_{fin}^{r}$. Baber and Talbot asked whether there is an irrational number in $\Pi_1^r$. Recently, Yan and Peng \cite{YP} showed that there is an irrational number in $\Pi_1^3$, and Wu and Peng \cite{WP} showed that there is an irrational number in $\Pi_1^4$. Pikhurko \cite{Pikhurko2} showed that $\Pi_{\infty}^r$ is closed, which implies that every non-jump is a Tur\'an density (though a Tur\'an density need not be a non-jump). Brown and Simonovits \cite{BS} showed that the Lagrangian of an $r$-uniform hypergraph is in $\Pi_{\infty}^r$, which also indicates the existence of irrational numbers in $\Pi_{\infty}^r$. However, no irrational non-jump has been given previously. In this paper, we give an infinite sequence of irrational non-jumps for $r=3$.
\begin{theo}\label{theo1}
Let $k\ge 2$ be an integer. Then $\alpha_k=\frac{2k-6k^3+4k^4-k\sqrt{4k - 1}+4k^2\sqrt{4k - 1}}{(2k^2+1)^2}$ is not a jump for $r=3$.
\end{theo}
Combining Theorem \ref{theo1} and Theorem \ref{resultg}, we can also get corresponding non-jumps for $r\ge 3$.
The proofs of Theorems \ref{theo} and \ref{theo1} will be given in Sections \ref{prooftheo} and \ref{prooftheo1}, respectively. Both proofs apply an approach developed by Frankl and R\"odl in \cite{FR84}. The crucial part of our proof is to give a `proper' construction. In the following section, we introduce some preliminary results and sketch the idea of the proof.
\section{Preliminaries and Sketch of the proof}
\subsection{Karush-Kuhn-Tucker Conditions}
Let us consider the optimisation problem:
\begin{flushleft}
\quad\quad\quad\quad\quad maximise\ $f(x)$\\
\quad\quad\quad\quad\quad subject to $g_i(x)\leq 0$, $i=1,\dots,m,$\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad (3.1)
\end{flushleft}
where $x\in \mathbb{R}^n$ and $f$ and $g_i$ are differentiable functions from $\mathbb{R}^n$ to $\mathbb{R}$ for all $i$. Let $\nabla{f(x)}$ be the gradient of $f$ at $x$, i.e., the vector in $\mathbb{R}^n$ whose $i$th coordinate is ${\partial\over\partial{x_i}}f(x)$. We say that the KKT conditions hold at $x^*\in \mathbb{R}^n$ if there exist $\lambda_1,\dots,\lambda_m\in \mathbb{R}$ such that
\begin{enumerate}
\item $\nabla{f(x^*)}=\sum_{i=1}^m\lambda_i\nabla{g_i(x^*)},$
\item $\lambda_i\geq 0$ for $i=1,\dots,m,$
\item $\lambda_ig_i(x^*)=0$ for $i=1,\dots,m.$
\end{enumerate}
We call the constraints linear if $g_1,\dots,g_m$ are all affine functions.
\begin{theo}\label{KKT}(\cite{BV},\cite{Jenssen})
If the constraints of (3.1) are linear, then any optimal solution to (3.1) must satisfy the KKT conditions.
\end{theo}
\subsection{Properties of the Lagrangian function}
In this section we will give the definition of the Lagrangian of an $r$-uniform graph, which is a helpful tool in our proof.
\begin{defi}
For an $r$-uniform graph $G$ with vertex set $\{1,2,\ldots,n\}$, edge set $E(G)$ and a vector $\vec{x}=(x_1,\ldots,x_n) \in \mathbb{R}^n$, define the Lagrangian function
$$\lambda (G,\vec{x})= \sum_{\{i_1,\ldots,i_r\}\in E(G)}x_{i_1}x_{i_2}\ldots x_{i_r}.$$
\end{defi}
Let $S=\{\vec{x}=(x_1,x_2,\ldots ,x_n): \sum_{i=1}^{n} x_i =1, x_i \ge 0 {\rm \ for \ } i=1,2,\ldots , n \}$. The Lagrangian of $G$, denoted by $\lambda (G)$, is defined as
$$\lambda (G) = \max \{\lambda (G, \vec{x}): \vec{x} \in S \}.$$
A vector $\vec{x}\in S$ is called a {\em feasible vector} on $G$, and $x_i$ is called the {\em weight} of the vertex $i$.
A feasible vector $\vec{y}$ is called {\em optimal} if $\lambda (G, \vec{y})=\lambda(G)$.
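As a sanity check on the definition (not part of the argument), the Lagrangian of the smallest $3$-graph, a single edge $\{1,2,3\}$, equals $1/27$, attained at $x_1=x_2=x_3=1/3$ by the AM-GM inequality; a brute-force grid search over the simplex agrees:

```python
# lambda(G) for a single 3-edge {1,2,3}: maximise x1*x2*x3 on the simplex.
# By AM-GM the optimum is (1/3, 1/3, 1/3) with value 1/27.
N = 300
best = 0.0
for i in range(N + 1):
    for j in range(N + 1 - i):
        x1, x2 = i / N, j / N
        x3 = 1.0 - x1 - x2
        best = max(best, x1 * x2 * x3)
print(best)  # close to 1/27 = 0.037037...
```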
\begin{fact}\label{mono}
If $G_1\subseteq G_2$, then $$\lambda (G_1) \le \lambda (G_2).$$
\end{fact}
\begin{fact}(\cite{FR84})\label{fact2}
Let $G$ be an $r$-graph on $[n]$. Let $\vec{x}=(x_1,x_2,\dots,x_n)$ be an optimal vector on $G$. Then
$$ \frac{\partial \lambda (G, \vec{x})}{\partial x_i}=r\lambda(G)$$
for every $i \in [n]$ satisfying $x_i>0$.
\end{fact}
Given an $r$-graph $G$, and $i, j\in V(G),$ define $$L_G(j\setminus i)=\{e: i\notin e, e\cup\{j\}\in E(G) {\rm \ and \ } e\cup\{i\}\notin E(G)\}.$$
\begin{fact}\label{symmetry}(\cite{FR84})
Let $G$ be an $r$-graph on $[n]$. Let $\vec{x}=(x_1,x_2,\dots,x_n)$ be a feasible vector on $G$, and $i,j\in [n]$, $i\neq j$ satisfy $L_G(i \setminus j)=L_G(j \setminus i)=\emptyset$. Let $\vec{y}=(y_1,y_2,\dots,y_n)$ be defined by letting $y_\ell=x_\ell$ for every $\ell \in [n]\setminus \{i,j\}$ and $y_i=y_j={1 \over 2}(x_i+x_j)$.
Then $\lambda(G,\vec{y})\geq \lambda(G,\vec{x})$. Furthermore, if the pair $\{i,j\}$ is contained in an edge of $G$, $x_i>0$ for each $1\le i\le n$, and $\lambda(G,\vec{y})=\lambda(G,\vec{x})$, then $x_i=x_j$.
\end{fact}
We also note that for an $r$-graph $G$ with $n$ vertices, if we take $\vec{u}=(u_1, \ldots, u_n)$, where each $u_i={1\over n}$, then
$$\lambda(G)\ge \lambda(G, \vec{u})={\vert E(G)\vert \over n^r} \ge {d(G) \over r!}-\epsilon$$ for $n\ge n'(\epsilon)$, where $n'(\epsilon)$ is a sufficiently large integer. On the other hand, the blow-up of an $r$-uniform graph $G$ will allow us to construct $r$-uniform graphs with large number of vertices and density close to $r!\lambda (G)$.
\begin{defi}
Let $G$ be an $r$-uniform graph with $V(G) =\{1,2,\ldots,t\}$ and $(n_1, \ldots, n_t)$ be a positive integer vector. Define the $(n_1, \ldots, n_t)$ blow-up of $G$, $(n_1, \ldots, n_t)\otimes G$ as a $t$-partite $r$-uniform graph with vertex set $V_1\cup \ldots \cup V_t, |V_i|=n_i, 1\leq i\leq t$, and edge set $E((n_1, \ldots, n_t)\otimes G)=\{\{v_{i_1}, v_{i_2},\ldots, v_{i_r}\}, { \ \rm where \ } \{i_1,i_2,\ldots, i_r\} \in E(G) {\ \rm and \ } v_{i_k} \in V_{i_k} {\rm \ for} \ 1\le k\le r \}$.
\end{defi}
\begin{remark} (\cite{FR84})\label{remarkblow}
Let $G$ be an $r$-uniform graph with $t$ vertices and $\vec{y}=(y_1, \ldots, y_t)$ be an optimal vector for $\lambda(G)$. Then for any $\epsilon >0$, there exists an integer $n_1(\epsilon)$, such that for any integer $n\ge n_1(\epsilon)$,
\begin{equation}\label{blowden}
d((\lceil ny_1\rceil, \lceil ny_2\rceil, \ldots, \lceil ny_t\rceil)\otimes G)\ge r!\lambda(G)-\epsilon.
\end{equation}
\end{remark}
Let us also state a fact which follows directly from the definition of the Lagrangian.
\begin{fact}(\cite{FR84})\label{lblow}
For every $r$-uniform graph $G$ and every positive integer $n$, $\lambda((n, n, \ldots,n)\otimes G) =\lambda (G)$ holds.
\end{fact}
Lemma \ref{arrow} in \cite{FR84} gives a necessary and sufficient condition for a number $\alpha$ to be a jump.
\begin{lemma}\label{arrow}(\cite{FR84})
The following two properties are equivalent.
\begin{enumerate}
\item $\alpha$ is a jump for $r$.
\item There exists some finite family $\cal F$ of $r$-uniform graphs satisfying $\pi({\cal F})\le \alpha$ and $\displaystyle{\lambda (F)> \frac{\alpha}{r!}}$ for all $F \in \cal F$.
\end{enumerate}
\end{lemma}
\subsection{Sketch of the proofs of Theorem \ref{theo} and \ref{theo1}}
The general approach in proving Theorem \ref{theo} and Theorem \ref{theo1} is sketched as follows: Let $\alpha$ be a number to be proved to be a non-jump for $r=3$. Assuming that $\alpha$ is a jump for $r=3$, we will derive a contradiction by the following steps.
Step 1. Construct a `proper' $3$-uniform graph
$G^*(t)$ with Lagrangian at least ${\alpha \over 6}+\epsilon$ for some $\epsilon>0$, then blow it up to a $3$-uniform graph, say $\vec{m}\otimes G^*(t)$, with a large enough number of vertices and density $\ge \alpha+\epsilon$ (see Remark \ref{remarkblow}). If $\alpha$ is a jump, then by Lemma \ref{arrow} there exists some finite family ${\mathcal F}$ of $3$-uniform graphs with Lagrangians $>{\alpha \over 6}$ and $\pi({\mathcal F})\le \alpha$. So $\vec{m}\otimes G^*(t)$ must contain some member of ${\mathcal F}$ as a subgraph.
Step 2. Show that any subgraph of $G^*(t)$ with at most $\max\{\vert V(F)\vert: F \in {\mathcal F}\}$ vertices has Lagrangian $\le {\alpha \over 6}$, and derive a contradiction.
\bigskip
The crucial part is to construct an $r$-uniform graph satisfying the properties in both Steps 1 and 2. Generally, whenever we find such a construction, we can obtain a corresponding non-jump. This method was first developed by Frankl and R\"odl in \cite{FR84}. The technical part in the proof is to show that the construction satisfies the property in Step 2.
\section{Proof of Theorem \ref{theo}} \label{prooftheo}
{\em Proof.} Suppose that ${12\over 25}$ is a jump for $r=3$. By Lemma \ref{arrow}, there exists a finite collection $\cal F$ of $3$-uniform graphs satisfying the following:
\begin{enumerate}
\item[i)] $\displaystyle \lambda (F) > {2 \over 25} $ for all $F \in \cal F$, and
\item[ii)] $\pi ({\cal F})\le {12\over 25}$.
\end{enumerate}
Let $G(t)=(V, E)$ be the $3$-uniform graph defined as follows, where $t$ is a positive integer divisible by $5$ whose value will be determined later. The vertex set is $V=V_1\cup V_2\cup V_{3}$, where $|V_1|=|V_2|=\frac{2t}{5}$ and $|V_{3}|=\frac{t}{5}$. The edge set of $G(t)$ is
$$\bigg(V_1 \times V_2 \times V_3\bigg) \bigcup \bigg( {V_1\choose 2}\times V_2\bigg) \bigcup \bigg({V_2\choose 2}\times V_3\bigg),$$
i.e., the edges consisting of one vertex from each $V_1, V_2$ and $V_3$, or two vertices from $V_1$ and one vertex from $V_2$, or two vertices from $V_2$ and one vertex from $V_3$. Then
\begin{eqnarray}\label{eg1}
\vert E(G(t))\vert&=&\frac{2t^3}{25}-\frac{3t^2}{25}.
\end{eqnarray}
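Equation (\ref{eg1}) can be verified by a direct count (a small sketch, assuming $t$ is divisible by $5$):

```python
from math import comb

def edge_count(t):
    """Direct count of |E(G(t))| for t divisible by 5."""
    v1 = v2 = 2 * t // 5
    v3 = t // 5
    # one vertex from each part, or two from V1 and one from V2,
    # or two from V2 and one from V3
    return v1 * v2 * v3 + comb(v1, 2) * v2 + comb(v2, 2) * v3

# matches 2t^3/25 - 3t^2/25 (an integer whenever 5 | t)
for t in (25, 50, 125, 1000):
    assert edge_count(t) == (2 * t**3 - 3 * t**2) // 25
print(edge_count(25))  # 1175
```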
We will apply the following lemma from \cite{FR84}.
\begin{lemma}\label{add}\cite{FR84}
For any $c\ge 0$ and any integer $s\ge r$, there exists $t_0(s, c)$ such that for every $t\ge t_0(s, c)$, there exists an $r$-uniform graph $A=A(s, c, t)$ satisfying:
\begin{enumerate}
\item $|V(A)|=t,$
\item $|E(A)|\geq ct^{r-1},$
\item For all $V_0 \subset V(A), r\leq |V_0| \leq s$, we have $|E(A)\cap {V_0 \choose r}| \leq |V_0|-r+1$.
\end{enumerate}
\end{lemma}
Set $s= \max_{F \in {\cal F}} |V(F)|$ and $c=1$. Take $r=3$ in Lemma \ref{add}, let $t_0(s, 1)$ be given as in Lemma \ref{add}, and assume that $\frac{2t}{5}\ge t_0(s, 1)$. The $3$-uniform graph $G^*(t)$ is obtained from $G(t)$ by adding the edges of $A(s, 1, \frac{2t}{5})$ on the vertex set $V_1$. Then
$$\lambda(G^*(t))\ge\lambda(G^*(t), (\frac{1}{t}, \frac{1}{t}, \dots, \frac{1}{t}))=\frac{\big\vert E(G^*(t))\big\vert}{t^3}.$$
In view of the construction of $G^*(t)$ and equation (\ref{eg1}), we have
\begin{eqnarray}\label{egAl}
\frac{\big\vert E(G^*(t))\big\vert}{t^3}&\ge&\frac{\big\vert E(G(t))\big\vert}{t^3}+\bigg(\frac{2t}{5}\bigg)^2/{t^3} \nonumber \\
&\ge& \frac{2}{25}+\frac{1}{25t}.
\end{eqnarray}
Now suppose $\vec{y}=(y_1, y_2, ..., y_t)$ is an optimal vector of $\lambda(G^*(t))$. Let $n$ be large enough. By Remark \ref{remarkblow}, the $3$-uniform graph $S_n=(\lceil ny_1\rceil, \ldots, \lceil ny_{t}\rceil)\otimes G^*(t)$ has density at least ${12\over 25}+\frac{1}{50t}$. Since $\pi({\cal F})\le {12\over 25}$, some member $F$ of $\cal F$ is a subgraph of $S_n$ for $n$ sufficiently large. For such $F\in \cal F$, there exists a subgraph $M$ of $G^*(t)$ with $|V(M)|\le |V(F)|\leq s$ such that $F\subset (s, s, \ldots, s) \otimes M$. By Fact \ref{mono} and Fact \ref{lblow}, we have
\begin{equation}\label{lambdasmall0}
\lambda(F)\overset{Fact \ref{mono}}{\le}\lambda ((s, s, \ldots, s) \otimes M)\overset{Fact \ref{lblow}}{=} \lambda (M).
\end{equation}
The following lemma will be proved in Section \ref{prooflemmaresult01}.
\begin{lemma}\label{lemmaresult01}
Let $M$ be any subgraph of $G^*(t)$ with $|V(M)| \leq s$. Then
\begin{equation}
\lambda (M) \leq \frac{2}{25}
\end{equation}
holds.
\end{lemma}
Assuming that Lemma \ref{lemmaresult01} is true and applying Lemma \ref{lemmaresult01} to (\ref{lambdasmall0}), we have $$\lambda(F) \le {2 \over 25}$$ which contradicts our choice of $F$, i.e., contradicts that $\displaystyle \lambda(F) >{2 \over 25}$ for all $F \in \cal F$. \hspace*{\fill}$\Box$\medskip
\bigskip
To complete the proof of Theorem \ref{theo}, it remains to prove Lemma \ref{lemmaresult01}.
\subsection{Proof of Lemma \ref{lemmaresult01}}\label{prooflemmaresult01}
By Fact \ref{mono}, we may assume that $M$ is an induced subgraph of $G^*(t)$. Let $$U_i=V(M)\cap V_i=\{v_1^i, v_2^i, \cdots, v_{s_i}^i\}.$$ So $s_1+s_2+s_3=|V(M)|\le s$. Let $\vec{z}$ be an optimal vector for $\lambda(M)$. Without loss of generality, assume that $v_1^1$ and $v_2^1$ have the two largest weights among the vertices of $U_1$. By Lemma \ref{add}, $M[U_1]$ has at most $s_1-2$ edges, so replacing the edges of $M[U_1]$ by $v_1^1v_2^1v_3^1, v_1^1v_2^1v_4^1, \dots, v_1^1v_2^1v_{s_1}^1$ does not decrease the Lagrangian. So we have the following claim, similar to Claim 4.4 in \cite{FR84}.
\begin{claim}\label{reduce0}
If $N$ is the $3$-uniform graph formed from $M$ by removing the edges contained in $U_{1}$ and inserting the edges $v^1_{1}v^1_2v^1_{j}$, where $3\leq j \leq s_1$, then $\lambda(M)\leq \lambda(N)$.
\end{claim}
By Claim \ref{reduce0}, the proof of Lemma \ref{lemmaresult01} will be completed if we show that $\lambda(N)\leq {2 \over 25}$. By Fact \ref{symmetry}, we can obtain an optimal vector $\vec{z}$ of $\lambda(N)$ such that
\begin{equation} \label{weights}
w(v_1^1)=w(v_2^1)\stackrel{\rm\scriptscriptstyle def}{=}\frac{a}{2}, \ \ w(v_3^1)=w(v_4^1)=\cdots =w(v^1_{s_1}) \stackrel{\rm\scriptscriptstyle def}{=}\frac{b}{s_1-2},
\end{equation}
where $w(v)$ denotes the component of $\vec{z}$ corresponding to vertex $v$.
Let $c$, $d$ be the sum of the components of $\vec{z}$ corresponding to all vertices in $U_2$ and $U_3$, respectively. Note that
$$a+b+c+d=1.$$
Then
\begin{eqnarray*}
\lambda(N)\le\bigg(\frac{a^2}{4}+ab+\frac{b^2}{2}\bigg)c+(a+b)cd+\frac{c^2d}{2}+\frac{a^2}{4}b=\lambda(a, b, c, d).
\end{eqnarray*}
From now on, we assume that $(a, b, c, d)$ is an optimal vector for $\lambda(a, b, c, d)$.
If $c=0$, then $$\lambda(a, b, c, d)=\frac{a^2b}{4}\leq \frac{1}{8}\bigg(\frac{a+a+2b}{3}\bigg)^3\leq \frac{1}{27}<\frac{2}{25}.$$
So we may assume that $c>0$.
If $a=0$, then $$\lambda(a, b, c, d)=\frac{b^2c}{2}+bcd+\frac{c^2d}{2}\triangleq\lambda.$$ If $b=0$, then $\lambda=\frac{c^2d}{2}\leq \frac{2}{27}$; similarly, if $d=0$, then $\lambda=\frac{b^2c}{2}\leq \frac{2}{27}$. So we may assume that $b, c, d>0$ in this case. By Theorem \ref{KKT}, we have
$$\frac{\partial\lambda}{\partial b}=\frac{\partial\lambda}{\partial c}=\frac{\partial\lambda}{\partial d},$$ so
$$bc+cd=\frac{b^2}{2}+bd+cd=\frac{c^2}{2}+bc.$$
Combining with $b+c+d=1$, we have $b=c=2d=0.4$, and $\lambda=\frac{2}{25}$. So we may assume that $a>0$.
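As a quick numerical sanity check (illustrative only, not part of the proof), one can verify the critical point and the value $\frac{2}{25}$ directly:

```python
# Check (outside the proof): at b = c = 2d = 0.4 the first-order conditions
# bc + cd = b^2/2 + bd + cd = c^2/2 + bc hold, and lambda = 2/25.
b, c, d = 0.4, 0.4, 0.2
lam = b**2 * c / 2 + b * c * d + c**2 * d / 2
assert abs(lam - 2 / 25) < 1e-12
assert abs((b * c + c * d) - (b**2 / 2 + b * d + c * d)) < 1e-12
assert abs((b * c + c * d) - (c**2 / 2 + b * c)) < 1e-12
```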
If $b=0$, then $$\lambda(a, b, c, d)=\frac{a^2c}{4}+acd+\frac{c^2d}{2}<\frac{a^2c}{2}+acd+\frac{c^2d}{2}\leq \frac{2}{25}$$
as we have shown that $\frac{b^2c}{2}+bcd+\frac{c^2d}{2}\le \frac{2}{25}$. So we may assume that $b>0$.
We will prove that $d>0$ next. If $d=0$, then $$\lambda(a, b, c, d)=\bigg(\frac{a^2}{4}+ab+\frac{b^2}{2}\bigg)c+\frac{a^2}{4}b\triangleq\lambda.$$
By Theorem \ref{KKT}, we have
$$\frac{\partial\lambda}{\partial a}=\frac{\partial\lambda}{\partial b}.$$ So
$$\bigg(\frac{a}{2}+b\bigg)c+\frac{ab}{2}=(a+b)c+\frac{a^2}{4},$$
i.e., $a=2b-2c$. Since $a+b+c=1$, we get $c=3b-1$ and $a=2-4b$. So
\begin{eqnarray*}
\lambda&\le&\bigg(\frac{a^2}{4}+ab+\frac{b^2}{2}\bigg)c+\frac{a^2}{4}b \\
&=&\frac{11b^3}{2}-\frac{21b^2}{2}+6b-1=f(b). \\
f'(b)&=&\frac{33b^2}{2}-21b+6.
\end{eqnarray*}
Since $a, c>0$, we have $\frac{1}{3}< b< \frac{1}{2}$. Moreover, $f(b)$ is increasing on $[\frac{1}{3}, \frac{7-\sqrt5}{11}]$ and decreasing on $[\frac{7-\sqrt5}{11}, \frac{1}{2}]$. Hence $\lambda\le f(\frac{7-\sqrt5}{11})<0.076<\frac{2}{25}.$
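A quick numerical check (illustrative, not part of the proof) confirms the bound on $f$:

```python
# Check that f(b) = 11b^3/2 - 21b^2/2 + 6b - 1 stays below 0.076 on [1/3, 1/2],
# with maximum at the critical point b* = (7 - sqrt(5))/11 of f'.
import math
f = lambda b: 11 * b**3 / 2 - 21 * b**2 / 2 + 6 * b - 1
b_star = (7 - math.sqrt(5)) / 11
assert 1 / 3 < b_star < 1 / 2
assert abs(33 * b_star**2 / 2 - 21 * b_star + 6) < 1e-12   # f'(b*) = 0
grid = [1 / 3 + i * (1 / 2 - 1 / 3) / 100000 for i in range(100001)]
assert max(f(b) for b in grid) <= f(b_star) < 0.076
```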
So we may assume that $a, b, c, d>0$. Then we have
\begin{eqnarray*}
\frac{\partial\lambda(a, b, c, d)}{\partial a}&=&\bigg(\frac{a}{2}+b\bigg)c+cd+\frac{ab}{2}, \\
\frac{\partial\lambda(a, b, c, d)}{\partial b}&=&(a+b)c+cd+\frac{a^2}{4}, \\
\frac{\partial\lambda(a, b, c, d)}{\partial c}&=&\frac{a^2}{4}+ab+\frac{b^2}{2}+ad+bd+cd, \\
\frac{\partial\lambda(a, b, c, d)}{\partial d}&=&ac+bc+\frac{c^2}{2}, \\
d&=&1-a-b-c.
\end{eqnarray*}
By Theorem \ref{KKT}, we have $$\frac{\partial\lambda(a, b, c, d)}{\partial a}=\frac{\partial\lambda(a, b, c, d)}{\partial b},$$
and we get $a=2b-2c$. Therefore $d=1-a-b-c=1-3b+c$.
By $$\frac{\partial\lambda(a, b, c, d)}{\partial b}=\frac{\partial\lambda(a, b, c, d)}{\partial d},$$
we get $\frac{c^2}{2}=cd+\frac{a^2}{4}=c-3bc+c^2+b^2-2bc+c^2,$ so $$c=\frac{5b-1\pm \sqrt{19b^2-10b+1}}{3}.$$
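For the reader's convenience, the quadratic in $c$ behind this expression is obtained as follows (an expanded version of the computation above):

```latex
% From c^2/2 = cd + a^2/4 with a = 2b - 2c and d = 1 - 3b + c:
\frac{c^2}{2} = \left(c - 3bc + c^2\right) + \left(b^2 - 2bc + c^2\right)
\;\Longleftrightarrow\;
3c^2 - 2(5b-1)c + 2b^2 = 0,
```

and the quadratic formula yields the two displayed roots, since $(5b-1)^2-6b^2=19b^2-10b+1$.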
By $$\frac{\partial\lambda(a, b, c, d)}{\partial b}=\frac{\partial\lambda(a, b, c, d)}{\partial c},$$ we get $$c=\frac{13b^2-6b}{8b-4}.$$
Therefore, $$\frac{13b^2-6b}{8b-4}=\frac{5b-1\pm \sqrt{19b^2-10b+1}}{3}.$$
By direct calculation, we have $(-b^2+10b-4)^2=\bigg(\pm(8b-4)\sqrt{19b^2-10b+1}\bigg)^2.$ Simplifying, we get $$9b(5b-2)(9b-4)(3b-2)=0.$$ Since $b>0$, it follows that $b\in\{\frac{2}{5}, \frac{4}{9}, \frac{2}{3}\}$.
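The squared identity can be checked without a computer algebra system: both sides of the underlying polynomial identity have degree four, so agreement at five points suffices (an illustrative check, not part of the proof):

```python
# Verify (8b-4)^2 (19b^2 - 10b + 1) - (-b^2 + 10b - 4)^2 = 9b(5b-2)(9b-4)(3b-2)
# exactly at five rational points; two quartics agreeing at five points coincide.
from fractions import Fraction
for b in map(Fraction, range(5)):
    lhs = (8*b - 4)**2 * (19*b**2 - 10*b + 1) - (-b**2 + 10*b - 4)**2
    rhs = 9*b * (5*b - 2) * (9*b - 4) * (3*b - 2)
    assert lhs == rhs
```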
If $b=\frac{2}{5}$, then $c=\frac{13b^2-6b}{8b-4}=\frac{2}{5}$ and $a=2b-2c=0$, a contradiction. \\
If $b=\frac{4}{9}$, then $c=\frac{13b^2-6b}{8b-4}=\frac{2}{9}$, $a=2b-2c=\frac{4}{9}$ and $d=-\frac{1}{9}$, a contradiction. \\
If $b=\frac{2}{3}$, then $c=\frac{13b^2-6b}{8b-4}=\frac{4}{3}$ and $a=2b-2c=-\frac{4}{3}<0$, a contradiction.
\hspace*{\fill}$\Box$\medskip
\bigskip
\section{Proof of Theorem \ref{theo1}} \label{prooftheo1}
Let $B(2k, n)$ be the 3-graph with vertex set $[n]$ and edge set $E(B(2k, n))=\{e\in {[n]\choose 3}: e\cap [2k]\not=\emptyset\}$. Let $\alpha_k=\frac{2k-6k^3+4k^4-k\sqrt{4k - 1}+4k^2\sqrt{4k - 1}}{(2k^2+1)^2}$. We first show that $\alpha_k=6\lim_{n\to\infty}\lambda(B(2k, n))$.
Let $\vec{x}=\{x_1,x_2,\dots,x_n\}$ be an optimal vector of $\lambda(B(2k, n))$. Let $x_1+x_2+\cdots+x_{2k}=a$ and $b=1-a$. Then
\begin{eqnarray*}
\lim_{n\to\infty}\lambda(B(2k, n))&=&\max_{0\le a\le 1}f(a), \ \text{ where } f(a)=\bigg(\frac{a}{2k}\bigg)^3{2k\choose 3}+\bigg(\frac{a}{2k}\bigg)^2{2k\choose 2}(1-a)+a\frac{(1-a)^2}{2},\\
f'(a)&=&(\frac{1}{4k^2}+\frac{1}{2})a^2-(\frac{1}{2k}+1)a+\frac{1}{2}.
\end{eqnarray*}
Note that $f(a)$ is increasing in $[0, \frac{2k^2+k-k\sqrt{4k-1}}{2k^2+1}]$ and decreasing in $[\frac{2k^2+k-k\sqrt{4k-1}}{2k^2+1}, 1]$. Therefore
\begin{eqnarray*}
f(\frac{2k^2+k-k\sqrt{4k-1}}{2k^2+1})&=&\frac{2k-6k^3+4k^4-k\sqrt{4k - 1}+4k^2\sqrt{4k - 1}}{6(2k^2+1)^2}\\
&=&\frac{\alpha_k}{6}.
\end{eqnarray*}
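As an illustrative numerical check (not part of the argument), the value of $f$ at the maximizer can be compared against $\alpha_k/6$ for small $k$:

```python
# Check f(a*) = alpha_k / 6 for a* = (2k^2 + k - k sqrt(4k-1)) / (2k^2 + 1).
from math import comb, sqrt

def f(a, k):
    return ((a / (2*k))**3 * comb(2*k, 3)
            + (a / (2*k))**2 * comb(2*k, 2) * (1 - a)
            + a * (1 - a)**2 / 2)

for k in range(1, 11):
    s = sqrt(4*k - 1)
    a_star = (2*k**2 + k - k*s) / (2*k**2 + 1)
    alpha_k = (2*k - 6*k**3 + 4*k**4 - k*s + 4*k**2*s) / (2*k**2 + 1)**2
    assert abs(f(a_star, k) - alpha_k / 6) < 1e-9
```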
Since a square is congruent to $0$ or $1$ modulo $4$, while $4k-1\equiv 3 \pmod 4$, the integer $4k-1$ is never a perfect square; hence $\sqrt{4k-1}$ is irrational, and therefore $\alpha_k$ is irrational for every $k\ge 1$.
{\em Proof of Theorem \ref{theo1}.} Suppose that $\alpha_k$ is a jump. By Lemma \ref{arrow}, there exists a finite collection $\cal F$ of $3$-uniform graphs satisfying the following:
\begin{enumerate}
\item[i)] $\displaystyle \lambda (F) > {\alpha_k \over 6} $ for all $F \in \cal F$, and
\item[ii)] $\pi ({\cal F})\le \alpha_k$.
\end{enumerate}
Let $G(t)=(V, E)$ be the 3-uniform graph defined as follows. The vertex set is $V=V_1\cup V_2\cup\cdots\cup V_{2k}\cup V_{2k+1}$, where $|V_1|=|V_2|=\cdots=|V_{2k}|=\frac{2k+1-\sqrt{4k-1}}{4k^2+2}t$ and $|V_{2k+1}|=\frac{k\sqrt{4k-1}+1-k}{2k^2+1}t$. The edge set of $G(t)$ is
$$\bigcup_{1\le i_1<i_2<i_3\le 2k} (V_{i_1}\times V_{i_2} \times V_{i_3})\ \cup \bigcup_{1\le i_1<i_2\le 2k} (V_{i_1}\times V_{i_2} \times V_{2k+1})\ \cup \bigcup_{1\le i_1\le 2k} \bigg(V_{i_1}\times{V_{2k+1}\choose 2}\bigg).$$
Then
\begin{eqnarray*}
\vert E(G(t))\vert&=&\frac{2k-6k^3+4k^4-k\sqrt{4k - 1}+4k^2\sqrt{4k - 1}}{6(2k^2+1)^2}t^3 \\
&+&\frac{-3k-6k^2+18k^3+3k\sqrt{4k - 1}-6k^2\sqrt{4k - 1}-6k^3\sqrt{4k - 1}}{6(2k^2+1)^2}t^2.
\end{eqnarray*}
Let $\vec{u}=(u_1, \ldots,u_{t})$, where $u_i=\frac{1}{t}$ for $1\le i\le t$,
then
\begin{eqnarray}\label{egl}
\lambda(G(t))&\ge&\lambda(G(t),\vec{u})={\vert E(G(t))\vert \over t^3} \nonumber \\
&=&\frac{2k-6k^3+4k^4-k\sqrt{4k - 1}+4k^2\sqrt{4k - 1}}{6(2k^2+1)^2} \nonumber \\
&+&\frac{-3k-6k^2+18k^3+3k\sqrt{4k - 1}-6k^2\sqrt{4k - 1}-6k^3\sqrt{4k - 1}}{6(2k^2+1)^2t}\\
&=&\frac{\alpha_k}{6}-\frac{c_0}{t}, \nonumber
\end{eqnarray}
where $c_0=\frac{3k+6k^2-18k^3-3k\sqrt{4k - 1}+6k^2\sqrt{4k - 1}+6k^3\sqrt{4k - 1}}{6(2k^2+1)^2}>0$.
Set $s= {\rm max}_{F \in {\cal F}} |V(F)|$ and $c=k$. Let $r=3$ in Lemma \ref{add} and $t_0(s, k)$ be given as in Lemma \ref{add}. Take an integer $t>\frac{2k^2+1}{k\sqrt{4k-1}+1-k}t_0$. The $3$-uniform graph $G^*(t)$ is obtained by adding $A(s, k)$ to the $3$-uniform hypergraph $G(t)$ in $V_{2k+1}$. Then
$$\lambda(G^*(t)) \ge \frac{\big\vert E(G^*(t))\big\vert}{t^3}.$$
In view of the construction of $G^*(t)$ and equation (\ref{egl}), we have
\begin{eqnarray}\label{egAl}
\frac{\big\vert E(G^*(t))\big\vert}{t^3}&\ge&\frac{\big\vert E(G(t))\big\vert}{t^3}+\frac{k(\frac{k\sqrt{4k-1}+1-k}{2k^2+1}t)^2}{t^3} \nonumber \\
&{(\ref{egl}) \atop =}&\frac{2k-6k^3+4k^4-k\sqrt{4k - 1}+4k^2\sqrt{4k - 1}}{6(2k^2+1)^2} \nonumber \\
&+&\frac{3k-18k^2+18k^3+24k^4+3k\sqrt{4k - 1}+6k^2\sqrt{4k - 1}-18k^3\sqrt{4k - 1}}{6(2k^2+1)^2t} \nonumber \\
&\ge & {1 \over 6}(\frac{2k-6k^3+4k^4-k\sqrt{4k - 1}+4k^2\sqrt{4k - 1}}{(2k^2+1)^2})+{c_1 \over t}={\alpha_k \over 6}+{c_1 \over t}
\end{eqnarray}
where $t$ is a sufficiently large integer and $c_1=\frac{3k-18k^2+18k^3+24k^4+3k\sqrt{4k - 1}+6k^2\sqrt{4k - 1}-18k^3\sqrt{4k - 1}}{6(2k^2+1)^2} >0$.
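The positivity of $c_1$ can be spot-checked numerically (illustrative only):

```python
# Check c_1 > 0 for 1 <= k <= 100.
from math import sqrt
for k in range(1, 101):
    s = sqrt(4*k - 1)
    c1 = (3*k - 18*k**2 + 18*k**3 + 24*k**4
          + (3*k + 6*k**2 - 18*k**3) * s) / (6 * (2*k**2 + 1)**2)
    assert c1 > 0
```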
Now suppose $\vec{y}=(y_1, y_2, ..., y_t)$ is an optimal vector of $\lambda(G^*(t))$. Let $n$ be large enough. By Remark \ref{remarkblow}, the $3$-uniform graph $S_n=(\lfloor ny_1\rfloor, \ldots, \lfloor ny_{t}\rfloor)\otimes G^*(t)$ has density at least $\alpha_k+\frac{c_1}{2t}.$ Since $\pi({\cal F})\le \alpha_k$, some member $F$ of $\cal F$ is a subgraph of $S_n$ for $n$ sufficiently large. For such $F\in \cal F$, there exists a subgraph $M$ of $G^*(t)$ with $|V(M)|\le |V(F)|\leq s$ so that $F\subset (n, n, \ldots, n) \otimes M$. By Fact \ref{mono} and Fact \ref{lblow}, we have
\begin{equation}\label{lambdasmall}
\lambda(F) {{\rm Fact} \ \ref{mono} \atop \leq }\lambda ((n, n, \ldots, n) \otimes M){{\rm Fact} \ \ref{lblow} \atop =} \lambda (M).
\end{equation}
Theorem \ref{theo1} will follow from the following lemma to be proved in Section \ref{prooflemmaresult1}.
\begin{lemma}\label{lemmaresult1}
Let $M$ be any subgraph of $G^*(t)$ with $|V(M)| \leq s$. Then
\begin{equation}
\lambda (M) \leq \frac{1}{6}\alpha_k
\end{equation}
holds.
\end{lemma}
Assuming that Lemma \ref{lemmaresult1} is true and applying Lemma \ref{lemmaresult1} to (\ref{lambdasmall}), we have $$\lambda(F) \le {1 \over 6}\alpha_k$$ which contradicts our choice of $F$, i.e., contradicts that $\displaystyle \lambda(F) >{1 \over 6}\alpha_k$ for all $F \in \cal F$. \hspace*{\fill}$\Box$\medskip
\bigskip
To complete the proof of Theorem \ref{theo1}, what remains is to show Lemma \ref{lemmaresult1}.
\subsection{Proof of Lemma \ref{lemmaresult1}}\label{prooflemmaresult1}
By Fact \ref{mono}, we may assume that $M$ is an induced subgraph of $G^*(t)$. Let $$U_i=V(M)\cap V_i=\{v_1^i, v_2^i, \cdots, v_{s_i}^i\}.$$ So $s=s_1+\cdots+s_{2k+1}$.
Similar to Claim \ref{reduce0}, we have
\begin{claim}\label{reduce} If $N$ is the $3$-uniform graph formed from $M$ by removing the edges contained in $U_{2k+1}$ and inserting the edges $v^{2k+1}_{1}v^{2k+1}_2v^{2k+1}_j$, where $3\leq j \leq s_{2k+1}$, then $\lambda(M)\leq \lambda(N)$.
\end{claim}
By Claim \ref{reduce}, the proof of Lemma \ref{lemmaresult1} will be completed if we show that $\lambda(N)\leq {\alpha_k \over 6}$.
By Lemma \ref{symmetry}, there exists an optimal vector $\vec{z}$ of $\lambda(N)$ such that
\begin{equation} \label{weights}
w(v_1^{2k+1})=w(v_2^{2k+1})\stackrel{\rm\scriptscriptstyle def}{=}\frac{a}{2}, \ \ w(v_3^{2k+1})=w(v_4^{2k+1})=\cdots =w(v^{2k+1}_{s_{2k+1}}) \stackrel{\rm\scriptscriptstyle def}{=}\frac{b}{s_{2k+1}-2},
\end{equation}
where $w(v)$ denotes the component of $\vec{z}$ corresponding to vertex $v$. Let $w_1$ be the sum of the components of $\vec{z}$ corresponding to all vertices in $\cup_{i=1}^{2k}U_i$. Then
\begin{eqnarray*}
&&\lambda(N) \le \bigg(\frac{w_1}{2k}\bigg)^3{2k\choose 3}+\bigg(\frac{w_1}{2k}\bigg)^2{2k\choose 2}(1-w_1)+w_1\bigg(\frac{a^2}{4}+ab+\frac{b^2}{2}\bigg)+\frac{a^2}{4}b,
\end{eqnarray*}
where $w_1+a+b=1.$
Note that if $b\le w_1$ or $a=0$ or $w_1\ge \frac{1}{2}$, then
\begin{eqnarray*}
\lambda(N)&\le&\bigg(\frac{w_1}{2k}\bigg)^3{2k\choose 3}+\bigg(\frac{w_1}{2k}\bigg)^2{2k\choose 2}(1-w_1)+w_1\bigg(\frac{a^2}{2}+ab+\frac{b^2}{2}\bigg)\\
&=&\bigg(\frac{w_1}{2k}\bigg)^3{2k\choose 3}+\bigg(\frac{w_1}{2k}\bigg)^2{2k\choose 2}(1-w_1)+w_1\frac{(1-w_1)^2}{2}\\
&\leq &\lim_{n\to\infty}\lambda(B(2k, n))={\alpha_k\over 6}.
\end{eqnarray*}
So we may always assume that $w_1<\frac{1}{2}$. Since $b=1-w_1-a$, then
\begin{eqnarray*}
\lambda(N)&\le&\bigg(\frac{w_1}{2k}\bigg)^3{2k\choose 3}+\bigg(\frac{w_1}{2k}\bigg)^2{2k\choose 2}(1-w_1)\\
&+&w_1\bigg(\frac{a^2}{4}+a(1-w_1-a)+\frac{(1-w_1-a)^2}{2}\bigg)+\frac{a^2}{4}(1-w_1-a)\triangleq f(a),
\end{eqnarray*}
where $w_1+a<1.$
\begin{eqnarray*}
f'(a)&=&w_1\bigg(\frac{a}{2}+(1-w_1-a)-a-(1-w_1-a)\bigg)+\frac{a}{2}(1-w_1-a)-\frac{a^2}{4}\\
&=&-\frac{3a^2}{4}+\frac{a}{2}-aw_1.
\end{eqnarray*}
Since $w_1<\frac{1}{2}$, the function $f(a)$ is increasing on $[0, \frac{2-4w_1}{3}]$ and decreasing on $[\frac{2-4w_1}{3}, 1]$.
So
\begin{eqnarray*}
f(a)&\le&f(\frac{2-4w_1}{3})\\
&=&\bigg(\frac{w_1}{2k}\bigg)^3{2k\choose 3}+\bigg(\frac{w_1}{2k}\bigg)^2{2k\choose 2}(1-w_1)+\frac{11w_1^3}{54}-\frac{5w_1^2}{9}+\frac{5w_1}{18}+\frac{1}{27}=g(w_1).
\end{eqnarray*}
Then $g'(w_1)=(\frac{1}{4k^2}-\frac{7}{18})w_1^2-(\frac{1}{2k}+\frac{1}{9})w_1+\frac{5}{18}.$ Solving $g'(w_1)=0$, we obtain that $w_1=\frac{\pm\sqrt{\frac{36}{81}+\frac{1}{9k}-\frac{1}{36k^2}}-\frac{1}{9}-\frac{1}{2k}}{\frac{7}{9}-\frac{1}{2k^2}}.$
Note that $\frac{-\sqrt{\frac{36}{81}+\frac{1}{9k}-\frac{1}{36k^2}}-\frac{1}{9}-\frac{1}{2k}}{\frac{7}{9}-\frac{1}{2k^2}}<0$ and $\frac{1}{4k^2}-\frac{7}{18}<0.$ We will show that $\frac{\sqrt{\frac{36}{81}+\frac{1}{9k}-\frac{1}{36k^2}}-\frac{1}{9}-\frac{1}{2k}}{\frac{7}{9}-\frac{1}{2k^2}}>\frac{1}{2}$. It's sufficient to show that $\sqrt{\frac{36}{81}+\frac{1}{9k}-\frac{1}{36k^2}}>\frac{1}{2}+\frac{1}{2k}-\frac{1}{4k^2}$. Note that $$\sqrt{\frac{36}{81}+\frac{1}{9k}-\frac{1}{36k^2}}>\frac{2}{3}>\frac{23}{36}\ge \frac{1}{2}+\frac{1}{2k}-\frac{1}{4k^2}$$
holds for $k\ge 3$. As for $k=2$, we have
$$\sqrt{\frac{36}{81}+\frac{1}{18}-\frac{1}{144}}=\frac{\sqrt{71}}{12}>\frac{11}{16}=\frac{1}{2}+\frac{1}{4}-\frac{1}{16}.$$
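One can also confirm numerically (illustrative only, not part of the proof) that the relevant root of $g'$ exceeds $\frac{1}{2}$ for every $k\ge 2$ in a large range:

```python
# Check that the positive root of g'(w1) exceeds 1/2 for 2 <= k <= 100,
# so that g is increasing on [0, 1/2].
from math import sqrt
for k in range(2, 101):
    D = 36 / 81 + 1 / (9 * k) - 1 / (36 * k**2)
    root = (sqrt(D) - 1 / 9 - 1 / (2 * k)) / (7 / 9 - 1 / (2 * k**2))
    assert root > 0.5
```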
By the earlier discussion we may assume that $w_1<\frac{1}{2}$; since $g'(w_1)>0$ on $[0, \frac{1}{2}]$, the function $g(w_1)$ is increasing on $[0, \frac{1}{2}]$. Note that
$$\frac{11w_1^3}{54}-\frac{5w_1^2}{9}+\frac{5w_1}{18}+\frac{1}{27}\bigg|_{w_1=\frac{1}{2}}=\frac{1}{16}=w_1\frac{(1-w_1)^2}{2}\bigg|_{w_1=\frac{1}{2}}.$$
Then
\begin{eqnarray*}
\lambda(N)&\le& g(\frac{1}{2})\\
&=&\bigg(\frac{w_1}{2k}\bigg)^3{2k\choose 3}+\bigg(\frac{w_1}{2k}\bigg)^2{2k\choose 2}(1-w_1)+\frac{11w_1^3}{54}-\frac{5w_1^2}{9}+\frac{5w_1}{18}+\frac{1}{27}\bigg|_{w_1=\frac{1}{2}}\\
&=&\bigg(\frac{w_1}{2k}\bigg)^3{2k\choose 3}+\bigg(\frac{w_1}{2k}\bigg)^2{2k\choose 2}(1-w_1)+w_1\frac{(1-w_1)^2}{2}\bigg|_{w_1=\frac{1}{2}}\\
&\leq& \lim_{n\to\infty}\lambda(B(2k, n))={\alpha_k\over 6}.
\end{eqnarray*}
\hspace*{\fill}$\Box$\medskip
\bigskip
| {
"timestamp": "2022-01-03T02:07:02",
"yymm": "2112",
"arxiv_id": "2112.14943",
"language": "en",
"url": "https://arxiv.org/abs/2112.14943",
"abstract": "A real number $\\alpha\\in [0, 1)$ is a jump for an integer $r\\ge 2$ if there exists $c>0$ such that no number in $(\\alpha , \\alpha + c)$ can be the Turán density of a family of $r$-uniform graphs. A classical result of Erd\\H os and Stone \\cite{ES} implies that that every number in $[0, 1)$ is a jump for $r=2$. Erd\\H os \\cite{E64} also showed that every number in $[0, r!/r^r)$ is a jump for $r\\ge 3$ and asked whether every number in $[0, 1)$ is a jump for $r\\ge 3$. Frankl and Rödl \\cite{FR84} gave a negative answer by showing a sequence of non-jumps for every $r\\ge 3$. After this, Erd\\H os modified the question to be whether $\\frac{r!}{r^r}$ is a jump for $r\\ge 3$? What's the smallest non-jump? Frankl, Peng, Rödl and Talbot \\cite{FPRT} showed that ${5r!\\over 2r^r}$ is a non-jump for $r\\ge 3$. Baber and Talbot \\cite{BT0} showed that every $\\alpha\\in[0.2299, 0.2316)\\cup [0.2871, \\frac{8}{27})$ is a jump for $r=3$. Pikhurko \\cite{Pikhurko2} showed that the set of all possible Turán densities of $r$-uniform graphs has cardinality of the continuum for $r\\ge 3$. However, whether $\\frac{r!}{r^r}$ is a jump for $r\\ge 3$ remains open, and $\\frac{5r!}{2r^r}$ has remained the known smallest non-jump for $r\\ge 3$. In this paper, we give a smaller non-jump by showing that ${54r!\\over 25r^r}$ is a non-jump for $r\\ge 3$. Furthermore, we give infinitely many irrational non-jumps for every $r\\ge 3$.",
"subjects": "Combinatorics (math.CO)",
"title": "Non-jumping Turán densities of hypergraphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462204837635,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7099326201550817
} |
https://arxiv.org/abs/1806.00079 | Open and closed random walks with fixed edgelengths in $\mathbb{R}^d$ | In this paper, we consider fixed edgelength $n$-step random walks in $\mathbb{R}^d$. We give an explicit construction for the closest closed equilateral random walk to almost any open equilateral random walk based on the geometric median, providing a natural map from open polygons to closed polygons of the same edgelength. Using this, we first prove that a natural reconfiguration distance to closure converges in distribution to a Nakagami$(\frac{d}{2},\frac{d}{d-1})$ random variable as $n \rightarrow \infty$. We then strengthen this to an explicit probabilistic bound on the distance to closure for a random $n$-gon in any dimension with any collection of fixed edgelengths $w_i$. Numerical evidence supports the conjecture that our closure map pushes forward the natural probability measure on open polygons to something very close to the natural probability measure on closed polygons; if this is so, we can draw some conclusions about the frequency of local knots in closed polygons of fixed edgelength. | \section{Introduction}
Random walks in space with fixed edgelengths have been of interest to statistical physicists and chemists since Lord Rayleigh's day. These walks model polymers in solution (at least under $\theta$-solvent conditions)~\cite{Rayleigh:1919do,hughes1995random,FloryPaulJ1969Smoc} and are similarly interesting in computational geometry and mathematics as a space of ``linkages''~\cite{Demaine:2007jh,MR2004a:14059}.
While 2- and 3-dimensional walks are the most relevant to these applications, high-dimensional random walks often shed light on the lower-dimensional situation~\cite{Rudnick:1987jn}.
In this paper, we will consider the relationship between open and closed random walks of fixed edgelengths. We will provide an explicit algorithm for finding the nearest closed polygon with given edgelengths to almost any collection of edge directions, and use our construction to provide tail bounds on the fraction of polygon space within a fixed distance of the closed polygons in any dimension. Our results will be strongest for equilateral polygons, but provide explicit bounds for any collection of edgelengths.
To establish notation, we describe random walks in $\mathbb{R}^d$ with (fixed) positive edgelengths $w_i$ by their \emph{edge clouds} $(w_1, \hat{x}_1),\ldots , (w_n,\hat{x}_n)$ where $\hat{x}_i \in S^{d-1}$ is the direction of the $i$th edge. The space of polygonal arms $\operatorname{Arm}(n,d,w)$ is topologically equivalent to $(S^{d-1})^n$. If we let $\omega_i = \frac{w_i}{\sum w_i}$ be the relative edgelengths, then we can define the submanifold $\{ \pmb{x} : \sum \omega_i \hat{x}_i = \vec{0} \}$ of closed polygons $\operatorname{Pol}(n,d,w)$.
Using Bernstein's inequality~(e.g. \cite{Dubhashi:2009ho}), there is an easy concentration inequality which suggests that the endpoints of random arms are close together. For equilateral polygons in $\mathbb{R}^3$, this takes the simple form
\begin{theorem}
If $\pmb{x}$ is chosen randomly in $\operatorname{Arm}(n,3,1)$ with edges $\hat{x}_1,\ldots , \hat{x}_n$,
\label{thm:sum}
\begin{equation*}
\mathcal{P}\left( \frac{1}{n} \left\| \sum \hat{x}_i \right\| < t \right) \geq 1 - 3 \,
e^{-n t^2 \cdot \frac{3}{6 + 2\sqrt{3} t}}.
\end{equation*}
\end{theorem}
That is, the center of mass of a random edge cloud is very likely to be close to the origin.
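A small simulation (ours, not from the paper) illustrates this concentration:

```python
# Monte Carlo illustration: for n unit edges chosen uniformly on S^2,
# E||sum x_i||^2 = n, so ||sum x_i|| / n is typically about 1/sqrt(n).
import math, random
random.seed(1)

def random_unit_vector():
    v = [random.gauss(0, 1) for _ in range(3)]
    r = math.sqrt(sum(c * c for c in v))
    return [c / r for c in v]

n, trials = 1000, 100
norms = []
for _ in range(trials):
    s = [0.0, 0.0, 0.0]
    for _ in range(n):
        x = random_unit_vector()
        s = [a + b for a, b in zip(s, x)]
    norms.append(math.sqrt(sum(c * c for c in s)) / n)
mean_norm = sum(norms) / trials
assert mean_norm < 3 / math.sqrt(n)   # comfortably within the predicted scale
```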
We can clearly close a random polygon in $\operatorname{Arm}(n,3,1)$ by subtracting the (small) $\frac{1}{n} \sum \hat{x}_i$ from each edge. That closed polygon is clearly near the original arm, but it is no longer equilateral. This raises the question of whether we can generally close a polygon in $\operatorname{Arm}(n,3,1)$ (or $\operatorname{Arm}(n,d,w)$) while preserving edgelengths and changing the polygon only a small amount. This question is the focus of this paper.
Given $\pmb{x}$ and $\pmb{y}$ in $\operatorname{Arm}(n,d,w)$, we view both as vectors in $\mathbb{R}^{dn}$ and measure the distance between them accordingly. We call this the~\emph{chordal} distance because it does not measure the arc on the spheres of radius $w_i$ for each pair of edges, but rather measures the straight line distance between edge vectors.
Our first main result is Proposition~\ref{prop:dchordal asymptotics}, which shows that the chordal distance between a random $\pmb{x} \in \operatorname{Arm}(n,d,1)$ and the nearest $\pmb{y} \in \operatorname{Pol}(n,d,1)$ converges in distribution to a Nakagami-$(\nicefrac{d}{2},\nicefrac{d}{d-1})$ random variable with PDF proportional to $x^{d-1} e^{-\frac{d-1}{2} x^2}$ as $n \rightarrow \infty$.
Our second main result is a general probabilistic bound on the chordal distance to closure for random polygons in any $\operatorname{Arm}(n,d,w)$. For equilateral polygons in $\mathbb{R}^3$, our main theorem (Corollary~\ref{cor:chordal concentration}) takes the very simple form
\begin{equation*}
\mathcal{P}\left(d_\text{chordal}(\pmb{x},\operatorname{Pol}(n,3,1)) < t \right) \geq 1 - 6 \exp\left( \nicefrac{-t^2}{4} \right)
\end{equation*}
for $t < \frac{\sqrt{n}}{200 \sqrt{2}}$.
Here is a broad overview of our arguments. Given a polygon $\pmb{x}$ in $\operatorname{Arm}(n,d,w)$, we will provide an explicit construction for a nearby closed polygon in $\operatorname{Pol}(n,d,w)$, which we call the \emph{geometric median closure} of $\pmb{x}$ (denoted $\operatorname{gmc}(\pmb{x})$). It will be clear how to construct the geodesic in $\operatorname{Arm}(n,d,w)$ from $\pmb{x}$ to $\operatorname{gmc}(\pmb{x})$. For equilateral polygons, we show $\operatorname{gmc}(\pmb{x})$ is the closest closed polygon to $\pmb{x}$ in chordal distance (Theorem~\ref{thm:gmc is closest closed}).
The distance between $\pmb{x}$ and $\operatorname{gmc}(\pmb{x})$ depends on the norm $\| \vec{\mu} \|$ of the geometric median (or Fermat-Weber point) of the edge cloud (Proposition~\ref{prop:distance bound}). For equilateral polygons, we will be able to leverage existing results of Niemiro~\cite{Niemiro:1992ez} to find the asymptotic distribution of the geometric median of a random point cloud (Proposition~\ref{prop:geometric median asymptotics}). Combining this with the matrix Chernoff inequalities proves our first main result (Proposition~\ref{prop:dchordal asymptotics}).
The second main result follows from a concentration inequality for a random polygon in any $\operatorname{Arm}(n,d,w)$, which bounds the probability of a large $\| \vec{\mu} \|$ in terms of $n$, $d$, and $w$. This concentration result (Theorem~\ref{thm:main}) follows from parallel uses of the scalar and matrix Bernstein inequalities to control the expected properties of a random edge cloud, together with the definition of the geometric median as the minimum of a convex function.
Last, we will observe that the pushforward measure on closed polygons obtained by closing random open polygons appears to converge rapidly to the uniform distribution on closed polygons (Conjecture~\ref{conj:pushforward}). Since these closures involve only very small motions of any part of the polygon, local features (such as small knots) should be preserved -- it would follow (Conjecture~\ref{conj:local knotting}) that the rate of production of local knots in open and closed arcs should be almost the same.
\section{Constructing a nearby closed polygon}
As mentioned above, we view $n$-edge polygons (up to translation) in $\mathbb{R}^d$ as collections of edge vectors $\vec{x}_i \in \mathbb{R}^d$.\footnote{Throughout this paper, we use boldface to indicate elements of $\mathbb{R}^{dn}$, which we usually think of as vectors of edge vectors. We use a superscript arrow -- as in $\vec{x}_i$ -- to denote an arbitrary element of $\mathbb{R}^d$, though any such vector which is definitionally a unit vector we mark with a hat rather than an arrow.} The vertices are obtained by summing the $\vec{x}_i$ from an arbitrary basepoint. In this section of the paper, we will assume only that the lengths of the edges of the polygon are fixed to some arbitrary $w_i = \|\vec{x}_i\|$. We will think of these fixed edgelength polygons in two ways:
\begin{itemize}
\item as a weighted point cloud on the unit sphere $S^{d-1} \subset \mathbb{R}^d$ where the points are denoted $\hat{x}_i = \vec{x}_i/\|\vec{x}_i\|$ and the weights are the $w_i$. We will call $(w_i,\hat{x}_i)$ the \emph{edge cloud} of the polygon.
\item as a point $\pmb{x} \in \prod S^{d-1}(w_i) \subset (\mathbb{R}^d)^n = \mathbb{R}^{dn}$ (where $S^{d-1}(r)$ is the sphere of radius $r$). We will call $\pmb{x}$ the \emph{vector of edges} of the polygon.
\end{itemize}
The space of these polygons will be denoted $\operatorname{Arm}(n,d,w) = \prod S^{d-1}(w_i)$. Within this space, there is a submanifold $\operatorname{Pol}(n,d,w)$ of closed polygons defined by the condition $\sum w_i \hat{x}_i = \vec{0}$. (Equivalently, $\pmb{x}$ is closed if it lies in the codimension $d$ subspace of $\mathbb{R}^{dn}$ normal to the $\pmb{n}^j = (\hat{e}_j,\ldots , \hat{e}_j)$, where $\hat{e}_1, \ldots , \hat{e}_d$ is the standard basis in $\mathbb{R}^d$.) Both $\operatorname{Arm}(n,d,w)$ and $\operatorname{Pol}(n,d,w)$ are Riemannian manifolds with standard metrics, but it will be useful to use two additional metrics as well:
\begin{definition}
The \emph{chordal} metric on $\operatorname{Arm}(n,d,w)$ is given by
\begin{equation*}
d_\text{chordal}(\pmb{x},\pmb{y}) = \|\pmb{x} - \pmb{y}\|_{\mathbb{R}^{dn}}
= \left( \sum \|w_i \hat{x}_i - w_i \hat{y}_i\|_{\mathbb{R}^d}^2 \right)^{\nicefrac{1}{2}}.
\end{equation*}
The \emph{max-angular} metric on $\operatorname{Arm}(n,d,w)$ is given by
\begin{equation*}
d_\text{max-angular}(\pmb{x},\pmb{y}) = \max_i \angle(\vec{x}_i,\vec{y}_i).
\end{equation*}
\end{definition}
We now make an important definition:
\begin{definition}
A \emph{geometric median} (also known as a \emph{Fermat-Weber point}) of an edge cloud
$(w_i,\hat{x}_i)$ is any point $\vec{\mu}$ which minimizes the \emph{weighted average distance function} $\Adx(\vec{y})$ given by
\begin{equation*}
\Adx(\vec{y}) = \sum_i \omega_i \| \hat{x}_i - \vec{y} \|,
\end{equation*}
where $\omega_i = \nicefrac{w_i}{\sum w_i}$.
To clarify notation, we will only use $\vec{\mu}$ for points which are a geometric median of a weighted point cloud; the point cloud will be clear from the context.
\label{def:gm}
\end{definition}
This is a very old construction with a beautiful theory around it; see the nice review in \cite{Hamacher:2002vp}. We note that the geometric median differs from the center of mass (or geometric \emph{mean}) of the points, which minimizes the weighted average of the \emph{squared} distances between $\vec{y}$ and the $\hat{x}_i$, and that the geometric median is unique unless the points are all collinear and the geometric median is not one of the points.
This section is devoted to analyzing the following construction:
\begin{definition}
Suppose $\pmb{x}$ is a polygon and $\vec{\mu}$ is a geometric median of its edge cloud $(w_i, \hat{x}_i)$ which is not one of the $\hat{x}_i$. The \emph{geometric median closure} $\operatorname{gmc}(\pmb{x})$ of $\pmb{x}$ is the polygon whose edge cloud has the same weights and edge directions obtained by recentering the $\hat{x}_i$ on $\vec{\mu}$ and renormalizing:
$\operatorname{gmc}(\pmb{x})$ has edge cloud $\left( w_i, \frac{\hat{x}_i - \vec{\mu}}{\left| \hat{x}_i - \vec{\mu} \right|} \right)$.
If every geometric median of $(w_i,\hat{x}_i)$ is one of the $\hat{x}_i$, $\operatorname{gmc}(\pmb{x})$ is not defined. If $\operatorname{gmc}(\pmb{x})$ is defined, we say that $\pmb{x}$ is \emph{median-closeable}.
\label{def:gmc}
\end{definition}
Of course, we need to justify our choice of name by proving that $\operatorname{gmc}(\pmb{x})$ is closed. The key observation is the following Lemma, which follows by direct computation:
\begin{lemma}
The function $\Adx(\vec{y})$ is a convex function of $\vec{y}$. The gradient is given by
\begin{equation*}
\nabla\!\Adxy = \sum \omega_i \frac{\vec{y} - \hat{x}_i}{\|\vec{y} - \hat{x}_i\|}.
\end{equation*}
The Hessian of $\Adx(\vec{y})$ is given by
\begin{equation*}
\mathcal{H}\Adxy = \left(\sum_i \frac{\omega_i}{\norm{\hat{x}_i - \vec{y}}}\right) I_d -
\left(\sum_i \frac{\omega_i}{\norm{\hat{x}_i - \vec{y}}^3} (\vec{y} - \hat{x}_i)(\vec{y} - \hat{x}_i)^T \right).
\end{equation*}
\label{lem:gradad and had}
\end{lemma}
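The gradient formula is easy to sanity-check by finite differences; the following illustrative script (ours, not from the paper) does so for a small equally-weighted cloud:

```python
# Compare grad Ad(y) = sum_i w_i (y - x_i)/||y - x_i|| with central differences.
import math, random
random.seed(2)

def normalize(p):
    r = math.sqrt(sum(c * c for c in p))
    return [c / r for c in p]

pts = [normalize([random.gauss(0, 1) for _ in range(3)]) for _ in range(7)]
w = [1.0 / 7] * 7
y = [0.1, -0.2, 0.05]

def Ad(q):
    return sum(wi * math.dist(xi, q) for wi, xi in zip(w, pts))

grad = [sum(wi * (y[j] - xi[j]) / math.dist(xi, y) for wi, xi in zip(w, pts))
        for j in range(3)]
h = 1e-6
for j in range(3):
    yp, ym = list(y), list(y)
    yp[j] += h
    ym[j] -= h
    assert abs((Ad(yp) - Ad(ym)) / (2 * h) - grad[j]) < 1e-6
```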
\end{lemma}
\begin{proposition}
If $\pmb{x}$ is median-closeable, $\operatorname{gmc}(\pmb{x})$ is a unique closed polygon with edgelengths $w_i$.
\label{prop:gmc is closed}
\end{proposition}
\begin{proof}
The proof follows from assembling several standard facts about the geometric median. These are in \cite{Anonymous:1UuVxm-1}, but are easily checked by hand.
As it is a sum of convex functions, the average distance function $\Ad_{\pmb{x}}$ is convex. Away from the $\hat{x}_i$, it is differentiable. If the points $\hat{x}_i$ are not collinear, $\Ad_{\pmb{x}}$ is strictly convex and $\vec{\mu}$ is unique. If the points $\hat{x}_i$ are collinear, either the geometric median is one of the $\hat{x}_i$ or the set of geometric medians consists of the interval between two $\hat{x}_i$.
Any geometric median which is not one of the $\hat{x}_i$ must be a critical point of the average distance function. For any such $\vec{\mu}$, using Lemma~\ref{lem:gradad and had},
\begin{equation}
\sum_i w_i \frac{\hat{x}_i - \vec{\mu}}{\left|\hat{x}_i - \vec{\mu}\right|} = -\left(\sum w_i\right) \nabla\!\Adx(\vec{\mu}) = 0.
\label{eq:grad total distance}
\end{equation}
This implies that $\operatorname{gmc}(\pmb{x})$ is closed.
If $\vec{\mu}$ is unique, then $\operatorname{gmc}(\pmb{x})$ is obviously unique. If $\vec{\mu}$ is not unique, the $\hat{x}_i$ are collinear, and $\vec{\mu}$ is on the line segment between two of the $\hat{x}_i$. In this case, it is not hard to see that~\eqref{eq:grad total distance} implies that the edges of $\operatorname{gmc}(\pmb{x})$ are two antipodal groups of points on $S^{d-1}$, each containing $n/2$ points, regardless of where we take $\vec{\mu}$ on the segment.
\end{proof}
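As a computational illustration (our sketch; function names are ours, and we approximate the geometric median by the standard Weiszfeld iteration), one can check on random examples that recentering and renormalizing at $\vec{\mu}$ produces a closed polygon with the same edgelengths:

```python
# Sketch of gmc(x) for an equilateral arm in R^3: find the geometric median
# of the edge cloud by Weiszfeld iteration, recenter, renormalize, and check
# that the resulting edge vectors sum to (numerically) zero.
import math, random
random.seed(0)

def normalize(v):
    r = math.sqrt(sum(c * c for c in v))
    return [c / r for c in v]

def geometric_median(points, iters=300):
    # Weiszfeld's fixed-point iteration, started at the center of mass.
    mu = [sum(p[j] for p in points) / len(points) for j in range(3)]
    for _ in range(iters):
        num, den = [0.0, 0.0, 0.0], 0.0
        for p in points:
            d = math.dist(p, mu)
            num = [num[j] + p[j] / d for j in range(3)]
            den += 1.0 / d
        mu = [num[j] / den for j in range(3)]
    return mu

def gmc(edges):
    mu = geometric_median(edges)
    return [normalize([e[j] - mu[j] for j in range(3)]) for e in edges]

arm = [normalize([random.gauss(0, 1) for _ in range(3)]) for _ in range(50)]
closed = gmc(arm)
defect = [sum(e[j] for e in closed) for j in range(3)]
assert all(abs(c) < 1e-6 for c in defect)                              # closed
assert all(abs(math.dist(e, [0, 0, 0]) - 1) < 1e-12 for e in closed)   # unit edges
```

The closure defect here is exactly $-n\,\nabla\!\Adx(\vec{\mu})$ scaled by the renormalization, so it vanishes as the iteration converges.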
Our next goal is to prove an optimality property for the geometric median closure. We will start by proving a more general fact about recentering and renormalizing:
\begin{proposition}
Suppose that $\hat{x}_i$ is any point cloud in $(S^{d-1})^n$, and $\vec{p} \in \mathbb{R}^d$ is not one of the $\hat{x}_i$. Given any set of weights $w_i$, we let $r(\pmb{x};\vec{p},w)$ denote the renormalized, recentered, and reweighted point cloud, and $\vec{s}$ denote its weighted sum:
\begin{equation*}
r(\pmb{x};\vec{p},w) :=
\left( w_i,
\frac{\hat{x}_i - \vec{p}}{\left| \hat{x}_i - \vec{p}\right|}
\right)
\quad \text{and} \quad
\vec{s} := \sum_i w_i \frac{\hat{x}_i - \vec{p}}{\left| \hat{x}_i - \vec{p}\right|}.
\end{equation*}
If $\pmb{x}=(\hat{x}_1,\ldots , \hat{x}_n)$ and $\pmb{r}$ is the vector of edges corresponding to the edge cloud $(w_i,\hat{r}_i)$, then
\begin{equation*}
\pmb{r} = \operatorname{argmin}_{\sum w_i \hat{y}_i = \vec{s}} \|\pmb{y} - \pmb{x}\|,
\end{equation*}
that is, $\pmb{r}$ is the closest vector of edges to $\pmb{x}$ (in $\mathbb{R}^{dn}$) with edge weights $w_i$ and vector sum $\vec{s}$.
\label{prop:recentering and renormalizing is closest}
\end{proposition}
\begin{proof}
Suppose that $(w_i,\hat{y}_i)$ is a point cloud with the same weights which also has $\sum_i w_i \hat{y}_i = \vec{s}$ and $\pmb{y}$ is the corresponding vector of edges in $\mathbb{R}^{dn}$. Let $\pmb{v} = \pmb{y} - \pmb{r}$. Since $\sum_i \vec{y}_i = \sum_i \vec{r}_i$, we know $\sum \vec{v}_i = \vec{0}$.
Remembering that $\|\vec{y}_i\| = w_i = \|\vec{r}_i\|$, we compute
\begin{equation*}
\left< \vec{v}_i, \vec{r}_i \right> =
\left< \vec{y}_i - \vec{r}_i, \vec{r}_i \right> = \left< \vec{y}_i, \vec{r}_i \right> - w_i^2 \leq 0.
\end{equation*}
Since $\vec{r}_i$ is a positive scalar multiple of $\hat{x}_i - \vec{p}$, this implies that $\left< \vec{v}_i,\hat{x}_i - \vec{p} \right> \leq 0$, and so we have $\left<\vec{v}_i,\hat{x}_i\right> \leq \left< \vec{v}_i, \vec{p} \right>$.
Since $\sum \vec{v}_i = \vec{0}$, we see
\begin{equation*}
\left< \pmb{v}, \pmb{x} \right> = \sum_i \left< \vec{v}_i, \hat{x}_i \right> \leq \sum_i \left< \vec{v}_i, \vec{p} \right>
= \left< \sum_i \vec{v}_i, \vec{p} \right> = 0,
\end{equation*}
or that $-\left< \pmb{x},\pmb{v} \right> \geq 0$. Using the facts $\left< \pmb{y}, \pmb{y} \right> = \sum w_i^2 = \left< \pmb{r}, \pmb{r} \right>$ and $\pmb{y} = \pmb{r} + \pmb{v}$,
\begin{align*}
\left< \pmb{x} - \pmb{y},\pmb{x} - \pmb{y} \right> &=
\left< \pmb{x}, \pmb{x} \right> + \left< \pmb{y}, \pmb{y} \right> - 2 \left< \pmb{x}, \pmb{y} \right> \\
&= \left< \pmb{x}, \pmb{x} \right> + \left< \pmb{r},\pmb{r} \right> - 2 \left< \pmb{x}, \pmb{r} \right> - 2 \left<\pmb{x},\pmb{v} \right> \\
&= \left<\pmb{x} - \pmb{r}, \pmb{x} -\pmb{r} \right> - 2 \left< \pmb{x}, \pmb{v} \right>
\end{align*}
so $\left\| \pmb{x} - \pmb{y} \right\| \geq \left\| \pmb{x} - \pmb{r} \right\|$, as claimed.
\end{proof}
Combining Propositions~\ref{prop:gmc is closed} and~\ref{prop:recentering and renormalizing is closest} with Definition~\ref{def:gmc}, we have
\begin{theorem}
If $\pmb{x}$ is a median-closeable equilateral polygon, $\operatorname{gmc}(\pmb{x})$ is the closed equilateral polygon closest to $\pmb{x}$ in the chordal metric.
\label{thm:gmc is closest closed}
\end{theorem}
\noindent\textbf{Remarks.} This construction may seem unexpected, but it has deep roots. In~\cite{Kapovich:1996p2605}, Kapovich and Millson give an analogous closure construction which associates a unique closed equilateral polygon to any equilateral polygon in which no more than half the edge vectors coincide. Their idea is to view the unit ball as the Poincar\'e ball model of hyperbolic space and (essentially) to recenter and renormalize in hyperbolic geometry around a point called the ``conformal median'' (see~\cite{Douady:1986go}), which parallels the geometric median in many ways. This is an example of a ``Geometric Invariant Theory'' (or GIT) quotient: see~\cite{Howard:2008uy}. These ideas inspired our work above; we did not adopt them wholesale because working in hyperbolic geometry makes the whole endeavor more abstract, and because we have not managed to prove an optimality property for their construction analogous to Theorem~\ref{thm:gmc is closest closed}.
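To make the closure concrete, here is a minimal numerical sketch (in Python; not part of the paper). It computes the geometric median of the edge cloud by Weiszfeld's fixed-point iteration, which is one standard method and our choice here rather than anything prescribed above, then recenters and renormalizes, and checks that the resulting edges close up. The seed, sample size, and iteration count are arbitrary.

```python
import math
import random

def unit_vec(d, rng):
    # sample uniformly on S^{d-1} by normalizing a Gaussian vector
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.hypot(*v)
    return [c / norm for c in v]

def geometric_median(points, iters=500):
    """Weiszfeld's fixed-point iteration for the minimizer of the
    average distance to the points (here, to the edge cloud)."""
    d = len(points[0])
    y = [sum(p[k] for p in points) / len(points) for k in range(d)]
    for _ in range(iters):
        num, den = [0.0] * d, 0.0
        for p in points:
            w = 1.0 / math.dist(p, y)
            den += w
            for k in range(d):
                num[k] += w * p[k]
        y = [c / den for c in num]
    return y

def gmc(edges):
    """Geometric median closure: recenter each unit edge at the
    geometric median mu, then renormalize back to unit length."""
    mu = geometric_median(edges)
    out = []
    for e in edges:
        v = [ek - mk for ek, mk in zip(e, mu)]
        nv = math.hypot(*v)
        out.append([c / nv for c in v])
    return out

rng = random.Random(0)
edges = [unit_vec(3, rng) for _ in range(50)]
closed = gmc(edges)
# the closure defect: the recentered, renormalized edges should sum to ~0,
# because the gradient of the average distance vanishes at the median
defect = math.hypot(*[sum(e[k] for e in closed) for k in range(3)])
```

The closure property here is exactly the vanishing-gradient characterization of the geometric median: at $\vec{\mu}$, the unit vectors $\nicefrac{(\hat{x}_i - \vec{\mu})}{\|\hat{x}_i - \vec{\mu}\|}$ sum to zero.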
\section{Asymptotics of the geometric median and the distance to closure}
Now that we have established the connection between the geometric median and closure, we turn to the large-$n$ behavior of the geometric median. Since the geometric median is a symmetric estimator built from a large number of i.i.d. random variables, it seems natural to expect that the distribution of $\vec{\mu}$ should converge to a multivariate normal, even though the classical central limit theorem doesn't apply. In fact, this is true:
\begin{proposition}
Let $n$ points $\hat{x}_i$ be sampled independently and uniformly on $S^{d-1}$, with geometric median $\vec{\mu}$. The random variable $\sqrt{n}\,\vec{\mu}$ converges in distribution to $\mathcal{N}\left(\vec{0},\frac{d}{(d-1)^2}I_d\right)$ as $n \rightarrow \infty$. This implies that $\norm{\sqrt{n}\, \vec{\mu}}$ converges in distribution to a Nakagami$\left(\frac{d}{2},\left(\frac{d}{d-1}\right)^2\right)$ random variable.
\label{prop:geometric median asymptotics}
\end{proposition}
\begin{figure}[t!]
\centering
\includegraphics[width=4in]{NormalizedMedianNorm-3.pdf}
\caption{For various $n$, we generated 250,000 random elements of $\operatorname{Arm}(n,3,1)$ and computed $\sqrt{n}\,\|\vec{\mu}\|$, where $\vec{\mu}$ is the geometric median of the edge cloud. By Proposition~\ref{prop:geometric median asymptotics}, $\sqrt{n}\,\|\vec{\mu}\|$ converges to a Nakagami$\left(\frac{3}{2},\frac{9}{4}\right)$ distribution, the pdf of which is the solid curve. Though we don't show it, the behavior in other dimensions is quite similar: by $n=50$ the density of the limiting Nakagami$\left(\frac{d}{2},\left(\frac{d}{d-1}\right)^2\right)$ distribution matches the histogram rather well.}
\label{fig:convergence of geometric median}
\end{figure}
\begin{proof}
We start by defining $\operatorname{Ed}(\vec{y})$ to be the expected distance from $\vec{y} \in \mathbb{R}^d$ to the unit sphere; formally,
\[
\operatorname{Ed}(\vec{y}) := \frac{1}{\operatorname{Vol} S^{d-1}} \int_{\hat{x} \in S^{d-1}} \|\hat{x} - \vec{y}\| \thinspace\operatorname{dVol}_{S^{d-1}}.
\]
Observe that, by symmetry, the minimizer of $\operatorname{Ed}(\vec{y})$ is the origin. Now the geometric median of a finite collection of points $\hat{x}_i$ uniformly sampled from the sphere is the minimizer of the average distance $\operatorname{Ad}$ to the $\hat{x}_i$ (Definition~\ref{def:gm}). For a large number of points, we expect $\operatorname{Ad}$ to be close to $\operatorname{Ed}$ as a function, and hence that the minimizers of the functions should be nearby as well.
In fact, Niemiro studied exactly this situation, showing\footnote{Under some technical hypotheses which are obviously satisfied in our case.}~(\cite[p. 1517]{Niemiro:1992ez}, cf. Haberman~\cite{Haberman:1989iq}) that
\begin{equation*}
\sqrt{n}\, \vec{\mu} \overset{d}{\rightarrow} \mathcal{N}(\vec{0},\mathcal{H}^{-1} V \mathcal{H}^{-1})
\end{equation*}
where $V$ is the covariance matrix of a random point $\hat{x}$ on $S^{d-1}$ and $\mathcal{H}$ is the Hessian of $\operatorname{Ed}$, evaluated at the origin.
The off-diagonal elements of $V$ are zero by symmetry. Using cylindrical coordinates on $S^{d-1}$ with axis $\hat{e}_i$, the $i$th diagonal entry in the covariance matrix is computed by the integral
\[
\sigma_i^2 = \frac{1}{\sqrt{\pi}}\frac{\Gamma(\frac{d}{2})}{\Gamma(\frac{d - 1}{2})} \int_{-1}^1
x^2 (1 - x^2)^{\frac{d - 3}{2}} \,\,\mathrm{d}x = \frac{1}{d}.
\]
We prove in the Appendix (Proposition~\ref{prop:mister ed}) that the expected distance function $\operatorname{Ed}(\vec{y})$ is given as a function of $r = \|\vec{y}\|$ by
\[
\operatorname{Ed}(r) = \, _2F_1 \left( -\frac{1}{2}, \frac{1-d}{2}; \frac{d}{2}; r^2 \right).
\]
When $d$ is odd, the standard Taylor series representation of the hypergeometric function truncates, and $\operatorname{Ed}(r)$ is a polynomial in $r$. For example, when $d=3$ we have $\operatorname{Ed}(r) = 1+\nicefrac{r^2}{3}$. In turn, a straightforward computation shows that the Hessian of $\operatorname{Ed}$ evaluated at the origin is simply
\[
\mathcal{H} = \mathcal{H} \operatorname{Ed}(\vec{0}) = \frac{d-1}{d} I_d,
\]
where $I_d$ is the $d \times d$ identity matrix. This completes the proof of the first statement. To get the second, we note that the norm of a Gaussian $\mathcal{N}(\vec{0},\sigma^2I_d)$ random variate is Nakagami$\left(\frac{d}{2},d \sigma^2\right)$-distributed.
\end{proof}
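As a sanity check on the truncated hypergeometric formula $\operatorname{Ed}(r) = 1 + \nicefrac{r^2}{3}$ for $d = 3$, the defining integral can be evaluated directly in cylindrical coordinates. The following sketch (Python; quadrature scheme and step count are arbitrary choices, not from the paper) compares the two.

```python
import math

def ed_d3(r, n=100000):
    """Midpoint-rule quadrature for Ed at radius r in d = 3, using the
    substitution t = cos(theta):
        Ed(r) = (1/2) * int_{-1}^{1} sqrt(1 + r^2 - 2 r t) dt."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        t = -1.0 + (i + 0.5) * h
        total += math.sqrt(1.0 + r * r - 2.0 * r * t)
    return 0.5 * h * total

# compare against the truncated hypergeometric series Ed(r) = 1 + r^2/3
for r in (0.0, 0.25, 0.5, 0.9):
    assert abs(ed_d3(r) - (1.0 + r * r / 3.0)) < 1e-6
```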
We now see that the geometric median is asymptotically normal and concentrates around the origin. We can use this to prove an asymptotic result for the distance to closure for equilateral polygons.
\begin{figure}[t!]
\centering
\includegraphics[width=4in]{MonochromeChordalDistanceHistograms.pdf}
\caption{For $d=2,3,4,10$, we generated 250,000 random elements of $\operatorname{Arm}(1000,d,1)$ and computed their chordal distance to $\operatorname{Pol}(1000,d,1)$ using Theorem~\ref{thm:gmc is closest closed}. This plot shows the histograms of chordal distance to closure together with the densities of Nakagami$\left(\frac{d}{2},\frac{d}{d-1}\right)$ distributions.}
\label{fig:chordal distance histograms}
\end{figure}
\begin{proposition}
For a random equilateral $n$-gon $\pmb{x}$ with edges $\hat{x}_i$ sampled independently and uniformly from $S^{d-1}$, the random variable $d_\text{chordal}(\pmb{x},\operatorname{Pol}(n,d,1))$ converges in distribution to a Nakagami$\left(\frac{d}{2},\frac{d}{d-1}\right)$ as $n \rightarrow \infty$.
\label{prop:dchordal asymptotics}
\end{proposition}
\begin{proof}
We know from Theorem~\ref{thm:gmc is closest closed} that $d_\text{chordal}(\pmb{x},\operatorname{Pol}(n,d,1))$ is actually the chordal distance from $\pmb{x}$ to $\operatorname{gmc}(\pmb{x})$. To estimate this distance, we will make use of the recentering and renormalizing map $r(\pmb{x};\vec{p},1)$ from Proposition~\ref{prop:recentering and renormalizing is closest}.
When $\norm{\vec{\mu}}$ is small, we can estimate
\begin{equation*}
\norm{\operatorname{gmc}(\pmb{x}) - \pmb{x}} = \norm{r(\pmb{x};\vec{\mu},1) - r(\pmb{x};\vec{0},1)} \sim \norm{\vec{\mu}} \norm{D_{\hat{\mu}}r(\pmb{x};\vec{0},1)}
\end{equation*}
where $D_{\hat{\mu}} r(\pmb{x};\vec{0},1)$ is the derivative of $r(\pmb{x};\vec{v},1)$ with respect to the vector $\vec{v}$ in the direction of the unit vector $\hat{\mu} = \nicefrac{\vec{\mu}}{\norm{\vec{\mu}}}$ (while leaving the $\pmb{x}$ variables constant).
Using the definition of $r(\pmb{x};\vec{p},1)$, a direct computation reveals that
\begin{equation*}
\norm{D_{\hat{\mu}}r(\pmb{x};\vec{0},1)} = \left( n - \sum_i \left< \hat{x}_i, \hat{\mu} \right>^2 \right)^{\frac{1}{2}} = \sqrt{n} \left(1 - \frac{1}{n} \sum_i \left< \hat{x}_i, \hat{\mu} \right>^2 \right)^{\frac{1}{2}}.
\end{equation*}
Since $\hat{\mu}$ is a unit vector, the sum is the Rayleigh quotient for the matrix $X = \frac{1}{n} \sum_i \hat{x}_i \hat{x}_i^T$, and so obeys the estimates
\begin{equation*}
\lambda_{\operatorname{min}} (X) \leq \frac{1}{n} \sum_i \left< \hat{x}_i, \hat{\mu} \right>^2 \leq \lambda_{\operatorname{max}} (X)
\end{equation*}
Now $\lambda_{\operatorname{min}} (X)$ and $\lambda_{\operatorname{max}} (X)$ are also random variables depending on the $\hat{x}_i$, but we can use the matrix Chernoff inequalities~\cite[Remark 5.3]{Tropp:2012fb} to bound the probability that they are far from $\frac{1}{d}$.
It's quite standard to prove that $\mathcal{E}(\hat{x}_i \hat{x}_i^T) = \frac{1}{d} I_d$, so $\mathcal{E}(X) = \frac{1}{d} I_d$. The matrix Chernoff inequalities then reduce to
\begin{equation}
\mathcal{P}\left\{ \lambda_{\operatorname{min}} (X) < (1 - \delta) \frac{1}{d} \right\}
\leq d \left(\frac{e^{-\delta}}{(1-\delta)^{1-\delta}} \right)^{\frac{n}{d}}
\label{eq:lmin lower}
\end{equation}
and
\begin{equation}
\mathcal{P}\left\{ \lambda_{\operatorname{max}} (X) > (1+\delta) \frac{1}{d} \right\} \leq d \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}} \right)^{\frac{n}{d}}
\label{eq:lmax upper}
\end{equation}
For any $\delta > 0$, the quantities raised to the power $\frac{n}{d}$ are $< 1$, and so as $n \rightarrow \infty$ the probability that $\lambda_{\operatorname{min}}(X) \geq (1 - \delta)\frac{1}{d}$ and $\lambda_{\operatorname{max}}(X) \leq (1 + \delta)\frac{1}{d}$ both hold, as in~\eqref{eq:lmin lower} and~\eqref{eq:lmax upper}, tends to $1$. In turn, this means that for any fixed $\delta > 0$,
\begin{equation*}
\mathcal{P} \left\{ \left| \frac{1}{d} - \frac{1}{n} \sum_i \left< \hat{x}_i, \hat{\mu} \right>^2 \right| > \frac{\delta}{d} \right\} \rightarrow 0
\end{equation*}
and so the random variable $\frac{1}{n} \sum_i \left< \hat{x}_i, \hat{\mu} \right>^2$ converges in probability to $\frac{1}{d}$. By the continuous mapping theorem, this means that $\left(1 - \frac{1}{n} \sum_i \left< \hat{x}_i, \hat{\mu} \right>^2\right)^{1/2} \overset{p}{\rightarrow}\sqrt{\frac{d-1}{d}}$.
We can now rewrite the random variable $\norm{\vec{\mu}} \norm{D_{\hat{\mu}}r(\pmb{x};\vec{0},1)}$ as the product of $\norm{\sqrt{n}\, \vec{\mu}}$, which by Proposition~\ref{prop:geometric median asymptotics} converges in distribution to a Nakagami$\left(\frac{d}{2},\left(\frac{d}{d-1}\right)^2\right)$ random variable, and $\left(1 - \frac{1}{n} \sum_i \left< \hat{x}_i, \hat{\mu} \right>^2\right)^{1/2}$, which we have just proved converges in probability to the constant random variable $\sqrt{\frac{d-1}{d}}$.
Using Slutsky's theorem and a little algebra, this implies that the product converges in distribution to a Nakagami $\left(\frac{d}{2},\frac{d}{d-1}\right)$ random variable, as claimed.
\end{proof}
We have now learned something interesting: the distribution of chordal distances to closure should be converging to a distribution which doesn't depend on the number of edges! This is surprising because the diameter of $\operatorname{Arm}(n,d,1)$ is clearly $\Theta(\sqrt{n}) \rightarrow \infty$. This means that some arms might indeed be very far from closure -- but they are very rare. We will look for this feature in the more specific probability inequalities to come.
We can also see how fast the tail of the distribution of $d_\text{chordal}$ can be expected to decay. The survival function of the Nakagami distribution is an incomplete Gamma function. Using~\cite[8.10.1]{NIST:DLMF}, we can show that there is a constant $C(d) > 0$ so that if $x$ is Nakagami$\left(\frac{d}{2},\frac{d}{d-1}\right)$, then
\begin{equation}
\mathcal{P} \left\{ x < t \right\} \geq 1 - C(d) \, t^{d-2} e^{-\frac{d-1}{2} t^2}.
\label{eq:precise tail bound}
\end{equation}
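The step from the normal limit to the Nakagami distribution is easy to check numerically. The sketch below (Python; sample sizes, tolerances, and the integration window are arbitrary choices) compares the norm of a Gaussian sample in $d = 3$ against the exact Nakagami$\left(\frac{d}{2}, d\sigma^2\right)$ mean, and verifies that the Nakagami density normalizes.

```python
import math
import random

def nakagami_pdf(x, m, omega):
    # density of the Nakagami(m, Omega) distribution
    return (2.0 * m ** m / (math.gamma(m) * omega ** m)
            * x ** (2.0 * m - 1.0) * math.exp(-m * x * x / omega))

# Monte Carlo check: the norm of an N(0, sigma^2 I_d) vector is
# Nakagami(d/2, d sigma^2); compare means for d = 3, sigma^2 = 1
rng = random.Random(7)
d, trials = 3, 200000
emp_mean = sum(
    math.hypot(*[rng.gauss(0.0, 1.0) for _ in range(d)]) for _ in range(trials)
) / trials
m, omega = d / 2.0, float(d)
# exact Nakagami mean: Gamma(m + 1/2)/Gamma(m) * sqrt(Omega/m)
exact_mean = math.gamma(m + 0.5) / math.gamma(m) * math.sqrt(omega / m)
assert abs(emp_mean - exact_mean) < 0.01

# the pdf integrates to 1 (midpoint rule on [0, 10]; the tail beyond is negligible)
n, h = 100000, 10.0 / 100000
total = sum(nakagami_pdf((i + 0.5) * h, m, omega) * h for i in range(n))
assert abs(total - 1.0) < 1e-6
```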
\section{Concentration inequalities for $\norm{\vec{\mu}}$ and $d_\text{chordal}$}
We now know what to expect in the large-$n$ limit, at least for equilateral polygons:~\eqref{eq:precise tail bound} tells us that we should aim for a tail bound for $d_\text{chordal}$ which does not depend on $n$ and is proportional to $e^{-\alpha t^2}$ for some $\alpha < \frac{d-1}{2}$. We will get exactly such a bound in Corollary~\ref{cor:chordal concentration} at the end of the section. Our bounds will apply for finite $n$, and also apply to the non-equilateral case, where it is not even clear what the large-$n$ limit should mean.
\subsection{A bound connecting $\norm{\vec{\mu}}$ and $d_\text{chordal}$.}
To start with, we prove a hard bound on the relationship between the geometric median and our two measures of distance in polygon space. First, we note that our procedure of recentering and renormalizing changes each $\hat{x}_i$ by a controlled amount.
\begin{lemma}
If $\hat{x}_i \in S^{d-1}$ and $\vec{p} \in \mathbb{R}^d$ is any vector with $\| \vec{p} \| < 1$, then $\left\| \hat{x}_i - \frac{\hat{x}_i - \vec{p}}{\|\hat{x_i} - \vec{p}\|} \right\| \leq \sqrt{2} \| \vec{p} \|$ and $\angle (\hat{x}_i,\frac{\hat{x}_i - \vec{p}}{\|\hat{x}_i - \vec{p}\|}) \leq \arcsin \|\vec{p}\|< \frac{\pi}{2} \| \vec{p} \|$.
\label{lem:distance bound}
\end{lemma}
\begin{proof}
This is a calculus exercise; it is straightforward to establish the (sharp) bound
\begin{equation*}
\left\| \hat{x}_i - \frac{\hat{x}_i - \vec{p}}{\|\hat{x_i} - \vec{p}\|} \right\| \leq \sqrt{2 - 2\sqrt{1 - \|\vec{p}\|^2}}.
\end{equation*}
Further, it is easy to check that the right-hand side is a convex function of $\| \vec{p} \|$ which is equal to $0$ when $\|\vec{p}\|=0$, and $\sqrt{2}$ when $\|\vec{p}\|=1$, so it is bounded above by the line $\sqrt{2}\, \| \vec{p} \|$. The angle bound is also straightforward.
\end{proof}
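A quick numerical confirmation of the lemma (Python; the sampling scheme and tolerances are arbitrary choices, not from the paper) tests both the chordal and the angular bound on random inputs in several dimensions.

```python
import math
import random

rng = random.Random(2)
root2 = math.sqrt(2.0)
checked = 0
for _ in range(2000):
    d = rng.choice([2, 3, 4, 10])
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    x = [c / math.hypot(*g) for c in g]              # x uniform on S^{d-1}
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    scale = 0.999 * rng.random() / math.hypot(*g)
    p = [c * scale for c in g]                       # ||p|| < 1
    norm_p = math.hypot(*p)
    q = [xi - pi for xi, pi in zip(x, p)]
    moved = [c / math.hypot(*q) for c in q]          # (x - p)/||x - p||
    chord = math.dist(x, moved)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(x, moved))))
    assert chord <= root2 * norm_p + 1e-12           # chordal bound of the lemma
    assert math.acos(dot) <= math.asin(norm_p) + 1e-7  # angular bound of the lemma
    checked += 1
```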
We now can give a bound on the distance between a given $\pmb{x} \in \operatorname{Arm}(n,d,w)$ and $\operatorname{Pol}(n,d,w)$ in terms of the norm of the geometric median $\vec{\mu}$ of the edge cloud $(\hat{x}_i,w_i)$.
\begin{proposition}
If the edge cloud $(\hat{x}_i,w_i)$ has geometric median $\vec{\mu}$ with $\|\vec{\mu}\| < 1$,
\begin{equation*}
d_{\text{chordal}}(\pmb{x},\operatorname{Pol}(n,d,w)) < \sqrt{2 \sum w_i^2}\, \|\vec{\mu}\|
\quad
\text{and}
\quad
d_{\text{max-angular}}(\pmb{x},\operatorname{Pol}(n,d,w)) < \arcsin \|\vec{\mu}\|.
\end{equation*}
\label{prop:distance bound}
\end{proposition}
\begin{proof}
Since $\|\vec{\mu}\| < 1$, our polygon is median-closeable and $\operatorname{gmc}(\pmb{x})$ is a closed polygon with edge cloud $\left(w_i, \frac{\hat{x}_i - \vec{\mu}}{\|\hat{x}_i - \vec{\mu}\|}\right)$. Lemma~\ref{lem:distance bound} immediately yields the bound on $d_{\text{max-angular}}$; to get the chordal distance bound, we write
\begin{equation*}
\sqrt{ \sum \left\| w_i \hat{x}_i - w_i \frac{\hat{x}_i - \vec{\mu}}{\|\hat{x}_i - \vec{\mu}\|} \right\|^2 } \leq
\sqrt{ \sum w_i^2 \cdot 2 \| \vec{\mu} \|^2 }.
\end{equation*}
\end{proof}
\subsection{Strategy for the tail bound}
To derive our explicit tail bound on the norm of the geometric median, our strategy is as follows. First, we will prove two probabilistic bounds: an upper bound on $\|\nabla\!\Adx(\vec{0})\|$ and a positive lower bound on $\lambda_{\operatorname{min}}(\mathcal{H}\Adx(\vec{0}))$. These will come from scalar and matrix versions of Bernstein's inequality.
If we restrict $\Ad_{\pmb{x}}$ to a scalar function $\Ad_{\pmb{x}}(z)$ on a ray from the origin, these bounds yield an upper bound on $|\Ad_{\pmb{x}}'(0)|$ and a lower bound on $\Ad_{\pmb{x}}''(0)$. We will get a uniform lower bound on $\Ad_{\pmb{x}}''(z)$ for $z \in [0,\nicefrac{1}{50}]$ by showing that $\Ad_{\pmb{x}}''(z) \geq \Ad_{\pmb{x}}''(0) - 7 z$ on this interval. We prove this using the special structure of $\mathcal{H}\Adx$.
By Taylor's theorem, writing $\lambda$ for this uniform lower bound on $\Ad_{\pmb{x}}''$, there is some $z_*$ in $[0,z]$ so that
\begin{equation*}
\Ad_{\pmb{x}}'(z) = \Ad_{\pmb{x}}'(0) + z \Ad_{\pmb{x}}''(z_*) \geq -|\Ad_{\pmb{x}}'(0)| + \lambda z.
\end{equation*}
This means that for $z > |\Ad_{\pmb{x}}'(0)|/\lambda$, this directional derivative must be positive: in particular, since the geometric median $\vec{\mu}$ is by definition a point where $\nabla\!\Ad(\vec{\mu}) = \vec{0}$, $\vec{\mu}$ can lie no farther than $|\Ad_{\pmb{x}}'(0)|/\lambda$ from the origin.
\subsection{A probabilistic bound on $\|\nabla\!\Adx(\vec{0})\| = \norm{\sum \omega_i \hat{x}_i}$}
We want to bound the norm of the gradient $\nabla\!\Adx(\vec{0})$, which we recall from Lemma~\ref{lem:gradad and had} is equal to $\sum \omega_i \hat{x}_i$. We will start with a lemma which helps us understand the effect of variable weights $\omega_i$.
\begin{lemma}
For any collection of $n$ non-negative real numbers $w_i$, if we define $\omega_i = \nicefrac{w_i}{\sum w_i}$,
\begin{equation*}
n \geq 1 + n^2 \operatorname{Var}(\omega_i) = n \sum \omega_i^2 \geq 1,
\end{equation*}
where $\operatorname{Var}(\omega_i)$ is the variance of $\{\omega_1, \dots, \omega_n\}$. We have equality on the left precisely when all but one of the $w_i$ equal zero and equality on the right precisely when all the $w_i$ are equal.
\label{lem:mysteryweight}
\end{lemma}
\begin{proof}[Proof of Lemma]
Starting with the definition of variance, and remembering that $\sum \omega_i = 1$,
\begin{equation*}
\operatorname{Var}(\omega_i) = \frac{1}{n} \sum \omega_i^2 - \left( \frac{1}{n} \sum \omega_i \right)^2
= \frac{1}{n} \sum \omega_i^2 - \frac{1}{n^2}
\end{equation*}
Solving for $\sum \omega_i^2$,
\begin{equation*}
\sum \omega_i^2 = \frac{1 + n^2 \operatorname{Var}(\omega_i)}{n}
\end{equation*}
which proves the central equality. Since $\operatorname{Var}(\omega_i) \geq 0$ with equality precisely when all the $\omega_i$ are equal, the inequality on the right follows easily.
To prove the inequality on the left, we invoke the Bhatia-Davis inequality~\cite{Bhatia:2000ge}, which says that since the $0 \leq \omega_i \leq 1$ and the mean of the $\omega_i$ is $\nicefrac{1}{n}$, we have $\operatorname{Var}(\omega_i) \leq (1 - \frac{1}{n})(\frac{1}{n} - 0)$ with equality precisely when one $\omega_i = 1$ and the remainder are zero.
\end{proof}
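Since the identity in the lemma is exact, it can be verified in rational arithmetic. This sketch (Python; the test weights are arbitrary) also exercises both equality cases.

```python
import random
from fractions import Fraction

def check_weight_identity(ws):
    """Verify n >= 1 + n^2 Var(omega) = n * sum(omega^2) >= 1 exactly,
    where omega_i = w_i / sum(w_j); returns n * sum(omega^2)."""
    n = len(ws)
    total = sum(ws)
    om = [Fraction(w, total) for w in ws]
    s2 = sum(o * o for o in om)
    mean = Fraction(1, n)
    var = sum((o - mean) ** 2 for o in om) / n   # population variance
    assert 1 + n * n * var == n * s2             # the central equality
    assert Fraction(1) <= n * s2 <= Fraction(n)  # the two inequalities
    return n * s2

rng = random.Random(4)
for _ in range(200):
    ws = [rng.randrange(0, 10) for _ in range(8)]
    if sum(ws) == 0:
        continue
    check_weight_identity(ws)

# equality cases: all weights equal, and all but one weight zero
assert check_weight_identity([5, 5, 5, 5]) == 1
assert check_weight_identity([7, 0, 0, 0]) == 4
```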
Now we can give our first result:
\begin{proposition}
If we have $n$ points $\hat{x}_i$ sampled independently and uniformly from $S^{d-1}$, and $n$ weights $\omega_i \geq 0$ with $\sum_i \omega_i = 1$ and $\Omega = \max_i \omega_i$, then for any $t > 0$
\begin{equation*}
\mathcal{P}\left(\norm{\sum \omega_i \hat{x}_i} > t \right) \leq d \exp\left(- \frac{3 n t^2}{
2 n t \Omega \sqrt{d} + 6 (1 + n^2 \operatorname{Var} \omega_i)} \right).
\end{equation*}
If the $\omega_i$ are all equal (the polygon is equilateral), this simplifies to
\begin{equation*}
\mathcal{P}\left(\norm{\frac{1}{n} \sum \hat{x}_i} > t \right) \leq d \exp\left(- \frac{3 n t^2}{6+2 t \sqrt{d}}\right).
\end{equation*}
\label{prop:ftc}
\end{proposition}
\begin{proof}
We will use Bernstein's inequality~\cite[Theorem~1.2]{Dubhashi:2009ho}: Suppose $X_1, \dots, X_n$ are independent random variables with $X_i - \mathcal{E}(X_i) \leq b$ for each $i$, the variance of each $X_i$ is given by $\sigma_i^2$, and $X = \sum X_i$ (with variance $\sigma^2 = \sum \sigma_i^2$). Then for any $t > 0$,
\[
\mathcal{P}\left( X > \mathcal{E}(X) + t \right) \leq \exp\left( -\frac{t^2}{2 \sigma^2 \left( 1 + \frac{b t}{3 \sigma^2} \right)} \right).
\]
For any unit vector $\vec{v}$, we can set $X_i = \left< \omega_i \hat{x}_i,\vec{v} \right>$. These random variables clearly have expectation 0 and $X_i - \mathcal{E}(X_i) \leq \omega_i \leq \Omega$. Using cylindrical coordinates on $S^{d-1}$ with axis $\vec{v}$, the variance is computed by the integral
\[
\sigma_i^2 = \frac{1}{\sqrt{\pi}}\frac{\Gamma(\frac{d}{2})}{\Gamma(\frac{d - 1}{2})} \int_{-1}^1
(\omega_i x)^2 (1 - x^2)^{\frac{d - 3}{2}} \,\,\mathrm{d}x = \frac{\omega_i^2}{d}.
\]
Using Lemma~\ref{lem:mysteryweight}, this implies
\begin{equation*}
\sigma^2 = \frac{1}{d} \sum \omega_i^2 = \frac{1 + n^2 \operatorname{Var} \omega_i}{d n}.
\end{equation*}
This proves that for any $\vec{v}$,
\begin{equation*}
\mathcal{P}\left( \left<\sum\omega_i \hat{x}_i,\vec{v} \right> > t \right)
\leq \exp\left( -n d t^2 \frac{3}{6 + 2\, d n t \, \Omega + 6 n^2 \operatorname{Var} \omega_i} \right).
\end{equation*}
Applying this inequality $d$ times for $\vec{v}=\hat{e}_1, \dots, \hat{e}_d$, and using the union bound, we can bound the $L^\infty$ norm of $\sum \omega_i \hat{x}_i$:
\[
\mathcal{P}\left( \norm{\sum \omega_i \hat{x}_i}_\infty = \max_j \left(\sum\omega_i \hat{x}_i\right)_j > t \right)
\leq d \exp\left( -n d t^2 \frac{3}{6 + 2\, d n t \, \Omega + 6 n^2 \operatorname{Var} \omega_i} \right).
\]
But we know that for any $\vec{u}\in\mathbb{R}^d$ we have $\frac{1}{\sqrt{d}} \norm{\vec{u}} \leq \norm{\vec{u}}_\infty$, so
\begin{equation*}
\mathcal{P}\left( \frac{1}{\sqrt{d}} \norm{\sum \omega_i \hat{x}_i} > t \right) \leq
\mathcal{P}\left( \norm{\sum \omega_i \hat{x}_i}_\infty > t \right) \leq d \exp\left( -n d t^2 \frac{3}{6 + 2\, d n t \, \Omega + 6 n^2 \operatorname{Var} \omega_i} \right).
\end{equation*}
Replacing $t$ by $\frac{t}{\sqrt{d}}$ yields the statement of the Proposition.
\end{proof}
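The equilateral bound can be compared against simulation. In the sketch below (Python; the values of $n$, $t$, and the trial count are arbitrary choices), the empirical tail probability sits well below the Bernstein bound, which is far from tight at these parameters.

```python
import math
import random

def unit_vec(d, rng):
    # uniform point on S^{d-1} via a normalized Gaussian vector
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.hypot(*v)
    return [c / norm for c in v]

def tail_bound(n, d, t):
    # the equilateral case of the proposition:
    # d * exp(-3 n t^2 / (6 + 2 t sqrt(d)))
    return d * math.exp(-3.0 * n * t * t / (6.0 + 2.0 * t * math.sqrt(d)))

rng = random.Random(5)
n, d, t, trials = 100, 3, 0.2, 2000
hits = 0
for _ in range(trials):
    pts = [unit_vec(d, rng) for _ in range(n)]
    mean = [sum(p[k] for p in pts) / n for k in range(d)]
    if math.hypot(*mean) > t:
        hits += 1
empirical = hits / trials
assert empirical <= tail_bound(n, d, t)
```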
The terms $\Omega$ and $\operatorname{Var} \omega_i$ in the statement of Proposition~\ref{prop:ftc} may seem mysterious at first, but Lemma~\ref{lem:mysteryweight} clarifies their roles.
At one extreme, if one $\omega_i$ is close to 1 and the remaining $\omega_j$ are small, the sum $\norm{\sum_i \omega_i \hat{x}_i} \sim 1$ regardless of $n$, and $\norm{\sum_i \omega_i \hat{x}_i}$ cannot concentrate on $0$ as $n \rightarrow \infty$. To see this in the statement of the Proposition, observe that in this case $\Omega$ and $\operatorname{Var} \omega_i$ approach their maximum values of $\Omega \sim 1$ and $1 + n^2 \operatorname{Var} \omega_i \sim n$, the $n$'s in numerator and denominator cancel, and the exponent no longer depends on $n$ at all.
At the other extreme, if the $\omega_i$ are all equal, $\Omega$ and $\operatorname{Var} \omega_i$ are minimized: $\Omega = \nicefrac{1}{n}$ and ${\operatorname{Var} \omega_i = 0}$. In this case, the denominator in the exponent does not depend on $n$ and $\norm{\sum \omega_i \hat{x}_i}$ concentrates on $0$ as fast as possible. We can compare this result to that of Khoi~\cite{Khoi:2005ch}, who showed in a different sense that the equilateral polygons are the ``most flexible'' of all the fixed edgelength polygons.
In the middle, if the $\omega_i$ are variable, but the number of comparably large $\omega_i$ increases, $\Omega$ and $\operatorname{Var} \omega_i$ act to slow the rate of concentration, but they do not stop it: $\norm{\sum_i \omega_i \hat{x}_i}$ still concentrates on $0$ as $n \rightarrow \infty$.
\subsection{A probabilistic bound on $\lambda_{\operatorname{min}}(\mathcal{H}\Adx(\vec{0})) = \lambda_{\operatorname{min}}\left(I - \sum \omega_i \hat{x}_i \hat{x}_i^T\right)$}
We now want to bound the lowest eigenvalue of the Hessian of $\Ad_{\pmb{x}}$ at the origin. Again using Lemma~\ref{lem:gradad and had}, we see that $\mathcal{H}\Adx(\vec{0}) = I - \sum \omega_i \hat{x}_i \hat{x}_i^T$, where the quantities being summed are outer products of the vectors $\hat{x}_i$. That is, they are the symmetric, positive semidefinite projection matrices which project to the lines spanned by the $\hat{x}_i$. We now show
\begin{proposition}
If we have $n$ points $\hat{x}_i$ sampled independently and uniformly from $S^{d-1}$, and $n$ weights $0 \leq \omega_i$ with $\sum_i \omega_i = 1$ and $\Omega = \max_i \omega_i$, then for any $t > 0$
\begin{equation*}
\mathcal{P}\left( \lambda_{\operatorname{min}}\left(I - \sum \omega_i \hat{x}_i \hat{x}_i^T \right) < \frac{d-1}{d} - t \right) \leq d \exp\left( -\frac{d}{d-1} \cdot \frac{3 d n t^2}{2 n
t \Omega d + 6(1 + n^2 \operatorname{Var} \omega_i)} \right).
\end{equation*}
If the $\omega_i$ are all equal (the polygon is equilateral), this simplifies to
\begin{equation*}
\mathcal{P}\left( \lambda_{\operatorname{min}}\left(I - \frac{1}{n}\sum \hat{x}_i \hat{x}_i^T \right) < \frac{d-1}{d} - t \right) \leq d \exp\left( -\frac{d}{d-1} \cdot \frac{3 d n t^2}{2 t d + 6} \right).
\end{equation*}
\label{prop:had zero}
\end{proposition}
\begin{proof}
The statement is similar to the statement of Proposition~\ref{prop:ftc}, so it should not be surprising that this also follows from a Bernstein inequality, this time for~matrices~\cite[Theorem~1.4]{Tropp:2012fb}: suppose $X_1, \dots, X_n$ are independent random symmetric $d \times d$ matrices, $\mathcal{E}(X_i) = 0$, $\lambda_{\operatorname{max}}(X_i) \leq b$, the ``matrix variance'' of each $X_i$ is given by $\sigma_i^2 = \mathcal{E}(X_i^2)$, and $X = \sum X_i$ (with ``scalar variance'' $\sigma^2 = \norm{\sum \sigma_i^2}$). Then for any $t > 0$,
\begin{equation}
\mathcal{P}\left( \lambda_{\operatorname{max}}(X) \geq t \right) \leq d \exp\left( -\frac{t^2}{2 \sigma^2 \left( 1 + \frac{b t}{3 \sigma^2} \right)} \right)
\label{eq:matrix bernstein}
\end{equation}
We will set $X_i = \omega_i\left(\hat{x}_i \hat{x}_i^T - \frac{1}{d} I_d\right)$. These are clearly symmetric $d \times d$ matrices.
We now prove $\mathcal{E}(X_i) = 0$. Since the $\hat{x}_i$ are uniformly sampled on $S^{d-1}$, their distribution is $O(d)$-invariant. This means we can first average $\hat{x}_i$ over any subgroup of $O(d)$ without changing $\mathcal{E}\left(\hat{x}_i \hat{x}_i^T\right)$. We'll choose the orthotope group of all $2^d$ possible diagonal matrices $D$ with $D_{ii} = \pm 1$. For any vector $\vec{v} \in \mathbb{R}^d$:
\begin{equation*}
\frac{1}{2^d} \left(\sum_{D} (D\vec{v})(D\vec{v})^T \right)_{ij} =
\frac{1}{2^d} \sum_{D} D_{ii} D_{jj} v_i v_j
\end{equation*}
Now for each of the 4 possible combinations of signs $D_{ii} = \pm 1$ and $D_{jj} = \pm 1$, there are the same number $2^{d-2}$ of elements of the orthotope group with these signs. If $i \neq j$, two products are $+1$ and two are $-1$ and the terms cancel. If $i=j$ all the products are the same. Thus the average matrix $\frac{1}{2^d} \sum_D (D\vec{v}) (D\vec{v})^T$ is a diagonal matrix with entries $v_i^2$.
Since the expectation of the square of a coordinate of a randomly distributed unit vector on $S^{d-1}$ was computed in the proof of Proposition~\ref{prop:ftc} to be $\nicefrac{1}{d}$, we have $\mathcal{E}\left(\hat{x}_i \hat{x}_i^T\right) = \frac{1}{d} I_d$, proving that $\mathcal{E}(X_i) = 0$.
We now prove that $\lambda_{\operatorname{max}}(X_i) \leq \Omega \frac{d-1}{d} $. For any matrix $A$, the eigenvalues of $A + kI_d$ are simply $k$ added to the eigenvalues of $A$~(cf.~\cite[Theorem 2.4.8.1]{Horn:2013tf}). So
\begin{equation*}
\lambda_{\operatorname{max}}(X_i) = \omega_i \left( \lambda_{\operatorname{max}}(\hat{x}_i \hat{x}_i^T) - \frac{1}{d} \right) = \omega_i \frac{d-1}{d} \leq \Omega \frac{d-1}{d}
\end{equation*}
since the largest eigenvalue of a projection matrix like $\hat{x}_i \hat{x}_i^T$ is 1.
Next, we want to show that $\sigma_i^2 = \mathcal{E}(X_i^2) = \omega_i^2 \frac{d-1}{d^2} I_d$. A direct computation reveals
\begin{equation*}
X_i^2 = \omega_i^2 \left( \left(1 - \frac{2}{d}\right) \hat{x}_i \hat{x}_i^T + \frac{1}{d^2} I_d \right)
\end{equation*}
and the result follows from our previous computation that $\mathcal{E}\left(\hat{x}_i \hat{x}_i^T\right) = \frac{1}{d}I_d$. Summing the $\sigma_i^2$ and taking the operator norm, we get
\begin{equation*}
\sigma^2 = \frac{d-1}{d^2} \sum \omega_i^2.
\end{equation*}
Plugging $b$ and $\sigma$ into~\eqref{eq:matrix bernstein} yields a bound on the probability that $\lambda_{\operatorname{max}}(X) > t$ or, since ${\lambda_{\operatorname{max}}(X) = \lambda_{\operatorname{max}}(\sum \omega_i \hat{x}_i \hat{x}_i^T) - \frac{1}{d}}$, that $\lambda_{\operatorname{max}}(\sum \omega_i \hat{x}_i \hat{x}_i^T) > \frac{1}{d} + t$. This completes the proof.
\end{proof}
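The key expectation $\mathcal{E}\left(\hat{x}_i \hat{x}_i^T\right) = \frac{1}{d} I_d$ used in the proof is easy to confirm by simulation. This sketch (Python; dimension, sample size, and tolerance are arbitrary) averages the outer products of uniform points on $S^{d-1}$.

```python
import math
import random

def unit_vec(d, rng):
    # uniform point on S^{d-1} via a normalized Gaussian vector
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.hypot(*v)
    return [c / norm for c in v]

rng = random.Random(6)
d, samples = 4, 100000
acc = [[0.0] * d for _ in range(d)]
for _ in range(samples):
    x = unit_vec(d, rng)
    for i in range(d):
        for j in range(d):
            acc[i][j] += x[i] * x[j]
mean = [[acc[i][j] / samples for j in range(d)] for i in range(d)]
# diagonal entries should be near 1/d and off-diagonal entries near 0
for i in range(d):
    for j in range(d):
        target = 1.0 / d if i == j else 0.0
        assert abs(mean[i][j] - target) < 0.01
```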
We note that this concentration inequality is better than Proposition~\ref{prop:ftc}: there is an extra factor of $d$ in the numerator which means that the concentration gets faster as $d$ increases. The effect of variable edgelengths is to slow (or stop) the concentration, just as in Proposition~\ref{prop:ftc}; the same comments on the role of $\Omega$ and $\operatorname{Var} \omega_i$ apply here.
\subsection{A bound on the change in the radial second derivative}
For any point $\vec{s} \in \mathbb{R}^{d}$, the second derivative of $\Ad_{\pmb{x}}$ along the ray through $\vec{s}$ is given by evaluating the Hessian as a quadratic form on the vector $\vec{s}$ itself. Our last proposition gave us a lower bound on the result at the origin; we now show that this can't change too fast as we move away from the origin.
\begin{proposition}
For $\norm{\vec{s}} < 1$ we have
\begin{equation*}
\frac{\left< \mathcal{H}\Adx(\vec{s}) \vec{s}, \vec{s} \right>}{\left<\vec{s},\vec{s}\right>} - \frac{\left< \mathcal{H}\Adx(0) \vec{s},\vec{s} \right>}{\left< \vec{s}, \vec{s}\right>} \geq -\norm{\vec{s}} \frac{6 + \norm{\vec{s}} + \norm{\vec{s}}^2}{(1 - \norm{\vec{s}})^3}.
\end{equation*}
Since the fraction at right is increasing in $\norm{\vec{s}}$, we can easily simplify the statement given a better upper bound on $\norm{\vec{s}}$. In particular, for $\norm{\vec{s}} < \nicefrac{1}{50}$, the right-hand side $\geq - 7 \norm{\vec{s}}$.
\label{prop:change in hessian}
\end{proposition}
\begin{proof}
Using Lemma~\ref{lem:gradad and had}, we see that
\begin{multline*}
\left< \mathcal{H}\Adx(\vec{s}) \vec{s}, \vec{s} \right> - \left< \mathcal{H}\Adx(0) \vec{s},\vec{s} \right> = \\
\left(\!\left( \sum \frac{\omega_i}{\norm{\hat{x}_i - \vec{s}}}\right) - 1\right) \left< \vec{s}, \vec{s} \right> -
\sum \omega_i \left( \frac{\left<\hat{x}_i - \vec{s},\vec{s}\right>^2}{\norm{\hat{x}_i-\vec{s}}^3} - \left< \hat{x}_i, \vec{s}\right>^2 \right).
\end{multline*}
Using the estimates $1 - \norm{\vec{s}} \leq \norm{\hat{x}_i - \vec{s}} \leq 1 + \norm{\vec{s}}$ and recalling that $\sum \omega_i = 1$, we can underestimate the right hand side by
\begin{equation*}
-\frac{\norm{\vec{s}}^3}{1 + \norm{\vec{s}}} - \sum \omega_i \left( \frac{\left<\hat{x}_i - \vec{s},\vec{s}\right>^2}{(1 - \norm{\vec{s}})^3} - \left< \hat{x}_i, \vec{s}\right>^2 \right) \geq
-\frac{\norm{\vec{s}}^3}{1 + \norm{\vec{s}}} - \frac{5 \norm{\vec{s}}^3 + \norm{\vec{s}}^4 + \norm{\vec{s}}^5}{(1 - \norm{\vec{s}})^3}
\end{equation*}
where the second inequality follows from finding a common denominator, expanding, and cancelling, using Cauchy--Schwarz carefully to underestimate the inner product terms as needed. Observing that $1 + \norm{\vec{s}} > 1 > (1 - \norm{\vec{s}})^3$ allows us to underestimate $-\nicefrac{\norm{\vec{s}}^3}{1 + \norm{\vec{s}}} \geq -\nicefrac{\norm{\vec{s}}^3}{(1 - \norm{\vec{s}})^3}$, completing the proof. \end{proof}
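The numerical claim for $\norm{\vec{s}} < \nicefrac{1}{50}$ can be verified exactly with rational arithmetic; a small sketch (Python; the grid used for the monotonicity spot-check is an arbitrary choice):

```python
from fractions import Fraction

def drop(s):
    # magnitude of the allowed decrease: s * (6 + s + s^2) / (1 - s)^3
    return s * (6 + s + s * s) / (1 - s) ** 3

# on [0, 1/50] the drop is at most 7 ||s||, so the RHS is >= -7 ||s||
s = Fraction(1, 50)
assert drop(s) < 7 * s

# the factor (6 + s + s^2)/(1 - s)^3 is increasing in s, so checking the
# endpoint suffices; spot-check monotonicity on a grid up to 1/50
grid = [Fraction(k, 1000) for k in range(1, 21)]
factors = [drop(x) / x for x in grid]
assert all(a < b for a, b in zip(factors, factors[1:]))
```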
\subsection{Bounding the norm of the geometric median}
We are now in a position to bound the norm of the geometric median! This will proceed in two stages: first, we'll use the Poincar\'e--Hopf index theorem to show that $\norm{\vec{\mu}} < \nicefrac{1}{50}$ under certain hypotheses. Then we can immediately bootstrap to get a sharper bound.
\begin{proposition}
If $\norm{\sum \omega_i \hat{x}_i} = \|\nabla\!\Adx(\vec{0})\| < \nicefrac{5}{1000}$, $\lambda_{\operatorname{min}}(\mathcal{H}\Adx(\vec{0})) > \frac{d-1}{d} - \frac{1}{100}$, and $d \geq 2$, then $\norm{\vec{\mu}} \leq \nicefrac{1}{50}$.
\label{prop:preparatory bound}
\end{proposition}
\begin{proof}
Given our hypothesis on $\lambda_{\operatorname{min}}$ of the Hessian, we know that the $\hat{x}_i$ are not all collinear. This means that $\vec{\mu}$ is the unique point inside $S^{d-1}$ where the vector field $\nabla\!\Adx$ vanishes. We will now show that $\nabla\!\Adx$ has a zero inside the sphere of radius $\nicefrac{1}{50}$; by uniqueness, this point must be the geometric median.
Along any ray from the origin, we may restrict $\Ad_{\pmb{x}}$ to a scalar function $\Ad_{\pmb{x}}(z)$. Using Proposition~\ref{prop:change in hessian}, on the interval $[0,\nicefrac{1}{50}]$ our hypotheses imply
\begin{equation*}
\Ad_{\pmb{x}}'(0) \geq -\frac{5}{1000} \quad \text{and} \quad \Ad_{\pmb{x}}''(z) \geq \frac{1}{2} - \frac{1}{100} - \frac{7}{50} = \frac{7}{20}.
\end{equation*}
By Taylor's theorem, there is some $z_* \in [0,\nicefrac{1}{50}]$ so that
\begin{equation*}
\Ad_{\pmb{x}}'(\nicefrac{1}{50}) = \Ad_{\pmb{x}}'(0) + \frac{1}{50} \Ad_{\pmb{x}}''(z_*) \geq -\frac{5}{1000} + \frac{1}{50}\cdot \frac{7}{20} = \frac{2}{1000} > 0.
\end{equation*}
This means that the directional derivative of $\Ad_{\pmb{x}}$ in the outward direction is positive on the boundary of the sphere of radius $\nicefrac{1}{50}$, or that $\nabla\!\Adxy$ points outward on this sphere. In particular, this implies that the vector field has index 1 on the sphere, and so by the Poincar\'e--Hopf index theorem must vanish at some point inside the sphere.
\end{proof}
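The small constants in this argument are easy to verify mechanically. As a sanity check (our own illustration, not part of the original argument), exact rational arithmetic confirms both displayed estimates in the worst case $d = 2$:

```python
from fractions import Fraction as F

d = 2  # worst case: (d - 1)/d is smallest when d = 2
# lower bound on the second derivative on [0, 1/50]
second_deriv = F(d - 1, d) - F(1, 100) - F(7, 50)
print(second_deriv)            # 7/20
# Taylor estimate for the outward derivative at radius 1/50
outward_deriv = F(-5, 1000) + F(1, 50) * second_deriv
print(outward_deriv)           # 1/500, i.e. 2/1000 > 0
```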
We can now prove our main theorem.
\begin{theorem}
If we have $n$ points $\hat{x}_i$ sampled uniformly on $S^{d-1}$ ($d \geq 2$), $n$ weights $\omega_i > 0$ so that $\sum \omega_i = 1$, and $\max \omega_i = \Omega$, then for any $t < \nicefrac{5}{1000}$ we have
\begin{equation}
\mathcal{P}\left( \norm{\vec{\mu}} < \frac{t}{\frac{d-1}{d} - \frac{3}{20}} \right) \geq
1 - 2 d \exp\left(- \frac{3 n t^2}{
2 n t \Omega \sqrt{d} + 6 (1 + n^2 \operatorname{Var} \omega_i)} \right).
\label{eq:main bound}
\end{equation}
If all the $\omega_i$ are equal (the polygon is equilateral), then
\begin{equation*}
\mathcal{P}\left( \norm{\vec{\mu}} < \frac{t}{\frac{d-1}{d} - \frac{3}{20}} \right) \geq 1 - 2 d \exp\left( - \frac{3 n t^2}{2 \sqrt{d} t + 6} \right).
\end{equation*}
For $d = 3$, we have the further simplification
\begin{equation*}
\mathcal{P}\left( \norm{\vec{\mu}} < t \right) \geq 1 - 6 \exp\left( -\nicefrac{n t^2}{9} \right).
\end{equation*}
\label{thm:main}
\end{theorem}
\begin{proof}
We first define two random events: $\lambda_{\operatorname{min}}(\mathcal{H}\Adx(\vec{0})) > \frac{d-1}{d} - \frac{1}{100}$ (event $A$) and $\norm{\nabla\!\Adx(\vec{0})} < t < \nicefrac{5}{1000}$ (event $B$), which will happen for some choices of $\hat{x}_i$. Suppose both events occur.
As in Proposition~\ref{prop:preparatory bound}, we restrict $\Ad_{\pmb{x}}$ to a scalar function $\Ad_{\pmb{x}}(z)$ on a ray; this time, the ray is assumed to pass through $\vec{\mu}$. By Taylor's theorem, if we evaluate at $z = \norm{\vec{\mu}}$, there is some $0\leq z_* \leq \norm{\vec{\mu}}$ so that
\begin{equation}
0 = \Ad_{\pmb{x}}'(\norm{\vec{\mu}}) = \Ad_{\pmb{x}}'(0) + \norm{\vec{\mu}} \Ad_{\pmb{x}}''(z_*).
\label{eq:taylor theorem setup}
\end{equation}
Since we are assuming $A \land B$, the hypotheses of~Proposition~\ref{prop:preparatory bound} are satisfied and $\norm{\vec{\mu}} < \nicefrac{1}{50}$. In turn, this means that Proposition~\ref{prop:change in hessian} holds at $z_*$, and
\begin{equation*}
\Ad_{\pmb{x}}''(z_*) \geq \Ad_{\pmb{x}}''(0) - \nicefrac{7}{50} \geq \lambda_{\operatorname{min}}(\mathcal{H}\Adx(\vec{0})) - \nicefrac{7}{50}.
\end{equation*}
Since $A$, we have $\Ad_{\pmb{x}}''(z_*) > \frac{d-1}{d} - \frac{3}{20}$.
As before, since $\Ad_{\pmb{x}}'(\vec{0})$ is a directional derivative, it satisfies $\Ad_{\pmb{x}}'(0) \geq -\|\nabla\!\Adx(\vec{0})\| > -t$. We can plug both estimates into~\eqref{eq:taylor theorem setup} and solve for $\norm{\vec{\mu}}$, obtaining
\begin{equation*}
\norm{\vec{\mu}} < \frac{t}{\frac{d-1}{d} - \frac{3}{20}}.
\end{equation*}
If we call this event $C$, we have shown that $A \land B \implies C$, and hence that $\mathcal{P}(C) \geq \mathcal{P}(A \land B)$. This means that
\begin{equation}
\mathcal{P}(\lnot C) \leq \mathcal{P}(\lnot(A \land B)) = \mathcal{P}(\lnot A \lor \lnot B) \leq \mathcal{P}(\lnot A) + \mathcal{P}(\lnot B).
\label{eq:logic}
\end{equation}
Now $\mathcal{P}(\lnot A)$ was bounded above in Proposition~\ref{prop:had zero}, while $\mathcal{P}(\lnot B)$ was bounded above in Proposition~\ref{prop:ftc}. We now compare these upper bounds, noting that we have chosen $t_* = \frac{1}{100}$ in the statement of Proposition~\ref{prop:had zero} while the $t$ in Proposition~\ref{prop:ftc} is smaller -- less than $\frac{5}{1000} = \frac{1}{200}$. The bounds are
\begin{equation*}
d \exp\left( -\frac{d}{d-1} \cdot \frac{3 d n t_*^2}{2 n
t_* \Omega d +6(1 + n^2 \operatorname{Var} \omega_i)} \right) \quad\text{and}\quad
d \exp\left(- \frac{3 n t^2}{
2 n t \Omega \sqrt{d} + 6 (1 + n^2 \operatorname{Var} \omega_i)} \right).
\end{equation*}
Of course, it suffices to compare the absolute values of the fractions inside the exponential functions (since both are negative). We can simplify the comparison by rewriting these as
\begin{equation*}
\frac{d}{d-1} \cdot \frac{3 n t_*}{2 n
\Omega + \frac{6(1 + n^2 \operatorname{Var} \omega_i)}{d t_*}}
\quad\text{and}\quad
\frac{3 n t}{
2 n \Omega \sqrt{d} + \frac{6 (1 + n^2 \operatorname{Var} \omega_i)}{t}}.
\end{equation*}
It is now evident that if we compare the right fraction with the second fraction on the left, the numerator on the right is smaller and each term in the denominator is larger (recall $t < t_*$). Multiplying by $\frac{d}{d-1} > 1$ makes the left hand side even larger. Restoring the minus sign reverses this conclusion, and we see that our bound on $\mathcal{P}(\lnot B)$ is larger than our bound on $\mathcal{P}(\lnot A)$, as claimed. Returning this conclusion to~\eqref{eq:logic}, we see $\mathcal{P}(\lnot C) \leq 2 \, \mathcal{P}(\lnot B)$, which is the first statement of the Theorem.
The simplification when all the $\omega_i = \nicefrac{1}{n}$ is an immediate consequence. To simplify to $d=3$, we observe that $\frac{t}{\nicefrac{2}{3} - \nicefrac{3}{20}} = \frac{60}{31} t$; substituting $t \rightarrow \frac{31}{60} t$ on the right hand side yields an expression of the form $1 - 6 \exp(-f(t) n t^2)$, where $f(t)$ is a rational function bounded below by $\nicefrac{1}{9}$ for $t \in [0,\nicefrac{5}{1000}]$.
\end{proof}
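The final claim about $f(t)$ can be checked numerically. In the sketch below (our own verification code, not from the paper), we read off $f(t) = 3(\nicefrac{31}{60})^2 / (2\sqrt{3}\,(\nicefrac{31}{60})\,t + 6)$ as the coefficient produced by substituting $t \rightarrow \frac{31}{60}t$ into the equilateral $d = 3$ bound, and scan the stated interval:

```python
import math

def f(t):
    # coefficient of n t^2 in the exponent after substituting t -> (31/60) t
    # into the equilateral d = 3 bound 1 - 6 exp(-3 n t^2 / (2 sqrt(3) t + 6))
    s = 31.0 / 60.0
    return 3.0 * s * s / (2.0 * math.sqrt(3.0) * s * t + 6.0)

ts = [k * 0.005 / 1000 for k in range(1001)]   # grid on [0, 5/1000]
fmin = min(f(t) for t in ts)
print(fmin >= 1.0 / 9.0)   # True: f stays above 1/9 on the interval
```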
We now make a few remarks. First, if you carefully examine Proposition~\ref{prop:change in hessian}, the lower bound on $\Ad_{\pmb{x}}''(z)$ improves as $z \rightarrow 0$. One can wring some extra information out of this, but the improvement in the final bound is minimal. Similarly, it is clear that one could set $t_* < \frac{1}{100}$ in our bound on $\mathcal{P}(\lnot A)$ without losing the conclusion, as long as $t < t_*$. Again, this does not significantly improve things.
\section{Distances and Angles}
We now want to restate our main Theorem~\ref{thm:main} in terms of the chordal and max-angular distance from a random arm to the nearest closed polygon using Proposition~\ref{prop:distance bound}.
\begin{corollary}
If we have $n$ points $\hat{x}_i$ sampled uniformly on $S^{d-1}$ ($d \geq 2$), $n$ weights $\omega_i > 0$ so that $\sum \omega_i = 1$, and $\max \omega_i = \Omega$, then for any $t < \nicefrac{5}{1000} \cdot \nicefrac{1}{\sqrt{2\sum\omega_i^2}}$ we have
\begin{equation*}
\mathcal{P}\left(d_\text{chordal}(\pmb{x},\operatorname{Pol}(n,d,w)) < \frac{t}{\frac{d-1}{d} - \frac{3}{20}}\right) \geq 1 - 2 d \exp\left( \frac{-3 t^2}{3 + t \Omega \sqrt{\frac{2 d n}{1 + n^2 \operatorname{Var} \omega_i}}} \right).
\end{equation*}
If all the $\omega_i$ are equal (the polygon is equilateral), for $t < \nicefrac{5}{1000} \cdot \sqrt{\nicefrac{n}{2}}$ we have
\begin{equation*}
\mathcal{P}\left(d_\text{chordal}(\pmb{x},\operatorname{Pol}(n,d,1)) < \frac{t}{\frac{d-1}{d} - \frac{3}{20}}\right) \geq 1 - 2 d \exp\left( \frac{-t^2}{1 + \frac{\sqrt{d}}{600}} \right).
\end{equation*}
In dimension 3, this simplifies (again, for $t < \nicefrac{5}{1000} \cdot \sqrt{\nicefrac{n}{2}}$), as
\begin{equation*}
\mathcal{P}\left(d_\text{chordal}(\pmb{x},\operatorname{Pol}(n,3,1)) < t \right) \geq 1 - 6 \exp\left( \nicefrac{-t^2}{4} \right).
\end{equation*}
\label{cor:chordal concentration}
\end{corollary}
\begin{figure}[t!]
\centering
\includegraphics[width=2.5in]{ChordalDistanceBoundPlot-2.pdf}\hspace{.4in}
\includegraphics[width=2.5in]{ChordalDistanceBoundPlot-3.pdf}
\vspace{.2in}
\includegraphics[width=2.5in]{ChordalDistanceBoundPlot-4.pdf}\hspace{.4in}
\includegraphics[width=2.5in]{ChordalDistanceBoundPlot-10.pdf}
\caption{For $d=2,3,4,10$, we generated 250,000 random elements of $\operatorname{Arm}(10,d,1)$. The plots show the implied bound from Corollary~\ref{cor:chordal concentration} (solid), the empirical CDF of chordal distance to closure for those samples which were median-closeable (dots), and the CDF of the Nakagami$\left(\frac{d}{2},\frac{d}{d-1}\right)$ distribution (dashed) given by Proposition~\ref{prop:dchordal asymptotics} for the large-$n$ limit (which is only slightly different, even though $n=10$ is quite small). Though the hypotheses of Corollary~\ref{cor:chordal concentration} are only satisfied when $t<\frac{5}{1000}\sqrt{5}\approx 0.01118$, the data strongly suggests that the bound is valid on a much larger range. We see from the plots that the bound cannot be dramatically improved.}
\label{fig:bound vs data}
\end{figure}
The problem with Corollary~\ref{cor:chordal concentration} is that the hypotheses (on $t$) are disappointingly restrictive: for $\operatorname{Arm}(n,3,1)$, we need $n > 538,519$ to extend the domain of $t$ to the point where the right-hand side becomes positive! On the other hand, numerical experiments (Figure~\ref{fig:bound vs data}) comparing our bounds to experimental data and to the large-$n$ Nakagami distribution proved in Proposition~\ref{prop:dchordal asymptotics} show that the conclusions of Corollary~\ref{cor:chordal concentration} cannot be made much stronger. Further, these experiments suggest that, at least in the equilateral case, one should be able to entirely remove the upper bound on $t$ -- we leave this as
\begin{conjecture}
The conclusions of Corollary~\ref{cor:chordal concentration} hold for any $t > 0$.
\end{conjecture}
We now proceed to prove Corollary~\ref{cor:chordal concentration}.
\begin{proof}[Proof of Corollary~\ref{cor:chordal concentration}]
Proposition~\ref{prop:distance bound} tells us $d_\text{chordal}(\pmb{x},\operatorname{Pol}(n,d,w))<\sqrt{2 \sum \omega_i^2} \norm{\vec{\mu}}$, so to get a bound on the probability that $d_\text{chordal}(\pmb{x},\operatorname{Pol}(n,d,w)) < \frac{t}{\frac{d-1}{d} - \frac{3}{20}}$ we need to
make the substitution $t \rightarrow t \sqrt{2 \sum \omega_i^2}$ on the right hand side of~\eqref{eq:main bound}. Recalling that Lemma~\ref{lem:mysteryweight} shows $\sum \omega_i^2 = \frac{1 + n^2 \operatorname{Var} \omega_i}{n}$ and carefully simplifying yields the first result.
For the second result, it follows immediately from the assumption that $\omega_i = \frac{1}{n}$ that the first result simplifies to
\begin{equation*}
\mathcal{P}\left(d_\text{chordal}(\pmb{x},\operatorname{Pol}(n,d,1)) < \frac{t}{\frac{d-1}{d} - \frac{3}{20}}\right) \geq 1 - 2 d \exp\left( \frac{-3 t^2}{3 + t\sqrt{\frac{2 d}{n}}} \right).
\end{equation*}
Using our upper bound on $t$, we see that the right hand side obeys
\begin{equation*}
1 - 2 d \exp\left( \frac{-3 t^2}{3 + t\sqrt{\frac{2 d}{n}}} \right) >
1 - 2 d \exp\left( \frac{-3 t^2}{3 + \frac{\sqrt{d}}{200}} \right)
\end{equation*}
which immediately implies the second result.
For the third result, we simplify the fraction on the left hand side and substitute $t \rightarrow \frac{31}{60} t$ as we did above in the simplification of Theorem~\ref{thm:main}; the complicated constant that results as the coefficient of $t^2$ in the exponent is slightly less than $-\nicefrac{1}{4}$.
\end{proof}
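The "complicated constant" in that last step can be checked directly (again our own verification, not from the paper; we take the coefficient to be $(\nicefrac{31}{60})^2 / (1 + \nicefrac{\sqrt{3}}{600})$, obtained by substituting $t \rightarrow \frac{31}{60}t$ into the second result at $d = 3$):

```python
import math

coeff = (31.0 / 60.0) ** 2 / (1.0 + math.sqrt(3.0) / 600.0)
print(coeff)          # about 0.2662: the exponent coefficient -coeff
print(coeff > 0.25)   # is indeed slightly less than -1/4, as claimed
```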
The statements for the maximum angular change in edge direction are similar, but somewhat easier to prove because the relationship between $\norm{\vec{\mu}}$ and the max-angular distance is simpler.
\begin{corollary}
If we have $n$ points $\hat{x}_i$ sampled uniformly on $S^{d-1}$ ($d \geq 2$), $n$ weights $\omega_i > 0$ so that $\sum \omega_i = 1$, and $\max \omega_i = \Omega$, then for any $t < \nicefrac{5}{1000}$ we have
\begin{equation*}
\mathcal{P}\left(d_\text{max-angular}(\pmb{x},\operatorname{Pol}(n,d,w)) < \frac{t}{\frac{d-1}{d} - \frac{3}{20}}\right) \geq 1 - 2 d \exp\left(- \frac{13 n t^2}{
9 n t \Omega \sqrt{d} + 30 (1 + n^2 \operatorname{Var} \omega_i)} \right).
\end{equation*}
If all the $\omega_i$ are equal (the polygon is equilateral), for $t < \nicefrac{5}{1000}$ we have
\begin{equation*}
\mathcal{P}\left(d_\text{max-angular}(\pmb{x},\operatorname{Pol}(n,d,1)) < \frac{t}{\frac{d-1}{d} - \frac{3}{20}}\right) \geq 1 - 2 d \exp\left( \frac{-26 n t^2}{60 + \frac{9\sqrt{d}}{100}} \right).
\end{equation*}
In dimension 3, this simplifies (again, for $t < \nicefrac{5}{1000}$), as
\begin{equation*}
\mathcal{P}\left(d_\text{max-angular}(\pmb{x},\operatorname{Pol}(n,3,1)) < t \right) \geq 1 - 6 \exp\left( -\nicefrac{n t^2}{9} \right).
\end{equation*}
\label{cor:angular concentration}
\end{corollary}
\begin{proof}
We know from Proposition~\ref{prop:distance bound} that $d_\text{max-angular}(\pmb{x},\operatorname{Pol}(n,d,w)) < \arcsin \norm{\vec{\mu}}$. Since we're only going to apply this bound when $\norm{\vec{\mu}} < \frac{t}{\frac{d-1}{d} - \frac{3}{20}} < \frac{1}{70}$ (since $d \geq 2$ and $t \leq \nicefrac{5}{1000}$), we can safely make the overestimate $\arcsin \norm{\vec{\mu}} \leq \frac{14}{13} \norm{\vec{\mu}}$.
Substituting $t \rightarrow \frac{13}{14}t$ in~\eqref{eq:main bound} leads us to replace $3$ by $3 (\frac{13}{14})^2 < 2.6$ in the coefficient of $n t^2$ in the numerator and $2$ by $2 (\nicefrac{13}{14}) > 1.8$ in the coefficient of $t$ in the denominator. Simplifying gives us the first statement.
To reach the second statement, we first observe that $\omega_i = \frac{1}{n}$ means $\operatorname{Var} \omega_i = 0$ and $\Omega = \frac{1}{n}$. Substituting these into the first statement (and overestimating the $t$ in the denominator by $\nicefrac{5}{1000}$) yields the result.
Finally, the third statement (as in the proof of Corollary~\ref{cor:chordal concentration}) requires us to substitute $t \rightarrow \frac{31}{60} t$ to simplify the left-hand side. The resulting complicated coefficient of $n t^2$ on the right-hand side is about $-0.115521 < -\nicefrac{1}{9}$.
\end{proof}
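The overestimate $\arcsin \norm{\vec{\mu}} \leq \frac{14}{13} \norm{\vec{\mu}}$ used at the start of this proof is also easy to confirm numerically (a quick check of our own; on $[0, \nicefrac{1}{70}]$ the bound holds with plenty of room to spare):

```python
import math

xs = [k / 70.0 / 1000.0 for k in range(1, 1001)]   # grid on (0, 1/70]
worst_ratio = max(math.asin(x) / x for x in xs)
print(worst_ratio < 14.0 / 13.0)   # True: arcsin x <= (14/13) x on the interval
```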
\section{Discussion}
\begin{figure}[t!]
\centering
\includegraphics[width=6.1in]{TrefoilRandomWalk.png}
\caption{A 10,000 step equilateral arm in $\mathbb{R}^3$ containing a small trefoil (top left) and its geometric median closure (bottom right). The intermediate images show equally-spaced points along the geodesic between the arm and its closure in $\operatorname{Arm}(10,\!000,3,1)$. The arm's failure to close is $\approx 101.118$ and the geometric median has norm $\|\vec{\mu}\|\approx 0.0151318$. The chordal distance between the arm and its closure is $\approx 1.23696$ and $d_\text{max-angular}\approx 0.0151324$, which agrees with the bound $\arcsin \|\vec{\mu}\|$ to eleven decimal places.}
\label{fig:trefoil closure}
\end{figure}
From Corollary~\ref{cor:angular concentration} we see that closing a random arm is unlikely to change any edge very much. In particular, we should expect local features to be preserved by closure, as in the case of the local trefoil knot shown in \figr{trefoil closure}. This suggests that closing up an arm is unlikely to destroy any local knots: in other words, the probability of local knotting in the standard measure on $\operatorname{Arm}(n,3,w)$ should be essentially the same as the probability of local knotting in the pushforward measure on $\operatorname{Pol}(n,3,w)$ via the map $\pmb{x} \mapsto \operatorname{gmc}(\pmb{x})$.
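To make the closure map concrete, here is a minimal sketch of $\operatorname{gmc}$ in the equilateral case (our own illustrative implementation, using Weiszfeld's algorithm with an ad hoc iteration count, consistent with the construction described here): because the gradient of the sum of distances vanishes at the geometric median $\vec{\mu}$, the recentered and renormalized edges $(\hat{x}_i - \vec{\mu})/\norm{\hat{x}_i - \vec{\mu}}$ sum to zero, producing a closed equilateral polygon.

```python
import math
import random

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def geometric_median(points, iters=2000):
    # Weiszfeld iteration, started at the centroid of the points
    dim = len(points[0])
    mu = tuple(sum(p[i] for p in points) / len(points) for i in range(dim))
    for _ in range(iters):
        wts = [1.0 / math.dist(p, mu) for p in points]
        total = sum(wts)
        mu = tuple(sum(w * p[i] for w, p in zip(wts, points)) / total
                   for i in range(dim))
    return mu

def gmc(edges):
    # recenter each unit edge at the geometric median and renormalize;
    # the gradient condition at the median forces the new edges to sum to zero
    mu = geometric_median(edges)
    return [normalize(tuple(e[i] - mu[i] for i in range(len(mu)))) for e in edges]

rng = random.Random(2018)
arm = [normalize(tuple(rng.gauss(0.0, 1.0) for _ in range(3))) for _ in range(50)]
closed = gmc(arm)
defect = math.sqrt(sum(sum(e[i] for e in closed) ** 2 for i in range(3)))
print(defect < 1e-6)   # True: the closure defect is numerically zero
```

The edge lengths are untouched by the renormalization, so the output is again equilateral; only the edge directions move, each by roughly $\norm{\vec{\mu}}$.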
Of course, this map is not defined on all of $\operatorname{Arm}(n,d,w)$, but we know from \thm{main} that it is defined on all but an exponentially small fraction of $\operatorname{Arm}(n,d,w)=\prod S^{d-1}(w_i)$; pushing forward the restriction of the product measure to the domain of $\operatorname{gmc}$ produces what we'll call the \emph{pushforward measure} on $\operatorname{Pol}(n,d,w)$. On the other hand, the standard probability measure on $\operatorname{Pol}(n,d,w)$ is simply the volume measure induced by the Riemannian metric it inherits from $\operatorname{Arm}(n,d,w)$. Since we've seen in Corollaries~\ref{cor:chordal concentration} and~\ref{cor:angular concentration} that almost all of $\operatorname{Arm}(n,d,w)$ is within a fixed distance of $\operatorname{Pol}(n,d,w)$, it is reasonable to expect that this pushforward measure is close to the standard measure.
Indeed, this seems to be true. Rayleigh~\cite{Rayleigh:1919do} showed that the distribution of end-to-end distances in a random element of $\operatorname{Arm}(n,3,1)$ is
\begin{equation*}
\Phi_n(\ell) = \frac{1}{2 \pi^2 \ell} \int_0^\infty x \sin \ell x \operatorname{sinc}^n x\, dx.
\end{equation*}
We note that a closed form for $\Phi_n$ is classical (see~\cite[2.181]{hughes1995random}). Since a random closed polygon is formed from two random arms, conditioned on the hypothesis that their end-to-end distances are the same, the pdf of the length of the chord connecting vertices $0$ and $k$ in a polygon of $n$ edges turns out to be given by
\begin{equation*}
\operatorname{Chord}_{n,k}(\ell) = \frac{1}{C(n)} 4 \pi \ell^2 \Phi_k(\ell) \Phi_{n-k}(\ell),
\end{equation*}
where the factor of $4 \pi \ell^2$ comes from the fact that vertex $k$ lies on a sphere of radius $\ell$ and $C(n)$ is the volume of polygon space (which is known; see~\cite{Cantarella:2013wl} for an identification between polygon space and a certain polytope which yields an explicit, though complicated, formula for $C(n)$).
\begin{figure}[t!]
\centering
\includegraphics[width=2.5in]{quadrilateral-gmsample-histogram-bw.pdf}\qquad\qquad\qquad
\includegraphics[width=2.5in]{pentagon-gmsample-histogram-bw.pdf}
\caption{For $n=4,5$, we generated 1,000,000 random equilateral $n$-edge arms in $\mathbb{R}^3$, computed their geometric median closures (when they existed), and then computed the distance from the first to the third vertex in the resulting closed $n$-gon. The histograms show the resulting distributions of chordlengths as well as the density of the chordlength for the standard distribution on $\operatorname{Pol}(n,3,1)$. Closure failed for 2474 quadrilaterals and for 117 pentagons.}
\label{fig:quadrilateral and pentagon chordlengths}
\end{figure}
Therefore, the extent to which the distributions of the chordlengths match $\operatorname{Chord}_{n,k}$ gives a sense of how close a given distribution on $\operatorname{Pol}(n,3,w)$ is to the standard one. For $n=4$ and $5$, we can see in \figr{quadrilateral and pentagon chordlengths} that the pushforward measure from $\operatorname{Arm}(n,3,1)$ is not particularly close to the standard measure. However, as $n$ increases these statistics cannot distinguish between the pushforward measure and the standard measure; see \figr{10-gon chordlengths}.
\begin{figure}[t!]
\centering
\includegraphics[width=1.4in]{decagon-gmsample-3-bw.pdf}
\includegraphics[width=1.4in]{decagon-gmsample-4-bw.pdf}
\includegraphics[width=1.4in]{decagon-gmsample-5-bw.pdf}
\includegraphics[width=1.4in]{decagon-gmsample-6-bw.pdf}
\vspace{.2in}
\includegraphics[width=1.4in]{decagon-gmsample-7-bw.pdf}
\includegraphics[width=1.4in]{decagon-gmsample-8-bw.pdf}
\includegraphics[width=1.4in]{decagon-gmsample-9-bw.pdf}
\caption{We generated 1,000,000 random equilateral 10-edge arms in $\mathbb{R}^3$. All 1,000,000 had geometric median closures, and these are the histograms of distances from the first vertex to the $i$th vertex in the resulting closed 10-gons, along with the chordlength densities for the standard measure on equilateral 10-gons.}
\label{fig:10-gon chordlengths}
\end{figure}
\begin{conjecture}
As $n \to \infty$, the pushforward measure from $\operatorname{Arm}(n,d,w)$ to $\operatorname{Pol}(n,d,w)$ converges to the standard measure.
\label{conj:pushforward}
\end{conjecture}
If this conjecture holds, then, at least for large $n$, random elements of $\operatorname{Pol}(n,d,w)$ look essentially like geometric median closures of random elements of $\operatorname{Arm}(n,d,w)$. Since Corollary~\ref{cor:angular concentration} implies that individual edges are practically unchanged by closure, this would mean that \emph{all} local phenomena happen at essentially the same rate in $\operatorname{Arm}(n,d,w)$ and $\operatorname{Pol}(n,d,w)$.
When $d=3$, a particularly important local phenomenon is that of local knotting. Say that a subsegment $\varsigma$ of $\pmb{x} \in \operatorname{Arm}(n,3,w)$ is an \emph{$r$-local knot} if it only intersects the boundary of a ball $B$ of radius $r$ at its endpoints and $(B,\varsigma)$ forms a knotted ball-arc pair. Let $K^\text{Arm}(n,w,k,r)$ be the probability that a length-$k$ arc of a random element of $\operatorname{Arm}(n,3,w)$ is an $r$-local knot, and similarly for $K^\text{Pol}(n,w,k,r)$.
\begin{conjecture}
For small $r$ and large $n$ and for $k \ll n$, $K^\text{Arm}(n,w,k,r)\simeq K^\text{Pol}(n,w,k,r)$.
\label{conj:local knotting}
\end{conjecture}
\section*{Acknowledgements}
This paper is a contribution to the Festschrift for Stu Whittington, a giant in the area of random polymers and random knots. We are indebted to Stu for years of insightful talks, perceptive questions, and remarkable mathematical results. His interest, enthusiasm, and explanation of the importance of these questions to the polymer science community have shaped our mathematical trajectory more than we can say.
We are also grateful for the continued support of the Simons Foundation (\#524120 to Cantarella, \#354225 to Shonkwiler), the German Research Foundation (DFG-Grant RE 3930/1--1, to Reiter), and the organizers of the ``Workshop on Topological Knots and Polymers'' at Ochanomizu University, where key steps in the present work came together. In particular, we are indebted to Cristian Micheletti, Tetsuo Deguchi, Rob Kusner (for reducing everything to conformal geometry yet again!), Eric Rawdon, and Erik Schreyer for many helpful conversations. As always, we look to Yuanan Diao for inspiration -- one of the motivations for this paper was the desire to find an alternate proof of~\cite{Diao:1995iw}.
% https://arxiv.org/abs/2209.11517
\title{An order-theoretic perspective on modes and maximum a posteriori estimation in Bayesian inverse problems}
\begin{abstract}
It is often desirable to summarise a probability measure on a space $X$ in terms of a mode, or MAP estimator, i.e.\ a point of maximum probability. Such points can be rigorously defined using masses of metric balls in the small-radius limit. However, the theory is not entirely straightforward: the literature contains multiple notions of mode and various examples of pathological measures that have no mode in any sense. Since the masses of balls induce natural orderings on the points of $X$, this article aims to shed light on some of the problems in non-parametric MAP estimation by taking an order-theoretic perspective, which appears to be a new one in the inverse problems community. This point of view opens up attractive proof strategies based upon the Cantor and Kuratowski intersection theorems; it also reveals that many of the pathologies arise from the distinction between greatest and maximal elements of an order, and from the existence of incomparable elements of $X$, which we show can be dense in $X$, even for an absolutely continuous measure on $X = \mathbb{R}$.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
In diverse applications such as statistical inference and the analysis of transition paths of random dynamical systems it is desirable to summarise a complicated probability measure $\mu$ on a space $X$ by a single distinguished point $x^{\star} \in X$ that is, in some sense, a ``point of maximum probability'' under $\mu$ --- i.e.\ a \emph{mode} or, in the Bayesian context, a \emph{maximum a post\-eriori} (\emph{MAP}) \emph{estimator}.
Many optimisation-based approaches to inverse problems (e.g.\ Tikhonov regularisation of the misfit) aim to calculate or approximate modes, at least heuristically understood.
Over the last decade, it has become common to define modes in terms of masses of metric balls in the limit as the ball radius tends to zero, since this makes sense even when $X$ is a very general --- possibly infinite-dimensional --- space, as is often the case for modern inference problems \citep{Stuart2010}.
However, this ``small balls'' theory of modes is not entirely straightforward.
There are various definitions --- e.g.\ the strong mode of \citet{DashtiLawStuartVoss2013}, the generalised strong mode of \citet{Clason2019GeneralizedModes}, the weak mode of \citet{HelinBurger2015} --- with various subtle distinctions among them.
Furthermore, even the existence theory for modes is not entirely straightforward:
there are already examples in the literature, and this article will supply further examples, of relatively simple probability measures that have no mode.
Indeed, even the sum or average of two disjointly supported unimodal probability measures may have no mode.
The purpose of this article is to formulate the notion of a mode in an order-theoretic manner and thereby to clarify some of these pathologies in the theory of modes.
We claim that this is a natural step to take in view of the heuristic understanding of modes as ``most probable points''.
With an order-theoretic point of view, many of the difficulties can be seen to arise from the distinction between greatest and maximal elements of an ordered set\footnote{For the sake of having simple introductory remarks, we gloss over the fact that this article actually works with \emph{preorders} and not with orders.
In a preorder, as opposed to an order, $x \preceq x' \preceq x$ does not imply that $x = x'$.} $(X, \preceq)$ when the order $\preceq$ is not total, i.e.\ when there exist $x, x' \in X$ for which neither $x \preceq x'$ nor $x \succeq x'$ holds.
Motivated by the needs of inverse problems theory, current notions of modes correspond to greatest elements.
However, many ordered spaces lack maximal elements, and even those that have maximal elements may lack greatest elements;
this is exactly the situation of the examples discussed in \Cref{thm:oscillation_example,thm:countable_dense_antichain,thm:countable_dense_antichain_hilbert}.
Thus, one might argue that current notions of mode are order-theoretically ``too strong'', and perhaps maximal elements should be considered as modes, but that these are ``too weak'' for the needs of applications communities.
We hope that the present article will stimulate discussion on this point.
\paragraph{Outline of the paper.}
The rest of this paper is structured as follows:
\Cref{sec:notation} sets out basic notation for the rest of the paper, including a brief recap of necessary concepts from functional analysis, measure theory, and order theory.
\Cref{sec:related} gives an overview of related work in this area, in particular the ``small balls'' approach to defining MAP estimators for non-parametric statistical inverse problems.
\Cref{sec:positive-radius_preorder} introduces and analyses the total preorder $\preceq_{r}$ on $X$ induced by the $\mu$-measures of metric balls of fixed radius $r > 0$.
Because the preorder $\preceq_{r}$ is total, its maximal elements are also greatest, and can be seen as approximate ``radius-$r$ modes'' for $\mu$.
We are able to provide several criteria for the existence of such radius-$r$ modes $x_{r}^{\star}$ (\Cref{thm:r_greatest}) as well as examples of measures that admit none (\Cref{eg:no_radius_1_mode,eg:no_radius_r_mode}).
As a prelude to the next section, we also consider the convergence of $x_{r}^{\star}$ as $r \to 0$ (\Cref{thm:limits_of_r-modes,thm:nesting_yields_GWM}).
In \Cref{sec:limiting_preorders} we attempt to take the limit as $r \to 0$ of the preorders $\preceq_{r}$ to define a preorder $\preceq_{0}$ whose greatest elements will be weak modes of $\mu$.
However, because the preorder $\preceq_{0}$ is not total, the distinction between greatest and maximal elements becomes important.
Incomparable maximal elements are particularly troubling because their maximality means that one would like to think of them as candidate modes, yet their incomparability means that one cannot actually say which is ``most probable'' and hence a bona fide mode, as in \Cref{thm:oscillation_example}.
We show that antichains (collections of mutually incomparable elements) can be topologically dense in $X$ even when $\mu$ is absolutely continuous with respect to Lebesgue measure on $X \subseteq \mathbb{R}$ (\Cref{thm:countable_dense_antichain}) or a non-degenerate Gaussian measure on a separable Hilbert space $X$ (\Cref{thm:countable_dense_antichain_hilbert}).
\Cref{sec:conclusion} gives some closing remarks, while technical supporting results can be found in \Cref{sec:technical}, and \Cref{sec:alternative_small-radius_preorders} discusses some alternative limiting preorders to the preorder $\preceq_{0}$ of \Cref{sec:limiting_preorders} and illustrates their shortcomings.
\section{Problem setting and notation}
\label{sec:notation}
\subsection{Spaces of interest}
Throughout, unless noted otherwise, $X$ will be a metric space with metric $d$;
we write $\Borel{X, d}$ or simply $\Borel{X}$ for its Borel $\sigma$-algebra, i.e.\ the one generated by the closed balls $\cball{x}{r} \coloneqq \set{x' \in X}{d(x, x') \leq r}$, $x \in X$, $r \geq 0$;
we also write $\oball{x}{r} \coloneqq \set{x' \in X}{d(x, x') < r}$ for the corresponding open ball.
We will often assume that $X$ is complete and separable, and occasionally we will specialise to the case of $X$ being a separable Banach or Hilbert space.
\subsection{Measures of non-compactness and intersection theorems}
Our approach in \Cref{sec:positive-radius_preorder} will make much use of measures of non-compactness and intersection theorems;
see \citet[Sections~7.5--7.8]{MalkowskyRakocevic2019} for a thorough treatment.
Given $A \subseteq X$, its \emph{separation} (or \emph{Istr\u{a}\c{t}escu}) \emph{measure of non-compactness} $\gamma(A)$ is defined by
\begin{equation}
\label{eq:separation_measure_nc}
\gamma(A) \coloneqq \inf \Set{ r \geq 0 }{ \text{there is no $(x_{n})_{n \in \mathbb{N}} \subseteq A$ with } \inf_{\substack{ m, n \in \mathbb{N} \\ m \neq n }} d(x_{m}, x_{n}) \geq r } .
\end{equation}
This is an increasing function with respect to inclusion of sets, is finite precisely when $A$ is bounded, and is zero precisely when $A$ is precompact.
The function $\gamma$ is bi-Lipschitz equivalent with several other measures of non-compactness such as the set (or Kuratowski) measure of non-compactness and the ball (or Hausdorff) measure of non-compactness.
\begin{theorem}[Generalised intersection theorem]
\label{thm:intersection_theorem}
Let $(A_{n})_{n \in \mathbb{N}}$ be a decreasingly nested sequence of non-empty, closed subsets of a topological space $X$ and let $A \coloneqq \bigcap_{n \in \mathbb{N}} A_{n}$.
\begin{enumerate}[label=(\alph*)]
\item
\label{item:intersection_theorem_Cantor_1}
(Cantor)
If each $A_{n}$ is compact, then $A$ is non-empty.
If $X$ is Hausdorff, then $A$ is also compact.
\item
\label{item:intersection_theorem_Cantor_2}
(Cantor)
If $X$ is a complete metric space and $\mathop{\textup{diam}}\nolimits(A_{n}) \to 0$ as $n \to \infty$, then $A$ is a singleton.
\item
\label{item:intersection_theorem_Kuratowski}
(Kuratowski)
If $X$ is a complete metric space and $\gamma(A_{n}) \to 0$ as $n \to \infty$, then $A$ is non-empty and compact.
\end{enumerate}
\end{theorem}
\subsection{Measure-theoretic concepts}
Given a metric space $X$, $\prob{X}$ denotes the set of all probability measures on $\Borel{X}$.
We denote the absolute continuity of $\mu$ with respect to $\nu$ by $\mu \ll \nu$.
The \emph{topological support} of $\mu \in \prob{X}$ is
\begin{equation}
\label{eq:supp_mu}
\supp(\mu) \coloneqq \set{ x \in X }{ \text{for all $r > 0$, } \crcdf{x}{r} > 0 } ,
\end{equation}
which is always a closed subset of $X$, and is non-empty when $X$ is separable (or, equivalently, second countable or Lindel\"of) \citep[Theorem~12.14]{AliprantisBorder2006}.
The quantity $\crcdf{x}{r}$ will play a major role in this work, especially when thought of as a function of $r > 0$ for various choices of $x \in X$;
we shall call the map $r \mapsto \crcdf{x}{r}$ the \emph{radial cumulative distribution function} (RCDF) and some of its key properties are given in \Cref{lem:RCDF} and \Cref{cor:RCDF_spherically_nonatomic}.
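For instance (an illustration of ours, assuming $\mu$ is the standard Gaussian on $X = \mathbb{R}$), the RCDF has the closed form $r \mapsto \Phi(x + r) - \Phi(x - r)$, where $\Phi$ is the standard normal CDF; a short Python sketch:

```python
import math

def Phi(t):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def rcdf(x, r):
    """RCDF r -> mu(closed ball B(x, r)) for mu = N(0, 1) on R."""
    return Phi(x + r) - Phi(x - r)

# The RCDF is non-decreasing in r, tends to 1 as r grows, and, for
# fixed r, is maximised at the mode x = 0.
vals = [rcdf(0.0, r) for r in (0.5, 1.0, 2.0, 10.0)]
print(vals, rcdf(1.0, 1.0))
```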
\subsection{Order-theoretic concepts}
We summarise here some basic terms from order theory;
for a comprehensive introduction to order theory, see e.g.\ \citet{Davey2022Introduction}.
In the course of this work, the set $X$ will be equipped with various \emph{preorders} $\preceq$, i.e.\ relations satisfying both
\begin{enumerate}[label=(\alph*)]
\item \emph{reflexivity}:
for all $x \in X$, $x \preceq x$;
and
\item \emph{transitivity}:
for all $x, y, z \in X$,
if both $x \preceq y$ and $y \preceq z$, then $x \preceq z$.
\end{enumerate}
For any such preorder, we will write $x \asymp x'$ if both $x \preceq x'$ and $x \succeq x'$ hold true, in which case $x$ and $x'$ are called \emph{equivalent} in the preorder;
we write $x \prec x'$ if $x \preceq x'$ but $x \not\succeq x'$.
If at least one of $x \preceq x'$ and $x \succeq x'$ holds true, then we call $x$ and $x'$ \emph{comparable};
if neither holds, then we call them \emph{incomparable} and write $x \mathrel{\Vert} x'$.
A preorder $\preceq$ is \emph{total} or \emph{linear} if there are no incomparable elements.
A subset of $X$ on which $\preceq$ is total is called a \emph{chain}, and a subset for which every two distinct elements are incomparable is called an \emph{antichain}.
We highlight and contrast two notions of a ``biggest'' element for a preorder:
\begin{definition}
\label{defn:greatest_maximal_element}
Let $X$ be a set equipped with a preorder $\preceq$.
\begin{enumerate}[label=(\alph*)]
\item $m \in X$ is a \emph{greatest element} if, for every $x \in X$, $m \succeq x$.
\item $m \in X$ is a \emph{maximal element} if, whenever $x \in X$ is such that $m \preceq x$, it follows that $m \succeq x$ (and hence that $m \asymp x$).
\item $u \in X$ is an \emph{upper bound} for $A \subseteq X$ if, for all $x \in A$, $u \succeq x$.
\end{enumerate}
\end{definition}
Note in particular that a greatest element is also a maximal element, but it must additionally be comparable to (and dominate) every element of $X$.
On the other hand, a maximal element is only required to dominate those elements of $X$ with which it is comparable, and those elements could constitute a rather small subset of $X$.
The most famous statement about the existence of maximal elements is \emph{Zorn's lemma}:
under the Axiom of Choice, if $(X, \preceq)$ is a preordered space in which every chain $Y \subseteq X$ has an upper bound, then $X$ has at least one maximal element.
However, Zorn's lemma says nothing about the existence of greatest elements.
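The distinction shows up already in a finite toy example (ours, not from the source): order $\{2, 3, 4, 9\}$ by divisibility. Then $4$ and $9$ are maximal but incomparable, so no greatest element exists. A minimal Python sketch:

```python
def is_greatest(m, X, leq):
    """m dominates every element of X."""
    return all(leq(x, m) for x in X)

def is_maximal(m, X, leq):
    """Every x that dominates m is in turn dominated by m."""
    return all(leq(x, m) for x in X if leq(m, x))

def divides(x, y):
    return y % x == 0

X = [2, 3, 4, 9]                       # preordered by divisibility
maximal = [m for m in X if is_maximal(m, X, divides)]
greatest = [m for m in X if is_greatest(m, X, divides)]
print(maximal, greatest)               # maximal elements exist; no greatest
```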
We write $\mathop{\uparrow}\nolimits Y \coloneqq \set{ x \in X }{ x \succeq y \text{ for some } y \in Y }$ for the \emph{upward closure} of $Y \subseteq X$, and further write, for $y \in X$, $\mathop{\uparrow}\nolimits y \coloneqq \mathop{\uparrow}\nolimits \{ y \} = \set{ x \in X }{ x \succeq y }$, so that $\mathop{\uparrow}\nolimits Y = \bigcup_{y \in Y} \mathop{\uparrow}\nolimits y$.
Finally, since many of the preorders we consider will be parametrised by radius $r \geq 0$, we will write $\preceq_{r}$ for the preorder, $\mathrel{\Vert}_{r}$ for the induced relation of incomparability, $\mathop{\uparrow}\nolimits_{r} Y$ for the upward closure of $Y$ with respect to $\preceq_{r}$, etc.
\section{Overview of related work}
\label{sec:related}
Modes, loosely understood as points of maximum probability, arise in many areas of pure and applied mathematics.
Two application domains where modes are particularly prominent are the analysis of the transition paths of random dynamical systems and the Bayesian approach to inverse problems.
The random dynamical systems setting is exemplified by mathematical models of chemical reactions using diffusion processes.
One is typically interested in the (rare) transitions of the process from one energy well or metastable state to another, and in particular one wishes to understand the transition paths that a diffusion process is most likely to take.
This amounts to a study of the modes of the law $\mu$ of the diffusion process on the associated path space $X$;
e.g.\ for a molecule consisting of $n$ atoms in three-dimensional space, $X = C([0, T]; \mathbb{R}^{3 n})$.
In this community, the modes of $\mu$ are understood as \emph{minimum-action paths}, and the behaviour of $\mu$ near the mode is quantified using Freidlin--Wentzell theory or large deviations theory \citep{DemboZeitouni1998,ERenVandenEijnden2004,FreidlinWentzell1998}.
In the Bayesian approach to inverse problems \citep{KaipioSomersalo2005, Stuart2010}, the reconstruction of an $X$-valued parameter of interest from observed $Y$-valued data is expressed in the form of a probability measure $\mu \in \prob{X}$, the \emph{posterior distribution}.
In many modern inverse problems, particularly those coupled to partial differential equations, the space $X$ is an infinite-dimensional function space or a high-dimensional discretisation of such a space, e.g.\ via a system of finite elements.
The posterior measure $\mu$ arises from three ingredients:
a \emph{prior measure} $\pi \in \prob{X}$, which encodes (subjective) beliefs about the parameter that are held in advance of knowing the observation mechanism or the specific data that are observed;
a \emph{likelihood model}, i.e.\ a family of probability measures $L(\,\cdot\,|x) \in \prob{Y}$, one for each $x \in X$, which models how observed data would be expected to arise if the parameter value $x$ were the truth;
and a specific observed instance of the data, a point $y \in Y$.
Strictly speaking, the posterior measure $\mu$ is defined as the disintegration (conditional distribution) of the joint measure $\nu (\mathrm{d} x, \mathrm{d} y) \coloneqq L(\mathrm{d} y|x) \pi(\mathrm{d} x) \in \prob{X \times Y}$ along the $y$-fibre \citep{ChangPollard1997}.
For simplicity, however, we often concentrate on the case that $\mu$ is absolutely continuous with respect to $\pi$ with a density given by Bayes' formula,
\begin{equation}
\label{eq:Bayes}
\mu (\mathrm{d} x) = \frac{ \exp(- \Phi(x; y)) \, \pi(\mathrm{d} x) }{ \int_{X} \exp(- \Phi(x'; y)) \, \pi(\mathrm{d} x') } ,
\end{equation}
where $\Phi \colon X \times Y \to \mathbb{R}$ is called the \emph{potential}.
In simple settings with $\dim Y < \infty$, the Lebesgue probability density of $L(\,\cdot\,|x)$ is proportional to $\exp( - \Phi(x; \,\cdot\,))$ and $\Phi$ can be interpreted as a non-negative misfit functional.
The case of infinite-dimensional data, $\dim Y = \infty$, is considerably more subtle and does not generally admit a density for $\mu$ with respect to $\pi$ as in \eqref{eq:Bayes};
see e.g.\ \citet[Remark~3.8]{Stuart2010} and \citet[Remark~9]{Lasanen2012_I}.
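To make \eqref{eq:Bayes} concrete (the prior, potential, noise level, and grid below are our own toy choices, not from the source), one can discretise Bayes' formula on a grid; with a standard Gaussian prior and the quadratic potential $\Phi(x; y) = (y - x)^{2} / (2 \sigma^{2})$, the conjugate computation gives a Gaussian posterior with mean $0.8$ and variance $0.2$ for $y = 1$, $\sigma = 0.5$:

```python
import math

# Toy discretisation of Bayes' formula: prior pi = N(0, 1), potential
# Phi(x; y) = (y - x)^2 / (2 sigma^2); all numbers are illustrative.
y, sigma = 1.0, 0.5
dx = 0.01
xs = [i * dx - 5.0 for i in range(1001)]    # grid on [-5, 5]
prior = [math.exp(-x * x / 2.0) for x in xs]        # unnormalised N(0, 1)
unnorm = [math.exp(-((y - x) ** 2) / (2.0 * sigma ** 2)) * p
          for x, p in zip(xs, prior)]
Z = sum(unnorm) * dx                         # normalising constant
posterior = [u / Z for u in unnorm]
post_mean = sum(x * p for x, p in zip(xs, posterior)) * dx
print(post_mean)
```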
Since the full posterior distribution $\mu$ can be a rather intractable object, it is often desirable to have access to a convenient point summary:
the two principal such point estimators are the \emph{conditional mean estimator} (i.e.\ the mean of $\mu$) and a \emph{maximum a posteriori estimator} (i.e.\ a mode, or point of maximum probability, for $\mu$), and here we focus on this second approach.
Heuristically, at least when $X = \mathbb{R}^{d}$, a MAP estimator is just an essential maximiser of the Lebesgue density of $\mu$, i.e.\ a minimiser of the sum of $\Phi(\,\cdot\,; y)$ and the negative logarithm of the Lebesgue density of $\pi$.
However, this definition is not effective if we have no access to Lebesgue densities;
in particular, it makes no sense when $\dim X = \infty$ \citep[e.g.][]{Sudakov1959}.
To handle the general infinite-dimensional case, various definitions of modes / MAP estimators have been advanced over recent years, and we summarise them here.\footnote{\citet{DashtiLawStuartVoss2013}, \citet{HelinBurger2015}, and \citet{Clason2019GeneralizedModes} all gave their definitions in the case of a separable Banach space $X$, but the definitions generalise easily to the metric setting, as given here.
Also, their definitions were given in terms of open rather than closed balls.}
\citet{DuerrBach1978} proposed that a mode for the path measure $\mu$ of a diffusion process should be understood as the minimiser of the \emph{Onsager--Machlup (OM) functional} $I_{\mu}$ of $\mu$, which is defined by the relation
\begin{equation}
\label{eq:Onsager--Machlup}
\lim_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} = \frac{\exp(-I_{\mu}(x))}{\exp(-I_{\mu}(x'))}
\quad
\text{for $x, x' \in X$.}
\end{equation}
In some sense, $I_{\mu}$ is a formal negative log-density for $\mu$, but it is in general only a partially-defined extended-real-valued function.
For example, the OM functional of a Gaussian measure on a Hilbert space is finite only on the Cameron--Martin space.
The rigorous interpretation of modes as minimisers of $I_{\mu}$ requires considerable care, especially since in some cases it is not even possible to assign $+\infty$ as an exceptional value for $I_{\mu}$:
the ratio in \eqref{eq:Onsager--Machlup} may oscillate and fail to converge as $r \to 0$.
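As a sanity check (our illustration, not from the source): for the standard Gaussian on $\mathbb{R}$, the OM functional is $I_{\mu}(x) = x^{2}/2$ up to an additive constant, so the ratio in \eqref{eq:Onsager--Machlup} should approach $\exp(-(x^{2} - x'^{2})/2)$ for small $r$:

```python
import math

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def ball_mass(x, r):
    """mu(closed ball B(x, r)) for mu = N(0, 1) on the real line."""
    return Phi(x + r) - Phi(x - r)

# For small r the ratio of ball masses approximates the ratio of
# densities, i.e. exp(-(x^2 - x'^2) / 2); here x = 1, x' = 0.
r = 1e-4
ratio = ball_mass(1.0, r) / ball_mass(0.0, r)
print(ratio, math.exp(-0.5))
```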
\citet{DashtiLawStuartVoss2013} defined a \emph{strong mode} of $\mu$ to be any $x^{\star} \in X$ such that
\begin{align}
\label{eq:strong_mode}
\lim_{r \to 0} \frac{\crcdf{x^{\star}}{r}}{M_{r}} = 1 , \\
\label{eq:M_r}
M_{r} \coloneqq \sup_{x \in X} \crcdf{x}{r} .
\end{align}
(By \Cref{cor:no_unbounded_sequence_approximates_M_r}, separability of $X$ ensures that $\supp(\mu) \neq \varnothing$ and $M_{r} > 0$.)
Any strong mode must lie in $\supp(\mu)$, and the ratio in \eqref{eq:strong_mode} is at most 1 for every choice of $x^{\star} \in X$, so
\begin{equation}
\label{eq:strong_mode_equivalent}
\text{$x^{\star}$ is a strong mode}
\iff
\liminf_{r \to 0} \frac{\crcdf{x^{\star}}{r}}{M_{r}} \geq 1
\iff
\limsup_{r \to 0} \frac{M_{r}}{\crcdf{x^{\star}}{r}} \leq 1 .
\end{equation}
\citet{Clason2019GeneralizedModes} observed that even elementary absolutely continuous measures on $\mathbb{R}$ such as $\mu(E) \coloneqq \int_{E \cap [-1, 1]} \absval{x} \, \mathrm{d} x$ do not have strong modes, even though the Lebesgue density of $\mu$ is clearly maximised at $\pm 1$.
Therefore, they call $x^{\star} \in X$ a \emph{generalised strong mode} if, for every positive null sequence $(r_{n})_{n \in \mathbb{N}}$, there exists a sequence $(x_{n})_{n \in \mathbb{N}}$ converging to $x^{\star}$ such that
\begin{equation}
\label{eq:generalised_strong_mode}
\lim_{n \to \infty} \frac{\crcdf{x_{n}}{r_{n}}}{M_{r_{n}}} = 1 .
\end{equation}
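This example can be checked numerically (the grid and tolerances below are ours): for the measure with density $\absval{x}$ on $[-1, 1]$, the ball mass at $x^{\star} = 1$ is $r - r^{2}/2$, while $M_{r} = 2r - 2r^{2}$ is attained near $1 - r$, so the strong-mode ratio tends to $\nicefrac{1}{2}$; the shifted points $x_{n} = 1 - r_{n}$ witness the generalised strong mode:

```python
def F(t):
    """CDF of the probability density |x| on [-1, 1]."""
    t = max(-1.0, min(1.0, t))
    return (1.0 - t * t) / 2.0 if t <= 0.0 else 0.5 + t * t / 2.0

def ball_mass(x, r):
    """mu(closed ball B(x, r)) for the density |x| on [-1, 1]."""
    return F(x + r) - F(x - r)

r = 1e-3
# Approximate M_r by scanning a fine grid over [0, 1]; the supremum is
# attained near x = 1 - r, a ball hugging the support's right edge.
M_r = max(ball_mass(i * 1e-5, r) for i in range(100001))
ratio = ball_mass(1.0, r) / M_r       # approaches 1/2 as r -> 0
witness = ball_mass(1.0 - r, r) / M_r  # close to 1: shifted points work
print(ratio, witness)
```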
Motivated by \eqref{eq:strong_mode_equivalent}, \citet{HelinBurger2015} call $x^{\star} \in \supp(\mu) \subseteq X$ a \emph{weak mode} if\footnote{In fact, \citet{HelinBurger2015} used ``$\lim$'' in place of ``$\limsup$'' in \eqref{eq:weak_mode}, implicitly assuming the existence of the limit.
However, as \citet{AyanbayevKlebanovLieSullivan2022_I} observe, this yields an unsatisfying definition because it excludes the case in which the ratio oscillates, while remaining bounded away from unity, from being a weak mode.
The desirable implication ``strong mode $\implies$ weak mode'' fails for the original ``$\lim$'' version of the definition, but holds for the ``$\limsup$'' version.}
\begin{equation}
\label{eq:weak_mode}
\limsup_{r \to 0} \frac{\crcdf{x'}{r}}{\crcdf{x^{\star}}{r}} \leq 1
\text{ for all } x' \in X .
\end{equation}
As a point of terminology, \citet{HelinBurger2015} were primarily interested in the restricted case in which $x' \in x^{\star} + E$, where $x^{\star} \in E$ and $E$ is a topologically dense linear subspace of a Banach space $X$, and \citet{LieSullivan2018} later called this case an \emph{$E$-weak mode}.
Conversely, \citet{AyanbayevKlebanovLieSullivan2022_I} call $x^{\star}$ satisfying \eqref{eq:weak_mode} a \emph{global weak mode}.
Since we are only going to consider global weak modes, we can simply call them ``weak modes'' without any ambiguity.
Under the assumption that the OM functional $I_{\mu}$ of $\mu$ is real-valued on $\varnothing \neq E \subseteq X$ and
\begin{equation}
\label{eq:property_M}
\text{for some $x \in E$ and all $x' \in X \setminus E$,} \quad \lim_{r \to 0} \frac{\crcdf{x'}{r}}{\crcdf{x}{r}} = 0 ,
\end{equation}
which \citet[Definition~3.1]{AyanbayevKlebanovLieSullivan2022_I} call \emph{property $M(\mu, E)$}, $I_{\mu}$ can be regarded as having the value $+ \infty$ on $X \setminus E$ and weak modes are precisely minimisers of this extended version of $I_{\mu}$.
This enabled \citet{AyanbayevKlebanovLieSullivan2022_I, AyanbayevKlebanovLieSullivan2022_II} to establish a stability and convergence theory for weak modes in terms of the $\Gamma$-convergence and equicoercivity of the associated OM functionals.\footnote{Frustratingly, while there are some situations in which strong modes can be characterised as minimisers of Onsager--Machlup functionals \citep{DashtiLawStuartVoss2013, AgapiouBurgerDashtiHelin2018}, there are also situations in which this correspondence breaks down, even when property $M(\mu, E)$ holds \citep[Example~B.5]{AyanbayevKlebanovLieSullivan2022_I}.}
Furthermore, as we show below in \Cref{lem:GWM_greatest_maximal}, weak modes are exactly the greatest elements of a natural preorder $\preceq_{0}$ on $X$, namely the one induced by the limiting ratios of masses of balls in the small-radius limit (\Cref{defn:analytic_small-radius_preorder}).
\citet{AgapiouBurgerDashtiHelin2018} also introduced local versions of the strong and weak modes, in which $x^{\star}$ is only compared to points in a sufficiently small ball $\cball{x^{\star}}{\delta}$, $\delta > 0$, analogous to local maximisers of the Lebesgue probability density function / local minimisers of the OM functional.
\citet{DashtiLawStuartVoss2013} considered the existence of strong modes for measures $\mu$ as in \eqref{eq:Bayes}, choosing the prior $\pi$ to be a centred Gaussian measure on an infinite-dimensional separable Banach space and imposing some regularity conditions on the potential $\Phi$ \citep[Assumption~2.1]{DashtiLawStuartVoss2013}.
Their approach considered maximisers of $x \mapsto \crcdf{x}{r}$ for fixed $r > 0$, which we call \emph{radius-$r$ modes} in \Cref{subsection:convergence_of_radius_r}, and established that any sequence of radius-$r$ modes, as $r \to 0$, has a subsequence that converges to a strong mode.
The arguments of \citet{DashtiLawStuartVoss2013} assume the existence of radius-$r$ modes without proof; in \Cref{subsection:existence_of_radius_r}, we prove results on the existence of radius-$r$ modes in various settings but also provide examples that have no such radius-$r$ modes.
Despite the contributions of \citet{DashtiLawStuartVoss2013}, \citet{Kretschmann2019}, and \citet{KlebanovWacker2022} among others --- and our own offerings --- a surprising amount is still unknown even about the existence of radius-$r$ modes, let alone weak and strong modes, even for ``nicely'' reweighted Gaussian measures on general Banach spaces.
\section{The positive-radius preorder}
\label{sec:positive-radius_preorder}
\subsection{Definition and basic properties}
A probability measure on a metric space $X$ induces a family of preorders on $X$, one for each positive radius, in a very straightforward way:
\begin{definition}[Positive-radius preorder]
\label{defn:positive_radius_preorder}
Let $X$ be a metric space and let $\mu \in \prob{X}$.
For each $r > 0$, define a relation $\preceq_{r}$ on $X$ by
\begin{align}
\label{eq:preceq_r}
x \preceq_{r} x'
& \iff \crcdf{x}{r} \leq \crcdf{x'}{r} .
\end{align}
\end{definition}
It is almost trivial to verify that $\preceq_{r}$ satisfies the axioms for a preorder.
We will write $x \asymp_{r} x'$ if both $x \preceq_{r} x'$ and $x \succeq_{r} x'$ hold, and $x \mathrel{\Vert}_{r} x'$ if neither $x \preceq_{r} x'$ nor $x \succeq_{r} x'$ holds.
In fact, though, incomparability never arises for this preorder:
totality of the usual order $\leq$ on $\mathbb{R}$ implies totality of $\preceq_{r}$ on $X$.
Totality implies that the maximal and greatest elements of $X$ with respect to $\preceq_{r}$ coincide (\Cref{lem:radius_r_mode}), which simplifies the discussion considerably.
Upward closures with respect to $\preceq_{r}$ are notably well behaved.
In particular, \Cref{lem:upper_closure_is_closed_and_bounded}\ref{lem:upper_closure_is_closed_and_bounded_2} says that the relation $\preceq_{r}$ is \emph{upper semicontinuous} \citep[p.44]{AliprantisBorder2006}.
\begin{lemma}[Closedness, boundedness, and non-compactness of upward closures]
\label{lem:upper_closure_is_closed_and_bounded}
Let $X$ be a metric space, let $\mu \in \prob{X}$, and fix $r > 0$.
\begin{enumerate}[label=(\alph*)]
\item \label{lem:upper_closure_is_closed_and_bounded_1}
For each $t \geq 0$, $\set{ x' \in X }{ \crcdf{x'}{r} \geq t }$ is closed.
\item \label{lem:upper_closure_is_closed_and_bounded_2}
For each $x \in X$, $\mathop{\uparrow}\nolimits_{r} x \coloneqq \set{ x' \in X }{ x' \succeq_{r} x }$ is closed.
\item \label{lem:upper_closure_is_closed_and_bounded_3}
For each $t > 0$, $\set{ x' \in X }{ \crcdf{x'}{r} \geq t }$ is bounded, with separation measure of non-compactness $\gamma( \set{ x' \in X }{ \crcdf{x'}{r} \geq t } ) \leq 2 r$.
\item \label{lem:upper_closure_is_closed_and_bounded_4}
For each $x \in X$ with $\crcdf{x}{r} > 0$, $\mathop{\uparrow}\nolimits_{r} x$ is bounded with $\gamma(\mathop{\uparrow}\nolimits_{r} x) \leq 2 r$.
\end{enumerate}
\end{lemma}
\begin{proof}
Claim \ref{lem:upper_closure_is_closed_and_bounded_1} is immediate from \Cref{lem:RCDF}\ref{lem:RCDF_in_x}, and \ref{lem:upper_closure_is_closed_and_bounded_2} is a special case of claim \ref{lem:upper_closure_is_closed_and_bounded_1}.
Now fix $t > 0$ and suppose for a contradiction that $(x_{n})_{n \in \mathbb{N}}$ is an unbounded sequence in $\set{ x' \in X }{ \crcdf{x'}{r} \geq t }$.
By passing to a subsequence if necessary, we may assume that $d(x_{n}, x_{n'}) > 2 r$ for all distinct $n, n' \in \mathbb{N}$.
We thus obtain the contradiction that
\[
1 = \mu(X) \geq \mu( \set{ x' \in X }{ \crcdf{x'}{r} \geq t } ) \geq \mu \left( \biguplus_{n \in \mathbb{N}} \cball{x_{n}}{r} \right) = \sum_{n \in \mathbb{N}} \crcdf{x_{n}}{r} \geq \sum_{n \in \mathbb{N}} t = \infty .
\]
This shows that $\set{ x' \in X }{ \crcdf{x'}{r} \geq t }$ must be bounded and also that it admits no infinite subset with separation $2 r$, thus establishing \ref{lem:upper_closure_is_closed_and_bounded_3}, of which \ref{lem:upper_closure_is_closed_and_bounded_4} is a special case.
\end{proof}
\subsection{Existence of greatest elements}
\label{subsection:existence_of_radius_r}
Our first aim is to establish existence of greatest elements for $\preceq_{r}$, which we also call \emph{radius-$r$ modes}.
Such points can be seen as approximate modes\footnote{The intuition that radius-$r$ modes are approximate modes must be treated sceptically.
For example, consider $\mu \in \prob{\mathbb{R}}$ with bimodal continuous Lebesgue density $\rho(x) \propto \max \{ 0, 1 - 4 (x - 1)^{2} \} + \max \{ 0, 1 - 4 (x + 1)^{2} \}$, for which a radius-$1$ mode is located at $0$, which is neither a maximiser of $\rho$ nor even in $\supp(\mu)$.} with respect to the positive radius / spatial resolution $r$;
only in the next section will we attempt to take the limit as $r \searrow 0$.
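The bimodal footnote example can be checked numerically (the grid and tolerances below are ours): the point $0$ attains the maximal radius-$1$ ball mass even though $\rho(0) = 0$:

```python
def rho(x):
    """Unnormalised bimodal density from the footnote."""
    return (max(0.0, 1.0 - 4.0 * (x - 1.0) ** 2)
            + max(0.0, 1.0 - 4.0 * (x + 1.0) ** 2))

dx = 1e-3
xs = [i * dx - 2.0 for i in range(4001)]   # grid on [-2, 2]
Z = sum(rho(x) for x in xs) * dx           # normalising constant (~ 4/3)

def ball_mass(c, r):
    """Riemann-sum approximation of mu(closed ball B(c, r))."""
    return sum(rho(x) for x in xs if abs(x - c) <= r) * dx / Z

centres = [i * 0.05 - 2.0 for i in range(81)]
m0 = ball_mass(0.0, 1.0)
best = max(ball_mass(c, 1.0) for c in centres)
# 0 attains the maximal radius-1 ball mass (up to grid error), yet
# rho(0) = 0, so 0 is not even in the support of mu.
print(m0, best)
```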
The following lemma gives several equivalent conditions for a point to be a radius-$r$ mode;
the intersection criterion will prove especially useful in what follows.
\begin{lemma}[Characterisation of radius-$r$ modes]
\label{lem:radius_r_mode}
Let $X$ be any metric space, let $\mu \in \prob{X}$, and let $r > 0$.
Then the following are equivalent, and if one (and hence each) of them holds, then $x_{r}^{\star} \in X$ is called a \emph{radius-$r$ mode}:
\begin{enumerate}[label=(\alph*)]
\item
\label{item:radius_r_mode_maximal}
$x_{r}^{\star}$ is a $\preceq_{r}$-maximal element;
\item
\label{item:radius_r_mode_greatest}
$x_{r}^{\star}$ is a $\preceq_{r}$-greatest element;
\item
\label{item:radius_r_mode_Mr}
$\crcdf{x_{r}^{\star}}{r} = M_{r}$.
\end{enumerate}
Furthermore, let $\mathfrak{M}_r$ denote the set of radius-$r$ modes for $\mu$.
If $(x_n)_{n \in \mathbb{N}} \subset X$ is any sequence with $\crcdf{x_n}{r} \to M_r$, then $\bigcap_{n \in \mathbb{N}} \mathop{\uparrow}\nolimits_r x_n = \mathfrak{M}_r$.
\end{lemma}
\begin{proof}
\mbox{(\ref{item:radius_r_mode_maximal}$\iff$\ref{item:radius_r_mode_greatest})}$\quad$
This equivalence holds because $\preceq_{r}$ is a total preorder.
\noindent\mbox{(\ref{item:radius_r_mode_greatest}$\iff$\ref{item:radius_r_mode_Mr})}
By the definition of $\preceq_r$, $x_r^\star$ is greatest if and only if $\crcdf{x}{r} \leq \crcdf{x_r^\star}{r}$ for all $x \in X$; the latter inequality is true if and only if $M_r \coloneqq \sup_{x \in X} \crcdf{x}{r} \leq \crcdf{x_r^\star}{r}$.
The inequality $\crcdf{x_r^\star}{r} \leq M_r$ is obvious from the definition of $M_r$ as a supremum.
For the final claim, let $(x_n)_{n \in \mathbb{N}}$ be any sequence with $\crcdf{x_n}{r} \to M_r$.
The definition of $\preceq_r$ gives that $x_r^\star \in \bigcap_{n \in \mathbb{N}} \mathop{\uparrow}\nolimits_r x_n$ if and only if
\begin{equation} \label{eq:radius_r_mode_intersection}
\crcdf{x_n}{r} \leq \crcdf{x_r^\star}{r} \text{~for all $n \in \mathbb{N}$}.
\end{equation}
Taking limits as $n \to \infty$ in \eqref{eq:radius_r_mode_intersection}, it follows that if $x_r^\star \in \bigcap_{n \in \mathbb{N}} \mathop{\uparrow}\nolimits_r x_n$, then $\crcdf{x_r^\star}{r} = M_r$.
Conversely, if $x_r^\star$ is a radius-$r$ mode, then $\crcdf{x_r^\star}{r} = M_r$, and therefore \eqref{eq:radius_r_mode_intersection} holds.
Therefore, $\bigcap_{n \in \mathbb{N}} \mathop{\uparrow}\nolimits_r x_n$ is precisely the set of radius-$r$ modes.
\end{proof}
\begin{proposition}[Existence of radius-$r$ modes in compact spaces]
\label{prop:compact_yields_r-mode}
Let $X$ be a compact metric space, let $\mu \in \prob{X}$, and let $r > 0$.
Then $\preceq_{r}$ has at least one radius-$r$ mode $x_{r}^{\star} \in X$.
\end{proposition}
\begin{proof}
This is a special case of \Cref{thm:r_greatest}\ref{item:r_greatest_HB}, and also follows from \citet[Theorem~2.44]{AliprantisBorder2006}, but a self-contained proof is given by observing that the map $\crcdf{\,\cdot\,}{r} \colon X \to [0, 1]$ is upper semicontinuous (\Cref{lem:RCDF}\ref{lem:RCDF_in_x}) and hence has at least one global maximiser $x_{r}^{\star}$ in the compact space $X$.
\end{proof}
We now adopt a very different approach to establishing the existence of radius-$r$ modes, one based in applying intersection theorems to upward closures with respect to $\preceq_{r}$.
We begin with a very general lemma;
when \Cref{lem:r_greatest_T_compact} is used in practice, $\mathcal{T}$ will often be the metric topology, but another useful case is the weak topology of a Banach space.
\begin{lemma}
\label{lem:r_greatest_T_compact}
Let $X$ be a separable metric space, $\mu \in \prob{X}$, and $r > 0$.
Suppose that $\mathcal{T}$ is a topology on $X$ such that, for some sequence $(x_{n})_{n \in \mathbb{N}} \subset X$ with $\crcdf{x_{n}}{r} \nearrow M_{r} > 0$, $\mathop{\uparrow}\nolimits_{r} x_{n}$ is $\mathcal{T}$-closed and $\mathcal{T}$-compact for all sufficiently large $n$.
Then $\mu$ has a non-empty set $\mathfrak{M}_r$ of radius-$r$ modes, and $\mathfrak{M}_r$ is also $\mathcal{T}$-compact if $\mathcal{T}$ is Hausdorff.
\end{lemma}
\begin{proof}
Separability of $X$ implies that $M_{r} > 0$.
Let $(x_{n})_{n \in \mathbb{N}}$ be such that $\crcdf{x_{n}}{r} \nearrow M_{r}$ as $n \to \infty$.
The sets $\mathop{\uparrow}\nolimits_{r} x_{n}$ are non-empty;
since the sequence $\bigl(\crcdf{x_{n}}{r}\bigr)_{n \in \mathbb{N}}$ is increasing, $\mathop{\uparrow}\nolimits_{r} x_{n + 1} \subseteq \mathop{\uparrow}\nolimits_{r} x_{n}$ for each $n$, i.e.\ they are decreasingly nested;
by hypothesis, for sufficiently large $n$, they are also $\mathcal{T}$-closed and $\mathcal{T}$-compact.
Therefore, by Cantor's intersection theorem (\Cref{thm:intersection_theorem}\ref{item:intersection_theorem_Cantor_1}), their intersection is non-empty (and $\mathcal{T}$-compact, if $\mathcal{T}$ is Hausdorff);
by \Cref{lem:radius_r_mode}, this intersection is exactly the set of radius-$r$ modes.
\end{proof}
\begin{theorem}[Existence of radius-$r$ modes]
\label{thm:r_greatest}
Let $X$ be a separable metric space, $\mu \in \prob{X}$, and $r > 0$.
Let $\mathfrak{M}_{r}$ denote the set of radius-$r$ modes for $\mu$.
\begin{enumerate}[label=(\alph*)]
\item
\label{item:r_greatest_HB}
Suppose that $X$ has the Heine--Borel property, i.e.\ that every closed and bounded subset of $X$ is compact.
Then $\mathfrak{M}_{r}$ is non-empty and compact.
\item
\label{item:r_greatest_doubling}
Suppose that $X$ is complete and that $\mu$ is a doubling measure, i.e.\ there exists a constant $C > 0$ such that
\begin{equation}
\label{eq:r_greatest_doubling_measure}
\text{for all $x \in X$ and $r > 0$,}
\quad
\crcdf{x}{2r} \leq C \crcdf{x}{r} .
\end{equation}
Then $\mathfrak{M}_{r}$ is non-empty and compact.
\item
\label{item:r_greatest_lower_bound}
Suppose that $X$ is complete and there exists a point $o \in X$ and a function $f \colon (0, \infty)^{2} \to (0, \infty)$ such that
\begin{equation}
\label{eq:r_greatest_lower_bound}
\text{for all $R, \delta > 0$ and all $x \in \cball{o}{R}$,}
\quad
\crcdf{x}{\delta} \geq f(\delta, R) > 0 .
\end{equation}
Then $\mathfrak{M}_{r}$ is non-empty and compact.
\item
\label{item:r_greatest_vanishing_MNC}
Suppose that $X$ is complete and that there exists $(x_{n})_{n \in \mathbb{N}}$ with $\crcdf{x_{n}}{r} \nearrow M_{r}$ and $\gamma(\mathop{\uparrow}\nolimits_{r} x_{n}) \to 0$ as $n \to \infty$.
Then $\mathfrak{M}_{r}$ is non-empty and compact.
\item
\label{item:r_greatest_vanishing_diameter}
Suppose that $X$ is complete and that there exists $(x_{n})_{n \in \mathbb{N}}$ with $\crcdf{x_{n}}{r} \nearrow M_{r}$ and $\mathop{\textup{diam}}\nolimits(\mathop{\uparrow}\nolimits_{r} x_{n}) \to 0$ as $n \to \infty$.
Then $\mathfrak{M}_{r}$ is a singleton.
\item
\label{item:r_greatest_weakly_compact}
Suppose that $X$ is a Banach space and that there exists $(x_{n})_{n \in \mathbb{N}}$ with $\crcdf{x_{n}}{r} \nearrow M_{r}$, and that $\mathop{\uparrow}\nolimits_{r} x_{n}$ is weakly compact for all sufficiently large $n$.
Then $\mathfrak{M}_{r}$ is non-empty and weakly compact.
\item
\label{item:r_greatest_convex}
Suppose that $X$ is a reflexive Banach space and that there exists $(x_{n})_{n \in \mathbb{N}}$ with $\crcdf{x_{n}}{r} \nearrow M_{r}$ and that $\mathop{\uparrow}\nolimits_{r} x_{n}$ is convex for all sufficiently large $n$.
Then $\mathfrak{M}_{r}$ is non-empty, weakly compact, and convex.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[label=(\alph*)]
\item
\Cref{lem:upper_closure_is_closed_and_bounded} ensures that each $\mathop{\uparrow}\nolimits_{r} x_{n}$ is closed and bounded in the metric topology on $X$.
The claim now follows from \Cref{lem:r_greatest_T_compact}.
\item
By \citet[Proposition~3.1]{BjoernBjoern2011}, any complete metric space with a doubling measure has the Heine--Borel property.
The claim now follows from \ref{item:r_greatest_HB}.
\item
Let $R > 0$ and $\delta > 0$ be arbitrary.
The lower bound hypothesis \eqref{eq:r_greatest_lower_bound} implies that there cannot be an infinite set of pairwise-disjoint balls $\cball{x_{n}}{\delta}$, $n \in \mathbb{N}$, with centres $x_{n} \in \cball{o}{R}$ since, if there were, then we would obtain the contradiction
\[
1 \geq \crcdf{o}{R + \delta} \geq
\mu \left( \biguplus_{n \in \mathbb{N}} \cball{x_{n}}{\delta} \right) =
\sum_{n \in \mathbb{N}} \crcdf{x_{n}}{\delta} \geq
\sum_{n \in \mathbb{N}} f(\delta, R) = \infty .
\]
Since $\delta > 0$ was arbitrary, $\gamma(\cball{o}{R}) = 0$, i.e.\ $\cball{o}{R}$ is compact.
Now, given any closed and bounded set $A \subseteq X$, choose $R > 0$ large enough that $A \subseteq \cball{o}{R}$ to see that $A$ must be compact.
Therefore, $X$ has the Heine--Borel property.
The claim now follows from \ref{item:r_greatest_HB}.
\item
The claim follows from Kuratowski's intersection theorem (\Cref{thm:intersection_theorem}\ref{item:intersection_theorem_Kuratowski}).
\item
As already observed, by \Cref{lem:upper_closure_is_closed_and_bounded}, each upward closure $\mathop{\uparrow}\nolimits_{r} x_{n}$ is both closed and bounded in the metric topology and they are decreasingly nested.
Since $X$ is complete, Cantor's intersection theorem (\Cref{thm:intersection_theorem}\ref{item:intersection_theorem_Cantor_2}) yields that $\bigcap_{n \in \mathbb{N}} \mathop{\uparrow}\nolimits_{r} x_{n} = \{ x_{r}^{\star} \}$ for some $x_{r}^{\star} \in X$.
This intersection is precisely the set of radius-$r$ modes for $\mu$ (\Cref{lem:radius_r_mode}).
\item
This is simply \Cref{lem:r_greatest_T_compact} in the special case that $\mathcal{T}$ is the weak topology of the separable Banach space $X$.
\item
Each closed, bounded, and convex subset of the separable, reflexive Banach space $X$ is necessarily weakly compact, and so the claim follows from \ref{item:r_greatest_weakly_compact}.
\end{enumerate}
\vspace{-\baselineskip}
\end{proof}
\Cref{thm:r_greatest} is by no means universally applicable, and indeed there are measures that have no radius-$r$ modes, as the next two examples show.
\begin{example}[An atomic measure with no radius-$r$ mode for $1 \leq r < 2$]
\label{eg:no_radius_1_mode}
Let $X = \mathbb{N}$ be equipped with the following variant of the discrete metric:
\begin{equation}
\label{eq:funny_discrete}
\Delta(k, \ell) \coloneqq
\begin{cases}
0 , & \text{if $k = \ell$,} \\
2 , & \text{if $\min \{ k, \ell \}$ is odd and $\max \{ k, \ell \} = \min \{ k, \ell \} + 1$,} \\
1 , & \text{otherwise.}
\end{cases}
\end{equation}
In the space $(X, \Delta)$, distinct points are a unit distance apart, with the exception of each odd number and its successor, which are doubly spaced.
Equip this space with the measure $\mu \coloneqq \sum_{k \in \mathbb{N}} 2^{-k} \delta_{k} \in \prob{X}$, where $\delta_{k}$ is the unit Dirac measure centred at $k \in \mathbb{N}$.
For $k \in \mathbb{N}$,
\begin{align}
\label{eg:no_radius_1_mode_odd}
\crcdf{2 k - 1}{1} & = \mu(X \setminus \{ 2 k \}) = 1 - 2^{- 2 k} , \\
\label{eg:no_radius_1_mode_even}
\crcdf{2 k}{1} & = \mu(X \setminus \{ 2 k - 1 \}) = 1 - 2^{- (2 k - 1)} .
\end{align}
Both \eqref{eg:no_radius_1_mode_odd} and \eqref{eg:no_radius_1_mode_even} show that $M_{1} = 1$;
\eqref{eg:no_radius_1_mode_odd} shows that no odd number is a radius-$1$ mode;
\eqref{eg:no_radius_1_mode_even} shows that no even number is a radius-$1$ mode.
Thus, $\mu$ has no radius-$1$ mode at all.
Similar arguments also show that $\mu$ has no radius-$r$ mode for $1 \leq r < 2$;
for $r \geq 2$, every point of $X$ is a radius-$r$ mode;
for $0 < r < 1$, the point $1 \in X$ is the unique radius-$r$ mode.
It is interesting to relate this example to \Cref{thm:r_greatest}.
In this setting, for each $k \in X$, $\mathop{\uparrow}\nolimits_{1} k \supseteq \{ k, k + 2, k + 4, \dots \}$.
This set is non-compact with $\gamma(\mathop{\uparrow}\nolimits_{1} k) \geq 1$, since it contains an infinite $1$-separated sequence.
Thus, neither \Cref{thm:r_greatest}\ref{item:r_greatest_HB} nor \ref{item:r_greatest_vanishing_MNC} can apply.
Also, although the space $(X, \Delta)$ is complete,\footnote{Just as in the case of the discrete metric, in this space, the properties of being a Cauchy sequence, being a convergent sequence, and being eventually constant all coincide.} \Cref{thm:r_greatest}\ref{item:r_greatest_vanishing_diameter} does not apply because $\mathop{\textup{diam}}\nolimits(\mathop{\uparrow}\nolimits_{1} k) \geq 1$.
\end{example}
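The mass calculations in this example are easy to check numerically. The following sketch is a minimal illustration and not part of the construction: it truncates $\mu$ to its first $N$ atoms and uses exact dyadic arithmetic, with helper names (\texttt{Delta}, \texttt{ball\_mass}) of our own choosing.

```python
from fractions import Fraction

def Delta(k, l):
    # Variant of the discrete metric: an odd number and its successor are
    # doubly spaced; all other distinct points are a unit distance apart.
    if k == l:
        return 0
    lo, hi = min(k, l), max(k, l)
    return 2 if lo % 2 == 1 and hi == lo + 1 else 1

N = 40  # truncation level: keep the atoms k = 1, ..., N of mass 2^{-k}
weights = {k: Fraction(1, 2 ** k) for k in range(1, N + 1)}

def ball_mass(x, r):
    # mu(closed ball of radius r around x), truncated to the first N atoms
    return sum(w for k, w in weights.items() if Delta(x, k) <= r)

# A radius-1 ball around an odd number 2k - 1 misses exactly the atom at 2k:
assert ball_mass(3, 1) == 1 - Fraction(1, 2 ** N) - Fraction(1, 2 ** 4)
# ... and around an even number 2k it misses exactly the atom at 2k - 1:
assert ball_mass(4, 1) == 1 - Fraction(1, 2 ** N) - Fraction(1, 2 ** 3)
# Every radius-1 ball mass lies strictly below the supremum M_1 = 1,
# but the deficit shrinks as the centre moves to larger integers.
masses = [ball_mass(x, 1) for x in range(1, N + 1)]
assert all(m < 1 for m in masses)
assert max(masses) == 1 - Fraction(1, 2 ** (N - 1))
```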
\begin{example}[A non-atomic measure with no radius-$r$ mode for any $0 < r < \nicefrac{1}{8}$]
\label{eg:no_radius_r_mode}
Building on the ideas of \Cref{eg:no_radius_1_mode}, consider the space
\begin{align}
\label{eq:no_radius_r_mode_X}
X & \coloneqq \Set{ (\xi, k, m) \in \mathbb{R} \times \mathbb{N}^{2} }{ \absval{ \xi } \leq 2^{- k - m - 1} } ,
\end{align}
equipped with the metric $d$ and probability measure $\mu$ given by
\begin{align}
\label{eq:no_radius_r_mode_d}
d \bigl( (\xi, k, m) , (\eta, \ell, n) \bigr) & \coloneqq
\begin{cases}
2 , & \text{if $m \neq n$,} \\
2^{-m} \Delta (k, \ell) , & \text{if $m = n$ and $k \neq \ell$,} \\
\absval{ \xi - \eta } , & \text{if $m = n$ and $k = \ell$,}
\end{cases} \\
\label{eq:no_radius_r_mode_mu}
\mu \left( \biguplus_{k, m \in \mathbb{N}} \bigl( E_{k, m} \times \{ k \} \times \{ m \} \bigr) \right) & \coloneqq \frac{1}{Z} \sum_{k, m \in \mathbb{N}} \sigma^{-m} \lambda \bigl( E_{k, m} \cap [-2^{- k - m - 1}, 2^{- k - m - 1}] \bigr),
\end{align}
where $E_{k, m} \in \Borel{\mathbb{R}}$, $\lambda$ denotes one-dimensional Lebesgue measure, $\nicefrac{1}{2} < \sigma < 1$ is a free parameter, and the normalisation constant is $Z \coloneqq \sum_{m \in \mathbb{N}} (2 \sigma)^{-m} < \infty$.
Now let $0 < r < \nicefrac{1}{8}$ be arbitrary and let $n \in \mathbb{N}$ be uniquely determined by $2^{-n} \leq r < 2^{-n + 1}$.
We now determine $M_{r}$ and whether or not it can be attained by the masses of balls $\cball{x}{r}$, where $x = (\xi, k, m)$ has $m = n$, $m > n$, or $m < n$ respectively.
Note that, since $r < 2$, the first case of \eqref{eq:no_radius_r_mode_d} implies that $\cball{x}{r} \subseteq \mathbb{R} \times \mathbb{N} \times \{ m \}$.
\begin{enumerate}[label=(\roman*)]
\item
First suppose that $m = n$.
For odd $k \in \mathbb{N}$,
\[
\mu(\cball{x}{r}) = \mu(\mathbb{R} \times (\mathbb{N} \setminus \{ k + 1 \}) \times \{ m \}) = \frac{(2\sigma)^{-m}}{Z} (1 - 2^{-(k + 1)}) .
\]
Taking the limit as $k \to \infty$ shows that $M_{r} \geq \frac{(2\sigma)^{-n}}{Z}$ but that no such ball realises this supremal mass.
The case of even $k$ is similar, just as in \Cref{eg:no_radius_1_mode}.
\item
For $m > n$, since $\cball{x}{r} \subseteq \mathbb{R} \times \mathbb{N} \times \{ m \}$, it follows that
\[
\mu(\cball{x}{r}) \leq \frac{(2 \sigma)^{-m}}{Z} < \frac{(2\sigma)^{-n}}{Z}.
\]
\item
If $m < n$, then $r < 2^{-n + 1} \leq 2^{-m}$, and so the second case of \eqref{eq:no_radius_r_mode_d} ensures that $\cball{x}{r} \subseteq \mathbb{R} \times \{ k \} \times \{ m \}$.
The mass of such a ball is greatest when $\xi = 0$ and $k = 1$, in which case the ball (which is a single line segment) has mass
\begin{align*}
\mu ( \cball{x}{r} )
& = \frac{\sigma^{-m}}{Z} \lambda \bigl( [-r, r] \cap [ -2^{-m - 2}, 2^{-m - 2} ] \bigr)
= \frac{\sigma^{-m}}{Z} \min \{ 2 r , 2^{-m - 1} \} \\
& \leq \max \{ 4 \sigma^{3} , 2 \sigma^{2} , \sigma \} \frac{(2\sigma)^{-n}}{Z}
< \frac{(2\sigma)^{-n}}{Z} ,
\end{align*}
where the penultimate inequality follows by treating the cases $m \leq n - 3$, $m = n - 2$, and $m = n - 1$ separately, using $2^{-n + 1} \leq 2 r < 2^{-n + 2}$, and the final inequality holds because $\max \{ 4 \sigma^{3} , 2 \sigma^{2} , \sigma \} < 1$ once the free parameter is further restricted to $\nicefrac{1}{2} < \sigma < 4^{-\nicefrac{1}{3}}$, as we assume from now on.
\end{enumerate}
Hence, $M_{r} = \frac{(2 \sigma)^{-n}}{Z}$ but $\crcdf{x}{r} < M_{r}$ for all $x \in X$, i.e.\ $\mu$ has no radius-$r$ mode.
\end{example}
The various sufficient conditions for existence of radius-$r$ modes (\Cref{thm:r_greatest}) cover a variety of different spaces, though \Cref{eg:no_radius_r_mode} shows that existence cannot always be guaranteed.
Motivated by applications to inverse problems, we now specialise somewhat and prove that radius-$r$ modes always exist when $X = \ell^{p}(\mathbb{N}; \mathbb{R})$, $1 < p < \infty$, is equipped with a reweighting $\mu$ of a Gaussian measure $\mu_{0}$.
We note that the existence of strong modes for this setting was recently studied by \citet{KlebanovWacker2022} under the assumption that $\mu_{0}$ has diagonal covariance operator.
The main technical steps are based on the arguments of \citet[Proposition~4.4]{AgapiouBurgerDashtiHelin2018}, and in particular we exploit the fact that $\ell^{p}(\mathbb{N}; \mathbb{R})$ has a canonical Schauder basis $(e_n)_{n \in \mathbb{N}}$ to define the finite-dimensional projections into the first $n$ components:
\begin{align*}
P_n \colon X &\to \mathbb{R}^n \\
x &\mapsto (x_1, \dots, x_n).
\end{align*}
The pushforward measures $\mu \circ P_n^{-1}$ of sufficiently ``nice'' measures $\mu$ satisfy a technical condition which we call \emph{spherical non-atomicity} (\Cref{defn:spherically_non-atomic}); this allows us to obtain weak upper semicontinuity of the functional $x \mapsto \crcdf{x}{r}$ for any fixed $r > 0$.
\begin{theorem}[Radius-$r$ modes in $\ell^{p}$ sequence spaces]
\label{thm:radius-r_modes_lp}
Let $X = \ell^{p}(\mathbb{N}; \mathbb{R})$, $1 \leq p < \infty$, and let $\mu \in \prob{X}$.
Define the finite-dimensional projections $(P_n)_{n \in \mathbb{N}}$ and suppose that the pushforward measures $\mu \circ P_n^{-1}$ are spherically non-atomic for each $n \in \mathbb{N}$.
Then, for each fixed $r > 0$,
\begin{enumerate}[label=(\alph*)]
\item
\label{item:radius-r_modes_lp_weak_usc}
$x \mapsto \crcdf{x}{r}$ is weakly upper semicontinuous;
and
\item
\label{item:radius-r_modes_lp_non-atomic}
if $p > 1$, then $\mu$ has a radius-$r$ mode.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[label=(\alph*)]
\item
Suppose that $x_k \rightharpoonup x^\star$ as $k \to \infty$, and let $\mu_n(A) \coloneqq (\mu \circ P_n^{-1})(P_n A)$.
As $\mu_n(\cball{x^\star}{r}) \searrow \crcdf{x^\star}{r}$ (\Cref{lem:lp_projection_properties}), it follows that, for any $\varepsilon > 0$, there exists $N \in \mathbb{N}$ such that, for all $n \geq N$,
\begin{equation}
\label{eq:approx_mu_by_mu_n}
\mu_n(\cball{x^\star}{r}) - \crcdf{x^\star}{r} < \varepsilon.
\end{equation}
Using \eqref{eq:approx_mu_by_mu_n} and the inequality $\mu_n(\cball{x_k}{r}) \geq \mu(\cball{x_k}{r})$ for any $k \in \mathbb{N}$, we obtain
\begin{equation}
\label{eq:approx_of_rcdf_difference_by_mu_n}
\crcdf{x_k}{r} - \crcdf{x^\star}{r} \leq \mu_n(\cball{x_k}{r}) - \mu_n(\cball{x^\star}{r}) + \varepsilon.
\end{equation}
By hypothesis, $x_k \rightharpoonup x^\star$, so $P_n x_k \to P_n x^\star$ as $k \to \infty$.
As $\mu \circ P_n^{-1}$ is assumed to be spherically non-atomic, $x \mapsto (\mu \circ P_n^{-1})(\cballn{x}{r}{n})$ is continuous (\Cref{cor:RCDF_spherically_nonatomic}).
Hence,
\begin{align*}
\lim_{k \to \infty} (\mu \circ P_n^{-1})(P_n \cball{x_k}{r}) &= \lim_{k \to \infty} (\mu \circ P_n^{-1})(\cballn{P_n x_k}{r}{n}) &&\text{(\Cref{lem:lp_projection_properties}\ref{item:lp_projection_properties_1})} \\
&= (\mu \circ P_n^{-1})(\cballn{P_n x^\star}{r}{n}) &&\text{(by continuity)} \\
&= (\mu \circ P_n^{-1})(P_n \cball{x^\star}{r}) &&\text{(\Cref{lem:lp_projection_properties}\ref{item:lp_projection_properties_1}).}
\end{align*}
Hence, $\lim_{k \to \infty} \mu_n(\cball{x_k}{r}) = \mu_n(\cball{x^\star}{r})$.
Taking the limit superior as $k \to \infty$ in \eqref{eq:approx_of_rcdf_difference_by_mu_n} yields that
\begin{equation*}
\limsup_{k \to \infty} \crcdf{x_k}{r} - \crcdf{x^\star}{r}
\leq
\lim_{k \to \infty} \mu_n(\cball{x_k}{r}) - \mu_n(\cball{x^\star}{r}) + \varepsilon
= \varepsilon.
\end{equation*}
As $\varepsilon > 0$ was arbitrary, this shows that $x \mapsto \crcdf{x}{r}$ is weakly upper semicontinuous.
\item
Fix $r > 0$ and let $(x_n)_{n \in \mathbb{N}}$ be a maximising sequence such that $\crcdf{x_n}{r} \nearrow M_r > 0$.
The sequence $(x_n)_{n \in \mathbb{N}}$ is bounded by \Cref{cor:no_unbounded_sequence_approximates_M_r};
as $X$ is reflexive for $p > 1$, the Banach--Alaoglu and Eberlein--\v{S}mulian theorems imply that it has a weakly convergent subsequence $(x_{n_k})_{k \in \mathbb{N}}$ with $x_{n_{k}} \rightharpoonup x^\star$ for some $x^{\star} \in X$.
Part \ref{item:radius-r_modes_lp_weak_usc} established that $x \mapsto \crcdf{x}{r}$ is weakly upper semicontinuous, and so
\begin{equation*}
M_r = \lim_{k \to \infty} \crcdf{x_{n_k}}{r} \leq \crcdf{x^\star}{r} .
\end{equation*}
Hence, $x^\star$ is a radius-$r$ mode (\Cref{lem:radius_r_mode}).
\end{enumerate}
\vspace{-\baselineskip}
\end{proof}
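The inequality $\mu_n(\cball{x_k}{r}) \geq \mu(\cball{x_k}{r})$ used in part \ref{item:radius-r_modes_lp_weak_usc} is simply monotonicity of measures: the cylinder $P_n^{-1}(P_n \cball{x}{r})$ contains the ball $\cball{x}{r}$. A toy grid-based illustration of this in $\mathbb{R}^{2}$ with $n = 1$ and a standard Gaussian (all numerical choices are ours):

```python
import math

h = 0.05  # grid spacing on [-4, 4]^2
pts = [(i * h, j * h) for i in range(-80, 81) for j in range(-80, 81)]

def dens(x, y):
    # standard Gaussian density on R^2
    return math.exp(-(x * x + y * y) / 2) / (2 * math.pi)

def mu(pred):
    # Riemann-sum approximation of mu({(x, y) : pred(x, y)})
    return sum(dens(x, y) for (x, y) in pts if pred(x, y)) * h * h

cx, cy, r = 0.3, -0.2, 0.7  # an arbitrary closed ball

def in_ball(x, y):
    return (x - cx) ** 2 + (y - cy) ** 2 <= r * r

def in_cylinder(x, y):
    # P_1^{-1}(P_1 ball): the vertical slab over the projected interval
    return abs(x - cx) <= r

# The cylinder contains the ball, so its mass dominates: mu_1(ball) >= mu(ball).
assert mu(in_cylinder) >= mu(in_ball) > 0
```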
\begin{corollary}[Radius-$r$ MAP estimators for Gaussian priors]
\label{cor:radius-r_modes_lp_gaussian}
Let $\mu_0 \in \prob{\ell^{p}(\mathbb{N}; \mathbb{R})}$, $1 < p < \infty$, be a non-degenerate Gaussian measure and let $\mu \ll \mu_0$.
Then, for any $r > 0$, $\mu$ has a radius-$r$ mode.
\end{corollary}
\begin{proof}
As $\mu_0$ is a non-degenerate Gaussian, each pushforward $\mu_0 \circ P_n^{-1}$ is a non-degenerate Gaussian measure on $\mathbb{R}^n$, i.e.\ it has an invertible covariance matrix, and thus is absolutely continuous with respect to Lebesgue measure.
Since $\mu \ll \mu_0$, it follows that $\mu \circ P_n^{-1}$ is also absolutely continuous and therefore spherically non-atomic (\Cref{lem:lp_posterior_pushforwards_non-atomic}).
The result now follows from \Cref{thm:radius-r_modes_lp}\ref{item:radius-r_modes_lp_non-atomic}.
\end{proof}
\subsection{Convergence of greatest and near-greatest elements}
\label{subsection:convergence_of_radius_r}
If one can establish that radius-$r$ modes $x_r^\star$ exist for each $r > 0$, it is then natural to ask whether sequences of radius-$r$ modes can approximate true modes, e.g.\ strong or weak modes.
This approach is used by \citet[Theorem~3.5]{DashtiLawStuartVoss2013} to study strong modes of Bayesian posteriors obtained from Gaussian priors:
they show that, for such measures, a net of radius-$r$ modes must have a subsequence converging to a strong mode.
However, we have seen that existence of radius-$r$ modes can be difficult to prove, and in some cases no radius-$r$ modes exist (\Cref{eg:no_radius_1_mode,eg:no_radius_r_mode}).
Taking limits of radius-$r$ modes is also not straightforward:
the limit need not be a strong or weak mode, and not every mode can be represented as the limit of radius-$r$ modes.
This reveals the difficulty in extending the arguments of \citet{DashtiLawStuartVoss2013} to more general probability measures, as the correspondence between modes and limits of radius-$r$ modes breaks down.
We first give an example of a measure with a bounded and continuous Lebesgue density possessing a mode that cannot be represented as the limit of radius-$r$ modes.
The problem here is that the points $-1$ and $+1$ have asymptotically equivalent mass, but the point $+1$ has slightly more mass in each ball $\cball{+1}{r}$; as a result, $+1$ ``hides'' the other mode $-1$.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/modes_limit_of_r_modes.pdf}
\subcaption{\raggedright Unnormalised density \eqref{eq:modes_limit_of_r-modes_density} of the measure in \Cref{eg:modes_limit_of_r-modes}.
The point $+1$ is the unique radius-$r$ mode for all sufficiently small $r$.}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/modes_limit_of_r_modes_ratio.pdf}
\subcaption{\raggedright The ratio $\nicefrac{\crcdf{+1}{r}}{\crcdf{-1}{r}}$
converges to $1$ as $r \to 0$, but it is strictly greater than $1$ for any $r > 0$.}
\end{subfigure}
\caption{\Cref{eg:modes_limit_of_r-modes} shows that not every strong mode is the limit of a sequence of radius-$r$ modes: $-1$ is a strong mode but $+1$ is the unique radius-$r$ mode for all small $r$.}
\label{fig:modes_limit_of_r-modes}
\end{figure}
\begin{example}
\label{eg:modes_limit_of_r-modes}
Define $\mu \in \prob{\mathbb{R}}$ by the Lebesgue density
\begin{equation}
\label{eq:modes_limit_of_r-modes_density}
\rho(x) \propto \max \{ 1 - (x - 1)^2, 0 \} + \max \{ 1 - (x + 1)^2 - (x + 1)^4, 0 \},
\end{equation}
as shown in \Cref{fig:modes_limit_of_r-modes}.
When $r$ is sufficiently small, $\crcdf{+1}{r} = 2r - \frac{2}{3} r^3$ and
$\crcdf{-1}{r} = 2r - \frac{2}{3} r^3 - \frac{2}{5} r^5$, up to the common normalisation constant of \eqref{eq:modes_limit_of_r-modes_density}, so there is a unique radius-$r$ mode at $+1$.
However, $+1$ and $-1$ are both strong modes: $+1$ is a radius-$r$ mode for all sufficiently small $r$, so it is a strong mode (\Cref{thm:limits_of_r-modes}) and $-1$ is a strong mode because
\begin{equation*}
\lim_{r \to 0} \frac{\crcdf{-1}{r}}{M_r} = \lim_{r \to 0} \frac{\crcdf{+1}{r}}{M_r}
\cdot \lim_{r \to 0} \frac{\crcdf{-1}{r}}{\crcdf{+1}{r}} = 1.
\end{equation*}
\end{example}
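These closed-form masses, and the behaviour of their ratio, can be confirmed numerically. The sketch below (quadrature routine and function names are ours) uses the unnormalised density, which suffices because the normalisation constant cancels in the ratio.

```python
def rho_plus(x):   # unnormalised bump centred at +1
    return max(1 - (x - 1) ** 2, 0.0)

def rho_minus(x):  # unnormalised bump centred at -1
    return max(1 - (x + 1) ** 2 - (x + 1) ** 4, 0.0)

def mass_plus(r):   # integral of rho_plus over [1 - r, 1 + r], valid for r <= 1
    return 2 * r - (2 / 3) * r ** 3

def mass_minus(r):  # integral of rho_minus over [-1 - r, -1 + r], for small r
    return 2 * r - (2 / 3) * r ** 3 - (2 / 5) * r ** 5

def midpoint_quad(f, a, b, n=20000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

r = 0.3
assert abs(midpoint_quad(rho_plus, 1 - r, 1 + r) - mass_plus(r)) < 1e-6
assert abs(midpoint_quad(rho_minus, -1 - r, -1 + r) - mass_minus(r)) < 1e-6

# The ratio is < 1 for every r > 0 (so +1 is the unique radius-r mode),
# but it increases towards 1 as r -> 0 (so -1 is nevertheless a strong mode).
ratios = [mass_minus(s) / mass_plus(s) for s in (0.5, 0.1, 0.01)]
assert all(q < 1 for q in ratios) and ratios == sorted(ratios)
```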
\citet{KlebanovWacker2022} have proposed the following solution to this problem:
instead of representing modes as limits of radius-$r$ modes --- which might not be possible --- one can treat modes as limits of points that are ``nearly greatest'', whose existence is always guaranteed.
\begin{definition}
\label{defn:AMF}
Let $X$ be a metric space and let $\mu \in \prob{X}$.
A net $(x_r)_{r > 0} \subseteq X$ is an \emph{asymptotic maximising family} (AMF) if there exists a positive function $\varepsilon$ with $\lim_{r \to 0} \varepsilon(r) = 0$ and
\begin{equation}
\label{eq:AMF}
\frac{\crcdf{x_r}{r}}{M_r} \geq 1 - \varepsilon(r)
\quad
\text{for all $r > 0$.}
\end{equation}
\end{definition}
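In contrast to radius-$r$ modes, AMFs always exist: for each $r$, one may simply select any point whose ball mass is within a factor $1 - \varepsilon(r)$ of $M_r$. A minimal sketch of this selection for (a truncation of) the measure of \Cref{eg:no_radius_1_mode}, with helper names of our own choosing; note how the selected points drift to infinity as the tolerance shrinks, reflecting the absence of a radius-$1$ mode.

```python
def Delta(k, l):
    # metric of the atomic example: an odd number and its successor are doubly spaced
    if k == l:
        return 0
    lo, hi = min(k, l), max(k, l)
    return 2 if lo % 2 == 1 and hi == lo + 1 else 1

N = 30  # truncate mu to its first N atoms of mass 2^{-k}
w = {k: 2.0 ** (-k) for k in range(1, N + 1)}

def ball_mass(x, r):
    return sum(v for k, v in w.items() if Delta(x, k) <= r)

def select_amf_point(r, eps):
    # near-maximiser: the smallest x whose ball mass is at least (1 - eps) * M_r
    M_r = max(ball_mass(x, r) for x in w)
    return next(x for x in sorted(w) if ball_mass(x, r) >= (1 - eps) * M_r)

# mu has no radius-1 mode, yet near-greatest points exist for every eps > 0;
# as eps -> 0 they drift to larger and larger integers.
x_coarse, x_fine = select_amf_point(1, 1e-2), select_amf_point(1, 1e-6)
assert x_coarse < x_fine
assert ball_mass(x_fine, 1) >= (1 - 1e-6) * max(ball_mass(x, 1) for x in w)
```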
The following results shed light on the subtleties involved in taking limits of radius-$r$ modes, or, more generally, AMFs.
\begin{theorem}
\label{thm:limits_of_r-modes}
Let $X$ be a separable metric space and let $\mu \in \prob{X}$.
\begin{enumerate}[label=(\alph*)]
\item
\label{item:limits_of_r-modes_strong}
If the constant net $(x^\star)_{r > 0}$ is an AMF, then $x^\star$ is a strong mode.
\item
If $x^\star$ is a radius-$r$ mode for all small enough $r > 0$, then $x^\star$ is a strong mode.
\item
\label{item:limits_of_r-modes_gen_strong}
If the AMF $(x_r^\star)_{r > 0}$ converges to $x^\star$, then $x^\star$ is a generalised strong mode.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[label=(\alph*)]
\item
As $(1 - \varepsilon(r)) M_r \leq \crcdf{x^\star}{r} \leq M_r$, it is immediate that $x^\star$ is a strong mode, because
\begin{equation*}
1 - \varepsilon(r) \leq \frac{\crcdf{x^\star}{r}}{M_r} \leq 1
\quad \text{and} \quad
\lim_{r \to 0} \varepsilon(r) = 0 ,
\end{equation*}
so that $\lim_{r \to 0} \nicefrac{\crcdf{x^\star}{r}}{M_r} = 1$.
\item
This is immediate from \ref{item:limits_of_r-modes_strong}: for all small enough $r$, $\crcdf{x^\star}{r} = M_r$, so the constant net $(x^\star)_{r > 0}$ forms an AMF.
\item
This is precisely \citet[Lemma~2.4]{Clason2019GeneralizedModes}.
\end{enumerate}
\vspace{-\baselineskip}
\end{proof}
These results are somewhat tight, in the sense of the following counterexamples.
For \ref{item:limits_of_r-modes_gen_strong}, the limit of radius-$r$ modes need not be a strong mode:
in \Cref{eg:upward_closures_wrt_preceq_0}\ref{item:upward_closures_wrt_preceq_0_1}, when $r$ is small, $1 - r$ is a radius-$r$ mode, yet $1$ is not a strong mode.
Furthermore, the points $\pm 1$ in \Cref{thm:oscillation_example} are limit points of a net of radius-$r$ modes, but the measure in that example has no generalised modes, so it is not possible to relax \ref{item:limits_of_r-modes_gen_strong} to require that $x^\star$ is merely a limit point.
The general question of classifying measures $\mu$ for which limits of radius-$r$ modes are strong modes is still open, although \citet{DashtiLawStuartVoss2013} show that reweightings of Gaussian measures on Hilbert spaces enjoy this property, and \citet{KlebanovWacker2022} show the same for some Gaussian measures on sequence spaces.
Under an additional nesting assumption, intersection arguments can be applied to AMFs to yield the existence of weak modes.
\begin{theorem}[AMFs and weak modes]
\label{thm:nesting_yields_GWM}
Let $X$ be a complete and separable metric space and let $\mu \in \prob{X}$.
Let $(x_{r})_{r > 0}$ be any AMF satisfying \eqref{eq:AMF}.
Then every point of $\bigcap_{r > 0} \mathop{\uparrow}\nolimits_{r} x_{r}$ is a weak mode of $\mu$ and lies in $\supp(\mu)$.
If, furthermore,
\begin{equation}
\label{eq:nesting_for_GWM}
0 < r \leq s \implies \mathop{\uparrow}\nolimits_{r} x_{r} \subseteq \mathop{\uparrow}\nolimits_{s} x_{s},
\end{equation}
then $\bigcap_{r > 0} \mathop{\uparrow}\nolimits_{r} x_{r}$ is non-empty and compact, and there exists a null sequence $(r_{n})_{n \in \mathbb{N}}$ and $x^{\star} \in \bigcap_{r > 0} \mathop{\uparrow}\nolimits_{r} x_{r}$ such that $(x_{r_{n}})_{n \in \mathbb{N}}$ converges to the weak mode $x^{\star}$ as $n \to \infty$.
\end{theorem}
\begin{proof}
Let $x^{\star} \in \bigcap_{r > 0} \mathop{\uparrow}\nolimits_{r} x_{r}$.
For all sufficiently small $r > 0$,
\[
\crcdf{x^{\star}}{r} \geq \crcdf{x_{r}}{r} \geq M_{r} ( 1 - \varepsilon(r) ) > 0,
\]
and so $x^{\star} \in \supp(\mu)$.
Also, for any $x' \in X$,
\[
\limsup_{r \to 0} \frac{\crcdf{x'}{r}}{\crcdf{x^{\star}}{r}}
\leq \limsup_{r \to 0} \frac{\crcdf{x'}{r}}{\crcdf{x_r}{r}}
\leq \limsup_{r \to 0} \frac{M_r}{\crcdf{x_r}{r}}
\leq \limsup_{r \to 0} \frac{1}{1 - \varepsilon(r)} = 1 ,
\]
which shows that $x^{\star}$ is a weak mode.
Let $(r_{n})_{n \in \mathbb{N}}$ be a decreasing null sequence of radii.
The nesting hypothesis \eqref{eq:nesting_for_GWM} implies that
\[
\bigcap_{r > 0} \mathop{\uparrow}\nolimits_{r} x_{r} = \bigcap_{n \in \mathbb{N}} \mathop{\uparrow}\nolimits_{r_{n}} x_{r_{n}} .
\]
For each $n$, $\mathop{\uparrow}\nolimits_{r_{n}} x_{r_{n}}$ is non-empty and, by \Cref{lem:upper_closure_is_closed_and_bounded}, is closed and bounded with $\gamma(\mathop{\uparrow}\nolimits_{r_{n}} x_{r_{n}}) \leq 2 r_{n}$.
This, together with the nesting hypothesis \eqref{eq:nesting_for_GWM} and Kuratowski's intersection theorem (\Cref{thm:intersection_theorem}\ref{item:intersection_theorem_Kuratowski}), ensures that $\bigcap_{n \in \mathbb{N}} \mathop{\uparrow}\nolimits_{r_{n}} x_{r_{n}}$ is non-empty and compact.
Finally, \eqref{eq:nesting_for_GWM} gives $x_{r_{m}} \in \mathop{\uparrow}\nolimits_{r_{m}} x_{r_{m}} \subseteq \mathop{\uparrow}\nolimits_{r_{n}} x_{r_{n}}$ whenever $r_{m} \leq r_{n}$, so the sequence $(x_{r_{n}})_{n \in \mathbb{N}}$ is eventually contained in each of the nested sets; since $\gamma(\mathop{\uparrow}\nolimits_{r_{n}} x_{r_{n}}) \to 0$, it admits a subsequence converging to some $x^{\star} \in \bigcap_{r > 0} \mathop{\uparrow}\nolimits_{r} x_{r}$, which is a weak mode by the first part of the proof.
\end{proof}
\begin{remark}
\begin{enumerate}[label=(\alph*)]
\item
An AMF satisfying \eqref{eq:AMF} alone always exists;
the key hypothesis here is the nesting hypothesis \eqref{eq:nesting_for_GWM}, which fails for measures displaying oscillatory behaviour of the kind discussed in \Cref{thm:oscillation_example}.
\item
The nesting hypothesis \eqref{eq:nesting_for_GWM}, in conjunction with \Cref{lem:upper_closure_is_closed_and_bounded}, ensures that the family of nearly-greatest elements $(x_{r})_{r > 0}$ --- and indeed any family of greatest elements $(x_{r}^{\star})_{r > 0}$ --- must be bounded.
This means that \Cref{thm:nesting_yields_GWM} does not apply to measures such as \Cref{eg:upward_closures_wrt_preceq_0}\ref{item:upward_closures_wrt_preceq_0_2}, for which the radius-$r$ modes ``escape to infinity'' as $r \to 0$.
\item
\Cref{thm:nesting_yields_GWM} is not sharp, in the sense that there can exist weak modes $x^{\star} \notin \bigcap_{r > 0} \mathop{\uparrow}\nolimits_{r} x_{r}$.
See \Cref{eg:modes_limit_of_r-modes} for an example of this situation with
\[
x_{r} \equiv {+1} ,
\quad
\mathop{\uparrow}\nolimits_{r} x_{r} = \{ +1 \} ,
\quad
x^{\star} = {-1} \notin \bigcap_{r > 0} \mathop{\uparrow}\nolimits_{r} x_{r} .
\]
\end{enumerate}
\end{remark}
\section{Preorders in the small-radius limit}
\label{sec:limiting_preorders}
One would like to think of $x^{\star} \in X$ as a mode of $\mu \in \prob{X}$ if $x^{\star}$ is a greatest or maximal element of $X$ with respect to the preorder $\preceq_{r}$ ``in the limit as $r \to 0$'' in some sense.
However, is such a limiting preorder well defined?
Must this preorder have greatest or maximal elements?
In fact, there are several candidates for a small-radius limiting preorder and it appears that each of them has at least one undesirable feature.
This work will focus on the analytic small-radius limiting preorder $\preceq_{0}$, to be defined shortly (\Cref{defn:analytic_small-radius_preorder});
this preorder has the advantage that its greatest elements are weak modes;
however, it has the disadvantage that it is not total, i.e.\ it may admit incomparable elements, so the existence of greatest elements is not guaranteed; indeed, the collection of incomparable elements may be rather large.
We claim that this is a small price to pay:
we show in \Cref{sec:alternative_small-radius_preorders} that the alternative definitions are even more ill behaved.
\subsection{Definition and basic properties}
\begin{definition}[Small-radius limiting preorder]
\label{defn:analytic_small-radius_preorder}
Let $X$ be a metric space and let $\mu \in \prob{X}$.
Define a preorder $\preceq_{0}$ on $X$ by
\begin{align}
\label{eq:preceq_0}
x \preceq_{0} x'
& \iff \limsup_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} \leq 1
\iff \liminf_{r \to 0} \frac{\crcdf{x'}{r}}{\crcdf{x}{r}} \geq 1 ,
\end{align}
if both $x, x' \in \supp(\mu)$.
Additionally, as exceptional cases, $x \preceq_{0} x'$ is defined to be false for $x \in \supp(\mu)$ and $x' \notin \supp(\mu)$, and $x \preceq_{0} x'$ is defined to be true for $x \notin \supp(\mu)$ and $x' \in X$.
\end{definition}
It is relatively straightforward to verify that $\preceq_{0}$, as defined above, is a preorder on $X$;
the only subtleties are correct handling of points outside the support, and the use of the upper bound (but not equality)
\begin{equation}
\label{eq:limsup_product_rule}
\limsup_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{y}{r}} \frac{\crcdf{y}{r}}{\crcdf{z}{r}}
\leq
\limsup_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{y}{r}} \limsup_{r \to 0} \frac{\crcdf{y}{r}}{\crcdf{z}{r}}
\end{equation}
when verifying transitivity.
As usual, we will write $x \asymp_{0} x'$ if both $x \preceq_{0} x'$ and $x \succeq_{0} x'$ hold, and $x \mathrel{\Vert}_{0} x'$ if neither $x \preceq_{0} x'$ nor $x \succeq_{0} x'$ hold.
The appeal of the preorder $\preceq_{0}$ is that its greatest elements are exactly the weak modes of $\mu$, as defined in \eqref{eq:weak_mode}, as the next two results show.
\begin{lemma}[Properties of $\preceq_0$-maximal elements]
\label{lem:properties_of_maximal}
Let $X$ be separable and let $\mu \in \prob{X}$.
\begin{enumerate}[label=(\alph*)]
\item
\label{item:0_maximal_implies_in_supp}
If $x^{\star}$ is $\preceq_{0}$-maximal, then $x^{\star} \in \supp(\mu)$.
\item
\label{item:0_maximal_limits}
The point $x^{\star} \in \supp(\mu)$ is $\preceq_{0}$-maximal if and only if every $x \in X$ satisfies either
\begin{equation} \label{eq:liminf_of_small-ball_ratio}
\liminf_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x^{\star}}{r}} < 1
\text{ or } \lim_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x^{\star}}{r}} = 1.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[label=(\alph*)]
\item
Suppose that $x^{\star}$ is maximal but, for a contradiction, suppose also that $x^{\star} \notin \supp(\mu)$.
As $X$ is separable, take $x \in \supp(\mu) \neq \varnothing$.
By the exceptional cases of \Cref{defn:analytic_small-radius_preorder}, $x^{\star} \prec_0 x$, contradicting the assumption that $x^\star$ is maximal.
\item
It is straightforward to check that for $x, x' \in \supp(\mu)$,
\begin{equation}
\label{eq:equiv_dominant_analytic_order}
x \prec_0 x' \iff \liminf_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} < 1 \text{ and } x \asymp_0 x' \iff \lim_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} = 1.
\end{equation}
Suppose first that $x^\star$ is maximal, and let $x \in X$ be arbitrary.
If $x \notin \supp(\mu)$, then $\crcdf{x}{r} = 0$ for all sufficiently small $r$, so \eqref{eq:liminf_of_small-ball_ratio} holds.
If $x \in \supp(\mu)$, then maximality of $x^\star$ implies that either $x \prec_0 x^\star$ or $x \asymp_0 x^\star$, from which \eqref{eq:liminf_of_small-ball_ratio} follows.
Conversely, suppose that \eqref{eq:liminf_of_small-ball_ratio} holds for every $x \in X$, and let $x \in X$ satisfy $x^\star \preceq_0 x$.
The exceptional cases in \Cref{defn:analytic_small-radius_preorder} imply that $x \in \supp(\mu)$.
Hence, by \eqref{eq:equiv_dominant_analytic_order}, $x^\star \asymp_0 x$, proving that $x^\star$ is $\preceq_0$-maximal.
\end{enumerate}
\vspace{-\baselineskip}
\end{proof}
\begin{lemma}[Characterisation of weak modes]
\label{lem:GWM_greatest_maximal}
Let $X$ be separable and let $\mu \in \prob{X}$.
Then the following are equivalent:
\begin{enumerate}[label=(\alph*)]
\item
\label{item:GWM_greatest_maximal_GWM}
$x^{\star} \in X$ is a weak mode for $\mu$;
\item
\label{item:GWM_greatest_maximal_greatest}
$x^{\star} \in X$ is a $\preceq_{0}$-greatest element;
\item
\label{item:GWM_greatest_maximal_maximal}
$x^{\star} \in X$ is a $\preceq_{0}$-maximal element that is comparable with every other $x' \in X$.
\end{enumerate}
\end{lemma}
\begin{proof}
(\ref{item:GWM_greatest_maximal_GWM}$\implies$\ref{item:GWM_greatest_maximal_greatest})$\quad$
Suppose that $x^{\star}$ is a weak mode for $\mu$.
Relation \eqref{eq:weak_mode} proves that $x \preceq_{0} x^{\star}$ for each $x \in \supp(\mu)$.
As $x^\star \in \supp(\mu)$, any point $x \notin \supp(\mu)$ satisfies
$x \preceq_{0} x^{\star}$ by the special cases in the definition of $\preceq_{0}$.
\noindent(\ref{item:GWM_greatest_maximal_greatest}$\implies$\ref{item:GWM_greatest_maximal_GWM})$\quad$
Suppose that $x^{\star}$ is a $\preceq_{0}$-greatest element.
Then $x^{\star} \in \supp(\mu)$ by \Cref{lem:properties_of_maximal}.
Hence, for $x' \in \supp(\mu)$, \eqref{eq:weak_mode} holds because $x' \preceq_{0} x^{\star}$.
For $x' \notin \supp(\mu)$, we obtain
\begin{equation*}
\frac{\crcdf{x'}{r}}{\crcdf{x^\star}{r}} = 0 \text{ for sufficiently small } r,
\end{equation*}
proving that $x^{\star}$ is a weak mode.
\noindent(\ref{item:GWM_greatest_maximal_greatest}$\iff$\ref{item:GWM_greatest_maximal_maximal})$\quad$
This is obvious, since the defining property of being greatest is exactly the property of being maximal and globally comparable.
\end{proof}
The preorder $\preceq_{0}$ does have some shortcomings.
One is that, in contrast to $\preceq_{r}$ with $r > 0$ (\Cref{lem:upper_closure_is_closed_and_bounded}), upward closures under $\preceq_{0}$ need be neither closed nor bounded.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/upward_closure_1.pdf}
\subcaption{\raggedright Density of the measure in \Cref{eg:upward_closures_wrt_preceq_0}\ref{item:upward_closures_wrt_preceq_0_1} with non-closed upward closure $\mathop{\uparrow}\nolimits_0 y$, $\nicefrac{1}{2} < y < 1$.}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/upward_closure_2.pdf}
\subcaption{\raggedright Density of the measure in \Cref{eg:upward_closures_wrt_preceq_0}\ref{item:upward_closures_wrt_preceq_0_2} with unbounded upward closure $\mathop{\uparrow}\nolimits_0 y$, $y \in \mathbb{N}$.}
\end{subfigure}
\caption{The measures in \Cref{eg:upward_closures_wrt_preceq_0} show that upward closures under $\preceq_{0}$ might not be closed or bounded.
Indeed, neither of these measures has a $\preceq_{0}$-maximal element, let alone a $\preceq_{0}$-greatest element (weak mode).}
\label{fig:upward_closures_wrt_preceq_0}
\end{figure}
\begin{example}
\label{eg:upward_closures_wrt_preceq_0}
\begin{enumerate}[label=(\alph*)]
\item \label{item:upward_closures_wrt_preceq_0_1}
For an example of a non-closed upward closure under $\preceq_{0}$, similar in spirit to the example of \citet{Clason2019GeneralizedModes} mentioned in \Cref{sec:related}, let $\mu \in \prob{\mathbb{R}}$ have the Lebesgue density $\rho \colon \mathbb{R} \to \mathbb{R}$, $\rho(x) \coloneqq 2 x \mathds{1} [ 0 \leq x \leq 1 ]$, with $\supp(\mu) = [0, 1]$, and consider $y \in \mathbb{R}$.
If $y < 0$ or $y > 1$, then $y \notin \supp(\mu)$ and $\mathop{\uparrow}\nolimits_{0} y = \mathbb{R}$.
For $0 \leq y \leq \nicefrac{1}{2}$, $\mathop{\uparrow}\nolimits_{0} y = [y, 1]$, which is closed.
However, for $\nicefrac{1}{2} < y < 1$,
\[
\lim_{r \to 0} \frac{\crcdf{1}{r}}{\crcdf{y}{r}}
=
\lim_{r \to 0} \frac{2 r - r^{2}}{4 y r}
=
\frac{1}{2 y}
<
1
\]
and so $\mathop{\uparrow}\nolimits_{0} y = [y, 1)$, which is not closed.
Finally, $\mathop{\uparrow}\nolimits_{0} 1 = [\nicefrac{1}{2}, 1]$, which is closed.
\item \label{item:upward_closures_wrt_preceq_0_2}
For an example of an unbounded upward closure under $\preceq_{0}$, let $\mu \in \prob{\mathbb{R}}$ have the unbounded Lebesgue density $\rho \colon \mathbb{R} \to \mathbb{R}$,
\[
\rho(x) \coloneqq \sum_{n \in \mathbb{N}} n \mathds{1} \bigl[ n - \tfrac{ 2^{-n - 1} }{ n } \leq x \leq n + \tfrac{ 2^{-n - 1} }{ n } \bigr].
\]
That is, $\rho$ consists of a sum of disjoint indicator functions centred on the natural numbers $n \in \mathbb{N}$, each having mass $2^{-n}$ and height $n$.
Then, for any $x, y \in \mathbb{N}$ with $x > y$,
\[
\lim_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{y}{r}}
=
\lim_{r \to 0} \frac{2 x r}{2 y r}
=
\frac{x}{y}
>
1
\]
and so $\mathop{\uparrow}\nolimits_{0} y \supseteq \mathbb{N} \cap [y, \infty)$.
\end{enumerate}
\end{example}
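Both small-radius limits computed in part \ref{item:upward_closures_wrt_preceq_0_1} can be checked numerically. In the sketch below (function names ours), ball masses for the density $\rho(t) = 2t$ on $[0, 1]$ are evaluated in closed form via the antiderivative $t \mapsto t^{2}$.

```python
def mass(x, r):
    # mu(cball(x, r)) for the density rho(t) = 2t on [0, 1];
    # the antiderivative of 2t is t^2, so the mass is b^2 - a^2.
    a, b = max(x - r, 0.0), min(x + r, 1.0)
    return max(b * b - a * a, 0.0)

y = 0.8  # any 1/2 < y < 1 exhibits the non-closed upward closure
# The ratio mass(1, r) / mass(y, r) tends to 1/(2y) < 1, so the endpoint 1 is
# NOT in the upward closure of y under the limiting preorder ...
assert abs(mass(1.0, 1e-3) / mass(y, 1e-3) - 1 / (2 * y)) < 1e-3
# ... whereas interior x in [y, 1) dominate y in the limit (ratio -> x/y >= 1).
assert abs(mass(0.9, 1e-4) / mass(y, 1e-4) - 0.9 / y) < 1e-6
```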
\Cref{eg:upward_closures_wrt_preceq_0} furnishes two examples of measures with no $\preceq_{0}$-maximal element, let alone a $\preceq_{0}$-greatest element (weak mode), or strong mode.
However, these examples are relatively tame, in the sense that the lack of a mode is due to the fact that, given any candidate mode $x^{\star}$, there is always some $x'$ with $x' \succ_{0} x^{\star}$;
the \emph{real} shortcoming and subtlety of $\preceq_{0}$ is that it is not total, i.e.\ it admits incomparable elements, and we make this the topic of the next subsection.
\subsection{Criteria for incomparability and comparability}
For $r > 0$, totality of $\preceq_{r}$ followed immediately from \Cref{defn:positive_radius_preorder}.
This is certainly not so obvious for $\preceq_{0}$.
Indeed, what is immediate from \Cref{defn:analytic_small-radius_preorder} is that $\preceq_{0}$-incomparable elements can be characterised as follows:
\begin{lemma}[Incomparability in the limiting preorder]
\label{lem:incomp_0}
For $x, x' \in X$,
\begin{equation}
\label{eq:incomp_0}
x \mathrel{\Vert}_{0} x' \iff x, x' \in \supp(\mu) \text{ and } \liminf_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} < 1 < \limsup_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} .
\end{equation}
\end{lemma}
In the other direction, we can give a (very strong) sufficient condition for two points to be comparable under $\preceq_{0}$:
\begin{lemma}
Let $X$ be any metric space and let $\mu \in \prob{X}$.
Suppose that, for given $x, x' \in \supp(\mu)$, the function $r \mapsto \nicefrac{\crcdf{x}{r}}{\crcdf{x'}{r}}$ is uniformly continuous on some interval $(0, r^\star)$.
Then $x$ and $x'$ are $\preceq_{0}$-comparable.
\end{lemma}
\begin{proof}
The ratio function $\nicefrac{\crcdf{x}{r}}{\crcdf{x'}{r}}$ can be uniquely extended to a uniformly continuous function on $[0, r^\star]$ \citep[Lemma~3.11]{AliprantisBorder2006}.
By continuity, the limit of the ratio function as $r \to 0$ must exist;
the result follows by \Cref{lem:incomp_0}.
\end{proof}
The previous two lemmas hint at a way to construct concrete examples of measures with incomparable points under $\preceq_{0}$:
one must make the masses near two points ``oscillate'' relative to one another in the small-radius limit.
We now construct such a measure on $\mathbb{R}$ with a Lebesgue density and two $\preceq_{0}$-incomparable maximal points, neither of which is $\preceq_0$-greatest.
(\Cref{sec:dense_antichains} will supply even more extreme and general examples, but it is pedagogically useful to consider a simpler construction first.)
The essential idea is to have the RCDF $r \mapsto \crcdf{x}{r}$ piecewise linearly interpolate $r \mapsto \sqrt{r}$ through either the interpolation knots $r = a^{-n}$ with $n \in \mathbb{N}$ even or the interpolation knots $r = a^{-n}$ with $n \in \mathbb{N}$ odd, where $a > 1$ is chosen arbitrarily.
It turns out that these mild perturbations of the integrable singularity $\rho(x) \propto \absval{ x }^{-1/2}$ produce ``incomparable modes''.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/oscillation_example_1.pdf}
\subcaption{\raggedright The RCDFs $\mu^\textup{e}(\cball{0}{r})$ and $\mu^\textup{o}(\cball{0}{r})$, shown here on a linear scale, interpolate between the knots $a^{-n}$ of mild perturbations of the function $r \mapsto \sqrt{r}$.}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/oscillation_example_2.pdf}
\subcaption{\raggedright The RCDFs $\mu^\textup{e}(\cball{0}{r})$ and $\mu^\textup{o}(\cball{0}{r})$ shown on a logarithmic scale.
One sees that $\mu^\textup{o}$ agrees with $\sqrt{r}$ at the knots $a^{-n}$, $n$ odd, and $\mu^\textup{e}$ agrees with $\sqrt{r}$ at $a^{-n}$, $n$ even.}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/oscillation_example_3.pdf}
\subcaption{\raggedright The probability density functions $\rho^{\textup{e}} (\,\cdot\, + 1)$ and $\rho^{\textup{o}} (\,\cdot\, - 1)$ have singularities which behave like $\absval{ \,\cdot\, }^{-\nicefrac{1}{2}}$ at $-1$ and $+1$ respectively.}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./fig/oscillation_example_4.pdf}
\subcaption{\raggedright The ratio $\nicefrac{ \crcdf{-1}{r} }{ \crcdf{+1}{r} }$ oscillates between $\alpha$ and $\alpha^{-1}$ as $r \to 0$, so the $\liminf$ of the ratio is below $1$ and the $\limsup$ is above $1$.}
\end{subfigure}
\caption{Illustration of the measures defined in \Cref{thm:oscillation_example} for the parameter choice $a = 2$.}
\label{fig:oscillation_example}
\end{figure}
\begin{example}[An absolutely continuous measure on $\mathbb{R}$ with incomparable maximal points and neither weak nor generalised modes; after an example of I.~Klebanov]
\label{thm:oscillation_example}
Let $X$ be any Borel-measurable subset of $\mathbb{R}$ containing $[-2, 2]$.
Fix $a > 1$ and, as illustrated in \Cref{fig:oscillation_example}, define $\mu^{\textup{e}}, \mu^{\textup{o}} \in \prob{X}$ via their Lebesgue densities $\rho^{\textup{e}}, \rho^{\textup{o}} \colon X \to [0, \infty]$,
\[
\rho^{\textup{e}} (x)
\coloneqq
\begin{cases}
0, & \text{if $\absval{ x } > 1$,} \\
\dfrac{1}{2} \dfrac{a^{\nicefrac{n}{2}} ( 1 - a^{-1} )}{1 - a^{-2}} , & \text{if $a^{-2 - n} \leq \absval{ x } \leq a^{-n}$ for even $n \in \mathbb{N} \cup \{ 0 \}$,} \\
\infty, & \text{if $x = 0$,}
\end{cases}
\]
and
\[
\rho^{\textup{o}} (x)
\coloneqq
\begin{cases}
0, & \text{if $\absval{ x } > 1$,} \\
\dfrac{1}{2} \dfrac{1 - a^{-\nicefrac{1}{2}}}{1 - a^{- 1}} , & \text{if $a^{-1} \leq \absval{ x } \leq 1$,} \\
\dfrac{1}{2} \dfrac{a^{\nicefrac{n}{2}} ( 1 - a^{-1} )}{1 - a^{-2}} , & \text{if $a^{-2 - n} \leq \absval{ x } \leq a^{-n}$ for odd $n \in \mathbb{N}$,} \\
\infty, & \text{if $x = 0$,}
\end{cases}
\]
so that the RCDFs are
\[
\mu^{\textup{e}} (\cball{0}{r})
=
\begin{cases}
1, & \text{if $r \geq 1$,} \\
a^{- 1 -\nicefrac{n}{2}} + ( r - a^{-2 - n} ) \dfrac{a^{\nicefrac{n}{2}} ( 1 - a^{-1} )}{1 - a^{-2}} , & \text{if $a^{-2 - n} \leq r \leq a^{-n}$ for even $n \in \mathbb{N} \cup \{ 0 \}$,} \\
0, & \text{if $r = 0$,}
\end{cases}
\]
and
\[
\mu^{\textup{o}} (\cball{0}{r})
=
\begin{cases}
1, & \text{if $r \geq 1$,} \\
a^{-\nicefrac{1}{2}} + ( r - a^{-1} ) \dfrac{1 - a^{-\nicefrac{1}{2}}}{1 - a^{- 1}} , & \text{if $a^{-1} \leq r \leq 1$,} \\
a^{- 1 -\nicefrac{n}{2}} + ( r - a^{-2 - n} ) \dfrac{a^{\nicefrac{n}{2}} ( 1 - a^{-1} )}{1 - a^{-2}} , & \text{if $a^{-2 - n} \leq r \leq a^{-n}$ for odd $n \in \mathbb{N}$,} \\
0, & \text{if $r = 0$.}
\end{cases}
\]
We now consider the probability measure $\mu \coloneqq \tfrac{1}{2} \mu^{\textup{e}} (\,\cdot\, + 1) + \tfrac{1}{2} \mu^{\textup{o}} (\,\cdot\, - 1) \in \prob{X}$ with Lebesgue density $\rho \coloneqq \tfrac{1}{2} \rho^{\textup{e}} (\,\cdot\, + 1) + \tfrac{1}{2} \rho^{\textup{o}} (\,\cdot\, - 1)$.
We first observe that $\pm 1 \succeq_{0} x$ for any $x \neq \pm 1$:
For sufficiently small $r > 0$, both $\rho^{\textup{e}}$ and $\rho^{\textup{o}}$ are bounded above by a constant on $\cball{x}{r} = [x - r, x + r]$, so that $\crcdf{x}{r} \leq c r$ for some $c \geq 0$.
On the other hand, by construction, both $\crcdf{-1}{r}$ and $\crcdf{+1}{r}$ are asymptotically equivalent to $\nicefrac{\sqrt{r}}{2}$ as $r \to 0$, from which it follows that ${\pm 1} \succeq_{0} x$.
However, ${-1}$ and ${+1}$ are incomparable.
Observe that, for $r = a^{-n}$ with $n \in \mathbb{N}$ even,
\[
\frac{\crcdf{-1}{r}}{\crcdf{+1}{r}} = \alpha \coloneqq \frac{a + 1}{2 a^{1/2}} > 1 ,
\]
whereas for $r = a^{-n}$ with $n \in \mathbb{N}$ odd, this ratio of ball masses takes the value $\alpha^{-1}$, and, for all $r > 0$, it lies in the interval $[\alpha^{-1}, \alpha]$, all of which can be verified easily from the interpolation formulae for $\mu^{\textup{e}} (\cball{0}{r})$ and $\mu^{\textup{o}} (\cball{0}{r})$.
\Cref{lem:incomp_0} now implies that ${-1} \mathrel{\Vert}_{0} {+1}$, since
\[
\alpha^{-1} = \liminf_{r \to 0} \frac{\crcdf{-1}{r}}{\crcdf{+1}{r}} < 1 < \limsup_{r \to 0} \frac{\crcdf{-1}{r}}{\crcdf{+1}{r}} = \alpha .
\]
Thus, the preorder $\preceq_{0}$ induced by $\mu$ has two incomparable maximal elements, namely $\pm 1$, and no greatest element; hence $\mu$ has no weak modes (\Cref{lem:GWM_greatest_maximal}).
We now check that $+1$ and $-1$ are not generalised modes.
Let $r_n \coloneqq a^{-2n}$, and suppose that $x_n \to 1$ as $n \to \infty$.
Choose $N$ large enough that, for all $n \geq N$, $\absval{ x_n - 1 } < \nicefrac{1}{2}$ and $r_n < \nicefrac{1}{2}$.
As the density $\rho^\textup{o}(\,\cdot\, - 1)$ is a symmetric singularity around $+1$, non-increasing in the distance from $+1$, it follows that $\crcdf{x_n}{r_n} \leq \crcdf{+1}{r_n}$.
As $M_{r_n} = \crcdf{-1}{r_n}$, we obtain that
\begin{equation*}
\liminf_{n \to \infty} \frac{\crcdf{x_n}{r_n}}{M_{r_n}} \leq \liminf_{n \to \infty} \frac{\crcdf{+1}{r_n}}{M_{r_n}} = \alpha^{-1} < 1.
\end{equation*}
This proves that $+1$ is not a generalised mode; a similar argument with $r_n \coloneqq a^{-2n + 1}$ proves that $-1$ is not a generalised mode.
Finally, suppose that $x \neq \pm 1$, and let $(r_n)_{n \in \mathbb{N}}$ be any null sequence.
Let $\varepsilon \coloneqq \min \bigl\{ \absval{ x - 1 }, \absval{ x + 1 } \bigr\}$.
Suppose that $x_n \to x$ as $n \to \infty$.
There must exist $N \in \mathbb{N}$ such that, for all $n \geq N$, $\absval{ x_n - 1 } > \nicefrac{\varepsilon}{2}$ and $\absval{ x_n + 1 } > \nicefrac{\varepsilon}{2}$.
The Lebesgue density of $\mu$ is bounded on $\mathbb{R} \setminus \bigl(\cball{+1}{\nicefrac{\varepsilon}{2}} \cup \cball{-1}{\nicefrac{\varepsilon}{2}}\bigr)$ by some constant $C > 0$, so $\crcdf{x_n}{r_n} \leq Cr_n$ for $n \geq N$.
As $M_{r_n} \in \Theta\bigl(r_n^{\nicefrac{1}{2}}\bigr)$ as $n \to \infty$, it follows that
\begin{equation*}
\liminf_{n \to \infty} \frac{\crcdf{x_n}{r_n}}{M_{r_n}} = 0,
\end{equation*}
so $x$ is not a generalised mode.
\end{example}
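The oscillation in \Cref{thm:oscillation_example} can also be checked numerically from the interpolation formulae alone. The following sketch (helper names are ours; the choice $a = 2$ is arbitrary) evaluates the piecewise-linear RCDFs $\mu^{\textup{e}}(\cball{0}{r})$ and $\mu^{\textup{o}}(\cball{0}{r})$ and confirms that their ratio, which equals $\nicefrac{\crcdf{-1}{r}}{\crcdf{+1}{r}}$ since the factors $\nicefrac{1}{2}$ cancel, is $\alpha$ at even knots, $\alpha^{-1}$ at odd knots, and lies in $[\alpha^{-1}, \alpha]$ throughout.

```python
import math

A = 2.0                                   # the parameter a > 1 (arbitrary choice)
ALPHA = (A + 1) / (2 * math.sqrt(A))      # alpha = (a + 1) / (2 a^{1/2}) > 1

def slope(n):
    # slope a^{n/2} (1 - a^{-1}) / (1 - a^{-2}) shared by the linear pieces
    return A ** (n / 2) * (1 - 1 / A) / (1 - A ** -2)

def mu_e_ball(r):
    """mu^e(B(0, r)): piecewise linear, agrees with sqrt(r) at even knots a^{-n}."""
    if r >= 1:
        return 1.0
    if r <= 0:
        return 0.0
    n = 2 * math.floor(-math.log(r, A) / 2)            # even n with a^{-2-n} <= r <= a^{-n}
    return A ** (-1 - n / 2) + (r - A ** (-2 - n)) * slope(n)

def mu_o_ball(r):
    """mu^o(B(0, r)): piecewise linear, agrees with sqrt(r) at odd knots a^{-n}."""
    if r >= 1:
        return 1.0
    if r <= 0:
        return 0.0
    if r >= 1 / A:                                     # the outermost piece on [a^{-1}, 1]
        return A ** -0.5 + (r - 1 / A) * (1 - A ** -0.5) / (1 - 1 / A)
    n = 2 * math.floor((-math.log(r, A) - 1) / 2) + 1  # odd n with a^{-2-n} <= r <= a^{-n}
    return A ** (-1 - n / 2) + (r - A ** (-2 - n)) * slope(n)
```

Since the ratio of two linear functions is monotone between knots, its extreme values are attained at the knots $a^{-n}$, which is why the numerical check at knot radii suffices.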
\Cref{thm:oscillation_example} illustrates a difficulty with weak modes, and one whose cause can be traced to incomparability:
if the space $X$ is partitioned into disjoint positive-mass sets $A$ and $B$, existence of modes for $\mu$ restricted to (or conditioned upon) $A$ and $B$ individually cannot ensure existence of a mode for $\mu$, since the modes of $\mu|_{A}$ and $\mu|_{B}$ may be $\preceq_{0}$-incomparable.
The extension theorems of Szpilrajn, Arrow, and Hansson \citep{Szpilrajn1930,Hansson1968} assert that any non-total preorder $\preceq$ can be extended to a total preorder $\preceq'$.
Thus, given the non-totality of $\preceq_{0}$, one might hope to resolve all these issues by defining a mode of $\mu$ to be a $\preceq'_{0}$-greatest element.
Unfortunately, such a total extended preorder is not uniquely determined and so such a definition of a mode would not be well defined:
for the measure $\mu$ of \Cref{thm:oscillation_example}, there are total extensions $\preceq'_{0}$ of $\preceq_{0}$ yielding each of the three situations
\[
- 1 \preceq'_{0} +1 \not\preceq'_{0} -1 ,
\quad
+1 \preceq'_{0} -1 \not\preceq'_{0} +1 ,
\quad
\text{and } -1 \asymp'_{0} +1 .
\]
That is, which (if any) of $\pm 1$ counts as a mode would seem to be a matter of personal choice.
Finally, we note that similar ideas could be used to construct incomparable points that are not $\preceq_{0}$-maximal, but such examples have less importance for the theory of modes.
\subsection{Absolutely continuous measures with dense antichains}
\label{sec:dense_antichains}
\Cref{thm:oscillation_example} can easily be extended to construct a measure $\mu \in \prob{\mathbb{R}}$ with any finite number of mutually incomparable $\preceq_{0}$-maximal elements, none of which is a greatest element.
Indeed, it is natural to wonder how bad the situation of incomparability can be, and in particular how large an antichain can be.
\begin{proposition}
Let $X$ be a discrete metric space (e.g.\ a finite space) and let $\mu \in \prob{X}$.
Then $\preceq_{0}$ has no incomparable elements.
\end{proposition}
\begin{proof}
Let $x, x' \in \supp(\mu)$.
As $X$ is discrete, the measure $\mu$ must be atomic, so
\begin{equation*}
\lim_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} = \frac{\mu(\{x\})}{\mu(\{x'\})}.
\end{equation*}
Comparability of $x$ and $x'$ follows from \Cref{lem:incomp_0}.
\end{proof}
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{fig/countable_space_rcdfs.pdf}
\subcaption{\raggedright The RCDFs $\crcdf{+1}{r}$ and $\crcdf{-1}{r}$ for the atomic measure $\mu$.}%
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{fig/countable_space_rcdf_ratio.pdf}
\subcaption{\raggedright The ratio $\nicefrac{\crcdf{+1}{r}}{\crcdf{-1}{r}}$ oscillates as $r \to 0$.}%
\end{subfigure}
\caption{RCDFs of the measure $\mu$ defined in \Cref{example:countable_space_incomparability} showing how incomparability can arise in a countable metric space.}
\label{fig:countable_space_rcdfs}
\end{figure}
\begin{example}[Incomparability in a countable metric space]
\label{example:countable_space_incomparability}
Let $X$ be the closure of the set $\set{-1 + 2^{-n}}{n \in \mathbb{N}} \cup \set{1 - 2^{-n}}{n \in \mathbb{N}}$
with the Euclidean metric inherited from $\mathbb{R}$.
Define a measure $\mu$ on $X$ by
\begin{equation*}
\mu \coloneqq 3 \sum_{k = 1}^\infty 2^{-2k-1} \delta_{-1+2^{-2k+1}} + 3 \sum_{k = 1}^\infty 2^{-2k-2} \delta_{1 - 2^{-2k}}.
\end{equation*}
The RCDFs at the points $-1$ and $+1$ satisfy
\begin{equation*}
\liminf_{r \to 0} \frac{\crcdf{+1}{r}}{\crcdf{-1}{r}} = \frac{1}{2} < 2 = \limsup_{r \to 0} \frac{\crcdf{+1}{r}}{\crcdf{-1}{r}},
\end{equation*}
as shown in \Cref{fig:countable_space_rcdfs}, so $-1 \mathrel{\Vert}_0 +1$.
\end{example}
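The stated limit inferior and superior can be verified by direct summation of the atom masses. In the sketch below (helper name ours), truncating the two geometric series after sufficiently many terms, the ratio of ball masses equals $2$ at the radii $2^{-2m}$ and $\nicefrac{1}{2}$ at the radii $2^{-2m-1}$.

```python
def ball_mass(center, r, terms=200):
    """mu(B(center, r)) for the atomic measure above, truncating each series at `terms` atoms."""
    total = 0.0
    for k in range(1, terms + 1):
        if abs((-1 + 2.0 ** (-2 * k + 1)) - center) <= r:
            total += 3 * 2.0 ** (-2 * k - 1)   # atoms accumulating at -1
        if abs((1 - 2.0 ** (-2 * k)) - center) <= r:
            total += 3 * 2.0 ** (-2 * k - 2)   # atoms accumulating at +1
    return total

# At radii 2^{-2m} the ratio ball_mass(+1, r) / ball_mass(-1, r) is 2;
# at radii 2^{-2m-1} it is 1/2, matching the liminf and limsup above.
```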
Examples such as \Cref{thm:oscillation_example,example:countable_space_incomparability} can be extended to show that an antichain may be countably infinite.
To do so, we first introduce a family of ``coprime'' oscillatory RCDFs to generalise the RCDFs $\mu^\textup{e}$ and $\mu^\textup{o}$ of \Cref{thm:oscillation_example}:
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{fig/rho_kr.pdf}
\subcaption{\raggedright As in \Cref{thm:oscillation_example}, the density $\rho_{k, r}$ is based on perturbations of the singularity $\absval{ \,\cdot\, }^{-\nicefrac{1}{2}}$.}%
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{fig/rcdf_k.pdf}
\subcaption{\raggedright The RCDF $\mu_{k, r}(\cball{0}{\,\cdot\,})$ linearly interpolates between the knots $r = a^{-n}$, $n \in \mathbb{N}$ (marked as circles) to obtain the desired oscillations around the ``growth rate'' $\sqrt{r}$.}%
\end{subfigure}
\caption{Example of a density $\rho_{k, r}$ and RCDF $\mu_{k,r}$ from \Cref{prop:rcdf_family} with $a = 2$.}
\label{fig:rcdf_family}
\end{figure}
\begin{proposition}[A family of oscillatory RCDFs]
\label{prop:rcdf_family}
Fix $a > 1$ and a natural number $k \geq 2$.
Define the Lebesgue density $\rho_{k} \colon \mathbb{R} \to [0, \infty]$ by
\begin{equation*}
\rho_{k}(x) \coloneqq
\begin{cases}
0, & \text{ if } \absval{ x } > a^{-1},\\
\frac{1}{2} a^{\frac{n + 1}{2}}, & \text{ if }
a^{-n-1} < \absval{ x } \leq a^{-n} \text{ for } n \in \mathbb{N}
\text{ with } k \mid n, \\
0, & \text{ if } a^{-n-1} < \absval{ x } \leq a^{-n} \text{ for }
n \in \mathbb{N} \text{ with } k \mid n + 1, \\
\frac{1}{2} a^{\frac{n}{2}} \left(\frac{1 - a^{-\nicefrac{1}{2}}}{1 - a^{-1}} \right), &
\text{ if } a^{-n-1} < \absval{ x } \leq a^{-n} \text{ for } n \in \mathbb{N}
\text{ with }
k \nmid n \text{ and } k\nmid n + 1, \\
\infty, & \text{ if } x = 0,
\end{cases}
\end{equation*}
and define the corresponding truncated density
\begin{equation*}
\rho_{k,r}(x) \coloneqq \rho_k(x) \mathds{1}\big[\absval{ x } \leq r\big].
\end{equation*}
Let $\mu_{k, r}$ denote the (unnormalised) measure on $\mathbb{R}$ with Lebesgue density $\rho_{k,r}$.
Then:
\begin{enumerate}[label=(\alph*)]
\item \label{item:rcdf_family_1} the RCDF $s \mapsto \mu_{k, r}(\cball{0}{s})$ linearly interpolates between the knots
\begin{equation*}
\bigset{(a^{-n}, a^{-\nicefrac{n}{2}})}{n \in \mathbb{N},~k\nmid n}
\cup \bigset{(a^{-n}, a^{\nicefrac{1}{2} - \nicefrac{n}{2}})}{n \in \mathbb{N},~k \mid n} \cup \bigl\{ (0, 0) \bigr\},
\end{equation*}
as shown in \Cref{fig:rcdf_family}, and has formula
\begin{equation*}
\hspace{-2.5em}
\mu_{k,r}(\cball{0}{s}) =
\begin{cases}
\mu_{k,r}(\cball{0}{r}), & \text{if } s > r, \\
a^{\frac{n + 1}{2}} s, & \text{if $a^{-n-1} < s \leq a^{-n}$ for $n \in \mathbb{N}$ with $k \mid n$,} \\
a^{-\frac{n}{2}}, & \text{if $a^{-n-1} < s \leq a^{-n}$ for $n \in \mathbb{N}$ with $k \mid n + 1$,} \\
\frac{1 - a^{-\nicefrac{1}{2}}}{1 - a^{-1}} \left(a^{\frac{n}{2}} s + a^{-\frac{n + 1}{2}}\right), &
\text{if $a^{-n-1} < s \leq a^{-n}$ for $n \in \mathbb{N}$ with $k \nmid n$ and $k \nmid n + 1$,} \\
0,& \text{if $s = 0$;}
\end{cases}
\end{equation*}
\item \label{item:rcdf_family_2} in particular, if $a^{-n} \leq r$, the value of the RCDF at the knots $a^{-n}$ is
\begin{equation*}
\mu_{k, r}(\cball{0}{a^{-n}}) = \begin{cases}
a^{-\nicefrac{n}{2}}, & \text{if $k \nmid n$,} \\
a^{\nicefrac{1}{2} - \nicefrac{n}{2}}, & \text{if $k \mid n$;}
\end{cases}
\end{equation*}
\item \label{item:rcdf_family_3} given two distinct RCDFs $\mu_{k, r}$ and $\mu_{k', r'}$ with $k$ and $k'$ coprime,
\begin{equation*}
\liminf_{s \to 0} \frac{\mu_{k,r}(\cball{0}{s})}{\mu_{k',r'}(\cball{0}{s})} <
1 <
\limsup_{s \to 0} \frac{\mu_{k,r}(\cball{0}{s})}{\mu_{k',r'}(\cball{0}{s})};
\end{equation*}
\item \label{item:rcdf_family_4} for $s \leq r$, the RCDFs $s \mapsto \mu_{k, r}(\cball{0}{s})$ satisfy the bounds
\begin{equation*}
\sqrt{\nicefrac{s}{a}} \leq \mu_{k, r}(\cball{0}{s}) \leq \sqrt{as}.
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}
Claim \ref{item:rcdf_family_1} follows by integrating the density $\rho_{k, r}$, and
claim \ref{item:rcdf_family_2} follows by evaluating the formula from \ref{item:rcdf_family_1} at the knots.
To prove \ref{item:rcdf_family_3}, we exploit the fact that $k$ and $k'$ are coprime: the sequence $n_i \coloneqq (ik' - 1)k \nearrow \infty$ consists of terms divisible by $k$ but not by $k'$, and the sequence $m_i \coloneqq (ik - 1)k' \nearrow \infty$ of terms divisible by $k'$ but not by $k$.
For sufficiently large $i$, $a^{-n_i} \leq \min \{ r, r' \}$, hence by \ref{item:rcdf_family_2}
\begin{equation*}
\frac{\mu_{k, r}(\cball{0}{a^{-n_i}})}{\mu_{k',r'}(\cball{0}{a^{-n_i}})} = \frac{a^{\nicefrac{1}{2} - \nicefrac{n_i}{2}}}{a^{-\nicefrac{n_i}{2}}} = a^{\frac{1}{2}}.
\end{equation*}
Similarly, choose $i$ sufficiently large such that $a^{-m_i} \leq \min \{ r, r' \}$.
Then
\begin{equation*}
\frac{\mu_{k, r}(\cball{0}{a^{-m_i}})}{\mu_{k',r'}(\cball{0}{a^{-m_i}})} = \frac{a^{-\nicefrac{m_i}{2}}}{a^{\nicefrac{1}{2} - \nicefrac{m_i}{2}}} = a^{-\frac{1}{2}}.
\end{equation*}
As these hold for all $i$ sufficiently large, and $a^{-n_i}$ and $a^{-m_i}$ converge to zero, the desired inequality in \ref{item:rcdf_family_3} follows.
The lower bound of claim \ref{item:rcdf_family_4} follows because, for $s \leq r$,
\begin{equation*}
\mu_{k,r}(\cball{0}{s}) \geq \mu_{k,r}(\cball{0}{a^{\lfloor \log_a(s) \rfloor}})
\geq a^{\lfloor \log_a(s) \rfloor/2} \geq \sqrt{\nicefrac{s}{a}},
\end{equation*}
where the penultimate inequality uses \ref{item:rcdf_family_2}; the upper bound is easily verified from the construction of $\mu_{k,r}$ as a linear interpolation of the knots.
\end{proof}
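The knot values and the coprimality argument in this proof can be checked numerically. The sketch below (helper names ours; $a = 2$ chosen arbitrarily) encodes the knot values of item \ref{item:rcdf_family_2}, the sequences $n_i$ and $m_i$ from the proof of item \ref{item:rcdf_family_3}, and the bounds of item \ref{item:rcdf_family_4} at the knots.

```python
import math

A = 2.0   # the parameter a > 1 (arbitrary choice)

def rcdf_knot(k, n):
    """mu_{k,r}(B(0, a^{-n})) at the knot a^{-n}, assuming a^{-n} <= r, per item (b)."""
    return A ** (0.5 - n / 2) if n % k == 0 else A ** (-n / 2)

def oscillation_radii(k1, k2, i):
    """Exponents n_i = (i k2 - 1) k1 and m_i = (i k1 - 1) k2 from the proof of item (c)."""
    return (i * k2 - 1) * k1, (i * k1 - 1) * k2
```

Along the radii $a^{-n_i}$ the ratio of the two RCDFs is $a^{\nicefrac{1}{2}}$, and along $a^{-m_i}$ it is $a^{-\nicefrac{1}{2}}$, which is exactly the oscillation used in the proof.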
We now use \Cref{prop:rcdf_family} to show that the maximal antichain of a measure can be topologically dense even in the apparently well-behaved case of an absolutely continuous probability measure on the real line.
The idea is to centre mutually incomparable compactly-supported singularities at a dense collection of points $\{ q_{k} \}_{k \in \mathbb{N}}$, taking care to ensure that the points $q_{k}$ are distant enough from one another that the supports of the singularities do not interfere with one another.
Here, this is achieved by taking the $q_{k}$ to be the dyadic rationals in $(0, 1)$, a case that is easily analysed.
It is only slightly more difficult to use the dense set $\mathbb{Q} \cap [0, 1]$; the changes needed are described in \Cref{lem:fill_distances}.
\begin{theorem}[An absolutely continuous measure on $\mathbb{R}$ with a countable dense antichain]
\label{thm:countable_dense_antichain}
Let $\mu \in \prob{\mathbb{R}}$ have the Lebesgue density $\rho \colon \mathbb{R} \to [0, \infty]$ shown in \Cref{fig:countable_antichain}, defined by
\begin{equation}
\label{eq:countable_dense_antichain}
\rho(x) = \frac{1}{Z} \sum_{k \in \mathbb{N}} \rho_{p_{k}, r_{k}} (x - q_{k}) ,
\end{equation}
where $\rho_{k, r}$ is as defined in \Cref{prop:rcdf_family}, $p_{k} \in \mathbb{N}$ is the $k$\textsuperscript{th} prime number in the usual enumeration, $r_{k} \coloneqq a^{-k}$, $q_{k} \in [0, 1]$ is the $k$\textsuperscript{th} element of the set
\begin{equation*}
D \coloneqq \set{k2^{-n}}{n \in \mathbb{N},~0<k<2^{n}},
\end{equation*}
enumerated by concatenating the enumerations for the sets $D_n \coloneqq \set{k 2^{-n}}{0 < k < 2^{n},~k \text{ odd}}$, and $Z$ is a normalisation constant to ensure $\rho$ is a probability density.
Then the set $D$ is a $\preceq_0$-antichain which is dense in $[0, 1]$.
\end{theorem}
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{fig/countable_antichain_incremental.pdf}
\subcaption{\raggedright The density $\rho$ is constructed as a sum of $\rho_{p_k, r_k}$.
The orange density is $\rho_{2, 2^{-1}}(\,\cdot\, - \nicefrac{1}{2})$, and the grey densities are $\rho_{3, 2^{-2}}(\,\cdot\, - \nicefrac{1}{4})$ and $\rho_{5, 2^{-3}}(\,\cdot\, - \nicefrac{3}{4})$.}%
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{fig/countable_antichain.pdf}
\subcaption{\raggedright Approximation of the density $\rho$ with a countable dense antichain in $[0, 1]$.
The plot includes the first 31 densities, i.e.\ all centres $q_k$ in $\bigcup_{k = 1}^5 D_k$.}%
\end{subfigure}
\caption{The density $\rho$ from \Cref{thm:countable_dense_antichain}.}
\label{fig:countable_antichain}
\end{figure}
\begin{proof}
Let $x$ and $x'$ be the $k$\textsuperscript{th} and $\ell$\textsuperscript{th} terms of the enumeration of $D$ respectively, with $k \neq \ell$.
As $\crcdf{x}{r} \sim \frac1Z \mu_{p_k,r_k}(\cball{0}{r})$ and $\crcdf{x'}{r} \sim \frac1Z \mu_{p_\ell, r_\ell}(\cball{0}{r})$ as $r \to 0$ (\Cref{lem:small_ball_limits_countable_antichain}), the identity
$\limsup_{r \to 0} f(r) g(r) = \bigl( \limsup_{r \to 0} f(r) \bigr) \lim_{r \to 0} g(r)$, valid whenever $\lim_{r \to 0} g(r)$ exists in $(0, \infty)$, yields
\begin{align*}
& \limsup_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} \\
& \quad = \limsup_{r \to 0} \frac{\mu_{p_k,r_k}(\cball{0}{r})}{\mu_{p_\ell, r_\ell}(\cball{0}{r})}
\lim_{r \to 0} \frac{\crcdf{x}{r}}{\frac1Z \mu_{p_k,r_k}(\cball{0}{r})}
\frac{\frac1Z \mu_{p_\ell, r_\ell}(\cball{0}{r})}{\crcdf{x'}{r}} \\
& \quad = \limsup_{r \to 0} \frac{\mu_{p_k, r_k}(\cball{0}{r})}{\mu_{p_\ell, r_\ell}(\cball{0}{r})} & & \text{(\Cref{lem:small_ball_limits_countable_antichain})}\\
& \quad > 1 &&\text{(\Cref{prop:rcdf_family}\ref{item:rcdf_family_3}).}
\end{align*}
Swapping $x$ and $x'$ proves the $\liminf$ part of the inequality
\begin{equation*}
\liminf_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} < 1 < \limsup_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}},
\end{equation*}
proving that $x \mathrel{\Vert}_0 x'$ (\Cref{lem:incomp_0}).
\end{proof}
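The enumeration of $D$ used in \Cref{thm:countable_dense_antichain} is easy to make concrete. The following sketch (function name ours) concatenates the enumerations of the sets $D_n$; after exhausting level $n$, the $2^{n} - 1$ centres produced so far are exactly the points $k 2^{-n}$, $0 < k < 2^{n}$, which partition $[0, 1]$ into gaps of size exactly $2^{-n}$, making the density of $D$ evident.

```python
from fractions import Fraction

def dyadic_centres(levels):
    """Enumerate D by concatenating D_n = {k / 2^n : 0 < k < 2^n, k odd} for n = 1, ..., levels."""
    out = []
    for n in range(1, levels + 1):
        out.extend(Fraction(k, 2 ** n) for k in range(1, 2 ** n, 2))
    return out
```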
An analogous construction is also possible in any infinite-dimensional separable Hilbert space.
This is somewhat more technical: the measures $\mu_{k, r}$ can no longer be defined by a Lebesgue density, so we instead reweight a centred Gaussian measure so that its RCDF at $x = 0$ has the same behaviour as $\mu_{k, r}$.
Other than this, the proof strategy generalises easily to Hilbert space.
\begin{theorem}
\label{thm:countable_dense_antichain_hilbert}
Let $X$ be an infinite-dimensional, real, separable Hilbert space.
There exists $\mu \in \prob{X}$, absolutely continuous with respect to a centred non-degenerate Gaussian measure $\mu_0 \in \prob{X}$, such that $\mu$ has a countable dense $\preceq_{0}$-antichain.
\end{theorem}
\begin{proof}
Fix an orthonormal basis $(e_k)_{k \in \mathbb{N}}$ of $X$ and construct $\mu_0 \in \prob{X}$ as a product measure $\mu_0 \coloneqq \bigotimes_{k \in \mathbb{N}} N(0, \sigma_k^2)$ using this basis, with $\sigma_k = k^{-2}/2$.
This ensures that we have the small-ball asymptotic $\mu_0(\cball{0}{r}) \sim C\exp(-ar^{-2})$ as $r \to 0$ (\Cref{lem:hilbert-space_gaussian_asymptotic}).
To obtain measures in the Hilbert space $X$ analogous to the measures $\mu_{k, r}$ from \Cref{thm:countable_dense_antichain}, reweight $\mu_0$ by a family of densities $f_{k, r}$ on $X$ (\Cref{lem:annular-reweighted_gaussian}) to obtain measures $\nu_{k, r}$, chosen so that
\begin{equation*}
\nu_{k, r}(\cball{0}{a^{-n}}) = \mu_{k, r}(\cball{0}{a^{-n}})
\text{ for } a^{-n} \leq r.
\end{equation*}
The RCDFs of $\nu_{k, r}$ and $\mu_{k, r}$ only agree at $a^{-n}$, but this is enough to ensure that $\nu_{k, r}$ has the crucial incomparability property (as in \Cref{prop:rcdf_family}\ref{item:rcdf_family_3}) that, for $k$ and $k'$ coprime,
\begin{equation*}
\liminf_{s \to 0} \frac{\nu_{k, r}(\cball{0}{s})}{\nu_{k',r'}(\cball{0}{s})} <
1 <
\limsup_{s \to 0} \frac{\nu_{k, r}(\cball{0}{s})}{\nu_{k',r'}(\cball{0}{s})}.
\end{equation*}
Proceed by defining $\mu \in \prob{X}$ by
\begin{equation}
\label{eq:hilbert-space_antichain_measure}
\mu(\,\cdot\,) \coloneqq \frac{1}{Z} \sum_{k \in \mathbb{N}} \nu_{p_k, r_k}(\,\cdot\, - q_k),
\end{equation}
where $r_k \coloneqq a^{-k}$, $(q_k)_{k \in \mathbb{N}}$ is the enumeration of the dense set $D \subseteq X$ described in \Cref{lem:hilbert-space_dense_set} and $Z \coloneqq \sum_{k \in \mathbb{N}} \nu_{p_k, r_k}(X)$.
The construction of each $\nu_{p_k, r_k}$ (\Cref{lem:annular-reweighted_gaussian}) ensures that $\mu$ is absolutely continuous with respect to the Gaussian measure $\mu_0$.
As $\nu_{p_k, r_k}(X) = \nu_{p_k, r_k}(\cball{0}{a^{-k}}) \leq a^{\nicefrac{1}{2} - \nicefrac{k}{2}}$ (\Cref{lem:annular-reweighted_gaussian}\ref{item:annular-reweighted_gaussian_2}), the normalisation constant $Z$ is finite.
Arguing as in the proof of \Cref{thm:countable_dense_antichain}, we have $\crcdf{q_k}{r} \sim \frac1Z \nu_{p_k, r_k}(\cball{0}{r})$ as $r \to 0$
(\Cref{lem:hilbert-space_rcdf_asymptotic}), so for distinct $q_k$, $q_\ell \in D$,
\begin{equation*}
\limsup_{r \to 0} \frac{\crcdf{q_k}{r}}{\crcdf{q_\ell}{r}} =
\limsup_{r \to 0} \frac{\nu_{p_k, r_k}(\cball{0}{r})}{\nu_{p_\ell,
r_\ell}(\cball{0}{r})} > 1,
\end{equation*}
where the final inequality follows from \Cref{lem:annular-reweighted_gaussian}\ref{item:annular-reweighted_gaussian_4}.
Interchanging the roles of $q_k$ and $q_\ell$ proves that $q_k \mathrel{\Vert}_0 q_\ell$ (\Cref{lem:incomp_0}).
\end{proof}
\begin{remark}
The generalisation of \Cref{thm:countable_dense_antichain_hilbert} to a separable Banach space $X$ is not trivial.
The problem lies in the proof of \Cref{lem:annular-reweighted_gaussian}\ref{item:annular-reweighted_gaussian_5}, which relies on an asymptotic for $\mu_0(\cball{0}{r})$ as $r \to 0$.
Such asymptotics are not well understood for arbitrary Gaussian measures on separable Banach spaces \citep{KuelbsLi1993_Metric}, and we do not know how to construct a measure $\mu_0$ on an arbitrary Banach space with the desired small-ball behaviour.
In contrast, in the Hilbert space setting, we can explicitly construct a Gaussian with the desired small-ball asymptotics (\Cref{lem:hilbert-space_gaussian_asymptotic}) and then reweight it to obtain the target RCDF at $0$.
\end{remark}
\subsection{Essential totality}
\label{sec:essential_totality}
The requirement that a $\preceq_0$-greatest element be globally comparable is non-trivial, and it can fail rather dramatically, e.g.\ when the maximal elements form a dense antichain as in \Cref{thm:countable_dense_antichain}.
Such examples could be criticised as somewhat artificial, but we feel that they highlight the importance of checking for incomparability and developing technical conditions on the measure which prevent it.
One could rule out incomparability if $\preceq_0$ were total, but this is not generally true, and checking this condition is often difficult in practice.
We propose a somewhat weaker condition, where one can tolerate incomparability away from the ``top'' of the preorder, as long as any candidate for a maximal element is also globally comparable.
Our condition of \textbf{essential totality} can be interpreted as an order-theoretic generalisation of the $M$-property of \citet{AyanbayevKlebanovLieSullivan2022_I};
recall \eqref{eq:property_M}.
A motivating example is that of a Gaussian measure $\mu$ on an infinite-dimensional space $X$:
the Cameron--Martin space $H(\mu)$ is an essentially total subspace where a maximal element must lie, and any element of the Cameron--Martin space is globally comparable using the OM functional and property $M(\mu, H(\mu))$.
\begin{definition}
\label{defn:essentially_total}
Let $X$ be a metric space and let $\mu \in \prob{X}$.
A non-empty subset $E \subseteq X$ is \textbf{$\mu$-essentially total} if:
\begin{enumerate}[label=(\alph*)]
\item
\label{item:essentially_total_1}
any two elements of $E$ are comparable (i.e.\ $E$ is a $\preceq_0$-chain);
\item
\label{item:essentially_total_2}
for any $x \in E$ and $x' \in X \setminus E$, $x' \preceq_{0} x$; and
\item
\label{item:essentially_total_3}
for any $x' \in X \setminus E$, there exists $x \in E$ such that $x' \prec_{0} x$.
\end{enumerate}
\end{definition}
Condition \ref{item:essentially_total_2} says that if $x^\star \in E$ is an upper bound on $E$, then it is $\preceq_0$-greatest;
\ref{item:essentially_total_3} says that no element in $X \setminus E$ can be greatest.
We emphasise, though, that there is no need for $E$ to be a large set in any measure-theoretic or topological sense.
\begin{proposition}[Examples of essentially total subsets]
\label{prop:essentially_total_examples}
\begin{enumerate}[label=(\alph*)]
\item
\label{item:essentially_total_examples_lebesgue}
Suppose that $X \subseteq \mathbb{R}^n$ is open and equipped with $n$-dimensional Lebesgue measure $\lambda$, and that $\mu \in \prob{X}$ has continuous Lebesgue density $\rho \colon X \to [0, \infty)$.
Then $E \coloneqq \set{x \in X}{\rho(x) > 0}$ is $\mu$-essentially total, and $I_\mu(x) \coloneqq -\log \rho(x)$ is an OM functional with domain $E$.
\item
\label{item:essentially_total_examples_om_and_m_prop}
Suppose that $\mu \in \prob{X}$ has an OM functional $I_\mu \colon E \to \mathbb{R}$ and property $M(\mu, E)$ holds.
Then $E$ is $\mu$-essentially total.
\item
\label{item:essentially_total_examples_reweighted}
Suppose more generally that $\mu_0 \in \prob{X}$ has an OM functional $I_{\mu_0} \colon E \to \mathbb{R}$ and property $M(\mu_0, E)$ holds, and that
\begin{equation*}
\frac{\mathrm{d} \mu}{\mathrm{d} \mu_0}(x) \propto \exp\bigl(-\Phi(x)\bigr)
\end{equation*}
for some locally uniformly continuous \textbf{potential} $\Phi \colon X \to \mathbb{R}$.
Then $E$ is $\mu$-essentially total, and $I_\mu(x) \coloneqq I_{\mu_0}(x) + \Phi(x)$ is an OM functional for $\mu$.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[label=(\alph*)]
\item
The Lebesgue differentiation theorem implies that for any $x \in X$,
\begin{equation*}
\lim_{r \to 0} \frac{\crcdf{x}{r}}{\lambda(\cball{x}{r})} = \rho(x).
\end{equation*}
For any $x, x' \in E$, one can pick $r$ sufficiently small that $\cball{x}{r}$ and $\cball{x'}{r}$ lie in $X$.
This implies that $\lambda(\cball{x}{r}) = \lambda(\cball{x'}{r})$, and so
\begin{equation}
\label{eq:lebesgue_differentiation}
\lim_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x'}{r}} = \lim_{r \to 0} \frac{\crcdf{x}{r}}{\lambda(\cball{x}{r})} \lim_{r \to 0} \frac{\lambda(\cball{x'}{r})}{\crcdf{x'}{r}} = \frac{\rho(x)}{\rho(x')}.
\end{equation}
Hence, $E$ is a chain and $I_\mu$ is an OM functional on $E$.
When $x' \in X \setminus E$,
\begin{equation*}
\lim_{r \to 0} \frac{\crcdf{x'}{r}}{\lambda(\cball{x'}{r})} = 0,
\end{equation*}
so an argument similar to that in \eqref{eq:lebesgue_differentiation} proves that $x' \prec_0 x$ for any $x \in E$.
\item
The existence of an OM functional $I_\mu$ proves that $E$ is a chain.
By \citet[Lemma~B.1]{AyanbayevKlebanovLieSullivan2022_I}, for $x' \in X \setminus E$ and $x \in E$, we must have $x' \prec_0 x$, because
\begin{equation*}
\lim_{r \to 0} \frac{\crcdf{x'}{r}}{\crcdf{x}{r}} = 0.
\end{equation*}
\item By \citet[Lemma~B.8]{AyanbayevKlebanovLieSullivan2022_I}, $I_\mu$ is an OM functional for $\mu$ and property $M(\mu, E)$ holds.
The result follows by \ref{item:essentially_total_examples_om_and_m_prop}.
\end{enumerate}
\vspace{-\baselineskip}
\end{proof}
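To illustrate \eqref{eq:lebesgue_differentiation} concretely (a toy sketch with helper names of our own), take $X = (0, 1)$ with density $\rho(t) = 2t$, for which ball masses have a closed form: $\mu(\cball{x}{r}) = (x + r)^2 - (x - r)^2 = 4 x r$ once $\cball{x}{r} \subseteq (0, 1)$, so the ratios in \eqref{eq:lebesgue_differentiation} are exact even before taking the limit $r \to 0$.

```python
def ball_mass(x, r):
    """mu(B(x, r)) for the density rho(t) = 2t on (0, 1): integral of 2t over [x-r, x+r] ∩ [0, 1]."""
    lo, hi = max(x - r, 0.0), min(x + r, 1.0)
    if hi <= lo:
        return 0.0
    return hi ** 2 - lo ** 2   # antiderivative of 2t is t^2

def lebesgue_volume(r):
    """Lebesgue measure of a one-dimensional ball of radius r."""
    return 2 * r

# ball_mass(x, r) / lebesgue_volume(r) recovers rho(x) = 2x, and
# ball_mass(x, r) / ball_mass(x', r) recovers rho(x) / rho(x') = x / x',
# as in the Lebesgue differentiation argument above.
```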
\begin{proposition}
\label{prop:essentially_total_properties}
Let $X$ be a metric space and let $\mu \in \prob{X}$.
Suppose that $\varnothing \neq E \subseteq X$ is $\mu$-essentially total.
\begin{enumerate}[label=(\alph*)]
\item
\label{item:essentially_total_properties_maximal_greatest_in_E}
Any $\preceq_0$-maximal element must lie in $E$ and is $\preceq_0$-greatest.
\item
\label{item:essentially_total_properties_variational}
If $\mu$ admits an OM functional $I_\mu \colon E \to \mathbb{R}$, then
\begin{equation*}
x^\star \text{ is $\preceq_0$-greatest} \iff x^\star \in E \text{ and } x^\star \text{ minimises } I_\mu.
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[label=(\alph*)]
\item
A maximal element $x^\star$ must lie in $E$, or else one could find $x \in E$ such that $x^\star \prec_0 x$ by essential totality, contradicting the maximality of $x^\star$.
Conditions \ref{item:essentially_total_1} and \ref{item:essentially_total_2} of essential totality together imply that $x^\star$ is globally comparable, so it must be greatest (\Cref{lem:GWM_greatest_maximal}).
\item
Using the OM functional for $E$, one finds that
\begin{align*}
x^\star \in E \text{ is an upper bound for } E &\iff
\lim_{r \to 0} \frac{\crcdf{x}{r}}{\crcdf{x^\star}{r}} \leq 1 \text{ for all } x \in E \\
&\iff
\frac{e^{-I_\mu(x)}}{e^{-I_\mu(x^\star)}} \leq 1 \text{ for all } x \in E \\
&\iff
x^\star \text{ minimises } I_\mu.
\end{align*}
If $x^\star$ is $\preceq_0$-greatest, then $x^\star \in E$ by \ref{item:essentially_total_properties_maximal_greatest_in_E}, and the previous implications prove that $x^\star$ minimises $I_\mu$.
Conversely, the definition of essential totality ensures that an upper bound for $E$ is $\preceq_0$-greatest, proving the reverse implication.
\end{enumerate}
\vspace{-\baselineskip}
\end{proof}
The variational characterisation of weak modes as minimisers of the OM functional generalises the result of \citet[Proposition~4.1]{AyanbayevKlebanovLieSullivan2022_I} to essentially total subsets.
Specialising to the Lebesgue case recovers the intuitive result that $x^\star$ is a weak mode if and only if it is a global maximiser of $\rho$.
The situation is more subtle if $X$ is not open:
the measure in \Cref{eg:upward_closures_wrt_preceq_0}\ref{item:upward_closures_wrt_preceq_0_1} restricted to $X = [0, 1]$ has a continuous Lebesgue density maximised at $x^\star = 1$, but $x^\star$ is not a weak mode.
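The mechanism behind this failure can be sketched by a short computation (ours, stated for a generic continuous density $\rho > 0$ on $[0,1]$ rather than the specific measure of that example): for the boundary point $x^\star = 1$ and any interior point $x \in (0,1)$,
\begin{equation*}
\lim_{r \to 0} \frac{\crcdf{1}{r}}{\crcdf{x}{r}} = \lim_{r \to 0} \frac{\int_{1-r}^{1} \rho \,\mathrm{d}\lambda}{\int_{x-r}^{x+r} \rho \,\mathrm{d}\lambda} = \frac{\rho(1)}{2\rho(x)},
\end{equation*}
since the ball around the boundary point intersects $X$ in only half its length. A boundary maximiser of the density therefore competes at a factor-of-two disadvantage: $x^\star = 1$ dominates $x$ only if $\rho(1) \geq 2\rho(x)$.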
A significant corollary is that $\preceq_0$-maximal elements are always globally comparable for reweightings of ``nice'' measures of the form in \Cref{prop:essentially_total_examples}\ref{item:essentially_total_examples_reweighted}.
In particular, taking $\mu_0$ to be Gaussian gives the well-studied setting of \citet{DashtiLawStuartVoss2013} and \citet{KlebanovWacker2022}, so one can identify maximal and greatest elements without issue here.
This is highly reassuring from the perspective of applications: pathological examples in the style of \Cref{thm:countable_dense_antichain} with non-greatest maximal elements do not occur in Bayesian posteriors for ``nice'' inverse problems.
\section{Closing remarks}
\label{sec:conclusion}
This article has proposed that modes of probability measures should be understood as greatest elements of preorders that are defined using the masses of metric balls.
At fixed radius $r > 0$, there is an obvious choice of total preorder, and the order-theoretic point of view opens up attractive proof techniques for the existence of maximal/greatest elements (radius-$r$ modes) (\Cref{thm:r_greatest}).
However, we have also seen that such radius-$r$ modes can fail to exist (\Cref{eg:no_radius_1_mode,eg:no_radius_r_mode}), which provides further justification for the use of asymptotically maximising families as proposed by \citet{KlebanovWacker2022}, and we are able to contribute to the convergence analysis of such families as $r \to 0$ (\Cref{thm:limits_of_r-modes,thm:nesting_yields_GWM}).
In the limit as $r \to 0$, there are several limiting preorders that one could consider.
The one on which we have focussed, whose greatest elements are weak modes, is a non-total preorder.
Indeed, we have shown that even absolutely continuous measures can admit topologically dense antichains (\Cref{thm:countable_dense_antichain,thm:countable_dense_antichain_hilbert}), indicating that a measure must satisfy stringent regularity conditions to be certain of having greatest elements, i.e.\ weak modes.
As remarked in the introduction, we hope that this article will stimulate further discussion in the community about the ``correct'' definition of a mode.
We argue that there is a tension between the order-theoretic desire for modes to be merely \emph{maximal} elements of some preorder and an application-driven desire for modes to be \emph{greatest} elements.
To some extent, this tension can be avoided if one works only with particularly nice measures that display no oscillatory properties or that satisfy criteria such as essential totality, thus keeping all pathologies away from the ``top'' of the preorder.
Further useful definitions of modes may yet be introduced, and one would hope that they, too, correspond to preorders.
However, as explored in \Cref{sec:alternative_small-radius_preorders}, it may well be that such definitions only induce non-transitive \emph{relations}.
In such cases, the loss of transitivity is not necessarily fatal, so long as it is kept away from the ``top'' of the relation, so that maximal/greatest elements may be defined.
On a high level, it would be interesting to know whether or not there can exist a function assigning to every (sufficiently well-behaved) measure $\mu \in \prob{X}$ a total preorder $\preceq^{\mu}$ whose maximal or greatest elements are useful modes for $\mu$.
This appears to be a major open question that will require much further investigation.
| {
"timestamp": "2022-09-26T02:09:46",
"yymm": "2209",
"arxiv_id": "2209.11517",
"language": "en",
"url": "https://arxiv.org/abs/2209.11517",
"abstract": "It is often desirable to summarise a probability measure on a space $X$ in terms of a mode, or MAP estimator, i.e.\\ a point of maximum probability. Such points can be rigorously defined using masses of metric balls in the small-radius limit. However, the theory is not entirely straightforward: the literature contains multiple notions of mode and various examples of pathological measures that have no mode in any sense. Since the masses of balls induce natural orderings on the points of $X$, this article aims to shed light on some of the problems in non-parametric MAP estimation by taking an order-theoretic perspective, which appears to be a new one in the inverse problems community. This point of view opens up attractive proof strategies based upon the Cantor and Kuratowski intersection theorems; it also reveals that many of the pathologies arise from the distinction between greatest and maximal elements of an order, and from the existence of incomparable elements of $X$, which we show can be dense in $X$, even for an absolutely continuous measure on $X = \\mathbb{R}$.",
"subjects": "Statistics Theory (math.ST); Probability (math.PR); Methodology (stat.ME)",
"title": "An order-theoretic perspective on modes and maximum a posteriori estimation in Bayesian inverse problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462194190619,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7099326193899933
} |
https://arxiv.org/abs/2009.06486 | Oblique Derivative Problems for Elliptic Equations on Conical Domains | We study the oblique derivative problem for uniformly elliptic equations on cone domains. Under the assumption of axi-symmetry of the solution, we find sufficient conditions on the angle of the oblique vector for Hölder regularity of the gradient to hold up to the vertex of the cone. The proof of regularity is based on the application of carefully constructed barrier methods or via perturbative arguments. In the case that such regularity does not hold, we give explicit counterexamples. We also give a counterexample to regularity in the absence of axi-symmetry. Unlike in the equivalent two dimensional problem, the gradient Hölder regularity does not hold for all axi-symmetric solutions, but rather the qualitative regularity properties depend on both the opening angle of the cone and the angle of the oblique vector in the boundary condition. | \section{Introduction}
The aim of this paper is to study the regularity up to the boundary for solutions of oblique derivative problems for uniformly elliptic equations on domains with conical singularities. As well as being of interest from the point of view of elliptic PDE theory, oblique derivative problems on cone domains arise naturally in a range of important physical problems, such as shock reflection problems in gas dynamics. The basic H\"older regularity of solutions has been known since the work of Miller \cite{Miller}, who initially derived suitable barrier functions on cone domains. In general, on domains with cone singularities, one cannot expect that the gradient of the solution will remain H\"older continuous up to the boundary of the domain, but under a symmetry assumption on the solution (axi-symmetry), the situation becomes more subtle. In fact, as we will show below, for these symmetric solutions, the gradient H\"older regularity (or lack of it) depends on a relationship between the opening angle of the cone at its vertex and the angle of the oblique vector in the boundary condition. Such a relationship is somewhat surprising given the theory for the equivalent two-dimensional problem (where the cone is replaced by a wedge). For such problems, reflectional symmetry of the solution is sufficient to guarantee the H\"older regularity of the gradient up to the vertex with no restriction on the angle of the oblique vector (beyond the necessity of obliqueness). This regularity theory for symmetric solutions of oblique problems in two dimensions played a crucial role in the recent resolution of the shock reflection problem for potential flow past a wedge by Chen and Feldman \cite{CF}, and so the results contained in this paper may be expected to play a similarly important role in the three dimensional theory.
To fix ideas, we work with the equation
\begin{equation}\label{eq:fullspace}
\begin{cases}
Lu:=A^{ij}\partial_{ij}u+A^{i}\partial_iu+A^0u=f &\text{ in }\Omega,\\
Mu:=\beta\cdot Du+\beta^0u=g & \text{ on }\partial\Omega,
\end{cases}
\end{equation}
where $\Omega\subset\mathbb{R}^n$ is a bounded, Lipschitz domain satisfying an exterior cone condition at every point of its boundary, and $A^{ij},A^i,A^0:\Omega\to\mathbb{R}$ are given coefficient functions. We assume the existence of constants $\lambda,\Lambda>0$ such that the principal coefficients $A^{ij}$ satisfy the uniform ellipticity assumption
\begin{equation}\begin{aligned}
\lambda|\xi|^2\leq A^{ij}(x)\xi_i\xi_j\leq \Lambda|\xi|^2 \text{ for all }\xi\in\mathbb{R}^n,\:x\in\Omega.
\end{aligned}\end{equation}
Further regularity assumptions on the coefficients will be stated below in Theorem \ref{thm:main}. The vector field $\beta$ is assumed to be piecewise smooth on $\partial\Omega$, inward pointing and oblique at all of its points of continuity. Throughout this paper, we use the following sense of the term obliqueness: a vector $\beta$ is said to be oblique at a point $x_0\in\partial\Omega$ if there is an orthonormal coordinate system $(x',x_n)$ based at $x_0$, a positive radius $\rho>0$, and a Lipschitz function $F$ such that
$$\Omega\cap B_\rho(x_0)=\{x_n>F(x')\,|\,|x|<\rho\}$$
and in which $\beta(x_0)$ is parallel to the $x_n$ axis. Note that this definition may be weakened in certain directions; see, for example, the book of Lieberman \cite{L}.
As the main issue that this paper will be concerned with is the regularity at cone points of the boundary for solutions satisfying a rotational symmetry, we will assume for simplicity of notation that $\Omega$ is the intersection of a ball of fixed radius $R>0$ with a fixed cone $\mathcal{C}$ with vertex at the origin and axis of symmetry the $x_n$ axis. Following the terminology of Miller \cite{Miller}, we define polar coordinates such that
$$r=|x|,\quad r\cos\th=x_n,$$
and then let the open cone be
$$\mathcal{C}=\{(r,\th)\in(0,\infty)\times[0,\th_0)\}$$
for some fixed $\th_0\in(0,\pi)$. Our domain is then
$$\Omega=\mathcal{C}\cap B_R(0)=\{(r,\th)\in(0,R)\times[0,\th_0)\}.$$
In general, we will write
$$\Omega[\rho]=\Omega\cap B_\rho(0)\quad \text{ for }0<\rho<R.$$
We write ${\Gamma_{\textup{cone}}}$ and ${\Gamma_{\textup{ball}}}$ for the portions of $\partial\Omega$ as follows:
\begin{equation*}
{\Gamma_{\textup{cone}}}=\{0<r<R,\,\th=\th_0\},\quad {\Gamma_{\textup{ball}}}=\{r=R,\,\th\in[0,\th_0)\}.
\end{equation*}
Note that both portions are relatively open, so that $\partial\Omega=\overline{\Gamma_{\textup{cone}}}\cup\overline{\Gamma_{\textup{ball}}}$.
The assumption of axi-symmetry that we will make requires certain compatibility conditions on the coefficients in order that the rotational solution can exist in the first place. For given examples, these conditions are typically easy to check in cylindrical coordinates. The cylindrical coordinates are the following:
$$y_1=x_n, \quad y_2=|x'|,\quad \phi\in \mathbb{S}^{n-2},$$
where $\phi$ is a standard coordinate system on $\mathbb{S}^{n-2}$. Note then that $(r,\th)$ coincide with the polar coordinates on the half space $\{(y_1,y_2)\,|\,y_2\geq 0\}$: $r=|(y_1,y_2)|$ and
$$y_1=r\cos\th,\quad y_2=r\sin\th.$$
Axi-symmetry of the solution means that $u=u(y)$ depends only on the axi-symmetric coordinates $(y_1,y_2)$, i.e.~is independent of $\phi$, and we may therefore work equivalently on the two-dimensional domain
$$\omega=\omega[R]=\{y\,|\,r(y)\in(0, R),\,\,\th(y)\in[0,\th_0)\}.$$
We define three (relatively open) boundary portions for $\omega$:
$${\gamma_{\textup{cone}}}=\{r\in(0,R),\th=\th_0\},\quad{\gamma_{\textup{symm}}}=\{r\in(0,R),\th=0\},\quad{\gamma_{\textup{ball}}}=\{r=R,\th\in(0,\th_0)\}.$$
In the $y$ coordinates, we obtain the equation
\begin{equation}
\begin{cases}
a^{ij}\partial_{ij}u+b^i\partial_iu+cu=f &\text{ in }\omega,\\
\beta\cdot Du+\beta^0u=g &\text{ on }{\gamma_{\textup{cone}}}\cup{\gamma_{\textup{ball}}},\\
u_{y_2}=0 &\text{ on }{\gamma_{\textup{symm}}},
\end{cases}
\end{equation}
where the boundary coefficients $\beta$, $\beta^0$ (assumed axi-symmetric) are defined in the obvious way, and
\begin{equation}\begin{aligned}
&a^{11}=A^{nn},\quad
a^{12}=a^{21}=\sum_{i=1}^{n-1}A^{in}\frac{x_i}{y_2},\quad
a^{22}=\sum_{i,j=1}^{n-1}A^{ij}\frac{x_ix_j}{y_2^2},\\
&b^1=A^n,\quad b^2=\frac{1}{y_2}\big(\sum_{i=1}^{n-1}A^{ii}-\sum_{i,j=1}^{n-1}A^{ij}\frac{x_ix_j}{y_2^2}\big)+\sum_{i=1}^{n-1}A^i\frac{x_i}{y_2},\quad c=A^0.
\end{aligned}\end{equation}
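As a consistency check on these formulas (a computation of ours, not carried out in the text), take $L = \Delta$, i.e.\ $A^{ij} = \delta^{ij}$ and $A^i = A^0 = 0$. The expressions above give $a^{11} = a^{22} = 1$, $a^{12} = 0$, $b^1 = 0$ and $b^2 = (n-2)/y_2$, so the reduced problem features the familiar axi-symmetric form of the Laplacian,
\begin{equation*}
\partial_{y_1y_1}u + \partial_{y_2y_2}u + \frac{n-2}{y_2}\,\partial_{y_2}u = f \quad \text{ in }\omega.
\end{equation*}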
Uniform ellipticity of the coefficients $a^{ij}$ is inherited from that of $A^{ij}$. Indeed, the assumption that \eqref{eq:fullspace} admits a rotationally symmetric solution allows us to take the coefficients $a^{ij}$ to be independent of $\phi$ and the ellipticity is straightforward to check. A significant role in our analysis will be played by the singular coefficient $b^2$, arising from the use of symmetry to reduce the problem to a two-dimensional one. In general, coefficients of the form $b^i$ satisfying only a bound $|b^i|\leq \frac{C}{y_2}$ are not suitable for the application of the methods in this paper. For the treatment of equations with coefficients of this degree of singularity for the Dirichlet problem, see the work of Fichera \cite{Fichera}, Lieberman \cite{L08}, Michael \cite{Michael} and the references therein. That this coefficient arises from a symmetry reduction of dimension is crucial. We therefore split the first order (singular) coefficient $b^2$ into two pieces:
\begin{equation}
b^{2,1}=\sum_{i=1}^{n-1}A^{ii}-\sum_{i,j=1}^{n-1}A^{ij}\frac{x_ix_j}{y_2^2},\quad b^{2,2}=b^2-\frac{b^{2,1}}{y_2}.
\end{equation}
We see that only the principal part, $b^{2,1}$, depends on the principal coefficients of the equation and that $b^{2,1}$ and the remainder, $b^{2,2}$, satisfy the estimates
\begin{equation}
0<(n-2)\frac{\lambda}{y_2}\leq \frac{b^{2,1}}{y_2}\leq (n-2)\frac{\Lambda}{y_2},\quad |b^{2,2}|\leq \Big(\sum_{i=1}^{n-1}(A^i)^2\Big)^{\frac{1}{2}}.
\end{equation}
We also make the technical assumption, satisfied by many physically motivated problems, that at the cone vertex, the constant coefficient operator $\overline{L}_0$ defined on $\Omega$ by
\begin{equation}\label{eq:L0bar}
\overline{L}_0=A^{ij}(0)\partial_{ij}\quad \text{ is invariant under rotations around the axis of symmetry}.
\end{equation}
The oblique vector $\beta$ is always taken to be inward pointing and (without loss of generality) such that
$$\lim_{\substack{y\to0\\y\in{\gamma_{\textup{cone}}}}}\beta=(\cos(s),\sin(s)) \text{ for some }s\in(-\pi+\th_0,\th_0).$$
Note that this limit only makes sense in the $y$ coordinates as $\beta$ is generally discontinuous at the origin when considered on ${\Gamma_{\textup{cone}}}$ (for example, the unit normal to ${\Gamma_{\textup{cone}}}$).
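For orientation (an example of ours, not an additional assumption): for the Neumann condition, where $\beta$ is the inward unit normal on ${\gamma_{\textup{cone}}}=\{\th=\th_0\}$, we have in the $(y_1,y_2)$ coordinates
\begin{equation*}
\beta = (\sin\th_0, -\cos\th_0) = \big(\cos(\th_0 - \tfrac{\pi}{2}), \sin(\th_0 - \tfrac{\pi}{2})\big),
\end{equation*}
so that $s = \th_0 - \frac{\pi}{2}$, which indeed lies in the admissible interval $(-\pi+\th_0,\th_0)$.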
We are now in a position to state a rough version of our main theorem.
\begin{thm}\label{thm:rough}
Suppose that the coefficients $A^{ij}$, $A^i$, $A^0$ of \eqref{eq:fullspace} satisfy \eqref{eq:L0bar} and are H\"older continuous in $\Omega$, that $\beta$ and $\beta^0$ are H\"older continuous on each of the smooth portions of $\partial\Omega$ with $\beta$ uniformly oblique and inward pointing, and that the problem admits an axi-symmetric solution $u$ in the sense described above. Finally, suppose $f$ and $g$ are H\"older continuous. Then there exists $s_1\in(-\frac{\pi}{2},0)$ such that if
$$s\in\big((-\pi,-\frac{\pi}{2})\cup(s_1,\frac{\pi}{2})\big)\cap(-\pi+\th_0,\th_0),$$
then the gradient of $u$ satisfies an \textit{a priori} H\"older estimate up to the vertex of the cone in terms of the H\"older norms of the data and coefficients.
\end{thm}
The requirement $s\in(-\pi+\th_0,\th_0)$ is equivalent to the choice of $\beta$ as inward-pointing. However, the restriction on $s$ within this interval is necessary, due to the following theorem.
\begin{thm}\label{thm:counterexample}
Consider the oblique derivative problem for the Laplace equation on a cone:
\begin{equation}\label{eq:laplace}
\begin{cases}
\Delta u=0 &\text{ in }\Omega,\\
\beta\cdot Du=0 &\text{ on }{\Gamma_{\textup{cone}}},
\end{cases}
\end{equation}
where $\beta=\sin(s)\partial_{r'}+\cos(s)\partial_{x_n}$, $r'=|x'|$, corresponds to a constant vector $\beta=(\cos(s),\sin(s))$ on ${\gamma_{\textup{cone}}}$, $s\in(-\pi+\th_0,\th_0)$.
\begin{itemize}
\item[(i)] Suppose that $\th_0\in(\frac{\pi}{2},\pi)$. There exists $s_0\in(-\pi+\th_0,0)$ depending on $\th_0$ such that if either $s\in(-\pi+\th_0,s_0)$ or $s\in(\frac{\pi}{2},\th_0)$, then there exists a H\"older continuous axi-symmetric solution of \eqref{eq:laplace} which is not $C^1$ up to the origin.
\item[(ii)] Suppose that $\th_0\in(0,\frac{\pi}{2})$. There exists $s_0\in(-\frac{\pi}{2},0)$ depending on $\th_0$ such that if $s\in(-\frac{\pi}{2},s_0)$, then there exists a H\"older continuous axi-symmetric solution of \eqref{eq:laplace} which is not $C^1$ up to the origin.
\end{itemize}
\end{thm}
As we will see in \S\ref{sec:counter2}, these solutions are smooth away from the vertex; the loss of regularity is due to the angle of the oblique vector at the vertex and the opening angle of the cone.
To provide some context for these results, we compare the situation to that for the two-dimensional problem. In two dimensions, the equation is posed on the exterior of a wedge,
$$\tilde\Omega=\{r\in(0,R),\,\th\in(-\th_0,\th_0)\},$$
with $(r,\th)$ being polar coordinates on $\mathbb{R}^2$. The coefficients and data are taken to be H\"older continuous as in Theorem \ref{thm:rough} above. The corresponding gradient regularity result for solutions with reflectional symmetry ($u(x_1,x_2)=u(x_1,-x_2)$) is then that for all $\th_0\in(0,\pi)$, there exists $\alpha\in(0,1)$ depending on $\th_0$ such that $u\in C^{1,\alpha}$ up to the vertex; see \cite[Chapter 4]{CF} or \cite[Section 4.5]{L}. The key point here is that gradient H\"older regularity holds for all uniformly oblique vectors (again, only in the case of symmetric solutions). This is in stark contrast to the situation under consideration at present, where the regularity depends on a relationship between the opening angle of the cone and the angle of the oblique vector.
Oblique derivative problems for elliptic equations arise naturally in a wide variety of physical situations such as the theory of reflected shock waves in transonic flow and the capillary problem. In the theory of transonic shocks, in certain circumstances, one can pose the Rankine-Hugoniot conditions across the shock as an oblique derivative condition for a potential function, for example in two dimensions by \v{C}ani\'c, Keyfitz and Lieberman, \cite{CKL}. Such a formulation of the Rankine-Hugoniot conditions is also used for the shock reflection problem for potential flow, solved recently by Chen and Feldman \cite{CF} in two dimensions. The capillary problem has also been studied widely, see for example the monograph of Finn, \cite{Finn}.
Extensions of these results to three-dimensional domains require a more detailed understanding of the regularity of oblique derivative problems on domains such as cones. In particular, the results in this paper are suitable for application to the three-dimensional shock reflection problem from a cone. For this and other physical problems, the symmetry conditions that we impose here to study the oblique derivative problem are very natural. As mentioned above, if we do not have such symmetry assumptions, we cannot, in general, expect the H\"older regularity of the gradient. See Appendix \ref{sec:counter1} for a discussion of this and the construction of a counterexample.
From the point of view of the analysis of oblique derivative problems for elliptic equations, we mention in particular the early work of Fiorenza \cite{Fiorenza}. The theory of such problems was taken up in a series of papers by Lieberman, of which the most significant for our purposes here are \cite{L87,L88} which provide gradient H\"older regularity under the stronger condition that either the smooth portions of the boundary $\partial\Omega$ meet along co-dimension 2 hypersurfaces or that the vector field $\beta$ is continuous (see also Lieberman's monograph \cite{L} which contains many details of the theory of oblique problems in a variety of settings).
Concerning the H\"older regularity of the solution $u$, for domains with conical singularities, Miller \cite{Miller} constructed a barrier function for such regularity theory for the Dirichlet problem. We mention also the early result of Nadirashvili \cite{N} on smooth domains for the oblique derivative problem and the more recent work of Nadirashvili and Kenig \cite{KN} for oblique derivative problems on smooth domains with source terms $f\in L^p$. In general, we refer to \cite{L01} for the Harnack inequality and pointwise estimates (including H\"older estimates) of solutions of such problems on Lipschitz (or less regular) domains. For H\"older estimates for viscosity solutions of fully non-linear Neumann problems, we refer to the paper of Barles and Da Lio, \cite{BDL}. The theory of elliptic equations with strongly singular lower order terms (such as the $y_2^{-1}$ term that we find here in the axi-symmetric coordinates) with Dirichlet data has been studied by Fichera \cite{Fichera} from the point of view of degenerate equations and also by Lieberman \cite{L08}, generalising the earlier work of Michael \cite{Michael}.
A precise statement of Theorem \ref{thm:rough} will be given below in \S\ref{sec:mainproofs} in two parts: Theorems \ref{thm:main} and \ref{thm:perturb}. The first of these two results covers the case $s\in(0,\frac{\pi}{2})\cup(-\pi,-\frac{\pi}{2})$, while the second is for $s\in(s_1,0]$. These theorems give precise \textit{a priori} H\"older estimates for the gradient of the solution $u$. We outline the strategy of proof for Theorem \ref{thm:main} as follows. As the intersection of the boundaries of the ball and the cone is a smooth set of co-dimension 2 in $\mathbb{R}^n$, the boundary regularity along this portion of the boundary fits into the framework of \cite{L88} (see also \cite{L} for an alternative exposition). We will therefore focus attention on the \textit{a priori} estimates locally around the vertex of the cone. We first construct the solutions to a pair of auxiliary problems, one to handle the source term $f$ and errors from the method of frozen coefficients, and the other to reduce to a problem with homogeneous boundary condition. Using Schauder type estimates for these auxiliary problems, we will then apply the barrier method, relying on carefully constructed barrier functions and the comparison principle for suitable problems solved by the derivatives of $u$, to show a growth condition on the gradient of the solution $u$ near the cone vertex. Finally, a standard scaling argument will convert this growth into the desired H\"older regularity. Theorem \ref{thm:perturb} is proven via a perturbative argument around the case of a continuous boundary operator (the case $s=0$).
In order to apply the barrier method to the derivatives of our solution $u$, we must find good derivatives or derivative combinations to estimate. The availability and choice of a good derivative in fact depends on the angle of $\beta$. The reason for this is that when one derives an elliptic problem for the chosen derivative, the problem obtained (and, crucially, its associated boundary conditions) must be such that the comparison principle applies in order to make use of the barrier method. In particular, any zero order terms in the PDE or the oblique operator must come equipped with a good sign condition.
The outline of the paper is as follows. First, in \S\ref{sec:holder}, we give definitions and basic results for the (weighted) H\"older spaces in which we will work. With these definitions, we will be able to give more precise statements of the main theorem in the case that we have the positive result (gradient H\"older regularity up to the boundary). This is stated in Theorems \ref{thm:main} and \ref{thm:perturb}, which are proved in \S\ref{sec:mainproofs}. The proof relies on carefully constructed barrier functions, and so in \S\ref{sec:barrier} we give the construction of these barriers and relevant estimates of their directional derivatives that are used in the proof of the main result. The version of the comparison principle that we use in the main proofs is then stated and proved in \S\ref{sec:comp}. In \S\ref{sec:counter2}, we give the construction of the counterexamples of Theorem \ref{thm:counterexample}. Finally, in Appendix \ref{sec:counter1}, we provide the construction of solutions to the Neumann problem for the Laplacian which are H\"older continuous but not $C^1$ if we drop the assumption of axi-symmetry.
\section{Weighted H\"older spaces}\label{sec:holder}
Although we will ultimately end up with a $C^{1,\alpha}$ estimate on the solutions $u$ of oblique derivative problems, the estimates and proofs are most conveniently stated in certain weighted H\"older spaces.
Due to the lack of regularity of the domain at the vertex of the cone, we incorporate the distance to the vertex in defining our H\"older spaces. For $k\in\mathbb{N}\cup\{0\}$, $\alpha\in(0,1]$, we define the $\sup$ norm $\|u\|_0$, standard H\"older semi-norm $[u]_\alpha$ and norm $\|u\|_{k,\alpha}$ of a function $u:\overline{\Omega}\to\mathbb{R}$ to be
\begin{equation}
\|u\|_{0,\Omega}=\sup_{{\Omega}} |u|,\quad [u]_{\alpha,\Omega}=\sup_{\substack{x_1,x_2\in\Omega\\ x_1\neq x_2}}\frac{|u(x_1)-u(x_2)|}{|x_1-x_2|^\alpha},\quad \|u\|_{k,\alpha,\Omega}=\sum_{j=0}^{k}\|D^ju\|_{0,\Omega}+[D^ku]_{\alpha,\Omega},
\end{equation}
where $D^ju$ is the tensor of all $j$-th derivatives of $u$.
For $x,x_1,x_2\in\Omega$, we define the distances $d_{x}=|x|$ and $d_{x_1,x_2}=\min\{d_{x_1},d_{x_2}\}$. For a function $u:\Omega\to\mathbb{R}$, $k\in\mathbb{N}\cup\{0\}$, $\alpha\in(0,1]$ and $\beta\in\mathbb{R}$, we define weighted norms
\begin{equation}\begin{aligned}
&\|u\|_{k,0,\Omega}^{(\beta)}=\sum_{j=0}^k \sup_{x\in{{\Omega}}}\big(d_x^{\max\{j+\beta,0\}}|D^j u(x)|\big),\\
&[u]_{k,\alpha,\Omega}^{(\beta)}=\sup_{\substack{x_1,x_2\in\Omega\\ x_1\neq x_2}}\Big(d_{x_1,x_2}^{\max\{k+\alpha+\beta,0\}}\frac{|D^ku(x_1)-D^ku(x_2)|}{|x_1-x_2|^\alpha}\Big),\\
&\|u\|_{k,\alpha,\Omega}^{(\beta)}=\|u\|_{k,0,\Omega}^{(\beta)}+[u]_{k,\alpha,\Omega}^{(\beta)}.
\end{aligned}\end{equation}
We denote by $C_{k,\alpha}^{(\beta)}(\Omega)$ the space of functions whose norm $\|u\|_{k,\alpha,\Omega}^{(\beta)}$ is finite. When no confusion can arise, we usually drop the subscript $\Omega$ from the definition of the norm, writing instead $\|u\|_{k,\alpha}^{(\beta)}$ etc.
Note in particular that if $u\in C_{2,0}^{(-1-\alpha)}(\Omega)$, then $u\in C^{1,\alpha}(\overline{\Omega})$. The norms $\|\cdot\|_{k,\alpha,\omega}^{(\beta)}$ and $\|\cdot\|_{k,\alpha,\gamma}^{(\beta)}$ for $\gamma\subset\partial\omega$ are defined similarly (with weights given by the distance to the vertex).
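To illustrate how the weight interacts with the singularity at the vertex (our example): the function $u(x) = |x|^{1+\alpha}$ satisfies $|D^2u(x)| \leq C|x|^{\alpha-1}$, while the $j=2$ term of $\|u\|_{2,0}^{(-1-\alpha)}$ carries the weight $d_x^{\max\{2-1-\alpha,0\}} = d_x^{1-\alpha}$, so that
\begin{equation*}
d_x^{1-\alpha}\,|D^2u(x)| \leq C \quad \text{ for all } x \in \Omega.
\end{equation*}
The weighted norm is thus finite even though $D^2u$ blows up at the vertex, which is precisely the behaviour these spaces are designed to accommodate.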
We will need the following lemmas concerning these weighted norms. Proofs may be found in \cite[Chapter 2]{L}.
\begin{lemma}\label{lemma:holder}
(i) Suppose $\alpha\in(0,1]$, $\beta\geq-\alpha$ and $\beta_1,\beta_2,\beta_1',\beta_2'\in\mathbb{R}$ such that $\beta=\beta_1+\beta_2=\beta_1'+\beta_2'$, $\beta_1,\beta_2'\geq-\alpha$, $\beta_2,\beta_1'\geq0$.
Then for any $u\in C_{0,\alpha}^{(\beta_1)}$, $v\in C_{0,\alpha}^{(\beta_2')}$, we have
\begin{equation}\label{ineq:holderprod2}
[uv]_{0,\alpha}^{(\beta)}\leq[u]_{0,\alpha}^{(\beta_1)}\|v\|_{0}^{(\beta_2)}+\|u\|_{0}^{(\beta_1')}[v]_{0,\alpha}^{(\beta_2')}.
\end{equation}
(ii) Suppose that $k_1,k_2\in\mathbb{N}\cup\{0\}$, $\alpha_1,\alpha_2\in(0,1]$, $\beta_1,\beta_2\in\mathbb{R}$ satisfy
$$k_j+\alpha_j+\beta_j\geq 0\ \text{ for }j=1,2,\quad \max_{j=1,2}\{k_j+\alpha_j+\beta_j\}>0.$$
Let $\th\in(0,1)$. Then, for $k\in\mathbb{N}\cup\{0\}$, $\alpha\in(0,1]$, $\beta\in\mathbb{R}$ defined by
$$k+\alpha=\th(k_1+\alpha_1)+(1-\th)(k_2+\alpha_2),\quad \beta=\th\beta_1+(1-\th)\beta_2,$$
there exists a constant $C>0$ such that, for any $u\in C_{k,\alpha}^{(\beta)}$,
\begin{equation}\label{ineq:holderinterp}
\|u\|_{k,\alpha}^{(\beta)}\leq C\big(\|u\|_{k_1,\alpha_1}^{(\beta_1)}\big)^\th\big(\|u\|_{k_2,\alpha_2}^{(\beta_2)}\big)^{1-\th}.
\end{equation}
\end{lemma}
Finally, in the proofs of Lemmas \ref{lemma:V} and \ref{lemma:W} below, we will also need the following norms, weighted by the distance to a portion of the boundary. Let $\Sigma\subset\partial\Omega$ be closed and define $d_x^\Sigma={\textrm{dist}}(x,\Sigma)$, $d_{x_1,x_2}^\Sigma=\min\{d_{x_1}^\Sigma,d_{x_2}^\Sigma\}$. We then define
\begin{equation}\begin{aligned}\label{def:Sigmaholder}
&\|u\|_{k,0,\Omega}^{(\beta),\Sigma}=\sum_{j=0}^k \sup_{x\in\Omega}\big((d_x^\Sigma)^{\max\{j+\beta,0\}}|D^j u(x)|\big),\\
&[u]_{k,\alpha,\Omega}^{(\beta),\Sigma}=\sup_{\substack{x_1,x_2\in\Omega\\ x_1\neq x_2}}\Big((d_{x_1,x_2}^\Sigma)^{\max\{k+\alpha+\beta,0\}}\frac{|D^ku(x_1)-D^ku(x_2)|}{|x_1-x_2|^\alpha}\Big),\\
&\|u\|_{k,\alpha,\Omega}^{(\beta),\Sigma}=\|u\|_{k,0,\Omega}^{(\beta),\Sigma}+[u]_{k,\alpha,\Omega}^{(\beta),\Sigma}.
\end{aligned}\end{equation}
\section{Main estimates and proof of main theorem}\label{sec:mainproofs}
We break the proof of Theorem \ref{thm:rough} into two parts. The first concerns the case in which the oblique vector points into the first or third quadrant. In this case, we may apply the barrier technique in order to conclude that the desired H\"older regularity of the gradient holds. This is the content of Theorem \ref{thm:main} below. The second part is to prove a perturbative result for oblique vectors close to $\partial_{x_n}$ ($\partial_{y_1}$ in axi-symmetric coordinates). This is contained in Theorem \ref{thm:perturb} below.
\begin{thm}\label{thm:main}
Suppose $u\in C_{2,\alpha,\Omega}^{(-1-\alpha)}$ is axi-symmetric and satisfies \eqref{eq:fullspace}. Let the coefficients of \eqref{eq:fullspace} satisfy
\begin{equation}\label{ass:1}
A^{ij}\in C^0\cap C_{0,\alpha,\Omega}^{(0)},\quad A^i,A^0\in C_{0,\alpha,\Omega}^{(1-\alpha)},
\end{equation}
and suppose moreover that $\beta$ is axi-symmetric, uniformly oblique and inward pointing on $\partial\Omega\setminus\{0\}$, $\beta^0$ is axi-symmetric and, when considered in the $y$ coordinates,
\begin{equation}\label{ass:2}
\beta,\beta^0\in C_{1,\alpha}^{(-\alpha)}({\gamma_{\textup{cone}}}\cup{\gamma_{\textup{ball}}}).\end{equation}
Let $\eta_1:[0,\infty)\to[0,\infty)$ be a continuous, increasing function such that $\eta_1(0)=0$ and
\begin{equation}\label{ass:3}
|A^{ij}(x)-A^{ij}(\bar x)|\leq \eta_1(|x-\bar x|) \quad \text{ for any }x\in\Omega,\,\,\bar x\in\partial\Omega.
\end{equation}
For compatibility at the intersection $\overline{\Gamma_{\textup{cone}}}\cap\overline{\Gamma_{\textup{ball}}}$, we assume either
\begin{equation}\label{ass:4}
\Big|\frac{\beta_{\textup{b}}}{|\beta_{\textup{b}}|}\pm\frac{\beta_{\textup{c}}}{|\beta_{\textup{c}}|}\Big|\geq\tilde\epsilon>0\quad\text{ or } \quad \beta_{\textup{b}}=\beta_{\textup{c}} \text{ on }\overline{\Gamma_{\textup{cone}}}\cap\overline{\Gamma_{\textup{ball}}},
\end{equation}
where $\beta_{\textup{b}}$ and $\beta_{\textup{c}}$ are the limits of $\beta$ on $\overline{\Gamma_{\textup{cone}}}\cap\overline{\Gamma_{\textup{ball}}}$ from either side.\\
Finally, we assume that the data $f,g$ are axi-symmetric and
\begin{equation}\label{ass:5}
f\in C_{0,\alpha,\Omega}^{(1-\alpha)},\quad g\in C_{1,\alpha}^{(-\alpha)}({\Gamma_{\textup{cone}}}\cup{\Gamma_{\textup{ball}}}).
\end{equation}
We write, in $y$ coordinates,
\begin{equation}\label{eq:beta0}
\lim_{\substack{y\to 0\\ y\in{\gamma_{\textup{cone}}}}}\beta(y)=(\cos(s),\sin(s))\quad\text{ such that }s\in(-\pi+\th_0,\th_0).
\end{equation} Suppose $\cos(s)\sin(s)>0$. Then there exists $\alpha_1=\alpha_1(\th_0,\tilde\epsilon,s)\in(0,1)$ such that if $\alpha\in(0,\alpha_1)$ then
$$\|u\|_{2,\alpha}^{(-1-\alpha)}\leq C\big(\|f\|_{0,\alpha}^{(1-\alpha)}+\|g\|_{1,\alpha}^{(-\alpha)}+\|u\|_0\big).$$
The constant $C$ depends on the norms $\|A^{ij}\|_{0,\alpha}^{(0)}$, $\|A^i\|_{0,\alpha}^{(1-\alpha)}$, $\|A^0\|_{0,\alpha}^{(1-\alpha)}$, $\|\beta\|_{1,\alpha,{\gamma_{\textup{cone}}}}^{(-\alpha)}$, $\|\beta^0\|_{1,\alpha,{\gamma_{\textup{cone}}}}^{(-\alpha)}$, as well as $\Lambda$, $\lambda$, $\eta_1$, $\th_0$, $\tilde\epsilon$, $R$, $s$ and $\alpha$.
\end{thm}
\begin{rmk}
The constant $\alpha_1$ is defined in Lemma \ref{lemma:Miller}. As discussed in \S\ref{sec:holder}, this gives the estimate on the $C^{1,\alpha}$ norm of $u$. We remind the reader that this class of vector fields $\beta$ includes oblique vector fields that are discontinuous at the vertex of the cone, such as the unit normal vector field: once we reduce to the axi-symmetric coordinates, this becomes continuous (indeed, constant) on ${\gamma_{\textup{cone}}}$. We note in passing that the assumptions of this theorem could be weakened in various directions. For instance, the choice of boundary conditions and data on ${\gamma_{\textup{ball}}}$ is unimportant for the regularity at the cone vertex, which is what we are interested in here; these boundary conditions could be replaced with other suitable conditions.
\end{rmk}
We begin with two auxiliary lemmas. In order to state these lemmas, we must first define the notation we use for our frozen coefficients on the domain $\omega$. Let
\begin{equation}\label{def:frozen}
L_0:=a^{ij}_0\partial_{ij}+\frac{b^{2,1}_0}{y_2}\partial_2,\quad \beta_0=(\beta_1,\beta_2)=\lim_{\substack{y\to 0\\ y\in{\gamma_{\textup{cone}}}}}\beta(y),
\end{equation}
where $a^{ij}_0=a^{ij}(0)$ and $b^{2,1}_0=b^{2,1}(0)$. Note that $L_0$ corresponds to the operator $\overline{L}_0=A^{ij}(0)\partial_{ij}$ on $\Omega$ as in \eqref{eq:L0bar}.
In the following lemma, we recall the notation $\omega[\rho]=\omega\cap B_\rho(0)$ and define ${\gamma_{\textup{symm}}}[\rho],{\gamma_{\textup{cone}}}[\rho]$ similarly for $\rho\in(0,R]$.
\begin{lemma}\label{lemma:V}
Suppose $g_0\in C_{1,\alpha}^{(-\alpha)}({\gamma_{\textup{cone}}}[2\rho])$ for some $0<\rho\leq\min\{1,R/2\}$. Then there exists $V\in C_{2,\alpha,\omega[\rho]}^{(-1-\alpha)}$ satisfying
\begin{equation}
\begin{cases}
a^{ij}_0\partial_{ij}V=0 &\text{ in }\omega[\rho],\\
\beta_0\cdot DV=g_0 &\text{ on }{\gamma_{\textup{cone}}}[\rho],\\
V_{y_2}=0\ &\text{ on }{\gamma_{\textup{symm}}}[\rho].
\end{cases}
\end{equation}
Moreover, for any $\delta\in(0,\alpha]$, $V$ satisfies the estimate
\begin{equation}
\|V\|_{2,\delta,\omega[\rho]}^{(-1-\alpha)}\leq C\|g_0\|_{1,\delta,{\gamma_{\textup{cone}}}[2\rho]}^{(-\alpha)}.
\end{equation}
\end{lemma}
\begin{lemma}\label{lemma:W}
Suppose $f_1\in C_{0,\alpha,\omega}^{(1-\alpha)}$. There exists $W\in C_{2,\alpha,\omega}^{(-1-\alpha)}$ such that
\begin{equation}
\begin{cases}
L_0W=f_1 & \text{ in }\omega,\\
W_{y_2}=0 & \text{ on }{\gamma_{\textup{symm}}}.
\end{cases}
\end{equation}
Moreover, $W$ satisfies $DW(0)=0$ and, for any $\delta\in(0,\alpha]$, the estimates
$$|DW(y)|\leq C\|f_1\|_{0,\delta}^{(1-\alpha)}|y|^\alpha,\quad |D^2W(y)|\leq C\|f_1\|_{0,\delta}^{(1-\alpha)}|y|^{\alpha-1}.$$
\end{lemma}
Delaying the proofs of these lemmas temporarily, we now present the proof of Theorem \ref{thm:main}. The basic strategy of the proof is the following: we first use Lemmas \ref{lemma:V} and \ref{lemma:W} to reduce the problem to a constant coefficient problem with no source term. We then derive further problems solved by specific derivatives or derivative combinations of the solution and apply the barrier method (using the barriers of \S\ref{sec:barrier}) to get estimates of the form
$$|Du(y)|\leq C_0|y|^\alpha,$$
where $C_0$ depends on the data. Finally, we apply a standard scaling argument and interpolation to deduce from this the full H\"older regularity.
The precise construction of the barriers is delayed until \S\ref{sec:barrier}, as the notation we will require occurs most naturally in the proof of Theorem \ref{thm:main} below.
\begin{proof}[Proof of Theorem \ref{thm:main}]
We begin by noting that, by the results of \cite{L88}, the desired estimates hold locally in $\overline{\omega}\setminus\{0\}$. Indeed, the conditions \eqref{ass:4} are precisely those required in \cite[Lemma 1.3]{L88}. It is therefore sufficient to show the estimate locally around the origin (the vertex of the cone). By a standard partition of unity argument, we may assume that $u$ is compactly supported near $0$ in $B_\rho(0)\cap\overline\omega$, $0<\rho\leq\min\{1,R/2\}$. Moreover, without loss of generality, we may assume $u(0)=0$ (else consider $u-u(0)$) and that $Du(0)=0$ also (else consider $u-\frac{g(0)}{\beta_1}y_1$). Thus we may also assume $g(0)=0$. Note that such adjustments to $u$ (subtracting an affine function) do not change the regularity of the data $f$ and $g$, due to the assumptions made on the coefficients $A^i,A^0$ and $\beta,\beta^0$.\\
\textbf{Step 1:} We begin by freezing coefficients. Defining
$$f_0=f-(a^{ij}-a^{ij}_0)\partial_{ij}u-b^1\partial_1u-(b^2-\frac{b^{2,1}_0}{y_2})\partial_2u-cu,$$
we note that on $\omega[\rho]$, for any $\delta\in(0,\alpha]$,
\begin{equation}\label{ineq:F_0}
\|f_0\|_{0,\delta}^{(1-\alpha)}\leq \|f\|_{0,\delta}^{(1-\alpha)}+\eta_1(\rho)\|u\|_{2,\delta}^{(-1-\alpha)}+C[A^{ij}]_{0,\delta}^{(0)}\|u\|_{2,0}^{(-1-\alpha)}+C(\|A^i\|_{0,\delta}^{(1-\alpha)}+\|A^0\|_{0,\delta}^{(1-\alpha)})\|u\|_{1,\delta}^{(-1)},
\end{equation}
where $\eta_1(\rho)$ is the continuous, increasing function such that $\eta_1(0)=0$ from the statement of the theorem and we have used \eqref{ineq:holderprod2}.
Defining also $$g_0:=g-(\beta-\beta_0)\cdot Du -\beta^0 u,$$ we have that
$$\beta_0\cdot Du=g_0\text{ on }{\gamma_{\textup{cone}}},$$
where, for $\delta\in(0,\alpha]$ to be chosen later,
\begin{equation}\label{ineq:G_0}
G_0:=\|g_0\|_{1,\delta,{\gamma_{\textup{cone}}}}^{(-\alpha)}\leq \|g\|_{1,\delta,{\gamma_{\textup{cone}}}}^{(-\alpha)}+\eta_2(\rho)\|u\|_{2,\delta}^{(-1-\alpha)}+C[\beta,\beta^0]_{0,\delta}^{(0)}\|u\|_{2,0}^{(-1-\alpha)}+C\|u\|_{1,\delta}^{(-1)},
\end{equation}
where $\eta_2(\rho)$ is continuous, increasing, $\eta_2(0)=0$ (such an $\eta_2$ exists by \eqref{eq:beta0}) and we have used Lemma \ref{lemma:holder}(i).
Taking now the function $V$ defined by Lemma \ref{lemma:V} with boundary data $g_0$, we obtain
\begin{equation*}
\begin{cases}
L_0(u-V)=f_0-\frac{b^{2,1}_0}{y_2}\partial_2V=:f_1 & \text{ in }\omega,\\
\beta_0\cdot D(u-V)=0 & \text{ on }{\gamma_{\textup{cone}}},\\
(u-V)_{y_2}=0 & \text{ on } {\gamma_{\textup{symm}}}.
\end{cases}
\end{equation*}
Using the estimate of Lemma \ref{lemma:V} and \eqref{ineq:F_0}--\eqref{ineq:G_0}, we have (for $\delta\in(0,\alpha)$ to be chosen later)
\begin{equation}\label{ineq:F_1}
F_1:=\|f_1\|_{0,\delta}^{(1-\alpha)}\leq C\big(\|f\|_{0,\delta}^{(1-\alpha)}+\|g\|_{1,\delta,{\gamma_{\textup{cone}}}}^{(-\alpha)}+(\eta_1(\rho)+\eta_2(\rho))\|u\|_{2,\delta}^{(-1-\alpha)}+\rho^{\alpha-\delta}\|u\|_{2,0}^{(-1-\alpha)}+\|u\|_{1,\delta}^{(-1)}\big),
\end{equation}
where we have used that $[A^{ij}]_{0,\delta}^{(0)}+[\beta,\beta^0]_{0,\delta}^{(0)}\leq C\rho^{\alpha-\delta}$ on $B_\rho(0)$. Note also that $g_0(0)=0$, so as $DV$ is continuous up to the origin, we must have $DV(0)=0$.
Next we apply Lemma \ref{lemma:W} with source term $f_1$ to obtain a function $W$ satisfying
\begin{equation*}
\begin{cases}
L_0W=f_1 & \text{ in }\omega,\\
W_{y_2}=0 & \text{ on }{\gamma_{\textup{symm}}},
\end{cases}
\end{equation*}
and the estimates
$$|DW(y)|\leq CF_1|y|^{\alpha},\quad |D^2W(y)|\leq CF_1|y|^{\alpha-1}.$$
\textbf{Step 2:} We now proceed to derive suitable problems for the derivatives of $u$ and apply the barrier method to obtain a growth rate estimate on $|Du|$ near the vertex.\\
We write $\nu=(\nu_1,\nu_2)$ for the inward unit normal on ${\gamma_{\textup{cone}}}$ and $\beta_0=(\beta_1,\beta_2)$, where $\beta_1,\beta_2$ have the same sign (by the assumption made on $s$).
Define
\begin{equation}
v_1=(u-V)_{y_1},\quad v_2=(u-V)_{y_2}.
\end{equation}
Define coordinates $(z_1,z_2)$ such that $\partial_{z_1}$ is parallel to $\beta_0=(\beta_1,\beta_2)$ and $\partial_{z_2}$ is parallel to $\tau$, the tangent to ${\gamma_{\textup{cone}}}$, so
\begin{equation}\label{def:zcoords}
\begin{pmatrix}
z_1\\z_2
\end{pmatrix}=\frac{1}{\nu\cdot\beta_0}\begin{pmatrix}
\nu_1 & \nu_2\\
\beta_2 & -\beta_1
\end{pmatrix}\begin{pmatrix}
y_1\\y_2
\end{pmatrix}.\end{equation}
The reverse coordinate change is given by
$$\begin{pmatrix}
y_1\\y_2
\end{pmatrix}=\begin{pmatrix}
\beta_1 & \nu_2\\
\beta_2 & -\nu_1
\end{pmatrix}\begin{pmatrix}
z_1\\z_2
\end{pmatrix},$$
so that indeed $\partial y/\partial z_1=\beta_0$ and $\partial y/\partial z_2=\tau=(\nu_2,-\nu_1)$.
From here on, we write $\nu\cdot\beta_0=\epsilon>0$ by obliqueness (recall $\beta$, $\nu$ are both inward pointing).
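For later use, we record the first order identities which follow from this change of coordinates by the chain rule (we use these repeatedly below):
\begin{equation*}
\partial_{y_1}=\frac{1}{\epsilon}\big(\nu_1\partial_{z_1}+\beta_2\partial_{z_2}\big),\qquad
\partial_{y_2}=\frac{1}{\epsilon}\big(\nu_2\partial_{z_1}-\beta_1\partial_{z_2}\big),
\end{equation*}
and, for any differentiable $\psi$, $\beta_0\cdot D\psi=\partial_{z_1}\psi$ and $\tau\cdot D\psi=\partial_{z_2}\psi$. In particular, the boundary condition $\beta_0\cdot D(u-V)=0$ on ${\gamma_{\textup{cone}}}$ reads $(u-V)_{z_1}=0$ in the $z$ coordinates.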
Now by changing coordinates in the operator $L_0$, we find coefficients $\tilde{a}^{ij}$ such that for any $\psi:\omega\to\mathbb{R}$,
\begin{equation}\label{def:tildecoords}
\tilde{a}^{ij}\psi_{z_iz_j}=a^{ij}_0\psi_{y_iy_j}\quad\text{ and }\quad\frac{\lambda}{\epsilon^2}\leq \tilde{a}^{11},\tilde{a}^{22}\leq \frac{\Lambda}{\epsilon^2}.
\end{equation}
We derive an oblique derivative condition for $v_1$ by computing on ${\gamma_{\textup{cone}}}$. Since $\beta_0\cdot D(u-V)=0$ on ${\gamma_{\textup{cone}}}$, i.e.\ $(u-V)_{z_1}=0$ there, and $\partial_{z_2}$ is tangential to ${\gamma_{\textup{cone}}}$, we have $(u-V)_{z_1z_2}=0$ on ${\gamma_{\textup{cone}}}$. We may therefore calculate on ${\gamma_{\textup{cone}}}$
\begin{equation*}\begin{aligned}
\partial_{z_1}v_1=&\,\frac{1}{\epsilon}\partial_{z_1}\big(\nu_1(u-V)_{z_1}+\beta_2(u-V)_{z_2}\big)=\frac{1}{\epsilon}\nu_1(u-V)_{z_1z_1}\\
=&\,\frac{1}{\epsilon}\frac{\nu_1}{\tilde{a}^{11}} f_1-\frac{1}{\epsilon}\nu_1\frac{\tilde{a}^{22}}{\tilde a^{11}}(u-V)_{z_2z_2}-\frac{1}{\epsilon}\frac{\nu_1}{\tilde a^{11}}\frac{b_0^{2,1}}{y_2}(u-V)_{y_2}\\
=&\,\frac{1}{\epsilon}\frac{\nu_1}{\tilde{a}^{11}} f_1-\frac{1}{\epsilon}\frac{\nu_1}{\beta_2}\frac{\tilde{a}^{22}}{\tilde a^{11}}\big(\nu_1(u-V)_{z_1z_2}+\beta_2(u-V)_{z_2z_2}\big)+\frac{1}{\epsilon}\frac{\nu_1}{\tilde a^{11}}\frac{\beta_1}{\beta_2}\frac{b_0^{2,1}}{y_2}(u-V)_{y_1}\\
=&\,\frac{1}{\epsilon}\frac{\nu_1}{\tilde{a}^{11}} f_1-\frac{\nu_1}{\beta_2}\frac{\tilde a^{22}}{\tilde a^{11}}\partial_{z_2}v_1+\frac{1}{\epsilon}\frac{\nu_1}{\tilde a^{11}}\frac{\beta_1}{\beta_2}\frac{b_0^{2,1}}{y_2}v_1,
\end{aligned}\end{equation*}
where we have also used that on ${\gamma_{\textup{cone}}}$ $$(u-V)_{y_2}=-\frac{\beta_1}{\beta_2}(u-V)_{y_1}\text{ as }\beta_0\cdot D(u-V)=0.$$
Thus we obtain the boundary condition
\begin{equation}
M_1v_1:=\beta_0\cdot Dv_1+\frac{\nu_1}{\beta_2}\frac{\tilde a^{22}}{\tilde a^{11}}\tau\cdot Dv_1-\frac{1}{\epsilon}\frac{\nu_1}{\tilde a^{11}}\frac{\beta_1}{\beta_2}\frac{b_0^{2,1}}{y_2}v_1=\frac{1}{\epsilon}\frac{\nu_1}{\tilde{a}^{11}} f_1.
\end{equation}
Noting that $\nu_1,\beta_1/\beta_2,\tilde{a}^{11},b_0^{2,1},\epsilon>0$, the zero order term in this boundary operator comes equipped with a negative sign, which is necessary for the application of the comparison principle.
As $\partial_{y_1}$ commutes with $L_0$ and the Neumann condition on ${\gamma_{\textup{symm}}}$, we arrive at the following problem for $v_1$,
\begin{equation}
\begin{cases}
L_0(v_1-W_{y_1})=0 &\text{ in }\omega,\\
(v_1-W_{y_1})_{y_2}=0 &\text{ on }{\gamma_{\textup{symm}}},\\
M_1(v_1-W_{y_1})= \frac{1}{\epsilon}\frac{\nu_1}{\tilde{a}^{11}} f_1-M_1W_{y_1} &\text{ on }{\gamma_{\textup{cone}}}.
\end{cases}
\end{equation}
Let $v_\alpha$ be the Miller barrier function as in \S\ref{sec:barrier} and choose $\alpha_1$ as in Lemma \ref{lemma:Miller} so that for $\alpha\in(0,\alpha_1)$, we have the boundary inequality
$$M_1v_\alpha\leq -c_1|y|^{\alpha-1}\text{ on }{\gamma_{\textup{cone}}}.$$
Therefore, using the estimate of Lemma \ref{lemma:W} for $W$, we may choose a constant $\widehat{C}=C_1\big(F_1+\|u\|_1\big)$, where $C_1>0$ is independent of $u$ and $F_1$, such that $\hat{v}=\widehat{C}v_\alpha$ satisfies
\begin{equation}
\begin{cases}
L_0(\hat{v}\pm(v_1-W_{y_1}))\leq 0 &\text{ in }\omega,\\
(\hat{v}\pm(v_1-W_{y_1}))_{y_2}=0 &\text{ on }{\gamma_{\textup{symm}}},\\
M_1(\hat{v}\pm(v_1-W_{y_1}))\leq 0 &\text{ on }{\gamma_{\textup{cone}}},\\
\hat{v}\pm(v_1-W_{y_1})\geq 0 &\text{ on }{\gamma_{\textup{ball}}},
\end{cases}
\end{equation}
where we have used that $$\big|\frac{1}{\epsilon}\frac{\nu_1}{\tilde{a}^{11}} f_1-M_1W_{y_1}\big|\leq C(|f_1|+|D^2W|+\frac{1}{y_2}|DW|)\leq CF_1|y|^{\alpha-1} \text{ on }{\gamma_{\textup{cone}}}.$$
As $D(u-V-W)(0)=0$, we also have the one-point Dirichlet condition $\big(\hat{v}\pm(v_1-W_{y_1})\big)(0)=0$.
Applying the first version of the comparison principle in Theorem \ref{thm:comparison}, we obtain that
$|v_1-W_{y_1}|\leq |\hat{v}|$, and hence, applying also the estimate for $V$,
\begin{equation}\label{ineq:v1est}
|u_{y_1}(y)|\leq C\big(F_1+G_0+\|u\|_1\big)|y|^{\alpha}.
\end{equation}
Next, we make a similar argument for a second derivative combination $w=v_1+\varepsilon v_2$, where $\varepsilon>0$ is sufficiently small (this $\varepsilon$ is a fixed parameter, not to be confused with $\epsilon=\nu\cdot\beta_0$). First, we derive a PDE for $w$ in the domain $\omega$: define $\tilde W=W_{y_1}+\varepsilon W_{y_2}$. Then
$$0=L_0(w-\tilde W)-\varepsilon\frac{b_0^{2,1}}{y_2^2}(u-V-W)_{y_2}=L_0(w-\tilde W)-\frac{b_0^{2,1}}{y_2^2}(w-\tilde W)+\frac{b_0^{2,1}}{y_2^2}(u-V-W)_{y_1},$$
hence
\begin{equation}
L_0(w-\tilde W)-\frac{b_0^{2,1}}{y_2^2}(w-\tilde W)=-\frac{b_0^{2,1}}{y_2^2}(u-V-W)_{y_1}.
\end{equation}
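The identity $0=L_0(w-\tilde W)-\varepsilon\frac{b_0^{2,1}}{y_2^2}(u-V-W)_{y_2}$ used above follows from commuting $\partial_{y_2}$ with the singular drift term of $L_0$: for smooth $\phi$,
\begin{equation*}
L_0\phi_{y_2}=\partial_{y_2}\big(L_0\phi\big)+\frac{b^{2,1}_0}{y_2^2}\phi_{y_2},\quad\text{ since }\quad
\partial_{y_2}\Big(\frac{b^{2,1}_0}{y_2}\partial_2\phi\Big)=\frac{b^{2,1}_0}{y_2}\partial_2\phi_{y_2}-\frac{b^{2,1}_0}{y_2^2}\partial_2\phi.
\end{equation*}
Applying this with $\phi=u-V-W$, for which $L_0\phi=0$ in $\omega$, and recalling that $\partial_{y_1}$ commutes with $L_0$, gives $L_0(w-\tilde W)=\varepsilon L_0\phi_{y_2}=\varepsilon\frac{b^{2,1}_0}{y_2^2}\phi_{y_2}$.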
Note that the zero order term on the left comes equipped with the correct sign for application of the comparison principle as $b^{2,1}_0>0$. Moreover, by Lemmas \ref{lemma:V} and \ref{lemma:W} and \eqref{ineq:v1est}, we already have an estimate for the right hand side:
\begin{equation}\label{ineq:pdeest}
\big|\frac{b_0^{2,1}}{y_2^2}(u-V-W)_{y_1}\big|\leq C\big(F_1+G_0+\|u\|_1\big)\frac{|y|^\alpha}{y_2^2}.\end{equation}
Next, we find an oblique derivative condition on ${\gamma_{\textup{cone}}}$ by using the PDE for $u-V$ in $z$ coordinates together with the identity $(u-V)_{z_1z_2}=0$ on ${\gamma_{\textup{cone}}}$, which follows from the boundary condition as before:
\begin{equation*}\begin{aligned}
\partial_{z_1}w=&\,\frac{1}{\epsilon}\partial_{z_1}\big((\nu_1+\varepsilon \nu_2)(u-V)_{z_1}+(\beta_2-\varepsilon \beta_1)(u-V)_{z_2}\big)=\frac{1}{\epsilon}(\nu_1+\varepsilon \nu_2)(u-V)_{z_1z_1}\\
=&\,\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\tilde{a}^{11}} f_1-\frac{1}{\epsilon}(\nu_1+\varepsilon \nu_2)\frac{\tilde{a}^{22}}{\tilde a^{11}}(u-V)_{z_2z_2}-\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\tilde a^{11}}\frac{b_0^{2,1}}{y_2}(u-V)_{y_2}\\
=&\,\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\tilde{a}^{11}} f_1-\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\beta_2-\varepsilon \beta_1}\frac{\tilde{a}^{22}}{\tilde a^{11}}\big((\nu_1+\varepsilon \nu_2)(u-V)_{z_1z_2}+(\beta_2-\varepsilon \beta_1)(u-V)_{z_2z_2}\big)\\
&+\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\tilde a^{11}}\frac{\beta_1}{\beta_2-\varepsilon \beta_1}\frac{b_0^{2,1}}{y_2}w\\
=&\,\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\tilde{a}^{11}} f_1-\frac{\nu_1+\varepsilon \nu_2}{\beta_2-\varepsilon \beta_1}\frac{\tilde a^{22}}{\tilde a^{11}}\partial_{z_2}w+\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\tilde a^{11}}\frac{\beta_1}{\beta_2-\varepsilon \beta_1}\frac{b_0^{2,1}}{y_2}w,
\end{aligned}\end{equation*}
where we have also used that $w=-\frac{\beta_2-\varepsilon\beta_1}{\beta_1}(u-V)_{y_2}$ on ${\gamma_{\textup{cone}}}$ from the boundary condition.
Thus we arrive at the problem satisfied by $w$:
\begin{equation}
\begin{cases}
L_0(w-\tilde W)-\frac{b_0^{2,1}}{y_2^2}(w-\tilde W)=-\frac{b_0^{2,1}}{y_2^2}(u-V-W)_{y_1} &\text{ in }\omega,\\
w-\tilde W=u_{y_1}-V_{y_1}-W_{y_1} &\text{ on }{\gamma_{\textup{symm}}},\\
M_2(w-\tilde W)=\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\tilde{a}^{11}} f_1- M_2\tilde W &\text{ on }{\gamma_{\textup{cone}}},
\end{cases}
\end{equation}
where $M_2$ is the operator acting on functions $\psi$ by $$ M_2\psi=\beta_0\cdot D\psi+\frac{\nu_1+\varepsilon \nu_2}{\beta_2-\varepsilon \beta_1}\frac{\tilde a^{22}}{\tilde a^{11}}\tau\cdot D\psi-\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\tilde a^{11}}\frac{\beta_1}{\beta_2-\varepsilon \beta_1}\frac{b_0^{2,1}}{y_2}\psi.$$
Taking $\varepsilon>0$ sufficiently small so that $\nu_1+\varepsilon\nu_2>0$ (recall that, as $\nu$ is inward pointing, $\nu_1=\sin\th_0>0$) and $\textup{sgn}(\beta_2-\varepsilon \beta_1)=\textup{sgn}(\beta_2)$ (recall $\beta_2\neq 0$ by assumption), we have that the zero order term in $M_2$ has a negative coefficient. If $\varepsilon>0$ is sufficiently small, we also have the estimate of Lemma \ref{lemma:Miller} for the Miller barrier. Therefore we may take $\bar{v}=\overline{C}v_\alpha$ with $\overline{C}=C_2\big(F_1+G_0+\|u\|_1\big)$, where $C_2>0$ is a constant to be chosen. Using the positivity of $v_\alpha$, we have
$$L_0\bar v-\frac{b_0^{2,1}}{y_2^2}\bar v\leq -\overline{C}c_*\frac{b_0^{2,1}}{y_2^2}|y|^{\alpha},
$$
where $c_*>0$ is as in \eqref{ineq:miller} and $b_0^{2,1}>0$.
Thus from \eqref{ineq:pdeest}, for $C_2$ sufficiently large, we have
$$L_0\big(\bar v\pm(w-\tilde{W})\big)-\frac{b_0^{2,1}}{y_2^2}\big(\bar v\pm(w-\tilde{W})\big)\leq 0.$$
Similarly, on ${\gamma_{\textup{symm}}}$, we use the Neumann conditions $u_{y_2}=V_{y_2}=W_{y_2}=0$ to make the estimate
$$\bar v\pm(w-\tilde{W})\geq \bar v-\big|u_{y_1}-V_{y_1}-W_{y_1}\big|\geq 0,$$
where we have used the already obtained estimate $|u_{y_1}(y)|\leq C\big(F_1+G_0+\|u\|_1\big)|y|^{\alpha}$ and the estimates on $DV$, $DW$.
We therefore apply the second form of the comparison principle in Theorem \ref{thm:comparison} to $\bar{v}\pm(w-\tilde{W})$. This gives
$$|u_{y_1}+\varepsilon u_{y_2}|\leq C\big(F_1+G_0+\|u\|_1\big)|y|^{\alpha},$$
and hence also, writing $u_{y_2}=\varepsilon^{-1}\big((u_{y_1}+\varepsilon u_{y_2})-u_{y_1}\big)$ and using \eqref{ineq:v1est} once more,
$$|Du|\leq C\big(F_1+G_0+\|u\|_1\big)|y|^{\alpha}.$$
\textbf{Step 3:} We now conclude via a standard scaling argument. For the convenience of the reader, we include the argument here. As will become clear, such an argument could be performed either on $\omega$ or on the original domain $\Omega$. For ease of notation, we continue to work on $\omega$.
As we have assumed without loss of generality that $u(0)=0$, we have obtained the inequality
\begin{equation}\label{ineq:scaling}
|u(y)|\leq C_0|y|^{\alpha+1},
\end{equation}
where $C_0=C\big(F_1+G_0+\|u\|_1\big)$.
Let $y\in\overline{\omega}[R/2]$ (without loss of generality, we suppose here $R\geq 1$), $y\neq 0$, and recall the notation $d_y=|y|$. Then at least one of the following is true:
\begin{itemize}
\item[(i)] $B_{\frac{d_y}{10 }}(y)\subset\omega$,
\item[(ii)] $y\in B_{\frac{d_{\hat{y}}}{2}}(\hat{y})$ for some $\hat{y}\in{\gamma_{\textup{cone}}}$,
\item[(iii)] $y\in B_{\frac{d_{\hat{y}}}{2}}(\hat{y})$ for some $\hat{y}\in{\gamma_{\textup{symm}}}$.
\end{itemize}
We focus on case (ii) here, as cases (i) and (iii) may be treated similarly (for case (iii) we return to $\Omega$ and note that ${\gamma_{\textup{symm}}}$ lies in the interior of $\Omega$). We therefore assume we have a point $\hat{y}\in{\gamma_{\textup{cone}}}$ such that $\hat{d}=\frac{1}{2}d_{\hat{y}}\in(0,1)$ and derive an estimate on $B_{\frac{\hat{d}}{2}}(\hat{y})$.
Define new coordinates $z=\frac{y-\hat{y}}{\hat{d}}$. Rescaling $\omega\cap B_{r\hat{d}}(\hat{y})$ for $r\in(0,1]$ gives us the new domain
$$\omega_r^{\hat{y}}:=\begin{cases}
B_r(0)\cap\{z_2<\tan\th_0 z_1\} &\text{ if }\th_0\in(0,\frac{\pi}{2}),\\
B_r(0)\cap\{z_2>\tan\th_0 z_1\} &\text{ if }\th_0\in(\frac{\pi}{2},\pi).
\end{cases}$$
On $\omega_1^{\hat{y}}$, we define a new unknown $$v(z)=\frac{u(\hat{y}+\hat{d}z)}{\hat{d}^{1+\alpha}}.$$
Then from the inequality \eqref{ineq:scaling}, we have the estimate
$$\|v\|_{0,\omega_1^{\hat{y}}}= \|u\|_{0,\omega\cap B_{\hat{d}}(\hat{y})}\hat{d}^{-1-\alpha}\leq CC_0.$$
Defining also
$$\hat{f}(z)=\hat{d}^{1-\alpha}f(\hat{y}+\hat{d}z),\quad \hat{g}(z)=\hat{d}^{-\alpha}g(\hat{y}+\hat{d}z),$$
one easily sees that $v$ satisfies the equation
\begin{equation}
\begin{cases}
a^{ij}\partial_{ij}v+\hat{d}b^i\partial_i v+\hat{d}^2 cv=\hat{f} & \text{ in } \omega_1^{\hat{y}},\\
\beta\cdot Dv+\hat{d}\beta^0 v=\hat{g} &\text{ on }\omega_{1,\textrm{cone}}^{\hat{y}}=\omega_1^{\hat{y}}\cap\{z_2=\tan\th_0 z_1\},
\end{cases}
\end{equation}
where the coefficients $a^{ij},b^i,c,\beta,\beta^0$ are all evaluated at $y(z)=\hat{y}+\hat{d}z$. It is then straightforward to obtain the estimates
$$\|\hat{f}\|_{0,\alpha,\omega_1^{\hat{y}}}\leq C\|f\|_{0,\alpha,\omega}^{(1-\alpha)},\quad \|\hat{g}\|_{1,\alpha,\omega_{1,\textrm{cone}}^{\hat{y}}}\leq C\|g\|_{1,\alpha,{\gamma_{\textup{cone}}}}^{(-\alpha)}.$$
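To verify the rescaled problem above, one may use the chain rule: since $y=\hat{y}+\hat{d}z$,
\begin{equation*}
D_zv(z)=\hat{d}^{-\alpha}(Du)(\hat{y}+\hat{d}z),\qquad D_z^2v(z)=\hat{d}^{1-\alpha}(D^2u)(\hat{y}+\hat{d}z),
\end{equation*}
so that
\begin{equation*}
a^{ij}\partial_{ij}v+\hat{d}b^i\partial_i v+\hat{d}^2 cv=\hat{d}^{1-\alpha}\big(a^{ij}\partial_{ij}u+b^i\partial_iu+cu\big)=\hat{d}^{1-\alpha}f=\hat{f},
\end{equation*}
while the boundary condition follows on multiplying $\beta\cdot Du+\beta^0u=g$ by $\hat{d}^{-\alpha}$.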
Thus the standard elliptic regularity theory for oblique problems on smooth domains, e.g.~in \cite[Theorem 6.30]{GT} (note that $\hat{d}b^i$ is a bounded, regular coefficient on $\omega_{1}^{\hat{y}}$), gives the estimate
$$\|v\|_{2,\alpha,\omega_{1/2}^{\hat{y}}}\leq C\big(\|v\|_{0,\omega_{1}^{\hat{y}}}+\|\hat{f}\|_{0,\alpha,\omega_1^{\hat{y}}}+\|\hat{g}\|_{1,\alpha,\omega_{1,\textrm{cone}}^{\hat{y}}}\big)\leq C\big(C_0+\|f\|_{0,\alpha,\omega}^{(1-\alpha)}+\|g\|_{1,\alpha,{\gamma_{\textup{cone}}}}^{(-\alpha)}\big).$$
It is simple to check the equivalence $$\|v\|_{2,\alpha,\omega_{1/2}^{\hat{y}}}\approx \|u\|_{2,\alpha,\omega\cap B_{\hat{d}/2}(\hat{y})}^{(-1-\alpha)},$$ and so we use this estimate and similar estimates for interior balls (including balls centred on ${\gamma_{\textup{symm}}}$ as discussed above) to cover the domain and apply \eqref{ineq:G_0}--\eqref{ineq:F_1} to
arrive (via an argument such as that of \cite[Theorem 4.8]{GT}) at the estimate
\begin{equation}\begin{aligned}
\|u\|_{2,\alpha}^{(-1-\alpha)}\leq&\, C\big(F_1+G_0+\|f\|_{0,\alpha}^{(1-\alpha)}+\|g\|_{1,\alpha}^{(-\alpha)}\big)\\
\leq&\, C\big(\|f\|_{0,\alpha}^{(1-\alpha)}+\|g\|_{1,\alpha}^{(-\alpha)}+\eta(\rho)\|u\|_{2,\alpha}^{(-1-\alpha)}+\|u\|_{1,\delta}^{(-1)}+\|u\|_0\big),
\end{aligned}\end{equation}
where $\eta(\rho)=\eta_1(\rho)+\eta_2(\rho)+\rho^{\alpha-\delta}$.
Choosing $\delta>0$ such that $\delta<\alpha$ and $\delta\leq(1+\alpha)^{-1}$, we use the interpolation estimate for weighted H\"older spaces, \eqref{ineq:holderinterp}, to observe that
$$\|u\|_{1,\delta}^{(-1)}\leq C\big(\|u\|_{2,\alpha}^{(-1-\alpha)}\big)^\th\big(\|u\|_0\big)^{1-\th}$$
for some $\th\in(0,1)$; indeed, matching the weights in \eqref{ineq:holderinterp} forces $-1=\th(-1-\alpha)$, i.e.\ $\th=(1+\alpha)^{-1}$, and the interpolated norm dominates $\|u\|_{1,\delta}^{(-1)}$ since $1+\delta\leq\th(2+\alpha)$ by the choice of $\delta$. Thus, by Young's inequality, for any $\varepsilon>0$, we have
$$\|u\|_{2,\alpha}^{(-1-\alpha)}\leq C\big(\|f\|_{0,\alpha}^{(1-\alpha)}+(\eta(\rho)+\varepsilon)\|u\|_{2,\alpha}^{(-1-\alpha)}+\|g\|_{1,\alpha}^{(-\alpha)}+\|u\|_0\big).$$
Taking now $\rho,\varepsilon>0$ sufficiently small so that $C(\eta(\rho)+\varepsilon) <\frac{1}{2}$, we conclude the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:V}]
We find $V$ by applying \cite[Theorem 1.4]{L88} to the following problem on all of $\omega$:
\begin{equation}\label{eq:V}
\begin{cases}
a^{ij}_0\partial_{ij}V=0 &\text{ in }\omega,\\
\bar\beta_0\cdot DV+\bar\beta^0 V=\bar g &\text{ on }{\gamma_{\textup{cone}}}\cup{\gamma_{\textup{ball}}},\\
V_{y_2}=0 &\text{ on }{\gamma_{\textup{symm}}},
\end{cases}
\end{equation}
where $\bar g$ and $\bar\beta_0$ are H\"older continuous extensions of $g_0$ and $\beta_0$ to all of ${\gamma_{\textup{cone}}}\cup{\gamma_{\textup{ball}}}$ such that $\|\bar g\|_{1,\delta}^{(-\alpha)}\leq C\| g_0\|_{1,\delta}^{(-\alpha)}$ and $\bar\beta_0$ remains uniformly oblique with a similar estimate, and where $\bar\beta^0$ is a smooth scalar function such that $\bar\beta^0\leq 0$, $\bar\beta^0\not\equiv 0$, and $\bar\beta^0=0$ on ${\gamma_{\textup{cone}}}[\rho]$. Then Theorem 1.4 of \cite{L88} gives the estimate
$$\|V\|_{2,\delta}^{(-1-\alpha),\partial\omega}\leq C\|\bar g\|_{1,\delta}^{(-\alpha)}\leq C\| g_0\|_{1,\delta}^{(-\alpha)},$$
where the H\"older spaces with weight up to the boundary were defined in \eqref{def:Sigmaholder}.
To remove the dependence on the full boundary $\partial\omega$ in the norm on the left, we use standard regularity theory away from the vertex and observe that the arguments of \cite{L88} apply equally when we only weight the H\"older spaces with distance to the vertex (as the source term in the PDE of \eqref{eq:V} is zero, and hence is not singular along $\partial\omega$). This gives us the desired estimate,
$$\|V\|_{2,\delta}^{(-1-\alpha)}\leq C\| g_0\|_{1,\delta}^{(-\alpha)}.$$
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:W}]
We define $W$ by solving a further auxiliary problem. We begin by observing that, by construction and assumption \eqref{eq:L0bar} on $A^{ij}(0)$, $L_0$ extends to the (axi-symmetric) operator $\overline{L}_0$ on $\Omega$ defined by
$$\overline{L}_0w=A^{ij}(0)\partial_{ij}w \text{ for $w:\Omega\to\mathbb{R}$}.$$
We now define a smooth vector field $\tilde\beta$ such that $\tilde\beta=e_n$ on $B_{R/2}(0)\cap\partial\Omega$ and $\tilde\beta$ is uniformly oblique on all of $\partial\Omega$ and axi-symmetric (here $e_n$ is the standard Cartesian unit vector in the $x_n$ direction). Choose a smooth function $\tilde\beta^0\leq 0$ on $B_{2R}(0)$ such that $\tilde\beta^0=0$ on $B_{R/2}$ and $\tilde\beta^0\not\equiv0$. We then define $W$ as the solution to the problem
\begin{equation}
\begin{cases}
\overline{L}_0 W=f_1 &\text{ in }\Omega,\\
\tilde\beta\cdot DW+\tilde\beta^0W=0 &\text{ on }\partial\Omega.
\end{cases}
\end{equation}
Such a solution exists by \cite[Corollary 3.3]{L87} as the oblique vector $\tilde\beta$ is continuous and the zero order term in the boundary condition is negative. Moreover, that same theorem gives the estimate
$$\|W\|_{2,\delta,\Omega}^{(-1-\alpha),\partial\Omega}\leq C\|f_1\|_{0,\delta,\Omega}^{(1-\alpha),\partial\Omega},$$
where we recall the notation for H\"older spaces weighted by distance to the boundary as in \eqref{def:Sigmaholder}.
As in the previous proof, we note that the methods of \cite{L87} (see especially Lemmas 2.1--2.2 and also the proof of \cite[Lemma 1.2]{L88}) improve this estimate to
\begin{equation}\label{eq:Wnorm}
\|W\|_{2,\delta,\Omega}^{(-1-\alpha)}\leq C\|f_1\|_{0,\delta,\Omega}^{(1-\alpha)}.
\end{equation}
It only remains to show that $W$, in the $y$ coordinates, satisfies also the Neumann condition $W_{y_2}=0$ on ${\gamma_{\textup{symm}}}$ and deduce the growth conditions on $DW$, $D^2W$. From the uniqueness part of \cite[Theorem 3.2]{L87}, as the operator $\overline{L}_0$ is invariant under axial rotations by \eqref{eq:L0bar} and the data and boundary condition are both axi-symmetric, $W$ is also axi-symmetric, so we may return to $\omega$ and deduce $W_{y_2}=0$ on ${\gamma_{\textup{symm}}}$. Finally, combining the two boundary conditions at the origin, we deduce that $DW(0)=0$, and therefore obtain the growth rates $|DW|\leq C\|f_1\|_{0,\delta,\Omega}^{(1-\alpha)}|y|^\alpha$, $|D^2W|\leq C\|f_1\|_{0,\delta,\Omega}^{(1-\alpha)}|y|^{\alpha-1}$ from \eqref{eq:Wnorm} (where we have recalled that $\|W\|_{1,\alpha}\leq C\|W\|_{2,\delta}^{(-1-\alpha)}$ for the former estimate).
\end{proof}
The next theorem covers the remaining part of Theorem \ref{thm:rough}: the case $s\in(-s_1, 0]$. As stated in the introduction, the theorem is a perturbative result around the case $s=0$, where the boundary operator is continuous.
\begin{thm}\label{thm:perturb}
Let $\th_0\in(0,\pi)$ and $u\in C_{2,\alpha}^{(-1-\alpha)}$ be an axi-symmetric solution of \eqref{eq:fullspace}. We assume \eqref{ass:1}--\eqref{ass:5} hold. Then there exists $\alpha_0=\alpha_0(\th_0,\tilde\epsilon,\Lambda/\lambda)\in(0,1)$ such that if $\alpha\in(0,\alpha_0)$, then there exists $s_1=s_1(\th_0,\alpha,\Lambda/\lambda)\in(0,\frac{\pi}{2})$ such that if
\begin{equation}\label{eq:beta02}
\beta_0=\lim_{\substack{y\to 0\\ y\in{\gamma_{\textup{cone}}}}}\beta(y)=(\cos(s),\sin(s))\quad\text{ such that }s\in(-s_1,s_1),
\end{equation} then
$$\|u\|_{2,\alpha}^{(-1-\alpha)}\leq C\big(\|f\|_{0,\alpha}^{(1-\alpha)}+\|g\|_{1,\alpha}^{(-\alpha)}+\|u\|_0\big).$$
The constant $C$ depends on the norms $\|A^{ij}\|_{0,\alpha}^{(0)}$, $\|A^i\|_{0,\alpha}^{(1-\alpha)}$, $\|A^0\|_{0,\alpha}^{(1-\alpha)}$, $\|\beta\|_{1,\alpha,{\gamma_{\textup{cone}}}}^{(-\alpha)}$, $\|\beta^0\|_{1,\alpha,{\gamma_{\textup{cone}}}}^{(-\alpha)}$, as well as $\Lambda$, $\lambda$, $\eta_1$, $\th_0$, $\tilde\epsilon$, $R$, $s_1$ and $\alpha$.
If $[\th_1,\th_2]\subset(0,\pi)$, then $\alpha_0$ may be taken uniform with respect to $\th_0\in[\th_1,\th_2]$, and then $s_1$ and $C$ may also be taken uniform with respect to $\th_0$ and $\alpha\in(0,\alpha_0)$.
\end{thm}
\begin{proof}
We perturb around the case of a continuous oblique vector, $s=0$, by recalling the results of Lieberman \cite[Proposition 3.1]{L87} (alternatively, see \cite[Section 4.1]{L}). In particular, working on the symmetry domain $\omega$, we may define a new boundary vector
$$\tilde\beta=\beta-(0,\sin(s)),$$
so that $u$ satisfies
\begin{equation}
\begin{cases}
a^{ij}\partial_{ij}u+b^i\partial_iu+cu=f &\text{ in }\omega,\\
\tilde\beta \cdot Du+\beta^0u=g-\sin(s)\partial_{y_2}u &\text{ on }{\gamma_{\textup{cone}}}\cup{\gamma_{\textup{ball}}},\\
u_{y_2}=0 &\text{ on }{\gamma_{\textup{symm}}}.
\end{cases}
\end{equation}
From \cite[Proposition 3.1]{L87} (applying the same argument as in the proof of Lemma \ref{lemma:W} above to weight only by distance to the vertex, not all of $\partial\omega$), we then have the estimate
\begin{equation}\label{ineq:absorb}
\|u\|_{2,\alpha,\omega}^{(-1-\alpha)}\leq C\big(\|f\|_{0,\alpha}^{(1-\alpha)}+\|g\|_{1,\alpha}^{(-\alpha)}+\|\sin(s)\partial_{y_2}u\|_{1,\alpha}^{(-\alpha)}+\|u\|_0\big),
\end{equation}
where $C>0$ depends on the H\"older norms of the coefficients and the ellipticity.
Thus if $s\in(-s_1,s_1)$ with $s_1$ sufficiently small, then
$$C\|\sin(s)u_{y_2}\|_{1,\alpha}^{(-\alpha)}\leq Cs_1\|u\|_{2,\alpha}^{(-1-\alpha)}\leq \frac 12\|u\|_{2,\alpha}^{(-1-\alpha)},$$
and we conclude the claimed estimate by absorbing this term onto the left in \eqref{ineq:absorb} and returning to the original domain $\Omega$.
To see that the estimates may be taken locally uniform with respect to $\th_0$, note that a careful inspection of the proofs of \cite[Lemma 2.1, Proposition 3.1]{L87} (see also \cite[Section 4.5]{L}) shows that the dependence of $\alpha_0$ and the constant $C>0$ on $\th_0$ comes solely from the estimates of the Miller barrier, as in \S\ref{sec:barrier} and Lemma \ref{lemma:Miller} below. However, it is clear from the construction in \cite{Miller} (compare \cite[Chapter 3]{L} and also Remark \ref{rmk:barrier} below) that the barrier function $v_\alpha$ and the constant $\alpha_0(\th_0)$ depend continuously on $\th_0$. Thus, given $[\th_1,\th_2]\subset(0,\pi)$, we may take $\alpha_*<\min_{\th_0\in[\th_1,\th_2]}\alpha_0(\th_0)$ and use a single barrier $v_\alpha$ with $\alpha\leq\alpha_*$ to show that the constants are all locally uniform with respect to $\th_0$. The uniform dependence of $s_1$ on $\alpha\leq \alpha_*$ then follows directly from the proof above, as the constant $C>0$ may now be taken to be uniform.
\end{proof}
\section{Barrier functions for oblique problems on cones}\label{sec:barrier}
A vital tool in the proof of Theorem \ref{thm:main} is the \textit{Miller barrier}. We recall from \cite{Miller} (see also \cite[Chapter 3]{L}) that for each $\alpha\in(0,1)$ and $\Lambda>\lambda>0$, there exists a function $F_\alpha(\th)$ such that
$$v_\alpha=r^\alpha F_\alpha(\th)$$
satisfies, for all constant coefficient operators of the form $A_0^{ij}$ with ellipticity $\lambda$ and upper bound $\Lambda$,
$$A^{ij}_0v_\alpha\leq 0\text{ in }\Omega,$$
and that, moreover, $F_\alpha'(0)=0$, so that $v_\alpha$ is a well-defined axi-symmetric function on $\Omega$. By construction (see \cite[Lemmas 3.5--3.8]{L}), the limits
\begin{equation}
\lim_{\alpha\to0+}F_\alpha(\th)=1,\quad \lim_{\alpha\to0+} F_\alpha'(\th)=0
\end{equation}
hold uniformly for $\th\in(0,\pi)$.
For each $\th_0\in(0,\pi)$, there exists an $\alpha_0(\th_0)\in(0,1]$ such that for each $\alpha\in(0,\alpha_0)$, $c_*\leq F_\alpha(\th)\leq 1$ for all $\th\in[0,\th_0]$ and some $c_*>0$ (depending on $\alpha$). In addition, $F_\alpha'(\th)<0$ on $(0,\th_0]$. For $\alpha\in(0,\alpha_0)$, we refer to the function $v_\alpha$ as the \textit{Miller barrier} and note that, by construction,
\begin{equation}\label{ineq:miller}
c_*|y|^\alpha\leq v_\alpha\leq |y|^\alpha.
\end{equation}
To use the Miller barrier in proving Theorem \ref{thm:main}, we need to investigate its behaviour under certain oblique boundary operators. Converting to axi-symmetric coordinates, we first need some notation. Let $\beta_0=(\beta_1,\beta_2)$ be a constant, inward pointing, oblique vector and suppose that $A^{ij}_0$ is a constant matrix of ellipticity $\lambda$ and upper bound $\Lambda$. Define coordinates $(z_1,z_2)$ on $\omega$ as in \eqref{def:zcoords} such that $\partial_{z_1}=\partial_\beta$, $\partial_{z_2}=\partial_\tau$, where $\tau=\nu^\perp$ is the unit tangent to ${\gamma_{\textup{cone}}}$ and $\nu$ is the inward unit normal. In these coordinates, following \eqref{def:tildecoords}, the elliptic operator takes the form
$$A^{ij}_0\partial_{x_ix_j}\psi=\tilde{a}^{ij}\partial_{z_iz_j}\psi+\frac{b_0^{2,1}}{y_2}\partial_{y_2}\psi\text{ for axi-symmetric functions }\psi,$$
where the constant coefficients $\tilde{a}^{ij}$ satisfy the ellipticity
$$\frac{\lambda}{\epsilon^2}\leq \tilde{a}^{11},\tilde{a}^{22}\leq\frac{\Lambda}{\epsilon^2},\quad \epsilon=\beta_0\cdot\nu>0.$$
With this notation, we obtain the following lemma.
\begin{lemma}\label{lemma:Miller}
Let $\beta_0=(\beta_1,\beta_2)$ be a constant, inward pointing, oblique vector on ${\gamma_{\textup{cone}}}$ such that $\beta_1,\beta_2\neq0$ have the same sign, $\th_0\in(0,\pi)$. Suppose that the constant coefficients $\tilde{a}^{ij}$ are derived from a constant matrix $A^{ij}_0$ of ellipticity $\lambda$ and upper bound $\Lambda$ as above. Then there exists $\alpha_1\in(0, \alpha_0]$ such that for all $\alpha\in(0,\alpha_1)$, there exists $c_1(\alpha)>0$ so that the Miller barrier $v_\alpha$ satisfies
$$ M_1v_\alpha:=\beta_0\cdot Dv_\alpha+\frac{\nu_1}{\beta_2}\frac{\tilde a^{22}}{\tilde a^{11}}\tau\cdot Dv_\alpha-\frac{1}{\epsilon}\frac{\nu_1}{\tilde a^{11}}\frac{\beta_1}{\beta_2}\frac{b_0^{2,1}}{y_2}v_\alpha\leq -c_1|y|^{\alpha-1} \text{ on }{\gamma_{\textup{cone}}},$$
and also, for $\varepsilon>0$ sufficiently small (depending on $\th_0$, $\beta_1$, $\beta_2$, $\alpha$, $\Lambda$, $\lambda$),
$$ M_2v_\alpha:=\beta_0\cdot Dv_\alpha+\frac{\nu_1+\varepsilon \nu_2}{\beta_2-\varepsilon \beta_1}\frac{\tilde a^{22}}{\tilde a^{11}} \tau\cdot Dv_\alpha-\frac{1}{\epsilon}\frac{\nu_1+\varepsilon \nu_2}{\tilde a^{11}}\frac{\beta_1}{\beta_2-\varepsilon \beta_1}\frac{b_0^{2,1}}{y_2}v_\alpha\leq -c_1|y|^{\alpha-1} \text{ on }{\gamma_{\textup{cone}}}.$$
\end{lemma}
\begin{proof}
Note first that
\begin{equation*}\begin{aligned}
(v_\alpha)_{y_1}=\alpha r^{\alpha-1}\cos\th F_\alpha(\th)-r^{\alpha-1}\sin\th F_\alpha'(\th),\quad (v_\alpha)_{y_2}=\alpha r^{\alpha-1}\sin\th F_\alpha(\th)+r^{\alpha-1}\cos\th F_\alpha'(\th).
\end{aligned}\end{equation*}
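As a quick sanity check on this chain-rule computation, the two formulas can be compared against finite differences for a sample profile standing in for $F_\alpha$ (the profile, exponent, and base point below are arbitrary illustrative choices, not taken from the construction):

```python
import math

alpha = 0.5                               # sample homogeneity exponent
F  = lambda t: math.cos(0.3 * t)          # stand-in for F_alpha
Fp = lambda t: -0.3 * math.sin(0.3 * t)   # its derivative

def v(y1, y2):
    # v = r^alpha F(theta) written in Cartesian coordinates
    r, th = math.hypot(y1, y2), math.atan2(y2, y1)
    return r ** alpha * F(th)

def check_polar_derivatives(y1=0.7, y2=0.4, h=1e-6):
    r, th = math.hypot(y1, y2), math.atan2(y2, y1)
    # claimed closed forms for the partial derivatives of v
    vy1 = alpha * r ** (alpha - 1) * math.cos(th) * F(th) - r ** (alpha - 1) * math.sin(th) * Fp(th)
    vy2 = alpha * r ** (alpha - 1) * math.sin(th) * F(th) + r ** (alpha - 1) * math.cos(th) * Fp(th)
    # central finite differences for comparison
    vy1_num = (v(y1 + h, y2) - v(y1 - h, y2)) / (2 * h)
    vy2_num = (v(y1, y2 + h) - v(y1, y2 - h)) / (2 * h)
    return abs(vy1 - vy1_num), abs(vy2 - vy2_num)
```

Both discrepancies are at the level of the finite-difference error, confirming the formulas.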
Then, noting $\nu=(\sin\th_0,-\cos\th_0)$,
\begin{equation*}\begin{aligned}
M_1v_\alpha=&\,\beta_0\cdot Dv_\alpha+\frac{\nu_1}{\beta_2}\frac{\tilde a^{22}}{\tilde a^{11}}\tau\cdot Dv_\alpha-\frac{1}{\epsilon}\frac{\nu_1}{\tilde a^{11}}\frac{\beta_1}{\beta_2}\frac{b_0^{2,1}}{y_2}v_\alpha\\
=&\,r^{\alpha-1}\Big(F_\alpha(\th_0)\big(\alpha \beta_1\cos\th_0+\alpha \beta_2\sin\th_0+\frac{\nu_1}{\beta_2}\frac{\tilde a^{22}}{\tilde a^{11}}(\alpha \nu_2\cos\th_0-\alpha \nu_1\sin\th_0)\big)\\
&\quad\qquad +F_\alpha'(\th_0)\big(-\beta_1\sin\th_0+\beta_2\cos\th_0-\frac{\nu_1}{\beta_2}\frac{\tilde a^{22}}{\tilde a^{11}}(\nu_2\sin\th_0+\nu_1\cos\th_0)\big)\\
&\quad\qquad -\frac{1}{\epsilon}\frac{\nu_1}{\tilde{a}^{11}}\frac{\beta_1}{\beta_2}\frac{b_0^{2,1}}{\sin\th_0}F_\alpha(\th_0)\Big)\\
=&\,r^{\alpha-1}\Big(F_\alpha(\th_0)\big(-\alpha\beta_0\cdot\tau-\alpha\frac{\nu_1}{\beta_2}\frac{\tilde a^{22}}{\tilde a^{11}}\big)-\beta_0\cdot\nu F_\alpha'(\th_0)-\frac{1}{\epsilon}\frac{\beta_1}{\beta_2}\frac{b_0^{2,1}}{\tilde a^{11}}F_\alpha(\th_0)\Big).
\end{aligned}\end{equation*}
Recalling the uniform limits $F_\alpha(\th_0)\to1$ and $F_\alpha'(\th_0)\to0$ as $\alpha\to0$, and observing that $\frac{\beta_1}{\beta_2}>0$, we obtain the claimed inequality by taking $\alpha>0$ sufficiently small.
Finally, a similar calculation shows that
$$ M_2v_\alpha\leq -c_1r^{\alpha-1}$$
provided first $\varepsilon>0$ is sufficiently small and then $\alpha>0$ is sufficiently small.
\end{proof}
\begin{rmk}\label{rmk:barrier}
It follows from the construction of the Miller barrier as described above (and given in \cite[Chapter 3]{L}) that the dependence of $\alpha_0$ on $\th_0$ is monotone, and that also $v_\alpha$ depends continuously on $\alpha,\th_0$ in the admissible range. Therefore, given a fixed range $\th_0\in[\th_1,\th_2]$, one may fix a single $\alpha_0$ and use a single barrier $v_\alpha$ for the whole interval of $\th_0$, thereby making the constants $c_*$, $c_1$ uniform.
\end{rmk}
\section{Comparison Principle}\label{sec:comp}
In the proof of Theorem \ref{thm:main}, we made use of two versions of the comparison principle. We state and prove both of them together here.
\begin{thm}\label{thm:comparison}
Let $L_0$ and $\beta_0$ be the operator and boundary vector as in \eqref{def:frozen} and let $\tilde a\in\mathbb{R}$, $\tilde b>0$ be given. Suppose that $u\in C^2(\omega)\cap C^1(\overline{\omega}\setminus\{0\})\cap C(\overline{\omega})$ satisfies either
\begin{equation}\label{eq:compare1}
\begin{cases}
L_0 u\leq0 & \text{ in }\omega,\\
u_{y_2}=0 & \text{ on }{\gamma_{\textup{symm}}},\\
\beta_0\cdot D u+\tilde{a}\tau\cdot D u-\frac{\tilde b}{y_2}u\leq0 & \text{ on }{\gamma_{\textup{cone}}},\\
u\geq 0 & \text{ on } \overline{{\gamma_{\textup{ball}}}},\\
u=0 & \text{ at } \{0\},
\end{cases}
\end{equation}
or
\begin{equation}\label{eq:compare2}
\begin{cases}
L_0 u+\tilde{c}_0(y)u\leq0 & \text{ in }\omega,\\
u\geq 0 & \text{ on }\overline{{\gamma_{\textup{symm}}}\cup{\gamma_{\textup{ball}}}},\\
\beta_0\cdot D u+\tilde{a}\tau\cdot D u-\frac{\tilde b}{y_2}u\leq0 & \text{ on }{\gamma_{\textup{cone}}},
\end{cases}
\end{equation}
where also $\tilde{c}_0(y)<0$ in $\omega$.\\
Then $u\geq 0$ on $\overline{\omega}$.
\end{thm}
\begin{proof}
First suppose that $u$ satisfies problem \eqref{eq:compare1}. We begin by returning to the domain $\Omega$ by rotating around the $y_1$-axis. By \eqref{eq:L0bar}, the function $u$ then satisfies the uniformly elliptic equation obtained from \eqref{eq:fullspace} by freezing the principal coefficients at 0 and setting the lower order terms to be zero. By the strong maximum principle, $u$ cannot attain an interior minimum in $\Omega$ unless it is constant. However, no negative constant will satisfy the oblique condition on ${\gamma_{\textup{cone}}}$ as $\tilde{b}>0$. Returning to the original domain, this implies that $u$ does not attain a negative minimum in $\omega\cup{\gamma_{\textup{symm}}}$.
Clearly $u$ cannot attain a negative minimum on either $\overline{{\gamma_{\textup{ball}}}}$ or at $0$ by the Dirichlet conditions imposed there.
Finally, we check that $u$ does not attain a negative minimum on ${\gamma_{\textup{cone}}}$. If $u$ attains a negative minimum at $y^*\in {\gamma_{\textup{cone}}}$, then $\tau\cdot D u(y^*)=0$, $\beta_0\cdot Du(y^*)\geq0$ (as $\beta_0$ is inward pointing). So $$\beta_0\cdot D u(y^*)+\tilde{a}\tau\cdot D u(y^*)-\frac{\tilde b}{y^*_2}u(y^*)\geq-\frac{\tilde b}{y^*_2}u(y^*) >0,$$
contradicting the boundary condition.
In the second case, \eqref{eq:compare2}, we observe first that on the set where $u<0$, the partial differential inequality may be strengthened to the strict inequality $L_0u<0$. Hence, returning to the domain $\Omega$, we have that $u$ satisfies this strict inequality for the uniformly elliptic operator $\overline{L}_0$, hence cannot attain a negative minimum in the interior of the set $\Omega\cap\{u<0\}$. Moreover, by the boundary condition on $\overline{{\gamma_{\textup{symm}}}\cup{\gamma_{\textup{ball}}}}$, $u$ cannot attain a negative minimum on those portions of the boundary either. Finally, we follow the argument above to conclude that $u$ cannot attain a negative minimum on ${\gamma_{\textup{cone}}}$.
\end{proof}
\section{Counterexamples and Proof of Theorem \ref{thm:counterexample}}\label{sec:counter2}
We work on a domain $\Omega$, a cone with boundary ${\Gamma_{\textup{cone}}}$. For simplicity, we work in $\mathbb{R}^3$ and suppose that the axis of the cone is in the $x_3$ direction. In standard spherical coordinates $(r,\th,\varphi)$, we have that
$$\Omega=\{(r,\th,\varphi)\,|\,r>0,\,0\leq\th<\th_0,\,\varphi\in[0,2\pi)\},$$
where $\th_0\in(0,\pi)$. In this notation, the cylindrical symmetry coordinates for axi-symmetric functions correspond to $y_1=r\cos\th$, $y_2=r\sin\th$. We denote by ${\mathrm{P}}_\alpha$ the Legendre function of the first kind of degree $\alpha$ (which, for non-integer $\alpha$, is not a polynomial).
\begin{thm}
Consider the oblique derivative problem with axi-symmetric oblique vector
\begin{equation}\label{eq:purelaplace}
\begin{cases}
\Delta u=0 &\text{ in }\Omega,\\
\beta\cdot Du=0 &\text{ on }{\Gamma_{\textup{cone}}},
\end{cases}
\end{equation}
where, in the (cylindrical) symmetry coordinates $(y_1,y_2)$, $\beta=(\cos(s),\sin(s))$, $s\in(-\pi+\th_0,\th_0)$.
\begin{enumerate}
\item\label{itm:1} Suppose $\th_0\in(\frac{\pi}{2},\pi)$. Then there exists $s_0\in(-\pi+\th_0,0)$ such that if either
$$s\in(-\pi+\th_0,s_0)\quad\text{ or }\quad s\in(\frac{\pi}{2},\th_0),$$
then there exists $\alpha=\alpha(\th_0,s)\in(0,1)$ such that
$$u(r,\th)=r^\alpha{\mathrm{P}}_\alpha(\cos\th)$$
is an axi-symmetric solution of \eqref{eq:purelaplace} that lies in $C^{0,\alpha}\setminus C^{0,\alpha+\epsilon}$ for any $\epsilon>0$.
\item\label{itm:2} Suppose $\th_0\in(0,\frac{\pi}{2})$. Then there exists $s_0\in(-\frac{\pi}{2},0)$ such that if $s\in(-\frac{\pi}{2},s_0)$, then there exists $\alpha=\alpha(\th_0,s)\in(0,1)$ such that
$$u(r,\th)=r^\alpha{\mathrm{P}}_\alpha(\cos\th)$$
is an axi-symmetric solution of \eqref{eq:purelaplace} that lies in $C^{0,\alpha}\setminus C^{0,\alpha+\epsilon}$ for any $\epsilon>0$.
\end{enumerate}
\end{thm}
We note that in case (2), the obtained solution is strictly positive away from the origin. Numerics suggest that this is also the case for the solutions obtained for case (1).
\begin{proof}
We begin by seeking a separable, axi-symmetric solution of the Laplace equation, that is, a solution of the form
$$u(r,\th,\varphi)=R(r)\Theta(\th).$$
By standard ODE arguments, we then arrive at the function
$$u_\alpha(r,\th,\varphi):=r^{\alpha}{\mathrm{P}}_{\alpha}(\cos\th),$$
where ${\mathrm{P}}_\alpha$ is the Legendre function of degree $\alpha$. This function is easily seen to satisfy the PDE of \eqref{eq:purelaplace} (see Appendix \ref{sec:counter1} below for more details on the derivation). To give a solution in full space, $u_\alpha$ must also satisfy the symmetry requirement that $\Theta'(0)=0$. This is easy to verify as
$$\Theta'(0)=\lim_{\th\to0+}\big(-\sin\th{\mathrm{P}}_\alpha'(\cos\th)\big)=\lim_{z\to1-}\Big(-\sqrt{1-z^2}\frac{(\alpha + 1) (z {\mathrm{P}}_\alpha(z) - {\mathrm{P}}_{\alpha + 1}(z))}{1-z^2}\Big)=0,$$
where we have applied the identity of \cite[(14.10.4)]{DLMF} for the derivative and \cite[(14.8.1)]{DLMF} for the limit.
We therefore search for $\alpha$ solving the oblique derivative condition. To that end, we move into the cylindrical coordinates $y=(y_1,y_2)$ and apply again the identity of \cite[(14.10.4)]{DLMF},
$${\mathrm{P}}_\alpha'(z)=(\alpha+1)\frac{z{\mathrm{P}}_\alpha(z)-{\mathrm{P}}_{\alpha+1}(z)}{1-z^2},$$
to check
\begin{equation*}\begin{aligned}
(u_\alpha)_{y_1}= r^{\alpha-1}\big((2\alpha+1)\cos\th{\mathrm{P}}_\alpha(\cos\th)-(\alpha+1){\mathrm{P}}_{\alpha+1}(\cos\th)\big),
\end{aligned}\end{equation*}
and
\begin{equation*}\begin{aligned}
(u_\alpha)_{y_2}= r^{\alpha-1}\big(\sin\th\big(\alpha-(\alpha+1)\frac{\cos^2\th}{\sin^2\th}\big){\mathrm{P}}_\alpha(\cos\th)+(\alpha+1)\frac{\cos\th}{\sin\th}{\mathrm{P}}_{\alpha+1}(\cos\th)\big).
\end{aligned}\end{equation*}
As both of these derivatives separate, we define their angular components
\begin{equation*}\begin{aligned}
U_1(\th,\alpha)=&\,(2\alpha+1)\cos\th{\mathrm{P}}_\alpha(\cos\th)-(\alpha+1){\mathrm{P}}_{\alpha+1}(\cos\th),\\
U_2(\th,\alpha)=&\,\sin\th\big(\alpha-(\alpha+1)\frac{\cos^2\th}{\sin^2\th}\big){\mathrm{P}}_\alpha(\cos\th)+(\alpha+1)\frac{\cos\th}{\sin\th}{\mathrm{P}}_{\alpha+1}(\cos\th).
\end{aligned}\end{equation*}
Solving the oblique condition is therefore equivalent to finding $\alpha$ such that
\begin{equation*}\begin{aligned}
B(\th_0,\alpha,s):=&\,\cos(s)U_1(\th_0,\alpha)+\sin(s)U_2(\th_0,\alpha)=0.
\end{aligned}\end{equation*}
From the identities ${\mathrm{P}}_0(z)=1$, ${\mathrm{P}}_1(z)=z$, ${\mathrm{P}}_2(z)=\frac{3z^2-1}{2}$ for all $z\in[-1,1]$, one easily checks that, for all $\th\in(0,\pi)$,
$$\lim_{\alpha\to0+}U_1(\th,\alpha)=\lim_{\alpha\to0+}U_2(\th,\alpha)=0,$$
and also
$$\lim_{\alpha\to1-}U_1(\th,\alpha)=1,\quad \lim_{\alpha\to 1-}U_2(\th,\alpha)=0.$$
Thus
$$B(\th_0,0,s)=0,\quad B(\th_0,1,s)=\cos(s),\quad\text{ for all }\th_0\in(0,\pi).$$
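These endpoint values are quick to confirm from the explicit formulas for $P_0$, $P_1$, $P_2$; the following sketch evaluates $U_1$, $U_2$ at $\alpha\in\{0,1\}$ (where $P_\alpha$, $P_{\alpha+1}$ reduce to classical Legendre polynomials) at a few arbitrary sample angles:

```python
import math

# classical Legendre polynomials P_0, P_1, P_2
P = {0: lambda z: 1.0, 1: lambda z: z, 2: lambda z: (3 * z ** 2 - 1) / 2}

def U1(th, a):
    # angular component of (u_alpha)_{y1}, for integer a in {0, 1}
    c = math.cos(th)
    return (2 * a + 1) * c * P[a](c) - (a + 1) * P[a + 1](c)

def U2(th, a):
    # angular component of (u_alpha)_{y2}, for integer a in {0, 1}
    s, c = math.sin(th), math.cos(th)
    return s * (a - (a + 1) * c ** 2 / s ** 2) * P[a](c) + (a + 1) * (c / s) * P[a + 1](c)

for th in (0.3, 1.2, 2.5):                  # arbitrary sample angles in (0, pi)
    assert abs(U1(th, 0)) < 1e-9 and abs(U2(th, 0)) < 1e-9
    assert abs(U1(th, 1) - 1) < 1e-9 and abs(U2(th, 1)) < 1e-9
```

The values $B(\th_0,0,s)=0$ and $B(\th_0,1,s)=\cos(s)$ follow immediately.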
\textbf{Proof of \eqref{itm:1}.}\\
\textit{Case 1:} $s\in(-\pi+\th_0,0)$. Then we have that $B(\th_0,1,s)=\cos(s)>0$.
We prove that there exists $s_0(\th_0)\in(-\pi+\th_0,0)$ such that for $s\in(-\pi+\th_0,s_0)$ we have that
$$\frac{\partial B}{\partial\alpha}(\th_0,\alpha,s)\big|_{\alpha=0}<0.$$
Assuming the existence of such an $s_0$, we clearly obtain for all $s\in(-\pi+\th_0,s_0)$ an $\alpha=\alpha(\th_0,s)\in(0,1)$ such that $B(\th_0,\alpha,s)=0$ as claimed in the theorem.
We begin by computing the derivatives of $U_1$, $U_2$ with respect to $\alpha$.
\begin{equation*}\begin{aligned}
\frac{\partial U_1}{\partial\alpha}=&\,2\cos\th_0{\mathrm{P}}_{\alpha}(\cos\th_0)-{\mathrm{P}}_{\alpha+1}(\cos\th_0)\\
&+(2\alpha+1)\cos\th_0\partial_\alpha{\mathrm{P}}_{\alpha}(\cos\th_0)-(\alpha+1)\partial_\alpha{\mathrm{P}}_{\alpha+1}(\cos\th_0),\\
\frac{\partial U_2}{\partial\alpha}=&\,\sin\th_0\big(1-\frac{\cos^2\th_0}{\sin^2\th_0}\big){\mathrm{P}}_{\alpha}(\cos\th_0)+\frac{\cos\th_0}{\sin\th_0}{\mathrm{P}}_{\alpha+1}(\cos\th_0)\\
&+\sin\th_0\big(\alpha-(\alpha+1)\frac{\cos^2\th_0}{\sin^2\th_0}\big)\partial_\alpha{\mathrm{P}}_{\alpha}(\cos\th_0)+(\alpha+1)\frac{\cos\th_0}{\sin\th_0}\partial_\alpha{\mathrm{P}}_{\alpha+1}(\cos\th_0).
\end{aligned}\end{equation*}
From \cite[(4.18)]{Sm06}, we have the identity
\begin{equation}
(\alpha+1)\frac{\partial{\mathrm{P}}_{\alpha+1}(z)}{\partial\alpha}-(2\alpha+1)z\frac{\partial{\mathrm{P}}_\alpha(z)}{\partial\alpha}+\alpha\frac{\partial{\mathrm{P}}_{\alpha-1}(z)}{\partial\alpha}=-{\mathrm{P}}_{\alpha+1}(z)+2z{\mathrm{P}}_\alpha(z)-{\mathrm{P}}_{\alpha-1}(z).
\end{equation}
Thus
\begin{equation*}\begin{aligned}
\frac{\partial U_1}{\partial\alpha}\big|_{\alpha=0}=&\,2\cos\th_0{\mathrm{P}}_{0}(\cos\th_0)-{\mathrm{P}}_{1}(\cos\th_0)\\
&+\cos\th_0\partial_\alpha{\mathrm{P}}_{\alpha}(\cos\th_0)\big|_{\alpha=0}-\partial_\alpha{\mathrm{P}}_{\alpha+1}(\cos\th_0)\big|_{\alpha=0}\\
=&\,{\mathrm{P}}_{-1}(\cos\th_0)=1,
\end{aligned}\end{equation*}
where we have used in the last line that ${\mathrm{P}}_{-1}(z)=1$ for all $z$.
Similarly,
\begin{equation*}\begin{aligned}
\frac{\partial U_2}{\partial\alpha}\big|_{\alpha=0}=&\,\sin\th_0\big(1-\frac{\cos^2\th_0}{\sin^2\th_0}\big){\mathrm{P}}_{0}(\cos\th_0)+\frac{\cos\th_0}{\sin\th_0}{\mathrm{P}}_{1}(\cos\th_0)\\
&-\frac{\cos^2\th_0}{\sin\th_0}\partial_\alpha{\mathrm{P}}_{\alpha}(\cos\th_0)\big|_{\alpha=0}+\frac{\cos\th_0}{\sin\th_0}\partial_\alpha{\mathrm{P}}_{\alpha+1}(\cos\th_0)\big|_{\alpha=0}\\
=&\,\sin\th_0+\frac{\cos\th_0}{\sin\th_0}\big(-{\mathrm{P}}_1(\cos\th_0)+2\cos\th_0{\mathrm{P}}_0(\cos\th_0)-{\mathrm{P}}_{-1}(\cos\th_0)\big)\\
=&\,\sin\th_0+\frac{\cos\th_0}{\sin\th_0}(\cos\th_0-1)=\frac{1-\cos\th_0}{\sin\th_0}.
\end{aligned}\end{equation*}
Hence we find
\begin{equation}\label{eq:7.3}
\frac{\partial B}{\partial\alpha}(\th_0,\alpha,s)\big|_{\alpha=0}=\cos(s)+\sin(s)\frac{1-\cos\th_0}{\sin\th_0}=:V(\th_0,s).
\end{equation}
One sees easily that for each fixed $\th_0\in(\frac{\pi}{2},\pi)$, $V$ is a strictly increasing function of $s$ on $(-\pi+\th_0,0)$ such that $V(\th_0,-\pi+\th_0)=-1$ and $V(\th_0,0)=1$. We choose $s_0(\th_0)$ to be the solution of $V(\th_0,s_0(\th_0))=0$ on this interval, so that for all $s\in(-\pi+\th_0,s_0)$,
$$\frac{\partial B}{\partial\alpha}(\th_0,\alpha,s)\big|_{\alpha=0}=V(\th_0,s)<0,$$
as required.\\
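The stated properties of $V$ are elementary to verify; a short numerical sketch (the value $\th_0=2.0\in(\frac{\pi}{2},\pi)$ is an arbitrary sample):

```python
import math

def V(th0, s):
    # V(th0, s) from (7.3); note (1 - cos th0)/sin th0 = tan(th0/2)
    return math.cos(s) + math.sin(s) * (1 - math.cos(th0)) / math.sin(th0)

th0 = 2.0                                        # sample angle in (pi/2, pi)
assert abs(V(th0, th0 - math.pi) + 1) < 1e-12    # V(th0, -pi + th0) = -1
assert abs(V(th0, 0.0) - 1) < 1e-12              # V(th0, 0) = 1
# Monotonicity: dV/ds = -sin(s) + tan(th0/2) cos(s) > 0 for s in (th0 - pi, 0),
# since there s lies in (-pi/2, 0), where -sin(s) > 0 and cos(s) > 0.
```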
\textit{Case 2:} $s\in(\frac{\pi}{2},\th_0)$.
In this case, we check from the above identities that $B(\th_0,0,s)=0$, $B(\th_0,1,s)=\cos(s)<0$ and, from the formula \eqref{eq:7.3}, find also
$$\frac{\partial B}{\partial\alpha}(\th_0,\alpha,s)\big|_{\alpha=0}=V(\th_0,s)>0\quad\text{ for all }\th_0\in(\frac{\pi}{2},\pi),\: s\in(\frac{\pi}{2},\th_0),$$ so that again there exists $\alpha=\alpha(\th_0,s)\in(0,1)$ such that $B(\th_0,\alpha,s)=0$, concluding the proof.\\
\textbf{Proof of \eqref{itm:2}.}\\
This proceeds in much the same manner as the above.
We first note that
$$B(\th_0,0,s)=0,\quad B(\th_0,1,s)=\cos(s)>0\text{ for }s\in(-\frac{\pi}{2},0),$$
and again observe that the derivative
$$\frac{\partial B}{\partial\alpha}(\th_0,\alpha,-\frac{\pi}{2})\big|_{\alpha=0}=V(\th_0,-\frac{\pi}{2})<0\quad\text{ for all }\th_0\in(0,\frac{\pi}{2}),$$
where $V(\th_0,-\frac{\pi}{2})<0$ follows directly from \eqref{eq:7.3}.
By continuity of $V$ with respect to $s$, we therefore obtain for each $\th_0\in(0,\frac{\pi}{2})$ an $s_0(\th_0)\in(-\frac{\pi}{2},0)$ such that for all $s\in(-\frac{\pi}{2},s_0)$, $V(\th_0,s)<0$, and conclude as before.
\end{proof}
| {
"timestamp": "2020-09-15T02:32:56",
"yymm": "2009",
"arxiv_id": "2009.06486",
"language": "en",
"url": "https://arxiv.org/abs/2009.06486",
"abstract": "We study the oblique derivative problem for uniformly elliptic equations on cone domains. Under the assumption of axi-symmetry of the solution, we find sufficient conditions on the angle of the oblique vector for Hölder regularity of the gradient to hold up to the vertex of the cone. The proof of regularity is based on the application of carefully constructed barrier methods or via perturbative arguments. In the case that such regularity does not hold, we give explicit counterexamples. We also give a counterexample to regularity in the absence of axi-symmetry. Unlike in the equivalent two dimensional problem, the gradient Hölder regularity does not hold for all axi-symmetric solutions, but rather the qualitative regularity properties depend on both the opening angle of the cone and the angle of the oblique vector in the boundary condition.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Oblique Derivative Problems for Elliptic Equations on Conical Domains"
} |
https://arxiv.org/abs/1611.09806 | Squarefree values of polynomial discriminants I | We determine the density of monic integer polynomials of given degree $n>1$ that have squarefree discriminant; in particular, we prove for the first time that the lower density of such polynomials is positive. Similarly, we prove that the density of monic integer polynomials $f(x)$, such that $f(x)$ is irreducible and $\mathbb Z[x]/(f(x))$ is the ring of integers in its fraction field, is positive, and is in fact given by $\zeta(2)^{-1}$.It also follows from our methods that there are $\gg X^{1/2+1/n}$ monogenic number fields of degree $n$ having associated Galois group $S_n$ and absolute discriminant less than $X$, and we conjecture that the exponent in this lower bound is optimal. | \section{Introduction}
The purpose of this paper is to determine the density of monic integer
polynomials of given degree whose discriminant is squarefree. For
polynomials $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$, the term $(-1)^ia_i$
represents the sum of the $i$-fold products of the roots of $f$. It is
thus natural to order monic polynomials
$f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ by the height
$H(f):={\rm max}\{|a_i|^{1/i}\}$ (see, e.g., \cite{BG2}, \cite{PS2},
\cite{SW}). We determine the density of monic integer polynomials
having squarefree discriminant with respect to the ordering by this
height, and show that the density is positive. The existence of infinitely many monic integer
polynomials of each degree having squarefree discriminant was
first demonstrated by Kedlaya~\cite{Kedlaya}.
However, it has not previously been known whether the density exists or even that the lower density
is positive.
To state the theorem, define the constants $\lambda_n(p)$ by
\begin{equation}\label{jos}
\lambda_n(p)=\left\{
\begin{array}{cl}
1 & \mbox{if $n =1$,}\\[.075in]
1-\displaystyle\frac1{p^2} & \mbox{if $n= 2$,}\\[.135in]
1-\displaystyle\frac2{p^2}+\frac1{p^3} & \mbox{if $n= 3$,}\\[.185in]
1-\displaystyle\frac1{p}+\frac{(p-1)^2(1-(-p)^{2-n})}{p^2(p+1)} & \mbox{if $n\geq 4$}
\end{array}\right.
\end{equation}
for $p\neq 2$; also, let $\lambda_1(2)=1$ and $\lambda_n(2)=1/2$ for
$n\geq2$. Then a result of Brakenhoff~\cite[Theorem~6.9]{ABZ} states
that $\lambda_n(p)$ is the density of monic polynomials over~${\mathbb Z}_p$
having discriminant indivisible by~$p^2$.
Let~$\lambda_n:=\prod_p\lambda_n(p)$, where the product is over all
primes $p$. We prove:
\begin{theorem}\label{polydisc2}
Let $n\geq1$ be an integer. Then when
monic integer polynomials $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ of
degree~$n$ are ordered by $H(f):=
{\rm max}\{|a_1|,|a_2|^{1/2},\ldots,|a_n|^{1/n}\}$, the density having
squarefree discriminant $\Delta(f)$ exists and is equal to $\lambda_n>0$.
\end{theorem}
Our method of proof implies that the theorem remains true even if we
restrict only to those polynomials of a given degree $n$ having a
given number of real roots.
It is easy to see from the definition of the $\lambda_n(p)$ that the
$\lambda_n$ rapidly approach a limit $\lambda$ as $n\to\infty$,
namely,
\begin{equation}
\lambda=\lim_{n\to\infty} \lambda_n = \prod_p
\left(1-\displaystyle\frac1{p}+\frac{(p-1)^2}{p^2(p+1)}\right)
\approx 35.8232\%.
\end{equation}
Therefore, as the degree tends to infinity, the
probability that a random monic integer polynomial has squarefree
discriminant tends to $\lambda\approx 35.8232\%$.
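The numerical value of $\lambda$ is easy to reproduce by truncating the Euler product; the sketch below truncates at an arbitrary bound of $10^5$ (since the local factors are $1-3/p^2+O(1/p^3)$, the truncation error is negligible at the displayed precision):

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def limiting_density(bound=10 ** 5):
    # truncated Euler product for lambda = prod_p (1 - 1/p + (p-1)^2 / (p^2 (p+1)))
    prod = 1.0
    for p in primes_up_to(bound):
        prod *= 1 - 1 / p + (p - 1) ** 2 / (p ** 2 * (p + 1))
    return prod

# limiting_density() agrees with lambda, approximately 0.358232, to several decimals
```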
In algebraic number theory, one often considers number fields that are
defined as a quotient ring $K_f:={\mathbb Q}[x]/(f(x))$ for some irreducible
integer polynomial $f(x)$. The question naturally arises as to whether
$R_f:={\mathbb Z}[x]/(f(x))$ gives the ring of integers of $K_f$. Our second
main theorem states that this is in fact the case for {most}
polynomials $f(x)$. We prove:
\begin{theorem}\label{polydiscmax2}
The density of irreducible monic integer polynomials
$f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ of degree~$n>1$, when ordered by
$H(f):={\rm max}\{|a_1|,|a_2|^{1/2},\ldots,|a_n|^{1/n}\}$, such that
${\mathbb Z}[x]/(f(x))$ is the ring of integers in
its fraction field is
$\prod_p(1-1/p^2)=\zeta(2)^{-1}$.
\end{theorem}
Note that $\zeta(2)^{-1}\approx\, 60.7927\%$. Since $100\%$ of monic integer polynomials are irreducible (and indeed have associated Galois group $S_n$) by Hilbert's irreducibility theorem, it
follows that $\approx 60.7927\%$ of monic integer polynomials $f$ of
any given degree $n>1$ have the property that $f$ is irreducible and
${\mathbb Z}[x]/(f(x))$ is the maximal order in its fraction field. The
quantity $\rho_n(p):=1-1/p^2$ represents the density of monic integer polynomials
of degree $n>1$ over ${\mathbb Z}_p$ such that ${\mathbb Z}_p[x]/(f(x))$ is the maximal
order in ${\mathbb Q}_p[x]/(f(x))$. The determination of this beautiful
$p$-adic density, and its independence of $n$, is due to Hendrik
Lenstra (see~\cite[Proposition~3.5]{ABZ}). Theorem~\ref{polydiscmax2}
again holds even if we restrict to polynomials of degree $n$ having a
fixed number of real roots.
If the discriminant of an order in a number field is squarefree, then that order must be maximal. Thus the irreducible polynomials counted in Theorem~\ref{polydisc2} are a subset of those counted in Theorem~\ref{polydiscmax2}. The additional usefulness of Theorem~\ref{polydisc2} in some arithmetic applications is that if $f(x)$ is a monic irreducible integer polynomial of degree $n$ with squarefree discriminant, then not only is ${\mathbb Z}[x]/(f(x))$ maximal in the number field ${\mathbb Q}[x]/(f(x))$ but the associated Galois group is necessarily the symmetric group $S_n$ (see, e.g., \cite{Yamamura}, \cite{Kondo} for further details and applications).
We prove both Theorems~\ref{polydisc2} and \ref{polydiscmax2} with
power-saving error terms. More precisely, let ${V_n^{\textrm{mon}}}({\mathbb Z})$ denote
the subset of ${\mathbb Z}[x]$ consisting of all monic integer polynomials of
degree $n$. Then it is easy to see that
\begin{equation*}
{\#\{f\in {V_n^{\textrm{mon}}}({\mathbb Z}): H(f)<X \}} = 2^nX^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}} + O(X^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}-{ 1}}).
\end{equation*}
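Indeed, for integer $X$ the condition $H(f)<X$ allows exactly $2X^i-1$ integer choices of $a_i$, so the count is $\prod_{i=1}^n(2X^i-1)$, from which the main term $2^nX^{n(n+1)/2}$ is transparent. A small sketch comparing the exact count with the main term (the sample values of $n$ and $X$ are arbitrary):

```python
def count_height_below(n, X):
    # number of monic integer f of degree n with H(f) < X, for a positive integer X:
    # |a_i|^(1/i) < X  <=>  |a_i| <= X**i - 1, giving 2*X**i - 1 choices per coefficient
    total = 1
    for i in range(1, n + 1):
        total *= 2 * X ** i - 1
    return total

n, X = 3, 50
ratio = count_height_below(n, X) / (2 ** n * X ** (n * (n + 1) // 2))
# ratio is already within about 1% of 1 here, and tends to 1 as X grows
```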
We prove
\begin{equation}\label{errorterms}
{\begin{array}{ccl}
\displaystyle
{\#\{f\in {V_n^{\textrm{mon}}}({\mathbb Z}) : H(f)<X \mbox{ and $\Delta(f)$ squarefree}\}}
&\!\!=\!\!& \lambda_n\cdot 2^n{X^\frac{\scriptstyle n(n+1)}{\scriptstyle2}} + O_\varepsilon(X^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}-{\textstyle \frac15}+\varepsilon});\\[.25in]
\displaystyle
{\#\{f\in {V_n^{\textrm{mon}}}({\mathbb Z}) : H(f)<X \mbox{ and ${\mathbb Z}[x]/(f(x))$ maximal}\}}&\!\!=\!\!& {\displaystyle\frac{6}{\pi^2}}\cdot 2^nX^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}} + O_\varepsilon(X^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}-{\textstyle \frac15}+\varepsilon})
\end{array}}
\end{equation}
for $n>1$.
These asymptotics imply Theorems~\ref{polydisc2} and
\ref{polydiscmax2}. Since it is known that the number of reducible
monic polynomials of a given degree~$n$ is of a strictly smaller order
of magnitude than the error terms above (see Proposition
\ref{propredboundall}), it does not matter whether we require $f$ to
be irreducible in the above asymptotic formulae.
Recall that a number field $K$ is called {\it monogenic} if its ring
of integers is generated over~${\mathbb Z}$ by one element, i.e., if
${\mathbb Z}[\theta]$ gives the maximal order of $K$ for some $\theta\in K$.
As a further application of our methods, we obtain the following
corollary to Theorem~\ref{polydisc2}:
\begin{corollary}\label{monogenic}
Let $n>1$. The number of isomorphism classes of number fields
of degree~$n$ and absolute discriminant less than $X$ that are
monogenic and have associated Galois group $S_n$ is $\gg X^{1/2+1/n}$.
\end{corollary}
We note that our lower bound for the number of monogenic $S_n$-number
fields of degree $n$ improves slightly the best-known lower bounds for
the number of $S_n$-number fields of degree $n$, due to Ellenberg and
Venkatesh~\cite[Theorem 1.1]{EV}, by simply forgetting the monogenicity condition
in Corollary~\ref{monogenic}. We conjecture that the exponent in our
lower bound in Corollary~\ref{monogenic} for monogenic number fields of degree $n$ is optimal.
As is illustrated by Corollary~\ref{monogenic}, Theorems~\ref{polydisc2} and \ref{polydiscmax2} give a powerful method to produce number fields of a given degree having given properties or invariants. We give one further example of interest. Given a number field $K$ of degree $n$ with $r$ real embeddings $\xi_1,\dots,\xi_r$ and $s$ complex conjugate pairs of complex embeddings $\xi_{r+1},\bar\xi_{r+1},\ldots,\xi_{r+s},\bar\xi_{r+s}$, the ring of integers $\mathcal O_K$ may naturally be viewed as a lattice in ${\mathbb R}^n$ via the map $x\mapsto (\xi_1(x),\ldots,\xi_{r+s}(x))\in {\mathbb R}^r\times{\mathbb C}^s\cong {\mathbb R}^n$. We may thus ask about the length of the shortest vector in this lattice generating $K$.
In their final remark~\cite[Remark~3.3]{EV}, Ellenberg and Venkatesh conjecture that the number of number fields $K$ of degree $n$
whose shortest vector in ${\mathcal O}_K$ generating $K$ is of length less than~$Y$ is $\,\asymp Y^{(n-1)(n+2)/2}$. They prove an upper bound of this order of magnitude. We use Theorem~\ref{polydiscmax2} to prove also a lower bound of this size, thereby proving their conjecture:
\begin{corollary}\label{shortvector}
Let $n>1$. The number of isomorphism classes of number fields $K$ of degree~$n$ whose shortest vector in
${\mathcal O}_K$ generating $K$ has length less than $Y$ is $\,\asymp$ $Y^{(n-1)(n+2)/2}$.
\end{corollary}
Again, Corollary~\ref{shortvector} remains true even if we impose the condition that the associated Galois group is~$S_n$ (by using Theorem~\ref{polydisc2} instead of Theorem~\ref{polydiscmax2}).
Finally, we remark that our methods allow the analogues of all of the
above results to be proven with any finite set of local conditions
imposed at finitely many places (including at infinity); the orders of
magnitudes in these theorems are then seen to remain the same---with different (but easily computable in the cases of Theorems~\ref{polydisc2} and \ref{polydiscmax2}) positive
constants---provided that no local conditions
are imposed that force the set being counted to be empty (i.e., no
local conditions are imposed at~$p$ in Theorem~\ref{polydisc2} that
force $p^2$ to divide the discriminant, no local conditions are
imposed at~$p$ in Theorem~\ref{polydiscmax2} that cause
${\mathbb Z}_p[x]/(f(x))$ to be non-maximal over~${\mathbb Z}_p$, and no local
conditions are imposed at $p$ in Corollary~\ref{monogenic} that cause
such number fields to be non-monogenic locally).
\vspace{.1in} We now briefly describe our methods. It is easily seen
that the desired densities in Theorems~\ref{polydisc2} and
\ref{polydiscmax2}, if they exist, must be bounded above by the Euler
products $\prod_p \lambda_n(p)$ and $\prod_p (1-1/p^2)$,
respectively. The difficulty is to show that these Euler products are
also the correct lower bounds. As is standard in sieve theory, to
demonstrate the lower bound, a ``tail estimate'' is required to show
that not too many discriminants of polynomials $f$ are divisible by
$p^2$ when $p$ is large relative to the discriminant $\Delta(f)$ of
$f$ (here, large means larger than $\Delta(f)^{1/(n-1)}$, say).
For a prime $p$ and a monic integer polynomial $f$ of degree $n$
such that $p^2\mid \Delta(f)$, we say that $p^2$ {\it strongly
divides} $\Delta(f)$ if $p^2\mid \Delta(f + pg)$ for every integer
polynomial $g$ of degree at most $n$; otherwise, we say that $p^2$ {\it weakly
divides} $\Delta(f)$. Then $p^2$ strongly divides $\Delta(f)$ if and
only if $f$ modulo $p$ has at least two distinct multiple roots in
$\bar{{\mathbb F}}_p$, or has a root in ${\mathbb F}_p$ of multiplicity at least 3; and
$p^2$ weakly divides $\Delta(f)$ if $p^2\mid \Delta(f)$ but $f$ modulo
$p$ has only one multiple root in ${\mathbb F}_p$ and this root is a simple
double root.
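To make the dichotomy concrete, here is a small computer-algebra check (the prime $p=5$ and the two polynomials below are our own illustrative choices, not taken from the argument): $x^3$ has a triple root modulo $5$, so $5^2$ strongly divides its discriminant, whereas $x^3+x^2+25\equiv x^2(x+1)\pmod 5$ has a single simple double root, so the divisibility by $25$ is destroyed by a suitable translate $f+5g$.

```python
from itertools import product
from sympy import Poly, discriminant, symbols

x = symbols('x')
p = 5

# Strongly divisible: f = x^3 has a triple root modulo 5.  Since
# disc(f + p*g) mod p^2 depends only on g mod p, checking all residues
# g = a*x^2 + b*x + c with 0 <= a, b, c < p verifies strong divisibility.
for a, b, c in product(range(p), repeat=3):
    assert discriminant(Poly(x**3 + p*(a*x**2 + b*x + c), x)) % p**2 == 0

# Weakly divisible: f = x^3 + x^2 + 25 is x^2*(x + 1) mod 5, a single
# simple double root; 25 divides disc(f) but not disc(f + 5).
f = x**3 + x**2 + 25
assert discriminant(Poly(f, x)) % p**2 == 0
assert discriminant(Poly(f + p, x)) % p**2 != 0
```

The first loop exploits the fact that $\Delta(f+pg)$ modulo $p^2$ depends only on $g$ modulo $p$, so the universal quantifier over $g$ reduces to finitely many cases.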
For any squarefree positive integer $m$, let ${\mathcal W}_m^{\rm {(1)}}$
(resp.\ ${\mathcal W}_m^{\rm {(2)}}$) denote the set of monic integer polynomials in $V^{\textrm{mon}}_n({\mathbb Z})$ whose
discriminant is strongly divisible (resp.\ weakly divisible) by $p^2$
for every prime factor $p$ of $m$.
Then we prove tail estimates for ${\mathcal W}_m^{\rm {(1)}}$ and ${\mathcal W}_m^{\rm {(2)}}$ separately,
as follows.
\begin{theorem}\label{thm:mainestimate} For any positive real number $M$ and any $\epsilon>0$, we have
\vspace{-5pt}\begin{eqnarray*}
\label{eq:equs}
{\rm (a)}\quad \#\bigcup_{\substack{m>M\\ m\;\mathrm{ squarefree}
}}\{f\in{\mathcal W}_m^{\rm {(1)}}:H(f)<X\}&=&
O_\epsilon(X^{n(n+1)/2+\epsilon}/M)+O(X^{n(n-1)/2});\\[.075in]
\label{equ1}
{\rm (b)}\quad
\#\bigcup_{\substack{m>M\\
m\;\mathrm{ squarefree}
}}\{f\in{\mathcal W}_m^{\rm {(2)}}:H(f)<X\}&=&
O_\epsilon(X^{n(n+1)/2+\epsilon}/M)+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}),
\end{eqnarray*}
where the implied constants are independent of $M$ and $X$.
\end{theorem}
The power savings in the error terms above also have applications towards
determining the distributions of low-lying zeros in families of
Dedekind zeta functions of monogenic degree-$n$ fields; see \cite[\S5.2]{SST}.
We prove the estimate in the strongly divisible case (a) of
Theorem~\ref{thm:mainestimate} by geometric techniques, namely, a
quantitative version of the Ekedahl sieve (\cite{Ek}, \cite[Theorem
3.3]{geosieve}). While the proof of \cite[Theorem~3.3]{geosieve}
uses homogeneous heights, and considers the union over all primes
$p>M$, the same proof also applies in our case of weighted homogeneous
heights, and a union over all squarefree $m>M$. Since the last
coefficient $a_n$ is in a larger range than the other coefficients, we
in fact obtain a smaller error term than in \cite[Theorem~3.3]{geosieve}.
The estimate in the weakly divisible case (b) of
Theorem~\ref{thm:mainestimate} is considerably more difficult. Our
main idea is to embed polynomials $f$, whose discriminant is {\it weakly}
divisible by $p^2$, into a larger space that has more symmetry, such
that the invariants under this symmetry are given exactly by the
coefficients of $f$; moreover, we arrange for the image of $f$ in the
bigger space to have discriminant {\it strongly} divisible by $p^2$. We
then count in the bigger space.
More precisely, we make use of the representation of $G={\rm SO}_n$ on the
space $W=W_n$ of symmetric $n\times n$ matrices, as studied in
\cite{BG2,SW}. We fix $A_0$ to be the $n\times n$ symmetric matrix
with $1$'s on the anti-diagonal and $0$'s elsewhere. The group
$G={\rm SO}(A_0)$ acts on $W$ via the action $g\cdot B=gBg^t$ for $g\in G$
and $B\in W$. Define the {\it invariant polynomial} of an element
$B\in W$ by $$f_B(x) = (-1)^{n(n-1)/2}\det(A_0x - B).$$ Then $f_B$ is
a monic polynomial of degree~$n$. It is known that the ring of
polynomial invariants for the action of $G$ on $W$ is freely generated
by the coefficients of the invariant polynomial. Define the {\it
discriminant} $\Delta(B)$ and {\it height} $H(B)$ of an element
$B\in W$ by $\Delta(B)=\Delta(f_B)$ and $H(B)=H(f_B)$. This
representation of $G$ on $W$ was used in \cite{BG2,SW} to study
2-descent on the hyperelliptic curves $C:y^2=f_B(x)$.
A key step of our proof of Theorem~\ref{thm:mainestimate}(b) is the
construction, for every positive squarefree integer $m$, of a map
\begin{equation*}
\sigma_m:{\mathcal W}_m^{\rm {(2)}}\to \frac14W({\mathbb Z}),
\end{equation*}
such that $f_{\sigma_m(f)}=f$ for every $f\in {\mathcal W}_m^{\rm {(2)}}$; here
$\frac14W({\mathbb Z})\subset W({\mathbb Q})$ is the lattice of elements $B$ whose
coefficients have denominators dividing $4$. In our construction, the
image of $\sigma_m$ in fact lies in a special subspace $W_0$ of $W$;
namely, if $n=2g+1$ is odd, then $W_0$ consists of symmetric matrices
$B\in W$ whose top left $g\times g$ block is 0, and if $n=2g+2$ is
even, then $W_0$ consists of symmetric matrices $B\in W$ whose top
left $g\times (g+1)$ block is 0. We associate to any element of $W_0$
a further polynomial invariant which we call the $Q$-{\it invariant}
(which is a relative invariant for the subgroup of ${\rm SO}(A_0)$ that
fixes $W_0$). The significance of the $Q$-invariant is that, if the
discriminant polynomial $\Delta$ is restricted to $W_0$, then it is
not irreducible as a polynomial in the coordinates of $W_0$, but rather is
divisible by the polynomial $Q^2$.
Moreover, we show that for elements $B$ in the image of $\sigma_m$, we
have $|Q(B)|=m$.
Finally, {even though the discriminant polynomial of $f\in{\mathcal W}_m^{\rm {(2)}}$ is
{\it weakly} divisible by $p^2$, the discriminant polynomial of its
image $\sigma_m(f)$, when viewed as a polynomial on $W_0\cap
\frac14W({\mathbb Z})$, is {\it strongly} divisible by $p^2$.} This is the
key point of our construction.
To obtain Theorem~\ref{thm:mainestimate}(b), it thus suffices to
estimate the number of $G({\mathbb Z})$-equivalence classes of elements $B\in
W_0\cap \frac14W({\mathbb Z})$ of height less than $X$ having $Q$-invariant
larger than $M$. This can be reduced to a geometry-of-numbers
argument in the spirit of \cite{BG2,SW}, although the current count is
more subtle in that we are counting certain elements in a cuspidal
region of a fundamental domain for the action of $G({\mathbb Z})$ on $W({\mathbb R})$.
The $G({\mathbb Q})$-orbits of elements $B\in W_0\cap W({\mathbb Q})$ are called {\it
distinguished orbits} in \cite{BG2,SW}, as they correspond to the
identity 2-Selmer elements of the Jacobians of the corresponding
hyperelliptic curves $y^2=f_B(x)$ over ${\mathbb Q}$; these were not counted
separately by the geometry-of-numbers methods of \cite{BG2,SW}, as
these elements lie deeper in the cusps of the fundamental domains. We
develop a method to count the desired elements in the cusp, following
the arguments of \cite{BG2,SW} while using the invariance and
algebraic properties of the $Q$-invariant polynomial. This yields
Theorem~\ref{thm:mainestimate}(b), which then allows us to carry out
the sieves required to obtain Theorems~\ref{polydisc2} and
\ref{polydiscmax2}.
Corollary~\ref{monogenic} can be deduced from
Theorem~\ref{polydisc2} roughly as follows. Let $g\in {V_n^{\textrm{mon}}}({\mathbb R})$ be
a monic real polynomial of degree~$n$ and nonzero discriminant having
$r$ real roots and $2s$ complex roots. Then ${\mathbb R}[x]/(g(x))$ is
isomorphic to ${\mathbb R}^n\cong {\mathbb R}^r\times {\mathbb C}^s$ via its real and complex
embeddings. Let $\theta$ denote the image of $x$ in ${\mathbb R}[x]/(g(x))$
and let $R_g$ denote the lattice formed by taking the ${\mathbb Z}$-span of
$1,\theta,\ldots,\theta^{n-1}$. Suppose further that there exist
monic integer polynomials $h_i$ of degree $i$, for $i=1,\ldots,n-1$,
such that $1,h_1(\theta),h_2(\theta),\ldots,h_{n-1}(\theta)$ is the
unique Minkowski-reduced basis of $R_g$; in this case, we say that the
polynomial $g(x)$ is {\it strongly quasi-reduced}. Note that if $g$
is an integer polynomial, then the lattice $R_g$ is simply the image
of the ring ${\mathbb Z}[x]/(g(x))\subset {\mathbb R}[x]/(g(x))$ in ${\mathbb R}^n$ via its
archimedean embeddings.
We prove that, when ordered by height, 100\% of monic integer
polynomials $g(x)$ are strongly quasi-reduced. We furthermore prove
that two distinct strongly quasi-reduced integer polynomials $g(x)$
and $g^\ast(x)$ of degree~$n$ with vanishing $x^{n-1}$-term
necessarily yield non-isomorphic rings $R_g$ and $R_{g^\ast}$.
The proof of the positive density result of Theorem~\ref{polydisc2}
then produces
$\gg X^{1/2+1/n}$ strongly quasi-reduced monic integer
polynomials $g(x)$ of degree~$n$ having vanishing $x^{n-1}$-term,
squarefree discriminant, and height less than $X^{1/(n(n-1))}$. These
therefore correspond to $\gg X^{1/2+1/n}$ non-isomorphic monogenic
rings of integers in $S_n$-number fields of degree $n$ having absolute discriminant less
than~$X$, and Corollary~\ref{monogenic} follows.
A similar argument proves Corollary~\ref{shortvector}. Suppose $f(x)$ is a strongly quasi-reduced irreducible monic integer polynomial of degree $n$ with squarefree discriminant $\Delta(f)$.
Elementary estimates show that if $H(f)<Y$, then $\|\theta\|\ll Y$, and so the shortest vector in the ring of integers generating the field also has length bounded by $O(Y)$. The above-mentioned result on the number of strongly quasi-reduced irreducible monic integer polynomials of degree $n$ with squarefree discriminant, vanishing $x^{n-1}$-coefficient, and height bounded by $Y$ then gives the desired lower bound of
$\gg Y^{(n-1)(n+2)/2}.$
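For the reader's convenience, the exponent bookkeeping behind the last two statements can be checked directly (a sketch, under the normalization implicit in the weighted homogeneous heights mentioned above, namely that $H(g)<Y$ confines the coefficient $c_i$ of $x^{n-i}$ to a range of size $\asymp Y^i$):

```latex
\#\{g \in {V_n^{\textrm{mon}}}({\mathbb Z}) : c_1 = 0,\ H(g)<Y\}
  \;\asymp\; \prod_{i=2}^{n} Y^{i}
  \;=\; Y^{\frac{n(n+1)}{2}-1}
  \;=\; Y^{\frac{(n-1)(n+2)}{2}},
\qquad
\Bigl(X^{\frac{1}{n(n-1)}}\Bigr)^{\frac{(n-1)(n+2)}{2}}
  \;=\; X^{\frac{n+2}{2n}}
  \;=\; X^{\frac12+\frac1n}.
```

Thus the count $\gg X^{1/2+1/n}$ of polynomials of height less than $X^{1/(n(n-1))}$ and the lower bound $\gg Y^{(n-1)(n+2)/2}$ are two readings of the same exponent.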
\vspace{.1in} This paper is organized as follows. In Section~\ref{sQ},
we collect some algebraic facts about the representation $2\otimes
g\otimes(g+1)$ of ${\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g+1}$ and we define the
$Q$-invariant, which generates the ring of polynomial invariants for this action.
In Sections~\ref{sec:monicodd} and~\ref{sec:moniceven},
we then apply geometry-of-numbers techniques as described above to prove
the critical estimates of Theorem~\ref{thm:mainestimate}. In Section~\ref{sec:sieve}, we then show
how our main theorems, Theorems \ref{polydisc2} and \ref{polydiscmax2}, can be deduced from
Theorem~\ref{thm:mainestimate}. Finally, in
Section~\ref{latticearg}, we prove Corollary~\ref{monogenic} on the number of monogenic $S_n$-number fields
of degree~$n$ having bounded absolute discriminant, as well as Corollary~\ref{shortvector} on the number of rings of integers in number fields of degree $n$ whose shortest vector generating the number field is of bounded length.
\section{The representation $V_g=2\otimes g\otimes(g+1)$ of $H_g={\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g+1}$ and the $Q$-invariant}\label{sQ}
In this section, we collect some algebraic facts about the
representation $V_g=2\otimes g\otimes(g+1)$ of the group
$H_g={\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g+1}$. This representation will also play
an important role in the sequel.
First, we claim that this representation is {\it prehomogeneous},
i.e., the action of ${\mathbb G}_m\times H_g$ on $V_g$ has a single Zariski
open orbit. We prove this by induction on $g$. The assertion is
clear for $g=1$, where the representation is that of ${\mathbb G}_m\times
{\rm SL}_2\times {\rm SL}_2$ on $2\times 2$ matrices; the single relative
invariant in this case is the determinant, and the open orbit consists
of nonsingular matrices. For higher $g$, we note that $V_g$ is a {\it
castling transform} of $V_{g-1}$ in the sense of Sato and
Kimura~\cite{SK}; namely, the orbits of ${\mathbb G}_m\times
{\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g-1}$ on $2\times g \times (g-1)$ are in
natural one-to-one correspondence with the orbits of
${\mathbb G}_m\times{\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g+1}$ on $2\times g\times
(2g-(g-1))=2\times g\times(g+1)$, and under this correspondence, the
open orbit maps to an open orbit (cf. \cite{SK}). Thus all the
representations $V_g$ for the action of ${\mathbb G}_m\times H_g$ are
prehomogeneous.
Next, we may construct an invariant for the action of $H_g$ on $V_g$
(and thus a relative invariant for the action of ${\mathbb G}_m\times H_g$ on
$V_g$) as follows. We write any $2\times g\times (g+1)$ matrix
$v$ in $V_g$ as a pair $(A,B)$ of $g\times(g+1)$ matrices. Let
$M_v(x,y)$ denote the vector of $g\times g$ minors of $Ax-By$, where
$x$ and $y$ are indeterminates; in other words, the $i$-th coordinate
of the vector $M_v(x,y)$ is given by $(-1)^{i-1}$ times the
determinant of the matrix obtained by removing the $i$-th column of
$Ax-By$.
Then $M_v(x,y)$ is a vector of length $g+1$ consisting of binary forms
of degree $g$ in $x$ and~$y$. Each binary form thus consists of $g+1$
coefficients. Taking the determinant of the resulting
$(g+1)\times(g+1)$ matrix of coefficients of these $g+1$ binary forms
in $M_v(x,y)$ then yields a polynomial $Q=Q(v)$ in the coordinates of
$V_g$, invariant under the action of $H_g$. We call this polynomial
the $Q$-{\it invariant}. It is irreducible and homogeneous of degree
$g(g+1)$ in the coordinates of $V_g$, and generates the ring of
polynomial invariants for the action of $H_g$ on $V_g$. The
$Q$-invariant is also the {\it hyperdeterminant} of the $2\times g
\times (g+1)$ matrix (cf.\ \cite[Theorem 3.18]{GKZ}).
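The construction above is straightforward to carry out in a computer algebra system. The following sketch (with arbitrarily chosen integer matrices, and one concrete choice of how the ${\rm SL}_2$ factor acts on the pair $(A,B)$; neither is taken from the text) computes $Q$ for $g=2$ and checks its invariance under sample group elements, as well as its homogeneity of degree $g(g+1)=6$.

```python
from sympy import Matrix, Poly, symbols

x, y = symbols('x y')
g = 2

def Q_invariant(A, B):
    # Vector of signed g x g minors of A*x - B*y: the i-th coordinate
    # (0-indexed) is (-1)^i times the determinant obtained by deleting column i.
    L = A * x - B * y
    minors = []
    for i in range(g + 1):
        cols = [j for j in range(g + 1) if j != i]
        minors.append((-1) ** i * L.extract(list(range(g)), cols).det())
    # Each minor is a binary form of degree g; Q is the determinant of the
    # (g+1) x (g+1) matrix of their coefficients.
    rows = []
    for form in minors:
        p = Poly(form, x, y)
        rows.append([p.coeff_monomial(x ** (g - k) * y ** k) for k in range(g + 1)])
    return Matrix(rows).det()

A = Matrix([[1, 2, 0], [0, 1, 3]])
B = Matrix([[2, 0, 1], [1, 1, 0]])
q = Q_invariant(A, B)
assert q != 0

# Invariance under (one choice of) the SL_2 factor: (A, B) -> (aA + cB, bA + dB)
a, b, c, d = 1, 1, 2, 3                        # ad - bc = 1
assert Q_invariant(a*A + c*B, b*A + d*B) == q

# Invariance under SL_g x SL_{g+1}: (A, B) -> (g1 A g2^t, g1 B g2^t)
g1 = Matrix([[1, 1], [0, 1]])                  # det 1
g2 = Matrix([[1, 0, 2], [0, 1, 0], [1, 0, 3]]) # det 1
assert Q_invariant(g1*A*g2.T, g1*B*g2.T) == q

# Homogeneity of degree g(g+1) = 6:
assert Q_invariant(2*A, 2*B) == 2**6 * q
```

The ${\rm SL}_2$ check works because the substitution $(A,B)\mapsto(aA+cB,\,bA+dB)$ amounts to a unimodular change of the variables $(x,y)$, under which the coefficient matrix transforms by an element of determinant one.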
Note that castling transforms preserve stabilizers over any field.
Since, for any field $k$, the generic stabilizer for the action of
$H_1(k)$ on $V_1(k)$ is isomorphic to ${\rm SL}_2(k)$, it follows
that this remains the generic stabilizer for the action of $H_g(k)$ on
$V_g(k)$ for all $g\geq 1$.
\section{A uniformity estimate for odd degree monic polynomials}\label{sec:monicodd}
In this section, we prove the estimate of
Theorem~\ref{thm:mainestimate}(b) when $n=2g+1$ is odd, for any $g\geq
1$.
\subsection{Invariant theory for the fundamental representation:
${\rm SO}_n$ on the space $W$ of symmetric $n\times n$ matrices}
Let $A_0$ denote the $n\times n$ symmetric matrix with $1$'s on the
anti-diagonal and $0$'s elsewhere. The group $G={\rm SO}(A_0)$ acts on $W$
via the action
\begin{equation*}
\gamma\cdot B=\gamma B\gamma^t.
\end{equation*}
We recall some of the arithmetic invariant theory for the
representation $W$ of $n\times n$ symmetric matrices of the split
orthogonal group $G$; see \cite{BG2} for more details. The ring of
polynomial invariants for the action of $G({\mathbb C})$ on $W({\mathbb C})$ is freely
generated by the coefficients of the {\it invariant polynomial
$f_B(x)$ of $B$}, defined by $$f_B(x):=(-1)^{g}\det(A_0x-B).$$ We
define the {\it discriminant} $\Delta$ on $W$ by
$\Delta(B)=\Delta(f_B)$, and the $G({\mathbb R})$-invariant {\it height} of
elements in $W({\mathbb R})$ by $H(B)=H(f_B).$
Let $k$ be any field of characteristic not $2$.
For a monic polynomial $f(x)\in k[x]$ of degree~$n$ such that
$\Delta(f)\neq0$, let $C_f$ denote the smooth hyperelliptic curve
$y^2=f(x)$ of genus $g$ and let $J_f$ denote the Jacobian of
$C_f$. Then $C_f$ has a rational Weierstrass point at infinity.
The stabilizer of an element $B\in W(k)$ with invariant polynomial
$f(x)$ is naturally isomorphic to $J_f[2](k)$
by~\cite[Proposition~5.1]{BG2}, and hence has cardinality at most
$\#J_f[2](\bar k)=2^{2g}$, where $\bar k$ denotes a separable closure
of $k$.
We say that the element (or
$G(k)$-orbit of) $B\in W(k)$ is {\it distinguished} over $k$ if there exists a
$g$-dimensional subspace defined over $k$ that is isotropic with
respect to both $A_0$ and $B$. If $B$ is distinguished, then the set
of these $g$-dimensional subspaces over $k$ is again in bijection with $J_f[2](k)$ by~\cite[Proposition~4.1]{BG2}, and so it too has cardinality at most $2^{2g}$.
In fact, it is known (see \cite[Proposition~5.1]{BG2}) that the elements of $J_f[2](k)$ are in natural bijection with the even-degree factors of $f$ defined over $k$. (Note that the number of even-degree factors of $f$ over $\bar k$ is indeed $2^{2g}$.) In particular, if $f$ is irreducible over $k$, then the group $J_f[2](k)$ is trivial.
Now let $W_0$ be the subspace of $W$ consisting of matrices whose top left
$g\times g$ block is zero. Then elements $B$ in $W_0(k)$ with nonzero
discriminant are all evidently distinguished since the $g$-dimensional subspace
$Y_g$ spanned by the first $g$ basis vectors is isotropic with respect
to both $A_0$ and $B$.
Let $G_0$ denote the subgroup of $G$
consisting of elements $\gamma$ such that $\gamma^t$ preserves
$Y_g$. Then $G_0$ acts on $W_0$.
An element $\gamma\in G_0$ has the block matrix form
\begin{equation}\label{eq:G_0}
\gamma=\Bigl(\begin{array}{cc}\gamma_1 & 0\\ \delta & \gamma_2
\end{array}\Bigr)\in\Bigl(\begin{array}{cc}M_{g\times g} & 0\\ M_{(g+1)\times g} & M_{(g+1)\times (g+1)}
\end{array}\Bigr),
\end{equation}
and so $\gamma\in G_0$ transforms the top right $g\times (g+1)$ block
of an element $B\in W_0$ as follows:
$$(\gamma\cdot B)^{\textrm{top}} = \gamma_1B^{\textrm{top}}\gamma_2^t,$$ where we use the
superscript ``top'' to denote the top right $g\times (g+1)$ block of
any given element in $W_0$. We may thus view $(A_0^{\textrm{top}},B^{\textrm{top}})$ as an element
of the representation $V_g=2\times g\times (g+1)$ considered in
Section \ref{sQ}. In particular, we may define the $Q$-{\it invariant} of $B\in W_0$ to be the
$Q$-invariant of $(A_0^{\textrm{top}},B^{\textrm{top}})$:
\begin{equation}\label{eqQB}
Q(B):=Q(A_0^{\textrm{top}},B^{\textrm{top}}).
\end{equation}
Then the $Q$-invariant is also a relative invariant for the action of $G_0$ on $W_0$, since for any
$\gamma\in G_0$ expressed in the form \eqref{eq:G_0}, we have
\begin{equation}\label{eq:weightG_0}
Q(\gamma\cdot B) = \det(\gamma_1)Q(B).
\end{equation}
In fact, we may extend the definition of the
$Q$-invariant to an even larger subset of $W({\mathbb Q})$ than $W_0({\mathbb Q})$. We have the following proposition.
\begin{proposition}\label{prop:extendQ}
Let $B\in W_0({\mathbb Q})$ be an element whose invariant polynomial $f(x)$ is
irreducible over~${\mathbb Q}$. Then for every $B'\in W_0({\mathbb Q})$ such that $B'$ is $G({\mathbb Z})$-equivalent to $B$, we have
$Q(B')=\pm Q(B)$.
\end{proposition}
\begin{proof}
Suppose $B'=\gamma\cdot B$ with $\gamma\in G({\mathbb Z})$ and $B,B'\in W_0({\mathbb Q})$.
Then $Y_g$ and $\gamma^t Y_g$ are both $g$-dimensional subspaces over ${\mathbb Q}$ isotropic with respect to both $A_0$ and $B$. Since $f$ is irreducible over ${\mathbb Q}$, we have that $J_f[2]({\mathbb Q})$ is trivial, and so these
two subspaces must be the same.
We conclude that $\gamma\in
G_0({\mathbb Z})$, and thus $Q(\gamma\cdot B)=\pm Q(B)$ by~\eqref{eq:weightG_0}.
\end{proof}
We may thus define the $|Q|$-{\it invariant} for any element $B\in W({\mathbb Q})$ that is $G({\mathbb Z})$-equivalent to some element $B'\in W_0({\mathbb Q})$ and whose invariant polynomial is irreducible over ${\mathbb Q}$; indeed, we set $|Q|(B):=|Q(B')|$. By Proposition~\ref{prop:extendQ}, this definition of $|Q|(B)$ is independent of the choice of $B'$. Note that all such elements $B\in W({\mathbb Q})$ are {distinguished}.
\subsection{Embedding ${\mathcal W}_m^{\rm {(2)}}$ into $\frac12W({\mathbb Z})$}\label{sembedodd}
We begin by describing those monic integer polynomials in ${V_n^{\textrm{mon}}}({\mathbb Z})$ that lie in ${\mathcal W}_m^{\rm {(2)}}$, i.e., the monic integer polynomials that have discriminant weakly divisible by $p^2$ for all $p\mid m$.
\begin{proposition}
Let $m$ be a positive squarefree integer, and let $f$ be a monic
integer polynomial whose discriminant is weakly divisible by $p^2$ for all $p\mid m$.
Then there exists an integer $\ell$ such that $f(x+\ell )$ has
the form
\begin{equation}
f(x+\ell ) = x^n + c_1x^{n-1} + \cdots + c_{n-2}x^2 + mc_{n-1}x +
m^2c_n
\end{equation}
for some integers $c_1,\ldots,c_n$.
\end{proposition}
\begin{proof}
Since $m$ is
squarefree, by the Chinese Remainder Theorem it suffices to prove the assertion in the case that $m=p$
is prime. Since $p$ divides the discriminant
of~$f$, the reduction of $f$ modulo~$p$ must have a repeated factor $h(x)^e$ for some polynomial $h\in
{\mathbb F}_p[x]$ and some integer $e\geq2$. As the discriminant of $f$ is not strongly divisible by
$p^2$, we see that $h$ is linear and $e=2$. By replacing $f(x)$ by
$f(x+\ell )$ for some integer $\ell$, if necessary, we may assume that
the repeated factor is $x^2$, i.e., we may assume that the constant
coefficient $c_n$ as well as the coefficient $c_{n-1}$ of $x$ are both
multiples of~$p$. By the resultant definition of the discriminant---$\Delta(f):={\textrm{Res}}(f(x),f'(x))$---it follows that there exist polynomials
$\Delta_1,\Delta_2,\Delta_3\in{\mathbb Z}[c_1,\ldots,c_n]$ such that
\begin{equation}\Delta(f) = c_n\Delta_1 + c_{n-1}^2\Delta_2 + c_{n-1}c_n\Delta_3.
\end{equation}
Since $p$ divides $c_{n-1}$ and $c_n$, we see that $p^2\mid \Delta(f)$
if and only if $p^2\mid c_n\Delta_1$. If $p^2$ does not divide $c_n$,
then $p$ divides $\Delta_1$, and continues to divide it even if one
modifies each $c_i$ by a multiple of $p$, and so $p^2$ divides $\Delta(f)$
strongly in that case, a contradiction. Therefore, we must have that
$p^2\mid c_n$.
\end{proof}
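In the smallest case $n=3$, the decomposition of $\Delta$ used in the proof can be written down explicitly and checked symbolically (the particular choice of $\Delta_1,\Delta_2,\Delta_3$ below, with $\Delta_3=0$, is our own; the decomposition is not unique):

```python
from sympy import discriminant, expand, symbols

x, c1, c2, c3 = symbols('x c1 c2 c3')

f = x**3 + c1*x**2 + c2*x + c3
Delta = discriminant(f, x)
# Delta = 18*c1*c2*c3 - 4*c1**3*c3 + c1**2*c2**2 - 4*c2**3 - 27*c3**2

# One explicit choice of Delta_1, Delta_2 (with Delta_3 = 0):
Delta1 = 18*c1*c2 - 4*c1**3 - 27*c3
Delta2 = c1**2 - 4*c2
assert expand(Delta - (c3*Delta1 + c2**2*Delta2)) == 0
```

With this decomposition in hand: if $p\mid c_2$, $p\mid c_3$, and $p^2\nmid c_3$, then $p^2\mid\Delta$ forces $p\mid\Delta_1$, a condition that persists when each $c_i$ is shifted by a multiple of $p$; this is exactly the strong-divisibility contradiction invoked at the end of the proof.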
Having identified the monic integer polynomials whose discriminants are weakly divisible by $p^2$ for all $p\mid m$, our aim now is to map these polynomials into a larger space, so that: 1) there is a discriminant polynomial defined on the larger space; 2) the map is discriminant-preserving; and, 3) the images of these polynomials have discriminant {\it strongly divisible by $p^2$} for all $p\mid m$.
To this end, consider the matrix
\begin{equation}\label{mat1}
B_m(c_1,\ldots,c_n) =
\left(\!\begin{array}{ccccccccc}&&&&&&&m&\!0\\[.1in]&&&&&&\iddots&\iddots& \\[.15in]&&&&&\,1\,&\,0\,&&\\[.125in] &&&&\,1\,&0&&&\\[.125in] &&&1&-c_1&\!\!-c_2/2\!\!&&& \\[.15in] &&1&\;\,0\;\,&\!\!-c_2/2\!\!&-c_3&\!\!-c_4/2\!\!&&\\[.045in]&\iddots&\;\,\,0\;\,\,&&&\!\!-c_4/2\!\!&-c_5&\ddots&\\[.025in] \;m\;&\,\,\,\iddots\,\,\,&&&&&\ddots&\ddots&\!\!\!-c_{n-1}/2\!\!\!
\\[.105in] \,0\,&&&&&&&\!\!\!-c_{n-1}/2\!\!\!&-c_n \end{array}\right)
\end{equation}
in $\frac12 W_0({\mathbb Z})$. It follows from a direct computation that
$$f_{B_m(c_1,\ldots,c_n)}(x) = x^n + c_1x^{n-1} + \cdots + c_{n-2}x^2
+ mc_{n-1}x + m^2c_n.$$ We set $\sigma_m(f) := B_m(c_1,\ldots,c_n) +
\ell A_0\in \frac12 W_0({\mathbb Z})$. Then we have $f_{\sigma_m(f)}=f.$ Another direct computation shows that
$|Q(B_m(c_1,\ldots,c_n))|=m$. Since translation by $\ell A_0$ acts on the
pair $(A_0^{\textrm{top}},B^{\textrm{top}})$ through a unipotent element of the
${\rm SL}_2$ factor, and the $Q$-invariant on $2\otimes g\otimes (g+1)$ is
${\rm SL}_2$-invariant, we conclude
that $$|Q({\sigma_m(f)})|=m.$$
Finally, we note that for all odd primes $p\mid m$, we have that $p^2$ weakly divides $\Delta(f)$, and $p^2$ weakly divides $\Delta(\sigma_m(f))$ when $\sigma_m(f)$ is viewed as an element of $\frac12W({\mathbb Z})$, {\it but $p^2$ strongly divides $\Delta(\sigma_m(f))$ when $\sigma_m(f)$ is viewed as an element of $\frac12W_0({\mathbb Z})$}!
We have proven the following theorem.
\begin{theorem}\label{keymap}
Let $m$ be a positive squarefree integer. There exists a map
$\sigma_m:{\mathcal W}_m^{\rm {(2)}}\to \frac12W_0({\mathbb Z})$ such that $f_{\sigma_m(f)}=f$ for
every $f\in {\mathcal W}_m^{\rm {(2)}}$ and, furthermore, $p^2$ strongly divides $\Delta(\sigma_m(f))$ for all $p\mid m$. In~addition, elements in the image of $\sigma_m$
have $|Q|$-invariant equal to $m$.
\end{theorem}
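As a consistency check on Theorem~\ref{keymap} in the smallest case $n=3$ (so $g=1$), where, in our reading of the degenerate case of \eqref{mat1}, the matrix $B_m$ specializes to a $3\times3$ matrix whose outer anti-diagonal entries are $m$, one can verify both the shape of the invariant polynomial and the value of the $Q$-invariant symbolically:

```python
from sympy import Matrix, Poly, expand, symbols

x, y, m, c1, c2, c3 = symbols('x y m c1 c2 c3')

# n = 3, g = 1: the anti-diagonal matrix A_0, and the n = 3 specialization
# of B_m (our reading of the degenerate case of (mat1)).  The top-left
# 1 x 1 block of B vanishes, so B lies in (1/2) W_0.
A0 = Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
B = Matrix([[0,  m,     0    ],
            [m, -c1,   -c2/2 ],
            [0, -c2/2, -c3   ]])

# f_B(x) = (-1)^g det(A_0 x - B) has the required shape:
fB = expand(-(A0*x - B).det())
assert fB == expand(x**3 + c1*x**2 + m*c2*x + m**2*c3)

# Q-invariant of the pair of top-right 1 x 2 blocks (A_0^top, B^top):
Atop, Btop = A0[0, 1:], B[0, 1:]
L = Atop*x - Btop*y                       # = [-m*y, x]
m0 = Poly(L[0, 1], x, y)                  # delete column 0, sign +1
m1 = Poly(-L[0, 0], x, y)                 # delete column 1, sign -1
coeffs = Matrix([[m0.coeff_monomial(x), m0.coeff_monomial(y)],
                 [m1.coeff_monomial(x), m1.coeff_monomial(y)]])
assert coeffs.det() == m                  # |Q(B_m)| = m
```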
Let ${\mathcal L}$ be the set of elements $v\in \frac12W({\mathbb Z})$ satisfying the
following conditions: $v$ is $G({\mathbb Z})$-equivalent to some element in
$\frac12W_0({\mathbb Z})$ and the invariant polynomial of $v$ is irreducible
over ${\mathbb Q}$.
Then by the remark following Proposition~\ref{prop:extendQ}, we may view $|Q|$ as a function also on ${\mathcal L}$.
Using ${\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$ to denote the set of irreducible
polynomials in ${\mathcal W}_m^{\rm {(2)}}$, we then have the following immediate consequence of Theorem \ref{keymap}:
\begin{theorem}\label{keymaporbit}
Let $m$ be a positive squarefree integer. There exists a map
$$\bar{\sigma}_m:{\mathcal W}_m^{{\rm {(2)}},{\rm irr}}\to G({\mathbb Z})\backslash{\mathcal L}$$ such that
$f_{\bar{\sigma}_m(f)}=f$ for every $f\in {\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$. Moreover, for every
element $B$ in the $G({\mathbb Z})$-orbit of an element in the image of $\bar{\sigma}_m$, we have
$|Q|(B)=m$.
\end{theorem}
It is well known that the number of reducible monic integer polynomials having height less than $X$ is of a strictly smaller order of magnitude than the total number of such polynomials (see, e.g., Proposition~\ref{propredboundall}). Thus, for our purposes of proving Theorem~\ref{thm:mainestimate}(b), it will suffice to count elements in ${\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$ of height less than $X$ over all $m>M$, which by Theorem~\ref{keymaporbit} we may do by
counting these special $G({\mathbb Z})$-orbits on ${\mathcal L}\subset \frac12W({\mathbb Z})$ having height less than $X$ and $|Q|$-invariant greater than $M$.
More precisely,
let $N({\mathcal L};M;X)$ denote the number of $G({\mathbb Z})$-equivalence classes of
elements in ${\mathcal L}$ whose $|Q|$-invariant is greater than $M$ and whose
height is less than $X$. Then, by Theorem~\ref{keymaporbit}, to obtain an upper bound for the left hand
side in Theorem~\ref{thm:mainestimate}(b), it suffices to obtain the
same upper bound for $N({\mathcal L};M;X)$.
On the other hand, we may estimate the number of orbits counted by
$N({\mathcal L}; M;X)$ using the averaging method as utilized in~\cite[\S3.1]{BG2}. Namely, we construct fundamental domains for
the action of $G({\mathbb Z})$ on $W({\mathbb R})$ using {\it Siegel sets}, and then
count the number of points in these fundamental domains that are
contained in ${\mathcal L}$. We describe the coordinates on $W({\mathbb R})$ and
$G({\mathbb R})$ needed to describe these fundamental domains explicitly in
Section~\ref{scoeffodd}. In Section~\ref{sgomodd}, we then
describe the integral that must be evaluated in order to estimate
$N({\mathcal L};M;X)$, as per the counting method of \cite[\S3.1]{BG2}, and
finally we evaluate this integral. This will complete the proof
of Theorem~\ref{thm:mainestimate}(b) in the case of odd integers~$n$.
\subsection{Coordinate systems on $G({\mathbb R})$}\label{scoeffodd}
In this subsection, we describe a
coordinate system on the group $G({\mathbb R})$.
Let us write the Iwasawa decomposition of $G({\mathbb R})$ as
$$
G({\mathbb R})=N({\mathbb R})TK,
$$ where $N$ is a unipotent group, $K$ is compact, and $T$ is the split torus of $G$ given by
\begin{equation*}
T=
\left\{\left(\begin{array}{ccccccc}
t_1^{-1}&&&&&&\\
&\ddots &&&&& \\
&& t_{g}^{-1} &&&&\\
&&& 1 &&&\\
&&&& t_g &&\\
&&&&&\ddots & \\
& &&&&& t_{1}
\end{array}\right):t_1,\ldots,t_g\in{\mathbb R}_{>0}\right\}.
\end{equation*}
We may also make the following change of variables. For $1\leq
i\leq g-1$, set $s_i$ to be
$$
s_i=t_i/t_{i+1},
$$
and set $s_g=t_g$.
It follows that for $1\leq i\leq g$,
we have
\begin{equation*}
t_i=\prod_{j=i}^g s_j.
\end{equation*}
We denote an element of $T$ with coordinates $t_i$
(resp.\ $s_i$) by $(t)$ (resp.\ $(s)$).
The Haar measure on $G({\mathbb R})$ is given by
$$
dg=dn\,H(s)d^\times s\,dk,
$$ where $dn$ is Haar measure on the unipotent group $N({\mathbb R})$,
$dk$ is Haar measure on the compact group~$K$, $d^\times s$ is
given by
$$
d^\times s:=\prod_{i=1}^g\frac{ds_i}{s_i},
$$
and
\begin{equation}\label{eqhaarodd}
H(s)=\prod_{k=1}^g s_k^{k^2-2kg};
\end{equation}
see \cite[(10.7)]{BG2}.
We denote the coordinates on $W$ by $b_{ij}$, for $1\leq i\leq j\leq
n$. These coordinates are eigenvectors for the action of $T$ on $W^*$,
the dual of $W$. Denote the $T$-weight of a coordinate $\alpha$ on
$W$, or more generally a product $\alpha$ of powers of such
coordinates, by $w(\alpha)$. An elementary computation shows that
\begin{equation}\label{wbij}
w(b_{ij})=\left\{
\begin{array}{rcl}
t_i^{-1}t_j^{-1} &\mbox{ if }& i,j\leq g\\
t_i^{-1} &\mbox{ if }& i\leq g,\;j=g+1\\
t_i^{-1}t_{n-j+1} &\mbox{ if }& i\leq g,\; j>g+1\\
1 &\mbox{ if }& i=j=g+1\\
t_{n-j+1} &\mbox{ if }& i=g+1,\;j>g+1\\
t_{n-i+1}t_{n-j+1} &\mbox{ if }& i,j>g+1.
\end{array}
\right.
\end{equation}
We may also compute the weight of the invariant $Q$. The polynomial
$Q$ is homogeneous of degree $g(g+1)/2$ in the coordinates of $W_0$.
We view the torus $T$ as sitting inside $G_0$. Then by
\eqref{eq:weightG_0}, the polynomial $Q$ has a well-defined weight, given by
\begin{equation}\label{eqQweight}
w(Q)=\prod_{k=1}^gt_k^{-1}=\prod_{k=1}^gs_k^{-k}.
\end{equation}
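For $n=3$ (so $g=1$), a short computation with the minor construction of Section~\ref{sQ} gives $Q(B)=b_{12}$ for $B\in W_0$, so the weight formula \eqref{eqQweight} reads $w(Q)=t_1^{-1}$; this can be checked directly (a sketch):

```python
from sympy import Matrix, simplify, symbols, zeros

t1, b12, b13, b22, b23, b33 = symbols('t1 b12 b13 b22 b23 b33')

A0 = Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
gamma = Matrix([[1/t1, 0, 0], [0, 1, 0], [0, 0, t1]])    # the torus element (t)
assert simplify(gamma*A0*gamma.T - A0) == zeros(3, 3)    # gamma lies in SO(A_0)

# A general element of W_0 (top-left 1 x 1 block zero):
B = Matrix([[0,   b12, b13],
            [b12, b22, b23],
            [b13, b23, b33]])
Bp = gamma * B * gamma.T

# For g = 1 the minor construction gives Q(B) = b12, so w(Q) = t1^{-1}:
assert simplify(Bp[0, 1] - b12/t1) == 0
```

This is also consistent with \eqref{eq:weightG_0}: here $\gamma_1=(t_1^{-1})$, so $Q(\gamma\cdot B)=\det(\gamma_1)Q(B)=t_1^{-1}Q(B)$.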
\subsection{Proof of Theorem~\ref{thm:mainestimate}(b)
for odd $n$}\label{sgomodd}
Let ${\mathcal F}$ be a fundamental set for the action of $G({\mathbb Z})$ on $G({\mathbb R})$
that is contained in a {\it Siegel set}, i.e., contained in $N'T'K$,
where $N'$ consists of elements in $N({\mathbb R})$ whose coefficients are
absolutely bounded and $T'\subset T$ consists of elements $(s)\in
T$ with $s_i\geq c$ for some positive constant $c$. Let~$R$ be a
bounded fundamental set for the action of $G({\mathbb R})$ on the set of
elements in $W({\mathbb R})$ having nonzero discriminant and height less than
$1$; such a set $R$ was constructed in \cite[\S9.1]{BG2}. Then for
every $h\in G({\mathbb R})$, we have
\begin{equation}\label{eqoddfundv}
N({\mathcal L};M;X)\ll \#\{B\in (({\mathcal F} h)\cdot (XR))\cap{\mathcal L}:|Q|(B)>M\},
\end{equation}
since ${\mathcal F} h$ remains a
fundamental domain for the action of $G({\mathbb Z})$ on $G({\mathbb R})$, and so
$({\mathcal F} h)\cdot (XR)$ (when viewed as a multiset)
is the union of a bounded number (namely, between $1$ and $2^{2g}$) of fundamental domains for the action of
$G({\mathbb Z})$ on the elements in $W({\mathbb R})$ having height bounded by $X$.
Let $G_0$ be a compact left $K$-invariant set in $G({\mathbb R})$ which is the
closure of a nonempty open set.
Averaging \eqref{eqoddfundv} over $h\in G_0$ and exchanging the
order of integration as in \cite[\S10.1]{BG2}, we obtain
\begin{equation}
N({\mathcal L};M;X)\ll \int_{\gamma\in{\mathcal F}} \#\{B\in((\gamma G_0)\cdot (XR))\cap{\mathcal L}:|Q|(B)>M\}
d\gamma,
\end{equation}
where the implied constant depends only on $G_0$ and $R$.
Let $W_{00}\subset W$ denote the space of symmetric matrices $B$ whose
$(i,j)$-entries are 0 for $i+j<n$. It was shown in \cite[Propositions~10.5 and 10.7]{BG2} that most lattice points in the fundamental domains $({\mathcal F} h)\cdot (XR)$ that are distinguished lie in $W_0$ and in fact lie in $W_{00}$. The reason for this is that in the main bodies of these fundamental domains, we expect a negligible number of distinguished elements (e.g., because each distinguished element will be distinguished $p$-adically as well, which happens with $p$-adic density bounded above by some constant $c_n$ strictly less than $1$, and $\prod_p c_n$=0). Meanwhile, in the cuspidal regions of these fundamental domains, the values of the $s_i$ become very large, yielding many integral points lying in $W_0$ and in fact in $W_{00}$ (the top left entries of $B$ must vanish for integral points in $(\gamma G_0)\cdot (XR)$ when the $s_i$ are large, as these top left entries have negative powers of $s_i$ as weights).
These arguments from \cite{BG2} can be used in the identical manner to show that the number of points in our fundamental domains lying in
${\mathcal L}$ but not in $\frac12W_{00}({\mathbb Z})$ is negligible.
\begin{proposition}\label{propodd}
We have
\begin{equation*}
\displaystyle\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap ({\mathcal L}\setminus {\textstyle\frac12} W_{00}({\mathbb Z}))\}d\gamma=O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}).
\end{equation*}
\end{proposition}
\begin{proof}
First, we consider elements $B\in {\mathcal L}$ such that $b_{11}\neq 0$. Since
$B$ is distinguished in $W({\mathbb Q})$, it is also distinguished as an element of $W({\mathbb Q}_p)$ for every prime $p$, a condition that holds in $W({\mathbb Z})$ with $p$-adic density at most $1-\frac{n}{2n+1}+O(\frac{1}{p})$. Since $\prod_p\bigl(1-\frac{n}{2n+1}+O(\frac{1}{p})\bigr)=0$, we thus obtain as in \cite[Proof of Proposition~10.7]{BG2} that
\begin{equation}\label{selapp}
\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap {\mathcal L}:b_{11}\neq 0\}d\gamma=o(X^{n(n+1)/2}).
\end{equation}
An application of the Selberg sieve exactly as in \cite{ShTs} can be
used to improve the right hand side of~(\ref{selapp}) to
$O_\epsilon(X^{n(n+1)/2-1/5+\epsilon})$.
Meanwhile, as already mentioned above, \cite[Proof of Proposition~10.5]{BG2} immediately gives
\begin{equation*}
\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap ({\textstyle\frac12} W({\mathbb Z})\setminus {\textstyle\frac12} W_{00}({\mathbb Z})):b_{11}=0\}d\gamma=O_\epsilon(X^{n(n+1)/2-1+\epsilon}).
\end{equation*}
Since ${\mathcal L}$ is contained in $\frac12W({\mathbb Z})$, this completes the proof of the proposition.
\end{proof}
Proposition~\ref{propodd} shows that the number of points in ${\mathcal L}$ in our fundamental domains outside $W_{00}$ is negligible (even without any condition on the $Q$-invariant!). It remains to estimate the number of points in our fundamental domains that lie in ${\mathcal L}\cap \frac12W_{00}({\mathbb Z})$ and which have $Q$-invariant larger than $M$. By \cite[Proof of Proposition 10.5]{BG2}, the total number of such points without any condition on the $Q$-invariant is $O(X^{n(n+1)/2})$. Thus, to obtain a saving, we must use the condition that the $Q$-invariant is larger than $M$.
We accomplish this via two observations. First, as already noted above, if $\gamma\in{\mathcal F}$ has Iwasawa coordinates
$(n,(s_i)_i,k)$, then the integral points in $((\gamma
G_0)\cdot (XR))\cap \frac12W_{00}({\mathbb Z})$ with irreducible invariant polynomial occur predominantly when the coordinates
$s_i$ are large. On the other hand, since the weight of the $Q$-invariant is a
product of negative powers of $s_i$, the $Q$-invariants of such points in
$((\gamma G_0)\cdot (XR))\cap \frac12W_{00}({\mathbb Z})$ become large when the coordinates $s_i$ are
small. The tension between these two requirements on integral points in $((\gamma G_0)\cdot (XR))\cap {\mathcal L}$
will yield the desired saving.
\begin{proposition}\label{proplargeQbound}
We have
\begin{equation*}
\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap {\mathcal L}\cap
{\textstyle\frac12} W_{00}({\mathbb Z}):|Q(B)|>M\}d\gamma=O(\frac{1}{M}X^{n(n+1)/2}\log X).
\end{equation*}
\end{proposition}
\begin{proof}
Since $s_i\geq c$ for every $i$, there exists a compact subset $N''$ of
$N({\mathbb R})$ containing $(t)^{-1}N'\,(t)$ for all $t\in T'$. Let $E$ be the
pre-compact set $N''KG_0R$. Then we have
\begin{eqnarray}
&&\int_{\mathcal F} \#\{B\in((\gamma G_0)\cdot(XR))\cap {\mathcal L}\cap {\textstyle\frac12} W_{00}({\mathbb Z}):|Q(B)|>M\}d\gamma \nonumber \\
&\ll&\int_{s_i\gg 1} \#\{B\in((s)\cdot (XE))\cap {\mathcal L}\cap {\textstyle\frac12} W_{00}({\mathbb Z}):|Q(B)|>M\}H(s)d^\times s,
\label{inttoest}
\end{eqnarray}
where $H(s)$ is defined in \eqref{eqhaarodd}.
To estimate the integral in (\ref{inttoest}), we note first that the $(i,j)$-entry of any element of $(s)\cdot (XE)$ is bounded by $Xw(b_{ij}).$ Now, by \cite[Lemma~10.3]{BG2}, if an element in $\frac12 W_{00}({\mathbb Z})$ has
$(i,j)$-coordinate $0$ for some $i+j=n$, then the element has
discriminant $0$ and hence is not in ${\mathcal L}$. Since the weight of $b_{i,n-i}$ is $s_i^{-1}$, to count points in ${\mathcal L}$ it suffices to integrate only in the region where $s_i\ll X$ for all $i$, so that it is possible for an element of ${\mathcal L}\cap (s)\cdot(XE)$ to have nonzero $(i,n-i)$-entry.
Furthermore,
it suffices to integrate only in the region where $X^{g(g+1)/2}w(Q) \gg M$, since the $Q$-invariant has weight $w(Q)$ and is homogeneous of degree $g(g+1)/2$.
Let $S$ denote the set of coordinates of $W_{00}$, i.e.,
$S=\{b_{ij}:i+j\geq n\}$. For $(s)$ in the range $1\ll s_i\ll X$, we
have $Xw(\alpha)\gg 1$ for all $\alpha\in S$; thus the number of lattice points in $(s)\cdot (XE)$ for $(s)$ in this range is $\ll \prod_{\alpha\in S}(Xw(\alpha))$. Therefore, we have
\begin{eqnarray*}
&&\displaystyle\int_{s_i\gg 1} \#\{B\in((s)\cdot (XE))\cap {\mathcal L}\cap {\textstyle\frac12} W_{00}({\mathbb Z}):|Q(B)|>M\}H(s)d^\times s \\
&\ll&\displaystyle\int_{1\ll s_i\ll X,\,\,X^{g(g+1)/2}w(Q) \gg M} \prod_{\alpha\in S}\bigl(Xw(\alpha)\bigr)H(s)d^\times s \\
&\ll&\displaystyle\int_{1\ll s_i\ll X,\,\,X^{g(g+1)/2}w(Q) \gg M} X^{n(n+1)/2-g^2}\prod_{k=1}^gs_k^{2k-1}d^\times s \\
&\ll&\displaystyle\frac{1}{M}\int_{s_i=1}^X X^{n(n+1)/2-g^2+g(g+1)/2}w(Q)\prod_{k=1}^gs_k^{2k-1}d^\times s \\
&\ll&\displaystyle\frac{1}{M}\int_{s_i=1}^X X^{n(n+1)/2-g(g-1)/2}\prod_{k=1}^gs_k^{k-1}d^\times s \\
&\ll&\displaystyle\frac{1}{M}X^{n(n+1)/2}\log(X),
\end{eqnarray*}
where the second inequality follows from the definition \eqref{eqhaarodd} of $H(s)$ and the computation (\ref{wbij}) of the weights of the coordinates $b_{ij}$, the third inequality follows from the fact that $X^{g(g+1)/2}w(Q) \gg M$, the fourth inequality follows from the computation of the weight
of $Q$ in \eqref{eqQweight}, and the $\log X$ factor comes from the integral
over $s_1$.
\end{proof}
The estimate in Theorem \ref{thm:mainestimate}(b) for odd $n$ now follows
from Theorem \ref{keymaporbit} and Propositions~\ref{propodd} and~\ref{proplargeQbound}, in conjunction with the bound on the number of
reducible polynomials proved in Proposition~\ref{propredboundall}.
\section{A uniformity estimate for even degree monic polynomials}\label{sec:moniceven}
In this section, which is structured similarly to Section~\ref{sec:monicodd}, we
prove the estimate of Theorem~\ref{thm:mainestimate}(b) when $n=2g+2$
is even, for any $g\geq 1$.
\subsection{Invariant theory for the fundamental representation: ${\rm SO}_n$ on the space $W$ of symmetric $n\times n$ matrices}
We recall some of the arithmetic invariant theory of the
representation $W$ of $n\times n$ symmetric matrices of the
(projective) split orthogonal group $G={\rm PSO}_n.$ See \cite{SW} for more
details.
Let $A_0$ denote the $n\times n$ symmetric matrix with $1$'s on the
anti-diagonal and $0$'s elsewhere. The group ${\rm SO}(A_0)$ acts on $W$
via the action
\begin{equation*}
\gamma\cdot B=\gamma B\gamma^t.
\end{equation*}
The central $\mu_2$ acts trivially and so the action descends to an
action of $G={\rm SO}(A_0)/\mu_2$. The ring of polynomial invariants over ${\mathbb C}$ is
freely generated by the coefficients of the {\it invariant
polynomial} $$f_B(x):=(-1)^{g+1}\det(A_0x-B).$$ We define the {\it
discriminant} $\Delta$ and {\it height} $H$ on $W$ as the discriminant
and height of the invariant polynomial.
Let $k$ be a field of characteristic not $2$.
For any monic polynomial $f(x)\in k[x]$ of degree~$n$ such
that $\Delta(f)\neq0$, let $C_f$ denote the smooth hyperelliptic curve
$y^2=f(x)$ of genus $g$ and let $J_f$ denote its Jacobian. Then $C_f$
has two rational non-Weierstrass points at infinity that are conjugate
by the hyperelliptic involution.
The stabilizer of an element $B\in W(k)$ with invariant polynomial $f(x)$
is isomorphic to $J_f[2](k)$ by~\cite[Proposition~2.33]{W}, and hence has cardinality at most $\#J_f[2](\bar k)=2^{2g}$, where $\bar k$ denotes a separable closure of $k$.
We say that the element (or the $G(k)$-orbit of) $B\in W(k)$ is
{\it distinguished} if there exists a flag $Y'\subset Y$ defined over
$k$ where $Y$ is $(g+1)$-dimensional isotropic with respect to $A_0$
and $Y'$ is $g$-dimensional isotropic with respect to $B$. If $B$ is
distinguished, then the set of these flags is in bijection with
$J_f[2](k)$ by \cite[Proposition~2.32]{W}, and so it too has cardinality at most $2^{2g}$.
In fact, it is known (see~\cite[Proposition~22]{BGWhyper}) that the elements of $J_f[2](k)$ are in natural bijection with the even degree factorizations of $f$ defined over $k$. (Note that the number of such factorizations of $f$ over $\bar k$ is indeed $2^{2g}$.) In particular, if $f$ is irreducible over $k$ and does not factor as $g(x)\bar g(x)$ over some quadratic extension of $k$, then the group $J_f[2](k)$ is trivial.
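As a quick sanity check on the parenthetical count (an illustrative enumeration, not part of the argument): over $\bar k$, an even-degree factorization of a separable $f$ of degree $n=2g+2$ corresponds to an even-size subset of its roots, taken up to complementation.

```python
from itertools import combinations

def even_factorizations(n):
    """Count unordered factorizations f = f1 * f2 over an algebraically
    closed field with both factors of even degree: even-size subsets of
    the n roots, identified with their complements (n is even)."""
    roots = frozenset(range(n))
    pairs = set()
    for r in range(0, n + 1, 2):
        for subset in combinations(range(n), r):
            s = frozenset(subset)
            pairs.add(frozenset({s, roots - s}))
    return len(pairs)

# For n = 2g + 2 the count is 2^(2g), matching #J_f[2](kbar) = 2^(2g).
for g in (1, 2, 3):
    assert even_factorizations(2 * g + 2) == 2 ** (2 * g)
```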
Let $W_0$ be the subspace of $W$ consisting of matrices whose top left
$g\times (g+1)$ block is zero. Then elements $B$ in $W_0(k)$ with
nonzero discriminant are all distinguished since the $(g+1)$-dimensional
subspace $Y_{g+1}$ spanned by the first $g+1$ basis vectors is
isotropic with respect to $A_0$ and the $g$-dimensional subspace
$Y_g\subset Y_{g+1}$ spanned by the first $g$ basis vectors is
isotropic with respect to $B$.
Let $G_0$ be the parabolic subgroup of
$G$ consisting of elements $\gamma$ such that $\gamma^t$ preserves the
flag $Y_g\subset Y_{g+1}$. Then $G_0$ acts on $W_0$.
An element $\gamma\in G_0$ has the block matrix form
\begin{equation}\label{eq:G_0even}
\gamma=\left(\begin{array}{ccc}\gamma_1 & 0 & 0\\ \delta_1 & \alpha & 0\\ \delta_2 & \delta_3 & \gamma_2
\end{array}\right)\in\left(\begin{array}{ccc}M_{g\times g} & 0 & 0\\ M_{1\times g} & M_{1\times 1} & M_{1\times (g+1)}\\ M_{(g+1)\times g} & M_{(g+1)\times 1} & M_{(g+1)\times (g+1)}
\end{array}\right),
\end{equation}
and so $\gamma\in G_0$ acts on the top right $g\times (g+1)$ block
of an element $B\in W_0$ by
$$\gamma.B^{\textrm{top}} = \gamma_1B^{\textrm{top}}\gamma_2^t,$$ where we use the superscript
``top'' to denote the top right $g\times (g+1)$ block of any given element
of $W_0$. We may thus view $(A_0^{\textrm{top}},B^{\textrm{top}})$ as an element of the
representation $V_g=2\times g\times (g+1)$ considered in Section \ref{sQ}.
In particular, we may define the $Q$-{\it invariant} of $B\in W_0$ as the $Q$-invariant
of $(A_0^{\textrm{top}},B^{\textrm{top}})$:
\begin{equation}\label{eqQBeven}
Q(B):=Q(A_0^{\textrm{top}},B^{\textrm{top}}).
\end{equation}
Then the $Q$-invariant is a relative invariant for the action of $G_0$ on $W_0$, i.e.,
for any $\gamma\in G_0$ in the form \eqref{eq:G_0even}, we have
\begin{equation}\label{eq:weightG_0even}
Q(\gamma.B) = \det(\gamma_1)^{g+1}\det(\gamma_2)^gQ(B) =
\det(\gamma_1)\alpha^{-g}Q(B).
\end{equation}
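For completeness, we note that the second equality above is forced by the determinant condition: a lift of $\gamma\in G_0$ to ${\rm SO}(A_0)$ has determinant $1$, and the block form \eqref{eq:G_0even} is lower triangular, so
\begin{equation*}
\det(\gamma_1)\,\alpha\,\det(\gamma_2)=1,
\qquad\text{whence}\qquad
\det(\gamma_1)^{g+1}\det(\gamma_2)^{g}
=\det(\gamma_1)^{g+1}\bigl(\det(\gamma_1)\,\alpha\bigr)^{-g}
=\det(\gamma_1)\,\alpha^{-g}.
\end{equation*}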
In fact, we may extend the definition of the
$Q$-invariant to an even larger subset of $W({\mathbb Q})$ than $W_0({\mathbb Q})$.
We have the following proposition.
\begin{proposition}\label{prop:extendQeven}
Let $B\in W_0({\mathbb Q})$ be an element whose invariant polynomial $f(x)$ is
irreducible over ${\mathbb Q}$ and, when $n\geq 4$, does not factor as $g(x)\bar{g}(x)$ over some quadratic extension of ${\mathbb Q}$. Then for every $B'\in W_0({\mathbb Q})$ such that $B'$ is $G({\mathbb Z})$-equivalent to $B$, we have
$Q(B')=\pm Q(B)$.
\end{proposition}
\begin{proof}
The assumption on the factorization property of $f(x)$ implies that $J_f[2]({\mathbb Q})$ is trivial.
The proof is now identical to that of Proposition~\ref{prop:extendQ}.
\end{proof}
We may thus define the $|Q|$-{\it invariant} for any element $B\in W({\mathbb Q})$ that is $G({\mathbb Z})$-equivalent to some $B'\in W_0({\mathbb Q})$ and whose invariant polynomial is irreducible over ${\mathbb Q}$ and does not factor as $g(x)\bar{g}(x)$ over any quadratic extension of ${\mathbb Q}$; indeed, we set $|Q|(B):=|Q(B')|$. By Proposition~\ref{prop:extendQeven}, this definition of $|Q|(B)$ is independent of the choice of $B'$. We note again that all such elements $B\in W({\mathbb Q})$ are distinguished.
\subsection{Embedding ${\mathcal W}_m^{\rm {(2)}}$ into $\frac14W({\mathbb Z})$}\label{sembedeven}
Let $m$ be a positive squarefree integer and let $f$ be a monic
integer polynomial whose discriminant is weakly divisible by
$m^2$. Then as proved in \S\ref{sembedodd}, there exists an integer
$\ell$ such that $f(x+\ell)$ has the form
$$f(x+\ell) = x^n + c_1x^{n-1} + \cdots + c_{n-2}x^2 + mc_{n-1}x + m^2c_n.$$
Consider the following matrix:
\begin{equation}\label{mat2}
B_m(c_1,\ldots,c_n) =
\left(\!\!\!\!\!\begin{array}{cccccccccc}&&&&&&&&m&0\\[.065in]&&&&&&&\iddots&\:\;\iddots& \\[.025in]&&&&&&1&\;\;\iddots&&\\[.185in] &&&&&\,1&0&&&\\[.185in] &&&&\:\;1\;&-c_1/2&&&& \\[.175in]&&&1&\!\!-c_1/2\,&\!\!c_1^2/4\!-\!c_2\!\!&-c_3/2&&&\\[.175in]&&1&\;\;\;0\;\;\;&&-c_3/2&-c_4&\!\!\!-c_5/2\!\!\!&&\\[.085in]&\iddots&\;\;\:\iddots\:\;\;&&&&-c_5/2&-c_6&\ddots&\\[.0125in] \;\;\;m\;\;\:&\;\,\,\iddots&&&&&&\ddots&\ddots&\!\!-c_{n-1}/2\!\!
\\[.125in] 0&&&&&&&&\!\!\!\!-c_{n-1}/2\!\!\!\!&-c_n \end{array}\!\right).
\end{equation}
It follows from a direct computation that
$$f_{B_m(c_1,\ldots,c_n)}(x) = x^n + c_1x^{n-1} + \cdots + c_{n-2}x^2 +
mc_{n-1}x + m^2c_n.$$ We set $\sigma_m(f) := B_m(c_1,\ldots,c_n) +
\ell A_0\in \frac14 W({\mathbb Z})$. Then evidently $f_{\sigma_m(f)}=f.$ A direct computation again shows that
$|Q(B_m(c_1,\ldots,c_n))|=m$. Since the $Q$-invariant on $2\otimes
g\otimes (g+1)$ is ${\rm SL}_2$-invariant, we conclude
that $$|Q(\sigma_m(f))|=m.$$
Finally, we note that for all odd primes $p\mid m$, we have that $p^2$ weakly divides $\Delta(f)$, and $p^2$ weakly divides $\Delta(\sigma_m(f))$ as an element of $\frac14W({\mathbb Z})$, {\it but $p^2$ strongly divides $\Delta(\sigma_m(f))$ as an element of $\frac14W_0({\mathbb Z})$}.
We have proven the following theorem.
\begin{theorem}\label{th:mapeven}
Let $m$ be a positive squarefree integer. There exists a map
$\sigma_m:{\mathcal W}_m^{\rm {(2)}}\to \frac14W({\mathbb Z})$ such that $f_{\sigma_m(f)}=f$ for every $f\in
{\mathcal W}_m^{\rm {(2)}}$ and, furthermore, $p^2$ strongly divides $\Delta(\sigma_m(f))$ for all $p\mid m$. In~addition,
elements in the image of $\sigma_m$ have $|Q|$-invariant $m$.
\end{theorem}
Let ${\mathcal L}$ be the set of elements $v\in \frac14W({\mathbb Z})$ that are $G({\mathbb Z})$-equivalent to some elements of $\frac14 W_0({\mathbb Z})$ and such that
the invariant polynomial of $v$ is irreducible
over ${\mathbb Q}$ and does not factor as $g(x)\bar{g}(x)$ over some quadratic extension of ${\mathbb Q}$.
Then by the remark following Proposition~\ref{prop:extendQeven}, we may view $|Q|$ as a function also on ${\mathcal L}$.
Let ${\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$ denote the set of polynomials in ${\mathcal W}_m^{\rm {(2)}}$
that are irreducible
over ${\mathbb Q}$ and do not factor as $g(x)\bar{g}(x)$ over any quadratic extension of ${\mathbb Q}$. Then we have the following immediate consequence of Theorem~\ref{th:mapeven}:
\begin{theorem}\label{keymaporbiteven}
Let $m$ be a positive squarefree integer. There exists a map
$$\bar{\sigma}_m:{\mathcal W}_m^{{\rm {(2)}},{\rm irr}}\to G({\mathbb Z})\backslash{\mathcal L}$$ such that
$f_{\bar{\sigma}_m(f)}=f$ for every $f\in {\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$. Furthermore, every element
in every orbit in the image of $\bar{\sigma}_m$ has $|Q|$-invariant $m$.
\end{theorem}
It is known that the number of monic integer polynomials having height less than $X$ that are reducible or factor as $g(x)\bar{g}(x)$ over some quadratic extension of ${\mathbb Q}$ is of a strictly smaller order of magnitude than the total number of monic integer polynomials of height less than $X$ (see, e.g., Proposition~\ref{propredboundall}). Thus to prove Theorem~\ref{thm:mainestimate}(b), it suffices to count the number of elements in
${\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$ having height less than $X$ over all $m>M$, which, by Theorem~\ref{keymaporbiteven}, we may do by counting
$G({\mathbb Z})$-orbits on ${\mathcal L}\subset \frac14W({\mathbb Z})$ having height less than $X$ and $|Q|$-invariant greater than $M$.
More precisely, let $N({\mathcal L};M;X)$ denote the number of $G({\mathbb Z})$-equivalence classes of elements in
${\mathcal L}$ whose $|Q|$-invariant is greater than $M$ and whose height is
less than $X$. We obtain a bound for $N({\mathcal L};M;X)$ using the averaging
method utilized in~\cite{SW}.
The rest of this section is structured
exactly as the last two subsections of Section~\ref{sec:monicodd}: we describe coordinate
systems for $W({\mathbb R})$ and $G({\mathbb R})$ in \S\ref{scoeffeven}, and then bound the quantity
$N({\mathcal L};M;X)$ in \S\ref{sgomeven}. This will complete the proof
of Theorem~\ref{thm:mainestimate}(b) in the case of even integers~$n$.
\subsection{Coordinate systems on $G({\mathbb R})$}\label{scoeffeven}
In this subsection we describe a
coordinate system on the group $G({\mathbb R})$.
Let us write the Iwasawa decomposition of $G({\mathbb R})$ as
$$
G({\mathbb R})=N({\mathbb R})TK,
$$ where $N$ is a unipotent group, $K$ is compact, and $T$ is a split torus of $G$:
\begin{equation*}
T=
\left\{\left(\begin{array}{ccccccc}
t_1^{-1}&&&&&&\\
&\ddots &&&&& \\
&& t_{g+1}^{-1} &&&&\\
&&&& \!\!\!\!t_{g+1} &&\\
&&&&&\!\!\!\!\ddots & \\
& &&&&& \!\!t_{1}
\end{array}\right)\right\}.
\end{equation*}
We may also make the following change of variables.
For $1\leq
i\leq g$, define $s_i$ to be
$$
s_i=t_i/t_{i+1},
$$
and let $s_{g+1}=t_gt_{g+1}$.
We denote an element of $T$ with coordinates $t_i$
(resp.\ $s_i$) by $(t)$ (resp.\ $(s)$).
The Haar measure on $G({\mathbb R})$ is given by
$$
dg=dn\,H(s)d^\times s\,dk,
$$ where $dn$ is Haar measure on the unipotent group $N({\mathbb R})$,
$dk$ is Haar measure on the compact group~$K$, $d^\times s$ is
given by
$$
d^\times s:=\prod_{i=1}^{g+1}\frac{ds_i}{s_i},
$$
and
$H(s)$ is given by
\begin{equation}\label{eqhaareven}
H(s)=\prod_{k=1}^{g-1} s_k^{k^2-2kg-k}\cdot (s_gs_{g+1})^{-g(g+1)/2};
\end{equation}
see~\cite[(26)]{SW}.
As before, we denote the coordinates of $W$ by $b_{ij}$, for $1\leq
i\leq j\leq n$, and we denote
the $T$-weight of a coordinate $\alpha$ on $W$, or a
product $\alpha$ of powers of such coordinates, by $w(\alpha)$.
We compute the weights of the coefficients $b_{ij}$ to be
\begin{equation}\label{wbij2}
w(b_{ij})=\left\{
\begin{array}{rcl}
t_i^{-1}t_j^{-1} &\mbox{ if }& i,j\leq g+1\\
t_i^{-1}t_{n-j+1} &\mbox{ if }& i\leq g+1,\; j>g+1\\
t_{n-i+1}t_{n-j+1} &\mbox{ if }& i,j>g+1.
\end{array}
\right.
\end{equation}
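The case formula \eqref{wbij2} can be checked mechanically: the torus element of $T$ has diagonal $(t_1^{-1},\ldots,t_{g+1}^{-1},t_{g+1},\ldots,t_1)$, and the action $B\mapsto \gamma B\gamma^t$ scales $b_{ij}$ by the product of the $i$-th and $j$-th diagonal entries. A short script tracking exponent vectors in $(t_1,\ldots,t_{g+1})$ (an independent verification, not part of the argument):

```python
def check_weights(g):
    """Check the three-case formula for w(b_ij): the torus element
    diag(t_1^{-1}, ..., t_{g+1}^{-1}, t_{g+1}, ..., t_1) scales b_ij by
    the product of its i-th and j-th diagonal entries; exponent vectors
    are taken in (t_1, ..., t_{g+1})."""
    n = 2 * g + 2

    def diag(i):  # exponent vector of the i-th diagonal entry (1-indexed)
        e = [0] * (g + 1)
        if i <= g + 1:
            e[i - 1] = -1          # t_i^{-1}
        else:
            e[n - i] = 1           # t_{n-i+1}
        return e

    def claimed(i, j):  # the three cases of the displayed formula (j >= i)
        e = [0] * (g + 1)
        if j <= g + 1:
            e[i - 1] -= 1; e[j - 1] -= 1       # t_i^{-1} t_j^{-1}
        elif i <= g + 1:
            e[i - 1] -= 1; e[n - j] += 1       # t_i^{-1} t_{n-j+1}
        else:
            e[n - i] += 1; e[n - j] += 1       # t_{n-i+1} t_{n-j+1}
        return e

    for i in range(1, n + 1):
        for j in range(i, n + 1):
            assert [x + y for x, y in zip(diag(i), diag(j))] == claimed(i, j)
    return True

assert all(check_weights(g) for g in (1, 2, 3))
```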
We end by computing the weight of $Q$. The polynomial $Q$ is
homogeneous of degree $g(g+1)/2$ in the coefficients of $W_0$.
We view the torus $T$ as sitting inside $G_0$. Then by \eqref{eq:weightG_0even}, the polynomial $Q$ has a well-defined weight and this weight is given by
\begin{equation}\label{eqQweighteven}
w(Q)=(t_1\cdots t_g)^{-1}t_{g+1}^g=\prod_{k=1}^g s_k^{-k}.
\end{equation}
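In the $s$-coordinates, the weight of $Q$ is a telescoping product: substituting $s_k=t_k/t_{k+1}$ for $1\leq k\leq g$ gives $\prod_{k=1}^g s_k^{-k}=t_1^{-1}\cdots t_g^{-1}t_{g+1}^{g}=w(Q)$, a product of negative powers of the $s_k$, as used in the proof below. A quick exponent check (illustrative only):

```python
def wQ_exponent_check(g):
    """Verify prod_{k=1}^g s_k^{-k} = (t_1 ... t_g)^{-1} t_{g+1}^g by
    telescoping exponent vectors in (t_1, ..., t_{g+1}), where
    s_k = t_k / t_{k+1} for 1 <= k <= g."""
    total = [0] * (g + 1)
    for k in range(1, g + 1):
        total[k - 1] -= k  # s_k^{-k} contributes t_k^{-k} ...
        total[k] += k      # ... and t_{k+1}^{+k}
    return total == [-1] * g + [g]

assert all(wQ_exponent_check(g) for g in (1, 2, 3, 4, 5))
```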
\subsection{Proof of Theorem~\ref{thm:mainestimate} for even $n$}\label{sgomeven}
Let ${\mathcal F}$ be a fundamental set for the left action of $G({\mathbb Z})$
on $G({\mathbb R})$ that is contained in a Siegel set, i.e., contained in $N'T'K$, where $N'$ consists of
elements in $N({\mathbb R})$ whose coefficients are absolutely bounded and $T'\subset T$ consists of elements $(s)\in T$
with $s_i\geq c$ for some positive constant $c$. Let $R$ be a
bounded fundamental set for the action of $G({\mathbb R})$ on elements
of $W({\mathbb R})$ with nonzero discriminant and height less than $1$. Such a set $R\subset W({\mathbb R})$ was
constructed in \cite[\S4.2]{SW}. As in \S3.4, we see that for every
$h\in G({\mathbb R})$, we have
\begin{equation}\label{eqoddfundveven}
N({\mathcal L};M;X)\ll \#\{B\in (({\mathcal F} h)\cdot (XR))\cap{\mathcal L}:|Q(B)|>M\}.
\end{equation}
Let $G_0$ be a compact left $K$-invariant set in $G({\mathbb R})$ which is the
closure of a nonempty open set.
Averaging \eqref{eqoddfundveven} over $h\in G_0$ as before, and exchanging
the order of integration, we obtain
\begin{equation}\label{eqevenfundavg}
N({\mathcal L};M;X)\ll \int_{\gamma\in{\mathcal F}} \#\{B\in((\gamma G_0)\cdot (XR))\cap{\mathcal L}:|Q(B)|>M\}
d\gamma,
\end{equation}
where the implied constant depends only on $G_0$ and $R$.
Let $W_{00}\subset W$ denote the space of symmetric matrices $B$
such that $b_{ij}=0$ for $i+j < n$. It was shown in \cite[Propositions~4.5 and 4.7]{SW} (analogous to \cite[Propositions~10.5 and 10.7]{BG2} used in the odd case)
that most lattice points in the fundamental domains $({\mathcal F} h)\cdot (XR)$ that are distinguished lie in $W_0$ and
in fact lie in $W_{00}$. These arguments from \cite{SW} can be used in the identical manner to show that the number of points in our fundamental domains lying in ${\mathcal L}$ but not in $\frac14W_{00}({\mathbb Z})$ is negligible.
\begin{proposition}\label{propeven}
We have
\begin{equation*}
\displaystyle\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap ({\mathcal L}\setminus {\textstyle\frac14} W_{00}({\mathbb Z}))\}d\gamma=O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}).
\end{equation*}
\end{proposition}
\begin{proof}
We proceed exactly as in the proof of Proposition \ref{propodd}.
First we consider elements $B\in {\mathcal L}$ such that $b_{11}\neq 0$.
Since
$B$ is distinguished in $W({\mathbb Q})$, it is also distinguished as an element of $W({\mathbb Q}_p)$, which occurs in $W({\mathbb Z})$ with $p$-adic density bounded by some constant $c_n$ strictly less than~$1$. Since $\prod_p c_n=0$, we obtain as in \cite[Proof of Proposition~4.7]{SW} that
\begin{equation}\label{selapp2}
\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap {\mathcal L}:b_{11}\neq 0\}d\gamma=o(X^{n(n+1)/2}).
\end{equation}
An application of the Selberg sieve exactly as in \cite{ShTs} again improves this estimate to
\linebreak $O_\epsilon(X^{n(n+1)/2-1/5+\epsilon})$.
Meanwhile, \cite[Proof of Proposition 4.5]{SW} immediately gives
\begin{equation*}
\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap ({\textstyle\frac14} W({\mathbb Z})\setminus {\textstyle\frac14} W_{00}({\mathbb Z})):b_{11}=0\}d\gamma=O_\epsilon(X^{n(n+1)/2-1+\epsilon}).
\end{equation*}
Since ${\mathcal L}$ is contained in $\frac14W({\mathbb Z})$, this completes the proof.
\end{proof}
Next, we estimate the contribution to the right hand side of
\eqref{eqevenfundavg} from elements in ${\mathcal L}\cap \frac14 W_{00}({\mathbb Z})$. As in
Proposition \ref{proplargeQbound}, the desired saving is obtained via the
following two observations. Firstly, integral points in $((\gamma
G_0)\cdot (XR))\cap \frac14 W_{00}({\mathbb Z})$ occur predominantly when the Iwasawa
coordinates $s_i$ of $\gamma$ are large. Secondly, since the weight of
the $Q$-invariant is a product of negative powers of the $s_i$, the
$Q$-invariants of elements in $((\gamma G_0)\cdot (XR))\cap {\mathcal L}$ are large when the
values of the $s_i$ are small.
\begin{proposition}\label{proplargeQboundeven}
We have
\begin{equation*}
\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap {\mathcal L}\cap {\textstyle\frac14} W_{00}({\mathbb Z}):|Q(B)|>M\}d\gamma=O(X^{n(n+1)/2}\log^2 X/M).
\end{equation*}
\end{proposition}
\begin{proof}
Since $s_i\geq c$ for every $i$, there exists a compact subset $N''$ of
$N({\mathbb R})$ containing $(t)^{-1}N'\,(t)$ for all $t\in T'$. Let $E$ be the
pre-compact set $N''KG_0R$. Then
\begin{eqnarray*}
&&\int_{\mathcal F} \#\{B\in((\gamma G_0)\cdot(XR))\cap {\mathcal L}\cap {\textstyle\frac14} W_{00}({\mathbb Z}):|Q(B)|>M\}d\gamma\\
&\ll&\int_{s_i\gg 1} \#\{B\in((s)\cdot (XE))\cap {\mathcal L}\cap {\textstyle\frac14} W_{00}({\mathbb Z}):|Q(B)|>M\}H(s)d^\times s,
\end{eqnarray*}
where $H(s)$ is defined in \eqref{eqhaareven}. Analogous to the proof of
Proposition \ref{proplargeQbound}, in order for the set
$\{B\in((s)\cdot (XE))\cap {\mathcal L}\cap \frac14 W_{00}({\mathbb Z}):|Q(B)|>M\}$ to be nonempty,
the following conditions must be satisfied:
\begin{equation}\label{eqscond}
\begin{array}{rcl}
Xs_i^{-1}&\gg& 1,\\[.1in]
Xs_gs_{g+1}^{-1}&\gg& 1,\\[.1in]
X^{g(g+1)/2}w(Q)&\gg& M.
\end{array}
\end{equation}
Let $S$ denote the set of coordinates of $W_{00}$, i.e.,
$S=\{b_{ij}:i+j\geq n\}$. Let $T_{X,M}$ denote the set of~$(s)$
satisfying $s_i\gg 1$ and the conditions of \eqref{eqscond}. Then we have
\begin{eqnarray*}
&&\displaystyle\int_{s_i\gg 1} \#\{B\in((s)\cdot (XE))\cap {\mathcal L}\cap {\textstyle\frac14} W_{00}({\mathbb Z}):|Q(B)|>M\}H(s)d^\times s \\
&\ll&\displaystyle\int_{(s)\in T_{X,M}} \prod_{\alpha\in S}(Xw(\alpha))H(s)d^\times s \\
&\ll&\displaystyle\int_{(s)\in T_{X,M}} X^{n(n+1)/2-g(g+1)}\prod_{k=1}^{g-1}s_k^{2k-1}\cdot s_g^{g-1}s_{g+1}^{g}d^\times s \\
&\ll&\displaystyle\frac{1}{M}\int_{(s)\in T_{X,M}} X^{n(n+1)/2-g(g+1)/2}w(Q)\prod_{k=1}^{g-1}s_k^{2k-1}\cdot s_g^{g-1}s_{g+1}^{g}d^\times s \\
&\ll&\displaystyle\frac{1}{M}\int_{(s)\in T_{X,M}} X^{n(n+1)/2-g(g+1)/2}\prod_{k=1}^{g-1}s_k^{k-1}\cdot s_g^{-1}s_{g+1}^{g}d^\times s \\
&\ll&\displaystyle\frac{1}{M}\int_{(s)\in T_{X,M}} X^{n(n+1)/2-g(g+1)/2+g}\prod_{k=1}^{g-1}s_k^{k-1}\cdot s_g^{g-1}d^\times s \\
&\ll&\displaystyle\frac{1}{M}X^{n(n+1)/2}\log^2(X),
\end{eqnarray*}
where the first inequality follows from the fact that $Xw(b_{ij})\gg1$ for all $b_{ij}\in S$ when $(s)$ is in the range $1\ll s_i\ll X$, the second inequality follows from the definition \eqref{eqhaareven} of $H(s)$ and the computation (\ref{wbij2}) of the weights of the coordinates $b_{ij}$, the third inequality follows from the fact that $X^{g(g+1)/2}w(Q) \gg M$,
the fourth inequality follows from the computation of the weight
of $Q$ in \eqref{eqQweighteven}, the fifth inequality comes from multiplying by the factor $(Xs_gs_{g+1}^{-1})^g\gg1$, and the $\log^2
X$ factor in the last inequality comes from the integrals over $s_1$ and $s_{g+1}$.
\end{proof}
The estimate in Theorem \ref{thm:mainestimate}(b) for even $n$ now
follows from Theorem \ref{keymaporbiteven} and Propositions~\ref{propeven} and \ref{proplargeQboundeven}, in conjunction with the
bound on the number of reducible polynomials proved in Proposition~\ref{propredboundall}.
\section{Proof of the main theorems}\label{sec:sieve}
Let ${V_n^{\textrm{mon}}}({\mathbb Z})$ denote the set of monic integer polynomials of degree~$n$. Let ${V_n^{\textrm{mon}}}({\mathbb Z})^{\rm red}$ denote the subset of polynomials that are reducible or, when $n\geq 4$, factor as $g(x)\bar{g}(x)$ over some quadratic extension of~${\mathbb Q}$.
For a set $S\subset {V_n^{\textrm{mon}}}({\mathbb Z})$, let $S_X$ denote the set of elements in
$S$ with height bounded by~$X$. We first give a power saving bound for the number of polynomials in ${V_n^{\textrm{mon}}}({\mathbb Z})^{\rm red}$ having bounded height. We start with the following lemma.
\begin{lemma}\label{lemlin}
The number of elements in ${V_n^{\textrm{mon}}}({\mathbb Z})_X$ that have a rational linear factor
is bounded by $O(X^{n(n+1)/2-n+1}\log X)$.
\end{lemma}
\begin{proof}
Consider the polynomial
\begin{equation*}
f(x)=x^n+a_{1}x^{n-1}+\cdots +a_n\in {V_n^{\textrm{mon}}}({\mathbb Z})_X.
\end{equation*}
First, note that the number of such polynomials with $a_n=0$ is bounded
by $O(X^{n(n+1)/2-n})$. Next, we assume that $a_n\neq 0$. There are
$O(X^{n(n+1)/2-n+1})$ possibilities for the $(n-1)$-tuple
$(a_1,a_2,\ldots,a_{n-2},a_n)$. For fixed $a_n\neq 0$, the number of
possibilities for a linear factor $x-r$ of $f(x)$ is at most the number
of divisors of $a_n$, since $r\mid a_n$; on average over $a_n$, this
divisor count is $O(\log X)$. By setting $f(r)=0$, we see that the
values of $a_1,a_2,\ldots,a_{n-2},a_n$, and $r$ determine $a_{n-1}$
uniquely. The lemma follows.
\end{proof}
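The final step of the proof can be made concrete: once $a_1,\ldots,a_{n-2}$, $a_n\neq0$, and a root $r\mid a_n$ are chosen, solving $f(r)=0$ for the coefficient of $x$ forces $a_{n-1}$. A minimal sketch (the function name is ours, purely illustrative):

```python
from fractions import Fraction

def forced_coefficient(head, a_n, r):
    """Given a_1, ..., a_{n-2} (in `head`), a_n != 0, and a root r != 0 of
    f(x) = x^n + a_1 x^{n-1} + ... + a_{n-2} x^2 + a_{n-1} x + a_n,
    solve f(r) = 0 for the unique forced value of a_{n-1}."""
    n = len(head) + 2
    rest = r ** n + sum(a * r ** (n - 1 - i) for i, a in enumerate(head)) + a_n
    return Fraction(-rest, r)

# Example: f(x) = (x - 2)(x^2 + 3x + 5) = x^3 + x^2 - x - 10 has root r = 2;
# with a_1 = 1 and a_3 = -10 fixed, the middle coefficient is forced to be -1.
assert forced_coefficient([1], -10, 2) == -1
```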
Following arguments of Dietmann~\cite{Dit}, we now prove that the number of reducible monic
integer polynomials of bounded height is negligible, with a power-saving error term.
\begin{proposition}\label{propredboundall}
We have
\begin{equation*}
\#{V_n^{\textrm{mon}}}({\mathbb Z})^{\rm red}_X=O(X^{n(n+1)/2-n+1}\log X).
\end{equation*}
\end{proposition}
\begin{proof}
First, by \cite[Lemma~2]{Dit}, we have that
\begin{equation}\label{eqtemppoly}
x^n+a_1x^{n-1}+\cdots +a_{n-1}x+t
\end{equation}
has Galois group $S_n$ over ${\mathbb Q}(t)$ for all $(n-1)$-tuples
$(a_1,\ldots,a_{n-1})$ aside from a set $S$ of cardinality
$O(X^{(n-1)(n-2)/2})$. Hence, the number of $n$-tuples $(a_1,\ldots,a_n)$ with height bounded by $X$ such that the Galois group of $x^n+a_1x^{n-1}+\cdots +a_{n-1}x+t$ over ${\mathbb Q}(t)$ is not $S_n$ is $O(X^{(n-1)(n-2)/2}X^n) = O(X^{n(n+1)/2-n+1})$.
Next, let $H$ be a subgroup of $S_n$ that arises as the Galois group
of the splitting field of a polynomial in ${V_n^{\textrm{mon}}}({\mathbb Z})$ with no
rational root. For reducible polynomials, we have from \cite[Lemma~4]{Dit} that $H$ has
index at least $n(n-1)/2$ in $S_n$. When $n\geq4$ is even and the polynomial factors as $g(x)\bar{g}(x)$ over a quadratic extension, the splitting field has degree at most $2(n/2)!$ and so the index of the corresponding Galois group in $S_n$ is again at least $n(n-1)/2$. For fixed $a_1,\ldots,a_{n-1}$
such that the polynomial \eqref{eqtemppoly} has Galois group $S_n$ over
${\mathbb Q}(t)$, an argument identical to the proof of \cite[Theorem~1]{Dit}
implies that the number of $a_n$ with $|a_n|\leq X^n$ such that the
Galois group of the splitting field of $x^n+a_1x^{n-1}+\cdots+a_n$
over ${\mathbb Q}$ is $H$ is bounded by
$$
O_\epsilon\Bigl(X^\epsilon\exp\bigl(\frac{n}{[S_n:H]}\log X +O(1)\bigr)\Bigr)
=O_\epsilon(X^{2/(n-1)+\epsilon}),
$$
where the last equality uses $[S_n:H]\geq n(n-1)/2$, so that $n/[S_n:H]\leq 2/(n-1)$.
In conjunction with Lemma~\ref{lemlin}, we thus obtain the estimate
\begin{equation*}
\#{V_n^{\textrm{mon}}}({\mathbb Z})^{\rm red}_X=O(X^{n(n+1)/2-n+1}\log X)+O(X^{n(n+1)/2-n+1})+
O_\epsilon(X^{n(n+1)/2-n+2/(n-1)+\epsilon}),
\end{equation*}
and the proposition follows.
\end{proof}
For any positive squarefree integer $m$, let ${\mathcal W}_m$ denote the set of
all elements in ${V_n^{\textrm{mon}}}({\mathbb Z})$ whose discriminants are divisible by $m^2$. Also, let ${\mathcal V}_m$ denote the set of all elements $f$ in ${V_n^{\textrm{mon}}}({\mathbb Z})$ such that the corresponding ring ${\mathbb Z}[x]/(f(x))$ is nonmaximal at all primes
dividing $m$.
\begin{theorem}\label{unif}
Let ${\mathcal W}_{m,X}$ denote the set of elements in ${\mathcal W}_m$ having height
bounded by $X$. For any positive real number $M$, we have
\begin{equation}\label{equ2}
\sum_{\substack{m>M\\ m\;\mathrm{ squarefree}}}
\#{\mathcal W}_{m,X}=O_\epsilon(X^{n(n+1)/2+\epsilon}/\sqrt{M})+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}).
\end{equation}
\end{theorem}
\begin{proof}
Every element of ${\mathcal W}_m$ belongs to ${\mathcal W}_{m_1}^{\rm {(1)}}\cap{\mathcal W}_{m_2}^{\rm {(2)}}$ for
some positive squarefree integers $m_1,m_2$ with $m_1m_2=m$. One of
$m_1$ and $m_2$ must be larger than $\sqrt{m}$, and hence every
element of ${\mathcal W}_m$ belongs to some ${\mathcal W}_k^{\rm {(1)}}$ or ${\mathcal W}_k^{\rm {(2)}}$ where $k\geq
\sqrt{m}$. An element $f\in {V_n^{\textrm{mon}}}({\mathbb Z})$ belongs to at most $O(X^\epsilon)$
different sets ${\mathcal W}_m$, since $m$ is a divisor of $\Delta(f)$. The
theorem now follows from Parts (a) and (b) of Theorem
\ref{thm:mainestimate}.
\end{proof}
We remark that the $\sqrt{M}$ in the denominator in \eqref{equ2} can
be improved to $M$. However we will be using Theorem \ref{unif} for
$M=X^{1/2}$ in which case the second term
$O_\epsilon(X^{n(n+1)/2-1/5+\epsilon})$ dominates even
$O_\epsilon(X^{n(n+1)/2+\epsilon}/M)$. We outline here how to improve
the denominator to $M$ for the sake of completeness. Break up ${\mathcal W}_m$
into sets ${\mathcal W}_{m_1}^{\rm {(1)}}\cap{\mathcal W}_{m_2}^{\rm {(2)}}$ for positive squarefree
integers $m_1,m_2$ with $m_1m_2=m$ as above. Break the ranges of $m_1$
and $m_2$ into dyadic ranges. For each range, we count the number of
elements in ${\mathcal W}_{m_1}^{\rm {(1)}}\cap{\mathcal W}_{m_2}^{\rm {(2)}}$ by embedding each
${\mathcal W}_{m_2}^{\rm {(2)}}$ into $\frac14W({\mathbb Z})$ as in Sections \ref{sec:monicodd} and
\ref{sec:moniceven}. Earlier, we bounded the cardinality of the
image of ${\mathcal W}_{m_2,X}^{\rm {(2)}}$ by splitting $\frac14W({\mathbb Z})$ up into two
pieces: $\frac14W_{00}({\mathbb Z})$ and
$\frac14W({\mathbb Z})\setminus\frac14W_{00}({\mathbb Z})$. The bound on the second
piece does not depend on $m_2$ and continues to be
$O_\epsilon(X^{n(n+1)/2-1/5+\epsilon})$. However for the first piece, we
now impose the further condition that elements in $\frac14 W_{00}({\mathbb Z})$
have discriminant strongly divisible by $m_1^2$, and apply the quantitative version of the Ekedahl sieve as in~\cite{geosieve}. This
gives the desired additional $1/m_1$ saving, improving the bound to
\begin{equation*}
O_\epsilon(X^{n(n+1)/2+\epsilon}/M)+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}).
\end{equation*}
The reason for counting in dyadic ranges of $m_1$ and $m_2$ is
that for both the strongly and weakly divisible cases, we count
not for a fixed $m$ but sum over all $m>M$.
Let $\bar{\lambda}_n(p)=1-\lambda_n(p)$ and
$\bar{\rho}_n(p)=1-\rho_n(p)$ denote the $p$-adic densities of ${\mathcal W}_p$
and ${\mathcal V}_p$, respectively; the values of $\lambda_n(p)$ and $\rho_n(p)$
were determined by Brakenhoff and Lenstra~\cite{ABZ},
and are presented explicitly in the introduction. We
define $\bar{\lambda}_n(m)$ and $\bar{\rho}_n(m)$ for squarefree
integers $m$ to be
\begin{equation}
\begin{array}{rcl}
\bar{\lambda}_n(m)&=&\displaystyle\prod_{p\mid m}\bar{\lambda}_n(p),\\[.15in]
\bar{\rho}_n(m)&=&\displaystyle\prod_{p\mid m}\bar{\rho}_n(p).
\end{array}
\end{equation}
For a set $S\subset {V_n^{\textrm{mon}}}({\mathbb Z})$, let $S_X$ denote the set of elements in
$S$ having height less than~$X$.
Let ${V_n^{\textrm{mon}}}({\mathbb Z})^{\rm sf}$ denote the set of elements
in ${V_n^{\textrm{mon}}}({\mathbb Z})$ having squarefree discriminant and let ${V_n^{\textrm{mon}}}({\mathbb Z})^{\rm max}$ denote
the set of elements in ${V_n^{\textrm{mon}}}({\mathbb Z})$ that correspond to maximal rings. Let
$\mu$ denote the M\"obius function. We have
\begin{equation}\label{eqth1}
\begin{array}{rcl}
\#{V_n^{\textrm{mon}}}({\mathbb Z})^{\rm sf}_X&=&\displaystyle\sum_{m\geq 1}\mu(m)\#{\mathcal W}_{m,X}\\[.2in]
&=&\displaystyle\sum_{m=1}^{\sqrt{X}}\mu(m)\bar{\lambda}_n(m)\#{V_n^{\textrm{mon}}}({\mathbb Z})_X
+O\Bigl(\sum_{m=1}^{\sqrt{X}}X^{n(n+1)/2-n}\Bigr)
+O\Bigl(\sum_{m>\sqrt{X}}\#{\mathcal W}_{m,X}\Bigr)\\[.15in]
&=&\displaystyle\Bigl(\prod_p\lambda_n(p)\Bigr)\cdot\#{V_n^{\textrm{mon}}}({\mathbb Z})_X+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}),
\end{array}
\end{equation}
where the final equality follows from Theorem \ref{unif}.
Since ${\mathcal V}_m\subset {\mathcal W}_m$, we also obtain
\begin{equation}\label{eqth2}
\#{V_n^{\textrm{mon}}}({\mathbb Z})^{\rm max}_X=\Bigl(\prod_p\rho_n(p)\Bigr)\cdot\#{V_n^{\textrm{mon}}}({\mathbb Z})_X+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon})
\end{equation}
by the identical argument.
Finally, note that we have
\begin{equation*}
\#{V_n^{\textrm{mon}}}({\mathbb Z})_X= 2^nX^{n(n+1)/2}+O(X^{n(n-1)/2}).
\end{equation*}
Therefore, Theorems \ref{polydisc2} and \ref{polydiscmax2} now follow
from \eqref{eqth1} and \eqref{eqth2}, respectively, since the
constants $\lambda_n$ and $\zeta(2)^{-1}$ appearing in these theorems
are simply equal to $\prod_p\lambda_n(p)$ and
$\prod_p\rho_n(p)$, respectively.
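The densities appearing in \eqref{eqth1} can be illustrated numerically. The sketch below (plain Python; the function names are ours, not from the paper) computes, for small height $H$, the proportion of monic cubics $x^3+px+q$ with $|p|\le H^2$ and $|q|\le H^3$ whose discriminant $-4p^3-27q^2$ is squarefree; for small $H$ this gives only a crude empirical approximation to the density $\kappa_3$ of Theorem \ref{vanishinga1}.

```python
def disc_depressed_cubic(p, q):
    # Discriminant of x^3 + p x + q.
    return -4 * p**3 - 27 * q**2

def is_squarefree(n):
    # Squarefree test by trial division; 0 is not squarefree.
    n = abs(n)
    if n == 0:
        return False
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        while n % d == 0:
            n //= d
        d += 1
    return True

def squarefree_disc_proportion(H):
    # Monic cubics x^3 + p x + q with vanishing x^2-coefficient,
    # ordered by height: |p| <= H^2, |q| <= H^3.
    total = hits = 0
    for p in range(-H * H, H * H + 1):
        for q in range(-H**3, H**3 + 1):
            total += 1
            if is_squarefree(disc_depressed_cubic(p, q)):
                hits += 1
    return hits / total
```

For instance, every discriminant divisible by $4$ is rejected, which already eliminates the pairs with $q$ even, so the small-box proportion is visibly below $1$.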
\section{A lower bound on the number of degree-$n$ number fields that are monogenic / have a short generating vector}\label{latticearg}
Let $g\in {V_n^{\textrm{mon}}}({\mathbb R})$ be a monic real polynomial of degree $n$ and nonzero discriminant with $r$ real roots and $2s$ complex roots. Then ${\mathbb R}[x]/(g(x))$
is naturally isomorphic to ${\mathbb R}^n\cong {\mathbb R}^r\times {\mathbb C}^s$ as ${\mathbb R}$-vector spaces via its real and complex embeddings (where we view ${\mathbb C}$ as ${\mathbb R}+{\mathbb R}\sqrt{-1}$). The ${\mathbb R}$-vector space ${\mathbb R}[x]/(g(x))$ also comes equipped with a natural basis, namely $1,\theta,\theta^2,\ldots,\theta^{n-1}$, where $\theta$ denotes the image of $x$ in ${\mathbb R}[x]/(g(x))$. Let $R_g$ denote the lattice spanned by $1,\theta,\ldots,\theta^{n-1}.$ In the case that $g$ is an integral polynomial in ${V_n^{\textrm{mon}}}({\mathbb Z})$, the lattice $R_g$ may be identified with the ring ${\mathbb Z}[x]/(g(x))\subset{\mathbb R}[x]/(g(x))\subset {\mathbb R}^n$.
Since $g(x)$ gives a lattice in ${\mathbb R}^n$ in this way, we may ask whether this basis is reduced in the sense of Minkowski, with respect to the usual inner product on ${\mathbb R}^n$.\footnote{Recall that a ${\mathbb Z}$-basis $\alpha_1,\ldots,\alpha_n$ of a lattice $L$ is called {\it Minkowski-reduced} if successively for $i=1,\ldots, n$ the vector $\alpha_i$ is the shortest vector in $L$ such that $\alpha_1,\ldots,\alpha_i$ can be extended to a ${\mathbb Z}$-basis of $L$. Most lattices have a unique Minkowski-reduced basis.}
More generally, for any monic real polynomial $g(x)$ of degree $n$ and nonzero discriminant, we may ask whether the basis $1,\theta,\theta^2,\ldots,\theta^{n-1}$ is Minkowski-reduced for the lattice $R_g$, up to a unipotent upper-triangular transformation over~${\mathbb Z}$ (i.e., when the basis $[1\;\;\theta\;\;\theta^2\;\cdots\; \theta^{n-1}]$ is replaced by
$[1\;\;\theta\;\;\theta^2\;\cdots\; \theta^{n-1}]A$ for some upper triangular $n\times n$ integer matrix $A$ with $1$'s on the diagonal).
More precisely, given $g\in {V_n^{\textrm{mon}}}({\mathbb R})$ of nonzero discriminant, let us
say that the corresponding basis
$1,\theta,\theta^2,\ldots,\theta^{n-1}$ of ${\mathbb R}^n$ is {\it
quasi-reduced} if there exist monic integer polynomials $h_i$ of
degree~$i$, for $i=1,\ldots,n-1$, such that the basis
$1,h_1(\theta),h_2(\theta),\ldots,h_{n-1}(\theta)$ of $R_g$ is
Minkowski-reduced (so that the basis
$1,\theta,\theta^2,\ldots,\theta^{n-1}$ is Minkowski-reduced up to a unipotent
upper-triangular transformation over~${\mathbb Z}$). By abuse of language, we
then call the polynomial $g$ {\it quasi-reduced} as well. We say that
$g$ is {\it strongly quasi-reduced} if in addition ${\mathbb Z}[x]/(g(x))$ has
a unique Minkowski-reduced basis.
The relevance of being strongly quasi-reduced is contained in the
following lemma.
\begin{lemma}\label{reducedg}
Let $g(x)$ and $g^*(x)$ be distinct monic integer polynomials of
degree $n$ and nonzero discriminant that are strongly quasi-reduced
and whose $x^{n-1}$-coefficients vanish. Then ${\mathbb Z}[x]/(g(x))$ and
${\mathbb Z}[x]/(g^*(x))$ are non-isomorphic rings.
\end{lemma}
\begin{proof}
Let $\theta$ and $\theta^*$ denote the images of $x$ in ${\mathbb Z}[x]/(g(x))$
and ${\mathbb Z}[x]/(g^*(x))$, respectively. By the assumption that $g$ and
$g^\ast$ are strongly quasi-reduced, we have that
$1,h_1(\theta),h_2(\theta),\ldots,h_{n-1}(\theta)$ and
$1,h_1^*(\theta^*),h_2^*(\theta^*),\ldots,h^*_{n-1}(\theta^*)$ are
the unique Minkowski-reduced bases of ${\mathbb Z}[x]/(g(x))$ and
${\mathbb Z}[x]/(g^*(x))$, respectively, for some monic integer polynomials
$h_i$ and $h_i^*$ of degree $i$ for $i=1,\ldots,n-1$.
If $\phi:{\mathbb Z}[x]/(g(x))\to{\mathbb Z}[x]/(g^*(x))$ is a ring isomorphism, then by
the uniqueness of Minkowski-reduced bases for these rings, $\phi$ must
map Minkowski basis elements to Minkowski basis elements, i.e.,
$\phi(h_i(\theta))=h_i^*(\theta^*)$ for all $i$. In particular, this
is true for $i=1$, so $\phi(\theta)=\theta^*+c$ for some $c\in{\mathbb Z}$,
since $h_1$ and $h_1^*$ are monic integer linear polynomials.
Therefore $\theta$ and $\theta^*+c$ must have the same minimal
polynomial, i.e., $g(x)=g^*(x-c)$; the assumption that $\theta$ and
$\theta^*$ both have trace 0 then implies that $c=0$. It follows that
$g(x)=g^*(x)$, a contradiction. We conclude that ${\mathbb Z}[x]/(g(x))$ and
${\mathbb Z}[x]/(g^*(x))$ must be non-isomorphic rings, as desired.
\end{proof}
The condition of being quasi-reduced
is fairly easy to attain:
\begin{lemma}\label{quasilemma}
If $g(x)$ is a monic real polynomial of nonzero discriminant, then $g(\rho x)$ is quasi-reduced for any sufficiently large $\rho>0$.
\end{lemma}
\begin{proof}
This is easily seen from the Iwasawa-decomposition description of
Minkowski reduction. Given an $n$-ary positive definite
integer-valued quadratic form $Q$, viewed as a symmetric $n\times n$
matrix, the condition that $Q$ is Minkowski reduced is equivalent to
$Q=\gamma I_n \gamma^T$, where $I_n$ is the sum-of-$n$-squares
diagonal quadratic form and $\gamma=\nu \tau \kappa$, where $\nu\in
N'$, $\tau\in T'$, and $\kappa\in K$; here $N'$ as before denotes a
compact subset (depending on $\tau$) of the group $N$ of
lower-triangular matrices, $T'$ is the group of diagonal matrices
$(t_1,\ldots,t_n)$ with $t_i\leq c\,t_{i+1}$ for all $i$ and some
absolute constant $c=c_n>0$, and $K$ is the orthogonal group
stabilizing the quadratic form $I_n$. The condition that $Q$ be
quasi-reduced is simply then that $t_i\leq c\,t_{i+1}$ (with no
condition on $\nu$).
Consider the natural isomorphism ${\mathbb R}[x]/(g(x))\to {\mathbb R}[x]/(g(\rho x))$
of \'etale ${\mathbb R}$-algebras defined by $x\to\rho x$. If $\theta$ denotes
the image of $x$ in ${\mathbb R}[x]/(g(x))$, then $\rho\theta$ is the image of
$x$ in ${\mathbb R}[x]/(g(\rho x))$ under this isomorphism.
Let $Q_\rho$ be the Gram matrix of the lattice basis
$1,\rho\theta,\rho^2\theta^2,\ldots,\rho^{n-1}\theta^{n-1}$ in
${\mathbb R}^{n}$ associated to $g(\rho x)$. If the element $\tau \in T$
corresponding to $g(x)$ is $(t_1,\ldots,t_n)$, then the element
$\tau_\rho\in T$ corresponding to $g(\rho x)$ is $(t_1,\rho t_2,\rho^2
t_3,\ldots,\rho^{n-1}t_n)$. This is because $Q_\rho=\Lambda
Q\Lambda^T$, where $\Lambda$ is the diagonal matrix
$(1,\rho,\rho^2,\ldots,\rho^{n-1})$; therefore, if
$Q=(\nu\tau\kappa)I_n(\nu\tau\kappa)^T$, then $Q_\rho=
(\Lambda\nu\tau\kappa)I_n(\Lambda\nu\tau\kappa)^T=(\nu'(\Lambda\tau)\kappa)I_n(\nu'(\Lambda\tau)\kappa)^T$
for some $\nu'\in N$ depending on $\Lambda$, so
$\tau_\rho=\Lambda\tau$. For sufficiently large $\rho$, we then have
$\rho^{i-1}t_i\leq c\rho^{i}t_{i+1}$ for all $i=1,\ldots,n-1$, as
desired.
\end{proof}
\noindent
Lemma~\ref{quasilemma} implies that most monic irreducible integer
polynomials are strongly quasi-reduced:
\begin{lemma}\label{mostqr}
A density of $100\%$ of irreducible monic integer polynomials
$f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ of degree~$n$, when ordered by
height $H(f):={\rm max}\{|a_1|,|a_2|^{1/2},\ldots,|a_n|^{1/n}\}$,
are strongly quasi-reduced.
\end{lemma}
\begin{proof}
Let $\epsilon>0$, and let $B$ be the closure of an open region in
${\mathbb R}^n\cong {V_n^{\textrm{mon}}}({\mathbb R})$ consisting of monic real polynomials of nonzero
discriminant and height less than $1$ such
that $${\rm Vol}(B)>(1-\epsilon){\rm Vol}(\{f\in {V_n^{\textrm{mon}}}({\mathbb R}):H(f)<1\}).$$ For each
$f\in B$, by Lemma~\ref{quasilemma} there exists a minimal finite
constant $\rho_f>0$ such that $f(\rho x)$ is quasi-reduced for any
$\rho>\rho_f$. The function $\rho_f$ is continuous in $f$, and thus by
the compactness of $B$ there exists a finite constant $\rho_B>0$ such
that $f(\rho x)$ is quasi-reduced for any $f\in B$ and $\rho>\rho_B$.
Now consider the weighted homogeneously expanding region $\rho\cdot B$
in ${\mathbb R}^n\cong {V_n^{\textrm{mon}}}({\mathbb R})$, where a real number $\rho>0$ acts on $f\in B$
by $(\rho\cdot f)(x)=f(\rho x)$. Note that $H(\rho\cdot f)=\rho
H(f)$. For $\rho>\rho_B$, we have that all polynomials in $\rho\cdot
B$ are quasi-reduced, and
$${\rm Vol}(\rho\cdot B)>(1-\epsilon){\rm Vol}(\{f\in {V_n^{\textrm{mon}}}({\mathbb R}):H(f)<\rho\}).$$
Letting $\rho$ tend to infinity shows that the density of monic
integer polynomials $f$ of degree $n$, when ordered by height, that
have nonzero discriminant and are strongly quasi-reduced is greater
than $1-\epsilon$ (since ``discriminant nonzero'' and ``strongly
quasi-reduced'' are both open conditions on the coefficients of
$f$). Since $\epsilon$ was arbitrary, and $100\%$ of integer
polynomials are irreducible, the lemma follows.
\end{proof}
We have the following variation of Theorem~\ref{polydisc2}.
\begin{theorem}\label{vanishinga1}
Let $n\geq1$ be an integer. Then when
monic integer polynomials $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ of
degree~$n$ with $a_1=0$ are ordered by $H(f):=
{\rm max}\{|a_1|,|a_2|^{1/2},\ldots,|a_n|^{1/n}\}$, the density having
squarefree discriminant $\Delta(f)$ exists and is equal to
$\kappa_n=\prod_p\kappa_n(p)>0$, where $\kappa_n(p)$ is the density of
monic polynomials $f(x)$ over ${\mathbb Z}_p$ with vanishing
$x^{n-1}$-coefficient having discriminant indivisible by $p^2$.
\end{theorem}
Indeed, the proof of Theorem~\ref{polydisc2} applies also to those
monic integer polynomials having vanishing $x^{n-1}$-coefficient
without any essential change; one simply replaces the representation $W$ (along with $W_0$ and $W_{00}$) by the codimension-1 linear subspace consisting of symmetric matrices with anti-trace $0$, but otherwise the proof carries through in the
identical manner. The analogue of Theorem~\ref{vanishinga1} holds
also if the condition $a_1=0$ is replaced by the condition $0\leq
a_1<n$; in this case, $\kappa_n=\prod_p\kappa_n(p)>0$ is replaced by
the same constant $\lambda_n=\prod_p\lambda_n(p)>0$ of
Theorem~\ref{polydisc2},
since for any monic degree-$n$ polynomial $f(x)$ there is a unique
constant $c\in{\mathbb Z}$ such that $f(x+c)$ has $x^{n-1}$-coefficient $a_1$
satisfying $0\leq a_1<n$.
Lemmas \ref{reducedg} and \ref{mostqr} and Theorem~\ref{vanishinga1}
imply that 100\% of monic integer irreducible polynomials having
squarefree discriminant and vanishing $x^{n-1}$-coefficient (or those
having $x^{n-1}$-coefficient non-negative and less than $n$), when
ordered by height, yield {\it distinct} degree-$n$ fields. Since
polynomials of height less than $X^{1/(n(n-1))}$ have absolute
discriminant $\ll X$, and since number fields of degree $n$ and
squarefree discriminant always have associated Galois group $S_n$, we
see that the number of $S_n$-number fields of degree $n$ and absolute
discriminant less than $X$ is $\gg
X^{(2+3+\cdots+n)/(n(n-1))}=X^{1/2+1/n}$. We have proven
Corollary~\ref{monogenic}.
\begin{remark}{\em
The statement of Corollary~\ref{monogenic} holds even if one specifies
the real signatures of the monogenic $S_n$-number fields of degree
$n$, with the identical proof. It holds also if one imposes any
desired set of local conditions on the degree-$n$ number fields at a
finite set of primes, so long as these local conditions do not
contradict local monogeneity. }\end{remark}
\begin{remark}{\em
We conjecture that a positive proportion of monic integer polynomials
of degree~$n$ with $x^{n-1}$-coefficient non-negative and less than
$n$ and absolute discriminant less than $X$ have height $O(
X^{1/(n(n-1))})$, where the implied $O$-constant depends only on $n$.
That is why we conjecture that the lower bound in
Corollary~\ref{monogenic} also gives the correct order of magnitude
for the upper bound.
In fact, let $C_n$ denote the $(n-1)$-dimensional Euclidean volume of
the $(n-1)$-dimensional region $R_0$ in $V_n^{\textrm{mon}}({\mathbb R})\cong{\mathbb R}^n$
consisting of all polynomials $f(x)$ with vanishing
$x^{n-1}$-coefficient and absolute discriminant less than 1. Then the
region $R_z$ in $V^{\textrm{mon}}_n({\mathbb R})\cong{\mathbb R}^n$ of all polynomials $f(x)$ with
$x^{n-1}$-coefficient equal to $z$ and absolute discriminant less than
1 also has volume $C_n$, since $R_z$ is obtained from $R_0$ via the
volume-preserving transformation $x\mapsto x+z/n$. Since we expect
that 100\% of monogenic number fields of degree $n$ can be expressed
as ${\mathbb Z}[\theta]$ in exactly one way (up to transformations of the form
$\theta\mapsto \pm \theta + c$ for $c\in {\mathbb Z}$), in view of
Theorem~\ref{polydiscmax2} we conjecture that the number of monogenic
number fields of degree $n$ and absolute discriminant less than $X$ is
asymptotic to
\begin{equation}
\frac{nC_n}{2\zeta(2)} X^{1/2+1/n}.
\end{equation}
When $n=3$, a Mathematica computation shows that we have
$C_3= \frac{2^{1/3}(3+\sqrt{3})}{45}\cdot \frac{\Gamma(1/2)\Gamma(1/6)}{\Gamma(2/3)}$.}
\end{remark}
Finally, we turn to the proof of
Corollary~\ref{shortvector}. Following \cite{EV}, for any algebraic
number $x$, we write $\|x\|$ for the maximum of the archimedean
absolute values of~$x$. Given a number field~$K$, write
$s(K)=\inf\{\|x\|: x\in{\mathcal O}_K,\; {\mathbb Q}(x)=K\}$. We consider the number of
number fields $K$ of degree~$n$ such that $s(K)\leq Y$.
As already pointed out in \cite[Remark~3.3]{EV}, an upper bound of $\ll
Y^{(n-1)(n+2)/2}$ is easy to obtain. Namely, a bound on the
archimedean absolute values of an algebraic number~$x$ gives a bound
on the archimedean absolute values of all the conjugates of $x$, which
then gives a bound on the coefficients of the minimal polynomial of
$x$. Counting the number of possible minimal polynomials satisfying
these coefficient bounds gives the desired upper bound.
To obtain a lower bound of $\gg Y^{(n-1)(n+2)/2}$, we use
Lemmas~\ref{reducedg} and \ref{mostqr} and
Theorem~\ref{vanishinga1}. Suppose $f(x)=x^n + a_2x^{n-2} + \cdots +
a_n$ is an irreducible monic integer polynomial of degree $n$.
Let~$\theta$ denote a root of $f(x)$.
If $H(f)\leq Y$, then $\|\theta\|\ll Y$;
this follows, e.g., from Fujiwara's bound~\cite{Fujiwara}:
$$ \|\theta\|\leq 2\,{\rm max}\{ |a_1|,|a_2|^{1/2},\ldots,|a_{n-1}|^{1/(n-1)}, |a_n/2|^{1/n}\}.$$
Therefore, if $H(f)\leq Y$, then
\begin{equation}\label{eq:est}
s({\mathbb Q}[x]/(f(x))) \leq \|\theta\| \ll Y.
\end{equation}
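Fujiwara's root bound (with the standard factor $2$) is easy to check numerically. The sketch below (pure Python; helper names are ours) reconstructs the coefficients of a monic polynomial from chosen roots and evaluates the bound, which must dominate the largest root in absolute value.

```python
def coeffs_from_roots(roots):
    """Coefficients [a1, ..., an] of prod (x - r) over the given roots,
    leading coefficient 1 dropped."""
    cs = [1.0]
    for r in roots:
        cs = cs + [0.0]
        for i in range(len(cs) - 1, 0, -1):
            cs[i] -= r * cs[i - 1]
    return cs[1:]

def fujiwara_bound(coeffs):
    """Fujiwara's upper bound on the absolute values of the roots of
    x^n + a1 x^(n-1) + ... + an, coeffs = [a1, ..., an]:
    2 * max{|a1|, |a2|^(1/2), ..., |a_{n-1}|^(1/(n-1)), |an/2|^(1/n)}."""
    n = len(coeffs)
    terms = [abs(a) ** (1.0 / (i + 1)) for i, a in enumerate(coeffs[:-1])]
    terms.append(abs(coeffs[-1] / 2.0) ** (1.0 / n))
    return 2 * max(terms)
```

For roots $\{1,2,3\}$ the coefficients are $(-6,11,-6)$ and the bound is $2\cdot 6=12$, comfortably above the largest root $3$.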
Now Lemma~\ref{mostqr} and Theorem \ref{vanishinga1} imply that there are
$\gg Y^{(n-1)(n+2)/2}$ such polynomials $f(x)$ of height less than $Y$ that have squarefree discriminant and are also strongly
quasi-reduced. Lemma~\ref{reducedg} and \eqref{eq:est} then imply that
these polynomials define distinct $S_n$-number fields $K$ of degree $n$ with
$s(K)\leq Y$. This completes the proof of Corollary~\ref{shortvector}.
\subsection*{Acknowledgments}
We thank Levent Alpoge, Benedict Gross, Wei Ho, Kiran Kedlaya, Hendrik
Lenstra, Barry Mazur, Bjorn Poonen, Peter Sarnak, and Ila Varma for
their kind interest and many helpful conversations. The first and third
authors were supported by a Simons Investigator Grant and NSF Grant DMS-1001828.
\bigskip
https://arxiv.org/abs/2207.14098 | Convergence of iterates in nonlinear Perron-Frobenius theory | Let $C$ be a closed cone with nonempty interior $C^\circ$ in a Banach space. Let $f:C^\circ \rightarrow C^\circ$ be an order-preserving subhomogeneous function with a fixed point in $C^\circ$. We introduce a condition which guarantees that the iterates $f^k(x)$ converge to a fixed point for all $x \in C^\circ$. This condition generalizes the notion of type K order-preserving for maps on $\mathbb{R}^n_{>0}$. We also prove that when iterates converge to a fixed point, the rate of convergence is always R-linear in two special cases: for piecewise affine maps and also for order-preserving, homogeneous, analytic, multiplicatively convex functions on $\mathbb{R}^n_{>0}$. This latter category includes the maps associated with the homogeneous eigenvalue problem for nonnegative tensors.
\section{Introduction}
Let $X$ be a real Banach space with norm $\| \cdot \|$. A \emph{closed cone} is a closed convex set $C \subset X$ such that (i) $\lambda C \subset C$ for all $\lambda \ge 0$ and (ii) $C \cap (-C) = \{0\}$. A closed cone $C$ induces the following partial order on $X$. We say that $x \le_C y$ whenever $y - x \in C$. When the cone is understood, we will write $\le$ instead of $\le_C$. We will also write $x \ll y$ when $y-x$ is in the interior of $C$, which will be denoted $\operatorname{int} C$. Let $D \subseteq X$ be a domain. A function $f:D \rightarrow X$ is \emph{order-preserving} if $f(x) \le f(y)$ whenever $x \le y$, and it is \emph{strictly order-preserving} if $f(x) \ll f(y)$ whenever $x \le y$ and $x \ne y$. We will say that $f$ is \emph{homogeneous} if $f(tx) = tf(x)$ for all $t > 0$ and $x \in D$ and \emph{subhomogeneous} if $f(tx) \le tf(x)$ for all $t \ge 1$ and $x \in D$.
Suppose that $C$ is a closed cone with nonempty interior $\operatorname{int} C$, and $f: \operatorname{int} C \rightarrow \operatorname{int} C$ is order-preserving and (sub)homogeneous.
Whether or not an eigenvector (or fixed point) exists in $\operatorname{int} C$ is a delicate question, even for finite dimensional cones. A good general reference for finite dimensional cones is \cite{LemmensNussbaum} and \cite[Section 3]{Nussbaum07} has results for infinite dimensional cones. See also \cite{Lins22} and the references therein for some recent results on the existence and uniqueness of eigenvectors in the cone $\mathbb{R}^n_{>0}$. For maps that do have eigenvectors in the interior of the cone, we can ask whether the iterates $f^k(x)$ (suitably normalized) converge to an eigenvector.
Jiang \cite{Jiang96}, motivated by a theorem of Kamke, introduced the following definition. An order-preserving function $f: \mathbb{R}^n_{\ge 0} \rightarrow \mathbb{R}^n_{\ge 0}$ is \emph{type K order-preserving} if $f(x)_i < f(y)_i$ whenever $x \le y$ and $x_i < y_i$. If a type K order-preserving subhomogeneous function $f:\mathbb{R}^n_{\ge 0} \rightarrow \mathbb{R}^n_{\ge 0}$ has a fixed point in $\mathbb{R}^n_{>0}$, then Jiang proved that $f^k(x)$ converges to a fixed point for all $x \in \mathbb{R}^n_{\ge 0}$ \cite[Theorem 2.3]{Jiang96}.
In section 3, we extend the definition of type K order-preserving to apply to maps on any closed cone in a Banach space. We prove in Theorem \ref{thm:main} that if $f: \operatorname{int} C \rightarrow \operatorname{int} C$ is type K order-preserving, subhomogeneous, and has a fixed point in $\operatorname{int} C$, then under a relatively mild compactness assumption, the iterates $f^k(x)$ converge to a fixed point in $\operatorname{int} C$ for all $x \in \operatorname{int} C$. The compactness condition is always satisfied in finite dimensions.
Note that being type K order-preserving is weaker than being strictly order-preserving on a cone, and it does not imply uniqueness of a fixed point or, in the homogeneous case, of an eigenvector up to scaling.
In some applications it is important to know how fast $f^k(x)$ converges to a fixed point. If $f$ has a unique fixed point and the spectral radius of the derivative or semiderivative at the fixed point is less than one, then the rate of convergence is known to be linear \cite{AkGaNu14}. Similar results for homogeneous maps with a unique eigenvector up to scaling are also known. In section 4 we show that in two important special cases, we can guarantee that the iterates converge to fixed points at a linear rate (specifically R-linear convergence), even if the map has more than one fixed point. In Theorem \ref{thm:piecewise} we prove that if $f$ is a piecewise affine nonexpansive map on a convex subset $M$ of a finite dimensional Banach space $X$ and $f^k(x)$ converges to $u \in M$ for some $x \in M$, then the rate of convergence must be linear. Nonexpansive piecewise affine maps are common in applications of nonlinear Perron-Frobenius theory \cite{AlBoGa21,GaGu98,HeidergottOldservanderWoude,HuCaPe21,Kohlberg80}, so this result is noteworthy. Then in Theorem \ref{thm:anal} we prove that for any order-preserving, homogeneous, multiplicatively convex, and analytic function $f: \mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$, if $f^k(x)/\|f^k(x)\|$ converges to an eigenvector in $\mathbb{R}^n_{>0}$, then the rate of convergence will be linear. This class of maps includes the functions associated with the homogeneous eigenproblem for nonnegative tensors (see e.g., \cite{ChPeZh08,FrGaHa13,HuHuQi14,HuQi16,Lim05,Lins22,Qi05,ZhQiWu13}) and also a large family of functions $\mathcal{M}_+$ constructed from generalized means that was introduced by Nussbaum \cite{Nussbaum86}.
Again, Theorem \ref{thm:anal} applies even when the function has more than one linearly independent eigenvector in $\mathbb{R}^n_{>0}$, so it differs from previously known results about linear convergence of power method iterates for nonnegative tensors that have a unique eigenvector up to scaling \cite{FrGaHa13,HuHuQi14,ZhQiWu13}.
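As an illustration of the tensor eigenproblem mentioned above, the sketch below (our toy code, not from the paper) runs the normalized power iteration $x \mapsto f(x)/\|f(x)\|_\infty$ for the degree-one homogeneous map $f(x)_i = \bigl(\sum_{j,k} T_{ijk}x_jx_k\bigr)^{1/2}$ attached to a third-order nonnegative tensor; the tensor entries and step counts are chosen arbitrarily for the demonstration, and for a strictly positive tensor the normalized iterates converge to an eigenvector.

```python
def tensor_map(T, x):
    """f(x)_i = (sum_{j,k} T[i][j][k] x_j x_k)^(1/2): order-preserving
    and homogeneous of degree 1 on the open positive orthant."""
    n = len(x)
    return [sum(T[i][j][k] * x[j] * x[k]
                for j in range(n) for k in range(n)) ** 0.5
            for i in range(n)]

def power_iterate(T, x, steps=100):
    # Normalized power iteration in the sup norm.
    for _ in range(steps):
        y = tensor_map(T, x)
        m = max(y)
        x = [yi / m for yi in y]
    return x
```

After enough steps the normalized iterate is (numerically) a fixed point of the normalized map, i.e., an eigenvector of $f$ in $\mathbb{R}^n_{>0}$.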
\section{Preliminaries} \label{sec:prelim}
Let $C$ be a closed cone in a real Banach space $X$. For $x, y \in C$, we define
$$M(x/y) := \inf \{\beta > 0 : x \le_C \beta y \}$$
and
$$m(x/y) := \sup \{\alpha > 0 : \alpha y \le_C x \}.$$
Note that $M(y/x) = m(x/y)^{-1}$. If $C$ has nonempty interior, then $(x,y) \mapsto M(x/y)$ is continuous on $X \times \operatorname{int} C$ \cite[Lemma 2.2]{LLNW18}.
Let $X^*$ denote the dual space of $X$. The \emph{dual cone} of $C$ is
$$C^* = \{\phi \in X^* : \phi(x) \ge 0 \text{ for all } x \in C \}.$$
If $C$ has nonempty interior in $X$, then $C^*$ is a closed cone in $X^*$.
An alternative formula for $M(x/y)$ (see e.g., \cite[Lemma 2.2]{LLNW18}) is:
\begin{equation} \label{functionals}
M(x/y) = \sup_{\phi \in C^*} \frac{\phi(x)}{\phi(y)}.
\end{equation}
We say that $x$ and $y$ are \emph{comparable} and write $x \sim y$ if $M(x/y) < \infty$ and $m(x/y)> 0$. Comparability is an equivalence relation, and the equivalence classes are called the \emph{parts} of $C$. If $C$ has nonempty interior, then $\operatorname{int} C$ is a part.
When $x, y \in C$ are comparable, \emph{Thompson's metric} is
$$d_T(x,y) := \max \{\log M(x/y), \log M(y/x) \}$$
and \emph{Hilbert's projective metric} is
$$d_H(x,y) := \log \left(\frac{M(x/y)}{m(x/y)}\right).$$
Thompson's metric is a metric on each part of $C$. Hilbert's projective metric has the following properties for any comparable $x,y, z \in C$.
\begin{enumerate}
\item $d_H(x,y) = 0 \text{ if and only if } y = \lambda x \text{ for some } \lambda > 0$.
\item $d_H(\alpha x, \beta y) = d_H(x,y) \text{ for all } \alpha, \beta > 0$.
\item $d_H(x,y) = d_H(y,x)$.
\item $d_H(x,z) \le d_H(x,y)+d_H(y,z)$.
\end{enumerate}
Note that $d_H$ is a metric on the set $\Sigma := \{ x \in \operatorname{int} C : \|x\|=1 \}$.
For any comparable $x, y \in C$, we have
\begin{equation} \label{HilbertThompson}
d_H(x,y) \le 2 d_T(x,y).
\end{equation}
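On the cone $\mathbb{R}^n_{>0}$ these quantities have closed forms: $M(x/y)=\max_i x_i/y_i$ and $m(x/y)=\min_i x_i/y_i$. The following minimal sketch (ours, for illustration only) computes $d_T$ and $d_H$ and can be used to check scale invariance of $d_H$ and the inequality \eqref{HilbertThompson} numerically.

```python
import math

def M(x, y):
    # M(x/y) = inf{beta > 0 : x <= beta y} = max_i x_i / y_i on R^n_{>0}.
    return max(xi / yi for xi, yi in zip(x, y))

def d_T(x, y):
    # Thompson's metric: max of log M(x/y) and log M(y/x).
    return max(math.log(M(x, y)), math.log(M(y, x)))

def d_H(x, y):
    # Hilbert's projective metric: log(M(x/y)/m(x/y)); m(x/y) = 1/M(y/x).
    return math.log(M(x, y) * M(y, x))
```

For example, $d_H(\alpha x,\beta y)=d_H(x,y)$ for any $\alpha,\beta>0$, while $d_T$ is not projectively invariant.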
In finite dimensions, the topologies induced by $d_T$ on $\operatorname{int} C$ and $d_H$ on $\Sigma$ are equivalent to the topologies inherited from the norm. This is not always true in infinite dimensions. It is true, however, if the cone $C$ is normal. A closed cone $C$ in a Banach space $X$ is \emph{normal} if there is a constant $\kappa > 0$ such that
$$\|x\| \le \kappa \, \|y\| \text{ whenever } 0 \le_C x \le_C y.$$
If $C$ is a normal cone with nonempty interior in $X$, and the closed ball $N_R(u) := \{x \in X : \|x-u\| \le R\}$ is contained in $\operatorname{int} C$, then \cite[Proposition 1.3 and Remark 1.4]{Nussbaum88} imply that there is a constant $c > 0$ such that
\begin{equation} \label{normalThompson}
c^{-1} \|x-u\| \le d_T(x,u) \le c \|x-u\|
\end{equation}
for all $x \in N_R(u)$.
Let $(M,d)$ be a metric space. A function $f: M \rightarrow M$ is \emph{nonexpansive} with respect to the metric $d$ if $d(f(x),f(y)) \le d(x,y)$ for all $x, y \in M$. If $C$ has nonempty interior and $f: \operatorname{int} C \rightarrow \operatorname{int} C$ is order-preserving and subhomogeneous, then $f$ is nonexpansive with respect to Thompson's metric \cite[Lemma 2.1.7]{LemmensNussbaum}. If $f$ is order-preserving and homogeneous, then the map $g(x) = f(x)/\|f(x)\|$ is nonexpansive with respect to Hilbert's projective metric on $\Sigma$ \cite[Proposition 1.5]{Nussbaum88}. In that case, $f$ has an eigenvector $u \in \operatorname{int} C$ with $f(u) = \lambda u$ if and only if $g$ has a fixed point in $\Sigma$. Furthermore, if $f$ has more than one eigenvector $u, v \in \operatorname{int} C$, then the eigenvalues corresponding to $u$ and $v$ must be the same. This is not necessarily true, however, if $f$ is only subhomogeneous rather than homogeneous.
If $C$ is a normal closed cone with nonempty interior in a Banach space and $f: \operatorname{int} C \rightarrow \operatorname{int} C$ is order-preserving and homogeneous, the \emph{cone spectral radius} of $f$ is
$$r_C(f) = \limsup_{k \rightarrow \infty} \|f^k(x)\|^{1/k}$$
for some $x \in \operatorname{int} C$. The value of $r_C(f)$ does not depend on the choice of $x$ \cite[Theorem 2.2]{MaNu02}. Furthermore, since $f(\operatorname{int} C) \subset \operatorname{int} C$, it follows that $r_C(f) > 0$. If $f$ has an eigenvector in $\operatorname{int} C$, then the corresponding eigenvalue will be $r_C(f)$.
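For the linear map $f(x)=Ax$ with $A$ entrywise positive, the cone spectral radius is the classical Perron root, and the limit above can be computed by normalized iteration. A sketch (ours; the logarithmic accumulation is only there to avoid overflow for large $k$):

```python
import math

def cone_spectral_radius(A, x, k=60):
    """Estimate r_C(f) = limsup_k ||f^k(x)||^(1/k) for f(x) = A x with
    A entrywise positive, using the sup norm.  The product of the
    per-step normalizers equals ||A^k x||, so we accumulate its log."""
    logn = 0.0
    for _ in range(k):
        x = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        m = max(abs(xi) for xi in x)
        logn += math.log(m)
        x = [xi / m for xi in x]
    return math.exp(logn / k)
```

For $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ the Perron root is $3$, and the estimate approaches it as $k\to\infty$ regardless of the starting point in $\operatorname{int} C$, illustrating that $r_C(f)$ does not depend on the choice of $x$.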
For any map $f:D \rightarrow D$ on a set $D$, the \emph{orbit} of a point $x \in D$ under $f$ is the set $\mathcal{O}(x,f) := \{f^k(x) : k \in \mathbb{N} \}$. If $D$ has a topology, then the \emph{omega limit set} of a point $x$ under $f$ is
$$\omega(x,f) := \bigcap_{n \in \mathbb{N}} \overline { \{ f^k(x) : k \ge n \} },$$
where $\overline{A}$ denotes the closure of $A$.
\section{Type K order-preserving maps} \label{sec:typeK}
\begin{definition}
Let $C$ be a closed cone in a Banach space $X$. Let $D$ be a domain in $X$ and let $f: D \rightarrow X$. We say that $f$ is \emph{type K order-preserving} if for any $x, y \in D$ with $y \ge x$, there exists $\epsilon > 0$ such that $f(y) - f(x) \ge \epsilon (y-x)$.
\end{definition}
\begin{theorem} \label{thm:main}
Let $C$ be a closed cone with nonempty interior in a Banach space and let $f:\operatorname{int} C \rightarrow \operatorname{int} C$ be subhomogeneous and type K order-preserving. If $f$ has a fixed point in the interior of $C$ and the closure of the orbit $\mathcal{O}(x,f)$ is compact for some $x \in \operatorname{int} C$, then $f^k(x)$ converges to a fixed point of $f$.
\end{theorem}
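A toy illustration of Theorem \ref{thm:main} (our example, not from the paper): the coordinatewise square root on $\mathbb{R}^n_{>0}$ is order-preserving, subhomogeneous ($\sqrt{tx}=\sqrt{t}\,\sqrt{x}\le t\sqrt{x}$ for $t\ge1$), and type K order-preserving on order-bounded sets, since $\sqrt{y_i}-\sqrt{x_i}=(y_i-x_i)/(\sqrt{y_i}+\sqrt{x_i})$. Its fixed point in the interior is $(1,\ldots,1)$, and the iterates converge to it from any starting point, as the theorem predicts.

```python
def f(x):
    # Coordinatewise square root on R^n_{>0}: order-preserving,
    # subhomogeneous, type K order-preserving on order-bounded sets;
    # fixed point in the interior: (1, ..., 1).
    return [xi ** 0.5 for xi in x]

def iterate(x, k):
    # k-fold composition f^k(x).
    for _ in range(k):
        x = f(x)
    return x
```

Here $f^k(x)_i = x_i^{1/2^k} \to 1$ for every $x_i > 0$, so convergence is immediate to verify.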
The key insight in the proof of Theorem \ref{thm:main} is that if $\omega(x,f)$ is a compact subset of $\operatorname{int} C$ and $f:\operatorname{int} C \rightarrow \operatorname{int} C$ is nonexpansive with respect to Thompson's metric, then $f$ is an invertible Thompson metric isometry on $\omega(x,f)$ \cite[Lemma 3.1.2 and Corollary 3.1.5]{LemmensNussbaum}. This result can be traced back to work by Freudenthal and Hurewicz \cite{FrHu36}.
Before proving Theorem \ref{thm:main}, we note the following minor lemma.
\begin{lemma} \label{lem:feps}
Let $C$ be a closed cone with nonempty interior in a Banach space and let $f: \operatorname{int} C \rightarrow \operatorname{int} C$ be subhomogeneous and type K order-preserving. For every $\epsilon > 0$, let $f_\epsilon := f - \epsilon \operatorname{id}.$
Then for any $x,y \in \operatorname{int} C$, there exists $\epsilon > 0$ small enough so that
$$d_T(f_\epsilon(x),f_\epsilon(y)) \le d_T(x,y).$$
\end{lemma}
\begin{proof}
Let $\beta = \exp d_T(x,y)$. Then
$$x \le \beta y \text{ and } y \le \beta x.$$
Since $f$ is type K order-preserving, there is an $\epsilon > 0$ small enough so that
$$ \epsilon(\beta y - x)\le f(\beta y) - f(x) \text{ and } \epsilon(\beta x - y) \le f(\beta x) - f(y).$$
Therefore
$$f_\epsilon(x) \le f_\epsilon(\beta y) \text{ and } f_\epsilon(y) \le f_\epsilon(\beta x).$$
Note that $f_\epsilon$ is also subhomogeneous, therefore
$$f_\epsilon(x) \le \beta f_\epsilon(y) \text{ and } f_\epsilon(y) \le \beta f_\epsilon(x).$$
This means that $d_T(f_\epsilon(x),f_\epsilon(y)) \le d_T(x,y).$
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}] Since $f$ has a fixed point in $\operatorname{int} C$, the orbit of $x$ is bounded in Thompson's metric. Therefore, the omega limit set $\omega(x,f)$ is contained in $\operatorname{int} C$. Any subset of $\operatorname{int} C$ which is compact in the norm topology of $\operatorname{int} C$ is also compact in the Thompson metric topology, which may be weaker. So $f$ is an invertible Thompson's metric isometry on $\omega(x,f)$ \cite[Lemma 3.1.2 and Corollary 3.1.5]{LemmensNussbaum}.
Choose $y, z \in \omega(x,f)$ such that $d_T(y,z)$ is maximal. We may assume without loss of generality that $d_T(y,z) = \log M(z/y)$. Then there is a linear functional $\phi \in C^*$ such that $M(z/y) = {\phi(z)}/{\phi(y)}$ (see \cite[Lemma 2.2]{LLNW18}). Let $a = \phi(y)$ and $b = \phi(z)$ and note that $b \ge a$ since $0 \le d_T(y,z) = \log(b/a)$.
Since $f$ is invertible on $\omega(x,f)$, we have $y^{-1} := f^{-1}(y) \in \omega(x,f)$ and $z^{-1} := f^{-1}(z) \in \omega(x,f)$. Since $f$ is an isometry on $\omega(x,f)$,
$$d_T(y^{-1},z^{-1}) = d_T(y,z) = \log\left(\tfrac{b}{a}\right).$$
Let $f_\epsilon = f - \epsilon \operatorname{id}$. For any sufficiently small $\epsilon >0$, $f_\epsilon(y^{-1})$ and $f_\epsilon(z^{-1})$ are both in $\operatorname{int} C$. By Lemma \ref{lem:feps} there is an $\epsilon > 0$ small enough so that
$$d_T(f_\epsilon(y^{-1}),f_\epsilon(z^{-1})) \le d_T(y^{-1},z^{-1}) = \log\left(\tfrac{b}{a}\right).$$
Observe that $a \le \phi(w) \le b$ for all $ w \in \omega(x,f)$, otherwise $d_T(w,z)$ or $d_T(y,w)$ would be greater than $\log(b/a)$ by \eqref{functionals}, but $\log(b/a)$ is the maximal distance between pairs in $\omega(x,f)$. In particular, $\phi(y^{-1}) \ge a$ and $\phi(z^{-1}) \le b$. Therefore
\begin{equation} \label{oba}
\phi(f_\epsilon(y^{-1})) = \phi(y) - \epsilon \phi(y^{-1}) \le a(1-\epsilon)
\end{equation}
and
\begin{equation} \label{obb}
\phi(f_\epsilon(z^{-1})) = \phi(z) - \epsilon \phi(z^{-1}) \ge b(1-\epsilon).
\end{equation}
Then by \eqref{functionals}
$$\log\left(\frac{b}{a}\right) \le \log \frac{\phi(f_\epsilon(z^{-1}))}{\phi(f_\epsilon(y^{-1}))} \le d_T(f_\epsilon(y^{-1}),f_\epsilon(z^{-1})) \le \log\left(\frac{b}{a}\right).
$$
We conclude that $\phi(f_\epsilon(y^{-1})) = a(1-\epsilon)$ and $\phi(f_\epsilon(z^{-1})) = b(1-\epsilon)$. Combined with \eqref{oba} and \eqref{obb}, this means that $\phi(y^{-1}) = a$ and $\phi(z^{-1}) = b$.
We can repeat this argument to prove that $\phi(z^{-k}) = b$ for all $k \in \mathbb{N}$, where $z^{-k} := f^{-k}(z) \in \omega(x,f)$. On the other hand, there is a point $f^m(x) \in \mathcal{O}(x,f)$ arbitrarily close to $y$ and an $n \in \mathbb{N}$ such that $f^{m+n}(x)$ is arbitrarily close to $z$. By the nonexpansiveness of $f$, $f^n(y)$ is then arbitrarily close to $z$. Since $f$ is an isometry on $\omega(x,f)$, it follows that $d_T(f^n(y),z) = d_T(y,z^{-n})$ is arbitrarily small. But $\phi(y) = a$ and $\phi(z^{-n}) = b$ imply $d_T(y,z^{-n}) \ge \log (b/a)$, which is a contradiction unless $a=b$, in which case $\omega(x,f)$ is a singleton.
\end{proof}
\begin{remark}
Unlike \cite[Theorem 2.3]{Jiang96}, we assume that $f$ has a fixed point in $\operatorname{int} C$ in Theorem \ref{thm:main}.
If $f$ does not have a fixed point in $\operatorname{int} C$, then in many circumstances $\omega(x,f)$ will be contained in a convex subset of the boundary of $C$ \cite[Theorem 3.2]{LLNW18}. There is no guarantee, however, that $f$ extends continuously to the boundary of $C$, even in finite dimensions \cite{BuNuSp03}. If $C$ is a polyhedral cone, then any order-preserving subhomogeneous map $f: \operatorname{int} C \rightarrow \operatorname{int} C$ does have a continuous extension to all of $C$ that is order-preserving and subhomogeneous \cite[Theorem 5.1.5]{LemmensNussbaum}. Furthermore, if $C$ is a polyhedral cone and the orbit $\mathcal{O}(x,f)$ is bounded in norm, then it is known that the omega limit set $\omega(x,f)$ is a finite set which is the periodic orbit of a single point \cite[Lemma 6.5 and Theorem 6.8]{AGLN06}, even if $\omega(x,f)$ is not contained in $\operatorname{int} C$. So for polyhedral cones, we can remove the assumption that $f$ has a fixed point in $\operatorname{int} C$ from Theorem \ref{thm:main} as long as $f$ is type K order-preserving on all of $C$.
\end{remark}
If $C$ is a closed cone in a finite dimensional Banach space, then an order-preserving subhomogeneous map $f:\operatorname{int} C \rightarrow \operatorname{int} C$ has a fixed point in $\operatorname{int} C$ if and only if $\mathcal{O}(x,f)$ is bounded in Thompson's metric for all $x \in \operatorname{int} C$ \cite[Corollary 3.2.5]{LemmensNussbaum}. In that case, the assumption that $\mathcal{O}(x,f)$ has compact closure is automatic.
If $C$ is a normal, closed cone with nonempty interior in an infinite dimensional Banach space, and if $f:\operatorname{int} C \rightarrow \operatorname{int} C$ is order-preserving, subhomogeneous, and there is a measure of non-compactness $\gamma$ such that $f$ is $\gamma$-condensing, that is $\gamma(f(B)) < \gamma(B)$ for every bounded subset $B \subset \operatorname{int} C$, then it is also known that $f$ has a fixed point in $\operatorname{int} C$ if and only if $\mathcal{O}(x,f)$ is bounded in Thompson's metric for every $x \in \operatorname{int} C$ \cite[Theorems 4.3 and 4.4]{Nussbaum88}, and in that case the assumption that the closure of $\mathcal{O}(x,f)$ is compact is also automatic.
If $f: \operatorname{int} C \rightarrow \operatorname{int} C$ is order-preserving and subhomogeneous, then $f_\lambda := \lambda f + (1-\lambda) \operatorname{id}$ is a type K order-preserving subhomogeneous map with the same fixed points as $f$ for every $0 < \lambda < 1$. Furthermore, if $f$ is $\gamma$-condensing, then so is $f_\lambda$. This follows from the fact that $\gamma$ is a seminorm on the set of bounded subsets of $X$, that is, $\gamma(cB) = |c|\gamma(B)$ and $\gamma(B_1+B_2) \le \gamma(B_1) +\gamma(B_2)$ for all bounded sets $B, B_1, B_2$ and constants $c \in \mathbb{R}$ \cite[Proposition 7.2]{Deimling}. Therefore the following corollary holds. Note the similarity to Krasnoselskii-Mann iteration for norm nonexpansive maps (see, e.g., \cite[Section 5]{GaSt20}).
\begin{corollary} \label{cor:blue1}
Let $C$ be a closed cone with nonempty interior in a Banach space $X$ and let $f:\operatorname{int} C \rightarrow \operatorname{int} C$ be order-preserving and subhomogeneous. Let $\gamma$ be a measure of non-compactness on $X$ and suppose that $f$ is $\gamma$-condensing. Let $f_\lambda = \lambda f + (1- \lambda) \operatorname{id}$ for some $0 < \lambda < 1$. If $f$ has a fixed point in $\operatorname{int} C$, then $f_\lambda^k(x)$ converges to a fixed point of $f$ for every $x \in \operatorname{int} C$.
\end{corollary}
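As a simple finite-dimensional illustration of the role of the damping in Corollary \ref{cor:blue1} (our example, not from the cited results), consider a map whose undamped iterates fail to converge.

```latex
% Example (ours): the coordinate swap on the standard cone. The
% undamped iterates oscillate with period 2, while the damped map
% f_{1/2} lands on a fixed point of f after a single step.
Let $f(x_1,x_2) = (x_2,\,x_1)$ on $\operatorname{int} \mathbb{R}^2_{\ge 0}$.
Then $f$ is order-preserving and homogeneous with fixed point set
$\{(t,t) : t > 0\}$, but $f^k(x)$ oscillates whenever $x_1 \neq x_2$.
The damped map satisfies
$$f_{1/2}(x) = \tfrac{1}{2}f(x) + \tfrac{1}{2}x
= \left(\tfrac{x_1+x_2}{2},\,\tfrac{x_1+x_2}{2}\right),$$
which is already a fixed point of $f$, so $f_{1/2}^k(x)$ converges for
every $x \in \operatorname{int} \mathbb{R}^2_{\ge 0}$.
```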
If $f: \operatorname{int} C \rightarrow \operatorname{int} C$ is order-preserving and homogeneous, we may want to find an eigenvector in the interior that has an eigenvalue not equal to one. If we don't know the eigenvalue in advance, then we can iterate the normalized map $g(x) = f(x)/\|f(x)\|$.
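When $f$ is linear, iterating the normalized map is the classical power method; the following example (ours) makes this concrete.

```latex
% Example (ours): normalized iteration of a positive matrix is the
% power method.
Let $f(x) = Ax$ with
$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$
on $\mathbb{R}^2_{>0}$, normalized with respect to
$\|x\| = |x_1| + |x_2|$. The eigenvalues of $A$ are $3$ and $1$, so $f$
has no fixed point in the interior of the cone, yet the normalized
eigenvector $u = (\tfrac12,\tfrac12)$ for the eigenvalue $3$ satisfies
$g(u) = u$, and $g^k(x) \rightarrow u$ for every
$x \in \mathbb{R}^2_{>0}$.
```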
\begin{lemma} \label{lem:compactorbits}
Let $C$ be a normal, closed cone with nonempty interior in a Banach space $X$. Let $f: \operatorname{int} C \rightarrow \operatorname{int} C$ be order-preserving and homogeneous, and suppose that $f$ has an eigenvector in $\operatorname{int} C$. Let $g(x) = f(x)/\|f(x)\|$ for all $x \in \operatorname{int} C$. Then $\overline{\mathcal{O}(x,r_C(f)^{-1} f)}$ is compact if and only if $\overline{\mathcal{O}(x,g)}$ is compact.
\end{lemma}
\begin{proof}
For any closed interval $[a,b] \subset \mathbb{R}$ and compact set $K \subset X$, let
$$[a,b] \cdot K = \{tx : t \in [a,b], x \in K \}.$$
Note that $[a,b] \cdot K$ is compact since it is the image of the compact set $[a,b] \times K \subset \mathbb{R} \times X$ under the continuous map $(t,x) \mapsto tx$.
Let $u \in \operatorname{int} C$ be an eigenvector of $f$ with $\|u \| = 1$. We may assume without loss of generality that the spectral radius of $f$ is 1 by replacing $f$ with $r_C(f)^{-1} f$.
Then $f(u) = u$. Since $u \in \operatorname{int} C$, there are constants $\alpha, \beta > 0$ such that $\alpha u \le x \le \beta u$. Applying $f^k$ to these inequalities and using the fact that $f$ is order-preserving and homogeneous with $f(u) = u$ gives
$$\alpha u \le f^k(x) \le \beta u$$
for all $k \in \mathbb{N}$.
Since $C$ is a normal cone, there is a constant $\kappa > 0$ such that
$$\alpha \kappa^{-1} \le \|f^k(x)\| \le \beta \kappa$$
for all $k \in \mathbb{N}$.
Therefore
$$\overline{\mathcal{O}(x,f)} \subset [\alpha \kappa^{-1},\beta \kappa] \cdot \overline{\mathcal{O}(x,g)}$$
and
$$\overline{\mathcal{O}(x,g)} \subset [\kappa^{-1} \beta^{-1},\kappa \alpha^{-1}] \cdot \overline{\mathcal{O}(x,f)}.$$
Thus, if either $\overline{\mathcal{O}(x,f)}$ or $\overline{\mathcal{O}(x,g)}$ is compact, then so is the other.
\end{proof}
Combining Lemma \ref{lem:compactorbits} with Theorem \ref{thm:main}, we have the following.
\begin{corollary}
Let $C$ be a normal, closed cone with nonempty interior in a Banach space. Let $f: \operatorname{int} C \rightarrow \operatorname{int} C$ be Type K order-preserving and homogeneous. Let $g(x) = f(x)/\|f(x)\|$ for all $x \in \operatorname{int} C$. If $f$ has an eigenvector in $\operatorname{int} C$ and $\mathcal{O}(x,g)$ has compact closure, then $g^k(x)$ converges to an eigenvector of $f$ for all $x \in \operatorname{int} C$.
\end{corollary}
\section{Linear convergence to fixed points} \label{sec:linearrate}
Let $(M,d)$ be a metric space and $x^k$ be a sequence in $M$. We say that $x^k$ converges to $y \in M$ at a \emph{linear rate} if there exist constants $0 < \theta < 1$ and $c > 0$ such that $d(x^k,y) \le c \, \theta^k$ for all $k \in \mathbb{N}$. Note that $x^k$ converges to $y$ at a linear rate if and only if $\limsup_{k \rightarrow \infty} d(x^k,y)^{1/k} < 1$. This type of convergence is commonly referred to as \emph{R-linear convergence} to distinguish it from the stronger \emph{Q-linear convergence} where there is a constant $0 < \gamma < 1$ such that $d(x^{k+1},y) \le \gamma \, d(x^k,y)$ for all $k \in \mathbb{N}$.
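To see that R-linear convergence is strictly weaker, consider the following standard example (ours) in $(\mathbb{R},|\cdot|)$.

```latex
% Example (ours): R-linear but not Q-linear convergence.
The sequence $x^k = (2 + (-1)^k)\,2^{-k}$ converges to $y = 0$ at a
linear (R-linear) rate, since $|x^k - y| \le 3 \cdot 2^{-k}$, but it
does not converge Q-linearly: for odd $k$,
$$\frac{|x^{k+1} - y|}{|x^k - y|}
= \frac{3 \cdot 2^{-(k+1)}}{2^{-k}} = \frac{3}{2} > 1.$$
```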
The following technical lemma shows that for order-preserving homogeneous maps on a cone, if a sequence of normalized iterates converges to an eigenvector in the interior at a linear rate, then the convergence constant is the same regardless of which metric or norm is used.
\begin{lemma} \label{lem:technical}
Let $C$ be a normal, closed cone with nonempty interior in a Banach space. Let $f: \operatorname{int} C \rightarrow \operatorname{int} C$ be order-preserving and homogeneous. Let $x, u \in \operatorname{int} C$ with $\|u\| = 1$, and let $0 < \theta < 1$. Then the following are equivalent.
\begin{enumerate}[(a)]
\item \label{item:dHlim} $\displaystyle \limsup_{k \rightarrow \infty} d_H(f^k(x),u)^{1/k} \le \theta.$
\item \label{item:dTlim} There is a $\lambda > 0$ such that $\displaystyle \limsup_{k \rightarrow \infty} d_T(f^k(x)/r_C(f)^k,\lambda u)^{1/k} \le \theta.$
\item \label{item:stronglim} There is a $\lambda > 0$ such that $\displaystyle \limsup_{k \rightarrow \infty} \left\|\frac{f^k(x)}{r_C(f)^k} - \lambda u \right\|^{1/k} \le \theta.$
\item \label{item:weaklim} $\displaystyle \limsup_{k \rightarrow \infty} \left\| \frac{f^k(x)}{\|f^k(x)\|} - u \right\|^{1/k} \le \theta.$
\end{enumerate}
\end{lemma}
\begin{proof}
We can assume without loss of generality that the cone spectral radius $r_C(f)$ is one by replacing $f$ with $r_C(f)^{-1} f$. We make this assumption throughout the proof. \\
\noindent
\ref{item:dHlim} $\Rightarrow$ \ref{item:dTlim}. By nonexpansiveness,
\begin{align*}
d_H(u,f(u)) &= \lim_{k \rightarrow \infty} d_H(f^{k+1}(x),f(u)) \\
&\le \lim_{k \rightarrow \infty} d_H(f^k(x),u) = 0.
\end{align*}
Therefore $u$ is an eigenvector of $f$ and its eigenvalue must be $r_C(f) = 1$. Since $f(u) = u$, it follows that $M(f^k(x)/u)$ is a decreasing sequence and $m(f^k(x)/u)$ is an increasing sequence. The fact that
$$\lim_{k \rightarrow \infty} \log \left( \frac{M(f^k(x)/u)}{m(f^k(x)/u)} \right) = \lim_{k \rightarrow \infty} d_H(f^k(x),u) = 0$$
implies that
$$\lim_{k \rightarrow \infty} M(f^k(x)/u) = \lim_{k \rightarrow \infty} m(f^k(x)/u).$$
Let $\lambda = \lim_{k \rightarrow \infty} M(f^k(x)/u)$.
Note that $M(f^k(x)/\lambda u) \ge 1$ and $m(f^k(x)/\lambda u) \le 1$ for all $k \in \mathbb{N}$. Then the definitions of $d_H$ and $d_T$ imply that
$$d_T(f^k(x),\lambda u) \le d_H(f^k(x),\lambda u) ~\text{ for all } k \in \mathbb{N}.$$
Therefore $\limsup_{k \rightarrow \infty} d_T(f^k(x),\lambda u)^{1/k} \le \limsup_{k \rightarrow \infty} d_H(f^k(x),\lambda u)^{1/k} \le \theta.$ \\
\noindent
\ref{item:dTlim} $\Rightarrow$ \ref{item:stronglim}. Let $N_r(u) = \{ x \in X : \|x-u\| \le r \}$ with $r> 0$ small enough so that $N_r(u) \subset \operatorname{int} C$.
Then there is a constant $c > 0$ such that \eqref{normalThompson} holds for all $x \in N_r(u)$.
Therefore
$$\| f^k(x) - \lambda u \| \le c d_T(f^k(x),\lambda u) $$
for all $k \in \mathbb{N}$ sufficiently large. This implies the conclusion.\\
\noindent
\ref{item:stronglim} $\Rightarrow$ \ref{item:weaklim}. This follows from the following inequality.
\begin{align*}
\left\| \frac{f^k(x)}{\|f^k(x)\|} - u \right\| &\le \left\| \frac{f^k(x)}{\|f^k(x)\|} - \frac{f^k(x)}{\lambda} \right\| + \left\| \frac{f^k(x)}{\lambda} - u \right\| & (\text{triangle ineq.})\\
&= \left| 1 - \frac{\|f^k(x)\|}{\lambda} \right| + \left\| \frac{f^k(x)}{\lambda} - u \right\| \\
&= \left| \|u\| - \frac{\|f^k(x)\|}{\lambda} \right| + \left\| \frac{f^k(x)}{\lambda} - u \right\| \\
&\le \frac{2}{\lambda} \left\| f^k(x) - \lambda u \right\| & (\text{triangle ineq.})
\end{align*}
for all $k \in \mathbb{N}$.
\noindent
\ref{item:weaklim} $\Rightarrow$ \ref{item:dHlim}. The following inequality immediately implies the conclusion.
\begin{align*}
d_H(f^k(x), u) &= d_H(f^k(x)/\|f^k(x)\|, u) \\
&\le 2 d_T(f^k(x)/\|f^k(x)\|, u) & (\text{by } \eqref{HilbertThompson})\\
&\le 2 c \left\|\frac{f^k(x)}{\|f^k(x)\|} - u \right\| & (\text{by } \eqref{normalThompson})
\end{align*}
for all $k \in \mathbb{N}$ sufficiently large.
\end{proof}
\subsection{Piecewise affine functions}
Let $M$ be a convex subset of a Banach space $X$. We say that $f: M \rightarrow M$ is \emph{piecewise affine} if $M$ can be decomposed into a finite union of closed convex sets on each of which $f$ is affine linear. In some special cases, it has been observed that if the iterates of certain nonexpansive piecewise affine maps converge to a fixed point, then they do so at a linear rate. A result of Robinson \cite{Robinson79} implies that this is true for piecewise affine maps on $\mathbb{R}^n$ that are nonexpansive with respect to the Euclidean norm. It is also known for the value iteration operator for solving a Markov decision process \cite{ScFe79} (see also \cite[Appendix 4.A]{Zijm82}). The following general result appears to be new, however.
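A canonical family of examples (ours; closely related to the value iteration operators mentioned above) is given by max-plus maps.

```latex
% Example (ours): max-plus maps are piecewise affine and nonexpansive
% in the supremum norm.
Given $A \in \mathbb{R}^{n \times n}$, define
$$f(x)_i = \max_{1 \le j \le n} \left( A_{ij} + x_j \right),
\qquad 1 \le i \le n.$$
The space $\mathbb{R}^n$ decomposes into finitely many closed convex
regions on which the maximizing indices are constant, and $f$ is affine
on each region, so $f$ is piecewise affine. Moreover
$|f(x)_i - f(y)_i| \le \max_j |x_j - y_j|$ for each $i$, so $f$ is
nonexpansive with respect to $\|x\|_\infty = \max_i |x_i|$, and
Theorem \ref{thm:piecewise} applies whenever its iterates converge to a
fixed point.
```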
\begin{theorem} \label{thm:piecewise}
Let $(X, \|\cdot\|)$ be a finite dimensional Banach space and let $M$ be a convex subset of $X$. Let $d$ be a metric on $M$ which induces a topology on $M$ that is equivalent to the topology inherited from the norm. Suppose that $f:M \rightarrow M$ is a piecewise affine map that is nonexpansive with respect to $d$. If for some $x \in M$, the sequence $f^k(x)$ converges to a fixed point $u$, then there are constants $m \in \mathbb{N}$ and $0 < \gamma < 1$ such that
$$\|f^{k+m}(x) - u \| \le \gamma \|f^k(x)-u\|$$
for all $k \in \mathbb{N}$ sufficiently large.
\end{theorem}
\begin{proof} We will use the following notation for closed balls centered at $u$ in $M$. For any $R > 0$, let
$$B_R(u) := \{x \in M : d(x,u) \le R\}$$
and let
$$N_R(u) := \{x \in M : \|x-u\| \le R \}.$$
Since $f$ is piecewise affine and $u$ is a fixed point of $f$, there is a closed ball $B_R(u)$ such that
$$
f(\lambda x + (1-\lambda) u) = \lambda f(x) + (1-\lambda) u
$$
whenever $x \in B_R(u)$ and $0 < \lambda < 1$. Since $f$ is nonexpansive with respect to $d$, $f(B_R(u)) \subseteq B_R(u)$, so by induction
\begin{equation} \label{locallyhomogeneous2}
f^k(\lambda x + (1-\lambda) u) = \lambda f^k(x) + (1-\lambda) u
\end{equation}
for all $x \in B_R(u)$, $0 < \lambda < 1$, and $k \in \mathbb{N}$.
Since the topology on $(M,d)$ is equivalent to the norm topology, there are constants $0 < r < R$ and $0 < \alpha < \beta$ such that
$$B_r(u) \subseteq N_\alpha(u) \subset N_\beta(u) \subseteq B_R(u).$$
Let $U = \{x \in M : f^k(x) \text{ converges to } u \}$. Let us prove that $U$ is closed. Suppose that $x^n$ is a sequence in $U$ that converges to $x \in M$. Then
\begin{align*}
\limsup_{k \rightarrow \infty} \|f^k(x) - u\| &= \limsup_{k \rightarrow \infty} \|f^k(x) - f^k(x^n)\| & (\text{since } \lim_{k \rightarrow \infty} f^k(x^n) = u)\\
&\le \|x - x^n \| & (\text{by nonexpansiveness})
\end{align*}
for any $n$. Since we can make $\|x-x^n\|$ arbitrarily small, we conclude that $f^k(x)$ converges to $u$, so $x \in U$ and $U$ is closed.
Let $V_k = \{x \in M : d(f^k(x),u) < r\}$. Each set $V_k$ is open in $M$, and $V_k \subseteq V_{k+1}$ for all $k \in \mathbb{N}$ since $f$ is nonexpansive and $u$ is a fixed point. Since $X$ is finite dimensional, the set $B_R(u) \cap U$ is compact. The sets $V_k$ are an open cover of $B_R(u) \cap U$, so there is a single $m \in \mathbb{N}$ such that $B_R(u) \cap U \subseteq V_m$. Thus $f^m(B_R(u) \cap U) \subseteq B_r(u) \cap U$. This implies that $f^m(N_\beta(u) \cap U) \subseteq N_\alpha(u) \cap U$.
Let $y \in N_\beta(u) \cap U$ with $y \neq u$. Let $z = u + \lambda^{-1}(y-u)$ where $\lambda = \|y-u\|/\beta$. Then $\|z-u\| = \beta$ and $y = \lambda z + (1-\lambda) u$. By \eqref{locallyhomogeneous2},
\begin{align*}
\|f^k(y)- u\| &= \|\lambda f^k(z) + (1-\lambda) u - u\| \\
&= \lambda \|f^k(z) - u\|
\end{align*}
for all $k \in \mathbb{N}$. Since $\lambda \ne 0$, it follows that $f^k(z)$ converges to $u$ and therefore $z \in N_\beta(u) \cap U$. Because $f^m(N_\beta(u) \cap U) \subseteq N_\alpha(u) \cap U$, we have
\begin{align*}
\|f^m(y)- u\| &= \lambda \|f^m(z) - u\| \\
&\le \lambda \alpha \le \frac{\alpha}{\beta} \|y-u\|.
\end{align*}
Since $f^k(x) \in N_\beta(u)$ for all $k$ sufficiently large, this completes the proof with $\gamma = \frac{\alpha}{\beta}$.
\end{proof}
Applying Theorem \ref{thm:piecewise} to the metric space $(\operatorname{int} C, d_T)$ gives:
\begin{corollary}
Let $C$ be a closed cone with nonempty interior in a finite dimensional Banach space. Let $f:\operatorname{int} C \rightarrow \operatorname{int} C$ be order-preserving, subhomogeneous, and piecewise affine. If $f^k(x)$ converges to a fixed point $u \in \operatorname{int} C$ for some $x \in \operatorname{int} C$, then there exists $m \in \mathbb{N}$ and $0 < \gamma < 1$ such that
$$\|f^{k+m}(x) - u\| \le \gamma \|f^{k}(x)-u\|$$
for all $k \in \mathbb{N}$ sufficiently large.
\end{corollary}
Combining the previous result with Lemma \ref{lem:technical}, we have:
\begin{corollary}
Let $C$ be a closed cone with nonempty interior in a finite dimensional Banach space. Let $f:\operatorname{int} C \rightarrow \operatorname{int} C$ be order-preserving, homogeneous, and piecewise affine. Let $g(x) = f(x)/\|f(x)\|$ for all $x \in \operatorname{int} C$. If $g^k(x)$ converges to an eigenvector $u \in \operatorname{int} C$ for some $x \in \operatorname{int} C$, then there exists $0 < \theta < 1$ such that
$$\limsup_{k \rightarrow \infty} \|g^{k}(x) - u\|^{1/k} \le \theta.$$
\end{corollary}
A theorem of Kohlberg \cite{Kohlberg80} says that if $X$ is a finite dimensional Banach space and $f: X \rightarrow X$ is piecewise affine and nonexpansive with respect to the norm on $X$, then $f$ has an invariant half-line. Specifically, there exist $v, w \in X$ such that $f(v+tw) = v+(t+1)w$ for all $t \ge 0$. The vector $w$ is unique for $f$ and is referred to as the \emph{cycle time vector} in some applications. Note that $f$ has a fixed point if and only if $w=0$.
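The following small example (ours) exhibits an invariant half-line and its cycle time vector.

```latex
% Example (ours): a fixed-point-free affine nonexpansive map with an
% invariant half-line.
On $(\mathbb{R}^2, \|\cdot\|_\infty)$, let
$f(x_1,x_2) = (x_2 + 2,\, x_1)$. This map is affine, nonexpansive, and
has no fixed point. With $v = (1,0)$ and $w = (1,1)$,
$$f(v + tw) = (t+2,\, t+1) = v + (t+1)w \qquad \text{for all } t \ge 0,$$
so $w = (1,1)$ is the cycle time vector. Consistent with this,
$f^2(x) = x + (2,2)$, so each coordinate advances by $1$ per iteration
on average.
```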
Suppose that $f$ is a piecewise affine nonexpansive map that does not have a fixed point. The following lemma says that after a finite number of steps, the dynamics of the iterates of $f$ on $X$ are completely determined by the dynamics of a piecewise affine nonexpansive map that does have a fixed point. Thus Theorem \ref{thm:piecewise} can be applied to show that if $f^k(x)$ converges to one of the invariant half-lines of $f$, then the rate of convergence is linear.
\begin{lemma}
Let $X$ be a finite dimensional Banach space. Let $f: X \rightarrow X$ be piecewise affine and nonexpansive with respect to the norm on $X$. Suppose for $v,w \in X$ that $f(v+tw) = v+(t+1)w$ for all $t \ge 0$. Let $g(x) = f(x) - w$ for all $x \in X$. Then for every $x \in X$, there exists $m \in \mathbb{N}$ such that
$$f^{k+m}(x) = g^k(f^m(x)) + k w$$
for all $k \in \mathbb{N}$.
\end{lemma}
\begin{proof}
Since $f$ is piecewise affine, there is a finite collection of convex subsets $M_i \subset X$ such that $f(x) = A_i x + b_i$ for all $x \in M_i$ where $A_i$ is a linear transformation on $X$ and $b_i \in X$.
Fix $R > 0$ and let $B_R(y) = \{x \in X: \|x-y\| \le R \}$. Suppose that $M_i \cap B_R(v+tw) \ne \varnothing$ for all $t \ge 0$ large enough. Then we claim that $x+tw \in M_i$ whenever $x \in M_i$ and $t\ge 0$. Otherwise, by the Hahn-Banach theorem there is a linear functional $\phi \in X^*$ and a constant $c \in \mathbb{R}$ such that $\phi(y) \ge c$ for all $y \in M_i$, and $\phi(x+tw) < c$ for some $t>0$. Then $\phi(w) < 0$. Observe that for any $y \in B_R(v+tw)$,
\begin{align*}
\phi(y) &= \phi(v) + t \phi(w) + \phi(y-v-tw)\\
&\le \phi(v) + t \phi(w) + R\|\phi\|.
\end{align*}
In particular, if $t > 0$ is large enough, then $\phi(y) < c$ for all $y \in M_i \cap B_R(v+tw)$. That would mean that $M_i \cap B_R(v+tw)$ is empty, which is a contradiction. This proves the claim.
Now, suppose that $x + tw \in M_i$ for all $t \ge 0$. Then for all $t \ge 0$, we have
\begin{align*}
\|x-v\| &\ge \|f(x+tw) - f(v+tw)\| & (\text{nonexpansiveness})\\
&= \|A_ix + tA_iw + b_i - v - (t+1)w\| \\
&= \|f(x) + t A_iw - f(v) - tw \| \\
&\ge t\| A_iw - w \| - \|f(x)-f(v) \|. & (\text{triangle ineq.})
\end{align*}
This means that $\|A_i w - w\|$ must be zero and therefore $f(x+tw) = f(x)+tw$ for all $t \ge 0$.
We have shown that $f(x+tw) = f(x) + tw$ for all $x \in B_R(v+sw)$ and $t \ge 0$ when $s$ is sufficiently large. In particular, for any such $x$, $f^k(x) = g^k(x) + k w$ for all $k \in \mathbb{N}$, where $g(y) = f(y) - w$ for all $y \in X$. The conclusion follows by observing that if $\|x-v\| \le R$, then $f^m(x) \in B_R(v+mw)$ for $m \in \mathbb{N}$. Thus by choosing $m$ large enough, we have $f^{k+m}(x) = g^k(f^m(x)) + k w$ for all $k \in \mathbb{N}$.
\end{proof}
\subsection{Analytic and multiplicatively convex functions}
Let $[n] := \{1, \ldots, n\}$. Let $e_i$, $i \in [n]$, denote the standard basis vectors in $\mathbb{R}^n$. Let $\mathbf{1}$ denote the vector in $\mathbb{R}^n$ with all entries equal to one. We use $\log : \mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n$ and $\exp: \mathbb{R}^n \rightarrow \mathbb{R}^n_{>0}$ to denote the entrywise natural logarithm and exponential functions on $\mathbb{R}^n$.
Let $D$ be an open convex subset of $\mathbb{R}^n$. A function $f: D \rightarrow \mathbb{R}^n$ is \emph{analytic} if each entry $f_i$ is a real analytic function, and $f$ is \emph{convex} if each entry $f_i$ is a convex function. For $f: \mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$ we say that $f$ is \emph{multiplicatively convex} if $\log \circ f \circ \exp$ is a convex function on $\mathbb{R}^n$.
Order-preserving homogeneous maps $f: \mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$ that are analytic and multiplicatively convex are particularly nice. Many important features of these functions are determined by the \emph{directed graph} $\mathcal{G}(f)$ \emph{associated with} $f$ which has vertices $[n]$ and an arc from $i$ to $j$ when
$$\lim_{t \rightarrow \infty} f(\exp(te_j))_i = \infty.$$
If $f$ is analytic, it is differentiable and the Jacobian matrix $f'(x)$ at each $x \in \mathbb{R}^n_{>0}$ is a nonnegative matrix. When $f$ is analytic and multiplicatively convex, the directed graph $\mathcal{G}(f'(x))$ associated with the Jacobian matrix is always the same as $\mathcal{G}(f)$ at every $x \in \mathbb{R}^n_{>0}$ \cite[Lemma 4.9]{Lins22}.
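The following example (ours) computes the associated graph of a simple analytic, multiplicatively convex map.

```latex
% Example (ours): the graph G(f) of a geometric-mean map.
Let $f(x_1,x_2) = (\sqrt{x_1 x_2},\, x_2)$ on $\mathbb{R}^2_{>0}$. This
map is order-preserving, homogeneous, and analytic, and it is
multiplicatively convex because
$(\log \circ f \circ \exp)(y) = \left(\tfrac{y_1 + y_2}{2},\, y_2\right)$
is linear. Since $f(\exp(t e_1))_1 = f(\exp(t e_2))_1 = e^{t/2}$ and
$f(\exp(t e_2))_2 = e^{t}$ all tend to infinity while
$f(\exp(t e_1))_2 = 1$ stays bounded, the graph $\mathcal{G}(f)$ has
the arcs $1 \to 1$, $1 \to 2$, and $2 \to 2$, but no arc $2 \to 1$.
```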
\begin{lemma} \label{lem:analTypeK}
Let $f: \mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$ be order-preserving, homogeneous, multiplicatively convex, and analytic. Then $f$ is type K order-preserving if and only if there is an arc from $i$ to itself in $\mathcal{G}(f)$ for every $i \in [n]$.
\end{lemma}
\begin{proof}
($\Rightarrow$) Since $f$ is type K order-preserving, $f(\exp(e_i)) \ge f(\mathbf{1})+\epsilon e_i$ for some $\epsilon > 0$. This means that the function $t \mapsto \log(f(\exp(te_i))_i)$ is not constant. Therefore since $\log(f(\exp(te_i))_i)$ is a convex function which is increasing and not constant for $t > 0$, it follows that $\lim_{t\rightarrow \infty} f(\exp(te_i))_i = \infty$.
($\Leftarrow$) If there is an arc from every $i \in [n]$ to itself in $\mathcal{G}(f)$, then the entries on the main diagonal of the Jacobian matrix $f'(x)$ are all positive for every $x \in \mathbb{R}^n_{>0}$. Since $f$ is analytic, the entries of $f'(x)$ depend continuously on $x$.
Suppose $x, y \in \mathbb{R}^n_{>0}$ with $x \le y$. Since the closed line segment connecting $x$ to $y$ is a compact set, we can choose an $\epsilon > 0$ such that $f'(\lambda y + (1-\lambda) x) - \epsilon \operatorname{id}$ is a nonnegative matrix for every $0 \le \lambda \le 1$. This implies that $f(x)-\epsilon x \le f(y) - \epsilon y$ or equivalently, $f(y)-f(x) \ge \epsilon (y-x)$ which proves that $f$ is type K order-preserving.
\end{proof}
Let $f: \mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$ be order-preserving and homogeneous. Let $C_1, \ldots, C_m$ denote the strongly connected components of $\mathcal{G}(f)$. We call these components the \emph{classes} of $f$. A class is \emph{final} if there are no arcs leaving the class in $\mathcal{G}(f)$. If $C$ is a final class, and $j \in C$, then $f(x)_j$ only depends on the entries $x_i$ of $x$ where $i \in C$ \cite[Lemma 4.5]{Lins22}. For any subset $J \subseteq [n]$, we let
$$\mathbb{R}^J = \{x \in \mathbb{R}^n : x_j = 0 \text{ for all } j \notin J\}$$
and
$$\mathbb{R}^J_{>0} = \{x \in \mathbb{R}^J : x_j > 0 \text{ for all } j \in J\}.$$
Let $P_J \in \mathbb{R}^{n \times n}$ be the orthogonal projection onto $\mathbb{R}^J$. Since $f$ extends continuously to $\mathbb{R}^n_{\ge 0}$ and the extension is order-preserving and homogeneous \cite[Corollary 4.6]{BuNuSp03}, we can define the following functions for every $J \subseteq [n]$
$$f_J := P_J f P_J.$$
The \emph{upper Collatz-Wielandt number} $r(f_J)$ is defined by
$$r(f_J) = \inf_{x \in \mathbb{R}^J_{>0}} \max_{j \in J} \frac{f_J(x)_j}{x_j}.$$
If $f_J(\mathbb{R}^J_{>0}) \subseteq \mathbb{R}^J_{>0}$, then $r(f_J)$ equals the cone spectral radius of $f_J$ defined in section 2 \cite[Theorem 5.6.1]{LemmensNussbaum}. In particular, the cone spectral radius $r_{\mathbb{R}^n_{\ge 0}}(f)$ and the upper Collatz-Wielandt number $r(f)$ are the same, so we will use $r(f)$ to denote both. A class $C$ is called \emph{basic} if $r(f_C) = r(f)$.
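For a linear map, the arcs of $\mathcal{G}(f)$ correspond to the positive entries of the matrix, and the following example (ours) shows that the infimum defining the upper Collatz-Wielandt number need not be attained.

```latex
% Example (ours): classes and the Collatz-Wielandt number of a
% triangular matrix.
Let $f(x) = Ax$ with $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$,
so that $\mathcal{G}(f)$ has arcs $1 \to 1$, $1 \to 2$, and $2 \to 2$.
The classes are $C_1 = \{1\}$ and $C_2 = \{2\}$; only $C_2$ is final
because the arc $1 \to 2$ leaves $C_1$, while both classes are basic
since $r(f_{C_1}) = r(f_{C_2}) = 1 = r(f)$. Moreover,
$$\max\left\{\frac{x_1 + x_2}{x_1},\, \frac{x_2}{x_2}\right\}
= 1 + \frac{x_2}{x_1} > 1
\qquad \text{for all } x \in \mathbb{R}^2_{>0},$$
so the infimum $r(f) = 1$ is not attained.
```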
If $f$ is multiplicatively convex and no arcs leave $J$ in $\mathcal{G}(f)$, then $f_J(x)_j = f(x)_j$ for all $x \in \mathbb{R}^n_{>0}$ and $j \in J$ \cite[Lemma 4.6]{Lins22}. Put another way,
\begin{equation} \label{decouple}
f_J = P_J f
\end{equation}
on $\mathbb{R}^n_{\ge 0}$ when $f$ is multiplicatively convex and no arcs leave $J$ in $\mathcal{G}(f)$. In particular this is true for the final classes of $f$.
Gaubert and Gunawardena \cite[Theorem 2]{GaGu04} proved that if $f:\mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$ is order-preserving and homogeneous and $\mathcal{G}(f)$ is strongly connected, then $f$ has an eigenvector in $\mathbb{R}^n_{>0}$. Therefore $f_C$ has an eigenvector in $\mathbb{R}^C_{>0}$ with eigenvalue $r(f_C)$ for every final class $C$.
Recently, it was shown that any order-preserving, homogeneous, multiplicatively convex $f: \mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$ has an eigenvector in $\mathbb{R}^n_{>0}$ if its basic classes and final classes coincide; if $f$ is also analytic, then the converse holds as well, that is, $f$ has an entrywise positive eigenvector if and only if every basic class is final and every final class is basic \cite[Theorem 4.4]{Lins22}. This necessary and sufficient condition for the existence of eigenvectors in $\mathbb{R}^n_{>0}$ generalizes analogous results for nonnegative matrices \cite[Theorem 2.3.10]{BermanPlemmons} and nonnegative tensors \cite[Theorem 5]{HuQi16}.
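The following example (ours) illustrates this criterion for a linear map.

```latex
% Example (ours): a basic class that is not final obstructs a positive
% eigenvector.
Let $f(x) = Bx$ with $B = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}$.
Here $r(f) = 2$, the only basic class is $\{1\}$, and the only final
class is $\{2\}$, so the basic and final classes differ. Accordingly,
$B$ has no entrywise positive eigenvector: its eigenvectors are the
multiples of $(1,0)$ and $(1,-1)$. By contrast, for $B = 2I$ every
class is both basic and final, and $(1,1) \in \mathbb{R}^2_{>0}$ is an
eigenvector.
```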
The main result of this section is the following.
\begin{theorem} \label{thm:anal}
Let $f:\mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$ be order-preserving, homogeneous, analytic, and multiplicatively convex. Let $g(x) = f(x)/\|f(x)\|$ for all $x \in \mathbb{R}^n_{>0}$. If $f$ is type K order-preserving and $f$ has an eigenvector in $\mathbb{R}^n_{>0}$, then there is a $0 < \theta < 1$ such that for every $x \in \mathbb{R}^n_{>0}$, there is an eigenvector $u \in \mathbb{R}^n_{>0}$ (which may depend on $x$) for which
$$\limsup_{k \rightarrow \infty} \|g^k(x) - u \|^{1/k} \le \theta.$$
\end{theorem}
Before proving Theorem \ref{thm:anal} we'll need the following lemma.
\begin{lemma} \label{lem:helper}
Let $(M,d)$ be a metric space, and let $F:M \rightarrow M$ be nonexpansive with respect to $d$. Let $u \in M$ be a fixed point of $F$ and let $(x^m)_{m \in \mathbb{N}}$ be a sequence in $M$ that converges to $u$. If there are constants $\eta, \theta \in (0, 1)$ and $c > 0$ such that
\begin{equation} \label{helperA}
d(F^k(x),u) \le c \, \eta^k
\end{equation}
for all $x$ in a neighborhood $B_R(u)$ of $u$ and all $k \in \mathbb{N}$ and
\begin{equation} \label{helperB}
d(F(x^m),x^{m+1}) \le c \, \theta^m
\end{equation}
for all $m \in \mathbb{N}$, then
$$\limsup_{k \rightarrow \infty} d(x^k,u)^{1/k} \le \eta^\lambda = \theta^{1-\lambda} < 1 \text{ where }\lambda = \dfrac{\log \theta}{\log \eta + \log \theta}.$$
\end{lemma}
\begin{proof}
For any $k, m \in \mathbb{N}$,
\begin{align*}
d(x^{k+m},F^k(x^m)) &\le \sum_{i = 0}^{k-1} d(F^{i}(x^{m+k-i}),F^{i+1}(x^{k+m-i-1})) & (\text{triangle ineq.})\\
&\le \sum_{i = 0}^{k-1} d(x^{m+k-i},F(x^{k+m-i-1})) & (\text{nonexpansiveness})\\
&\le \sum_{i = 0}^{k-1} c \, \theta^{m+k-i-1} \le ck \, \theta^m. & (\text{by } \eqref{helperB})\\
\end{align*}
As long as $m$ is large enough so that $x^m \in B_R(u)$, \eqref{helperA} holds so
\begin{align*}
d(x^{k+m},u) &\le d(x^{k+m},F^k(x^m)) + d(F^k(x^m),u) & (\text{triangle ineq.})\\
&\le ck \, \theta^m + c \, \eta^k \le ck (\theta^m + \eta^k).
\end{align*}
Let $n = k+m$. Observe that
$$(\theta^m + \eta^k)^{1/n} \le 2 \max \{ \eta^\lambda, \theta^{1-\lambda} \}$$
where $\lambda = k/n$. Therefore
$$d(x^{n},u)^{1/n} \le (2cn)^{1/n} \max \{ \eta^\lambda, \theta^{1-\lambda}\}$$
for every rational $\lambda$ with denominator $n$ and $m = (1-\lambda)n$ large enough so that $x^{m} \in B_R(u)$.
From this, we conclude that
$$\limsup_{n \rightarrow \infty} d(x^n,u)^{1/n} \le \max \{ \eta^\lambda, \theta^{1-\lambda} \}$$
for every $0 \le \lambda < 1$.
Since $\eta^\lambda$ is a decreasing function of $\lambda$ while $\theta^{1-\lambda}$ is increasing, the minimum occurs when $\eta^{\lambda} = \theta^{1-\lambda}$. This happens when $\lambda = \dfrac{\log \theta}{\log \eta + \log \theta}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:anal}]
Since $f$ has an eigenvector in $\mathbb{R}^n_{>0}$, its basic and final classes are the same \cite[Theorem 4.4]{Lins22}. We separate the proof into three cases based on the final classes of $f$. \\
\noindent
{Case I.} Suppose that $\mathcal{G}(f)$ is strongly connected. In this case, $f'(x)$ is irreducible for all $x \in \mathbb{R}^n_{>0}$. Since $f$ is type K order-preserving, Lemma \ref{lem:analTypeK} implies that the entries of $f'(x)$ on the main diagonal are all positive. This means that $f'(x)$ is primitive \cite[Lemma 8.5.5]{HornJohnson}. In this case, it is known that $f$ has a unique eigenvector $u \in \mathbb{R}^n_{>0}$ with $\|u\| = 1$ \cite[Corollary 6.4.7]{LemmensNussbaum}. A proof that $g^k(x)$ converges to $u$ at a linear rate for all $x \in \mathbb{R}^n_{>0}$ can be found in \cite[Corollary 5.2]{FrGaHa13}. Their result is stated for a special class of polynomial maps, but actually applies to any order-preserving, homogeneous map $f:\mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$ with an eigenvector $u$ where the Jacobian matrix $f'(u)$ exists and is primitive. They show by a standard linearization argument that
$$\limsup_{k \rightarrow \infty} \|g^k(x) - u\|^{1/k} \le \frac{\rho_2(f'(u))}{\rho(f'(u))}$$
where $\rho$ denotes the spectral radius of $f'(u)$ and $\rho_2$ denotes the second largest eigenvalue in absolute value, which must be strictly less than $\rho(f'(u))$ since $f'(u)$ is primitive. \\
\noindent
{Case II.} Suppose that all classes of $\mathcal{G}(f)$ are final and basic. In this case, the map $f$ decouples and the value of $f(x)_i$ only depends on the entries of $x$ in the same class as $i$ by \eqref{decouple}. We may assume without loss of generality that $r(f) = 1$ by replacing $f$ with $r(f)^{-1} f$ if necessary. Let $x \in \mathbb{R}^n_{>0}$. For any final class $C$, the previous case and Lemma \ref{lem:technical} imply there is a $u_C \in \mathbb{R}^C_{>0}$ and $0 < \theta_C < 1$ such that
$$\limsup_{k \rightarrow \infty} \|f_C^k(x) - u_C \|^{1/k} \le \theta_C.$$
Therefore
$$\limsup_{k \rightarrow \infty} \|f^k(x) - u \|^{1/k} \le \max_{C \in \mathcal{C}} \theta_C$$
where $\mathcal{C}$ is the set of all final classes of $f$ and $u = \sum_{C \in \mathcal{C}} u_C$. Note that each $u_C$ might depend on $x$, but the constants $\theta_C$ do not.
Combined with Lemma \ref{lem:technical}, this proves that
$$\limsup_{k \rightarrow \infty} \left\|g^k(x) - \frac{u}{\|u\|} \right\|^{1/k} \le \max_{C \in \mathcal{C}} \theta_C.$$ \\
\noindent
{Case III.} Suppose that $f$ has classes that are not final. Let $J$ be the union of the final classes of $f$, and let $I = [n] \setminus J$. We again assume that $r(f) = 1$. Fix $x \in \mathbb{R}^n_{>0}$. Since $f$ is type K order-preserving, there is a fixed point $u \in \mathbb{R}^n_{>0}$ such that $f^k(x)$ converges to $u$ by Theorem \ref{thm:main}.
Let $u_I := P_I u$ and $u_J := P_J u$.
Since there are no arcs in $\mathcal{G}(f)$ that leave $J$, \eqref{decouple} holds. Therefore
$$f_J(u_J) = P_J f P_J(u) = P_J f (u) = P_J u = u_J,$$
so $u_J$ is a fixed point of $f_J$. By the result of the previous case, there is a constant $0 < \theta_J < 1$ which does not depend on $x$ such that
$$\limsup_{k \rightarrow \infty} \|f_J^k(x) - u_J\|^{1/k} \le \theta_J.$$
Then by Lemma \ref{lem:technical} we also have
\begin{equation} \label{thetaJlim}
\limsup_{k \rightarrow \infty} d_T(f_J^k(x), u_J)^{1/k} \le \theta_J.
\end{equation}
Since $u$ is an eigenvector of the homogeneous map $f$ with eigenvalue 1, it is also an eigenvector of $f'(u)$ with the same eigenvalue. Since $f'(u)$ is a nonnegative matrix with an entrywise positive eigenvector, the basic and final classes of $f'(u)$ must be the same \cite[Theorem 2.3.10]{BermanPlemmons}. Also, the spectral radius of $f'(u)$ is $\rho(f'(u)) = 1$. Since $\mathcal{G}(f'(u)) = \mathcal{G}(f)$, the final classes of $f$ and $f'(u)$ are the same. Therefore, the spectral radius $\rho(P_I f'(u) P_I)$ is strictly less than one.
Let $F(y) := f(P_I y + u_J)$ for any $y \in \mathbb{R}^n_{>0}$. Observe that $F$ is order-preserving and subhomogeneous, so it is nonexpansive with respect to Thompson's metric. Also, $F(u)= u$. By \eqref{decouple},
$$P_J F(y) = P_J f(P_I y + u_J) = P_J f P_J(P_Iy + u_J) = u_J$$
for all $y \in \mathbb{R}^n_{>0}$. Since $F(y) = P_I F(y) + P_J F(y)$, it follows that $(P_J F)'(u) = 0$ and
\begin{align*}
F'(u) &= (P_I F)'(u) + (P_J F)'(u) \\
&= (P_I F)'(u) \\
&= P_I f'(u) P_I. & \text{(chain rule)}
\end{align*}
Therefore $\rho:=\rho(F'(u)) < 1$. By \cite[Lemma 5.6.10]{HornJohnson}, for any $\epsilon > 0$, there is a norm $\|\cdot\|$ on $\mathbb{R}^n$ such that
$$\|F'(u)(y-u)\| \le (\rho+\epsilon) \|y-u\|$$
for all $y \in \mathbb{R}^n_{>0}$.
Since $F$ is differentiable, for any $\epsilon > 0$, there is a neighborhood around $u$ such that
$$\|u + F'(u)(y-u) - F(y) \| \le \epsilon \|y-u\|.$$
By choosing $\epsilon > 0$ small enough, we have $\rho+2\epsilon < 1$ and
$$
\|F(y) - u\| \le (\rho+2\epsilon)\|y-u\|
$$
and therefore
$$\|F^k(y) - u\| \le (\rho+2\epsilon)^k\|y-u\|$$
for all $y$ in a sufficiently small neighborhood $B_r(u) = \{y \in \mathbb{R}^n_{>0} : d_T(y,u) \le r\}$ and $k \in \mathbb{N}$. Then by \eqref{normalThompson}, there is a $c > 0$ such that
\begin{equation} \label{rhoineq}
d_T(F^k(y),u) \le c\, (\rho+2\epsilon)^k d_T(y,u).
\end{equation}
For any $y \in \mathbb{R}^n_{>0}$, observe that
\begin{align*}
d_T(F(y),f(y)) &= d_T(f(y_I + u_J), f(y)) \\
&\le d_T(P_I y + u_J, y) & (\text{nonexpansiveness})\\
&= d_T(u_J, P_J y).
\end{align*}
In particular,
\begin{align*}
d_T(F(f^k(x)),f^{k+1}(x)) &\le d_T(u_J,P_J f^k(x)) \\
&= d_T(u_J, f_J^k(x)) & (\text{by } \eqref{decouple})
\end{align*}
for all $k \in \mathbb{N}$.
Then, by \eqref{thetaJlim}, we must have
$$d_T(F(f^k(x)),f^{k+1}(x)) \le c (\theta_J+\epsilon)^k$$
for all $k \in \mathbb{N}$ if we replace $c$ by a sufficiently large constant. Now the conditions of Lemma \ref{lem:helper} are satisfied for the metric space $(\mathbb{R}^n_{>0}, d_T)$ with the map $F$, fixed point $u$, sequence $x^m:=f^m(x)$, and constants $\theta = \theta_J+\epsilon$ and $\eta = \rho+2 \epsilon$ (where $\epsilon > 0$ can be made arbitrarily small). Therefore
$$\limsup_{k \rightarrow \infty} d_T(f^k(x), u )^{1/k} \le \rho^{\lambda} = \theta_J^{1-\lambda} < 1$$
where $\lambda = \frac{\log \theta_J}{\log \rho+ \log \theta_J}$. The conclusion of the theorem follows from Lemma \ref{lem:technical}.
\end{proof}
\begin{remark}
If $f:\mathbb{R}^n_{>0} \rightarrow \mathbb{R}^n_{>0}$ is order-preserving and homogeneous, but not type K order-preserving, then in general the iterates of $g(x) = f(x)/\|x\|$ will converge to a periodic orbit \cite[Theorem 8.1.7]{LemmensNussbaum}. Suppose $f$ is also multiplicatively convex and real analytic and $f$ has an eigenvector in $\mathbb{R}^n_{>0}$. Let $p$ be the least common multiple of the cycle lengths in $\mathcal{G}(f)$. Then $\mathcal{G}(f^p)$ has an arc from $i$ to itself for all $i \in [n]$, so $f^p$ is type K order-preserving by Lemma \ref{lem:analTypeK}. Therefore Theorem \ref{thm:anal} implies that $f^{kp}(x)$ converges to a fixed point of $f^p$ in $\mathbb{R}^n_{>0}$ at a linear rate as $k \rightarrow \infty$ for every $x \in \mathbb{R}^n_{>0}$. This means that $f^k(x)$ converges at a linear rate to the points of a periodic orbit of $f$.
\end{remark}
The following two examples show that both analyticity and convexity are necessary to guarantee the linear rate of convergence in Theorem \ref{thm:anal}.
\begin{example}
It is not enough for $f$ to be order-preserving, homogeneous, and analytic to guarantee a linear rate of convergence to fixed points in the interior of a cone. Consider $T:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ defined by
$$T(x) = \begin{bmatrix}
\tfrac{1}{2}(x_1+x_2)-\arctan(\tfrac{1}{2}(x_2-x_1)) \\
\tfrac{1}{2}(x_1+x_2)+ \arctan(\tfrac{1}{2}(x_2-x_1))
\end{bmatrix}.$$
It is easy to check that the Jacobian matrix of $T$ is always a nonnegative matrix, so $T$ is order-preserving. Let $f = \exp \circ T \circ \log$. Then $f:\mathbb{R}^2_{>0} \rightarrow \mathbb{R}^2_{>0}$ is order-preserving, homogeneous, and analytic.
Furthermore, $\mathbf{1}$ is the unique eigenvector of $f$ in $\mathbb{R}^2_{>0}$ up to scaling.
Let $x = \begin{bmatrix} \exp(-1) \\ \exp(1) \end{bmatrix}$. Then
$$
f^k(x) = \begin{bmatrix} \exp(-\arctan^k(1)) \\ \exp(\arctan^k(1)) \end{bmatrix}$$
for all $k \in \mathbb{N}$. Note that the sequence $\arctan^k(1)$ converges to zero at a rate that is much slower than linear. Therefore $d_T(f^k(x),\mathbf{1})$ also converges to zero at a sublinear rate.
\end{example}
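A quick numerical companion to this example (our sketch, not part of the paper): here $d_T(f^k(x),\mathbf{1})$ equals the $k$-fold iterate $\arctan^k(1)$, which decays only on the order of $k^{-1/2}$, so doubling $k$ shrinks the error by roughly $1/\sqrt{2}$ rather than by a fixed geometric factor per step:

```python
import math

def arctan_iter(t, k):
    """k-fold iterate of arctan, starting from t."""
    for _ in range(k):
        t = math.atan(t)
    return t

# d_T(f^k(x), 1) = arctan^k(1) for the map in the example
d = {k: arctan_iter(1.0, k) for k in (100, 200, 400, 800)}
# Under a linear (geometric) rate, d would shrink by a fixed factor per step;
# instead doubling k only multiplies the error by about 1/sqrt(2).
```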
\begin{example}
It is also not enough for $f$ to be order-preserving, homogeneous, and multiplicatively convex to guarantee a linear rate of convergence to fixed points in the interior of a cone. Consider $T:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ defined by
$$T(x) = \begin{bmatrix}
\max(x_1,x_2-\arctan(x_2-x_1)) \\
\max(x_2,x_1+\arctan(x_2-x_1))
\end{bmatrix}.$$
One can verify that $T$ is order-preserving by noting that both partial derivatives of $x_2+\arctan(x_1-x_2)$ are always nonnegative. Note that $T$ is also convex, since
$$T(x) = x+ \begin{bmatrix}
\max(0,x_2-x_1-\arctan(x_2-x_1)) \\
\max(0,x_1-x_2+\arctan(x_2-x_1))
\end{bmatrix}$$
and the map $t \mapsto \max(0,t - \arctan(t))$ is convex. If we let $f = \exp \circ T \circ \log$, then $f:\mathbb{R}^2_{>0} \rightarrow \mathbb{R}^2_{>0}$ is order-preserving, homogeneous, and multiplicatively convex. The only eigenvector of $f$ in $\mathbb{R}^2_{>0}$ up to scaling is $\mathbf{1}$. Let $x = \begin{bmatrix} e^{-1} \\ 1 \end{bmatrix}$. Then
$$
f^k(x) = \begin{bmatrix}
\exp(-\arctan^k(1)) \\
1
\end{bmatrix}$$
for all $k \in \mathbb{N}$. Therefore $d_T(f^k(x), \mathbf{1})$ converges to zero, but at a rate much slower than linear.
\end{example}
| {
"timestamp": "2022-07-29T02:21:24",
"yymm": "2207",
"arxiv_id": "2207.14098",
"language": "en",
"url": "https://arxiv.org/abs/2207.14098",
"abstract": "Let $C$ be a closed cone with nonempty interior $C^\\circ$ in a Banach space. Let $f:C^\\circ \\rightarrow C^\\circ$ be an order-preserving subhomogeneous function with a fixed point in $C^\\circ$. We introduce a condition which guarantees that the iterates $f^k(x)$ converge to a fixed point for all $x \\in C^\\circ$. This condition generalizes the notion of type K order-preserving for maps on $\\mathbb{R}^n_{>0}$. We also prove that when iterates converge to a fixed point, the rate of convergence is always R-linear in two special cases: for piecewise affine maps and also for order-preserving, homogeneous, analytic, multiplicatively convex functions on $\\mathbb{R}^n_{>0}$. This later category includes the maps associated with the homogeneous eigenvalue problem for nonnegative tensors.",
"subjects": "Functional Analysis (math.FA); Dynamical Systems (math.DS); Optimization and Control (math.OC)",
"title": "Convergence of iterates in nonlinear Perron-Frobenius theory",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462204837636,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7099326142012032
} |
https://arxiv.org/abs/1111.0094 | Generalization of a few results in Integer Partitions | In this paper, we generalize a few important results in Integer Partitions; namely the results known as Stanley's theorem and Elder's theorem, and the congruence results proposed by Ramanujan for the partition function. We generalize the results of Stanley and Elder from a fixed integer to an array of subsequent integers, and propose an analogue of Ramanujan's congruence relations for the `number of parts' function instead of the partition function. We also deduce the generating function for the `number of parts', and relate the technical results with their graphical interpretations through a novel use of the Ferrer's diagrams. | \section{Introduction}
Partitioning a positive integer $n$ as sum of certain positive integers is a well known problem in the domain of number theory and combinatorics. A {\em partition} of a positive integer $n$ is any non-increasing sequence of positive integers that add up to $n$. The partition function $P(n)$ is defined as the number of unordered partitions of $n$. We also define $Q_k(n)$ as the number of occurrences of the part $k$ in all partitions of $n$, $V_k(n)$ as the number of parts occurring $k$ or more times in the partitions of $n$, and $S(n)$ as the sum of the numbers of distinct members in the partitions of $n$. This notation will be followed throughout the paper.
One of the very well referred results in integer partitions is the one presented by Stanley~\cite{stanley1}, which states the following.
\begin{result}[Stanley]
\label{stanley}
The total number of 1's that occur among all unordered partitions of a positive integer is equal to the sum of the numbers of distinct members of those partitions. In terms of the notation, $S(n) = Q_1(n)$.
\end{result}
One direction of generalizing Result~\ref{stanley} is the Elder's theorem~\cite{elder1}, which states the following.
\begin{result}[Elder]
\label{elder}
The total number of occurrences of an integer $k$ among all unordered partitions of $n$ is equal to the number of parts that occur $k$ or more times over those partitions. In terms of the notation, $V_k(n) = Q_k(n)$.
\end{result}
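Both classical results can be checked by brute force for small $n$. The following Python sketch (ours, not part of the original papers) enumerates all partitions and verifies $S(n)=Q_1(n)$ and $V_k(n)=Q_k(n)$:

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def Q(k, n):   # total occurrences of the part k among all partitions of n
    return sum(p.count(k) for p in partitions(n))

def V(k, n):   # number of parts occurring k or more times, over all partitions
    return sum(sum(1 for part in set(p) if p.count(part) >= k)
               for p in partitions(n))

def S(n):      # sum of the numbers of distinct members of the partitions of n
    return sum(len(set(p)) for p in partitions(n))

for n in range(1, 13):
    assert S(n) == Q(1, n)             # Stanley
    for k in range(1, 6):
        assert V(k, n) == Q(k, n)      # Elder
```

For instance, $S(5) = Q_1(5) = 12$.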
In this paper, we generalize Result~\ref{stanley} in a different direction than what has been proposed in Result~\ref{elder}. We consider not only a single integer $n$, but generalize the premise to include subsequent integers. Our first result is as follows.
\begin{theorem}
\label{stanleyext}
Given any positive integer $n$ and any positive integer $k$,
$$S(n) \: = \: Q_k(n) + Q_k(n+1) + Q_k(n+2) + \cdots + Q_k(n+k-1) \: = \:
\sum_{i=0}^{k-1} Q_k(n+i).$$
\end{theorem}
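The identity above can likewise be verified numerically for small parameters (our sketch; the helper functions mirror the definitions of Section 1):

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def Q(k, n):   # total occurrences of the part k among all partitions of n
    return sum(p.count(k) for p in partitions(n))

def S(n):      # sum of the numbers of distinct members of the partitions of n
    return sum(len(set(p)) for p in partitions(n))

# S(n) = Q_k(n) + Q_k(n+1) + ... + Q_k(n+k-1)
for n in range(1, 9):
    for k in range(1, 6):
        assert S(n) == sum(Q(k, n + i) for i in range(k))
```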
We also generalize Result~\ref{elder} in a similar direction by including subsequent integers into the domain. The formal result is stated as follows.
\begin{theorem}
\label{elderext}
Given any positive integer $n$ and any positive integer $k$,
$$V_k(n) \: = \: Q_{rk}(n) + Q_{rk}(n+k) + Q_{rk}(n+2k) + \cdots + Q_{rk}(n+(r-1)k) \: = \:
\sum_{i=0}^{r-1} Q_{rk}(n+ik),$$
where $r$ can be chosen to be any positive integer.
\end{theorem}
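Again, a brute-force check of the identity for small $n$, $k$, and $r$ (our sketch, not part of the paper):

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def Q(k, n):   # total occurrences of the part k among all partitions of n
    return sum(p.count(k) for p in partitions(n))

def V(k, n):   # number of parts occurring k or more times, over all partitions
    return sum(sum(1 for part in set(p) if p.count(part) >= k)
               for p in partitions(n))

# V_k(n) = sum_{i=0}^{r-1} Q_{rk}(n + i*k)
for n in range(1, 9):
    for k in range(1, 4):
        for r in range(1, 4):
            assert V(k, n) == sum(Q(r * k, n + i * k) for i in range(r))
```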
These two results complete Results~\ref{stanley} and~\ref{elder}, and trace all possible avenues for generalizing the results proposed by Stanley and Elder. We prove both the generalizations in Section~\ref{proofgen}.
In the theory of integer partitions, a family of elegant congruence relations for the partition function $P(n)$ was proposed and proved by Ramanujan, as follows.
\begin{result}[Ramanujan]
\label{ramanujan}
For every non-negative $n \in \mathbb{Z}$,
\begin{eqnarray*}
P(5n + 4) & \equiv & 0 \pmod{5},\\
P(7n + 5) & \equiv & 0 \pmod{7},\\
P(11n + 6) & \equiv & 0 \pmod{11}.
\end{eqnarray*}
\end{result}
Ramanujan also conjectured that such congruences exist modulo arbitrary powers of 5, 7, and 11. Many eminent mathematicians have worked on related results, and the strongest result to date, proved by Ahlgren and Ono~\cite{ono1}, states that {\em such congruence relations exist for every modulus co-prime to 6}. In this paper, we propose a simple analogue of the Ramanujan results that holds for the function $Q_k(n)$, where $k$ is a positive integer and $n$ a non-negative integer. The formal statement of our analogue is as follows.
\begin{theorem}
\label{ramanujanext}
Given any non-negative integer $n$, following the notation as before, one has
\begin{eqnarray*}
Q_{5} (5n + 4) & \equiv & 0 \pmod{5},\\
Q_{7} (7n + 5) & \equiv & 0 \pmod{7},\\
Q_{11} (11n + 6) & \equiv & 0 \pmod{11}.
\end{eqnarray*}
\end{theorem}
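These congruences can be confirmed directly for small $n$ by enumerating partitions (our sketch); for example $Q_5(9)=5$, $Q_7(12)=7$, and $Q_{11}(17)=11$:

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def Q(k, n):   # total occurrences of the part k among all partitions of n
    return sum(p.count(k) for p in partitions(n))

# the analogue of Ramanujan's congruences for Q_k, checked for small n
for n in range(3):
    assert Q(5, 5 * n + 4) % 5 == 0
    assert Q(7, 7 * n + 5) % 7 == 0
assert Q(11, 6) % 11 == 0 and Q(11, 17) % 11 == 0
```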
Two common tools for handling integer partitions are Generating functions and Ferrer's diagrams. In the process of generalizing the results of Stanley, Elder and Ramanujan, we also deduce the generating function for $Q_k(n)$ and propose an intuitive explanation of {\em `adding points'} to Ferrer's diagram, which integrates the technical results with their graphical interpretations.
\section{Proof of the Generalizations}
\label{proofgen}
To prove the generalizations stated earlier, we shall require a few preliminary results. One may find the following result in the current literature~\cite{sloane1} on integer partitions.
\begin{result}
\label{res1}
Given any positive integer $n$, one has $Q_1(n) = \sum_{i=0}^{n-1} P(i)$.
\end{result}
We also use the following lemma for our proofs of the generalizations.
\begin{lemma}
\label{lem1}
Given any two positive integers $k, n$, one has $Q_k(n) = Q_k(n-k) + P(n-k)$.
\end{lemma}
\begin{proof}
For a fixed positive integer $k$, a part of size $k$ occurs at least once in all partitions of $n$ of the form $\{k, R\}$, where $R$ denotes a partition of $n-k$. This amounts to at least $P(n-k)$ occurrences of $k$ in partitions of $n$. Moreover, the part $k$ may occur within the partition $R$ of $n-k$, which contributes $Q_k(n-k)$ to the total number of occurrences of $k$. Adding the two contributions, we get the number of occurrences of $k$ in partitions of $n$ as $Q_k(n) = Q_k(n-k) + P(n-k)$.
\end{proof}
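The recurrence is easy to confirm numerically for small parameters (our sketch, not part of the paper):

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def P(n):      # the partition function, with P(0) = 1
    return sum(1 for _ in partitions(n))

def Q(k, n):   # total occurrences of the part k among all partitions of n
    return sum(p.count(k) for p in partitions(n))

# Q_k(n) = Q_k(n-k) + P(n-k)
for k in range(1, 5):
    for n in range(k, 13):
        assert Q(k, n) == Q(k, n - k) + P(n - k)
```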
These preliminary results will be used to prove the generalizations proposed in this paper. The formal proofs of the main results are presented in the following sections.
\subsection{Proof of Theorem~\ref{stanleyext}}
We have $S(n) = Q_1(n) = \sum_{i=0}^{n-1} P(i)$ by combining Result~\ref{stanley} (Stanley) and Result~\ref{res1}. Using Lemma~\ref{lem1} and solving the recurrence relation therein, we also obtain another known result~\cite{sloane2} as follows.
$$ Q_k(n) = P(n-k) + P(n-2k) + P(n-3k) + \cdots . $$
Consider the partition counts $P(0), P(1), \ldots, P(n-1)$, whose sum is $S(n)$. Given any positive integer $k$, one may group these $n$ terms into $k$ disjoint classes according to the residue of the argument modulo $k$. Thus, one may deduce that
$$ S(n) \: = \: Q_1(n) \: = \: \sum_{i=0}^{n-1} P(i) \: = \: \sum_{j=0}^{k-1} \left( P(n+j-k) + P(n+j-2k) + \cdots \right) \: = \: \sum_{j=0}^{k-1} Q_k(n+j).$$
Hence the result, which holds true for any positive integral values of $n$ and $k$.
\subsection{Proof of Theorem~\ref{elderext}}
In this case, we start with Result~\ref{elder} (Elder), which states $V_k(n) = Q_k(n)$. From the proof of Theorem~\ref{stanleyext}, we have the representation of $Q_k(n)$ as
$$ Q_k(n) = P(n-k) + P(n-2k) + P(n-3k) + \cdots . $$
Consider the partition counts $P(n-k), P(n-2k), P(n-3k), \ldots$ (down to the smallest non-negative argument), whose sum is $Q_k(n)$. Given any positive integer $r$, one may group these terms into $r$ disjoint classes according to the residue of the argument modulo $rk$. Thus, we get
\begin{eqnarray*}
V_k(n) \: = \: Q_k(n) \: & = & P(n-k) + P(n-2k) + P(n-3k) + \cdots \\
& = & \sum_{j=0}^{r-1} \left( P(n+jk-rk) + P(n+jk-2rk) + \cdots \right) \: = \: \sum_{j=0}^{r-1} Q_{rk}(n+jk).
\end{eqnarray*}
Hence the result, which holds true for any positive integral values of $n$, $k$ and $r$.
\subsection{Proof of Theorem~\ref{ramanujanext}}
Let us prove the case for $Q_5(n)$ and the rest will follow in a similar fashion. Note that we have the following representation for $Q_5(5n + 4)$
$$ Q_5(5n + 4) = P(5n + 4 - 5) + P(5n + 4 - 10) + P(5n + 4 - 15) + \cdots, $$
where each $P(\cdot)$ term in the expansion is of the same form $P(5m + 4)$. Thus, each term on the right hand side satisfies $P(\cdot) \equiv 0 \pmod{5}$ as per Ramanujan's congruence results (Result~\ref{ramanujan}). Hence, in turn, $Q_5(5n+4) \equiv 0 \pmod{5}$ as well.
The same is true for $Q_7(7n+5)$ and $Q_{11}(11n + 6)$. One can also derive analogous results for higher order Ramanujan congruences. For example,
\begin{eqnarray*}
Q_5(25n+24) & \equiv & 0 \pmod{5^2},\\
Q_5(125n+99) & \equiv & 0 \pmod{5^3}.
\end{eqnarray*}
In fact, if there exist integers $A(m)$ and $B(m)$ such that $P(A(m) \cdot n + B(m)) \equiv 0 \pmod{m}$, then one can easily show that $Q_{C(m)} (A(m) \cdot n + B(m)) \equiv 0 \pmod{m}$ for some positive integer $C(m)$ that depends on $m$ and $B(m)$.
\section{Other Results}
\label{other}
In this section, we deduce the generating function of $Q_k(n)$ and put forward a graphical understanding of the technical results in terms of Ferrer's diagrams.
\subsection{Generating function of $Q_k(n)$}
As we deal with the function $Q_k(n)$, it is also interesting to study the generating function of this parameter. The generating function for the partition function $P(n)$ is known to be
$$ F(x) = \sum_{m=0}^{\infty} P(m) \cdot x^m = \prod_{n=1}^{\infty} \frac{1}{1 - x^n}$$
where we assume $P(0) = 1$. In this formula, the coefficient of $x^m$ on the right hand side counts all possible ways that $x^m$ is generated by multiplying smaller or equal powers of $x$, which is exactly the number of partitions of $m$. What we require for $Q_k(n)$ is to count the number of $k$'s occurring in each of these partitions. Thus, we want to (i) add $r$ to the count whenever the factor $x^{rk}$ contributes to the product, and (ii) count nothing for partitions in which no power of $x^k$ appears. This intuition gives rise to the following generating function for $Q_k(n)$.
\begin{eqnarray*}
G_k(x) \: = \: \sum_{m=0}^{\infty} Q_k(m) \cdot x^m &=& \frac{1 \cdot x^k + 2 \cdot x^{2k} + 3 \cdot x^{3k} + \cdots }{(1 - x) \cdot (1 - x^2) \cdots (1 - x^{k-1}) \cdot (1 - x^{k+1}) \cdots }\\
& = & (1 - x^k) \cdot (x^k + 2 x^{2k} + 3 x^{3k} + \cdots ) \cdot \prod_{n=1}^{\infty} \frac{1}{1 - x^n} \\
& = & (x^k + x^{2k} + x^{3k} + \cdots ) \cdot \prod_{n=1}^{\infty} \frac{1}{1 - x^n} \: = \: \frac{x^k}{1 - x^k} \cdot \prod_{n=1}^{\infty} \frac{1}{1 - x^n}.
\end{eqnarray*}
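One can confirm this generating function by comparing truncated power-series coefficients against brute-force counts (our sketch; the in-place update builds the coefficients of $\prod_{n\ge 1}(1-x^n)^{-1}$, i.e., the partition numbers):

```python
N, k = 18, 3
# coefficients of prod_{n>=1} 1/(1-x^n) up to degree N (partition numbers P(m))
prod = [1] + [0] * N
for part in range(1, N + 1):
    for deg in range(part, N + 1):
        prod[deg] += prod[deg - part]

# multiply by x^k/(1-x^k) = x^k + x^{2k} + x^{3k} + ...
G = [sum(prod[m - j] for j in range(k, m + 1, k)) for m in range(N + 1)]

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for m in range(N + 1):
    assert G[m] == sum(p.count(k) for p in partitions(m))   # G[m] = Q_k(m)
```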
\subsection{Adding Points to Existing Partitions}
Ferrer's diagram is a tool to graphically represent a partition of an integer, using a horizontal array of dots/stars for each part. In this section, we shall propose and prove a new result in partition theory using the elegant exposition of Ferrer's diagram.
Before we prove our result for adding points to an existing Ferrer's diagram, let us fix the conventions with an illustrative example. Consider the Ferrer's diagrams of all partitions of $5$, as in Figure~\ref{fer5}.
\begin{figure}[htb]
\begin{verbatim}
5 4+1 3+2 3+1+1 2+2+1 2+1+1+1 1+1+1+1+1
***** **** *** *** ** ** *
* ** * ** * *
* * * *
* *
*
\end{verbatim}
\vspace*{-15pt}
\caption{Ferrer's Diagram for all Partitions of 5}
\label{fer5}
\end{figure}
Let us add one new point to each of the diagrams in this figure such that the resulting arrangements also correspond to valid Ferrer's diagrams. One way is to place the new point as a separate part in each of the existing partitions, which yields valid Ferrer's diagrams as output (\# denotes the new point in the figure). Another valid way is to merge the new point with one of the existing parts instead of taking it as a new part. All possibilities are shown in Figure~\ref{fer51}.
\begin{figure}[htb]
\begin{verbatim}
5+1 4+1+1 3+2+1 3+1+1+1 2+2+1+1 2+1+1+1+1 1+1+1+1+1+1
***** **** *** *** ** ** *
# * ** * ** * *
# # * * * *
# # * *
# *
#
6 5+1 4+2 4+1+1 3+2+1 3+1+1+1 2+1+1+1+1
*****# ****# ***# ***# **# **# *#
* ** * ** * *
* * * *
* *
*
4+2 3+3 3+2+1 2+2+2 2+2+1+1
**** *** *** ** **
*# **# *# ** *#
* *# *
*
\end{verbatim}
\vspace*{-15pt}
\caption{Adding one point to Ferrer's diagram of 5.}
\label{fer51}
\end{figure}
Next, let us explain the process of adding more than one point to a specific diagram. Consider the partition $2+2+1$ of $5$, as shown in Figure~\ref{fer5}. To add, say, 2 points to this partition, we treat the new points as a single `packet of 2', instead of adding two separate points. Moreover, we only add this packet (containing two points) in its vertical orientation, i.e., in the form $1+1$, as shown in Figure~\ref{fer221}. With these restrictions imposed on the addition of 2 points, the new partitions that can be generated from $2+2+1$ are as illustrated in Figure~\ref{fer221}.
\begin{figure}[htb]
\begin{verbatim}
2+2+1 Wrong Wrong Right 2+2+1+1+1 3+3+1 Wrong
** ## # # ** **# **
** # # ** **# **
* * * *#
# #
#
\end{verbatim}
\vspace*{-15pt}
\caption{Adding two new points to partition $2+2+1$.}
\label{fer221}
\end{figure}
Note that we do not allow the packet to be added in horizontal or diagonal orientation, and we also abide by the norms of Ferrer's diagram while adding the packet. Based on this notion of point addition to existing Ferrer's diagram, let us propose and prove the following general result.
\begin{theorem}
\label{ferrer}
Consider adding $k$ points to all partitions of a positive integer $n$ in the Ferrer's diagram, where the new $k$ points are added as a single packet with vertical orientation, as discussed before. Then the total number of new partitions generated in this fashion is equal to the total number of $k$'s occurring in all the partitions of $n+k$.
\end{theorem}
\begin{proof}
While adding the packet of $k$ points to the partitions of $n$, we will count the new partitions in terms of the categorization made before: adding the packet as a separate unit, or merging the packet with existing parts. It is quite clear that if we add the packet of $k$ points as a separate unit, then each partition of $n$ generates a single new partition, namely, the existing partition plus $1 + 1 + \cdots + 1$ ($k$ ones). Thus, the total number of partitions generated in this fashion is $P(n)$, the total number of partitions of $n$.
On the other hand, if we look for merging the new points with existing parts, we can fit in the vertical packet of $k$ stars in the Ferrer's diagram if and only if there is a vertical `permissible' opening, i.e., if there is a vertical slot of length $k$ (or more) where one can put this packet without violating the construction rules of Ferrer's diagram. This is possible when there exist at least $k$ copies of the same part in the existing partition. Two cases arise in such a merging situation:
\begin{itemize}
\item If there are $k$ equal parts in a partition, we will have just enough space to fit $k$ vertical points.
\item If there are more than $k$ equal parts, we will still be able to fit just one packet of $k$ points.
\end{itemize}
Thus, the number of new partitions that will be generated in this fashion from an existing partition is the number of parts that occur $k$ times or more in the existing partition. Note that this count is precisely the one mentioned in Elder's theorem (Result~\ref{elder}), i.e., $V_k(n)$.
Considering both possible categories of adding $k$ points to the partitions of $n$, we get the cumulative count of new partitions as $P(n) + V_k(n)$. We further obtain
$$ P(n) + V_k(n) = P(n) + Q_k(n) = Q_k(n+k) $$
from Result~\ref{elder} and Lemma~\ref{lem1}. Hence the desired result.
\end{proof}
Theorem~\ref{ferrer} provides a nice combinatorial intuition towards the problem of adding points to an existing partition, and also integrates Elder's theorem with the extension of Stanley's theorem and the related Lemma. Further explorations in this direction would be to study the general construction of larger partitions using smaller ones as building blocks.
\section{Conclusion}
\label{conclusion}
In this paper, we generalize Stanley's and Elder's theorems in integer partitions by including the notion of subsequent integers in each of the original results. The original results were based on a fixed integer $n$, while we generalize them to include the set of integers $\{n, n+1, n+2, \ldots \}$ in a natural way. Moreover, we propose analogues of Ramanujan's congruence results for the `number of parts' function $Q_k(n)$ instead of the original presentation for the partition function $P(n)$. We show that it is natural to extend all Ramanujan-like congruence relations from $P(n)$ to $Q_k(n)$. In this process of studying $Q_k(n)$, we also deduce the generating function for $Q_k(n)$, and relate the technical results with their graphical interpretations through a novel use of the Ferrer's diagrams.
| {
"timestamp": "2011-11-02T01:01:01",
"yymm": "1111",
"arxiv_id": "1111.0094",
"language": "en",
"url": "https://arxiv.org/abs/1111.0094",
"abstract": "In this paper, we generalize a few important results in Integer Partitions; namely the results known as Stanley's theorem and Elder's theorem, and the congruence results proposed by Ramanujan for the partition function. We generalize the results of Stanley and Elder from a fixed integer to an array of subsequent integers, and propose an analogue of Ramanujan's congruence relations for the `number of parts' function instead of the partition function. We also deduce the generating function for the `number of parts', and relate the technical results with their graphical interpretations through a novel use of the Ferrer's diagrams.",
"subjects": "Discrete Mathematics (cs.DM); Combinatorics (math.CO); Number Theory (math.NT)",
"title": "Generalization of a few results in Integer Partitions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462183543601,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7099326126710258
} |
https://arxiv.org/abs/1405.2844 | Covering Array Bounds Using Analytical Techniques | A $t$-covering array with entries from the alphabet ${\cal Q}=\{0,1,\ldots,q-1\}$ is a $k\times n$ stack, so that for any choice of $t$ (typically non-consecutive) columns, each of the $q^{t}$ possible $t$-letter words over ${\cal Q}$ appear at least once among the rows of the selected columns. We will show how a combination of the Lovász local lemma; combinatorial analysis; Stirling's formula; and Calculus enables one to find better asymptotic bounds for the minimum size of $t$-covering arrays, notably for $t = 3, 4$. Here size is measured in the number of rows, as expressed in terms of the number of columns. | \section{Introduction}
A $t$-covering array with entries from the alphabet ${\cal Q}=\{0,1,\ldots,q-1\}$ is a $k\times n$ stack, so that for any choice of $t$ (typically non-consecutive) columns, each of the $q^{t}$ possible $t$-letter words over ${\cal Q}$ appears at least once among the rows of the selected columns. The following problem is central; see, e.g., \cite{colbourn}, \cite{sloane}: Given the parameters $q, t$, what is the smallest $k$ for which a covering array with these parameters exists? Specifically, we seek a function $k_0=k_0(n)=k_0(n,q,t)$ such that as $n\to\infty$, $k\ge k_0(n)\Rightarrow$ a $t$-covering array exists. Sperner's theorem was used by Kleitman and Spencer (see \cite{sloane}) to give a very satisfactory answer for $t=q=2$, while the work of Roux (again, see \cite{sloane}) showed that for $t=3; q=2$, we have
\begin{equation}k_0(n,2,3)=7.65\lg n(1+o(1)),\end{equation}
where, here and throughout this paper, $\lg:=\log_2$. A general upper bound of
\begin{equation}k_0(n,q,t)=(t-1)\frac{\lg n}{\lg\left(q^t/(q^t-1)\right)}(1+o(1))\end{equation}
was produced in \cite{gss}. Notice that plugging in $q=2, t=3$ in (2) yields a bound of
\[k_0(n,2,3)=10.3\lg n(1+o(1)),\]
which shows that the general bounds of \cite{gss} are inferior to the specific bound in (1), which was obtained by employing random methods with equal weight columns (an equal number of zeros and ones in each column in the binary case) either without (Roux) or with (\cite{gss}) the use of the Lov\'asz local lemma. Some improvement in (2) was made in \cite{dg}, where a ``tiling method'' was employed. In this paper, we adapt the methods of Roux (\cite{sloane}) and \cite{gss} to improve the bounds in (2) for several other cases. The analysis is difficult but not daunting for the cases we consider: a combination of the Lov\'asz local lemma (see, e.g., \cite{alon}); elementary combinatorial
analysis; Stirling's formula; and Calculus is employed to obtain our new results.
The case of $t=3, q\ge 3$ is considered in Section 2. We turn our attention to $t=4, q=2$, where double sums need to be employed, in Section 3.
\section{The Case of $t=3$}
\label{sec:examples}
\subsection{$q=3: 3$-Covering Arrays with a Three-Letter Alphabet}
\begin{thm}
$$k_0(n,3,3)\le 32.03 \cdot \lg(n)(1+o(1)).$$
\end{thm}
\begin{proof}
Let $k=3m$, and let us randomly place $m$ of each of the letters 0, 1, and 2 in each of the $n$ columns. The probability that any one set of three columns is missing any one of the 27 ternary three letter words, say 111, is
$$p={{{3m}\choose{m}}\frac{\sum_{j=0}^{m} {m \choose j} \cdot {2m \choose m-j} \cdot {3m-j \choose m}}{{3m \choose m}^{3}}}={\frac{\sum_{j=0}^{m} {m \choose j} \cdot {2m \choose m-j} \cdot {3m-j \choose m}}{{3m \choose m}^{2}}}.$$
This expression is derived as follows: First place the $m$ ones in the first column in ${{3m}\choose{m}}$ ways. Then, for some $j$, we pick $j$ of these $m$ positions to carry a 1 in the second column as well, and place the remaining $m-j$ ones of the second column among the other $2m$ positions. Finally, since the word 111 is to be absent, the $m$ ones in column 3 all have to be in the $3m-j$ spots where the first two columns' entries are not both 1. The union bound now tells us that the probability $\pi$ that a given set of three columns is missing at least one of the 27 words satisfies
\[\pi\le27p.\]
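For small $m$ the formula for $p$ can be checked exhaustively (our sketch, for $m=2$): the word 111 is missing from a triple of columns exactly when the three sets of 1-positions have empty common intersection:

```python
from itertools import combinations
from math import comb

m = 2
ones = list(combinations(range(3 * m), m))   # possible sets of 1-positions
# the formula from the text
p_formula = sum(comb(m, j) * comb(2 * m, m - j) * comb(3 * m - j, m)
                for j in range(m + 1)) / comb(3 * m, m) ** 2
# brute force over all triples of columns: 111 missing iff no common position
misses = sum(1 for s1 in ones for s2 in ones for s3 in ones
             if not set(s1) & set(s2) & set(s3))
p_brute = misses / len(ones) ** 3
```

Both give $p = 176/225$ for $m=2$.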
Next, we maximize the numerator summand in the expression for $p$ by parametrizing: Set $j=Am$ for some $0\le A\le1$ and use Stirling's approximation to get (with $C$ representing a generic constant):
\begin{align}
&{m \choose j} \cdot {2m \choose m-j} \cdot {3m-j \choose m}\nonumber\\
=&\frac{m!}{j! (m-j)!} \cdot \frac{(2m)!}{(m-j)! (m+j)!} \cdot \frac{(3m-j)!}{m! (2m-j)!}\nonumber\\
=&\frac{(2m)! (3m-j)!}{j! (m-j)! (m-j)! (m+j)! (2m-j)!}\nonumber\\
\le&\frac{C}{m^{3/2}}\left(\frac{2m}{e}\right)^{2m} \cdot \left(\frac{(3-A)m}{e}\right)^{(3-A)m} \cdot \left(\frac{e}{Am}\right)^{Am} \cdot \left(\frac{e}{(1-A)m}\right)^{2(1-A)m} \nonumber\\
&\cdot \left(\frac{e}{(1+A)m}\right)^{(1+A)m}\cdot \left(\frac{e}{(2-A)m}\right)^{(2-A)m}\nonumber\\
=&\frac{C}{m^{3/2}}\left[\frac{2^{2} \cdot (3-A)^{(3-A)}}{A^{A} \cdot (1-A)^{2(1-A)} \cdot (1+A)^{(1+A)} \cdot (2-A)^{(2-A)}}\right]^{m}.
\end{align}
In order to find the critical value of $A$ in the exponential part of (3), we will maximize
$q(A)=\ln 4+(3-A)\ln(3-A)-A\ln(A)-2(1-A)\ln(1-A)-(1+A)\ln(1+A)-(2-A)\ln(2-A).$
We have:
\begin{align*}
q'(A)=&-\frac{3-A}{3-A}-\ln(3-A)-\left[\frac{A}{A}+\ln(A)\right]-\left[-\frac{2(1-A)}{1-A}-2\ln(1-A)\right]\\
&-\left[\frac{1+A}{1+A}+\ln(1+A)\right]-\left[-\frac{2-A}{2-A}-\ln(2-A)\right]\\
=&-\ln(3-A)-\ln(A)+2 \cdot \ln(1-A)-\ln(1+A)+\ln(2-A)\\
=&\ln\left(\frac{(1-A)^{2} \cdot (2-A)}{(3-A) \cdot A \cdot (1+A)} \right).
\end{align*}
Setting $q'(A)=0$, we see that $A=2-\sqrt{3}$.
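One can confirm numerically (a check, not a proof) that $A=2-\sqrt{3}\approx0.268$ is the root of $q'$ in $(0,1)$ and that $q'$ changes sign from positive to negative there, so it is indeed a maximizer:

```python
from math import log, sqrt

def q_prime(A):
    # q'(A) = ln( (1-A)^2 (2-A) / ((3-A) A (1+A)) )
    return log((1 - A) ** 2 * (2 - A) / ((3 - A) * A * (1 + A)))

A = 2 - sqrt(3)
assert abs(q_prime(A)) < 1e-12
# q' changes sign from + to - at A, so A maximizes q
assert q_prime(A - 0.01) > 0 > q_prime(A + 0.01)
```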
Plugging $A=2-\sqrt{3}$ into (3), we see that for each $j$,
\begin{eqnarray}&&{m \choose j} \cdot {2m \choose m-j} \cdot {3m-j \choose m}\nonumber\\
&\le&
\frac{C}{m^{3/2}}\left[\frac{2^{2} \cdot (1+\sqrt{3})^{(1+\sqrt{3})}}{(2-\sqrt{3})^{(2-\sqrt{3})} \cdot (\sqrt{3}-1)^{2(\sqrt{3}-1)} \cdot (3-\sqrt{3})^{(3-\sqrt{3})} \cdot \sqrt{3}^{\sqrt{3}}}\right]^{m}\nonumber\\
&\approx & \frac{C}{m^{3/2}}40.0148^{m}.
\end{eqnarray}
Next, we use Stirling's Approximation to estimate the denominator in the expression for $p$:
$$\frac{(3m)!}{(2m)!(m)!}\ge\frac{C}{m^{1/2}}\frac{\left(\frac{3m}{e}\right)^{3m}}{\left(\frac{2m}{e}\right)^{2m} \cdot \left(\frac{m}{e}\right)^{m}}=\frac{C}{m^{1/2}}\left(\frac{27}{4}\right)^{m}.$$
Thus, on bounding the numerator of the expression for $p$ by $m$ times the maximum summand, we get $$\pi\le C\sqrt{m}\frac{40.0148^{m}}{\left(\frac{27}{4}\right)^{2m}}.$$
Now whether or not a given set of three columns is missing at least one word depends on $O(n^2)$ other sets of columns, namely the ones that share at least one column with the given set. Thus the dependence number $d$ in the Lov\'asz lemma is of magnitude $n^2$. The lemma states that if $e\pi d<1$ then the probability that we have no sets of such deficient columns is positive, i.e. a construction exists that satisfies the criteria of a covering array. Now the inequality $e\pi d<1$ may be seen to hold, using elementary algebra, if
$$m>\frac{2 \lg(n)}{\lg(1.138)}(1+o(1)) \approx 10.67 \lg(n)(1+o(1)),$$
or
$$k=3m>32.03 \lg(n)(1+o(1)).$$ It follows that $k_0\le 32.03 \lg(n)(1+o(1))$, as claimed.
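The numeric constants in this proof can be reproduced mechanically; the following sketch (ours, not part of the paper) recovers $40.0148$, the decay base $\approx1.1386$, and the coefficients $10.67$ and $32.03$:

```python
from math import log, sqrt

A = 2 - sqrt(3)
# base of the maximized summand in (3), evaluated at A = 2 - sqrt(3)
base = 4 * (3 - A) ** (3 - A) / (
    A ** A * (1 - A) ** (2 * (1 - A)) * (1 + A) ** (1 + A) * (2 - A) ** (2 - A))
assert abs(base - 40.0148) < 0.01

growth = (27 / 4) ** 2 / base        # pi <= C sqrt(m) * growth**(-m)
assert abs(growth - 1.1386) < 1e-3

coeff = 2 / (log(growth) / log(2))   # m > coeff * lg(n) * (1 + o(1))
assert abs(coeff - 10.67) < 0.02
assert abs(3 * coeff - 32.03) < 0.05
```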
\end{proof}
\noindent REMARKS: The general bound in (2) yields $k_0(n,3,3)\le 36.73\lg n$, so we have quite an improvement. Notice also that the exact values of the constants $C$ and the exact nature of the polynomial terms in Stirling's approximation did not affect the end asymptotic result (even though a more careful analysis {\it would} be needed for bounds for specific values of $k$.) Accordingly, in the rest of the paper we will not explicitly mention these terms, and use Stirling's approximation as
\[N!\cong\lr\frac{N}{e}\rr^N,\]
where $f(n)\cong g(n)$ will mean that $f(n)$ is bounded both above and below by some rational quantity times $g(n)$.
\subsection{$q=4: 3$-Covering Arrays with a Four-Letter Alphabet}
\begin{thm}
$$k_0(n,4,3)\le81.28 \cdot \lg(n)(1+o(1)).$$
\end{thm}
\begin{proof} The proof is very similar to that of Theorem 1.
We first find the expression of the probability $p$ of avoiding a particular word in an array of size $4m\times n$, where each column contains an equal number of randomly placed letters 0, 1, 2, and 3. We have
$$p={\frac{\sum_{j=0}^{m} {m \choose j} \cdot {3m \choose m-j} \cdot {4m-j \choose m}}{{4m \choose m}^{2}}}.$$
We then maximize the summand in the numerator:
\begin{align*}
&{m \choose j} \cdot {3m \choose m-j} \cdot {4m-j \choose m}\\
=&\frac{m!}{j! (m-j)!} \cdot \frac{(3m)!}{(m-j)! (2m+j)!} \cdot \frac{(4m-j)!}{m! (3m-j)!}\\
=&\frac{(3m)! (4m-j)!}{j! (m-j)! (m-j)! (2m+j)! (3m-j)!}\\
\cong&\left(\frac{3m}{e}\right)^{3m} \cdot \left(\frac{(4-A)m}{e}\right)^{(4-A)m} \cdot \left(\frac{e}{Am}\right)^{Am} \cdot \left(\frac{e}{(1-A)m}\right)^{2(1-A)m}\\ &\cdot \left(\frac{e}{(2+A)m}\right)^{(2+A)m}
\cdot \left(\frac{e}{(3-A)m}\right)^{(3-A)m}\\
=&\left[\frac{3^{3} \cdot (4-A)^{(4-A)}}{A^{A} \cdot (1-A)^{2(1-A)} \cdot (2+A)^{(2+A)} \cdot (3-A)^{(3-A)}}\right]^{m}.
\end{align*}
We let
$q(A)=\ln 27+(4-A)\ln(4-A)-A\ln(A)-2(1-A)\ln(1-A)-(2+A)\ln(2+A)-(3-A)\ln(3-A),$
so that
\begin{align*}
q'(A)=&-\frac{4-A}{4-A}-\ln(4-A)-\left[\frac{A}{A}+\ln(A)\right]-\left[-\frac{2(1-A)}{1-A}-2\ln(1-A)\right]\\
&-\left[\frac{2+A}{2+A}+\ln(2+A)\right]-\left[-\frac{3-A}{3-A}-\ln(3-A)\right]\\
=&-\ln(4-A)-\ln(A)+2 \cdot \ln(1-A)-\ln(2+A)+\ln(3-A).
\end{align*}
This expression is seen to equal zero (and yield a maximum) for $A=\frac{5}{2}-\frac{\sqrt{21}}{2}$.
Substituting this value into the expression ${m \choose j} \cdot {3m \choose m-j} \cdot {4m-j \choose m}$ yields a maximum value that is $\cong 83.97^{m}$.
Stirling's approximation applied to the denominator yields
$$\frac{(4m)!}{(3m)!(m)!}\cong \left(\frac{256}{27}\right)^{m},$$
and thus, $$\pi\cong\frac{83.97^{m}}{\left(\frac{256}{27}\right)^{2m}}.$$
The Erd\H{o}s--Lov\'asz local lemma with $d=O(n^2)$ and $\pi$ as above then yields
$$m=20.32\lg n(1+o(1)),$$ or $$k_0\le 81.28\lg(n)(1+o(1)),$$
as compared to the value $k_0\le 88.03\lg n$ given by the general bound (2).
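As before, the constants can be checked numerically (our verification sketch, outside the proof):

```python
from math import log, sqrt

A = (5 - sqrt(21)) / 2               # critical value found above
base = 27 * (4 - A) ** (4 - A) / (
    A ** A * (1 - A) ** (2 * (1 - A)) * (2 + A) ** (2 + A) * (3 - A) ** (3 - A))
assert abs(base - 83.97) < 0.05

growth = (256 / 27) ** 2 / base      # pi decays like growth**(-m)
coeff = 2 / (log(growth) / log(2))   # m = coeff * lg(n) * (1 + o(1))
assert abs(coeff - 20.32) < 0.05
assert abs(4 * coeff - 81.28) < 0.2
```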
\end{proof}
\subsection{$3$-Covering Arrays with a $q$-Letter Alphabet}
This section gives a generalization of Theorems 1 and 2 for an arbitrary alphabet size.
\begin{thm}
$$k_0(n,q,3)\le B(q) \cdot \lg(n)(1+o(1)),$$
where the constant $B(q)$ is specified below.
\end{thm}
\begin{proof}
We first find a generalized expression for the probability $p$ of avoiding a particular word under a similar probability model as before:
$$p={\frac{\sum_{j=0}^{m} {m \choose j} \cdot {(q-1)m \choose m-j} \cdot {qm-j \choose m}}{{qm \choose m}^{2}}}.$$
The numerator summand can be written as
\begin{align*}
&{m \choose j} \cdot {(q-1)m \choose m-j} \cdot {qm-j \choose m}\\
=&\frac{m!}{j! (m-j)!} \cdot \frac{((q-1)m)!}{(m-j)! ((q-2)m+j)!} \cdot \frac{(qm-j)!}{m! ((q-1)m-j)!}\\
=&\frac{((q-1)m)! (qm-j)!}{j! (m-j)! (m-j)! ((q-2)m+j)! ((q-1)m-j)!}\\
\cong&\left(\frac{(q-1)m}{e}\right)^{(q-1)m} \cdot \left(\frac{(q-A)m}{e}\right)^{(q-A)m} \cdot \left(\frac{e}{Am}\right)^{Am}\cdot \left(\frac{e}{(1-A)m}\right)^{2(1-A)m} \\
&\cdot \left(\frac{e}{((q-2)+A)m}\right)^{((q-2)+A)m} \cdot \left(\frac{e}{((q-1)-A)m}\right)^{((q-1)-A)m}\\
=&\left[\frac{(q-1)^{(q-1)} \cdot (q-A)^{(q-A)}}{A^{A} \cdot (1-A)^{2(1-A)} \cdot ((q-2)+A)^{((q-2)+A)} \cdot ((q-1)-A)^{((q-1)-A)}}\right]^{m}
\end{align*}
Setting
$r(A)=(q-1)\ln(q-1)+(q-A)\ln(q-A)-A\ln(A)-2(1-A)\ln(1-A)-((q-2)+A)\ln((q-2)+A)-((q-1)-A)\ln((q-1)-A)$, we see that
\begin{align*}
r'(A)=&-\frac{q-A}{q-A}-\ln(q-A)-\left[\frac{A}{A}+\ln(A)\right]-\left[-\frac{2(1-A)}{1-A}-2\ln(1-A)\right]\\
&-\left[\frac{(q-2)+A}{(q-2)+A}+\ln((q-2)+A)\right]-\left[-\frac{(q-1)-A}{(q-1)-A}-\ln((q-1)-A)\right]\\
=&-\ln(q-A)-\ln(A)+2 \cdot \ln(1-A)-\ln((q-2)+A)+\ln((q-1)-A),
\end{align*}
and that $r'(A)=0$ if
$$\ln\left(\frac{(1-A)^{2} \cdot (q-1-A)}{(q-A) \cdot A \cdot (q-2+A)} \right)=0,$$
or if
$$A^{2}-A(q+1)+1=0.$$
A feasible solution to this quadratic is
\begin{equation}A=\frac{(q+1)- \sqrt{(q+1)^{2}-4}}{2}.\end{equation}
Incorporating the denominator of the expression for $p$, we see that
\begin{eqnarray}p&\cong& \left[\frac{(q-1)^{3(q-1)} \cdot (q-A)^{(q-A)}}{q^{2q}\cdot A^{A} \cdot (1-A)^{2(1-A)} \cdot (q-2+A)^{(q-2+A)} \cdot (q-1-A)^{(q-1-A)}}\right]^{m}\nonumber\\&:=&D^m,\end{eqnarray}
with $A$ given by (5) and with $\pi\le q^t\cdot p$.
Thus setting $e\pi d<1$, we obtain
$$k_0=qm\le B(q)\lg n\,(1+o(1)),$$
where $$B(q)=\frac{2q}{\lg(1/D)},$$ and with $D$ given by (6).
\end{proof}
\noindent REMARK: A first order approximation to the maximizing value of $A$ is given by $A=\frac{1}{q+1}$; use of this approximation greatly streamlines the value of $p$ in (6), though computation of the optimal value of $p$ is not hard for any value of $q$.
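The chain of formulas (5)--(6) is easy to evaluate; the following sketch (ours) computes $B(q)$, recovers the constants of Theorems 1 and 2, and illustrates the first-order approximation $A\approx 1/(q+1)$:

```python
from math import log, sqrt

def B(q):
    # A from (5): the root of A^2 - (q+1)A + 1 = 0 lying in (0, 1)
    A = ((q + 1) - sqrt((q + 1) ** 2 - 4)) / 2
    # D from (6)
    D = ((q - 1) ** (3 * (q - 1)) * (q - A) ** (q - A)) / (
        q ** (2 * q) * A ** A * (1 - A) ** (2 * (1 - A))
        * (q - 2 + A) ** (q - 2 + A) * (q - 1 - A) ** (q - 1 - A))
    return 2 * q / (log(1 / D) / log(2))

assert abs(B(3) - 32.03) < 0.05      # Theorem 1
assert abs(B(4) - 81.28) < 0.1       # Theorem 2

# first-order approximation A ~ 1/(q+1)
q = 100
A = ((q + 1) - sqrt((q + 1) ** 2 - 4)) / 2
assert abs(A - 1 / (q + 1)) < 1e-5
```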
\section{4-Covering Binary Arrays}
\begin{thm}
$$k_0(n,2,4)\le27.32 \cdot \lg(n) (1+o(1)).$$
\end{thm}
\begin{proof}
We first find the expression for the probability $p$ of avoiding a particular word (of the sixteen total) in a random equal weight array: We set $k=4m$ and note that
$$p={\frac{\sum_{j=0}^{2m} {2m \choose j}{2m \choose j} \sum_{i=0}^{j} {j \choose i}{4m-j \choose 2m-i}{4m-i \choose 2m}}{{4m \choose 2m}^{3}}}.$$
The expression may be justified by multiplying and dividing by ${{4m}\choose{2m}}$ and arguing that the numerator represents the number of ways of avoiding the word 1111 in any four selected columns as follows: We first select $2m$ ones in the first column in ${{4m}\choose{2m}}$ ways. Then, for some $j$, we pick $j$ ones in the second column to correspond to the positions with a 1 in the first column. We do the same for the positions with a 0 in the first column, choosing $2m-j$ of these. For some $i$ we now pick $i$ ones in column 3 so as to form a 111. Finally, we make sure that 1111 does not occur. The rest of the proof follows the same steps as in the previous section. Parametrizing by setting $j=Bm$, $i=ABm$, where $0 \leq A \leq 1$ and $0 \leq B \leq 2$, we calculate that the summand $f(j,i)$ in the expression for $p$ equals
\begin{align*}
&f(j,i)={2m \choose j}{2m \choose j} {j \choose i}{4m-j \choose 2m-i}{4m-i \choose 2m}\\
=&\left(\frac{(2m)!}{j! (2m-j)!}\right)^{2} \cdot \frac{j!}{i! (j-i)!} \cdot \frac{(4m-j)!}{(2m-i)! (2m+i-j)!} \cdot \frac{(4m-i)!}{(2m)! (2m-i)!}\\
=&\left[\frac{2^{2} \cdot (4-B)^{(4-B)} \cdot (4-AB)^{4-AB}}{B^{B} \cdot (AB)^{AB} \cdot ((1-A)B)^{(1-A)B} \cdot (2-AB)^{2(2-AB)}}\right]^m\\
&\cdot \left[\frac{1}{(2-B)^{2(2-B)} \cdot (2+AB-B)^{2+AB-B}}\right]^{m}.
\end{align*}
We now find the value of $i$ for which the maximum occurs in the inner sum:
\begin{align*}
&{j \choose i}{4m-j \choose 2m-i}{4m-i \choose 2m} \\
=&\frac{j!}{i! (j-i)!} \cdot \frac{(4m-j)!}{(2m-i)! (2m+i-j)!} \cdot \frac{(4m-i)!}{(2m)! (2m-i)!}\\
=&\left[\frac{B^{B} \cdot (4-B)^{(4-B)} \cdot (4-AB)^{4-AB}}{2^{2} \cdot (AB)^{AB} \cdot ((1-A)B)^{(1-A)B} \cdot (2-AB)^{2(2-AB)}}\right]^m\\ &\cdot \left[\frac{1}{(2+AB-B)^{2+AB-B}}\right]^{m}
\end{align*}
As before, we set $q(A)=(\ln (B^{B}/4))+(4-B)\ln(4-B)+(4-AB)\ln(4-AB)-AB\ln(AB)
-(B-AB)\ln(B-AB)-2(2-AB)\ln(2-AB)-(2+AB-B)\ln(2+AB-B)$,
so that
\begin{align*}
q'(A)=&-B-B\ln(4-AB)-(B+B\ln(AB))-(-B-B\ln(B-AB))\\
&-2(-B-B\ln(2-AB))-(B+B\ln(AB+2-B)).
\end{align*}
Setting $q'(A)=0$ yields the critical value
$$A=\frac{3 - \sqrt{9-2B}}{B}.$$
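That this value of $A$ annihilates $q'$ for any feasible $B$ can be checked numerically (the constant terms of $q'$ cancel, leaving only the logarithms):

```python
from math import log, sqrt

def q_prime_over_B(A, B):
    """q'(A)/B: the constant terms of q'(A) cancel, leaving only logs."""
    x = A * B
    return (-log(4 - x) - log(x) + log(B - x)
            + 2 * log(2 - x) - log(2 + x - B))

for B in (0.5, 0.912621974615847, 1.5):
    A = (3 - sqrt(9 - 2 * B)) / B
    assert abs(q_prime_over_B(A, B)) < 1e-10
```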
Plugging the critical value of $A$ into the full expression for $f(j,i)$, we see that
\begin{eqnarray*}
f(j,i)&\le&\left[\frac{2^{2} \cdot (4-B)^{(4-B)} \cdot (1+\sqrt{9-2B})^{1+\sqrt{9-2B}}}{B^{B} \cdot (3-\sqrt{9-2B})^{3-\sqrt{9-2B}} \cdot (B-3+\sqrt{9-2B})^{B-3+\sqrt{9-2B}} }\right]^{m}\\
&\cdot& \left[\frac{1}{(\sqrt{9-2B}-1)^{2(\sqrt{9-2B}-1)}\cdot(2-B)^{2(2-B)} }\right]^{m}\\
&\cdot&\left[\frac{1}{(5-B-\sqrt{9-2B})^{5-B-\sqrt{9-2B}}}\right]^m.
\end{eqnarray*}
Repeating the same process, we set $r(B)=2\ln(2)+(4-B)\ln(4-B)+(1+\sqrt{9-2B})\ln(1+\sqrt{9-2B})-B\ln{B}
-(3-\sqrt{9-2B})\ln(3-\sqrt{9-2B})-(B-3+\sqrt{9-2B})\ln(B-3+\sqrt{9-2B})
-(2\sqrt{9-2B}-2)\ln(\sqrt{9-2B}-1)-(4-2B)\ln(2-B)
-(5-B-\sqrt{9-2B})\ln(5-B-\sqrt{9-2B})$, and set $r'(B)=0$ to obtain the critical value (using Maple) of
$B \approx 0.912621974615847$. Since
$A=\frac{3 - \sqrt{9-2B}}{B}$, we get
$A \approx 0.352201128737$,
and plugging in these values of $A$ and $B$, we get the numerator of $p$ bounded by
$m^2\cdot(e^{8.013})^m$. Since the denominator expression is $\cong 16^{3m},$ we get that
\[\pi\le16p\cong \lr\frac{e^{8.013}}{16^3}\rr^m.\]
Since $d=O(n^3)$, the Lov\'asz lemma yields
that a suitable 4-covering array exists if
$$m>\frac{3\lg(n)}{\lg(1.3558)}(1+o(1)) \approx 6.83082\lg(n)(1+o(1)),$$
and thus
$$k_0\le 4(6.83082)\lg n(1+o(1))=27.32\lg(n)(1+o(1)).$$
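The numerical chain of this proof can be verified as follows (our sketch; tolerances reflect the rounding in the text):

```python
from math import exp, log, sqrt

B = 0.912621974615847          # root of r'(B) = 0, as computed in the text
A = (3 - sqrt(9 - 2 * B)) / B
assert abs(A - 0.352201128737) < 1e-6
x = A * B
# exponential growth rate of the maximized summand f(j, i)
rate = (2 * log(2) + (4 - B) * log(4 - B) + (4 - x) * log(4 - x)
        - B * log(B) - x * log(x) - (B - x) * log(B - x)
        - 2 * (2 - x) * log(2 - x) - 2 * (2 - B) * log(2 - B)
        - (2 + x - B) * log(2 + x - B))
assert abs(rate - 8.013) < 2e-3

growth = 16 ** 3 / exp(rate)   # pi decays like growth**(-m)
assert abs(growth - 1.3558) < 1e-3
coeff = 3 / (log(growth) / log(2))
assert abs(coeff - 6.83082) < 5e-3
assert abs(4 * coeff - 27.32) < 0.02
```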
\end{proof}
\noindent REMARK: Our upper bound of $27.32\lg n$ should be compared to the bound of $32.22\lg n$ as given by (2). Also, the analysis in this section can readily be extended to $q$-ary 4-covering arrays, $q\ge3$, but we do not provide details.
\section{Acknowledgement} The research of AG and ZK was supported by NSF grant 1263009. RY also participated in the project without NSF support but with a great level of enthusiasm.
| {
"timestamp": "2014-05-13T02:18:20",
"yymm": "1405",
"arxiv_id": "1405.2844",
"language": "en",
"url": "https://arxiv.org/abs/1405.2844",
"abstract": "A $t$-covering array with entries from the alphabet ${\\cal Q}=\\{0,1,\\ldots,q-1\\}$ is a $k\\times n$ stack, so that for any choice of $t$ (typically non-consecutive) columns, each of the $q^{t}$ possible $t$-letter words over ${\\cal Q}$ appear at least once among the rows of the selected columns. We will show how a combination of the Lovász local lemma; combinatorial analysis; Stirling's formula; and Calculus enables one to find better asymptotic bounds for the minimum size of $t$-covering arrays, notably for $t = 3, 4$. Here size is measured in the number of rows, as expressed in terms of the number of columns.",
"subjects": "Combinatorics (math.CO)",
"title": "Covering Array Bounds Using Analytical Techniques",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462222582662,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7099326095224721
} |
https://arxiv.org/abs/1804.09411 | Stable-Matching Voronoi Diagrams: Combinatorial Complexity and Algorithms | We study algorithms and combinatorial complexity bounds for \emph{stable-matching Voronoi diagrams}, where a set, $S$, of $n$ point sites in the plane determines a stable matching between the points in $\mathbb{R}^2$ and the sites in $S$ such that (i) the points prefer sites closer to them and sites prefer points closer to them, and (ii) each site has a quota or "appetite" indicating the area of the set of points that can be matched to it. Thus, a stable-matching Voronoi diagram is a solution to the well-known post office problem with the added (realistic) constraint that each post office has a limit on the size of its jurisdiction. Previous work on the stable-matching Voronoi diagram provided existence and uniqueness proofs, but did not analyze its combinatorial or algorithmic complexity. In this paper, we show that a stable-matching Voronoi diagram of $n$ point sites has $O(n^{2+\varepsilon})$ faces and edges, for any $\varepsilon>0$, and show that this bound is almost tight by giving a family of diagrams with $\Theta(n^2)$ faces and edges. We also provide a discrete algorithm for constructing it in $O(n^3\log n+n^2f(n))$ time in the real-RAM model of computation, where $f(n)$ is the runtime of a geometric primitive (which we define) that can be approximated numerically, but cannot, in general, be performed exactly in an algebraic model of computation. We show, however, how to compute the geometric primitive exactly for polygonal convex distance functions. | \section{Introduction}\label{smvd:sec:intro}
The \emph{Voronoi diagram} is a well-known geometric structure with a
broad spectrum of applications in computational geometry
and other areas of Computer Science, e.g.,
see~\cite{Aurenhammer:1991,bookAurenhammer,Brandt1992, Kise1998, Meguerdichian2001, Petrek2007, Stojmenovic2006, Bhattacharya2008}.
The Voronoi diagram partitions the plane into regions.
Given a finite set $S$ of points, called \emph{sites}, each point in the plane is assigned to the region of its closest site in $S$.
Although the Voronoi diagram has been generalized in many ways, its
standard definition specifies that each
\emph{Voronoi cell} or \emph{region} of a site $s$ is the set
$V(s)$ defined as
\begin{equation}\label{smvd:eq:vd}
\bigl\{p\in \mathbb{R}^2 \mid d(p,s)\leq d(p,s')\quad\forall s'\not=s\in S\bigr\},
\end{equation}
where $d(\cdot,\cdot)$ denotes the distance between two points.
The properties of standard Voronoi diagrams have been thoroughly
studied (e.g., see~\cite{Aurenhammer:1991,bookAurenhammer}).
For example, it is well known that
in a standard Voronoi diagram for point sites in the plane
every Voronoi cell is a connected and convex polygon whose
boundaries lie along perpendicular bisectors of pairs of sites.
On a seemingly unrelated topic, the theory of
\emph{stable matchings} studies how to match entities in two sets,
each of which has its own preferences about the elements of the other set,
in a ``stable'' manner.
It is used, for instance, to match hospitals and medical students
starting their residencies~\cite{thematch},
as well as in on-line advertisement auctions (e.g., see~\cite{Aggarwal2009}).
It was originally formulated by Gale and Shapley~\cite{gale62}
in the context of establishing marriages between $n$ men and $n$ women,
where each man ranks the women by preference, and, likewise, the women rank the men.
A matching between the men and women is \emph{stable}
if there is no \emph{blocking pair}, defined as
a man and woman who prefer each other over their assigned choices.
Gale and Shapley~\cite{gale62} show that a stable solution always
exists for any set of preferences, and they provide an algorithm that runs in
$O(n^2)$ time.
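For concreteness, here is a minimal sketch (ours; not taken from~\cite{gale62}) of the deferred-acceptance algorithm in its proposer-optimal form, which runs in $O(n^2)$ time:

```python
def gale_shapley(men_prefs, women_prefs):
    """men_prefs[m] / women_prefs[w]: preference lists, most preferred first.
    Returns a stable matching as a dict woman -> man.  O(n^2) time."""
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's list, for O(1) comparisons
    rank = [{m: i for i, m in enumerate(prefs)} for prefs in women_prefs]
    next_proposal = [0] * n          # next woman each man will propose to
    engaged_to = {}                  # woman -> man
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m        # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])   # w trades up; old partner freed
            engaged_to[w] = m
        else:
            free_men.append(m)       # w rejects m
    return engaged_to

# tiny example: both men prefer woman 0, who prefers man 1
match = gale_shapley([[0, 1], [0, 1]], [[1, 0], [0, 1]])
assert match == {0: 1, 1: 0}
```

The circle-growing process discussed below is the continuous analogue of this proposal loop, with the sites in the role of the proposers.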
When generalized to the one-to-many case, the stable matching problem
is also known as the \emph{college admission} problem~\cite{Roth89}
and can be formulated as a matching of $n$ students to $k$ colleges,
where each student has a preference ranking of the colleges and each college
has a preference ranking of the students and a \emph{quota}
indicating how many students it can accept.
In this paper, we are interested in studying the algorithmic and combinatorial
complexity of the diagrams that we call \emph{stable-matching Voronoi diagrams}, which
combine the notions of Voronoi diagrams and the one-to-many version of stable
matching.
These diagrams
were introduced by Hoffman, Holroyd, and Peres~\cite{hoffman2006}, who provided existence and uniqueness
proofs for such structures for potentially
countably infinite sets of sites, but did not study their
algorithmic or combinatorial complexities.
A stable-matching Voronoi diagram is defined with respect to
a set of sites in $\mathbb{R}^2$, which in this paper we restrict to
finite sets of $n$ distinct points,
each of which has an assigned finite numerical
\emph{quota} (which is also known as its ``\emph{appetite}'') indicating the area of the region of points assigned to it.
A preference relationship is defined in terms of distance, so that
each point $p$ in $\mathbb{R}^2$ prefers sites ordered by distance, from
closest to farthest,
and each site likewise prefers points ordered by distance.
The stable-matching Voronoi diagram, then, is a partition of the plane into regions, such that (i) each site is associated with a region of area equal to its appetite, and (ii) the assignment of points to sites is stable in
the sense that there is no blocking pair, defined as a site--point pair whose members
prefer each other over their assigned matches.
This is formalized in Definition~\ref{smvd:def:smvd}. The regions are defined as closed sets so that boundary points lie in more than one region, analogously to Equation~\ref{smvd:eq:vd}.
See Figure~\ref{smvd:fig:appetite}.
\begin{figure}[htb]
\centering
\includegraphics[width=.495\linewidth]{stablediagram1}\hspace*{.1em}
\includegraphics[width=.495\linewidth]{stablediagram2}
\caption{Stable-matching Voronoi diagrams for a set of 25 point sites,
where each site in the left diagram has an
appetite of 1 and each site in the right
diagram has an appetite of 2.
Each color corresponds to an individual cell, which is not necessarily convex or
even connected.}
\label{smvd:fig:appetite}
\end{figure}
\begin{definition}\label{smvd:def:smvd}
Given a set $S$ of $n$ points (called sites) in $\mathbb{R}^2$ and
a numerical appetite $A(s)>0$ for each $s\in S$,
the \emph{stable-matching Voronoi diagram}
of $(S,A)$ is a subdivision of $\mathbb{R}^2$
into $n+1$ \emph{regions}, which are closed sets in $\mathbb{R}^2$.
For each site $s \in S$ there is a corresponding region $C_s$ of area $A(s)$, and there is an extra region, $C_\emptyset$, for the remaining ``unmatched'' points. The regions do not overlap except along boundaries (boundary points are included in more than one region). The regions are such that there are no blocking pairs. A blocking pair is a site $s\in S$ and a point $p\in\mathbb{R}^2$ such that (\textit{i}) $p\not\in C_s$, (\textit{ii}) $d(p,s)<\max\; \{d(p',s)\mid p'\in C_s\}$, and (\textit{iii}) $p\in C_\emptyset$ or $d(p,s)<d(p, s')$, where $s'$ is a site such that $p\in C_{s'}$.
\end{definition}
As mentioned above,
Hoffman {\it et al.}~\cite{hoffman2006} show that, for any set of sites $S$ and appetites, the stable-matching Voronoi diagram of $S$ always exists and is unique. Technically, they consider the setting where all the sites have the same appetite, but the result applies to different appetites.
They also describe a continuous process that results in the stable-matching Voronoi diagram:
Start growing a circle from all the sites at the same time and at the same rate,
matching the sites with all the points encountered by
the circles that are not matched yet---when a
site fulfills its appetite, its circle stops growing.
The process ends when all the circles have stopped growing.
Note that this circle-growing method is analogous to a continuous version of the ``deferred acceptance'' stable matching algorithm of Gale and Shapley~\cite{gale62}. The sites correspond to the set making proposals, and $\mathbb{R}^2$ to the set accepting and rejecting proposals. The sites propose to the points in order by preference (with the growing circles), as in the deferred acceptance algorithm. The difference is that, in this setting, points receive all the proposals also in order by their own preference, so they always accept the first one and reject the rest.
Clearly, the circle-growing method can be simulated to obtain a numerical approximation of the diagram, but this would not be an effective discrete algorithm, which is one of the
interests of the present paper.
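For illustration only (this is our sketch, not the discrete algorithm developed in this paper), one such simulation on a pixel grid, in the spirit of the discrete setting of~\cite{EPPSTEIN2017}, can exploit the fact that both sides rank by the same distances: processing all (pixel, site) pairs in increasing order of distance mimics the simultaneous growth of the circles.

```python
def approx_stable_matching(sites, appetites, width, height):
    """Naive pixel-grid approximation of the circle-growing process:
    process all (pixel, site) pairs in increasing order of distance and
    assign greedily.  Appetites are measured in pixels."""
    offers = []
    for s, (sx, sy) in enumerate(sites):
        for px in range(width):
            for py in range(height):
                d2 = (px + 0.5 - sx) ** 2 + (py + 0.5 - sy) ** 2
                offers.append((d2, s, (px, py)))
    offers.sort()                    # simultaneous growth: nearest pairs first
    remaining = list(appetites)
    owner = {}
    for d2, s, pixel in offers:
        if remaining[s] > 0 and pixel not in owner:
            owner[pixel] = s
            remaining[s] -= 1
    return owner

# two sites in a 10x10 grid, each with an appetite of 4 pixels:
# each site is matched to its 4 nearest pixels
owner = approx_stable_matching([(2, 2), (7, 7)], [4, 4], 10, 10)
assert len(owner) == 8 and sorted(owner.values()) == [0, 0, 0, 0, 1, 1, 1, 1]
```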
Figure~\ref{smvd:fig:comparison} shows a side-by-side comparison of the standard and stable-matching Voronoi diagrams. Note that the standard
Voronoi diagram is stable in the same sense as the stable-matching Voronoi diagram:
by definition, every point is matched to its first choice among the sites, so there can be no blocking pairs. In fact, the standard Voronoi diagram of a set of sites can be seen as the limit of the stable-matching Voronoi diagram as all the appetites grow to infinity, in the following sense: for any point $p$ in $\mathbb{R}^2$, and for sufficiently large appetites for all the sites, $p$ will belong to the region of the same site in the standard and stable-matching Voronoi diagrams.
\begin{figure}[!hbt]
\centering
\includegraphics[width=.495\linewidth]{samesetstable}\hspace*{.1em}
\includegraphics[width=.495\linewidth]{samesetvoronoi}
\caption{A stable-matching Voronoi diagram (left)
and a standard Voronoi diagram (clipped to a rectangle) (right)
for the same set of 25 sites. Each color represents a region.}
\label{smvd:fig:comparison}
\end{figure}
A standard Voronoi diagram solves the
\emph{post office} problem of assigning points
to their closest post office~\cite{knuth1998art}.
A stable-matching Voronoi diagram adds
the real-world assumption that each post office has a limit on the size of its jurisdiction.
Such notions may also be useful for political districting,
where point sites could represent polling stations, and
appetites could represent their capacities.
In this context, the distance preferences
for a stable-matching Voronoi diagram might determine a type of ``compactness''
that avoids the strange regions that are the subjects of recent court
cases involving gerrymandering. This was considered in~\cite{EPPSTEIN20172short}.
Nevertheless, depending on the appetites and
locations of the sites, the regions of the
sites in a stable-matching Voronoi diagram are not necessarily convex or even
connected (e.g., see Figure~\ref{smvd:fig:appetite}).
Thus, we are interested in this paper in characterizing the worst-case
combinatorial complexity of such diagrams (i.e., the maximum number of faces, edges, and vertices among all diagrams with $n$ sites), as well as
finding an efficient algorithm for constructing them.
\subsubsection*{Related Work}
There are large volumes of work on the topics of
Voronoi diagrams and stable matchings;
hence, we refer the interested reader to
surveys or books on the subjects
(e.g.,
see~\cite{Aurenhammer:1991,bookAurenhammer,Gusfield:1989:SMP:68392,Iwama:2008}).
A generalization of the Voronoi diagram of particular interest is the \emph{power diagram}, where a weight associated with each site indicates how strongly the site draws the points in its neighborhood. Power diagrams have also been considered for political redistricting~\cite{Cohen-Addad:2018}.
Aurenhammer~\textit{et al.}~\cite{aurenhammer1998} show that, given a set of sites in a square and a quota for each site, it is always possible to find weights for the sites such that, in the power diagram induced by those weights, the area of the region of each site within the square is proportional to its prescribed quota. Thus, both stable-matching Voronoi diagrams and power diagrams are Voronoi-like diagrams that allow predetermined region sizes. Power diagrams
minimize the total squared distance between the sites and their associated points, while stable-matching Voronoi diagrams result in a stable matching.
In terms of algorithms for constructing stable-matching Voronoi diagrams, besides the mentioned continuous method by Hoffman et al.~\cite{hoffman2006}, Eppstein {\it et al.}~\cite{EPPSTEIN2017} study
the problem in a
discrete grid setting, where both sites and points are
pixels.
Eppstein {\it et al.}~\cite{EPPSTEIN20172short}
also consider an analogous stable-matching problem in planar graphs and road networks.
In these two previous works, the entities analogous to sites and points
are either pixels or vertices; hence,
they did not encounter the algorithmic
and combinatorial challenges raised by stable-matching Voronoi diagrams for
sites and points in the plane.
\subsubsection*{Our Contributions}
In Section~\ref{smvd:sec:geo}, we give a geometric interpretation of stable-matching Voronoi diagrams as the lower envelope of a set of cones, and discuss some basic properties of stable-matching Voronoi diagrams.
In Section~\ref{smvd:sec:bounds}, we give an $O(n^{2+\varepsilon})$ upper bound, for any $\varepsilon>0$, and an $\Omega(n^2)$ lower bound for the number of faces and edges of a stable-matching Voronoi diagram in the worst case, where $n$ is the number of sites. The upper bound applies for arbitrary appetites, while the lower bound applies even in the special case where all the sites have the same appetite.
In Section~\ref{smvd:sec:algo}, we show that stable-matching Voronoi diagrams cannot be computed exactly in an algebraic model of computation. In light of this, we provide a discrete algorithm
for constructing them that runs in $O(n^3\log n+n^2f(n))$ time,
where $f(n)$ is the runtime of a geometric primitive (which
we define) that encapsulates this difficulty. This geometric primitive
can be approximated numerically. We also show how to compute the primitive exactly (and thus the diagram) when the distance metric is a polygonal convex distance function (Section~\ref{smvd:sec:convex}).
We assume Euclidean distance as the distance metric throughout the paper, except in Section~\ref{smvd:sec:convex}.
In particular, the upper and lower bounds on the combinatorial complexity apply to Euclidean distance.
We conclude in Section~\ref{smvd:sec:conc}.
\section{The Geometry of Stable-Matching Voronoi Diagrams}\label{smvd:sec:geo}
As is now well known, a (2-dimensional) Voronoi diagram can be viewed as a lower envelope of cones in 3 dimensions, as follows~\cite{Fortune87}.
Suppose that the set of sites are embedded in the plane $z=0$.
That is, we map each site $s=(x_s,y_s)$ to the 3-dimensional point $(x_s,y_s,0)$.
Then, we draw one cone for each site, with the site as the vertex, and growing to $+\infty$ all with the same slope.
If we then view the cones from below, i.e., from $z=-\infty$ towards $z=+\infty$, the part of the cone of each site that we see corresponds to the Voronoi cell of the site.
This is because two such cones intersect at points that are equally distant to both vertices. As a result, the $xy$-projection of their intersection corresponds to the perpendicular bisector of the vertices, and the boundaries of the Voronoi cells in the Voronoi diagram are determined by the perpendicular bisectors with neighboring sites.
Similarly, a stable-matching Voronoi diagram can also be viewed as the lower envelope of a set of cones. However, in this setting the cones do not extend to $+\infty$. Instead, they are cut off at a finite height (which is a potentially different height for each cone, even if the associated sites have the same appetite).
This system of cones can be generated by a dynamic process that begins with cones of height zero and then grows them all at the same rate, halting the growth of each cone as soon as its area in the lower envelope reaches its appetite (see Figure~\ref{smvd:fig:3d}).
This process mimics the circle-growing method by Hoffman et al.~\cite{hoffman2006} mentioned before: if the $z$-axis is interpreted as time, the growing circles become the cones, and their lower envelope shows which circle reaches each point of the $xy$-plane first.
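Reading the diagram off the lower envelope is immediate once the final cone heights (the radius at which each site's circle stopped growing) are known: a point $p$ is matched to the nearest site $s$ whose distance to $p$ is at most that radius $r_s$, and is unmatched if no such site exists. A sketch (ours; the radii are assumed as inputs here, and computing them is the hard part):

```python
from math import hypot

def match_point(p, sites, radii):
    """Lower envelope of truncated cones: return the index of the nearest
    site whose truncated cone covers p, or None if p is unmatched."""
    best, best_d = None, float("inf")
    for i, (s, r) in enumerate(zip(sites, radii)):
        d = hypot(p[0] - s[0], p[1] - s[1])
        if d <= r and d < best_d:     # cone i is defined (and lowest) at p
            best, best_d = i, d
    return best

# two sites whose circles stopped at radius 1
sites, radii = [(0.0, 0.0), (3.0, 0.0)], [1.0, 1.0]
assert match_point((0.2, 0.0), sites, radii) == 0
assert match_point((1.5, 0.0), sites, radii) is None   # outside both disks
```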
\begin{figure}[hbt]
\centering
\reflectbox{\includegraphics[width=.37\linewidth]{3dshape}}\hspace*{3em}
\includegraphics[width=.3\linewidth]{3dtopview}
\caption{View of a stable-matching Voronoi diagram of 3 sites
as the lower envelope of a set of cones.}
\label{smvd:fig:3d}
\end{figure}
\noindent A stable-matching Voronoi diagram consists of three types of elements:
\begin{itemize}
\item A \emph{face} is a maximal, closed, connected subset of a stable cell (the region $C_s$ of a site).
The stable cells can be disconnected, that is, a cell can have more than one face.
There is also one or more \emph{empty faces},
which are maximal connected regions not assigned to any site.
One of the empty faces is the \emph{external face}, which is
the only face with infinite area.
\item An \emph{edge} is a maximal line segment
or circular arc on the boundary of two faces.
We call the two types of edges \emph{straight} and \emph{curved edges}, respectively.
For curved edges, we distinguish between its incident convex face
(the one inside the circle along which the edge lies) and its incident
concave face.
\item A \emph{vertex} is a point shared by more than one edge. Generally, edges end at vertices, but curved edges may have no endpoints when they form a complete circle. This situation arises when the region of a site is isolated from other sites.
\end{itemize}
We say a set of sites with appetites is \textit{not} in general position if two curved edges of the stable-matching Voronoi diagram are tangent, i.e., touch at
a point $p$ that is not an endpoint
(e.g., two circles of radius 1 with centers 2 units apart).
In this special case, we consider that the curved edges are split at $p$, and that $p$ is a vertex.
In order to study the topology of the stable-matching Voronoi diagram, let the \emph{bounding disk}, $B_s$, of a site, $s$, be the smallest closed disk centered at $s$ that contains the stable cell of $s$.
The bounding disks arise in the topology of the diagram due to the following lemma:
\begin{lemma}\label{smvd:lem:boundary}
If part of the boundary between a face of site $s$ and a face of site $s'$ lies in the half-plane closer to $s$ than to $s'$, then that part of the boundary must lie along the boundary of the bounding disk $B_s$, and the convex face must belong to $s$.
\end{lemma}
\begin{proof}
The boundary between the faces of $s$ and $s'$ cannot lie outside of $B_s$, by definition of the bounding disk.
If the boundary is in the half-plane closer to $s$, then it also cannot be in the interior of $B_s$, because then there would exist a point $p$ inside $B_s$ and in the half-plane closer to $s$, but matched to $s'$ (see Figure~\ref{smvd:fig:boundary}).
In such a situation, $s$ and $p$ would be a blocking pair: $s$ prefers $p$ to the point(s) matched to it along $B_s$, and $p$ prefers $s$ to $s'$.
\end{proof}
\begin{figure}
\centering
\includegraphics[width=.3\linewidth]{boundary}
\caption{Illustration of the setting in the proof of Lemma~\ref{smvd:lem:boundary}.
It shows the perpendicular bisector of two sites $s$ and $s'$ (dotted line), the boundary of the bounding disk, $B_s$, of $s$ (dashed circular arc), and a hypothetical boundary between the faces of sites $s$ and $s'$ (solid curve).
In this setting, $s$ and $p$ would be a blocking pair.}
\label{smvd:fig:boundary}
\end{figure}
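The blocking-pair condition used in these proofs can be checked mechanically on a finite point set. The following sketch is only an illustration of the definition (the helper and its interface are our own, not part of the paper): a matching is a map from points to site indices, and a pair $(p,s)$ blocks it if $p$ is closer to $s$ than to its current match while $s$ is closer to $p$ than to its farthest matched point.

```python
import math

def blocking_pairs(sites, match):
    """Report blocking pairs (point, site index) for a finite matching.

    `match` maps each point (a tuple) to a site index, or None if the
    point is unmatched. A site prefers a point p to its current
    assignment if p is strictly closer than the site's farthest
    matched point (its least-preferred assigned point).
    """
    # Farthest matched distance per site.
    farthest = {}
    for p, i in match.items():
        if i is not None:
            d = math.dist(p, sites[i])
            farthest[i] = max(farthest.get(i, 0.0), d)
    pairs = []
    for p, i in match.items():
        d_current = math.inf if i is None else math.dist(p, sites[i])
        for j in range(len(sites)):
            d = math.dist(p, sites[j])
            # p prefers s_j, and s_j prefers p to its farthest point.
            if j != i and d < d_current and d < farthest.get(j, 0.0):
                pairs.append((p, j))
    return pairs
```

For instance, matching each of two points to the farther of two sites yields two blocking pairs, while the nearest-site matching yields none.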
\begin{lemma}\label{smvd:lem:bounding}
The union of non-empty faces of the diagram is the union of the bounding disks of all the sites.
\end{lemma}
\begin{proof}
For any site $s$, all the points inside the bounding disk of $s$ must be matched.
Other\-wise, there would be a point, say, $p$, not matched to anyone but closer to $s$ than points actually matched to $s$ (along the boundary of $B_s$), which would be unstable, as $p$ and $s$ would be a blocking pair.
Moreover, points outside of all the bounding disks cannot be matched to anyone, by definition of the bounding disks.
\end{proof}
\begin{lemma}[Characterization of edges]\label{smvd:lem:edges}
$ $
\begin{enumerate}
\item A straight edge separating faces of sites $s$ and $s'$ can only lie along the perpendicular bisector of $s$ and $s'$.
\item A curved edge whose convex face belongs to site $s$ lies along the boundary of $B_s$.
Moreover, if the concave face belongs to a site $s'$, the edge must be contained in the half-plane closer to $s$ than $s'$.
\item Empty faces can only be concave faces of curved edges.
\end{enumerate}
\end{lemma}
\begin{proof}
Claims~(1) and~(2) are consequences of Lemma~\ref{smvd:lem:boundary}, and Claim~(3) is a consequence of Lemma~\ref{smvd:lem:bounding}.
\end{proof}
\section{Combinatorial Complexity}\label{smvd:sec:bounds}
\subsection{Upper Bound on the Number of Faces}
As mentioned in Section~\ref{smvd:sec:geo}, a stable-matching Voronoi diagram can be viewed as the lower envelope of a set of cones.
Sharir and Agarwal~\cite{SA95} provide results that characterize the combinatorial complexity of the lower envelope of certain sets of functions, including cones.
Formally, the \emph{lower envelope} (also called \emph{minimization diagram}) of a set of bivariate continuous functions $F=\{f_1(x,y),\ldots,f_n(x,y)\}$ is the function
$$E_F(x,y)=\min_{1\leq i\leq n} f_i(x,y),$$
where ties are broken arbitrarily.
The lower envelope of $F$ subdivides the plane into maximal connected regions such that $E_F$ is attained by a single function $f_i$ (or by no function at all).
The \emph{combinatorial complexity} of the lower envelope $E_F$, denoted $K(F)$, is the number of maximal connected regions of $E_F$.
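As a concrete illustration, the lower envelope of the truncated cones can be evaluated pointwise: each site $s$ contributes the partial function $f_s(p)=d(p,s)$, defined only inside the bounding disk of $s$. The sketch below assumes the bounding radii are already known (they are not, in general, available up front) and returns the site attaining the minimum at a query point, or None outside every cone.

```python
import math

def envelope_site(p, sites, radii):
    """Index of the site whose truncated cone attains the lower
    envelope at point p, or None if p lies outside every cone's
    domain (i.e., outside every bounding disk)."""
    best, best_dist = None, math.inf
    for i, (s, r) in enumerate(zip(sites, radii)):
        d = math.dist(p, s)
        if d <= r and d < best_dist:  # cone i is defined at p and lower
            best, best_dist = i, d
    return best
```

Ties are broken by site order, consistent with the arbitrary tie-breaking in the definition.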
To prove our upper bound, we use the following result:
\begin{lemma}[Sharir and Agarwal~\cite{SA95}, page 191]\label{smvd:lem:envelope}
The combinatorial complexity $K(F)$ of the lower envelope of a collection $F$ of $n$ (partially defined) bivariate functions that satisfy the assumptions below is $O(n^{2+\varepsilon})$, for any $\varepsilon>0$.\footnote{The theorem, as stated in the book (Theorem 7.7), includes some additional assumptions, but the book then shows that they are not essential.}
\begin{itemize}
\item Each $f_i\in F$ is a portion of an algebraic surface of the form $P_i(x,y)$, for some polynomial $P_i$ of constant maximum degree.
\item The vertical projection of each $f_i\in F$ onto the $xy$-plane is a planar region bounded by a constant number of algebraic arcs of constant maximum degree.
\end{itemize}
\end{lemma}
\begin{corollary}\label{smvd:cor:upper}
A stable-matching Voronoi diagram for $n$ sites has $O(n^{2+\varepsilon})$ faces, for any $\varepsilon>0$.
\end{corollary}
\begin{proof}
It is clear that the finite, ``upside-down'' cones whose lower envelope forms the stable-matching Voronoi diagram of a set of sites satisfy the above assumptions. In particular, their projections onto the $xy$-plane are disks.
Note that the bound still applies if we include the empty faces, as Lemma~\ref{smvd:lem:envelope} still holds if we add an extra bivariate function $f_{n+1}(x,y)=z^*$, where $z^*$ is any value higher than the height of any cone (i.e., $f_{n+1}$ is a plane that ``hovers'' over the cones). Such a function would have a face in the lower envelope for each empty face in the stable-matching Voronoi diagram.
\end{proof}
\subsection{Upper bound on the Number of Edges and Vertices}\label{smvd:app:boundedges}
Euler's formula relates the number of faces in a planar graph with the number of vertices and edges. By viewing the stable-matching Voronoi diagram as a graph, we can use Euler's formula to prove that the $O(n^{2+\varepsilon})$ upper bound also applies to the number of edges and vertices. In order to do so, we will need to show that the average degree is more than two, which is the purpose of the following lemmas.
In this section, we assume that sites are in general position (as defined in Section~\ref{smvd:sec:geo}).
However, note that non-general-position
constructions cannot yield the worst-case complexity:
if two curved edges touch at a point that is not an endpoint, we can slightly perturb the site locations to bring the sites a little closer together, which creates a new vertex and a new edge.
For the same reason, we also assume that no vertex has degree four or more, as that would require four or more sites to lie on a common circle (as in the standard Voronoi diagram).
\begin{lemma}\label{smvd:lem:edgesequences}
The following sequences of consecutive edges along the boundary of two faces cannot happen: \textbf{1.} Straight--straight. \textbf{2.} Curved--curved. \textbf{3.} Straight--curved--straight.
\end{lemma}
\begin{proof}
$ $
\begin{enumerate}
\item Straight edges separating two faces of sites $s$ and $s'$ are constrained to lie along the perpendicular bisector of $s$ and $s'$ (Lemma~\ref{smvd:lem:edges}).
Therefore, two consecutive straight edges would not be maximal.
\item A curved edge whose convex face belongs to a site $s$ is constrained to lie along the boundary of the bounding disk of $s$ (Lemma~\ref{smvd:lem:edges}).
Thus, two consecutive curved edges would not be maximal (under the assumption of general position).
\item In such a case, not both straight edges could lie along the perpendicular bisector.\qedhere
\end{enumerate}
\end{proof}
Incidentally, \emph{curved--straight--curved} sequences \emph{can} happen, and can be seen in Figure~\ref{smvd:fig:comparison}.
\begin{lemma}\label{smvd:lem:degreetwo}
A vertex with degree two cannot be adjacent to two vertices with degree two.
\end{lemma}
\begin{proof}
A vertex with degree two connects two edges separating the same two faces.
If a vertex with degree two were adjacent to two other vertices with degree two, we would have either four consecutive edges separating the same two faces or a triangular face inside another face.
However, both cases contain one of the edge sequences forbidden by Lemma~\ref{smvd:lem:edgesequences}.
\end{proof}
\begin{lemma}~\label{smvd:lem:avgdegree}
The average degree is at least $2.25$.
\end{lemma}
\begin{proof}
Note that all vertices have degree at least $2$: they are endpoints of edges, and since every edge has a different face on each side, an edge cannot simply terminate at a degree-one vertex. Also recall the assumption that no vertex has degree more than three, as such configurations cannot yield a worst-case number of vertices or edges.
Thus, every vertex has degree two or three. Let $n_2$ be the number of degree-two vertices, and $n_3$ the number of degree-three vertices.
The average degree is $(2n_2+3n_3)/(n_2+n_3)=2+n_3/(n_2+n_3)$. Thus, we need to show that $n_3/(n_2+n_3)\geq 1/4$ or, rearranging, that $n_3\geq n_2/3$.
By Lemma~\ref{smvd:lem:degreetwo}, a degree-two vertex cannot be adjacent to two degree-two vertices.
Among the $n_2$ degree-two vertices, say $m_1$ are adjacent to exactly one other degree-two vertex, while the remaining $m_2$ are adjacent only to degree-three vertices. Then, there are $m_1+2m_2=n_2+m_2$ edges connecting degree-two vertices to degree-three vertices.
Since each degree-three vertex is incident to at most three such edges, $n_3\geq (n_2+m_2)/3\geq n_2/3$, completing the proof.
\end{proof}
\begin{lemma}\label{smvd:lem:upperedges}
Let $V$, $E$, and $F$ be the numbers of vertices, edges, and faces of the stable-matching Voronoi diagram of a set of sites $S$. Then, $V\leq 8F-16$ and $E\leq 9F-18$.
\end{lemma}
\begin{proof}
For this proof, suppose that there are no curved edges that form a full circle. Note that the presence of such edges can only reduce the number of vertices and edges, as for each such edge there is a site with a single edge and no vertices.
Without such edges, the vertices and edges of the stable-matching Voronoi diagram form a planar graph, and $V,E,F$ are the number of vertices, edges, and faces of this graph, respectively. Moreover, let $C$ be the number of connected components.
By Euler's formula for planar graphs, $F=E-V+C+1$; since $C\geq 1$, we have $F\geq E-V+2$.
Moreover, by Lemma~\ref{smvd:lem:avgdegree}, the sum of degrees is at least $2.25V$, so $2E\geq 2.25V$.
Combining the two relations above, $F\geq E-V+2\geq \tfrac{9}{8}V-V+2=\tfrac{1}{8}V+2$, which gives $V\leq 8F-16$; similarly, $F\geq E-\tfrac{8}{9}E+2=\tfrac{1}{9}E+2$, which gives $E\leq 9F-18$.
\end{proof}
We conclude by stating the main theorem of this section, which is a combination of Corollary~\ref{smvd:cor:upper} and Lemma~\ref{smvd:lem:upperedges}:
\begin{theorem}\label{smvd:the:upbound}
A stable-matching Voronoi diagram for $n$ point sites
has $O(n^{2+\varepsilon})$ faces, vertices, and edges, for any $\varepsilon>0$.
\end{theorem}
\subsection{Lower Bound}\label{smvd:sec:lower}
We show a quadratic lower bound on the worst-case number of faces by constructing an infinite family of instances with $\Omega(n^2)$ faces. We first give such a family in which sites have arbitrary appetites; this introduces the technique behind our second, more intricate construction, which uses only sites with appetite $1$ and thus shows that the $\Omega(n^2)$ bound holds even when all the sites have the same appetite. The lower bound extends trivially to vertices and edges as well.
\begin{lemma}\label{smvd:lem:lowerPre}
A stable-matching Voronoi diagram for $n$ point sites
has $\Omega(n^2)$ faces, edges, and vertices in the worst case.
\end{lemma}
\begin{proof}
Consider the setting in Figure~\ref{smvd:fig:lowerPre}.
Assume $n$ is even. We divide the sites into two sets, $X$ and $Y$, of size $m=n/2$ each.
The sites in $X$ are arranged vertically, spaced evenly, and spanning a total height of $2$. Note that the \textit{standard} Voronoi diagram of the sites in $X$ alone consists of infinite horizontal strips. The top and bottom sites have strips extending vertically indefinitely, while the rest have thin strips of height $2/(m-1)$.
The sites in $Y$ are placed at the same height as the vertical center of the strips, half of them on each side of the sites in $X$. The sites in $Y$ have appetite $\pi$, so their ``ideal'' stable cell is a disk of radius $1$ around them. They are spaced evenly at a distance of at least $2$ (e.g., $2.1$) from each other and from the sites in $X$, so that each site in $Y$ is the first choice of all the points within distance $1$ of it.
Now, consider the resulting stable-matching Voronoi diagram when the sites of $X$ have large ($\gg m^2$) and equal appetites.
To visualize it, consider the circle-growing method from~\cite{hoffman2006} described in Section~\ref{smvd:sec:intro}, where a circle starts growing from each site at the same time and rate, and any unassigned point reached by a circle is assigned to the corresponding site.
The sites in $Y$ are allowed to grow without interference with any other site until they fulfill their appetite and freeze. Their region is thus a disk with diameter $2$, which spans all the thin strips (Figure~\ref{smvd:fig:lowerPre}).
Each site in $X$ starts growing its region as a disk, which quickly reaches the disks of the sites above and below it. From then on, the regions are restricted to keep growing along the horizontal strips.
The sites in $X$ keep growing and eventually reach a region already assigned to a site in $Y$ (which already fulfilled their appetite and stopped growing by this time). They continue growing along the strips past the regions already assigned to sites in $Y$. Eventually, they also freeze when they fulfill their appetite. The top and bottom sites are the first to fulfill their appetite, since they are not restricted to grow along thin strips, but we are not interested in the topology of the diagram beyond the thin strips. The only thing we need for our construction is that the appetite of the sites in $X$ is large enough so that their stable cells reach past the stable cells of the furthest sites in $Y$ along the strips.
Informally, the regions of the sites in $Y$ ``cut'' the thin strips of the sites in $X$. Each site in $Y$ creates $m-2$ additional faces (the top and bottom sites in $X$ do not have thin strips), and hence the number of faces is at least $m(m-2)=\Omega(n^2)$.
\end{proof}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{lowerPre}
\caption{Lower bound construction for Lemma~\ref{smvd:lem:lowerPre}.}
\label{smvd:fig:lowerPre}
\end{figure}
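The circle-growing process invoked in this proof can be simulated on a discrete grid: every (cell, site) pair is processed in increasing order of distance, and a cell is assigned to the first site that reaches it while still having unfilled appetite. This is only an illustrative sketch with hypothetical parameters, not the construction itself; appetites are converted to cell counts using the cell area.

```python
import math

def circle_growing(sites, appetites, width, height, step=0.1):
    """Grid simulation of the circle-growing method: circles grow from
    all sites at the same rate, and each cell is claimed by the first
    still-hungry site whose circle reaches it."""
    cells = [(ix * step, iy * step)
             for ix in range(round(width / step))
             for iy in range(round(height / step))]
    # Appetite measured in cells (each cell has area step * step).
    capacity = [round(a / (step * step)) for a in appetites]
    # Every (distance, site, cell) event, in increasing distance order.
    events = sorted((math.dist(c, s), i, c)
                    for i, s in enumerate(sites)
                    for c in cells)
    assigned = {}
    for _, i, c in events:
        if c not in assigned and capacity[i] > 0:
            assigned[c] = i
            capacity[i] -= 1
    return assigned
```

With two unit-appetite sites far enough apart in a large enough box, each site ends up with exactly its appetite's worth of cells, nearest first.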
In the proof of Lemma~\ref{smvd:lem:lowerPre}, sites in $X$ and $Y$ have different roles. Sites in $X$ create a linear number of long, thin faces which can all be cut by a single disk. This is repeated a linear number of times, once for each site in $Y$, yielding quadratic complexity. However, this construction relies on the sites in $X$ having larger appetites than the sites in $Y$. Next, we consider the case where all the sites have appetite one.
The proof follows the same idea, but now the long, thin strips are circular.
Lemma~\ref{smvd:lem:annulusintersection} is an auxiliary result used in the proof.
\begin{lemma}\label{smvd:lem:annulusintersection}
Let $A$ be an annulus of width $\varepsilon>0$, and $D$ a disk centered outside the outer circle of $A$, with radius smaller than the inner radius of $A$, and tangent to the inner circle of $A$ (Figure~\ref{smvd:fig:annulusintersection}). Then,
$$\lim\limits_{\varepsilon\rightarrow 0}\frac{area(A\cap D)}{area(A)}=0.$$
\end{lemma}
\begin{figure}
\centering
\includegraphics[width=.43\linewidth]{settinggeometry}
\caption{Setting in Lemma~\ref{smvd:lem:annulusintersection}.}
\label{smvd:fig:annulusintersection}
\end{figure}
\begin{proof}
Consider the smallest circular sector $S$ of $A$ that contains the asymmetric lens $A\cap D$ (the sector determined by angle $\alpha$ in Figure~\ref{smvd:fig:annulusintersection}). Since $A\cap D$ is contained in $S$, to prove the lemma it suffices to show that
$\lim\limits_{\varepsilon\rightarrow 0}\frac{area(S)}{area(A)}=0$. Note that $\frac{area(S)}{area(A)}$ is precisely $\frac{\alpha}{2\pi}$, and it is clear that $\lim\limits_{\varepsilon\rightarrow 0}\frac{\alpha}{2\pi}=0$.
\end{proof}
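This limit is easy to check numerically, assuming (as in the proof) that the minimal sector is bounded by the two points where the boundary of $D$ crosses the outer circle of $A$. The sketch below computes the sector angle $\alpha$ by the law of cosines; the radii in the example are hypothetical.

```python
import math

def sector_angle(inner_radius, disk_radius, eps):
    """Angle, at the center of the annulus A, of the minimal sector
    containing the intersection of A and D, where A has inner radius
    `inner_radius` and width `eps`, and D has radius `disk_radius` and
    is externally tangent to the inner circle of A."""
    d = inner_radius + disk_radius      # distance between the centers
    outer = inner_radius + eps
    # Law of cosines in the triangle formed by the center of A, the
    # center of D, and a crossing of D's boundary with the outer circle.
    cos_half = (d * d + outer * outer - disk_radius ** 2) / (2 * d * outer)
    return 2 * math.acos(min(1.0, cos_half))
```

The ratio $area(A\cap D)/area(A)$ is at most $\alpha/(2\pi)$, and $\alpha$ shrinks with $\varepsilon$.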
\begin{theorem}\label{smvd:thm:lower}
A stable-matching Voronoi diagram for $n$ point sites
has $\Omega(n^2)$ faces, edges, and vertices in the worst case, even when all the sites are restricted to have the same appetite.
\end{theorem}
\begin{proof}
Assume $n$ is a multiple of $4$. We divide the sites into two sets, $X$ and $Y$, of size $m=n/2$ each.
Let $\varepsilon_1,\varepsilon_2$ be two parameters with positive values that may depend on $m$. It will be useful to think of them as very small, since we will argue that the construction works for sufficiently small values of $\varepsilon_1$ and $\varepsilon_2$. Specific values for $\varepsilon_1$ and $\varepsilon_2$ are hard to express analytically but unimportant as long as they are small enough.
The $m$ sites in $X$, $s_1,\ldots,s_m$, lie, in this order, along a circle of radius $\varepsilon_1$.
They are almost evenly spaced around the circle, except that the angle between $s_1$ and $s_m$ is slightly larger than the others: the angle between $s_1$ and $s_m$ is increased by $\varepsilon_2$, and the angles between the rest of pairs of consecutive sites are reduced so that they are all equal (Figure~\ref{smvd:fig:lowerHalf}, Left).
The \textit{standard} Voronoi diagram of the sites in $X$ consists of infinite angular regions, with those of $s_1$ and $s_m$ slightly wider than those of the remaining sites.
Consider the circle-growing method applied to the sites of $X$ alone.
Initially, the regions are constrained to grow within the corresponding angular regions of the standard Voronoi diagram. Since $s_1$ and $s_m$ have wider angles, they fulfill their appetite slightly before the rest, which all grow at the same rate; how much earlier depends on $\varepsilon_2$. Once $s_1$ and $s_m$ fulfill their appetite and stop growing, their angular regions become ``available'' to the other sites. The circles of $s_2$ and $s_{m-1}$ are the closest to the angular regions of $s_1$ and $s_m$, respectively, and thus start covering them to fulfill their appetite. In turn, this results in $s_2$ and $s_{m-1}$ fulfilling their appetite and freezing their circles earlier than the remaining sites. Their respective neighbors, $s_3$ and $s_{m-2}$, have the next closest circles to the angular regions of the sites that already stopped growing, and thus they use those regions to fulfill their appetite. This creates a cascading effect, starting with $s_1$ and $s_m$, where the region of each site consists of a wedge that ends in a thin circular strip ``wrapping around'' the regions of the prior sites (Figure~\ref{smvd:fig:lowerHalf}, Right).
\begin{figure}
\centering
\includegraphics[width=.99\linewidth]{lowerHalf}
\caption{Left: Configuration of the sites in $X$ in the proof of Theorem~\ref{smvd:thm:lower}. The arc between $s_1$ and $s_m$, shown in red, is slightly wider than the rest. Sites $s_2,s_3,s_{m-1},s_{m-2}$ are shown. The remaining sites around the circle are omitted for clarity. Right: the stable cells of the aforementioned sites. The figures are not to scale, as in the actual construction $\varepsilon_1$ and $\varepsilon_2$ need to be much smaller, but even here we can appreciate the ``wrapping around'' effect.}
\label{smvd:fig:lowerHalf}
\end{figure}
As $\varepsilon_2$ approaches zero, the unfulfilled appetite of the sites other than $s_1$ and $s_m$ at the time when $s_1$ and $s_m$ fulfill their appetite becomes arbitrarily small.
This results in arbitrarily thin circular strips. Note, however, that the circular arcs bounding each strip are not exactly concentric, as each one is centered at a different site. Thus, depending on $\varepsilon_1$, the strips might not wrap around all the way to the regions of $s_1$ and $s_m$. However, as $\varepsilon_1$ approaches zero, the sites get closer to each other, and thus their circular arcs become arbitrarily close to being concentric. It follows that if $\varepsilon_1$ is small enough (relative to $\varepsilon_2$), the circular strip of each site will wrap around all the way to the angular region of $s_1$ and $s_m$.
This concludes the first half of the construction, where we have a linear number of arbitrarily
thin, long strips.
Let $A$ be the annulus of minimum width centered at the center of the circle of the sites in $X$ and containing all the circular strips.
The sites in $Y$ lie evenly spaced along a circle concentric with $A$.
The radius of the circle is such that the regions of the sites in $Y$ are tangent to the inner circle of $A$, as in Figure~\ref{smvd:fig:lowerHalf2}.
\begin{figure}
\centering
\includegraphics[width=.45\linewidth]{lowerHalf3}
\caption{Configuration in the proof of Theorem~\ref{smvd:thm:lower}. For clarity, only the regions of $6$ sites in $X$ and 6 sites in $Y$ are shown. The strips inside $A$ are also omitted. The figure is not to scale, as in the actual construction the annulus $A$ needs to be much thinner.}
\label{smvd:fig:lowerHalf2}
\end{figure}
Since the wedges of the sites in $X$ are very
thin, the sites in $Y$ are closer to $A$ than the sites in $X$. Thus, the presence of $X$ does not affect the stable cells of the sites in $Y$. Each stable cell of a site in $Y$ is the intersection of a disk and a wedge of angle $2\pi/m$, with a total area of $1$ (Figure~\ref{smvd:fig:lowerHalf2}). The important aspect is how the presence of the stable cells of the sites in $Y$ affects the stable cells of the sites in $X$.
Some of the area of $A$ that would be assigned to sites in $X$ is now assigned to sites in $Y$. Thus, the sites in $X$ need to grow further to make up for the lost appetite.
However, recall that $A$ can be arbitrarily thin. Hence, by Lemma~\ref{smvd:lem:annulusintersection}, the fraction of the area of $A$ ``eaten'' by sites in $Y$ can be arbitrarily close to zero. As this fraction tends to zero, the distance that the sites in $X$ need to reach further to fulfill the lost appetite also tends to zero. Thus, if $A$ is sufficiently thin, the distance that the regions of $s_1$ and $s_m$ reach further is so small that the strips of $s_2$ and $s_{m-1}$ still wrap around the regions of $s_1$ and $s_m$, respectively, to fulfill their appetite. Similarly, the strips of $s_3$ and $s_{m-2}$ still wrap around the regions of the prior sites, and so on. Thus, if $A$ is sufficiently thin, the strips of all the sites in $X$ still wrap around to the regions of $s_1$ or $s_m$.
In this setting, half of the strips are at least as long as a quarter of the circle, and each of those gets broken into $\Theta(m)$ faces by the regions of the sites in $Y$.
Therefore, the circular strips are collectively broken into
a total of $\Theta(m^2)=\Theta(n^2)$ faces.
\end{proof}
\section{Algorithm}\label{smvd:sec:algo}
In general, a stable-matching Voronoi diagram cannot be computed in an algebraic model of computation, as it requires evaluating transcendental functions, such as inverse trigonometric functions.
\begin{observation}\label{smvd:obs:transcentental}
For infinitely many sets of sites in general position and
with algebraic coordinates, the radii of some of the sites' bounding
disks cannot be computed exactly in an algebraic model of computation.
\end{observation}
\begin{proof}
Consider a set with only two sites, $s_1$ and $s_2$, with appetite~1 and aligned horizontally at distance~$2b$
from each other. By symmetry, the two bounding disks will have the same
radius~$r$. Assume that $b<\sqrt{1/\pi}$, so that the stable cells of $s_1$ and $s_2$ share a vertical edge. Consider the right triangle with one vertex at $s_1$, another at the midpoint between $s_1$ and $s_2$, and the third at the top of the shared vertical edge (see Figure~\ref{smvd:fig:trigo}). Let $\alpha$ be the angle of the triangle at the vertex at $s_1$, and $a$ the length of the opposite side. The problem is, then, to determine the value of~$r$ which
satisfies
\[
\pi r^2 \left(1-\frac{2\alpha}{2\pi}\right) + 2\cdot \frac{ab}{2} = 1.
\]
Using the equalities~$\sin \alpha = a/r$ and~$\cos \alpha = b/r$, we obtain
\[
r^2 (\pi - \cos^{-1} \frac{b}{r}) + br \sin (\cos^{-1} \frac{b}{r}) = 1,
\]
that is,~$r$ is the solution of the equation
\[
r^2 (\pi - \cos^{-1} \frac{b}{r}) + b \sqrt{r^2-b^2} = 1,
\]
which cannot be solved in an algebraic model of computation because $\cos^{-1}$ is a transcendental (i.e., non-algebraic) function.
Such a configuration appears in infinitely many sets of sites, implying the claim.
\end{proof}
\begin{figure}[htb]
\centering
\includegraphics[width=.475\linewidth]{trigo}
\caption{Setting in the proof of Observation~\ref{smvd:obs:transcentental}.}
\label{smvd:fig:trigo}
\end{figure}
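Although $r$ has no algebraic closed form, it can be approximated to arbitrary precision by standard numeric methods. Below is a minimal bisection sketch for the final equation, assuming appetite $1$ and a hypothetical half-distance $b=0.3$; since the area assigned to a site grows with $r$, the left-hand side is monotone in $r$ and bisection applies.

```python
import math

def cell_area(r, b):
    """Left-hand side of the equation: area of one stable cell when the
    bounding disk has radius r and the two sites are 2b apart."""
    return r * r * (math.pi - math.acos(b / r)) + b * math.sqrt(r * r - b * b)

def bounding_radius(b, appetite=1.0, tol=1e-12):
    """Bisection for the radius r solving cell_area(r, b) = appetite."""
    lo, hi = b, 2.0  # cell_area(b, b) = pi*b^2 < 1 and cell_area(2, b) > 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cell_area(mid, b) < appetite:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

This is only a sketch of the numeric approximation the section alludes to, not the geometric primitive defined later.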
Thus, in order to describe an exact and discrete algorithm, we rely on a geometric primitive. This primitive, which we define, encapsulates the problematic computations, and can be approximated numerically to arbitrary precision. In Section~\ref{smvd:sec:convex}, we show how to compute the geometric primitive exactly for polygonal convex distance functions.
\paragraph{Preliminaries.}
Let us introduce the notation used in this section.
The algorithm deals with multiple diagrams.
In this context, a \emph{diagram} is a subdivision of $\mathbb{R}^2$ into \emph{regions}. Each region is a set of one or more faces bounded by straight and circular edges. The regions do not overlap except along boundaries (boundary points are included in more than one region).
Each region is assigned to a unique site, but not all sites necessarily have a region. There is also an ``unassigned'' region consisting of the remaining faces. The \emph{domain} of a diagram is the subset of points of $\mathbb{R}^2$ in any of its assigned regions.
If $D$ is a diagram, $D(s)$ denotes the region of site $s$, which might be empty. If $D$ and $D'$ are diagrams and the domain of $D$ is a subset of the domain of $D'$, we say that $D$ and $D'$ are \emph{coherent} if, for every site $s$, $D(s)\subseteq D'(s)$. The data structures used to represent diagrams are discussed later.
Recall that we are given a set $S$ of $n$ sites, each with its own appetite $A(s)$.
The goal is to compute the (unique) stable-matching diagram of $S$ for those appetites, denoted by $D^*$.
For a site $s$, let $B^*(s)$ be the bounding disk of $D^*(s)$, and $r^*(s)$ the radius of $B^*(s)$ (the $^*$ superscript is used for notation relating to the sought solution).
Recall that the union of all the bounding disks $B^*(s)$ equals the domain of $D^*$ (Lemma~\ref{smvd:lem:bounding}), and that the bounding disks may not be disjoint.
We call an ordering $s_1,\ldots,s_n$ of the sites of $S$ \textit{proper} if the sites are sorted by increasing radius of their bounding disks, breaking ties arbitrarily. That is, $i<j$ implies $r^*(s_i)\leq r^*(s_j)$. Such an ordering is initially unknown, but it is discovered in the course of the algorithm.
Given a proper ordering, for $i=1,\ldots,n$,
let $B_{1..i}=\{B^*(s_1), \ldots, B^*(s_i)\}$ denote the set of bounding disks of the first $i$ sites, and $\cup B_{1..i}=B^*(s_1) \cup \cdots \cup B^*(s_i)$ the union of those disks. Let $\hat{B}(s_i)=B^*(s_i)\setminus \cup B_{1..i-1}$ be the part of $B^*(s_i)$ that is not inside a prior bounding disk in the ordering.
Let $S_{i..n}=\{s_i,\ldots,s_n\}$, and $V_{i..n}$ be the \emph{standard} Voronoi diagram of $S_{i..n}$.
Finally, let $\hat{V}_{i..n}$ be $V_{i..n}$ restricted to the region $\hat{B}(s_i)$.
This notation is illustrated in Figure~\ref{smvd:fig:notation}.
\begin{figure}[h!]
\centering
\includegraphics[width=.55\linewidth]{notation}
\caption{Notation used in the algorithm. The disks in $B_{1..3}$ are shown in black, the edges of $V_{4..9}$ are shown dashed in blue, and the interior of $\hat{B}(s_4)$ is shown in orange. The edges of $\hat{V}_{4..9}$ are overlaid on top of everything with red lines. Note that $\hat{V}_{4..9}$ is a diagram with three assigned regions, the largest assigned to $s_4$ and the others to unlabeled sites.}
\label{smvd:fig:notation}
\end{figure}
\paragraph{Incremental construction.}
The algorithm constructs a sequence of diagrams, $D_0,\dotsc,D_n$.
The starting diagram, $D_0$, has an empty domain. We expand it incrementally until $D_n=D^*$. The diagrams are constructed in a greedy fashion: every $D_i$ is coherent with $D^*$. Thus, once a subregion of the plane is assigned in $D_i$ to some site, that assignment is definitive and remains part of $D_{i+1},\ldots, D_n$.
We construct $D^*$ one bounding disk at a time, ordered according to a proper ordering $s_1,\ldots,s_n$ (we address how to find this ordering later): the domain of each $D_i$ is $\cup B_{1..i}$.\footnote{An intuitive alternative approach is to construct $D^*$ one stable cell at a time. This is also possible, but the advantage of constructing it by bounding disks is that the topology of the intermediate diagrams $D_i$ is simpler, as it can be described as a union of disks, whereas stable cells have complex (and even disjoint) shapes. The simpler topology makes the geometric operations we do on these diagrams easier, in particular the geometric primitive from Definition~\ref{smvd:def:prim}.} Thus, $D_i$ can be constructed from $D_{i-1}$ by assigning $\hat{B}(s_i)$ (the $ \hat{\mbox{ } } $ mark is used for notation relating to the region added to $D_i$ at iteration $i$).
Since the boundaries of the bounding disks do not necessarily align with the edges of $D^*$, $D_i$ may contain a face of $D^*$ only partially. This can be seen in Figure~\ref{smvd:fig:newalgo}, which illustrates the first few steps of the incremental construction.
Further figures can be seen in Appendix~\ref{smvd:app:steps}.
At iteration $i$, we assign $\hat{B}(s_i)$ as follows. From $\hat{B}(s_i)$ and the standard Voronoi diagram of the remaining sites, $V_{i..n}$, we compute the diagram $\hat{V}_{i..n}$. We then construct $D_i$ as the combination of $D_{i-1}$ and $\hat{V}_{i..n}$. That is, for each site $s$, $D_i(s)=D_{i-1}(s)\cup \hat{V}_{i..n}(s)$.
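On a finite grid, and assuming the bounding radii are already known (in the actual algorithm they are discovered on the fly via the geometric primitive), one pass of this incremental construction can be sketched as follows; the helper names are our own.

```python
import math

def incremental_assign(sites, radii, cells):
    """Sketch of the incremental construction on grid cells: process
    sites in increasing order of bounding-disk radius; the part of each
    disk not claimed by earlier disks is split according to the standard
    Voronoi diagram of the not-yet-processed sites."""
    order = sorted(range(len(sites)), key=lambda i: radii[i])
    assigned = {}
    for k, i in enumerate(order):
        remaining = order[k:]        # S_{i..n}: sites not yet processed
        for c in cells:
            if c in assigned:        # already inside a prior disk
                continue
            if math.dist(c, sites[i]) <= radii[i]:
                # The nearest remaining site gets the cell.
                assigned[c] = min(remaining,
                                  key=lambda j: math.dist(c, sites[j]))
    return assigned
```

Cells outside every bounding disk stay unassigned, mirroring the fact that the domain of $D_n$ is the union of the bounding disks.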
We first show that this assignment is correct.
\begin{figure}[t!] \centering
\begin{minipage}[b]{0.48\linewidth}
\fbox{\includegraphics[trim={0 4.7cm 0 2cm},clip,width=0.973\linewidth]{alg4}}\\[2pt]
\fbox{\includegraphics[trim={0 4.7cm 0 2cm},clip,width=0.973\linewidth]{alg6}}\\[2pt]
\fbox{\includegraphics[trim={0 4.7cm 0 2cm},clip,width=0.973\linewidth]{alg8}}
\end{minipage} \begin{minipage}[b]{0.48\linewidth}
\fbox{\includegraphics[trim={0 4.7cm 0 2cm},clip,width=0.973\linewidth]{alg5}}\\[2pt]
\fbox{\includegraphics[trim={0 4.7cm 0 2cm},clip,width=0.973\linewidth]{alg7}}\\[2pt]
\fbox{\includegraphics[trim={0 4.7cm 0 2cm},clip,width=0.973\linewidth]{alg9}}
\end{minipage}
\caption{Partial diagrams $D_0,\ldots, D_5$ computed in the first five iterations of the algorithm for a set of sites with equal appetites. At each iteration $i$, the edges of the standard Voronoi diagram $V_{i..n}$ of $S_{i..n}$ are overlaid in thick lines. The edges of the stable-matching Voronoi diagram (unknown to the algorithm) are overlaid in thin lines.} \label{smvd:fig:newalgo}
\end{figure}
\begin{lemma}\label{smvd:lem:corr}
For any $i$ with $0\leq i\leq n$, $D_i$ is coherent with $D^*$.
\end{lemma}
\begin{proof}
We use induction on $i$. The claim is trivial for $i=0$, as no site has a region in $D_0$.
We show that if $D_{i-1}$ is coherent with $D^*$ and $D_i$ is constructed as described, $D_i$ is also coherent. In other words, we show that $\hat{V}_{i..n}$ is coherent with $D^*$.
Let $s$ be an arbitrary site in $S_{i..n}$. We need to show that $\hat{V}_{i..n}(s)\subseteq D^*(s)$.
Let $p$ be an arbitrary point in the interior of $\hat{V}_{i..n}(s)$. We show that $p$ is also an interior point of $D^*(s)$.
First, note that $p$ does not belong to the stable cell of any of $s_1,\ldots,s_{i-1}$: the regions of these sites are fully contained in $\cup B_{1..i-1}$, and the domain of $\hat{V}_{i..n}$ is disjoint from $\cup B_{1..i-1}$ except perhaps along boundaries.
By virtue of being in the interior of $V_{i..n}(s)$, $p$ has $s$ as first choice among the sites in $S_{i..n}$. We show that $s$ also prefers $p$ over \textit{some} of its assigned points in $D^*$, and thus they need to be matched or they would be a blocking pair. We consider two cases:
\begin{itemize}
\item $s=s_i$. In this case, the domain of $\hat{V}_{i..n}$ is $\hat{B}(s_i)\subseteq B^*(s)$, so $s$ prefers $p$ over some of its matched points (those at distance $r^*(s)$).
\item $s\not= s_i$. In this case, note the following three inequalities:
\textit{(i)} $d(p,s)<d(p,s_i)$ because $p$ is in the interior of $V_{i..n}(s)$;
\textit{(ii)} $d(p,s_i)< r^*(s_i)$ because $p$ is in the interior of $B^*(s_i)$;
\textit{(iii)} $r^*(s_i)\leq r^*(s)$ because $s$ appears after $s_i$ in the proper ordering. Chaining all three, we get that $d(p,s)< r^*(s)$, i.e., $p$ is inside the bounding disk of $s$. Thus, $s$ prefers $p$ to some of its matched points (those at distance $r^*(s)$).
\end{itemize}
\end{proof}
\begin{corollary}
The diagrams $D_n$ and $D^*$ are the same.
\end{corollary}
\begin{proof}
The domain of $D_n$ is $\cup B_{1..n}$ by construction. The domain of $D^*$ is also $\cup B_{1..n}$ by Lemma~\ref{smvd:lem:bounding}. By Lemma~\ref{smvd:lem:corr}, the two diagrams are coherent, so it must be that $D_n(s)=D^*(s)$ for every site $s$.
\end{proof}
\paragraph{Finding the next bounding disk.}
The proper ordering $s_1,\ldots,s_n$ cannot be computed in a preprocessing step. Instead, the next site $s_i$ is discovered at each iteration.
Consider the point where we have computed $D_{i-1}$ and want to construct $D_i$ ($1\leq i\leq n$).
At this point, we have found the ordering up to $s_{i-1}$. Hence, we know which sites are in $S_{i..n}$, but we do not know their ordering yet.
In this step, we need to find a site $s$ in $S_{i..n}$ minimizing $r^*(s)$, and we need to find the radius $r^*(s)$ itself. The site $s$ can then be the next site in the ordering, i.e., we can ``label'' $s$ as $s_i$. If there is a tie for the smallest bounding disk among those sites, then there may be several valid candidates for the next site $s_i$. The algorithm finds any of them and labels it as $s_i$.
To find a site $s$ in $S_{i..n}$ minimizing $r^*(s)$, note the following results.
\begin{lemma}\label{smvd:lem:correctness}
If $r^*(s)\leq r^*(s')$, every point $p$ in $D^*(s)$ satisfies $d(p,s)\leq d(p,s')$.
\end{lemma}
\begin{proof}
Suppose, for a contradiction, that $r^*(s)\leq r^*(s')$ and $p$ is a point in $D^*(s)$, but $d(p,s')<d(p,s)$.
Clearly, $p$ prefers $s'$ to $s$. We show that $s'$ also prefers $p$ over some of its assigned points in $D^*$, and thus $p$ and $s'$ are a blocking pair.
If we combine the three inequalities $d(p,s')<d(p,s)$, $d(p,s)\leq r^*(s)$ (because $p$ is in $D^*(s)$), and $r^*(s)\leq r^*(s')$, we see that $d(p,s') < r^*(s')$.
Thus, $s'$ prefers $p$ to the points matched to $s'$ along the boundary of its bounding disk.
\end{proof}
\begin{corollary}\label{smvd:cor:correctness}
For any $s_i$ in a proper ordering, $D^*(s_i)\subseteq V_{i..n}(s_i)$.
\end{corollary}
\begin{proof}
According to Lemma~\ref{smvd:lem:correctness}, every point $p$ in $D^*(s_i)$ satisfies $d(p,s_i)\leq d(p,s_j)$ for any other site $s_j$ with $r_j\geq r_i$, and this includes every site in $S_{i+1..n}$.
\end{proof}
Based on Corollary~\ref{smvd:cor:correctness}, the idea for finding a site with the next smallest bounding disk is to compute what would be the stable cell of each site $s$ in $S_{i..n}$ if it were constrained to be a subset of $V_{i..n}(s)$. As we will see, among those stable cells, the one with the smallest bounding disk is correct.
More precisely, for each site $s$ in $S_{i..n}$, let $A_i(s)=A(s)-area(D_{i-1}(s))$ be the \textit{remaining appetite} of $s$ at iteration $i$: the starting appetite $A(s)$ of $s$ minus the area already assigned to $s$ in $D_{i-1}$. We define an \textit{estimate cell} $D^\dagger_i(s)$ for site $s$ at iteration $i$ as follows:
$D^\dagger_i(s)$ is the union of $D_{i-1}(s)$ and the intersection of $V_{i..n}(s)\setminus \cup B_{1..i-1}$ with a disk centered at $s$, chosen so that this intersection has area $A_i(s)$.
Note that if $area(V_{i..n}(s)\setminus \cup B_{1..i-1})<A_i(s)$, no such disk exists. In this case, $D^\dagger_i(s)$ is not well-defined. If it is well-defined, we use $B^\dagger_i(s)$ to refer to its bounding disk (the smallest disk centered at $s$ that contains $D^\dagger_i(s)$), and $r^\dagger_i(s)$ to refer to the radius of $B^\dagger_i(s)$. Otherwise, we define $r^\dagger_i(s)$ as $+\infty$.
\begin{lemma}\label{smvd:lem:correctestimate}
At iteration $i$, for any site $s\in S_{i..n}$, $r^*(s)\leq r^\dagger_i(s)$. In addition, if $r^*(s)$ is minimum among the radii $r^*$ of the sites in $S_{i..n}$, then, $r^*(s)=r^\dagger_i(s)$ and $D^*(s)=D^\dagger_i(s)$.
\end{lemma}
\begin{proof}
For the first claim, let $s$ be a site in $S_{i..n}$.
Since $D_{i-1}$ is coherent with $D^*$ (Lemma~\ref{smvd:lem:corr}), the region $D_{i-1}(s)$ is in both $D^*(s)$ and $D^\dagger_i(s)$. The appetite of $s$ that is not accounted for in $D_{i-1}(s)$ is $A_i(s)$, and it must be fulfilled outside the domain of $D_{i-1}$, $\cup B_{1..i-1}$.
In $D^\dagger_i(s)$, $s$ fulfills the rest of its appetite with the points in $V_{i..n}(s)\setminus \cup B_{1..i-1}$ closest to it.
Note that all these points have $s$ as first choice among the sites in $S_{i..n}$. Thus, the remaining sites in $S_{i..n}$ cannot ``steal'' those points away from $s$, so $s$ certainly does not need to be matched to points farther than that. In other words, in the worst case for $s$, in $D^*$, $s$ fulfills the rest of its appetite, $A_i(s)$, with those points, and thus $r^*(s)=r^\dagger_i(s)$. However, in $D^*$, $s$ may partly fulfill that appetite with points outside of $V_{i..n}(s)$ (and outside $\cup B_{1..i-1}$, of course) that are even closer. These points do not have $s$ as first choice, but they may end up not being claimed by a closer site. Hence, it could also be that $r^*(s)< r^\dagger_i(s)$. See, e.g., Figure~\ref{smvd:fig:estimateradii}.
For the second claim, if $r^*(s)$ is minimum, we are in the worst case for $s$, because, according to Corollary~\ref{smvd:cor:correctness}, $s$ fulfills the rest of its appetite in $V_{i..n}(s)$ and not outside.
\end{proof}
\begin{figure}[h!]
\centering
\includegraphics[width=.95\linewidth]{estimateradii}
\caption{An instance of two sites with different appetites. The left side shows the regions of the sought diagram, $D^*$, and the actual radii $r^*$ of the sites. The right shows the estimate cells and estimate radii of the sites at iteration $1$. We can see that $r^*(s_1)=r_1^\dagger(s_1)$ and that $r^*(s_2)<r_1^\dagger(s_2)$.}
\label{smvd:fig:estimateradii}
\end{figure}
\begin{corollary}\label{smvd:cor:correctestimate}
At iteration $i$, if $s$ has a smallest estimate radius $r^\dagger_i(s)$ among all the sites in $S_{i..n}$, then $s$ has a smallest actual radius $r^*(s)$ in $D^*$ among all the sites in $S_{i..n}$.
\end{corollary}
\begin{comment}
\begin{proof}
Let $x$ be the value of the smallest actual radius. Then, every site has estimate radius at least $x$ (Lemma~\ref{smvd:lem:correctestimate}). Further, there are sites with exactly estimate radius $x$: in particular, those with actual radius $x$ (Lemma~\ref{smvd:lem:correctestimate} again). Thus, a site $s$ with smallest estimate radius has estimate radius $r^\dagger_i(s)=x$, and so it must also have actual radius $r^*(s)=x$.
\end{proof}
\end{comment}
Corollary~\ref{smvd:cor:correctestimate} gives us a way to find the next site $s_i$ in a proper ordering: compute the estimate radii of all the sites, and choose a site with a smallest estimate radius. To do this, we need to be able to compute the estimate radii $r^\dagger_i(s)$.
This is the most challenging step in our algorithm.
Indeed, Observation~\ref{smvd:obs:transcentental} speaks to its difficulty.
To circumvent this problem, we encapsulate the computation of each $r^\dagger_i(s)$ in a geometric primitive that can
be approximated numerically in an
algebraic model of computation.
For the sake of the algorithm description,
we assume the existence of a black-box function that allows us to
compute the following geometric primitive.
\begin{definition}[Geometric primitive]\label{smvd:def:prim} Given a convex polygon $P$,
a point $s$ in $P$, an appetite $A$, and a set $C$ of disks,
return the radius $r$ (if it exists)
such that the area of the intersection of $P\setminus C$
and a disk centered at $s$ with radius $r$ equals $A$.
\end{definition}
In the context of our algorithm, the point $s$ is a site in $S_{i..n}$,
the appetite $A$ is the remaining appetite $A_i(s)$ of $s$,
the polygon $P$ is the Voronoi cell $V_{i..n}(s)$,
and the set of disks $C$ is $B_{1..i-1}$.
Note that such a primitive could be approximated
numerically to arbitrary precision with a binary search like the one described later in Section~\ref{smvd:sec:convex}.
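To make this concrete, here is a minimal Python sketch (not part of the original algorithm) of such a numerical binary search. All names are hypothetical, and a midpoint-grid estimate stands in for the exact area computation, which is the part of the primitive that cannot, in general, be performed exactly:

```python
def clipped_disk_area(inside_region, s, r, lo=-2.0, hi=2.0, steps=200):
    """Grid estimate of area({p : inside_region(p) and |p - s| <= r}).

    A stand-in for the exact area of the intersection of P \ C with a
    disk; inside_region encodes membership in P \ C."""
    cell = (hi - lo) / steps
    count = 0
    for i in range(steps):
        x = lo + (i + 0.5) * cell
        for j in range(steps):
            y = lo + (j + 0.5) * cell
            if inside_region(x, y) and (x - s[0]) ** 2 + (y - s[1]) ** 2 <= r * r:
                count += 1
    return count * cell * cell

def primitive_radius(appetite, s, inside_region, r_hi=4.0, iters=40):
    """Binary search on r; valid because the clipped-disk area is
    non-decreasing in r."""
    r_lo = 0.0
    for _ in range(iters):
        r_mid = 0.5 * (r_lo + r_hi)
        if clipped_disk_area(inside_region, s, r_mid) < appetite:
            r_lo = r_mid
        else:
            r_hi = r_mid
    return 0.5 * (r_lo + r_hi)
```

For instance, for the square $[-1,1]^2$ with no already-matched disks and appetite $\pi/4$, the search returns a radius close to $1/2$.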
\paragraph{Implementation and runtime analysis.}
\begin{algorithm}[h!]
\caption{Stable-matching Voronoi diagram algorithm.}
\label{smvd:alg:smvd}
\begin{algorithmic}
\State \textbf{Input:} set $S$ of $n$ sites, and the appetite $A(s)$ of each site $s$.
\State Initialize $S_{1..n}$ as $S$, $V_{1..n}$ as a standard Voronoi diagram of $S$, $B_{1..0}$ as an empty set of disks, $\cup B_{1..0}$ as an empty union of disks, and $D_0$ as an empty diagram.
\State For each site $s\in S$, initialize its remaining appetite $A_1(s)=A(s)$.
\For{$i=1,\ldots,n$}
\For{each site $s$ in $S_{i..n}$}
\parState{Calculate the estimate radius $r^\dagger_i(s)$ and estimate bounding disk $B^\dagger_i(s)$ of $s$ using the primitive from Definition~\ref{smvd:def:prim} with parameters $V_{i..n}(s),s,A_i(s),$ and $B_{1..i-1}$.}
\EndFor
\State Let $s$ be a site in $S_{i..n}$ whose estimate radius $r^\dagger_i(s)$ is minimum.
\State Set $s_i=s$, $r^*(s_i)=r^\dagger_i(s_i)$, and $B^*(s_i)=B^\dagger_i(s_i)$.
\State Compute $\hat{B}(s_i)=B^*(s_i)\setminus \cup B_{1..i-1}$.
\State Compute $\hat{V}_{i..n}$ by partitioning $\hat{B}(s_i)$ according to $V_{i..n}$.
\State Add $\hat{V}_{i..n}$ to $D_{i-1}$ to obtain $D_i$.
\For{each site $s'$ in $S_{i..n}$}
\State Set $A_{i+1}(s')=A_i(s')-area(\hat{V}_{i..n}(s'))$ ($\hat{V}_{i..n}(s')$ might be empty).
\EndFor
\State Set $S_{i+1..n}=S_{i..n}\setminus\{s_i\}$ and $B_{1..i}=B_{1.. i-1}\cup\{B^*(s_i)\}$.
\State Add $B^*(s_i)$ to $\cup B_{1.. i-1}$ to obtain $\cup B_{1..i}$.
\State Remove $s_i$ from $V_{i..n}$ to obtain $V_{i+1..n}$.
\EndFor
\State Return $D_n$.
\end{algorithmic}
\end{algorithm}
Given the preceding discussion, Algorithm~\ref{smvd:alg:smvd} shows the full pseudocode.
It uses the following data structures:
\begin{itemize}
\item $V_{i..n}$: the standard Voronoi diagram of $n$ sites has $O(n)$ combinatorial complexity. It can be initially computed in $O(n\log n)$ time (e.g., see~\cite{Aurenhammer:1991,bookAurenhammer}). It can be updated after the removal of a site in $O(n)$ time~\cite{Gowda83}.
\item $\cup B_{1..i}$: the union of $n$ disks also has $O(n)$ combinatorial complexity~\cite{KLPS,SA95}. To compute $\cup B_{1..i}$ from $\cup B_{1..i-1}$, a new disk can be added to the union in $O(n\log n)$ time, e.g., with a typical plane sweep algorithm.
\item $\hat{V}_{i..n}$: since $\cup B_{1..i-1}$ has $O(n)$ complexity, and the boundary of $B^*(s_i)$ can only intersect each edge of $\cup B_{1..i-1}$ twice, $\hat{B}(s_i)$ also has $O(n)$ complexity. Given that both $\hat{B}(s_i)$ and $V_{i..n}$ have $O(n)$ combinatorial complexity, $\hat{V}_{i..n}$ has $O(n^2)$ combinatorial complexity. The diagram $\hat{V}_{i..n}$ can be computed in $O(n^2\log n)$ time, e.g., with a typical plane sweep algorithm.
\item $D_i$: we do not maintain the faces of the diagram $D_i$ explicitly as ordered sequences of edges. Instead, for each site $s$, we simply maintain the region $D_i(s)$ as the (unordered) set of edges $\cup_{1\leq j\leq i}\;edges(\hat{V}_{j..n}(s))$.
That is, at each iteration $i$, we add to the edge set of each site $s$ the edges bounding the (possibly empty) region of $s$ in $\hat{V}_{i..n}$. Note that after iteration $j$, the set of edges of $s_j$ does not change anymore. Since $\hat{V}_{i..n}$ has $O(n^2)$ complexity for any $i$, the collective size of these edge sets is $O(n^3)$ throughout the algorithm.
\end{itemize}
We wait until the end of the algorithm to construct a proper data structure representing the planar subdivision $D^*$, e.g., a doubly connected edge list (DCEL) data structure.
We construct it from the sets of edges collected during the algorithm. Let $E(s_i)=\cup_{1\leq j\leq i}\;edges(\hat{V}_{j..n}(s_i))$ be the set of edges for a site $s_i$.
\begin{lemma}\label{smvd:lem:glue}
If all the fragments of edges in $E(s_i)$ that overlap with other parts of edges in $E(s_i)$ are removed, then the parts left are precisely the edges of $D^*(s_i)$, perhaps split into multiple parts.
\end{lemma}
\begin{proof}
Since $D^*$ and $D_n$ are the same, every edge $e$ of $D^*(s_i)$ appears in $E(s_i)$. However, $e$ may not appear as a single edge. Instead, it may be split into multiple edges or fragments of edges of $E(s_i)$. This may happen when, for some $s_j$ with $j<i$, the boundary of $\hat{B}(s_j)$ intersects $e$. In this case, in $E(s_i)$, the edge $e$ is split in two at the intersection point. This is because the two parts of the edge are found at different iterations of the algorithm. See, e.g., edge $e$ in Figure~\ref{smvd:fig:glue}.
However, in $E(s_i)$ there may also be edges or fragments of edges which do not correspond to edges of $D^*(s_i)$. These are edges or fragments of edges that actually lie in the interior of $D^*(s_i)$, but that are added to $E(s_i)$ because they lie along the boundary of $\hat{B}(s_j)$ for some $s_j$ with $j<i$, which splits the region of $D^*(s_i)$ along that boundary. Such edges appear exactly twice in $E(s_i)$: once for the face on each side of the split. See, e.g., the edges that lie in the interior of $D^*(s_4)$ in Figure~\ref{smvd:fig:glue}, and note that they are all colored twice (unlike the actual edges of $D^*(s_i)$).
\end{proof}
\begin{figure}[htb]
\centering
\includegraphics[width=.45\linewidth]{glue}
\caption{The union of colored regions is the stable cell $D^*(s_4)$ of the site $s_4$. The algorithm finds it divided into four regions, $\hat{V}_{i..4}(s_4)$ for $i=1,2,3,4$, shown in different colors. The bounding disks of $s_1,s_2,$ and $s_3$, are hinted in dotted lines. The edge $e$ of $D^*(s_4)$, which lies along the perpendicular bisector between $s_3$ and $s_4$, is split between $\hat{V}_{2..4}(s_4)$ and $\hat{V}_{3..4}(s_4)$.}
\label{smvd:fig:glue}
\end{figure}
Given Lemma~\ref{smvd:lem:glue}, we can construct $D^*(s_i)$ from $E(s_i)$ as follows: first, remove all the overlapping fragments of edges in $E(s_i)$. Second, connect the edges with matching endpoints to construct the faces. While doing this, if two straight edges that lie on the same line share an endpoint, merge them into a single edge. Similarly, merge any two curved edges that lie along the same arc and share an endpoint. These are the fragmented edges of $D^*(s_i)$.
Each of these steps can be done with a typical plane sweep algorithm. In more detail, this could be done as follows: sort the endpoints of edges in $E(s_i)$ from left to right. Then, process the edges in the order encountered by a vertical line that sweeps the plane from left to right.
Maintain all the edges intersecting the sweep line, ordered by height of the intersection (e.g., in a balanced binary search tree). In this way, overlapping edges (for the first step) or edges with a shared endpoint (for the second step) can be found quickly in $O(\log n)$ time. Construct the faces of $D^*(s_i)$ as they are passed by the sweep line.
Since the sets $E(s_i)$, for $1\leq i\leq n$, have $O(n^3)$ cumulative combinatorial complexity, sorting all the $E(s_i)$ sets can be done in $O(n^3\log n)$ time. The plane sweeps for all the $s_i$ have overall $O(n^3)$ events, each of which can be handled in $O(\log n)$ time. Thus, the algorithm takes $O(n^3\log n)$ time in total.
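The overlap-removal rule above can be illustrated in one dimension: collinear edge fragments become intervals on a line, and only the portions covered by exactly one fragment survive (duplicated portions lie in the interior of the cell and are discarded). The following Python sketch, with a hypothetical function name and a deliberate simplification of the actual plane sweep, uses the standard event-sorting idea:

```python
def once_covered(intervals):
    """Given intervals (a, b) on a line, return the maximal portions
    covered by exactly one interval, merging adjacent pieces."""
    events = []
    for a, b in intervals:
        events.append((a, 1))    # interval opens
        events.append((b, -1))   # interval closes
    events.sort()                # at ties, closings (-1) come first
    out, depth, prev = [], 0, None
    for x, d in events:
        if depth == 1 and prev is not None and x > prev:
            out.append((prev, x))   # this stretch was covered once
        depth += d
        prev = x
    merged = []                  # glue pieces that share an endpoint
    for a, b in out:
        if merged and merged[-1][1] == a:
            merged[-1] = (merged[-1][0], b)
        else:
            merged.append((a, b))
    return merged
```

For example, with fragments $(0,4)$ and $(2,3)$ the duplicated middle part $(2,3)$ is removed, leaving $(0,2)$ and $(3,4)$.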
\begin{theorem}
\label{smvd:thm:alg}
The stable-matching Voronoi diagram of a set $S$ of $n$ point sites
can be computed in the real-RAM model in $O(n^3\log n)$ time plus $O(n^2)$ calls to a
geometric primitive that has input complexity $O(n)$.
\end{theorem}
\begin{proof}
For the number of calls to the geometric primitive, note that
there are $n$ iterations, and at each
iteration we call the geometric primitive $O(n)$ times.
Any given cell of the standard Voronoi diagram $V_{i..n}$ has $O(n)$ edges, and there are $O(n)$ already-matched disks,
so the input of each call has $O(n)$ size.
Therefore, we make $O(n^2)$ calls to the geometric primitive, each of which
has combinatorial complexity $O(n)$.
Besides primitive calls, the bottleneck of each iteration $i$ is computing $\hat{V}_{i..n}$. This can be done in $O(n^2\log n)$ time, for a total of $O(n^3\log n)$ time over all the iterations.
The final step of reconstructing $D^*$ can also be done in $O(n^3\log n)$ time.
\end{proof}
\subsection{Geometric Primitive for Polygonal Convex Distance Functions}\label{smvd:sec:convex}
In this section, we show how to implement the geometric primitive \emph{exactly} for convex distance functions induced by convex polygons. The use of this class of metrics for Voronoi diagrams was introduced in~\cite{Chew1985}, and studied further, e.g., in~\cite{Ma00}. Intuitively, the polygonal convex distance function, $d_S(a,b)$, induced by a convex polygon $S$, is the factor by which we need to scale $S$, when $S$ is centered at $a$, to reach $b$. Solving the primitive exactly for such metrics is interesting for two reasons. First, this class of distance functions includes many commonly used metrics such as the $L_1$ (Manhattan) and $L_\infty$ (Chebyshev) distances. Second, a convex distance function induced by a regular polygon with a large number of sides can be used to approximate Euclidean distance.
Formally, the distance $d_S(a,b)$ is defined as follows: let $S$ be a convex polygon in $\mathbb{R}^2$ that contains the origin. Then, to compute the distance $d_S(a,b)$ from a point $a$ to a point $b$, we translate $S$ by vector $a$ so that $a$ is at the same place inside $S$ as the origin was. Let $p$ be the point at the intersection of $S$ with the ray starting at $a$ in the direction of $b$. Then, $d_S(a,b)=d(a,b)/d(a,p)$ (where $d(\cdot,\cdot)$ is Euclidean distance).
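As an illustration, the definition above can be evaluated directly when $S$ is given by its vertices. This Python sketch (hypothetical function name, assuming counterclockwise vertex order and the origin in the interior of $S$) uses the equivalent half-plane view: $d_S(a,b)=1/t^*$, where $t^*$ is the largest $t$ with $t(b-a)\in S$:

```python
def convex_distance(S, a, b):
    """d_S(a, b) for a convex polygon S given as CCW vertices with the
    origin in its interior: the factor by which S, centered at a, must
    be scaled to reach b."""
    vx, vy = b[0] - a[0], b[1] - a[1]
    if vx == 0.0 and vy == 0.0:
        return 0.0
    t_star = float("inf")
    n = len(S)
    for i in range(n):
        (x1, y1), (x2, y2) = S[i], S[(i + 1) % n]
        # Edge half-plane nx*x + ny*y <= c, with (nx, ny) the outward
        # normal of a CCW edge and c > 0 since the origin is interior.
        nx, ny = y2 - y1, x1 - x2
        c = nx * x1 + ny * y1
        dot = nx * vx + ny * vy
        if dot > 0.0:
            # Largest t keeping t*(vx, vy) inside this half-plane.
            t_star = min(t_star, c / dot)
    # p = a + t_star*(b - a) lies on the boundary of the translated
    # copy of S, so d_S(a, b) = d(a, b) / d(a, p) = 1 / t_star.
    return 1.0 / t_star
```

With $S$ the square with vertices $(\pm 1,\pm 1)$, this reproduces the $L_\infty$ (Chebyshev) distance.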
Convex distance functions satisfy the triangle inequality, but they may not be symmetric. Symmetry ($d_S(p,q)=d_S(q,p)$) holds if and only if $S$ is symmetric with respect to the origin~\cite{Ma00}. In this section, we assume that $S$ is symmetric with respect to the origin. Another significant difference with Euclidean distance is that the bisector of two points may contain 2-dimensional regions. This happens when the line through the two points is parallel to a side of $S$~\cite{Ma00}. We assume that such degenerate cases do not happen.\footnote{Alternatively, we may redefine the bisector to go along the clockwise-most boundary of the two-dimensional region, as in~\cite{Chew1985}.}
\begin{figure}[htb]
\centering
\includegraphics[width=.4\linewidth]{chessstablematching1}
\includegraphics[width=.4\linewidth]{chessvoronoi}\hspace*{2em}
\caption{Stable-matching Voronoi diagram (left) and standard Voronoi diagram (clipped by a square) (right) for the convex distance function induced by a square centered at the origin, which corresponds to the $L_\infty$ metric.}
\label{smvd:fig:chessvoronoi}
\end{figure}
The discussion from Section~\ref{smvd:sec:geo} applies to diagrams based on polygonal convex distance functions. However, in this setting, all the edges are straight. Recall that, in the Euclidean distance setting, straight edges lie along perpendicular bisectors, while curved edges lie along the boundaries of bounding disks. This is still the case here, but bounding disks are constituted of straight edges.
In this context, disks are called balls. A \emph{ball} is a (closed) region bounded by a translated copy of $S$ scaled by some factor.
Therefore, straight and curved edges should now be referred to as \emph{bisector edges} and \emph{bounding ball edges}, respectively. With this distinction, the results in that section also apply.
Likewise, the algorithm applies as well. However, note that the notion of radius is not well defined for convex distance functions, as they grow at different rates in different directions. Therefore, instead of talking about the radii of the bounding disks, we should talk about the scaling factor of the bounding balls.
Most importantly, the fact that there are no curved edges allows us to compute the diagram exactly in an algebraic model of computation. This is the focus of this section.
We need to reformulate the geometric primitive for the case of convex distance functions. Recall that the polygon $P$ in the primitive should correspond to a Voronoi cell, which is the reason why $P$ is assumed to be convex in the primitive. However, Voronoi cells may not be convex for convex distance functions (see Figure~\ref{smvd:fig:chessvoronoi}). Instead, Voronoi cells of polygonal convex distance functions are star-shaped, with the site in the kernel~\cite{Ma00}. Thus, $P$ will now be a star-shaped polygon. For simplicity, we translate the site $s$ to the origin. Finally, we express the solution as the scaling factor of the wanted ball rather than its radius.
\begin{definition}[Geometric primitive for polygonal convex distance functions]\label{smvd:def:prim2} Given a convex distance function induced by a polygon $S$ symmetric with respect to the origin, a star-shaped polygon $P$ with the origin in the kernel, an appetite $A$, and a set $C$ of balls,
return the scaling factor $r$ (if it exists)
such that $A$ equals the area of the intersection of $P\setminus C$
and $S$ scaled by $r$.
\end{definition}
\paragraph{The algorithm.}
\begin{enumerate}
\item The algorithm begins by computing $P\setminus C$ (which is a polygonal shape that can be concave, have holes, and be disconnected). Then, we triangulate $P\setminus C$ into a triangulation, $T_1$. For each triangle in $T_1$ whose interior intersects one of the \emph{spokes} of $S$ (a ray starting at the origin and going through a vertex of $S$), we divide the triangle along the spoke and re-triangulate each part. After this, the resulting triangulation, $T_2$, has no triangles intersecting any spoke of $S$ except along the boundaries (see Figure~\ref{smvd:fig:triangulation}).
\begin{figure}[htb]
\centering
\includegraphics[width=.7\linewidth]{convexprimitive1}
\caption{Left: an input to the geometric primitive for the convex distance function induced by a square, where the balls in $C$ are shown in dashed red lines. Right: the corresponding triangulation $T_2$ of $P\setminus C$ where no triangle intersects any spoke of $S$ (shown in red, they are also part of the triangulation).}
\label{smvd:fig:triangulation}
\end{figure}
\item The next step is to narrow down the range of possible values of $r$.
We compute, for each vertex $v$ in $T_2$, the distance from the origin $d_S(O,v)$, and sort the vertices from shortest to longest distance. If two or more vertices are at the same distance, we discard all but one, so that we have a sorted list $L$ with only one vertex for each distance. Now, we search for two consecutive vertices $v_1$ and $v_2$ in $L$ such that $d_S(O,v_1)\leq r\leq d_S(O,v_2)$ (or conclude that $r$ does not exist). To find $v_1$ and $v_2$, we can use binary search on the list $L$: for a vertex $v$, we compute the area of the intersection of $P\setminus C$ and a ball centered at the origin passing through $v$ (this can be done by adding the individual contribution of each triangle in $T_2$). By comparing this area to $A$, we discern whether $v$ is too close or too far.
\item It remains to pinpoint $r$ between $d_S(O,v_1)$ and $d_S(O,v_2)$. Let $B_1$ and $B_2$ denote unit balls centered at the origin scaled by $d_S(O,v_1)$ and $d_S(O,v_2)$, respectively, and $B$ the annulus defined by $B_2\setminus B_1$. Note that, because $v_1$ and $v_2$ are consecutive vertices of $L$, the interior of $B$ does not contain any vertex of $T_2$. Conversely, no vertex of $B$ is in the interior of a triangle of $T_2$, because all the vertices of $B$ lie along the spokes of $S$, and no triangle in $T_2$ intersects the spokes of $S$.
As a result, if a triangle in $T_2$ intersects $B$, the intersection is either a triangle or a trapezoid (see Figure~\ref{smvd:fig:btintersections}).
Similarly to Step 1, for each triangle in $T_2$ whose interior is intersected by $B_1$ and/or $B_2$, we divide the triangle along $B_1$ and/or $B_2$ and re-triangulate each part. Figure~\ref{smvd:fig:triangulation2} illustrates the resulting triangulation, $T_3$, where the interior of each triangle is either fully contained in $B$ or disjoint from $B$. Moreover, all the triangles in $B$ have an edge along the boundary of $B_1$ or $B_2$, which we call the base, and a vertex in the boundary of the other (the cuspid).
\begin{figure}[htb]
\centering
\includegraphics[width=.575\linewidth]{convexprimitive2}
\caption{In black: three possible intersections of triangles in $T_2$ and $B$, and the resulting sub-triangulations. In red: two invalid intersections between a triangle in $T_2$ and $B$.}
\label{smvd:fig:btintersections}
\end{figure}
\item Finally, we find $r$ as follows. Since $r$ is between $d_S(O,v_1)$ and $d_S(O,v_2)$, triangles outside $B_2$ lie outside the ball with scaling factor $r$. Conversely, all triangles inside $B_1$ are contained in the ball with scaling factor $r$. Let $A'$ be the sum of the areas of all the triangles inside $B_1$. Then, the triangles in $B$ must contribute a total area of $A-A'$. They all have height $h=d_S(O,v_2)-d_S(O,v_1)$. Let $R_1$ and $R_2$ be the sets of triangles in $B$ with the base along $B_1$ and $B_2$, respectively. We need to find the height $h'$, with $0\leq h'\leq h$, such that $A-A'$ equals the sum of \textit{(i)} the areas of the triangles in $R_1$ from the base to a line parallel to the base at height $h'$, and \textit{(ii)} the areas of the triangles in $R_2$ from the cuspid to a line parallel to the base at height $h-h'$. Given $h'$, we can output $r=d_S(O,v_1)+h'$.
\begin{figure}[htb]
\centering
\includegraphics[width=.675\linewidth]{convexprimitive3}
\caption{Triangulation $T_3$ of $P\setminus C$ after Step 3 of the algorithm, where no triangle intersects $B$. The triangles of $T_3$ can be classified into those inside $B_1$, inside $B$, and outside $B_2$.}
\label{smvd:fig:triangulation2}
\end{figure}
In order to find $h'$, we rearrange the triangles to combine them into a trapezoid, as shown in Figure~\ref{smvd:fig:trapezoid}, Left. We rotate the triangles in $R_1$ to align their bases, translate them to put their bases adjacent along a line, and shift their cuspids along a line parallel to the bases to coincide at a single point above the leftmost point of the first base. Doing so does not change their area, and guarantees that triangles do not overlap. We do a similar but flipped transformation to triangles in $R_2$ in order to form the trapezoid. The height $h'$ is the height at which the area of the trapezoid from the base up to that height is $A-A'$, which can be found as the solution to a quadratic equation by using the formula for the area of a trapezoid, as shown in Figure~\ref{smvd:fig:trapezoid}, Right.
\begin{figure}[htb]
\centering
\includegraphics[width=.9975\linewidth]{convexprimitive4}
\caption{Top left: triangles of $T_3$ inside $B$, rotated and separated into triangles with the base along $B_1$ (top) and $B_2$ (bottom). Bottom left: the triangles rearranged (and transformed) into a trapezoid with the same area. Right: derivation of the quadratic equation for $h'$ from the formula for the area of a trapezoid, for the case where the base is shorter than the top side (i.e., $b$ is positive). Note that $a,b$, and $h$ are known. The alternative case is similar.}
\label{smvd:fig:trapezoid}
\end{figure}
\end{enumerate}
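The quadratic for $h'$ in Step 4 can be written out explicitly. Assuming a trapezoid with base length $a$, top length $a+b$ (with $b$ possibly negative), and height $h$, the width at height $y$ is $a+by/h$, so the swept area up to height $h'$ is $ah'+\frac{b}{2h}h'^2$. A short Python sketch (hypothetical helper name) solving this for $h'$:

```python
import math

def trapezoid_height(a, b, h, target_area):
    """Height h' in [0, h], measured from the base, at which a trapezoid
    with base length a, top length a + b, and height h has swept an area
    of target_area.  Solves (b/(2h)) h'^2 + a h' - target_area = 0."""
    if b == 0:                      # rectangle: the linear case
        return target_area / a
    half = b / (2.0 * h)
    disc = a * a + 4.0 * half * target_area
    # The root in [0, h]; well-defined whenever target_area does not
    # exceed the total area h * (a + b/2).
    return (-a + math.sqrt(disc)) / (2.0 * half)
```

For the full area $h(a+b/2)$ the formula returns $h'=h$, as expected, and it handles both the widening ($b>0$) and narrowing ($b<0$) cases.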
\paragraph{Running time.} The correctness of the algorithm follows from the simple argument in Step 4. We now consider its runtime analysis. The size of the input to this primitive is $O(|P|+|C|+|S|)$, where $|P|$ and $|S|$ denote the number of edges of the polygons $P$ and $S$, respectively. The polygonal shape $P\setminus C$ has $O(|P|+|C||S|)$ edges, as each ball in $C$ has $|S|$ edges. The corresponding triangulation $T_1$ has $O(|P|+|C||S|)$ triangles. Each spoke of $S$ may intersect every triangle and divide it in two or three, so $T_2$ has $|T_2|=O(|P||S|+|C||S|^2)$ triangles (and vertices). Sorting the vertices of $T_2$ requires $O(|T_2|\log{|T_2|})$ time. The binary search has $O(\log{|T_2|})$ steps, each of which takes time proportional to the number of triangles, $O(|T_2|)$. These steps are the bottleneck, as $T_3$ grows only by a constant factor with respect to $T_2$. Thus, the total runtime of the primitive is $O(|T_2|\log{|T_2|})=O((|P||S|+|C||S|^2)\log{(|P||S|+|C||S|)})$.
In the context of the algorithm, we call the primitive with $|P|=O(n|S|)$ and $|C|<n$, so we can compute the primitive in $O(n|S|^2\log (n|S|))$ time. When the polygon $S$ has a constant number of sides, the time is $O(n\log n)$. Thus, the entire stable-matching Voronoi diagram for metrics based on these polygons can be computed in $O(n^3\log n)$ total time. This includes the metrics $L_1$ and $L_\infty$.
\section{Conclusions}\label{smvd:sec:conc}
We have studied stable-matching Voronoi diagrams,
providing characterizations of their combinatorial complexity
and a first discrete algorithm for constructing them.
Stable-matching Voronoi diagrams are a natural generalization of standard Voronoi diagrams to size-constrained regions. This is because standard Voronoi diagrams also have the defining property of stable-matching Voronoi diagrams: stability for preferences based on proximity. Furthermore, both have similar geometric constructions in terms of the lower envelopes of cones.
However, allowing prescribed region sizes comes at the cost of loss of convexity and connectivity; indeed, we have shown that a stable-matching Voronoi diagram may have $O(n^{2+\varepsilon})$
faces and edges, for any $\varepsilon>0$. We conjecture that $O(n^2)$ is the right upper bound, matching the lower bound that we have given.
Constructing a stable-matching Voronoi diagram is also
more computationally challenging than the construction of a standard
Voronoi diagram. In particular, it requires computations that cannot be carried out exactly in an algebraic model of computation. We have given an algorithm which runs in $O(n^3\log n+n^2f(n))$-time, where $f(n)$ is the runtime of a geometric primitive that we defined to encapsulate the computations that cannot be carried out analytically.
While such primitives cannot be avoided, a step forward from our algorithm would be one that relies only on primitives with constant-sized inputs.
With this work, there are now three approaches for computing stable-matching Voronoi diagrams, each of which requires a different compromise: \textit{(a)} use our algorithm and approximate the geometric primitive numerically; \textit{(b)} replace the Euclidean distance by a polygonal convex distance function induced by a regular polygon with many sides (this approximates a circle, which would correspond to Euclidean distance), and compute the primitive exactly as described in Section~\ref{smvd:sec:convex}; \textit{(c)} discretize the plane into a grid, and use the algorithms of~\cite{EPPSTEIN2017}.
\subparagraph*{Acknowledgements.}
This article reports on work supported by DARPA under agreement no.~AFRL FA8750-15-2-0092.
The views expressed are those of the authors and do not reflect the
official policy or position of the Department of Defense
or the U.S.~Government.
Work on this paper by the first author has been supported
in part by BSF Grant~2017684.
This work was also supported in part by NSF grants
1228639, 1526631,
1217322, 1618301, and 1616248.
We would like to thank Nina Amenta for several helpful discussions regarding
the topics of this paper. We also thank the anonymous reviewers for many useful comments.
| {
"timestamp": "2020-02-11T02:29:09",
"yymm": "1804",
"arxiv_id": "1804.09411",
"language": "en",
"url": "https://arxiv.org/abs/1804.09411",
"abstract": "We study algorithms and combinatorial complexity bounds for \\emph{stable-matching Voronoi diagrams}, where a set, $S$, of $n$ point sites in the plane determines a stable matching between the points in $\\mathbb{R}^2$ and the sites in $S$ such that (i) the points prefer sites closer to them and sites prefer points closer to them, and (ii) each site has a quota or \"appetite\" indicating the area of the set of points that can be matched to it. Thus, a stable-matching Voronoi diagram is a solution to the well-known post office problem with the added (realistic) constraint that each post office has a limit on the size of its jurisdiction. Previous work on the stable-matching Voronoi diagram provided existence and uniqueness proofs, but did not analyze its combinatorial or algorithmic complexity. In this paper, we show that a stable-matching Voronoi diagram of $n$ point sites has $O(n^{2+\\varepsilon})$ faces and edges, for any $\\varepsilon>0$, and show that this bound is almost tight by giving a family of diagrams with $\\Theta(n^2)$ faces and edges. We also provide a discrete algorithm for constructing it in $O(n^3\\log n+n^2f(n))$ time in the real-RAM model of computation, where $f(n)$ is the runtime of a geometric primitive (which we define) that can be approximated numerically, but cannot, in general, be performed exactly in an algebraic model of computation. We show, however, how to compute the geometric primitive exactly for polygonal convex distance functions.",
"subjects": "Computational Geometry (cs.CG); Data Structures and Algorithms (cs.DS)",
"title": "Stable-Matching Voronoi Diagrams: Combinatorial Complexity and Algorithms",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.987946220128863,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.709932607992295
} |
https://arxiv.org/abs/2205.10010 | Some identities on degenerate hyperharmonic numbers | The aim of this paper is to investigate some properties, recurrence relations and identities involving degenerate hyperharmonic numbers, hyperharmonic numbers and degenerate harmonic numbers. In particular, we derive an explicit expression of the degenerate hyperharmonic numbers in terms of the degenerate harmonic numbers. This is a degenerate version of the corresponding identity representing the hyperharmonic numbers in terms of harmonic numbers due to Conway and Guy. | \section{Introduction}
Recent explorations for various degenerate versions of quite a few special numbers and polynomials have been fascinating and fruitful, which began with the pioneering work of Carlitz in [2,3].
These quests for degenerate versions are not limited to special polynomials and numbers but extend also to some transcendental functions, such as the gamma functions (see [11]). Many different tools are used, including generating functions, combinatorial methods, $p$-adic analysis, umbral calculus, operator theory, differential equations, special functions, probability theory and analytic number theory (see [8-10,12,14,16] and the references therein). It is also worth mentioning that $\lambda$-umbral calculus has been introduced in [8], which turns out to be more convenient than the `classical' umbral calculus when dealing with degenerate Sheffer polynomials.\par
The aim of this paper is to investigate some properties, recurrence relations and identities involving degenerate hyperharmonic numbers, hyperharmonic numbers and degenerate harmonic numbers (see \eqref{6}, \eqref{9}, \eqref{12}). The novelty of this paper is the derivation of an explicit expression of the degenerate hyperharmonic numbers in terms of the degenerate harmonic numbers (see Theorem 3). This is a degenerate version of the corresponding identity representing the hyperharmonic numbers in terms of harmonic numbers due to Conway and Guy (see \eqref{10}).\par
The outline of this paper is as follows. In Section 1, we recall the degenerate exponentials and the degenerate logarithms. We remind the reader of the harmonic numbers and their generating function, and of the degenerate harmonic numbers and their generating function. Then we recall the hyperharmonic numbers due to Conway and Guy [5], its explicit expression in terms of harmonic numbers and their generating function. Finally, we remind the reader of the recently introduced degenerate hyperharmonic numbers, which are a degenerate version of the hyperharmonic numbers, and of their generating function.
Section 2 is the main result of this paper. We obtain an identity involving the degenerate hyperharmonic numbers and the hyperharmonic numbers in Theorem 2. It is obtained by taking higher order derivatives of the generating function of the degenerate hyperharmonic numbers in \eqref{13}. Theorem 3 is a degenerate version of the explicit expression for the hyperharmonic numbers \eqref{10}, which is obtained from Theorem 2 and Lemma 1 about explicit expressions of certain polynomials. In Section 3, we get an identity involving the degenerate hyperharmonic numbers and the degenerate zeta function. \par
For any nonzero $\lambda\in\mathbb{R}$, the degenerate exponential functions are defined by
\begin{equation}
e_{\lambda}^{x}(t)=(1+\lambda t)^{\frac{x}{\lambda}}=\sum_{n=0}^{\infty}\frac{(x)_{n,\lambda}}{n!}t^{n},\quad (\mathrm{see}\ [2,8,9,10,13]), \label{1}
\end{equation}
where
\begin{equation}
(x)_{0,\lambda}=1,\quad (x)_{n,\lambda}=x(x-\lambda)\cdots(x-(n-1)\lambda),\quad (n\ge 1),\quad(\mathrm{see}\ [8,9,10]).\label{2}
\end{equation}
When $x=1$, we use the notation $e_{\lambda}(t)=e_{\lambda}^{1}(t)$. \par
Let $\log_{\lambda}t$ be the compositional inverse function of $e_{\lambda}(t)$ such that $\log_{\lambda}(e_{\lambda}(t))=e_{\lambda}\big(\log_{\lambda}(t)\big)=t$.
It is called the degenerate logarithm and given by
\begin{equation}
\log_{\lambda}(1+t)=\sum_{k=1}^{\infty}\frac{\lambda^{k-1}(1)_{k,\frac{1}{\lambda}}}{k!}t^{k}=\frac{1}{\lambda}\big((1+t)^{\lambda}-1\big),\quad (\mathrm{see}\ [6,7,9]). \label{3}
\end{equation}
Note that $\displaystyle \lim_{\lambda\rightarrow 0}\log_{\lambda}(1+t)=\log(1+t)\displaystyle$ and $\displaystyle\lim_{\lambda\rightarrow 0}e_{\lambda}(t)=e^{t}\displaystyle$. \par
It is well known that the harmonic numbers are defined by
\begin{equation}
H_{0}=0,\quad H_{n}=1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n},\quad(n \ge 1),\quad (\mathrm{see}\ [3,4,5,11,12]).\label{4}
\end{equation}
From \eqref{4}, we can derive the generating function of harmonic numbers given by
\begin{equation}
-\frac{1}{1-t}\log(1-t)=\sum_{n=1}^{\infty}H_{n}t^{n},\quad (\mathrm{see}\ [5]). \label{5}
\end{equation}
Recently, the degenerate harmonic numbers are defined by
\begin{equation}
H_{0,\lambda}=0,\quad H_{n,\lambda}=\sum_{k=1}^{n}\frac{1}{\lambda}\binom{\lambda}{k}(-1)^{k-1},\quad (n\in\mathbb{N}),\quad (\mathrm{see}\ [6]). \label{6}
\end{equation}
Note that $\displaystyle\lim_{\lambda\rightarrow 0}H_{n,\lambda}=H_{n}\displaystyle$. \par
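As a quick numerical illustration (our own sketch, not part of the paper; all function names are ours), the definition \eqref{6} and the limit $\lim_{\lambda\rightarrow 0}H_{n,\lambda}=H_{n}$ can be checked with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def gbinom(x, k):
    # generalized binomial coefficient x(x-1)...(x-k+1)/k! for rational x
    p = Fraction(1)
    for j in range(k):
        p *= (x - j)
    return p / factorial(k)

def H(n):
    # classical harmonic number H_n (the empty sum gives H_0 = 0)
    return sum(Fraction(1, k) for k in range(1, n + 1))

def H_deg(n, lam):
    # degenerate harmonic number H_{n,lambda} from definition (6)
    return sum((-1) ** (k - 1) * gbinom(lam, k) / lam for k in range(1, n + 1))

# H_{n,lambda} tends to H_n as lambda -> 0
lam = Fraction(1, 10**9)
assert abs(H_deg(6, lam) - H(6)) < Fraction(1, 10**6)
```

Each term $\frac{1}{\lambda}\binom{\lambda}{k}(-1)^{k-1}$ reduces to $1/k$ at $\lambda=0$, which is why the limit recovers $H_{n}$.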
From \eqref{3} and \eqref{6}, we can derive the generating function of the degenerate harmonic numbers given by
\begin{equation}
-\frac{1}{1-t}\log_{\lambda}(1-t)=\sum_{n=1}^{\infty}H_{n,\lambda}t^{n},\quad (\mathrm{see}\ [6]).\label{7}
\end{equation}
In 1996, Conway and Guy introduced the hyperharmonic numbers $H_{n}^{(r)},\,(n,r \ge 0),$ which are defined by
\begin{equation}
H_{0}^{(r)}=0;\,\, H_{n}^{(0)}=\frac{1}{n},\,\, H_{n}^{(1)}=H_{n},\,\, H_{n}^{(r)}=\sum_{k=1}^{n}H_{k}^{(r-1)},\,\,(r\ge 2),\quad (\mathrm{see}\ [3,5,11,14,16]). \label{9}
\end{equation}
By \eqref{9}, we get
\begin{equation}
H_{n}^{(r)}=\binom{n+r-1}{r-1}(H_{n+r-1}-H_{r-1}),\quad (r\ge 1),\label{10}
\end{equation}
\begin{equation}
-\frac{\log(1-t)}{(1-t)^{r}}=\sum_{n=1}^{\infty}H_{n}^{(r)}t^{n},\quad (\mathrm{see}\ [5]).\label{11}
\end{equation}
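The recurrence \eqref{9} and the Conway--Guy closed form \eqref{10} can be cross-checked directly; the following sketch is our own and uses exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def H(n):
    # classical harmonic number H_n (H_0 = 0)
    return sum(Fraction(1, k) for k in range(1, n + 1))

def hyperharmonic(n, r):
    # H_n^{(r)} computed from the recurrence (9)
    if n == 0:
        return Fraction(0)
    if r == 0:
        return Fraction(1, n)
    if r == 1:
        return H(n)
    return sum(hyperharmonic(k, r - 1) for k in range(1, n + 1))

# closed form (10): H_n^{(r)} = C(n+r-1, r-1) * (H_{n+r-1} - H_{r-1})
for n in range(1, 7):
    for r in range(1, 6):
        assert hyperharmonic(n, r) == comb(n + r - 1, r - 1) * (H(n + r - 1) - H(r - 1))
```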
In [12], Kim-Kim introduced the degenerate hyperharmonic numbers $H_{n,\lambda}^{(r)},\,\,(n \ge 0, r \ge 1),$ which are given by
\begin{equation}
H_{0,\lambda}^{(r)}=0;\quad H_{n,\lambda}^{(1)}=H_{n,\lambda},\quad H_{n,\lambda}^{(r)}=\sum_{k=1}^{n} H_{k,\lambda}^{(r-1)},\quad (r\ge 2). \label{12}
\end{equation}
From \eqref{12}, we note that
\begin{align}
-\frac{\log_{\lambda}(1-t)}{(1-t)^{r}}&=\frac{1}{(1-t)^{r-1}}\bigg(\sum_{n=1}^{\infty}H_{n,\lambda}t^{n}\bigg)=\frac{1}{(1-t)^{r-2}}\sum_{n=1}^{\infty}\bigg(\sum_{k=1}^{n}H_{k,\lambda}\bigg)t^{n}\label{13} \\
&=\frac{1}{(1-t)^{r-2}}\sum_{n=1}^{\infty}H_{n,\lambda}^{(2)}t^{n}=\cdots=\sum_{n=1}^{\infty}H_{n,\lambda}^{(r)}t^{n},\quad (r\ge 1). \nonumber
\end{align}
By \eqref{13}, we easily get
\begin{equation}
H_{k,\lambda}^{(0)}=\frac{1}{k!}\lambda^{k-1}(-1)^{k-1}(1)_{k,\frac{1}{\lambda}},\quad (k\ge 1).
\end{equation}
\section{Some identities on degenerate hyperharmonic numbers}
First, let us define the polynomials $q_{n}(\lambda)\in \mathbb{Q}[\lambda],\ (n\ge 1)$, with $\deg q_{n}(\lambda)=n-1$ by
\begin{equation}
\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\cdots\bigg(1-\frac{\lambda}{r+n-1}\bigg)=1+\lambda q_{n}(\lambda),\quad (n\in\mathbb{N}, r\in\mathbb{N}). \label{14}
\end{equation}
From \eqref{14}, we have
\begin{align}
& \bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\cdots\bigg(1-\frac{\lambda}{r+n-1}\bigg)\label{15}\\
&=\frac{(-1)^{n}}{r(r+1)\cdots(r+n-1)}(\lambda-r)(\lambda-r-1)(\lambda-r-2)\cdots(\lambda-r-(n-1))\nonumber\\
&=\frac{(r-1)!(-1)^{n}}{(r+n-1)!}(\lambda-r)_{n}=\frac{(-1)^{n}}{\binom{r+n-1}{n}n!}(\lambda-r)_{n}=\frac{(-1)^{n}}{\binom{r+n-1}{n}}\binom{\lambda-r}{n},\nonumber
\end{align}
where
\begin{displaymath}
(x)_{0}=1,\quad (x)_{n}=x(x-1)\cdots(x-n+1),\quad (n\ge 1).
\end{displaymath}
For $n\ge 0$, the Stirling numbers of the first kind are defined by
\begin{equation}
(x)_{n} =\sum_{k=0}^{n}S_{1}(n,k)x^{k},\quad (\mathrm{see}\ [1-16]). \label{16}
\end{equation}
By \eqref{16}, we get
\begin{align}
&\frac{(-1)^{n}}{r(r+1)\cdots(r+n-1)}(\lambda-r)(\lambda-r-1)\cdots(\lambda-r-n+1)=\frac{1}{(-r)_{n}}\sum_{l=0}^{n}(\lambda-r)^{l}S_{1}(n,l) \label{17}\\
&=\frac{1}{(-r)_{n}}\sum_{l=0}^{n}S_{1}(n,l)\sum_{k=0}^{l}\binom{l}{k}(-r)^{l-k}\lambda^{k}=\frac{1}{(-r)_{n}}\sum_{k=0}^{n}\lambda^{k}\sum_{l=k}^{n}S_{1}(n,l)\binom{l}{k}(-1)^{l-k}r^{l-k} \nonumber \\
&=1+\frac{1}{(-r)_{n}}\sum_{k=1}^{n}\lambda^{k}\sum_{l=k}^{n}\binom{l}{k}S_{1}(n,l)(-1)^{l-k}r^{l-k} \nonumber \\
&=1+\lambda\bigg(\frac{1}{(-r)_{n}}\sum_{k=1}^{n}\lambda^{k-1}\sum_{l=k}^{n}\binom{l}{k}S_{1}(n,l)(-1)^{l-k}r^{l-k}\bigg)\nonumber \\
&=1+\lambda q_{n}(\lambda). \nonumber
\end{align}
From \eqref{15} and \eqref{17}, we note that
\begin{equation}
1+\lambda q_{n}(\lambda)=\frac{(-1)^{n}}{\binom{r+n-1}{n}}\binom{\lambda-r}{n},\quad (n\ge 1,\ r\ge 1). \label{18}
\end{equation}
Therefore, by \eqref{18}, we obtain the following lemma.
\begin{lemma}
For $r,n\in\mathbb{N}$, the polynomials $q_{n}(\lambda)\in \mathbb{Q}[\lambda]$ with $\deg q_{n}(\lambda)=n-1$ are defined by
\begin{displaymath}
\prod_{i=0}^{n-1}\bigg(1-\frac{\lambda}{r+i}\bigg)=1+\lambda q_{n}(\lambda).
\end{displaymath}
Then we have
\begin{displaymath}
q_{n}(\lambda)=\frac{1}{\lambda}\bigg\{\frac{(-1)^{n}}{\binom{r+n-1}{n}}\binom{\lambda-r}{n}-1\bigg\}.
\end{displaymath}
\end{lemma}
From \eqref{13}, we note that
\begin{align}
\frac{d^{k}}{dz^{k}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg)&=\frac{d^{k}}{dz^{k}}\sum_{n=0}^{\infty}H_{n,\lambda}^{(r)}z^{n} \label{19}\\
&=\sum_{n=k}^{\infty}H_{n,\lambda}^{(r)}n(n-1)\cdots(n-k+1)z^{n-k} \nonumber \\
&=k!\sum_{n=0}^{\infty}H_{n+k,\lambda}^{(r)}\binom{n+k}{k}z^{n},\nonumber
\end{align}
where $k,r$ are positive integers. \par
Now, we observe that
\begin{align}
\frac{d}{dz}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg)&=\frac{(1-z)^{\lambda}}{(1-z)^{r+1}}-\frac{r}{(1-z)^{r+1}}\log_{\lambda}(1-z) \label{20}\\
&=\frac{r}{(1-z)^{r+1}}\bigg\{\frac{\lambda}{r}\log_{\lambda}(1-z)+\frac{1}{r}-\log_{\lambda}(1-z)\bigg\}\nonumber \\
&=\frac{r}{(1-z)^{r+1}}\bigg\{\frac{1}{r}-\log_{\lambda}(1-z)\bigg(1-\frac{\lambda}{r}\bigg)\bigg\}.\nonumber
\end{align}
Thus, by \eqref{20}, we get
\begin{align}
\frac{d}{dz}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg)&=\frac{r}{(1-z)^{r+1}}\bigg\{\frac{1}{r}-\log_{\lambda}(1-z)\bigg(1-\frac{\lambda}{r}\bigg)\bigg\}.\label{21}
\end{align}
From \eqref{21}, we have
\begin{align}
&\frac{d^{2}}{dz^{2}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg)=\frac{d}{dz}\bigg(\frac{r}{(1-z)^{r+1}}\bigg(\frac{1}{r}-\bigg(1-\frac{\lambda}{r}\bigg)\log_{\lambda}(1-z)\bigg)\bigg) \label{22} \\
&=\frac{r(r+1)}{(1-z)^{r+2}}\bigg(\frac{1}{r}-\bigg(1-\frac{\lambda}{r}\bigg)\log_{\lambda}(1-z)\bigg)+\frac{r}{(1-z)^{r+1}}\bigg(1-\frac{\lambda}{r}\bigg)\frac{(1-z)^{\lambda}}{1-z} \nonumber \\
&=\frac{r(r+1)}{(1-z)^{r+2}}\bigg(\frac{1}{r}-\log_{\lambda}(1-z)\bigg(1-\frac{\lambda}{r}\bigg)\bigg)+\frac{r(r+1)}{(1-z)^{r+2}}\bigg(1-\frac{\lambda}{r}\bigg)\bigg(\frac{\lambda}{r+1}\log_{\lambda}(1-z)+\frac{1}{r+1}\bigg) \nonumber \\
&=\frac{r(r+1)}{(1-z)^{r+2}}\bigg\{\frac{1}{r}+\frac{1}{r+1}\bigg(1-\frac{\lambda}{r}\bigg)-\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\log_{\lambda}(1-z)\bigg\}.\nonumber
\end{align}
Thus, by \eqref{22}, we get
\begin{equation}
\begin{aligned}
&\frac{d^{2}}{dz^{2}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg)\\
&=\frac{r(r+1)}{(1-z)^{r+2}}\bigg\{\frac{1}{r}+\frac{1}{r+1}\bigg(1-\frac{\lambda}{r}\bigg)-\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\log_{\lambda}(1-z)\bigg\}.
\end{aligned}\label{23}
\end{equation}
From \eqref{23}, we note that
\begin{align}
&\frac{d^{3}}{dz^{3}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg)=\frac{d}{dz}\bigg(\frac{d^{2}}{dz^{2}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg)\bigg) \label{24} \\
&=\frac{d}{dz}\bigg\{\frac{r(r+1)}{(1-z)^{r+2}}\bigg(\frac{1}{r}+\frac{1}{r+1}\bigg(1-\frac{\lambda}{r}\bigg)-\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\log_{\lambda}(1-z)\bigg)\bigg\} \nonumber \\
&=\frac{r(r+1)(r+2)}{(1-z)^{r+3}}\bigg\{\frac{1}{r}+\frac{1}{r+1}\bigg(1-\frac{\lambda}{r}\bigg)-\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\log_{\lambda}(1-z)\bigg\}\nonumber \\
&\quad +\frac{r(r+1)}{(1-z)^{r+2}}\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\frac{\lambda}{1-z}\frac{1}{\lambda}\big((1-z)^{\lambda}-1+1\big) \nonumber \\
&=\frac{r(r+1)(r+2)}{(1-z)^{r+3}}\bigg\{\frac{1}{r}+\frac{1}{r+1}\bigg(1-\frac{\lambda}{r}\bigg)-\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\log_{\lambda}(1-z)\bigg\} \nonumber \\
&\quad +\frac{r(r+1)(r+2)}{(1-z)^{r+3}}\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\bigg(\frac{\lambda}{r+2}\log_{\lambda}(1-z)+\frac{1}{r+2}\bigg) \nonumber\\
&=\frac{r(r+1)(r+2)}{(1-z)^{r+3}}\bigg\{\frac{1}{r}+\frac{1}{r+1}\bigg(1-\frac{\lambda}{r}\bigg)+\frac{1}{r+2}\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\nonumber \\
&\quad -\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\bigg(1-\frac{\lambda}{r+2}\bigg)\log_{\lambda}(1-z)\bigg\}.\nonumber
\end{align}
Thus, by \eqref{24}, we get
\begin{equation}
\begin{aligned}
&\frac{d^{3}}{dz^{3}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg)\\
&=\frac{r(r+1)(r+2)}{(1-z)^{r+3}}\bigg\{\frac{1}{r}+\frac{1}{r+1}\bigg(1-\frac{\lambda}{r}\bigg)+\frac{1}{r+2}\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\\
&\quad -\bigg(1-\frac{\lambda}{r}\bigg)\bigg(1-\frac{\lambda}{r+1}\bigg)\bigg(1-\frac{\lambda}{r+2}\bigg)\log_{\lambda}(1-z)\bigg\}.
\end{aligned}
\end{equation}
Continuing this process, we have
\begin{align}
\frac{d^{k}}{dz^{k}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg)&=\frac{r(r+1)\cdots(r+k-1)}{(1-z)^{r+k}}\bigg(\frac{1}{r}+\sum_{l=2}^{k}\frac{1}{r+l-1}\prod_{j=0}^{l-2}\bigg(1-\frac{\lambda}{r+j}\bigg)\bigg) \label{25} \\
&\quad - r(r+1)\cdots(r+k-1)\prod_{l=0}^{k-1}\bigg(1-\frac{\lambda}{r+l}\bigg)\frac{\log_{\lambda}(1-z)}{(1-z)^{r+k}}. \nonumber
\end{align}
From Lemma 1 and \eqref{25}, we can derive the following equation.
\begin{align}
&\frac{d^{k}}{dz^{k}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg) \label{26} \\
&=\frac{r(r+1)\cdots(r+k-1)}{(1-z)^{r+k}}\bigg(\frac{1}{r}+\sum_{l=2}^{k}\frac{1}{r+l-1}\big(1+\lambda q_{l-1}(\lambda)\big)\bigg)\nonumber\\
&\quad -r(r+1)\cdots(r+k-1)\big(1+\lambda q_{k}(\lambda)\big)\frac{\log_{\lambda}(1-z)}{(1-z)^{k+r}} \nonumber \\
&=\frac{(r+k-1)!}{(r-1)!}\frac{1}{(1-z)^{r+k}}\bigg(\sum_{l=1}^{k}\frac{1}{r+l-1}+\lambda\sum_{l=2}^{k}\frac{q_{l-1}(\lambda)}{r+l-1}\bigg) \nonumber \\
&\quad -\frac{(r+k-1)!}{(r-1)!}\big(1+\lambda q_{k}(\lambda)\big)\frac{\log_{\lambda}(1-z)}{(1-z)^{k+r}} \nonumber \\
&=\frac{(r+k-1)!}{(r-1)!}\frac{1}{(1-z)^{r+k}}\bigg(\sum_{l=1}^{k}\frac{1}{r+l-1}-\log_{\lambda}(1-z)\bigg) \nonumber \\
&\quad +\lambda\frac{(r+k-1)!}{(r-1)!}\frac{1}{(1-z)^{r+k}}\bigg(\sum_{l=2}^{k}\frac{q_{l-1}(\lambda)}{r+l-1}-q_{k}(\lambda)\log_{\lambda}(1-z)\bigg) \nonumber \\
&=\frac{(r+k-1)!}{(r-1)!}\bigg(\sum_{n=0}^{\infty}\binom{n+r+k-1}{n}\bigg(H_{k+r-1}-H_{r-1}\bigg)z^{n}-\frac{\log_{\lambda}(1-z)}{(1-z)^{r+k}}\bigg)\nonumber\\
&\quad +\lambda\frac{(r+k-1)!}{(r-1)!}\bigg(\sum_{n=0}^{\infty}\binom{n+r+k-1}{n}\sum_{l=2}^{k}\frac{q_{l-1}(\lambda)}{r+l-1}z^{n}-q_{k}(\lambda)\frac{\log_{\lambda}(1-z)}{(1-z)^{r+k}}\bigg). \nonumber
\end{align}
From the generating function of degenerate hyperharmonic numbers in \eqref{13} and \eqref{26}, we obtain
\begin{align}
&\frac{d^{k}}{dz^{k}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg) \label{27} \\
&=\frac{(r+k-1)!}{(r-1)!}\bigg(\sum_{n=0}^{\infty}\binom{n+r+k-1}{n}(H_{k+r-1}-H_{r-1})z^{n}+\sum_{n=1}^{\infty}H_{n,\lambda}^{(k+r)}z^{n}\bigg)\nonumber \\
&\quad +\lambda\frac{(r+k-1)!}{(r-1)!}\bigg(\sum_{n=0}^{\infty}\binom{n+r+k-1}{n}\sum_{l=2}^{k}\frac{q_{l-1}(\lambda)}{r+l-1}z^{n}+q_{k}(\lambda)\sum_{n=1}^{\infty}H_{n,\lambda}^{(r+k)}z^{n}\bigg) \nonumber \\
&=k!\sum_{n=0}^{\infty}\bigg\{\binom{n+r+k-1}{n}\binom{r+k-1}{k}(H_{k+r-1}-H_{r-1})+\binom{r+k-1}{k}H_{n,\lambda}^{(k+r)}\bigg\}z^{n}\nonumber\\
&\quad +\lambda k!\sum_{n=0}^{\infty}\bigg\{\binom{n+r+k-1}{n}\binom{r+k-1}{k}\sum_{l=2}^{k}\frac{q_{l-1}(\lambda)}{r+l-1}+q_{k}(\lambda)\binom{r+k-1}{k}H_{n,\lambda}^{(k+r)}\bigg\}z^{n}.\nonumber
\end{align}
From \eqref{10} and \eqref{27}, we note that
\begin{align}
&\frac{d^{k}}{dz^{k}}\bigg(-\frac{\log_{\lambda}(1-z)}{(1-z)^{r}}\bigg) \label{28} \\
&=k!\sum_{n=0}^{\infty}\bigg\{\binom{n+r+k-1}{n}H_{k}^{(r)}+\binom{r+k-1}{k}H_{n,\lambda}^{(k+r)}\bigg\}z^{n}\nonumber \\
&\quad +\lambda k!\sum_{n=0}^{\infty}\bigg\{\binom{n+r+k-1}{n}\binom{r+k-1}{k}\sum_{l=2}^{k}\frac{q_{l-1}(\lambda)}{r+l-1}+\binom{r+k-1}{k}q_{k}(\lambda)H_{n,\lambda}^{(k+r)}\bigg\}z^{n}\nonumber \\
&=k!\sum_{n=0}^{\infty}\bigg\{\binom{n+r+k-1}{n}H_{k}^{(r)}+\binom{r+k-1}{k}H_{n,\lambda}^{(k+r)} \nonumber \\
&\qquad +\lambda\bigg(\binom{n+r+k-1}{n}\binom{r+k-1}{k}\sum_{l=2}^{k}\frac{q_{l-1}(\lambda)}{r+l-1}+\binom{r+k-1}{k}q_{k}(\lambda)H_{n,\lambda}^{(k+r)}\bigg)\bigg\}z^{n}.\nonumber
\end{align}
Therefore, by \eqref{19} and \eqref{28}, we obtain the following theorem.
\begin{theorem}
For $r,k\in\mathbb{N}$, we have the following identity:
\begin{align*}
&\binom{n+k}{k}H_{n+k,\lambda}^{(r)}=\binom{n+r+k-1}{n}H_{k}^{(r)}+\binom{r+k-1}{k}H_{n,\lambda}^{(k+r)}\\
&+\lambda\bigg\{\binom{n+r+k-1}{n}\binom{r+k-1}{k}\sum_{l=2}^{k}\frac{q_{l-1}(\lambda)}{r+l-1}+\binom{r+k-1}{k}q_{k}(\lambda)H_{n,\lambda}^{(k+r)}\bigg\},
\end{align*}
where $q_{n}(\lambda)$ is the polynomial of degree $n-1$ given by $q_{n}(\lambda)=\frac{1}{\lambda}\Big(\frac{(-1)^{n}}{\binom{n+r-1}{n}}\binom{\lambda-r}{n}-1\Big)$.
\end{theorem}
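Theorem 2 can be verified for small parameters by exact computation. The sketch below is our own illustrative code (not part of the paper); it implements \eqref{6}, \eqref{9}, \eqref{12} and the polynomials $q_{n}(\lambda)$ of Lemma 1 at a fixed rational $\lambda$:

```python
from fractions import Fraction
from math import comb, factorial

lam = Fraction(1, 3)  # arbitrary rational lambda

def gbinom(x, k):
    # generalized binomial coefficient for rational x
    p = Fraction(1)
    for j in range(k):
        p *= (x - j)
    return p / factorial(k)

def H(n):
    return sum(Fraction(1, j) for j in range(1, n + 1))

def Hh(n, r):
    # classical hyperharmonic numbers, recurrence (9)
    if n == 0:
        return Fraction(0)
    if r == 1:
        return H(n)
    return sum(Hh(j, r - 1) for j in range(1, n + 1))

def Hd(n):
    # degenerate harmonic numbers (6)
    return sum((-1) ** (j - 1) * gbinom(lam, j) / lam for j in range(1, n + 1))

def Hdh(n, r):
    # degenerate hyperharmonic numbers, recurrence (12)
    if n == 0:
        return Fraction(0)
    if r == 1:
        return Hd(n)
    return sum(Hdh(j, r - 1) for j in range(1, n + 1))

def q(n, r):
    # q_n(lambda) from Lemma 1 (it depends on r as well)
    return ((-1) ** n * gbinom(lam - r, n) / comb(r + n - 1, n) - 1) / lam

# check the identity of Theorem 2 for small r, k, n
for r in range(1, 4):
    for k in range(1, 4):
        for n in range(1, 4):
            lhs = comb(n + k, k) * Hdh(n + k, r)
            rhs = (comb(n + r + k - 1, n) * Hh(k, r)
                   + comb(r + k - 1, k) * Hdh(n, k + r)
                   + lam * (comb(n + r + k - 1, n) * comb(r + k - 1, k)
                            * sum(q(l - 1, r) / (r + l - 1) for l in range(2, k + 1))
                            + comb(r + k - 1, k) * q(k, r) * Hdh(n, k + r)))
            assert lhs == rhs
```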
From Theorem 2 and Lemma 1, we have
\begin{align}
&\binom{n+k}{k}H_{n+k,\lambda}^{(r)}=\binom{n+r+k-1}{n}H_{k}^{(r)}+\binom{r+k-1}{k}H_{n,\lambda}^{(k+r)}\label{30}\\
&+\binom{n+r+k-1}{n}\binom{r+k-1}{k}\sum_{l=2}^{k}\frac{1}{r+l-1}\bigg(\frac{(-1)^{l-1}}{\binom{r+l-2}{l-1}}\binom{\lambda-r}{l-1}-1\bigg) \nonumber \\
&+\binom{r+k-1}{k}\bigg(\frac{(-1)^{k}}{\binom{r+k-1}{k}}\binom{\lambda-r}{k}-1\bigg)H_{n,\lambda}^{(k+r)}\nonumber \\
&=\binom{n+r+k-1}{n}H_{k}^{(r)}+\binom{n+r+k-1}{n}\binom{r+k-1}{k}
\nonumber\\
&\times \bigg(\sum_{l=2}^{k}\frac{(-1)^{l-1}\binom{\lambda-r}{l-1}}{\binom{r+l-2}{l-1}(r+l-1)}-\sum_{l=2}^{k}\frac{1}{r+l-1}\bigg)
+(-1)^{k}\binom{\lambda-r}{k}H_{n,\lambda}^{(k+r)}.\nonumber
\end{align}
Let us take $r=1$ in \eqref{30}. Then we have
\begin{align}
\binom{n+k}{k}H_{n+k,\lambda}&=\binom{n+k}{n}H_{k}+\binom{n+k}{n}\bigg(\sum_{l=2}^{k}\frac{1}{l}\binom{\lambda-1}{l-1}(-1)^{l-1}-\sum_{l=2}^{k}\frac{1}{l}\bigg) \label{31} \\
&\quad +(-1)^{k}\binom{\lambda-1}{k}H_{n,\lambda}^{(k+1)} \nonumber \\
&=\binom{n+k}{n}H_{k}+\binom{n+k}{k}\bigg(\sum_{l=1}^{k}\frac{1}{l}\binom{\lambda-1}{l-1}(-1)^{l-1}-\sum_{l=1}^{k}\frac{1}{l}\bigg) \nonumber \\
&\quad +(-1)^{k}\binom{\lambda-1}{k}H_{n,\lambda}^{(k+1)}\nonumber \\
&=\binom{n+k}{n}H_{k}+\binom{n+k}{k}\bigg(\sum_{l=1}^{k}\frac{1}{\lambda}\binom{\lambda}{l}(-1)^{l-1}-H_{k}\bigg)\nonumber\\
&\quad +(-1)^{k}\binom{\lambda-1}{k}H_{n,\lambda}^{(k+1)}
\nonumber\\
&=\binom{n+k}{n}H_{k}+\binom{n+k}{k}(H_{k,\lambda}-H_{k})+(-1)^{k}\binom{\lambda-1}{k}H_{n,\lambda}^{(k+1)} \nonumber \\
&=\binom{n+k}{n}H_{k,\lambda}+(-1)^{k}\binom{\lambda-1}{k}H_{n,\lambda}^{(k+1)}. \nonumber
\end{align}
Therefore, by \eqref{31}, we obtain the following theorem.
\begin{theorem}
For $n,k\in\mathbb{N}$, we have
\begin{displaymath}
H_{n,\lambda}^{(k+1)}=\frac{(-1)^{k}}{\binom{\lambda-1}{k}}\binom{n+k}{n}\big(H_{n+k,\lambda}-H_{k,\lambda}\big).
\end{displaymath}
\end{theorem}
In Theorem 3, letting $\lambda \rightarrow 0$ gives the result in \eqref{10}. Namely, we have
\begin{displaymath}
H_{n}^{(k+1)}=\binom{n+k}{n}(H_{n+k}-H_{k}).
\end{displaymath}
This shows that Theorem 3 is a degenerate version of the expression in \eqref{10}.
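For a concrete check of Theorem 3 (an illustrative sketch of ours, with $\lambda=2/5$ chosen arbitrarily):

```python
from fractions import Fraction
from math import comb, factorial

lam = Fraction(2, 5)

def gbinom(x, k):
    # generalized binomial coefficient for rational x
    p = Fraction(1)
    for j in range(k):
        p *= (x - j)
    return p / factorial(k)

def Hd(n):
    # degenerate harmonic numbers (6)
    return sum((-1) ** (j - 1) * gbinom(lam, j) / lam for j in range(1, n + 1))

def Hdh(n, r):
    # degenerate hyperharmonic numbers, recurrence (12)
    if n == 0:
        return Fraction(0)
    if r == 1:
        return Hd(n)
    return sum(Hdh(j, r - 1) for j in range(1, n + 1))

# Theorem 3: H_{n,lam}^{(k+1)} = (-1)^k / C(lam-1, k) * C(n+k, n) * (H_{n+k,lam} - H_{k,lam})
for n in range(1, 6):
    for k in range(1, 5):
        rhs = (-1) ** k / gbinom(lam - 1, k) * comb(n + k, n) * (Hd(n + k) - Hd(k))
        assert Hdh(n, k + 1) == rhs
```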
\section{Further Remark}
For $\mathrm{Re}(\delta)>0$, the degenerate Hurwitz zeta function is defined by Kim-Kim as
\begin{displaymath}
\zeta_{\lambda}(s,\delta)=\sum_{n=0}^{\infty}\frac{(1)_{n,\lambda}}{(n+\delta)^{s}},\quad (\mathrm{Re}(s)>1),\quad (\mathrm{see}\ [7]).
\end{displaymath}
In particular, for $\delta=1$, $\zeta_{\lambda}(s)=\zeta_{\lambda}(s,1)$ is called the degenerate zeta function. That is,
\begin{displaymath}
\zeta_{\lambda}(s)=\sum_{n=1}^{\infty}\frac{(1)_{n-1,\lambda}}{n^{s}},\quad (\mathrm{Re}(s)>1),\quad (\mathrm{see}\ [7]).
\end{displaymath}
We now observe that
\begin{align}
&\frac{H_{1,\lambda}^{(r)}}{1^{m}}+\frac{H_{2,\lambda}^{(r)}}{2^{m}}(1)_{1,\lambda}+\frac{H_{3,\lambda}^{(r)}}{3^{m}}(1)_{2,\lambda}+\frac{H_{4,\lambda}^{(r)}}{4^{m}}(1)_{3,\lambda}+\cdots \label{32}\\
&=\frac{H_{1,\lambda}^{(r-1)}}{1^{m}}+\frac{H_{1,\lambda}^{(r-1)}+H_{2,\lambda}^{(r-1)}}{2^{m}}(1)_{1,\lambda}+\frac{1}{3^{m}}(H_{1,\lambda}^{(r-1)}+H_{2,\lambda}^{(r-1)}+H_{3,\lambda}^{(r-1)})(1)_{2,\lambda}+\cdots \nonumber \\
&=H_{1,\lambda}^{(r-1)}\sum_{k=1}^{\infty}\frac{(1)_{k-1,\lambda}}{k^{m}}+H_{2,\lambda}^{(r-1)}\sum_{k=1}^{\infty}\frac{(1)_{k,\lambda}}{(k+1)^{m}}+H_{3,\lambda}^{(r-1)}\sum_{k=1}^{\infty}\frac{(1)_{k+1,\lambda}}{(k+2)^{m}}+\cdots \nonumber \\
&=\sum_{n=1}^{\infty}H_{n,\lambda}^{(r-1)}\sum_{k=1}^{\infty}\frac{(1)_{k+n-2,\lambda}}{(k+n-1)^{m}}=\sum_{n=1}^{\infty}H_{n,\lambda}^{(r-1)}\sum_{k=0}^{\infty}\frac{(1)_{k+n-1,\lambda}}{(k+n)^{m}}\nonumber \\
&=\sum_{n=1}^{\infty}H_{n,\lambda}^{(r-1)}\sum_{k=n}^{\infty}\frac{(1)_{k-1,\lambda}}{k^{m}}=-\sum_{n=1}^{\infty}H_{n,\lambda}^{(r-1)}\sum_{l=1}^{n-1}\frac{(1)_{l-1,\lambda}}{l^{m}}+\sum_{n=1}^{\infty}H_{n,\lambda}^{(r-1)}\sum_{k=1}^{\infty}\frac{(1)_{k-1,\lambda}}{k^{m}}\nonumber \\
&=-\sum_{n=1}^{\infty}H_{n,\lambda}^{(r-1)}\sum_{l=1}^{n-1}\frac{(1)_{l-1,\lambda}}{l^{m}}+\sum_{n=1}^{\infty}H_{n,\lambda}^{(r-1)}\zeta_{\lambda}(m). \nonumber
\end{align}
From \eqref{32}, we note that
\begin{displaymath}
\sum_{n=1}^{\infty}\bigg(\frac{H_{n,\lambda}^{(r)}}{n^{m}}(1)_{n-1,\lambda}+H_{n,\lambda}^{(r-1)}\sum_{l=1}^{n-1}\frac{(1)_{l-1,\lambda}}{l^{m}}\bigg)=\zeta_{\lambda}(m)\sum_{n=1}^{\infty}H_{n,\lambda}^{(r-1)}.
\end{displaymath}
\section{Conclusion}
In this paper, by using generating functions we investigated some properties, recurrence relations and identities involving degenerate hyperharmonic numbers, hyperharmonic numbers and degenerate harmonic numbers. The degenerate hyperharmonic numbers were introduced as a degenerate version of the hyperharmonic numbers of Conway and Guy. In particular, we derived a degenerate version of the explicit expression of the hyperharmonic numbers in terms of harmonic numbers, namely an expression of the degenerate hyperharmonic numbers in terms of the degenerate harmonic numbers.\par
It is one of our future projects to continue to study various degenerate versions of some special polynomials and numbers and to find their applications to physics, science and engineering as well as mathematics.
| {
"timestamp": "2022-05-23T02:12:35",
"yymm": "2205",
"arxiv_id": "2205.10010",
"language": "en",
"url": "https://arxiv.org/abs/2205.10010",
"abstract": "The aim of this paper is to investigate some properties, recurrence relations and identities involving degenerate hyperharmonic numbers, hyperharmonic numbers and degenerate harmonic numbers. In particular, we derive an explicit expression of the degenerate hyperharmonic numbers in terms of the degenerate harmonic numbers. This is a degenerate version of the corresponding identity representing the hyperharmonic numbers in terms of harmonic numbers due to Conway and Guy.",
"subjects": "Number Theory (math.NT)",
"title": "Some identities on degenerate hyperharmonic numbers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462197739626,
"lm_q2_score": 0.7185943865443352,
"lm_q1q2_score": 0.7099326077372655
} |
https://arxiv.org/abs/1607.02564 | Base-$b$ analogues of classic combinatorial objects | We study the properties of the base-$b$ binomial coefficient defined by Jiu and the second author, introduced in the context of a digital binomial theorem. After introducing a general summation formula, we derive base-$b$ analogues of the Stirling numbers of the second kind, the Fibonacci numbers and the classical exponential function. | \section{Introduction}
\label{Introduction}
Recently, Jiu and the second author have conducted work \cite{Vignat1} concerning the base-$b$ binomial coefficient. We let $n=\sum_{i=0}^{N_n-1}n_i b^i$ and $k=\sum_{i=0}^{N_k-1}k_i b^i$ be the base-$b$ expansions of $n$ and $k$, respectively. Then, with $N:=\max\{N_n, N_k\}$, the base-$b$ binomial coefficient is defined as the product
\begin{equation}\label{1.1}
\binom{n}{k}_b := \prod_{i=0}^{N-1}\binom{n_i}{k_i},
\end{equation}
so that $\binom{n}{k}_b=0$ if $k_i>n_i$ for some $i.$
This also motivates the definition of a base $b$ factorial, which we define as
\[(n!)_b:=\prod_{i=0}^{N_n-1}n_i!.\]
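For concreteness, the digitwise definitions of $\binom{n}{k}_b$ and $(n!)_b$ can be implemented directly; the following Python sketch is illustrative only, and the function names are ours:

```python
from math import comb, factorial

def digits(n, b):
    # base-b digits of n, least significant first
    d = []
    while n > 0:
        d.append(n % b)
        n //= b
    return d

def binom_b(n, k, b):
    # base-b binomial coefficient (1.1): a product of digitwise binomial coefficients
    dn, dk = digits(n, b), digits(k, b)
    dk += [0] * (len(dn) - len(dk))
    dn += [0] * (len(dk) - len(dn))
    out = 1
    for ni, ki in zip(dn, dk):
        out *= comb(ni, ki)  # comb(ni, ki) = 0 whenever ki > ni
    return out

def factorial_b(n, b):
    # base-b factorial (n!)_b: a product of digit factorials
    out = 1
    for ni in digits(n, b):
        out *= factorial(ni)
    return out

# digitwise in base 10: C(2,1)*C(3,2) = 6, and a non-dominated digit gives 0
assert binom_b(23, 12, 10) == 6
assert binom_b(23, 30, 10) == 0
```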
The binomial theorem is a classic result stating that $(X+Y)^{n} = \sum_{k=0}^{n} \binom{n}{k} X^{k} Y^{n-k}$. Nguyen subsequently generalized this to the digital binomial theorem in \cite[(26)]{Nguyen1}, which states that
\begin{equation}\label{1.2}
(X+Y)^{s_2(n)} = \sum_{0\leq k \leq_2 n} X^{s_2(k)} Y^{s_2(n-k)},
\end{equation}
where $s_2(n)$ denotes the sum of the digits of $n$ expressed in base $2$. The condition $k \leq_2 n$ restricts the summation over indices such that each digit of $k$ is less than the corresponding digit of $n$ in base $2$, i.e. $k_i\leq n_i$ for all $0\leq i \leq N-1$. This notation, which has previously been referred to as digital dominance, will be adopted throughout this paper. This is also equivalent to the condition that the addition of $n-k$ and $k$ be carry-free in base $2$, which can be written symmetrically as $s_2(k)+s_2(n-k)=s_2(n)$ so that an equivalent form of \eqref{1.2} reads
\[
(X+Y)^{s_2(n)} = \sum_{s_2(k)+s_2(n-k)=s_2(n)} X^{s_2(k)} Y^{s_2(n-k)}.
\]
The discovery of this digital binomial theorem spurred further extensions by Nguyen \cite{Nguyen1} \cite{Nguyen2}, Nguyen and Mansour \cite{Mansour1,Mansour2} and Jiu and the second author \cite{Vignat1}, which led to the introduction of the base-$b$ binomial coefficient $\binom{n}{k}_b$ and a generalization of the digital binomial theorem to an arbitrary base $b$ as follows:
\[
(X+Y)^{s_b(n)} = \sum_{k=0}^{n} \binom{n}{k}_b X^{s_b(k)} Y^{s_b(n-k)}
\]
where $s_b\left(n\right)$ is the sum of the digits of $n$ in base $b$ and the base-$b$ binomial coefficient is
given by \eqref{1.1}.
The paper is organized as follows. In Section \ref{CarryfreeSums}, we present a general summation formula, which is used in Section \ref{Stirling} to derive theorems about a base-$b$ analogue of the Stirling numbers of the second kind. This same formula is used to define a base-$b$ analogue of the Fibonacci numbers in Section \ref{Fibonacci}. Finally, in Section \ref{Exponential} we introduce an analogue of the exponential function involving the base-$b$ factorial and derive some of its properties.
\section{Sums over carry-free k}
\label{CarryfreeSums}
We can extend the methods of \cite{Vignat1} to take sums over carry-free $k$, with a weighting by the base-$b$ binomial coefficient.
\begin{thm}\label{thm1}
Let $S(n):=\sum_{k=0}^{n} f(n,k)$. Then
\begin{equation}\label{2.1}
\prod_{i=0}^{N-1}S\left(n_i\right) = \sum_{0\leq k \leq_b n} \prod_{i=0}^{N-1}f(n_i, k_i).
\end{equation}
\end{thm}
\begin{proof}
We have
\begin{align*}
\prod_{i=0}^{N-1}S\left(n_i\right) &= \sum_{k_0=0}^{n_0}f\left(n_0,k_0\right)\sum_{k_1=0}^{n_1}f\left(n_1,k_1\right) \cdots\sum_{k_{N-1}=0}^{n_{N-1}}f\left(n_{N-1},k_{N-1}\right)\\
&=\sum_{k_0=0}^{n_0}\sum_{k_1=0}^{n_1}\cdots\sum_{k_{N-1}=0}^{n_{N-1}}f\left(n_0,k_0\right)f\left(n_1,k_1\right)\cdots
f\left(n_{N-1},k_{N-1}\right)\\
&=\sum_{0\leq k \leq_b n} \prod_{i=0}^{N-1} f\left(n_i,k_i\right).\end{align*}
\end{proof}
\begin{cor}
Let $S_2(n):=\sum_{k=0}^{n} \binom{n}{k}f(n,k)$. Then
\begin{equation}\label{2.3}
\prod_{i=0}^{N-1}S_2\left(n_i\right) = \sum_{0\leq k \leq_b n} \prod_{i=0}^{N-1}\binom{n_i}{k_i}f(n_i,k_i) = \sum_{k=0}^{n} \binom{n}{k}_b \prod_{i=0}^{N-1}f(n_i,k_i).
\end{equation}
\begin{proof}
Replace $f\left(n,k\right)$ with $\binom{n}{k}f\left(n,k\right)$ in \eqref{2.1}. We can extend the sum over all $0\leq k \leq n$ since $\binom{n}{k}_b=0$ if $k$ is not digitally dominated by $n$.\end{proof}
\end{cor}
While the algebraic proof of this identity is very simple, it greatly simplifies and supersedes the proofs of many previously discovered identities relating to sums over digitally dominated $k$, and admits much more sweeping generalizations. The previous work on sums over carry-free $k$ has centered on proving this identity for special values of $f(n,k)$, often using the properties of infinite matrices. We now give the choices of $f\left(n,k\right)$ that yield the central theorems in previous papers \cite{Mansour1,Mansour2,Nguyen1,Nguyen2,Vignat1}.
\begin{itemize}
\item
By taking $f(n,k) = X^k Y^{n-k}$ in \eqref{2.3} with constant $X$ and $Y$, and noting that $S_2(n) = \sum_{k=0}^{n}\binom{n}{k} X^k Y^{n-k} = (X+Y)^{n}$, we recover the base-$b$ binomial theorem \cite[(Theorem 2)]{Vignat1}. The $b=2$ case reduces to the digital binomial theorem \cite[(26)]{Nguyen1}. Namely,
\begin{equation}\label{2.4}
\left(X+Y\right)^{s_b(n)} = \sum_{k=0}^{n}\binom{n}{k}_b X^{s_b(k)}Y^{s_b(n-k)}.
\end{equation}
\item
We take $f(n,k) = \frac{\left(X\right)_k}{k!}\frac{\left(Y\right)_{n-k}}{\left(n-k\right)!}$ with constant $X$ and $Y$ in \eqref{2.1}, where $(x)_k := \frac{\Gamma(x+k)}{\Gamma(x)}$ is the Pochhammer symbol. Noting that $S(n) = \frac{\left(X+Y\right)_n}{n!}$ by the Vandermonde identity, we obtain an extension of the base-$b$ binomial theorem, the central theorem in \cite[(Theorem 2)]{Nguyen2}. Namely,
\begin{equation}\label{2.5}
\prod_{i=0}^{N-1}\binom{X+Y+n_i-1}{n_i} =\sum_{0\leq k \leq_b n}\prod_{i=0}^{N-1} \binom{X+k_i-1}{k_i} \binom{Y+n_i-k_i-1}{n_i-k_i} .
\end{equation}
\item
Taking $f\left(n,k\right) = \binom{x;r}{k}\binom{y;r}{n-k}$ in \eqref{2.1}, where $\binom{x;r}{d}:=\frac{x(x+r)\cdots(x+(d-1)r)}{d!}$ \cite[(Definition 8)]{Mansour1}, and using \cite[(Lemma 9)]{Mansour1} to find the sum of $f$ over $k$ gives \cite[(Theorem 4)]{Mansour1}, which can be specialized to find $q$-analogues. Namely,
\begin{equation}\label{2.6}
\prod_{i=0}^{N-1}\binom{x_i+y_i;r_i}{n_i} =\sum_{0\leq k \leq_b n}\prod_{i=0}^{N-1} \binom{x_i;r_i}{k_i} \binom{y_i;r_i}{n_i-k_i} .
\end{equation}
\item
Taking $f\left(n,k\right) = \overline{p}_k(x)\overline{s}_{n-k}(y)$ and $f(n,k) = \overline{p}_k(x)\overline{p}_{n-k}(y)$ in \eqref{2.1}, where $s$ and $p$ are normalized Sheffer sequences, and using \cite[(Theorem 7)]{Mansour2} to evaluate the sum of $f$ over $k$ gives \cite[(Theorem 1)]{Mansour2}.
\end{itemize}
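The first of these specializations, the base-$b$ binomial theorem \eqref{2.4}, is easy to confirm numerically; the sketch below (our own helper names, illustrative only) checks it with exact rational $X$ and $Y$:

```python
from fractions import Fraction
from math import comb

def digits(n, b):
    # base-b digits of n, least significant first
    d = []
    while n > 0:
        d.append(n % b)
        n //= b
    return d

def s(n, b):
    # digit sum s_b(n)
    return sum(digits(n, b))

def binom_b(n, k, b):
    # base-b binomial coefficient (1.1); here k <= n, so only dk needs padding
    dn, dk = digits(n, b), digits(k, b)
    dk += [0] * (len(dn) - len(dk))
    out = 1
    for ni, ki in zip(dn, dk):
        out *= comb(ni, ki)
    return out

# (X+Y)^{s_b(n)} = sum_k C(n,k)_b X^{s_b(k)} Y^{s_b(n-k)}
X, Y, b = Fraction(2), Fraction(3, 2), 3
for n in range(1, 41):
    lhs = (X + Y) ** s(n, b)
    rhs = sum(binom_b(n, k, b) * X ** s(k, b) * Y ** s(n - k, b) for k in range(n + 1))
    assert lhs == rhs
```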
As a corollary of \eqref{2.6}, we obtain a base-$b$ analog of the Chu-Vandermonde identity. The Chu-Vandermonde identity states \cite[(Page 8)]{Riordan} that, for integers $m$ and $n$,
\begin{equation}\label{2.7}
\sum_{k=0}^{r}\binom{n}{k}\binom{m}{r-k}=\binom{n+m}{r}.
\end{equation}
This can be extended to complex-valued $n$ and $m$. The identity has a base-$b$ counterpart \cite[(Theorem 8)]{Vignat1}, which can be proved by taking $r=1$ in \eqref{2.6} and applying \eqref{1.1}. We note that if the addition of $n$ and $m$ in base $b$ is carry-free, then $\prod_{i=0}^{N-1}\binom{n_i+m_i}{r_i} = \binom{n+m}{r}_b$. Changing variables to maintain consistent notation yields the following analog of the Chu-Vandermonde identity.
\begin{thm} For integers $n$ and $m$ such that their addition in base $b$ is carry-free,
\begin{equation}
\binom{n+m}{r}_b=\sum_{0 \leq k \leq_b r}\binom{n}{k}_b\binom{m}{r-k}_b.
\end{equation}
\end{thm}
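This identity is easy to test numerically. The following Python sketch (ours, not part of the paper) verifies it in base $3$ for a carry-free pair, with the base-$b$ binomial coefficient computed as a digitwise product.

```python
from math import comb, prod

def digits(n, b):
    """Base-b digits of n, least significant first."""
    d = []
    while n:
        d.append(n % b)
        n //= b
    return d or [0]

def binom_b(n, k, b):
    """Base-b binomial coefficient: digitwise product of ordinary binomials."""
    dn, dk = digits(n, b), digits(k, b)
    if len(dk) > len(dn):
        return 0
    dk += [0] * (len(dn) - len(dk))
    return prod(comb(ni, ki) for ni, ki in zip(dn, dk))

def dominated(k, n, b):
    """k <=_b n: every base-b digit of k is at most the corresponding digit of n."""
    dk, dn = digits(k, b), digits(n, b)
    return len(dk) <= len(dn) and all(x <= y for x, y in zip(dk, dn))

b, n, m = 3, 4, 13            # 4 = (1,1)_3 and 13 = (1,1,1)_3: addition is carry-free
assert all(x + y < b for x, y in zip(digits(n, b), digits(m, b)))
for r in range(n + m + 1):
    rhs = sum(binom_b(n, k, b) * binom_b(m, r - k, b)
              for k in range(r + 1) if dominated(k, r, b))
    assert binom_b(n + m, r, b) == rhs
```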
As with the digital binomial theorem, an identity involving binomial coefficients has an obvious base-$b$ analog. We emphasize, however, that the results do not immediately transfer, as can be seen by the restriction on $n$ and $m$ in the base-$b$ Chu-Vandermonde identity. Nevertheless, the striking similarities between the digital binomial theorem and the binomial theorem ensure that some linear identities transfer almost identically. We list some analogs of classic binomial identities derived from the binomial theorem, omitting the proofs since they follow classical lines.
\begin{equation}
\sum_{k=0}^{n}\binom{n}{k}_b = 2^{s_b(n)}.
\end{equation}
\begin{equation}
\sum_{k=0}^{n}\binom{n}{k}_b s_b(k) = s_b(n)2^{s_b(n)-1}.
\end{equation}
\begin{equation}
\sum_{k=0}^{n}\binom{n}{k}_b {s_b}^2(k) =s_b(n)(s_b(n)+1)2^{s_b(n)-2}.
\end{equation}
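All three identities can be checked by machine; they follow from the evaluation $\sum_{0\le k\le_b n}\binom{n}{k}_b x^{s_b(k)}=(1+x)^{s_b(n)}$, a consequence of the digital binomial theorem. The following sketch (ours, not from the paper) verifies them in base $3$.

```python
from math import comb, prod

def digits(n, b):
    d = []
    while n:
        d.append(n % b)
        n //= b
    return d or [0]

def s_b(n, b):
    """Sum of base-b digits of n."""
    return sum(digits(n, b))

def binom_b(n, k, b):
    dn, dk = digits(n, b), digits(k, b)
    if len(dk) > len(dn):
        return 0
    dk += [0] * (len(dn) - len(dk))
    return prod(comb(ni, ki) for ni, ki in zip(dn, dk))

b = 3
for n in range(1, 81):
    s = s_b(n, b)
    terms = [(binom_b(n, k, b), s_b(k, b)) for k in range(n + 1)]
    assert sum(c for c, _ in terms) == 2 ** s
    assert sum(c * sk for c, sk in terms) == s * 2 ** (s - 1)
    # multiplied through by 4 to stay in integer arithmetic when s = 1
    assert 4 * sum(c * sk ** 2 for c, sk in terms) == s * (s + 1) * 2 ** s
```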
\section{Generating Function}\label{genfunc}
From Theorem \ref{thm1} we can also derive a generating function for digitwise functions, based on the following lemma.
\begin{lemma}\label{genlem}
\begin{equation}
\{k: 0\leq k \leq_b n,\ n_i = b-1 \text{ for all } 0 \leq i < \infty \} = \{k: 0 \leq k < \infty \}.
\end{equation}
\begin{proof}
The two sets coincide. Fix a base $b$. Every natural number $k$ has a unique base-$b$ representation, and each of its digits satisfies $0\leq k_i\leq b-1=n_i$, so $k$ lies in the left-hand set; conversely, every element of the left-hand set is a natural number.
\end{proof}
\end{lemma}
We now state without proof the straightforward extension of \eqref{2.1}: let $S(n,i):=\sum_{k=0}^{n} f(n,k,i)$. Then
\begin{equation}\label{ext}
\prod_{i=0}^{N-1} S\left(n_i,i\right) = \sum_{0\leq k \leq_b n} \prod_{i=0}^{N-1}f(n_i, k_i,i).
\end{equation}
We then extend the summation over digitally dominated $k$ in \eqref{ext} to a sum over all natural numbers $k$: letting the number of digits in \eqref{ext} tend to infinity while setting each $n_i = b-1$ and applying Lemma \ref{genlem} converts the sum over digitally dominated $k$ into a sum over all natural numbers.
\begin{thm}\label{thm2}
\begin{equation}
\label{eqthm2}
\prod_{i=0}^{\infty} \sum_{k=0}^{b-1} f(k,i) = \sum_{k=0}^{\infty} \prod_{i=0}^{\infty} f(k_i,i).
\end{equation}
\begin{proof}
Fixing a base $b$ and setting $n_i=b-1$ in \eqref{ext} while taking $N \rightarrow \infty$, we can apply the preceding lemma to \eqref{ext} as
\begin{equation}
\prod_{i=0}^{\infty} \sum_{k_i=0}^{b-1} f(k_i,i) = \sum_{k=0}^{\infty} \prod_{i=0}^{\infty} f(k_i,i).
\end{equation}
We can drop the subscript on the summation variable $k_i$ on the left-hand side, since each inner sum runs over $0$ to $b-1$ identically, which completes the proof.
\end{proof}
\end{thm}
\begin{cor}\label{genthm}
\begin{equation}
\prod_{i=0}^{\infty} \sum_{k=0}^{b-1} f(k,i) x^{b^i k} = \sum_{k=0}^{\infty}x^k \prod_{i=0}^{\infty} f(k_i,i).
\end{equation}
\begin{proof}
Making the substitution $f(k_i,i) = x^{b^i k_i} f(k_i,i)$ in \eqref{eqthm2} and noting
\begin{equation}
\prod_{i=0}^{\infty} x^{b^i k_i} f(k_i,i) = x^{\sum_{i=0}^{\infty}b^i k_i } \prod_{i=0}^{\infty} f(k_i,i) = x^k \prod_{i=0}^{\infty} f(k_i,i)
\end{equation}
completes the proof.
\end{proof}
\end{cor}
This means that any base-$b$ function which can be represented as a digitwise product has a closed form generating function.
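As an illustration of the corollary, the following sketch (ours, not from the paper) compares a truncated product against the digitwise coefficients: we pick $b=3$, an arbitrary test function with $f(0,i)=1$ (so that the infinite product makes sense), and check every coefficient of $x^k$ for $k<b^N$.

```python
from math import prod
from collections import defaultdict

def digits(n, b, width):
    """Base-b digits of n, least significant first, padded to `width` digits."""
    d = []
    while n:
        d.append(n % b)
        n //= b
    return (d + [0] * width)[:width]

def f(k, i):
    # arbitrary test function with f(0, i) = 1, an assumption for convergence
    return 1 + k * (i + 1)

b, N = 3, 4
# multiply the first N factors:  sum_{k=0}^{b-1} f(k, i) x^{b^i k}
poly = defaultdict(int, {0: 1})
for i in range(N):
    new = defaultdict(int)
    for e, c in poly.items():
        for k in range(b):
            new[e + k * b ** i] += c * f(k, i)
    poly = new

# coefficient of x^k must equal the digitwise product of f over the digits of k
for k in range(b ** N):
    assert poly[k] == prod(f(d, i) for i, d in enumerate(digits(k, b, N)))
```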
\section{Stirling numbers of the second kind}
\label{Stirling}
The Stirling numbers of the second kind $\stirling{n}{k}$ count the number of ways to partition a set of $n$ elements into $k$ non-empty subsets. They have a natural base-$b$ analog, defined as
\begin{equation}
\label{Stirling_definition}
\stirling{n}{k}_{b}= \prod_{i=0}^{N-1} \stirling{n}{k_i}.
\end{equation}
\begin{thm}
The base-$b$ Stirling numbers of the second kind satisfy
\begin{equation}\label{3.1}
\stirling{n}{k}_{b} = \frac{1}{(k!)_b}\sum_{j=0}^k (-1)^{s_b(k)-s_b(j)}\binom{k}{j}_b \left(\prod_{i=0}^{N-1}j_i\right)^n.
\end{equation}
\begin{proof}
From \cite[(Page 90)]{Riordan} the Stirling numbers of the second kind can be represented as
\begin{equation}\label{3.2}
\stirling{n}{k} = \frac{1}{k!}\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}j^n.
\end{equation}
Rearranging this relation and changing variables to maintain consistency with \eqref{2.3} yields
\begin{equation}\label{3.3}
\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}k^\alpha = (-1)^n n! \stirling{\alpha}{n}.
\end{equation}
Taking $f(n,k)=(-1)^k k^\alpha$ in \eqref{2.3} and noting that $S_2(n)$ is nothing more than the left-hand side of \eqref{3.3}, we obtain
\begin{equation}\label{3.4}
\prod_{i=0}^{N-1}(-1)^{n_i}\,n_i!\,\stirling{\alpha}{n_i} = \sum_{k=0}^n \binom{n}{k}_b \prod_{i=0}^{N-1}(-1)^{k_i} {k_i}^\alpha.
\end{equation}
Simplifying the product on both sides yields
\begin{equation}\label{3.5}
(-1)^{s_b(n)}(n!)_b\prod_{i=0}^{N-1}\stirling{\alpha}{n_i}= \sum_{k=0}^n \binom{n}{k}_b (-1)^{s_b(k)}\left(\prod_{i=0}^{N-1}k_i\right)^\alpha.
\end{equation}
Rearranging then gives
\begin{equation}\label{3.6}
\prod_{i=0}^{N-1}\stirling{\alpha}{n_i}= \frac{1}{(n!)_b}\sum_{k=0}^n (-1)^{s_b(n)-s_b(k)}\binom{n}{k}_b \left(\prod_{i=0}^{N-1}k_i\right)^\alpha,
\end{equation}
which can be compared with \eqref{3.2}; the parallels are immediate. Renaming variables completes the proof.
\end{proof}
\end{thm}
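This formula is easy to check by machine. The following Python sketch (ours, not part of the paper) computes both sides exactly in base $3$, obtaining the ordinary Stirling numbers from their standard recurrence; the inner sum is always divisible by $(k!)_b$.

```python
from math import comb, prod, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind via the standard recurrence."""
    if n == 0:
        return 1 if k == 0 else 0
    if k <= 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def digits(n, b):
    d = []
    while n:
        d.append(n % b)
        n //= b
    return d or [0]

def s_b(n, b):
    return sum(digits(n, b))

def binom_b(n, k, b):
    dn, dk = digits(n, b), digits(k, b)
    if len(dk) > len(dn):
        return 0
    dk += [0] * (len(dn) - len(dk))
    return prod(comb(ni, ki) for ni, ki in zip(dn, dk))

b = 3
for n in range(1, 6):
    for k in range(28):
        dk = digits(k, b)
        lhs = prod(S2(n, ki) for ki in dk)              # base-b Stirling number
        fact_b = prod(factorial(ki) for ki in dk)       # (k!)_b
        total = 0
        for j in range(k + 1):
            dj = digits(j, b)
            dj += [0] * (len(dk) - len(dj))
            total += ((-1) ** (s_b(k, b) - s_b(j, b))
                      * binom_b(k, j, b) * prod(dj) ** n)
        assert total % fact_b == 0 and lhs == total // fact_b
```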
In general, any sequence of numbers with an explicit representation as a sum involving a binomial coefficient has a base-$b$ analog. Such a representation is always possible, since an arbitrary sequence $\left(a_n\right)$ can always be expressed as \cite[p.43]{Riordan}
\[
a_n = \sum_{k=0}^{n} \binom{n}{k}\left(-1\right)^k b_k
\]
with the sequence $\left(b_n\right)$ defined by
\[
b_n = \sum_{k=0}^{n} \binom{n}{k}\left(-1\right)^k a_k.
\]
These base-$b$ Stirling numbers can then be used to generalize certain results about the Stirling numbers. A fundamental result concerning the differential operator $\vartheta:=x D$, $D:=\frac{d}{dx}$, found in \cite[(Page 218)]{Riordan}, is
\begin{equation}\label{3.7}
\vartheta^n = \sum_{k=0}^{n}\stirling{n}{k}x^k D^k.
\end{equation}
\begin{thm}
Define a multivariable differential operator $\vartheta_{N}:=x_0x_1\cdots x_{N-1}D_0 D_1 \cdots D_{N-1}$ where $D_i:=\frac{\partial}{\partial x_i}$. Then
\begin{equation}\label{3.8}
\vartheta_{N}^n = \sum_{k_0,\ldots, k_{N-1}\leq n}\stirling{n}{k}_{b}\prod_{i=0}^{N-1}x_i^{k_i}D_i^{k_{i}}.
\end{equation}
\begin{proof}
By the equality of mixed partial derivatives,
\begin{equation}\label{3.9}
\vartheta_{N}^n = \left(x_0D_0x_1D_1\cdots x_{N-1}D_{N-1}\right)^n = (x_0D_0)^n(x_1D_1)^n \cdots (x_{N-1}D_{N-1})^n.
\end{equation}
Expanding each term using \eqref{3.7} gives
\begin{align}
\vartheta_{N}^n &= \left(\sum_{k_0=0}^{n}\stirling{n}{k_0} x_0^{k_0} D_0^{k_{0}}\right)\cdots \left(\sum_{k_{N-1}=0}^{n}\stirling{n}{k_{N-1}} x_{N-1}^{k_{N-1}} D_{N-1}^{k_{N-1}}\right) \nonumber \\
&=\sum_{k_0,\ldots, k_{N-1}\leq n} \prod_{i=0}^{N-1}\stirling{n}{k_i} x_i^{k_i}D_i^{k_{i}} \nonumber \\
&=\sum_{k_0,\ldots, k_{N-1}\leq n}\stirling{n}{k}_{b}\prod_{i=0}^{N-1}x_i^{k_i}D_i^{k_{i}} \label{3.10}.
\end{align}
In the second equality, we used the interchange of summation presented in \eqref{2.1}.
\end{proof}
\end{thm}
An explicit expression of the base-$b$ Stirling numbers as sum over partitions can be obtained as follows.
\begin{thm}
\begin{equation}
\stirling{n}{k}_b = \sum_{J\subseteq\{0,\ldots,N-1\}}\left(\prod_{i\in J}k_i\stirling{n-1}{k_i}\right)\left(\prod_{i\in \bar{J}}\stirling{n-1}{k_i-1}\right),
\end{equation}
where the sum is over all subsets $J$ of the digit positions $\{0,\ldots,N-1\}$ and $\bar{J}=\{0,\ldots,N-1\}\setminus J$.
\begin{proof}
We begin with the Pascal-type recurrence \cite[(4)]{Agoh1}
$\stirling{n}{k}=\stirling{n-1}{k-1}+k\stirling{n-1}{k}$ and substitute it into \eqref{Stirling_definition}. Namely,
\begin{equation}
\stirling{n}{k}_b = \prod_{i=0}^{N-1}\stirling{n}{k_i} = \prod_{i=0}^{N-1}\left(
\stirling{n-1}{k_{i}-1} +k_{i} \stirling{n-1}{k_{i}}\right) ,
\end{equation}
and expanding the product, choosing the summand $k_i\stirling{n-1}{k_i}$ exactly at the positions $i\in J$, yields the theorem.
\end{proof}
\end{thm}
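The subset expansion is pure algebra and can be verified directly; the sketch below (ours, not part of the paper) checks it in base $3$.

```python
from functools import lru_cache
from itertools import combinations
from math import prod

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind; 0 for k < 0 or k > n."""
    if n == 0:
        return 1 if k == 0 else 0
    if k <= 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def digits(n, b):
    d = []
    while n:
        d.append(n % b)
        n //= b
    return d or [0]

b = 3
for n in range(1, 6):
    for k in range(28):
        dk = digits(k, b)
        N = len(dk)
        lhs = prod(S2(n, ki) for ki in dk)
        rhs = 0
        # expand the product of digitwise Pascal recurrences over subsets J
        for r in range(N + 1):
            for J in combinations(range(N), r):
                term = prod(dk[i] * S2(n - 1, dk[i]) for i in J)
                term *= prod(S2(n - 1, dk[i] - 1)
                             for i in range(N) if i not in J)
                rhs += term
        assert lhs == rhs
```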
As a last remark, we start from the well-known representation of the Stirling numbers of the second
kind
\[
\stirling{n}{k}=\frac{1}{k!}\Delta^{k}x^{n}\thinspace_{\vert x=0},
\]
where $\Delta$ is the forward difference operator
\[
\Delta f\left(x\right)=f\left(x+1\right)-f\left(x\right).
\]
Applying this representation factorwise in \eqref{Stirling_definition}, we deduce the representation of the base-$b$ Stirling
numbers as
\[
\stirling{n}{k}_{b}=\frac{1}{(k!)_{b}}\prod_{i=0}^{N-1}\Delta^{k_{i}}x^{n}\thinspace_{\vert x=0}.
\]
The base-$b$ Stirling number naturally occurs as a consequence of creating a multivariable generalization of an existing single-variable identity, a process which could be explored further in the future. Additionally, the base $b$ has no special significance here so long as it is larger than every digit $k_i$, suggesting that $\stirling{n}{k}_{b}$ has a larger combinatorial significance outside of the sum-of-digits function.
\section{Fibonacci numbers}
\label{Fibonacci}
In this section, we introduce one further analogue of classic numbers with well-studied properties. The Fibonacci numbers $F_n$ are defined by the recurrence $F_{n+1} = F_n + F_{n-1}$ and the initial values $F_0=1$ and $F_1 = 1$.
\subsection{Definition}
\begin{thm}
Define the base-$b$ generalized Fibonacci numbers $F^{\left(b\right)}_{n}$ as
\begin{equation}\label{4.0}
F^{\left(b\right)}_{n} = \prod_{i=0}^{N-1}F_{n_i}.
\end{equation}
Then these numbers satisfy
\begin{equation}\label{4.1}
F^{\left(b\right)}_{n} = \sum_{k \le_{b} n}\binom{n-k}{k}_b.
\end{equation}
\begin{proof}
By summing over shallow diagonals of Pascal's triangle, we have the relation
\begin{equation}\label{4.3}
F_n = \sum_{k=0}^{n}\binom{n-k}{k}.
\end{equation}
Taking $f(n,k)=\binom{n-k}{k}$ in \eqref{2.1} and utilizing the relation \eqref{4.3} yields \eqref{4.1}.
\end{proof}
\end{thm}
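The following sketch (ours, not part of the paper) checks \eqref{4.1} against the definition \eqref{4.0} in base $3$, with the convention $F_0=F_1=1$; the computed values reproduce the sequence $1,1,2,1,1,2,2,2,4,\ldots$ listed in the next subsection.

```python
from math import comb, prod

def digits(n, b):
    d = []
    while n:
        d.append(n % b)
        n //= b
    return d or [0]

def binom_b(n, k, b):
    dn, dk = digits(n, b), digits(k, b)
    if len(dk) > len(dn):
        return 0
    dk += [0] * (len(dn) - len(dk))
    return prod(comb(ni, ki) for ni, ki in zip(dn, dk))

def dominated(k, n, b):
    dk, dn = digits(k, b), digits(n, b)
    return len(dk) <= len(dn) and all(x <= y for x, y in zip(dk, dn))

fib = [1, 1]                       # F_0 = F_1 = 1, as in the text
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

def F_b(n, b):
    """Base-b Fibonacci number: product of F over the base-b digits of n."""
    return prod(fib[d] for d in digits(n, b))

b = 3
for n in range(81):
    rhs = sum(binom_b(n - k, k, b) for k in range(n + 1) if dominated(k, n, b))
    assert F_b(n, b) == rhs
```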
In contrast to the Stirling numbers, whose significance is combinatorial, the Fibonacci numbers are well-studied objects in number theory.
\subsection{The case $b=3$}
The sequence $F_{n}^{\left(3\right)}$, for $n\ge0,$ starts with
\[
1,1,2,1,1,2,2,2,4,1,1,2,1,1,2,2,2,4,2,2,4,2,2,4,4,4,8,1,1,2\dots
\]
We recognize the beginning of sequence $A117592$ in OEIS defined by
\[
a\left(3n\right)=a\left(n\right),\thinspace\thinspace a\left(3n+1\right)=a\left(n\right),\thinspace\thinspace a\left(3n+2\right)=2a\left(n\right)
\]
and
\[
a\left(0\right)=a\left(1\right)=1,\thinspace\thinspace a\left(2\right)=2.
\]
\begin{thm}
The sequence $\left\{ F_{n}^{\left(3\right)}\right\} $ satisfies
the recurrence
\[
F_{3n}^{\left(3\right)}=F_{n}^{\left(3\right)},\thinspace\thinspace F_{3n+1}^{\left(3\right)}=F_{n}^{\left(3\right)},\thinspace\thinspace F_{3n+2}^{\left(3\right)}=2F_{n}^{\left(3\right)}
\]
with initial conditions
\[
F_{0}^{\left(3\right)}=F_{1}^{\left(3\right)}=1,\thinspace\thinspace F_{2}^{\left(3\right)}=2
\]
so that it coincides with OEIS sequence $A117592$.
\begin{proof}
By our main theorem, the base-$b$ Fibonacci sequence satisfies
\[
F_{m}^{\left(b\right)}=\prod_{i}F_{m_{i}}.
\]
The digits of $m=3n$ (resp. $m=3n+1$ and $m=3n+2$) coincide with those
of $n$ up to an extra $0$ (resp. $1$ and $2$) appended at the right.
We deduce
\begin{equation}
\label{F3n}
F_{3n}^{\left(3\right)}=F_{n}^{\left(3\right)}F_{0}=F_{n}^{\left(3\right)}
\end{equation}
and
\begin{equation}
\label{F3n+1}
F_{3n+1}^{\left(3\right)}=F_{n}^{\left(3\right)}F_{1}=F_{n}^{\left(3\right)}
\end{equation}
and
\begin{equation}
\label{F3n+2}
F_{3n+2}^{\left(3\right)}=F_{n}^{\left(3\right)}F_{2}=2F_{n}^{\left(3\right)}.
\end{equation}
The initial conditions are easily checked.
\end{proof}
\end{thm}
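A quick machine check (ours, not part of the paper) of these recurrences, together with the $\bmod\ 3$ Fibonacci recursion stated in the corollary below, under the convention $F_0=F_1=1$, $F_2=2$:

```python
def F3(n):
    """F_n^{(3)}: product of F_0=1, F_1=1, F_2=2 over the base-3 digits of n."""
    fib012 = (1, 1, 2)
    r = 1
    while n:
        r *= fib012[n % 3]
        n //= 3
    return r

for n in range(200):
    assert F3(3 * n) == F3(n)
    assert F3(3 * n + 1) == F3(n)
    assert F3(3 * n + 2) == 2 * F3(n)
    # mod-3 version of the usual Fibonacci recursion
    assert F3(3 * n + 2) == F3(3 * n) + F3(3 * n + 1)
```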
As a consequence of identities \eqref{F3n}, \eqref{F3n+1} and \eqref{F3n+2}, we have
\begin{cor}
The Fibonacci numbers $F_{n}^{\left(3\right)}$ satisfy the recurrence
\[
F_{3n+2}^{\left(3\right)}=F_{3n}^{\left(3\right)}+F_{3n+1}^{\left(3\right)}
\]
which can be interpreted as a $\mod 3$ version of the usual recursion on the Fibonacci numbers
\[
F_{n+2}=F_{n+1}+F_{n}.
\]
\end{cor}
A combinatorial interpretation for the sequence $F_n^{\left(3\right)}$ is obtained using the base-$3$ expansion
of the integer $n$, which consists of a sequence of $0$'s, $1$'s and $2$'s, whose numbers of occurrences we denote respectively by
$s_3\left(n,0\right),\,\,s_3\left(n,1\right)$ and $s_3\left(n,2\right)$. Then from definition \eqref{4.0}, we deduce
\[
F_n^{\left(3\right)} = F_{0}^{s_3\left(n,0\right)}F_{1}^{s_3\left(n,1\right)}F_{2}^{s_3\left(n,2\right)}
=2^{s_3\left(n,2\right)}
\]
so that $F_n^{\left(3\right)}$ is determined by the number of $2$'s in the base-$3$ representation of $n.$
We remark that this interpretation does not extend to the case of a base $b >3$.
\subsection{The general case}
This suggests the following more general result, whose proof is omitted
since it is identical to the previous one.
\begin{thm}
For $0\le p<b,$ the sequence $\left\{ F_{n}^{\left(b\right)}\right\} $
satisfies the recurrence
\[
F_{bn+p}^{\left(b\right)}=F_{n}^{\left(b\right)}F_{p}
\]
with initial conditions
\[
F_{p}^{\left(b\right)}=F_{p}.
\]
\end{thm}
We also have the following theorem:
\begin{thm}
A generating function for the sequence $\left\{ F_{n}^{\left(b\right)}\right\}$ is
\[
{\cal F}_b\left(z\right)=\sum_{n \ge 0} F_{n}^{\left(b\right)}z^n = \prod_{k \ge 0}
\left(\sum_{l=0}^{b-1} F_l z^{l\, b^k} \right).
\]
\begin{proof}
Take $f(k,i)=F_k$ in Corollary \ref{genthm}.
\end{proof}
\end{thm}
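A truncated numerical check (ours, not part of the paper) of this generating function, with the factor indexed by $k$ contributing exponents $l\,b^k$ as in Corollary \ref{genthm}:

```python
from collections import defaultdict

fib = [1, 1]                            # F_0 = F_1 = 1
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])

def F_b(n, b):
    r = 1
    while n:
        r *= fib[n % b]
        n //= b
    return r

b, K = 3, 4
# multiply the first K factors:  sum_{l=0}^{b-1} F_l z^{l b^k}
poly = defaultdict(int, {0: 1})
for k in range(K):
    new = defaultdict(int)
    for e, c in poly.items():
        for l in range(b):
            new[e + l * b ** k] += c * fib[l]
    poly = new

# coefficients of the truncated product agree with F_n^{(b)} for n < b^K
for n in range(b ** K):
    assert poly[n] == F_b(n, b)
```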
Additional identities similar to those satisfied by the usual Fibonacci numbers are derived from
the identities \eqref{F3n}, \eqref{F3n+1} and \eqref{F3n+2} as follows.
\begin{thm}
The Fibonacci numbers $F_{n}^{\left(b\right)}$ satisfy
\[
\sum_{k=0}^{b-1}F_{bn+k}^{\left(b\right)}=2F_{bn+\left(b-1\right)}^{\left(b\right)}+F_{bn+\left(b-2\right)}^{\left(b\right)}-F_{n}^{\left(b\right)}
\]
and, for $0\le p\le q<b-3,$
\[
\sum_{k=p}^{q}F_{bn+k}^{\left(b\right)}=F_{bn+q+2}^{\left(b\right)}-F_{bn+p+1}^{\left(b\right)}.
\]
\end{thm}
\begin{proof}
We use the identity
\[
\sum_{k=0}^{n}F_{k}=F_{n+2}-1
\]
and, for $0\le p\le b-1,$
\[
F_{bn+p}^{\left(b\right)}=F_{p}F_{n}^{\left(b\right)}
\]
to deduce
\begin{align*}
\sum_{k=0}^{b-1}F_{bn+k}^{\left(b\right)} & =\sum_{k=0}^{b-1}F_{k}F_{n}^{\left(b\right)}=F_{n}^{\left(b\right)}\left(F_{b+1}-1\right)\\
& =F_{n}^{\left(b\right)}\left(F_{b}+F_{b-1}-1\right)=F_{n}^{\left(b\right)}\left(F_{b-2}+2F_{b-1}-1\right)\\
& =F_{bn+b-2}^{\left(b\right)}+2F_{bn+b-1}^{\left(b\right)}-F_{n}^{\left(b\right)}.
\end{align*}
Moreover, for $0\le p\le q<b-3,$
\begin{align*}
\sum_{k=p}^{q}F_{bn+k}^{\left(b\right)} & =\sum_{k=0}^{q}F_{bn+k}^{\left(b\right)}-\sum_{k=0}^{p-1}F_{bn+k}^{\left(b\right)}\\
& =F_{n}^{\left(b\right)}\sum_{k=0}^{q}F_{k}-F_{n}^{\left(b\right)}\sum_{k=0}^{p-1}F_{k}\\
& =F_{n}^{\left(b\right)}\left(F_{q+2}-1\right)-F_{n}^{\left(b\right)}\left(F_{p+1}-1\right)\\
& =F_{bn+q+2}^{\left(b\right)}-F_{bn+p+1}^{\left(b\right)}.
\end{align*}
\end{proof}
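Both summation identities are easy to confirm numerically; the sketch below (ours, not part of the paper) does so in base $5$, testing the second identity over the full admissible range $0\le p\le q\le b-3$.

```python
fib = [1, 1]                            # F_0 = F_1 = 1
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])

def Fb(n, b):
    """Base-b Fibonacci number: product of F over the base-b digits of n."""
    r = 1
    while n:
        r *= fib[n % b]
        n //= b
    return r

b = 5
for n in range(1, 60):
    total = sum(Fb(b * n + k, b) for k in range(b))
    assert total == (2 * Fb(b * n + b - 1, b)
                     + Fb(b * n + b - 2, b) - Fb(n, b))
    for p in range(b - 2):
        for q in range(p, b - 2):       # 0 <= p <= q <= b-3
            part = sum(Fb(b * n + k, b) for k in range(p, q + 1))
            assert part == Fb(b * n + q + 2, b) - Fb(b * n + p + 1, b)
```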
\begin{prop}
As a consequence of Cassini's identity
\[
F_{q}^{2}-F_{q+1}F_{q-1}=\left(-1\right)^{q},
\]
we deduce
\[
\left(F_{bn+q}^{\left(b\right)}\right)^{2}-F_{bn+q+1}^{\left(b\right)}F_{bn+q-1}^{\left(b\right)}=\left(-1\right)^{q}\left(F_{n}^{\left(b\right)}\right)^{2}
\]
and from its more general version
\[
F_{q}^{2}-F_{q+r}F_{q-r}=\left(-1\right)^{q-r+1}F_{r-1}^{2},
\]
we have
\[
\left(F_{bn+q}^{\left(b\right)}\right)^{2}-F_{bn+q+r}^{\left(b\right)}F_{bn+q-r}^{\left(b\right)}=\left(-1\right)^{q-r+1}\left(F_{bn+r-1}^{\left(b\right)}\right)^{2}.
\]
\end{prop}
\subsection{Another definition of base-$2$ Fibonacci numbers}
The extension of the Fibonacci numbers defined in \eqref{4.0} in the case $b=2$ gives the uninteresting sequence
\[
F_n^{\left(2\right)} =1,\,\, \forall n\ge0,
\]
so that we propose the study of a slightly modified version of it.
Assume we define now the modified base-$2$ Fibonacci numbers as follows:
\[
\tilde{F}^{\left(2\right)}_{n} = \sum_{k=0}^{n} \binom{n-k}{k}_2
\]
so that we do not impose digital dominance on the summation index.
The first values of this new sequence, starting from $n=0,$ are
\[
1,1,2,1,3,2,3,1,4,3,5,\dots
\]
We recognize the first entries of Stern's diatomic sequence
$A002487$ defined by
\[
a_{0}=0,\thinspace\thinspace a_{1}=1\thinspace\thinspace\text{and}\thinspace\thinspace
a_{2n}=a_{n},\thinspace\thinspace\thinspace a_{2n+1}=a_{n}+a_{n+1},\thinspace\thinspace n\ge 1.
\]
\begin{thm}
The modified base-$2$ Fibonacci numbers $\tilde{F}_{n}^{\left(2\right)}$ satisfy
\[
\tilde{F}_{n}^{\left(2\right)}=a_{n+1}
\]
where $\left\{ a_{n}\right\} $ is Stern's diatomic sequence.
\end{thm}
\begin{proof}
Since the base-$2$ binomial coefficient $\binom{n}{k}_{2}$ equals
$0$ or $1$ according to the parity of $\binom{n}{k},$ the modified base-$2$ Fibonacci
number
\[
\tilde{F}_{n}^{\left(2\right)}=\sum_{k\ge0}\binom{n-k}{k}_{2}
\]
is the number of odd binomial coefficients $\binom{n-k}{k}.$
It was identified by Carlitz who showed in \cite{Carlitz} that
this number $\theta_{0}\left(n\right)$ (in Carlitz notation) satisfies
\[
\theta_{0}\left(2n+1\right)=\theta_{0}\left(n\right),\thinspace\thinspace\thinspace\theta_{0}\left(2n\right)=\theta_{0}\left(n\right)+\theta_{0}\left(n+1\right)
\]
with $\theta_{0}\left(0\right)=1$ and $\theta_{0}\left(1\right)=1,$
hence $\theta_{0}\left(n\right)=a_{n+1}.$
\end{proof}
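The identification with Stern's sequence can be confirmed by machine; the sketch below (ours, not part of the paper) compares $\tilde{F}_n^{(2)}$, computed via the digitwise (Lucas-style) base-$2$ binomial coefficients, with $a_{n+1}$.

```python
from math import comb, prod

def digits(n, b):
    d = []
    while n:
        d.append(n % b)
        n //= b
    return d or [0]

def binom2(n, k):
    """Base-2 binomial coefficient: 1 iff binom(n, k) is odd (Lucas), else 0."""
    dn, dk = digits(n, 2), digits(k, 2)
    if len(dk) > len(dn):
        return 0
    dk += [0] * (len(dn) - len(dk))
    return prod(comb(ni, ki) for ni, ki in zip(dn, dk))

def F2_tilde(n):
    """Modified base-2 Fibonacci number: sum without digital dominance."""
    return sum(binom2(n - k, k) for k in range(n + 1))

def stern(m):
    """Stern's diatomic sequence A002487, indices 0..m."""
    a = [0, 1]
    for i in range(2, m + 1):
        a.append(a[i // 2] if i % 2 == 0 else a[(i - 1) // 2] + a[(i + 1) // 2])
    return a

a = stern(65)
for n in range(64):
    assert F2_tilde(n) == a[n + 1]
```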
Another proof is obtained by remarking that the base-$2$ binomial coefficients coincide with the usual binomial coefficients $\bmod\ 2,$ and using the formula by S. Northshield \cite[Theorem 4.1]{Northshield}
\[
\sum_{2i+j=n} \left(\binom{i+j}{i}\mod 2\right) = a_{n+1}.
\]
Carlitz gives in \cite{Carlitz} a combinatorial interpretation of $a_{n+1}=\tilde{F}_{n}^{\left(2\right)}$ as the number
of hyperbinary representations of $n$, i.e.\ the number of ways of writing $n$ as a sum of powers of $2$, each power being used at most twice.\\
\section{The base-$b$ exponential}
\label{Exponential}
Analogously to the classical exponential function, we are interested in studying a modified exponential function
\begin{equation}\label{5.1}
e_b(x,w) := \sum_{k=0}^{\infty}\frac{x^{s_b(k)}}{(k!)_b}w^k.
\end{equation}
We note that without the weighting from $w^k$ this series has a zero radius of convergence. We can relate it to the classical exponential function, but we require the following formula for the upper incomplete gamma function $\Gamma(a,z)$ \cite[(8.2.2)]{NIST:DLMF}, where
\begin{equation}\label{5.2}
\Gamma(a,z):=\int_{z}^{\infty} t^{a-1}e^{-t}dt.
\end{equation}
From \cite[(8.4.8)]{NIST:DLMF}, we have
\begin{equation}\label{5.3}
\sum_{k=0}^{n}\frac{z^k}{k!} = e^z \frac{\Gamma(n+1,z)}{n!}.
\end{equation}
\begin{thm}
The base-$b$ exponential function satisfies
\begin{equation}\label{5.5}
e_b(x,w) = \exp\left(x\sum_{i=0}^{\infty}w^{b^i}\right) \prod_{i=0}^{\infty} \left(\frac{\Gamma(b,xw^{b^i})}{(b-1)!}\right) \simeq \exp\left(x\sum_{i=0}^{\infty} w^{b^i}\right) \simeq e^{xw+xw^{b}}
\end{equation}
for $w$ close to 0.
\end{thm}
\begin{proof}
Renaming the variable $x$ to $w$ in Corollary \ref{genthm}, then taking $f(k_i,i) = \frac{x^{k_i}}{k_i!}$ while using \eqref{5.3} to simplify the partial sums as below completes the proof:
\begin{equation}\label{5.7}
\prod_{i=0}^{\infty}\sum_{k=0}^{b-1} f(k,i) = \prod_{i=0}^{\infty}\sum_{k=0}^{b-1} \frac{(xw^{b^i})^{k}} {k!} = \prod_{i=0}^{\infty} \exp\left(xw^{b^i}\right) \frac{\Gamma(b,xw^{b^i})}{(b-1)!} = \exp\left(\sum_{i=0}^{\infty} xw^{b^i}\right) \prod_{i=0}^{\infty} \left(\frac{\Gamma(b,xw^{b^i})}{(b-1)!}\right).
\end{equation}
Equating these representations proves the first part of the theorem. The behavior of our base-$b$ exponential then depends on the series $\sum_{i=0}^{\infty} w^{b^i}$ and an infinite gamma product.
We now prove the first approximation. Even for small values of $b$ and $|x|<1$, the infinite product term on the right-hand side can be approximated by the indicator function of the interval $\left[0,1\right]$
\begin{equation}\label{5.9}
f\left(w\right)=\begin{cases}
1, & 0\le w \le 1\\
0, & w>1
\end{cases}
\end{equation}
This can be explained as follows: the function
\begin{equation}\label{5.10}
w\mapsto\frac{\Gamma\left(b,xw^{b^{i}}\right)}{\Gamma\left(b\right)}
\end{equation}
is strictly decreasing over $\left[0,1\right]$ from 1 at $w=0,$
to $0<\frac{\Gamma\left(b,x\right)}{\Gamma\left(b\right)}<1$ at $w=1.$ Moreover,
as \textbf{$b$ }increases, the ratio $\frac{\Gamma\left(b,x\right)}{\Gamma\left(b\right)}$
increases to 1.
For $0<w<1,$ $w^{b^{i}}$ is close to $0$ for $i$ large, so that
using the asymptotic expansion \cite[(8.7.3)]{NIST:DLMF}
\begin{equation}\label{5.11}
\Gamma\left(b,z\right)=\Gamma\left(b\right)+z^{b}\left(-\frac{1}{b}+\frac{z}{1+b}+\dots\right)
\end{equation}
we obtain
\begin{equation}\label{5.12}
\frac{\Gamma\left(b,xw^{b^{i}}\right)}{\Gamma\left(b\right)}=1-\frac{x^bw^{b^{i+1}}}{\Gamma\left(b+1\right)}+\frac{x^{b+1}w^{b^{i+1}+b^{i}}}{\left(b+1\right)\Gamma\left(b\right)}+\dots
\end{equation}
As a consequence, for $0\le w<1,$ $\sum w^{b^{i+1}}$ is convergent
so that $\prod_{i}\left(1-\frac{x^{b}w^{b^{i+1}}}{\Gamma\left(b+1\right)}\right)$
and $\prod_{i}\frac{\Gamma\left(b,xw^{b^{i}}\right)}{\Gamma\left(b\right)}$
are convergent. Moreover, from
\begin{equation}\label{5.13}
\log\Gamma\left(b,xw^{b^{i}}\right)-\log\Gamma\left(b\right)\simeq-\frac{1}{b}x^bw^{b^{i+1}},
\end{equation}
we deduce, for $0\le x<1,$
\begin{equation}\label{5.14}
\left|\sum_{i\ge0}\left(\log\Gamma\left(b,xw^{b^{i}}\right)-\log\Gamma\left(b\right)\right)\right|\simeq\frac{1}{b}\sum_{i\ge0}x^bw^{b^{i+1}}\le\frac{1}{b}\sum_{i\ge0}w^{b^{i+1}}\le\frac{1}{b}\frac{w^{b}}{1-w}
\end{equation}
which goes to $0$ as $b\to\infty,$ so that the left-hand side goes
to $0$ as well. This yields the first approximation for $x\in\left[0,1\right]$, which can in turn be approximated as
\begin{equation}
e_b(x,w)\simeq e^{xw+xw^{b}}
\end{equation}
since for $w\in\left[0,1\right)$,
\begin{equation}
e^{w^{b^{2}}}\ll e^{w^{b}}.
\end{equation}
\end{proof}
For $b=2$, we obtain the following result.
\begin{cor}
The base $2$ exponential function satisfies
\begin{equation}\label{5.8}
e_2(x,w) = \prod_{i=0}^{\infty}\left(1+xw^{2^i}\right).
\end{equation}
\end{cor}
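Since $(k!)_2=1$ for every $k$, \eqref{5.1} gives $e_2(x,w)=\sum_k x^{s_2(k)}w^k$, and the corollary says the infinite product reproduces these coefficients. A truncated check (ours, not part of the paper):

```python
from collections import defaultdict

def s2(n):
    """Number of ones in the binary expansion of n."""
    return bin(n).count("1")

x = 3                                   # any fixed numeric value of x
K = 6
# multiply the first K factors  (1 + x w^{2^i})
poly = defaultdict(int, {0: 1})
for i in range(K):
    new = defaultdict(int)
    for e, c in poly.items():
        new[e] += c
        new[e + 2 ** i] += c * x
    poly = new

# coefficient of w^k is x^{s_2(k)}
for k in range(2 ** K):
    assert poly[k] == x ** s2(k)
```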
Analogously to the classical identity $e^xe^y=e^{x+y}$, we have a similar convolution identity for the base-$b$ exponential.
\begin{thm}
\begin{equation}
e_b(x,w)\star e_b(y,w) = e_b(x+y,w),
\end{equation}
where
\begin{equation}
\sum_{k=0}^{\infty}a_kx^{s_b(k)}\star \sum_{l=0}^{\infty}b_ly^{s_b(l)} = \sum_{n=0}^{\infty}\sum_{k\leq_b n}a_k b_{n-k}x^{s_b(k)}y^{s_b(n-k)}.
\end{equation}
\begin{proof}
The theorem follows from taking $a_k=\frac{w^k}{(k!)_b}$ and $b_l=\frac{w^l}{(l!)_b}$ and applying the digital binomial theorem \eqref{2.4}.
\end{proof}
\end{thm}
The $\star$ operation forms an analogue of multiplication for formal power series which instead applies to series involving powers $x^{s_b(k)}$, whereas the usual convolution of two power series is defined as
\begin{equation}
\sum_{k=0}^{\infty}a_kx^{k}\star \sum_{l=0}^{\infty}b_ly^{l} = \sum_{n=0}^{\infty}\sum_{k\leq n}a_k b_{n-k}x^{k}y^{n-k}.
\end{equation}
We note that the $\star$ operator must manually impose the condition $k\leq_bn$ because of the basic result that
\begin{equation}
\binom{n}{k}_b \neq \frac{(n!)_b}{(k!)_b\,((n-k)!)_b},
\end{equation}
since the left hand side is zero for $k$ not digitally dominated by $n$, while the right hand side is well-defined and in general non-zero for such $k$.
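Coefficientwise, the convolution identity above amounts to the digital binomial theorem $\sum_{0\le k\le_b n}\binom{n}{k}_b\,x^{s_b(k)}y^{s_b(n-k)}=(x+y)^{s_b(n)}$, which the following sketch (ours, not part of the paper) verifies at sample numeric values.

```python
from math import comb, prod

def digits(n, b):
    d = []
    while n:
        d.append(n % b)
        n //= b
    return d or [0]

def s_b(n, b):
    return sum(digits(n, b))

def binom_b(n, k, b):
    dn, dk = digits(n, b), digits(k, b)
    if len(dk) > len(dn):
        return 0
    dk += [0] * (len(dn) - len(dk))
    return prod(comb(ni, ki) for ni, ki in zip(dn, dk))

b, x, y = 3, 2, 3
for n in range(81):
    # binom_b(n, k, b) > 0 exactly when k <=_b n, so the subtraction n - k
    # is borrow-free and s_b(n - k) = s_b(n) - s_b(k)
    lhs = sum(binom_b(n, k, b) * x ** s_b(k, b) * y ** s_b(n - k, b)
              for k in range(n + 1) if binom_b(n, k, b) > 0)
    assert lhs == (x + y) ** s_b(n, b)
```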
\bibliographystyle{plain}
% arXiv:1607.02564 -- "Base-$b$ analogues of classic combinatorial objects" (math.NT; math.CO), 2016.
% arXiv:1809.01405
\title{Noncrossing Arc Diagrams, Tamari Lattices, and Parabolic Quotients of the Symmetric Group}
\begin{abstract}
Ordering permutations by containment of inversion sets yields a fascinating partial order on the symmetric group: the weak order. This partial order is, among other things, a semidistributive lattice. As a consequence, every permutation has a canonical representation as a join of other permutations. Combinatorially, these canonical join representations can be modeled in terms of arc diagrams. Moreover, these arc diagrams also serve as a model to understand quotient lattices of the weak order. A particularly well-behaved quotient lattice of the weak order is the well-known Tamari lattice, which appears in many seemingly unrelated areas of mathematics. The arc diagrams representing the members of the Tamari lattices are better known as noncrossing partitions. Recently, the Tamari lattices were generalized to parabolic quotients of the symmetric group. In this article, we undertake a structural investigation of these parabolic Tamari lattices, and explain how modified arc diagrams aid the understanding of these lattices.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
\subsection{Noncrossing partition lattices}
\label{sec:intro_noncrossing}
The \defn{noncrossing partition lattice} $\mathcal{N\!C}_{n}$ was introduced in the 1970s by G.~Kreweras as a subposet of the lattice of all set partitions of $[n]\stackrel{\mathrm{def}}{=}\{1,2,\ldots,n\}$ ordered by dual refinement via a certain ``noncrossing'' property~\cite{kreweras72sur}. It turns out that the noncrossing partition lattices appear in many seemingly unrelated areas of mathematics, such as representation theory, group theory, algebraic geometry, probability theory and many more; see \cites{armstrong09generalized,mccammond06noncrossing,simion00noncrossing} for surveys.
There is an intriguing way to realize $\mathcal{N\!C}_{n}$ as a partial order on (certain) elements of the symmetric group $\mathfrak{S}_{n}$. We may order $\mathfrak{S}_{n}$ by taking prefixes of minimal length factorizations of permutations into transpositions. Then, $\mathcal{N\!C}_{n}$ can be viewed as a principal order ideal generated by a long cycle~\cite{brady01partial}.
\subsection{Tamari lattices}
\label{sec:intro_tamari}
The \defn{Tamari lattice} $\mathcal{T}_{n}$ was introduced in the 1960s by D.~Tamari as a partial order on well-balanced parenthesizations of a string of length ${n+1}$~\cite{tamari62algebra}. Since then the Tamari lattices have made the top ranks of the most popular objects of study in lattice theory, geometry, topology, and combinatorics. For a comprehensive survey on the different aspects, applications and appearances of the Tamari lattices we refer the interested reader to \cite{hoissen12associahedra}, and to the references therein.
One of the many ways to realize $\mathcal{T}_{n}$ as a concrete partial order is the following. The symmetric group $\mathfrak{S}_{n}$ may be partially ordered by inclusion of inversion sets, the so-called \defn{weak order}, and $\mathcal{T}_{n}$ arises as the restriction of this order to the set of $231$-avoiding permutations~\cite{bjorner97shellable}*{Theorem~9.6(ii)}. In fact, this construction realizes $\mathcal{T}_{n}$ as a quotient lattice of the weak order on $\mathfrak{S}_{n}$~\cite{reading06cambrian}.
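As a quick illustration of this realization (our sketch, not taken from the article), one can enumerate the $231$-avoiding permutations directly and compare with the Catalan numbers, which count the elements of $\mathcal{T}_n$.

```python
from itertools import permutations
from math import comb

def avoids_231(w):
    """True if the one-line notation w contains no 231 pattern,
    i.e. no indices i < j < k with w[k] < w[i] < w[j]."""
    n = len(w)
    return not any(w[k] < w[i] < w[j]
                   for i in range(n)
                   for j in range(i + 1, n)
                   for k in range(j + 1, n))

for n in range(1, 7):
    count = sum(avoids_231(w) for w in permutations(range(1, n + 1)))
    catalan = comb(2 * n, n) // (n + 1)
    assert count == catalan
```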
\subsection{Tamari lattices and noncrossing partition lattices are deeply connected}
It is quickly shown that $\mathcal{T}_{n}$ and $\mathcal{N\!C}_{n}$ both have cardinality given by the $n\th$ Catalan number, and there are many bijections between the ground sets of different realizations of $\mathcal{T}_{n}$ and the set of noncrossing partitions of $[n]$.
It is perhaps a bit surprising that there also is a deep structural connection between the two families of lattices. The Tamari lattices are instances of so-called \defn{congruence-uniform} lattices, \text{i.e.}\; they may be constructed from the singleton lattice by successive doublings of intervals. N.~Reading explained how to order the elements of a congruence-uniform lattice in an alternate way~\cite{reading16lattice}*{Section~9-7.4}. If $\mathrm{CLO}(\mathcal{T}_{n})$ denotes this \defn{core label order} of $\mathcal{T}_{n}$, then \cite{reading11noncrossing}*{Theorem~8.5} implies that $\mathrm{CLO}(\mathcal{T}_{n})$ is isomorphic to $\mathcal{N\!C}_{n}$.
\subsection{Parabolic quotients of the symmetric group}
\label{sec:intro_parabolic_quotients}
This article is about a recent generalization of $\mathcal{T}_{n}$ to so-called parabolic quotients of $\mathfrak{S}_{n}$. Recall that $\mathfrak{S}_{n}$ is generated by the set of adjacent transpositions $(i,i{+}1)$ for $1\leq i<n$. If we fix $J\subseteq[n-1]$, then the set $\bigl\{(j,j{+}1)\mid j\in J\bigr\}$ generates a (parabolic) subgroup of $\mathfrak{S}_{n}$. The \defn{parabolic quotient} of $\mathfrak{S}_{n}$ with respect to $J$ is the set of minimal length representatives in the right cosets of $\mathfrak{S}_{n}$ by this subgroup. Clearly, if $J=\emptyset$, this quotient is simply all of $\mathfrak{S}_{n}$.
If we write $[n-1]\setminus J=\{s_{1},s_{2},\ldots,s_{r-1}\}$ such that $s_{1}<s_{2}<\cdots<s_{r-1}$ as well as $s_{0}=0$ and $s_{r}=n$, then the permutations that belong to the parabolic quotient of $\mathfrak{S}_{n}$ with respect to $J$ are precisely those whose one-line notation is increasing between positions $s_{i-1}{+}1$ and $s_{i}$ for all $i\in[r]$. Let us consider the (integer) composition $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ of $n$ with $\alpha_{i}=s_{i}-s_{i-1}$ for $i\in[r]$, and color the one-line notation of $w\in\mathfrak{S}_{n}$ according to $\alpha$, \text{i.e.}\; we color the first $\alpha_{1}$ entries with color one, we color the next $\alpha_{2}$ entries with color two, and so on. We call the set of indices of entries with a common color an \defn{$\alpha$-region}. Then, $w$ belongs to the parabolic quotient of $\mathfrak{S}_{n}$ with respect to $J$ if and only if the $\alpha$-regions induce increasing sequences in the one-line notation of $w$. We will therefore compactly denote this parabolic quotient by $\mathfrak{S}_{\alpha}$.
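The description above is easy to turn into an enumeration of $\mathfrak{S}_{\alpha}$: filter the permutations of $[n]$ whose $\alpha$-regions induce increasing sequences. Its cardinality is the multinomial coefficient $n!/(\alpha_1!\cdots\alpha_r!)$, a standard fact about parabolic quotients; the sketch below (ours, not from the article) checks small cases.

```python
from itertools import permutations
from math import factorial, prod

def parabolic_quotient(alpha):
    """Permutations of [n] (one-line notation) increasing on each alpha-region."""
    n = sum(alpha)
    starts = [sum(alpha[:i]) for i in range(len(alpha) + 1)]
    result = []
    for w in permutations(range(1, n + 1)):
        regions = [w[starts[i]:starts[i + 1]] for i in range(len(alpha))]
        if all(r == tuple(sorted(r)) for r in regions):
            result.append(w)
    return result

for alpha in [(1, 1, 1), (2, 1), (2, 1, 2), (3, 2)]:
    n = sum(alpha)
    count = len(parabolic_quotient(alpha))
    assert count == factorial(n) // prod(factorial(a) for a in alpha)
```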
For any $\alpha$, we may still order the elements of $\mathfrak{S}_{\alpha}$ by inclusion of inversion sets, and it was shown in \cite{muehle19tamari}*{Theorem~1.1} that we can define a \defn{parabolic Tamari lattice} $\mathcal{T}_{\alpha}$ as the restriction of this order to the set of permutations in $\mathfrak{S}_{\alpha}$ that avoid a particular pattern. Moreover, this definition recovers the classical Tamari lattice in the case $J=\emptyset$ (or $\alpha=(1,1,\ldots,1)$, respectively).
In a similar way, we can define the set $N\!C_{\alpha}$ of noncrossing partitions of the parabolic quotient $\mathfrak{S}_{\alpha}$ such that we recover the classical noncrossing partitions in the case $J=\emptyset$~\cite{muehle19tamari}*{Section~4}.
\subsection{Main Results}
\label{sec:intro_main_results}
The main purpose of this article is to investigate the relationship between $\mathcal{T}_{\alpha}$ and the poset $\mathcal{N\!C}_{\alpha}$ of parabolic noncrossing partitions ordered by (dual) refinement. This article is accompanied by a suite of \texttt{Sage}-scripts that implement most of the objects defined in this article. It can be obtained from the following location.
\begin{quote}
\url{https://www.math.tu-dresden.de/~hmuehle/files/sage/parabolic_cataland.zip}
\end{quote}
Our first main result concerns the structure of the poset of parabolic noncrossing partitions.
\begin{theorem}\label{thm:parabolic_noncrossing_partition_lattice}
Let $n>0$ and let $\alpha$ be a composition of $n$. The poset $\mathcal{N\!C}_{\alpha}$ is a ranked meet-semilattice, where the rank function is given by the number of bumps. It is a lattice if and only if $\alpha=(1,1,\ldots,1)$ or $\alpha=(n)$.
\end{theorem}
Theorem~4.2 in \cite{muehle19tamari} describes an explicit bijection between the ground sets of $\mathcal{T}_{\alpha}$ and $\mathcal{N\!C}_{\alpha}$. We use this connection to prove the following results on the structure and the topology of $\mathcal{T}_{\alpha}$.
\begin{theorem}\label{thm:parabolic_tamari_structure}
For all $n>0$ and every composition $\alpha$ of $n$, the parabolic Tamari lattice $\mathcal{T}_{\alpha}$ is congruence uniform and trim.
\end{theorem}
\begin{theorem}\label{thm:parabolic_tamari_topology}
Let $n>0$ and let $\alpha$ be a composition of $n$. The order complex of $\mathcal{T}_{\alpha}$ with least and greatest element removed is homotopic to a sphere if and only if $\alpha=(1,1,\ldots,1)$ or $\alpha=(n)$. Otherwise it is contractible.
\end{theorem}
Since Theorem~\ref{thm:parabolic_tamari_structure} implies that $\mathcal{T}_{\alpha}$ is congruence uniform, it admits an alternate order on its ground set, the \defn{core label order}~\cite{muehle19the}, which we denote by $\mathrm{CLO}\bigl(\mathcal{T}_{\alpha}\bigr)$. By construction, this alternate order can be described via parabolic noncrossing partitions, and the following result characterizes the cases in which $\mathrm{CLO}\bigl(\mathcal{T}_{\alpha}\bigr)$ is isomorphic to $\mathcal{N\!C}_{\alpha}$.
\begin{theorem}\label{thm:parabolic_tamari_alternate}
Let $n>0$ and let $\alpha$ be a composition of $n$. The core label order of $\mathcal{T}_{\alpha}$ is isomorphic to a subposet of $\mathcal{N\!C}_{\alpha}$, and we have $\mathrm{CLO}\bigl(\mathcal{T}_{\alpha}\bigr)\cong\mathcal{N\!C}_{\alpha}$ if and only if either $\alpha=(n)$ or $\alpha=(a,1,1,\ldots,1,b)$ for some positive integers $a,b$.
\end{theorem}
\subsection{Further Results}
\label{sec:intro_more_results}
As we proceed with proving Theorems~\ref{thm:parabolic_noncrossing_partition_lattice}--\ref{thm:parabolic_tamari_topology}, we obtain a number of other results that, in our opinion, deserve to be mentioned in this introduction.
The first auxiliary result is a general lattice-theoretical one. Recall that a lattice is \defn{extremal} if it has the same number of join- and meet-irreducible elements, and there exists a maximal chain whose length equals this common value. A lattice is \defn{left modular} if it has a maximal chain consisting of left-modular elements.
\begin{theorem}\label{thm:semidistributive_equivalence_lmod_xtrm}
In a semidistributive lattice the notions of extremality and left-modularity are equivalent.
\end{theorem}
The remaining auxiliary results again concern the parabolic Tamari lattices. In particular, we explicitly describe the poset of join-irreducibles and the Galois graph of $\mathcal{T}_{\alpha}$.
\begin{theorem}\label{thm:parabolic_tamari_irreducible_poset}
Let $n>0$ and let $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ be a composition of $n$. The poset of join-irreducible elements of $\mathcal{T}_{\alpha}$ consists of $r{-}1$ connected components, where for $i\in[r-1]$ the $i\th$ component is isomorphic to the direct product of an $\alpha_{i}$-chain and an $(\alpha_{i+1}+\alpha_{i+2}+\cdots+\alpha_{r})$-chain.
\end{theorem}
If we set $\alpha=(1,1,\ldots,1)$, then Theorem~\ref{thm:parabolic_tamari_irreducible_poset} states that the poset of join-irreducible elements of the classical Tamari lattice $\mathcal{T}_{n}$ is a disjoint union of $n-1$ chains with $1,2,\ldots,n-1$ elements, respectively. This result is probably known to experts, but we have not been able to find an explicit reference. The join-irreducible elements of $\mathcal{T}_{\alpha}$ are those permutations with a unique descent $(a,b)$, and we abbreviate them by $w_{a,b}$. (There are a few restrictions depending on $\alpha$ for which values of $a$ and $b$ are admissible; see Corollary~\ref{cor:explicit_irreducibles}.)
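For a concrete instance, the Tamari lattice $\mathcal{T}_{3}$ is the pentagon: it has exactly three join-irreducible elements, and in the induced order they form the disjoint union of a chain with two elements and a chain with one element, in accordance with Theorem~\ref{thm:parabolic_tamari_irreducible_poset}.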
\begin{theorem}\label{thm:parabolic_tamari_galois_graph}
Let $n>0$ and let $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ be a composition of $n$. The Galois graph of $\mathcal{T}_{\alpha}$ is isomorphic to the directed graph whose vertices are the join-irreducible elements of $\mathcal{T}_{\alpha}$, and there exists a directed edge $w_{a,b}\to w_{a',b'}$ if and only if $w_{a,b}\neq w_{a',b'}$, and
\begin{itemize}
\item either $a$ and $a'$ belong to the same $\alpha$-region and $a\leq a'<b'\leq b$,
\item or $a$ and $a'$ belong to different $\alpha$-regions and $a'<a<b'\leq b$, where $a$ and $b'$ belong to different $\alpha$-regions, too.
\end{itemize}
\end{theorem}
F.~Chapoton has observed an interesting connection between the generating function of the M{\"o}bius function of $\mathcal{N\!C}_{n}$ and a certain polynomial defined on the order ideals of the triangular poset of all transpositions of $\mathfrak{S}_{n}$~\cites{chapoton04enumerative,chapoton06sur}. This connection has now been proven, and we present a generalization of it to parabolic quotients of $\mathfrak{S}_{n}$. In the following conjecture, $H_{\alpha}(x,y)$ is a polynomial that is defined on the order ideals of a particular partial order on the transpositions of $\mathfrak{S}_{\alpha}$, and $M_{\alpha}(x,y)$ is the generating function of the M{\"o}bius function of $\mathrm{CLO}\bigl(\mathcal{T}_{\alpha}\bigr)$. The precise definitions follow in Section~\ref{sec:chapoton_triangles}.
\begin{conjecture}\label{conj:parabolic_hm}
Let $n>0$ and let $\alpha$ be a composition of $n$ into $r$ parts. The following identity holds if and only if $\alpha$ has at most one part exceeding $1$:
\begin{displaymath}
H_{\alpha}(x,y) = \Bigl(x(y-1)+1\Bigr)^{r-1}M_{\alpha}\left(\frac{y}{y-1},\frac{x(y-1)}{x(y-1)+1}\right).
\end{displaymath}
\end{conjecture}
\subsection{Organization of the Article}
\label{sec:intro_organization}
We recall the basic poset- and lattice-theoretical notions in Section~\ref{sec:posets_and_lattices}, and we prove Theorem~\ref{thm:semidistributive_equivalence_lmod_xtrm} in Section~\ref{sec:trim}.
We properly define parabolic noncrossing partitions in Section~\ref{sec:parabolic_noncrossing_partitions}, and prove Theorem~\ref{thm:parabolic_noncrossing_partition_lattice} in Section~\ref{sec:parabolic_partitions}.
In Section~\ref{sec:parabolic_tamari_lattices} we recall the definition of parabolic quotients of the symmetric group and of the parabolic Tamari lattices. We prove Theorem~\ref{thm:parabolic_tamari_irreducible_poset} at the end of Section~\ref{sec:parabolic_tamari}, and Theorems~\ref{thm:parabolic_tamari_structure} and \ref{thm:parabolic_tamari_topology} in Section~\ref{sec:parabolic_tamari_proofs}. We conclude with the proof of Theorem~\ref{thm:parabolic_tamari_galois_graph} in Section~\ref{sec:galois_graph}.
The core label order of a congruence-uniform lattice is formally defined in Section~\ref{sec:parabolic_tamari_alternate_order} and we also prove Theorem~\ref{thm:parabolic_tamari_alternate} there.
We conclude this article by explaining the background of Conjecture~\ref{conj:parabolic_hm} in Section~\ref{sec:chapoton_triangles}, and illustrating it with some examples.
An extended abstract of this article, illustrating the main results and constructions in the case $\alpha=(t,1,1,\ldots,1)$, has appeared in \cite{muehle19ballot}.
\section{Posets and Lattices}
\label{sec:posets_and_lattices}
\subsection{Basic Notions}
\label{sec:poset_basics}
Let $\mathcal{P}=(P,\leq)$ be a partially ordered set, or \defn{poset} for short. The \defn{dual poset} is the poset $\mathcal{P}^{*}\stackrel{\mathrm{def}}{=}(P,\geq)$. Throughout this article we only consider finite posets.
Two elements $a,b\in P$ form a \defn{cover relation} if $a<b$ and there does not exist $c\in P$ with $a<c<b$. In that case, we usually write $a\lessdot b$, and say that $a$ \defn{is covered by} $b$ and $b$ \defn{covers} $a$. An \defn{interval} of $\mathcal{P}$ is a set $[a,b]\stackrel{\mathrm{def}}{=}\{c\in P\mid a\leq c\leq b\}$ for $a,b\in P$ with $a\leq b$.
An element $a\in P$ is \defn{minimal} in $\mathcal{P}$ if $b\leq a$ implies $b=a$ for all $b\in P$. An element is \defn{maximal} in $\mathcal{P}$ if it is minimal in $\mathcal{P}^{*}$. If $\mathcal{P}$ has a unique minimal element $\hat{0}$ and a unique maximal element $\hat{1}$, then it is \defn{bounded}. In a bounded poset, any element covering the unique minimal element is an \defn{atom}.
A \defn{chain} of $\mathcal{P}$ is a subset of $P$ in which every two elements are comparable, and an \defn{antichain} of $\mathcal{P}$ is a subset of $P$ in which no two elements are comparable. A chain is \defn{saturated} if it can be written as a sequence of cover relations, and it is \defn{maximal} if it is saturated and contains a minimal and a maximal element.
A \defn{rank function} of $\mathcal{P}$ is a function $\mathrm{rk}\colon P\to\mathbb{N}$ which satisfies $\mathrm{rk}(a)=0$ if and only if $a$ is a minimal element, and $\mathrm{rk}(b)=\mathrm{rk}(a)+1$ whenever $a\lessdot b$. A poset that admits a rank function is \defn{ranked}. In other words, $\mathrm{rk}(a)$ is one less than the cardinality of any saturated chain from a minimal element to $a$.
An \defn{order ideal} of $\mathcal{P}$ is a subset $I\subseteq P$ such that $b\in I$ whenever $b\leq a$ for some $a\in I$. An \defn{order filter} of $\mathcal{P}$ is an order ideal of $\mathcal{P}^{*}$.
If for all $a,b\in P$ there exists a least upper bound $a\vee b$ (the \defn{join}), then $\mathcal{P}$ is a \defn{join-semilattice}. If for all $a,b\in P$ there exists a greatest lower bound $a\wedge b$ (the \defn{meet}), then $\mathcal{P}$ is a \defn{meet-semilattice}. A poset that is both a join- and a meet-semilattice is a \defn{lattice}. Note that every finite lattice is bounded.
\subsection{Congruence-Uniform Lattices}
\label{sec:congruence_uniform}
Let $\mathcal{L}=(L,\leq)$ be a lattice. A \defn{lattice congruence} is an equivalence relation $\Theta$ on $L$ such that for all $a,b,c,d\in L$ with $[a]_{\Theta}=[c]_{\Theta}$ and $[b]_{\Theta}=[d]_{\Theta}$ we have ${[a\vee b]_{\Theta}=[c\vee d]_{\Theta}}$ and $[a\wedge b]_{\Theta}=[c\wedge d]_{\Theta}$. The set $\mathrm{Con}(\mathcal{L})$ of all lattice congruences of $\mathcal{L}$ forms a distributive lattice under refinement. If $a\lessdot b$ in $\mathcal{L}$, then we denote by $\mathrm{cg}(a,b)$ the smallest lattice congruence of $\mathcal{L}$ in which $a$ and $b$ are equivalent.
An element $j\in L\setminus\{\hat{0}\}$ is \defn{join irreducible} if whenever $j=a\vee b$, then $j\in\{a,b\}$. Let $\mathcal{J}(\mathcal{L})$ denote the set of join-irreducible elements of $\mathcal{L}$. If $\mathcal{L}$ is finite, and $j\in\mathcal{J}(\mathcal{L})$, then there exists a unique element $j_{*}\in L$ with $j_{*}\lessdot j$. We write $\mathrm{cg}(j)$ as a short-hand for $\mathrm{cg}(j_{*},j)$. We may dually define \defn{meet-irreducible} elements, and denote the set of such elements by $\mathcal{M}(\mathcal{L})$.
\begin{theorem}[\cite{freese95free}*{Theorem~2.30}]\label{thm:irreducible_congruences}
Let $\mathcal{L}$ be a finite lattice, and let $\Theta\in\mathrm{Con}(\mathcal{L})$. The following are equivalent.
\begin{enumerate}
\item $\Theta$ is join irreducible in $\mathrm{Con}(\mathcal{L})$.
\item $\Theta=\mathrm{cg}(a,b)$ for some $a\lessdot b$ in $\mathcal{L}$.
\item $\Theta=\mathrm{cg}(j)$ for some $j\in\mathcal{J}(\mathcal{L})$.
\end{enumerate}
\end{theorem}
A consequence of Theorem~\ref{thm:irreducible_congruences} is the existence of a surjective map
\begin{displaymath}
\mathrm{cg}_{*}\colon \mathcal{J}(\mathcal{L})\to\mathcal{J}\bigl(\mathrm{Con}(\mathcal{L})\bigr),\quad j\mapsto\mathrm{cg}(j).
\end{displaymath}
A finite lattice $\mathcal{L}$ is \defn{congruence uniform} if the map $\mathrm{cg}_{*}$ is a bijection for both $\mathcal{L}$ and $\mathcal{L}^{*}$.
\begin{remark}
It follows from \cite{day79characterizations}*{Theorem~5.1} that congruence-uniform lattices are precisely the finite lattices that can be obtained from the singleton lattice by a finite sequence of interval doublings.
\end{remark}
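For instance, the pentagon $N_{5}$ arises by such a sequence of doublings: doubling the singleton lattice yields the $2$-chain, doubling the entire $2$-chain yields the square $\mathbf{2}\times\mathbf{2}$, and doubling one of the two middle elements of the square (a one-element interval) yields $N_{5}$. In particular, $N_{5}$ is congruence uniform.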
Let $\mathcal{E}(\mathcal{L})=\{(a,b)\in L\times L\mid a\lessdot b\}$ denote the set of cover relations of $\mathcal{L}$. If $\mathcal{L}$ is congruence uniform, then Theorem~\ref{thm:irreducible_congruences} implies the existence of a map
\begin{equation}\label{eq:cu_labeling}
\lambda\colon\mathcal{E}(\mathcal{L})\to\mathcal{J}(\mathcal{L}),\quad (a,b)\mapsto j,
\end{equation}
where $j$ is the unique join-irreducible element of $\mathcal{L}$ with $\mathrm{cg}(a,b)=\mathrm{cg}(j)$.
In general, a map $f\colon\mathcal{E}(\mathcal{L})\to\mathcal{P}$ is an \defn{edge-labeling} of $\mathcal{L}$, where $\mathcal{P}$ is an arbitrary poset. If $C:a_{0}\lessdot a_{1}\lessdot\cdots\lessdot a_{s}$ is a saturated chain, then we use the following abbreviation:
\begin{displaymath}
f(C) \stackrel{\mathrm{def}}{=} \bigl(f(a_{0},a_{1}),f(a_{1},a_{2}),\ldots,f(a_{s-1},a_{s})\bigr)
\end{displaymath}
There is a nice characterization of cover relations in a congruence-uniform lattice which have the same label under $\lambda$. Two cover relations $(a,b),(c,d)\in\mathcal{E}(\mathcal{L})$ are \defn{perspective} if either $a\vee d=b$ and $a\wedge d=c$ or $b\vee c=d$ and $b\wedge c=a$.
\begin{lemma}[\cite{garver18oriented}*{Lemma~2.6}]\label{lem:perspective_covers}
Let $\mathcal{L}$ be a congruence-uniform lattice with labeling $\lambda$. For $(a,b)\in\mathcal{E}(\mathcal{L})$ and $j\in\mathcal{J}(\mathcal{L})$ we have $\lambda(a,b)=j$ if and only if $(a,b)$ and $(j_{*},j)$ are perspective.
\end{lemma}
We have the following corollary.
\begin{corollary}\label{cor:congruence_uniform_chains_no_duplicates}
For any saturated chain $C$ of a congruence-uniform lattice $\mathcal{L}$, the sequence $\lambda(C)$ does not contain duplicate entries.
\end{corollary}
\begin{proof}
Let $C:a_{0}\lessdot a_{1}\lessdot\cdots\lessdot a_{s}$ be a saturated chain of $\mathcal{L}$, and pick some index $i$ such that $\lambda(a_{i},a_{i+1})=k$. It follows from Lemma~\ref{lem:perspective_covers} that $a_{i}\vee k=a_{i+1}$. Thus, for any $j>i$ we conclude that $k\leq a_{i+1}\leq a_{j}$ and $a_{j}\vee k=a_{j}\lessdot a_{j+1}$. Lemma~\ref{lem:perspective_covers} then implies that $\lambda(a_{j},a_{j+1})\neq k$, and we are done.
\end{proof}
\subsection{Semidistributive Lattices}
\label{sec:semidistributive}
A finite lattice $\mathcal{L}=(L,\leq)$ is \defn{join semidistributive} if for every three elements $a,b,c\in L$ with $a\vee b=a\vee c$ it follows that $a\vee(b\wedge c)=a\vee b$. We may define \defn{meet-semidistributive} lattices dually. A lattice that is both join and meet semidistributive is simply called \defn{semidistributive}.
\begin{theorem}[\cite{day79characterizations}*{Theorem~4.2}]\label{thm:congruence_uniform_implies_semidistributive}
Every congruence-uniform lattice is semidistributive.
\end{theorem}
The converse is not true; see for instance~\cite{nation00unbounded}*{Section~3}. Semidistributive lattices have another characteristic property, namely that every element can be represented canonically as the join of a particular set of join-irreducible elements. More precisely, let $\mathcal{L}=(L,\leq)$ be a finite lattice. A subset $A\subseteq L$ is a \defn{join representation} of $a\in L$ if $a=\bigvee A$. A join representation $A$ of $a$ is \defn{irredundant} if there is no proper subset of $A$ that joins to $a$. For two irredundant join representations $B_{1},B_{2}$ of $a$ we say that $B_{1}$ \defn{refines} $B_{2}$ if for every $b_{1}\in B_{1}$ there exists some $b_{2}\in B_{2}$ with $b_{1}\leq b_{2}$. (In other words, the order ideal generated by $B_{1}$ is contained in the order ideal generated by $B_{2}$.) A join representation of $a$ is \defn{canonical} if it is irredundant and refines every other join representation of $a$. We remark that every canonical join representation is an antichain consisting of join-irreducible elements.
\begin{theorem}[\cite{freese95free}*{Theorem~2.24}]\label{thm:semidistributive_lattices_canonical_representations}
A finite lattice is join semidistributive if and only if every element admits a canonical join representation.
\end{theorem}
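As a small example, consider the Boolean lattice with two atoms $a$ and $b$. The top element $\hat{1}=a\vee b$ has exactly two irredundant join representations, namely $\{\hat{1}\}$ and $\{a,b\}$, and the latter refines the former as well as every other join representation of $\hat{1}$. Hence $\{a,b\}$ is the canonical join representation of $\hat{1}$; note that it is indeed an antichain of join-irreducible elements.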
Figure~\ref{fig:nonsemidistributive} shows a lattice that is not join semidistributive, because the top element does not have a canonical join representation. (There are two minimal join representations of the top element: the three atoms, and the two highlighted elements, but neither of these sets refines the other.)
\begin{figure}
\centering
\includegraphics[scale=1,page=1]{para_figures.pdf}
\caption{A lattice that is not join semidistributive.}
\label{fig:nonsemidistributive}
\end{figure}
If $\mathcal{L}$ is congruence uniform, then we can use the labeling from \eqref{eq:cu_labeling} to compute the canonical join representation of the elements of $\mathcal{L}$.
\begin{theorem}[\cite{garver18oriented}*{Proposition~2.9}]\label{thm:congruence_uniform_canonical_representation}
Let $\mathcal{L}=(L,\leq)$ be a finite, congruence-uniform lattice. The canonical join representation of $a\in L$ is $\bigl\{\lambda(b,a)\mid b\lessdot a\bigr\}$.
\end{theorem}
Let $j\in\mathcal{J}(\mathcal{L})$. If the set $\{a\in L\mid j_{*}\leq a, j\not\leq a\}$ has a greatest element, then we denote it by $\kappa(j)$. Whenever $\kappa(j)$ exists, it is necessarily meet irreducible. We recall two facts about $\kappa$.
\begin{lemma}[\cite{freese95free}*{Corollary~2.55}]\label{lem:kappa_bijection}
Let $\mathcal{L}$ be a finite, semidistributive lattice. Then, $\kappa$ is a bijection, and consequently $\bigl\lvert\mathcal{J}(\mathcal{L})\bigr\rvert=\bigl\lvert\mathcal{M}(\mathcal{L})\bigr\rvert$.
\end{lemma}
\begin{lemma}[\cite{freese95free}*{Lemma~2.57}]\label{lem:kappa_computation}
Let $\mathcal{L}=(L,\leq)$ be a finite lattice, and let $j\in\mathcal{J}(\mathcal{L})$ such that $\kappa(j)$ exists. For any $a\in L$ we have $a\leq\kappa(j)$ if and only if $j\not\leq j_{*}\vee a$.
\end{lemma}
\subsection{Trim Lattices}
\label{sec:trim}
Let $\mathcal{L}=(L,\leq)$ be a finite lattice. The \defn{length} of a chain is one less than its cardinality. Let $\ell(\mathcal{L})$ denote the maximum length of a maximal chain of $\mathcal{L}$, and call this the \defn{length} of $\mathcal{L}$. Every finite lattice $\mathcal{L}$ satisfies
\begin{displaymath}
\ell(\mathcal{L})\leq\min\Bigl\{\bigl\lvert\mathcal{J}(\mathcal{L})\bigr\rvert,\bigl\lvert\mathcal{M}(\mathcal{L})\bigr\rvert\Bigr\}.
\end{displaymath}
If all three of these quantities coincide, \text{i.e.}\; when $\bigl\lvert\mathcal{J}(\mathcal{L})\bigr\rvert = \ell(\mathcal{L}) = \bigl\lvert\mathcal{M}(\mathcal{L})\bigr\rvert$, then $\mathcal{L}$ is \defn{extremal}~\cite{markowsky92primes}. It follows from \cite{markowsky92primes}*{Theorem~14(ii)} that any finite lattice can be embedded as an interval into an extremal lattice. Consequently, extremality is not inherited by intervals. We will now explain how to strengthen extremality so that we obtain a lattice property that is inherited by intervals.
An element $a\in L$ is \defn{left modular} if for all $b,c\in L$ with $b<c$ we have
\begin{displaymath}
(b\vee a)\wedge c = b\vee(a\wedge c).
\end{displaymath}
If $\mathcal{L}$ has a maximal chain of length $\ell(\mathcal{L})$ consisting entirely of left-modular elements, then $\mathcal{L}$ is \defn{left modular}. An extremal, left-modular lattice is called \defn{trim}~\cite{thomas06analogue}.
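For instance, the pentagon $N_{5}$, with cover relations $\hat{0}\lessdot x\lessdot y\lessdot\hat{1}$ and $\hat{0}\lessdot z\lessdot\hat{1}$, is trim: we have $\mathcal{J}(N_{5})=\{x,y,z\}=\mathcal{M}(N_{5})$ and $\ell(N_{5})=3$, so $N_{5}$ is extremal, and one checks directly that every element of the chain $\hat{0}\lessdot x\lessdot y\lessdot\hat{1}$ is left modular. In particular, trim lattices need not be graded.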
It was recently shown that every extremal semidistributive lattice is already trim.
\begin{theorem}[\cite{thomas19rowmotion}*{Theorem~1.4}]\label{thm:semidistributive_extremal_lattice_trim}
Every extremal semidistributive lattice is trim.
\end{theorem}
Figure~\ref{fig:nonsemidistributive} shows the smallest extremal lattice that is not left modular. It has only one chain of maximal length, but the highlighted element on this chain is not left modular. We can prove the analogue of Theorem~\ref{thm:semidistributive_extremal_lattice_trim} for left-modularity.
\begin{theorem}\label{thm:semidistributive_lmod_lattice_trim}
Every left-modular semidistributive lattice is trim.
\end{theorem}
\begin{proof}
Let $\mathcal{L}=(L,\leq)$ be a left-modular, semidistributive lattice with left-modular chain $C:\hat{0}=a_{0}\lessdot a_{1}\lessdot\cdots\lessdot a_{n}=\hat{1}$. We consider the following surjective map:
\begin{displaymath}
\gamma\colon\mathcal{J}(\mathcal{L})\to[n],\quad j\mapsto\min\{s\mid j\leq a_{s}\}.
\end{displaymath}
Now suppose that $\mathcal{L}$ is not extremal, which means that $\bigl\lvert\mathcal{J}(\mathcal{L})\bigr\rvert>n$. In particular, $\gamma$ is not injective, and we may find two join-irreducible elements $j_{1},j_{2}\in\mathcal{J}(\mathcal{L})$ with $\gamma(j_{1})=s=\gamma(j_{2})$. Observe that neither $j_{1}$ nor $j_{2}$ equals $a_{s}$. (If $a_{s}$ is itself join irreducible, then the unique element covered by $a_{s}$ is $a_{s-1}$, which means that every other join-irreducible element is either below $a_{s-1}$ or not below $a_{s}$ at all, and can therefore not be labeled by $s$.) Likewise, neither $j_{1}$ nor $j_{2}$ equals $a_{s-1}$, because they are labeled by $s$.
Assume that $j_{1}<j_{2}$. There certainly exist indices $t_{1},t_{2}$ with $0\leq t_{1}\leq t_{2}\leq s-2$ such that $a_{t_{1}}\leq j_{1}$ and $a_{t_{2}}\leq j_{2}$. (We cannot have $a_{s-1}<j_{2}<a_{s}$, because $a_{s-1}$ and $a_{s}$ form a cover relation in $\mathcal{L}$.) We choose $t_{1}$ and $t_{2}$ maximal with these properties. If $t_{1}=t_{2}$, then the left-modularity of $a_{s-1}$ implies
\begin{displaymath}
j_{2} = (j_{1}\vee a_{s-1})\wedge j_{2} = j_{1}\vee(a_{s-1}\wedge j_{2}) = j_{1} < j_{2},
\end{displaymath}
which is a contradiction. If $t_{1}<t_{2}$, then we conclude from $j_{2}\in\mathcal{J}(\mathcal{L})$ that there exists $b<j_{2}$ with $a_{t_{2}}<b$. (This follows because $j_{2}$ is an upper bound for both $j_{1}$ and $a_{t_{2}}$.) We find that
\begin{displaymath}
j_{2} = (b\vee a_{s-1})\wedge j_{2} = b\vee(a_{s-1}\wedge j_{2}) = b < j_{2},
\end{displaymath}
which is a contradiction. It follows that $j_{1}$ and $j_{2}$ are incomparable.
By definition we have $j_{1},j_{2}\leq a_{s}$ and $j_{1},j_{2}\not\leq a_{s-1}$ and it follows that $j_{1}\vee a_{s-1}=a_{s}=j_{2}\vee a_{s-1}$. Since $\mathcal{L}$ is join semidistributive, we find $a_{s-1}\vee(j_{1}\wedge j_{2})=a_{s}$, which implies $j_{1}\wedge j_{2}\not\leq a_{s-1}$.
Let $c=j_{1}\vee j_{2}$ and assume that $c\neq a_{s}$. We have $c\leq a_{s}$ and $c\not\leq a_{s-1}$ (because $j_{1}\not\leq a_{s-1}$), and since $a_{s-1}\lessdot a_{s}$, it follows that $c$ and $a_{s-1}$ are incomparable. The left-modularity of $a_{s-1}$ then implies
\begin{displaymath}
c = (j_{1}\vee a_{s-1})\wedge c = j_{1}\vee(a_{s-1}\wedge c).
\end{displaymath}
If we set $d=a_{s-1}\wedge c$, then we see that $d$ and $j_{1}$ are incomparable, too.
If $j_{1}\wedge a_{s-1}=j_{2}\wedge a_{s-1}$, then by meet-semidistributivity we have
\begin{displaymath}
j_{1}\wedge a_{s-1} = a_{s-1}\wedge(j_{1}\vee j_{2}) = a_{s-1}\wedge c = d,
\end{displaymath}
which implies $d\leq j_{1}$. This contradicts the previous paragraph, so we conclude that $j_{1}\wedge a_{s-1}\neq j_{2}\wedge a_{s-1}$. If $j_{1}\wedge j_{2}=j_{1}\wedge a_{s-1}$, then we have $j_{1}\wedge j_{2}\leq a_{s-1}$, which we have already ruled out. We conclude that the set $\{j_{1}\wedge j_{2},j_{1}\wedge a_{s-1},j_{2}\wedge a_{s-1}\}$ consists of three distinct elements.
Now $j_{1}$ is an upper bound of $j_{1}\wedge j_{2}$ and $j_{1}\wedge a_{s-1}$, but since $j_{1}$ is join irreducible (and $j_{1}$ and $j_{2}$ are incomparable), we conclude that $(j_{1}\wedge j_{2})\vee(j_{1}\wedge a_{s-1})<j_{1}$. Since $a_{s-1}$ is left modular, we obtain
\begin{displaymath}
j_{1} = a_{s}\wedge j_{1} = \Bigl((j_{1}\wedge j_{2})\vee a_{s-1}\Bigr)\wedge j_{1} = (j_{1}\wedge j_{2})\vee(a_{s-1}\wedge j_{1})<j_{1},
\end{displaymath}
which is a contradiction. We therefore conclude that $j_{1}\vee j_{2}=a_{s}$.
Now let $A\subseteq L$ be the canonical join representation of $a_{s}$. By definition $A$ refines each of $\{j_{1},j_{2}\}$, $\{j_{1},a_{s-1}\}$, $\{j_{2},a_{s-1}\}$.
Let $b\in A$. Since $A$ refines $\{j_{1},a_{s-1}\}$ we have $b\leq j_{1}$ or $b\leq a_{s-1}$. Since $\bigvee A=a_{s}$, at least one element of $A$ must not lie below $a_{s-1}$, and we can assume without loss of generality that $b$ is such an element. Therefore, we have $b\leq j_{1}$ and $b\not\leq a_{s-1}$. Since $b$ belongs to a canonical join representation it must be join irreducible, which implies that $\gamma(b)=s$. Without loss of generality we may thus assume that $b=j_{1}$, which means that $j_{1}\in A$. Analogously, since $A$ refines $\{j_{2},a_{s-1}\}$ we may assume that $j_{2}\in A$. Since $a_{s}=j_{1}\vee j_{2}$ this means that $A=\{j_{1},j_{2}\}$. Since neither $j_{1}$ nor $j_{2}$ is below $a_{s-1}$, we see that $A$ refines neither $\{j_{1},a_{s-1}\}$ nor $\{j_{2},a_{s-1}\}$; therefore it cannot be the canonical join representation of $a_{s}$, which is a contradiction.
We conclude that $\mathcal{L}$ is extremal, and therefore trim.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:semidistributive_equivalence_lmod_xtrm}]
This follows from Theorems~\ref{thm:semidistributive_extremal_lattice_trim} and \ref{thm:semidistributive_lmod_lattice_trim}.
\end{proof}
There is a natural way to order the join- and the meet-irreducible elements of an extremal lattice $\mathcal{L}$. Let $C:a_{0}\lessdot a_{1}\lessdot\cdots\lessdot a_{n}$ be a maximal chain of $\mathcal{L}$ of maximal length. Since $\mathcal{L}$ is extremal, we have $\bigl\lvert\mathcal{J}(\mathcal{L})\bigr\rvert=n=\bigl\lvert\mathcal{M}(\mathcal{L})\bigr\rvert$. Now we can label the join-irreducible elements by $j_{1},j_{2},\ldots,j_{n}$ and the meet-irreducible elements by $m_{1},m_{2},\ldots,m_{n}$ such that
\begin{equation}\label{eq:extremal_ordering}
j_{1}\vee j_{2}\vee\cdots\vee j_{s} = a_{s} = m_{s+1}\wedge m_{s+2}\wedge\cdots\wedge m_{n}.
\end{equation}
for all $s$. (We can always order some of the irreducible elements of a lattice in such a way; the extremality of $\mathcal{L}$ guarantees that this is an ordering of \emph{all} irreducibles.)
With the help of this ordering we may follow \cite{markowsky92primes}*{Definition~2(b)} and define the \defn{Galois graph} of $\mathcal{L}$ (see also \cite{thomas19rowmotion}*{Section~2.3}). This is the directed graph $\mathcal{G}(\mathcal{L})$ with vertex set $[n]$, where $s\to t$ if and only if $s\neq t$, and $j_{s}\not\leq m_{t}$.
\begin{remark}
By construction, $\mathcal{G}(\mathcal{L})$ is acyclic, which allows for the definition of the \defn{Galois poset} of $\mathcal{L}$ (by taking the transitive and reflexive closure). The Galois graph may be used to characterize extremal lattices~\cite{markowsky92primes}*{Theorem~11}, and from this point of view the Galois poset plays the same role in Markowsky's Representation Theorem of finite extremal lattices as the poset of join-irreducibles plays in Birkhoff's Representation Theorem for finite distributive lattices~\cite{birkhoff37rings}.
\end{remark}
If $\mathcal{L}$ is an extremal, congruence-uniform lattice we may use the labeling $\lambda$ from \eqref{eq:cu_labeling} to define another ordering of the join-irreducible elements. In particular, we pick once again a maximal chain $C$ of $\mathcal{L}$ of maximum length and we order the join-irreducible elements according to the order they appear as labels under $\lambda$ along $C$.
\begin{lemma}\label{lem:cu_ordering_is_extremal_ordering}
Let $\mathcal{L}$ be an extremal, congruence-uniform lattice, and fix a maximal chain $C$ of maximum length. The ordering of $\mathcal{J}(\mathcal{L})$ and $\mathcal{M}(\mathcal{L})$ coming from \eqref{eq:extremal_ordering} agrees with the order in which the join-irreducible elements appear in $\lambda(C)$.
\end{lemma}
\begin{proof}
Let $C:a_{0}\lessdot a_{1}\lessdot\cdots\lessdot a_{n}$ be a maximal chain of maximum length, and fix the order $j_{1},j_{2},\ldots,j_{n}$ of the join-irreducible elements given by \eqref{eq:extremal_ordering}. It follows that $a_{1}=j_{1}$, and therefore $\lambda(a_{0},a_{1})=j_{1}$. Now let $t\in[n]$ and suppose that $\lambda(a_{s-1},a_{s})=j_{s}$ for all $s\leq t$. Let $\lambda(a_{t},a_{t+1})=j$. By Corollary~\ref{cor:congruence_uniform_chains_no_duplicates}, $j\in\mathcal{J}(\mathcal{L})\setminus\{j_{1},j_{2},\ldots,j_{t}\}$. By construction we know also that $a_{t+1}=a_{t}\vee j_{t+1}$, and it follows from Lemma~\ref{lem:perspective_covers} that $a_{t+1}=j\vee a_{t}$. In particular, we have $j\leq a_{t+1}$ and $j\not\leq a_{t}$ as well as $j_{t+1}\leq a_{t+1}$ and $j_{t+1}\not\leq a_{t}$. If $j\neq j_{t+1}$, then we necessarily have that $\bigl\lvert\mathcal{J}(\mathcal{L})\bigr\rvert>n$, which is a contradiction. We conclude that $j=j_{t+1}$, and the claim follows by induction.
\end{proof}
\begin{corollary}\label{cor:kappa_in_congruence_uniform_extremal}
Let $\mathcal{L}$ be an extremal, congruence-uniform lattice of length $n$, in which the join- and meet-irreducible elements are ordered as in \eqref{eq:extremal_ordering} with respect to some maximal chain of maximum length. For $s\in[n]$ we have $m_{s}=\kappa(j_{s})$.
\end{corollary}
\begin{proof}
Let $C:a_{0}\lessdot a_{1}\lessdot\cdots\lessdot a_{n}$ be the desired maximal chain of maximum length. Lemma~\ref{lem:cu_ordering_is_extremal_ordering} implies that $\lambda\bigl((j_{s})_{*},j_{s}\bigr)=\lambda(a_{s-1},a_{s})=\lambda(m_{s},m_{s}^{*})$. The claim now follows from the definition of $\kappa(j_{s})$.
\end{proof}
\begin{corollary}\label{cor:galois_edges_congruence_uniform_extremal}
Let $\mathcal{L}$ be an extremal, congruence-uniform lattice of length $n$, in which the join- and meet-irreducible elements are ordered as in \eqref{eq:extremal_ordering} with respect to some maximal chain of maximum length. For $s,t\in[n]$ we have $j_{s}\not\leq m_{t}$ if and only if $s\neq t$ and $j_{t}\leq(j_{t})_{*}\vee j_{s}$.
\end{corollary}
\begin{proof}
This follows from Corollary~\ref{cor:kappa_in_congruence_uniform_extremal} and Lemma~\ref{lem:kappa_computation}.
\end{proof}
\begin{corollary}\label{cor:some_galois_edges_congruence_uniform_extremal}
Let $\mathcal{L}$ be an extremal, congruence-uniform lattice, in which the join-irreducible elements are ordered as in \eqref{eq:extremal_ordering}. If $j_{t}\leq j_{s}$, then there is a directed edge from $s$ to $t$ in the Galois graph of $\mathcal{L}$. In particular, $\bigl(\mathcal{J}(\mathcal{L}),\leq\bigr)^{*}$ is a subposet of the Galois poset of $\mathcal{L}$.
\end{corollary}
If $\mathcal{L}$ is extremal and congruence uniform, then Corollary~\ref{cor:galois_edges_congruence_uniform_extremal} implies that we may view $\mathcal{G}(\mathcal{L})$ as a directed graph with vertex set $\mathcal{J}(\mathcal{L})$, where we have an edge from $j_{s}\to j_{t}$ if and only if $j_{t}\leq(j_{t})_{*}\vee j_{s}$. For extremal lattices that are not congruence uniform, this construction in general yields a directed graph that is not isomorphic to $\mathcal{G}(\mathcal{L})$.
\begin{figure}
\centering
\begin{subfigure}[t]{.3\textwidth}
\centering
\includegraphics[scale=1,page=2]{para_figures.pdf}
\caption{An extremal lattice that is not congruence uniform.}
\label{fig:extremal_not_congruence_uniform}
\end{subfigure}
\hspace*{.5cm}
\begin{subfigure}[t]{.25\textwidth}
\centering
\includegraphics[scale=1,page=3]{para_figures.pdf}
\caption{The Galois graph of the lattice in Figure~\ref{fig:extremal_not_congruence_uniform}.}
\label{fig:extremal_not_congruence_uniform_galois_graph}
\end{subfigure}
\hspace*{.5cm}
\begin{subfigure}[t]{.25\textwidth}
\centering
\includegraphics[scale=1,page=4]{para_figures.pdf}
\caption{Another directed graph defined by the join-irreducible elements of the lattice in Figure~\ref{fig:extremal_not_congruence_uniform}.}
\label{fig:extremal_not_congruence_uniform_other_graph}
\end{subfigure}
\caption{The Galois graph of an extremal lattice that is not congruence uniform.}
\label{fig:galois_extremal_not_congruence_uniform}
\end{figure}
Consider for instance the extremal lattice in Figure~\ref{fig:extremal_not_congruence_uniform}. This is the smallest extremal lattice that is not congruence uniform. It has a unique maximal chain of maximal length, and the corresponding order \eqref{eq:extremal_ordering} of the join- and meet-irreducible elements is indicated by the labels below and above the nodes, respectively. The corresponding Galois graph is shown in Figure~\ref{fig:extremal_not_congruence_uniform_galois_graph}. The directed graph on the vertex set $[4]$ with a directed edge $s\to t$ if and only if $s\neq t$ and $j_{t}\leq (j_{t})_{*}\vee j_{s}$ is shown in Figure~\ref{fig:extremal_not_congruence_uniform_other_graph}.
\subsection{Poset Topology}
\label{sec:poset_topology}
There is a natural way to associate a simplicial complex with a finite poset $\mathcal{P}=(P,\leq)$. The \defn{order complex} $\Delta(\mathcal{P})$ is the simplicial complex whose faces are the chains of $\mathcal{P}$. If $\mathcal{P}$ has a least or a greatest element, then $\Delta(\mathcal{P})$ is contractible. If $\mathcal{P}$ is bounded, then we denote by $\overline{\mathcal{P}}\stackrel{\mathrm{def}}{=}\bigl(P\setminus\{\hat{0},\hat{1}\},\leq\bigr)$ the corresponding \defn{proper part}.
The \defn{M{\"o}bius function} of $\mathcal{P}$ is the function $\mu_{\mathcal{P}}\colon P\times P\to\mathbb{Z}$ which is inductively defined by $\mu_{\mathcal{P}}(x,x)\stackrel{\mathrm{def}}{=} 1$ for all $x\in P$, and by
\begin{displaymath}
\mu_{\mathcal{P}}(x,y) \stackrel{\mathrm{def}}{=} -\sum_{z\in P\colon x\leq z<y}{\mu_{\mathcal{P}}(x,z)},
\end{displaymath}
for all $x,y\in P$ with $x<y$; we further set $\mu_{\mathcal{P}}(x,y)\stackrel{\mathrm{def}}{=}0$ whenever $x\not\leq y$. If $\mathcal{P}$ is bounded, then we define the \defn{M{\"o}bius number} of $\mathcal{P}$ by $\mu(\mathcal{P})\stackrel{\mathrm{def}}{=}\mu_{\mathcal{P}}(\hat{0},\hat{1})$. It follows from a result of P.~Hall that the M{\"o}bius number of $\mathcal{P}$ equals the reduced Euler characteristic of $\Delta(\overline{\mathcal{P}})$; see~\cite{stanley11enumerative_vol1}*{Proposition~3.8.5}.
Let us recall two results concerning the topology of certain kinds of lattices. The first one follows from \cite{mcnamara06poset}*{Theorem~8} and \cite{bjorner97shellable}*{Theorem~5.9}.
\begin{theorem}\label{thm:left_modular_spherical}
Let $\mathcal{L}$ be a left-modular lattice. The order complex $\Delta(\overline{\mathcal{L}})$ is homotopy equivalent to a wedge of $\bigl|\mu(\mathcal{L})\bigr|$-many $\bigl(\ell(\mathcal{L}){-}2\bigr)$-dimensional spheres.
\end{theorem}
\begin{proposition}[\cite{muehle19the}*{Proposition~2.13}]\label{prop:meet_semidistributive_mobius}
Let $\mathcal{L}$ be a meet-semidistributive lattice with $n$ atoms. If the join of all atoms of $\mathcal{L}$ is $\hat{1}$, then $\mu(\mathcal{L})=(-1)^{n}$. Otherwise, we have $\mu(\mathcal{L})=0$.
\end{proposition}
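To make the recursion concrete, here is a small brute-force check in Python (an illustrative sketch of ours, not part of the surrounding text): it computes $\mu$ over the Boolean lattice of subsets of a three-element set, a meet-semidistributive lattice with three atoms whose join is the top element, so Proposition~\ref{prop:meet_semidistributive_mobius} predicts $\mu=(-1)^{3}=-1$.

```python
from itertools import chain, combinations

ground = (0, 1, 2)
# all subsets of {0,1,2}, ordered by inclusion (the Boolean lattice B_3)
P = [frozenset(c) for c in chain.from_iterable(
    combinations(ground, k) for k in range(len(ground) + 1))]

def mu(x, y):
    """mu(x,x) = 1; mu(x,y) = -sum over x <= z < y of mu(x,z); 0 if x is not <= y."""
    if x == y:
        return 1
    if not x <= y:
        return 0
    return -sum(mu(x, z) for z in P if x <= z and z <= y and z != y)

bot, top = frozenset(), frozenset(ground)
# B_3 has n = 3 atoms whose join is the top element, so mu should be (-1)^3
print(mu(bot, top))  # -> -1
```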
\section{Noncrossing $\alpha$-Partitions}
\label{sec:parabolic_noncrossing_partitions}
\subsection{$\alpha$-Partitions}
\label{sec:parabolic_partitions}
Let $n>0$. A \defn{(set) partition} of $[n]$ is a collection $\mathbf{P}=\{P_{1},P_{2},\ldots,P_{s}\}$ of pairwise disjoint, nonempty subsets of $[n]$, called \defn{parts}, such that their union is all of $[n]$. We write $a\sim_{\mathbf{P}}b$ if $a,b\in P_{i}$ for some $i\in[s]$. If $a\sim_{\mathbf{P}}b$ for $a<b$ such that there is no $c\in[a{+}1,b{-}1]\stackrel{\mathrm{def}}{=}\{a{+}1,a{+}2,\ldots,b{-}1\}$ with $a\sim_{\mathbf{P}}c$, then we call $(a,b)$ a \defn{bump} of $\mathbf{P}$. Let $\Pi_{n}$ denote the set of all partitions of $[n]$.
For $\mathbf{P}_{1},\mathbf{P}_{2}\in\Pi_{n}$, we say that $\mathbf{P}_{1}$ \defn{refines} $\mathbf{P}_{2}$ if every part of $\mathbf{P}_{1}$ is contained in some part of $\mathbf{P}_{2}$; in symbols $\mathbf{P}_{1}\leq_{\mathrm{ref}}\mathbf{P}_{2}$. It is well known (and easy to verify) that the poset $(\Pi_{n},\leq_{\mathrm{ref}})$ is a lattice.
Now fix an integer composition $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ of $n$. We set $s_{0}\stackrel{\mathrm{def}}{=} 0$ and $s_{i}\stackrel{\mathrm{def}}{=}\alpha_{1}+\alpha_{2}+\cdots+\alpha_{i}$ for $i\in[r]$. The \defn{$i\th$ $\alpha$-region} is the set $\{s_{i-1}{+}1,s_{i-1}{+}2,\ldots,s_{i}\}$. An \defn{$\alpha$-partition} is a partition $\mathbf{P}$ of $[n]$ in which any two distinct elements $a,b$ with $a\sim_{\mathbf{P}}b$ belong to different $\alpha$-regions; equivalently, no part of $\mathbf{P}$ intersects an $\alpha$-region in more than one element. Let $\Pi_{\alpha}$ denote the set of all $\alpha$-partitions.
We will represent an $\alpha$-partition $\mathbf{P}\in\Pi_{\alpha}$ by the following combinatorial model. We draw $n$ dots labeled by $1,2,\ldots,n$ on a straight line, and color the elements of the $i\th$ $\alpha$-region by color $i$. For any bump $(a,b)$ of $\mathbf{P}$ we draw an arc connecting the dots labeled by $a$ and $b$, such that this arc leaves $a$ at the bottom, passes below all dots in the same $\alpha$-region as $a$, then proceeds above all the remaining dots between $a$ and $b$, and finally enters $b$ at the top. Figure~\ref{fig:32_partitions} shows the poset $\bigl(\Pi_{(3,2)},\leq_{\mathrm{ref}}\bigr)$.
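The definition can be tested by exhaustive enumeration. The following Python sketch (our own illustration; all helper names are ours) counts the $\alpha$-partitions for $\alpha=(3,2)$, the poset shown in Figure~\ref{fig:32_partitions}.

```python
from itertools import combinations

alpha = (3, 2)
n = sum(alpha)
# region[x] = index of the alpha-region containing x
prefix = [0]
for a in alpha:
    prefix.append(prefix[-1] + a)
region = {x: i for i in range(len(alpha))
          for x in range(prefix[i] + 1, prefix[i + 1] + 1)}

def set_partitions(elems):
    """Yield all set partitions of the list elems."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            remaining = [x for x in rest if x not in extra]
            for sub in set_partitions(remaining):
                yield [(first,) + extra] + sub

def is_alpha_partition(p):
    # no part may meet an alpha-region in more than one element
    return all(len({region[x] for x in part}) == len(part) for part in p)

all_parts = list(set_partitions(list(range(1, n + 1))))
alpha_parts = [p for p in all_parts if is_alpha_partition(p)]
print(len(all_parts), len(alpha_parts))  # -> 52 13
```

The first number is the Bell number $B_{5}=52$; the thirteen $\alpha$-partitions are exactly the partial matchings between the two $\alpha$-regions.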
\begin{proposition}\label{prop:parabolic_partition_ideal}
For all $n>0$ and every composition $\alpha$ of $n$, the poset $\bigl(\Pi_{\alpha},\leq_{\mathrm{ref}}\bigr)$ is an order ideal of $\bigl(\Pi_{n},\leq_{\mathrm{ref}}\bigr)$. In particular, $\bigl(\Pi_{\alpha},\leq_{\mathrm{ref}}\bigr)$ is a meet-semilattice.
\end{proposition}
\begin{proof}
Let $\mathbf{P}\in\Pi_{\alpha}$, and let $\mathbf{P}'\leq_{\mathrm{ref}}\mathbf{P}$. By definition, any two integers $a,b\in[n]$ with $a\sim_{\mathbf{P}}b$ belong to different $\alpha$-regions. Therefore, if $a\sim_{\mathbf{P}'}b$, then $a$ and $b$ still belong to different $\alpha$-regions, and we conclude $\mathbf{P}'\in\Pi_{\alpha}$.
Let $\mathbf{P}_{1},\mathbf{P}_{2}\in\Pi_{\alpha}$, and let $\mathbf{P}$ be the meet $\mathbf{P}_{1}\wedge\mathbf{P}_{2}$ in $\bigl(\Pi_{n},\leq_{\mathrm{ref}}\bigr)$. Since $\mathbf{P}\leq_{\mathrm{ref}}\mathbf{P}_{1}$, we conclude from the first paragraph that $\mathbf{P}\in\Pi_{\alpha}$, and it must thus be the meet in $\bigl(\Pi_{\alpha},\leq_{\mathrm{ref}}\bigr)$.
\end{proof}
\begin{figure}
\centering
\includegraphics[scale=1,page=5]{para_figures.pdf}
\caption{The poset $(\Pi_{(3,2)},\leq_{\mathrm{ref}})$. The highlighted elements are not noncrossing.}
\label{fig:32_partitions}
\end{figure}
Now let us call an $\alpha$-partition $\mathbf{P}\in\Pi_{\alpha}$ \defn{noncrossing} if it satisfies the following two conditions.
\begin{description}
\item[NC1\label{it:nc1}] If two distinct bumps $(a_{1},b_{1})$ and $(a_{2},b_{2})$ of $\mathbf{P}$ satisfy $a_{1}<a_{2}<b_{1}<b_{2}$, then either $a_{1}$ and $a_{2}$ lie in the same $\alpha$-region, or $b_{1}$ and $a_{2}$ lie in the same $\alpha$-region.
\item[NC2\label{it:nc2}] If two distinct bumps $(a_{1},b_{1})$ and $(a_{2},b_{2})$ of $\mathbf{P}$ satisfy $a_{1}<a_{2}<b_{2}<b_{1}$, then $a_{1}$ and $a_{2}$ lie in different $\alpha$-regions.
\end{description}
Let us denote the set of all noncrossing $\alpha$-partitions by $N\!C_{\alpha}$, and let us write $\mathcal{N\!C}_{\alpha}\stackrel{\mathrm{def}}{=}\bigl(N\!C_{\alpha},\leq_{\mathrm{ref}}\bigr)$. If $\alpha=(1,1,\ldots,1)$, then all $\alpha$-regions are singletons, and conditions \eqref{it:nc1} and \eqref{it:nc2} restrict to the condition that there can be no four indices $a_{1}<a_{2}<b_{1}<b_{2}$ such that $(a_{1},b_{1})$ and $(a_{2},b_{2})$ are bumps. It is straightforward to check that this is equivalent to the classical definition of \defn{noncrossing partitions} as introduced in \cite{kreweras72sur}.
Two bumps $(a,b)$ and $(c,d)$ are \defn{$\alpha$-compatible} if the $\alpha$-partition whose only bumps are $(a,b)$ and $(c,d)$ is noncrossing.
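Conditions \eqref{it:nc1} and \eqref{it:nc2} are easy to verify by brute force. The following Python sketch (illustrative, with our own helper names) counts noncrossing $\alpha$-partitions; for $\alpha=(1,1,1,1)$ it recovers the Catalan number $C_{4}=14$, in line with the classical case.

```python
from itertools import combinations

def regions(alpha):
    prefix = [0]
    for a in alpha:
        prefix.append(prefix[-1] + a)
    return {x: i for i in range(len(alpha))
            for x in range(prefix[i] + 1, prefix[i + 1] + 1)}

def set_partitions(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            remaining = [x for x in rest if x not in extra]
            for sub in set_partitions(remaining):
                yield [(first,) + extra] + sub

def bumps(p):
    out = []
    for part in p:
        srt = sorted(part)
        out.extend(zip(srt, srt[1:]))
    return out

def is_noncrossing(p, reg):
    bs = bumps(p)
    for a1, b1 in bs:
        for a2, b2 in bs:
            if a1 < a2 < b1 < b2 and reg[a1] != reg[a2] and reg[b1] != reg[a2]:
                return False   # violates NC1
            if a1 < a2 < b2 < b1 and reg[a1] == reg[a2]:
                return False   # violates NC2
    return True

def count_nc(alpha):
    reg = regions(alpha)
    n = sum(alpha)
    ok = lambda p: all(len({reg[x] for x in part}) == len(part) for part in p)
    return sum(1 for p in set_partitions(list(range(1, n + 1)))
               if ok(p) and is_noncrossing(p, reg))

print(count_nc((1, 1, 1, 1)))  # -> 14, the Catalan number C_4
print(count_nc((3, 2)))        # -> 10
```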
\begin{proposition}\label{prop:parabolic_nc_meet_semilattice}
For all $n>0$ and every composition $\alpha$ of $n$, the poset $\mathcal{N\!C}_{\alpha}$ is a meet-semilattice.
\end{proposition}
\begin{proof}
Let $\mathbf{P}_{1},\mathbf{P}_{2}\in N\!C_{\alpha}$ and consider $\mathbf{P}=\{P_{1}\cap P_{2}\mid P_{1}\in\mathbf{P}_{1},P_{2}\in\mathbf{P}_{2}\}\setminus\{\emptyset\}$.
We quickly see that $\mathbf{P}\leq_{\mathrm{ref}}\mathbf{P}_{1}$, and therefore every bump of $\mathbf{P}$ corresponds to a sequence of bumps of $\mathbf{P}_{1}$, and likewise for $\mathbf{P}_{2}$. If $\mathbf{P}\notin N\!C_{\alpha}$, then it must contain two bumps that do not satisfy Condition~\eqref{it:nc1}~or~\eqref{it:nc2}. In that case, we can find two bumps of $\mathbf{P}_{1}$ that also do not satisfy Condition~\eqref{it:nc1}~or~\eqref{it:nc2}, which contradicts $\mathbf{P}_{1}\in N\!C_{\alpha}$. We conclude that $\mathbf{P}\in N\!C_{\alpha}$.
Note that $\mathbf{P}$ is the meet of $\mathbf{P}_{1}$ and $\mathbf{P}_{2}$ in $\bigl(\Pi_{\alpha},\leq_{\mathrm{ref}}\bigr)$. Consequently, it must be the meet of $\mathbf{P}_{1}$ and $\mathbf{P}_{2}$ in $\mathcal{N\!C}_{\alpha}$, too.
\end{proof}
We now show that $\mathcal{N\!C}_{\alpha}$ is ranked. To do so, we need two easy lemmas.
\begin{lemma}\label{lem:order_ideals_ranked}
Any order ideal of a ranked poset is a ranked poset itself.
\end{lemma}
\begin{proof}
Let $\mathcal{P}=(P,\leq)$ be a ranked poset with rank function $\mathrm{rk}$. Let $A\subseteq P$ be an order ideal of $\mathcal{P}$. By definition, if $a\in A$ is minimal in $(A,\leq)$, then $a$ is also minimal in $\mathcal{P}$, and if $a\lessdot b$ in $(A,\leq)$, then $a\lessdot b$ holds in $\mathcal{P}$ as well. Thus, the poset $(A,\leq)$ inherits the rank function from $\mathcal{P}$.
\end{proof}
\begin{lemma}\label{lem:noncompatible_bumps_share_regions}
Let $\mathbf{P}\in\Pi_{\alpha}$ have two bumps $(a,b)$ and $(c,d)$ which are not $\alpha$-compatible. If there is an $\alpha$-region containing any of the sets $\{a,c\}$, $\{a,d\}$, $\{b,c\}$, or $\{b,d\}$, then no $\mathbf{Q}\in\Pi_{\alpha}$ with $\mathbf{P}\leq_{\mathrm{ref}}\mathbf{Q}$ is noncrossing.
\end{lemma}
\begin{proof}
Let $\mathbf{P},\mathbf{Q}\in\Pi_{\alpha}$ with $\mathbf{P}\leq_{\mathrm{ref}}\mathbf{Q}$. Suppose that there are two bumps $(a,b)$ and $(c,d)$ in $\mathbf{P}$ that are not $\alpha$-compatible. By construction we have $a\sim_{\mathbf{Q}}b$ and $c\sim_{\mathbf{Q}}d$. Let $Q_{1}$ be the part of $\mathbf{Q}$ that contains $a$ and $b$, and let $Q_{2}$ be the part of $\mathbf{Q}$ that contains $c$ and $d$.
If there is an $\alpha$-region that contains any of the sets $\{a,c\}$, $\{a,d\}$, $\{b,c\}$, or $\{b,d\}$, then $Q_{1}\neq Q_{2}$, and in particular there must be a bump $(a',b')$ in $Q_{1}$ and a bump $(c',d')$ in $Q_{2}$ that are not $\alpha$-compatible. Therefore, $\mathbf{Q}$ is not noncrossing.
\end{proof}
\begin{proposition}\label{prop:parabolic_noncrossing_ranked}
For all $n>0$ and every composition $\alpha$ of $n$, the poset $\mathcal{N\!C}_{\alpha}$ is ranked.
\end{proposition}
\begin{proof}
It is well known (and easy to verify) that $\bigl(\Pi_{n},\leq_{\mathrm{ref}}\bigr)$ is ranked by the function that assigns to $\mathbf{P}$ its number of bumps. By Proposition~\ref{prop:parabolic_partition_ideal} and Lemma~\ref{lem:order_ideals_ranked} we conclude that $\bigl(\Pi_{\alpha},\leq_{\mathrm{ref}}\bigr)$ is ranked by the same rank function.
We want to show that the same is true for $\mathcal{N\!C}_{\alpha}$. Clearly, $\mathcal{N\!C}_{\alpha}$ has the trivial partition into singleton parts as a least element, and this partition has zero bumps.
Now take two elements $\mathbf{P},\mathbf{P'}\in N\!C_{\alpha}$, which satisfy $\mathbf{P}\lessdot_{\mathrm{ref}}\mathbf{P'}$. Let $k$ denote the number of bumps of $\mathbf{P}$, and let $k'$ denote the number of bumps of $\mathbf{P'}$. We certainly have $k<k'$, since these partitions are distinct as elements of $\Pi_{\alpha}$. Assume that $k'>k+1$. This means that there exists $\mathbf{Q}\in\Pi_{\alpha}\setminus N\!C_{\alpha}$ with $\mathbf{P}\lessdot_{\mathrm{ref}}\mathbf{Q}<_{\mathrm{ref}}\mathbf{P'}$. In view of the first paragraph, we conclude that $\mathbf{Q}$ has exactly $k+1$ bumps. There are essentially two options.
\medskip
(i) There exists a bump $(a,b)$ in $\mathbf{P}$, which is subdivided into two bumps $(a,c)$ and $(c,b)$ of $\mathbf{Q}$. Since all other bumps are preserved, and $\mathbf{Q}\notin N\!C_{\alpha}$, there must be a common bump $(i,j)$ of $\mathbf{P}$ and $\mathbf{Q}$ which is not $\alpha$-compatible with $(a,c)$ or $(c,b)$. First assume that $(a,c)$ and $(i,j)$ are not $\alpha$-compatible. (The case that $(c,b)$ and $(i,j)$ are not $\alpha$-compatible can be treated in a similar fashion.) This may only happen in one of the following situations.
(ia) Let $a<i<c<j$. By \eqref{it:nc1} we conclude that $a$, $i$, and $c$ lie in different $\alpha$-regions. If $b<j$, then $(a,b)$ and $(i,j)$ are also not $\alpha$-compatible, which contradicts $\mathbf{P}\in N\!C_{\alpha}$. Therefore, we must have $j<b$. We observe that $(i,j)$ is $\alpha$-compatible with neither $(a,c)$ nor $(c,b)$. By definition we have $a\sim_{\mathbf{P'}}c\sim_{\mathbf{P'}}b$ and $i\sim_{\mathbf{P'}}j$. If $j$ and $b$ lie in the same $\alpha$-region, then Lemma~\ref{lem:noncompatible_bumps_share_regions} implies $\mathbf{P'}\notin N\!C_{\alpha}$; a contradiction. If $j$ and $b$ lie in different $\alpha$-regions, then there exists a part of $\mathbf{P'}$ that contains the set $\{a,i,c,j,b\}$. Consider the partition $\mathbf{P''}\in\Pi_{\alpha}$ which has all the bumps of $\mathbf{P}$, except that $(i,j)$ is split into $(i,c)$ and $(c,j)$. Since $\mathbf{P}\in N\!C_{\alpha}$, it follows that $\mathbf{P''}\in N\!C_{\alpha}$, and since $a\not\sim_{\mathbf{P''}}i$, we conclude that $\mathbf{P}\lessdot_{\mathrm{ref}}\mathbf{P''}<_{\mathrm{ref}}\mathbf{P'}$. This contradicts the assumption that $\mathbf{P}$ and $\mathbf{P'}$ form a cover relation in $\mathcal{N\!C}_{\alpha}$.
(ib) Let $i<a<j<c$. By \eqref{it:nc1} we conclude that $i$, $a$, and $j$ lie in different $\alpha$-regions. Since $c<b$, we conclude that $(a,b)$ and $(i,j)$ are not $\alpha$-compatible, which contradicts $\mathbf{P}\in N\!C_{\alpha}$.
(ic) Let $a<i<j<c$. By \eqref{it:nc2} we conclude that $a$ and $i$ lie in the same $\alpha$-region. Lemma~\ref{lem:noncompatible_bumps_share_regions} implies that $\mathbf{P'}\notin N\!C_{\alpha}$; a contradiction.
(id) Let $i<a<c<j$. By \eqref{it:nc2} we conclude that $a$ and $i$ lie in the same $\alpha$-region. Lemma~\ref{lem:noncompatible_bumps_share_regions} implies that $\mathbf{P'}\notin N\!C_{\alpha}$; a contradiction.
\medskip
(ii) There exist two vertices $a,b$ with $a\not\sim_{\mathbf{P}}b$ such that $(a,b)$ is a bump of $\mathbf{Q}$, which is not $\alpha$-compatible with a common bump $(i,j)$ of both $\mathbf{P}$ and $\mathbf{Q}$.
(iia) Let $a<i<j<b$. By \eqref{it:nc2} we conclude that $a$ and $i$ lie in the same $\alpha$-region. Lemma~\ref{lem:noncompatible_bumps_share_regions} implies that $\mathbf{P'}\notin N\!C_{\alpha}$; a contradiction.
(iib) Let $a<i<b<j$. By \eqref{it:nc1} we conclude that $a$, $i$, and $b$ lie in different $\alpha$-regions. In view of Lemma~\ref{lem:noncompatible_bumps_share_regions} we only need to consider the case where $b$ and $j$ lie in different $\alpha$-regions. By definition we have $a\sim_{\mathbf{P'}}b$ and $i\sim_{\mathbf{P'}}j$. Since $\mathbf{P'}\in N\!C_{\alpha}$ we conclude that there exists a part of $\mathbf{P'}$ that contains the set $\{a,i,b,j\}$. Consider the partition $\mathbf{P''}\in\Pi_{\alpha}$ which has all the bumps of $\mathbf{P}$, except that $(i,j)$ is split into $(i,b)$ and $(b,j)$. Since $\mathbf{P}\in N\!C_{\alpha}$, it follows that $\mathbf{P''}\in N\!C_{\alpha}$, and since $a\not\sim_{\mathbf{P''}}i$, we conclude that $\mathbf{P}\lessdot_{\mathrm{ref}}\mathbf{P''}<_{\mathrm{ref}}\mathbf{P'}$. This contradicts the assumption that $\mathbf{P}$ and $\mathbf{P'}$ form a cover relation in $\mathcal{N\!C}_{\alpha}$.
(iic) Let $i<a<b<j$. This is analogous to (iia).
(iid) Let $i<a<j<b$. This is analogous to (iib).
\medskip
We have thus shown that $k'=k+1$, and the proof is complete.
\end{proof}
\begin{proposition}\label{prop:parabolic_nc_top_element}
Let $n>0$ and $\alpha$ be a composition of $n$. The poset $\mathcal{N\!C}_{\alpha}$ has a greatest element if and only if $\alpha=(1,1,\ldots,1)$ or $\alpha=(n)$.
\end{proposition}
\begin{proof}
If $\alpha=(n)$, then $N\!C_{\alpha}$ consists of a single element, namely the discrete partition, which is at the same time the least and the greatest element of $\mathcal{N\!C}_{\alpha}$. If $\alpha=(1,1,\ldots,1)$, then the full partition consisting of the single part $[n]$ belongs to $N\!C_{\alpha}$, and is thus the greatest element of $\mathcal{N\!C}_{\alpha}$.
Now, conversely, suppose that $\alpha\neq(1,1,\ldots,1)$ and $\alpha\neq(n)$. We can in particular find two $\alpha$-regions $B$ and $B'$, of which at least one contains more than one element. Let $B=\{a_{1},a_{2},\ldots,a_{s}\}$ and $B'=\{b_{1},b_{2},\ldots,b_{t}\}$ and suppose without loss of generality that $s>1$ and $s\geq t$. Consider on the one hand the noncrossing $\alpha$-partition $\mathbf{P}$ whose only bumps are $(a_{1},b_{1}),(a_{2},b_{2}),\ldots,(a_{t},b_{t})$. If $\mathbf{Q}$ is a maximal element of $\mathcal{N\!C}_{\alpha}$ with $\mathbf{P}\leq_{\mathrm{ref}}\mathbf{Q}$, then we have $a_{i}\sim_{\mathbf{Q}}b_{i}$ for all $i\in[t]$, and we conclude that $a_{s}\not\sim_{\mathbf{Q}}b_{1}$ (otherwise $a_{1}\sim_{\mathbf{Q}}b_{1}\sim_{\mathbf{Q}}a_{s}$ would place the two elements $a_{1}$ and $a_{s}$ of the $\alpha$-region $B$ in a common part, which is impossible for an $\alpha$-partition). On the other hand, consider the noncrossing $\alpha$-partition $\mathbf{P'}$ whose only bump is $(a_{s},b_{1})$. If $\mathbf{Q}'$ is a maximal element of $\mathcal{N\!C}_{\alpha}$ with $\mathbf{P}'\leq_{\mathrm{ref}}\mathbf{Q}'$, then we have $a_{s}\sim_{\mathbf{Q}'}b_{1}$, and by construction $a_{i}\not\sim_{\mathbf{Q}'}b_{i}$ for $i\in[t]$. We conclude that $\mathbf{Q}\neq\mathbf{Q}'$, which implies that $\mathcal{N\!C}_{\alpha}$ has more than one maximal element.
\end{proof}
We may now conclude the proof of Theorem~\ref{thm:parabolic_noncrossing_partition_lattice}.
\begin{proof}[Proof of Theorem~\ref{thm:parabolic_noncrossing_partition_lattice}]
Proposition~\ref{prop:parabolic_nc_meet_semilattice} states that $\mathcal{N\!C}_{\alpha}$ is always a meet-semi\-lattice, and Proposition~\ref{prop:parabolic_nc_top_element} states that it has a top element if and only if $\alpha=(1,1,\ldots,1)$ or $\alpha=(n)$. The standard lattice-theoretic argument that every finite meet-semilattice with a greatest element is a lattice (see for instance \cite{reading16lattice}*{Lemma~9-2.4} for the dual statement) concludes the proof. The fact that $\mathcal{N\!C}_{\alpha}$ is ranked is Proposition~\ref{prop:parabolic_noncrossing_ranked}.
\end{proof}
\begin{remark}\label{rem:parabolic_nc_sperner}
It was shown in \cite{simion91on}*{Section~2} that the lattice $\mathcal{N\!C}_{(1,1,\ldots,1)}$ admits a symmetric chain decomposition, which in particular implies that it has the strong Sperner property; \text{i.e.}\; for any $k$, no union of $k$ antichains contains more elements than the sum of the $k$ largest rank numbers. For arbitrary $\alpha$, however, this property fails. Consider for instance the poset $\mathcal{N\!C}_{(3,3)}$ shown in Figure~\ref{fig:33_partitions}. Its largest rank number is nine, while it has an antichain of size eleven. (This antichain consists of all elements of rank two, together with the two maximal elements of rank $1$.)
\end{remark}
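The numbers quoted in Remark~\ref{rem:parabolic_nc_sperner} can be verified by brute force; the following Python sketch (our own illustration, with ad-hoc helper names) computes the rank numbers of $\mathcal{N\!C}_{(3,3)}$ and exhibits the eleven-element antichain.

```python
from itertools import combinations
from collections import Counter

alpha = (3, 3)
n = sum(alpha)
reg = {x: 0 if x <= 3 else 1 for x in range(1, n + 1)}   # the two alpha-regions

def set_partitions(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            remaining = [x for x in rest if x not in extra]
            for sub in set_partitions(remaining):
                yield [(first,) + extra] + sub

def bumps(p):
    out = []
    for part in p:
        srt = sorted(part)
        out.extend(zip(srt, srt[1:]))
    return out

def is_nc(p):
    bs = bumps(p)
    for a1, b1 in bs:
        for a2, b2 in bs:
            if a1 < a2 < b1 < b2 and reg[a1] != reg[a2] and reg[b1] != reg[a2]:
                return False                      # NC1 fails
            if a1 < a2 < b2 < b1 and reg[a1] == reg[a2]:
                return False                      # NC2 fails
    return True

is_alpha = lambda p: all(len({reg[x] for x in part}) == len(part) for part in p)

NC = [frozenset(frozenset(part) for part in p)
      for p in set_partitions(list(range(1, n + 1)))
      if is_alpha(p) and is_nc(p)]

refines = lambda p, q: all(any(part <= big for big in q) for part in p)
rank = lambda p: n - len(p)                       # rank = number of bumps

print(dict(Counter(rank(p) for p in NC)))         # rank numbers 1, 9, 9, 1
maximal = [p for p in NC if not any(p != q and refines(p, q) for q in NC)]
antichain = [p for p in NC if rank(p) == 2] + [p for p in maximal if rank(p) == 1]
print(len(antichain))                             # -> 11
```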
\begin{figure}
\centering
\includegraphics[scale=.7,page=6]{para_figures.pdf}
\caption{The non-Sperner poset $\mathcal{N\!C}_{(3,3)}$.}
\label{fig:33_partitions}
\end{figure}
\section{$\alpha$-Tamari Lattices}
\label{sec:parabolic_tamari_lattices}
\subsection{The Symmetric Group}
\label{sec:symmetric_group}
The \defn{symmetric group} $\mathfrak{S}_{n}$ is the group of all permutations of $[n]$. For $w\in\mathfrak{S}_{n}$, let us write $w_{i}$ instead of $w(i)$ for $i\in[n]$; and let us call the string $w_{1}w_{2}\cdots w_{n}$ the \defn{one-line notation} of $w$. The \defn{inversion set} of $w\in\mathfrak{S}_{n}$ is
\begin{displaymath}
\mathrm{Inv}(w) \stackrel{\mathrm{def}}{=} \bigl\{(i,j)\mid 1\leq i<j\leq n, w_{i}>w_{j}\bigr\}.
\end{displaymath}
A \defn{descent} of $w$ is an inversion $(i,j)$ such that $w_{i}=w_{j}+1$. We denote by $\mathrm{Des}(w)$ the set of all descents of $w$.
Inversion sets give rise to the \defn{(left) weak order} on $\mathfrak{S}_{n}$. This order is defined by $u\leq_{L}v$ if and only if $\mathrm{Inv}(u)\subseteq\mathrm{Inv}(v)$. Let us write $\mathrm{Weak}(\mathfrak{S}_{n})\stackrel{\mathrm{def}}{=}(\mathfrak{S}_{n},\leq_{L})$. The cover relations in $\mathrm{Weak}(\mathfrak{S}_{n})$ can be described nicely in terms of descents. More precisely, we have $u\lessdot_{L}v$ in $\mathrm{Weak}(\mathfrak{S}_{n})$ if and only if $\mathrm{Inv}(v)\setminus\mathrm{Inv}(u)=\bigl\{(i,j)\bigr\}$ for some descent $(i,j)$ of $v$.
According to \cite{bjorner05combinatorics}*{Theorem~3.2.1}, the poset $\mathrm{Weak}(\mathfrak{S}_{n})$ is a lattice; thus there exists a greatest element $w_{o}$, whose one-line notation is $n(n{-}1)\cdots 1$. Let $e$ denote the identity permutation.
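The definitions of inversions, descents, and the weak order translate directly into code; the following Python sketch (ours, purely for illustration) checks them on $\mathfrak{S}_{3}$.

```python
from itertools import permutations

def inv(w):
    """Inversion set of w, given in one-line notation as a tuple (1-based positions)."""
    n = len(w)
    return frozenset((i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
                     if w[i - 1] > w[j - 1])

def des(w):
    """Descents: inversions (i, j) with w_i = w_j + 1."""
    return frozenset((i, j) for (i, j) in inv(w) if w[i - 1] == w[j - 1] + 1)

# the (left) weak order: u <=_L v iff Inv(u) is contained in Inv(v)
leq_L = lambda u, v: inv(u) <= inv(v)

w_o = (3, 2, 1)   # the longest element of S_3
print(sorted(inv(w_o)))   # -> [(1, 2), (1, 3), (2, 3)]
print(sorted(des(w_o)))   # -> [(1, 2), (2, 3)]
print(all(leq_L(w, w_o) for w in permutations((1, 2, 3))))  # -> True
```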
\subsection{Parabolic Quotients}
\label{sec:parabolic_quotients}
Now let $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ be a composition of $n$, and define once more $s_{0}=0$ and $s_{i}=\alpha_{1}+\alpha_{2}+\cdots+\alpha_{i}$ for $i\in[r]$. The group $\mathfrak{S}_{\alpha_{1}}\times\mathfrak{S}_{\alpha_{2}}\times\cdots\times\mathfrak{S}_{\alpha_{r}}$ is the parabolic subgroup of $\mathfrak{S}_{n}$ with respect to $\alpha$. The \defn{parabolic quotient} of $\mathfrak{S}_{n}$ with respect to $\alpha$ is the set $\mathfrak{S}_{\alpha}$ of minimal-length representatives of the right cosets of $\mathfrak{S}_{n}$ by $\mathfrak{S}_{\alpha_{1}}\times\mathfrak{S}_{\alpha_{2}}\times\cdots\times\mathfrak{S}_{\alpha_{r}}$. In other words, we have
\begin{displaymath}
\mathfrak{S}_{\alpha} \stackrel{\mathrm{def}}{=} \bigl\{w\in\mathfrak{S}_{n}\mid w_{j}<w_{j+1}\;\text{for all}\;j\in[n{-}1]\setminus\{s_{1},s_{2},\ldots,s_{r-1}\}\bigr\}.
\end{displaymath}
Using the terminology from above, we can equivalently say that $w\in\mathfrak{S}_{\alpha}$ if and only if the entries of the one-line notation of $w$ corresponding to each $\alpha$-region form an increasing sequence.
Since $\mathfrak{S}_{\alpha}$ is itself a set of permutations, we may consider the poset $\mathrm{Weak}(\mathfrak{S}_{\alpha})$. In view of \cite{bjorner88generalized}*{Theorem~4.1}, this poset is isomorphic to an interval in $\mathrm{Weak}(\mathfrak{S}_{n})$. As a consequence, there exists a unique ``longest'' permutation in $\mathfrak{S}_{\alpha}$, \text{i.e.}\; there is a unique element which has the maximum number of inversions among all elements of $\mathfrak{S}_{\alpha}$. We denote this element by $w_{o;\alpha}$, and its one-line notation is
\begin{displaymath}
\underbrace{n{-}s_{1}{+}1,n{-}s_{1}{+}2,\ldots,n}_{\alpha_{1}} \mid \underbrace{n{-}s_{2}{+}1,n{-}s_{2}{+}2,\ldots,n{-}s_{1}}_{\alpha_{2}} \mid \ldots \mid \underbrace{1,2,\ldots,n-s_{r-1}}_{\alpha_{r}}.
\end{displaymath}
The vertical bars only serve as optical dividers to highlight the different $\alpha$-regions. In concrete examples, we highlight the different $\alpha$-regions by different background colors. Figure~\ref{fig:weak_212} shows the poset $\mathrm{Weak}\bigl(\mathfrak{S}_{(2,1,2)}\bigr)$.
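The description of $\mathfrak{S}_{\alpha}$ and of $w_{o;\alpha}$ can be checked by enumeration; the Python sketch below (illustrative, with our own names) does so for $\alpha=(2,1,2)$, where $|\mathfrak{S}_{\alpha}|=5!/(2!\,1!\,2!)=30$.

```python
from itertools import permutations

alpha = (2, 1, 2)
n = sum(alpha)
s = []
total = 0
for a in alpha[:-1]:
    total += a
    s.append(total)          # s = [2, 3]

def in_quotient(w):
    # w_j < w_{j+1} for every position j in [n-1] outside {s_1, ..., s_{r-1}}
    return all(w[j - 1] < w[j] for j in range(1, n) if j not in s)

def num_inv(w):
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

quotient = [w for w in permutations(range(1, n + 1)) if in_quotient(w)]
print(len(quotient))  # -> 30, i.e. 5!/(2! * 1! * 2!)
w_max = max(quotient, key=num_inv)
print(w_max)          # -> (4, 5, 3, 1, 2), the element w_{o;alpha}
```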
\subsection{Parabolic Tamari Lattices}
\label{sec:parabolic_tamari}
We say that $w\in\mathfrak{S}_{\alpha}$ contains an \defn{$(\alpha,231)$-pattern} if there exist three indices $i<j<k$ in pairwise different $\alpha$-regions such that $w_{k}<w_{i}<w_{j}$ and $w_{i}=w_{k}+1$. Any permutation in $\mathfrak{S}_{\alpha}$ that does not contain an $(\alpha,231)$-pattern is \defn{$(\alpha,231)$-avoiding}. Let us denote the set of all $(\alpha,231)$-avoiding permutations by $\mathfrak{S}_{\alpha}(231)$. It follows from \cite{muehle19tamari}*{Lemma~3.8} that for every $w\in\mathfrak{S}_{\alpha}$ there exists a greatest $(\alpha,231)$-avoiding permutation $\pi_{\downarrow}(w)\in\mathfrak{S}_{\alpha}$ with $\pi_{\downarrow}(w)\leq_{L}w$.
We define the \defn{parabolic Tamari lattice} $\mathcal{T}_{\alpha}$ to be the poset $\mathrm{Weak}\bigl(\mathfrak{S}_{\alpha}(231)\bigr)$. In fact the map $\pi_{\downarrow}\colon\mathfrak{S}_{\alpha}\to\mathfrak{S}_{\alpha}(231)$ is a surjective lattice map from $\mathrm{Weak}(\mathfrak{S}_{\alpha})$ to $\mathcal{T}_{\alpha}$, which yields the following result.
\begin{theorem}[\cite{muehle19tamari}*{Theorem~1.1}]\label{thm:parabolic_tamari_lattice}
For all $n>0$ and every composition $\alpha$ of $n$, the poset $\mathcal{T}_{\alpha}$ is a quotient lattice of $\mathrm{Weak}(\mathfrak{S}_{\alpha})$.
\end{theorem}
\begin{figure}
\centering
\includegraphics[scale=1,page=7]{para_figures.pdf}
\caption{The weak order on $\mathfrak{S}_{(2,1,2)}$. The highlighted permutations are not $\bigl((2,1,2),231\bigr)$-avoiding.}
\label{fig:weak_212}
\end{figure}
The next result establishes that the sets $\mathfrak{S}_{\alpha}(231)$ and $N\!C_{\alpha}$ are in bijection.
\begin{theorem}[\cite{muehle19tamari}*{Theorem~4.2}]\label{thm:bijection_tamari_noncrossings}
For all $n>0$ and every composition $\alpha$ of $n$, there exists an explicit bijection $\Phi$ from $\mathfrak{S}_{\alpha}(231)$ to $N\!C_{\alpha}$. This bijection sends descents of $w\in\mathfrak{S}_{\alpha}(231)$ to bumps of $\Phi(w)\in N\!C_{\alpha}$.
\end{theorem}
Figure~\ref{fig:parabolic_tamari_212} shows the parabolic Tamari lattice $\mathcal{T}_{(2,1,2)}$, where each element is labeled by its corresponding noncrossing $\alpha$-partition as well.
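Theorem~\ref{thm:bijection_tamari_noncrossings} implies in particular that $\mathfrak{S}_{\alpha}(231)$ and $N\!C_{\alpha}$ are equinumerous, with matching distributions of descents and bumps. The following Python sketch (our own brute-force check, with ad-hoc helper names) confirms this for $\alpha=(2,1,2)$.

```python
from itertools import combinations, permutations
from collections import Counter

alpha = (2, 1, 2)
n = sum(alpha)
prefix = [0]
for a in alpha:
    prefix.append(prefix[-1] + a)
reg = {x: i for i in range(len(alpha))
       for x in range(prefix[i] + 1, prefix[i + 1] + 1)}
s = set(prefix[1:-1])                      # {2, 3}

def in_quotient(w):
    return all(w[j - 1] < w[j] for j in range(1, n) if j not in s)

def avoids(w):
    # no (alpha,231)-pattern: positions i < j < k in pairwise different
    # alpha-regions with w_i = w_k + 1 and w_i < w_j
    for i, j, k in combinations(range(1, n + 1), 3):
        if len({reg[i], reg[j], reg[k]}) == 3 \
                and w[i - 1] == w[k - 1] + 1 and w[i - 1] < w[j - 1]:
            return False
    return True

def descents(w):
    return sum(1 for i in range(1, n) for j in range(i + 1, n + 1)
               if w[i - 1] == w[j - 1] + 1)

def set_partitions(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            remaining = [x for x in rest if x not in extra]
            for sub in set_partitions(remaining):
                yield [(first,) + extra] + sub

def bumps(p):
    out = []
    for part in p:
        srt = sorted(part)
        out.extend(zip(srt, srt[1:]))
    return out

def is_nc(p):
    bs = bumps(p)
    for a1, b1 in bs:
        for a2, b2 in bs:
            if a1 < a2 < b1 < b2 and reg[a1] != reg[a2] and reg[b1] != reg[a2]:
                return False
            if a1 < a2 < b2 < b1 and reg[a1] == reg[a2]:
                return False
    return True

is_alpha = lambda p: all(len({reg[x] for x in part}) == len(part) for part in p)

avoiders = [w for w in permutations(range(1, n + 1))
            if in_quotient(w) and avoids(w)]
nc = [p for p in set_partitions(list(range(1, n + 1)))
      if is_alpha(p) and is_nc(p)]
print(len(avoiders) == len(nc))  # -> True
print(Counter(descents(w) for w in avoiders) ==
      Counter(len(bumps(p)) for p in nc))  # -> True
```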
\begin{figure}
\centering
\includegraphics[scale=1,page=8]{para_figures.pdf}
\caption{The parabolic Tamari lattice $\mathcal{T}_{(2,1,2)}$. The maximal chain constructed in the proof of Proposition~\ref{prop:parabolic_tamari_trim} is highlighted. The elements are additionally labeled by the corresponding noncrossing $(2,1,2)$-partitions.}
\label{fig:parabolic_tamari_212}
\end{figure}
Let us briefly describe the map $\Phi^{-1}$. Let $\mathbf{P}\in N\!C_{\alpha}$, and let $\bar{P}$ be the unique part of $\mathbf{P}$ containing $1$; suppose that $\bar{P}=\{i_{1},i_{2},\ldots,i_{s}\}$ with $i_{1}=1$. We want to create a permutation $w=\Phi^{-1}(\mathbf{P})\in\mathfrak{S}_{\alpha}(231)$ with $w_{i_{1}}=w_{i_{2}}+1=w_{i_{3}}+2=\cdots=w_{i_{s}}+s-1$, where $w_{i_{1}}$ is as small as possible. To determine that value, we order the parts of $\mathbf{P}$ according to the ``starts-below'' relation. A part $P$ \defn{starts directly below} another part $P'$ if the smallest element of $P$ lies between the two endpoints of some bump of $P'$. Moreover, $P$ \defn{starts below} $P'$ if $P$ starts directly below a part $P''$, and either $P''=P'$ or $P''$ starts below $P'$. Then, $w_{i_{1}}$ equals the number of elements of $\bar{P}$ plus the number of elements of all parts starting below $\bar{P}$. We determine the missing values of $w$ inductively, by considering $\mathbf{P}\setminus\bar{P}$ as an element of $N\!C_{\alpha'}$ for some appropriate composition $\alpha'$ of some $n'<n$.
\begin{corollary}\label{cor:explicit_irreducibles}
Let $n>0$ and let $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ be a composition of $n$. Let $\mathbf{P}_{a,b}\in N\!C_{\alpha}$ be the noncrossing $\alpha$-partition with the unique bump $(a,b)$, and suppose that $a$ belongs to the $j\th$ $\alpha$-region. The $(\alpha,231)$-avoiding permutation $w_{a,b}\stackrel{\mathrm{def}}{=}\Phi^{-1}(\mathbf{P}_{a,b})$ is given by
\begin{displaymath}
w_{a,b}(i) =
\begin{cases}
i, & \text{if}\;i<a\;\text{or}\;i>b,\\
a+b-s_{j}+k, & \text{if}\;i=a+k\;\text{for}\;0\leq k\leq s_{j}-a,\\
a+k-1, & \text{if}\;i=s_{j}+k\;\text{for}\;1\leq k\leq b-s_{j}.
\end{cases}
\end{displaymath}
\end{corollary}
\begin{lemma}\label{lem:descents_lower_covers}
For $w\in\mathfrak{S}_{\alpha}(231)$, the number of descents of $w$ equals the number of elements of $\mathcal{T}_{\alpha}$ covered by $w$.
\end{lemma}
\begin{proof}
Let $w\in\mathfrak{S}_{\alpha}(231)$. If $u$ is covered by $w$, then---by definition---we must have $\mathrm{Inv}(u)\subsetneq\mathrm{Inv}(w)$. View the assignment $w\mapsto\pi_{\downarrow}(w)$, mentioned in the first paragraph of this section, as a map $\pi_{\downarrow}\colon\mathfrak{S}_{\alpha}\to\mathfrak{S}_{\alpha}(231)$. It then follows from \cite{muehle19tamari}*{Lemma~3.8} that $\pi_{\downarrow}$ is an order-preserving surjective lattice map, that the preimages of $\pi_{\downarrow}$ are intervals in $\mathrm{Weak}(\mathfrak{S}_{\alpha})$, and that the $(\alpha,231)$-avoiding permutations correspond to the minimal elements of these intervals.
Let $(i,j)\in\mathrm{Inv}(w)$, and consider the permutation $w'\in\mathfrak{S}_{\alpha}$ obtained from $w$ by swapping the entries in positions $i$ and $j$.
\begin{itemize}
\item If $(i,j)\in\mathrm{Des}(w)$, then $w'$ is the greatest element below $w$ in weak order which does not have $(i,j)$ as an inversion. Therefore, $\pi_{\downarrow}(w')$ is covered by $w$ in $\mathcal{T}_{\alpha}$.
\item If $(i,j)\notin\mathrm{Des}(w)$, then there exist $k_{1}$ and $k_{2}$ such that $w_{k_{1}}=w_{i}-1$ and $w_{k_{2}}=w_{j}+1$.
If $k_{2}<i$, then $(k_{2},i,j)$ forms an $(\alpha,231)$-pattern, which is impossible by assumption. If $j<k_{2}$, then $(j,k_{2})\notin\mathrm{Inv}(w)$, but $(j,k_{2})\in\mathrm{Inv}(w')$, and therefore $w'\not\leq_{L}w$. So suppose that $i<k_{2}<j$. If $k_{1}<i$, then $(k_{1},i)\notin\mathrm{Inv}(w)$, but $(k_{1},i)\in\mathrm{Inv}(w')$, which implies $w'\not\leq_{L}w$. If $k_{1}>j$, then $(j,k_{1})\notin\mathrm{Inv}(w)$, but $(j,k_{1})\in\mathrm{Inv}(w')$, which implies $w'\not\leq_{L}w$. If $i<k_{1}<j$, then $(i,k_{1}),(k_{2},j)\in\mathrm{Des}(w)$, and we may consider the permutations $w_{1}$ and $w_{2}$, which are obtained from $w$ by swapping the entries in positions $i$ and $k_{1}$, respectively $k_{2}$ and $j$. We have already established that $\pi_{\downarrow}(w_{1})$ and $\pi_{\downarrow}(w_{2})$ each account for an element covered by $w$. Since $w\in\mathfrak{S}_{\alpha}(231)$, we conclude that $\pi_{\downarrow}(w)=w$, and therefore $\pi_{\downarrow}(w)\neq\pi_{\downarrow}(w_{1})$ and $\pi_{\downarrow}(w)\neq\pi_{\downarrow}(w_{2})$. Since $\pi_{\downarrow}$ is a lattice map, we conclude $\pi_{\downarrow}(w_{1})\neq\pi_{\downarrow}(w_{2})$. Hence, $\pi_{\downarrow}(w')$ is not covered by $w$ in $\mathcal{T}_{\alpha}$.
\end{itemize}
We have just established a bijection from the set of descents of $w$ to the set of elements of $\mathfrak{S}_{\alpha}(231)$ covered by $w$.
\end{proof}
\begin{corollary}\label{cor:irreducibles_inversions}
The elements $w_{a,b}$ defined in Corollary~\ref{cor:explicit_irreducibles} are precisely the join-irreducible elements of $\mathcal{T}_{\alpha}$, and their inversion sets are given by
\begin{displaymath}
\mathrm{Inv}(w_{a,b}) = \bigl\{(k,l)\mid a\leq k\leq s_{j},s_{j}+1\leq l\leq b\bigr\}.
\end{displaymath}
\end{corollary}
\begin{example}
Let $\alpha=(3,2,1,2)$. The permutation $w_{2,6}$ is given by the one-line notation $1\;5\;6\mid 2\;3\mid 4\mid 7\;8$.
\end{example}
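The case formula of Corollary~\ref{cor:explicit_irreducibles} and the inversion sets of Corollary~\ref{cor:irreducibles_inversions} can be checked mechanically; the Python sketch below (our own illustration) reproduces the example above.

```python
def w_ab(alpha, a, b):
    """Illustrative implementation of the case formula for w_{a,b}."""
    n = sum(alpha)
    s = [0]
    for part in alpha:
        s.append(s[-1] + part)
    j = next(i for i in range(1, len(s)) if s[i - 1] < a <= s[i])  # region of a
    sj = s[j]
    w = list(range(1, n + 1))          # w(i) = i for i < a or i > b
    for k in range(sj - a + 1):        # positions a, ..., s_j
        w[a + k - 1] = a + b - sj + k
    for k in range(1, b - sj + 1):     # positions s_j + 1, ..., b
        w[sj + k - 1] = a + k - 1
    return w

w26 = w_ab((3, 2, 1, 2), 2, 6)
print(w26)  # -> [1, 5, 6, 2, 3, 4, 7, 8]

inv = {(k, l) for k in range(1, 9) for l in range(k + 1, 9)
       if w26[k - 1] > w26[l - 1]}
# the rectangle {2,...,s_j} x {s_j+1,...,b} with s_j = 3
print(inv == {(k, l) for k in range(2, 4) for l in range(4, 7)})  # -> True
```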
\begin{corollary}\label{cor:comparable_irreducibles}
Let $w_{a,b},w_{a',b'}\in\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)$. We have $w_{a,b}\leq_{L}w_{a',b'}$ if and only if $a$ and $a'$ belong to the same $\alpha$-region and $a'\leq a<b\leq b'$.
\end{corollary}
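Corollary~\ref{cor:comparable_irreducibles} can likewise be verified exhaustively for a small case; the Python sketch below (illustrative, with our own names) checks for $\alpha=(2,1,2)$ that containment of inversion sets agrees with the stated criterion on all pairs of join-irreducibles.

```python
alpha = (2, 1, 2)
n = sum(alpha)
s = [0]
for part in alpha:
    s.append(s[-1] + part)
reg = {x: i for i in range(len(alpha)) for x in range(s[i] + 1, s[i + 1] + 1)}

def w_ab(a, b):
    """The join-irreducible w_{a,b}, built from the explicit case formula."""
    j = next(i for i in range(1, len(s)) if s[i - 1] < a <= s[i])
    sj = s[j]
    w = list(range(1, n + 1))
    for k in range(sj - a + 1):
        w[a + k - 1] = a + b - sj + k
    for k in range(1, b - sj + 1):
        w[sj + k - 1] = a + k - 1
    return w

def inv(w):
    return {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if w[i - 1] > w[j - 1]}

irreducibles = [(a, b) for a in range(1, n + 1) for b in range(a + 1, n + 1)
                if reg[a] != reg[b]]
agree = all(
    (inv(w_ab(a, b)) <= inv(w_ab(a2, b2)))
    == (reg[a] == reg[a2] and a2 <= a and b <= b2)
    for (a, b) in irreducibles for (a2, b2) in irreducibles)
print(len(irreducibles), agree)  # -> 8 True
```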
We may use this result to describe the poset of irreducibles of $\mathcal{T}_{\alpha}$. Figure~\ref{fig:tamari_212_irreducibles} illustrates Theorem~\ref{thm:parabolic_tamari_irreducible_poset} in the case $\alpha=(2,1,2)$.
\begin{proof}[Proof of Theorem~\ref{thm:parabolic_tamari_irreducible_poset}]
By Corollary~\ref{cor:comparable_irreducibles} we conclude that for $w_{a,b}\leq_{L}w_{a',b'}$ to hold it is necessary that $a$ and $a'$ belong to the same $\alpha$-region. This accounts for the $r-1$ connected components of $\mathrm{Weak}\bigl(\mathcal{J}(\mathcal{T}_{\alpha})\bigr)$, since $a$ can be chosen from any but the last $\alpha$-region, and there is a total of $r$ $\alpha$-regions.
Now suppose that $a$ lies in the $i\th$ $\alpha$-region, which means that $a$ takes any of the values $\{s_{i-1}{+}1,s_{i-1}{+}2,\ldots,s_{i}\}$. For any choice of $a$, we can pick one element $b\in\{s_{i}{+}1,s_{i}{+}2,\ldots,n\}$ to obtain a join-irreducible $w_{a,b}$. Observe that whenever $a$ is not minimal in its $\alpha$-region, we have $w_{a,b}\leq_{L}w_{a-1,b}$, and whenever $b<n$, we have $w_{a,b}\leq_{L}w_{a,b+1}$. This makes it clear that the $i\th$ component of $\mathrm{Weak}\bigl(\mathcal{J}(\mathcal{T}_{\alpha})\bigr)$ is isomorphic to the direct product of an $\alpha_{i}$-chain and an $(\alpha_{i+1}{+}\alpha_{i+2}{+}\cdots{+}\alpha_{r})$-chain.
\end{proof}
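The component structure described in the proof can be made explicit in a small case.
\begin{example}
Let $\alpha=(2,1,2)$, so that $r=3$. According to Theorem~\ref{thm:parabolic_tamari_irreducible_poset}, the poset $\mathrm{Weak}\bigl(\mathcal{J}(\mathcal{T}_{\alpha})\bigr)$ has two connected components: the first is the direct product of a $2$-chain and a $(1+2)$-chain, hence a grid with six elements, and the second is the direct product of a $1$-chain and a $2$-chain, hence a chain with two elements. This matches Figure~\ref{fig:tamari_212_irreducibles} and the eight join-irreducible elements listed in Example~\ref{ex:parabolic_5_14_canonical_chain}.
\end{example}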
\begin{figure}
\centering
\includegraphics[scale=1,page=9]{para_figures.pdf}
\caption{The poset $\mathrm{Weak}\bigl(\mathcal{J}(\mathcal{T}_{(2,1,2)})\bigr)$.}
\label{fig:tamari_212_irreducibles}
\end{figure}
\subsection{The Proofs of Theorems~\ref{thm:parabolic_tamari_structure} and \ref{thm:parabolic_tamari_topology}}
\label{sec:parabolic_tamari_proofs}
It is now high time to prove Theorems~\ref{thm:parabolic_tamari_structure} and \ref{thm:parabolic_tamari_topology}. This is done in several steps. The first step exploits the fact that congruence-uniformity is preserved under passing to sublattices and quotients.
\begin{proposition}\label{prop:parabolic_tamari_congruence_uniform}
For all $n>0$ and every composition $\alpha$ of $n$, the lattice $\mathcal{T}_{\alpha}$ is congruence uniform.
\end{proposition}
\begin{proof}
Theorem~10-3.7 in \cite{reading16finite} states that $\mathrm{Weak}(\mathfrak{S}_{n})$ is congruence uniform. Theorem~4.3 in \cite{day79characterizations} implies that congruence-uniformity is preserved under passing to sublattices and quotient lattices. Thus $\mathrm{Weak}(\mathfrak{S}_{\alpha})$ is congruence uniform, since it is an interval of $\mathrm{Weak}(\mathfrak{S}_{n})$. Theorem~\ref{thm:parabolic_tamari_lattice} states that $\mathcal{T}_{\alpha}$ is a quotient lattice of $\mathrm{Weak}(\mathfrak{S}_{\alpha})$, and thus also congruence uniform.
\end{proof}
As an immediate consequence of Theorem~\ref{thm:bijection_tamari_noncrossings}, we obtain an explicit description of the canonical join representations in $\mathcal{T}_{\alpha}$. Recall that, by Lemma~\ref{lem:descents_lower_covers}, an element of $\mathfrak{S}_{\alpha}(231)$ is join irreducible in $\mathcal{T}_{\alpha}$ if and only if it has exactly one descent. It follows from Corollary~\ref{cor:irreducibles_inversions} that
\begin{displaymath}
\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr) = \bigl\{w\in\mathfrak{S}_{\alpha}(231)\mid w=w_{a,b}\;\text{for some}\;1\leq a<b\leq n\;\text{with $a$ and $b$ in different $\alpha$-regions}\bigr\}.
\end{displaymath}
As a consequence, we can enumerate the join-irreducible elements of $\mathcal{T}_{\alpha}$ and explicitly describe canonical join representations.
\begin{proposition}\label{prop:parabolic_tamari_ji_elements}
Let $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ be a composition of $n>0$. We have
\begin{displaymath}
\Bigl\lvert\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)\Bigr\rvert = \sum_{i=1}^{r-1}{\alpha_{i}\cdot(\alpha_{i+1}+\alpha_{i+2}+\cdots+\alpha_{r})}.
\end{displaymath}
\end{proposition}
\begin{proof}
In view of Theorem~\ref{thm:bijection_tamari_noncrossings} it is enough to count the number of elements of $N\!C_{\alpha}$ with precisely one bump. Now, the only condition is that this bump does not occur between elements of the same $\alpha$-region. Therefore, each of the $\alpha_{i}$ elements in the $i\th$ $\alpha$-region can be connected by a bump with every element that succeeds the $i\th$ $\alpha$-region; and there are $\alpha_{i+1}+\alpha_{i+2}+\cdots+\alpha_{r}$ such elements. The claim follows by summing over $i$.
\end{proof}
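\begin{example}
For $\alpha=(3,2,1,2)$ we obtain
\begin{displaymath}
\Bigl\lvert\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)\Bigr\rvert = 3\cdot(2+1+2) + 2\cdot(1+2) + 1\cdot 2 = 23.
\end{displaymath}
\end{example}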
\begin{proposition}\label{prop:canonical_representation_tamari}
The canonical join representation of $w\in\mathcal{T}_{\alpha}$ is determined by the set of bumps of $\Phi(w)$.
\end{proposition}
\begin{proof}
Let $u\lessdot_{L}v$ be a cover relation of $\mathcal{T}_{\alpha}$. By construction, there is a unique descent $(a,b)\in\mathrm{Des}(v)$ with $(a,b)\notin\mathrm{Inv}(u)$. It follows that the labeling $\lambda$ from \eqref{eq:cu_labeling} labels the cover relation $u\lessdot_{L}v$ of $\mathcal{T}_{\alpha}$ by $w_{a,b}$.
Theorem~\ref{thm:congruence_uniform_canonical_representation} then implies that the canonical join representation of $w\in\mathcal{T}_{\alpha}$ is $\bigl\{w_{a,b}\mid (a,b)\in\mathrm{Des}(w)\bigr\}$. Since $\Phi$ sends the descents of $w$ to the bumps of $\Phi(w)$, the claim follows.
\end{proof}
The case $\alpha=(1,1,\ldots,1)$ of Proposition~\ref{prop:canonical_representation_tamari} was previously described in \cite{reading07clusters}*{Example~6.3}. Let us now move to the proof of the second part of Theorem~\ref{thm:parabolic_tamari_structure}.
\begin{proposition}\label{prop:parabolic_tamari_trim}
For all $n>0$ and every composition $\alpha$ of $n$, the parabolic Tamari lattice $\mathcal{T}_{\alpha}$ is trim.
\end{proposition}
\begin{proof}
It follows from Proposition~\ref{prop:parabolic_tamari_congruence_uniform} and Theorem~\ref{thm:congruence_uniform_implies_semidistributive} that $\mathcal{T}_{\alpha}$ is semidistributive, and Lemma~\ref{lem:kappa_bijection} thus implies that $\bigl\lvert\mathcal{J}(\mathcal{T}_{\alpha})\bigr\rvert=\bigl\lvert\mathcal{M}(\mathcal{T}_{\alpha})\bigr\rvert$. In view of Theorem~\ref{thm:semidistributive_extremal_lattice_trim} it therefore suffices to show that $\ell\bigl(\mathcal{T}_{\alpha}\bigr)=\bigl\lvert\mathcal{J}(\mathcal{T}_{\alpha})\bigr\rvert$.
Let $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$. Proposition~\ref{prop:parabolic_tamari_ji_elements} implies that the cardinality of $\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)$ is given by
\begin{equation}\label{eq:parabolic_tamari_length}
f(\alpha) \stackrel{\mathrm{def}}{=} \sum_{i=1}^{r-1}{\alpha_{i}\cdot(\alpha_{i+1}+\alpha_{i+2}+\cdots+\alpha_{r})}.
\end{equation}
It remains to exhibit a maximal chain of $\mathcal{T}_{\alpha}$ whose length is given by the formula in \eqref{eq:parabolic_tamari_length}. We apply induction on $r$. If $r=1$, then $\alpha=(n)$ and there is a unique $\alpha$-region. Hence, $\mathfrak{S}_{\alpha}$ consists only of the identity permutation. It follows that $\mathcal{T}_{\alpha}$ is the singleton lattice, which is trivially trim.
Now let $r>1$, and recall that $s_{i}=\alpha_{1}+\alpha_{2}+\cdots+\alpha_{i}$ for $i\in[r]$. Let $v_{a,b}$ be the permutation whose one-line notation is
\begin{multline}\label{eq:first_link_elements}
1\;\ldots\;\alpha_{1}{-}a\hspace*{.25cm} \alpha_{1}{+}b{-}a{+}1\hspace*{.25cm} s_{2}{-}a{+}2\;\ldots\;s_{2}\mid \alpha_{1}{+}1{-}a\hspace*{.25cm}\alpha_{1}{+}2{-}a\;\ldots\;\\
\alpha_{1}{+}b{-}a\hspace*{.25cm} \alpha_{1}{+}b{-}a{+}2\;\ldots\;s_{2}{+}1{-}a\mid s_{2}{+}1\;\ldots\;n
\end{multline}
for $a\in[\alpha_{1}]$ and $b\in[\alpha_{2}]$. (Here we indicate the $\alpha$-regions by vertical bars rather than colors.)
Observe that $v_{a,b}\in\mathfrak{S}_{\alpha}(231)$ for all admissible choices of $a$ and $b$, because inversions may occur only between the first two $\alpha$-regions. Moreover, for fixed $a$ and for all $b\in[\alpha_{2}-1]$ we have $v_{a,b}\lessdot_{L}v_{a,b+1}$, and for all $a\in[\alpha_{1}-1]$ we have $v_{a,\alpha_{2}}\lessdot_{L}v_{a+1,1}$. It follows that the set
\begin{displaymath}
\{e\} \cup \{v_{a,b}\mid 1\leq a\leq \alpha_{1},\;\text{and}\;1\leq b\leq \alpha_{2}\}
\end{displaymath}
is a maximal chain of length $\alpha_{1}\cdot\alpha_{2}$ in the interval $[e,v_{\alpha_{1},\alpha_{2}}]$ in $\mathcal{T}_{\alpha}$.
Let us abbreviate $x_{1}=v_{\alpha_{1},\alpha_{2}}$, which has the one-line notation
\begin{displaymath}
\alpha_{2}{+}1\hspace*{.25cm}\alpha_{2}{+}2\;\ldots\;s_{2}\mid 1\;2\;\ldots\;\alpha_{2}\mid s_{2}{+}1\;\ldots\;n.
\end{displaymath}
In particular, if $r=2$, then $s_{2}=n$ and $x_{1}=w_{o;\alpha}$. We conclude that $\mathcal{T}_{\alpha}$ has length $f(\alpha)=\alpha_{1}\cdot\alpha_{2}$, which settles the case $r=2$.
If we now repeat the above process with the first and the third $\alpha$-region, then we obtain the permutation $x_{2}$ given by the one-line notation
\begin{displaymath}
\alpha_{2}{+}\alpha_{3}{+}1\;\ldots\;s_{3}\mid 1\;\ldots\;\alpha_{2}\mid \alpha_{2}{+}1\;\ldots\;\alpha_{2}{+}\alpha_{3}\mid s_{3}{+}1\;\ldots\;n
\end{displaymath}
after $\alpha_{1}\cdot\alpha_{3}$ steps. As before, any element created in this process is by construction $(\alpha,231)$-avoiding.
We can therefore repeat this process another $r-3$ times, and end up with an element $x_{r-1}$ given by the one-line notation
\begin{displaymath}
n{-}\alpha_{1}{+}1\hspace*{.25cm}\ldots\;n\mid 1\;\ldots\;\alpha_{2}\mid \alpha_{2}{+}1\;\ldots\;n-\alpha_{1}.
\end{displaymath}
We have thus shown that the interval $[e,x_{r-1}]$ in $\mathcal{T}_{\alpha}$ has length
\begin{displaymath}
\alpha_{1}\cdot\alpha_{2} + \alpha_{1}\cdot\alpha_{3} + \cdots + \alpha_{1}\cdot\alpha_{r} = \alpha_{1}\cdot(\alpha_{2}+\alpha_{3}+\cdots+\alpha_{r}).
\end{displaymath}
The interval $[x_{r-1},w_{o;\alpha}]$ in $\mathrm{Weak}(\mathfrak{S}_{\alpha})$ is clearly isomorphic to $\mathrm{Weak}(\mathfrak{S}_{\alpha'})$ with $\alpha'=(\alpha_{2},\alpha_{3},\ldots,\alpha_{r})$. If an element in $[x_{r-1},w_{o;\alpha}]$ contains an $(\alpha,231)$-pattern, then this can only happen within the last $n-\alpha_{1}$ positions, which implies that the interval $[x_{r-1},w_{o;\alpha}]$ in $\mathcal{T}_{\alpha}$ is isomorphic to $\mathcal{T}_{\alpha'}$. By induction, this interval has length
\begin{align*}
f(\alpha') & = \sum_{i=2}^{r-1}{\alpha_{i}\cdot(\alpha_{i+1}+\alpha_{i+2}+\cdots+\alpha_{r})}.
\end{align*}
It follows that $\mathcal{T}_{\alpha}$ has length
\begin{displaymath}
\alpha_{1}\cdot(\alpha_{2}+\alpha_{3}+\cdots+\alpha_{r}) + f(\alpha') = f(\alpha)
\end{displaymath}
as desired.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:parabolic_tamari_structure}]
This follows from Propositions~\ref{prop:parabolic_tamari_congruence_uniform} and \ref{prop:parabolic_tamari_trim}.
\end{proof}
\begin{corollary}\label{cor:canonical_ji_order}
Let $C$ be the maximal chain constructed in the proof of Proposition~\ref{prop:parabolic_tamari_trim}, and let $\lambda$ be the labeling from \eqref{eq:cu_labeling}. The labels appearing on $C$ are pairwise distinct, and they induce a total order of $\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)$ given by the following cover relations:
\begin{displaymath}
w_{a,b}\lessdot
\begin{cases}
w_{a,b+1}, & \text{if}\;b<s_{j},\\
w_{a-1,s_{j-1}+1}, & \text{if}\;a>s_{i-1}+1, b=s_{j},\\
w_{s_{i},s_{j}+1}, & \text{if}\;a=s_{i-1}+1, b=s_{j},b<n,\\
w_{s_{i+1},s_{i+1}+1}, & \text{if}\;a=s_{i-1}+1, b=n, i<r;
\end{cases}
\end{displaymath}
where $a$ belongs to the $i\th$ $\alpha$-region and $b$ to the $j\th$ $\alpha$-region for $1\leq i<j\leq r$.
\end{corollary}
\begin{proof}
By definition, the map $\lambda$ assigns to each cover relation $u\lessdot_{L}v$ the unique $j\in\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)$ such that $\mathrm{cg}(u,v)=\mathrm{cg}(j)$. Since $C$ is a maximal chain of length $f(\alpha)$ in $\mathcal{T}_{\alpha}$, it is also a maximal chain in $\mathrm{Weak}(\mathfrak{S}_{\alpha})$, and it follows that any cover relation $u\lessdot_{L}v$ on $C$ is mapped to $w_{a,b}$, where $\bigl\{(a,b)\bigr\}=\mathrm{Inv}(v)\setminus\mathrm{Inv}(u)$ and $(a,b)\in\mathrm{Des}(v)$. This also implies that the labels in $\lambda(C)$ are pairwise distinct. Since $\mathcal{T}_{\alpha}$ is trim (and therefore extremal), we conclude that all join-irreducible elements of $\mathcal{T}_{\alpha}$ appear as labels along $C$.
For $a\in[\alpha_{1}]$ and $b\in[\alpha_{2}]$ let $v_{a,b}$ be the element whose one-line notation is \eqref{eq:first_link_elements}. It follows from the proof of Proposition~\ref{prop:parabolic_tamari_trim} that the first $\alpha_{1}\cdot\alpha_{2}+1$ elements of $C$ are precisely $e,v_{1,1},v_{1,2},\ldots,v_{\alpha_{1},\alpha_{2}}$. Now by construction we obtain $\lambda(e,v_{1,1})=w_{\alpha_{1},\alpha_{1}+1}$, and
\begin{displaymath}
\lambda(v_{a,b},v_{a,b+1}) = w_{\alpha_{1}-a+1,\alpha_{1}+b+1}
\end{displaymath}
for $b<\alpha_{2}$. For $a<\alpha_{1}$ we have
\begin{displaymath}
\lambda(v_{a,\alpha_{2}},v_{a+1,1}) = w_{\alpha_{1}-a,\alpha_{1}+1}.
\end{displaymath}
Since $s_{1}=\alpha_{1}$ and $s_{2}=\alpha_{1}+\alpha_{2}$, we conclude that the first $\alpha_{1}\cdot\alpha_{2}$ labels appearing on $C$ (in order from left to right, top to bottom) are
\begin{displaymath}\begin{aligned}
& w_{s_{1},s_{1}+1}, && w_{s_{1},s_{1}+2}, && \ldots, && w_{s_{1},s_{2}},\\
& w_{s_{1}-1,s_{1}+1}, && w_{s_{1}-1,s_{1}+2}, && \ldots, && w_{s_{1}-1,s_{2}},\\
& \vdots, && \vdots, && \ldots, && \vdots\\
& w_{1,s_{1}+1}, && w_{1,s_{1}+2}, && \ldots, && w_{1,s_{2}}.\\
\end{aligned}\end{displaymath}
By following the construction of $C$ we see that the adjacent elements in the resulting order of the join-irreducible elements are precisely determined by the conditions given in the statement. The next element on $C$ is given by the one-line notation
\begin{displaymath}
\alpha_{2}{+}1\hspace*{.25cm}\alpha_{2}{+}2\;\ldots\;s_{2}{-}1\hspace*{.25cm}s_{2}{+}1\mid 1\;2\;\ldots\;\alpha_{2}\mid s_{2}\hspace*{.25cm} s_{2}{+}2\;\ldots\;n.
\end{displaymath}
Thus, the next label in $\lambda(C)$ is $w_{s_{1},s_{2}+1}$.
The remaining order of the labels follows inductively according to the construction of $C$ from the proof of Proposition~\ref{prop:parabolic_tamari_trim}.
\end{proof}
\begin{example}\label{ex:parabolic_5_14_canonical_chain}
Let $\alpha=(2,1,2)$, which means that $s_{1}=2, s_{2}=3$ and $s_{3}=5$. The total order of the join-irreducibles according to Corollary~\ref{cor:canonical_ji_order} is
\begin{displaymath}\begin{aligned}
& w_{2,3}, && w_{1,3}, && w_{2,4}, && w_{2,5}, && w_{1,4}, && w_{1,5}, && w_{3,4}, && w_{3,5},
\end{aligned}\end{displaymath}
which can be verified in Figure~\ref{fig:parabolic_tamari_212}.
\end{example}
\begin{remark}
The join-irreducible elements of $\mathcal{T}_{(1,1,\ldots,1)}$ are precisely the transpositions $(i,j)$ for $1\leq i<j\leq n$, and the total order defined in Corollary~\ref{cor:canonical_ji_order} is exactly the lexicographic order on these transpositions.
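For instance, for $n=3$ the third and fourth case of Corollary~\ref{cor:canonical_ji_order} yield $w_{1,2}\lessdot w_{1,3}\lessdot w_{2,3}$, which is precisely the lexicographic order on the transpositions $(1,2),(1,3),(2,3)$.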
We remark that this order corresponds to the so-called inversion order of $w_{o}$ with respect to the linear Coxeter element. This correspondence does not hold in general, as can be verified for instance in the case $\alpha=(2,1,2)$, which is illustrated in Figure~\ref{fig:parabolic_tamari_212}. (We refer readers unfamiliar with these notions to \cite{muehle19tamari}*{Section~6.2}.)
\end{remark}
It remains to prove Theorem~\ref{thm:parabolic_tamari_topology}.
\begin{proof}[Proof of Theorem~\ref{thm:parabolic_tamari_topology}]
Let $\alpha$ be a composition of $n>0$. Theorem~\ref{thm:parabolic_tamari_structure} states that $\mathcal{T}_{\alpha}$ is a congruence-uniform, trim lattice, and it is thus by definition also left modular. In view of Theorem~\ref{thm:congruence_uniform_implies_semidistributive} it is also (meet) semidistributive. Therefore, we conclude from Theorem~\ref{thm:left_modular_spherical} and Proposition~\ref{prop:meet_semidistributive_mobius} that the order complex of the proper part of $\mathcal{T}_{\alpha}$ is either contractible or homotopy equivalent to a sphere, and that it is contractible if and only if the atoms of $\mathcal{T}_{\alpha}$ do not join to $w_{o;\alpha}$.
If $\alpha=(n)$, then $\mathcal{T}_{\alpha}$ is the singleton lattice, which yields $\mu\bigl(\mathcal{T}_{\alpha}\bigr)=1$. Let now $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ with $r>1$. The atoms of $\mathcal{T}_{\alpha}$ are the elements $w_{s_{i},s_{i}+1}$ for $i\in[r-1]$. The canonical join representation of $w_{o;\alpha}$ is by construction
\begin{displaymath}
\Gamma = \bigl\{w_{1,s_{2}},w_{s_{1}+1,s_{3}},\ldots,w_{s_{r-2}+1,n}\bigr\}.
\end{displaymath}
Therefore, the join of all atoms of $\mathcal{T}_{\alpha}$ is $w_{o;\alpha}$ if and only if $\Gamma$ is precisely the set of atoms. This is equivalent to $r=n$, $s_{1}=1$ and $s_{i}=s_{i-1}+1$ for $i\in\{2,3,\ldots,r\}$, which is in turn equivalent to $\alpha=(1,1,\ldots,1)$.
\end{proof}
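The argument in the proof above can be traced through a small example.
\begin{example}
Let $\alpha=(2,1,2)$, so that $s_{1}=2, s_{2}=3$ and $s_{3}=5$. The atoms of $\mathcal{T}_{\alpha}$ are $w_{2,3}$ and $w_{3,4}$, whereas $\Gamma=\bigl\{w_{1,3},w_{3,5}\bigr\}$. Since $\Gamma$ is not the set of atoms, the atoms of $\mathcal{T}_{\alpha}$ do not join to $w_{o;\alpha}$, and we conclude that the order complex of the proper part of $\mathcal{T}_{\alpha}$ is contractible and $\mu\bigl(\mathcal{T}_{\alpha}\bigr)=0$.
\end{example}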
\subsection{The Galois Graph of $\mathcal{T}_{\alpha}$}
\label{sec:galois_graph}
We have seen in Proposition~\ref{prop:parabolic_tamari_trim} that $\mathcal{T}_{\alpha}$ is a trim lattice. Therefore it can be represented by its Galois graph (defined at the end of Section~\ref{sec:trim}). Since $\mathcal{T}_{\alpha}$ is also congruence uniform, it follows from Corollary~\ref{cor:galois_edges_congruence_uniform_extremal} that we can view the Galois graph as a directed graph on the set of join-irreducible elements of $\mathcal{T}_{\alpha}$.
Let us recall the following useful characterization of inversion sets of joins in weak order.
\begin{lemma}[\cite{markowsky94permutation}*{Theorem~1(b)}]\label{lem:inversion_sets_join}
Let $x,y\in\mathfrak{S}_{n}$. The inversion set $\mathrm{Inv}(x\vee y)$ is the transitive closure of $\mathrm{Inv}(x)\cup\mathrm{Inv}(y)$, \text{i.e.}\; if $(a,b),(b,c)\in\mathrm{Inv}(x)\cup\mathrm{Inv}(y)$, then $(a,c)\in\mathrm{Inv}(x\vee y)$.
\end{lemma}
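\begin{example}
Let $n=3$, and consider $x=2\;1\;3$ and $y=1\;3\;2$, so that $\mathrm{Inv}(x)=\bigl\{(1,2)\bigr\}$ and $\mathrm{Inv}(y)=\bigl\{(2,3)\bigr\}$. The transitive closure of $\mathrm{Inv}(x)\cup\mathrm{Inv}(y)$ additionally contains the pair $(1,3)$, and indeed $x\vee y=3\;2\;1$, whose inversion set is $\bigl\{(1,2),(1,3),(2,3)\bigr\}$.
\end{example}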
\begin{proof}[Proof of Theorem~\ref{thm:parabolic_tamari_galois_graph}]
By definition, the vertex set of $\mathcal{G}\bigl(\mathcal{T}_{\alpha}\bigr)$ is $[K]$, where
\begin{displaymath}
K=\bigl\lvert\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)\bigr\rvert=f(\alpha),
\end{displaymath}
and there exists a directed edge $i\to k$ if and only if $j_{i}\not\leq m_{k}$, where the join- and meet-irreducible elements are ordered as in \eqref{eq:extremal_ordering}. Since $\mathcal{T}_{\alpha}$ is congruence uniform, Corollary~\ref{cor:galois_edges_congruence_uniform_extremal} states that there is a directed edge $i\to k$ in $\mathcal{G}\bigl(\mathcal{T}_{\alpha}\bigr)$ if and only if $j_{k}\leq (j_{k})_{*}\vee j_{i}$. We may therefore view $\mathcal{G}\bigl(\mathcal{T}_{\alpha}\bigr)$ as a directed graph on the vertex set $\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)$.
Now pick $w_{a,b},w_{a',b'}\in\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)$ such that $a$ belongs to the $i\th$ $\alpha$-region, and $a'$ belongs to the ${i'}\th$ $\alpha$-region. It remains to determine when
\begin{equation}\label{eq:galois_edge}
w_{a',b'}\leq(w_{a',b'})_{*}\vee w_{a,b}
\end{equation}
holds.
For simplicity, let us write $p'=w_{a',b'}, p=w_{a,b}$, and $p'_{*}=(w_{a',b'})_{*}$. Let $z=p'_{*}\vee p$. By definition $\mathrm{Inv}(p'_{*})\cup\mathrm{Inv}(p)\subseteq\mathrm{Inv}(z)$, and it follows from Corollary~\ref{cor:irreducibles_inversions} that $\mathrm{Inv}(p'_{*})=\mathrm{Inv}(p')\setminus\bigl\{(a',b')\bigr\}$. We conclude that \eqref{eq:galois_edge} holds if and only if $(a',b')\in\mathrm{Inv}(z)$, which by Lemma~\ref{lem:inversion_sets_join} is the case if and only if either $(a',b')\in\mathrm{Inv}(p)$, or there exists $c\in[n]$ such that $(a',c)\in\mathrm{Inv}(p'_{*})$ and $(c,b')\in\mathrm{Inv}(p)$, or vice versa.
Let us first consider the case where $a$ and $a'$ belong to the same $\alpha$-region, \text{i.e.}\; $i=i'$. There are two cases.
(i) Let $a\leq a'$. If $b'\leq b$, then Corollary~\ref{cor:comparable_irreducibles} implies that $p'\leq_{L}p$, and we see that \eqref{eq:galois_edge} holds. If $b<b'$, then Corollary~\ref{cor:irreducibles_inversions} implies that $(a',b')\notin\mathrm{Inv}(p)$. It follows from Corollary~\ref{cor:irreducibles_inversions} that $(c,b')\notin\mathrm{Inv}(p)$ for any $c$, and $(a',c)\in\mathrm{Inv}(p)$ implies $c\in[s_{i}{+}1,b]$ and $(c,b')\in\mathrm{Inv}(p'_{*})$ implies that $c\in[a'{+}1,s_{i}]$. It follows that $(a',b')\notin\mathrm{Inv}(z)$, and thus \eqref{eq:galois_edge} does not hold.
(ii) Let $a>a'$. If $b\leq b'$, then Corollary~\ref{cor:comparable_irreducibles} implies that $p<_{L}p'$, and since $(a',b')\notin\mathrm{Inv}(p)$ we even have $p\leq_{L}p'_{*}$, so that $z=p'_{*}$ and \eqref{eq:galois_edge} fails. If $b>b'$, then Corollary~\ref{cor:irreducibles_inversions} implies that $(a',b')\notin\mathrm{Inv}(p)$. Once again, we conclude from Corollary~\ref{cor:irreducibles_inversions} that $(a',c)\notin\mathrm{Inv}(p)$ for any $c$, and $(a',c)\in\mathrm{Inv}(p'_{*})$ implies $c\in[s_{i}{+}1,b'{-}1]$, while $(c,b')\in\mathrm{Inv}(p)$ implies that $c\in[a,s_{i}]$. These two intervals are disjoint, so $(a',b')\notin\mathrm{Inv}(z)$, and thus \eqref{eq:galois_edge} does not hold.
\medskip
Let us now consider the case where $a$ and $a'$ belong to different $\alpha$-regions, \text{i.e.}\; $i\neq i'$. There are two cases, and both times Corollary~\ref{cor:irreducibles_inversions} implies that $(a',b')\notin\mathrm{Inv}(p)$. Moreover, we conclude from Corollary~\ref{cor:irreducibles_inversions} that $(a',c)\notin\mathrm{Inv}(p)$ for any $c$, and $(a',c)\in\mathrm{Inv}(p'_{*})$ implies $c\in[s_{i'}{+}1,b'{-}1]$ and $(c,b')\in\mathrm{Inv}(p)$ implies $c\in[a,s_{i}]$ and $s_{i}<b'\leq b$.
(i) Let $i<i'$. It follows that $s_{i}<s_{i'}+1$, and we conclude that $(a',b')\notin\mathrm{Inv}(z)$, and thus \eqref{eq:galois_edge} does not hold.
(ii) Let $i>i'$. This means that $a'<a$. If $b'\leq s_{i}$ or $b'>b$, then we have $(c,b')\notin\mathrm{Inv}(p)$ for any $c$, and therefore \eqref{eq:galois_edge} does not hold. If $s_{i}<b'\leq b$, then we may choose $c=a$ to conclude that $(a',b')\in\mathrm{Inv}(z)$, and see that \eqref{eq:galois_edge} holds.
\end{proof}
\begin{figure}
\centering
\includegraphics[scale=1,page=10]{para_figures.pdf}
\caption{The Galois graph $\mathcal{G}\bigl(\mathcal{T}_{(2,1,2)}\bigr)$.}
\label{fig:galois_graph_5_14}
\end{figure}
In \cite{thomas19rowmotion}*{Theorem~5.5} it was shown that the complement of the \emph{undirected} Galois graph of an extremal semidistributive lattice $\mathcal{L}$ is precisely the $1$-skeleton of the so-called canonical join complex of $\mathcal{L}$. This is the simplicial complex whose faces are canonical join representations of $\mathcal{L}$. By Proposition~\ref{prop:canonical_representation_tamari} the canonical join representations in $\mathcal{T}_{\alpha}$ are precisely the $\alpha$-noncrossing partitions. We thus have the following corollary (which may also be verified directly).
\begin{corollary}
If there exists a directed edge $w_{a,b}\to w_{a',b'}$ in $\mathcal{G}\bigl(\mathcal{T}_{\alpha}\bigr)$, then $\Phi(w_{a,b})$ and $\Phi(w_{a',b'})$ are not $\alpha$-compatible.
\end{corollary}
\section{The Core Label Order of $\mathcal{T}_{\alpha}$}
\label{sec:parabolic_tamari_alternate_order}
\subsection{The Core Label Order of a Congruence-Uniform Lattice}
\label{sec:alternate_order}
The labeling $\lambda$ of a congruence-uniform lattice $\mathcal{L}=(L,\leq)$ from \eqref{eq:cu_labeling} gives rise to an alternate way of ordering the elements of $\mathcal{L}$. This order was first considered by N.~Reading in connection with posets of regions of simplicial hyperplane arrangements under the name shard intersection order; see~\cite{reading16lattice}*{Section~9-7.4}. For $a\in L$ let
\begin{displaymath}
a_{\downarrow}\stackrel{\mathrm{def}}{=}\bigwedge\limits_{b\in L\colon b\lessdot a}{b},
\end{displaymath}
and define
\begin{equation}\label{eq:shard_set}
\Psi(a) \stackrel{\mathrm{def}}{=} \left\{\lambda(u,v)\mid a_{\downarrow}\leq u\lessdot v\leq a\right\}.
\end{equation}
The \defn{core label order} of $\mathcal{L}$ is the poset $(L,\sqsubseteq)$, where we have $a\sqsubseteq b$ if and only if $\Psi(a)\subseteq\Psi(b)$. We usually write $\mathrm{CLO}(\mathcal{L})$ for this poset.
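For instance, if $j\in L$ is join irreducible with unique lower cover $j_{*}$, then $j_{\downarrow}=j_{*}$, and since $\mathrm{cg}(j_{*},j)=\mathrm{cg}(j)$ we obtain $\Psi(j)=\bigl\{\lambda(j_{*},j)\bigr\}=\{j\}$.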
A congruence-uniform lattice $\mathcal{L}=(L,\leq)$ has the \defn{intersection property} if for every $a,b\in L$ there exists $c\in L$ such that $\Psi(c)=\Psi(a)\cap\Psi(b)$. It is quickly verified that if $\mathcal{L}$ has the intersection property, then $\mathrm{CLO}(\mathcal{L})$ is a meet-semilattice~\cite{muehle19the}*{Proposition~4.7}.
\subsection{The Core Label Order of $\mathcal{T}_{\alpha}$}
\label{sec:parabolic_alternate_order}
Figure~\ref{fig:parabolic_tamari_212_clo} shows the core label order of $\mathcal{T}_{(2,1,2)}$. We have additionally represented the elements of $\mathcal{T}_{(2,1,2)}$ by their corresponding noncrossing $(2,1,2)$-partitions. We observe that this poset is isomorphic to $\mathcal{N\!C}_{(2,1,2)}$.
\begin{figure}
\centering
\includegraphics[scale=.8,page=11]{para_figures.pdf}
\caption{The core label order of $\mathcal{T}_{(2,1,2)}$. This is also the poset $\mathcal{N\!C}_{(2,1,2)}$.}
\label{fig:parabolic_tamari_212_clo}
\end{figure}
We now want to characterize the cases where the core label order of $\mathcal{T}_{\alpha}$ is isomorphic to $\mathcal{N\!C}_{\alpha}$. To this end, we define the following set for $u\in\mathfrak{S}_{\alpha}(231)$:
\begin{equation}\label{eq:irreducibles_from_noncrossings}
X(u) = \bigl\{w_{a,b}\mid a\sim_{\Phi(u)}b\bigr\},
\end{equation}
where $w_{a,b}$ is the unique element of $\mathcal{J}\bigl(\mathcal{T}_{\alpha}\bigr)$ with $\mathrm{Des}(w_{a,b})=\bigl\{(a,b)\bigr\}$.
\begin{lemma}\label{lem:dual_refinement_as_inclusion}
For $u,v\in\mathfrak{S}_{\alpha}(231)$ we have $\Phi(u)\leq_{\mathrm{ref}}\Phi(v)$ if and only if $X(u)\subseteq X(v)$.
\end{lemma}
\begin{proof}
Let $w_{a,b}\in X(u)$. By definition, this means $a\sim_{\Phi(u)}b$. If $\Phi(u)\leq_{\mathrm{ref}}\Phi(v)$, then we conclude that $a\sim_{\Phi(v)}b$, which means $w_{a,b}\in X(v)$.
Conversely, suppose that $X(u)\subseteq X(v)$, and let $a\sim_{\Phi(u)}b$ for $a<b$. Then $w_{a,b}\in X(u)\subseteq X(v)$, and hence $a\sim_{\Phi(v)}b$. In particular, each part of $\Phi(u)$ is contained in some part of $\Phi(v)$, which means that $\Phi(u)\leq_{\mathrm{ref}}\Phi(v)$.
\end{proof}
\begin{proposition}\label{prop:parabolic_noncrossing_contains_shard_sets}
Let $\alpha$ be a composition of $n>0$. For all $u\in\mathfrak{S}_{\alpha}(231)$ we have $\Psi(u)\subseteq X(u)$.
\end{proposition}
\begin{proof}
Let us for a moment consider the poset $\mathrm{Weak}(\mathfrak{S}_{\alpha})$. Let $u,v\in\mathfrak{S}_{\alpha}$ with $u\lessdot_{L}v$ such that $(a,b)\in\mathrm{Des}(u)$. If $(a,b)\notin\mathrm{Des}(v)$, we can quickly check that there are three options: (i) $(c,b)\in\mathrm{Des}(v)$ for $c<a$, (ii) $(a,c)\in\mathrm{Des}(v)$ for $b<c$, or (iii) $(a,c),(c,b)\in\mathrm{Des}(v)$ for $a<c<b$.
Now let $u\in\mathfrak{S}_{\alpha}(231)$, and fix $w_{a,b}\in\Psi(u)$. By definition this means that there exists a cover relation $v\lessdot_{L}v'$ in $\mathcal{T}_{\alpha}$ with $u_{\downarrow}\leq_{L}v$ and $v'\leq_{L}u$ such that $\mathrm{cg}(v,v')=\mathrm{cg}(w_{a,b})$. In the present setting this means that $(a,b)\notin\mathrm{Inv}(v)$ while $(a,b)\in\mathrm{Des}(v')$. Among all such cover relations we choose one in which $v'$ is maximal.
If $(a,b)\in\mathrm{Des}(u)$, then we conclude from Theorem~\ref{thm:bijection_tamari_noncrossings} that $(a,b)$ is a bump of $\Phi(u)$, and thus $w_{a,b}\in X(u)$ by construction.
If $(a,b)\notin\mathrm{Des}(u)$, then we have in particular $v'<_{L}u$, and the reasoning in the first paragraph implies that along any maximal chain from $v'$ to $u$ the descent $(a,b)$ gets ``extended'' (cases (i) or (ii)) and/or ``subdivided'' (case (iii)), until we are left with a sequence $(k_{0},k_{1}),(k_{1},k_{2}),\ldots,(k_{s-1},k_{s})$ of descents of $u$ such that $k_{0}=a$ and $k_{s}=b$. We conclude $a\sim_{\Phi(u)}b$, and thus $w_{a,b}\in X(u)$ by construction.
\end{proof}
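The inclusion in Proposition~\ref{prop:parabolic_noncrossing_contains_shard_sets} may be strict, as the following example shows.
\begin{example}
Let $\alpha=(1,2,1)$, and let $u\in\mathfrak{S}_{\alpha}(231)$ be given by the one-line notation $3\mid 2\;4\mid 1$, so that $\Phi(u)$ has the bumps $(1,2)$ and $(2,4)$. One computes $\Psi(u)=\bigl\{w_{1,2},w_{2,4}\bigr\}$, while $X(u)=\bigl\{w_{1,2},w_{2,4},w_{1,4}\bigr\}$.
\end{example}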
\begin{proposition}\label{prop:shard_set_contains_parabolic_noncrossings}
Let $\alpha$ be a composition of $n>0$. We have $\Psi(w)=X(w)$ for all $w\in\mathfrak{S}_{\alpha}(231)$ if and only if either $\alpha=(n)$ or $\alpha=(a,1,1,\ldots,1,b)$ for some positive integers $a,b$.
\end{proposition}
\begin{proof}
If $\alpha=(n)$, then the only element of $\mathfrak{S}_{\alpha}$ is the identity, and we have $\Psi(e)=\emptyset=X(e)$. Now suppose that $\alpha=(a,1,1,\ldots,1,b)$ for some positive integers $a,b$. Let $u\in\mathfrak{S}_{\alpha}(231)$ with $\mathrm{Des}(u)=\bigl\{(p_{1},q_{1}),(p_{2},q_{2}),\ldots,(p_{s},q_{s})\bigr\}$.
It follows from Proposition~\ref{prop:parabolic_noncrossing_contains_shard_sets} that $\Psi(u)\subseteq X(u)$. We therefore just need to show the reverse inclusion, and we pick two integers $i,j$ with $i\sim_{\Phi(u)}j$. By definition, there exists a sequence of integers $k_{0},k_{1},\ldots,k_{r}$ such that $i=k_{0}$ and $j=k_{r}$, and $(k_{l-1},k_{l})$ is a bump of $\Phi(u)$ for all $l\in[r]$. In particular, all the $k_{l}$ lie in different $\alpha$-regions. Theorem~\ref{thm:bijection_tamari_noncrossings} implies that $(k_{l-1},k_{l})$ is a descent of $u$ for all $l\in[r]$, which yields $(i,j)\in\mathrm{Inv}(u)$.
If $r=1$, then $(i,j)$ is itself a bump of $\Phi(u)$, and we have in fact $(i,j)\in\mathrm{Des}(u)$. Proposition~\ref{prop:canonical_representation_tamari} implies that $w_{i,j}$ belongs to the canonical join representation of $u$. It follows that $w_{i,j}\in\Psi(u)$.
Consider the permutation $v^{(r)}$, which arises from $u$ by swapping the entries in position $k_{r-1}$ and $k_{r}$, \text{i.e.}\; $v^{(r)}=(k_{r-1},k_{r})\cdot u$. Clearly, we have $u_{\downarrow}\leq_{L}v^{(r)}\lessdot_{L}u$. Since $\mathrm{Weak}(\mathfrak{S}_{\alpha})$ is an order ideal in $\mathrm{Weak}(\mathfrak{S}_{n})$, we conclude that $v^{(r)}\in\mathfrak{S}_{\alpha}$. We claim that we also have $v^{(r)}\in\mathfrak{S}_{\alpha}(231)$. The assumption $\alpha=(a,1,1,\ldots,1,b)$ implies that every $\alpha$-region, except perhaps the first and the last, is a singleton. In particular, $u_{k_{r-1}}$ is the only element in its $\alpha$-region. By construction we have
\begin{displaymath}
\mathrm{Des}\bigl(v^{(r)}\bigr) = \Bigl(\mathrm{Des}(u)\setminus\bigl\{(k_{r-1},k_{r})\bigr\}\Bigr)\cup\bigl\{(k_{r-2},k_{r})\bigr\}.
\end{displaymath}
In order to conclude that $v^{(r)}\in\mathfrak{S}_{\alpha}(231)$, we need to show that $v^{(r)}_{c}<v^{(r)}_{k_{r}}$ for all $c\in[n]$ with $k_{r-2}<c<k_{r}$. If such a $c$ belongs to an $\alpha$-region strictly between the $\alpha$-regions containing $k_{r-2}$ and $k_{r-1}$, then we conclude from the assumption $u\in\mathfrak{S}_{\alpha}(231)$ that
\begin{displaymath}
v^{(r)}_{c} = u_{c} < u_{k_{r-1}} = v^{(r)}_{k_{r}}.
\end{displaymath}
If $c$ belongs to an $\alpha$-region strictly between the $\alpha$-regions containing $k_{r-1}$ and $k_{r}$, then we conclude from the assumption $u\in\mathfrak{S}_{\alpha}(231)$ that
\begin{displaymath}
v^{(r)}_{c} = u_{c} < u_{k_{r}} < u_{k_{r-1}} = v^{(r)}_{k_{r}}.
\end{displaymath}
If $c$ belongs to the same $\alpha$-region as $k_{r-1}$, then by assumption $c=k_{r-1}$. (This follows since every $\alpha$-region, except for possibly the first and the last, is a singleton, and $k_{r-1}$ does not lie in the last $\alpha$-region.) It follows that
\begin{displaymath}
v^{(r)}_{c} = v^{(r)}_{k_{r-1}} = u_{k_{r}} < u_{k_{r-1}} = v^{(r)}_{k_{r}}.
\end{displaymath}
We conclude that $v^{(r)}$ does not contain an $(\alpha,231)$-pattern, and thus $v^{(r)}\in\mathfrak{S}_{\alpha}(231)$.
Now consider the permutation $v^{(r-1)}$, which arises from $v^{(r)}$ by swapping the entries in position $k_{r-2}$ and $k_{r}$, \text{i.e.}\; $v^{(r-1)}=(k_{r-2},k_{r})\cdot v^{(r)}$. By construction we have
\begin{displaymath}
\mathrm{Des}\bigl(v^{(r-1)}\bigr) = \Bigl(\mathrm{Des}\bigl(v^{(r)}\bigr)\setminus\bigl\{(k_{r-2},k_{r})\bigr\}\Bigr)\cup\bigl\{(k_{r-3},k_{r})\bigr\}.
\end{displaymath}
As before we can show that $v^{(r-1)}\in\mathfrak{S}_{\alpha}(231)$, and that $u_{\downarrow}\leq_{L}v^{(r-1)}\leq_{L}u$. After $r-2$ of these steps we obtain a permutation $v^{(2)}\in\mathfrak{S}_{\alpha}(231)$ with $u_{\downarrow}\leq_{L}v^{(2)}\leq_{L}u$ and $(k_{0},k_{r})\in\mathrm{Des}\bigl(v^{(2)}\bigr)$. The permutation $v^{(1)}=(k_{0},k_{r})\cdot v^{(2)}$ can again be shown to be $(\alpha,231)$-avoiding, and we still have $u_{\downarrow}\leq_{L}v^{(1)}\leq_{L}u$. Thus
\begin{displaymath}
\lambda\Bigl(v^{(1)},v^{(2)}\Bigr) = w_{k_{0},k_{r}} = w_{i,j},
\end{displaymath}
which implies $w_{i,j}\in\Psi(u)$ as desired.
Now suppose that $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ is a composition of $n$ which is not of the form stated in the proposition. In particular, $r\geq 3$ and there exists $k\in[r-2]$ such that $s_{k+1}>s_{k}+1$. Consider the three $\alpha$-regions
\begin{align*}
R & = \{s_{k-1}{+}1,s_{k-1}{+}2,\ldots,s_{k}\},\\
R' & = \{s_{k}{+}1,s_{k}{+}2,\ldots,s_{k+1}\},\\
R'' & = \{s_{k+1}{+}1,s_{k+1}{+}2,\ldots,s_{k+2}\}.
\end{align*}
(If $k=1$, then we set $s_{k-1}=0$; if $k=r-2$, then $s_{k+2}=n$.) In particular, $\lvert R'\rvert=s_{k+1}-s_{k}>1$. Consider the noncrossing $\alpha$-partition $\mathbf{P}\in N\!C_{\alpha}$ whose only bumps are $(s_{k},s_{k}+1)$ and $(s_{k}+1,s_{k+1}+1)$. By construction the permutation $u=\Phi^{-1}(\mathbf{P})$ satisfies
\begin{displaymath}
u = 1\;2\;\ldots\;s_{k}{-}1\;s_{k}{+}2\mid s_{k}{+}1\;s_{k}{+}3\;\ldots\;2s_{k}{-}s_{k-1}{+}1\mid s_{k}\;2s_{k}{-}s_{k-1}{+}2\;\ldots\;n.
\end{displaymath}
We conclude that
\begin{displaymath}
u_{\downarrow} = 1\;2\;\ldots\;s_{k}{-}1\;s_{k}\mid s_{k}{+}1\;s_{k}{+}3\;\ldots\;2s_{k}{-}s_{k-1}{+}1\mid s_{k}{+}2\;2s_{k}{-}s_{k-1}{+}2\;\ldots\;n.
\end{displaymath}
Hence, the weak order interval $[u_{\downarrow},u]$ consists of six elements, where besides $u_{\downarrow}$ and $u$ we have
\begin{align*}
p_{1} & = 1\;2\;\ldots\;s_{k}{-}1\;s_{k}{+}1\mid s_{k}\;s_{k}{+}3\;\ldots\;2s_{k}{-}s_{k-1}{+}1\mid s_{k}{+}2\;2s_{k}{-}s_{k-1}{+}2\;\ldots\;n,\\
p_{2} & = 1\;2\;\ldots\;s_{k}{-}1\;s_{k}\mid s_{k}{+}2\;s_{k}{+}3\;\ldots\;2s_{k}{-}s_{k-1}{+}1\mid s_{k}{+}1\;2s_{k}{-}s_{k-1}{+}2\;\ldots\;n,\\
p_{3} & = 1\;2\;\ldots\;s_{k}{-}1\;s_{k}{+}2\mid s_{k}\;s_{k}{+}3\;\ldots\;2s_{k}{-}s_{k-1}{+}1\mid s_{k}{+}1\;2s_{k}{-}s_{k-1}{+}2\;\ldots\;n,\\
p_{4} & = 1\;2\;\ldots\;s_{k}{-}1\;s_{k}{+}1\mid s_{k}{+}2\;s_{k}{+}3\;\ldots\;2s_{k}{-}s_{k-1}{+}1\mid s_{k}\;2s_{k}{-}s_{k-1}{+}2\;\ldots\;n.
\end{align*}
By assumption $p_{3},p_{4}\notin\mathfrak{S}_{\alpha}(231)$, which implies that the interval $[u_{\downarrow},u]$ in $\mathcal{T}_{\alpha}$ consists of the four elements $u_{\downarrow},p_{1},p_{2},u$. Consequently, $\Psi(u)=\bigl\{w_{s_{k},s_{k}+1},w_{s_{k}+1,s_{k+1}+1}\bigr\}$. However, by construction we have $X(u)=\bigl\{w_{s_{k},s_{k}+1},w_{s_{k}+1,s_{k+1}+1},w_{s_{k},s_{k+1}+1}\bigr\}$, which finishes the proof.
\end{proof}
\begin{figure}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[scale=1,page=12]{para_figures.pdf}
\caption{The parabolic Tamari lattice $\mathcal{T}_{(1,2,1)}$.}
\label{fig:parabolic_tamari_121}
\end{subfigure}
\hspace*{.5cm}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[scale=1,page=13]{para_figures.pdf}
\caption{The core label order of $\mathcal{T}_{(1,2,1)}$.}
\label{fig:parabolic_tamari_121_clo}
\end{subfigure}
\caption{The parabolic Tamari lattice $\mathcal{T}_{(1,2,1)}$ and its core label order.}
\label{fig:parabolic_121}
\end{figure}
Let us illustrate a situation in which $\Psi(u)\subsetneq X(u)$ by considering $\alpha=(1,2,1)$. Figure~\ref{fig:parabolic_tamari_121} shows the poset $\mathcal{T}_{(1,2,1)}$, and Figure~\ref{fig:parabolic_tamari_121_clo} shows the corresponding core label order. We observe that $\mathcal{N\!C}_{(1,2,1)}$ has one extra cover relation, namely the one between $\Phi(4|12|3)$ and $\Phi(3|24|1)$. We conclude the proof of Theorem~\ref{thm:parabolic_tamari_alternate}.
\begin{proof}[Proof of Theorem~\ref{thm:parabolic_tamari_alternate}]
Lemma~\ref{lem:dual_refinement_as_inclusion} states that $\Phi(u)\leq_{\mathrm{ref}}\Phi(v)$ if and only if $X(u)\subseteq X(v)$. Proposition~\ref{prop:shard_set_contains_parabolic_noncrossings} states that $X(w)=\Psi(w)$ if and only if $\alpha=(n)$ or $\alpha=(a,1,1,\ldots,1,b)$ for some positive integers $a,b$. This proves the claim.
\end{proof}
We have shown in Proposition~\ref{prop:parabolic_noncrossing_ranked} that the poset $\mathcal{N\!C}_{\alpha}$ is ranked, and Proposition~\ref{prop:shard_set_contains_parabolic_noncrossings} implies that it contains $\mathrm{CLO}\bigl(\mathcal{T}_{\alpha}\bigr)$ as a subposet. Computational evidence suggests that the core label order $\mathrm{CLO}\bigl(\mathcal{T}_{\alpha}\bigr)$ is ranked, too.
\begin{conjecture}\label{conj:parabolic_alternate_ranked}
For all $n>0$ and every composition $\alpha$ of $n$, the core label order of $\mathcal{T}_{\alpha}$ is ranked.
\end{conjecture}
We remark that the core label order of a congruence-uniform lattice need not be ranked in general. Consider for instance the congruence-uniform lattice in Figure~\ref{fig:cu_unranked_1}. Its core label order is shown in Figure~\ref{fig:cu_unranked_2}.
\begin{figure}
\centering
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[scale=1,page=14]{para_figures.pdf}
\caption{A congruence-uniform lattice.}
\label{fig:cu_unranked_1}
\end{subfigure}
\hspace*{1cm}
\begin{subfigure}[t]{.45\textwidth}
\centering
\includegraphics[scale=1,page=15]{para_figures.pdf}
\caption{The core label order of the congruence-uniform lattice in Figure~\ref{fig:cu_unranked_1}.}
\label{fig:cu_unranked_2}
\end{subfigure}
\caption{A congruence-uniform lattice whose core label order is not ranked.}
\label{fig:cu_unranked}
\end{figure}
We conclude this section with the following conjecture, which we have verified by computer for $n\leq 6$.
\begin{conjecture}\label{conj:parabolic_tamari_intersection_property}
For all $n>0$ and every composition $\alpha$ of $n$, the lattice $\mathcal{T}_{\alpha}$ has the intersection property. Consequently $\mathrm{CLO}\bigl(\mathcal{T}_{\alpha}\bigr)$ is a meet-semilattice.
\end{conjecture}
Theorem~\ref{thm:parabolic_tamari_alternate} supports Conjecture~\ref{conj:parabolic_tamari_intersection_property}, because we may conclude the following result.
\begin{proposition}\label{prop:parabolic_tamari_intersection_property_some}
If $\alpha=(n)$ or $\alpha=(a,1,1,\ldots,1,b)$ for some positive integers $a,b$, then $\mathcal{T}_{\alpha}$ has the intersection property.
\end{proposition}
\begin{proof}
In the given case we have seen in Proposition~\ref{prop:shard_set_contains_parabolic_noncrossings} that for $u\in\mathfrak{S}_{\alpha}(231)$ we have $X(u)=\Psi(u)$. Now let $u,v\in\mathfrak{S}_{\alpha}$. It follows from Proposition~\ref{prop:parabolic_nc_meet_semilattice} that there exists a noncrossing $\alpha$-partition $\mathbf{P}$ whose set of bumps is precisely $X(u)\cap X(v)$, namely $\mathbf{P}=\Phi(u)\wedge\Phi(v)$. We conclude that $z=\Phi^{-1}(\mathbf{P})$ satisfies
\begin{displaymath}
\Psi(z) = X(z) = X(u)\cap X(v) = \Psi(u)\cap\Psi(v)
\end{displaymath}
as desired.
\end{proof}
\section{$\alpha$-Chapoton Triangles}
\label{sec:chapoton_triangles}
In this section we describe a conjectural enumerative connection between the core label order of $\mathcal{T}_{\alpha}$ and the parabolic root poset of $\mathfrak{S}_{\alpha}$.
\subsection{$\alpha$-Root Posets}
\label{sec:parabolic_root_posets}
The article \cite{muehle19tamari} introduces a third family of combinatorial objects associated with parabolic quotients of the symmetric group.
Recall that the \defn{root poset} of $\mathfrak{S}_{n}$ is the poset $\bigl(T_{n},\trianglelefteq\bigr)$, where
\begin{displaymath}
T_{n} \stackrel{\mathrm{def}}{=} \bigl\{(i,j)\mid 1\leq i<j\leq n\bigr\},
\end{displaymath}
and $(i,j)\trianglelefteq(k,l)$ if and only if $i\geq k$ and $j\leq l$. Moreover, let
\begin{displaymath}
S_{n} \stackrel{\mathrm{def}}{=} \bigl\{(i,i{+}1)\mid 1\leq i<n\bigr\}.
\end{displaymath}
Now fix a composition $\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})$ of $n$, and recall the definition $s_{i}=\alpha_{1}+\alpha_{2}+\cdots+\alpha_{i}$ for $i\in[r]$. We define
\begin{align*}
S_{\alpha} & \stackrel{\mathrm{def}}{=} \bigl\{(s_{i},s_{i}{+}1)\mid i\in[r-1]\bigr\},\\
T_{\alpha} & \stackrel{\mathrm{def}}{=} \bigl\{t\in T_{n}\mid s\trianglelefteq t\;\text{for some}\;s\in S_{\alpha}\bigr\}.
\end{align*}
The \defn{$\alpha$-root poset} is the poset $\bigl(T_{\alpha},\trianglelefteq\bigr)$. Recall that an antichain in a poset is a set of pairwise incomparable elements. A \defn{nonnesting $\alpha$-partition} is an antichain of $\bigl(T_{\alpha},\trianglelefteq\bigr)$, and we denote the set of all nonnesting $\alpha$-partitions by $N\!N_{\alpha}$. Figure~\ref{fig:root_poset_3322} shows the poset $\bigl(T_{(3,3,2,2)},\trianglelefteq\bigr)$ with one of its antichains highlighted.
\begin{figure}
\centering
\includegraphics[scale=1,page=16]{para_figures.pdf}
\caption{The $(3,3,2,2)$-root poset, with a nonnesting $(3,3,2,2)$-partition highlighted.}
\label{fig:root_poset_3322}
\end{figure}
\subsection{Certain Bivariate Polynomials}
The following constructions are inspired by \cites{chapoton04enumerative,chapoton06sur}. We recover the polynomials studied there in the special case $\alpha=(1,1,\ldots,1)$.
Let us define the \defn{$H_{\alpha}$-triangle} by
\begin{displaymath}
H_{\alpha}(x,y) \stackrel{\mathrm{def}}{=} \sum_{A\in N\!N_{\alpha}}x^{\lvert A\rvert}y^{\lvert A\cap S_{\alpha}\rvert}.
\end{displaymath}
Figure~\ref{fig:h_triangle_311} shows the elements of $N\!N_{(3,1,1)}$ together with the term they contribute to $H_{(3,1,1)}(x,y)$. Summing these terms yields
\begin{displaymath}
H_{(3,1,1)}(x,y) = x^{2}y^{2} + 2x^{2}y + 3x^{2} + 2xy + 5x + 1.
\end{displaymath}
\begin{figure}
\centering
\includegraphics[scale=1,page=17]{para_figures.pdf}
\caption{The nonnesting $(3,1,1)$-partitions, together with the term they contribute to $H_{(3,1,1)}(x,y)$.}
\label{fig:h_triangle_311}
\end{figure}
For $\mathbf{P}\in N\!C_{\alpha}$ let $\mathrm{bump}(\mathbf{P})$ denote the number of bumps of $\mathbf{P}$. Recall from Proposition~\ref{prop:parabolic_noncrossing_ranked} that $\mathrm{bump}$ is a rank function of the poset $\mathcal{N\!C}_{\alpha}$, and recall the definition of the M{\"o}bius function of a finite poset from Section~\ref{sec:poset_topology}.
We may now introduce the rank-generating function of $\mathcal{N\!C}_{\alpha}$ weighted by the M{\"o}bius function, the \defn{$\overline{M}_{\alpha}$-triangle}:
\begin{equation}\label{eq:m_triangle_nc}
\overline{M}_{\alpha}(x,y) \stackrel{\mathrm{def}}{=} \sum_{\mathbf{P}_{1},\mathbf{P}_{2}\in N\!C_{\alpha}}\mu_{\mathcal{N\!C}_{\alpha}}(\mathbf{P}_{1},\mathbf{P}_{2})x^{\mathrm{bump}(\mathbf{P}_{1})}y^{\mathrm{bump}(\mathbf{P}_{2})}.
\end{equation}
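The M{\"o}bius function entering \eqref{eq:m_triangle_nc} can be computed directly from the order relation via its defining recursion $\mu(x,x)=1$ and $\mu(x,y)=-\sum_{x\leq z<y}\mu(x,z)$. A generic Python sketch, illustrated here on a Boolean lattice rather than on $\mathcal{N\!C}_{\alpha}$ (the helper \texttt{mobius\_function} is ours):

```python
from functools import lru_cache
from itertools import combinations

def mobius_function(elems, leq):
    """Mobius function of a finite poset (elems, leq), via the defining
    recursion mu(x, x) = 1 and mu(x, y) = -sum_{x <= z < y} mu(x, z)."""
    elems = tuple(elems)

    @lru_cache(maxsize=None)
    def mu(x, y):
        if x == y:
            return 1
        if not leq(x, y):
            return 0
        return -sum(mu(x, z) for z in elems if leq(x, z) and leq(z, y) and z != y)

    return mu

# sanity check on the Boolean lattice of subsets of {0, 1, 2},
# where mu(bottom, top) = (-1)^3 = -1
subsets = [frozenset(c) for k in range(4) for c in combinations(range(3), k)]
mu = mobius_function(subsets, lambda a, b: a <= b)
print(mu(frozenset(), frozenset({0, 1, 2})))   # -1
```

Feeding the cover relations of $\mathcal{N\!C}_{\alpha}$ (or of the core label order) into such a routine yields the coefficients of the corresponding $M$-triangle.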
Figure~\ref{fig:pnc_311} shows the poset $\mathcal{N\!C}_{(3,1,1)}$. We may directly compute $\overline{M}_{(3,1,1)}(x,y)$ from this picture and obtain
\begin{displaymath}
\overline{M}_{(3,1,1)}(x,y) = 6x^{2}y^{2} - 15xy^{2} + 9y^{2} + 7xy - 7y + 1.
\end{displaymath}
\begin{figure}
\centering
\includegraphics[scale=1,page=18]{para_figures.pdf}
\caption{The poset $\mathcal{N\!C}_{(3,1,1)}$.}
\label{fig:pnc_311}
\end{figure}
A priori, one should not expect any sort of connection between the polynomials $H_{\alpha}(x,y)$ and $\overline{M}_{\alpha}(x,y)$. However, computational evidence suggests that whenever $\alpha=(a,1,\ldots,1)$ or $\alpha=(1,\ldots,1,a)$ for some positive integer $a$, where $r$ denotes the number of parts, the following identity holds:
\begin{displaymath}
H_{\alpha}(x,y) = \Bigl(x(y-1)+1\Bigr)^{r-1}\overline{M}_{\alpha}\left(\frac{y}{y-1},\frac{x(y-1)}{x(y-1)+1}\right).
\end{displaymath}
This equality can for instance be witnessed in the case $\alpha=(3,1,1)$ by explicit computation.
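Concretely, since both sides are rational in $(x,y)$, the equality for $\alpha=(3,1,1)$ can be checked with exact rational arithmetic at sample points; a short Python verification using the two polynomials displayed above:

```python
from fractions import Fraction as Fr

# the two polynomials displayed above for alpha = (3,1,1), which has r = 3 parts
H = lambda x, y: x**2*y**2 + 2*x**2*y + 3*x**2 + 2*x*y + 5*x + 1
Mbar = lambda x, y: 6*x**2*y**2 - 15*x*y**2 + 9*y**2 + 7*x*y - 7*y + 1
r = 3

def rhs(x, y):
    # right-hand side of the H = M-bar transformation
    return (x*(y - 1) + 1)**(r - 1) * Mbar(y/(y - 1), x*(y - 1)/(x*(y - 1) + 1))

# exact check at rational sample points (avoiding the poles y = 1 and x(y-1)+1 = 0)
for x in (Fr(1), Fr(2), Fr(1, 3), Fr(5)):
    for y in (Fr(2), Fr(3), Fr(7, 2), Fr(5)):
        assert H(x, y) == rhs(x, y)
print("identity confirmed at 16 sample points")
```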
Theorem~\ref{thm:parabolic_tamari_alternate} implies that in the cases for which the previous equality is conjectured to hold we have an isomorphism $\mathcal{N\!C}_{\alpha}\cong\mathrm{CLO}(\mathcal{T}_{\alpha})$. It may thus be worthwhile to see what happens if we replace $\mu_{\mathcal{N\!C}_{\alpha}}$ by $\mu_{\mathrm{CLO}(\mathcal{T}_{\alpha})}$ in \eqref{eq:m_triangle_nc}. We define
\begin{displaymath}
M_{\alpha}(x,y) \stackrel{\mathrm{def}}{=} \sum_{\mathbf{P}_{1},\mathbf{P}_{2}\in N\!C_{\alpha}}\mu_{\mathrm{CLO}(\mathcal{T}_{\alpha})}(\mathbf{P}_{1},\mathbf{P}_{2})x^{\mathrm{bump}(\mathbf{P}_{1})}y^{\mathrm{bump}(\mathbf{P}_{2})}.
\end{displaymath}
By inspection of Figure~\ref{fig:parabolic_tamari_121_clo} we find that
\begin{displaymath}
M_{(1,2,1)}(x,y) = 4x^{2}y^{2} - 9xy^{2} + 5y^{2} + 5xy - 5y + 1.
\end{displaymath}
In comparison, we have
\begin{displaymath}
\overline{M}_{(1,2,1)}(x,y) = 4x^{2}y^{2} - 10xy^{2} + 6y^{2} + 5xy - 5y + 1,
\end{displaymath}
since there exists an additional cover relation in $\mathcal{N\!C}_{(1,2,1)}$ (which relates the partitions corresponding to $4|12|3$ and $3|24|1$). Figure~\ref{fig:h_triangle_121} shows the nonnesting $(1,2,1)$-partitions together with the term they contribute to $H_{(1,2,1)}(x,y)$. We see that
\begin{align*}
H_{(1,2,1)}(x,y) & = x^{2}y^{2} + 2x^{2}y + 2xy + x^{2} + 3x + 1\\
& = \Bigl(x(y-1)+1\Bigr)^{2}M_{(1,2,1)}\left(\frac{y}{y-1},\frac{x(y-1)}{x(y-1)+1}\right).
\end{align*}
\begin{figure}
\centering
\includegraphics[scale=1,page=19]{para_figures.pdf}
\caption{The elements of $N\!N_{(1,2,1)}$ together with the term they contribute to $H_{(1,2,1)}(x,y)$.}
\label{fig:h_triangle_121}
\end{figure}
Indeed, computer verification suggests that the equality
\begin{displaymath}
H_{\alpha}(x,y) = \Bigl(x(y-1)+1\Bigr)^{r-1}M_{\alpha}\left(\frac{y}{y-1},\frac{x(y-1)}{x(y-1)+1}\right)
\end{displaymath}
holds if and only if $\alpha$ is a composition of $n$ into $r$ parts of which at most one exceeds $1$. This is Conjecture~\ref{conj:parabolic_hm}.
We may even go one step further and define another rational function, the \defn{$F_{\alpha}$-triangle}, by
\begin{displaymath}
F_{\alpha}(x,y) \stackrel{\mathrm{def}}{=} x^{r-1}H_{\alpha}\left(\frac{x+1}{x},\frac{y+1}{x+1}\right).
\end{displaymath}
Using this definition, we compute
\begin{align*}
F_{(1,2,1)}(x,y) & = 5x^{2} + 4xy + y^{2} + 9x + 4y + 4,\\
F_{(3,1,1)}(x,y) & = 9x^{2} + 4xy + y^{2} + 15x + 4y + 6.
\end{align*}
In fact, computer experiments suggest that $F_{\alpha}(x,y)$ is a polynomial with nonnegative integer coefficients precisely in the situation of Conjecture~\ref{conj:parabolic_hm}.
\begin{conjecture}\label{conj:parabolic_f}
Let $n>0$ and let $\alpha$ be a composition of $n$ into $r$ parts. The rational function $F_{\alpha}(x,y)$ is a polynomial with nonnegative integer coefficients if and only if $\alpha$ has at most one part exceeding $1$.
\end{conjecture}
It is an intriguing challenge to give a combinatorial definition of $F_{\alpha}$, \text{i.e.}\; to find a combinatorial family $X$ indexed by $\alpha$ and two statistics $\sigma_{1}$ and $\sigma_{2}$ on $X$ such that
\begin{displaymath}
F_{\alpha}(x,y) = \sum_{a\in X}x^{\sigma_{1}(a)}y^{\sigma_{2}(a)}.
\end{displaymath}
Contrasting the previous examples, we give the $M_{\alpha}$-, $H_{\alpha}$-, and $F_{\alpha}$-triangles for $\alpha=(2,1,2)$:
\begin{align*}
M_{(2,1,2)}(x,y) & = x^{3}y^{3} - 4x^{2}y^{3} + 5xy^{3} + 9x^{2}y^{2} - 2y^{3} - 22xy^{2} + 13y^{2} + 8xy - 8y + 1,\\
H_{(2,1,2)}(x,y) & = x^{2}y^{2} + x^{3} + 2x^{2}y + 6x^{2} + 2xy + 6x + 1,\\
F_{(2,1,2)}(x,y) & = \frac{14x^{3}+4x^{2}y + xy^{2} + 25x^{2} + 4xy + 12x + 1}{x}.
\end{align*}
We see directly that the property stated in Conjecture~\ref{conj:parabolic_f} does not hold, and the reader may verify that the equality stated in Conjecture~\ref{conj:parabolic_hm} does not hold either.
We conclude with the remark that the claims of Conjectures~\ref{conj:parabolic_hm} and \ref{conj:parabolic_f} are known to hold in the case $\alpha=(1,1,\ldots,1)$; see \cite{athanasiadis07on}*{Theorem~1.1} and \cite{thiel14on}*{Theorem~2}. Other definitions of $H$-, $M$- and $F$-triangles (in similar contexts) appear in \cite{armstrong09generalized}*{Theorem~1} and \cite{garver17enumerative}.
\section*{Acknowledgements}
I thank Robin Sulzgruber for raising my interest in Chapoton triangles, and I am indebted to Christian Krattenthaler for his invaluable advice.
% arXiv:2103.03138 -- Numerical reconstruction of curves from their Jacobians
\section{Introduction}
\label{introduction}
The Torelli theorem is a classical and foundational result in algebraic geometry, stating that a Riemann surface, or smooth algebraic curve, $C$ is uniquely determined by its Jacobian variety $J(C)$. More concretely, the theorem says that a Riemann surface of genus $g$ can be recovered from one Riemann matrix $\tau$ that represents its Jacobian, together with the theta function
\begin{equation}\label{eq:theta}
\theta\colon \mathbb{C}^g \times \mathbb{H}_g \longrightarrow \mathbb{C},
\qquad \theta({\bf z},\tau) := \sum_{n\in \mathbb{Z}^g} \exp\left( \pi i n^t\tau n + 2\pi i n^t{\bf z} \right)
\end{equation}
where $\mathbb{H}_g$ is the Siegel upper-half space of $g\times g$ symmetric complex matrices with positive definite imaginary part. There are various proofs of Torelli's theorem, which can even be made concrete in computational terms. Most proofs rely on the geometry of the theta divisor. This is the locus inside the Jacobian variety $J(C) = \mathbb{C}^g / (\mathbb{Z}^g + \tau\mathbb{Z}^g)$ which is cut out by the theta function:
\[ \Theta = \{ {\bf z}\in J(C) \,|\, \theta({\bf z},\tau) = 0 \}. \]
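Although $\theta$ is transcendental, the series \eqref{eq:theta} converges extremely fast: the positive definiteness of $\operatorname{Im}\tau$ makes the tail decay like a Gaussian. A minimal truncated evaluator in Python (for illustration only; serious computations should use dedicated packages such as \texttt{Theta.jl}):

```python
import cmath
from itertools import product

def theta(z, tau, N=8):
    """Truncated Riemann theta series: sum over n in [-N, N]^g.  Since Im(tau)
    is positive definite, the omitted tail decays like exp(-c N^2)."""
    g = len(z)
    total = 0j
    for n in product(range(-N, N + 1), repeat=g):
        q = sum(n[i]*tau[i][j]*n[j] for i in range(g) for j in range(g))
        lin = sum(n[i]*z[i] for i in range(g))
        total += cmath.exp(cmath.pi*1j*(q + 2*lin))
    return total

# g = 1, tau = i: theta(0, i) = sum_n exp(-pi n^2) = pi^(1/4)/Gamma(3/4) = 1.08643481...
print(theta([0], [[1j]]).real)
```

The series is exactly $1$-periodic in each coordinate of ${\bf z}$, which gives a cheap consistency check for any implementation.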
For example, suppose that the Riemann surface $C$ is not hyperelliptic, so that we can identify $C$ with a canonical model $C\subseteq \mathbb{P}^{g-1}$. Then for any singular point ${\bf z}\in \Theta_{\rm sing}$ of the theta divisor, the corresponding Hessian matrix
$(\frac{\partial^2 \theta}{\partial z_i \partial z_j}({\bf z},\tau))$
defines a quadric in the projective space $\mathbb{P}^{g-1}$. By a result of Green \cite{Green}, such quadrics span the space of quadrics in the ideal of the curve. Hence, if the curve is not trigonal or a smooth plane quintic, these quadrics generate the whole canonical ideal. This result has been extended by Kempf and Schreyer, who gave a way to recover the curve from a single singular point \cite{KS}. In particular, this gives a powerful effective reconstruction of the curve, provided that we are able to solve the system
\begin{equation}\label{eq:singularpointstheta}
\theta({\bf z},\tau) = \frac{\partial \theta}{\partial z_1 }(z,\tau) = \dots = \frac{\partial \theta}{\partial z_g}({\bf z},\tau) = 0.
\end{equation}
This has been implemented numerically for curves of genus $4$ in \cite{ChuKumStu}, but it is a rather hard task in general, since the theta function is inherently transcendental. Moreover, this problem is also quite sensitive to the precision of the data: for example, if we move $\tau$ a bit, the corresponding theta divisor will not have singular points. There are various other proofs of the Torelli theorem, but many involve solving systems of equations such as \eqref{eq:singularpointstheta}.
Hence, we look for different, more algebraic methods. Such a strategy was proposed by Dubrovin \cite{Dub81}, building on Krichever's work \cite{Kri} on algebraic curves and the Kadomtsev-Petviashvili (KP) equation:
\begin{equation}
\label{eq:KP1}
\frac{\partial}{\partial x}\left( 4u_t - 6uu_x -u_{xxx} \right)\, = \, 3u_{yy}.
\end{equation}
More precisely, for each Riemann surface $C$ of genus $g$ there exists a threefold $\mathcal{D}_C$ in a weighted projective space $\mathbb{WP}^{3g-1}$ parametrizing triples $(U,V,W)$ such that the function
\begin{equation}
\label{eq:u_tau}
u(x,y,t) \,\,=\,\, 2\frac{\partial^2}{\partial x^2} \log \uptau(x,y,t)+c, \qquad \uptau(x,y,t) := \theta(Ux+Vy+Wt+D,\tau)
\end{equation}
is a solution to the KP equation \eqref{eq:KP1} for any $D\in \mathbb{C}^g$ and some $c\in \mathbb{C}$. This threefold was called the \emph{Dubrovin threefold} in \cite{AgoCelStu} and it was studied there from a computational point of view. Two properties of this object are important for our purposes: first, $\mathcal{D}_C$ is cut out by explicit equations whose coefficients are derivatives of theta functions (with characteristics) evaluated at zero. These can be computed explicitly with software for the evaluation of theta functions, such as \texttt{Theta.jl} in Julia \cite{julia}. Second, the projection of $\mathcal{D}_C$ onto the projective space $\mathbb{P}^{g-1}_U$ of the coordinates $u_1,\dots,u_g$ consists exactly of the canonical model of the curve $C\subseteq \mathbb{P}^{g-1}_U$.
Hence, equations for the canonical model of $C$ can be obtained by eliminating the variables $V,W$ from the equations of the Dubrovin threefold $\mathcal{D}_C$, a purely algebraic process. In conclusion, this allows us to recover the curve from the Riemann matrix $\tau$ without having to solve a transcendental system such as \eqref{eq:singularpointstheta}.
In this note, we explain how to implement this strategy effectively, using the methods of numerical algebraic geometry. In Section \ref{sec:Dubrovin}, we explain the background behind the Dubrovin threefold and we state the key Lemma \ref{lemma:quarticelimination}, which explains how to recover equations for the curve. In particular, this recovers quartic equations, but we also discuss the case of quadrics and cubics. Furthermore, we comment on applications of these methods to the classical Schottky problem. In Section \ref{dubrovin} we state the algorithm and analyze its complexity. Moreover, even though our focus is on methods that avoid finding singular points, these methods can be very useful when we are able to solve the transcendental system \eqref{eq:singularpointstheta}, and we comment on this in Section \ref{SingularPointsTheta}. We conclude by presenting numerical experiments with curves of genus 3 to 7, which we carried out with the package \texttt{RiemannSurfaces} in Sage and the packages \texttt{Theta.jl} and \texttt{HomotopyContinuation.jl} in Julia.
\section{The Dubrovin Threefold}\label{sec:Dubrovin}
We start by recalling some background on the Dubrovin threefold, following \cite{AgoCelStu,Dub81}. Let $C$ be a smooth projective algebraic curve, or compact Riemann surface, of genus $g$. We fix a symplectic basis $\,a_1,b_1,\dots,a_g,b_g\,$ for the
first homology group $H_1(C,\mathbb{Z})$ and we choose a \emph{normalized basis} $\omega_1,\omega_2,\dots,\omega_g$ of holomorphic differentials, meaning that
\begin{equation}\label{eq:adaptedbasis}
\int_{a_i} \omega_j\,\,=\,\, \delta_{ij} .
\end{equation}
The corresponding Riemann matrix $\tau \in \mathbb{H}_g$ is defined as
\begin{equation}\label{eq:riemannmatrix}
\tau = \left( \int_{b_i}\omega_j \right)_{1\leq i,j \leq g}.
\end{equation}
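For illustration, the normalization \eqref{eq:adaptedbasis}--\eqref{eq:riemannmatrix} amounts to simple linear algebra: if $A$ and $B$ collect the $a$- and $b$-periods of an \emph{arbitrary} basis of holomorphic differentials, then the basis $\omega A^{-1}$ is normalized and its Riemann matrix is $BA^{-1}$. A toy genus-$2$ check in Python (the matrices are made up for the example):

```python
# 2x2 complex helpers, enough for a genus-2 illustration
def mat_mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_inv(X):
    det = X[0][0]*X[1][1] - X[0][1]*X[1][0]
    return [[X[1][1]/det, -X[0][1]/det], [-X[1][0]/det, X[0][0]/det]]

tau0 = [[1j, 0.5], [0.5, 1j]]            # a point of the Siegel upper-half space H_2
C = [[1 + 2j, 0.3], [-1j, 2 - 1j]]       # an arbitrary invertible change of basis
A, B = C, mat_mul(tau0, C)               # periods A_{ij} = int_{a_i} w_j, B_{ij} = int_{b_i} w_j
tau = mat_mul(B, mat_inv(A))             # Riemann matrix of the normalized basis
print(all(abs(tau[i][j] - tau0[i][j]) < 1e-12 for i in range(2) for j in range(2)))   # True
```

In practice the periods $A,B$ themselves come from numerical integration on the Riemann surface, as provided for instance by the Sage package mentioned in the introduction.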
One can see that the function \eqref{eq:u_tau} is a solution to the KP equation \eqref{eq:KP1} for all values of $D\in \mathbb{C}^g$ and a certain value of $c\in \mathbb{C}$ if and only if there exists a $d\in \mathbb{C}$ such that the following quartic PDE, known as the Hirota bilinear equation, is satisfied:
\begin{equation}\label{eq:hirota}
\!\! ( \uptau_{xxxx}\uptau -4\uptau_{xxx}\uptau_x + 3\uptau_{xx}^2)
\,+\,4 (\uptau_{x}\uptau_t \,-\, \uptau \uptau_{xt}) \,+\, 6 c\, ( \uptau_{xx}\uptau \, -\, \uptau_x^2)
\,+\, 3 (\uptau \uptau_{yy}\,-\, \uptau_y^2)\,+\,8d\,\uptau^2
\,\,= \,\,0.
\end{equation}
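The sign conventions in \eqref{eq:hirota} can be sanity-checked on the classical one-soliton tau function $\uptau = 1+e^{kx+ly+\omega t}$, which satisfies the bilinear equation with $c=d=0$ precisely when $4k\omega=k^{4}+3l^{2}$; a short numerical check in Python:

```python
from math import exp

# one-soliton check of the bilinear equation with c = d = 0:
# tau = 1 + e^{kx + ly + wt} solves it iff 4kw = k^4 + 3l^2
k, l = 1.3, 0.7
w = (k**4 + 3*l**2) / (4*k)

def hirota_residual(x, y, t):
    E = exp(k*x + l*y + w*t)
    tau = 1 + E
    # partial derivatives of tau, computed analytically
    tx, ty, tt = k*E, l*E, w*E
    txx, tyy, txt = k*k*E, l*l*E, k*w*E
    txxx, txxxx = k**3*E, k**4*E
    return ((txxxx*tau - 4*txxx*tx + 3*txx**2)
            + 4*(tx*tt - tau*txt)
            + 3*(tau*tyy - ty**2))

print(abs(hirota_residual(0.2, -0.5, 0.1)))   # vanishes up to rounding error
```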
We now introduce a weighted projective space $\mathbb{WP}^{3g+1}$ with variables $(U,V,W,c,d)$ where the $U=(u_1,\dots,u_g)$ have degree $1$, the $V=(v_1,\dots,v_g)$ have degree $2$, the $W=(w_1,\dots,w_g)$ have degree $3$ and finally $c,d$ have degree $2$ and $4$ respectively. The \emph{big Dubrovin threefold} $\mathcal{D}_{C}^{\rm big}$ parametrizes all elements $(U,V,W,c,d)$, with $U\ne \mathbf{0}$, such that $\uptau(x,y,t)$ in \eqref{eq:u_tau} is a solution to the Hirota bilinear equation \eqref{eq:hirota} for all $D\in \mathbb{C}^g$. The projection of this variety to the space $\mathbb{WP}^{3g-1}$ of the $(U,V,W)$ is called simply the \emph{Dubrovin threefold} $\mathcal{D}_{C}$.
Equations for $\mathcal{D}^{\rm big}_C$ can be obtained directly from \eqref{eq:hirota} as follows. Given any ${\bf z}$ in $\mathbb{C} ^g$, we write the Riemann theta function as $\theta({\bf z}) = \theta({\bf z} , \tau)$. Then we consider the differential operator $\partial_U := u_1 \frac{\partial}{\partial z_1} + \cdots + u_g \frac{\partial}{\partial z_g}$, and the analogous operators $\partial_V, \partial_W$. For any fixed vector $\mathbf{z}\in\mathbb{C}^g$, the \emph{Hirota quartic} $H_{\bf z}$ is defined as:
\begin{multline}\label{eq:hirotaquartics}
\left(\partial_U^4\theta({\bf z}) \cdot \theta({\bf z})-4\partial^3_U\theta({\bf z})\cdot \partial_U\theta({\bf z})+3\{\partial^2_U\theta({\bf z})\}^2\right)+4\cdot \left(\partial_U\theta({\bf z})\cdot \partial_W\theta({\bf z})-\theta({\bf z}) \cdot\partial_U\partial_W \theta({\bf z})\right)\\+6c\cdot \left( \partial^2_U\theta({\bf z})\cdot \theta({\bf z}) - \{\partial_U\theta({\bf z})\}^2 \right) + 3\cdot \left(\theta({\bf z})\cdot \partial^2_V\theta({\bf z})-\{\partial_V\theta({\bf z})\}^2 \right) +8d\cdot \theta({\bf z})^2.
\end{multline}
This is exactly the expression obtained by substituting \eqref{eq:u_tau} into \eqref{eq:hirota}, hence the big Dubrovin threefold $\mathcal{D}^{\rm big}_C$ is cut out by the Hirota quartics $H_{\bf z}$, as $\mathbf{z}$ runs over all vectors in $\mathbb{C}^g$, see \cite[Proposition 4.2]{AgoCelStu}. The coefficients of $H_{\bf z}(U,V,W,c,d)$ are values of the theta function $\theta$ and of its partial derivatives at ${\bf z}$, and they can be computed using numerical software for evaluating theta functions and their derivatives. We use the {\tt Julia} package that is introduced in \cite{julia}.
This yields an infinite number of equations for the Dubrovin threefold. A finite set of equations can be obtained via the theta functions with characteristics $\varepsilon,\delta \in \{ 0,1\}^g$:
\begin{equation}
\label{eq:thetaFunctionChar}
\theta\begin{bmatrix} \varepsilon \\ \delta \end{bmatrix}({\bf z}\, |\, \tau)\,\,\, = \,\,\,
\sum_{n \in \mathbb{Z}^g} {\rm exp} \left( \pi i \left(n+\frac{\varepsilon}{2}\right)^T \tau \left(n+\frac{\varepsilon}{2}\right) + 2\pi i\left(n + \frac{\varepsilon}{2}\right)^T \left({\bf z} + \frac{\delta}{2}\right) \right).
\end{equation}
This coincides with the Riemann theta function \eqref{eq:theta} for $\varepsilon=\delta=0$ and in general it differs from it by an exponential factor. We consider the following function
\begin{equation}
\label{eq:doubled}
\hat{\theta}[\varepsilon]({{\bf z}})\,\, :=\,\, {\theta}\begin{bmatrix} \varepsilon \\ 0 \end{bmatrix}({\bf z}\, |\, 2\tau) .
\end{equation}
For fixed $\tau$, these complex numbers $\,\hat{\theta}[\varepsilon](0)\,$ at ${\bf z}=0$ are called \emph{theta constants}.
We use the term theta constant also for evaluations at ${\bf z}={\bf 0}$ of derivatives of (\ref{eq:doubled}).
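Evaluating \eqref{eq:thetaFunctionChar} numerically is a direct extension of the plain theta series; in the sketch below we use the standard normalization, with a factor $2\pi i$ in the linear term. As a sanity check, in genus $1$ at $\tau=i$ the resulting theta constants satisfy Jacobi's identity $\theta_3^4=\theta_2^4+\theta_4^4$:

```python
import cmath
from itertools import product

def theta_char(eps, delta, z, tau, N=8):
    """Truncated theta series with characteristics eps, delta in {0,1}^g,
    in the standard normalization (factor 2*pi*i in the linear term)."""
    g = len(z)
    total = 0j
    for n in product(range(-N, N + 1), repeat=g):
        m = [n[i] + eps[i]/2 for i in range(g)]
        q = sum(m[i]*tau[i][j]*m[j] for i in range(g) for j in range(g))
        lin = sum(m[i]*(z[i] + delta[i]/2) for i in range(g))
        total += cmath.exp(cmath.pi*1j*q + 2*cmath.pi*1j*lin)
    return total

# genus-1 sanity check at tau = i: Jacobi's identity theta3^4 = theta2^4 + theta4^4
t3 = theta_char([0], [0], [0], [[1j]])
t2 = theta_char([1], [0], [0], [[1j]])
t4 = theta_char([0], [1], [0], [[1j]])
print(abs(t3**4 - t2**4 - t4**4))   # vanishes up to rounding error
```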
With these conventions, we define the \emph{Dubrovin quartic} in $(U,V,W,c,d)$ associated to the half-characteristic $\varepsilon$ as:
\begin{equation}\label{eq:dubrovinquartic}
F[\varepsilon](U,V,W,c,d) \,\,:=\,\, \partial^4_U \hat{\theta}[\varepsilon]({\bf 0})-\partial_U\partial_{W}\hat{\theta}[\varepsilon]({\bf 0})+\frac{3}{2}c\cdot \partial^2_U\hat{\theta}[\varepsilon]({\bf 0})+\frac{3}{4}\partial^2_V\hat{\theta}[\varepsilon]({\bf 0})+d\hat{\theta}[\varepsilon]({\bf 0}).
\end{equation}
The Dubrovin and the Hirota quartics span the same vector subspace of $\,\mathbb{C} [U,V,W,c,d\,]_4$ as shown in \cite[Proposition 4.3]{AgoCelStu}, hence they also provide defining equations for $\mathcal{D}_C^{\rm big}$.
We come to the crucial point: the projection of the big Dubrovin threefold onto the projective space $\mathbb{P}^{g-1} = \mathbb{P}^{g-1}_{U}$ coincides exactly with the canonical model of the curve $C$ induced by the basis of holomorphic differentials of \eqref{eq:adaptedbasis}:
\begin{equation}\label{eq:canonicalmodel}
C \longrightarrow \mathbb{P}^{g-1}_U, \qquad p \mapsto \left[ \omega_1(p),\omega_2(p),\dots,\omega_g(p) \right]
\end{equation}
in particular, if the curve $C$ is not hyperelliptic, the canonical model is isomorphic to the curve $C$ itself. In algebraic terms, this means that the canonical model of \eqref{eq:canonicalmodel} can be recovered by eliminating the variables $V,W,c,d$ from the equations of the big Dubrovin threefold. This is reduced to a problem of linear algebra as follows: for any half-characteristic $\varepsilon \in \{0,1 \}^g$ write $Q[\varepsilon]$ for the Hessian matrix of the function $\hat{\theta}({\bf z})$, then, combining \cite[Lemma 4.6]{AgoCelStu} and \cite[Proposition 4.7]{AgoCelStu} we have:
\begin{lemma}\label{lemma:quarticelimination}
Let us denote by $V_{\tau} \subseteq \mathbb{C}[u_1,\dots,u_g]$ the vector space of linear combinations
\begin{equation}\label{eq:quarticelimination}
\sum_{\varepsilon \in \{0,1 \}^g} \lambda_{\varepsilon} \cdot \partial^4_U\hat{\theta}[\varepsilon],
\end{equation}
where the $2^g$ complex scalars $\lambda_{\varepsilon}$ satisfy the linear equations
\begin{equation}\label{eq:linearcondition}
\sum_{\varepsilon} \lambda_{\varepsilon} \cdot Q[\varepsilon] \,=\, 0 \qquad {\rm and} \qquad
\sum_{\varepsilon} \lambda_{\varepsilon} \cdot \hat{\theta}[\varepsilon] \,=\, 0.
\end{equation}
Then a linear combination of the Dubrovin quartics is independent of $c,d$ if and only if it belongs to $V_{\tau}$. Furthermore, $V_{\tau}$ has dimension $\,2^g-\frac{g(g+1)}{2}-1$ and the corresponding quartics \eqref{eq:quarticelimination} cut out the canonical model \eqref{eq:canonicalmodel} of the curve $C$.
\end{lemma}
Hence, if the curve $C$ is not hyperelliptic, this lemma gives a way to recover the curve from the Riemann matrix $\tau$ that relies only on evaluations of the theta function and its derivatives.
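Computationally, the elimination in Lemma~\ref{lemma:quarticelimination} is pure linear algebra: stacking the $\frac{g(g+1)}{2}+1$ conditions \eqref{eq:linearcondition} on the $2^g$ unknowns $\lambda_{\varepsilon}$ into a matrix, the space $V_{\tau}$ corresponds to its kernel, and for generic data the conditions are independent, which accounts for the dimension count. A schematic Python check for $g=4$ (a Vandermonde block stands in for the actual theta constants, which is an assumption made only for illustration):

```python
from fractions import Fraction

g = 4
num_chars = 2**g                       # one unknown lambda_eps per eps in {0,1}^g
num_conditions = g*(g + 1)//2 + 1      # g(g+1)/2 Hessian entries plus one scalar condition

# one row per linear condition on the lambda's, one column per characteristic;
# a Vandermonde block serves as deterministic "generic" data
M = [[Fraction((j + 1)**i) for j in range(num_chars)] for i in range(num_conditions)]

def rank(rows):
    """Rank via exact Gaussian elimination over the rationals."""
    rows = [r[:] for r in rows]
    rk = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][c] != 0:
                f = rows[i][c] / rows[rk][c]
                rows[i] = [a - f*b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

nullity = num_chars - rank(M)
print(nullity)   # 5 = 2^4 - 4*5/2 - 1, matching dim V_tau for g = 4
```

In the actual algorithm the same kernel computation is performed on the matrices $Q[\varepsilon]$ and scalars $\hat{\theta}[\varepsilon]$ obtained from $\tau$.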
\subsection{Recovering quadrics and cubics}\label{sec:quadrics}
Lemma \ref{lemma:quarticelimination} allows us to recover a linear space of quartics that cut out a canonical model of $C$. However, it is also possible to recover quadric equations. We start with the following basic observation: if in the space $V_{\tau}$ we can find a quartic of the form $Q(U)^2$, then $Q$ is a quadric containing the curve $C$. We can actually find such special quartics inside $V_{\tau}$: indeed, suppose that $\mathbf{z}_0 \in \mathbb{C}^g$ is a singular point of the theta divisor $\Theta$. Then the Hirota quartic $H_{\bf z_0}$ becomes:
\begin{equation}\label{eq:hirotasingular}
H_{\bf z_0} = 3\left( \partial^2_U \theta({\bf z_0})\right)^2
\end{equation}
and since this is independent of $c,d$, Lemma \ref{lemma:quarticelimination} tells us that $(\partial^2_U\theta({\bf z_0}))^2 \in V_{\tau}$. Furthermore, we know by Green's result mentioned in the introduction, that the quadrics $\partial^2_U\theta({\bf z_0})$ appearing in \eqref{eq:hirotasingular} span the whole vector space of quadrics in the ideal of the canonical curve. Hence, if $C$ is not hyperelliptic, trigonal, or a smooth plane quintic, such quadrics generate the canonical ideal of the curve. We again point out that, at least in principle, such quadrics can be computed by algebraic and not transcendental methods. Indeed, this corresponds to intersecting the space $V_{\tau}$ with the subvariety in $\mathbb{C}[U]_4$ given by quartics of the form $Q^2$, so it amounts to solving a polynomial system of equations in the space $V_{\tau}$.
We also briefly discuss the case of cubics, which can appear if the curve is trigonal or a smooth plane quintic. In general, if $\mathbf{z}_0 \in \Theta$ is a singular point of the theta divisor, the cubic equation $\partial_U^3\theta({\bf z}_0)$ belongs to the canonical ideal of the curve \cite{KS}. If we apply the operator $\partial_U$ to the Hirota quartic $H_{\bf z}$ and evaluate it at $\mathbf{z} = \mathbf{z}_0$, we obtain the quintic equation
\begin{equation}\label{eq:hirotadersingular}
{\partial_UH_{\bf z}}_{|\mathbf{z}=\mathbf{z}_0} = 2(\partial_U^2 \theta({\bf z_0}))(\partial^3_U \theta({\bf z_0}))
\end{equation}
The quintic \eqref{eq:hirotadersingular} is a linear combination of the quintics $u_i \cdot F[\varepsilon]$, for $i=1,\dots,g$ and $\varepsilon \in \{0,1 \}^g$, so in principle we could try to proceed as for quadrics, and look for reducible quintics of the form $Q(U)\cdot T(U)$, where $\deg Q(U) = 2$ and $\deg T(U) =3$. However, this can be quite complicated, and it can be worthwhile to compute a singular point of the theta divisor directly. Since this can be useful in general, we discuss it in Section \ref{SingularPointsTheta}.
\subsection{Applications to the Schottky problem}\label{sec:schottky}
Up to now we have discussed the Torelli problem of reconstructing a smooth curve $C$ from a Riemann matrix $\tau$ of its Jacobian $J(C)$. Another fundamental question in this area is the Schottky problem \cite{Gru}, which asks, given a matrix $\tau \in \mathbb{H}_g$, whether this represents the Jacobian of a curve. This can be formulated in different ways with different possible solutions: see for example \cite{FGSM} for a very recent one. In particular, one of these was given by Krichever \cite{Kri} and Shiota \cite{Shi} via the KP equation. This solution can be formulated in terms of the Dubrovin threefold \cite[Section IV.4]{Dub81} by saying that $\tau\in \mathbb{H}_g$ represents a Jacobian if and only if the Dubrovin quartics \eqref{eq:dubrovinquartic} cut out a threefold.
In particular, we can check that a matrix $\tau \in \mathbb{H}_g$ does not represent a Jacobian by computing the quartics of Lemma \ref{lemma:quarticelimination} and then checking that they do not define a curve in $\mathbb{P}^{g-1}$. We verified this experimentally in Example \ref{example:schottky}.
\section{Numerical recovery}\label{dubrovin}
We can sum up the discussion of the previous section in the following algorithm. We have implemented it in Julia; the implementation can be found at \url{https://turkuozlum.wixsite.com/tocj}.
\vspace{5pt}
\begin{algorithm}[H]\label{alg:recovery}
\KwIn{A matrix $\tau\in \mathbb{H}_g$ representing the Jacobian of a non-hyperelliptic curve.}
\KwOut{Quartics that cut out the canonical model of the algebraic curve $C$ whose Riemann matrix is $\tau$.}
\KwSty{Step 1:} {Set up the linear system in~\eqref{eq:linearcondition} by computing the theta constants via the Julia package \texttt{Theta.jl}}.\\
\KwSty{Step 2:} Solve the linear system in \eqref{eq:linearcondition}.\\
\KwSty{Step 3:} Write the quartics~\eqref{eq:quarticelimination} and return them. \\
\caption{Recovery through the Dubrovin threefold}
\end{algorithm}
\vspace{5pt}
The algorithm is straightforward and we can easily analyze its complexity in terms of the genus $g$. In Step 1, we need to evaluate $2^g\cdot ( {g(g+1)}/{2} + 1)$
theta constants, coming from the matrices $Q[\varepsilon]$ and the scalars $\hat{\theta}[\varepsilon]$. Then, in Step 2, we need to solve a $({g(g+1)}/{2} +1 ) \times 2^g$ linear system of maximal rank. Finally, in Step 3, we need to compute the quartics \eqref{eq:quarticelimination}, which involves the evaluation of
$ 2^g \cdot {(g+3)(g+2)(g+1)g}/{24} $
theta constants. In our experiments, we considered examples from the literature up to genus $7$, so the linear system of Step 2 is of relatively small size and can be solved very quickly in Julia. What takes most of the time is the evaluation of the theta constants: the following table presents the approximate times to compute the theta constants in the examples below, with 12 digits of precision. In the table, $\partial^i$ indicates the order of the partial derivative of $\theta$ that we compute. The last column gives the time needed to run the entire algorithm.
\begin{center}
\begin{tabular}{ | c | c | c | c | c |}
\hline
genus & $\partial^0$ & $\partial^2$ & $\partial^4$ & total \\ \hline \hline
3 & 0.0009 sec & 0.001 sec & 0.002 sec & 5 sec \\ \hline
4 & 0.008 sec & 0.015 sec & 0.02 sec & 11 sec \\ \hline
5 & 0.07 sec & 0.15 sec & 0.23 sec & 9 min \\ \hline
6 & 2.1 sec & 4.2 sec & 6.9 sec & 12 h \\ \hline
7 & 6 sec & 8 sec & 10 sec & 60 h \\ \hline
\end{tabular}
\end{center}
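The operation counts above are easy to tabulate; a few lines of Python (our own illustration, independent of the Julia implementation) reproduce them, using that $(g+3)(g+2)(g+1)g/24 = \binom{g+3}{4}$.

```python
from math import comb

# Step 1 needs 2^g * (g(g+1)/2 + 1) theta constants; Step 3 needs
# 2^g * (g+3)(g+2)(g+1)g/24 = 2^g * C(g+3, 4) of them.
def step1_count(g):
    return 2**g * (g * (g + 1) // 2 + 1)

def step3_count(g):
    return 2**g * comb(g + 3, 4)

for g in range(3, 8):
    print(g, step1_count(g), step3_count(g))
```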
\subsection{Computing the Singular Points}\label{SingularPointsTheta}
As we explained before, one of the advantages of the Dubrovin threefold is that it allows us to recover the curve without computing a singular point of the theta divisor. However, computing such a point is also a very useful method in its own right, provided we manage to solve the transcendental system \eqref{eq:singularpointstheta}.
A Sage script that computes a singular point of the theta divisor in genus $4$ is presented in the article~\cite{ChuKumStu}. The basic idea is to solve system \eqref{eq:singularpointstheta} by numerical optimization, starting from a random input $z=a+\tau b$, where $a,b$ are real vectors with entries between $0$ and $1$. In our implementation, we use the function \texttt{optimize.root} from the \texttt{SciPy} package. We call this function with the method \texttt{lm}, based on the Levenberg-Marquardt algorithm, which speeds up the computation substantially in comparison with the \texttt{hybr} method. By default, \texttt{optimize.root} estimates the partial derivatives~\eqref{eq:singularpointstheta} of the given function numerically, via difference quotients. Instead, we used the partial derivatives implemented in the Sage package \texttt{abelfunctions}~\cite{abelFunctions}, which gave more accurate results. In our experiments, it took about $30$ minutes to compute one singular point in the case of genus $4$ and about $1.5$ hours in the case of genus $5$.
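To illustrate the optimization step, here is a minimal Levenberg--Marquardt iteration in pure Python (our own sketch, not the paper's code): the Riemann theta function is replaced by the toy function $\theta(z_1,z_2)=z_1^2-z_2^3$, whose divisor has a singular point at the origin, so the analogue of system \eqref{eq:singularpointstheta} is the simultaneous vanishing of $\theta$ and its two partial derivatives.

```python
# Minimal Levenberg-Marquardt sketch for locating a singular point of a
# divisor {theta = 0}, i.e. a common zero of theta and its partials.
# Toy stand-in for the Riemann theta function:
#     theta(z1, z2) = z1^2 - z2^3   (singular point of the divisor at (0, 0)).

def residuals(z):
    z1, z2 = z
    return [z1**2 - z2**3,   # theta
            2 * z1,          # d theta / d z1
            -3 * z2**2]      # d theta / d z2

def jacobian(z, h=1e-7):
    # Forward-difference Jacobian, as optimize.root would estimate it.
    f0 = residuals(z)
    cols = []
    for j in range(2):
        zp = list(z)
        zp[j] += h
        cols.append([(fi - f0i) / h for fi, f0i in zip(residuals(zp), f0)])
    return [[cols[0][i], cols[1][i]] for i in range(3)]  # 3x2, rows = equations

def lm_step(z, lam=1e-3):
    r, J = residuals(z), jacobian(z)
    # Solve the damped normal equations (J^T J + lam*I) dz = -J^T r (2x2, Cramer).
    a = sum(J[i][0] ** 2 for i in range(3)) + lam
    b = sum(J[i][0] * J[i][1] for i in range(3))
    d = sum(J[i][1] ** 2 for i in range(3)) + lam
    g0 = -sum(J[i][0] * r[i] for i in range(3))
    g1 = -sum(J[i][1] * r[i] for i in range(3))
    det = a * d - b * b
    return [z[0] + (g0 * d - b * g1) / det,
            z[1] + (a * g1 - b * g0) / det]

z = [0.7, 0.4]              # random-ish start, mimicking z = a + tau*b
for _ in range(300):
    z = lm_step(z)
# z is now close to the singular point (0, 0)
```

The damping parameter is kept fixed here for simplicity; a production implementation (such as SciPy's \texttt{lm}) adapts it at every step.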
\begin{remark}\label{rmk:changeofbasis}
Before presenting our experiments, we observe that it is often convenient to work with an arbitrary basis of differentials instead of a normalized one as in \eqref{eq:adaptedbasis}. For such an arbitrary basis $\widetilde{\omega}_1,\dots,\widetilde{\omega}_g$, we consider the corresponding $g \times g$ period matrices
\begin{equation}\label{eq:periodmatrices}
\Pi_a \,=\, \left( \int_{a_j} \widetilde{\omega}_i \right)_{ij} \quad
{\rm and} \qquad \Pi_b \,=\, \left( \int_{b_j} \widetilde{\omega}_i \right)_{ij}.
\end{equation}
Then we obtain a normalized basis of differentials as in \eqref{eq:adaptedbasis} and the corresponding Riemann matrix by taking
\begin{align}
( \omega_1, \omega_2,\ldots,\omega_g)^T &= \Pi_a^{-1} \,
( \widetilde{\omega}_1, \widetilde{\omega}_2,\ldots, \widetilde{\omega}_g)^T. \label{eq:basistransf} \\
\tau &= \Pi_a^{-1}\Pi_b . \label{eq:riemannmatrix}
\end{align}
\end{remark}
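As a toy numerical check of \eqref{eq:riemannmatrix} (our own illustration, with fabricated period matrices rather than actual periods of a curve): if $\Pi_b = \Pi_a\,\tau_0$ for a chosen symmetric $\tau_0$, then $\Pi_a^{-1}\Pi_b$ recovers $\tau_0$.

```python
# Toy illustration of tau = Pi_a^{-1} Pi_b in genus g = 2, pure Python.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(A):
    # Inverse of a 2x2 complex matrix.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

Pi_a = [[1 + 2j, 0.5j], [-1j, 2 + 1j]]   # arbitrary invertible "a-periods"
tau0 = [[1j, 0.3], [0.3, 2j]]            # symmetric, Im(tau0) positive definite
Pi_b = matmul(Pi_a, tau0)                # fabricated "b-periods"

tau = matmul(inv2(Pi_a), Pi_b)           # recovers tau0
```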
\subsection{Numerical experiments}
Finally, we present some examples illustrating our algorithm.
In our experiments, we start with an explicit plane affine model for a non-hyperelliptic curve, and possibly also its canonical model $C\subseteq \mathbb{P}^{g-1}$; we then use the package \texttt{RiemannSurface} of Sage \cite{BruSijZot} to compute a Riemann matrix $\tau$, on which we run Algorithm \ref{alg:recovery}. We then verify that the resulting quartics cut out the canonical curve we started with. We can do this explicitly in genus $3$, when the curve itself is a smooth plane quartic. In higher genera, we first verify that the quartics belong to the ideal of the curve by running the polynomial division algorithm, which returns a remainder of zero, up to a certain numerical approximation. Furthermore, to verify that the quartics cut out the curve set-theoretically, we compute the intersection with a hyperplane in $\mathbb{P}^{g-1}$ by adding a random linear form and solving the resulting polynomial system via homotopy continuation. This is the primary computational method in numerical algebraic geometry, and we used the Julia implementation \texttt{HomotopyContinuation.jl}~\cite{BreTim}. This computation returns $2g-2$ solutions, confirming that the quartics cut out a curve of degree $2g-2$.
We also tried to recover the quadrics vanishing on the curve using the method of Section \ref{sec:quadrics}. We set up the problem of finding elements of the form $Q(U)^2$ in the space of quartics returned by Algorithm \ref{alg:recovery}, and we solved it again via \texttt{HomotopyContinuation.jl}. We could do this in genus $4$.
In genera $4$ and $5$, we could also compute singular points of the theta divisor, using the methods of Section \ref{SingularPointsTheta}. With these singular points, we could compute quadric and cubic equations for the curve, as described in Section \ref{sec:quadrics}.
\begin{example}[Genus three]\label{example:Trott}
The Trott curve is a smooth plane quartic with affine model $ C = \{f(x,y) = 0\}$, where \[ f(x,y ) = 12^2(x^4+y^4)-15^2(x^2+y^2)+350x^2y^2+81. \]
In particular, this is already the canonical model, and the curve is of genus $3$ and not hyperelliptic. We compute a Riemann matrix using \texttt{RiemannSurface} in Sage \cite{BruSijZot}: in particular, the package uses the basis of differentials:
\[ \widetilde{\omega}_1 = \frac{1}{f_y}dx, \quad \widetilde{\omega}_2 = \frac{x}{f_y}dx, \quad \widetilde{\omega}_3 = \frac{y}{f_y}dx, \]
where $f_y$ denotes the derivative $\frac{\partial f}{\partial y}$, and then it computes the period matrices $\Pi_a$ and $\Pi_b$ as in \eqref{eq:periodmatrices}. The corresponding normalized Riemann matrix $\tau$ is
\[
\begin{tiny}
\begin{pmatrix}
1.06848368471179 + 0.723452867814272i & -0.305886633614305 + 0.123618182281837i & -0.160517941389541 - 0.206682546926085i \\
-0.305886633614305 + 0.123618182281837i & 0.776859918461210 + 1.25292663517205i & -0.626922516393387 - 0.289746911570334i \\
-0.160517941389541 - 0.206682546926085i & -0.626922516393387 - 0.289746911570334i & 0.376235735801471 + 0.484440302728207i \\
\end{pmatrix}.
\end{tiny}
\]
With this matrix, we set up the linear system~\eqref{eq:linearcondition} by computing the theta constants appearing in the expressions of the system~\cite{julia}. As expected, this has a unique solution, up to scalar multiplication, and we compute the corresponding quartic polynomial~\eqref{eq:quarticelimination}:
$$
\begin{scriptsize}
\begin{matrix}
(0.44055338231573327 - 0.11712521895532513i) u_1^4 + (2.094882287195226 + 7.879664904010854i) u_1^3 u_2
\\- (5.316458517368645 - 1.4134300016965646i) u_1^3 u_3 + (61.49338091003442 - 16.348587918073555i) u_1^2 u_2^2 \\+ (27.505923029039046 + 105.6412469122926i) u_1^2 u_2 u_3 - (43.67750279381081 - 12.658628276584892i) u_1^2 u_3^2 \\
- (0.20611709900405373 +0.7752863638524854i) u_1 u_2^3 + (142.137577271911 + 22.777083502115772i) u_1 u_2^2 u_3 \\
+ (101.16905240593528 + 146.6228999954985i) u_1 u_2 u_3^2 - (28.214458865117336 - 92.58798535078905i) u_1 u_3^3 \\
- (0.06519271764094459 - 0.017332091034810038i) u_2^4 - (0.016856400506870983 + 0.8256030828721883i) u_2^3 u_3
\\+ (64.66553470742735 + 38.49587006148285i) u_2^2 u_3^2 + (94.88897578016996 + 81.18194430047456i) u_2 u_3^3 \\
+ (33.080420780163195 + 41.521570514217885i) u_3^4.
\end{matrix}
\end{scriptsize}
$$
At a first glance, this might not look like the Trott curve. However, this equation is for the canonical model of $C$ with respect to a basis of normalized differentials $\omega_1,\omega_2,\omega_3$. If we go back to the differentials $\widetilde{\omega}_1,\widetilde{\omega}_2,\widetilde{\omega}_3$ via the change of coordinates in~\eqref{eq:basistransf}, we obtain the following quartic, after scaling the coefficients.
$$
\begin{tiny}
\begin{matrix}
81u_1^4 + (1.2223597321441586\cdot10^{-13} - 9.454838005323456\cdot10^{-14}i)u_1^3u_2 \\ + (2.9124976279639876\cdot10^{-13} + 1.1282283371974781\cdot10^{-13}i)u_1^3u_3 - (225.00000000000017 - 4.607401108070593\cdot10^{-13}i)u_1^2u_2^2 \\ + (3.669767553538813\cdot10^{-13} - 3.017230893609506\cdot10^{-13}i)u_1^2u_2u_3 - (224.99999999999986 + 5.357443148919295\cdot10^{-13}i)u_1^2u_3^2 \\ - (4.1371303331328177\cdot10^{-13} - 3.5463573271644895\cdot10^{-13}i)u_1u_2^3 - (8.382029113614384\cdot10^{-13} + 3.97078497125283\cdot10^{-13}i)u_1u_2^2u_3 \\ + (7.725484571929981\cdot10^{-13} + 2.34428275709395\cdot10^{-13}i)u_1u_2u_3^2 - (8.239810406206657\cdot10^{-13} + 2.6625152861265\cdot10^{-13}i)u_1u_3^3 \\ + (143.99999999999918 - 7.607569271465399\cdot10^{-13}i)u_2^4 - (9.177234341211958\cdot10^{-13} - 8.304604428951876\cdot10^{-13}i)u_2^3u_3 \\ + (350.0000000000026 + 1.2750714694427922\cdot 10^{-12}i)u_2^2u_3^2 - (1.3400119435300388\cdot10^{-13} - 5.996042934502803\cdot10^{-13}i)u_2u_3^3 \\ + (143.99999999999895 - 5.357443148919295\cdot10^{-14}i)u_3^4.
\end{matrix}
\end{tiny}
$$
This is nothing but the quartic defining the Trott curve, up to an error of $10^{-12}$. In particular, we can recover the exact equation if we round the coefficients to the nearest integer. We emphasize that this example is treated slightly differently here than in \cite[Example 4.8]{AgoCelStu}.
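As an elementary sanity check on the Trott quartic (our own addition): the four points $(\pm 1, 0)$ and $(0, \pm 1)$ lie on the curve, since $144 - 225 + 81 = 0$.

```python
# f(x, y) = 144(x^4 + y^4) - 225(x^2 + y^2) + 350 x^2 y^2 + 81,
# i.e. the affine model of the Trott curve with 12^2 = 144 and 15^2 = 225.
def trott(x, y):
    return 144 * (x**4 + y**4) - 225 * (x**2 + y**2) + 350 * x**2 * y**2 + 81

points = [(1, 0), (-1, 0), (0, 1), (0, -1)]
assert all(trott(x, y) == 0 for x, y in points)
```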
\end{example}
Inspired by this example, we repeated the same experiment with 20 plane quartics with integer coefficients bounded in absolute value by 100. We computed the period and Riemann matrices with 53 bits of precision and the theta constants with 12 digits of precision, and in the end we could recover the exact equation of the curve by rounding the coefficients to the nearest integer. Each experiment took approximately 4 seconds.
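The final rounding step is elementary; a sketch in Python (our own illustration, reusing coefficients of the same shape as in the quartic displayed above):

```python
# Recovered coefficients carry ~1e-12 noise in both the real and imaginary
# parts; snapping the real part to the nearest integer (after checking the
# noise really is small) recovers the exact equation.
noisy = [81 + 1.2e-13j,
         -225.00000000000017 + 4.6e-13j,
         143.99999999999918 - 7.6e-13j,
         350.0000000000026 + 1.3e-12j]

def snap(c, tol=1e-9):
    assert abs(c.imag) < tol and abs(c.real - round(c.real)) < tol
    return round(c.real)

exact = [snap(c) for c in noisy]
print(exact)  # -> [81, -225, 144, 350]
```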
\begin{example}[Genus four] \label{ex:genus4}
Moving on to the case of genus 4, we consider the canonical curve
\begin{equation}\label{canonical4}
C = \left\{ u_1 u_4 - u_2 u_3=0 \,,\,\, u_1^3 - u_2^3 - u_3^3 - u_4^3=0 \right\}.
\end{equation}
This has an affine plane model given by $\{ f(x,y) =0 \}$, where $f(x,y) = 1-x^3-y^3-x^3y^3$. We can recover the previous canonical model via the basis of differentials
\[ \widetilde{\omega}_1 = -\frac{1}{f_y}dx,\quad \widetilde{\omega}_2=-\frac{x}{f_y}dx, \quad \widetilde{\omega}_3 = -\frac{y}{f_y}dx,\quad \widetilde{\omega}_4 = -\frac{xy}{f_y}dx. \]
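With this basis, the canonical map sends an affine point $(x,y)$ of the curve to $[1:x:y:xy]$, so points of the affine model should satisfy both equations in \eqref{canonical4}. A short numerical check (our own illustration; one branch of the cube root is chosen via \texttt{cmath}):

```python
import cmath

# Affine model: f(x, y) = 1 - x^3 - y^3 - x^3*y^3 = 0. For a chosen x,
# solve y^3 = (1 - x^3) / (1 + x^3) and map the point to u = (1, x, y, x*y).
# Both equations of the canonical model should vanish:
#   u1*u4 - u2*u3 = 0   and   u1^3 - u2^3 - u3^3 - u4^3 = 0.

def point_above(x):
    y3 = (1 - x**3) / (1 + x**3)
    return x, cmath.exp(cmath.log(y3) / 3)   # one branch of the cube root

def canonical_residuals(x, y):
    u1, u2, u3, u4 = 1, x, y, x * y
    return u1 * u4 - u2 * u3, u1**3 - u2**3 - u3**3 - u4**3

x, y = point_above(0.3 + 0.2j)
q, c = canonical_residuals(x, y)
assert abs(q) < 1e-12 and abs(c) < 1e-12
```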
We can compute the $4 \times 4$ Riemann matrix $\tau$ via the plane model of the curve with the Sage package~\cite{BruSijZot}. This takes approximately 677 milliseconds for 53 bits, or about 16 digits, of precision.
To reconstruct the canonical model of the curve back from $\tau$, we compute the $5$ quartics in~\eqref{eq:quarticelimination} by solving the linear system in~\eqref{eq:linearcondition}.
By Lemma \ref{lemma:quarticelimination}, these 5 quartics cut out the canonical curve \eqref{canonical4} after the basis change~\eqref{eq:basistransf}. We can first verify that the transformed quartics belong to the ideal of $C$ by the polynomial division algorithm. We did this in Sage, working over the complex field with 200 digits of precision. The coefficients of the remainder of the division algorithm were all of size $10^{-15}$.
Then, to verify that these quartic equations cut out the curve, we use the Julia package \texttt{HomotopyContinuation.jl}~\cite{BreTim}. We add a random linear form to the polynomial system of our 5 quartics and then \texttt{HomotopyContinuation.jl} returns $6$ solutions, which is what we expect from a curve of degree $6$ in $\mathbb{P}^3$. Moreover, again via homotopy continuation methods, we can find a quadratic polynomial $Q(U)$ such that $Q(U)^2$ is in the linear space generated by the 5 quartics, as in Section \ref{sec:quadrics}.
After applying the change of basis in \eqref{eq:basistransf} and rescaling, we get the following expression for the quadric
$$
\begin{tiny}
\begin{matrix}
u_1 u_4 - (1.4829350744889013 \cdot 10^{-15} - 1.6682847904065378 \cdot 10^{-15}i) u_1 u_2 \\- (3.5425309660018567 \cdot 10^{-15} + 6.403641669846521 \cdot 10^{-16}i) u_1 u_3 (2.3679052278901118 \cdot 10^{-15} + 1.6728691462607347 \cdot 10^{-15}i) u_1^2 \\ + (1.8423363133604865 \cdot 10^{-15} - 3.1265312370929112 \cdot 10^{-15}i) u_2^2 - (1.0000000000000007 - 2.3672426468663847 \cdot 10^{-15}i) u_2 u_3 \\- (1.739413822171687 \cdot 10^{-15} - 2.775912191360744 \cdot 10^{-15}i)
u_2 u_4 + (3.3764916290020825 \cdot 10^{-15} - 6.001818720458345 \cdot 10^{-15}i) u_3^2\\ - (3.777550457759975 \cdot 10^{-16} -1.4453231221486755 \cdot 10^{-15}i) u_3 u_4 - (1.1268542006403671 \cdot 10^{-15} + 1.7990461567600933 \cdot 10^{-15}i) u_4^2
\end{matrix}
\end{tiny}
$$
In genus four, we can also numerically compute a singular point of the theta divisor. We do this in Sage, as described in Section \ref{SingularPointsTheta}, and we find the point:
\begin{equation*}
{\bf z}_0=(0.75+0.54819629i, 0.75-0.54819629i,0.5+0.33618324i,0.75+0.2120130i).
\end{equation*}
The theta function and its derivatives vanish at this point up to 13 digits.
With this, we can compute the quadric $\partial^2_U\theta({\bf z}_0)$ and the cubic $\partial^3_U\theta({\bf z}_0)$. After the usual change of coordinates \eqref{eq:basistransf}, the quadric becomes
$$
\begin{tiny}
\begin{matrix}
u_1u_4+(9.977112210552615\cdot 10^{-8} + 6.939529950175681\cdot 10^{-8}i)u_1^2 + (6.74409346713264\cdot 10^{-8} - 2.3021247380555947\cdot 10^{-15}
i)u_1u_2 \\
-(2.3274471968848503\cdot 10^{-8} + 5.037739720772648\cdot 10^{-8}i)u_1u_3+ (9.977111730319195\cdot 10^{-8} - 6.93953067335553\cdot 10^{-8}i)u_2^2 \\
-(0.9999999999999997 - 5.892639748496844\cdot 10^{-8}i)u_2u_3 + (2.3274465793326793\cdot 10^{-8} - 5.037739901039536\cdot 10^{-8}i)u_2u_4
\\
+ (3.9887820808350887\cdot 10^{-8} + 2.7942580923377634\cdot 10^{-8}i)u_3^2 + (1.3142269019133975\cdot 10^{-7} - 6.865574771844027\cdot 10^{-15}i)u_3u_4
\\
+ (3.988781716962214\cdot 10^{-8} - 2.7942586271411155\cdot 10^{-8}i)u_4^2.
\end{matrix}
\end{tiny}
$$
which coincides with the quadric $u_1u_4-u_2u_3$ of \eqref{canonical4}, up to about 10 digits of precision. Instead, the cubic equation that we obtain is:
$$
\begin{tiny}
\begin{matrix}
u_1^3 - (1.0000000001244782 + 4.8426070737375934 \cdot 10^{-11}i)u_2^3 \\
-(1.0000000001244924 + 4.8417923378918607 \cdot 10^{-11}i)u_3^3- (1.0000000002161082 + 3.561043256396786 \cdot 10^{-11}i)u_4^3 \\
+ (4.5851667064228973 \cdot 10^{-11} + 3.883459414267029 \cdot 10^{-11}i)u_1^2u_2-(5.655054824682744 \cdot 10^{-11} - 2.0264603303086512 \cdot 10^{-11}i)u_1^2u_3\\
-(4.562873576948296 \cdot 10^{-11} + 1.0723355355290677 \cdot 10^{-10}i)u_1u_2^2+ (4.416105308034146 \cdot 10^{-11} + 6.485802277816666 \cdot 10^{-11}i)u_2^2u_4 \\
- (2.4095153783271284 \cdot 10^{-11} - 3.821774264257014 \cdot 10^{-11}i)u_2u_4^2- (2.1061545135963428 \cdot 10^{-11} + 3.9985852337772294 \cdot 10^{-11}i)u_3u_4^2 \\
+ (3.407561378730717 \cdot 10^{-11} - 7.066441338212726 \cdot 10^{-11}i)u_3^2u_4- (7.006134562951018 \cdot 10^{-11} - 9.313087455058934 \cdot 10^{-11}i)u_1u_3^2\\
+ (5.3419725864118215 - 2.3068027651869776i)u_1^2u_4- (5.341972586241411 - 2.3068027651197216i)u_1u_2u_3\\
- (0.861775203997493 - 1.4926384376919832i)u_1u_2u_4+ (0.8617752039804856 - 1.492638437827363i)u_2^2u_3 \\
-(0.8617752037076406 + 1.4926384378593538i)u_1u_3u_4 + (0.8617752038333869 + 1.4926384379123137i)u_2u_3^2 \\
- (2.396786904582313 - 2.79440847281551i)u_1u_4^2+ (2.3967869046516648 - 2.794408472851858i)u_2u_3u_4
\end{matrix}
\end{tiny}
$$
We see that, up to an approximation of 9 digits, this is the cubic $u_1^3-u_2^3-u_3^3-u_4^3$ of \eqref{canonical4}, plus a linear combination of $u_i(u_1u_4-u_2u_3)$, for $i=1,2,3,4$. \end{example}
\begin{example}\label{genus5example}
Let $C$ be the genus $5$ curve with an affine plane equation given by the polynomial $f(x,y)=x^2y^4 + x^4 + x + 3$.
The differentials $\frac{1}{f_y}dx, \frac{x}{f_y}dx, \frac{xy}{f_y}dx, \frac{xy^2}{f_y}dx, \frac{x^2}{f_y}dx$ form a basis of the space of holomorphic differentials. The corresponding canonical model is given by the complete intersection of the three quadrics:
\begin{equation*}
\label{canonicalGenus5}
u_4^2+u_5^2+u_2u_1+3u_1^2, \quad
u_3^2-u_2u_4, \quad
u_2^2-u_5u_1.
\end{equation*}
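With the basis of differentials above, the canonical map is $(x,y)\mapsto[1:x:xy:xy^2:x^2]$, and one can check numerically that the three quadrics vanish on the image of the affine curve (our own sketch, choosing one branch of the fourth root):

```python
import cmath

# Affine model: f(x, y) = x^2*y^4 + x^4 + x + 3 = 0. For a chosen x, solve
# y^4 = -(x^4 + x + 3) / x^2 and map the point to u = (1, x, x*y, x*y^2, x^2).
# The three quadrics of the canonical model should vanish on u:
#   u4^2 + u5^2 + u2*u1 + 3*u1^2,   u3^2 - u2*u4,   u2^2 - u5*u1.

def point_above(x):
    y4 = -(x**4 + x + 3) / x**2
    return x, cmath.exp(cmath.log(y4) / 4)   # one branch of the fourth root

def quadric_residuals(x, y):
    u1, u2, u3, u4, u5 = 1, x, x * y, x * y**2, x**2
    return (u4**2 + u5**2 + u2 * u1 + 3 * u1**2,
            u3**2 - u2 * u4,
            u2**2 - u5 * u1)

x, y = point_above(1.0)
r = quadric_residuals(x, y)
assert all(abs(v) < 1e-10 for v in r)
```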
We compute the sixteen Dubrovin quartics~\eqref{eq:quarticelimination} and we check that, after the change of coordinates \eqref{eq:basistransf}, they belong to the ideal of $C$. We do this by polynomial division in Sage, over the complex field with 200 digits of precision as in Example~\ref{ex:genus4}. The coefficients of the remainder are of size $10^{-10}$.
We also check whether the quartics define a curve by adding a random linear form and solving the corresponding system via \texttt{HomotopyContinuation.jl}. We obtain $8$ solutions, which is exactly the degree of our canonical genus 5 curve. Here, one needs to increase the precision by about 15 digits when computing the Riemann matrix, which is required for computing the Dubrovin quartics.
In this example, we could also compute singular points of the theta divisor, as explained in Section \ref{SingularPointsTheta}. We computed three points $\mathbf{z}_1,\mathbf{z}_2,\mathbf{z}_3$ where the theta function and all its partial derivatives vanish up to an error of $10^{-10}$. Then we obtain three quadrics $\partial^2_U\theta(\mathbf{z}_1),\partial^2_U\theta(\mathbf{z}_2),\partial^2_U\theta(\mathbf{z}_3)$, which, after the usual change of variables \eqref{eq:basistransf}, can be expressed as three independent linear combinations of the quadrics in \eqref{canonicalGenus5}, again up to an error of $10^{-10}$.
\end{example}
In the following examples, we push our experiments to higher genera.
\begin{example}[genus $6$ and $7$] Here we take Wiman's sextic~\cite{Wim} and the butterfly curve~\cite{Fay} as our curves of genus 6 and 7, respectively. Their plane affine equations are:
\begin{align*}
&x^{6}+y^{6}+1+(x^{2}+y^{2}+1)(x^{4}+y^{4}+1)=12x^{2}y^{2}, \\
&\qquad \qquad \qquad \qquad \qquad x^{6}+y^{6}=x^{2}.
\end{align*}
We first compute their Riemann matrices numerically in Sage. Then, we compute the corresponding 42 and 99 quartics~\eqref{eq:quarticelimination} in $\mathbb{P}^5$ and $\mathbb{P}^6$, respectively. Using the homotopy continuation method in Julia, we could verify that they define curves of degree 10 and 12, as expected. We point out that in these cases we needed to increase the precision in the Riemann matrix computation: 200 bits of precision in genus 6 and 500 bits in genus 7 were enough for the homotopy continuation computation to terminate.
\end{example}
\begin{example}\label{example:schottky}
Finally, we discuss some numerical experiments related to the Schottky problem, as in Section \ref{sec:schottky}. We chose 100 random Riemann matrices in genus 4, computed the corresponding quartics as in Lemma \ref{lemma:quarticelimination}, added a random linear form, and solved the resulting system numerically via \texttt{HomotopyContinuation.jl}. As expected, we found no solutions, confirming that the quartics do not cut out a curve in $\mathbb{P}^{3}$. We expect that this circle of ideas could lead to an effective numerical solution to the Schottky problem, and we will investigate this in future work.
\end{example}
{\bf Acknowledgements}: We would like to thank Nils Bruin, Bernard Deconinck, Bernd Sturmfels and Andr\'{e} Uschmajew for their useful comments and their support.
\vspace{-2mm}
% End of arXiv:2103.03138, ``Numerical reconstruction of curves from their Jacobians''
% (Algebraic Geometry (math.AG); Mathematical Physics (math-ph); Number Theory (math.NT)).
% arXiv:2005.05461
\title{Geometry and Algebra of the Deltoid Map}
\begin{abstract}
The geometry of the deltoid curve gives rise to a self-map of $\mathbb{C}^2$ that is expressed in coordinates by $f(x,y) = (y^2 - 2x, x^2 - 2y)$. This is one in a family of maps that generalize Chebyshev polynomials to several variables. We use this example to illustrate two important objects in complex dynamics: the Julia set and the iterated monodromy group.
\end{abstract}
\section{Introduction.}
Complex dynamics is perhaps best known for the fractal
images it produces. For instance, given a polynomial
function $\C \to \C$, an important set to consider is
the \emph{Julia set}, whose points behave ``chaotically''
under iteration of the function; for most polynomials,
the Julia set is a fractal. However, the Julia set is
a smooth curve in the case of two special families:
\emph{power maps}, having the form $z \mapsto z^d$,
and \emph{Chebyshev polynomials}, of which the simplest
example is $z \mapsto z^2 - 2$. For power maps, the
Julia set is the unit circle, and for Chebyshev polynomials
it is the segment $[-2,2]$, contained in the real line.
These structurally simple examples play a distinguished
role in complex dynamics, and studying them can illuminate
parts of the theory that apply in more complicated cases.
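The invariance of the segment $[-2,2]$ under the Chebyshev map $z \mapsto z^2 - 2$ follows from the identity $(2\cos\theta)^2 - 2 = 2\cos 2\theta$; a quick numerical illustration (our own, not part of the article):

```python
import math
import random

# For z = 2*cos(theta), the identity (2*cos(theta))^2 - 2 = 2*cos(2*theta)
# shows that z -> z^2 - 2 maps the segment [-2, 2] onto itself.
for _ in range(100):
    theta = random.uniform(0.0, math.pi)
    z = 2 * math.cos(theta)
    image = z * z - 2
    assert abs(image - 2 * math.cos(2 * theta)) < 1e-12
    assert -2 - 1e-12 <= image <= 2 + 1e-12
```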
Power maps have an obvious generalization to functions
from $\C^n$ to itself: just take the $d$th power of each
coordinate. The higher-dimensional analogues of Chebyshev
polynomials are not as obvious, however. In the 1980s,
Veselov \cite{aV86,aV91} and Hoffman--Withers \cite{mHwW88}
independently constructed a family of ``Chebyshev-like''
self-maps of $\C^n$ associated to each crystallographic
root system of rank $n$. The cases where $n = 2$ have
received much further attention (see, e.g.,
\cite{aL90,sN08,bRhM11,kU01,kU07,kU09,wW88}),
especially for the $A_2$ root system, which is connected
with the deltoid curve (a.k.a.\ three-cusped hypocycloid
or Steiner's hypocycloid).
This article presents a new approach to construct a
quadratic $A_2$-type map $f$ based directly on
geometric properties of the deltoid. For this reason
we call $f$ \emph{the deltoid map}. The set of lines
tangent to the deltoid will play a crucial role, and
indeed we will see that $f$ preserves this set of lines.
(This fact was previously observed in \cite{wW88}; the
difference is that we construct the map from the tangent
lines, rather than starting with the map ahead of time
and deducing from it the invariance of the tangent lines;
in particular, our approach does not use the theory of root
systems.) Using this invariance property, we will study
two dynamical features of $f$: one geometric (the Julia set)
and the other algebraic (the iterated monodromy group).
Both of these objects will be formally defined later in the
article.
The Julia set of $f$ is a real algebraic hypersurface $J$
of degree $4$ (Corollary~\ref{C:julia}). We derive this
property from a description of $J$ in terms of pedal
curves, which arise from classical differential geometry
(Theorem~\ref{T:pedal}). The Julia set of $f$ is therefore
considerably more interesting geometrically than in
the case of a Chebyshev polynomial in one variable,
the segment $[-2,2]$ mentioned above.
The iterated monodromy group of $f$ is an affine Coxeter
group (Theorem~\ref{T:img}). Such groups are present
implicitly in the construction from \cite{aV86,aV91} and
explicitly in \cite{mHwW88}. The connection with iterated
monodromy groups is new, however, and extends the
(very short) list of polynomial endomorphisms of $\C^n$,
with $n \ge 2$, whose iterated monodromy groups are known
(see \cite{jBsK10,vN12} for the only other examples known
to the author).
In future work, we will show how these properties of the
deltoid map generalize to other Chebyshev-like maps.
\section{Lines and planes.}
In this section we establish some notation and terminology.
The \emph{complex projective line} $\CP^1$ is identified
with the one-point compactification of $\C$
(i.e., the Riemann sphere) in the usual way; generally
$t \in \C \cup \{\infty\}$ will be used to mean
this extended complex coordinate on $\CP^1$.
The \emph{complex projective plane} $\CP^2$ has
homogeneous coordinates $[x:y:z]$, where $x$, $y$, and $z$
are complex numbers, not all zero; this means that
$[x:y:z] = [\alpha x:\alpha y:\alpha z]$ for all
$\alpha \in\C\setminus\{0\}$. We use $[a:b:c]^\vee$
to represent homogeneous coordinates on the
\emph{dual projective plane} $(\CP^2)^\vee$, whose
elements are the lines in $\CP^2$, so that
\[
[x:y:z] \; \in \; [a:b:c]^\vee
\qquad\iff\qquad
ax + by + cz = 0.
\]
The \emph{affine plane} $\C^2$ is canonically included in
$\CP^2$ via the map $(x,y) \mapsto [x:y:1]$. The complement
of the image of $\C^2$ under this embedding is the
{\em complex line at infinity} $L_\infty \cong \CP^1$,
having equation $z = 0$; that is, $L_\infty = [0:0:1]^\vee$.
The real plane in $\C^2$ with equation $y = \bar{x}$ is
a copy of the Euclidean plane,
and it will be denoted by $\mathbb{E}^2$. Its closure in $\CP^2$
is a copy of the real projective plane,
$\overline{\mathbb{E}}{}^2 \cong \mathbb{RP}^2$, but we do not write it
as such, because the coordinates induced on $\mathbb{E}^2$ as
a subset of $\C^2$ are not real. We call $\partial\mathbb{E}^2 =
\overline{\mathbb{E}}{}^2 \setminus \mathbb{E}^2 = \overline{\mathbb{E}}{}^2 \cap L_\infty \cong S^1$ the
\emph{circle at infinity}, trusting no confusion will arise
from the fact that the (real) circle at infinity is contained
in the (complex) line at infinity.
As a real submanifold of $\CP^2$, $\mathbb{E}^2$ does not carry a
complex structure (else its closure could not be the real
projective plane, topologically), but the restriction of
the coordinate $x$ to $\mathbb{E}^2$ provides a bijection
$\mathbb{E}^2 \cong \C$. This is what we will always mean when we
carry out constructions on $\mathbb{E}^2$ using a complex coordinate.
\section{The deltoid as a real curve and as a complex curve.}
\label{S:curve}
In this section we collect some known
properties of the deltoid---especially regarding
its tangent lines---that will be useful in our study.
\begin{figure}[h]\centering
\includegraphics{deltoidmapfig-11.eps}\hspace{0.9in}
\includegraphics{deltoidmapfig-12.eps}
\caption{Tracing out the deltoid as a hypocycloid.}
\label{F:deltoidtrace}
\end{figure}
The classical \emph{deltoid}
is the curve traced by a point marked on the
circumference of a circle of radius $1$ rolling
without slipping inside a circle of radius $3$.
When the center of the smaller circle travels once
counterclockwise around the center of the larger circle,
a point on the smaller circle's circumference makes two
clockwise revolutions around its center.
(See Figure~\ref{F:deltoidtrace}.)
Because the centers remain $2$ units apart, the deltoid
can be parametrized in $\mathbb{E}^2$ by
\[
x = 2t + {\bar{t}}^{2}, \qquad |t| = 1.
\]
This extends to a complex algebraic curve in the following
way. Because $\mathbb{E}^2$ is embedded in $\C^2$ as the real plane
$y = \bar{x}$, the parametrization of the deltoid in $\C^2$
becomes $(2t+{\bar{t}}^2,2\bar{t}+t^2)$ with $|t|=1$. In
order to make this parametrization holomorphic, we replace
$\bar{t}$ with $t^{-1}$ (when $|t| = 1$, these are the same),
and we define
\begin{equation}\label{E:delparam}
\gamma(t) = \left(2t + \frac{1}{t^2}, \frac{2}{t} + t^2\right),
\qquad t \in \C\setminus\{0\}.
\end{equation}
We can further extend $\gamma$ to a curve in $\CP^2$,
which we also call $\gamma$, by appending an additional
coordinate, initially equal to $1$, then clearing
denominators (which is allowed in homogeneous coordinates):
\[
\gamma(t) = \big[ 2t^3 + 1 : 2t + t^4 : t^2 \big],
\qquad t \in \CP^1.
\]
Note that $\gamma(0) = [1:0:0]$ and $\gamma(\infty) = [0:1:0]$.
(To see why the latter expression is correct, rewrite $\gamma(t)$
as $\gamma(1/s)$, clear denominators, then let $s$ go to $0$.)
These are the only two points of $\CP^1$ that $\gamma$ sends to
$L_\infty$.
$\mathcal{D}$ will denote the image of $\gamma$ in either
$\C^2$ or $\CP^2$, and $\mathcal{D}_{\mathbb{E}^2} = \mathcal{D} \cap \mathbb{E}^2$
is the \emph{real} deltoid.
In $\C^2$ we have
\[
\gamma'(t)
= \left(2 - \frac{2}{t^3}, -\frac{2}{t^2} + 2t\right)
= 2 \left(1 - \frac{1}{t^3}\right) (1,t)
\]
and so $\gamma'(t)$ vanishes precisely when $t$ equals $1$,
$\omega = e^{i\,2\pi/3}$, or $\omega^2 = e^{i\,4\pi/3}$;
these cube roots of unity give rise to the three cusps
of $\mathcal{D}$. At every other point of $\mathcal{D}$, a
tangent vector is $(1,t)$. An equation for the line tangent
to $\mathcal{D}$ at $\gamma(t)$ is therefore
\[
\begin{vmatrix}
1 & x - 2t - t^{-2} \\
t & y - 2t^{-1} - t^2
\end{vmatrix} = 0,
\]
which is equivalent to
\begin{equation}\label{E:taneqn}
t^3 - t^2 x + t y - 1 = 0.
\end{equation}
This equation works equally well at the cusps, where
$t^3 = 1$ and \eqref{E:taneqn} reduces to $y = tx$,
so each cusp also has a well-defined tangent line,
which passes through the origin.
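The determinant expansion above reduces to \eqref{E:taneqn} after clearing the denominator by $t$; readers who want to verify the algebra can do so symbolically. The following sketch assumes the SymPy library and is only a check, not part of the argument:

```python
import sympy as sp

t, x, y = sp.symbols('t x y', nonzero=True)
# Determinant form of the line tangent to D at gamma(t), as in the text
det = sp.Matrix([
    [1, x - 2*t - t**-2],
    [t, y - 2/t - t**2],
]).det()
# Multiplying by t clears denominators and yields equation (E:taneqn)
assert sp.simplify(t*det - (t**3 - t**2*x + t*y - 1)) == 0
```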
It is worth remarking that in \cite{fMfvM54}
the study of the real deltoid begins, not with
any classical construction, but with equation
\eqref{E:taneqn}, restricted to $y = \bar{x}$
and $|t| = 1$, which is simply called the
``line equation'' of the deltoid.
Equation \eqref{E:taneqn} shows that a generic point
$(x,y)$ of $\C^2$ lies on three tangent lines of
$\mathcal{D}$. A point belongs to $\mathcal{D}$
if and only if at least two of these tangent lines
coincide, which is to say that the discriminant of
the left side of \eqref{E:taneqn} (as a polynomial in $t$)
is zero. Thus we obtain an affine equation for
$\mathcal{D}$ (and an additional reason to name this set
$\mathcal{D}$, since it is where a discriminant vanishes):
\begin{equation}\label{E:affeqn}
x^2 y^2 - 4\big(x^3 + y^3\big) + 18 xy - 27 = 0.
\end{equation}
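Both the discriminant computation and the fact that the parametrization \eqref{E:delparam} satisfies \eqref{E:affeqn} (each point of $\mathcal{D}$ has a doubled tangent parameter) are easy to confirm symbolically; a sketch, assuming SymPy:

```python
import sympy as sp

t, x, y = sp.symbols('t x y', nonzero=True)
# Discriminant of the tangent-line cubic (E:taneqn), viewed as a polynomial in t
disc = sp.discriminant(t**3 - t**2*x + t*y - 1, t)
affine = x**2*y**2 - 4*(x**3 + y**3) + 18*x*y - 27
assert sp.expand(disc - affine) == 0
# The parametrization (E:delparam) of the deltoid satisfies the affine equation
gx, gy = 2*t + 1/t**2, 2/t + t**2
assert sp.simplify(affine.subs({x: gx, y: gy})) == 0
```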
Now we can also parametrize the dual curve
$\mathcal{D}^\vee$ in the dual projective plane
$(\CP^2)^\vee$. From \eqref{E:taneqn}, we get the
following parametrization of $\mathcal{D}^\vee$:
\begin{equation}\label{E:dualparam}
\check\gamma(t) = [-t^2:t:t^3-1]^\vee.
\end{equation}
In particular we see that
$\check\gamma(0) = \check\gamma(\infty) = [0:0:1]^\vee$,
so that the line at infinity in $\CP^2$ is tangent to
$\mathcal{D}$ at both $\gamma(0)$ and $\gamma(\infty)$.
(This tangency can also be seen, less directly, from the
fact that $L_\infty$ intersects $\mathcal{D}$, a curve of
degree 4, in only two points.)
From \eqref{E:dualparam}, we can deduce that an
equation for $\mathcal{D}^\vee$ is
\begin{equation}\label{E:dualeqn}
a^3 + b^3 = abc
\end{equation}
(when $a$, $b$, and $c$ are real, this equation produces
the folium of Descartes). This curve is smooth except
for a self-intersection at $[0:0:1]^\vee$, which shows
that the line at infinity is the only bitangent of
$\mathcal{D}$.
Because equation \eqref{E:affeqn} has degree four, a
generic line in $\CP^2$ will intersect $\mathcal{D}$ in
four points. Meanwhile, a generic element of $\mathcal{D}^\vee$
(that is, a line tangent to $\mathcal{D}$) will intersect
$\mathcal{D}$ at two points besides the point of tangency.
These other two points of intersection are connected with
several interesting geometric properties; we state three
of them here for later use. All three have easy algebraic
proofs, which we leave to the reader. They are illustrated
in Figure~\ref{F:3properties}.
\begin{figure}[h]\centering
\includegraphics{deltoidmapfig-21.eps}
\hspace{0.35in}
\includegraphics{deltoidmapfig-22.eps}
\hspace{0.35in}
\includegraphics{deltoidmapfig-23.eps}
\caption{Three properties of lines tangent to $\mathcal{D}$.}
\label{F:3properties}
\end{figure}
\begin{enumerate}
\item[(A)] For all $t \in \C\setminus\{0\}$, the line containing
$\gamma(t)$ and $\gamma(-t)$ is tangent to $\mathcal{D}$ at
$\gamma(1/t^2)$.
\item[(B)] The midpoint of $\gamma(t)$ and $\gamma(-t)$
in $\C^2$ lies on the curve $xy = 1$.
\item[(C)] The tangent lines $\check\gamma(t)$ and
$\check\gamma(-t)$ intersect at a point also on $xy = 1$.
\end{enumerate}
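All three properties reduce to rational-function identities in $t$, which is why their algebraic proofs are easy; a symbolic verification sketch, assuming SymPy:

```python
import sympy as sp

t, x, y = sp.symbols('t x y', nonzero=True)
gamma = lambda s: (2*s + 1/s**2, 2/s + s**2)        # parametrization of D
tan = lambda s, px, py: s**3 - s**2*px + s*py - 1   # (px,py) on tangent at gamma(s) iff 0

# (A): gamma(t) and gamma(-t) both lie on the tangent line at gamma(1/t^2)
assert all(sp.simplify(tan(1/t**2, *gamma(u))) == 0 for u in (t, -t))

# (B): the midpoint of gamma(t) and gamma(-t) lies on xy = 1
mx, my = [(a + b)/2 for a, b in zip(gamma(t), gamma(-t))]
assert sp.simplify(mx*my - 1) == 0

# (C): the tangent lines at gamma(t) and gamma(-t) meet on xy = 1
sol = sp.solve([tan(t, x, y), tan(-t, x, y)], [x, y], dict=True)[0]
assert sp.simplify(sol[x]*sol[y] - 1) == 0
```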
Property (A) will later form the basis for our
geometrically-defined dynamical system. Properties
(B) and (C) will relate to the critical points of the map.
The curve $\mathcal{C}$ with equation $xy = 1$ is,
projectively speaking, a conic section. Its
intersection $\mathcal{C}_{\mathbb{E}^2}$ with the plane $\mathbb{E}^2$
is the unit circle, having equation $|x|^2 = 1$.
The real deltoid $\mathcal{D}_{\mathbb{E}^2}$ is a Jordan curve
in $\mathbb{E}^2$; let $K$ be the union of $\mathcal{D}_{\mathbb{E}^2}$
with its interior. $K$ consists of those
points $x$ such that all solutions to
$t^3 - xt^2 + \bar{x}t - 1 = 0$ lie on
the unit circle $|t| = 1$; in other words, these are
the points that lie on three ``real'' tangent lines.
(See Figure~\ref{F:K}, left and middle.)
\begin{figure}[h]\centering
\includegraphics{deltoidmapfig-31.eps}\hspace{0.45in}
\includegraphics{deltoidmapfig-32.eps}\hspace{0.45in}
\includegraphics{deltoidmapfig-33.eps}
\caption{{\sc Left:} The set $K \subset \mathbb{E}^2$ bounded by
$\mathcal{D}\cap{\mathbb{E}^2}$. {\sc Middle:} Tangent lines through
the three cusps of $\mathcal{D}$ and their point of
intersection at the origin.
{\sc Right:} For a generic tangent line $L \in \mathcal{D}^\vee$
there is another line $\check{f}(L) \in \mathcal{D}^\vee$ such
that $\check{f}(L)$ is secant to $\mathcal{D}$ at the point
$\mathbf{x}$ where $L$ is tangent.}\label{F:K}
\end{figure}
\section{The deltoid map.}\label{S:map}
In this section we use the geometric properties of
$\mathcal{D}$ to define a map $f$ from $\CP^2$ to itself.
First, we define a natural map
$\check{f}$ on the dual curve $\mathcal{D}^\vee$.
Given $L = T_\mx{x}\mathcal{D} \in \mathcal{D}^\vee$,
let $\check{f}(L)$ be the unique element of $\mathcal{D}^\vee$
such that $\{L,\check{f}(L)\}$ is the full set of tangent lines
to $\mathcal{D}$ passing through $\mx{x}$, as illustrated
in Figure~\ref{F:K}, right.
(Note that $\check{f}(L)$ is the same as $L$ when $\mx{x} = \gamma(t)$ for
$t \in \{1,\omega,\omega^2,0,\infty\}$, and it is distinct
otherwise.)
It follows from property (A) in the previous section that
\[
\check{f}(\check\gamma(t)) = \check\gamma(1/t^2)
\qquad\text{for all $t \in \CP^1$.}
\]
In particular, $\check{f}$ fixes $L_\infty$ as an element
of $\mathcal{D}^\vee$, but it is helpful to think of it as
exchanging the points of tangency, namely $\gamma(0)$ and
$\gamma(\infty)$.
Now we turn to our promised self-map of $\C^2$. First
we observe that, given $(x,y) \in \C^2$, the solutions
$t_1, t_2, t_3$ to \eqref{E:taneqn} satisfy $t_1 t_2 t_3 = 1$
and
\begin{equation}\label{E:xyparam}
x = t_1 + t_2 + t_3, \qquad
y = \frac{1}{t_1} + \frac{1}{t_2} + \frac{1}{t_3}.
\end{equation}
Conversely, if $t_1, t_2, t_3$ are chosen to satisfy
$t_1 t_2 t_3 = 1$, then the formulas \eqref{E:xyparam}
provide coefficients for the equation \eqref{E:taneqn}
to be solved by $t_1, t_2, t_3$.
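This converse direction amounts to Vieta's formulas for the cubic \eqref{E:taneqn}; a symbolic check, assuming SymPy:

```python
import sympy as sp

t, t1, t2 = sp.symbols('t t1 t2', nonzero=True)
t3 = 1/(t1*t2)                      # enforce the constraint t1*t2*t3 = 1
x = t1 + t2 + t3                    # coefficients as in (E:xyparam)
y = 1/t1 + 1/t2 + 1/t3
cubic = t**3 - x*t**2 + y*t - 1     # equation (E:taneqn) with these coefficients
# each of t1, t2, t3 is then a root of the cubic
assert all(sp.simplify(cubic.subs(t, r)) == 0 for r in (t1, t2, t3))
```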
\begin{prop}
Suppose $\check\gamma(t_1)$, $\check\gamma(t_2)$, and
$\check\gamma(t_3)$ are concurrent. Then so are
$\check{f}(\check\gamma(t_1))$, $\check{f}(\check\gamma(t_2))$,
and $\check{f}(\check\gamma(t_3))$.
\end{prop}
\begin{proof}
If the point of concurrency lies on $L_\infty$,
then the result is trivial, as at most two lines
are involved. Otherwise a necessary and sufficient
condition for concurrency is $t_1 t_2 t_3 = 1$.
But if $t_1$, $t_2$, and $t_3$ satisfy this equality,
then also $(1/{t_1}^2) (1/{t_2}^2) (1/{t_3}^2) =
(1/t_1 t_2 t_3)^2 = 1$.
\end{proof}
This proposition provides the basis for defining a map on
all of $\CP^2$: given $\mx{x} \in \CP^2$, let $L_1$, $L_2$,
and $L_3$ be the three elements of $\mathcal{D}^\vee$
passing through $\mx{x}$ (some of these may coincide). Then
define $f(\mx{x})$ to be the point at which $\check{f}(L_1)$,
$\check{f}(L_2)$, and $\check{f}(L_3)$ are concurrent.
(See Figure~\ref{F:geomdef}.) To handle the special cases
in which all three lines $L_1$, $L_2$, and $L_3$ coincide,
we extend by continuity and define $f([1:0:0]) = [0:1:0]$,
$f([0:1:0]) = [1:0:0]$, and whenever
$\check{f}(L_1) = \check{f}(L_2) = \check{f}(L_3)$ passes
through a cusp of $\mathcal{D}$, $f(\mx{x})$ is defined
to be that cusp.
\begin{figure}\centering
\includegraphics{deltoidmapfig-41.eps}
\hspace{0.1in}
\includegraphics{deltoidmapfig-42.eps}
\hspace{0.1in}
\includegraphics{deltoidmapfig-43.eps}
\caption{Geometric definition of $f$. Any point
$\mx{x} \in \CP^2$ lies on three tangent lines
of $\mathcal{D}$ (counted with multiplicity).
The point of tangency for each of these lines
lies on another element of $\mathcal{D}^\vee$,
as seen in Figure~\ref{F:K}. The resulting collection
of three new tangent lines (again, counted with
multiplicity) is concurrent at $f(\mx{x})$.}\label{F:geomdef}
\end{figure}
With this geometric definition in hand, we find polynomials
that describe $f$.
\begin{prop}\label{P:formulas}
On $\C^2$, $f$ takes the form
$(x,y) \mapsto (y^2 - 2x, x^2 - 2y)$.
On $\CP^2$, this extends to
$[x:y:z] \mapsto [y^2 - 2xz : x^2 - 2yz : z^2]$.
On $L_\infty$, $f$ has the form $\zeta \mapsto 1/\zeta^2$.
\end{prop}
\begin{proof}
If $(x,y) \in \C^2$, and $t_1$, $t_2$, and $t_3$ are the
roots of \eqref{E:taneqn}, then by the observations
surrounding equation~\eqref{E:xyparam}, we have
\[
f(x,y)
= \left(
\frac{1}{{t_1}^2} + \frac{1}{{t_2}^2} + \frac{1}{{t_3}^2},\;
{t_1}^2 + {t_2}^2 + {t_3}^2
\right).
\]
Now we observe that
\[
\left( \frac{1}{t_1} + \frac{1}{t_2} + \frac{1}{t_3} \right)^2
- 2 \left(
\frac{1}{t_1 t_2} + \frac{1}{t_2 t_3} + \frac{1}{t_3 t_1}
\right)
= \frac{1}{{t_1}^2} + \frac{1}{{t_2}^2} + \frac{1}{{t_3}^2}
\]
and
\[
(t_1 + t_2 + t_3)^2 - 2\, (t_1 t_2 + t_2 t_3 + t_3 t_1)
= {t_1}^2 + {t_2}^2 + {t_3}^2,
\]
which proves the result on $\C^2$. The formula on
$\CP^2$ is then obtained by a standard homogenization
process. Because $L_\infty$ is defined by $z = 0$,
on this line the map becomes $[x:y:0] \mapsto [y^2:x^2:0]$;
if we set $\zeta = y/x$, the result for $L_\infty$
becomes clear. Alternatively, for $L_\infty$
we could use the observations made previously that
$\check{f}(\check\gamma(t)) = \check\gamma(1/t^2)$ and that
$\check\gamma(t)$ intersects $L_\infty$ at $[1:t:0]$,
so $\zeta = t$.
\end{proof}
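Consistent with the action $\check{f}(\check\gamma(t)) = \check\gamma(1/t^2)$ on tangent lines, the affine formula sends $\gamma(t)$ to $\gamma(1/t^2)$, confirming that $\mathcal{D}$ is forward invariant; a symbolic check, assuming SymPy:

```python
import sympy as sp

t = sp.symbols('t', nonzero=True)
f = lambda x, y: (y**2 - 2*x, x**2 - 2*y)       # affine formula for the deltoid map
gamma = lambda s: (2*s + 1/s**2, 2/s + s**2)    # parametrization of D
# on D, the map acts by t -> 1/t^2
assert all(sp.simplify(a - b) == 0
           for a, b in zip(f(*gamma(t)), gamma(1/t**2)))
```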
\section{Julia set, Fatou set, and Green function.}
Having defined the deltoid map $f$, we turn to some of
its dynamical properties. Ideally, for any point
$\mx{x} \in \CP^2$, we would like to be able to predict
the behavior of its \emph{orbit} under $f$, which is the
sequence $\mx{x},f(\mx{x}),f^2(\mx{x}),f^3(\mx{x}),\dots$,
and also to say something about the orbits of points near
$\mx{x}$. (Here and in the rest of the article $f^n$ denotes
the composition of $f$ with itself $n$ times; this notation
is standard in dynamical systems.) From the construction of
$f$, we can already see that it has some exceptional
properties: the deltoid $\mathcal{D}$ is forward invariant,
meaning $f(\mathcal{D}) = \mathcal{D}$, and $f$ also sends
each line tangent to $\mathcal{D}$ to another such line.
These tangent lines will continue to be key in studying
properties of $f$.
Notice that $f$ commutes with the involution $\iota(x,y)
= (y,x)$. The composition $\iota \circ f = f \circ \iota$
is studied by Uchimura in \cite{kU01,kU07,kU09} and Nakane
in \cite{sN08}. The dynamical properties of $f$ and
$\iota \circ f$ are essentially identical.
A fundamental tool in complex dynamics is the partition
of the dynamical space into the Fatou set, where the
dynamics are ``simple,'' and the Julia set, where the
dynamics are ``chaotic.'' More precisely, the Fatou set
$\Omega = \Omega_f$ is the largest open set of $\CP^2$ on
which the iterates of $f$ locally form an equicontinuous
family; thus if $\mx{x}$ and $\mx{y}$ are points of $\Omega$
that are sufficiently near each other, then $f^n(\mx{x})$
and $f^n(\mx{y})$ remain close (in $\CP^2$) as $n$ increases.
The Julia set $J = J_f$ is the complement of $\Omega$; thus
if $\mx{x}$ is in $J$ and $\mx{y}$ is close to $\mx{x}$, then
$f^n(\mx{x})$ and $f^n(\mx{y})$ may be very far apart.
On $L_\infty$, as we have seen, $f$ reduces to the power
map $\zeta \mapsto 1/\zeta^2$. This map of $\CP^1$ exchanges
$0$ and $\infty$ (in $\CP^2$, these are the points $[1:0:0]$
and $[0:1:0]$), and so these two points form a period $2$
orbit. If $|\zeta| \ne 1$, then $\zeta^{(-2)^n}$ approaches
the previously observed period $2$ orbit. If $|\zeta| = 1$,
then $\zeta^{(-2)^n}$ remains on the unit circle, while
some nearby points are drawn to the $\{0,\infty\}$ orbit.
Thus the Julia set of $f$ on $L_\infty$ is the circle at
infinity, and the Fatou set in $L_\infty$ has two components,
one containing $0$ and the other $\infty$.
To determine the Julia and Fatou sets of $f$ in $\C^2$,
we introduce the \emph{Green function} $G = G_f$ of $f$,
which is defined \cite{eBmJ00,jHpP94} by
\[
G(\mx{x})
= \lim_{n\to\infty}
\frac{1}{2^n} \log^+ \left\|f^n(\mx{x})\right\|,
\]
where ${\log^+} = \max\,\{\log,0\}$, and $\|\cdot\|$
is any norm on $\C^2$. This function measures how
quickly points of $\C^2$ escape to infinity under
iteration of $f$; it is zero precisely for those points
whose orbits are bounded, which comprise the set $K$.
It is a continuous, subharmonic function on $\C^2$, and
it satisfies the functional equation $G(f(x,y)) = 2G(x,y)$.
For most self-maps of $\C^2$, the Green function cannot
be explicitly calculated. The deltoid map is an exception.
\begin{prop}\label{P:green}
The Green function $G$ of the deltoid map $f$ can
be calculated as follows: given $(x,y) \in \C^2$, let $t_1$,
$t_2$, and $t_3$ be the solutions to \eqref{E:taneqn}.
Then
\begin{equation}\label{E:green}
G(x,y)
= \log \max
\left\{ |t_1|,\, |t_2|,\, |t_3|,\,
\frac{1}{|t_1|},\, \frac{1}{|t_2|},\, \frac{1}{|t_3|}
\right\}.
\end{equation}
\end{prop}
Notice that we do not need to use $\log^+$ in
\eqref{E:green}, because the set over which the
maximum is taken contains at least one element
that is greater than or equal to $1$.
\begin{proof}[Proof of Proposition~\ref{P:green}]
Using the $L^\infty$ norm on $\C^2$, we have
\[
G(x,y) = \lim_{n\to\infty}
\frac{1}{2^n} {\log^+} \max \left\{
\left| {t_1}^{2^n} + {t_2}^{2^n} + {t_3}^{2^n} \right|,\;
\left| \frac{1}{{t_1}^{2^n}} + \frac{1}{{t_2}^{2^n}}
+ \frac{1}{{t_3}^{2^n}} \right| \right\}.
\]
Set $\tau = \max \big\{ |t_1|, |t_2|, |t_3|, |t_1|\inv,
|t_2|\inv, |t_3|\inv \big\}$. Then $\tau \ge 1$, and we have
\begin{gather}
\frac{1}{2^n} \log \max
\left\{
\left| {t_1}^{2^n} + {t_2}^{2^n} + {t_3}^{2^n} \right|,\,
\left| \frac{1}{{t_1}^{2^n}} + \frac{1}{{t_2}^{2^n}}
+ \frac{1}{{t_3}^{2^n}} \right| \right\}
- \log \tau \label{Eq:greendiff} \\
= \frac{1}{2^n} \log \max
\left\{
\frac{\left| {t_1}^{2^n} + {t_2}^{2^n} + {t_3}^{2^n} \right|}
{{\tau}^{2^n}},\,
\frac{1}{{\tau}^{2^n}}
\left| \frac{1}{{t_1}^{2^n}} + \frac{1}{{t_2}^{2^n}}
+ \frac{1}{{t_3}^{2^n}} \right| \right\}. \label{Eq:greenquot}
\end{gather}
By our choice of $\tau$, the maximum of the set in
\eqref{Eq:greenquot} is bounded by $3$. Therefore, as $n$ tends to
$\infty$, the difference in \eqref{Eq:greendiff} tends to $0$. This
shows that $G(x,y) = \log\tau$, as claimed.
\end{proof}
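The functional equation $G(f(x,y)) = 2G(x,y)$ can also be tested numerically from formula \eqref{E:green}, since the roots of the tangent-line cubic at $f(x,y)$ are the values $1/{t_i}^2$. A sketch assuming NumPy; the test point is arbitrary:

```python
import numpy as np

def green(x, y):
    # Green function via (E:green): roots of t^3 - x t^2 + y t - 1
    roots = np.roots([1, -x, y, -1])
    tau = max(np.abs(roots).max(), (1/np.abs(roots)).max())
    return np.log(tau)

f = lambda x, y: (y**2 - 2*x, x**2 - 2*y)
x0, y0 = 3.0 + 1.0j, 0.5 - 2.0j       # an arbitrary test point
assert abs(green(*f(x0, y0)) - 2*green(x0, y0)) < 1e-8
```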
In terms of the Green function, $\Omega$ is the
set of points where $dd^c\,G$ vanishes. Here
$dd^c = \frac{i}{2\pi}\partial\conj\partial$
is the so-called \emph{pluri-Laplacian}, and the
derivatives should properly be interpreted as
currents (``differential forms with distributional
coefficients''); for us, however, it is sufficient
to know where $dd^c\,G = 0$. Because $dd^c\,{\log^+}|t|$
vanishes except on the unit circle $S^1$, we obtain
the following characterization of $J$.
\begin{prop}\label{P:julia}
The Julia set of $f$ is the set $J$ of points
$[x:y:z] \in \CP^2$ such that the polynomial
$z(t^3 - 1) - xt^2 + yt$ has at least one root on $S^1$.
\end{prop}
Given our geometric definition of $f$, this result
is not surprising: as we have seen, the line
$\check\gamma(t) \in \mathcal{D}^\vee$ intersects
$L_\infty$ at $[1:t:0]$, and the circle at infinity,
where $|t| = 1$, is precisely the Julia set of
$f|_{L_\infty}$.
Nakane \cite{sN08} provided a description of the foliation
of $J$ by ``stable disks'' of the circle at infinity,
as well as how external rays land at points of $K$.
We shall take a different perspective and consider the
intersection of $J$ with complex lines in $\C^2$ parallel
to the $x$- and $y$-axes. In order to describe the result,
however, we must invoke some classical differential geometry.
Given a curve $C$ and a point $O$ in $\mathbb{E}^2$, the
{\em pedal curve of $C$ with respect to $O$} is the
locus of points $P$ such that $P$ is the orthogonal
projection of $O$ onto a line tangent to $C$. (See
Figure~\ref{F:pedal} for some examples.)
\begin{figure}[h]
\begin{tabular}{rll}
\includegraphics{deltoidmapfig-51.eps}&
\includegraphics{deltoidmapfig-52.eps}&
\includegraphics{deltoidmapfig-53.eps}\\
\includegraphics{deltoidmapfig-54.eps}&
\includegraphics{deltoidmapfig-55.eps}&
\includegraphics{deltoidmapfig-56.eps}
\end{tabular}
\caption{Some pedal curves of the real deltoid in $\mathbb{E}^2$.
In each image the point $O$ is indicated by a dot.
{\sc Top:} With respect to the center (a trifolium),
with respect to the point opposite a cusp (a bifolium),
and with respect to a cusp (a simple folium).
{\sc Bottom:} With respect to an exterior point,
with respect to an interior point on an axis of symmetry,
and with respect to a generic interior point.}\label{F:pedal}
\end{figure}
At this point we can state our first main result,
which says that the Julia set of the deltoid map on
$\C^2$ geometrically decomposes into a disjoint union
of pedal curves of the real deltoid.
\begin{theorem}\label{T:pedal}
The intersection of $J$ with a line $L \ne L_\infty$
through $[1:0:0]$ (that is, parallel to the $x$-axis
in $\C^2$) is the pedal curve of the real deltoid with
respect to the $x$-coordinate of $L \cap \mathbb{E}^2$.
Likewise, the intersection of $J$ with a line parallel
to the $y$-axis is the pedal curve of the real deltoid
with respect to the $y$-coordinate of the intersection
of this line and $\mathbb{E}^2$.
\end{theorem}
To prove this result, we will use the following
projection from $\C^2$ to $\mathbb{E}^2$:
\[
\mathrm{pr}_{\mathbb{E}^2}(x,y)
= \left(\frac{x+\bar{y}}{2},\frac{y+\bar{x}}{2}\right).
\]
This projection is orthogonal with respect to the
standard Hermitian inner product on $\C^2$, namely
$(x_1,y_1)\cdot(x_2,y_2) = x_1 \overline{x_2} + y_1 \overline{y_2}$.
Conveniently, it also preserves each complex line that is
tangent to $\mathcal{D}$ at a point of $\mathcal{D}_{\mathbb{E}^2}$,
which is the content of the next lemma.
\begin{lemma}
If $|t| = 1$ and $(x,y) \in \check\gamma(t)$, then also
$\mathrm{pr}_{\mathbb{E}^2}(x,y) \in \check\gamma(t)$.
\end{lemma}
\begin{proof}
By assumption, $t$ and $(x,y)$ satisfy equation
\eqref{E:taneqn} $t^3 - t^2 x + ty - 1 = 0$,
as well as its conjugate
$\bar{t}^3 - \bar{t}^2 \bar{x} + \bar{t} \bar{y} - 1 = 0$.
Because $|t| = 1$, we have $\bar{t} = t^{-1}$, and so,
after multiplying the conjugate of \eqref{E:taneqn} by
$t^3$ we obtain $1 - t \bar{x} + t^2 \bar{y} - t^3 = 0$.
Subtracting this latter equation from \eqref{E:taneqn}
and dividing by $2$ produces
\[
t^3 - t^2 \!\left(\frac{x + \bar{y}}{2}\right)
+ t \!\left(\frac{y + \bar{x}}{2}\right) - 1 = 0
\]
as desired.
\end{proof}
A line in $\C^2$ parallel to the $x$-axis is determined
by its $y$-coordinate. Let $L_\alpha$ be the line with
equation $y = \bar\alpha$. The intersection of $L_\alpha$
with $\mathbb{E}^2$ is
\[
L_\alpha \cap \mathbb{E}^2 = \{(\alpha,\bar\alpha)\}.
\]
The restriction of $\mathrm{pr}_{\mathbb{E}^2}$ to $L_\alpha$ is
a bijection, whose inverse $\lambda_\alpha : \mathbb{E}^2 \to L_\alpha$
is the affine map
\[
\lambda_\alpha(x,\bar{x}) = (2x - \alpha, \bar\alpha).
\]
Notice, however, that with respect to the metrics
induced on $\mathbb{E}^2$ and $L_\alpha$ by the Hermitian inner
product on $\C^2$, $\lambda_\alpha$ is not just affine, but
a similarity. To prove Theorem~\ref{T:pedal}, therefore,
it suffices to show that $\mathrm{pr}_{\mathbb{E}^2}(J \cap L_\alpha)$
is the pedal curve of $\mathcal{D} \cap \mathbb{E}^2$ with respect to
$(\alpha,\bar\alpha)$. Or, what is the same, we need to show
that for all $t \in S^1$, the point $(x,\bar{x}) \in \mathbb{E}^2$
is the orthogonal projection of $(\alpha,\bar\alpha)$
onto $\check\gamma(t) \cap \mathbb{E}^2$ if and only if
$\lambda_\alpha(x,\bar{x})$ is in $\check\gamma(t)$.
If $|t| = 1$, then the Hermitian inner product of the vectors
$(1,t)$ and $(1,-t)$ is zero, so any two lines in $\C^2$
of the form $y = tx + b_1$ and $y = -tx + b_2$ are orthogonal.
\begin{proof}[Proof of Theorem~\ref{T:pedal}]
Let $|t| = 1$. The intersection of
$\check\gamma(t)$ and $\mathbb{E}^2$ has the equation
\[
t^3 - t^2 x + t \bar{x} - 1 = 0
\text{,\qquad or}\qquad
\bar{x} = tx - t^2 + t\inv\text.
\]
The line through $(\alpha,\bar\alpha)$ that is
orthogonal to $\check\gamma(t)$ is therefore
\[
\bar{x} - \bar\alpha = -t(x - \alpha)\text.
\]
These latter two equations together imply
(by eliminating $\bar{x}$) that
\[
tx - t^2 + t\inv = \bar\alpha - t(x - \alpha)\text,
\]
and solving for $x$ produces
\[
x = \frac{1}{2} \big( \alpha + t + \bar\alpha t\inv - t^{-2}\big)\text.
\]
On the other hand, if $\lambda_\alpha(x,\bar{x})$ lies on
$\check\gamma(t)$, then
\[
t^3 - t^2 (2x - \alpha) + t\bar\alpha - 1 = 0\text,
\]
which produces the same solution for $x$, as desired.
The proof for the intersection of $J$ with a line
parallel to the $y$-axis is virtually identical.
\end{proof}
From this geometric description of the intersection of
$J$ with a horizontal or vertical line, we can find an
algebraic equation for $J$ in $\C^2$.
\begin{corollary}\label{C:julia}
The Julia set of $f$ is the real hypersurface in $\C^2$
having the equation
\[
2 \Re (x - \bar{y})^3 + \Re (x - \bar{y})^2 (\bar{x}^2 - y^2) = 0.
\]
\end{corollary}
\begin{proof}
Start in $\mathbb{E}^2$ with the real lines
\[
t^3 - t^2 x + t \bar{x} - 1 = 0
\qquad\text{and}\qquad
\bar{x} - \bar\alpha = -t(x - \alpha)\text,
\]
then eliminate $t$ to get
\[
\left(\frac{\bar{x} - \bar\alpha}{x - \alpha}\right)^3
+ \left(\frac{\bar{x} - \bar\alpha}{x - \alpha}\right)^2 x
+ \left(\frac{\bar{x} - \bar\alpha}{x - \alpha}\right) \bar{x}
+ 1 = 0\text.
\]
Now a point $(x,y) \in \C^2$ is in $J$ if
$\mathrm{pr}_{\mathbb{E}^2}(x,y)$ satisfies this equation
(meaning we replace $x$ with $(x+\bar{y})/2$
and $\bar{x}$ with $(\bar{x}+y)/2$)
when $\alpha = \bar{y}$, which yields
\[
2(\bar{x} - y)^3
+ (\bar{x} - y)^2
(x^2 - \bar{y}^2)
+ (x - \bar{y})^2
(\bar{x}^2 - y^2)
+ 2(x - \bar{y})^3 = 0\text.
\]
This is equivalent to the desired equation.
\end{proof}
Note that in particular the equation in
Corollary~\ref{C:julia} is satisfied when
$y = \bar{x}$, so $\mathbb{E}^2$ is entirely contained in $J$.
This is to be expected, because every point of $\mathbb{E}^2$
lies on a line that intersects $L_\infty$ on the
circle at infinity.
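This containment can also be checked numerically: for a point $(x,\bar{x}) \in \mathbb{E}^2$, the cubic $t^3 - xt^2 + \bar{x}t - 1$ is self-inversive (its roots come in pairs $t \leftrightarrow 1/\bar{t}\,$), so, having odd degree, it always has at least one root on $S^1$, as Proposition~\ref{P:julia} requires. A numerical sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(200):
    x = complex(rng.normal(), rng.normal()) * 2
    # roots of t^3 - x t^2 + conj(x) t - 1; at least one should lie on |t| = 1
    roots = np.roots([1, -x, np.conjugate(x), -1])
    assert np.min(np.abs(np.abs(roots) - 1)) < 1e-6
```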
To end this section, we provide a description of
the Fatou set $\Omega$.
\begin{corollary}\label{T:fatou}
$\Omega$ has two components, each of which is
biholomorphic to $(\mathbb{D} \times \mathbb{D})/\sigma$, where
$\mathbb{D}$ is the open unit disk in $\C$ and $\sigma$
is the involution $\sigma(u,v) = (v,u)$. These two
components are exchanged by $f$.
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{T:fatou}]
Define the following two functions from $\C^2$ to $\CP^2$:
\begin{align*}
\Psi_x(u,v)
&= \big[ u^2 v + uv^2 + 1 : u + v + u^2 v^2 : uv \big]\text, \\
\Psi_y(u,v)
&= \big[ u + v + u^2 v^2 : u^2 v + uv^2 + 1 : uv \big]\text.
\end{align*}
Direct computation shows that
\[
(f \circ \Psi_x)(u,v) = \Psi_y(u^2,v^2)
\qquad\text{and}\qquad
(f \circ \Psi_y)(u,v) = \Psi_x(u^2,v^2)\text,
\]
and for $uv \ne 0$, $\Psi_x(1/u,1/v) = \Psi_y(u,v)$.
Geometrically, $u$ and $v$ are the $t$-parameters for
two of the lines in $\mathcal{D}^\vee$ passing through
$\Psi_x(u,v)$, the third being $1/uv$. Thus, $\Psi_x(u,v)$
is contained in $J$ if and only if either $u$ or $v$ lies
on the unit circle, and the same holds for $\Psi_y(u,v)$.
Together, $\Psi_x$ and $\Psi_y$ cover all of $\CP^2$.
By definition of $J$ as the complement of $\Omega$,
we see that $\Omega$ is covered by the two images of
$\mathbb{D} \times \mathbb{D}$ via $\Psi_x$ and $\Psi_y$. Thus $\Omega$
has two connected components. The polynomials defining
$\Psi_x$ and $\Psi_y$ are symmetric in $u$ and $v$, and
distinct unordered pairs $\{u,v\}$ lead to different
points of $\CP^2$ by $\Psi_x$ and $\Psi_y$.
This proves the result.
\end{proof}
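The semiconjugacy formulas for $\Psi_x$ and $\Psi_y$ hold exactly, coordinate by coordinate, in homogeneous coordinates; a symbolic check, assuming SymPy:

```python
import sympy as sp

u, v = sp.symbols('u v')
F = lambda x, y, z: (y**2 - 2*x*z, x**2 - 2*y*z, z**2)   # homogeneous form of f
Psi_x = lambda u, v: (u**2*v + u*v**2 + 1, u + v + u**2*v**2, u*v)
Psi_y = lambda u, v: (u + v + u**2*v**2, u**2*v + u*v**2 + 1, u*v)

# f . Psi_x = Psi_y(u^2, v^2), exactly in homogeneous coordinates
assert all(sp.expand(a - b) == 0
           for a, b in zip(F(*Psi_x(u, v)), Psi_y(u**2, v**2)))
# and symmetrically for Psi_y
assert all(sp.expand(a - b) == 0
           for a, b in zip(F(*Psi_y(u, v)), Psi_x(u**2, v**2)))
```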
The functions $\Psi_x$ and $\Psi_y$ are variants of
the function $\Psi$ used in \cite{sN08} as an
``inverse B\"ottcher coordinate'' on the Julia set of
$f$. We can see from the formulas for $f \circ \Psi_x$
and $f \circ \Psi_y$ how the orbit of any point of
$\Omega$ tends uniformly and super-exponentially to
the orbit consisting of
$\Psi_x(0,0) = [1:0:0]$ and $\Psi_y(0,0) = [0:1:0]$.
\section{Iterated monodromy group of the deltoid map.}
We begin this final section with one more exceptional
property of $f$.
The Jacobian determinant of $f$ at $(x,y) \in \C^2$ is
$4(1 - xy)$. Thus the locus of critical points in $\C^2$
is the curve $\mathcal{C}$ having equation $xy = 1$, whose
importance was previously noted in Section~\ref{S:curve}. Indeed,
because the lines $\check\gamma(t)$ and $\check\gamma(-t)$
have the same image under $\check{f}$, their point of
intersection must be a critical point of $f$; by property
(C), all such points lie on $\mathcal{C}$.
If we parametrize $\mathcal{C}$ by $(t,1/t)$, then we find
that the image of a point of $\mathcal{C}$ can be written as
\[
f\!\left(t,\frac1t\right)
= \left(-2t + \frac{1}{t^2}, t^2 - \frac{2}{t}\right)
= \gamma(-t),
\]
and so we see that $f(\mathcal{C}) = \mathcal{D}$. Because
$\mathcal{D}$ is forward invariant under $f$, we conclude
that $f$ is \emph{post-critically finite}, meaning that the
post-critical locus $\bigcup_{n\ge1} f^n(\mathcal{C})$ is an
algebraic curve---in this case, $\mathcal{D}$ itself.
(Post-critically finite maps of $\CP^2$ were introduced in
\cite{jeFnS92}, under the name of ``critically finite rational
maps.'')
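Both the Jacobian determinant and the identity $f(t,1/t) = \gamma(-t)$ can be confirmed symbolically; a sketch, assuming SymPy:

```python
import sympy as sp

x, y, t = sp.symbols('x y t', nonzero=True)
fx, fy = y**2 - 2*x, x**2 - 2*y
# Jacobian determinant of f on C^2
J = sp.Matrix([[fx.diff(x), fx.diff(y)],
               [fy.diff(x), fy.diff(y)]]).det()
assert sp.expand(J - 4*(1 - x*y)) == 0
# The critical curve xy = 1 maps onto the deltoid: f(t, 1/t) = gamma(-t)
gamma = lambda s: (2*s + 1/s**2, 2/s + s**2)
image = (fx.subs({x: t, y: 1/t}), fy.subs({x: t, y: 1/t}))
assert all(sp.simplify(a - b) == 0 for a, b in zip(image, gamma(-t)))
```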
Set $\mathcal{X} = \C^2 \setminus \mathcal{D}$ and
$\mathcal{X}_1 = \mathcal{X} \setminus \mathcal{C}$.
Then the above property implies that
$f \vert_{\mathcal{X}_1}$ is a covering map
from $\mathcal{X}_1$ to $\mathcal{X}$, called a
\emph{partial self-covering} of $\mathcal{X}$.
Let $\mx{x}_0 = (0,0) \in \mathcal{X}$; then the
fundamental group $\pi_1(\mathcal{X},\mx{x}_0)$ permutes
the set of preimages of $\mx{x}_0$ by $f$ in a standard
way: given $[\eta] \in \pi_1(\mathcal{X},\mx{x}_0)$
and $\mx{y} \in f^{-1}(\mx{x}_0)$, use $f$ to lift
$\eta$ to a path $\bar\eta$ starting at $\mx{y}$, and
let $[\eta] \cdot \mx{y}$ be the endpoint of $\bar\eta$.
This defines a homomorphism $\mu_f$ from
$\pi_1(\mathcal{X},\mx{x}_0)$ to the symmetric group
on $f^{-1}(\mx{x}_0)$, called the \emph{monodromy
homomorphism}.
Likewise, if we set $\mathcal{X}_n = f^{-n}(\mathcal{X})$,
then $f^n \vert_{\mathcal{X}_n}$ is a covering map, and
$\pi_1(\mathcal{X},\mx{x}_0)$ acts on $f^{-n}(\mx{x}_0)$
by the monodromy homomorphism $\mu_{f^n}$. The intersection
\[
\kappa_f = \bigcap_{n\ge1} \mathop{\mathrm{ker}}\mu_{f^n}
\]
is a normal subgroup of $\pi_1(\mathcal{X},\mx{x}_0)$,
consisting of all elements $[\eta]$ such that every lift
of $\eta$ by every iterate of $f$ remains a loop. The
quotient
\[
\mathrm{IMG}(f) = \pi_1(\mathcal{X},\mx{x}_0)/\kappa_f
\]
is called the {\em iterated monodromy group} of $f$.
(See \cite{sG12,vN11} for details.)
Iterated monodromy groups are a relatively recent
addition to the complex dynamics toolbox. They have
already proved useful in classification problems
\cite{lBvN06} and in determining the shape of Julia sets
more complicated than that of the deltoid map \cite{vN12}.
Nevertheless, only a few such groups have been
explicitly calculated, especially for maps in dimension
greater than $1$. A nice feature of $f$ is that
$\mathrm{IMG}(f)$ can be found directly from the
definition, which is how we will prove our second
main result.
\begin{theorem}\label{T:img}
$\mathrm{IMG}(f)$ is isomorphic to the affine
Coxeter group $\tilde{A}_2$.
\end{theorem}
$\tilde{A}_2$ can be realized geometrically as the group
generated by reflections across the sides of an equilateral
triangle in the plane. It has the group presentation
\[
\tilde{A}_2 =
\left\langle g_1,g_2,g_3 \mid
\forall k\ g_k^2 = 1,\;
\forall j \forall k\ (g_jg_k)^3 = 1 \right\rangle\text.
\]
On the other hand, the fundamental group
$\pi_1(\mathcal{X},\mx{x}_0)$ is isomorphic to
the related Artin group
\[
\bar{A}_2 = \langle h_1,h_2,h_3 \mid
\forall j \forall k\ h_jh_kh_j = h_kh_jh_k \rangle
\]
(see \cite{eaBjicA08} for a proof). Note that in
$\tilde{A}_2$, the relation $(g_jg_k)^3 = 1$ is
equivalent to $g_j g_k g_j = g_k g_j g_k$, and so
$\tilde{A}_2$ can be obtained from $\bar{A}_2$ by
adding the relations $h_k^2 = 1$ for $k = 1,2,3$.
We will establish these relations in Lemma~\ref{L:order2},
and then show that no additional relations are present
in $\mathrm{IMG}(f)$.
First we find a useful set of generators for
$\pi_1(\mathcal{X},\mx{x}_0)$: these can be chosen as
circles contained in the lines $\check\gamma(\omega)$,
$\check\gamma(\omega^2)$, and $\check\gamma(1)$ and
passing through $\mx{x}_0$. To see why, we use the
Zariski--van~Kampen theorem \cite{eV33,oZ29}, which
states that generators can be obtained by taking a
sufficiently general line $L$ and drawing loops around
the finite set of points $L \cap \mathcal{D}$.
The condition on $L$ is that $L \cap \mathcal{D}$
should have four distinct points in $\C^2$.
We choose a line of the form
$L = \{ (x,y) \mid x + y = -a \}$, where $2 < a < 3$.
Then \eqref{E:delparam} implies that
$\gamma(t)$ lies on $L$ precisely when
\[
t^4 + 2t^3 + at^2 + 2t + 1 = 0,
\]
and our choice of $a$ ensures that all solutions
of this equation lie on the unit circle, which means
all points of intersection in $L \cap \mathcal{D}$ lie
in $\mathbb{E}^2$. (See Figure~\ref{F:pi1}, left.) Thus the
four points of $L \cap \mathcal{D}$ lie in a straight
(real) line, and so we can draw small loops around
these inside the (complex) line $L$. Each such loop
intersects $\mathbb{E}^2$ in two points: one in $K$, and one outside.
\begin{figure}[h]\centering
\includegraphics{deltoidmapfig-61.eps}\hspace{.65in}
\includegraphics{deltoidmapfig-62.eps}
\caption{{\sc Left:} The complex line $L$ with equation
$x+y=-a$ intersects the deltoid $\mathcal{D}$ at four
points, all contained in $\mathbb{E}^2$, provided $2 < a < 3$.
Around each point of intersection, draw a loop inside
$L$ that intersects $K$ at one point. When connected
to $\mx{x}_0$ by additional segments in $K$, these loops
generate $\pi_1(\mathcal{X},\mx{x}_0)$.
{\sc Right:} Generators for $\pi_1(\mathcal{X},\mx{x}_0)$,
homotopic to those found in left picture.
Each loop $\eta_k$ is contained in the complex line
$\check\gamma(\omega^k)$, which intersects $\mathcal{D}$
at the cusp $\gamma(\omega^k)$ and at the midpoint of
the opposite branch $b_k$.}
\label{F:pi1}
\end{figure}
Connect each loop in $L$ from the point where it
intersects $K$ to $\mx{x}_0$ with a line segment,
so that it becomes an element of $\pi_1(\mathcal{X},\mx{x}_0)$
(with orientation given by the complex line in which it lies).
Let's label these elements. The real deltoid has
three cusps, and between these lie three ``branches'':
\begin{itemize}
\item one from $\gamma(1)$ to $\gamma(\omega)$,
\item one from $\gamma(\omega)$ to $\gamma(\omega^2)$, and
\item one from $\gamma(\omega^2)$ to $\gamma(1)$.
\end{itemize}
Call these branches, respectively, $b_2$, $b_3$, and
$b_1$, so that $b_k$ and $b_{k+1}$ meet at the cusp to
which $\check\gamma(\omega^{k+2})$ lies tangent. (All
indices are computed modulo $3$.) The loops in $L$
surrounding $b_1$ and $b_2$ are homotopic in $\mathcal{X}$
to loops that lie in $\check\gamma(\omega)$ and
$\check\gamma(\omega^2)$. On the other hand, the two loops
surrounding $b_3$ are both homotopic to the {\em same} loop
in $\check\gamma(1)$. Thus $\pi_1(\mathcal{X},\mx{x}_0)$
is generated by three elements, which have representatives
lying in the lines $\check\gamma(\omega)$,
$\check\gamma(\omega^2)$, and $\check\gamma(1)$.
Call these, respectively, $\eta_1$, $\eta_2$, and
$\eta_3$, so that $\eta_k$ wraps around
$b_k \cap \check\gamma(\omega^k)$.
(See Figure~\ref{F:pi1}, right.)
\begin{figure}[h]\centering
\includegraphics{deltoidmapfig-71.eps}
\caption{The four lifts of $\eta_3$ by $f$.
The loops lie in $\check\gamma(-1)$, and the arcs
lie in $\check\gamma(1)$.}\label{F:arclifts}
\end{figure}
\begin{lemma}\label{L:order2}
For each $k = 1,2,3$ and for all $n \ge 1$,
$\mu_{f^n}([\eta_k])$ has order $2$.
\end{lemma}
\begin{proof}
We want to show that every lift of $\eta_k$ by
every iterate of $f$ is either a closed loop, or
forms a closed loop with one other lift. We will
use the fact that every lift of $\eta_k$ by any
iterate of $f$ is contained in some line
$\check\gamma(t) \in \mathcal{D}^\vee$.
The line $\check\gamma(t)$, when $t \in \C\setminus\{0\}$,
can be parametrized by
\[
\sigma_t(s)
= \left(t + \frac{s}{\sqrt{t}}, \frac{1}{t} + s\sqrt{t}\right)\text,
\qquad s \in \C\text,
\]
as may be checked directly from the equation for
$\check\gamma(t)$. (Here, $\sqrt{t}$ can be either
square root of $t$.) This parametrization of $\check\gamma(t)$
has the nice feature that when $s = 0$, the resulting
point lies on the critical locus $\mathcal{C}$, since
it is the midpoint of $\gamma(\sqrt{t})$ and
$\gamma(-\sqrt{t})$ (see property (B) from
Section \ref{S:curve}).
Now when we apply $f$ to $\sigma_t(s)$, we obtain
\[
f\big(\sigma_t(s)\big)
= \left(
\frac{1}{t^2} + (s^2 - 2) t,
t^2 + (s^2 - 2)\frac1t
\right)
= \sigma_{1/t^2}(s^2 - 2)\text.
\]
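This computation can be checked numerically; below is a small sketch (our addition) using the deltoid map $f(x,y) = (y^2 - 2x,\, x^2 - 2y)$ at arbitrary test values of $t$ and $s$. The identity holds for either choice of $\sqrt{t}$ on the left, provided the root $1/t$ of $1/t^2$ is used on the right-hand side:

```python
import cmath

def f(p):
    # the deltoid map
    x, y = p
    return (y * y - 2 * x, x * x - 2 * y)

def sigma(t, s, sq):
    # parametrization of the line \check\gamma(t); sq is a chosen square root of t
    return (t + s / sq, 1.0 / t + s * sq)

t, s = 0.7 + 0.4j, 1.3 - 0.2j
lhs = f(sigma(t, s, cmath.sqrt(t)))
rhs = sigma(1.0 / t**2, s * s - 2, 1.0 / t)   # use the root 1/t of 1/t^2
assert max(abs(u - v) for u, v in zip(lhs, rhs)) < 1e-12
```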
So we just need to consider the possible lifts of
a closed curve in $\C$ by the polynomial $T(s) = s^2 - 2$,
avoiding the post-critical set of $T$. (See, for example,
Figure~\ref{F:arclifts}, which illustrates the four lifts
of $\eta_3$ by $f$.)
The critical point of $T(s)$ is $0$, and its critical
value is $-2$. The image of $-2$ by $T(s)$ is $2$, which
is a fixed point. Let $\eta$ be any loop in $\C$
that passes through neither $-2$ nor $2$. If $\eta$ does not
encircle $-2$, then it lifts to a pair of disjoint loops;
if, in addition, $\eta$ encircles $2$, then one of these loops
encircles $2$ and the other encircles $-2$, while otherwise
neither lift encircles $-2$. If $\eta$ does encircle $-2$,
then it lifts to a double cover of itself, consisting of two
arcs, which does not encircle $-2$.
\end{proof}
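The case analysis in the proof can be illustrated numerically (our sketch, with arbitrary loop radii): tracking a continuous square root of $\eta(\theta)+2$ shows that a loop around $-2$ lifts to a connected double cover, while a loop around $2$ lifts to closed loops.

```python
import cmath

def lift(center, radius, steps=2000):
    """Track a continuous preimage s(theta) of eta(theta) = center + radius*e^{i*theta}
    under T(s) = s^2 - 2, i.e. s(theta)^2 = eta(theta) + 2."""
    s = cmath.sqrt(center + 2 + radius)            # lift at theta = 0
    start = s
    for k in range(1, steps + 1):
        z = center + 2 + radius * cmath.exp(2j * cmath.pi * k / steps)
        r = cmath.sqrt(z)
        s = r if abs(r - s) <= abs(r + s) else -r  # pick the root nearest to s
    return start, s

# Loop around the critical value -2: the lift ends at the other square root,
# so the two lifted arcs close up into a single double cover.
s0, s1 = lift(-2.0, 0.5)
assert abs(s1 + s0) < 1e-6

# Loop around 2 (not encircling -2): the lift is already a closed loop.
s0, s1 = lift(2.0, 0.5)
assert abs(s1 - s0) < 1e-6
```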
In other words, Lemma~\ref{L:order2} says that
the square of each generator $[\eta_k]$ is in
$\kappa_f$. Together with the relations in
$\pi_1(\mathcal{X},\mx{x}_0) = \bar{A}_2$,
this result implies that $\mathrm{IMG}(f)$
is a quotient of $\tilde{A}_2$.
To complete the proof of Theorem~\ref{T:img},
we need to show that no additional relations are
present in $\mathrm{IMG}(f)$.
\begin{proof}[Proof of Theorem~\ref{T:img}]
Recall the realization of $\tilde{A}_2$ as the
group generated by reflections $\rho_1$, $\rho_2$,
$\rho_3$ across the sides of an equilateral triangle.
This group can be expressed as the semidirect product
$\Lambda \rtimes D_3$, where $\Lambda$ is the
normal subgroup consisting of translations
(isomorphic to $\Z^2$) and $D_3$ is the subgroup
that fixes a vertex of the triangle (the dihedral
group of order $6$). $D_3$ is generated by the
reflections in two adjacent sides of the triangle.
Suppose $\phi : \tilde{A}_2 \to \IMG(f)$ is the
homomorphism that sends $\rho_k$ to $[\eta_k]\kappa_f$.
If $\ker\phi \cap D_3 \ne \{\id\}$, then the order
of $\phi(D_3)$ is either $1$ or $2$, because the group
of rotations is the only nontrivial normal subgroup of
$D_3$; in either case we must have $\phi(\rho_1) =
\phi(\rho_2) = \phi(\rho_3)$. On the other hand, if
$\ker\phi \cap \Lambda \ne \{\id\}$, then
because this intersection is invariant under the
action of $D_3$, it must contain two linearly independent
elements $\lambda_1,\lambda_2$; the group
$\Lambda/(\lambda_1\Z\oplus\lambda_2\Z)$ is then finite
and so is $\phi(\Lambda)$.
Therefore, in order to show that $\phi$ is an
isomorphism, it suffices to show that
$[\eta_1]\kappa_f \ne [\eta_2]\kappa_f$ and that
$\IMG(f)$ is infinite. The first condition is easily
checked by observing that $\mu_f([\eta_1])$ and
$\mu_f([\eta_2])$ are different permutations of
$f^{-1}(\mx{x}_0)$. The second condition may be seen
by restricting our attention to an invariant line
such as $\check\gamma(1)$; on this line $f$ behaves like
the single-variable Chebyshev map $s \mapsto s^2 - 2$,
and the iterated monodromy group of such a map is known
to have elements of infinite order (see \cite{vN11}).
\end{proof}
% Source: https://arxiv.org/abs/2005.05461, "Geometry and Algebra of the Deltoid Map"
% (math.GT; math.DS). The deltoid map is $f(x,y) = (y^2 - 2x,\, x^2 - 2y)$.
% Source: https://arxiv.org/abs/2110.10299, "Archimedean Zeta Functions and Oscillatory Integrals",
% a short survey of Archimedean zeta functions and Archimedean oscillatory integrals.
\section{Introduction}\label{Sec:Intro}
Probably the first Archimedean zeta function was the Gamma function, introduced by Euler in a letter to Goldbach in 1729 with the aim of interpolating the factorial, see \cite{Da}.
If the real part of the complex number $s$ is positive, then the Gamma function is defined via the convergent improper integral
\begin{equation}\label{Eq:Gamma}
\Gamma(s) = \int_0^\infty x^{s-1} e^{-x}\, \mathrm{d}x.
\end{equation}
It is not difficult to show that $\Gamma(s)$ converges absolutely on $\Re (s) > 0$. Moreover, the function admits an analytic continuation as a meromorphic function that is holomorphic in the whole complex plane except at the non-positive integers, where $\Gamma(s)$ has simple poles.
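As a simple numerical illustration (our addition), the integral \eqref{Eq:Gamma} can be compared with the standard library Gamma function at sample points $s>0$:

```python
import math

# Midpoint-rule evaluation of the Euler integral, compared with math.gamma.
def gamma_integral(s, upper=60.0, n=100_000):
    dx = upper / n
    return sum(((k + 0.5) * dx) ** (s - 1) * math.exp(-(k + 0.5) * dx)
               for k in range(n)) * dx

for s in (1.0, 2.0, 3.7):
    assert abs(gamma_integral(s) - math.gamma(s)) < 1e-6
```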
The Gamma function is just a particular case of the more general definition of an Archimedean zeta function. Take $\mathds{K}=\mathds{R}$ or $\mathds{C}$ and fix an open set $U$ in $\mathds{K}^n$; by $\mathcal{S}(U)$ we denote the Schwartz space of $U$, see Section \ref{Sec:Dist_and_Loc}. Consider a $\mathds{K}$-analytic function $f : U\rightarrow \mathds{K}$ and a function $\phi\in\mathcal{S}(U)$. The \textit{local zeta function attached to the pair} $(f,\phi)$ is the parametric integral
\begin{equation}\label{Eq:DefIntro}
Z_{\phi}(s,f)=\int\limits_{\mathds{K}^{n}\smallsetminus f^{-1}(0)}\phi(x)\ |f(x)|_{\mathds{K}}^s\ |\mathrm{d}x|,
\end{equation}
for $s\in\mathds{C}$ with $\Re(s)>0$, where $|\mathrm{d}x|$ is the Haar
measure on $\mathds{K}^{n}$. For uniformity we use for $a\in\mathds{C}$ the convention $|a|_{\mathds{C}}=||a||_{\mathds{C}}^{2}$, where $||a||_{\mathds{C}}$ is the standard complex norm.
It is easily seen that $Z_{\phi}(s,f)$ converges on the half plane $\{s\in\mathds{C}\mid \Re(s)>0 \}$ and defines a holomorphic function there. Furthermore, $Z_{\phi}(s,f)$ admits a meromorphic continuation to the whole complex plane. This was proved by I. N. Bernstein and S. I. Gel'fand in \cite{BerGel}, then independently by M. Atiyah in \cite{At}, both proofs make use of Hironaka's theorem on resolution of singularities \cite{Hir}. Later, I. N. Bernstein \cite{Ber72} gave a proof by using the following functional equation:
\[P(s,x,\partial/\partial x)\cdot f(x)^{s+1} = b(s)f(x)^s.
\]
Here $b(s)\in\mathds{K}[s]$ and $P$ denotes a polynomial in the `variables' $s, x_1,\ldots,x_n,\partial/\partial x_1$, $\ldots,\partial/\partial x_n$ and with coefficients in $\mathds{K}$.
The monic generator of the ideal of polynomials $b(s)$ satisfying the functional equation (for some operator $P$) is called the \textit{Bernstein-Sato polynomial of} $f$.
In fact, the theory of Bernstein-Sato polynomials, also known as $b$-functions, was developed by M. Sato during the 60's at Kyoto under the name of algebraic theory of linear differential systems, as mentioned in the translation of his original work in \cite{Sat90}. This theory was framed in the theory of prehomogeneous vector spaces, see the comments in \cite[Chapter 6]{IguBook} and \cite{Gra}. Some years later I. N. Bernstein, a student of I. M. Gel'fand, was working on similar questions in Moscow \cite{Ber68,Ber71}. Both works initiated the vast $\mathcal{D}$-module theory, which has multiple and fruitful ties with algebraic geometry, singularity theory, topology of varieties, representation theory and (of course) differential equations, among others. See the nice surveys \cite{Gra,Wal,AlJeNuB} and the books \cite{Bjo,Cou}.
The other topic that we treat in this note is \textit{oscillatory integrals}, which are integrals of the form
\[I_\phi(\tau;f)=\int_{\mathds{R}^{n}} \exp(\mathrm{i}\tau f(x))\,\phi(x)\, |\mathrm{d}x|,\]
for $f$ real analytic, $\tau \in \mathds{R}$ and $\phi\in\mathcal{S}(U)$.
They can be considered as generalizations of the Fourier transform and can be traced back to the original works of Airy, Stokes, Lipschitz, and Riemann; see the nice historical remarks in the book by E. Stein \cite[Chapter VIII]{Ste}. They have played a major role in harmonic analysis, partial differential equations and number theory, among other fields. The function $f$ is called the \textit{phase} and the function $\phi$ is called the \textit{amplitude}. A central question here is to determine (or at least estimate) how $I_\phi(\tau;f)$ decays as the real parameter $\tau$ tends to $\infty$. A general principle, known as the stationary phase principle or the saddle point method,
states that the main contribution to the asymptotics of $I_\phi(\tau;f)$ comes from neighbourhoods of the critical points of the phase. Elementary methods suffice for simple phases, cf. \cite[Chapter VIII]{Ste} and \cite[Section III.4.5]{GelShi}, but more complicated phases require more elaborate methods.
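For a phase with a single nondegenerate critical point the decay can be seen explicitly. As an illustration (our addition, not from the text): with $f(x)=x^2$ and $\phi(x)=e^{-x^2}$, completing the square gives $I_\phi(\tau;f)=\int_{\mathds{R}}e^{(\mathrm{i}\tau-1)x^2}\,\mathrm{d}x=\sqrt{\pi/(1-\mathrm{i}\tau)}$, so $|I_\phi(\tau;f)|$ decays like $\tau^{-1/2}$:

```python
import cmath

# Midpoint-rule evaluation of I(tau) = \int exp(i*tau*x^2) * exp(-x^2) dx.
def I(tau, half_width=8.0, n=200_000):
    dx = 2 * half_width / n
    total = 0j
    for k in range(n):
        x = -half_width + (k + 0.5) * dx
        total += cmath.exp((1j * tau - 1) * x * x)
    return total * dx

for tau in (5.0, 20.0):
    exact = cmath.sqrt(cmath.pi / (1 - 1j * tau))
    assert abs(I(tau) - exact) < 1e-4
# |I(tau)| ~ sqrt(pi/tau) tends to 0 as tau grows
```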
The general theorem on asymptotic expansions of $I_\phi(\tau;f)$ (cf. Theorem \ref{Thm:AsyExp}) was proved first by Jeanquartier in \cite{Jea70}, and then by Malgrange \cite{Mal74} and Igusa \cite{IguTata}. Again the essential tool is resolution of singularities, and the conclusion is that the asymptotic behaviour is controlled by the poles of $Z_{\phi}(s,f)$.
In the proof given by Igusa in \cite{IguTata}, he initiates the uniform treatment of the subject of local zeta functions over local fields $\mathds{K}$ of characteristic zero, i.e. $\mathds{K}=\mathds{R},\, \mathds{C},\, \mathds{Q}_p$ or a finite extension of $\mathds{Q}_p$. Since then, many authors have been working on this theory of local zeta functions, see the book by Igusa \cite{IguBook} and the references therein. The theory of $p$-adic zeta functions (also called Igusa zeta functions) has evolved greatly in the last 30 years, boosted mainly by the definition of the motivic zeta functions of J. Denef and F. Loeser in \cite{DenLoe}. For more detailed descriptions of the work and legacy of Igusa see the classic survey by J. Denef \cite{DenRepo} and the more recent survey by D. Meuser \cite{Meu}. It is not our purpose in this note to detail this whole subject; for an introduction to the theory of $p$-adic zeta functions we recommend \cite{LeoZun}, and also the recent works, including the motivic case, to appear in this Volume \cite{PoVe,Vi}.
The proof by Malgrange of Theorem \ref{Thm:AsyExp} about the asymptotic expansions of $I_\phi(\tau;f)$ brought to the game a collection of new tools of algebraic topological nature. He considered integrals of the form
\[\int_{\Gamma}\exp(\tau f(x))\, \phi(x) \mathrm{d}x_1\wedge\cdots\wedge\mathrm{d}x_n,
\]
where $\Gamma$ is a real $n$-dimensional chain lying in $\mathds{C}^n$, $f(x)$ is a holomorphic function on $\mathds{C}^n$ and $\phi\in\mathcal{S}(\mathds{C}^n)$. Malgrange shows that these integrals also admit asymptotic expansions, but this time related with the monodromy of $f$ at some point of $f^{-1}(0)$. It turns out that integrals of this type recover the real integrals $I_\phi(\tau;f)$, see \eqref{Eq:ORIandOCI}, providing a connection between the poles of $Z_{\phi}(s,f)$ and the eigenvalues of the (complex) monodromy of $f$ at some point of $f^{-1}(0)$. This point of view has enlarged and enriched the theory of Archimedean zeta functions.
Finally we want to mention that there have been several generalizations of the classical Archimedean zeta functions $Z_{\phi}(s,f)$. Most of them use a variant proposed by C. Sabbah in \cite{Sab87B}, where he considers several complex variables $s_1,\ldots,s_l$ attached to the same number of functions, see Section \ref{Sec:LZFGen}. There have also been some generalizations of the oscillatory integrals $I_\phi(\tau;f)$ in the frame of multilinear (oscillatory) operators, see the work of Phong, Stein, and Sturm in \cite{PhStSt01}, and the work of Christ, Li, Tao and Thiele in \cite{ChLiTaTh}.
We now describe the content of the article. In Section \ref{Sec:ArqLZF} we present the basic definitions of the Schwartz space, the Weyl algebra and the Archimedean local zeta functions. We also present in this section our first proof of Theorem \ref{Thm:mer_cont} about the meromorphic continuation of $Z_{\phi}(s,f)$, by using resolution of singularities. We also present as an application of Theorem \ref{Thm:mer_cont} a proof of the fact that differential operators with constant coefficients admit fundamental solutions. Section \ref{Sec:BerPol} is devoted to the presentation of the Bernstein-Sato polynomial and the second proof of the meromorphic continuation of $Z_{\phi}(s,f)$ in Theorem \ref{Thm:Ber72}. We also discuss here the difference between the sets of candidate poles for the Archimedean zeta functions. Next, in Section \ref{Sec:OscInt} we introduce the oscillatory integrals, the Gel'fand--Leray form and some interesting fiber integrals. We state in Theorem \ref{Thm:AsyExp} the asymptotic expansion of $I_\phi(\tau;f)$ and relate some of its parameters with the log-canonical threshold of $f$. We briefly explain in Section \ref{Sec:RealOI} some of the problems when dealing with oscillatory integrals of real analytic phases. In Section \ref{Sec:CompxOI} we review some of the results of Malgrange about complex oscillatory integrals and their relations with $Z_{\phi}(s,f)$. Finally in Section \ref{Sec:SomeGen} we present some generalizations of Archimedean local zeta functions and oscillatory integrals.
We have tried to provide an ample list of references to the classic and current literature on the topics considered. We give no new proofs, and only a selection of proofs is presented. Our purpose is not just to expose the classic theory but also to call the attention of the community to some not so well known relations among Archimedean local zeta functions and oscillatory integrals. The most important example of this fact is the almost nonexistent connection between real zeta functions and multilinear oscillatory integrals.
\section{Archimedean Local Zeta Functions}\label{Sec:ArqLZF}
The first serious study of real and complex zeta functions was carried out in the first volume of the book \textsc{Generalized Functions} by I. M.~Gel'fand and G.~Shilov \cite{GelShi}. They begin their study with the following \textit{regularization} technique. Take $s\in\mathds{C}$ with $\Re(s)>-1$ and let $\phi$ be a smooth real function with compact support. Note that the integral
\[(x_+^s,\phi)=\int_0^\infty x^s \phi(x)\,\mathrm{d}x,
\]
is a holomorphic function on $\Re(s) > -1$. Moreover we can write
\begin{equation}\label{Eq:BasicExam}
(x_+^s,\phi)=\int_0^1 x^s\,[\phi(x)-\phi(0)]\,\mathrm{d}x+\frac{\phi(0)}{s+1}+\int_1^\infty x^s\phi(x)\,\mathrm{d}x.
\end{equation}
The first term of the RHS of \eqref{Eq:BasicExam} is defined for $\Re(s)>-2$, the second for $s\ne -1$ and the last one for all $s\in\mathds{C}$. Hence $(x_+^s,\phi)$ can be continued analytically to $\Re(s)>-2$ and $s\neq -1$. Analogously, $(x_+^s,\phi)$ can be continued analytically (\textit{regularized}) to the region $\Re(s)>-n-1$, $s\ne -1,-2,\ldots,-n $, by means of
\begin{gather*}
(x_+^s,\phi)=\int_0^1 x^s\left[\phi(x)-\phi(0)-\dots-\frac{x^{n-1}}{(n-1)!}\phi^{(n-1)}(0)\right]\mathrm{d}x
+\sum_{k=1}^n\frac{\phi^{(k-1)}(0)}{(k-1)!(s+k)}\\
+\int_1^\infty x^s\,\phi(x)\,\mathrm{d}x.
\end{gather*}
Finally, $(x_+^s,\phi)$ has an analytic continuation to the whole $\mathds{C}$ except at the points $s=-1, -2,\ldots$, where it has simple poles.
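For the Gaussian $\phi(x)=e^{-x^2}$ one has the closed form $(x_+^s,\phi)=\tfrac12\Gamma\!\big(\tfrac{s+1}{2}\big)$ for $\Re(s)>-1$, and by uniqueness of analytic continuation the regularized formula must reproduce this value beyond that half plane. A numerical sketch (our addition) at the sample point $s=-1.5$, where the defining integral itself diverges:

```python
import math

# phi(x) = exp(-x^2); closed form: (x_+^s, phi) = Gamma((s+1)/2)/2.
# The regularized formula must reproduce this value at s = -1.5.
def midpoint(g, a, b, n=100_000):
    dx = (b - a) / n
    return sum(g(a + (k + 0.5) * dx) for k in range(n)) * dx

s = -1.5
phi = lambda x: math.exp(-x * x)
regularized = (midpoint(lambda x: x**s * (phi(x) - phi(0.0)), 0.0, 1.0)
               + phi(0.0) / (s + 1)
               + midpoint(lambda x: x**s * phi(x), 1.0, 8.0))
closed_form = math.gamma((s + 1) / 2) / 2
assert abs(regularized - closed_form) < 1e-5
```

Note that for this even $\phi$ the residues at the even negative integers vanish, matching the pole set of $\Gamma\big(\tfrac{s+1}{2}\big)$.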
A complex analogue of $(x_+^s,\phi)$ is studied in Section 3.6 of the same book. Indeed, a considerable part of this text is devoted to the generalization of $(x_+^s,\phi)$, replacing $x_+$ by more elaborated polynomials in several variables over $\mathds{R}$ or $\mathds{C}$. During a talk at the ICM of 1954 \cite{Gel}, I. M.~Gelfand posed the following problem.
\begin{problem}
Let $P(x_1,\ldots,x_n)$ be a polynomial and consider the region in which $P>0$. Let $\phi(x_1,\ldots,x_n)$ be a smooth function with compact support. Show that the generalized function \[(P^s,\phi)=\int_{P>0} P(x_1,\ldots,x_n)^s \phi(x_1,\ldots,x_n)\,|\mathrm{d}x_1\ldots\mathrm{d}x_n|,\]
is a meromorphic function of $s$.
\end{problem}
In \cite{GelShi} the authors discovered that this problem requires a careful analysis of the singular set of $P$, $\mathrm{Sing}_P$. While some simple singularities could be handled, the general case appeared to require new ideas. I.M.~Gelfand was aware that the solution to this problem would imply the existence of fundamental solutions for differential operators with constant coefficients, see Section \ref{Sec:FunSol}. The new idea that settled the problem definitively was resolution of singularities. By using Hironaka's theorem \cite{Hir}, I.N.~Bernstein and S.I.~Gelfand in \cite{BerGel}, and independently M.~Atiyah in \cite{At}, reduced the general problem to the monomial case, which can be easily obtained from the example $(x_+^s,\phi)$. In order to present their proof we introduce some notation.
\subsection{Distributions and local zeta functions}\label{Sec:Dist_and_Loc}
Following Igusa \cite{IguBook}, we denote by
\[D_n=\mathds{C}[x,\partial/\partial x]=\mathds{C}[x_1,\ldots,x_n,\partial/\partial x_1,\ldots,\partial/\partial x_n]\]
the $n$-th Weyl algebra over $\mathds{C}$, i.e. the ring of differential operators with polynomial coefficients in $n$ variables over $\mathds{C}$. A smooth function $\phi\,:\mathds{R}^n \to \mathds{C}$, i.e. a function having derivatives of arbitrarily high order, is called a \textit{Schwartz function} if
\[||P\phi||_\infty\quad \text{is finite for every}\quad P\in D_n=\mathds{C}[x,\partial/\partial x],
\]
where $||\phi||_\infty=\sup_{x\in\mathds{R}^n}|\phi(x)|$. The set of Schwartz functions is called the \textit{Schwartz space} $\mathcal{S}(\mathds{R}^n)$. By identifying $\mathds{C}$ with $\mathds{R}^2$, we similarly define $\mathcal{S}(\mathds{C}^n)$. In any case, $\mathcal{S}(\mathds{K}^n)$ ($\mathds{K}=\mathds{R}$ or $\mathds{C}$), forms a $\mathds{C}$-vector space. Moreover, $\mathcal{S}(\mathds{K}^n)$ is a topological vector space whose topology is induced by the family of seminorms
\[||\phi||_{i,j} = \sup_{x\in\mathds{K}^n}|x^i(\partial/\partial x)^j\phi(x)|,\]
for all $i, j \in\mathds{Z}_{\geq 0}^n$. As a metric space, $\mathcal{S}(\mathds{K}^n)$ is complete and the continuous linear functionals on it form the space of tempered distributions $\mathcal{S}^\prime(\mathds{K}^n)$. There are also local versions of these spaces for an open set $U$ of $\mathds{K}^n$, see \cite[Sect. 2.1]{Hor}. It is well known that the space of smooth functions with compact support is a dense subspace of $\mathcal{S}(\mathds{K}^n)$, see e.g. \cite[Lemma 5.2.2]{IguBook}.
\begin{definition}
The local zeta function of $f(x) \in \mathds{K}[x_1,\ldots, x_n]\setminus \mathds{K}$ and $\phi\in\mathcal{S}(\mathds{K}^n)$ is the element of $\mathcal{S}^\prime(\mathds{K}^n)$ defined as
\[
Z_{\phi}(s,f)=(f^s,\phi)=\int\limits_{\mathds{K}^{n}\setminus f^{-1}(0)}\phi(x)\, |f(x)|_{\mathds{K}}^s\, |\mathrm{d}x|,
\]
for $s\in\mathds{C}$ with $\Re(s)>0$.
\end{definition}
This definition is essentially the same as the one given by Igusa in \cite{IguCom}. Since the complex parameter $s$ appears as a power in the distribution induced by $|f|_\mathds{K}$, I.M.~Gelfand used in \cite{Gel} the terminology \textit{complex power} for $Z_{\phi}(s,f)$. Note that the definition given in the introduction by \eqref{Eq:DefIntro} is slightly more general than the previous one.
We now present our first proof of the meromorphic continuation of $Z_{\phi}(s,f)$, given originally by \cite{BerGel} and \cite{At}. We follow closely the presentation of \cite{IguBook}. For $f(x)\in\mathds{K}[x_1,\ldots, x_n]\setminus\mathds{K}$, we denote by $\Sing_f$ the singular locus of $f$, meaning the set of
roots of $f$ in $\mathds{K}^n$ at which all the partial derivatives simultaneously vanish. A \textit{resolution of singularities} of $f^{-1}(0)$ is a pair $(X,\pi)$ where $X$ is a smooth manifold and $\pi: X\to\mathds{K}^n$ is a proper map. In $X$ there is a finite set $\mathcal{E}=\{E\}$ of closed submanifolds of $X$ of codimension $1$ with a pair of positive integers $(N_{E},\nu_{E})$ assigned to each $E$, which are called the numerical data of the resolution. The map $\pi$ has the following properties: (i) $(f\circ \pi)^{-1}(0)=\cup_{E\in \mathcal{E}}E$, and $\pi$ induces an isomorphism
\[X\setminus \pi^{-1}\left( \mathrm{Sing}_{f} \right) \rightarrow \mathds{K}^n\setminus \mathrm{Sing}_{f};\]
(ii) at every point $p$ of $X$ if $E_{1},\ldots,E_{m}$ are all the $E$ in $\mathcal{E}$ containing $p$ with local equations $y_{1},\ldots,y_{m}$ around $p$,
then there exist local coordinates of $X$ around $p$ of the form $\left(
y_{1},\ldots,y_{m},y_{m+1},\ldots,y_{n}\right) $ such that
\begin{equation}\label{Eq:localeqs}
\left( f\circ \pi\right) \left( y\right) =\varepsilon\left( y\right) {\displaystyle\prod\limits_{i=1}^{m}}
y_{i}^{N_{i}},\quad
\pi^{\ast}\Bigg(\bigwedge\limits_{1\leq i\leq n}\mathrm{d}x_i\Bigg)=\eta\left( y\right) \left(
{\displaystyle\prod\limits_{i=1}^{m}}
y_{i}^{\nu_{i}-1}\right)
\bigwedge\limits_{1\leq i\leq n}
\mathrm{d}y_{i}
\end{equation}
on some neighborhood of $p$, in which $\varepsilon\left( y\right) $,
$\eta\left( y\right) $ are units of the local ring $\mathcal{O}_{p}$ of $X$
at $p$. In particular $\cup_{E\in \mathcal{E}}E$ has normal crossings.
Now consider our integral $Z_{\phi}(s,f)$ and let $\left\vert\bigwedge\nolimits_{1\leq i\leq n}\mathrm{d}x_{i}\right\vert $ denote the measure induced by the differential form
$\bigwedge\nolimits_{1\leq i\leq n}\mathrm{d}x_{i}$ on $\mathds{K}^{n}$, which agrees with the Haar measure of $\mathds{K}^{n}$. Then
\[Z_{\phi}\left( s,f\right) =\int\limits_{\mathds{K}^{n}\setminus f^{-1}\left(0\right)}\phi\left( x\right)\,| f(x)|_{\mathds{K}}^{s}\left\vert\bigwedge_{1\leq i\leq n}\mathrm{d}x_{i}\right\vert.
\]
Note that by density it is enough to prove the theorem for $\phi$ a smooth function with compact support. We use $\pi$ as a change of variables in our integral to get
\[
Z_{\phi}\left( s,f\right)=\bigintsss\limits_{X\setminus \pi^{-1}(f^{-1}(0))} \phi(\pi(y))\,|f(\pi(y))|_\mathds{K}^{s}\,\left\vert\pi^{\ast}\Bigg(\bigwedge_{1\leq i\leq n}
\mathrm{d}x_{i}\Bigg) (y)\right\vert.
\]
At every point $p$ of $X\setminus \pi^{-1}(f^{-1}(0))$ we can choose a chart $U$ such that \eqref{Eq:localeqs} holds. Moreover
we choose a small neighborhood $U_p$ of $p$ over which the above local coordinates are valid and the functions $\varepsilon$ and $\eta$ are invertible. Since the map $\pi$ is proper, the set $C = \pi^{-1}(\mathrm{Supp}(\phi))$ is compact, and we consider a partition of unity on $C$ subordinate to the cover $\{U_p\}$. By this we mean that for every element $U_p$ of the cover there exists a finite set $\{\rho_{p,k}(y)\}_{k\in K}$ of smooth functions such that for every $y$ in $C$ only finitely many $\rho_{p,k}$ are different from zero, and $\sum_{k\in K} \rho_{p,k}(y)=1$. This implies that
\[
Z_{\phi}(s,f)=\sum\limits_{k\in K} \bigintssss_{U_p} \big(\phi(\pi(y))\cdot \rho_{p,k}(y)\,|\varepsilon(y)|_\mathds{K}^s\, \eta(y)\big)
\prod\limits_{i=1}^{m}|y_{i}|_{\mathds{K}}^{N_{i}s+\nu_{i}-1}\left\vert\bigwedge\limits_{1\leq i\leq n}\mathrm{d}y_{i}\right\vert.
\]
In every neighborhood $U_p$ the function $\phi(\pi(y))\cdot \rho_{p,k}(y)\,|\varepsilon(y)|_\mathds{K}^s\, \eta(y)$ can be considered as a smooth function with compact support. Therefore, by using the $n$-dimensional version of the example $(x_+^s,\phi)$, see e.g. \cite[Lemma 7.3]{AG-ZV}, one gets the following theorem, in the form stated by Igusa \cite[Theorem~5.4.1]{IguBook}.
\begin{theorem}[{\cite[Introduction]{At},\cite[Theorem 1]{BerGel}}]\label{Thm:mer_cont}
Let $f(x) \in \mathds{K}[x_1,\ldots, x_n]\setminus \mathds{K}$ and $\phi\in\mathcal{S}(\mathds{K}^n)$. Assume that $(X,\pi)$ is a resolution of singularities of $f^{-1}(0)$ with total transform $\mathcal{E}=\{E\}$. Let $(N_{E},\nu_{E})$ be the numerical data of $(X,\pi)$. Then $Z_{\phi}(s,f)$ admits a meromorphic continuation to the whole complex plane with poles contained in the set
\begin{equation}\label{Eq:Poles}
\bigcup_{E\in\mathcal{E}} -\frac{\nu_{E}+\mathds{Z}_{\geq 0}}{qN_E},
\end{equation}
where $q=1$ if $\mathds{K}=\mathds{R}$ and $q=2$ if $\mathds{K}=\mathds{C}$. Moreover, the orders of the candidate poles are at most equal to $\min \{n, \Card(\mathcal{E})+1\}$.
\end{theorem}
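As a one-variable illustration of the theorem (our addition, not from the text), take $\mathds{K}=\mathds{R}$, $f(x)=x^{2}$ and $\phi(x)=e^{-x^{2}}$. The exceptional divisor $\{x=0\}$ has numerical data $(N,\nu)=(2,1)$, so the candidate poles are $-(1+k)/2$ for $k\in\mathds{Z}_{\geq 0}$; on the other hand $Z_{\phi}(s,x^{2})=\int_{\mathds{R}}|x|^{2s}e^{-x^{2}}\,|\mathrm{d}x|=\Gamma(s+\tfrac12)$, whose actual poles $-\tfrac12-k$ form a proper subset of the candidates:

```python
import math

# Z_phi(s, x^2) with phi = exp(-x^2) equals Gamma(s + 1/2) for Re(s) > 0.
def Z(s, upper=8.0, n=100_000):
    dx = upper / n
    return 2 * sum(((k + 0.5) * dx) ** (2 * s) * math.exp(-((k + 0.5) * dx) ** 2)
                   for k in range(n)) * dx

for s in (0.25, 1.0, 2.3):
    assert abs(Z(s) - math.gamma(s + 0.5)) < 1e-4
```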
\subsection{Fundamental Solutions of Differential Operators}\label{Sec:FunSol}
The proof by M. Atiyah in \cite{At} of Theorem \ref{Thm:mer_cont} contains other interesting consequences of the meromorphic continuation of $Z_{\phi}(s,f)$. For instance, one may give another proof of the following result of Hörmander and Lojasiewicz, known as the division of distributions problem. Again we follow \cite[Sect. 5.5]{IguBook}.
\begin{theorem}[{\cite[Corollary~1]{At}}]\label{Thm:At}
If $f(x) \in \mathds{C}[x_1,\ldots, x_n]\setminus \mathds{C}$, there exists an element $T$ of $\mathcal{S}^\prime(\mathds{R}^n)$ satisfying $fT = 1$.
\end{theorem}
The proof is not difficult, and according to Igusa it is an elegant one. A nice corollary of Theorem \ref{Thm:At} is the remarkable fact that there exist fundamental (or elementary) solutions for differential operators with constant coefficients, as we shall see below. First we recall that for $P\in D_n=\mathds{K}[x,\partial/\partial x]$, its \textit{adjoint operator} $P^\ast$ is the image of $P$ under the $\mathds{K}$-involution of $D_n$ given by $x_i^\ast=x_i$ and $(\partial/\partial x_i)^\ast=-\partial/\partial x_i$. In particular this implies that $(P_1P_2)^\ast=P_2^\ast P_1^\ast$ and $(P_1^\ast)^\ast=P_1$ for every
$P_1, P_2\in D_n$.
This notion of adjoint can be extended up to $\mathcal{S}^\prime(\mathds{K}^n)$. In particular one has for every $\phi\in\mathcal{S}(\mathds{R}^n)$ that
\[1^\ast(\phi)=\phi(0)=:\delta_0(\phi),\]
which means that $1^\ast=\delta_0$ in $\mathcal{S}^\prime(\mathds{R}^n)$. The element $\delta_0$ of $\mathcal{S}^\prime(\mathds{R}^n)$ is known as the \textit{Dirac measure of} $\mathds{R}^n$ \textit{supported by} $0$. Now consider $f(x) \in \mathds{C}[x_1,\ldots, x_n]\setminus \mathds{C}$, say $f(x)=\sum_{k\in(\mathds{Z}_{\geq 0})^n}c_k x^k$ for $c_k\in\mathds{C}$, and define an element $P$ of $\mathds{C}[\partial/\partial x_1,\ldots,\partial/\partial x_n]$ (i.e. a \textit{differential operator with constant coefficients}) by
\[P=\sum_k \frac{1}{(2\pi\mathrm{i})^{|k|}}c_k(\partial/\partial x)^k.\]
It is a simple matter to show that for any $T$ in $\mathcal{S}^\prime(\mathds{K}^n)$ one has $(fT)^\ast = PT^\ast$.
\begin{cor}[{\cite[Corollary~3]{At}}]\label{Cor:At}
If $P$ is a differential operator with constant coefficients, then there exists an elementary solution for $P$, i.e., an element $S$ of
$\mathcal{S}^\prime(\mathds{R}^n)$ satisfying $PS = \delta_0$.
\end{cor}
\begin{proof}
Assume that \[P=\sum_k \frac{1}{(2\pi\mathrm{i})^{|k|}}c_k(\partial/\partial x)^k,\] for some $c_k\in\mathds{C}$. Put $f(x)=\sum_{k\in(\mathds{Z}_{\geq 0})^n}c_k x^k$ in $\mathds{C}[x_1,\ldots, x_n]\setminus \mathds{C}$; then by Theorem \ref{Thm:At} there is an element $T$ of $\mathcal{S}^\prime(\mathds{R}^n)$ satisfying $fT = 1$. By taking $S=T^\ast$, we have that $S$ belongs to $\mathcal{S}^\prime(\mathds{R}^n)$ and moreover $PS =(fT)^\ast = 1^\ast=\delta_0$.
\end{proof}
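In the simplest one-dimensional case (an illustrative example we add here) the operator $P=\mathrm{d}/\mathrm{d}x$ has the Heaviside function $H$ as an elementary solution: acting weakly, $(H')(\phi)=-\int_0^\infty\phi'(x)\,\mathrm{d}x=\phi(0)=\delta_0(\phi)$. A numerical check with $\phi(x)=e^{-x^2}$:

```python
import math

# Weak check that H' = delta_0: -\int_0^infty phi'(x) dx = phi(0).
phi = lambda x: math.exp(-x * x)
dphi = lambda x: -2.0 * x * math.exp(-x * x)   # phi'

n, upper = 100_000, 8.0
dx = upper / n
weak_derivative = -sum(dphi((k + 0.5) * dx) for k in range(n)) * dx
assert abs(weak_derivative - phi(0.0)) < 1e-6
```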
\section{The Bernstein-Sato Polynomial}\label{Sec:BerPol}
So far our approach to Archimedean local zeta functions has involved only objects from analysis; in this section we will see how $Z_{\phi}(s,f)$ is connected with objects of a more algebraic nature. Algebraic analysis, or the algebraic theory of linear differential systems, began in the 60's with the school of M. Sato at Kyoto, see \cite{Sat90} for the translation of his work from that period. A similar development was taking place in Moscow, where I.N. Bernstein was working on similar matters, see \cite{Ber68,Ber71}. The whole mathematical construction is known today as $\mathcal{D}$-module theory, and it has connections with many areas of mathematics such as algebraic geometry, singularity theory, topology of varieties, representation theory and of course differential equations, among others. For more on the history of the subject and its connections we recommend the surveys \cite{Gra,Wal,AlJeNuB} or the books \cite{Bjo,Cou}.
In this section we will only present the definition of the Bernstein-Sato polynomial and discuss some of the connections with $Z_{\phi}(s,f)$. Let us start by considering the Weyl algebra $D_n=\mathds{K}[x,\partial/\partial x]$. Note that $\mathds{K}(x)$ is a $D_n$-module and so is $\mathds{K}[x]$ because it is mapped to
itself under operations of $D_n$. Moreover, we pick another variable $s$ (in addition to $x_1,\ldots,x_n$) and consider the Weyl algebra $D=\mathds{K}(s)[x,\partial/\partial x]$. Now we take an element $f(x)\in \mathds{K}[x]\setminus\{0\}$ and denote by $S$ the multiplicative subset of $\mathds{K}[x]\setminus\{0\}$ generated by $f(x)$, then it can be shown that
\[
S^{-1}(\mathds{K}(s)[x])=\mathds{K}(s)[x_1,\ldots,x_n,1/f(x)],
\]
is a $D$-module. Igusa presents Bernstein's main theorem in the following form.
\begin{theorem}[{\cite[Lemma~1]{Ber68}}]\label{Thm:Ber68}
There exists an element $P_0$ of $D$ satisfying
\begin{equation}\label{Eq:Berns}
P_0\cdot f(x) = 1.
\end{equation}
\end{theorem}
Multiplying both sides of \eqref{Eq:Berns} by a polynomial $b(s)$ of $\mathds{K}[s]\setminus\{0\}$ that clears the denominators of the coefficients of $P_0$, we get
\[P\cdot f(x) = b(s),\]
for some polynomial $P$ in the `variables' $s, x_1,\ldots,x_n,\partial/\partial x_1,\ldots,\partial/\partial x_n$ and with coefficients in $\mathds{K}$. Specializing $s$ to an integer, we obtain the following functional equation
\begin{equation}\label{Eq:Func_Eq}
P(s,x,\partial/\partial x)\cdot f(x)^{s+1} = b(s)f(x)^s,
\end{equation}
which is proved in detail in \cite{Ber72}. The set of all polynomials $b(s)\in\mathds{K}[s]$ for which \eqref{Eq:Func_Eq} holds (for some operator $P$) forms an ideal, and the unique monic generator of this ideal is called the \textit{Bernstein-Sato polynomial} $b_f(s)$ \textit{of} $f$. The meromorphic continuation of the real $Z_{\phi}(s,f)$ is stated by Bernstein in the following form.
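It may help to see the functional equation \eqref{Eq:Func_Eq} in the simplest nontrivial case. The sketch below is our own illustration, not taken from the references: for $f(x)=x^2$ in one variable, the standard choice is $P=\tfrac{1}{4}\,\mathrm{d}^2/\mathrm{d}x^2$, which gives $P\cdot f^{s+1}=(s+1)(s+\tfrac{1}{2})f^{s}$, so $b_{x^2}(s)=(s+1)(s+\tfrac{1}{2})$. We check the identity numerically via finite differences.

```python
# Classic one-variable example (our illustration, not from the text above):
# for f(x) = x^2 one may take P = (1/4) d^2/dx^2, and then
#   P . f(x)^{s+1} = (s+1)(s+1/2) f(x)^s,
# so b_{x^2}(s) = (s+1)(s+1/2).

def second_derivative(g, x, h=1e-4):
    """Central finite-difference approximation of g''(x)."""
    return (g(x + h) - 2.0 * g(x) + g(x - h)) / h**2

def check_functional_equation(s, x):
    """Compare (1/4)(d/dx)^2 x^{2(s+1)} with (s+1)(s+1/2) x^{2s}."""
    g = lambda t: t ** (2 * (s + 1))
    lhs = 0.25 * second_derivative(g, x)
    rhs = (s + 1) * (s + 0.5) * x ** (2 * s)
    return lhs, rhs

for s in (1, 2, 3):
    lhs, rhs = check_functional_equation(s, 1.7)
    assert abs(lhs - rhs) < 1e-4 * abs(rhs)
```

The same computation done symbolically is how one typically guesses $b_f(s)$ for monomials: for $f=x^m$ one finds $b_f(s)=\prod_{j=1}^m(s+j/m)$.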
\begin{theorem}[{\cite[Theorem~1\textquotesingle]{Ber72}}]\label{Thm:Ber72}
For a complex number $s$ with $\Re(s)>0$, consider $Z_{\phi}(s,f)$, the local zeta function of $f(x) \in \mathds{R}[x_1,\ldots, x_n]\setminus \mathds{R}$ and $\phi\in\mathcal{S}(\mathds{R}^n)$. Suppose that the Bernstein-Sato polynomial $b_f(s)$ of $f$ factors as
\[b_f(s)=\prod_{\lambda}(s+\lambda),\]
and denote by $\Gamma(s)$ the Gamma function, see \eqref{Eq:Gamma}. Then $\frac{Z_{\phi}(s,f)}{\prod_{\lambda}\Gamma(s+\lambda)}$ has a holomorphic
continuation to the whole $\mathds{C}$.
\end{theorem}
\begin{proof}
In order to avoid complications with the absolute value we first consider the following simplification. Let $f_+^s$ be the function defined by
\[f_+^s(x)=\begin{cases}
f(x)^s & \text{ for } f(x)\geq 0\\
0 & \text{ for } f(x)<0.\\
\end{cases}\]
Then by the functional equation \eqref{Eq:Func_Eq} one has
\begin{align*}
b_f(s)\,Z_{\phi}(s,f_+)&=\int\limits_{\mathds{R}^{n}\setminus f^{-1}(0)}\phi(x)\, b_f(s)f_{+}^s(x)\, |\mathrm{d}x|=\int\limits_{\mathds{R}^{n}\setminus f^{-1}(0)}\phi(x)\, P(s)\,f_{+}^{s+1}(x)\, |\mathrm{d}x|\\
&=\int\limits_{\mathds{R}^{n}\setminus f^{-1}(0)}P^\ast(s)\cdot\phi(x)\, f_{+}^{s+1}(x)\, |\mathrm{d}x|.
\end{align*}
Here $P^\ast$ denotes the adjoint operator of $P$, and one has that $P^\ast(s)\cdot\phi(x)\in\mathcal{S}(\mathds{R}^n)$. We can repeat the
above process and, after $r$ steps, we get
\[b_f(s+r-1)\cdots b_f(s)\,Z_{\phi}(s,f_+)=\int\limits_{\mathds{R}^{n}\setminus f^{-1}(0)}\phi_r(x)\, f_{+}^{s+r}(x)\, |\mathrm{d}x|,
\]
where $\phi_r(x)=P^\ast(s + r - 1)\cdots P^\ast(s)\cdot\phi\in\mathcal{S}(\mathds{R}^n)$. Now, by a simple calculation one has that
\[\prod_{\lambda}\Gamma(s+\lambda)\,b_f(s) \cdots b_f(s + r - 1)=\prod_{\lambda}\Gamma(s+r+\lambda),
\]
which in turn implies that for $s$ with $\Re(s)$ sufficiently large
\begin{equation}\label{Eq:ProofBern}
\frac{Z_{\phi}(s,f_+)}{\prod_{\lambda}\Gamma(s+\lambda)}=\frac{1}{\prod_{\lambda}\Gamma(s+r+\lambda)}\,\int\limits_{\mathds{R}^{n}\setminus f^{-1}(0)}\phi_r(x)\, f_{+}^{s+r}(x)\, |\mathrm{d}x|.
\end{equation}
Note that, while the LHS of \eqref{Eq:ProofBern} is holomorphic when $\Re(s)>0$, the RHS is holomorphic on $\Re(s)>-r$. Since $r\in\mathds{Z}_{\geq 0}$ is arbitrary, one concludes that $\frac{Z_{\phi}(s,f_+)}{\prod_{\lambda}\Gamma(s+\lambda)}$ has a holomorphic continuation to $\mathds{C}$.
\end{proof}
The previous proof can be easily adapted for the complex $Z_{\phi}(s,f)$, see \cite[Theorem~5.3.2]{IguBook}.
\subsection{Poles of $Z_{\phi}(s,f)$.} By Theorem \ref{Thm:Ber72} the poles of $Z_{\phi}(s,f)$ are contained among the translates of the roots of $b_f(s)$ by nonpositive integers. This agrees with Theorem \ref{Thm:mer_cont}, since the roots of the Bernstein-Sato polynomial of $f$ are negative rational numbers \cite{Kas}. Moreover one has
\begin{theorem}[{\cite[Theorem~1]{Lic}}]\label{Thm:Ka-Li}
If $f(x) \in \mathds{C}[x_1,\ldots, x_n]\setminus \mathds{C}$ and $(X,\pi)$ is a resolution of singularities of $f^{-1}(0)$, with the notation of Section \ref{Sec:Dist_and_Loc}, every root of $b_f(s)$ is of the form
\[\lambda=-\frac{\nu_E+k}{N_E}, \]
for some $E\in \mathcal{E}$ and some integer $k\geq 0$.
\end{theorem}
In most cases, Theorem \ref{Thm:Ka-Li} gives only a large list of candidate roots for $b_f(s)$, and the problem of determining the actual roots is largely open. Of course, the same problem translates to the candidate poles of $Z_{\phi}(s,f)$: in general, Theorem \ref{Thm:mer_cont} gives a very large list of candidate poles, and the determination of the true poles is hard. For instance, it is known that, when $f$ is a reduced plane curve singularity or an isolated quasi-homogeneous singularity, the set of poles of the complex $Z_{\phi}(s,f)$ coincides exactly with the set $\{\lambda-r\mid \lambda
\text{ is a root of } b_f(s),\ r\in\mathds{Z}_{\geq 0}\}$, see \cite{Loe85}. See also \cite{Bla} for some recent results about the poles of the complex $Z_{\phi}(s,f)$ in the case of curves.
It is also known that the roots of the Bernstein-Sato polynomial are deeply connected with the geometry of $f$. Malgrange \cite{Mal} showed that if $\lambda$ is a root of $b_f(s)$, then $\exp(2\pi\mathrm{i}\lambda)$ is an eigenvalue of the local monodromy of $f$ at some point of $f^{-1}(0)$, and that all eigenvalues are obtained in this way, see \cite{Bar84}. We discuss some of these results in Section \ref{Sec:CompxOI}.
We started this section by mentioning the work of M. Sato in the 60's. Part of his interest in the theory of $\mathcal{D}$-modules was motivated by another mathematical theory, namely that of prehomogeneous vector spaces. If $G$ is a connected reductive algebraic subgroup of $GL_n(\mathds{C})$ acting transitively on the complement of
an absolutely irreducible hypersurface $f^{-1}(0)$ in $\mathds{C}^n$, then $(G,\mathds{C}^n)$ is called a \textit{prehomogeneous vector space}. In this case one may show that $f(x)$ is a homogeneous polynomial and also a \textit{relative invariant} of $G$, that is,
\[ f(g\cdot x) = \nu(g)f(x),\quad\text{for all }\, g \in G,\]
where $\nu$ belongs to $\Hom(G,\mathds{C}^\times)$. See \cite{Sat90} for a translation of the original work of M. Sato and \cite[Chap. 6]{IguBook} for a detailed exposition. The theory of prehomogeneous vector spaces encompasses Archimedean zeta functions of some group invariants, see e.g. \cite{Sat74,Kim82,Gra}, and it can be used to compute some zeta functions explicitly, as in the following example, given at the end of \cite[Chap. 6]{IguBook}:
\[\int\limits_{\mathds{C}^{n^2}} \exp(-2\pi \ltrans{x} \bar{x})\, |\det(x)|_\mathds{C}^{s}\,|\mathrm{d}x|=\frac{1}{(2\pi)^{ns}}\cdot\prod_{r=1}^n\frac{\Gamma(s+r)}{\Gamma(r)}.
\]
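The $n=1$ case of the formula above can be checked numerically. The sketch below assumes Igusa's conventions, namely $|x|_\mathds{C}=x\bar{x}$ and $|\mathrm{d}x|=2\,\mathrm{d}u\,\mathrm{d}v$ on $\mathds{C}\simeq\mathds{R}^2$; in polar coordinates the left-hand side then reduces to a one-dimensional radial integral.

```python
import math

# Numerical sketch of the n = 1 case of the formula above, assuming
# Igusa's conventions: |x|_C = x conj(x) and |dx| = 2 du dv on C ~ R^2.
# Passing to polar coordinates, the integral reduces to
#   4*pi * Int_0^oo exp(-2*pi*r^2) r^(2s+1) dr,
# which should equal Gamma(s+1) / (2*pi)^s.

def zeta_det_n1(s, R=4.0, n_steps=4000):
    """Simpson's rule for the radial integral on [0, R]."""
    h = R / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        r = i * h
        w = 1.0 if i in (0, n_steps) else (4.0 if i % 2 else 2.0)
        total += w * math.exp(-2.0 * math.pi * r * r) * r ** (2 * s + 1)
    return 4.0 * math.pi * total * h / 3.0

for s in (1, 2):
    expected = math.gamma(s + 1) / (2.0 * math.pi) ** s
    assert abs(zeta_det_n1(s) - expected) < 1e-6 * expected
```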
\section{Oscillatory Integrals}\label{Sec:OscInt}
Denote by $\varepsilon(x)$ the standard additive character of $\mathds{K}=\mathds{R}$ or $\mathds{C}$, which is defined as
\[\varepsilon(x)=\begin{cases}
\exp(2\pi\mathrm{i}x) & \text{ for } x\in\mathds{R},\\
\exp(4\pi\mathrm{i}\Re(x)) & \text{ for } x\in\mathds{C}.
\end{cases}\]
\begin{definition}\label{Def:OscInt}
If $f(x) \in \mathds{K}[x_1,\ldots, x_n]\setminus \mathds{K}$ and $\phi\in\mathcal{S}(\mathds{K}^n)$, then the \textit{oscillatory integral} attached to $(f,\phi)$ is defined as
\[I_\phi(\tau;f)=\int\limits_{\mathds{K}^{n}\setminus f^{-1}(0)}\phi(x)\ \varepsilon(\tau\cdot f(x))\ |\mathrm{d}x|,
\]
for $\tau\in\mathds{K}$.
\end{definition}
This type of integral appears frequently in a great number of situations of physical and mathematical interest, see e.g. \cite[Chapter 6]{AG-ZV} and \cite[Chapter VIII]{Ste}. More generally, $f$ may be taken to be a $\mathds{K}$-analytic function, called the \textit{phase}, while the function $\phi$ is called the \textit{amplitude}. One of the main problems in the theory of oscillatory integrals is to study the asymptotic behaviour of $I_\phi(\tau;f)$ as the norm of the parameter $\tau$ tends to $\infty$. The stationary phase principle states that the main contribution to the asymptotics is given by neighbourhoods of the critical points of the phase. Moreover, this asymptotic behaviour is controlled by the poles of
$Z_{\phi}(s,f)$.
It is also known that the integral $I_\phi(\tau;f)$ can be rewritten in terms of one-dimensional integrals using Gel'fand-Leray forms, defined
as follows. Assume that $f$ is a $\mathds{K}$-analytic function; then the Gel'fand-Leray form $\omega_f(x,t)$ is the unique $(n-1)$-form on
the level hypersurface $X_t = \{x\mid f(x)=t\}$ $(t\neq0)$ which satisfies
\[\mathrm{d}f\wedge \omega_f(x,t)=\mathrm{d}x_1\wedge\cdots\wedge\mathrm{d}x_n.\]
If $C_f:=\{x\in\mathds{K}^n\mid \nabla f(x) =0\}$ denotes \textit{the critical set of} $f$, and one assumes that $C_f\cap\mathrm{Supp}(\phi)\subset f^{-1}(0)$, then
\begin{equation}\label{Eq:FibInt}
I_\phi(\tau;f)=\int_{\mathds{K}} \varepsilon(\tau\cdot t)\Bigg(\int_{X_t}\phi(x)\cdot \omega_f(x,t) \Bigg) |\mathrm{d}t|.
\end{equation}
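As a quick sanity check of the defining relation for $\omega_f(x,t)$, consider $f(x,y)=x^2+y^2$ (an example of ours, not taken from the references), for which one may take $\omega_f=(x\,\mathrm{d}y-y\,\mathrm{d}x)/(2(x^2+y^2))$ on each level circle; the coefficient of $\mathrm{d}x\wedge\mathrm{d}y$ in $\mathrm{d}f\wedge\omega_f$ should then be identically $1$.

```python
# Hand-checkable example (our illustration, not from the text above):
# for f(x, y) = x^2 + y^2 one may take
#   omega_f = (x dy - y dx) / (2 (x^2 + y^2))
# on each level circle X_t, t > 0.  Writing df = a1 dx + a2 dy and
# omega_f = b1 dx + b2 dy, the coefficient of dx ^ dy in df ^ omega_f
# is a1*b2 - a2*b1, which must be identically 1.

def wedge_coefficient(x, y):
    a1, a2 = 2.0 * x, 2.0 * y                  # df = 2x dx + 2y dy
    r2 = x * x + y * y
    b1, b2 = -y / (2.0 * r2), x / (2.0 * r2)   # Gel'fand-Leray form
    return a1 * b2 - a2 * b1

for (x, y) in [(1.0, 0.0), (0.3, -0.7), (2.0, 5.0)]:
    assert abs(wedge_coefficient(x, y) - 1.0) < 1e-12
```

In this example $\omega_f$ restricts to $\tfrac{1}{2}\mathrm{d}\theta$ on each circle, which is how the fiber integrals in \eqref{Eq:FibInt} become explicit.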
\begin{theorem}\label{Thm:AsyExp}
Consider $f(x) \in \mathds{K}[x_1,\ldots, x_n]\setminus \mathds{K}$ and $\phi\in\mathcal{S}(\mathds{K}^n)$. If we assume that
\[C_f\cap\mathrm{Supp}(\phi)\subset f^{-1}(0),\]
then the oscillatory integral $I_\phi(\tau;f)$ can be expanded in an asymptotic series of the form
\begin{equation}\label{Eq:AsyExp}
\sum_{\alpha}\sum_{k=0}^{n-1}S_{k,\alpha}(\phi)\tau^\alpha\,(\ln \tau)^k,\quad\text{as } |\tau|_\mathds{K}\to\infty.
\end{equation}
Here the coefficients $S_{k,\alpha}(\phi)$ are obtained by applying distributions $S_{k,\alpha}\in\mathcal{S}^\prime(\mathds{K}^n)$ to $\phi$, and the parameter $\alpha$ runs through the arithmetic progressions of the form \eqref{Eq:Poles}, given by the poles of $Z_{\phi}(s,f)$.
\end{theorem}
The previous statement is a reformulation of \cite[Theorem~1.6,~Ch.~III]{IguTata}, where three objects are related: the asymptotic expansions of $I_\phi(\tau;f)$ when $|\tau|_\mathds{K}\to\infty$, the asymptotic expansions of the fiber integral
\[\int_{X_t}\phi(x)\cdot \omega_f(x,t), \text{ when } |t|_\mathds{K}\to 0,\]
and the poles of $Z_{\phi}(s,f)$. For the proof, a functional space is defined for each of the aforementioned objects, and then it is shown that these spaces are homeomorphic by using the Fourier and Mellin transforms. The proof also covers the case of $p$-adic fields.
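The decay rate predicted by the leading term of \eqref{Eq:AsyExp} can be observed numerically in one variable. The sketch below is our own illustration: it takes the phase $f(x)=x^2$ and the Gaussian amplitude $\phi(x)=e^{-x^2}$ (rapidly decreasing rather than compactly supported), for which the leading exponent is $\alpha=-1/2$, so $|I_\phi(\tau;f)|$ should decay like $\tau^{-1/2}$.

```python
import cmath, math

# Numerical sketch (our illustration): phase f(x) = x^2 and Gaussian
# amplitude phi(x) = exp(-x^2).  The leading term of the expansion
# behaves like C * tau^(-1/2), so |I(2 tau)| / |I(tau)| -> 2^(-1/2).

def osc_integral(tau, a=5.0, n_steps=50000):
    """Trapezoidal rule for Int exp(-x^2) exp(2 pi i tau x^2) dx."""
    h = 2.0 * a / n_steps
    total = 0.0 + 0.0j
    for i in range(n_steps + 1):
        x = -a + i * h
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * math.exp(-x * x) * cmath.exp(2j * math.pi * tau * x * x)
    return total * h

ratio = abs(osc_integral(10.0)) / abs(osc_integral(5.0))
assert abs(ratio - 2.0 ** -0.5) < 0.01
```

For this particular phase the integral is a Fresnel-Gaussian integral with closed form $\sqrt{\pi/(1-2\pi\mathrm{i}\tau)}$, so the $\tau^{-1/2}$ decay can also be read off directly.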
Another relevant characteristic of the asymptotic series of $I_\phi(\tau;f)$ is the \textit{oscillation index} $\beta_f$ of $f$, which is the leading power of the parameter $\tau$ in the expansion \eqref{Eq:AsyExp}. It turns out that the negative of $\beta_f$ coincides in some cases with another relevant invariant of the singularity defined by $f$: the \textit{log canonical threshold} of $f$. It may be defined in terms of the resolution $(X,\pi)$ of $f^{-1}(0)$ as
\[\mathrm{lct}(f):=\min_{E\in\mathcal{E}}\frac{\nu_{E}+1}{N_E},\]
where we have used the notation of \eqref{Eq:Poles} in Section \ref{Sec:Dist_and_Loc}. It is well known that the largest possible pole of $Z_{\phi}(s,f)$ is exactly $-\mathrm{lct}(f)$.
For properties and more information about $\mathrm{lct}(f)$ and its role in birational geometry see e.g. \cite{Mus06,Mus12}.
\subsection{Real Oscillatory Integrals}\label{Sec:RealOI}
In the realm of harmonic analysis, real oscillatory integrals of the type $I_\phi(\tau;f)$ have been an essential player for a long time. They have also been the basis for the definition of classes of differential operators and have become a classical subject in harmonic
analysis, partial differential equations and the geometry (geometric analysis) of manifolds. For instance, in \cite{Ste} one finds the following class of oscillatory integrals of the second kind (the real $I_\phi(\tau;f)$ being those of the first kind):
\[T_\tau(f)(x) = \int\limits_{\mathds{R}^{n}}\exp(\mathrm{i}\tau f(x,y))\, K(x,y) g(y)\ |\mathrm{d}y|.
\]
Here the kernel $K(x,y)$ carries an oscillatory factor and the function $g$ satisfies some smoothness condition. In this case, the important matter is the boundedness properties of $T_\tau(f)(x)$. A key hypothesis in many cases is the analyticity of the phase $f$; in this setting, Theorem \ref{Thm:AsyExp} was proved first by Jeanquartier in \cite{Jea70}, and then by Malgrange \cite{Mal74}, Igusa \cite{IguTata} and Jeanquartier \cite{Jea79}, among others. See also \cite[Section 7.3.2]{AG-ZV}.
An important class of real analytic phases for which the asymptotics of $I_\phi(\tau;f)$ and the boundedness of $T_\tau(f)(x)$ (for some $T_\tau$) are better understood is the class of functions which are non-degenerate with respect to their Newton polyhedron. In his seminal paper \cite{Var}, A. N. Varchenko investigates the oscillation index of $I_\phi(\tau;f)$ as well as the asymptotic series \eqref{Eq:AsyExp} in terms of the geometry of the Newton polyhedron of the phase. Loosely speaking, Varchenko's idea is to attach a Newton polyhedron $\mathcal{NP}(f)$ to the phase function $f$ and then to define a non-degeneracy condition with respect to $\mathcal{NP}(f)$. One may then construct a toric variety associated to the Newton polyhedron and use the well-known toric resolution of singularities to give a list of parameters $\alpha$ in \eqref{Eq:AsyExp} (equivalently, a list of candidate poles for $Z_{\phi}(s,f)$). Moreover, this list can be read off from the geometry of $\mathcal{NP}(f)$ and a refinement of its normal fan. For example, Varchenko shows that under some mild conditions
\[\mathrm{lct}(f)=-\beta_f=\frac{1}{t_0},\]
where $t_0$ is the entry of the point in $\mathds{R}^n$ given by the intersection of the diagonal of $\mathds{R}^n$ with the boundary of $\mathcal{NP}(f)$. A more detailed exposition of the original article of Varchenko is presented in \cite[Chapters 6-8]{AG-ZV}.
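Varchenko's $t_0$ can be made concrete in two variables. For the cusp $f(x,y)=x^2+y^3$ (a standard example, chosen by us as an illustration), the relevant boundary face of $\mathcal{NP}(f)$ is the segment joining $(2,0)$ and $(0,3)$; the diagonal meets it at $t_0=6/5$, so that $\mathrm{lct}(f)=1/t_0=5/6$. The sketch below computes $t_0$ with exact rational arithmetic.

```python
from fractions import Fraction

# Illustration of Varchenko's t0 in two variables, for the cusp
# f(x, y) = x^2 + y^3 (a standard example, not taken from the text).
# The boundary of NP(f) contains the segment from (2, 0) to (0, 3);
# the diagonal t -> (t, t) meets it where t/2 + t/3 = 1, i.e. at
# t0 = 6/5, giving lct(f) = 1/t0 = 5/6.

def diagonal_hit(p, q):
    """Intersection parameter t of the diagonal with the segment [p, q],
    assuming p lies on the x-axis and q on the y-axis, so the segment
    sits on the line x/p[0] + y/q[1] = 1."""
    a, b = Fraction(p[0]), Fraction(q[1])
    return 1 / (1 / a + 1 / b)    # solve t/a + t/b = 1

t0 = diagonal_hit((2, 0), (0, 3))
assert t0 == Fraction(6, 5)
assert 1 / t0 == Fraction(5, 6)   # lct(x^2 + y^3)
```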
It is well known to specialists that the only data of $\mathcal{NP}(f)$ that give poles of $Z_{\phi}(s,f)$ are contained in the normal fan of the polyhedron, i.e. the extra rays required for the toric resolution give rise to fake candidate poles. This list of fake poles was removed by Denef and Sargos in \cite{DenSar}, and by a different method by Aroca, G\'omez-Morales and the author in \cite{ArGoLe}.
The original ideas of Varchenko have been extensively used and generalized in analysis. In particular, they have been used in the study of more general oscillatory integrals, the estimation of the size of sublevel sets, and boundedness and decay estimates of operators, among other matters. See for instance the works
\cite{PhStSt99,PhSt00,PhStSt01,Kar,DeNiSa,CaWaWr,GrPrTa,Ikr,Gre10A,Gre10B,CoGrPr,OkaTak,KamNos16,Xia16,Gil,GiGrXi,Gre19},
and the references therein. In particular we should note that Kamimoto and Nose have extended in \cite{KamNos16} the work of Varchenko for the case when the phase $f$ is smooth and the amplitude has a zero at a critical point of $f$. They show that the optimal rates of decay, and other related parameters, for some weighted
oscillatory integrals can be expressed in terms of the geometry of $\mathcal{NP}(f)$ and a refinement of its normal fan. Similar results are given for the poles of the associated local zeta functions.
\subsection{Complex Oscillatory Integrals}\label{Sec:CompxOI}
We now consider the case of holomorphic functions $f$. In some sense, the counterpart for holomorphic functions of the oscillatory integrals $I_\phi(\tau;f)$ of real analytic functions is given by integrals along vanishing cycles, see e.g. \cite{Mal74,Bar84,AG-ZV,Pem10A,Pem10B}. We consider two holomorphic functions on $\mathds{C}^n$,
$f(x)$ and $\phi(x)$. The integrals of interest have the form
\begin{equation}\label{Eq:CompxOI}
\int_{\Gamma}\exp(\tau f(x))\, \phi(x) \mathrm{d}x_1\wedge\cdots\wedge\mathrm{d}x_n,
\end{equation}
where $\Gamma$ is a real $n$-dimensional chain lying in $\mathds{C}^n$, and $\tau$ is a large real parameter. These integrals also admit asymptotic expansions, but this time related to the monodromy of $f$ at some point of $f^{-1}(0)$. We present here only the main ideas of Malgrange \cite{Mal74}, following closely the presentation of \cite[Chapter 11]{AG-ZV}. We start by assuming that
\[f: (\mathds{C}^n,0)\rightarrow (\mathds{C},0)\]
has a critical point $x_0$ of finite multiplicity; in fact, we will assume that $x_0$ is the origin. We will say that an $n$-dimensional chain $\Gamma$ is \textit{admissible} if on its boundary one has $\Re(f(x))<0$. This notion admits the following interpretation in terms of the Milnor fibration.
Let $B_\epsilon$ be the open ball centered at the origin with radius $\epsilon$ and let $D_\eta$ be the open punctured disk $\{t\in\mathds{C}\mid 0<|t|<\eta\}$, with $0<\eta\ll\epsilon\ll1$. We also put $X=B_\epsilon\cap f^{-1}(D_\eta)$. We recall that the \textit{Milnor fibration} of $f$ at the origin is the smooth locally trivial fibration defined by
\begin{equation}\label{Eq:MilFib}
f\vert_{X}:\ X\rightarrow D_\eta.
\end{equation}
We denote by $D_\eta^-$ the set $\{t\in D_\eta\mid \Re(t)<0\}$ and by $X^-$ the set $X\cap f^{-1}(D_\eta^-)$. With these definitions, an $n$-dimensional chain $\Gamma$ is admissible if $\Gamma\subset X$ and $\partial(\Gamma)\subset X^-$. Two admissible chains $\Gamma_1,\Gamma_2$ are said to be equivalent if there exists an $(n + 1)$-dimensional chain $V \subset X$ such that
\[(\Gamma_1-\Gamma_2+\partial V)\subset X^-.\]
The equivalence classes of admissible chains, with the operations of addition and
multiplication by scalars, form a vector space, which by definition coincides with the relative homology group $H_n(X,X^-)$.
Since $f\vert_{X^-}:\ X^-\rightarrow D_\eta^-$ is also a trivial fibration, one may show that the homology of $X^-$ is isomorphic to the homology of an arbitrary fiber over $D_\eta^-$. This means that for any $t\in D_\eta^-$, there is an isomorphism
\[\partial_t\ :\ H_n(X,X^-)\rightarrow H_{n-1}(X_t),\]
where $X_t$ is a Milnor fiber (under \eqref{Eq:MilFib}) and $H_{n-1}(X_t)$ is the reduced homology group. Now, for an admissible chain $\Gamma$, representing the class $[\Gamma]\in H_{n}(X,X^-)$, one may contract its boundary in $X^-$ to the fiber $X_t$, giving a cycle that represents the element $\partial_t[\Gamma]\in H_{n-1}(X_t)$. The family of classes obtained in this way depends continuously on $t\in D_\eta^-$.
\begin{theorem}[{\cite[Theorem~7.1]{Mal74}}]\label{Thm:AG-ZV1}
Let $\omega$ be a holomorphic differential $n$-form defined on $X$, and take $[\Gamma]\in H_{n}(X,X^-)$. Then there exists a small positive number $t_0\in D_\eta$, such that
\[\int_{\Gamma}\exp(\tau f)\cdot\omega \approx \int_{0}^{t_0}\exp(-\tau t)\Bigg(\int_{\partial_{-t}[\Gamma]}\, \omega_f(x,t)\Bigg)\mathrm{d}t.\]
The sign $\approx$ means that the two integrals agree up to a negative integer power of the parameter $\tau$ as $\tau \to \infty$.
\end{theorem}
Note the similarity with \eqref{Eq:FibInt}. Once we have reached this point, it only remains to state the main theorem of Malgrange in \cite{Mal74}.
\begin{theorem}\label{Thm:AG-ZV2}
Let $\omega$ be a holomorphic differential $n$-form defined on $X$, and take $[\Gamma]\in H_{n}(X,X^-)$. Then the integral $\int_{\Gamma}\exp(\tau f)\cdot\omega$ can be expanded in an asymptotic series of the form
\[\sum_{\alpha,k}S_{k,\alpha} \tau^\alpha\,(\ln \tau)^k,\quad\text{as } \tau\to\infty.\]
Here the parameter $\alpha$ runs through a finite set of arithmetic progressions, depending only on the phase and consisting of negative rational numbers.
Moreover, each number $\alpha$ satisfies that $\exp (-2\pi\mathrm{i}\alpha)$ is an eigenvalue of the classical monodromy operator of the critical point of the phase. The coefficient $S_{k,\alpha}$ is zero when the monodromy does not have Jordan blocks of dimension greater than or equal to $k + 1$ associated with the eigenvalue $\exp (-2\pi\mathrm{i}\alpha)$.
\end{theorem}
Malgrange also shows that integrals of type \eqref{Eq:CompxOI} allow one to recover some real oscillatory integrals, see e.g. \cite[Section~7]{Mal74} and \cite[Section~11.2]{AG-ZV}. If $f$ is a real analytic function and $\phi$ is supported on a small neighbourhood of the origin, then under some mild conditions on $\phi$, there exists an admissible $n$-chain $\Gamma$ for which
\begin{equation}\label{Eq:ORIandOCI}
\int_{\mathds{R}^{n}} \exp(\mathrm{i}\tau f)\,\phi\, \mathrm{d}x_1\wedge\cdots\wedge\mathrm{d}x_n\approx
\int_{\Gamma}\exp(\mathrm{i}\tau f)\cdot \phi\, \mathrm{d}x_1\wedge\cdots\wedge\mathrm{d}x_n.
\end{equation}
From this observation follows the connection between the poles of Archimedean local zeta functions and eigenvalues of the complex monodromy of $f$ at some point of $f^{-1}(0)$.
For some other works related with complex oscillatory integrals or with the ideas of Malgrange in \cite{Mal74} see for example \cite{WonMcC,Bar84,Loe85,Pal,Bar86,Loe87,AG-ZV,DenRepo,LioRol,Jac00A,LiaPar,DelHow,BarJed,Ang,Del,Web,Col,Pem10A,Pem10B,BraSeb,ClCoMi,Sab18,AndPet}, and the references therein.
\section{Some generalizations}\label{Sec:SomeGen}
In this last section we want to mention some generalizations of local zeta functions and oscillatory integrals.
\subsection{Local Zeta Functions}\label{Sec:LZFGen}
In 1987 C. Sabbah introduced in \cite{Sab87B} the following generalization of $Z_{\phi}(s,f)$. Let $f_1,\ldots,f_l$ be analytic functions defined on a complex (smooth) variety $X$, and let $\phi$ be a smooth function with compact support
contained in $X$. Then the \textit{multivariate local zeta function associated to} $\phi$ and $f= (f_1,\ldots,f_l)$ is defined as
\[Z_{\phi}(s_1,\ldots,s_l,f)=\int_{X}\phi(x)\ |f_1(x)|^{s_1}\cdots |f_l(x)|^{s_l}\ \mathrm{d}x,\]
where $s_i\in\mathbb{C}$ with $\Re(s_i)>0$, for $i=1,\ldots,l$.
This integral converges on the subspace of $\mathds{C}^l$ defined by $\Re(s_i)>0$, for $i=1,\ldots,l$. Moreover, the author shows that there is a meromorphic continuation to the whole of $\mathds{C}^l$, with poles contained in the union of hyperplanes described in terms of certain Bernstein polynomials. He shows that there is a finite set
$\mathcal{L}$ of linear forms with positive, relatively prime integer coefficients such that, writing $f^s$ for $f_1^{s_1}\cdots f_l^{s_l}$, for each $k=1,\ldots,l$ the following functional equation holds:
\[\Bigg[\prod_{L\in\mathcal{L}}b_{L,k}(L(s))\Bigg]\cdot f^s=P_k(s,x,\partial_x)f^s\cdot f_k.\]
Here the $b_{L,k}$ denote certain Bernstein polynomials defined previously by the author in \cite{Sab87A}, and the $P_k(s,x,\partial_x)$ are differential operators analogous to the ones defined in Section \ref{Sec:BerPol}. The set $\mathcal{L}$ is called the \textit{set of slopes} and in principle depends on the choice of a filtration of the $\mathcal{D}$-module associated with $f$.
Around the same time, F. Loeser defined in \cite{Loe89} similar zeta functions. He works over a local field $\mathds{K}$ of characteristic zero. In this case $f_1,\ldots,f_l$ are polynomials vanishing at the origin when $\mathds{K}$ is non-Archimedean, and analytic functions in the Archimedean case. In the $p$-adic case the set of test functions is given by locally constant functions with compact support, while in the Archimedean case it is the set of smooth functions with compact support.
The \textit{multivariate local zeta function} associated to a test function $\phi$ and $f= (f_1,\ldots,f_l)$ is defined as
\[Z_{\phi}(s_1,\ldots,s_l,f)=\int_{\mathds{K}^n}\phi(x)\ |f_1(x)|_\mathds{K}^{s_1}\cdots |f_l(x)|_\mathds{K}^{s_l}\ |\mathrm{d}x|,\]
where $s_i\in\mathbb{C}$ with $\Re(s_i)>0$, for $i=1,\ldots,l$. Actually, his definition is a little more general, since it includes quasicharacters of $\mathds{K}$. He uses Hironaka's theorem to show that the poles of the meromorphic continuation to $\mathds{C}^l$ (a rational function in the $p$-adic case) are contained in the union of hyperplanes described this time in terms of a resolution of $\cup_{i=1}^lf_i^{-1}(0)$. He also shows that the set of slopes is connected with the geometry of the morphism $f$. More precisely, if one imposes a certain finiteness condition on $f$, the set of slopes is contained in the set of normal vectors to the faces of the Newton polyhedron of the discriminant of $f$ at the origin. These multivariate zeta functions have been used recently in mathematical physics to study string amplitudes in \cite{B-GVZ-G} and log-Coulomb gases in \cite{Z-GZ-LL-C}. See the references given there for other uses in $p$-adic mathematical physics.
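In the simplest situation the multivariate integral factors into one-dimensional ones. The toy sketch below is our own illustration (with $\mathds{K}=\mathds{R}$, $n=l=2$, $f=(x,y)$, and the Gaussian $\phi(x,y)=e^{-\pi(x^2+y^2)}$, which is rapidly decreasing rather than compactly supported); it checks numerically that each factor equals $\Gamma((s+1)/2)\,\pi^{-(s+1)/2}$.

```python
import math

# Toy sketch of the multivariate definition (our illustration): K = R,
# n = l = 2, f = (x, y), phi(x, y) = exp(-pi (x^2 + y^2)), so that
#   Z_phi(s1, s2, f) = F(s1) * F(s2), where
#   F(s) = Int_R exp(-pi x^2) |x|^s dx = Gamma((s+1)/2) / pi^((s+1)/2).

def one_dim_factor(s, a=6.0, n_steps=60000):
    """Trapezoidal rule for Int_{-a}^{a} exp(-pi x^2) |x|^s dx."""
    h = 2.0 * a / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        x = -a + i * h
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * math.exp(-math.pi * x * x) * abs(x) ** s
    return total * h

for s1, s2 in [(1.0, 2.0), (0.5, 1.5)]:
    expected = (math.gamma((s1 + 1) / 2) / math.pi ** ((s1 + 1) / 2)
                * math.gamma((s2 + 1) / 2) / math.pi ** ((s2 + 1) / 2))
    numeric = one_dim_factor(s1) * one_dim_factor(s2)
    assert abs(numeric - expected) < 1e-4 * expected
```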
Another local zeta function involving several functions was studied in a joint work with W. Veys and W. A. Z\'{u}\~{n}iga-Galindo \cite{Le-Ve-Zu}, where we considered Archimedean zeta functions for analytic mappings. If $\mathds{K}=\mathds{R}$ or $\mathds{C}$, let $\boldsymbol{f}=$ $\left(f_{1},\ldots,f_{l}\right) :U\rightarrow \mathds{K}^{l}$ be a $\mathds{K}$-\textit{analytic mapping} defined on an open set $U$ in $\mathds{K}^{n}$. Let $\phi:U\rightarrow\mathbb{C}$ be a
smooth function on $U$ with compact support. Then the local zeta function
attached to $\left( \boldsymbol{f},\phi\right) $ is defined as
\[
Z_{\phi}(s,\boldsymbol{f})=\int\limits_{\mathds{K}^{n}\setminus\boldsymbol{f}%
^{-1}\left( 0\right) }\phi\left( x\right)\, \left\vert \boldsymbol{f}%
(x)\right\vert _{\mathds{K}}^{s}\ |\mathrm{d}x|,\quad \text{ for } s\in\mathbb{C}\text{ with } \Re(s)>0.
\]
Here $|\boldsymbol{f}(x)|_{\mathds{K}}=|(f_{1}(x),\ldots,f_{l}(x))|_{\mathds{K}}$ stands for $\sqrt{\sum f_{i}^{2}(x)}$ or $\sqrt{\sum |f_{i}(x)|_\mathds{C}^{2}}$ depending on whether $\mathds{K}$
is $\mathds{R}$ or $\mathds{C}$, and $|\mathrm{d}x|$ is the Haar measure on $\mathds{K}^{n}$. In this case we give a description of the possible poles of $Z_{\phi}(s,\boldsymbol{f})$ in terms of a log-principalization of the ideal $\mathcal{I}_{\boldsymbol{f}}=\langle f_{1},\ldots,f_{l}\rangle$. We also generalize some of the results of Varchenko presented in Section \ref{Sec:RealOI}, by introducing a new non-degeneracy condition associated to the Newton polyhedron of $\boldsymbol{f}$ (which in general differs from the Minkowski sum of the $\mathcal{NP}(f_i)$).
Very recently, W. Veys and W. A. Z\'{u}\~{n}iga-Galindo extended in \cite{Ve-Zu} part of the classical theory of local zeta functions and oscillatory integrals to the case of meromorphic functions defined over a local field $\mathds{K}$ of characteristic zero. Let $f, g\ :U\to\mathds{K}$ be two $\mathds{K}$-analytic functions defined on an open set $U\subseteq \mathds{K}^n$ such that $f/g$ is not constant. The \textit{local zeta function attached to a test function} $\phi$ \textit{and} $f/g$ is defined as
\[Z_{\phi}(s,f/g)=\int_{U\setminus D_\mathds{K}}\phi(x)\ \left\vert \frac{f(x)}{g(x)}\right\vert _{\mathds{K}}^{s}\ |\mathrm{d}x|,
\]
where $s$ is a complex number and $D_\mathds{K}=f^{-1}(0)\cup g^{-1}(0)$. In this case a resolution of singularities of $D_\mathds{K}$ is required even to show that $Z_{\phi}(s,f/g)$ converges on some strip $\alpha<\Re(s)<\beta$, but it turns out that $\alpha$ and $\beta$ are independent of the chosen resolution. The authors also consider oscillatory integrals; with the notation of Definition \ref{Def:OscInt}, they have the form
\[\int\limits_{\mathds{K}^{n}\setminus D_\mathds{K}}\phi(x)\ \varepsilon\Bigg(\tau\cdot \frac{f(x)}{g(x)}\Bigg)\ |\mathrm{d}x|.
\]
They show that the analogue of Theorem \ref{Thm:AsyExp} implies two different asymptotic expansions: one when the norm of the parameter tends to infinity, and another one when the norm of the parameter tends to zero. The first asymptotic expansion is controlled by some poles of the local zeta functions associated to the meromorphic functions $f/g-c$, for certain special values $c$, while the second expansion is controlled by other poles of $Z_{\phi}(s,f/g)$.
\subsection{Oscillatory Integrals}
Some generalizations of oscillatory integrals have been considered under the name of multilinear oscillatory operators, see e.g. \cite{CheLu,PhStSt01,LuTao,LuWu,ChLiTaTh,Gre08,GiGrXi,NiOnZe} and the references given there. We discuss just a couple of examples to see the kind of problems of interest in the area.
In \cite{PhStSt01} Phong, Stein, and Sturm investigated the multilinear oscillatory integral operator
\[
T_D(\tau;g_1,\ldots,g_n)=\int\limits_{D} \exp(\mathrm{i}\tau\cdot f(x_1,\ldots,x_n))\,g_1(x_1)\cdots g_n(x_n)\, |\mathrm{d}x_1\wedge\cdots\wedge\mathrm{d}x_n|,
\]
where $\tau$ is a real parameter, the phase function $f$ is a polynomial and $D$ is a subset of the unit ball in $\mathds{R}^n$. In addition, the functions $g_i$ belong to the
functional space $\mathbf{L}^{p_i}[0,1]$. One of their main results is the following \textit{decay estimate} for $n\geq 2$,
\[|T_D(\tau;g_1,\ldots,g_n)|\leq C\, |\tau|^{-1/\alpha}\,[\ln (2+|\tau|)]^{n-1/2}\,\prod_{i=1}^{n}||g_i||_{p_i},
\]
where $\alpha$ is some constant depending on a certain Newton polyhedron of $f$ and $C$ is a constant that does not depend on $\tau$.
A more general multilinear oscillatory integral operator was proposed by Christ, Li, Tao and Thiele in \cite{ChLiTaTh}:
\[
I(\tau;g_1,\ldots,g_m)=\int\limits_{\mathds{R}^{n}} \exp(\mathrm{i}\tau\cdot f(x))\,\prod_{j=1}^{m}g_j(\pi_j(x))\,\phi(x)\,|\mathrm{d}x|.
\]
Here $\tau$ is a real parameter, $\phi$ is a smooth function with compact support and $f\,:\,\mathds{R}^n\rightarrow \mathds{R}$ is a measurable function. Moreover, each $\pi_j$ is a projection from $\mathds{R}^n$ to some $\mathds{R}^{k_j}$, with $1\leq k_j\leq n-1$, and the functions $g_j :\,\mathds{R}^{k_j}\rightarrow \mathds{C}$ are locally integrable with respect to the measure of $\mathds{R}^{k_j}$.
The integral $I(\tau;g_1,\ldots,g_m)$ is well defined if all the $g_j$ belong to the functional space $\mathbf{L}^\infty$, and in this case
\[|I(\tau;g_1,\ldots,g_m)|\leq C\,\prod_j||g_j||_{\infty}.\]
The authors show that if $f$ is a polynomial of bounded degree and the $k_j$'s are $n-1$ or $1$ according to some conditions, then it is possible to characterize the decay rates of $I(\tau;g_1,\ldots,g_m)$. Some other cases are considered in \cite{NiOnZe} and \cite{GiGrXi}, but the general case is largely open.
Finally, we want to comment that, to the best of our knowledge, there are no well established connections between these multilinear oscillatory integrals and local zeta functions. It seems also that there are no `multilinear' analogues of the complex oscillatory integrals of Section \ref{Sec:OscInt}. To establish such connections/analogues may be of some interest for analysts or algebraic geometers.
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
| {
"timestamp": "2021-10-22T02:06:49",
"yymm": "2110",
"arxiv_id": "2110.10299",
"language": "en",
"url": "https://arxiv.org/abs/2110.10299",
"abstract": "This note is a short survey of two topics: Archimedean zeta functions and Archimedean oscillatory integrals. We have tried to portray some of the history of the subject and some of its connections with similar devices in mathematics. We present some of the main results of the theory and at the end we discuss some generalizations of the classical objects.",
"subjects": "Algebraic Geometry (math.AG); Classical Analysis and ODEs (math.CA)",
"title": "Archimedean Zeta Functions and Oscillatory Integrals",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462187092608,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7099326010182979
} |
https://arxiv.org/abs/1410.3997 | On reversible maps and symmetric periodic points | In reversible dynamical systems, it is frequently of importance to understand symmetric features. The aim of this paper is to explore symmetric periodic points of reversible maps on planar domains invariant under a reflection. We extend Franks' theorem on a dichotomy of the number of periodic points of area preserving maps on the annulus to symmetric periodic points of area preserving reversible maps. Interestingly, even a non-symmetric periodic point guarantees infinitely many symmetric periodic points. We prove an analogous statement for symmetric odd-periodic points of area preserving reversible maps isotopic to the identity, which can be applied to dynamical systems with double symmetries. Our approach is simple, elementary and far from Franks' proof. We also show that a reversible map has a symmetric fixed point if and only if it is a twist map which generalizes a boundary twist condition on the closed annulus in the sense of Poincaré-Birkhoff. Applications to symmetric periodic orbits in reversible dynamical systems with two degrees of freedom are briefly discussed. | \section{Main results and applications}
The study of symmetric features of reversible dynamical systems is of great importance. This paper is chiefly concerned with symmetric periodic points of reversible maps on planar domains, which can be applied to find symmetric periodic orbits of reversible dynamical systems with two degrees of freedom possessing global surfaces of section invariant under symmetries, see below. Among other studies on symmetric periodic points, we refer the reader to \cite{Bir15,DeV54,Dev76}, which are related to our approach. \\[-1.5ex]
Let $I$ be the reflection on ${\mathbb{R}}^2$ given by
$$
I:{\mathbb{R}}^2\to{\mathbb{R}}^2,\quad (x,y)\mapsto(-x,y).
$$
A connected planar domain $\Omega\subset{\mathbb{R}}^2$ is said to be {\bf invariant} if $I(\Omega)=\Omega$. Then $I$ descends to a reflection $I_\Omega$ on an invariant domain $\Omega$. A homeomorphism $f$ on an invariant domain $(\Omega,I_\Omega)$ obeying
$$
f\circ I_\Omega=I_\Omega\circ f^{-1}
$$
is called a {\bf reversible map}. A point $z\in\Omega$ is called a {\bf symmetric periodic point} of $f$ if
$$
f^k(z)=z,\quad f^\ell(z)=I_\Omega(z) \quad\textrm{for some }\; k,\,\ell\in{\mathbb{N}}.
$$
The minimal number $k\in{\mathbb{N}}$ satisfying the first condition is called the {\bf period} of the symmetric periodic point $z$ of $f$.
In particular if $k=\ell=1$, $z$ is called a {\bf symmetric fixed point}. Note that symmetric fixed points lie on the fixed locus $\mathrm{Fix}\, I_\Omega$ of $I_\Omega$. Recall that a point meeting the first condition is called a periodic point of $f$. Throughout this paper, we always assume that $\Omega\subset{\mathbb{R}}^2$ is an invariant connected domain with a reflection $I_\Omega$ and the following invariant domains are of interest to us.
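As a concrete illustration (this example is ours and does not appear in the text), rigid rotations of the annulus are reversible. In polar coordinates the reflection $I$ reads $(r,\theta)\mapsto(r,\pi-\theta)$, and for the rotation $f_\alpha(r,\theta)=(r,\theta+\alpha)$ one checks
$$
f_\alpha\circ I(r,\theta)=(r,\pi-\theta+\alpha)=I(r,\theta-\alpha)=I\circ f_\alpha^{-1}(r,\theta).
$$
If $\alpha=2\pi p/q$ with $p,q$ coprime, every point is a $q$-periodic point of $f_\alpha$, while the symmetric periodic points are precisely the points with $2\theta\equiv\pi-\ell\alpha \pmod{2\pi}$ for some $\ell$, i.e. they fill finitely many radial segments.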
$$
A:=\{z\in{\mathbb{R}}^2\,|\,1\leq|z|\leq2\},\quad S:=\{(x,y)\in{\mathbb{R}}^2\,|\,0\leq y\leq1\},\quad D:=\{z\in{\mathbb{R}}^2\,|\,|z|\leq 1\}
$$
and
$$
\mathring A:=\{z\in{\mathbb{R}}^2\,|\,1<|z|<2\},\quad \mathring D:=\{z\in{\mathbb{R}}^2\,|\,|z|< 1\}.
$$
A beautiful theorem by Franks \cite{Fra92,Fra96} states that every area preserving homeomorphism on $A$ or $\mathring A$ with a periodic point has infinitely many interior periodic points. Our first result is to extend this dichotomy to symmetric periodic points of reversible maps.
\begin{Thm}\label{thm:Franks-like theorem}
Every area preserving reversible map on $A$ or $\mathring A$ is either periodic point free or has infinitely many interior symmetric periodic points.
\end{Thm}
It is worth emphasizing that even a {\em non-symmetric} periodic point also guarantees infinitely many interior symmetric periodic points. Although the theorem is stated for $A$ and $\mathring A$, it carries over to the half-closed annuli $\{z\in{\mathbb{R}}^2\,|\,1\leq|z|<2\}$ and $\{z\in{\mathbb{R}}^2\,|\,1<|z|\leq2\}$. We will show in Proposition \ref{prop:symmetric fixed point} that every area preserving reversible map on $D$ or $\mathring D$ has at least one interior symmetric fixed point. The following corollary is a direct consequence of these results.
\begin{Cor}\label{cor:dichotomy}
Every area preserving reversible map on $D$ or $\mathring D$ has either precisely one interior symmetric fixed point with no other periodic points or infinitely many interior symmetric periodic points.
\end{Cor}
The following theorem improves Theorem \ref{thm:Franks-like theorem} for reversible maps isotopic to the identity under an additional assumption on the parity of periods. Odd-periodic points have an intimate relation with centrally symmetric periodic orbits as described in the next subsection.
\begin{Thm}\label{thm:dichotomy2}
Any area preserving reversible map on $A$ or $\mathring A$ isotopic to the identity with an odd-periodic point (not necessarily symmetric nor interior) has infinitely many interior symmetric odd-periodic points. In consequence, every area preserving orientation preserving reversible map on $D$ or $\mathring D$ has either precisely one interior symmetric fixed point with no other odd-periodic points or infinitely many interior symmetric odd-periodic points.
\end{Thm}
In view of the Poincar\'e-Birkhoff theorem, we expect that there exists a symmetric fixed point on $A$ under a boundary twist condition. Recall that a homeomorphism $f:A\to A$ isotopic to the identity is said to satisfy the {\bf boundary twist condition} if $f$ rotates two boundary circles of $A$ in opposite angular directions. To be precise, there is a lift $F=(F_1,F_2):S\to S$ of $f$ such that
$$
F_1(x,0)<x<F_1(x,1),\quad x\in{\mathbb{R}}.
$$
\begin{Thm}\label{thm:PB theorem}
Let $f$ be a reversible map on $A$ isotopic to the identity with the boundary twist condition. Then there is an interior symmetric fixed point of $f$ on each connected component of $\mathrm{Fix}\, I_A$.
\end{Thm}
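A model case (our own example, added for illustration) is the map $f(r,\theta)=(r,\theta+(2r-3)\beta)$ on $A$ with $\beta\in(0,\pi)$, written in polar coordinates, which rotates the boundary circles $r=1$ and $r=2$ in opposite angular directions. It is reversible since
$$
f\circ I_A(r,\theta)=(r,\pi-\theta+(2r-3)\beta)=I_A\circ f^{-1}(r,\theta),
$$
and its symmetric fixed points are exactly $(3/2,\pm\pi/2)$, one on each connected component of $\mathrm{Fix}\, I_A$, as the theorem predicts.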
We point out that here and in the next theorem a reversible map is not even required to have no wandering points, a condition weaker than being area preserving, see \cite{Fra88}. In \cite{Bir15}, Birkhoff already showed the existence of infinitely many symmetric periodic orbits and studied their rotation numbers. There was no precise statement like Theorem \ref{thm:PB theorem}, but presumably he knew that it is true. We will outline two proofs of Theorem \ref{thm:PB theorem} which are along the lines of \cite{Bir13,Bir15}.
In fact, we find a necessary and sufficient condition on the existence of a symmetric fixed point of a reversible map isotopic to the identity on an arbitrary invariant connected domain $\Omega$ and in particular, the boundary twist condition on $A$ implies the sufficient condition. Note that the fixed locus of $I_\Omega$ is composed of disjoint intervals $Y_i$, i.e. $\mathrm{Fix}\, I_\Omega=\bigsqcup_{1\leq i\leq n} Y_i$.
\begin{Thm}\label{thm:necessary and sufficient condition}
Let $f:\Omega\to\Omega$ be a reversible map isotopic to the identity. Then $f$ has a symmetric fixed point on $Y_i$ if and only if $f(Y_i)\cap Y_i\neq\emptyset$ for $i\in\{1,\dots,n\}$. Similarly, the existence of an interior symmetric fixed point of $f$ on $Y_i$ is equivalent to $f(Y_i)\cap Y_i\cap\mathring \Omega\neq\emptyset$.
\end{Thm}
In general it is not easy to find an analogue of the boundary twist condition for an arbitrary, possibly non-closed planar domain, see \cite{Fra88} for the case of the open annulus. However, in view of this theorem, it is reasonable to call a reversible map $f:\Omega\to\Omega$ a {\bf twist map} if $\mathrm{Fix}\, I_\Omega$ is twisted, explicitly, if some component $Y$ of $\mathrm{Fix}\, I_\Omega$ satisfies $f(Y)\cap Y\neq\emptyset$.
Apart from these results, we will also show some classical fixed point theorems for symmetric fixed points of reversible maps and some of them will play key roles in the proof of our main results.
The Poincar\'e-Birkhoff theorem has been proved and generalized in many papers \cite{Bir13,Bir15,Bir25,BN77,Neu77,Car82,Din83,Fra88,YZ95,Gui97,LCW09,HMS12,KS13,PR13}. Franks' theorem has also been pursued further in \cite{FH03,LeC06,CKRTZ12,Ker12}. There has been a remarkable new approach using pseudoholomorphic curves to understand smooth dynamics on planar domains by Bramham and Hofer \cite{BH12} (see also the literature cited therein). We treat the Poincar\'e-Birkhoff theorem and Franks' theorem in their original forms for brevity, but we expect that such generalizations might have reversible counterparts as well.
\subsection{Applications to symmetric periodic orbits}\quad\\[-1.5ex]
Let $M$ be a 3-dimensional smooth manifold with a smooth vector field $X$. A compact embedded surface $\Sigma\hookrightarrow M$ with boundary is called a {\bf global surface of section} for $X$ if $(\Sigma,X)$ meets the following requirements.\\[-2ex]
\begin{itemize}
\item[(1)] The boundary of $\Sigma$ consists of periodic orbits, called the {\em spanning orbits}.
\item[(2)] The vector field $X$ is transversal to the interior $\mathring\Sigma$ of $\Sigma$.
\item[(3)] Every orbit of $X$, except the spanning orbits, passes through $\mathring\Sigma$ in forward and backward time.\\[-2ex]
\end{itemize}
When we study dynamical systems with two degrees of freedom, the existence of global surfaces of section reduces the complexity by one dimension. The notion of a global surface of section was introduced by Poincar\'e \cite{Poi99} and further studied by Birkhoff \cite{Bir17}. In the spirit of Poincar\'e's groundbreaking idea, our results help us to find symmetric periodic orbits of reversible dynamical systems with global surfaces of section invariant under symmetries.
If $M$ carries a smooth involution $\rho$ with $\rho_*X=-X$, a global surface of section $\Sigma$ invariant under $\rho$, i.e. $\rho(\Sigma)=\Sigma$, is called an {\bf invariant global surface of section}. Note that the unique spanning orbit $\gamma:{\mathbb{R}}/T{\mathbb{Z}}\to M$ of an invariant global disk-like surface of section is automatically {\bf symmetric}, i.e. $\rho(\gamma({\mathbb{R}}/T{\mathbb{Z}}))=\gamma({\mathbb{R}}/T{\mathbb{Z}})$, or equivalently $\rho(\gamma(t))=\gamma(-t)$ for $t\in {\mathbb{R}}/T{\mathbb{Z}}$, up to reparametrization. The condition $\rho_*X=-X$ yields that the induced Poincar\'e return map $f:\mathring\Sigma\to\mathring\Sigma$ is reversible with respect to the involution $\rho|_\Sigma$. A crucial observation is that there is the following one-to-one correspondence:
$$
\left\{\begin{array}{ll}\textrm{geometrically distinct sym-}\\ \textrm{metric periodic orbits of $X$}\end{array}\right\}\setminus
\left\{\begin{array}{ll}\textrm{spanning}\\ \textrm{orbits of $\Sigma$}\end{array}\right\}
\longleftrightarrow
\left\{\begin{array}{ll}\textrm{$f$-orbits of symmetric}\\ \textrm{periodic points of $f$}\end{array}\right\}.
$$
Accordingly, reversible dynamical systems with invariant global surfaces of section, for instance the planar restricted three-body problem or the St\"ormer problem, are of interest to us.
As observed in \cite{Bir15,Con63,McG69}, invariant global surfaces of section exist in certain cases of the restricted three-body problem, and the Poincar\'e return maps are area preserving. Although these authors were not interested in whether their global surfaces of section are invariant, one can easily check that they are indeed invariant. Another example is the St\"ormer problem, see \cite{Sto07,Bra70}. The global surface of section in the St\"ormer problem constructed in \cite{DeV54} is also invariant.
More generally, Frauenfelder and the author \cite{FK14} prove that if a hypersurface in ${\mathbb{R}}^4$ bounds a strictly convex domain and is invariant under the involution $\rho:=\mathrm{diag}(-1,-1,1,1)$, there exists an invariant global disk-like surface of section for the standard Reeb vector field and the Poincar\'e return map is area preserving, which generalizes a pioneering work of Hofer, Wysocki, and Zehnder \cite{HWZ98,HWZ03}. This shows that the global disk-like surface of section in \cite{AFFHvK12} is also invariant. As a consequence of Corollary \ref{cor:dichotomy}, there are either two or infinitely many symmetric periodic orbits on such a hypersurface, and the latter happens in the presence of a non-symmetric periodic orbit.
\\[-1.5ex]
Suppose that a hypersurface $M\subset{\mathbb{R}}^4$ is centrally symmetric, i.e. $-\mathrm{Id}_{{\mathbb{R}}^4}(M)=M$. The standard Reeb vector field $X$ on $M$ automatically satisfies $(-\mathrm{Id}_{{\mathbb{R}}^4})_*X=X$. A periodic orbit which is symmetric with respect to $-\mathrm{Id}_{{\mathbb{R}}^4}$ is called {\bf centrally symmetric}. A centrally symmetric hypersurface $M$ which is symmetric with respect to $\rho$ as well is also interesting, since the regularized energy hypersurface of the planar restricted three-body problem carries both symmetries, see \cite{AFFHvK12}. If a periodic orbit is symmetric with respect to $\rho$ as well as $-\mathrm{Id}_{{\mathbb{R}}^4}$, it is called {\bf doubly symmetric}. The geometric motions of doubly symmetric periodic orbits and of $\rho$-symmetric centrally non-symmetric periodic orbits are considerably different, see \cite{Kan12}. In this situation, if $\Sigma$ is a $\rho$-invariant global surface of section, so is $-\mathrm{Id}_{{\mathbb{R}}^4}(\Sigma)$. In consequence, the Poincar\'e (half-return) map $f:\mathring\Sigma\to -\mathrm{Id}_{{\mathbb{R}}^4}(\mathring\Sigma)=\mathring\Sigma$ (the identification by $-\mathrm{Id}_{{\mathbb{R}}^4}$) encodes the qualitative properties of the doubly symmetric aspect of $X$, and in particular we have the following correspondence.
$$
\left\{\begin{array}{ll}\textrm{geometrically distinct doubly}\\ \textrm{symmetric periodic orbits of $X$}\end{array}\right\}\setminus\left\{\begin{array}{ll}\textrm{spanning}\\ \textrm{orbits of $\Sigma$}\end{array}\right\}
\longleftrightarrow
\left\{\begin{array}{ll}\textrm{$f$-orbits of symmetric}\\ \textrm{odd-periodic points of $f$}\end{array}\right\}.
$$
In view of this correspondence, Theorem \ref{thm:dichotomy2} helps us to detect doubly symmetric periodic orbits. We expect that a hypersurface in ${\mathbb{R}}^4$ bounding a strictly convex domain and symmetric with respect to both $\rho$ and $-\mathrm{Id}_{{\mathbb{R}}^4}$ has a $\rho$-invariant global disk-like surface of section with the doubly symmetric spanning orbit.\\[-1.5ex]
We can apply this to symmetric closed geodesics on Riemannian 2-spheres. A combination of Franks' work \cite{Fra92} and Bangert's work \cite{Ban93} (see also \cite{Hin93}) proves the existence of infinitely many closed geodesics on every Riemannian 2-sphere. In particular Franks' work applies to a Riemannian 2-sphere with Birkhoff's global annulus-like surface of section. Let $\gamma:S^1\to S^2$ be a simple closed geodesic with respect to a Riemannian metric $g$. It separates $S^2$ into two disks and we denote one of them by $D_1$. Then
$$
\Sigma_\gamma:=\{(x,v)\in S_gS^2\,|\, x\in\gamma(S^1),\; 0\leq \angle(\dot\gamma_x,v)\leq\pi\}
$$
is an embedded surface in the unit tangent bundle $S_gS^2$ diffeomorphic to the closed annulus with boundary $\dot\gamma(S^1)\sqcup\dot{\bar\gamma}(S^1)$ where $\bar\gamma(t):=\gamma(-t)$. Birkhoff showed in \cite{Bir27} that $\Sigma_\gamma$ is a global surface of section and the Poincar\'e return map on it extends to the boundary of $\Sigma_\gamma$ if $\gamma$ satisfies certain properties, which hold for $(S^2,g)$ with positive curvature. Moreover the extended Poincar\'e return map on ${\Sigma}_\gamma$ is area preserving and isotopic to the identity.
There is a refinement of this in the symmetric case as follows. Suppose that there exists an involutive orientation reversing isometry $R$ on $(S^2,g)$, i.e. $R^2=\mathrm{Id}_{S^2}$, $\mathrm{Fix}\, R\cong S^1$, and $R^*g=g$. We call a closed geodesic $\gamma$ {\bf symmetric} if $R(\gamma(t))=\gamma(-t)$.
Assume that there is a simple symmetric closed geodesic $\gamma$ which defines
a global surface of section $\Sigma_\gamma$. Then $\Sigma_\gamma$ is invariant under the involution $R_*$. Hence the following corollary is an immediate consequence of Theorem \ref{thm:Franks-like theorem} and Theorem \ref{thm:PB theorem}, see the proof of \cite[Theorem 4.1]{Fra92} for details.
\begin{Cor}\label{cor:geodesic}
There are infinitely many symmetric closed geodesics on a symmetric $(S^2,g,R)$ equipped with an invariant Birkhoff annulus-like global surface of section.
\end{Cor}
\subsection{Questions}\quad\\[-1.5ex]
We expect that an analogue of Theorem \ref{thm:dichotomy2} in the absence of reversibility is also true. To be precise, we expect that every area preserving homeomorphism $f$ on $A$ or $\mathring A$ isotopic to the identity has infinitely many odd-periodic points provided it has a single odd-periodic point. More generally, it is conceivable that the following claim is true. Assume that an area preserving homeomorphism $f$ on $A$ or $\mathring A$ isotopic to the identity has a $k$-periodic point for some $k\in{\mathbb{N}}$. For a given $n\in{\mathbb{N}}$, if $n$ and $k$ are relatively prime, then $f$ has infinitely many periodic orbits with period relatively prime to $n$. This is motivated by the case of $S^3$ with the standard ${\mathbb{Z}}/n$-action. Note that this covers the centrally symmetric case when $n=2$.
It is also tempting to remove the assumption on the existence of Birkhoff's global surface of section in Corollary \ref{cor:geodesic} by showing a reversible counterpart to a result in \cite{Ban93}.
\section{Reversible maps and involutions}
A remarkable property of reversible maps is that $f\circ I_\Omega$ is an involution whenever $f$ is a reversible map on an invariant domain $\Omega$. Symmetric fixed points of a reversible map $f$ can be interpreted as intersection points of the two fixed loci $\mathrm{Fix}\, f$ and $\mathrm{Fix}\, I_\Omega$. Therefore, to study a symmetric fixed point of $f$, it is crucial to observe the fixed locus $\mathrm{Fix}\, (f\circ I_\Omega)$ of the involution $f\circ I_\Omega$ since
$$
\mathrm{Fix}\, f\cap \mathrm{Fix}\, I_\Omega=\mathrm{Fix}\, (f\circ I_\Omega)\cap \mathrm{Fix}\, I_\Omega.
$$
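Indeed, this identity is immediate (a routine check we spell out for convenience): for $z\in\mathrm{Fix}\, I_\Omega$ we have $f\circ I_\Omega(z)=f(z)$, so
$$
z\in\mathrm{Fix}\, f\cap\mathrm{Fix}\, I_\Omega
\iff f\circ I_\Omega(z)=z\ \text{and}\ I_\Omega(z)=z
\iff z\in\mathrm{Fix}\,(f\circ I_\Omega)\cap\mathrm{Fix}\, I_\Omega.
$$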
\begin{Rmk}
We define an operation $\star$ on the space $\mathfrak{Inv}$ of involutions on $\Omega$ by
$$
J\star K=J\circ K\circ J,\quad J,\,K\in\mathfrak{Inv},
$$
so that $(\mathfrak{Inv},\star)$ becomes a magma, i.e. $J\star K\in\mathfrak{Inv}$. We also consider the space $\mathfrak{Rev}$ of reversible maps on $\Omega$ and endow $\mathfrak{Rev}$ with a magma operation by
$$
f\diamond g=f\circ g^{-1}\circ f,\quad f,\,g\in\mathfrak{Rev}.
$$
Then we have a bijective magma homomorphism between them:
$$
(\mathfrak{Rev},\diamond)\to(\mathfrak{Inv},\star),\quad f\mapsto f\circ I_\Omega.
$$
In particular, any reversible map $f$ is a composition of two involutions $f\circ I_\Omega$ and $I_\Omega$ as observed by Birkhoff \cite{Bir15}.
\end{Rmk}
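The homomorphism property in the above remark can be verified directly (a short computation we add for the reader): reversibility of $g$ gives $g\circ I_\Omega=I_\Omega\circ g^{-1}$, hence $I_\Omega\circ g\circ I_\Omega=g^{-1}$, and therefore
$$
(f\circ I_\Omega)\star(g\circ I_\Omega)
=f\circ(I_\Omega\circ g\circ I_\Omega)\circ f\circ I_\Omega
=f\circ g^{-1}\circ f\circ I_\Omega
=(f\diamond g)\circ I_\Omega.
$$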
Recall that a map $f:\Omega\to\Omega$ is said to be {\bf conjugate} to a map $g:\Omega\to\Omega$ if there exists a homeomorphism $h:\Omega\to\Omega$ with $h\circ f=g\circ h$. If this is the case,
$$
h(\mathrm{Fix}\, f)=\mathrm{Fix}\, g.
$$
According to Brouwer \cite{Brou19}, any involution on ${\mathbb{R}}^2$ is conjugate to $\mathrm{Id}$ or $-\mathrm{Id}$ or $I$. Consequently, the fixed locus of an orientation reversing involution on ${\mathbb{R}}^2$ is an embedded line and the fixed locus of an involution on a planar domain is a topological submanifold. The following lemma immediately follows from this observation.
\begin{Lemma}\label{lem:fixd loci of involutions}
If $J$ is an orientation reversing involution on $\Omega$, $\mathrm{Fix}\, J$ is a topological submanifold. If $\Omega={\mathbb{R}}^2$, $\mathrm{Fix}\, J\cong {\mathbb{R}}$. If $\Omega =D$, $\mathrm{Fix}\, J\cong [0,1]$. If $\Omega=S$ and $J$ is isotopic to $I_S$, $\mathrm{Fix}\, J\cong[0,1]$. If $\Omega=A$ and $J$ is isotopic to $I_A$, $\mathrm{Fix}\, J\cong[0,1]\sqcup[0,1]$.
\end{Lemma}
\begin{Rmk}\label{rmk:Lefschetz fixed point theorem}
In general it is not easy to answer whether an involution has a fixed point. There is a Lefschetz fixed point theorem for involutions due to \cite{KK68}. This states that if an involution $J$ on a locally compact space $X$ is fixed point free,
$$
\Lambda_J:=\sum_{k\geq0}(-1)^k\mathrm{Tr} (J_*|H_k(X;{\mathbb{R}}))=0.
$$
Even if $\mathrm{Fix}\, J$ is nonempty, it is not necessarily a topological submanifold whereas the fixed locus of a smooth involution is always a smooth submanifold. There is a so-called {\em dogbone space} $B$ due to Bing which is not a topological manifold but $B\times{\mathbb{R}}$ is homeomorphic to ${\mathbb{R}}^4$. Using this, one can construct a continuous involution $J$ on ${\mathbb{R}}^4$ with $\mathrm{Fix}\, J=B$.
We refer the reader to \cite[Section 9]{Bin59} for details.
\end{Rmk}
\begin{Rmk}
The condition $J\simeq I_A$ when $\Omega=A$ in the preceding lemma is indispensable. Note that an antisymplectic involution $J$ on $A$ defined by
$$
(r,\theta)\mapsto (3-r,\pi+\theta)
$$
is fixed point free, where $(r,\theta)$ are polar coordinates on ${\mathbb{R}}^2$. In particular, the Lefschetz number $\Lambda_{J}$ in Remark \ref{rmk:Lefschetz fixed point theorem} vanishes.
\end{Rmk}
The following two easy lemmas will play key roles. Note that any iteration $f^i$, $i\in{\mathbb{Z}}$ of a reversible map $f$ is reversible again.
\begin{Lemma}\label{lem:fix loci}
If $f:\Omega\to\Omega$ is reversible,
$$
f^j(\mathrm{Fix}\, (f^i\circ I_\Omega))=\mathrm{Fix}\,(f^{2j+i}\circ I_\Omega), \quad\forall i,\, j\in{\mathbb{Z}}.
$$
\end{Lemma}
\begin{proof}
For $z\in\mathrm{Fix}\, (f^i\circ I_\Omega)$, $f^j(z)\in\mathrm{Fix}\,(f^{2j+i}\circ I_\Omega)$ since
$$
f^{2j+i}\circ I_\Omega(f^j(z))=f^{2j+i}\circ f^{-j}(I_\Omega(z))=f^j(f^i\circ I_\Omega(z))=f^j(z).
$$
To prove the converse we pick $z\in\mathrm{Fix}\,(f^{2j+i}\circ I_\Omega)$. Then,
$$
f^i\circ I_\Omega(f^{-j}(z))=I_\Omega\circ f^{-i-j}(f^{2j+i}\circ I_\Omega(z))=I_\Omega\circ f^j\circ I_\Omega(z)=f^{-j}(z).
$$
This implies $f^{-j}(z)\in\mathrm{Fix}\,(f^i\circ I_\Omega)$ and thus $z\in f^j(\mathrm{Fix}\,(f^i\circ I_\Omega))$.
\end{proof}
\begin{Lemma}\label{lem:intersection point=>symmetric periodic point}
Let $f:\Omega\to\Omega$ be a reversible map. If $z\in\mathrm{Fix}\, (f^k\circ I_\Omega)\cap\mathrm{Fix}\,(f^\ell\circ I_\Omega)$ for some $k,\,\ell\in{\mathbb{Z}}$ with $k>\ell$, then $z$ is a symmetric $(k-\ell)$-periodic point of $f$.
\end{Lemma}
\begin{proof}
Since $f^k\circ I_\Omega(z)=f^\ell\circ I_\Omega(z)$,
$$
z=I_\Omega\circ f^{\ell-k}\circ I_\Omega(z)=f^{k-\ell}(z).
$$
Moreover by the assumption $f^k\circ I_\Omega(z)=z$ and hence
$$
f^{m(k-\ell)-k}(z)=f^{-k}(z)=I_\Omega(z)
$$
for any $m\in{\mathbb{Z}}$.
\end{proof}
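For orientation (a remark we add), the special case $k=1$, $\ell=0$ of Lemma \ref{lem:intersection point=>symmetric periodic point} recovers the identity of the previous section:
$$
z\in\mathrm{Fix}\,(f\circ I_\Omega)\cap\mathrm{Fix}\, I_\Omega
\ \Longrightarrow\ f(z)=z\ \text{and}\ I_\Omega(z)=z,
$$
i.e. $z$ is a symmetric fixed point of $f$.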
\section{On the fixed point index for reversible maps}
For any two smooth paths $\delta,\,\gamma:[0,1]\to\Omega$ with $\delta(t)\neq\gamma(t)$ for every $t\in[0,1]$, we define a path $\gamma-\delta:[0,1]\to{\mathbb{R}}^2\setminus\{0\}$ avoiding the origin by $(\gamma-\delta)(t):=\gamma(t)-\delta(t)$. The variation of angle of two such paths $\delta,\,\gamma$ is defined by
$$
i(\delta,\gamma):=\frac{1}{2\pi}\int_{\gamma-\delta} d\theta
$$
where $\theta$ is the angular coordinate on ${\mathbb{R}}^2$. We list some useful properties of $i(\delta,\gamma)$.
\begin{itemize}
\item[1)] For any $\delta,\,\gamma:[0,1]\to\Omega$ with $\delta(t)\neq\gamma(t)$,
$$
i(\delta,\gamma)=i(\gamma,\delta).
$$
\item [2)] If $\delta_1,\,\delta_2,\,\gamma_1,\,\gamma_2:[0,1]\to\Omega$ with $\delta_j(t)\neq\gamma_j(t)$, $j=1,2$ satisfy $\delta_1(1)=\delta_2(0)$ and $\gamma_1(1)=\gamma_2(0)$,
$$
i(\delta_1*\delta_2,\gamma_1*\gamma_2)=i(\delta_1,\gamma_1)+i(\delta_2,\gamma_2)
$$
where $*$ stands for the catenation operation defined by
$$
\gamma_1*\gamma_2(t):=\left\{\begin{array}{ll} \gamma_1(2t) & t\in[0,\frac{1}{2}],\\[.5ex]
\gamma_2(2t-1) & t\in[\frac{1}{2},1].
\end{array}\right.
$$
\item [3)] If $\delta,\,\gamma:[0,1]\to\Omega$ with $\delta(t)\neq\gamma(t)$ are loops, i.e. $\delta(0)=\delta(1)$, $\gamma(0)=\gamma(1)$,
$$
i(\delta,\gamma)\in{\mathbb{Z}}.
$$
\item [4)] For any $\delta,\,\gamma:[0,1]\to\Omega$ with $\delta(t)\neq\gamma(t)$,
$$
i(\bar\delta,\bar\gamma)=-i(\delta,\gamma)
$$
where $\bar\delta(t):=\delta(1-t)$, $\bar\gamma(t):=\gamma(1-t)$.
\item [5)] Recall that we have assumed that $\Omega$ is invariant. For any $\delta,\,\gamma:[0,1]\to\Omega$ with $\delta(t)\neq\gamma(t)$,
$$
i(I_\Omega\circ\delta,I_\Omega\circ\gamma)=-i(\delta,\gamma).
$$
\item [6)] For continuous families of paths $\delta_s$,\,$\gamma_s:[0,1]\to\Omega$, $s\in{\mathbb{R}}$ with $\delta_s(t)\neq\gamma_s(t)$ for any $s$, $i(\delta_s,\gamma_s)$ varies continuously with the parameter $s$.
\end{itemize}
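The variation of angle is easy to approximate numerically. The following sketch (our own illustration, not part of the paper; the function name is invented) samples the path $\gamma-\delta$, sums the unwrapped angle increments, and checks the reflection property $i(I\circ\delta,I\circ\gamma)=-i(\delta,\gamma)$ on the unit circle:

```python
import numpy as np

def variation_of_angle(delta, gamma, n=2000):
    """Approximate i(delta, gamma) = (1/2 pi) * total angle swept by gamma - delta."""
    t = np.linspace(0.0, 1.0, n + 1)
    d = gamma(t) - delta(t)              # path avoiding the origin, as complex numbers
    ang = np.unwrap(np.angle(d))         # continuous branch of the angle
    return (ang[-1] - ang[0]) / (2 * np.pi)

gamma = lambda t: np.exp(2j * np.pi * t)            # unit circle, counterclockwise
delta = lambda t: np.zeros_like(t, dtype=complex)   # constant path at the origin

w = variation_of_angle(delta, gamma)                # approximately 1

# The reflection I(x, y) = (-x, y) acts on complex numbers as z -> -conj(z);
# reflecting both paths reverses the sign of the variation of angle.
refl = lambda p: (lambda t: -np.conj(p(t)))
w_refl = variation_of_angle(refl(delta), refl(gamma))  # approximately -1
```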
\begin{Prop}\label{prop:mirror loop has the same index}
Let $f:\Omega\to\Omega$ be a reversible map isotopic to the identity. Then for any loop $\gamma:S^1\to\Omega\setminus\mathrm{Fix}\, f$,
$$
i(\gamma,f\circ\gamma)=i(I_\Omega\circ\bar\gamma,f\circ I_\Omega\circ\bar\gamma).
$$
Let $h:\Omega\to\Omega$ be a reversible map isotopic to $I_\Omega$. Then for any loop $\gamma:S^1\to\Omega\setminus\mathrm{Fix}\, h$,
$$
i(\gamma,h\circ\gamma)=-i(I_\Omega\circ\bar\gamma,h\circ I_\Omega\circ\bar\gamma).
$$
\end{Prop}
\begin{proof}
Observe that by reversibility of $f$
\begin{equation}\begin{aligned}\nonumber
i(I_\Omega\circ\bar\gamma,f\circ I_\Omega\circ\bar\gamma)&=i(I_\Omega\circ\bar\gamma,I_\Omega\circ f^{-1}\circ\bar\gamma)
=i(\gamma,f^{-1}\circ\gamma).
\end{aligned}\end{equation}
Let $f_s:\Omega\to\Omega$, $s\in[0,1]$ be an isotopy between $f_0=\mathrm{Id}_\Omega$ and $f_1=f$. Since $i(f_s\circ\gamma,f_s\circ f^{-1}\circ\gamma)$ varies continuously in $s\in[0,1]$ and it takes an integer value for every $s\in[0,1]$, $i(f_s\circ\gamma,f_s\circ f^{-1}\circ\gamma)$ is constant. In particular,
$$
i(\gamma,f^{-1}\circ\gamma)=i(f\circ\gamma,\gamma),
$$
and the first assertion is proved. For the second identity, let $h_s:\Omega\to\Omega$, $s\in[0,1]$ be an isotopy between $h_0=I_\Omega$ and $h_1=h$. In a similar vein, we compute
\begin{equation}\begin{aligned}\nonumber
i(I_\Omega\circ\bar\gamma,h\circ I_\Omega\circ\bar\gamma)&=i(\gamma,h^{-1}\circ\gamma)
=-i(I_\Omega\circ\gamma,I_\Omega\circ h^{-1}\circ\gamma)=-i(h\circ\gamma,\gamma).
\end{aligned}\end{equation}
\end{proof}
Recall that the \textbf{index} of an isolated interior fixed point $z\in\mathrm{Fix}\, f$ is defined by
$$
i(f,z):=i(\gamma,f\circ\gamma)
$$
for any simple sufficiently small loop $\gamma:S^1\to\Omega\setminus\mathrm{Fix}\, f$ winding $z$ counterclockwise. We will also use the
fact that for any simply connected domain $D_\delta$ enclosed by a simple loop $\delta:S^1\to\Omega\setminus\mathrm{Fix}\, f$ with finitely many fixed points of $f$, it holds that
$$
i(\delta,f\circ\delta)=\sum_{x\in\mathrm{Fix}\, f\cap D_\delta} i(f,x).
$$
\begin{Cor}\label{cor:index}
Let $f:\Omega\to\Omega$ be a reversible map. If $z$ is fixed by $f$, so is $I_\Omega(z)$. Moreover if $f$ is isotopic to the identity and $z\in\mathrm{Fix}\, f\cap \mathring\Omega$, $i(f,z)=i(f,I_\Omega(z))$. If $f$ and $I_\Omega$ are isotopic, $i(f,z)=-i(f,I_\Omega(z))$.
\end{Cor}
\begin{proof}
If $f(z)=z$, $f\circ I_\Omega(z)=I_\Omega\circ f^{-1}(z)=I_\Omega(z)$. Assume that $f$ is isotopic to the identity; the case that $f$ is isotopic to $I_\Omega$ follows in a similar way. Suppose that $\gamma:S^1\to\Omega\setminus\mathrm{Fix}\, f$ is a sufficiently small loop such that $i(f,z)=i(\gamma,f\circ\gamma)$ for $z\in\mathrm{Fix}\, f$. Then $I_\Omega\circ\bar\gamma$ is a sufficiently small loop surrounding $I_\Omega(z)$ counterclockwise, and hence using Proposition \ref{prop:mirror loop has the same index} we have
$$
i(f,I_\Omega(z))=i(I_\Omega\circ\bar\gamma,f\circ I_\Omega\circ\bar\gamma)=i(\gamma,f\circ\gamma)=i(f,z).
$$
\end{proof}
\section{Classical fixed point theorems for reversible maps}
The following statement can be thought of as Brouwer's fixed point theorem for symmetric fixed points of reversible maps. Note that any continuous map from $D$ to itself is isotopic to $\mathrm{Id}_{D}$ (to $I_D$) provided that it is orientation preserving (reversing).
\begin{Prop}
Let $f:D\to D$ be a reversible map with finitely many fixed points. Then $f$ has at least one symmetric fixed point.
\end{Prop}
\begin{proof}
Suppose that there is no symmetric fixed point of $f$. Then due to Corollary \ref{cor:index},
$$
\sum_{z\in\mathrm{Fix}\, f}i(f,z)\in2{\mathbb{Z}}.
$$
This contradicts the Lefschetz-Hopf theorem, which implies
$\sum_{z\in\mathrm{Fix}\, f}i(f,z)=1$.
\end{proof}
\begin{Rmk}
There is a reversible map $f:D\to D$ with infinitely many fixed points but no symmetric fixed point. For simplicity we consider the domain $B=\{(x,y)\in{\mathbb{R}}^2\,|\, -2\leq x\leq 2,\,0\leq y\leq 1\}$ instead of $D$. Let $f:B\to B$ be a reversible map given by
$$
\left\{\begin{array}{cc} f(x,y)=(4+3x,y),&\quad x\in[-2,-1],\\[.5ex]
f(x,y)=\big(\frac{4}{3}+\frac{1}{3}x,y\big), &x\in[-1,2].
\end{array}\right.
$$
Then $\mathrm{Fix}\, f=\{(\pm 2,y)\,|\,0\leq y\leq 1\}$ but $f$ has no symmetric periodic point.
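One can check reversibility of this map directly (a verification we add for completeness). For a point whose first coordinate satisfies $x\in[1,2]$, so that $-x\in[-2,-1]$, the two branches give
$$
f\circ I(x,y)=f(-x,y)=(4-3x,y)=I(3x-4,y)=I\circ f^{-1}(x,y),
$$
and the case $x\in[-2,1]$ follows in the same way from the other branch.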
\end{Rmk}
The following is a version of Brouwer's lemma \cite{Brou12} for reversible maps.
\begin{Prop}
Let $f:{\mathbb{R}}^2\to{\mathbb{R}}^2$ be an orientation preserving reversible map with a symmetric periodic point. Then $f$ has a symmetric fixed point.
\end{Prop}
\begin{proof}
Let $(x,y)\in{\mathbb{R}}^2$ be a symmetric periodic point of $f$, i.e. $f^\ell(x,y)=I(x,y)=(-x,y)$ and $f^k(x,y)=(x,y)$ for some $\ell,\, k\in{\mathbb{N}}$. Since $f\circ I$ is an orientation reversing involution, $\mathrm{Fix}\,(f\circ I)$ is an embedded line in the plane due to Lemma \ref{lem:fixd loci of involutions}. We write $\alpha=\mathrm{Fix}\,(f\circ I)$. Assume by contradiction that $\alpha\cap\mathrm{Fix}\, I=\emptyset$, see Lemma \ref{lem:intersection point=>symmetric periodic point}. Note that $I(\alpha)$ is also an embedded line which intersects neither $\mathrm{Fix}\, I$ nor $\alpha$. We assume that $I(\alpha)\subset \{(x,y)\in{\mathbb{R}}^2\,|\, x<0\}$; the case $I(\alpha)\subset \{(x,y)\in{\mathbb{R}}^2\,|\, x>0\}$ follows in a similar way. The line $\alpha$ (resp. $I(\alpha)$) divides the plane into two open regions $\alpha_+$ and $\alpha_-$ (resp. $I(\alpha)_+$ and $I(\alpha)_-$). Let $\alpha_-$ and $I(\alpha)_+$ be the regions containing the $y$-axis $\mathrm{Fix}\, I$. Since $f$ is orientation preserving and $f(I(\alpha))=\alpha$, $f$ maps $I(\alpha)_\pm$ to $\alpha_\pm$ respectively. Now we show that $(x,y)$ cannot be a symmetric periodic point. If $(x,y)\in I(\alpha)_-$, then $f^n(x,y)\in\alpha_-$ for all $n\in{\mathbb{N}}$, but $f^\ell(x,y)=(-x,y)\in\alpha_+$ for some $\ell\in{\mathbb{N}}$. Thus this does not happen, and the other cases $(x,y)\in I(\alpha)\cup I(\alpha)_+$ can be ruled out in a similar way. This contradiction proves the proposition.
\end{proof}
The above proposition does not remain true for general invariant domains. However, we are still able to find a symmetric fixed point in the presence of a certain symmetric 2-periodic point. Note that if $(0,y)\in\mathrm{Fix}\, I_\Omega$ is a symmetric 2-periodic point of a reversible map $f:\Omega\to\Omega$, then $f(0,y)\in\mathrm{Fix}\, I_\Omega$ as well. Borrowing an idea of M. Brown \cite{Bro90}, if the symmetric 2-periodic points $(0,y),\,f(0,y)\in\mathrm{Fix}\, I_\Omega$ lie on the same connected component of $\mathrm{Fix}\, I_\Omega$, then we can find a symmetric fixed point of a reversible map $f$ isotopic to the identity. This will be used later to prove Theorem \ref{thm:necessary and sufficient condition}.
\begin{Prop}\label{period 2 real periodic point=>real fixed point}
Suppose that a reversible map $f:\Omega\to\Omega$ isotopic to the identity has a symmetric 2-periodic point $(0,y)\in\mathrm{Fix}\, I_\Omega$. If $(0,y)$ and $f(0,y)$ are on the same connected component of $\mathrm{Fix}\, I_\Omega$, there exists a symmetric fixed point of $f$ lying on $[(0,y),f(0,y)]\subset\mathrm{Fix}\, I_\Omega$.
\end{Prop}
\begin{proof}
We may assume that $f$ has no fixed point in a small open neighborhood of the interval $[(0,y),f(0,y)]$: otherwise, since $\mathrm{Fix}\, f$ is closed, $f$ has a fixed point on $[(0,y),f(0,y)]$, which is in turn a symmetric fixed point.
We choose an arc $\gamma:[0,1]\to\Omega\setminus(\mathrm{Fix}\, f\cup\mathrm{Fix}\, I_\Omega)$ with $\gamma(0)=(0,y)$ and $\gamma(1)=f(0,y)$ such that the invariant domain $D_\gamma$ enclosed by the loop $\gamma*(I_\Omega\circ\bar\gamma)$ is simply connected and contains no fixed points of $f$.
Since $f\circ\gamma(1)-\gamma(1)=(0,y)-f(0,y)$ and $f\circ\gamma(0)-\gamma(0)=f(0,y)-(0,y)$ point in opposite directions,
$$
i(\gamma,f\circ\gamma)\in\frac{1}{2}+{\mathbb{Z}}.
$$
Now we claim that
$$
i(I_\Omega\circ\bar\gamma,f\circ I_\Omega\circ\bar\gamma)= i(\gamma,f\circ\gamma)
$$
where $\bar\gamma(t)=\gamma(1-t)$. Indeed,
$$
i (I_\Omega\circ\bar\gamma,f\circ I_\Omega\circ\bar\gamma)=-i(I_\Omega\circ\gamma,f\circ I_\Omega\circ\gamma)=-i(I_\Omega\circ\gamma,I_\Omega\circ f^{-1}\circ\gamma)=i(\gamma,f^{-1}\circ\gamma).
$$
Let $f_s:\Omega\to\Omega$, $s\in[0,1]$ be an isotopy between $f_0=\mathrm{Id}$ and $f_1=f$. Since $f_s\circ f^{-1}\circ\gamma(1)-f_s\circ\gamma(1)=f_s(0,y)-f_s\circ f(0,y)$ and $f_s\circ f^{-1}\circ\gamma(0)-f_s\circ\gamma(0)=f_s\circ f(0,y)-f_s(0,y)$ point in opposite directions,
$$
i(f_s\circ\gamma,f_s\circ f^{-1}\circ\gamma)\in\frac{1}{2}+{\mathbb{Z}},
$$
for every $s\in[0,1]$ and in particular it holds that
$$
i(\gamma,f^{-1}\circ\gamma)=i(f\circ \gamma,\gamma).
$$
This proves the claim: note that $i(f\circ\gamma,\gamma)=i(\gamma,f\circ\gamma)$ since the corresponding displacement vector fields differ only by sign. By the claim, we have
$$
i(\gamma*(I_\Omega\circ\bar\gamma),f(\gamma*(I_\Omega\circ\bar\gamma)))=i(\gamma,f\circ\gamma)+i(I_\Omega\circ\bar\gamma,f\circ I_\Omega\circ\bar\gamma)=2\,i (\gamma,f\circ\gamma).
$$
Using $i(\gamma,f\circ\gamma)\in\frac{1}{2}+{\mathbb{Z}}$, we deduce
$$
i(\gamma*(I_\Omega\circ\bar\gamma),f(\gamma*(I_\Omega\circ\bar\gamma)))\in 1+2{\mathbb{Z}},
$$
and therefore there is a fixed point of $f$ in $D_\gamma$. This contradicts the choice of $\gamma$, so the initial assumption fails: $f$ has a fixed point on $[(0,y),f(0,y)]\subset\mathrm{Fix}\, I_\Omega$, which is then a symmetric fixed point.
\end{proof}
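The half-integer index argument above can be made concrete numerically. In the following Python sketch (an illustration of ours, not from the paper: the reversible map $f(x,y)=(-x,-y)$ with reflection $I_\Omega(x,y)=(-x,y)$ and the arc $\gamma$ joining the symmetric 2-periodic points $(0,1)$ and $f(0,1)=(0,-1)$ are our own choices), the index $i(\gamma,f\circ\gamma)$ is computed as the winding number of the displacement vector $f\circ\gamma-\gamma$ along $\gamma$; the anti-parallel displacements at the endpoints force a half-integer value.

```python
import math

def winding(vectors):
    """Total rotation, in full turns, of a sequence of nonzero plane vectors."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(vectors, vectors[1:]):
        d = math.atan2(y1, x1) - math.atan2(y0, x0)
        if d > math.pi:          # unwrap so each small step is measured correctly
            d -= 2 * math.pi
        elif d <= -math.pi:
            d += 2 * math.pi
        total += d
    return total / (2 * math.pi)

# f(x, y) = (-x, -y) is reversible for I(x, y) = (-x, y): f o I = I o f^{-1},
# and (0, 1) is a symmetric 2-periodic point with f(0, 1) = (0, -1).
f = lambda x, y: (-x, -y)
N = 2000
gamma = [(math.sin(math.pi * t / N), math.cos(math.pi * t / N)) for t in range(N + 1)]
displacement = [(f(x, y)[0] - x, f(x, y)[1] - y) for (x, y) in gamma]
w = winding(displacement)
print(w)  # ≈ -0.5, a half-integer, since the endpoint displacements are anti-parallel
```

In this example the symmetric fixed point on $[(0,1),(0,-1)]$ predicted by the proposition is the origin.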
\section{Proofs of the main results}
\subsection{Proofs of Theorem \ref{thm:Franks-like theorem} and Corollary \ref{cor:dichotomy}}
\begin{Prop}\label{prop:per pt=>int sym per pt}
If an area preserving reversible map $f$ on $A$ or $\mathring A$ has a periodic point, there is an interior symmetric periodic point of $f$.
\end{Prop}
\begin{proof}
\begin{figure}[htb]
\includegraphics[width=1.0\textwidth,clip]{disk0.pdf}
\caption{}\label{fig:disk0}
\end{figure}
We only give a proof for $A$ which applies to $\mathring A$ as well. Assume by contradiction that there is no interior symmetric periodic point of $f$. We denote the connected components of $\mathrm{Fix}\, I_A$ by $Y_\pm:=\{0\}\times[\pm1,\pm2]$. By Lemma \ref{lem:fix loci}, $f^m(Y_+)\subset\mathrm{Fix}\,(f^{2m}\circ I_A)$ for every $m\in{\mathbb{N}}$, and thus by Lemma \ref{lem:intersection point=>symmetric periodic point}, we have
\begin{equation}\label{eq:non-intersections}
f^m(Y_+)\cap Y_\pm\cap\mathring A=\emptyset,\quad m\in{\mathbb{N}}\cup\{0\}.
\end{equation}
Two segments $Y_+$ and $f(Y_+)$ divide $A$ into two closed regions $A_-$ and $A_+$ such that $A_-\cup A_+=A$ and $A_-\cap A_+=Y_+\cup f(Y_+)$. Since $f(Y_+)\cap Y_-\cap\mathring A=\emptyset$, $A_-$ and $A_+$ have different area, say $\mathrm{area}(A_-)<\mathrm{area}(A_+)$. We assume that $A_-\subset\{(x,y)\in{\mathbb{R}}^2\,|\,x\leq 0\}$. The other case is proved in the same manner. Note that
\begin{equation}\label{eq:area preserving}
\mathrm{area}(A_-)=\mathrm{area}(f^m(A_-))<\frac{\mathrm{area}(A)}{2},\quad m\in{\mathbb{N}}\cup\{0\}.
\end{equation}
We claim that
\begin{equation}\label{eq:boundary intersections}
f^m(A_-)\cap f^{m+1}(A_-)=f^{m+1}(Y_+),\quad m\in{\mathbb{N}}\cup\{0\}.
\end{equation}
It suffices to prove the claim for $m=0$. Indeed if this is not true, $A_-\cap f(A_-)\supsetneq f(Y_+)$, and thus $f^2(Y_+)\cap Y_+\cap\mathring A\neq\emptyset$ since $\mathrm{area}(A_-)+\mathrm{area}(f(A_-))<\mathrm{area}(A)$. This contradicts \eqref{eq:non-intersections} and hence \eqref{eq:boundary intersections} follows.
Now we are ready to prove the assertion.
Suppose that $z_-\in Y_+\cap f(Y_+)$. By \eqref{eq:non-intersections}, $z_-\in\partial A$. Observe that $f^\ell(A_-)\cap\{(x,y)\in A\,|\, x>0\}\neq\emptyset$ for some $\ell\in{\mathbb{N}}$ by \eqref{eq:area preserving} and \eqref{eq:boundary intersections}. If $\ell\in{\mathbb{N}}$ is the minimal number with this property, there exists a point
$$
z_+\in f^{\ell+1}(Y_+)\cap\{(x,y)\in A\,|\, x>0\}.
$$
Since $z_-$ is fixed by $f$, $f^{\ell+1}(Y_+)$ connects $z_-$ and $z_+$, and therefore $f^{\ell+1}(Y_+)\cap Y_-\cap\mathring A\neq\emptyset$ which ensures the existence of an interior symmetric periodic point of $f$ by Lemma \ref{lem:intersection point=>symmetric periodic point}. See the first picture of Figure \ref{fig:disk0}.
Suppose that $Y_+\cap f(Y_+)=\emptyset$. Let $z\in A$ be a periodic point of $f$, i.e. $f^k(z)=z$ for some $k\in{\mathbb{N}}$. Abbreviate $g=f^{k}$. Then
$$
z\notin g^m(Y_+),\quad m\in{\mathbb{N}}\cup\{0\}
$$
since otherwise $z=g(z)\in Y_+\cap g(Y_+)$.
We may assume that $z\in\{(x,y)\in A\,|\,x\leq 0\}$ as $I_A(z)$ is also a periodic point of $f$. As before, $Y_+$ and $g(Y_+)$ divide $A$ into two closed regions and we call the smaller region $A_-$ again. With $g$ and this $A_-$, \eqref{eq:non-intersections}, \eqref{eq:area preserving}, and \eqref{eq:boundary intersections} still hold. By \eqref{eq:area preserving} and \eqref{eq:boundary intersections},
$$
\{(x,y)\in A\,|\,x\leq 0\}\subset \bigcup_{0\leq m\leq \ell}g^m(A_-).
$$
Hence the periodic point $z\in g^m( A_-)$ for some $0\leq m\leq \ell$. This contradicts that $z\in\mathrm{Fix}\, g$ since $g^m(A_-)\cap g^{m+1}(A_-)=g^{m+1}(Y_+)$ by \eqref{eq:boundary intersections} and $z\notin g^{m+1}(Y_+)$, $m\in{\mathbb{N}}\cup\{0\}$. See the second picture of Figure \ref{fig:disk0}.
\end{proof}
\begin{Thm}\label{thm:sym per pt=>infinite}
If an area preserving reversible map $f$ on $A$ or $\mathring A$ has a periodic point, it possesses infinitely many interior symmetric periodic points.
\end{Thm}
\begin{proof}
\begin{figure}[htb]
\includegraphics[width=0.6\textwidth,clip]{disk4.pdf}
\caption{}\label{fig:disk1}
\end{figure}
See Figure \ref{fig:disk1}. We will prove the theorem only for $A$ and the same proof goes through for $\mathring A$ as well. Due to Proposition \ref{prop:per pt=>int sym per pt}, there is an interior symmetric periodic point of $f$.
Suppose that there are only finitely many interior symmetric periodic points of $f$. By deleting interior symmetric periodic points from $A$, we obtain a punctured annulus which we denote by $M$. Suppose that $M$ has $n\geq1$ punctures. Since $f$ is a homeomorphism, it permutes the $n$ punctures, and thus the iterated map $g:=f^{2n!}$ fixes the punctures pointwise. Moreover $g$ is isotopic to the identity and in particular $g$ preserves the inner/outer boundary circle of $M$.
The fixed locus $\mathrm{Fix}\, I_M$ consists of several connected components. We denote by $Y_1$ the component at the very top, i.e. $Y_1$ connects the outer boundary of $A$ and the very top puncture or the inner boundary circle of $A$. We observe that
\begin{equation}\label{eq:e_i}
g^m(Y_1)\cap\mathrm{Fix}\, I_M\cap\mathring M=\emptyset,\quad m\in{\mathbb{N}}\cup\{0\}
\end{equation}
as in \eqref{eq:non-intersections} due to Lemma \ref{lem:fix loci} and Lemma \ref{lem:intersection point=>symmetric periodic point}.
Note that $Y_1$ and $g(Y_1)$ separate $M$ into two closed regions $M_-$ and $M_+$. Due to \eqref{eq:e_i} with $m=1$, $M_-$ and $M_+$ have different area, say $\mathrm{area}(M_-)<\mathrm{area}(M_+)$. Observe as before, see \eqref{eq:boundary intersections}, that
\begin{equation}\label{eq:bdry intersection}
g^m(M_-)\cap g^{m+1}(M_-)=g^{m+1}(Y_1),\quad m\in{\mathbb{N}}\cup\{0\}.
\end{equation}
Since $g$ fixes the punctures pointwise, there is no puncture inside any $g^m(M_-)$, $m\in{\mathbb{N}}\cup\{0\}$, by \eqref{eq:bdry intersection}. There is no loss of generality in assuming $M_-\subset\{(x,y)\in M\,|\,x\leq 0\}$. Due to \eqref{eq:e_i} and the fact that there is at least one puncture on $\{(x,y)\in A\,|\,x\leq 0\}$, we have $g^m(Y_1)\subset \{(x,y)\in M\,|\,x\leq 0\}$ for all $m\in{\mathbb{N}}\cup\{0\}$, which contradicts the fact that $g$ preserves the area of $M_-$. This completes the proof.
\end{proof}
One can use a covering picture to prove the above theorem, see Theorem \ref{thm:infinitely many odd}.
\begin{Prop}\label{prop:symmetric fixed point}
Every area preserving reversible map $f$ on $D$ or $\mathring D$ has an interior symmetric fixed point.
\end{Prop}
\begin{proof}
The proof is stated for $D$ but the same argument works for $\mathring D$. As observed it suffices to show that
$$
\mathrm{Fix}\, (f\circ I_D)\cap\mathrm{Fix}\, I_D\cap\mathring D\neq\emptyset.
$$
Suppose that $f$ is orientation preserving. Then since $f\circ I_D$ is an orientation reversing area preserving involution, $\mathrm{Fix}\,(f\circ I_D)$ is an embedded line in $D$ (see Lemma \ref{lem:fixd loci of involutions}) which divides $D$ into two regions with the same area. Thus it has to cross the interior part of $\mathrm{Fix}\, I_D$.
On the other hand, if $f$ is orientation reversing, we consider $f^2$ instead. Then by the above argument, $f^2$ has an interior symmetric fixed point which in turn implies that $f$ has a symmetric 2-periodic point on $\mathrm{Fix}\, I_D\cap \mathring D$. Applying Proposition \ref{period 2 real periodic point=>real fixed point}, we can find an interior symmetric fixed point of $f$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:dichotomy2}}
\begin{Prop}
If an area preserving reversible map $f$ on $A$ or $\mathring A$ isotopic to the identity has an odd-periodic point, there is an interior symmetric odd-periodic point of $f$.
\end{Prop}
\begin{proof}
This follows from almost the same argument as in the proof of Proposition \ref{prop:per pt=>int sym per pt}. Instead of repeating the argument, we briefly explain why the proof carries over to this proposition. The first key point is that $f$ is additionally assumed to be isotopic to the identity. This ensures that $\mathrm{Fix}\, (f\circ I_A)$ separates $A$ into two regions by Lemma \ref{lem:fixd loci of involutions}. The second reason is that points in
$$
f^m(\mathrm{Fix}\,(f\circ I_A))\cap \mathrm{Fix}\, I_A,\quad m\in{\mathbb{N}}\cup\{0\}
$$
are symmetric odd-periodic points according to Lemma \ref{lem:fix loci} and Lemma \ref{lem:intersection point=>symmetric periodic point}. Hence the proof of Proposition \ref{prop:per pt=>int sym per pt} with $Y_\pm$ replaced by two connected components of $\mathrm{Fix}\, (f\circ I_A)$ proves the present proposition.
\end{proof}
It is conceivable that the following theorem also immediately follows from the proof of Theorem \ref{thm:sym per pt=>infinite} with minor modifications. However we provide a slightly different picture.
\begin{Thm}\label{thm:infinitely many odd}
Let $f$ be an area preserving reversible map on $A$ or $\mathring A$ isotopic to the identity. If $f$ has an odd-periodic point, there are infinitely many interior symmetric odd-periodic points.
\end{Thm}
\begin{proof}
\begin{figure}[htb]
\includegraphics[width=1.\textwidth,clip]{Annulus5.pdf}
\caption{}\label{odd case}
\end{figure}
We prove the theorem for $A$; the case $\mathring A$ can be proved in a similar vein.
Due to the previous proposition we assume that there is an interior symmetric $k$-periodic point $z$ of $f$ for some $k\in 2{\mathbb{N}}-1$. Since $k$ is odd, one of $f^\ell(z)$, $1\leq\ell\leq k$, lies on $\mathrm{Fix}\, I_{A}$, so we may assume $z\in\mathrm{Fix}\, I_{A}\cap\mathring A$. Denote $g=f^k$. We consider the covering $S\to A$ and lift $I_A$ to $I_S$. Thus we can choose a lift $G:S\to S$ of $g:{A}\to { A}$ such that $G\circ I_S=I_S\circ G^{-1}$ and
$$
\mathrm{Fix}\, (G\circ I_{ S})\cap \mathrm{Fix}\, I_{ S}\neq\emptyset.
$$
Suppose that the cardinality of the above set is finite since otherwise there are infinitely many symmetric $k$-periodic points. Since $f$ and hence $G$ is isotopic to the identity, $\mathrm{Fix}\, (G\circ I_S)$ is a line connecting the upper boundary and the lower boundary of $S$, see Lemma \ref{lem:fixd loci of involutions}. We choose the closed sub-segment $s$ of $\mathrm{Fix}\, (G\circ I_{ S})$ with boundary $s_\pm$ satisfying
$$
s\cap \mathrm{Fix}\, I_{ S}\cap\mathring S=\{s_-\},\quad s\cap ({\mathbb{R}}\times\{1\})=\{s_+\}.
$$
Note that $I_S(s)=G^{-1}(s)$, $s\cap I_{ S}(s)\cap\mathring S=\{s_-\}$, and therefore
\begin{equation}\label{eq1}
G^{m}(s)\cap G^{m+1}(s)\cap\mathring S=\{s_-\},\quad m\in{\mathbb{N}}\cup\{0\}.
\end{equation}
Without loss of generality, we assume $s\subset [0,\infty)\times[0,1]$.
Since $G(I_{S}(s_+))=s_+$ and $G$ is isotopic to the identity,
\begin{equation}\label{eq2}
G^{\ell}(s_+)\in [s_+,\infty)\times\{1\},\quad \ell\in{\mathbb{N}}.
\end{equation}
Suppose that there is $q_0\in{\mathbb{N}}$ such that $G^{q_0}(s)\cap\mathrm{Fix}\, I_S\cap \mathring S\neq\{s_-\}$.
Then there exists
$$
(0,y_q)\in G^{q}(s)\cap\mathrm{Fix}\, I_S\cap\mathring S,\quad q\geq q_0
$$
with $y_q>y_{q+1}$ different from $s_-$ due to \eqref{eq1}, \eqref{eq2}, and $G^{q}(s_-)=s_-$. See the first picture of Figure \ref{odd case}. Due to Lemma \ref{lem:fix loci} and Lemma \ref{lem:intersection point=>symmetric periodic point}, this shows the existence of infinitely many interior symmetric odd-periodic points of $f$.
Suppose that $G^{\ell}(s)\cap\mathrm{Fix}\, I_S\cap \mathring S=\{s_-\}$ for every $\ell\in{\mathbb{N}}$. Let $D$ be the domain enclosed by $s\cup I_S(s)$ and ${\mathbb{R}}\times\{1\}$. Observe that
$$
G^{\ell}(D)\cap G^{\ell+1}(D)=G^{\ell+1}(s),\quad \ell\in{\mathbb{N}}\cup\{0\}
$$
by \eqref{eq1}, see the second picture of Figure \ref{odd case}. Since $\bigcup_{\ell\in{\mathbb{N}}} G^\ell(D)\subset [0,\infty)\times[0,1]$ is connected and has infinite area, there exists $n(\ell)\in{\mathbb{N}}$, $\ell\in{\mathbb{N}}$ such that
$$
\lim_{\ell\to\infty}n(\ell)=\infty,\quad G^{\ell}(s)\cap (\{n\pi\}\times[0,1])\cap\mathring S\neq\emptyset,\quad 0\leq\forall n\leq n(\ell).
$$
Since the segments $\{n\pi\}\times[0,1]$ are lifts of $\mathrm{Fix}\, I_A$, this shows that there are infinitely many interior symmetric odd-periodic points of $f$.
\end{proof}
\subsection{Proofs of Theorem \ref{thm:PB theorem} and Theorem \ref{thm:necessary and sufficient condition}}\quad\\[-1.5ex]
As we emphasized, we do {\em not} require here $f$ to be area preserving. Recall that we denote by $\mathrm{Fix}\, I_\Omega=Y_1\sqcup\cdots\sqcup Y_n$ where $Y_i$ are disjoint intervals for an invariant connected possibly non-closed domain $\Omega\subset{\mathbb{R}}^2$.
\begin{figure}[htb]
\includegraphics[width=.5\textwidth,clip]{disk3.pdf}
\caption{}\label{iff}
\end{figure}
\begin{Thm}\label{thm:twist}
Let $f$ be a reversible map on $\Omega$ isotopic to the identity. Then $f$ has a (an interior) symmetric fixed point on $Y_i$ if and only if $f(Y_i)\cap Y_i\neq\emptyset$ ($f(Y_i)\cap Y_i\cap\mathring \Omega\neq\emptyset$) for $i\in\{1,\dots,n\}$.
\end{Thm}
\begin{proof}
See Figure \ref{iff}. If $f$ has a symmetric fixed point $z\in Y_i$, $z=f(z)\in f(Y_i)\cap Y_i$ for $1\leq i\leq n$. For the converse, suppose that $z\in f(Y_i)\cap Y_i$ for $1\leq i\leq n$. Then $z=f(z_0)$ for some $z_0\in Y_i$. We assume $z\neq z_0$ since otherwise we are done. Since both $z_0$, $z\in Y_i\subset\mathrm{Fix}\, I_\Omega$, we compute
$$
f(z)=f\circ I_\Omega(z)=I_\Omega\circ f^{-1}(z)=I_\Omega(z_0)=z_0.
$$
This implies $z$ is a symmetric 2-periodic point on $Y_i$ with $f(z)\in Y_i$, and hence Proposition \ref{period 2 real periodic point=>real fixed point} guarantees the existence of a symmetric fixed point of $f$ in between $z$ and $f(z)$ and hence on $Y_i$. The assertion concerning an interior symmetric fixed point follows in the same way.
\end{proof}
Although Theorem \ref{thm:twist} directly implies the theorem below, we outline two more elementary proofs of this. The first proof makes use of the fixed locus of the involution $f\circ I_A$ while the second proof uses the fixed point index.
\begin{Thm}
Let $f$ be a reversible map on $A$ isotopic to the identity with the boundary twist condition. Then there is a symmetric fixed point of $f$ on each connected component of $\mathrm{Fix}\, I_A$.
\end{Thm}
\begin{figure}[htb]
\includegraphics[width=0.9\textwidth,clip]{Annulus1.pdf}
\caption{}\label{fig:PB proof2 strip}
\end{figure}
\noindent\textsc{Sketch of Proof 1.}
See Figure \ref{fig:PB proof2 strip}. We lift $f:A\to A$ to a reversible map $$F=(F_1,F_2): S={\mathbb{R}}\times[0,1]\to S$$ such that $F_1(0,1)\in(0,2\pi]$. Then by the boundary twist condition,
$$
\mathrm{Fix}\, (F\circ I_S)\cap({\mathbb{R}}\times\{1\})\subset(0,\pi]\times\{1\},\quad \mathrm{Fix}\, (F\circ I_S)\cap({\mathbb{R}}\times\{0\})\subset(-\infty,0)\times\{0\}.
$$
Therefore we have
$$
\mathrm{Fix}\, (F\circ I_S)\cap\mathrm{Fix}\, I_S\cap\mathring S\neq\emptyset,
$$
i.e. there is an interior symmetric fixed point of $f$ on one connected component of $\mathrm{Fix}\, I_A$. To find another interior symmetric fixed point of $f$ on the other connected component of $\mathrm{Fix}\, I_A$, consider the reflection $I_S':(x,y)\mapsto(2\pi-x,y)$ which is another lift of $I_A$ to $S$. Then $F\circ I'_S$ is also reversible and we have
$$
\mathrm{Fix}\, (F\circ I'_S)\cap\mathrm{Fix}\, I'_S\cap\mathring S\neq\emptyset.
$$
by the boundary twist condition again. This gives another symmetric fixed point of $f$.
\hfill$\square$\\[-1ex]
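Both sketches rest on the boundary twist condition producing a sign change of the lifted displacement along $\mathrm{Fix}\, I_S=\{0\}\times[0,1]$. A minimal numerical illustration (our own model, not from the paper): the twist map $F(x,y)=(x+\alpha(y),y)$ on the strip is reversible for $I_S(x,y)=(-x,y)$, and whenever $\alpha(0)<0<\alpha(1)$ a symmetric fixed point on $\{0\}\times[0,1]$ can be located by bisection. The profile $\alpha$ below is a hypothetical choice.

```python
import math

# Model twist map on the strip S = R x [0,1]: F(x, y) = (x + alpha(y), y),
# with reflection I(x, y) = (-x, y).  alpha is a hypothetical twist profile
# satisfying the boundary twist condition alpha(0) < 0 < alpha(1).
alpha = lambda y: math.pi * (y - 0.3)
F = lambda x, y: (x + alpha(y), y)
F_inv = lambda x, y: (x - alpha(y), y)
I = lambda x, y: (-x, y)

# Reversibility F o I = I o F^{-1}, checked at a few sample points.
for (x, y) in [(0.7, 0.1), (-2.0, 0.5), (3.3, 0.9)]:
    assert all(abs(p - q) < 1e-12 for p, q in zip(F(*I(x, y)), I(*F_inv(x, y))))

# A symmetric fixed point lies on Fix I = {0} x [0,1] where alpha vanishes;
# the sign change at the boundary lets bisection find it.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if alpha(mid) < 0 else (lo, mid)
y_star = 0.5 * (lo + hi)
print(F(0.0, y_star))  # ≈ (0.0, 0.3): a symmetric fixed point
```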
\begin{figure}[htb]
\includegraphics[width=.8\textwidth,clip]{Annulus3.pdf}
\caption{}\label{PB proof 1}
\end{figure}
\noindent\textsc{Sketch of Proof 2.}
See Figure \ref{PB proof 1}.
Suppose that a lift $F:S\to S$ of $f:A\to A$ does not have any fixed points on $\mathrm{Fix}\, I_S$. Then there is no fixed point of $F$ inside $(-\epsilon,\epsilon)\times[0,1]$ for small $\epsilon>0$. Now we choose any path $\gamma:[0,1]\to(0,\epsilon)\times[0,1]$ from ${\mathbb{R}}\times\{0\}$ to ${\mathbb{R}}\times\{1\}$. Then due to the boundary twist condition,
\begin{equation}\label{eq:half integer}
i(\gamma,F\circ\gamma)\in\frac{1}{2}+{\mathbb{Z}}.
\end{equation}
Moreover as in the proof of Proposition \ref{period 2 real periodic point=>real fixed point}, we have
\begin{equation}\label{eq:same index}
i(I_S\circ \bar\gamma,F\circ I_S\circ\bar\gamma)=i(\gamma,F\circ\gamma).
\end{equation}
We choose auxiliary paths $\beta$ and $\delta$ in the boundary of $S$ as in Figure \ref{PB proof 1} so that $\alpha:=\gamma*\delta*I(\bar\gamma)*\beta$ forms a loop containing $\mathrm{Fix}\, I_S$. Then using \eqref{eq:half integer} and \eqref{eq:same index} we compute
$$
i(\alpha,F\circ\alpha)=i(I\circ \bar\gamma,F\circ I\circ\bar\gamma)+i(\gamma,F\circ\gamma)\in1+2{\mathbb{Z}}.
$$
Therefore there is a fixed point of $F$ inside the domain enclosed by the loop $\alpha$. This contradiction shows the existence of a symmetric fixed point of $f$ on one connected component of $\mathrm{Fix}\, I_A$.
In a similar way, we are able to find a fixed point of $F$ on $\mathrm{Fix}\, I_S'$ which gives another symmetric fixed point of $f$ on the other connected component of $\mathrm{Fix}\, I_A$.
\hfill$\square$\\[-1ex]
A standard argument shows the following, see for instance \cite[Corollary 8.6]{MS98}.
\begin{Cor}
Let $f:A\to A$ be a reversible map isotopic to the identity with the boundary twist condition. Then for each $\frac{p}{q}\in{\mathbb{Q}}$ satisfying
$$
\min_{x\in{\mathbb{R}}}(F_1(x,0)-x)<\frac{2\pi p}{q}< \max_{x\in{\mathbb{R}}}(F_1(x,1)-x),
$$
there is a symmetric $q$-periodic point of $f$ with rotation number $\frac{p}{q}$ on each connected component of $\mathrm{Fix}\, I_A$.
\end{Cor}
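A hedged numerical illustration of the corollary for a model twist lift $F(x,y)=(x+\alpha(y),y)$ (the increasing profile $\alpha$ is our own choice, not from the paper): a symmetric $q$-periodic point with rotation number $p/q$ sits at a height $y$ with $\alpha(y)=2\pi p/q$, which bisection finds whenever $2\pi p/q$ lies between the boundary translation numbers.

```python
import math

# Model lift of a twist map: F(x, y) = (x + alpha(y), y), with a hypothetical
# increasing profile running from alpha(0) = -pi to alpha(1) = 3*pi.
alpha = lambda y: -math.pi + 4 * math.pi * y

p, q = 1, 3
target = 2 * math.pi * p / q          # need alpha(y) = 2*pi*p/q
assert alpha(0) < target < alpha(1)   # the corollary's hypothesis

lo, hi = 0.0, 1.0                     # bisection for alpha(y) = target
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if alpha(mid) < target else (lo, mid)
y_star = 0.5 * (lo + hi)

# The orbit of (0, y_star): after q steps the lift has translated by 2*pi*p,
# so the projected point on the annulus is q-periodic with rotation number p/q,
# and it is symmetric because it starts on Fix I.
x = 0.0
for _ in range(q):
    x += alpha(y_star)
print(x / (2 * math.pi))  # ≈ 1.0 = p
```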
\subsubsection*{Acknowledgments} {I would like to thank Urs Fuchs for helpful discussions. The author is supported by DFG grant KA 4010/1-1. This work is also partially supported by SFB 878-Groups, Geometry, and Actions.}
| {
"timestamp": "2014-10-16T02:09:07",
"yymm": "1410",
"arxiv_id": "1410.3997",
"language": "en",
"url": "https://arxiv.org/abs/1410.3997",
"abstract": "In reversible dynamical systems, it is frequently of importance to understand symmetric features. The aim of this paper is to explore symmetric periodic points of reversible maps on planar domains invariant under a reflection. We extend Franks' theorem on a dichotomy of the number of periodic points of area preserving maps on the annulus to symmetric periodic points of area preserving reversible maps. Interestingly, even a non-symmetric periodic point guarantees infinitely many symmetric periodic points. We prove an analogous statement for symmetric odd-periodic points of area preserving reversible maps isotopic to the identity, which can be applied to dynamical systems with double symmetries. Our approach is simple, elementary and far from Franks' proof. We also show that a reversible map has a symmetric fixed point if and only if it is a twist map which generalizes a boundary twist condition on the closed annulus in the sense of Poincaré-Birkhoff. Applications to symmetric periodic orbits in reversible dynamical systems with two degrees of freedom are briefly discussed.",
"subjects": "Dynamical Systems (math.DS); Symplectic Geometry (math.SG)",
"title": "On reversible maps and symmetric periodic points",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462176445589,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7099326002532091
} |
https://arxiv.org/abs/1910.02316 | Superiority of Bayes estimators over the MLE in high dimensional multinomial models and its implication for nonparametric Bayes theory | This article focuses on the performance of Bayes estimators, in comparison with the MLE, in multinomial models with a relatively large number of cells. The prior for the Bayes estimator is taken to be the conjugate Dirichlet, i.e., the multivariate Beta, with exchangeable distributions over the coordinates, including the non-informative uniform distribution. The choice of the multinomial is motivated by its many applications in business and industry, but also by its use in providing a simple nonparametric estimator of an unknown distribution. It is striking that the Bayes procedure outperforms the asymptotically efficient MLE over most of the parameter spaces for even moderately large dimensional parameter space and rather large sample sizes. | \section{Introduction}
The present article shows by analytical computations and simulations that Bayes estimators even in moderately high dimensional multinomial models outperform the MLE on most of the parameter space. High dimensional multinomial models are useful for industrial planning. For example, for planning its manufacturing process a big departmental store may try to estimate the proportions of a certain type of clothing, by sizes and/or colors, demanded by its customers (see the example in Section \ref{sec:data}).
For another important motivation for pursuing this study of multinomials, note that an unknown distribution on a state space may be approximated nonparametrically by probabilities of members of a fine partition. Sampling from this approximate distribution is the same as sampling from a multinomial. Bayes estimation of the approximating multinomial with a conjugate Beta (i.e., Dirichlet) prior was a motivation for Ferguson’s path-breaking development of nonparametric Bayes theory of estimation of a probability distribution on the state space with the so-called Dirichlet process prior \citep{ferguson1973}.
It may be pointed out that the venerable Bernstein-von Mises theorem (see, e.g., \citet[p.~339]{bickel2001}, or \citet[p.~190]{textbook}) is designed to show how Bayes estimators with reasonable priors are asymptotically as efficient as the MLE and approach it as the sample size increases. It turns out from our study that in high dimensional multinomial models it is the MLE which tries to catch up with Bayes as the sample size increases!
The MLE of a multinomial with parameter $\bb\theta=(\theta_1,\theta_2,\dotsc,\theta_k)$ is given by $\bb{\hat\theta}=(\hat\theta_1,\hat\theta_2,\dotsc,\hat\theta_k)$, where $\hat\theta_i=n_i/n$ is the proportion of i.i.d. observations $X_1,\dotsc,X_n$ in the $i$-th cell estimating its true proportion $\theta_i$. The Bayes estimator $\bb{d_B}$ uses the conjugate prior $\operatorname{Beta}(\alpha_1,\alpha_2,\dotsc,\alpha_k)$, also called a Dirichlet prior and denoted $\dir(\alpha_1,\alpha_2,\dotsc,\alpha_k)$. We take $\alpha_1=\alpha_2=\dotsb=\alpha_k$, for purposes of ease in computation and assuming no a priori preference for some categories over others. Indeed, when the common value is 1 one has the so-called \emph{non-informative prior}, assigning the uniform density on the parameter space
\begin{equation}
E_k\equiv \left\{\bb\theta = (\theta_1,\theta_2,\dotsc,\theta_k): \theta_i \ge 0 \;\forall i, \sum_{1\le i \le k} \theta_i =1 \right\}.
\label{eq:ek1}
\end{equation}
The posterior distribution of $\bb\theta$ is $\dir(\alpha_1+n_1,\dotsc,\alpha_k+n_k)$ and its mean is the Bayes estimator $\bb{d_B}$ of $\bb\theta$, as given in \eqref{eq:db}.
For the most part, we take the loss function to be squared error under which the risk functions $R(\bb{\hat\theta},\bb\theta)$ and $R(\bb{d_B},\bb\theta)$ are given by \eqref{eq:mlerisk} and \eqref{eq:Bayesrisk}. Both analytical calculations and simulations are carried out for cases with $\alpha_1=\alpha_2=\dotsb=\alpha_k$.
For a simple nonparametric estimation of an unknown probability on some state space via the Bayes procedure with a Dirichlet prior whose base measure $\bb\alpha$ has total mass $C$, we consider, for each $k$, $\alpha_i\equiv\bb\alpha(\{i\})=C/k$ for all $1\le i \le k$. Such an approximation with large $k$, named the ``tree construction'' of the Dirichlet process, may be found in \citet[Chapter 3]{ghosh_ram2003} and \citet[Chapter 4]{ghosal2017}. One may think of $C/k$ as the $\bb\alpha$ measure of each of $k$ members of a partition, which can be ensured if $\bb\alpha$ is absolutely continuous (with respect to Lebesgue measure on a Euclidean state space or a volume measure on a manifold).
Section \ref{sec:mult} introduces the multinomial distribution and the estimators under consideration. Section \ref{sec:volume} explores the volume of the parameter space where the Bayes estimators have lower risk than the MLE. To build intuition we begin in Sections \ref{sub:omega2} and \ref{sub:omega3} with analytical computations for the binomial model, i.e., a multinomial model with $k=2$, and then a multinomial with $k=3$. For $k\ge 4$, the geometry of the region of the simplex \eqref{eq:ek1} where $R(\bb{\hat\theta},\bb\theta)\le R(\bb{d_B},\bb\theta)$ is complex. Extensive simulations show that, under the uniform prior $\dir(1,1,\dotsc,1)$, for moderate and large values of $k$ and small as well as large values of the sample size $n$, the Bayes estimator has a smaller expected squared error than that of the MLE on most of the parameter space (see Figure \ref{fig:sampling}).
Next consider the multivariate Beta prior with parameters $\alpha_1=\alpha_2=\dotsb=\alpha_k=1/k$ with a large $k$, suitable for a simple nonparametric estimation of an unknown distribution as alluded to above. In this case, for sufficiently large $k$, the region where the MLE has a smaller expected squared error than Bayes is identifiable as the union of $k$ regions, each being a ``cone'' shaped structure minus a cap at the base (see Figure \ref{fig:delta3} in Section \ref{sec:volume} for the case $k=3$). Its area, i.e., its volume measure in the simplex $E_k$ in \eqref{eq:ek1}, relative to the volume measure of $E_k$ or, equivalently, of its complement, is estimated analytically in Section 3 (see Lemma \ref{lem:sa}). This conservative estimate is compared with the ``true'' value obtained by simulation in Table \ref{tab:1kprior}, once again showing that the region where the MLE has a smaller expected risk than that of the Bayes estimator is rather negligible.
In Section \ref{sec:avg}, the average difference over the parameter space of expected squared errors $R(\bb{\hat\theta},\bb\theta)-R(\bb{d_B},\bb\theta)$ is computed in proportion to the average of $R(\bb{\hat\theta},\bb\theta)$ and compared (see Table \ref{tab:comparison} and Figure \ref{fig:avgriskdec}). This scaled difference is shown to be maximum under the uniform prior $\dir(1,1,\dotsc,1)$. This may come as a surprise because the proportion of the volume of $E_k$ in which the Bayes risk is smaller than that of the MLE is generally larger under the prior $\dir(1/k,\dotsc,1/k)$ than under the uniform!
Section \ref{sec:tv} briefly illustrates the approximations in $L_1$ or total variation distance between a true distribution and its nonparametric estimators based on the MLE and those of the nonparametric Bayes estimators. Section \ref{sec:data} presents a data example to illustrate an industrial application of a high dimensional multinomial. A final Section \ref{sec:final} lays down some final Remarks.
\section{The multinomial distribution}\label{sec:mult}
Consider the estimation of the parameter $\boldsymbol \theta=(\theta_1,\theta_2,\dotsc,\theta_k)$ in the multinomial distribution, where $\theta_j$ is the proportion of the $j$-th class in a population with $k\ge 2$ classes ($j=1,2,\dotsc,k$). Based on a simple random sample of size $n$ from the population, let $n_1,n_2,\dotsc,n_k$ be the numbers in the sample belonging to each of the $k$ classes. Since $(n_1,n_2,\dotsc,n_k)$ is a sufficient statistic for $\boldsymbol\theta$, it is enough to consider the distribution of $(n_1,n_2,\dotsc,n_k)$ for the estimation of $\boldsymbol\theta$. Namely,
\begin{equation}
f(n_1,n_2,\dotsc,n_k;\boldsymbol\theta)=\frac{n!}{n_1!n_2!\dotsb n_k!}\theta_1^{n_1}\theta_2^{n_2}\dotsb \theta_k^{n_k},\quad\quad
\boldsymbol\theta\in\left\{(\theta_1,\theta_2,\dotsc,\theta_k)\in\rr^k: \theta_j\ge 0~\forall j,\ \sum_{j=1}^k \theta_j=1\right\}.
\label{eq:mult}
\end{equation}
The Maximum Likelihood Estimator (MLE) is $\hat\theta\equiv\left(\frac{n_1}{n},\frac{n_2}{n},\dotsc,\frac{n_k}{n}\right)$. The multivariate Beta, or \emph{Dirichlet}, prior $\dir(\alpha_1,\alpha_2,\dotsc,\alpha_k)$ has density with respect to Lebesgue measure on $\Theta^\sim$, where $\Theta^\sim$ is given by
\[
\Theta^\sim\equiv
\left\{ (\theta_1,\theta_2,\dotsc,\theta_{k-1})\in\rr^{k-1}: \theta_j\ge 0~\forall j, \sum_{j=1}^{k-1} \theta_j\le1\right\}.
\]
The Dirichlet density is
\begin{equation}
\pi(\theta_1,\theta_2,\dotsc,\theta_k)=\frac{\Gamma(\alpha_1+\dotsb+\alpha_k)}{\Gamma(\alpha_1)\Gamma(\alpha_2)\dotsb\Gamma(\alpha_k)} \theta_1^{\alpha_1-1}\theta_2^{\alpha_2-1}\dotsb\theta_k^{\alpha_k-1},\quad \text{for }\boldsymbol\theta\in\Theta^\sim,
\label{eq:dir}
\end{equation}
where $\theta_k=1-\theta_1-\theta_2-\dotsb-\theta_{k-1}$.
It is well known, and easy to prove, that if the prior is $\dir(\alpha_1,\dotsc,\alpha_k)$, the posterior distribution of $\boldsymbol\theta$ is $\dir(\alpha_1+n_1,\alpha_2+n_2,\dotsc,\alpha_k+n_k)$ (see, e.g., \citet{textbook}).
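The conjugacy can be checked numerically at a few interior points of the simplex: the unnormalized posterior (Dirichlet prior density times the multinomial likelihood kernel) is a constant multiple of the $\dir(\alpha_1+n_1,\dotsc,\alpha_k+n_k)$ density. A small Python sketch (our own illustration; the parameter values and evaluation points are arbitrary):

```python
import math

def dirichlet_pdf(theta, alpha):
    """Dirichlet density at an interior point theta of the simplex."""
    norm = math.gamma(sum(alpha)) / math.prod(math.gamma(a) for a in alpha)
    return norm * math.prod(t ** (a - 1) for t, a in zip(theta, alpha))

alpha = (2.0, 1.0, 3.0)        # illustrative prior parameters
counts = (4, 2, 1)             # observed cell counts, n = 7
posterior_alpha = tuple(a + c for a, c in zip(alpha, counts))

ratios = []
for theta in [(0.2, 0.3, 0.5), (0.1, 0.6, 0.3), (0.25, 0.25, 0.5)]:
    likelihood_kernel = math.prod(t ** c for t, c in zip(theta, counts))
    unnormalized = dirichlet_pdf(theta, alpha) * likelihood_kernel
    ratios.append(unnormalized / dirichlet_pdf(theta, posterior_alpha))

# The ratio is the same at every point: the posterior is Dir(alpha + counts).
print(ratios)
```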
Under squared error loss $L(\boldsymbol\theta,\boldsymbol\theta')=|\boldsymbol\theta-\boldsymbol\theta'|^2=\sum_{1\le i\le k} (\theta_i-\theta_i')^2$, the risk function of the MLE $\hat\theta=(\hat\theta_1,\dotsc,\hat\theta_k)$ with $\hat\theta_i=n_i/n$ is given by
\begin{equation}
R(\boldsymbol{\hat\theta},\boldsymbol\theta)=\sum_{1\le i \le k} \frac{\theta_i(1-\theta_i)}{n}=\frac{1-\sum_{1\le i\le k} \theta_i^2}{n}
\label{eq:mlerisk}
\end{equation}
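Formula \eqref{eq:mlerisk} can be verified exactly for small $n$ and $k$ by enumerating all multinomial outcomes. The following Python sketch (an illustration of ours, with arbitrary $\boldsymbol\theta$ and $n$) computes $E|\boldsymbol{\hat\theta}-\boldsymbol\theta|^2$ this way and compares it with $(1-\sum_i\theta_i^2)/n$.

```python
import itertools, math

def exact_mle_risk(theta, n):
    """Exact E|theta_hat - theta|^2 by enumerating all multinomial outcomes."""
    risk = 0.0
    for counts in itertools.product(range(n + 1), repeat=len(theta)):
        if sum(counts) != n:
            continue
        pmf = math.factorial(n)  # multinomial pmf of this outcome
        for c, t in zip(counts, theta):
            pmf *= t ** c / math.factorial(c)
        risk += pmf * sum((c / n - t) ** 2 for c, t in zip(counts, theta))
    return risk

theta, n = (0.5, 0.3, 0.2), 6
formula = (1 - sum(t ** 2 for t in theta)) / n
print(exact_mle_risk(theta, n), formula)  # both equal (1 - 0.38)/6 ≈ 0.10333
```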
We wish to choose an exchangeable prior--invariant under permutation of coordinates. Thus we choose $\alpha_1=\alpha_2=\dotsb =\alpha_k=C_{k,n}$, where $C_{k,n}$ is some constant which may depend on $k$ and $n$. The choices of $C_{k,n}$ that lead to better estimators in terms of risk under squared error loss will be investigated.
Under the Dirichlet prior $\dir(\alpha_1,\dotsc,\alpha_k)$ with $\alpha_1=\alpha_2=\dotsb=\alpha_k=C_{k,n}$ and squared error loss, the Bayes estimator is
\begin{equation}
\boldsymbol d_B=(d_{B1},\dotsc,d_{Bk}), \quad \text{with } d_{Bi}=\frac{n_i+C_{k,n}}{n+kC_{k,n}} \quad (i=1,2,\dotsc, k)
\label{eq:db}
\end{equation}
and its risk function is (see, e.g., \citet{textbook})
\begin{align}
R(\boldsymbol d_B, \boldsymbol\theta)&=\sum_{1\le i \le k} \frac{n\theta_i(1-\theta_i)+(C_{k,n}-k\theta_iC_{k,n})^2}{\left(n+kC_{k,n}\right)^2}\nonumber \\
&=\left(\sum_{1\le i \le k} (\theta_i-\theta_i^2)\right)\frac{n}{\left(n+kC_{k,n}\right)^2} + \frac{C_{k,n}^2\left(k-2k+k^2\left(\sum_{1\le i \le k} \theta_i^2\right)\right)}{\left(n+kC_{k,n}\right)^2}\nonumber \\
&=\left( 1- \sum_{1\le i \le k} \theta_i^2\right)\frac{n}{\left(n+kC_{k,n}\right)^2} - \frac{kC_{k,n}^2}{\left(n+kC_{k,n}\right)^2}+\frac{k^2C_{k,n}^2 \sum_{1\le i \le k} \theta_i^2}{\left(n+kC_{k,n}\right)^2} \nonumber \\
&=\frac{n-kC_{k,n}^2}{\left(n+kC_{k,n}\right)^2}+\frac{\left(k^2C_{k,n}^2-n\right) \sum_{1\le i \le k} \theta_i^2}{\left(n+kC_{k,n}\right)^2}
\label{eq:Bayesrisk}
\end{align}
Hence, $R(\boldsymbol{\hat\theta},\boldsymbol\theta)\le R(\boldsymbol d_B,\boldsymbol\theta)$ (i.e., the MLE has risk no larger than that of the Bayes estimator) precisely on the set
\begin{equation}
\left\{ \boldsymbol\theta = (\theta_1,\theta_2,\dotsc,\theta_k): \theta_i \ge 0~\forall i, \sum_{1 \le i \le k} \theta_i=1, {\left[\frac{k^2C_{k,n}^2-n}{\left(n+kC_{k,n}\right)^2}+\frac{1}{n}\right] \sum_{1\le i \le k} \theta_i^2 \ge \frac{1}{n}-\frac{n-kC_{k,n}^2}{\left(n+kC_{k,n}\right)^2}}\right\}.
\end{equation}
This can be written more compactly as
\begin{equation}
\left\{ \boldsymbol\theta = (\theta_1,\theta_2,\dotsc,\theta_k): \theta_i \ge 0~\forall i, \sum_{1 \le i \le k} \theta_i=1, { \sum_{1\le i \le k} \theta_i^2 \ge \frac{2n+(n+k)C_{k,n}}{2n+(k+kn)C_{k,n}}}\right\}.
\label{eq:riskset}
\end{equation}
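As a sanity check, the closed-form risks \eqref{eq:mlerisk} and \eqref{eq:Bayesrisk} can be compared numerically against the threshold in \eqref{eq:riskset}; the helper names below (\verb|mle_risk|, \verb|bayes_risk|, \verb|threshold|) are ours:

```python
# Numerical check that the risk comparison between the MLE and the
# Bayes estimator flips exactly at the threshold of (eq:riskset).

def mle_risk(theta, n):
    # Equation (eq:mlerisk): (1 - sum theta_i^2) / n.
    return (1 - sum(t * t for t in theta)) / n

def bayes_risk(theta, n, C):
    # Equation (eq:Bayesrisk) with C = C_{k,n}.
    k = len(theta)
    s = sum(t * t for t in theta)
    return ((n - k * C**2) + (k**2 * C**2 - n) * s) / (n + k * C)**2

def threshold(n, k, C):
    # Right-hand side of the inequality in (eq:riskset).
    return (2*n + (n + k)*C) / (2*n + (k + k*n)*C)
```

For points of the form $\boldsymbol\theta=(t,(1-t)/2,(1-t)/2)$, the MLE has the smaller risk exactly when $\sum\theta_i^2$ exceeds the threshold.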
Recall that the parameter space is the simplex $E_k$
\begin{equation}
E_k\equiv \{ \boldsymbol \theta=(\theta_1,\theta_2,\dotsc,\theta_k): \theta_i\ge 0~\forall i, \sum_{1 \le i \le k} \theta_i=1\}
\label{def:ek}
\end{equation}
We will calculate or simulate the volume of the region \eqref{eq:riskset} in $E_k$ for various choices of $C_{k,n}$. This gives the proportion of the parameter space that is better estimated (with regard to risk) by the MLE.
It will be seen (Section 4) that, even for fairly large sample sizes, the Bayes estimator outperforms the MLE once $k$ is moderately large, say $k\ge 10$.
\section{The proportion of the parameter space favoring the Bayes estimator: volume calculations} \label{sec:volume}
Before calculating the volume of the region \eqref{eq:riskset}, we will consider, more generally, the simplex $E_k$ defined in \eqref{def:ek}, and the region $\Omega_{k,R}\equiv \{ \boldsymbol\theta \in E_k: |\boldsymbol\theta|^2\le R\}$, where $R$ is a known constant. Note that the region \eqref{eq:riskset} is the complement of $\Omega_{k,R}$ (for a specific choice of $R$). Thus the region $\Omega_{k,R}$, when applied to this problem, represents the region of the parameter space where the Bayes estimator has lower risk than the MLE.
Define the point $\boldsymbol{e_0}$ by
\begin{equation}
\boldsymbol{e_0}\equiv \left(\frac{1}{k},\frac{1}{k},\dotsc,\frac{1}{k}\right),
\label{eq:e0}
\end{equation}
the point in $E_k$ that is closest to the origin, with $\left|\boldsymbol{e_0}\right|^2=\frac{1}{k}$. It follows that if $R < 1/k$ then $\Omega_{k,R}=\emptyset$. Similarly, if $R \ge 1$ then $\Omega_{k,R}=E_k$, since $|\boldsymbol\theta|^2\le1~\forall \boldsymbol\theta\in E_k$.
For $R\ge 1/k$, define $\delta_k(R)$ to be the distance between $\boldsymbol{e_0}$ and the sphere $\{ \boldsymbol\theta\in E_k: |\boldsymbol\theta|^2=R\}$. Then
\begin{equation}
\delta_k(R)=\sqrt{R-\frac{1}{k}}.
\label{eq:delta}
\end{equation}
Define $\nu_j$ to be the distance between $\boldsymbol{e_0}$ and the $(k-1-j)$-dimensional boundary of $E_k$. This is the distance between $\boldsymbol{e_0}$ and $\left(0,\dotsc,0,\frac{1}{k-j},\dotsc, \frac{1}{k-j}\right)$, which has the first $j$ coordinates equal to 0, and the remaining $k-j$ coordinates equal to $\frac{1}{k-j}$. We have
\begin{equation}
\nu_j=\sqrt{\frac{j}{k(k-j)}}.
\label{eq:nu}
\end{equation}
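The distances \eqref{eq:delta} and \eqref{eq:nu} are straightforward to compute; a minimal sketch (the function names \verb|delta| and \verb|nu| are ours):

```python
import math

# Distances defined in (eq:delta) and (eq:nu): delta(k, R) is the distance
# from e_0 = (1/k, ..., 1/k) to the sphere |theta|^2 = R, and nu(k, j) is
# the distance from e_0 to the (k-1-j)-dimensional boundary of E_k.

def delta(k, R):
    # Requires R >= 1/k.
    return math.sqrt(R - 1.0 / k)

def nu(k, j):
    # j ranges over 1, ..., k-1.
    return math.sqrt(j / (k * (k - j)))
```

A quick check: for $k=5$, $j=2$, the formula agrees with the direct distance from $\boldsymbol{e_0}$ to $(0,0,\frac13,\frac13,\frac13)$, and the $\nu_j$ are increasing in $j$.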
Note that we take $j=1,\dotsc,k-1$ and that $\nu_1<\nu_2<\dotsb<\nu_{k-1}$. Thus, for any $R\in \left(1/k,1\right)$, we can find $j$ such that $\nu_j\le \delta_k(R) < \nu_{j+1}$. We conjecture that the precise shape of $\Omega_{k,R}$, and thus the formula for calculating its surface area, should depend on which $j$ satisfies this condition. We use the term ``surface area'' since $\Omega_{k,R}$ (and also $E_k$) is a $(k-1)$-dimensional subset of $\rr^k$.
\subsection{The surface area of $\boldsymbol{E_k}$}
Consider in general the simplex $S_k(r)$, defined
\[
S_k(r)\equiv \{\boldsymbol\theta = (\theta_1,\theta_2,\dotsc,\theta_k):~ \theta_i\ge0~\forall i, \sum_{1\le i\le k}\theta_i \le r\}, ~r>0.
\]
Let $E_k(r)$ be the boundary of $S_k(r)$, namely,
\[
E_k(r)\equiv \{\boldsymbol\theta = (\theta_1,\theta_2,\dotsc,\theta_k):~ \theta_i\ge0~\forall i, \sum_{1\le i\le k}\theta_i = r\},
\]
and write $E_k\equiv E_k(1)$.
\begin{lemma}
(i) The volume $V_k(r)$ of $S_k(r)$ is $\frac{r^k}{k!}$, and (ii) the surface area $A_k(r)$ of $E_k(r)$ is $r^{k-1}\frac{\sqrt{k}}{(k-1)!}$.
In particular, the surface area of the simplex $E_k\equiv E_k(1)$ is
\begin{equation}
A_k\left(\equiv A_k(1)\right)=\frac{\sqrt{k}}{(k-1)!}.
\label{eq:ekvol}
\end{equation}
\end{lemma}
\begin{proof}
\begin{enumerate}[(i)]
\item
\begin{align*}
V_k(r)&=\int_{S_k(r)}d\theta_1d\theta_2\dotsb d\theta_k\\
&=\int_{S_{k-1}(r)} \left(r-\sum_{1\le i\le k-1} \theta_i\right)d\theta_1 d\theta_2\dotsb d\theta_{k-1}\\
&=\int_{S_{k-2}(r)} \frac{\left(r-\sum_{1\le i\le k-2} \theta_i\right)^2}{2} d\theta_1 d\theta_2\dotsb d\theta_{k-2}\\
&=\dotsb\\
&= \int_{S_1(r)} \frac{(r-\theta_1)^{k-1}}{(k-1)!} d\theta_1\\
&=\frac{r^k}{k!}.
\end{align*}
\item The difference in volume between $S_k(r)$ and $S_k(r+\Delta r)$ is a slab around $E_k(r)$. Note that
\begin{align*}
V_k(r+\Delta r)-V_k(r)&=\Delta r \left[\frac{d}{dr} \left(\frac{r^k}{k!}\right)\right]+o(\Delta r)\\
&= \frac{(\Delta r)\, r^{k-1}}{(k-1)!}+o(\Delta r) \quad\text{as } \Delta r\searrow 0.
\end{align*}
The unit normal to the surface $E_k(r)$ at every point on it is $(\text{grad} \sum_{1\le i\le k}\theta_i)/|\text{grad} \sum_{1\le i\le k}\theta_i|=\left(\frac{1}{\sqrt{k}},\frac{1}{\sqrt{k}},\dotsc,\frac{1}{\sqrt{k}}\right)$. Hence at every point the thickness of the slab $S_k(r+\Delta r) \setminus S_k(r)$ is $\Delta r/\sqrt{k}$. One may also see this by computing the distance between $E_k(r)$ and $E_k(r+\Delta r)$ along the normal through the origin, i.e. $\left|\left(\frac{r}{k},\frac{r}{k},\dotsc,\frac{r}{k}\right)-\left(\frac{r+\Delta r}{k},\frac{r+\Delta r}{k},\dotsc,\frac{r+\Delta r}{k}\right)\right|$.
The surface area $A_k(r)$ then is given by
\begin{align*}
A_k(r)&=\lim_{\Delta r\to 0} \frac{V_k(r+\Delta r) - V_k(r)}{\Delta r/\sqrt{k}}\\
&=r^{k-1}\frac{\sqrt{k}}{(k-1)!}~.\\
\end{align*}
\end{enumerate}
\end{proof}
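The volume formula in part (i) can also be checked by Monte Carlo, since $S_k(1)$ sits inside the unit cube $[0,1]^k$. This is an illustrative check, not part of the proof; the function name is ours:

```python
import random

# Monte Carlo sanity check of Lemma (i): the volume of
# S_k(1) = {theta_i >= 0, sum theta_i <= 1} is 1/k!.
# We sample uniformly from [0,1]^k (volume 1) and count the
# fraction of points landing in S_k(1).

def simplex_volume_mc(k, n_samples=200_000, seed=0):
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_samples)
        if sum(rng.random() for _ in range(k)) <= 1.0
    )
    return hits / n_samples
```

For $k=2$ and $k=3$ the estimates should be close to $1/2!=0.5$ and $1/3!\approx 0.1667$, respectively.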
\subsection{The Surface Area of $\boldsymbol{\Omega_{2,R}}$}\label{sub:omega2}
Let us calculate the $k=2$ case (this corresponds to the binomial distribution, the multinomial distribution with $k=2$). The simplex $E_2=\{\boldsymbol\theta\in \rr^2:~ \theta_1,\theta_2\ge 0,~ \theta_1+\theta_2=1\}$ is the line segment between $(0,1)$ and $(1,0)$. The region of interest is $\Omega_{2,R}=\{\boldsymbol\theta\in E_2:~ \theta_1^2+\theta_2^2\le R\}$. See Figure \ref{fig:k2} for an illustration of this region.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=4]
\draw[thick, ->] (0,0) -- (1.25,0) node[anchor=north west] {$\theta_1$};
\draw[thick, ->] (0,0) -- (0,1.25) node[anchor=south east] {$\theta_2$};
\foreach \x in {0,1}
\draw (\x cm,1pt) -- (\x cm,-1pt) node[anchor=north] {$\x$};
\foreach \y in {0,1}
\draw (1pt,\y cm) -- (-1pt,\y cm) node[anchor=east] {$\y$};
\draw[->] (0,0) -- ({.49 + .49*sqrt(2*0.8^2-1)},{.49 - .49*sqrt(2*0.8^2-1)}) node[below= 3pt, left=23pt] {$\sqrt{R}$};
\draw[very thick] (0,1) -- (1,0);
\draw[very thick, dashed] (0.8,0) arc (0:90:0.8 cm);
\draw[line width = 2pt, yellow, opacity=.5] ({.5 - .5*sqrt(2*0.8^2-1)},{.5 + .5*sqrt(2*0.8^2-1)}) -- ({.5 + .5*sqrt(2*0.8^2-1)},{.5 - .5*sqrt(2*0.8^2-1)});
\filldraw ({.5 - .5*sqrt(2*0.8^2-1)},{.5 + .5*sqrt(2*0.8^2-1)}) circle (.5pt) node[above right] {$p_1$};
\filldraw ({.5 + .5*sqrt(2*0.8^2-1)},{.5 - .5*sqrt(2*0.8^2-1)}) circle (.5pt) node[above right] {$p_2$};
\end{tikzpicture}
\caption{An illustration of $E_2$. The region $\Omega_{2,R}$ is the line segment between $p_1$ and $p_2$.}
\label{fig:k2}
\end{figure}
We find the intersection points by solving $\theta_1^2+(1-\theta_1)^2=R$ to obtain the points
\[
p_1=\left(\frac{1}{2}-\frac{1}{2}\sqrt{2R-1},\frac{1}{2}+\frac{1}{2}\sqrt{2R-1}\right)
\]
and
\[
p_2=\left(\frac{1}{2}+\frac{1}{2}\sqrt{2R-1},\frac{1}{2}-\frac{1}{2}\sqrt{2R-1}\right).
\]
The surface area of $\Omega_{2,R}$ is the length of the line segment between these two points, which is $\sqrt{4R-2}$.
We can also find $\delta_2(R)=\sqrt{R-1/2}$ and $\nu_1=\sqrt{1/2}$ using equations \eqref{eq:delta} and \eqref{eq:nu}, respectively. These distances are pictured in Figure \ref{fig:delta2}. Note that we also have that the 1-dimensional volume of $\Omega_{2,R}$ is equal to $2\delta_2(R)=2\sqrt{R-1/2}=\sqrt{4R-2}$.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=4]
\draw[thick, ->] (0,0) -- (1.25,0) node[anchor=north west] {$\theta_1$};
\draw[thick, ->] (0,0) -- (0,1.25) node[anchor=south east] {$\theta_2$};
\foreach \x in {0,1}
\draw (\x cm,1pt) -- (\x cm,-1pt) node[anchor=north] {$\x$};
\foreach \y in {0,1}
\draw (1pt,\y cm) -- (-1pt,\y cm) node[anchor=east] {$\y$};
\draw[very thick] (0,1) -- (1,0);
\draw[very thick, dashed] (0.8,0) arc (0:90:0.8 cm);
\draw[line width = 2pt, yellow, opacity=.5] ({.5 - .5*sqrt(2*0.8^2-1)},{.5 + .5*sqrt(2*0.8^2-1)}) -- ({.5 + .5*sqrt(2*0.8^2-1)},{.5 - .5*sqrt(2*0.8^2-1)});
\filldraw (.5,.5) circle (.5pt) node[below left] {$e_0$};
\draw[<->, blue] ({.55 - .5*sqrt(2*0.8^2-1)},{.53 + .5*sqrt(2*0.8^2-1)}) -- node[above right] {$\delta_2(R)$} (.55,.53);
\draw[<->, red] (.55,.53) -- node[above right] {$\nu_1$} (.99,.09);
\end{tikzpicture}
\caption{An illustration of $\Omega_{2,R}$ with the distances $\delta_2(R)$ and $\nu_1$ labeled.}
\label{fig:delta2}
\end{figure}
The length of the line $E_2$ is $\sqrt{2}$, giving that the proportion of the 1-dimensional volume of $ E_2$ made up by $\Omega_{2,R}$ is
\begin{align*}
\frac{\vol(\Omega_{2,R})}{\vol(E_2)}&=\frac{\sqrt{4R-2}}{\sqrt{2}}\\
&=\sqrt{2R-1}.
\end{align*}
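This proportion is easy to verify by simulation: sampling $\theta_1$ uniformly on $[0,1]$ induces the uniform distribution on $E_2$, so the fraction of samples landing in $\Omega_{2,R}$ should approach $\sqrt{2R-1}$. A sketch (the function name is ours):

```python
import math
import random

# Monte Carlo check of the k = 2 proportion: for theta_1 ~ U(0, 1),
# the point (theta_1, 1 - theta_1) is uniform on E_2, and the fraction
# with theta_1^2 + (1 - theta_1)^2 <= R converges to sqrt(2R - 1).

def proportion_k2_mc(R, n_samples=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        t = rng.random()
        if t * t + (1 - t) * (1 - t) <= R:
            hits += 1
    return hits / n_samples
```

For $R=0.8$, the estimate should be close to $\sqrt{0.6}\approx 0.7746$.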
\subsection{The surface area of $\boldsymbol{\Omega_{3,R}}$}\label{sub:omega3}
For $k=3$, we can also calculate this volume exactly. The simplex $E_3=\{\boldsymbol\theta\in \rr^3:~ \theta_1,\theta_2, \theta_3\ge 0,~ \theta_1+\theta_2+\theta_3=1\}$ is an equilateral triangle between the points $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$. See Figure \ref{fig:smallrad} for an illustration of the space for $R\in \left(\frac{1}{3},\frac{1}{2}\right)$ and Figure \ref{fig:largerad} for an illustration of the space for $R\in \left(\frac{1}{2},1\right)$.
\begin{figure}[h]
\centering
\includegraphics[width=.6\textwidth]{smaller-radius2}
\caption{An illustration of $E_3$ with the region $\Omega_{3,R}$ in gray for $R\in \left(\frac{1}{3},\frac{1}{2}\right)$.}
\label{fig:smallrad}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.6\textwidth]{larger-radius2}
\caption{An illustration of $E_3$ with the region $\Omega_{3,R}$ in gray for $R\in \left(\frac{1}{2},1\right)$.}
\label{fig:largerad}
\end{figure}
We calculate $\nu_1$ and $\nu_2$ using equation \eqref{eq:nu} and $\delta_3(R)$ using equation \eqref{eq:delta}. These are illustrated in Figure \ref{fig:delta3}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=4]
\draw (0,0) node[below left] {(1,0,0)} -- (1,0) node[below right] {(0,1,0)} -- (.5, {sqrt(3)/2}) node[above] {(0,0,1)} -- cycle;
\begin{scope}
\clip (0,0) -- (1,0) -- (.5, {sqrt(3)/2}) -- cycle;
\fill[yellow, opacity = .5] (.5, {sqrt(3)/6}) circle (.4cm);
\end{scope}
\draw (.5, {sqrt(3)/6}) circle (.4cm) ;
\filldraw (.5, {sqrt(3)/6}) circle (.3pt) node[right] {$e_0$} ;
\draw[<->,blue] (.49, {sqrt(3)/6-.05/sqrt(23)}) -- node[right] {$\delta_3(R)$} ({(15-sqrt(69))/30+.001*sqrt(23)},0.005 );
\draw[<->,red] (.49, {sqrt(3)/6+sqrt(3)/300}) -- node[above] {$\nu_1$} (.25, {sqrt(3)/4});
\draw[<->,red] (.5, {sqrt(3)/6+.01}) -- node[right] {$\nu_2$} (.5, {sqrt(3)/2-.02});
\end{tikzpicture}
\caption{An illustration of $\Omega_{3,R}$ with $\nu_1 < \delta_3(R) < \nu_2$.}
\label{fig:delta3}
\end{figure}
\begin{align*}
\nu_1&=\sqrt{\left(\frac{1}{3}\right)^2+2\left(\frac{1}{3}-\frac{1}{2}\right)^2}\\
&=\frac{1}{\sqrt{6}}~,\\
\nu_2&=\sqrt{2\left(\frac{1}{3}\right)^2+\left(\frac{1}{3}-1\right)^2}\\
&=\sqrt{\frac{2}{3}}~,
\end{align*}
and
\[
\delta_3(R)=\sqrt{R-\frac{1}{3}}~.
\]
If $\delta_3(R)\le \nu_1$, then $\Omega_{3,R}$ is just a disk of radius $\delta_3(R)$. Its surface area is then $\pi\left[\delta_3(R)\right]^2=\pi\left(R-\frac{1}{3}\right)$. The surface area of $E_3$ is (using equation \eqref{eq:ekvol}) $A_3=\sqrt{3}/2$. This gives that the proportion of the 2-dimensional volume of $E_3$ made up by $\Omega_{3,R}$ is
\begin{align*}
\frac{\vol(\Omega_{3,R})}{\vol(E_3)}&=\frac{\pi\left(R-\frac{1}{3}\right)}{\sqrt{3}/2}\\
&=\frac{2\sqrt{3}}{3}\pi\left(R-\frac{1}{3}\right).\\
\end{align*}
If $\nu_1 < \delta_3(R) < \nu_2$ then we can divide up the region as in Figure \ref{fig:vol3} to determine the surface area.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=4]
\draw (0,0) node[below left] {(1,0,0)} -- (1,0) node[below right] {(0,1,0)} -- (.5, {sqrt(3)/2}) node[above] {(0,0,1)} -- cycle;
\begin{scope}
\clip (0,0) -- (1,0) -- (.5, {sqrt(3)/2}) -- cycle;
\fill[yellow, opacity = .5] (.5, {sqrt(3)/6}) circle (.4cm);
\end{scope}
\draw (.5, {sqrt(3)/6}) circle (.4cm) ;
\filldraw (.5, {sqrt(3)/6}) circle (.3pt) node[below, font=\tiny] {$e_0\equiv\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)$};
\draw[dashed, thin] (.5, {sqrt(3)/6}) -- ({(15-sqrt(69))/30},0 );
\draw[dashed, thin] (.5, {sqrt(3)/6}) -- ({(15+sqrt(69))/30},0 );
\draw[dashed, thin] (.5, {sqrt(3)/6}) -- ({(15-sqrt(69))/60},{sqrt(3)*(15-sqrt(69))/60} );
\draw[dashed, thin] (.5, {sqrt(3)/6}) -- ({(45+sqrt(69))/60},{sqrt(3)*(15-sqrt(69))/60} );
\draw[dashed, thin] (.5, {sqrt(3)/6}) -- ({(15+sqrt(69))/60},{sqrt(3)*(15+sqrt(69))/60} );
\draw[dashed, thin] (.5, {sqrt(3)/6}) -- ({(45-sqrt(69))/60},{sqrt(3)*(15+sqrt(69))/60} );
\filldraw[dashed,draw=black, fill=blue, opacity=.5]
({(52.5-sqrt(69))/30},.2 )
-- ({(52.5+sqrt(69))/30},.2 )
-- (1.75, {sqrt(3)/6 + .2})
-- cycle;
\filldraw (1.75, {sqrt(3)/6 + .2}) circle (.2pt) node[above, font=\tiny] {$e_0\equiv\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)$};
\filldraw (1.75,.2) circle (.2pt) node[above, font=\tiny] {$\left(\frac{1}{2},\frac{1}{2},0\right)$};
\draw[<->] ({(52.5-sqrt(69))/30},.15 ) -- ({(52.5+sqrt(69))/30},.15 );
\draw (1.75, .15) node[below, font=\tiny] {$\sqrt{4R-2}$};
\draw[->,blue] (.5,.05) .. controls +(up:.1cm) and +(left:.1cm) .. ({(52.5-sqrt(69))/30},.3 );
\filldraw[dashed, draw=black, fill=red, opacity=.5]
({(-15-sqrt(69))/30},.2 )
-- (-.5, {sqrt(3)/6+.2})
-- ({(-45-sqrt(69))/60},{sqrt(3)*(15-sqrt(69))/60+.2} )
arc[start angle = {180+acos(5/8+sqrt(69)/24}, end angle = {180 + acos(sqrt(69)/12)}, radius = .4]
-- cycle;
\filldraw (-.5, {sqrt(3)/6+.2}) circle (.2pt) node[above right, font=\tiny] {$e_0\equiv\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)$};
\draw[<->] ({(-15-sqrt(69))/30+.02},.18 ) -- node[right=.5pt,font=\tiny] {$\sqrt{R-\frac{1}{3}}$} (-.48, {sqrt(3)/6+.18});
\draw[->,red] (.22,.1) .. controls +(up:.1cm) and +(right:.1cm) .. ({-.2-sqrt(69)/30},.2);
\draw (-.5, {sqrt(3)/6+.2}) node[below=6pt, left=4pt, font=\tiny] {$\phi$};
\end{tikzpicture}
\caption{A diagram of how to divide $\Omega_{3,R}$ to determine its surface area.}
\label{fig:vol3}
\end{figure}
\sloppy The triangles each have area $\frac{1}{2}\frac{1}{\sqrt{6}}\sqrt{4R-2}=\frac{1}{2}\sqrt{\frac{2R-1}{3}}$. The circular sectors have radius $\delta_3(R)=\sqrt{R-1/3}$. The angle $\phi$ is the angle between the vectors $\left(\frac{1}{6}+\frac{1}{2}\sqrt{2R-1}, -\frac{1}{3} , \frac{1}{6}-\frac{1}{2}\sqrt{2R-1}\right)$ and $\left(\frac{1}{6}+\frac{1}{2}\sqrt{2R-1}, \frac{1}{6}-\frac{1}{2}\sqrt{2R-1},-\frac{1}{3} \right)$. We have
\[
\cos\phi=\frac{\frac{1}{4}(2R-1)+\frac{1}{2}\sqrt{2R-1}-\frac{1}{12}}{R-\frac{1}{3}}.
\]
The surface area of $\Omega_{3,R}$ is then
\[
\vol(\Omega_{3,R})=\frac{3}{2}\sqrt{\frac{2R-1}{3}}+\frac{3}{2}\left(R-\frac{1}{3}\right)\arccos\left(\frac{\frac{1}{4}(2R-1)+\frac{1}{2}\sqrt{2R-1}-\frac{1}{12}}{R-\frac{1}{3}}\right).
\]
The proportion of the 2-dimensional volume of $E_3$ made up by $\Omega_{3,R}$ is then
\[
\frac{\vol(\Omega_{3,R})}{\vol(E_3)}=\sqrt{2R-1}+\sqrt{3}\left(R-\frac{1}{3}\right)\arccos\left(\frac{\frac{1}{4}(2R-1)+\frac{1}{2}\sqrt{2R-1}-\frac{1}{12}}{R-\frac{1}{3}}\right).
\]
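The closed form can be checked against a Monte Carlo estimate. Uniform points on $E_3$ are obtained as the spacings of two independent $U(0,1)$ draws, which is the $\dir(1,1,1)$ distribution; the sketch below (function names ours) assumes $\nu_1<\delta_3(R)<\nu_2$, i.e. $1/2<R<1$:

```python
import math
import random

# Monte Carlo check of the closed-form proportion of E_3 covered by
# Omega_{3,R} when nu_1 < delta_3(R) < nu_2 (i.e. 1/2 < R < 1).

def proportion_k3_exact(R):
    # The closed-form proportion derived in this subsection.
    s = math.sqrt(2 * R - 1)
    phi = math.acos(((2 * R - 1) / 4 + s / 2 - 1 / 12) / (R - 1 / 3))
    return s + math.sqrt(3) * (R - 1 / 3) * phi

def proportion_k3_mc(R, n_samples=200_000, seed=2):
    # Spacings of two sorted U(0,1) draws are uniform on E_3.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        u, v = sorted((rng.random(), rng.random()))
        theta = (u, v - u, 1 - v)
        if sum(t * t for t in theta) <= R:
            hits += 1
    return hits / n_samples
```

For $R=0.6$ the closed form gives approximately $0.806$, and the Monte Carlo estimate should agree to within sampling error.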
We can bound the surface area of $\Omega_{3,R}$ from below by cutting out triangles in the corners. This is done by drawing a straight line between the intersections of the sphere and the edges with the same value in one of the coordinates. There are six places where the sphere intersects the edges of $E_3$. If we let $\theta'=\frac{1}{2}+\frac{1}{2}\sqrt{2R-1}$, these solutions are $\boldsymbol a=(\theta',1-\theta',0)$, $\boldsymbol b=(\theta',0,1-\theta')$, $\boldsymbol c=(1-\theta',0,\theta')$, $\boldsymbol d=(0,1-\theta',\theta')$, $\boldsymbol e=(0,\theta',1-\theta')$, and $\boldsymbol f=(1-\theta',\theta',0)$. Draw lines between $\boldsymbol a$ and $\boldsymbol b$, $\boldsymbol c$ and $\boldsymbol d$, and $\boldsymbol e$ and $\boldsymbol f$ (see Figure \ref{fig:approx}).
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=4]
\begin{scope}
\clip (0,0) -- (1,0) -- (.5, {sqrt(3)/2}) -- cycle;
\fill[yellow] (.5, {sqrt(3)/6}) circle (.4cm);
\end{scope}
\begin{scope}
\clip ({(15-sqrt(69))/30},0 ) -- ({(15-sqrt(69))/60},{sqrt(3)*(15-sqrt(69))/60} ) -- ({(15+sqrt(69))/60},{sqrt(3)*(15+sqrt(69))/60} )
-- ({(45-sqrt(69))/60},{sqrt(3)*(15+sqrt(69))/60} ) -- ({(45+sqrt(69))/60},{sqrt(3)*(15-sqrt(69))/60} )
-- ({(15+sqrt(69))/30},0 ) -- cycle;
\draw[pattern = crosshatch dots, pattern color = gray] (0,0) -- (1,0) -- (.5, {sqrt(3)/2}) -- cycle;
\end{scope}
\draw ({(15-sqrt(69))/30},0 ) node[below left] {$a$} -- ({(15-sqrt(69))/60},{sqrt(3)*(15-sqrt(69))/60} ) node[left] {$b$};
\draw ({(15+sqrt(69))/60},{sqrt(3)*(15+sqrt(69))/60}) node[above left] {$c$} -- ({(45-sqrt(69))/60},{sqrt(3)*(15+sqrt(69))/60}) node[above right] {$d$};
\draw ({(45+sqrt(69))/60},{sqrt(3)*(15-sqrt(69))/60}) node[right] {$e$} -- ({(15+sqrt(69))/30},0 ) node[below right] {$f$};
\draw (.5, {sqrt(3)/6}) circle (.4cm) ;
\draw (0,0) node[below left] {(1,0,0)} -- (1,0) node[below right] {(0,1,0)} -- (.5, {sqrt(3)/2}) node[above] {(0,0,1)} -- cycle;
\end{tikzpicture}
\caption{How to bound the surface area of $\Omega_{3,R}$ from below using similar triangles. $\Omega_{3,R}$ is shaded as before and the lower bound area is crosshatched.}
\label{fig:approx}
\end{figure}
Due to the convexity of the sphere, the lines drawn lie entirely inside the sphere. Thus the set $\Omega_{3,R}'$, which is $E_3$ without the three triangles, is entirely contained in $\Omega_{3,R}$. This gives
\begin{equation}
\vol(\Omega_{3,R}) \ge \vol(\Omega_{3,R}').
\label{eq:approx3}
\end{equation}
The triangles in the corners that are removed are equilateral triangles, with side length $\sqrt{R-\sqrt{2R-1}}$. They are thus similar to $E_3$, which is equilateral with side length $\sqrt{2}$. The ratio of the areas of the small triangles to $E_3$ is the ratio of the squared side lengths, which is $\frac{R-\sqrt{2R-1}}{2}$. This gives, finally,
\begin{equation}
\frac{\vol(\Omega_{3,R})}{\vol(E_3)}\ge 1-\frac{3R-3\sqrt{2R-1}}{2}.
\label{eq:volbound3}
\end{equation}
\subsection{The surface area of $\Omega_{k,R}$ for $k\ge 4$}\label{sub:omegaGE4}
Calculating this surface area explicitly appears to be an open problem. We have not found a formula, but we can approximate the area with a fairly sharp lower bound for certain choices of $R$. We generalize the ``cutting off corners'' method of the $k=3$ case, which is valid for $R$ such that $\nu_{k-2}\le\delta_k(R)<\nu_{k-1}$.
\begin{lemma}[The Surface Area of $\Omega_{k,R}$ for $R\in [1/2,1)$]
Let $E_k$ be the standard $k$-simplex (defined in equation \eqref{def:ek}) and $\Omega_{k,R}$ be the region $\{ \boldsymbol\theta \in E_k: |\boldsymbol\theta|^2\le R\}$. Assume that $R\in [1/2,1)$. Then
\begin{equation}
\frac{\vol(\Omega_{k,R})}{\vol(E_k)}\ge 1-k\left(\frac{R-\sqrt{2R-1}}{2}\right)^\frac{k-1}{2}.
\label{eq:volbound}
\end{equation}
Additionally, the proportion of $E_k$ made up by $\Omega_{k,R}$ approaches 1 as $k\to\infty$.
\label{lem:sa}
\end{lemma}
\begin{proof}
Since $1/2 \le R < 1$, we have
\begin{align*}
\frac{1}{2} \le R < 1 &\Leftrightarrow \frac{1}{2} - \frac{1}{k} \le R-\frac{1}{k} < 1 - \frac{1}{k}\\
&\Leftrightarrow \frac{k-2}{2k} \le R - \frac{1}{k} < \frac{k-1}{k}\\
&\Leftrightarrow \nu_{k-2}^2 \le \left[\delta_k(R)\right]^2 < \nu_{k-1}^2 \quad \text{(see equations \eqref{eq:delta} and \eqref{eq:nu})}\\
&\Leftrightarrow \nu_{k-2} \le \delta_k(R) < \nu_{k-1}.
\end{align*}
In this case, where $\nu_{k-2}\le\delta_k(R)<\nu_{k-1}$, there are intersections along the 1-dimensional edges of $E_k$ ($\boldsymbol \theta$ such that only two of its components are nonzero) with the sphere $\{\boldsymbol \theta\in \rr^k: |\boldsymbol \theta|^2=R\}$.
Due to the convexity of the sphere, hyperplanes between these points will be contained inside the sphere. As in the $k=3$ case, we can form $k$ regular $(k-1)$-dimensional simplices. The $j$-th simplex has as one of its vertices the vertex of $E_k$ with $\theta_j=1$ and $\theta_i=0$ for $i\ne j$. Its remaining $k-1$ vertices are of the form $\theta_j=\theta'$, $\theta_i=1-\theta'$ for a single index $i\ne j$, and all other coordinates zero (as in the $k=3$ case, we define $\theta'=\frac{1}{2}+\frac{1}{2}\sqrt{2R-1}$). These simplices have edge length $\sqrt{R-\sqrt{2R-1}}$ and are similar to $E_k$, which has edge length $\sqrt{2}$. The ratio of the areas of the small simplices to $E_k$ is $\left(\frac{R-\sqrt{2R-1}}{2}\right)^\frac{k-1}{2}$. This gives the lower bound \eqref{eq:volbound}
\[
\frac{\vol(\Omega_{k,R})}{\vol(E_k)}\ge 1-k\left(\frac{R-\sqrt{2R-1}}{2}\right)^\frac{k-1}{2}.
\]
We see that $\frac{\vol(\Omega_{k,R})}{\vol(E_k)}$ approaches 1 as $k\to\infty$ as long as $\frac{R-\sqrt{2R-1}}{2}<1$. This is, in particular, true when $R\in [1/2,1)$.
\end{proof}
\subsection{Applying the volume calculations to Bayes estimators}\label{sub:estthm}
When using the prior $\dir\left(C_{k,n},\dotsc,C_{k,n}\right)$, we obtained the region \eqref{eq:riskset} where the MLE has lower risk than the Bayes estimator
\[
\left\{ \boldsymbol\theta = (\theta_1,\theta_2,\dotsc,\theta_k): \theta_i \ge 0~\forall i, \sum_{1 \le i \le k} \theta_i=1, { \sum_{1\le i \le k} \theta_i^2 \ge \frac{2n+(n+k)C_{k,n}}{2n+(k+kn)C_{k,n}}}\right\}.
\]
The region where the \emph{Bayes estimator has lower risk than the MLE} (the complement of region \eqref{eq:riskset}) is $\Omega_{k,R}$ with $R$ defined by
\begin{equation}
R=\frac{2n+(n+k)C_{k,n}}{2n+(k+kn)C_{k,n}}.
\label{eq:Bayesrad}
\end{equation}
We can then determine which choice of $C_{k,n}$ will yield the exponential convergence in Lemma \ref{lem:sa}. This gives the following theorem.
\begin{theorem}\label{thm:volbound}
Consider estimating the $k$-dimensional ($k\ge 3$) parameter $\boldsymbol \theta$ in the multinomial distribution based on a simple random sample of size $n$. Under the prior $\dir\left(C_{k,n},\dotsc,C_{k,n}\right)$, the proportion of the parameter space where the Bayes estimator has lower risk than the MLE is greater than
\[
1-k\left(\frac{1}{4}\right)^{\frac{k-1}{2}} \quad\text{(for all } n\ge k),
\]
for $C_{k,n}$ satisfying
\begin{equation}
C_{k,n}<\frac{2n}{n(k-2)-k}.
\label{eq:ckn}
\end{equation}
\end{theorem}
\begin{proof}
As noted above, the region where the Bayes estimator has lower risk than the MLE is $\Omega_{k,R}$ with $R$ defined by equation \eqref{eq:Bayesrad}
\[
R=\frac{2n+(n+k)C_{k,n}}{2n+(k+kn)C_{k,n}}.
\]
We have
\begin{align*}
R > 1/2 &\Leftrightarrow \frac{2n+(n+k)C_{k,n}}{2n+(k+kn)C_{k,n}} > 1/2\\
&\Leftrightarrow C_{k,n}(k+2n-kn) > -2n \\
&\Leftrightarrow C_{k,n} < \frac{2n}{n(k-2)-k}.
\end{align*}
We can thus apply Lemma \ref{lem:sa}. Since $R>1/2$, we have
\begin{align*}
R > \frac{1}{2} &\Rightarrow R-\sqrt{2R-1} < \frac{1}{2}-\sqrt{2\cdot\tfrac{1}{2}-1}=\frac{1}{2} \quad\text{(since $R\mapsto R-\sqrt{2R-1}$ is decreasing on $[1/2,1)$)}\\
&\Rightarrow k\left(\frac{R-\sqrt{2R-1}}{2}\right)^\frac{k-1}{2} < k\left(\frac{1}{4}\right)^{\frac{k-1}{2}}\\
&\Rightarrow 1- k\left(\frac{R-\sqrt{2R-1}}{2}\right)^\frac{k-1}{2} > 1 - k\left(\frac{1}{4}\right)^{\frac{k-1}{2}}.
\end{align*}
Since $1 - k\left(\frac{1}{4}\right)^{\frac{k-1}{2}}\to 1$ as $k\to\infty$, we have proved that $\frac{\vol(\Omega_{k,R})}{\vol(E_k)}$ approaches 1 as $n,k\to\infty$.
\end{proof}
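The quantities in the theorem are simple to compute; the following sketch (function names ours) collects the squared radius \eqref{eq:Bayesrad}, the cutoff \eqref{eq:ckn}, and the volume lower bound:

```python
# Helper functions for the theorem: the squared radius R of the
# favorable region (eq:Bayesrad), the cutoff on C_{k,n} in (eq:ckn),
# and the volume lower bound (eq:volbound).

def squared_radius(n, k, C):
    return (2 * n + (n + k) * C) / (2 * n + (k + k * n) * C)

def c_cutoff(n, k):
    # Requires n(k-2) > k, e.g. k >= 3 and n >= k.
    return 2 * n / (n * (k - 2) - k)

def volume_lower_bound(k, R):
    # Assumes R >= 1/2 so that 2R - 1 >= 0.
    return 1 - k * ((R - (2 * R - 1) ** 0.5) / 2) ** ((k - 1) / 2)
```

For example, with $n=20$ and $k=10$ the cutoff is $40/150\approx 0.267$: choosing $C_{k,n}$ below it gives $R>1/2$, and the resulting lower bound on the favorable proportion exceeds $0.99$.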
The bound can be tightened for a given $C_{k,n}$ with $R=\frac{2n+(n+k)C_{k,n}}{2n+(k+kn)C_{k,n}}>1/2$: the proportion of the parameter space where this Bayes estimator has lower risk than the MLE is then greater than
\begin{equation}
1-k\left(\frac{R-\sqrt{2R-1}}{2}\right)^\frac{k-1}{2}.
\label{eq:volbound2}
\end{equation}
One may ask whether there is a best choice of $C_{k,n}$. Note that the squared radius $R$ of the region where the Bayes estimator has lower risk ($\Omega_{k,R}$), defined in equation \eqref{eq:Bayesrad}, is a decreasing function of $C_{k,n}$, and $R$ approaches 1 as $C_{k,n}\searrow 0$. That is, the proportion of the parameter space where the Bayes estimator has lower risk approaches 1 (the whole space) as $C_{k,n}$ approaches 0. Note that we cannot take $C_{k,n}=0$, as the Dirichlet prior requires $\alpha_i>0$ for all $i$; indeed, if we could take $C_{k,n}$ to be identically zero, the Bayes estimator would coincide with the MLE.
One could, however, use the formula for $R$ and the lower bound in \eqref{eq:volbound2} to select $C_{k,n}$ small enough to satisfy a desired level of coverage of the parameter space for a choice of $k$ (and any larger $k$).
\begin{example} \label{ex:prior1-k}
One such choice of prior is $\dir\left(\frac{1}{k},\dotsc,\frac{1}{k}\right)$. This can be thought of as relating to using a base measure that is a probability measure, since $\sum_{1\le i \le k} \alpha_i=1$. The region where the Bayes estimator has lower risk than the MLE is $\Omega_{k,R}$ with $R$ defined by
\[
R=\frac{2n+1+\frac{n}{k}}{3n+1} > 2/3.
\]
Thus the proportion of the parameter space where the Bayes estimator has lower risk is greater than $1 - k\left(\frac{\frac{2}{3}-\sqrt{\frac{4}{3}-1}}{2}\right)^\frac{k-1}{2}$. Table \ref{tab:1kprior} contains estimates computed from \eqref{eq:Bayesrad} and one minus the bound \eqref{eq:volbound2}, giving an upper bound on the proportion of the parameter space where the MLE has lower risk for various values of $k$ and $n$. It also contains simulated proportions obtained with methods similar to those in Section \ref{sec:sim}. The simulation used 10,000,000 samples, and thus the very small proportions for $k=20$ could not be detected.
\begin{table}[h]
\footnotesize
\centering
\begin{tabular}{llll}
\hline
$\boldsymbol k$ & $\boldsymbol n$ & {\bf Prop (Upper Bound)} & {\bf Prop (Simulated)}\\
\hline
$k=5$ & $n=10$ & $\ee{2.68}{-3}$ & $\ee{2.12}{-3}$\\
$k=5$ & $n=25$ & $\ee{2.95}{-3}$ & $\ee{2.32}{-3}$\\
$k=10$ & $n=20$ & $\ee{1.97}{-6}$ & $\ee{7.00}{-7}$\\
$k=10$ & $n=100$ & $\ee{2.30}{-6}$ & $\ee{8.00}{-7}$\\
$k=20$ & $n=40$ & $\ee{6.53}{-13}$ & 0\\
$k=20$ & $n=400$ & $\ee{7.88}{-13}$ & 0\\
\hline
\end{tabular}
\caption{Estimates of the proportion of the parameter space where the MLE has lower risk for various values of $k$ and $n=2k,k^2$. Note that it is nearly 0 for even the moderate $k=10$.}
\label{tab:1kprior}
\end{table}
\end{example}
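The ``Prop (Upper Bound)'' column of Table \ref{tab:1kprior} can be reproduced directly from \eqref{eq:Bayesrad} and \eqref{eq:volbound2} with $C_{k,n}=1/k$ (the function name is ours):

```python
# Upper bound on the proportion of the parameter space where the MLE
# has lower risk, under the Dir(1/k, ..., 1/k) prior: with C = 1/k,
# R = (2n + (n + k)/k) / (2n + (k + kn)/k), and the MLE-favorable
# proportion is at most k * ((R - sqrt(2R - 1)) / 2)^((k - 1) / 2).

def mle_proportion_upper_bound(n, k):
    C = 1 / k
    R = (2 * n + (n + k) * C) / (2 * n + (k + k * n) * C)
    return k * ((R - (2 * R - 1) ** 0.5) / 2) ** ((k - 1) / 2)
```

For $k=5$, $n=10$ this gives approximately $2.68\times 10^{-3}$, and for $k=10$, $n=20$ approximately $1.97\times 10^{-6}$, matching the first and third rows of the table.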
\subsection{Simulation results for other priors}\label{sec:sim}
The requirement that $C_{k,n}<\frac{2n}{n(k-2)-k}$ precludes some priors that may be of interest. These correspond to cases with a region of interest $\Omega_{k,R}$ such that $\delta_k(R)<\nu_{k-2}$. We have not found a suitable volume lower bound for such cases. However, we have found simulation examples of a slower convergence.
\subsubsection{Uniform prior} \label{sec:uniformsim}
A common choice of prior is the uniform prior $\dir(1,\dotsc,1)$. In our notation this corresponds to $C_{k,n}=1$, which clearly does not satisfy condition \eqref{eq:ckn}, $C_{k,n}<\frac{2n}{n(k-2)-k}$, for $k>4$.
Here the MLE has lower risk than the Bayes estimator only on the set
\begin{equation}
\left\{ \boldsymbol\theta = (\theta_1,\theta_2,\dotsc,\theta_k): \theta_i \ge 0~\forall i, \sum_{1 \le i \le k} \theta_i=1, { \sum_{1\le i \le k} \theta_i^2 \ge \frac{3n+k}{nk+2n+k}}\right\}.
\label{eq:risksetU}
\end{equation}
We used a simulation study to better understand the regions where the MLE has lower risk than the Bayes estimator under this prior. Rearranging the inequality defining the region \eqref{eq:risksetU}, define the function $g$ by
\begin{equation}
g(\boldsymbol\theta)=-3n-k+(nk+2n+k)|\boldsymbol\theta|_2^2.
\label{eq:comp}
\end{equation}
The MLE has lower risk than the Bayes estimator for $\boldsymbol\theta\in E_k$ where $g(\boldsymbol\theta)\ge 0$.
To estimate the percentage of the volume of $E_k$ where the MLE has lower risk than the Bayes estimator, we fixed $k$ and took a uniform sample of size 500,000 from $E_k$ using the R package \verb|hitandrun| \citep{hitandrun}. We then calculated $g(\boldsymbol\theta)$ for $n=k,2k,3k,4k,k^2,2k^2,3k^2,4k^2,k^3,2k^3,3k^3,4k^3,k^4,2k^4,3k^4,4k^4$ and found the percentage of the samples where $g$ is positive for each $n$. This gives a numerical estimate of the percentage of the volume of $E_k$ where the MLE has lower risk than the Bayes estimator. The results are summarized in Figure \ref{fig:sampling}. Note that the MLE eventually has lower risk in essentially none of the parameter space, but the convergence is much slower, requiring $k$ on the order of 200.
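This simulation can be sketched in Python as follows. The paper's simulations use the R package \verb|hitandrun|; here uniform draws from $E_k$ are instead obtained by normalizing i.i.d.\ exponentials, which yields the $\dir(1,\dotsc,1)$ (uniform) distribution on the simplex. Function names are ours:

```python
import random

# Sketch of the uniform-prior simulation: estimate the fraction of E_k
# on which the MLE has lower risk, i.e. where g(theta) >= 0.

def g(theta, n, k):
    # Positive exactly when the MLE has lower risk under the uniform prior.
    s = sum(t * t for t in theta)
    return -3 * n - k + (n * k + 2 * n + k) * s

def mle_better_proportion(n, k, n_samples=100_000, seed=3):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # Normalized i.i.d. exponentials are uniform on the simplex E_k.
        e = [rng.expovariate(1.0) for _ in range(k)]
        tot = sum(e)
        theta = [x / tot for x in e]
        if g(theta, n, k) >= 0:
            hits += 1
    return hits / n_samples
```

At a vertex of $E_k$ we have $g=n(k-1)>0$ (the MLE is better near the corners), while at the center $\boldsymbol{e_0}$ the function $g$ is negative for the values of $n$ and $k$ considered here.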
\begin{figure}[p]
\centering
\includegraphics[width=.9\textwidth]{comp1}
\caption{We see that, although Theorem \ref{thm:volbound} cannot be applied to the uniform prior, the MLE still has lower risk in a proportion of the parameter space that decreases to 0 as $k$ increases. Note that the $n$-axis is plotted in log-scale and in terms of $k$ to make the samples comparable.}
\label{fig:sampling}
\end{figure}
\subsubsection{$\dir\left(\frac{C}{k},\dotsc,\frac{C}{k}\right)$ prior}
Note that the Dirichlet process prior on the space of all probability measures on the Borel sigma-field of a Polish space $S$ reduces to the Dirichlet, or multivariate Beta, distribution when $S$ is a finite set (see Section~\ref{sec:tv}). By studying the limiting behavior in $n$ and $k$ of Bayes estimators in the multinomial distribution, we can hopefully gain insight into the difference between the risks of density estimation via nonparametric Bayes and using the MLE for a parametric model. However, since Ferguson's construction of the Dirichlet process prior relies on a finite base measure $\alpha(S)$, we may want to consider the sum of the prior parameters $\sum_{1\le i\le k} \alpha_i=C$, a constant, rather than $\sum_{1\le i\le k} \alpha_i=k$, which is the case for the uniform prior.
If we choose $C_{k,n}=C/k$, $R(\bb{\hat\theta},\boldsymbol\theta)\le R(\boldsymbol d_B,\boldsymbol\theta)$ (and thus the MLE has lower risk than the Bayes estimator) only on the set
\begin{equation}
\left\{ \boldsymbol\theta = (\theta_1,\theta_2,\dotsc,\theta_k): \theta_i \ge 0~\forall i, \sum_{1 \le i \le k} \theta_i=1, { \sum_{1\le i \le k} \theta_i^2 \ge \frac{2n+C+Cn/k}{2n+C+nC}}\right\}.
\label{eq:risksetC}
\end{equation}
Note that if $C=1$, we obtain the example in subsection \ref{sub:estthm}. If $C>2$, then $C_{k,n}=C/k$ does not satisfy \eqref{eq:ckn} for moderately sized $k$ and $n$.
We again used simulation to study the regions in question. Rearranging the inequality defining the region \eqref{eq:risksetC}, define the function $\tilde g$ by
\begin{equation}
\tilde g(\boldsymbol \theta)=-Cn/k-2n-C+(2n+C+nC)|\boldsymbol\theta|_2^2.
\label{eq:comp2}
\end{equation}
The MLE has lower risk than the Bayes estimator for $\boldsymbol\theta\in E_k$ where $\tilde g(\boldsymbol\theta)\ge 0$.
In the simulations, the limiting behavior is similar in shape to the uniform prior case, but appears to converge faster. For fixed $k$ near $C$, as $n$ increases, the percentage of the volume of the parameter space where the MLE has lower risk increases to some limiting value. As $k$ increases, this limiting value decreases to zero. For $k\gg C$, however, the Bayes estimator had lower risk in 100\% of the samples, indicating that the volume of the region where the MLE has lower risk is very small.
For example, with $C=30$ and $k=10, 20, 30$, the results can be found in Figure \ref{fig:sampling2}. A relatively large $C$ was chosen so that there would be several $k$ smaller than $C$ to graph.
For $k=40$, the maximum percentage was 0.12\% and for $k=50$, 0.0076\%. For $k=100, 200, 500$ (the three largest values used), the MLE had lower risk in 0\% of the samples.
\begin{figure}[p]
\centering
\includegraphics[width=1.093\textwidth]{compC30-k_print}
\caption{Sampling scheme using $\tilde g$, with $C=30$. For $k=10,20,30$, the maximum percentages were 69.32\%, 17.73\%, and 1.77\%, respectively. Again we see that, although Theorem \ref{thm:volbound} does not apply, the MLE has smaller risk for almost none of the space for $k$ large enough.}
\label{fig:sampling2}
\end{figure}
Again, we see that for large enough $k$, the Bayes estimator has lower risk than the MLE for almost all of the parameter space. We would need to find a suitable volume bound to properly describe this phenomenon.
\subsubsection{$\dir(C,C,\dotsc,C)$ prior for $C>1$}
Since we have already considered the uniform prior, which is $\dir(1,1,\dotsc,1)$, it is natural to consider priors $\dir(C,C,\dotsc,C)$ with $C>1$. That is, $C_{k,n}$ is a constant rather than depending on $k$ or $n$. Priors of this type are unimodal, focusing most of their mass on the center of the parameter space $(1/k,1/k,\dotsc,1/k)$, with smaller component variances as $C$ increases.
If we choose $C_{k,n}=C$, $R(\bb{\hat\theta},\bb\theta) \le R(\bb{d_B},\bb\theta)$ (and thus the MLE has lower risk than the Bayes estimator) only on the set
\begin{equation}
\left\{ \boldsymbol\theta = (\theta_1,\theta_2,\dotsc,\theta_k): \theta_i \ge 0~\forall i, \sum_{1 \le i \le k} \theta_i=1, { \sum_{1\le i \le k} \theta_i^2 \ge \frac{2n+C(k+n)}{2n+C(k+kn)}}\right\}.
\label{eq:risksetC2}
\end{equation}
For simulation purposes, define the function $h$, which is positive when the MLE has lower risk than the Bayes estimator, by rearranging the inequality in \eqref{eq:risksetC2}:
\begin{equation}
h(\bb\theta)=-2n-C(k+n) + (2n+C(k+kn))|\bb\theta|_2^2.
\end{equation}
In simulations, a change of behavior is observed at $C=2$. For $C\in(1,2)$, the limiting behavior is similar to the uniform prior case, although with slower convergence. However, for $C\ge 2$, the opposite limiting behavior is observed. As $k$ increases, the proportion of the parameter space where the MLE has lower risk \emph{increases} to 1. See Figure \ref{fig:samplingC} for illustrative examples with $C=1.9$, $C=2$, and $C=3$.
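A quick consistency check on this change of behavior (this calculation is ours, not part of the simulations): letting $n\to\infty$ in the cutoff of \eqref{eq:risksetC2} gives the limiting threshold $(2+C)/(2+Ck)$, which can be compared with the mean of $\sum_i\theta_i^2$ under the uniform distribution on the simplex, namely $2/(k+1)$; the two cross exactly at $C=2$:

```python
def mle_cutoff_large_n(C, k):
    """Large-n limit of the cutoff in the region (risksetC2): as n grows, the
    MLE has lower risk where sum_i theta_i^2 >= (2 + C) / (2 + C * k)."""
    return (2 + C) / (2 + C * k)

def mean_sum_sq_uniform(k):
    """E[sum_i theta_i^2] when theta is uniform on the (k-1)-simplex."""
    return 2 / (k + 1)

# For C < 2 the cutoff lies above the typical value of sum_i theta_i^2, so the
# MLE wins on little of the simplex; for C > 2 it lies below, so the MLE wins
# on most of it -- consistent with the change of behavior observed at C = 2.
flips = {C: mle_cutoff_large_n(C, 50) < mean_sum_sq_uniform(50) for C in (1.9, 3.0)}
```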
\begin{figure}[p]
\centering
\begin{tabular}{c}
\includegraphics[height=.3\textheight]{compC1-9} \\
\includegraphics[height=.3\textheight]{compC2} \\
\includegraphics[height=.3\textheight]{compC3} \\
\end{tabular}
\caption{Simulation results for the prior $\dir(C,C,\dotsc,C)$ with $C=1.9, 2, 3$. Note the change in limiting behavior for $C\ge 2$, where the final percent is \emph{increasing} as $k$ increases, rather than decreasing as in the earlier examples.}
\label{fig:samplingC}
\end{figure}
\section{Average risk across the parameter space}\label{sec:avg}
In Section \ref{sec:volume}, we considered \emph{whether} the Bayes estimator had smaller risk than the MLE, and where this occurred within the parameter space. We did not consider the \emph{magnitude} of the decrease in risk. In this section, we look at the average risk (with respect to the volume measure) of the various estimators to understand the magnitude of decrease.
Recall that the risk of the MLE, by \eqref{eq:mlerisk}, is
\[
R(\boldsymbol{\hat\theta},\boldsymbol\theta)=\frac{1-\sum_{1\le i\le k} \theta_i^2}{n}
\]
and the risk of the Bayes estimator with Dirichlet prior with $\alpha_i=C_{k,n},\;\forall i,$ is, by \eqref{eq:Bayesrisk},
\[
R(\boldsymbol d_B, \boldsymbol\theta)=\frac{n-kC_{k,n}^2}{\left(n+kC_{k,n}\right)^2}+\frac{\left(k^2C_{k,n}^2-n\right) \sum_{1\le i \le k} \theta_i^2}{\left(n+kC_{k,n}\right)^2}.
\]
Note that in each case, for fixed $k, \boldsymbol\theta$ and prior (choice of $C_{k,n}$), the risk is decreasing to 0 as $n\to\infty$. Thus any decrease in risk becomes negligible for large enough $n$.
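These two risk formulas are straightforward to evaluate numerically; the following sketch (our notation, with $c=C_{k,n}$) may be useful for experimentation:

```python
import numpy as np

def risk_mle(theta, n):
    """Risk of the MLE under squared error loss, eq. (mlerisk)."""
    theta = np.asarray(theta, dtype=float)
    return (1.0 - np.sum(theta**2)) / n

def risk_bayes(theta, n, c):
    """Risk of the Bayes estimator with Dir(c, ..., c) prior (c = C_{k,n}),
    eq. (Bayesrisk)."""
    theta = np.asarray(theta, dtype=float)
    k = theta.size
    s = np.sum(theta**2)
    return (n - k * c**2) / (n + k * c)**2 + (k**2 * c**2 - n) * s / (n + k * c)**2
```

One can check, for example, that at the center $\boldsymbol\theta=(1/k,\dotsc,1/k)$ with $c=1/k$, the Bayes risk is $n(1-1/k)/(n+1)^2$, strictly below the MLE risk $(1-1/k)/n$.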
Integrating the above over $E_k$, we obtain
\[
\int_{E_k} R(\bb{\hat\theta},\bb\theta)d\bb\theta = \frac{1}{n}A_k - \frac{1}{n}\int_{E_k}|\bb\theta|^2d\bb\theta
\]
and
\[
\int_{E_k} R(\boldsymbol d_B, \boldsymbol\theta)d\bb\theta=\frac{n-kC_{k,n}^2}{\left(n+kC_{k,n}\right)^2}A_k+\frac{\left(k^2C_{k,n}^2-n\right)}{\left(n+kC_{k,n}\right)^2}\int_{E_k}|\bb\theta|^2d\bb\theta.
\]
By \eqref{eq:ekvol}, $A_k=\frac{\sqrt{k}}{(k-1)!}$. It can be shown that $\int_{E_k}|\bb\theta|^2d\bb\theta=\frac{2k\sqrt{k}}{(k+1)!}$. Thus we obtain that the average risks $\bar R_{\bb{\hat\theta}}$ and $\bar R_{\bb{d_B}}$ are, respectively,
\begin{align}
\bar R_{\bb{\hat\theta}} &= \frac{\int_{E_k} R(\bb{\hat\theta},\bb\theta)d\bb\theta }{A_k} \nonumber\\
&= \left(\frac{\sqrt{k}}{n(k-1)!} - \frac{2k\sqrt{k}}{n(k+1)!}\right)\div \frac{\sqrt{k}}{(k-1)!} \nonumber \\
&= \frac{1}{n} - \frac{2k}{nk(k+1)} \nonumber\\
&= \frac{k-1}{n(k+1)},
\end{align}
and
\begin{align}
\bar R_{\bb{d_B}} &= \frac{\int_{E_k} R(\boldsymbol d_B, \boldsymbol\theta)d\bb\theta}{A_k} \nonumber \\
&=\left[\frac{(n-kC_{k,n}^2)\sqrt{k}}{\left(n+kC_{k,n}\right)^2(k-1)!}+\frac{2k\left(k^2C_{k,n}^2-n\right)\sqrt{k}}{\left(n+kC_{k,n}\right)^2(k+1)!}\right]\div \frac{\sqrt{k}}{(k-1)!} \nonumber \\
&=\frac{n-kC_{k,n}^2}{\left(n+kC_{k,n}\right)^2} + \frac{2\left(k^2C_{k,n}^2-n\right)}{\left(n+kC_{k,n}\right)^2(k+1)} \nonumber\\
&=\frac{(k-1)(kC_{k,n}^2+n)}{(kC_{k,n}+n)^2(k+1)}.
\end{align}
Then the average decrease in risk for the Bayes estimator \emph{in proportion to the average risk of the MLE} is
\begin{align}
\frac{\bar R_{\bb{\hat\theta}} - \bar R_{\bb{d_B}}}{\bar R_{\bb{\hat\theta}}} &= \left[\frac{k-1}{n(k+1)} - \frac{(k-1)(kC_{k,n}^2+n)}{(kC_{k,n}+n)^2(k+1)}\right] \div \frac{k-1}{n(k+1)} \nonumber \\
&= 1-\frac{n\left(kC_{k,n}^2+n\right)}{\left(kC_{k,n}+n\right)^2}. \label{eq:avgriskdec}
\end{align}
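For reference, \eqref{eq:avgriskdec} as a function of $C_{k,n}$ can be coded directly (a sketch in our notation); differentiating shows the derivative is proportional to $-(C_{k,n}-1)$, confirming the global maximum at $C_{k,n}=1$:

```python
def avg_risk_decrease(c, k, n):
    """Proportional decrease in average risk of the Bayes estimator relative
    to the MLE, eq. (avgriskdec), as a function of c = C_{k,n}."""
    return 1.0 - n * (k * c**2 + n) / (k * c + n)**2
```

At $c=1$ this reduces to $k/(k+n)$, and at $c=1/k$ to $1-n(n+1/k)/(n+1)^2$, matching the expressions above.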
For fixed $k$ and $n$, this function has a global maximum at $C_{k,n}=1$, as illustrated in Figure \ref{fig:avgriskdec}. When $C_{k,n}=1$ we obtain, by plugging in to \eqref{eq:avgriskdec},
\begin{equation}
\frac{\bar R_{\bb{\hat\theta}} - \bar R_{\bb{d_{B,C_{k,n}=1}}}}{\bar R_{\bb{\hat\theta}}}=\frac{k}{k+n}=\frac{1}{1+n/k}.
\label{eq:avgriskC1}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[height=.3\textheight]{avgrisk}
\caption{Graph of the average decrease in risk for the Bayes estimator in proportion to the average risk of the MLE, as a function of $C_{k,n}$ with fixed $k$ and $n$. The function has a global maximum at $C_{k,n}=1$. It is positive for the region shown here, but can become negative for large enough $C_{k,n}$. This graph was made using the values $k=10$ and $n=30$; however, a similar shape will result from any fixed $k$ and $n$, with a maximum at $C_{k,n}=1$ and maximum value $k/(k+n)$.}
\label{fig:avgriskdec}
\end{figure}
We see from \eqref{eq:avgriskC1} that for all $k$, there is some positive decrease in (proportional) average risk that depends on the relationship between $k$ and $n$. For example, when $n=k$, the decrease is 50\%, when $n=2k$, the decrease is 33.3\%, and when $n=k^2$, the decrease is $100\cdot\left(\frac{1}{k+1}\right)\%$. The Bayes estimator with the uniform prior is the estimator that has the smallest average risk (with respect to the Lebesgue measure) by definition (see, e.g., \citet[pp. 22-23]{textbook}). We see here that this effect, in comparison to the MLE, is most pronounced for $n$ on the order of $k$.
On the other hand, we can consider $C_{k,n}=1/k$. This prior gave rise to an estimator that had smaller risk than the MLE for nearly the entire parameter space for even small to moderate $k$ (see example \ref{ex:prior1-k}). Plugging in to \eqref{eq:avgriskdec},
\begin{equation}
\frac{\bar R_{\bb{\hat\theta}} - \bar R_{\bb{d_{B,C_{k,n}=1/k}}}}{\bar R_{\bb{\hat\theta}}}=1 - \frac{kn^2+n}{k(n+1)^2}=1 - \frac{n(n+1/k)}{(n+1)^2}.
\label{eq:avgriskC1k}
\end{equation}
This decreases to 0 as $n$ becomes large, but depends less on $k$ for its limiting behavior. For the nonparametric estimation of a distribution, based on a given sample size $n$, it is useful to consider the behavior of \eqref{eq:avgriskC1k} as $k\nearrow\infty$. For fixed $n$, \eqref{eq:avgriskC1k} $\nearrow 1-\frac{n^2}{(n+1)^2}$.
Comparing the behavior for moderate to large $k$ and $n$ in \eqref{eq:avgriskC1} and \eqref{eq:avgriskC1k} shows the opposite kind of optimality from that in Section \ref{sec:volume}, where the $\dir(1/k,\dotsc,1/k)$ prior was favored over the uniform prior. Balancing the smaller radius of the region with lower risk for the estimator under the uniform prior with its optimal decrease in average risk indicates that, for moderate to large $k$ and $n$, this estimator ($\bb{d_{B,C_{k,n}=1}}$) is preferable over other Bayes estimators. For small $k$ ($k<10$) or when the true distribution is believed to have an unknown dominating class (and thus requires an exchangeable prior with a large radius for a decrease in risk), the estimator with the $\dir(1/k,\dotsc,1/k)$ prior may be preferable. For illustration, some comparison values are in Table \ref{tab:comparison}.
\begin{table}[h]
\centering
\begingroup\footnotesize
\begin{tabular}{ll | l | rr | rr}
\hline
&& {\bf MLE} & \multicolumn{1}{| l}{\bf Uniform Prior} & & \multicolumn{1}{| l}{$\boldsymbol{1/k}$ {\bf Prior}} & \\
$\boldsymbol k$ & $\boldsymbol n$&{\bf Avg. Risk} & {\bf Decrease (\%)} & {\bf Vol. Prop.} & {\bf Decrease (\%)} & {\bf Vol. Prop.} \\
\hline
$k=5$ & $n=5$ & $\ee{1.33}{-1}$ & 50.00 \% & 0.9443 & 27.77 \% & 0.9977 \\
$k=5$ & $n=25$ & $\ee{2.67}{-2}$ & 16.67 \% & 0.8926 & 6.80 \% & 0.9970 \\
$k=10$ & $n=10$ & $\ee{8.18}{-2}$ & 50.00 \% & 0.9823 & 16.53 \% & 1\\
$k=10$ & $n=100$ & $\ee{8.18}{-3}$ & 9.09 \% & 0.9385 & 1.87 \% & 1\\
$k=50$ & $n=50$ & $\ee{1.92}{-2}$ & 50.00 \% & 0.9998 & 3.84 \% & 1\\
$k=50$ & $n=2500$ & $\ee{3.84}{-4}$ & 1.96 \% &0.9939 & 0.08 \% & 1\\
$k=100$ & $n=100$ & $\ee{9.80}{-3}$ & 50.00 \% & 1 & 1.96 \% & 1\\
$k=100$ & $n=10000$ & $\ee{9.80}{-5}$ & 0.99 \% & 0.9993 & 0.02 \% & 1\\
\hline
\end{tabular}
\caption{Table comparing the average risks for the MLE and the Bayes estimators under the uniform prior and the $\dir(1/k,\dotsc,1/k)$ priors for $k=5,10,50,100$ and $n=k,k^2$. Listed as well are estimated proportions of the parameter space where the estimators have lower risk than the MLE. For the uniform prior, these are estimated using the simulation in section \ref{sec:uniformsim}. For the $\dir(1/k,\dotsc,1/k)$ prior, the estimates use \eqref{eq:Bayesrad} and \eqref{eq:volbound2}.}
\label{tab:comparison}
\endgroup
\end{table}
\section{On simulation of total variation distances between the true distribution and the distributions of (1) MLE-based frequentist estimators and (2) nonparametric Bayes estimators}
\label{sec:tv}
Let $Q$ be a probability measure on some measurable state space $(S,\mathcal S)$, which is partitioned into $k$ measurable subsets $A_1,\dotsc,A_k$. If one wishes to estimate the probabilities $Q(A_j)=\theta_j>0,\: j=1,\dotsc,k$, based on the numbers $n_1,\dotsc,n_k$ of a random sample of size $n$ from $Q$ falling into these classes, the MLE $\boldsymbol{\hat{\theta}}=(\hat\theta_1=n_1/n,\dotsc,\hat\theta_k=n_k/n)$ is the time-honored estimate, and $(n_1,\dotsc,n_k)$ has the multinomial distribution $M(n;\theta_1,\theta_2,\dotsc,\theta_k)$. One may, instead, use the Bayes estimator with the conjugate prior $\operatorname{Dir}(\alpha_1,\alpha_2,\dotsc,\alpha_k)$, namely, $\boldsymbol{d_B}=\left(\left(n_1+\alpha_1\right)/\left(n+\sum\alpha_j\right),\dotsc,\left(n_k+\alpha_k\right)/\left(n+\sum\alpha_j\right)\right)$, where $\alpha_i>0\, \forall i$. We have seen that under squared error loss, for large or moderately large $k$, $\boldsymbol{d_B}$ outperforms $\boldsymbol{\hat\theta}$ on most of the parameter space in cases (1) $\alpha_i=C>0\, \forall i$ (for $C<2$), and (2) $\alpha_i=1/k\,\forall i$. For large $k$ these estimators provide approximations to $Q$. We now consider a way of providing approximations to $Q$ in variation norm for classes of state spaces $S$ with a finite volume measure $\omega$ and $Q$ absolutely continuous with respect to it.
Consider a closed bounded region $S$ such as a ball or rectangular region in a Euclidean space, or a compact Riemannian manifold such as the sphere $S^d$, Kendall's planar shape space $\Sigma_2^m$ (which is the same as the complex projective space $\mathbb CP^{m-2}$), etc., each equipped with a volume measure $\omega$. As above, we consider a partition of $S$ into $k$ subsets $A_{j,k},\: j=1,\dotsc,k,$ such that $\omega(A_{j,k})>0\,\forall j,k$ and $\omega(A_{j,k})\to 0$ as $k\to\infty$, and let $Q(A_{j,k})=\theta_{j,k}$. Let $\boldsymbol{\hat\theta}=(\hat\theta_{1,k}=n_1/n,\dotsc,\hat\theta_{k,k}=n_k/n)$ be the MLE of $\boldsymbol\theta=(\theta_{1,k},\dotsc,\theta_{k,k})$. Consider the Bayes estimator under the Dirichlet prior with $\alpha_i=1/k\,\forall i$ and under the \emph{absolute error} loss function, say $\boldsymbol{\breve\theta}=(\breve\theta_{1,k},\dotsc,\breve\theta_{k,k})$, where $\breve\theta_{j,k}$ is the median of the posterior distribution of $\theta_{j,k}$, namely, the \emph{median} of $\operatorname{Beta}(1/k+n_j,n+1-n_j-1/k),\: j=1,\dots,k$.
The risk function of $\boldsymbol{\breve\theta}$ is
\begin{equation}
R\left(\boldsymbol\theta,\boldsymbol{\breve\theta}\right)=\sum_{1\le j\le k} E_{\theta_{j,k}}\left|\breve\theta_{j,k}-\theta_{j,k}\right|=\sum_{j=1}^k\sum_{r=0}^n \binom{n}{r}\theta_{j,k}^r(1-\theta_{j,k})^{n-r}\left|F^{-1}_{(r+1/k,n-r+1-1/k)}(1/2)-\theta_{j,k}\right|,
\label{eq:Bayesabsrisk}
\end{equation}
where $F_{(\alpha,\beta)}$ is the distribution function of $\operatorname{Beta}(\alpha,\beta)$, $F^{-1}_{(\alpha,\beta)}$ is its inverse, and $F^{-1}_{(\alpha,\beta)}(1/2)$ is the median of $\operatorname{Beta}(\alpha,\beta)$.
The risk function of $\boldsymbol{\hat\theta}$ is
\begin{equation}
R\left(\boldsymbol\theta,\boldsymbol{\hat\theta}\right)=\sum_{1\le j\le k} E_{\theta_{j,k}}|\hat\theta_{j,k}-\theta_{j,k}|=\sum_{j=1}^k\sum_{r=0}^n \binom{n}{r}\theta_{j,k}^r(1-\theta_{j,k})^{n-r}|r/n-\theta_{j,k}|.
\label{eq:mleabsrisk}
\end{equation}
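Both risk functions are finite sums over a binomial distribution and can be evaluated exactly for a given $\boldsymbol\theta$. A sketch in Python (our function names), assuming SciPy is available for the Beta quantile:

```python
from math import comb

from scipy.stats import beta

def abs_risk(theta, n, estimate):
    """Risk under absolute-error loss: sum_j E|estimate(n_j) - theta_j| with
    n_j ~ Binomial(n, theta_j), as in eqs. (Bayesabsrisk) and (mleabsrisk)."""
    total = 0.0
    for t in theta:
        total += sum(comb(n, r) * t**r * (1 - t)**(n - r) * abs(estimate(r) - t)
                     for r in range(n + 1))
    return total

def mle_abs_risk(theta, n):
    return abs_risk(theta, n, lambda r: r / n)

def bayes_abs_risk(theta, n, k):
    # Posterior median of Beta(1/k + r, n + 1 - r - 1/k), i.e. F^{-1}(1/2).
    return abs_risk(theta, n, lambda r: beta.ppf(0.5, 1 / k + r, n + 1 - r - 1 / k))
```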
Suppose $Q$ has a continuous density $f$ (with respect to the volume measure $\omega$). We now consider the problem of estimating the approximate density of $Q$ as $f_k$, where
\begin{equation}
f_k(x)=\theta_{j,k}/\omega(A_{j,k})\quad\text{for } x\in A_{j,k} \:(j=1,\dotsc,k).
\end{equation}
Assume that $\max \{\operatorname{diam}(A_{j,k}): j=1,\dotsc,k \}\to 0$ as $k\to\infty$. Then $\int|f_k(x)-f(x)|\omega(dx)\to0$ as $k\to\infty$. We consider now the estimates of $f_k$ given by
\begin{align}
\hat f_k(x)&=\hat\theta_{j,k}/\omega(A_{j,k})\quad\text{for } x\in A_{j,k}\: (j=1,\dotsc,k),\\
\breve f_k(x)&=\breve\theta_{j,k}/\omega(A_{j,k})\quad\text{for } x\in A_{j,k}\: (j=1,\dotsc,k).
\end{align}
The $L_1$ (and thus the total variation) distances between $f_k$ and its estimates above are given by \eqref{eq:Bayesabsrisk} and \eqref{eq:mleabsrisk}. In particular, the Bayes estimator $\breve f_k(x)$ basically provides the nonparametric estimator of $f_k$ under the Dirichlet prior with base measure $\alpha_k$ on $S$ with density $\alpha_k(x)=\frac{1}{k\omega(A_{j,k})}$ for $x\in A_{j,k}\: (j=1,\dotsc,k)$.
Note that one may consider the above type of approximation for an unbounded state space by requiring that the density decays fast outside a bounded region.
Similarly, the $L_2$ distances between $f_k$ and its estimators under squared error loss are related to the corresponding risk functions. If the $A_{j,k}$ are chosen such that $\omega(A_{j,k})=\omega(S)/k\; \forall j$, $\hat\theta_{j,k}=n_j/n$, and $d_{j,k}=(n_j+1/k)/(n+1)$ (the Bayes estimator under squared error loss with Dirichlet prior $\operatorname{Dir}(1/k,\dotsc,1/k)$), and we set $\tilde f_k(x)=d_{j,k}/\omega(A_{j,k})$ for $x\in A_{j,k}$, then
\begin{align}
\norm{f_k-\hat f_k}_2&=\sqrt{\frac{k}{\omega(S)}}\sqrt{R\left(\boldsymbol{\hat\theta},\boldsymbol\theta\right)},\label{eq:mlel2risk}\\
\norm{f_k-\tilde f_k}_2&=\sqrt{\frac{k}{\omega(S)}}\sqrt{R\left(\boldsymbol{d_B},\boldsymbol\theta\right)},\label{eq:Bayesl2risk}
\end{align}
where $R\left(\boldsymbol{\hat\theta},\boldsymbol\theta\right)$ and $R\left(\boldsymbol{d_B},\boldsymbol\theta\right)$ are as defined in \eqref{eq:mlerisk} and \eqref{eq:Bayesrisk}, respectively.
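Reading \eqref{eq:mlel2risk} as a statement about the expected squared $L_2$ distance, it can be checked by simulation in the simplest setting $\omega(S)=1$ with equal-volume cells; a minimal sketch (our code, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, reps = 8, 40, 20_000
theta = rng.dirichlet(np.ones(k))  # an arbitrary true cell-probability vector

# With omega(S) = 1 and omega(A_{j,k}) = 1/k, the squared L2 distance between
# f_k and \hat f_k is sum_j (theta_j - hat_theta_j)^2 / (1/k)^2 * (1/k)
#   = k * sum_j (theta_j - hat_theta_j)^2.
sq = [k * np.sum((theta - rng.multinomial(n, theta) / n)**2) for _ in range(reps)]

mc = float(np.mean(sq))                    # Monte Carlo E ||f_k - \hat f_k||_2^2
exact = k * (1.0 - np.sum(theta**2)) / n   # (k / omega(S)) * R(\hat\theta, \theta)
```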
\subsection{Simulated $L_1$ distances}
We simulated these $L_1$ distances using the case where $\omega(S)=1$ by uniformly sampling 10,000 probability vectors from the standard $k$-simplex for each of $k=10, 20, 30, 40, 50, 100, 200, 500$ and calculating the risk using equations \eqref{eq:Bayesabsrisk} and \eqref{eq:mleabsrisk}. We then averaged the 10,000 risk calculations to obtain an estimate of the average $L_1$ distance across the parameter space. Note that the sample size of 10,000 is smaller than that used in the $L_2$ case (1,000,000) due to increased computational complexity.
The results can be found in Tables \ref{tab:l1k10} through \ref{tab:l1k500}. Note that for large $n$, the estimated average $L_1$ distance for the Bayes estimator is often slightly larger than that for the MLE. It is unknown whether this is because the two are too close to distinguish with sample means, or whether the $L_1$ distance is truly larger.
\begin{table}[p]
\begin{minipage}{.5\textwidth}
\scriptsize
\centering
\begin{tabular}{lrr}
\hline
$\boldsymbol{k=10}$\\
\hline
$\boldsymbol n$ & {\bf MLE} & {\bf Bayes} \\
\hline
$n=20$ & 0.4689 & 0.4541 \\
$n=30$ & 0.3827 & 0.3765 \\
$n=40$ & 0.3314 & 0.3283 \\
$n=50$ & 0.2964 & 0.2947 \\
$n=100$ & 0.2095 & 0.2095 \\
$n=200$ & 0.1481 & 0.1483 \\
$n=300$ & 0.1209 & 0.1211 \\
$n=400$ & 0.1047 & 0.1048 \\
$n=500$ & 0.09363 & 0.09374 \\
$n=600$ & 0.08547 & 0.08556 \\
$n=700$ & 0.07913 & 0.0792 \\
$n=800$ & 0.07402 & 0.07408 \\
$n=900$ & 0.06978 & 0.06984 \\
$n=1000$ & 0.0662 & 0.06625 \\
\hline
\end{tabular}
\caption{Simulated $L_1$ distances for $k=10$.}
\label{tab:l1k10}
\end{minipage}
\hfill
\begin{minipage}{.5\textwidth}
\scriptsize
\centering
\begin{tabular}{lrr}
\hline
$\boldsymbol{k=20}$\\
\hline
$\boldsymbol n$ & {\bf MLE} & {\bf Bayes} \\
\hline
$n=20$ & 0.682 & 0.6333 \\
$n=30$ & 0.5583 & 0.5343 \\
$n=40$ & 0.4839 & 0.4706 \\
$n=50$ & 0.433 & 0.425 \\
$n=100$ & 0.3063 & 0.3057 \\
$n=200$ & 0.2166 & 0.2172 \\
$n=300$ & 0.1768 & 0.1774 \\
$n=400$ & 0.1531 & 0.1536 \\
$n=500$ & 0.137 & 0.1373 \\
$n=600$ & 0.125 & 0.1253 \\
$n=700$ & 0.1157 & 0.116 \\
$n=800$ & 0.1083 & 0.1085 \\
$n=900$ & 0.1021 & 0.1023 \\
$n=1000$ & 0.09683 & 0.09702 \\
\hline
\end{tabular}
\caption{Simulated $L_1$ distances for $k=20$.}
\end{minipage}
\end{table}
\begin{table}[p]
\begin{minipage}{.5\textwidth}
\scriptsize
\centering
\begin{tabular}{lrr}
\hline
$\boldsymbol{k=30}$\\
\hline
$\boldsymbol n$ & {\bf MLE} & {\bf Bayes} \\
\hline
$n=20$ & 0.8373 & 0.75 \\
$n=30$ & 0.6885 & 0.6404 \\
$n=40$ & 0.5978 & 0.5685 \\
$n=50$ & 0.5353 & 0.5163 \\
$n=100$ & 0.3791 & 0.376 \\
$n=200$ & 0.2682 & 0.2687 \\
$n=300$ & 0.219 & 0.2198 \\
$n=400$ & 0.1896 & 0.1904 \\
$n=500$ & 0.1696 & 0.1703 \\
$n=600$ & 0.1548 & 0.1554 \\
$n=700$ & 0.1433 & 0.1438 \\
$n=800$ & 0.1341 & 0.1345 \\
$n=900$ & 0.1264 & 0.1268 \\
$n=1000$ & 0.1199 & 0.1203 \\
\hline
\end{tabular}
\caption{Simulated $L_1$ distances for $k=30$.}
\end{minipage}
\hfill
\begin{minipage}{.5\textwidth}
\scriptsize
\centering
\begin{tabular}{lrr}
\hline
$\boldsymbol{k=40}$\\
\hline
$\boldsymbol n$ & {\bf MLE} & {\bf Bayes} \\
\hline
$n=20$ & 0.9611 & 0.8375 \\
$n=30$ & 0.7948 & 0.7208 \\
$n=40$ & 0.6916 & 0.6436 \\
$n=50$ & 0.62 & 0.5872 \\
$n=100$ & 0.4397 & 0.4325 \\
$n=200$ & 0.3112 & 0.3111 \\
$n=300$ & 0.2541 & 0.2549 \\
$n=400$ & 0.2201 & 0.221 \\
$n=500$ & 0.1968 & 0.1977 \\
$n=600$ & 0.1797 & 0.1805 \\
$n=700$ & 0.1664 & 0.1671 \\
$n=800$ & 0.1556 & 0.1562 \\
$n=900$ & 0.1467 & 0.1473 \\
$n=1000$ & 0.1392 & 0.1397 \\
\hline
\end{tabular}
\caption{Simulated $L_1$ distances for $k=40$.}
\end{minipage}
\end{table}
\begin{table}[p]
\begin{minipage}{.5\textwidth}
\scriptsize
\centering
\begin{tabular}{lrr}
\hline
$\boldsymbol{k=50}$\\
\hline
$\boldsymbol n$ & {\bf MLE} & {\bf Bayes} \\
\hline
$n=20$ & 1.063 & 0.9081 \\
$n=30$ & 0.8851 & 0.786 \\
$n=40$ & 0.7723 & 0.7051 \\
$n=50$ & 0.6931 & 0.6455 \\
$n=100$ & 0.4926 & 0.4803 \\
$n=200$ & 0.3488 & 0.3477 \\
$n=300$ & 0.2849 & 0.2855 \\
$n=400$ & 0.2468 & 0.2477 \\
$n=500$ & 0.2207 & 0.2217 \\
$n=600$ & 0.2015 & 0.2024 \\
$n=700$ & 0.1865 & 0.1874 \\
$n=800$ & 0.1745 & 0.1753 \\
$n=900$ & 0.1645 & 0.1652 \\
$n=1000$ & 0.1561 & 0.1567 \\
\hline
\end{tabular}
\caption{Simulated $L_1$ distances for $k=50$.}
\end{minipage}
\hfill
\begin{minipage}{.5\textwidth}
\scriptsize
\centering
\begin{tabular}{lrr}
\hline
$\boldsymbol{k=100}$\\
\hline
$\boldsymbol n$ & {\bf MLE} & {\bf Bayes} \\
\hline
$n=20$ & 1.391 & 1.144 \\
$n=30$ & 1.203 & 1.008 \\
$n=40$ & 1.068 & 0.9143 \\
$n=50$ & 0.9676 & 0.8454 \\
$n=100$ & 0.6973 & 0.6499 \\
$n=200$ & 0.4958 & 0.4838 \\
$n=300$ & 0.4053 & 0.4016 \\
$n=400$ & 0.3511 & 0.3503 \\
$n=500$ & 0.3141 & 0.3144 \\
$n=600$ & 0.2868 & 0.2876 \\
$n=700$ & 0.2655 & 0.2665 \\
$n=800$ & 0.2484 & 0.2495 \\
$n=900$ & 0.2342 & 0.2353 \\
$n=1000$ & 0.2222 & 0.2233 \\
\hline
\end{tabular}
\caption{Simulated $L_1$ distances for $k=100$.}
\end{minipage}
\end{table}
\begin{table}[p]
\begin{minipage}{.5\textwidth}
\scriptsize
\centering
\begin{tabular}{lrr}
\hline
$\boldsymbol{k=200}$\\
\hline
$\boldsymbol n$ & {\bf MLE} & {\bf Bayes} \\
\hline
$n=20$ & 1.652 & 1.364 \\
$n=30$ & 1.512 & 1.245 \\
$n=40$ & 1.392 & 1.149 \\
$n=50$ & 1.291 & 1.073 \\
$n=100$ & 0.9698 & 0.8479 \\
$n=200$ & 0.6991 & 0.652 \\
$n=300$ & 0.5732 & 0.5508 \\
$n=400$ & 0.4972 & 0.4854 \\
$n=500$ & 0.4451 & 0.4386 \\
$n=600$ & 0.4065 & 0.4029 \\
$n=700$ & 0.3764 & 0.3746 \\
$n=800$ & 0.3522 & 0.3515 \\
$n=900$ & 0.3321 & 0.332 \\
$n=1000$ & 0.3151 & 0.3155 \\
\hline
\end{tabular}
\caption{Simulated $L_1$ distances for $k=200$.}
\end{minipage}
\hfill
\begin{minipage}{.5\textwidth}
\scriptsize
\centering
\begin{tabular}{lrr}
\hline
$\boldsymbol{k=500}$\\
\hline
$\boldsymbol n$ & {\bf MLE} & {\bf Bayes} \\
\hline
$n=20$ & 1.849 & 1.543 \\
$n=30$ & 1.78 & 1.483 \\
$n=40$ & 1.714 & 1.425 \\
$n=50$ & 1.653 & 1.369 \\
$n=100$ & 1.393 & 1.152 \\
$n=200$ & 1.071 & 0.9189 \\
$n=300$ & 0.8932 & 0.7951 \\
$n=400$ & 0.7798 & 0.7133 \\
$n=500$ & 0.7003 & 0.6532 \\
$n=600$ & 0.6407 & 0.6064 \\
$n=700$ & 0.5941 & 0.5684 \\
$n=800$ & 0.5562 & 0.5367 \\
$n=900$ & 0.5248 & 0.5097 \\
$n=1000$ & 0.4981 & 0.4864 \\
\hline
\end{tabular}
\caption{Simulated $L_1$ distances for $k=500$.}
\label{tab:l1k500}
\end{minipage}
\end{table}
\section{Data example: stocking jeans}\label{sec:data}
In recent years, the lack of sizing representation in clothing stores has been decried by many groups. See, for example, the article ``Women's Clothing Retailers are Still Ignoring the Reality of Size in the US'' from Quartzy \citep{shendruk_2018}. A large component of this problem is that stores do not tend to stock sizes in proportion to the distribution of clothing sizes reflected in the general population; rather, there is a notion of stocking clothing based on the ``typical customer'' for the store. This becomes a self-fulfilling prophecy, however, since choosing not to stock for parts of the population not deemed ``typical customers'' ensures that they cannot ever be customers by definition.
Let us focus on denim jeans, which are widely considered a staple in the American woman's wardrobe. A brick-and-mortar retailer will largely only sell sizes that are currently in stock (while employees may offer to special order sizes not in stock, the majority of patrons will simply leave the store without purchasing if their size is not in stock). Since the purchasing of stock represents a risk by the retailer, it is important to accurately guess which sizes to stock. However, when taking into account both waist size and inseam, as several denim brands do, this can result in a large number of size options for stock. For example, using the Levi's online size chart and their online catalog, we calculated 59 different sizes \citep{levis}.
The retailer could use past sales as a guide for how much of each size to stock. However, this has the effect of perpetuating errors in representation, since patrons who desired to purchase jeans but were unable because their sizes were not in stock cannot be represented in the sales data. Instead, the retailer could sample the desired sizes of anyone who enters the store, regardless of whether they make a purchase. This would potentially reflect the distribution of potential customers more accurately than sales data. The retailer most likely would need to take a small to moderate sample initially since too much time with an inaccurate stock distribution may cause unrepresented segments of the population to stop coming altogether.
To simulate such a sampling scheme, we used the National Health and Nutrition Examination Survey from 2015-16 \citep{nhanes} and the Levi's size chart to estimate the true Levi's jean size distribution of adult women in the United States. After restricting to adult women and excluding those in the sample that were pregnant (as this temporarily skews waist size), there were 2697 adult women surveyed in the NHANES, with sample weighting to properly reflect the noninstitutionalized population of the United States. Using the Levi's website, we calculated 59 different jean sizes, as well as a category for those whose waist size is too high to fit into any of Levi's listed sizes (we estimated that 8.39\% of adult women in the United States fit into this category). There was one jean size that was not sampled in the NHANES. We decided that it is unlikely that this jean size does not exist in the entire population of the US, so we gave this size a proportion equal to one half of the minimum nonzero proportion in the other sizes and then renormalized.
We then simulated random samples of size 100 from the multinomial distribution with 60 categories using the calculated size distribution for the US adult women population. This simulates the following scenario: the retailer hopes to estimate the distribution of jean sizes his clientele desire by recording the desired jean size of a sample of 100 potential customers, and his potential customers reflect the size distribution of the US adult female population as a whole, rather than a (potentially smaller-waisted) subpopulation. We included the 60th category of ``no size'' since, under this scenario in which the potential customers reflect the true distribution, it is possible that customers may arrive at the store hoping to buy jeans before learning that the sizes are not large enough.
We then calculated the MLE for the size distribution by taking the sample counts and dividing by 100. We also used a uniform prior and calculated two different Bayes estimators: the estimator under squared-error ($L_2$) loss, which is the posterior mean, and the estimator under absolute-error ($L_1$) loss, which is the posterior median. In some sense, estimating under a uniform prior tries to balance between two ideas of ``fairness'': representing all sizes (a uniform prior) and representing the size distribution (the posterior mean or median given the sample).
We repeated this simulation 1000 times. In each case, at least 18 of the size categories were unrepresented in the sample of 100. Thus the MLE estimated zero probabilities for almost one third of the sizes. On the other hand, the Bayes estimators are never zero and thus lean toward being more inclusive of sizes in stocking while still taking into account the sample data. We also calculated the $L_1$, $L_2$, and infinity (maximum) distance between the estimators and the calculated size distribution. The results are in Table \ref{tab:dist}. Note that the Bayes estimators tended to be closer than the MLE to the true size distribution by all three distance measures despite being biased. The $L_2$ Bayes estimator was closer in $L_1$ in 85.8\% of the simulations, closer in $L_2$ in 94.9\% of the simulations, and even had a smaller maximum distance (i.e., the largest absolute difference among all 60 categories) in 67.8\% of the simulations. The $L_1$ Bayes estimator was closer in $L_1$ in 95.6\% of the simulations, closer in $L_2$ in 93.5\% of the simulations, and had a smaller maximum distance in 62.5\% of the simulations.
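A minimal sketch of one round of this computation, with an illustrative stand-in for the true size distribution (the actual NHANES-derived probabilities are not reproduced here) and SciPy assumed for the posterior median:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
k, n = 60, 100

# Illustrative stand-in for the true size distribution (NOT the NHANES one):
# a few common sizes plus a long tail of rare ones.
p_true = rng.dirichlet(0.3 * np.ones(k))

counts = rng.multinomial(n, p_true)   # desired sizes of 100 potential customers
mle = counts / n                      # MLE: zero for every unobserved size
post_mean = (counts + 1) / (n + k)    # Bayes, uniform prior, L2 loss
# Bayes under L1 loss: componentwise median of the Beta(1 + n_j, k - 1 + n - n_j)
# marginal posteriors; note this vector need not sum exactly to 1.
post_med = beta.ppf(0.5, counts + 1, n + k - counts - 1)
```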
\begin{table}[h!]
\centering
\begin{tabular}{lrrr}
\hline
& $L_1$ & $L_2$ & Infinity (maximum) \\
\hline
Bayes estimator ($L_2$ Loss) & 0.4599 & 0.0816 & 0.0377 \\
Bayes estimator ($L_1$ Loss) & 0.4368 & 0.0830 & 0.0395 \\
MLE & 0.5081 & 0.0969 & 0.0447 \\
\hline
\end{tabular}
\caption[Distance between estimators and true probability]{The mean $L_1$, $L_2$, and infinity (maximum) distances between the estimators and the true size distribution in 1000 simulations.}
\label{tab:dist}
\end{table}
Table \ref{tab:jeans} shows the estimated (true) size distribution based on the NHANES, the numbers this would represent in a stock of 1000 jeans in a Levi's store, as well as stocks based on the MLE and Bayes estimators from a sample of 100 customers. Note that in three sizes, the probabilities are so small that the true size distribution still recommends to stock zero jeans in those sizes. Also note that in this particular sample 35 of the sizes were unrepresented, and thus the MLE recommended stocking fewer than half of the sizes. Due to rounding, the stocks do not total exactly 1000. The MLE-based stock has 993, the Bayes ($L_2$) has 1006, and the Bayes ($L_1$) has 985. The $L_2$ distances between the true size distribution and the MLE, Bayes ($L_2$), and Bayes ($L_1$) estimators were, respectively, 0.122, 0.077, and 0.079. The absolute errors in the stock (the sum of all absolute differences between stock numbers in each size) for the MLE, Bayes ($L_2$), and Bayes ($L_1$) were, respectively, 687, 486, and 487. The maximum possible absolute error would be around 2000, which would occur if all 1000 pairs of jeans were stocked in the three sizes where zero stock was recommended by the true distribution (additional values are possible due to rounding).
\begin{table}[p]
\centering
\begingroup\tiny
\setlength{\tabcolsep}{6pt}
\renewcommand{\arraystretch}{.55}
\begin{tabular}{lrrrrr}
\hline
Size (Waist.Inseam) & True $p$ & Stock (true) & Stock (MLE) & Stock (Bayes, $L_2$ Loss) & Stock (Bayes, $L_1$ Loss) \\
\hline
24.28 & $\ee{1.386}{-04}$ & 0 & 0 & 7 & 5 \\
24.30 & $\ee{8.657}{-04}$ & 1 & 0 & 7 & 5 \\
24.32 & $\ee{2.020}{-04}$ & 0 & 0 & 7 & 5 \\
25.28 & $\ee{6.930}{-05}$ & 0 & 0 & 7 & 5 \\
25.30 & $\ee{1.949}{-03}$ & 2 & 0 & 7 & 5 \\
25.32 & $\ee{1.000}{-03}$ & 1 & 0 & 7 & 5 \\
26.28 & $\ee{6.218}{-04}$ & 1 & 0 & 7 & 5 \\
26.30 & $\ee{5.900}{-03}$ & 6 & 11 & 14 & 13 \\
26.32 & $\ee{4.262}{-03}$ & 5 & 0 & 7 & 5 \\
27.28 & $\ee{6.986}{-04}$ & 1 & 0 & 7 & 5 \\
27.30 & $\ee{1.268}{-02}$ & 14 & 0 & 7 & 5 \\
27.32 & $\ee{8.139}{-03}$ & 9 & 11 & 14 & 13 \\
27.34 & $\ee{1.214}{-03}$ & 1 & 0 & 7 & 5 \\
28.28 & $\ee{1.356}{-03}$ & 1 & 0 & 7 & 5 \\
28.30 & $\ee{5.628}{-03}$ & 6 & 0 & 7 & 5 \\
28.32 & $\ee{1.305}{-02}$ & 14 & 0 & 7 & 5 \\
28.34 & $\ee{2.675}{-03}$ & 3 & 0 & 7 & 5 \\
29.28 & $\ee{1.847}{-03}$ & 2 & 0 & 7 & 5 \\
29.30 & $\ee{1.181}{-02}$ & 13 & 46 & 34 & 37 \\
29.32 & $\ee{1.471}{-02}$ & 16 & 0 & 7 & 5 \\
29.34 & $\ee{3.006}{-03}$ & 3 & 0 & 7 & 5 \\
30.28 & $\ee{2.008}{-03}$ & 2 & 0 & 7 & 5 \\
30.30 & $\ee{1.203}{-02}$ & 13 & 23 & 21 & 21 \\
30.32 & $\ee{1.350}{-02}$ & 15 & 0 & 7 & 5 \\
30.34 & $\ee{4.348}{-03}$ & 5 & 0 & 7 & 5 \\
31.28 & $\ee{4.102}{-03}$ & 4 & 0 & 7 & 5 \\
31.30 & $\ee{2.713}{-02}$ & 30 & 11 & 14 & 13 \\
31.32 & $\ee{3.483}{-02}$ & 38 & 0 & 7 & 5 \\
31.34 & $\ee{9.686}{-03}$ & 11 & 0 & 7 & 5 \\
32.28 & $\ee{3.122}{-03}$ & 3 & 0 & 7 & 5 \\
32.30 & $\ee{3.314}{-02}$ & 36 & 46 & 34 & 37 \\
32.32 & $\ee{4.211}{-02}$ & 46 & 69 & 48 & 52 \\
32.34 & $\ee{1.271}{-02}$ & 14 & 34 & 27 & 29 \\
33.28 & $\ee{3.128}{-03}$ & 3 & 0 & 7 & 5 \\
33.30 & $\ee{2.157}{-02}$ & 24 & 11 & 14 & 13 \\
33.32 & $\ee{3.553}{-02}$ & 39 & 23 & 21 & 21 \\
33.34 & $\ee{3.100}{-03}$ & 3 & 11 & 14 & 13 \\
34.28 & $\ee{9.037}{-03}$ & 10 & 0 & 7 & 5 \\
34.30 & $\ee{4.477}{-02}$ & 49 & 46 & 34 & 37 \\
34.32 & $\ee{7.274}{-02}$ & 79 & 103 & 68 & 76 \\
34.34 & $\ee{1.153}{-02}$ & 13 & 0 & 7 & 5 \\
16W.S &$\ee{ 8.729}{-03}$ & 10 & 0 & 7 & 5 \\
16W.M &$\ee{ 9.149}{-03}$ & 10 & 0 & 7 & 5 \\
16W.L & $\ee{1.929}{-03}$ & 2 & 0 & 7 & 5 \\
18W.S & $\ee{4.694}{-02}$ & 51 & 80 & 55 & 60 \\
18W.M &$\ee{ 5.803}{-02}$ & 63 & 92 & 62 & 68 \\
18W.L & $\ee{8.997}{-03}$ & 10 & 11 & 14 & 13 \\
20W.S & $\ee{4.353}{-02}$ & 48 & 0 & 7 & 5 \\
20W.M & $\ee{4.290}{-02}$ & 47 & 46 & 34 & 37 \\
20W.L & $\ee{8.351}{-03}$ & 9 & 0 & 7 & 5 \\
22W.S & $\ee{3.617}{-02}$ & 39 & 69 & 48 & 52 \\
22W.M & $\ee{3.962}{-02}$ & 43 & 57 & 41 & 45 \\
22W.L & $\ee{8.016}{-03}$ & 9 & 34 & 27 & 29 \\
24W.S & $\ee{2.187}{-02}$ & 24 & 0 & 7 & 5 \\
24W.M & $\ee{3.911}{-02}$ & 43 & 103 & 68 & 76 \\
24W.L & $\ee{7.272}{-03}$ & 8 & 11 & 14 & 13 \\
26W.S & $\ee{1.851}{-02}$ & 20 & 34 & 27 & 29 \\
26W.M & $\ee{2.248}{-02}$ & 25 & 11 & 14 & 13 \\
26W.L & $\ee{2.546}{-03}$ & 3 & 0 & 7 & 5 \\
No size & $\ee{8.393}{-02}$ & 0 & 0 & 0 & 0 \\
\hline
\end{tabular}
\endgroup
\caption[Size distribution (NHANES) and stock]{The true size distribution based on NHANES 15-16, as well as stocks of about 1000 units based on the truth and on each estimator computed from a sample of size 100. The total absolute errors of the three estimated stocks are 687, 486, and 487, respectively.}
\label{tab:jeans}
\end{table}
\section{Final remarks}\label{sec:final}
In this article it is shown that in a multinomial model with a moderately large number $k$ of cells and even a reasonably large sample size, the Bayes estimator with a multivariate Beta (or Dirichlet) $\dir(C_{k,n},\dotsc,C_{k,n})$ prior has a smaller risk under squared error loss than that of the MLE on most of the parameter space, for the cases $C_{k,n}=1$ and $C_{k,n}=1/k$. The volume of domination is larger for $C_{k,n}=1/k$ than for $C_{k,n}=1$. When compared by average performance, this domination over the MLE persists, but it is more pronounced for the uniform prior than for the case $C_{k,n}=1/k$. Simulation studies also show the surprising fact that for $C_{k,n}\ge 2$ the performance of the Bayes estimator rapidly declines.
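This risk comparison is easy to reproduce in a few lines of Python. The sketch below is illustrative only (the function name and the choice of a uniform truth are ours, not part of the study): it draws multinomial samples and compares the Monte Carlo squared-error risks of the MLE and of the posterior mean $(X_i+C)/(n+kC)$ under a $\dir(C,\dotsc,C)$ prior.

```python
import numpy as np

def risks(p, n, c, reps=2000, rng=None):
    """Monte Carlo squared-error risks of the MLE and of the
    Dirichlet(c, ..., c) posterior-mean Bayes estimator."""
    rng = np.random.default_rng(rng)
    k = len(p)
    mle_risk = bayes_risk = 0.0
    for _ in range(reps):
        x = rng.multinomial(n, p)
        mle = x / n                     # maximum likelihood estimate
        bayes = (x + c) / (n + k * c)   # posterior mean under Dir(c,...,c)
        mle_risk += np.sum((mle - p) ** 2)
        bayes_risk += np.sum((bayes - p) ** 2)
    return mle_risk / reps, bayes_risk / reps

# k = 50 cells, n = 100 observations, uniform truth (an illustrative
# choice): the Bayes estimator shrinks toward the correct distribution.
p = np.full(50, 1 / 50)
r_mle, r_bayes = risks(p, n=100, c=1.0, rng=0)
```

With the uniform truth the prior shrinks toward the correct distribution, so the Bayes risk is markedly smaller; moving $p$ toward a corner of the simplex narrows or reverses the gap.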
The choice $C_{k,n}=1/k$ is motivated by the fact that it provides a simple approximation of the nonparametric estimation of an unknown distribution with a Dirichlet process prior \`a la \citet{ferguson1973}. In this context, there have been some simulation studies where nonparametric Bayes procedures have been found to outperform frequentist ones. A dramatic example may be found in \citet{david1}. Here a random sample is drawn from a parametric distribution on Kendall's planar shape space with density $f_0=f(\mathord{\cdot},\theta_0)$ for a given parameter value $\theta=\theta_0$, and three estimates of $f_0$ are compared: $\hat f=f(\mathord{\cdot},\hat\theta)$, with $\hat\theta$ the MLE of the parameter $\theta$; the standard nonparametric kernel density estimator $g$ of $f$; and a nonparametric Bayes estimator $h$ of $f$. One would expect the asymptotically efficient MLE of a correctly specified parametric model to perform better than its nonparametric competitors. But, surprisingly, a set of 20 simulations, each with a fresh random sample of size 200, shows the following average $L_1$-distances $d$: (1) $d(h,f_0)=0.44$, (2) $d(g,f_0)=1.03$, (3) $d(\hat f,f_0)=0.75$. Although the present study points to the superiority of the Bayes procedure over frequentist ones such as the histogram method, the differences do not appear to be that dramatic. Perhaps the method of representing an unknown density as a mixture of an appropriate parametric family and estimating the mixture by Ferguson's Dirichlet process, as used by \citet{david1}, should be preferred (also see \citet{ghosh_ram2003} and \citet{ghosal2017}). Still, the present article provides a simple and widely applicable Bayes estimator of a nonparametric distribution, which perhaps may be sharpened to be more effective.
\clearpage
| {
"timestamp": "2019-10-08T02:10:54",
"yymm": "1910",
"arxiv_id": "1910.02316",
"language": "en",
"url": "https://arxiv.org/abs/1910.02316",
"abstract": "This article focuses on the performance of Bayes estimators, in comparison with the MLE, in multinomial models with a relatively large number of cells. The prior for the Bayes estimator is taken to be the conjugate Dirichlet, i.e., the multivariate Beta, with exchangeable distributions over the coordinates, including the non-informative uniform distribution. The choice of the multinomial is motivated by its many applications in business and industry, but also by its use in providing a simple nonparametric estimator of an unknown distribution. It is striking that the Bayes procedure outperforms the asymptotically efficient MLE over most of the parameter spaces for even moderately large dimensional parameter space and rather large sample sizes.",
"subjects": "Statistics Theory (math.ST)",
"title": "Superiority of Bayes estimators over the MLE in high dimensional multinomial models and its implication for nonparametric Bayes theory",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9752018419665618,
"lm_q2_score": 0.7279754607093178,
"lm_q1q2_score": 0.7099230101901832
} |
https://arxiv.org/abs/1503.00374 | A Randomized Algorithm for Approximating the Log Determinant of a Symmetric Positive Definite Matrix | We introduce a novel algorithm for approximating the logarithm of the determinant of a symmetric positive definite (SPD) matrix. The algorithm is randomized and approximates the traces of a small number of matrix powers of a specially constructed matrix, using the method of Avron and Toledo~\cite{AT11}. From a theoretical perspective, we present additive and relative error bounds for our algorithm. Our additive error bound works for any SPD matrix, whereas our relative error bound works for SPD matrices whose eigenvalues lie in the interval $(\theta_1,1)$, with $0<\theta_1<1$; the latter setting was proposed in~\cite{icml2015_hana15}. From an empirical perspective, we demonstrate that a C++ implementation of our algorithm can approximate the logarithm of the determinant of large matrices very accurately in a matter of seconds. |
\section{Conclusions}
\label{sec:conclusions}
There are few approximation algorithms for the logarithm of the determinant of
a symmetric positive definite matrix.
Unfortunately, those algorithms either do not work for all SPD matrices, or do
not admit a worst-case theoretical analysis, or both.
In this work, we presented the first approximation algorithm for the logarithm
of the determinant of a matrix that comes with strong theoretical worst-case
analysis bounds and can be applied to \emph{any} SPD matrix.
Using state-of-the-art C\kern-0.05em\texttt{+\kern-0.03em+}{} numerical linear algebra software packages for
both dense and sparse matrices, we demonstrated that the proposed approximation
algorithm performs remarkably well in practice in serial and parallel
environments.
\section*{Acknowledgements} We thank Ahmed El Alaoui for many useful suggestions and comments on an early draft of this paper.
\section{Experiments}
\label{sec:experiments}
\begin{figure*}[t]
\begin{center}
\subfigure[Accuracy v/s $m$]{%
\includegraphics[width=0.32\textwidth]{results/dense-accuracy-m-p=0.png}
\label{fig:dense-accuracy}
}
\subfigure[Speedup v/s $m$]{%
\includegraphics[width=0.32\textwidth]{results/dense-speedup-m-p=0.png}
\label{fig:dense-speedup-m}
}
\subfigure[Parallel Speedup]{%
\includegraphics[width=0.32\textwidth]{results/dense-speedup-p-m=4.png}
\label{fig:dense-speedup-np}
}
\end{center}
\caption{
Panels~\ref{fig:dense-accuracy} and~\ref{fig:dense-speedup-m} depict the effect
of $m$ (see Algorithm~\ref{alg1}) on the accuracy of the approximation and the
time to completion, respectively, for dense matrices generated by
\code{randSPDDense}.
For all the panels, $p=60,t=\log{4n}$.
The baseline for all experiments was Cholesky factorization, which was used
to compute the exact value of $\logdet{\mat{A}}$.
For panels~\ref{fig:dense-accuracy} and~\ref{fig:dense-speedup-m}, the number
of cores, $np$, was set to 1.
The last panel~\ref{fig:dense-speedup-np} depicts the relative speed of
the approximate algorithm when compared to the baseline (at $m=4$).
Elemental was used as the backend for these experiments.
For the approximate algorithm, we report the mean and standard deviation
of 10 iterations; each iteration used different random numbers.
}
\label{fig:dense}
\end{figure*}
The goal of our experimental section is to establish that our approximation
to $\logdet{\mat{A}}$ (Algorithm~\ref{alg1}) is both accurate and fast for both
dense and sparse matrices.
The accuracy of Algorithm~\ref{alg1} is measured by comparing its result
against the ``exact'' $\logdet{\mat{A}}$ computed using Cholesky factorization.
The rest of this section is laid out as follows:
in Section~\ref{subsec:software}, we describe our software for approximating
$\logdet{\mat{A}}$ and
in Sections~\ref{subsec:dense_matrices} and~\ref{subsec:sparse_matrices}, we
discuss experimental results for dense and sparse SPD matrices, respectively.
\subsection{Software}
\label{subsec:software}
We developed high-quality, shared- and distributed-memory parallel C\kern-0.05em\texttt{+\kern-0.03em+}{}
code for the algorithms listed in this paper; Python bindings for our code are
in progress, and we hope to complete them soon.
All of the code that was developed for this paper (which continues to be
improved) is hosted at {\small\url{https://github.com/pkambadu/ApproxLogDet}}.
In its current state, our software supports:
(1) ingesting dense (binary and text format) matrices and sparse (binary,
text, and matrix market format) matrices,
(2) generating large random symmetric positive definite matrices,
(3) computing both approximate and exact spectral norms of matrices,
(4) computing both approximate and exact traces of matrix powers, and
(5) computing both approximate and exact log determinant of matrices.
Currently, we support both Eigen~\cite{eigenweb} and
Elemental~\cite{poulson2013elemental} matrices.
The Eigen package supports both dense and sparse matrices; Elemental strongly
supports dense matrices and only recently added (pre-release) support for
sparse matrices.
As we wanted the random SPD generation to be fast, we have used parallel
random number generators from Random123~\cite{salmon2011parallel} in
conjunction with Boost.Random.
\subsection{Environment}
\label{subsec:environment}
All experiments were run on ``Nadal'', a 60-core machine in which each
core is an Intel\textregistered{} Xeon\textregistered{} E7-4890 running
at 2.8 GHz.
Nadal has 1 terabyte of RAM and runs Linux kernel version 2.6-32.
For compilation, we used GCC 4.9.2.
We used Eigen 3.2.4, OpenMPI 1.8.4, Boost 1.55.7, and the latest version of
Elemental at {\small\url{https://github.com/elemental}}.
For experiments with Elemental, we used OpenBLAS, an extension of
GotoBLAS~\cite{goto2008high}, for its parallel performance; Eigen provides
built-in BLAS and LAPACK routines.
\subsection{Dense Matrices}
\label{subsec:dense_matrices}
\paragraph{Data Synthesis}
\noindent
In our experiments, we used two types of synthetic SPD matrices. The first
kind were diagonally dominant and were generated as follows.
First, we create $\mat{X}\in{}\mathbb{R}^{n\times{}n}$ by drawing $n^2$ entries from a
uniform sphere with center 0.5 and radius 0.25.
We generate symmetric $\mat{Y}$ by setting
$$\mat{Y}=\tfrac{1}{2}(\mat{X}+\mat{X}^\top).$$
Finally, we ensure that the desired matrix $\mat{A}$ is positive definite by adding $n$ to
each of the diagonal entries of $\mat{Y}$~\cite{curran2009variation}:
$$
\mat{A} = \mat{Y} + n \cdot \mat{I}_n.
$$
We call this method \code{randSPDDenseDD}.
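A minimal numpy sketch of this generator (the function name is ours, chosen for illustration); positive definiteness follows from the Gershgorin circle theorem, since each diagonal entry exceeds the sum of the off-diagonal magnitudes in its row:

```python
import numpy as np

def rand_spd_dense_dd(n, rng=None):
    """Diagonally dominant random SPD matrix, following the recipe above:
    symmetrize a uniform random matrix and shift the diagonal by n."""
    rng = np.random.default_rng(rng)
    # entries uniform in the sphere with center 0.5 and radius 0.25,
    # i.e., uniform on [0.25, 0.75]
    X = rng.uniform(0.25, 0.75, size=(n, n))
    Y = 0.5 * (X + X.T)          # symmetrize
    return Y + n * np.eye(n)     # add n to each diagonal entry

A = rand_spd_dense_dd(200, rng=0)
```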
\begin{sloppypar}
The second approach generates SPD matrices that are not diagonally dominant.
We create $\mat{X},\mat{D} \in{} \mathbb{R}^{n\times{}n}$ by drawing $n^2$ and $n$
entries, respectively, from a uniform sphere with center 0.5 and radius 0.25;
$\mat{D}$ is a diagonal matrix with small entries.
Next, we generate an orthogonal random matrix $\mat{Q} = \qr{\mat{X}}$; i.e.,
$\mat{Q}$ is an orthonormal basis for $\mat{X}$.
Finally, we generate
$$
\mat{A} = \mat{Q}\mat{D}{}\mat{Q}^\top{}.
$$
We call this method \code{randSPDDense}.
\code{randSPDDense} is more expensive than \code{randSPDDenseDD} as it requires
an additional $O(n^3)$ computations for the QR factorization and the
matrix-matrix product.
However, \code{randSPDDense} represents a more realistic scenario; therefore, we
use it for our smaller experiments.
\begin{table}
\center
\small
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$n$} &
\multicolumn{3}{|c|}{$\logdet{\mat{A}}$} &
\multicolumn{3}{|c|}{time (secs)} \\
\cline{2-7}
& exact & mean & std & exact & mean & std \\\hline
5000 & -3717.89 & -3546.920 & 8.10 & 2.56 & 1.15 & 0.0005 \\\hline
7500 & -5474.49 & -5225.152 & 8.73 & 7.98 & 2.53 & 0.0015 \\\hline
10000 & -7347.33 & -7003.086 & 7.79 & 18.07 & 4.47 & 0.0006 \\\hline
12500 & -9167.47 & -8734.956 & 17.43 & 34.39 & 7.00 & 0.0030 \\\hline
15000 & -11100.9 & -10575.16 & 15.09 & 58.28 & 10.39 & 0.0102 \\\hline
\end{tabular}
\caption{
Accuracy and sequential running times (at $p=60$, $m=4, t=\log{4n}$) for dense
random matrices generated using \code{randSPDDense}.
Baselines were computed using Cholesky factorization; mean and standard
deviation are reported for 10 iterations; each iteration reset the random
processes in Algorithm~\ref{alg1}.
}
\label{tbl:dense-abs}
\normalsize
\end{table}
\paragraph{Evaluation}
\noindent
To compare the runtime of Algorithm~\ref{alg1}, we use Cholesky decomposition
to compute a factorization and then use the diagonal elements of the factor to
compute $\logdet{\mat{A}}$; that is, we compute $\mat{L}=\chol{\mat{A}}$ and
$\logdet{\mat{A}} = 2\times{}\logdet{\mat{L}}$.
As Elemental provides distributed and shared memory parallelism, we restrict
ourselves to experiments with Elemental matrices throughout this section.
Note that we measure accuracy of the approximate algorithm in terms of
relative error to ensure that we have numbers of the same scale for matrices
with vastly different values for $\logdet{\mat{A}}$.
We define the relative error (in \%) as $e =
100\times{}\frac{x-\tilde{x}}{x}$, where $x$ is the true value and
$\tilde{x}$ is the approximation.
Similarly, we define the speedup as $s = t_x/t_{\tilde{x}}$,
where $t_x$ is the time to compute $x$ and $t_{\tilde{x}}$ is the time to
compute the approximation $\tilde{x}$; this definition of speedup is used
both for parallel speedup and for speedup resulting from the approximation.
\end{sloppypar}
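The Cholesky baseline described above amounts to the following numpy sketch (illustrative only; our actual experiments use Elemental): factor $\mat{A}=\mat{L}\mat{L}^\top$ and sum the logarithms of the diagonal of $\mat{L}$.

```python
import numpy as np

def logdet_cholesky(A):
    """Exact log-determinant of an SPD matrix via Cholesky:
    log det(A) = 2 * sum(log(diag(L))), where A = L L^T."""
    L = np.linalg.cholesky(A)
    return 2.0 * np.sum(np.log(np.diag(L)))

# small diagonally dominant SPD test matrix
rng = np.random.default_rng(1)
X = rng.uniform(0.25, 0.75, size=(100, 100))
A = 0.5 * (X + X.T) + 100 * np.eye(100)
exact = logdet_cholesky(A)
```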
\paragraph{Results}
\begin{figure*}[t]
\begin{center}
\subfigure[Accuracy v/s $m$]{%
\includegraphics[width=0.32\textwidth]{results/dense-dd-accuracy-m-p=0.png}
\label{fig:dense-dd-accuracy}
}
\subfigure[Speedup v/s $m$]{%
\includegraphics[width=0.32\textwidth]{results/dense-dd-speedup-m-p=0.png}
\label{fig:dense-dd-speedup-m}
}
\subfigure[Parallel Speedup]{%
\includegraphics[width=0.32\textwidth]{results/dense-dd-speedup-p-m=2.png}
\label{fig:dense-dd-speedup-np}
}
\end{center}
\caption{
Panels~\ref{fig:dense-dd-accuracy} and~\ref{fig:dense-dd-speedup-m} depict the
effect of $m$ (see Algorithm~\ref{alg1}) on the accuracy of the approximation
and the time to completion, respectively, for diagonally dominant dense random
matrices generated by \code{randSPDDenseDD}.
For all the panels, $p=60, t=\log{4n}$.
The baseline for all experiments was Cholesky factorization, which was used
to compute the exact value of $\logdet{\mat{A}}$.
For panels~\ref{fig:dense-dd-accuracy} and~\ref{fig:dense-dd-speedup-m}, the
number of cores, $np$, was set to 1.
The last panel~\ref{fig:dense-dd-speedup-np} depicts the relative speed of
the approximate algorithm when compared to the baseline (at $m=2$).
Elemental was used as the backend for these experiments.
For the approximate algorithm, we report the mean and standard deviation
of 10 iterations; each iteration used different random numbers.
}
\label{fig:dense-dd}
\end{figure*}
\begin{table}
\center
\small
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$n$} &
\multicolumn{3}{|c|}{$\logdet{\mat{A}}$} &
\multicolumn{3}{|c|}{time (secs)} \\
\cline{2-7}
& exact & mean & std & exact & mean & std \\\hline
10000 & 92103.1 & 92269.5 & 5.51 & 18.09 & 2.87 & 0.01 \\\hline
20000 & 198069.0 & 198397.4 & 9.60 & 135.92 & 12.41 & 0.02 \\\hline
30000 & 309268.0 & 309763.8 & 20.04 & 448.02 & 30.00 & 0.12 \\\hline
40000 & 423865.0 & 424522.4 & 14.80 & 1043.74 & 58.05 & 0.05 \\\hline
\end{tabular}
\caption{
Accuracy and sequential running times (at $p=60$, $m=2, t=\log{4n}$) for
diagonally dominant dense random matrices generated using
\code{randSPDDenseDD}.
Baselines were computed using Cholesky factorization; mean and standard
deviation are reported for 10 iterations; each iteration reset the random
processes in Algorithm~\ref{alg1}.
}
\label{tbl:dense-dd-abs}
\normalsize
\end{table}
For dense matrices, we exclusively used synthetic data generated using
both \code{randSPDDense} and \code{randSPDDenseDD}.
We first discuss the results of the relatively ``ill-conditioned'' matrices
with $n$, the number of rows and columns being sizes 5000, 7500, 10000, 12500,
and 15000 that were generated using \code{randSPDDense}.
The three key points pertaining to these matrices are shown in
Figure~\ref{fig:dense}.
First, we discuss the effect of $m$, the number of terms in the Taylor series
used to approximate $\logdet{\mat{A}}$; panel~\ref{fig:dense-accuracy} depicts
our results for the sequential case.
On the y-axis, we see the relative error, which is measured against the exact
$\logdet{\mat{A}}$ as computed using Cholesky factorization.
We see that for these ``ill-conditioned'' matrices, for small values of $m<4$,
the relative error (in \%) is high.
However, for all values of $m\ge 4$, we see that the error drops down
significantly and stabilizes.
Note that in each iteration, all random processes were re-seeded with new
values; we have plotted the error bars throughout Figure~\ref{fig:dense}.
However, the standard deviation in both accuracy and time was consistently
small; indeed, it is not visible to the naked eye at scale.
To see the benefit of approximation, we look at
Figure~\ref{fig:dense-speedup-m} together with Figure~\ref{fig:dense-accuracy}.
For example, at $m=4$, for all matrices, we get at least a factor of 2 speedup.
However, as can be seen, as $n$ gets larger, the speedups afforded by the
approximation also increase.
For example, for $n=15000$, the speedup at $m=4$ is nearly $6x$; from
panel~\ref{fig:dense-accuracy}, we see that at $m=4$, the relative error
between the exact and approximate quantities of $\logdet{\mat{A}}$ is around
$4\%$.
This speedup is expected as Cholesky factorization requires $O(n^3)$
operations; the approximation relies only on matrix-matrix products in which one
of the matrices has a small number of columns ($p$).
Notice that $p$ depends entirely on $\delta{}$ and $\varepsilon{}$, and is
consequently independent of $n$; this makes Algorithm~\ref{alg1} scalable.
Finally, we discuss the parallel speedup in panel~\ref{fig:dense-speedup-np},
which shows the relative speedup of the approximate algorithm with respect to
the baseline Cholesky algorithm.
For this, we set $m=4$ and varied the number of processes, $np$,
from $1$ to $60$.
The main take away from panel~\ref{fig:dense-speedup-np} is that the
approximate algorithm provides nearly the same or increasingly better speedups
relative to a parallelized version of the exact (Cholesky) algorithm.
For example, for $n=15000$, the speedups for using the approximate algorithm
are consistently better than $6.5x$.
The absolute values for $\logdet{\mat{A}}$ and timing along with the baseline
numbers for this experiment are given in Table~\ref{tbl:dense-abs}.
We report the numbers in Table~\ref{tbl:dense-abs} at $m=4$, at which point we
have low relative error.
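To make the roles of $m$ and $p$ concrete, the following Python sketch reconstructs the generic Taylor-series/stochastic-trace approach (a simplified stand-in, not the authors' exact Algorithm~\ref{alg1}): write $\mat{A}=\alpha(\mat{I}-\mat{C})$ for some $\alpha \ge \|\mat{A}\|_2$, so that $\logdet{\mat{A}} = n\log\alpha - \sum_{k\ge 1}\mathrm{tr}(\mat{C}^k)/k$, and estimate each trace with $p$ Rademacher probe vectors (Hutchinson's estimator).

```python
import numpy as np

def approx_logdet(A, m=30, p=60, rng=None):
    """Sketch of a Taylor-series / stochastic-trace log-determinant
    estimator for SPD A (a simplified reconstruction, not the paper's
    exact implementation)."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    alpha = np.linalg.norm(A, 1)        # 1-norm upper-bounds the top eigenvalue
    C = np.eye(n) - A / alpha           # eigenvalues of C lie in [0, 1)
    G = rng.choice([-1.0, 1.0], size=(n, p))   # Rademacher probes
    V = G.copy()
    est = n * np.log(alpha)
    for k in range(1, m + 1):
        V = C @ V                       # V = C^k G, built incrementally
        est -= np.einsum('ij,ij->', G, V) / (k * p)   # -tr(C^k)/k estimate
    return est

# well-conditioned, diagonally dominant SPD test matrix
rng = np.random.default_rng(2)
X = rng.uniform(0.25, 0.75, size=(300, 300))
A = 0.5 * (X + X.T) + 300 * np.eye(300)
approx = approx_logdet(A, m=30, p=60, rng=3)
```

In this picture, $m$ controls the truncation bias of the Taylor series while $p$ controls the variance of the trace estimates, which matches the trends observed above.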
For the second set of dense experiments, we generated diagonally dominant
matrices using \code{randSPDDenseDD}; we were able to quickly generate and
run benchmarks on matrices of $n=10000, 20000, 30000, 40000$ due to the
relatively simpler procedure involved in matrix generation.
In this set of experiments, due to the diagonal dominance, all matrices
were ``well-conditioned''.
The results of our experiments on these ``well-conditioned'' matrices are
presented in Figure~\ref{fig:dense-dd} and show a marked improvement over the
results presented in Figure~\ref{fig:dense}.
First, notice that very few terms of the Taylor series (i.e., small $m$) are
sufficient to get high accuracy approximations; this is apparent from
panel~\ref{fig:dense-dd-accuracy}.
In fact, we see that even at $m=2$, we are near convergence and at $m=3$, for
most of the matrices, we have near-zero relative error.
This experimental result, when taken together with
panel~\ref{fig:dense-dd-speedup-m} is encouraging; at $m=2$, we seem to not
only have near-exact approximation of $\logdet{\mat{A}}$, but also have at least
$5x$ speedup.
Like in Figure~\ref{fig:dense}, the speedups are better for larger matrices.
For example, for $n=40000$, the speedup at $m=2$ is nearly $20x$.
We conclude our analysis by presenting panel~\ref{fig:dense-dd-speedup-np},
which, like panel~\ref{fig:dense-speedup-np}, drives home the point that at any
level of parallelism, Algorithm~\ref{alg1} maintains its relative performance
over the exact (Cholesky) factorization.
The absolute values for $\logdet{\mat{A}}$ and timing along with the baseline
for this experiment are given in Table~\ref{tbl:dense-dd-abs}.
We report the numbers in Table~\ref{tbl:dense-dd-abs} at $m=2$, at which point
we have low relative error.
\subsection{Sparse Matrices}
\label{subsec:sparse_matrices}
\paragraph{Data Synthesis}
\noindent
To generate a sparse synthetic matrix $\mat{A}\in{}\mathbb{R}^{n\times{}n}$, with
$nnz$ non-zeros, we use a Bernoulli distribution to determine the location of
the non-zero entries and a uniform distribution to generate the values.
First, we completely fill all $n$ principal diagonal entries.
Next, we generate $\frac{nnz-n}{2}$ index positions in the upper triangle for
the non-zero entries by sampling from a Bernoulli distribution with
$p=\frac{nnz-n}{n^2-n}$.
We reflect each entry about the principal diagonal to ensure symmetry.
Finally, we add $n$ to each diagonal entry to ensure positive-definiteness.
This also makes the generated matrix diagonally dominant.
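A numpy sketch of this generator (the function name is ours; dense storage is used here purely for illustration, whereas our software uses sparse formats):

```python
import numpy as np

def rand_spd_sparse(n, nnz, rng=None):
    """Sparse SPD generator described above: Bernoulli draws choose the
    strictly-upper-triangular pattern, values are uniform, the pattern is
    reflected for symmetry, and n is added to each diagonal entry."""
    rng = np.random.default_rng(rng)
    prob = (nnz - n) / (n * n - n)      # Bernoulli parameter from the text
    upper = np.triu(rng.random((n, n)) < prob, k=1)   # strict upper pattern
    A = np.where(upper, rng.uniform(0.25, 0.75, (n, n)), 0.0)
    A = A + A.T                          # reflect about the diagonal
    np.fill_diagonal(A, rng.uniform(0.25, 0.75, n) + n)  # dominant diagonal
    return A

A = rand_spd_sparse(300, nnz=3000, rng=0)
```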
\paragraph{Real Data}
\noindent
To demonstrate the prowess of Algorithm~\ref{alg1} on real-world data, we
use some SPD matrices from the University of Florida's sparse matrix
collection~\cite{davis2011university}.
The complete list of matrices from this collection used in our experiments
along with a brief description is given in columns 1--4 of Table~\ref{tbl:ufl}.
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{name} &
\multirow{3}{*}{$n$} &
\multirow{3}{*}{$nnz$} &
\multirow{3}{*}{area of origin} &
\multicolumn{3}{|c|}{$\logdet{\mat{A}}$} &
\multicolumn{2}{|c|}{time (sec)} &
\multirow{3}{*}{$m$} \\ \cline{5-9}
& & & & \multirow{2}{*}{exact} &
\multicolumn{2}{|c|}{approx} &
\multirow{2}{*}{exact} &
approx & \\ \cline{6-7} \cline{9-9}
& & & & & mean & std & & mean & \\\hline
thermal2 & 1228045 & 8580313 & Thermal & 1.3869e6 & 1.3928e6 & 964.79 & 31.28 & 31.24 & 149 \\ \hline
ecology2 & 999999 & 4995991 & 2D/3D & 3.3943e6 & 3.403e6 & 1212.8 & 18.5 & 10.47 & 125 \\ \hline
ldoor & 952203 & 42493817 & Structural & 1.4429e7 & 1.4445e7 & 1683.5 & 117.91 & 17.60 & 33 \\ \hline
thermomech\_TC & 102158 & 711558 & Thermal & -546787 & -546829.4 & 553.12 & 57.84 & 2.58 & 77 \\ \hline
boneS01 & 127224 & 5516602 & Model reduction & 1.1093e6 & 1.106e6 & 247.14 & 130.4 & 8.48 & 125 \\ \hline
\end{tabular}
\caption{
Description of the SPD matrices from the University of Florida sparse
matrix collection~\cite{davis2011university} that were used in our experiments.
All experiments were run sequentially ($np=1$) using Eigen.
Accuracy for the approximation (Algorithm~\ref{alg1}) are the mean and standard
deviation of 10 iterations at $t=5$ and $p=5$; only the mean for the time is
reported as the standard deviation was negligible.
The exact $\logdet{\mat{A}}$ was computed using Cholesky factorization.
}
\label{tbl:ufl}
\end{table*}
\paragraph{Evaluation}
\noindent
It is tricky to pick any single method as the ``exact method'' for computing
$\logdet{\mat{A}}$ for a sparse SPD matrix $\mat{A}$.
One approach would be to use direct methods such as Cholesky decomposition of
$\mat{A}$~\cite{davis2006direct,gupta2000wsmp}.
For direct methods, it is difficult to derive an analytical solution
for the number of operations required for factorization as a function of the
number of non-zeros ($nnz$) as this is highly dependent on the structure of
the matrix~\cite{gupta1997highly}.
In case of distributed computing, one also needs to consider the volume of
communication involved, which is often the bottleneck; we omit a detailed
discussion for brevity.
Alternately, we can use iterative methods to compute the eigenvalues of
$\mat{A}$~\cite{davidson1975iterative} and use the eigenvalues to compute
$\logdet{\mat{A}}$.
It is clear that the worst case performance of both the direct and iterative
methods is $O(n^3)$.
However, iterative methods are typically used to compute a few eigenvalues and
eigenvectors: \textit{therefore, we use direct Cholesky factorization based on
matrix reordering to compute the exact value of $\logdet{\mat{A}}$}.
It is important to note that both the direct and iterative methods are
notoriously hard to implement, which contrasts starkly with the relatively
simple implementation of Algorithm~\ref{alg1}, which also parallelizes readily
in the case of sparse matrices.
However, we omit further discussion of parallelism in the sparse case for
brevity and discuss results for sequential performance.
\begin{figure}[t]
\begin{center}
\subfigure[Convergence with $m$]{%
\includegraphics[height=0.25\textwidth,width=0.45\textwidth]{results/sparse-convergence-m-p=0.png}
\label{fig:sparse-convergence-m}
}
\subfigure[Cost with $m$]{%
\includegraphics[height=0.25\textwidth,width=0.45\textwidth]{results/cost-of-sparse-m-p=0.png}
\label{fig:sparse-cost-m}
}
\end{center}
\caption{
Panels~\ref{fig:sparse-convergence-m} and~\ref{fig:sparse-cost-m} depict the
effect of the number of terms in the Taylor expansion, $m$, (see
Algorithm~\ref{alg1}) on the convergence to the final solution and the time to
completion of the approximation.
The matrix size was fixed at $n=10^6$ and sparsity was varied as $0.1\%,
0.25\%, 0.5\%, 0.75\%, \textnormal{ and } 1\%$.
Experiments were run sequentially ($np=1$) and we set $p=60$, $t=\log{4n}$.
For panel~\ref{fig:sparse-convergence-m}, the baseline is the final value of
$\logdet{\mat{A}}$ at $m=25$.
For panel~\ref{fig:sparse-cost-m}, the baseline is the time to completion of
approximation of $\logdet{\mat{A}}$ at $m=1$.
Eigen was used as the backend for these experiments.
}
\label{fig:sparse}
\end{figure}
\paragraph{Results}
\noindent
The true power of Algorithm~\ref{alg1} lies in its ability to approximate
$\logdet{\mat{A}}$ for sparse $\mat{A}$.
The exact method --- Cholesky --- can introduce $O(n^2)$ non-zeros during
factorization due to fill-in; for many problems, there is insufficient
memory to factorize a large but sparse matrix.
In our first set of experiments, we wanted to show the effect of $m$ on: (1)
convergence of $\logdet{\mat{A}}$ and (2) cost of the solution.
To this end, we generated sparse, diagonally dominant SPD matrices of size
$n=10^6$ and varied the sparsity over $0.1\%$, $0.25\%$, $0.5\%$, $0.75\%$,
and $1\%$.
We did not attempt to compute the exact $\logdet{\mat{A}}$ for these synthetic
matrices --- our aim was to merely study the speedup with $m$ for different
sparsities while $t$ and $p$ were held constant at $\log(4n)$ and $60$
respectively.
The results are shown in Figure~\ref{fig:sparse}.
Panel~\ref{fig:sparse-convergence-m} depicts the convergence of
$\logdet{\mat{A}}$ measured as relative error of the current estimate from the
final estimate.
As can be seen --- for well conditioned matrices --- convergence is quick.
Panel~\ref{fig:sparse-cost-m} shows the relative cost of increasing $m$; here
the baseline is $m=1$.
As can be seen, the additional cost incurred by increasing $m$ is linear when
all other parameters are held constant.
The results of running Algorithm~\ref{alg1} on the UFL matrices are shown in
Table~\ref{tbl:ufl}.\footnote{We tried to run a few larger SPD matrices
from the UFL collection (e.g., Flan\_1565, Hook\_1498, Geo\_1438, StocF-1465),
but we were unable to finish Cholesky factorization in a reasonable amount of
time when running sequentially.}
The numbers reported for the approximation are the mean and standard deviation
of 10 iterations at $t=5$ and $p=5$.\footnote{We experimented with different
values of $p$ and $t$ and settled on the smallest values that did not result
in a loss of accuracy.}
The value of $m$ was varied from 1 to 150 in increments of 5 to select the best
(mean) accuracy.
The matrices shown in Table~\ref{tbl:ufl} have favorable structure, which lends
itself to fill-reducing reorderings and therefore efficient Cholesky factorization.
We see that even in such cases, the performance of Algorithm~\ref{alg1} is
commendable due to the strong theoretical guarantees and the relatively lower
algorithmic complexity; \texttt{ldoor} is the only exception as the
approximation takes longer than the factorization.
In the case of \texttt{thermomech\_TC}, we achieve good accuracy while getting
a 22x speedup.
The most expensive operations in Algorithm~\ref{alg1} are the (sparse-)matrix
vector products, where the number of vectors is $p$, a parameter which is
independent of $n$.
Note that, for efficiency, we rewrite the $p$ matrix-vector products
as a single symmetric matrix-matrix product.
As we show, it is possible to improve performance of Algorithm~\ref{alg1} by
tuning $p$, $t$, and $m$.
\section{Introduction}
Given a matrix $\mat{A} \in \R^{n \times n},$ the determinant of $\mat{A}$, denoted by $\det(\mat{A})$, is one of the most important quantities associated with $\mat{A}$. Since its early use by Cardano in the 16th century and its formal development by Leibniz in the late 17th century, the determinant has been a fundamental mathematical concept with countless applications in numerical linear algebra and scientific computing. The advent of Big Data, which are often represented by matrices, has increased the applicability of algorithms that compute, exactly or approximately, matrix determinants; see, for example,~\cite{leithead2005efficient,zhang2008log, zhang2007approximate,d2008first,hsieh2013big} for machine learning applications~(e.g., Gaussian process regression) and~\cite{lesage2001spatial,kambadur2013parallel,friedman2008sparse,pace1997quick,pace2000method} for several data mining applications~(e.g., spatial-temporal time series analysis).
The first formal definition of the determinant for arbitrary square matrices $\mat{A}$ is often attributed to Leibniz.
Let $\pi$ be a permutation of the set $\{1,2,...,n\}$ and let $\Pi_n$ be the set of all $n!$ such permutations. Let
$ \text{sgn}(\pi)$ be the sign function of the permutation $\pi$, defined as\footnote{Here $N(\pi)$ denotes the number of inversions in $\pi$. Formally,
let $\pi = (\pi_1,\ldots,\pi_n)$ be a sequence of $n$ distinct numbers. If $i < j$ and $\pi_i > \pi_j$, then the pair $(i,j)$ is called an inversion of $\pi$.}
$ \text{sgn}(\pi) = (-1)^{N(\pi)},$ and let
$\mat{A}_{i,\pi_i}$ denote the $\left(i,\pi_i \right)$-th entry of $\mat{A}$ (here $\pi_i$ is the $i$-th entry of $\pi$). Using this notation,
$$ \det(\mat{A}) = \sum_{\pi \in \Pi_{n}} \left( \text{sgn}(\pi) \cdot \prod_{i=1}^n \mat{A}_{i,\pi_i} \right).$$
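For tiny matrices the Leibniz sum can be evaluated directly; the Python sketch below (illustrative only, since it performs $O(n \cdot n!)$ work) computes $\text{sgn}(\pi)$ by counting inversions exactly as in the footnote above:

```python
import itertools
import numpy as np

def det_leibniz(A):
    """Determinant via the Leibniz permutation sum; only sensible
    for very small matrices."""
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # sgn(pi) = (-1)^{N(pi)}, with N(pi) the number of inversions
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        prod = 1.0
        for i in range(n):
            prod *= A[i, perm[i]]
        total += (-1) ** inversions * prod
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
```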
A second formula for the determinant is attributed to Laplace and was derived in the latter part of the 18th century.
The Laplace (or co-factor) expansion is given by
$$ \det(\mat{A}) = \sum_{i=1}^n (-1)^{i+j} \cdot \mat{A}_{i,j} \cdot \det( \mat{C}^{[i*,*j]}).$$
In the above formula, $j$ is any index in the set $\{1,2,...,n\}$ and $\mat{C}^{[i*,*j]}$ denotes the
$\left(n-1\right) \times \left(n-1\right)$ matrix derived from $\mat{A}$ by removing
its $i$-th row and its $j$-th column. Notice that the Laplace formula is a recursive definition of the determinant.
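The recursion can be transcribed almost verbatim; the illustrative sketch below (with the standard cofactor signs $(-1)^{i+j}$ written out) expands along column $j$ and, like the Leibniz formula, runs in factorial time.

```python
def det_laplace(A, j=0):
    """Determinant via Laplace (cofactor) expansion along column j."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for i in range(n):
        # the (n-1) x (n-1) matrix obtained by deleting row i and column j of A
        minor = [[A[r][c] for c in range(n) if c != j] for r in range(n) if r != i]
        total += (-1.0) ** (i + j) * A[i][j] * det_laplace(minor)
    return total
```

Any choice of the column index $j$ gives the same value, matching the statement above.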
However, neither the Laplace nor the Leibniz formula can be used to design an efficient, polynomial-time, algorithm to compute the determinant of $\mat{A}$. To achieve this goal, one should rely on other properties of the determinant. Two well known such properties are connections between the determinant and the matrix eigenvalues, as well as the connection between the determinant and the so-called $LU$ matrix decomposition (or the Cholesky decomposition for symmetric matrices). More specifically, it is well known that
$$\det(\mat{A}) = \prod_{i=1}^n \lambda_i(\mat{A}),$$
where $\lambda_1(\mat{A}) \ge \lambda_2(\mat{A}) \ge ... \ge \lambda_n(\mat{A})$ are the eigenvalues of $\mat{A}$. This property implies an $O(n^3)$ deterministic algorithm to compute the determinant via the eigendecomposition of $\mat{A}$. Alternatively, one can leverage the $LU$ decomposition of $\mat{A}$, i.e., the fact that any matrix $\mat{A} \in \mathbb{R}^{n \times n}$ can (possibly after a row permutation, which only flips the sign of the determinant) be expressed as
$$ \mat{A} = \mat{L} \mat{U},$$
where $\mat{L} \in \R^{n \times n}$ is a lower triangular matrix with diagonal elements equal to one and $\mat{U} \in \R^{n \times n}$ is an upper triangular matrix. Using $\det( \mat{X} \mat{Y} ) = \det(\mat{X}) \det(\mat{Y})$ for any $\mat{X}, \mat{Y} \in \R^{n \times n}$ and the fact that the determinant of an upper or lower triangular matrix is equal to the product of its diagonal elements, we get
\begin{eqnarray*}
\det( \mat{A} ) = \det( \mat{L} \mat{U} )
&=& \det( \mat{L} ) \cdot \det(\mat{U}) \\
&=& \prod_{i=1}^n \mat{L}_{i,i} \cdot \prod_{i=1}^n \mat{U}_{i,i} \\
&=& \prod_{i=1}^n \mat{U}_{i,i}.
\end{eqnarray*}
Since the computation of the $LU$ decomposition takes $O(n^3)$ time, the above derivation implies an $O(n^3)$ deterministic algorithm to compute the determinant of $\mat{A}$.
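Since the paper focuses on SPD matrices, a natural exact baseline is the Cholesky factorization $\mat{A} = \mat{L}\mat{L}^{\textsc{T}}$ (the symmetric analogue of $LU$), which gives $\log\det(\mat{A}) = 2\sum_{i=1}^n \log \mat{L}_{i,i}$. A minimal sketch, assuming NumPy is available (natural logarithms throughout):

```python
import numpy as np

def logdet_cholesky(A):
    """Exact log det of an SPD matrix A via its Cholesky factor A = L L^T."""
    L = np.linalg.cholesky(A)            # lower triangular, O(n^3) time
    return 2.0 * np.sum(np.log(np.diag(L)))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 50))
A = X @ X.T + 50.0 * np.eye(50)          # SPD by construction
print(logdet_cholesky(A))
```

This cubic-time routine serves as the ground truth against which the randomized algorithm of this paper can be compared.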
In this paper, we are interested in approximating the logarithm of the determinant of a symmetric positive definite (SPD) matrix $\mat{A}$. (Recall that an SPD matrix is a symmetric matrix with strictly positive eigenvalues.) The logarithm of the determinant, instead of the determinant itself, is important in several settings~\cite{leithead2005efficient,zhang2008log, zhang2007approximate,d2008first,hsieh2013big,lesage2001spatial,kambadur2013parallel,friedman2008sparse,pace1997quick,pace2000method}.
\begin{definition}\textsc{[LogDet Problem definition]}\label{def:problem}
Given an SPD matrix $\mat{A} \in \R^{n \times n}$, compute (exactly or approximately) $\logdet{\mat{A}}$.
\end{definition}
The best exact algorithm for the above problem simply computes the determinant of $\mat{A}$ in cubic time and takes its logarithm. A few approximation algorithms have appeared in the literature, but they either lack a proper theoretical convergence analysis or do not work for all SPD matrices. We will discuss prior work in detail in Section~\ref{sec:related}.
\subsection{Our contributions: theory}
We present a fast approximation algorithm for the problem of Definition~\ref{def:problem}. Our algorithm (Algorithm~\ref{alg1}) is randomized and its running time is
$\mathcal{O}\left({ \rm nnz }(\mat{A})\left(m \varepsilon^{-2}+\log(n) \right)\right),$
where ${ \rm nnz }(\mat{A})$ denotes the number of non-zero elements in $\mat{A}$ and (integer) $m >0$ and (real) $\varepsilon > 0$ are user-controlled accuracy parameters that are specified in the input of the algorithm.
The first step of our approximation algorithm uses the power method to compute an approximation to the top eigenvalue of $\mat{A}$. This value will be used in a normalization~(preconditioning) step in order to compute a convergent matrix-Taylor expansion. The second step of our algorithm leverages a truncated matrix-Taylor expansion of a suitably constructed matrix in order to compute an approximation of the log determinant. This second step leverages randomized trace estimation algorithms~\cite{AT11}.
Let $\alogdet{\mat{A}}$ be the value returned by our approximation algorithm~(Algorithm~\ref{alg1}) and let $\logdet{\mat{A}}$ be the true log determinant of $\mat{A}$. Then, in Lemma~\ref{thm1} we prove that with constant probability,
\begin{equation}\label{eqn:mainresult}
\abs{ \alogdet{\mat{A}} - \logdet{\mat{A}} } \leq
\left(\varepsilon+\left(1-\gamma \right)^m\right) \cdot \Gamma,
\end{equation}
where
$$
\gamma = \frac{\lambda_{n}\left(\mat{A}\right)}{ \lambda_1(\mat{A})},
$$
and
$$
\Gamma = \sum_{i=1}^n \log\left( 5 \cdot \frac{\lambda_1(\mat{A})}{ \lambda_i(\mat{A}) }\right).
$$
We now take a more careful look at the above approximation bound,
starting with the term $\left(1-\gamma \right)^m$. For this term to be ``small'', the ratio $\gamma = \lambda_{n}(\mat{A})/\lambda_1(\mat{A})$ should be sufficiently large (that is, $\mat{A}$ should be well-conditioned), so that $\left(1-\gamma \right)^m$ converges to zero quickly as $m$ grows.
Formally, in order to guarantee $\left(1-\gamma \right)^m \le \varepsilon$ it suffices to set
\begin{equation}\label{eqn:boundm}
m \ge \log\left(\frac{1}{\varepsilon}\right) / \log\left( \frac{1}{1 - \frac{\lambda_{n}\left(\mat{A}\right)}{ \lambda_1(\mat{A})}} \right) = \Omega\left( \log\left(\frac{1}{\varepsilon}\right) \cdot \kappa\left(\mat{A} \right) \right),
\end{equation}
where
$\kappa(\mat{A}) = \lambda_1(\mat{A}) / \lambda_n(\mat{A})$ is the condition number of $\mat{A}$.
The error of our algorithm also scales with $\Gamma$, which cannot be immediately compared to $\logdet{\mat{A}}$, namely the quantity that we seek to approximate. It is worth noting that the $\Gamma$ term increases \textit{logarithmically} with respect to the ratios $\lambda_1(\mat{A})/\lambda_i(\mat{A}) \ge 1$; how large these ratios are depends also on the condition number of $\mat{A}$~(those ratios are ``small'' when the condition number is ``small''). For example, using majorization, one upper bound for this term is
$$
\Gamma = \sum_{i=1}^n \log\left( 5 \cdot \frac{\lambda_1(\mat{A})}{ \lambda_i(\mat{A}) }\right)
\le n \cdot \log\left( 5 \kappa(\mat{A}) \right).
$$
Note that the above upper bound is, in general, an overestimate of the error of our algorithm. We now state our main theorem.
\begin{theorem}\label{thm:MAINMAIN}
Fix some accuracy parameter $\tilde\varepsilon$ with $0 < \tilde\varepsilon < 1$.
Run Algorithm~\ref{alg1} on inputs $\mat{A},$
$$m = \Omega\left( \log\left(\frac{ \log\left( \kappa\left(\mat{A} \right) \right) }{\tilde\varepsilon}\right) \cdot \kappa\left(\mat{A} \right) \right),$$ and
$$
\varepsilon = \frac{ \tilde\varepsilon } { 2 \log\left( 5 \kappa(\mat{A}) \right) }.
$$
Let $\alogdet{\mat{A}}$ be the output of Algorithm~\ref{alg1}.
Then, with probability at least $3/16 - 0.01$,
$$
\abs{ \alogdet{\mat{A}} - \logdet{\mat{A}} } \leq \tilde\varepsilon \cdot n.
$$
The running time of the algorithm is
$$
\mathcal{O}\left(
{ \rm nnz }(\mat{A}) \cdot
\kappa\left(\mat{A} \right) \cdot
\log^2\left( \kappa(\mat{A}) \right) \cdot
\frac{1}{\tilde\varepsilon^{2}} \cdot
\log \left(\frac{1}{\tilde\varepsilon}\right) \cdot
\log(n)
\right).
$$
\end{theorem}
\subsection{Our contributions: practice}
We implemented our algorithm in \texttt{C++} and tested it on several large dense and sparse matrices. Our dense implementation runs on top of \texttt{Elemental}~\cite{poulson2013elemental}, a linear algebra library for distributed matrix computations with dense matrices. Our sparse implementation runs on top of \texttt{Eigen}~\footnote{\url{http://eigen.tuxfamily.org/}}, a software library for sparse matrix computations. This C++ implementation is accessible in both C++ and Python environments~(through Python bindings), and it is also available to download on Github
(see Section~\ref{subsec:environment} for more details).
\subsection{Related Work}\label{sec:related}
The most relevant result to ours is the work in~\cite{BP99}.
Barry and Pace~\cite{BP99} described a randomized
algorithm for approximating the logarithm of the determinant of a matrix with special structure that we will describe below.
They show that in order to approximate the logarithm of the determinant of a matrix
$\mat{A}$,
it suffices to approximate the traces of $\mat{D}^{k}$, for $k=1,2,3...$ for
a suitably constructed matrix $\mat{D}$. Specifically, \cite{BP99} deals with SPD matrices
$\mat{A}$ of the form
$\mat{A} = \mat{I}_n - \alpha \mat{D},$
where $0 < \alpha < 1$ and all eigenvalues of $\mat{D}$ are in the interval $\left[-1,1\right]$.
Given such a matrix $\mat{A}$, the authors of~\cite{BP99} seek to derive an estimator $\alogdet{\mat{A}}$ that is close to $\logdet{\mat{A}}$.
\cite{BP99} proved (using the so-called Martin expansion~\cite{Mar92}) that
$$ \log(\det(\mat{A})) = - \sum_{k=1}^{m} \frac{\alpha^k}{k} \text{\rm Tr}{\mat{D}^k} - \sum_{k=m+1}^{\infty} \frac{\alpha^k}{k} \text{\rm Tr}{\mat{D}^k}.$$
They considered the following estimator:
$$ \alogdet{\mat{A}} = \frac{1}{p}
\sum_{i=1}^{p}
\left(
\underbrace{
-n
\sum_{k=1}^{m}
\left(
\frac{\alpha^k}{ k} \frac{{\mathbf z}_i^{\textsc{T}} \mat{D}^k {\mathbf z}_i}{{\mathbf z}_i^{\textsc{T}} {\mathbf z}_i}
\right)}_{V_i}
\right).
$$
All $V_i$ for $i=1\ldots p$ are random variables and the value of $p$ controls the variance of the estimator. The algorithm in~\cite{BP99} constructs vectors ${\mathbf z}_i \in \R^n$ whose entries are independent identically distributed standard Gaussian random variables. The above estimator ignores the trailing terms
of the Martin expansion and only tries to approximate the first $m$ terms. \cite{BP99} presented the following approximation bound:
$$
\abs{ \alogdet{\mat{A}} - \logdet{\mat{A}} } \leq \frac{n \cdot \alpha^{m-1}}{(m+1)(1-\alpha)}
+ 1.96 \cdot \sqrt{ \frac{\sigma^2}{p} },
$$
where $\sigma^2$ is the variance of the random variable $V_i$. The above bound fails with probability at most $0.05$.
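Under the stated assumptions ($\mat{A} = \mat{I}_n - \alpha \mat{D}$ with $0 < \alpha < 1$ and the spectrum of $\mat{D}$ inside $[-1,1]$), the Barry--Pace estimator can be sketched as follows. This is our illustrative reconstruction (not the authors' code); the parameters $m$ and $p$ match the notation above, the seed argument is our addition, and logarithms are natural.

```python
import numpy as np

def barry_pace_logdet(D, alpha, m=30, p=100, seed=0):
    """Estimate log det(I - alpha * D) by averaging the p variables V_i."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    V = np.empty(p)
    for i in range(p):
        z = rng.standard_normal(n)
        v, acc = z.copy(), 0.0
        for k in range(1, m + 1):
            v = D @ v                                # v = D^k z
            acc += (alpha ** k / k) * (z @ v) / (z @ z)
        V[i] = -n * acc                              # the variable V_i
    return V.mean()
```

For $\mat{D} = c\,\mat{I}_n$ the ratio ${\mathbf z}^{\textsc{T}} \mat{D}^k {\mathbf z} / {\mathbf z}^{\textsc{T}} {\mathbf z}$ equals $c^k$ exactly, so the estimator recovers $n \ln(1 - \alpha c)$ up to the truncation error of the first $m$ terms.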
We now compare the results in~\cite{BP99} with ours.
First, the idea of using the Martin expansion~\cite{Mar92} to relate the logarithm of the determinant
and traces of matrix powers is present in both approaches. Second, the algorithm of~\cite{BP99} is applicable to SPD matrices that have special structure, while our algorithm is applicable to any SPD matrix. Intuitively, we overcome this limitation of~\cite{BP99} by estimating the top eigenvalue of the matrix in the first step of our algorithm. Third, our error bound is much better than the error bound of~\cite{BP99}. To analyze our algorithm, we used the theory of randomized trace estimators of Avron and Toledo~\cite{AT11}, which relies on powerful matrix-Chernoff bounds, while~\cite{BP99} uses the weaker Chebyshev's inequality.
A similar idea using Chebyshev polynomials appeared in the paper~\cite{pace2004chebyshev}; to our best understanding, there are no theoretical convergence properties of the proposed algorithm. Applications to Gaussian process regression appeared in~\cite{leithead2005efficient,zhang2008log, zhang2007approximate}. The work of~\cite{Reusken2002} uses an approximate matrix inverse to compute the $n$-th root of the determinant of $\mat{A}$ for large sparse SPD matrices; the error bounds in this work are a posteriori and thus not directly comparable to our bounds.
Recent work in~\cite{hunter2014computing} provides a strong worst-case theoretical result which is, however, only applicable to Symmetric Diagonally Dominant (SDD) matrices. The algorithm is randomized and guarantees that, with high probability,
$
\abs{ \alogdet{\mat{A}} - \logdet{\mat{A}} } \leq \varepsilon \cdot n,
$
for a user specified error parameter $\varepsilon > 0$. This approach also uses the Martin expansion~\cite{Mar92} as well as ideas from preconditioning systems of linear equations with Laplacian matrices~\cite{spielman2004nearly}. The algorithm of~\cite{hunter2014computing} runs in time
$O\left(
{ \rm nnz }(\mat{A}) \varepsilon^{-2} \log^3\left(n\right)
\log^2\left( n \kappa(\mat{A})/\varepsilon \right) \right).$
Compared to our approach, the above running time depends \textit{logarithmically} on the condition number of the input matrix $\mat{A}$, whereas our algorithm has a linear dependency on the condition number. Notice, however, that our method is applicable to any SPD matrix while the method in~\cite{hunter2014computing} is applicable only to SDD matrices; given the current state of the art on Laplacian preconditioners, it appears difficult to extend the approach of~\cite{hunter2014computing} to general SPD matrices.
We conclude by noting that the aforementioned algorithms for the determinant computation assume floating point arithmetic and do not measure bit operations. If the computational cost is to be measured in bit operations, the situation is much more complicated and an exact computation of the determinant, even for integer matrices, is not trivial. We refer the interested reader to~\cite{eberly2000computing} for more details.
\section{Preliminaries}
\noindent This section summarizes notation and prior work that we will use in the paper.
\subsection{Notation}
\noindent \math{\mat{A},\mat{B},\ldots} denote matrices; \math{{\mathbf a},{\mathbf b},\ldots} denote
column vectors. $\mat{I}_{n}$ is the $n \times n$
identity matrix; $\bm{0}_{m \times n}$ is the $m \times n$ matrix of zeros;
\text{\rm Tr}{\mat{A}} is the trace of a square matrix $\mat{A}$;
the Frobenius and the spectral matrix-norms are:
$ \FNormS{\mat{A}} = \sum_{i,j} \mat{A}_{ij}^2$
and $\TNorm{\mat{A}} = \max_{\TNorm{{\mathbf x}}=1}\TNorm{\mat{A}{\mathbf x}}$.
We denote the determinant of a matrix $\mat{A}$ by $\det(\mat{A})$ and the natural logarithm of the determinant of $\mat{A}$ by $\logdet{\mat{A}}$. All logarithms in the paper are natural logarithms, as required by the Taylor-series arguments used below.
\subsection{SPD matrices and eigenvalues}
Let $\mat{A}$ be an $n \times n$ symmetric matrix and let
$\lambda_{i}\left(\mat{A}\right)$
denote the $i$-th eigenvalue of $\mat{A}$ for all $i=1,\dots,n$ with
$$
\TNorm{\mat{A}} = \lambda_1(\mat{A})\ge\lambda_2(\mat{A})\ge\ldots\ge\lambda_n(\mat{A}) \ge 0.
$$
A symmetric matrix $\mat{A}$ is called symmetric positive definite (SPD) if all its eigenvalues are strictly positive (i.e., $\lambda_n(\mat{A}) > 0$).
\subsection{Matrix logarithm}
For an SPD matrix
$\mat{A} \in \R^{n \times n},$ $\logm{\mat{A}}$ is an $n \times n$ matrix defined as:
$$
\logm{\mat{A}} = \mat{U} \mat{D} \mat{U} ^{\textsc{T}},
$$
where $\mat{U} \in \R^{n \times n}$ contains the eigenvectors of $\mat{A}$
and $\mat{D} \in \R^{n \times n}$ is diagonal:
$$
\mat{D} =
\begin{pmatrix}
\log(\lambda_1(\mat{A})) & & & \\
& \log(\lambda_2(\mat{A})) & & \\
& & \ddots & \\
& & & \log(\lambda_n(\mat{A}))
\end{pmatrix}.
$$
\subsection{Matrix Taylor expansion}
Let $x$ be a scalar variable that satisfies $|x|<1$. Then,
$$\ln(1- x) = -\sum_{k=1}^{\infty} x^k / k.$$
A matrix-valued generalization of this identity is the following statement.
\begin{lemma}\label{lem:taylor}
Let $\mat{A} \in \R^{n \times n}$ be a symmetric matrix whose eigenvalues all lie in the interval $(-1,1)$. Then,
\begin{equation*}\label{eq:matrixlog}
\log(\mat{I}_n -\mat{A}) = -\sum_{k=1}^{\infty} \mat{A}^k / k.
\end{equation*}
\end{lemma}
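Lemma~\ref{lem:taylor} is easy to verify numerically: build a symmetric matrix with spectrum inside $(-1,1)$, compute $\log(\mat{I}_n - \mat{A})$ through the eigendecomposition, and compare with a truncated series. An illustrative sketch, assuming NumPy:

```python
import numpy as np

# Numerical sanity check of the matrix Taylor expansion: for symmetric A with
# spectrum inside (-1, 1), log(I - A) equals the (truncated) series -sum_k A^k / k.
n = 5
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(rng.uniform(-0.5, 0.5, n)) @ Q.T   # symmetric, spectrum in (-0.5, 0.5)

# left-hand side: matrix logarithm of I - A via the eigendecomposition
w, U = np.linalg.eigh(np.eye(n) - A)
lhs = U @ np.diag(np.log(w)) @ U.T

# right-hand side: truncated Taylor series (terms decay at least like 0.5^k)
rhs, term = np.zeros((n, n)), np.eye(n)
for k in range(1, 200):
    term = term @ A
    rhs -= term / k

print(np.allclose(lhs, rhs, atol=1e-10))  # True
```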
\subsection{Power method}\label{sec:power}
The first step in our algorithm for approximating the determinant of an SPD matrix is to obtain an estimate for the largest eigenvalue of the matrix. To achieve this we employ the so-called ``power method''. Given an SPD matrix $\mat{A} \in \R^{n \times n}$ we will use
Algorithm~\ref{alg:power} to obtain an accurate estimate of its largest eigenvalue. This estimated eigenvalue is denoted by $\tilde{\lambda}_1(\mat{A})$.
\begin{algorithm}[h!]
\begin{itemize}
\item Input: SPD matrix $\mat{A} \in \R^{n \times n},$ integer $t > 0$
\begin{enumerate}
\item Pick uniformly at random ${\mathbf x}_0 \in \{ +1, -1\}^{n} $
\item For $i=1,\dots,t$
\begin{itemize}
\item ${\mathbf x}_i = \mat{A} \cdot {\mathbf x}_{i-1}$
\item ${\mathbf x}_i = {\mathbf x}_i / \TNorm{{\mathbf x}_i}$
\end{itemize}
\item $\tilde{\lambda}_1(\mat{A}) = {\mathbf x}_t ^{\textsc{T}} \mat{A} {\mathbf x}_t $
\end{enumerate}
\item Return: $\tilde{\lambda}_1(\mat{A})$
\end{itemize}
\caption{Power method}\label{alg:power}
\end{algorithm}
\begin{algorithm}[h!]
\begin{itemize}
\item Input: SPD matrix $\mat{A} \in \R^{n \times n}$, accuracy parameter $0 < \varepsilon < 1,$ and failure probability $0 < \delta < 1$.
\begin{enumerate}
\item Let $p = 20 \ln(2/\delta) / \varepsilon^2$.
\item Let $\mathbf{g}_1,\mathbf{g}_2,\ldots, \mathbf{g}_p$ be a set of independent standard Gaussian vectors in $\mathbb{R}^n$.
\item Let $\gamma = 0$
\item For $i=1,\dots,p$
\begin{itemize}
\item $\gamma = \gamma + \mathbf{g}_i^\top \mat{A} \mathbf{g}_i$
\end{itemize}
\item $\gamma = \gamma / p$
\end{enumerate}
\item Return: $\gamma$
\end{itemize}
\caption{Randomized Trace Estimation}\label{alg:trace}
\end{algorithm}
Algorithm~\ref{alg:power} requires $O(t (n + nnz(\mat{A})))$ arithmetic operations to compute $\tilde{\lambda}_1(\mat{A})$. The following lemma\footnote{See \url{http://www.eecs.berkeley.edu/~luca/cs359g/lecture07.pdf} for a proof.} argues that, for a sufficiently large $t$, $\tilde{\lambda}_1(\mat{A})$ is ``close'' to $\lambda_1(\mat{A})$.
\begin{lemma}\label{lem:power}
For any $t > 0,$ $\varepsilon > 0$, with probability at least $3/16$:
$$ \lambda_1(\mat{A}) \cdot (1 - \varepsilon) \cdot \frac{1}{1 + 4n(1-\varepsilon)^t} \le \tilde{\lambda}_1(\mat{A}) \le \lambda_1(\mat{A}).$$
\end{lemma}
An immediate corollary to this lemma follows.
\begin{corollary}
Set $\varepsilon = 1/2$ and $t = \log (4n)$. Then, with probability at least $3/16$:
$\lambda_1(\mat{A}) \cdot \frac{1}{4} \le \tilde{\lambda}_1(\mat{A}) \le \lambda_1(\mat{A}). $
\end{corollary}
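Algorithm~\ref{alg:power} transcribes almost directly into code. The sketch below is illustrative (Python/NumPy assumed; the seed argument is our addition for reproducibility), and with $t = \log(4n)$ it matches the setting of the corollary above.

```python
import numpy as np

def power_method(A, t, seed=0):
    """Power method sketch: estimate lambda_1(A) for an SPD matrix A."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], size=A.shape[0])   # random sign vector x_0
    for _ in range(t):
        x = A @ x
        x = x / np.linalg.norm(x)
    return x @ (A @ x)                             # Rayleigh quotient estimate
```

Note that the returned Rayleigh quotient never exceeds $\lambda_1(\mat{A})$, consistent with the upper bound in Lemma~\ref{lem:power}.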
\subsection{Trace Estimation}\label{sec:trace}
Though computing the trace of a square $n \times n$ matrix requires only $O(n)$ arithmetic operations, the situation is more complicated when $\mat{A}$ is given through a matrix function,
e.g., $\mat{A} = \mat{X}^2,$ for some matrix $\mat{X}$ and the user only observes
$\mat{X}$. For situations such as these, Avron and Toledo~\cite{AT11} analyzed several algorithms to estimate the trace of
$\mat{A}$. The following lemma presents a relevant result from their paper.
\begin{lemma}[Theorem~5.2 in \cite{AT11}]\label{thm:trace}
Let $\mat{A} \in \R^{n \times n}$ be an SPD matrix, let $0<\varepsilon < 1$ be an accuracy parameter, and let $0<\delta<1$ be a failure probability. If $\mathbf{g}_1,\mathbf{g}_2,\ldots, \mathbf{g}_p \in\mathbb{R}^n$ are independent random standard Gaussian vectors, then for all $p>0$ with probability at least $1 - e^{-p \varepsilon^2 /20}$:
\begin{equation*}\label{eq:trApprox}
\abs{
\text{\rm Tr}{\mat{A}} - \frac1{p} \sum_{i=1}^{p} \mathbf{g}_i^\top \mat{A} \mathbf{g}_i
} < \varepsilon \cdot \text{\rm Tr}{\mat{A}}.
\end{equation*}
\end{lemma}
The lemma indicates that choosing
$p = 20 \ln(2/\delta) / \varepsilon^2$
and approximating the trace of $\mat{A}$ by
$\frac1{p} \sum_{i=1}^{p} \mathbf{g}_i^\top \mat{A} \mathbf{g}_i$ results in a relative error approximation with probability at least $1-\delta$.
Algorithm~\ref{alg:trace} is a detailed description of the above procedure.
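Algorithm~\ref{alg:trace} in code (an illustrative sketch, NumPy assumed): note that $p$ grows as $\varepsilon^{-2}$, so loose accuracy targets keep the number of quadratic forms affordable.

```python
import numpy as np

def trace_estimate(A, eps=0.25, delta=0.01, seed=0):
    """Gaussian trace estimator: average of p quadratic forms g^T A g."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    p = int(np.ceil(20.0 * np.log(2.0 / delta) / eps ** 2))
    total = 0.0
    for _ in range(p):
        g = rng.standard_normal(n)
        total += g @ (A @ g)          # one quadratic form per Gaussian vector
    return total / p
```

Each iteration touches $\mat{A}$ only through a matrix-vector product, which is what makes the estimator attractive when $\mat{A}$ is itself a matrix function.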
\section{Main Result}\label{sec:main}
Lemma~\ref{lem1} is the starting point of our main algorithm for approximating the determinant of a symmetric positive definite matrix. The message in the lemma is that
computing the log determinant of an SPD matrix $\mat{A}$ reduces to the
task of computing the largest eigenvalue of $\mat{A}$ and the trace of all the powers of a matrix $\mat{C}$ related to $\mat{A}$.
\begin{lemma}\label{lem1}
Let $\mat{A} \in \R^{n \times n}$ be an SPD matrix. For any $\alpha$ with
$\lambda_1(\mat{A}) < \alpha,$
define
\[\mat{B} := \mat{A} / \alpha \quad \text{and} \quad \mat{C} := \mat{I}_n - \mat{B}.
\]
Then,
\begin{equation*}\label{eq:summary}
\logdet{\mat{A}} = n \log(\alpha) -\sum_{k=1}^{\infty} \text{\rm Tr}{\mat{C}^k} / k.
\end{equation*}
\end{lemma}
\begin{proof}
Observe that $\mat{B}$ is an SPD matrix with $\TNorm{\mat{B}}<1$. It follows that
\begin{align*}
\logdet{\mat{A}}
&= \log( \alpha^n \det(\mat{A} /\alpha)) \\
&= n\log(\alpha) + \log (\prod_{i=1}^{n} \lambda_i(\mat{B})) \\
&= n\log(\alpha) + \sum_{i=1}^{n}\log (\lambda_i(\mat{B})) \\
&= n\log(\alpha) + \text{\rm Tr}{ \logm{\mat{B}}}.
\end{align*}
Here, we used standard properties of the determinant, standard properties of the logarithm function, and the fact that (recall that $\mat{B}$ is an SPD matrix),
$$\text{\rm Tr}{\logm{\mat{B}}} = \sum_{i=1}^{n} \lambda_i(\logm{\mat{B}}) = \sum_{i=1}^{n} \log(\lambda_i(\mat{B})).$$
Now,
\begin{align}\label{eq:trlog}
\text{\rm Tr}{ \logm{\mat{B}}}
&= \text{\rm Tr}{ \logm{\mathbf{I}_n - (\mathbf{I}_n - \mat{B})}} \\
&= \text{\rm Tr}{ -\sum_{k=1}^{\infty} (\mathbf{I}_n - \mat{B})^k / k} \\
& = -\sum_{k=1}^{\infty} \text{\rm Tr}{ \mat{C}^k} / k.
\end{align}
The second equality follows by Lemma~\ref{lem:taylor} because all the eigenvalues of $\mat{C} = \mathbf{I}_n - \mat{B}$ are contained\footnote{Indeed, $\lambda_i(\mat{C}) = 1 - \lambda_i(\mat{B})$ and $0<\lambda_i(\mat{B}) < 1 $ for all $i=1\ldots n$.} in $(0,1)$ and the last equality follows by the linearity of the trace operator.
\end{proof}
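Lemma~\ref{lem1} can be sanity-checked numerically with exact traces (no randomization yet). A sketch assuming NumPy, with the series truncated after enough terms that the geometric decay has converged:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 6))
A = X @ X.T + np.eye(6)                       # SPD, smallest eigenvalue >= 1
alpha = 1.1 * np.linalg.eigvalsh(A)[-1]       # any alpha > lambda_1(A) works

n = A.shape[0]
C = np.eye(n) - A / alpha
series, Ck = 0.0, np.eye(n)
for k in range(1, 2000):                      # ||C||_2 < 1, so terms decay geometrically
    Ck = Ck @ C
    series += np.trace(Ck) / k

print(np.allclose(n * np.log(alpha) - series, np.linalg.slogdet(A)[1]))  # True
```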
\subsection{Algorithm}
Lemma~\ref{lem1} indicates the following ``high-level''
procedure for computing the logdet of an SPD matrix $\mat{A}$:
\begin{enumerate}
\item Compute some $\alpha$ with $\lambda_1(\mat{A}) < \alpha$.
\item Compute $\mat{C} = \mat{I}_n - \mat{A} / \alpha$.
\item Compute the trace of \emph{all} the powers of $\mat{C}$.
\end{enumerate}
To implement the first step in this procedure we use the ``power iteration'' from the numerical linear algebra literature~(see Section~\ref{sec:power}). The second step is straightforward. To implement the third step,
we keep a finite number of summands in the expansion $\sum_{k=1}^{\infty} \text{\rm Tr}{ \mat{C}^k}$.
This step is important since the quality of the approximation, both theoretically and empirically, depends on the number of summands (denoted with $m$) that will be kept. On the other hand, the running time of the algorithm increases with $m$. Finally, to estimate the traces of the powers of $\mat{C}$, we use the randomized algorithm of Section~\ref{sec:trace}. Our approach is described in detail in Algorithm~\ref{alg1}.
\begin{algorithm}[h!]
\begin{algorithmic}[1]
\STATE {\bf{INPUT}}: $\mat{A} \in \R^{n \times n}$, accuracy parameter $\varepsilon > 0$, and integer $m >0$.
\STATE Compute $\tilde{\lambda}_1(\mat{A})$ using the method of Section~\ref{sec:power} with $t = \log(4n)$
\STATE Pick $\alpha = 5 \tilde{\lambda}_1(\mat{A})$ (notice that $\lambda_1(\mat{A}) < \alpha \le 5 \lambda_1(\mat{A})$)
\STATE Set $\mat{C} = \mat{I} - \mat{A} / \alpha$
\STATE Set $p = 20 \ln(200) / \varepsilon^2$
\STATE Let $\mathbf{g}_1,\mathbf{g}_2,\ldots, \mathbf{g}_p \in \mathbb{R}^n$ be i.i.d. random Gaussian vectors.
\STATE For {$i=1,2\ldots, p$}
\begin{itemize}
\item ${\mathbf v}_1^{(i)} = \mat{C} \mathbf{g}_i$ and $\gamma_1^{(i)} = \mathbf{g}_i^\top {\mathbf v}_1^{(i)}$
\item For {$k=2,\ldots , m$}
\begin{enumerate}
\item ${\mathbf v}_k^{(i)} : = \mat{C} {\mathbf v}_{k-1}^{(i)}$.
\item $\gamma_k^{(i)} = \mathbf{g}_i^\top {\mathbf v}_k^{(i)}$\
\text{(Inductively $\gamma_{k}^{(i)} = \mathbf{g}_i^\top \mat{C}^k \mathbf{g}_i$)}
\end{enumerate}
\item EndFor
\end{itemize}
\STATE EndFor
\STATE{\bf{OUTPUT}: $\alogdet{\mat{A}}$ as
$$\alogdet{\mat{A}} = n\log(\alpha) - \sum_{k=1}^{m} \left(\frac1{p}\sum_{i=1}^{p} \gamma_k^{(i)} \right) / k$$
}
\end{algorithmic}
\caption{Randomized Log Determinant Estimation}\label{alg1}
\small
\smallskip
\end{algorithm}
Notice that step $7$ in the above algorithm is an efficient way of
computing
$$
\alogdet{\mat{A}} := n\log(\alpha)- \sum_{k=1}^{m}
\left(
\frac{1}{p} \sum_{i=1}^{p} \mathbf{g}_i^\top \mat{C}^k \mathbf{g}_i
\right)/k.
$$
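Putting the pieces together, Algorithm~\ref{alg1} can be sketched as below (illustrative Python, NumPy assumed). For brevity the exact top eigenvalue stands in for the power-method estimate of step 2, which only makes the resulting $\alpha$ slightly different from the one the algorithm would compute.

```python
import numpy as np

def randomized_logdet(A, eps=0.25, m=100, seed=0):
    """Sketch of Algorithm 1: randomized estimate of log det(A) for SPD A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    alpha = 5.0 * np.linalg.eigvalsh(A)[-1]    # stand-in for 5 * power-method estimate
    C = np.eye(n) - A / alpha
    p = int(np.ceil(20.0 * np.log(200.0) / eps ** 2))
    total = 0.0
    for _ in range(p):
        g = rng.standard_normal(n)
        v = g.copy()
        for k in range(1, m + 1):
            v = C @ v                          # v = C^k g, one matvec per power
            total += (g @ v) / k               # accumulates gamma_k^{(i)} / k
    return n * np.log(alpha) - total / p
```

On a well-conditioned test matrix the estimate typically lands well inside the additive bound of Lemma~\ref{thm1}.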
\subsection{Running time} Step 2 takes $\mathcal{O}(\log(n) \cdot nnz(\mat{A}))$ time.
For each $k>0$, ${\mathbf v}_k = \mat{C}^{k} \mathbf{g}_i$. The algorithm inductively computes ${\mathbf v}_k$ and $\mathbf{g}_i^\top \mat{C}^{k}\mathbf{g}_i = \mathbf{g}_i^\top {\mathbf v}_k$ for all $k=1,2,\ldots, m$. Given ${\mathbf v}_{k-1}$, ${\mathbf v}_{k}$ and $\mathbf{g}_i^\top \mat{C}^{k} \mathbf{g}_i$ can be computed in $\mathcal{O}({ \rm nnz }(\mat{C}))$ and $\mathcal{O}(n)$ time, respectively. Notice that ${ \rm nnz }(\mat{C}) \leq n + { \rm nnz }(\mat{A})$. Therefore, step 7 requires
$
\mathcal{O}( p\cdot m \cdot { \rm nnz }(\mat{A}))
$
time. Since $p = O(\varepsilon^{-2}),$ the total cost is
$
\mathcal{O}( (m \varepsilon^{-2}+\log(n)) \cdot { \rm nnz }(\mat{A})).
$
\subsection{Error bound}
The following lemma proves that Algorithm~\ref{alg1} returns an accurate approximation to the logdet of $\mat{A}$.
\begin{lemma}\label{thm1}
Let $\alogdet{\mat{A}}$ be the output of Algorithm~\ref{alg1} on inputs $\mat{A},$ $m,$ and $\varepsilon$.
Then, with probability at least $3/16 - 0.01$,
$$
\abs{ \alogdet{\mat{A}} - \logdet{\mat{A}} } \leq
\left(\varepsilon+\left(1-\gamma \right)^m\right) \cdot \Gamma,
$$
where
$$
\gamma = \frac{\lambda_{n}\left(\mat{A}\right)}{ \lambda_1(\mat{A})},
$$
and
$$
\Gamma = \sum_{i=1}^n \log\left( 5 \cdot \frac{\lambda_1(\mat{A})}{ \lambda_i(\mat{A}) }\right).
$$
\end{lemma}
\begin{proof}
We can derive our error bound as follows:
\begin{align*}
\abs{ \alogdet{\mat{A}} - \logdet{\mat{A}} }
&= \abs{ \sum_{k=1}^{m} \left( \frac{1}{p} \sum_{i=1}^{p} \mathbf{g}_i^\top \mat{C}^k \mathbf{g}_i \right)/k - \sum_{k=1}^{\infty} \text{\rm Tr}{\mat{C}^k} / k} \\
&\le
\abs{ \sum_{k=1}^{m}
\left(
\frac{1}{p} \sum_{i=1}^{p} \mathbf{g}_i^\top \mat{C}^k \mathbf{g}_i
\right)/k -\sum_{k=1}^{m} \text{\rm Tr}{\mat{C}^k}/k }\\
&+
\abs{ \sum_{k=m+1}^{\infty} \text{\rm Tr}{\mat{C}^k}/k } \\
&=
\underbrace{\abs{ \frac{1}{p} \sum_{i=1}^{p}
\mathbf{g}_i^\top \left(\sum_{k=1}^{m} \mat{C}^k/k \right) \mathbf{g}_i - \text{\rm Tr}{\sum_{k=1}^{m} \mat{C}^k/k }}}_{\Gamma_1} \\
&+
\underbrace{\abs{ \sum_{k=m+1}^{\infty} \text{\rm Tr}{\mat{C}^k}/k}}_{\Gamma_2}.
\end{align*}
Below, we bound the two terms $\Gamma_1$ and $\Gamma_2$ separately. We start with $\Gamma_1$:
the idea is to apply Lemma~\ref{thm:trace} on the matrix $\sum_{k=1}^{m} \mat{C}^k/k$ with $\delta = 10^{-2}$ and
$p = 20 \ln(200) / \varepsilon^2$.
Hence, with probability at least $0.99$:
$$
\Gamma_1 \le \varepsilon \cdot \text{\rm Tr}{\sum_{k=1}^{m} \mat{C}^k/k} \le \varepsilon \cdot \text{\rm Tr}{\sum_{k=1}^{\infty} \mat{C}^k/k}.
$$
In the last inequality we used the fact that $\mat{C}$ is a positive definite matrix, hence for all $k$, $\text{\rm Tr}{\mat{C}^k} > 0$.
The second term $\Gamma_2$ is bounded as:
\begin{align*}
\Gamma_2 &= \abs{\sum_{k=m+1}^{\infty} \text{\rm Tr}{\mat{C}^k}/k } \\
&\le \sum_{k=m+1}^{\infty} \text{\rm Tr}{\mat{C}^k}/k \\
&= \sum_{k=m+1}^{\infty} \text{\rm Tr}{\mat{C}^m \cdot \mat{C}^{k-m}}/k \\
&\le \sum_{k=m+1}^{\infty} \TNorm{\mat{C}^m} \cdot \text{\rm Tr}{ \mat{C}^{k-m}}/k \\
&= \TNorm{\mat{C}^m} \cdot \sum_{k=m+1}^{\infty}\text{\rm Tr}{ \mat{C}^{k-m}}/k \\
&\le \TNorm{\mat{C}^{m}} \cdot \sum_{k=1}^{\infty} \text{\rm Tr}{\mat{C}^k}/k \\
&\le \left(1-\frac{\lambda_{n}\left(\mat{A}\right)}{\alpha}\right)^m \cdot \sum_{k=1}^{\infty} \text{\rm Tr}{\mat{C}^k}/k.
\end{align*}
In the first inequality, we used the triangle inequality and the fact that $\mat{C}$ is a positive definite matrix, so every term $\text{\rm Tr}{\mat{C}^k}/k$ is positive.
In the second inequality, we used the following fact\footnote{This is easy to prove using Von Neumann's trace inequality.}: given two positive semidefinite matrices $\mat{A},\mat{B}$ of the same size,
$
\text{\rm Tr}{\mat{A} \mat{B}} \leq \TNorm{\mat{A}} \cdot \text{\rm Tr}{\mat{B}}.
$
In the last inequality, we used the fact that
$$
\lambda_{1}(\mat{C}) = 1 - \lambda_{n}(\mat{B}) = 1 - \lambda_{n}(\mat{A}) /\alpha.
$$
The bound for $\Gamma_2$ is deterministic.
Combining the bounds for $\Gamma_1$ and $\Gamma_2$ gives that with probability at least $3/16-0.01$,
\begin{align*}
\abs{ \alogdet{\mat{A}} - \logdet{\mat{A}} } & \leq
\left(\varepsilon+\left(1-\frac{\lambda_{n}\left(\mat{A}\right)}{\alpha}\right)^m\right) \cdot \sum_{k=1}^{\infty} \frac{\text{\rm Tr}{\mat{C}^k}}{k}.
\end{align*}
We have already proven, in the proof of Lemma~\ref{lem1}~(see Eqn.~(\ref{eq:trlog})), that
$$ \sum_{k=1}^{\infty} \frac{\text{\rm Tr}{\mat{C}^k}}{k} = - \text{\rm Tr}{\logm{\mat{B}}} = n \log(\alpha) - \logdet{\mat{A}}.$$
We further manipulate the last term as follows:
\eqan{
n \log(\alpha) - \logdet{\mat{A}}
&=& n \log(\alpha) - \log ( \prod_{i=1}^n \lambda_i(\mat{A})) \\
&=& n \log(\alpha) - \sum_{i=1}^n \log (\lambda_i(\mat{A})) \\
&=& \sum_{i=1}^n \left( \log\left(\alpha\right) - \log\left(\lambda_i(\mat{A}) \right) \right) \\
&=& \sum_{i=1}^n \log\left( \frac{\alpha}{ \lambda_i(\mat{A}) }\right).
}
Collecting our results together, we get:
\begin{align*}
\abs{ \alogdet{\mat{A}} - \logdet{\mat{A}} } & \leq
\left(\varepsilon+\left(1-\frac{\lambda_{n}\left(\mat{A}\right)}{\alpha}\right)^m\right) \cdot \sum_{i=1}^n \log\left( \frac{\alpha}{ \lambda_i(\mat{A}) }\right) .
\end{align*}
Using the relation
$
\lambda_1(\mat{A}) < \alpha \le 5 \lambda_1(\mat{A}),
$
concludes the proof.
\end{proof}
| {
"timestamp": "2015-03-03T02:14:50",
"yymm": "1503",
"arxiv_id": "1503.00374",
"language": "en",
"url": "https://arxiv.org/abs/1503.00374",
"abstract": "We introduce a novel algorithm for approximating the logarithm of the determinant of a symmetric positive definite (SPD) matrix. The algorithm is randomized and approximates the traces of a small number of matrix powers of a specially constructed matrix, using the method of Avron and Toledo~\\cite{AT11}. From a theoretical perspective, we present additive and relative error bounds for our algorithm. Our additive error bound works for any SPD matrix, whereas our relative error bound works for SPD matrices whose eigenvalues lie in the interval $(\\theta_1,1)$, with $0<\\theta_1<1$; the latter setting was proposed in~\\cite{icml2015_hana15}. From an empirical perspective, we demonstrate that a C++ implementation of our algorithm can approximate the logarithm of the determinant of large matrices very accurately in a matter of seconds.",
"subjects": "Data Structures and Algorithms (cs.DS)",
"title": "A Randomized Algorithm for Approximating the Log Determinant of a Symmetric Positive Definite Matrix"
} |
https://arxiv.org/abs/1404.2321 | Distinct distance estimates and low degree polynomial partitioning | We give a shorter proof of a slightly weaker version of a theorem of Nets Katz and the author. We prove that if a set of $L$ lines in $\mathbb{R}^3$ contains at most $L^{1/2}$ lines in any low degree algebraic surface, then the number of $r$-rich points is at most $C_\epsilon L^{(3/2) + \epsilon} r^{-2}$. Nets and I used this estimate to prove a distinct distance estimate for points in the plane. With the slightly weaker theorem in this paper, we get a slightly weaker distinct distance estimate: any set of $N$ points in $\mathbb{R}^2$ determines at least $c_\epsilon N^{1 - \epsilon}$ distinct distances. | \section{Background and notation}
Our proof is based on polynomial partitioning. Here we restate the partitioning theorem with an extra condition bounding the number of cells $O_i$.
\begin{theorem} \label{polyham} For each dimension $n$ and each degree $D \ge 1$, the following holds. For any finite set $S \subset \mathbb{R}^n$, we can find a non-zero polynomial $P$ of degree at most $D$ so that $\mathbb{R}^n \setminus Z(P)$ is a union of disjoint open sets $O_i$ obeying the following:
\begin{itemize}
\item For each $i$, $|S \cap O_i | \le C_n D^{-n} |S|$.
\item The number of open sets $O_i$ is at most $C_n D^n$.
\end{itemize}
\end{theorem}
\begin{proof} The first claim is Theorem 4.1 in \cite{GK2}. So we just need to prove the second claim.
The number of connected components of the complement $\mathbb{R}^n \setminus Z(P)$ is at most $C_n D^n$, by estimates proven independently by Oleinik-Petrovsky \cite{OP}, Milnor \cite{Mi}, and Thom \cite{Th}. A short proof was also given by Solymosi and Tao, as Theorem A.1 in their paper \cite{SolTao}. This implies the second claim.
However, we don't need to appeal to these results. The statement of the theorem does not require that each open set $O_i$ is connected. By Theorem 4.1 of \cite{GK2}, we can write $\mathbb{R}^n \setminus Z(P)$ as a union of open sets $U_j$ with $|S \cap U_j | \le C_n D^{-n} |S|$. We can then define each $O_i$ to be a union of some of the $U_j$ so that each $O_i$ contains $\le C_n D^{-n} |S|$ points of $S$ and the number of sets $O_i$ is at most $C_n D^n$. \end{proof}
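The merging step at the end of this proof is a simple first-fit grouping. Here is an illustrative sketch (the function name and toy data are our own, not from \cite{GK2}):

```python
def merge_cells(counts, cap):
    """Group components U_j into sets O_i so that each O_i carries at
    most `cap` points of S. Assumes each individual count is <= cap,
    which is what Theorem 4.1 of [GK2] guarantees."""
    groups, totals = [], []
    for j, c in enumerate(counts):
        # First-fit: put U_j into the first group that still has room.
        for g, t in enumerate(totals):
            if t + c <= cap:
                groups[g].append(j)
                totals[g] += c
                break
        else:
            groups.append([j])
            totals.append(c)
    return groups

# Toy example: 10 components, at most 5 points of S allowed per cell O_i.
counts = [3, 2, 5, 1, 1, 4, 2, 2, 3, 1]
groups = merge_cells(counts, 5)
```

With first-fit, at most one group ends up less than half full, so the number of groups stays within a constant factor of (total points)/cap, matching the $C_n D^n$ count in the theorem.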
We will also need a version of the B\'ezout theorem. The simplest version of the B\'ezout theorem is the following.
\begin{theorem} \label{bezout} If $P, Q$ are non-zero polynomials in $\mathbb{R}[x_1, x_2]$ with no common factor, then $Z(P) \cap Z(Q) \subset \mathbb{R}^2$ contains at most $(\Deg P) (\Deg Q)$ points.
\end{theorem}
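As a toy illustration of the bound (our own example, not part of any proof here): take $P = x^2 + y^2 - 1$ of degree $2$ and $Q = x - y$ of degree $1$, which have no common factor, so $Z(P) \cap Z(Q)$ should have at most $2 \cdot 1 = 2$ points.

```python
import math

# On the line x = y, the circle equation reduces to 2x^2 - 1 = 0,
# so the intersection consists of exactly two points.
xs = [math.sqrt(0.5), -math.sqrt(0.5)]
points = [(x, x) for x in xs]

# Each point satisfies both equations, and the count respects Bezout:
for (x, y) in points:
    assert abs(x**2 + y**2 - 1) < 1e-12 and abs(x - y) < 1e-12
assert len(points) <= 2 * 1   # (deg P) * (deg Q)
```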
We need a version of this theorem for polynomials in three variables where we count the number of lines in $Z(P) \cap Z(Q)$.
\begin{theorem} \label{bezoutlines} If $P, Q$ are non-zero polynomials in $\mathbb{R}[x_1, x_2, x_3]$ with no common factor, then $Z(P) \cap Z(Q)$ contains at most $(\Deg P) (\Deg Q)$ lines.
\end{theorem}
Proofs of these classical results appear in \cite{GK}. They are Corollaries 2.3 and 2.4. See also Section 2 of \cite{EKS} for a proof of Theorem \ref{bezoutlines} and a review of related material. A more general version of Theorem 1.2 can be found in van der Waerden's book {\it Modern Algebra}, \cite{VW}, Volume 2, page 16.
We will also use the Szemer\'edi-Trotter theorem, which we record here in the following form:
\begin{theorem} \label{szemtrot} (\cite{SzTr}) If $\frak L$ is a set of $L$ lines in $\mathbb{R}^n$, then
$$ |P_r(\frak L) | \le C \left( L^2 r^{-3} + L r^{-1} \right). $$
\end{theorem}
There are several nice proofs of the Szemer\'edi-Trotter theorem that have appeared since the original article. In \cite{CEGSW}, Clarkson et al. gave a proof using the method of cuttings. In \cite{Sz}, Sz\'ekely gave a proof using the crossing number lemma. In \cite{KMS}, Kaplan, Matou\v{s}ek, and Sharir gave a proof using the polynomial partitioning theorem. Their proof is closely related to the ideas in this paper.
We end with a note on constants. We will use $C$ to denote a constant that may change from line to line. If we want to label a particular constant to refer to later, we will call it $C_1, C_2,$ etc.
\section{A stronger result for inductive purposes}
We will prove Theorem \ref{mainincid} by induction. To make the induction work, we prove a slightly stronger result. The stronger result says that for any set of lines $\frak L$ in $\mathbb{R}^3$, there is a small set of low degree surfaces that account for all but $\sim L^{(3/2) + \epsilon} r^{-2}$ of the $r$-rich points of $\frak L$.
To state our theorem we need a piece of notation. If $\frak L$ is a set of lines and $Z$ is an algebraic surface, we define $\frak L_Z \subset \frak L$ to be the set of lines of $\frak L$ that lie in $Z$.
\begin{theorem} \label{clusterinZr} For any $\epsilon > 0$, there are $D(\epsilon)$, and $K(\epsilon)$ so that the following holds.
For any $ r \ge 2$, let $r' = \lceil (9/10) r \rceil$, the least integer which is at least $(9/10) r$.
If $\frak L$ is a set of $L$ lines in $\mathbb{R}^3$, and if $2 \le r \le 2 L^{1/2}$, then there is a set $\mathcal{Z}$ of algebraic surfaces so that
\begin{itemize}
\item Each surface $Z \in \mathcal{Z}$ is an irreducible surface of degree at most $D$.
\item Each surface $Z \in \mathcal{Z}$ contains at least $L^{(1/2) + \epsilon}$ lines of $\frak L$.
\item $|\mathcal{Z}| \le 2 L^{(1/2) - \epsilon}$.
\item $| P_r(\frak L) \setminus \cup_{Z \in \mathcal{Z}} P_{r'}(\frak L_Z) | \le K L^{(3/2) + \epsilon} r^{-2}$.
\end{itemize}
\end{theorem}
Theorem \ref{clusterinZr} implies Theorem \ref{mainincid}. If there are less than $L^{(1/2) + \epsilon}$ lines of $\frak L$ in any irreducible algebraic surface of degree at most $D$, then the set $\mathcal{Z}$ must be empty, and so Theorem \ref{clusterinZr} implies that $|P_r(\frak L)| \le K L^{(3/2) + \epsilon} r^{-2}$.
In our theorems above, we always assumed that $r \le 2 L^{1/2}$. Studying $r$-rich points for $r > 2 L^{1/2}$ is much simpler. We recall the following elementary estimate, which will also be useful in our proof.
\begin{prop} \label{bigr}
If $\frak L$ is a set of $L$ lines in $\mathbb{R}^d$ for $d \ge 2$, and if $r > 2 L^{1/2}$, then $|P_r(\frak L)| \le 2 L r^{-1}$.
\end{prop}
We include the well-known proof here, because it is a model for a different proof below.
\begin{proof} Let $P_r(\frak L)$ be $\{ x_1, x_2, ..., x_M \}$, with $M = |P_r(\frak L)|$. Now $x_1$ lies in at least $r$ lines of $\frak L$. The point $x_2$ lies in at least $(r-1)$ lines of $\frak L$ that did not contain $x_1$. More generally, the point $x_j$ lies in at least $r- (j-1)$ lines of $\frak L$ that did not contain any of the previous points $x_1, ..., x_{j-1}$. Therefore, we have the following inequality for the total number of lines:
$$ L \ge \sum_{j=1}^M \max( r - j, 0). $$
If $M \ge r/2$, then we would get $L \ge (r/2) (r/2) = r^2/4$. But by hypothesis, $r > 2 L^{1/2}$, giving a contradiction. Therefore, $M < r/2$, and we get $L \ge M (r/2)$ which proves the proposition. \end{proof}
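The bound in Proposition \ref{bigr} can be checked on a concrete arrangement. A small exact-arithmetic sketch (the construction below, a pencil of concurrent lines, is our own toy example):

```python
from fractions import Fraction
from itertools import combinations

def rich_points(lines, r):
    """Points lying on at least r of the given lines. A line (a, b, c)
    means a*x + b*y = c; exact rational arithmetic avoids rounding."""
    pts = set()
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if det:  # skip parallel pairs
            pts.add((Fraction(c1 * b2 - c2 * b1, det),
                     Fraction(a1 * c2 - a2 * c1, det)))

    def on(p, l):
        return l[0] * p[0] + l[1] * p[1] == l[2]

    return [p for p in pts if sum(on(p, l) for l in lines) >= r]

# 9 lines: a pencil of 7 through the origin (x + k*y = 0), plus x = 1, y = 3.
lines = [(1, k, 0) for k in range(7)] + [(1, 0, 1), (0, 1, 3)]
L, r = len(lines), 7          # note r = 7 > 2 * sqrt(L) = 6
P_r = rich_points(lines, r)
```

Here $L = 9$ and $r = 7 > 2\sqrt{L}$, and the only $7$-rich point is the origin, consistent with $|P_r(\frak L)| \le 2 L r^{-1}$.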
\section{Proof of Theorem \ref{clusterinZr}}
Here is an outline of our proof. We will use induction on the number of lines in $\frak L$.
First, we use a low degree polynomial partitioning argument to cut $\mathbb{R}^3$ into cells $O_i$. For each cell, we use induction to study the lines of $\frak L$ that enter that cell. For each cell, we get a set of surfaces $\mathcal{Z}_i$ that accounts for all but a few of the $r$-rich points in $O_i$. Combining these surfaces with the polynomial partitioning surface, we will get a large set of surfaces $\tilde {\mathcal{Z}}$ with the following properties:
\begin{itemize}
\item Each surface $Z \in \tilde {\mathcal{Z}}$ is an irreducible algebraic surface of degree at most $D$.
\item $| \tilde{\mathcal{Z}} | \le \Poly(D) L^{(1/2) - \epsilon} \log L$.
\item $| P_r(\frak L) \setminus \cup_{Z \in \tilde{\mathcal{Z}}} P_{r'}(\frak L_Z) | \le (1/100) K L^{(3/2) + \epsilon} r^{-2}$.
\end{itemize}
(We write $A \le \Poly(D) B$ to mean that there is an exponent $p$ and a constant $C$ so that $A \le C D^p B$.)
This set of surfaces $\tilde {\mathcal{Z}}$ does not close the induction. There are too many surfaces in $\tilde{\mathcal{Z}}$, and we don't know that each surface contains $L^{(1/2) + \epsilon}$ lines of $\frak L$. The second step is to prune $\tilde{\mathcal{Z}}$. We will define
$$ \mathcal{Z} := \{ Z \in \tilde{\mathcal{Z}} | Z \textrm{ contains at least } L^{(1/2) + \epsilon} \textrm{ lines of } \frak L \}. $$
\noindent Then we will check that $\mathcal{Z} $ satisfies the conclusions of the theorem. First, we will prove that $|\mathcal{Z}| \le 2 L^{(1/2) - \epsilon}$. This follows from a simple counting argument, similar to the proof of Proposition \ref{bigr} above.
Second, we will check that the surfaces in $\tilde{\mathcal{Z}} \setminus \mathcal{Z}$ did not contribute too much to controlling the $r$-rich points of $\frak L$. More precisely we will prove that
$$ \sum_{Z \in \tilde{\mathcal{Z}} \setminus \mathcal{Z}} |P_{r'}( \frak L_Z) | \le (1/100) K L^{(3/2) + \epsilon} r^{-2}. $$
\noindent To prove this bound, we use Szemer\'edi-Trotter to bound the size of $P_{r'}(\frak L_Z)$ in terms of $|\frak L_Z|$ for each surface $Z \in \tilde{\mathcal{Z}} \setminus \mathcal{Z}$, and we use a simple counting argument to control how many surfaces $Z$ have large $|\frak L_Z|$. This finishes our outline. Now we begin the proof of Theorem \ref{clusterinZr}.
We remark that if $\epsilon \ge 1/2$ then the theorem is trivial: we can take $\mathcal{Z}$ to be empty, and it is easy to check that $|P_r(\frak L)| \le 2 L^2 r^{-2}$. (This follows from Szemer\'edi-Trotter, which gives a stronger estimate. But it also follows from a simple double-counting argument.) So we can assume that $\epsilon \le 1/2$.
We start by discussing how to choose $D = D(\epsilon)$ and $K = K(\epsilon)$. We will choose $D$ a large constant depending on $\epsilon$ and then we will choose $K$ a large constant depending on $\epsilon$ and $D$. As long as these are large enough at certain points in the proof, the argument goes through. For example, we will choose $K$ large enough that
\begin{equation} \label{klarge}
K \ge 10 (2 D)^{2 / \epsilon}.
\end{equation}
The proof is by induction on $L$. We start by checking the base of the induction. Because of equation \ref{klarge}, we claim the theorem holds when $L^\epsilon \le 2D$. Suppose that $\frak L$ is a set of $L$ lines with $L^\epsilon \le 2 D$, and that $2 \le r \le 2 L^{1/2}$. We choose $\mathcal{Z}$ to be the empty set. Using equation \ref{klarge}, we see that
$$ |P_r (\frak L)| \le L^2 \le (2D)^{2/\epsilon} \le K / 10 \le K L^{(3/2) + \epsilon} r^{-2}. $$
We have now established the base of the induction. By the inductive hypothesis, we can assume that the theorem holds for sets of at most $L/2$ lines.
\subsection{Building $\tilde{\mathcal{Z}}$}
Let $S$ be any subset of $P_r(\frak L)$. An important case is $S = P_r(\frak L)$, but we will have to consider other sets as well. We use Theorem \ref{polyham} to do a polynomial partitioning of the set $S$ with a polynomial of degree at most $D$. The polynomial partitioning theorem, Theorem \ref{polyham}, says that there is a non-zero polynomial $P$ of degree at most $D$ so that
\begin{itemize}
\item $\mathbb{R}^3 \setminus Z(P)$ is the union of at most $C D^3$ disjoint open cells $O_i$, and
\item for each cell $O_i$, $|S \cap O_i | \le C D^{-3} |S|$.
\end{itemize}
We define $\frak L_i \subset \frak L$ to be the set of lines from $\frak L$ that intersect the open cell $O_i$. We note that $S \cap O_i \subset P_r(\frak L_i)$. If a line does not lie in $Z(P)$, then it can have at most $D$ intersection points with $Z(P)$, which means that it can enter at most $D+1$ cells $O_i$. So each line of $\frak L$ intersects at most $D+1$ cells $O_i$. Therefore, we get the following inequality:
\begin{equation} \label{totalgammair}
\sum_i |\frak L_i| \le (D+1) L \le 2 D L.
\end{equation}
Let $\beta > 0$ be a large parameter that we will choose below. We say that a cell $O_i$ is $\beta$-good if
\begin{equation}\label{betagoodr}
|\frak L_i| \le \beta D^{-2} L.
\end{equation}
The number of $\beta$-bad cells is at most $2 \beta^{-1} D^3$. Each cell contains at most $C D^{-3} |S|$ points of $S$. Therefore, the bad cells all together contain at most $C \beta^{-1} |S|$ points of $S$. We now choose $\beta$ so that $C \beta^{-1} \le 1/100$; note that $\beta$ is an absolute constant, independent of $\epsilon$. We then have the following estimate:
\begin{equation}\label{badcellsboundr}
\textrm{The union of the bad cells contains at most $(1/100) |S|$ points of $S$.}
\end{equation}
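The count of bad cells is a Markov-type estimate: since $\sum_i |\frak L_i| \le 2 D L$, at most $2 \beta^{-1} D^3$ cells can exceed the threshold $\beta D^{-2} L$. A toy numerical sketch (the values below are our own):

```python
def count_bad_cells(cell_counts, beta, D, L):
    """Cells whose line count exceeds beta * D**-2 * L. Since the counts
    sum to at most 2*D*L (each line enters at most D+1 cells), Markov's
    inequality caps the number of such cells at 2 * beta**-1 * D**3."""
    thresh = beta * L / D ** 2
    return sum(1 for c in cell_counts if c > thresh)

# Toy data with D = 3, L = 90: the counts sum to 540 = 2 * D * L.
cell_counts = [120, 120, 100, 60, 40, 40, 30, 20, 10]
beta, D, L = 6, 3, 90
n_bad = count_bad_cells(cell_counts, beta, D, L)   # threshold is 60
```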
For each good cell $O_i$, we apply induction to understand $\frak L_i$. By choosing $D$ sufficiently large, we can guarantee that for each good cell, $| \frak L_i | \le (1/2) L$. Now there are two cases, depending on whether $r \le 2 |\frak L_i|^{1/2}$.
If $r \le 2 |\frak L_i|^{1/2}$, then we can apply the inductive hypothesis. In this case, we see that there is a set $\mathcal{Z}_i$ of irreducible algebraic surfaces of degree at most $D$ with the following two properties:
\begin{equation}\label{sizeofgamma_ir}
| \mathcal{Z}_i| \le 2 |\frak L_i|^{(1/2) - \epsilon} \le 2 (\beta D^{-2} L)^{(1/2) - \epsilon}.
\end{equation}
Because $S \cap O_i \subset P_r(\frak L_i)$, we also get:
\begin{equation}\label{missedpointsinO_ir}
| (S \cap O_i) \setminus \cup_{Z \in \mathcal{Z}_i} P_{r'}( \frak L_Z) | \le K |\frak L_i|^{(3/2) + \epsilon} r^{-2} \le K (\beta D^{-2} L)^{(3/2) + \epsilon} r^{-2} \le C_1 K D^{-3 - 2 \epsilon} L^{(3/2) + \epsilon} r^{-2}.
\end{equation}
On the other hand, if $r > 2 |\frak L_i|^{1/2}$, then we define $\mathcal{Z}_i$ to be empty, and Proposition \ref{bigr} gives the bound
\begin{equation}\label{missedpointsinO_irbigr}
|S \cap O_i| \le |P_r( \frak L_i) | \le 2 |\frak L_i| r^{-1} \le 2 L r^{-1} \le 4 L^{3/2} r^{-2}.
\end{equation}
By choosing $K$ sufficiently large compared to $D$, we can arrange that $4 L^{3/2} r^{-2} \le C_1 K D^{-3 - 2 \epsilon} L^{(3/2) + \epsilon} r^{-2}$. Therefore, inequality \ref{missedpointsinO_ir} holds for the good cells with $r > 2 |\frak L_i|^{1/2}$ as well as the good cells with $r \le 2 |\frak L_i|^{1/2}$.
We sum this inequality over all the good cells:
$$ \sum_{O_i \textrm{ good}} |(S \cap O_i) \setminus \cup_{Z \in \mathcal{Z}_i} P_{r'}( \frak L_Z) | \le C D^3 \cdot C_1 K D^{-3 - 2 \epsilon} L^{(3/2) + \epsilon} r^{-2} \le C_2 D^{-2 \epsilon} K L^{(3/2) + \epsilon} r^{-2} . $$
We choose $D(\epsilon)$ large enough so that $C_2 D^{-2 \epsilon} \le (1/400)$. Therefore, we get the following:
\begin{equation}\label{goodcellsboundr} \sum_{O_i \textrm{ good}} |(S \cap O_i) \setminus \cup_{Z \in \mathcal{Z}_i} P_{r'}( \frak L_Z) | \le (1/400) K L^{(3/2) + \epsilon} r^{-2}.
\end{equation}
We have studied the points of $S$ in the good cells. Next we study the points of $S$ in the zero set of the partitioning polynomial $Z(P)$. Let $Z_j$ be an irreducible component of $Z(P)$. If $x \in S \cap Z_j$, but $x \notin P_{r'}(\frak L_{Z_j})$, then $x$ must be contained in at least $r/10$ lines of $\frak L \setminus \frak L_{Z_j}$. Each line of $\frak L$ that is not contained in $Z_j$ has at most $\Deg(Z_j)$ intersection points with $Z_j$. Therefore,
$$| (S \cap Z_j) \setminus P_{r'}(\frak L_{Z_j})| \le 10 r^{-1} (\Deg Z_j) L. $$
If $\{ Z_j \}$ are all the irreducible components of $Z(P)$, then we see that
$$ | (S \cap Z(P)) \setminus \cup_j P_{r'}(\frak L_{Z_j}) | \le 10 r^{-1} D L. $$
We choose $K = K(\epsilon, D)$ sufficiently large so that $10 D \le (1/800) K$. Since $r \le 2 L^{1/2}$, we have
\begin{equation}\label{cellwallsboundr}
| (S \cap Z(P)) \setminus \cup_j P_{r'}(\frak L_{Z_j}) | \le (1/800) K L r^{-1} \le (1/400) K L^{3/2} r^{-2}.
\end{equation}
Now we define $\tilde{\mathcal{Z}}_S$ to be the union of $\mathcal{Z}_i$ over all the good cells $O_i$ together with all the irreducible components $Z_j$ of $Z(P)$. Each surface in $\tilde{\mathcal{Z}}_S$ is an irreducible algebraic surface of degree at most $D$. By equation \ref{sizeofgamma_ir}, we have the following estimate for $| \tilde{\mathcal{Z}}_S|$:
\begin{equation}\label{size of Z_Sr}
| \tilde{\mathcal{Z}}_S | \le C D^3 (\beta D^{-2} L)^{(1/2) - \epsilon} + D \le \Poly(D) L^{(1/2) - \epsilon}.
\end{equation}
Summing the contribution of the bad cells in equation \ref{badcellsboundr}, the contribution of the good cells in equation \ref{goodcellsboundr}, and the contribution of
the cell walls in equation \ref{cellwallsboundr}, we get:
\begin{equation}\label{totalboundr}
| S \setminus \cup_{Z \in \tilde{\mathcal{Z}}_S} P_{r'}(\frak L_Z) | \le (1/100) |S| + (1/200) K L^{(3/2) + \epsilon} r^{-2}.
\end{equation}
If we didn't have the $(1/100) |S|$ term coming from the bad cells, we could simply take $S = P_r(\frak L)$ and $\tilde{\mathcal{Z}} = \tilde{\mathcal{Z}}_S$. Because of this term, we need to run the above construction repeatedly.
Let $S_1 = P_r(\frak L)$, and let $\tilde{\mathcal{Z}}_{S_1}$ be the set of surfaces constructed above. Now we define
$ S_2 = S_1 \setminus \cup_{Z \in \tilde{\mathcal{Z}}_{S_1}} P_{r'} (\frak L_Z)$. We iterate this procedure, defining
$$ S_{j+1} := S_j \setminus \cup_{Z \in \tilde{\mathcal{Z}}_{S_j}} P_{r'} (\frak L_Z). $$
Each set $S_j$ is a subset of $P_r(\frak L)$. Each set of surfaces $\tilde{\mathcal{Z}}_{S_j}$ has cardinality at most $\Poly(D) L^{(1/2) - \epsilon}$.
Iterating equation \ref{totalboundr} we see:
\begin{equation}\label{iterationr}
|S_{j+1} | \le (1/100) |S_j| + (1/200) K L^{(3/2) + \epsilon} r^{-2}.
\end{equation}
We define $J = C \log L$ for a large constant $C$. Because of the iterative formula in equation \ref{iterationr}, we get
\begin{equation}\label{endboundr}
|S_J| \le (1/100) K L^{(3/2) + \epsilon} r^{-2}.
\end{equation}
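The choice $J = C \log L$ works because the recursion contracts geometrically toward the fixed point $(100/99) b$, where $b$ stands for $(1/200) K L^{(3/2) + \epsilon} r^{-2}$. A numerical sketch (the values of $L$, $b$, and $C$ below are arbitrary):

```python
import math

def iterate(s, b, J):
    """The recursion of equation (iterationr): s -> s/100 + b."""
    for _ in range(J):
        s = s / 100 + b
    return s

# Starting from the trivial bound |S_1| <= L^2, J ~ C log L steps suffice
# to contract down to roughly the fixed point b * 100/99 < 2b.
L, b = 10 ** 6, 1.0
J = math.ceil(2 * math.log(L))      # C = 2 is ample here
final = iterate(float(L) ** 2, b, J)
```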
We define $\tilde{\mathcal{Z}} = \cup_{j=1}^{J-1} \tilde{\mathcal{Z}}_{S_j}$. This set of surfaces has the following properties. Since each set $\tilde{\mathcal{Z}}_{S_j}$ has at most $\Poly(D) L^{(1/2) - \epsilon}$ surfaces, we get:
\begin{equation}\label{sizeoftcZr}
|\tilde{\mathcal{Z}} | \le \Poly(D) L^{(1/2) - \epsilon} \log L.
\end{equation}
Also, $P_r(\frak L) \setminus \cup_{Z \in \tilde{\mathcal{Z}}} P_{r'}(\frak L_Z) = S_J$, and so equation \ref{endboundr} gives:
\begin{equation}\label{tcZboundr}
| P_r(\frak L) \setminus \cup_{Z \in \tilde{\mathcal{Z}}} P_{r'}(\frak L_Z) | \le (1/100) K L^{(3/2) + \epsilon} r^{-2}.
\end{equation}
This finishes our construction of $\tilde{\mathcal{Z}}$. Next we prune $\tilde{\mathcal{Z}}$ down to our desired set of surfaces $\mathcal{Z}$.
\subsection{Pruning $\tilde{\mathcal{Z}}$}
We define
$$ \mathcal{Z} := \{ Z \in \tilde{\mathcal{Z}} | Z \textrm{ contains at least } L^{(1/2) + \epsilon} \textrm{ lines of } \frak L \}. $$
To close our induction, we have to check two properties of $\mathcal{Z}$.
\begin{enumerate}
\item $|\mathcal{Z}| \le 2 L^{(1/2) - \epsilon}$.
\item $| P_r(\frak L) \setminus \cup_{Z \in \mathcal{Z}} P_{r'}(\frak L_Z) | \le K L^{(3/2) + \epsilon} r^{-2}$.
\end{enumerate}
We begin with a simple lemma about surfaces that each contain many lines.
\begin{lemma} \label{surfcountr} Suppose $\frak L$ is a set of lines in $\mathbb{R}^3$, and $\mathcal{Y}$ is a set of irreducible algebraic surfaces of degree at most $D$, and suppose that each surface $Z \in \mathcal{Y}$ contains at least $A$ lines of $\frak L$.
If $A > 2 D | \frak L |^{1/2}$, then $| \mathcal{Y} | \le 2 | \frak L | A^{-1}$.
\end{lemma}
\begin{proof} The proof of this lemma follows the same idea as the proof of Proposition \ref{bigr}. By the B\'ezout theorem for lines, Theorem \ref{bezoutlines}, the intersection of any two surfaces $Z_1, Z_2 \in \mathcal{Y}$ contains at most $D^2$ lines of $\frak L$.
We choose an ordering of the surfaces of $\mathcal{Y}$. We consider the surfaces one at a time in order and count the number of new lines.
$Z_1$ contains at least $A$ lines of $\frak L$.
$Z_2$ contains at least $A - D^2$ lines of $\frak L$ that are not in $Z_1$.
$Z_{j+1}$ contains at least $A - jD^2$ lines of $\frak L$ that are not in the previous surfaces $Z_1, ..., Z_j$. Therefore, we get the following inequality:
$$ |\frak L| \ge \sum_{j=1}^{|\mathcal{Y}|} \max( A - j D^2, 0 ). $$
If $j \le (1/2) A D^{-2}$, then $A - j D^2 \ge A/2$. Therefore, if $|\mathcal{Y}| \ge (1/2) A D^{-2}$, then we see that $|\frak L| \ge (1/2) A D^{-2} ( A/2)$. By hypothesis, we know $A > 2 D | \frak L|^{1/2}$, which gives the contradiction $| \frak L | > |\frak L|$. Therefore, $|\mathcal{Y}| \le (1/2) A D^{-2}$. Now we see that $ | \frak L | \ge |\mathcal{Y} | (A/2)$, and this completes the proof of the lemma. \end{proof}
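The hypotheses and conclusion of Lemma \ref{surfcountr} can be checked mechanically on an explicit family, with surfaces abstracted to sets of line indices (the family below is our own toy example):

```python
from itertools import combinations

def check_surface_count(surfaces, n_lines, D):
    """Check the counting lemma on an explicit family: surfaces are
    given as sets of line indices drawn from n_lines lines."""
    A = min(len(s) for s in surfaces)
    # Hypothesis A > 2 * D * sqrt(n_lines), squared to stay in integers.
    assert A * A > 4 * D * D * n_lines
    # Bezout for lines: any two surfaces share at most D**2 lines.
    assert all(len(s & t) <= D * D for s, t in combinations(surfaces, 2))
    # Conclusion of the lemma.
    return len(surfaces) <= 2 * n_lines / A

# 100 lines, D = 1; three surfaces of 30 lines, adjacent ones sharing 1.
surfaces = [set(range(0, 30)), set(range(29, 59)), set(range(58, 88))]
ok = check_surface_count(surfaces, 100, 1)
```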
We apply this lemma with $\mathcal{Y} = \mathcal{Z}$ and $A = L^{(1/2) + \epsilon}$. We can assume that $L^{\epsilon} > 2 D$, because the case of $L^\epsilon \le 2 D$ was the base of our induction, and we handled it by choosing $K$ sufficiently large. Therefore, $A = L^{(1/2) + \epsilon} > 2 D L^{1/2}$, and the hypotheses of Lemma \ref{surfcountr} are satisfied. The lemma tells us that $|\mathcal{Z}| \le 2 L^{(1/2) - \epsilon}$, which proves item (1) above. Now we turn to item (2). We recall equation \ref{tcZboundr}:
$$| P_r(\frak L) \setminus \cup_{Z \in \tilde{\mathcal{Z}}} P_{r'}(\frak L_Z) | \le (1/100) K L^{(3/2) + \epsilon} r^{-2}. $$
Therefore, it suffices to check that
$$ \sum_{Z \in \tilde{\mathcal{Z}} \setminus \mathcal{Z}} |P_{r'}( \frak L_Z) | \le (1/100) K L^{(3/2) + \epsilon} r^{-2}. $$
We sort $\tilde{\mathcal{Z}} \setminus \mathcal{Z}$ according to the number of lines in each surface. For each integer $s \ge 0$, we define:
$$ \tilde{\mathcal{Z}}_s := \{ Z \in \tilde{\mathcal{Z}} \textrm{ so that } | \frak L_Z | \in [2^{s}, 2^{s+1}) \}. $$
Since each surface of $\tilde{\mathcal{Z}}$ with at least $L^{(1/2) + \epsilon}$ lines of $\frak L$ lies in $\mathcal{Z}$, we see that:
\begin{equation}\label{uniontczsr}
\tilde{\mathcal{Z}} \setminus \mathcal{Z} \subset \bigcup_{2^s \le L^{(1/2) + \epsilon}} \tilde{\mathcal{Z}}_s.
\end{equation}
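The dyadic sorting itself takes one line per surface; a sketch (the surface names are hypothetical):

```python
def dyadic_buckets(line_counts):
    """Sort surfaces into classes Z_s with |L_Z| in [2^s, 2^{s+1})."""
    buckets = {}
    for surface, n in line_counts.items():
        s = n.bit_length() - 1      # the unique s with 2^s <= n < 2^{s+1}
        buckets.setdefault(s, []).append(surface)
    return buckets

b = dyadic_buckets({'Z1': 1, 'Z2': 3, 'Z3': 5, 'Z4': 8, 'Z5': 9})
```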
For each $Z \in \tilde{\mathcal{Z}}_s$, $|\frak L_Z| \le 2^{s+1}$. We use the Szemer\'edi-Trotter theorem, Theorem \ref{szemtrot}, to bound $P_{r'}(\frak L_Z)$. Since $r' \ge (9/10) r$, Szemer\'edi-Trotter gives:
\begin{equation}\label{stboundr}
|P_{r'} ( \frak L_Z)| \le C \left( 2^{2s} r^{-3} + 2^s r^{-1} \right).
\end{equation}
Using Lemma \ref{surfcountr} with $A = 2^s$, we get the following estimate for $| \tilde{\mathcal{Z}}_s |$:
\begin{equation}\label{sizetcZ_sr}
\textrm{If } 2^s > 2 D L^{1/2}, \textrm{ then } |\tilde{\mathcal{Z}}_s| \le 2 L 2^{-s}.
\end{equation}
We can now estimate $ \sum_{Z \in \tilde{\mathcal{Z}} \setminus \mathcal{Z}} |P_{r'}( \frak L_Z)|$.
\begin{equation}\label{groupbysr}
\sum_{Z \in \tilde{\mathcal{Z}} \setminus \mathcal{Z}} |P_{r'}( \frak L_Z)| \le \sum_{2^s \le L^{(1/2) + \epsilon}} \left( \sum_{Z \in \tilde{\mathcal{Z}}_s} |P_{r'}(\frak L_Z)| \right) \le C \sum_{2^s \le L^{(1/2) + \epsilon}} |\tilde{\mathcal{Z}}_s| \left( 2^{2s} r^{-3} + 2^s r^{-1} \right).
\end{equation}
We consider the contribution to the last sum from $s$ in the range $2 D L^{1/2} < 2^s \le L^{(1/2) + \epsilon}$. Using equation \ref{sizetcZ_sr} to estimate $|\tilde{\mathcal{Z}}_s|$ gives:
$$ \sum_{2 D L^{1/2} < 2^s \le L^{(1/2) + \epsilon}} |\tilde{\mathcal{Z}}_s| \left( 2^{2s} r^{-3} + 2^s r^{-1} \right) \le \sum_{2^s \le L^{(1/2) + \epsilon}}
(2 L 2^{-s}) \left( 2^{2s} r^{-3} + 2^s r^{-1} \right) \le $$
$$\le C \sum_{2^s \le L^{(1/2) + \epsilon}} (L 2^s r^{-3} + L r^{-1}) \le C( L^{(3/2) + \epsilon} r^{-3} + L (\log L) r^{-1} ) \le
C L^{(3/2) + \epsilon} r^{-2}. $$
Next we consider the contribution to the last sum in equation \ref{groupbysr} from $s$ in the range $2^s \le 2 D L^{1/2}$. In this range of $s$, we use Equation \ref{sizeoftcZr} to bound $| \tilde{\mathcal{Z}}_s|$: $| \tilde{\mathcal{Z}}_s| \le |\tilde{\mathcal{Z}}| \le \Poly(D) L^{(1/2) - \epsilon} \log L$.
\begin{equation} \label{smallscon} \sum_{2^s \le 2 D L^{1/2}} |\tilde{\mathcal{Z}}_s| \left( 2^{2s} r^{-3} + 2^s r^{-1} \right) \le \Poly(D) \left( L^{(1/2) - \epsilon} \log L \right) \sum_{2^s \le 2 D L^{1/2}} \left( 2^{2s} r^{-3} + 2^s r^{-1} \right). \end{equation}
Since $2^{s} \le 2 D L^{1/2}$ throughout this range, summing the geometric series gives $\sum_{2^s \le 2 D L^{1/2}} 2^{2s} r^{-3} \le \Poly(D) L r^{-3}$ and $\sum_{2^s \le 2 D L^{1/2}} 2^s r^{-1} \le \Poly(D) L^{1/2} r^{-1} \le \Poly(D) L r^{-2}$. Plugging these into the right-hand side of equation \ref{smallscon}, and absorbing the factor $L^{-\epsilon} \log L \le C_\epsilon$ into the constant, we get
$$ \sum_{2^s \le 2 D L^{1/2}} |\tilde{\mathcal{Z}}_s| \left( 2^{2s} r^{-3} + 2^s r^{-1} \right) \le \Poly(D) L^{3/2} r^{-2}. $$
All together, we see
$$ \sum_{Z \in \tilde{\mathcal{Z}} \setminus \mathcal{Z}} |P_{r'}( \frak L_Z) | \le \Poly(D) L^{(3/2) + \epsilon} r^{-2}. $$
Choosing $K = K(\epsilon, D)$ sufficiently large, we see that
$$ \sum_{Z \in \tilde{\mathcal{Z}} \setminus \mathcal{Z}} |P_{r'}( \frak L_Z) | \le (1/100) K L^{(3/2) + \epsilon} r^{-2}. $$
This proves item (2), closing the induction, and finishing the proof of Theorem \ref{clusterinZr}.
\section{Distinct distances}
In \cite{ES}, Elekes and Sharir proposed a new approach to the distinct distance problem, connecting it to incidence estimates about curves in $\mathbb{R}^3$. A tiny modification of these ideas is explained in Section 2 of \cite{GK2}, connecting the distinct distance problem to an estimate about incidences of lines in $\mathbb{R}^3$. The paper \cite{GK2} then uses Theorem \ref{gkthm} to control these incidences. We can also use our slightly weaker Theorem \ref{mainincid} to prove a slightly weaker bound on the number of distinct distances.
In this section, we give a concise review of the Elekes-Sharir approach to the distinct distance problem. Using our incidence bound, Theorem \ref{mainincid}, we prove the following distinct distance bound.
\begin{theorem} \label{distbound} For any $\epsilon > 0$, there is a constant $c_\epsilon > 0$ so that the following holds. If $P$ is a set of $N$ points in $\mathbb{R}^2$, then $P$ determines at least $c_\epsilon N^{1 - \epsilon}$ distinct distances.
\end{theorem}
If $P \subset \mathbb{R}^2$ is a set of points, we let $d(P)$ be the set of distinct distances:
$$ d(P) := \{ | p_1 - p_2| \}_{p_1, p_2 \in P}. $$
The approach of Elekes and Sharir involves the set of distance quadruples $Q(P)$:
$$ Q(P) := \{ (p_1, p_2, p_3, p_4) \in P^4 \textrm{ so that } |p_1 - p_2| = |p_3 - p_4| \not= 0 \}. $$
A simple Cauchy-Schwarz inequality proves the following estimate (Lemma 2.1 in \cite{GK2}):
\begin{equation} \label{dvsQ} |d(P)| \ge \frac{ N^4 - 2 N^3} { |Q(P)|}. \end{equation}
The heart of the matter is to prove an upper bound for $|Q(P)|$. The next step is to introduce a family of lines in $\mathbb{R}^3$, $\frak L(P)$, associated to the set $P \subset \mathbb{R}^2$. The incidence geometry of this family of lines encodes the distance quadruples.
For any two points $p_1, p_2 \in \mathbb{R}^2$, we define a line $l_{p_1, p_2} \subset \mathbb{R}^3$ as follows. Suppose that $p_1 = (x_1, y_1)$ and $p_2 = (x_2, y_2)$. We use $x, y, z$ for the coordinates of $\mathbb{R}^3$. Then $l_{p_1,p_2}$ is the line defined by the following equations:
\begin{equation} \label{eqlinex}
2 x = (x_1 + x_2) + (y_1 - y_2) z.
\end{equation}
\begin{equation} \label{eqliney}
2y = (y_1 + y_2) + (x_2 - x_1) z.
\end{equation}
The set $\frak L(P)$ is defined to be $\{ l_{p_1, p_2} \}_{p_1, p_2 \in P}$. If $P$ is a set of $N$ points, then $\frak L(P)$ is a set of $N^2$ lines. The connection between $Q(P)$ and $\frak L(P)$ appears in the following lemma.
\begin{lemma} \label{quadinter} A quadruple $(p_1, p_2, p_3, p_4) \in P^4$ is a distance quadruple if and only if
the line $l_{p_1, p_3}$ and the line $l_{p_2, p_4}$ are intersecting or parallel.
\end{lemma}
Remark: The condition of being intersecting or parallel is natural from the projective point of view. Two lines $l, \bar l$ are intersecting or parallel in $\mathbb{R}^n$ if and only if they intersect in $\mathbb{RP}^n$.
We now give a proof by direct computation. The paper \cite{ES} gives a nice motivation for introducing these lines. The motivation comes from the group of rigid motions of the plane, which is a symmetry group of the distinct distance problem. This point of view is also explained in Section 2 of \cite{GK2}. Lemma \ref{quadinter} is proven in Section 2 of \cite{GK2} using the point of view of rigid motions.
\begin{proof} First we describe the projective completion of the line $l_{p_1, p_2}$ in $\mathbb{RP}^3$. A point in $\mathbb{RP}^3$ is an equivalence class of non-zero vectors $(w,x,y,z) \in \mathbb{R}^4$, where two vectors are equivalent if one is a scalar multiple of the other. In these coordinates, the equations for the line $l_{p_1, p_2} \subset \mathbb{RP}^3$ are as follows:
\begin{equation} \label{eqlinexpro}
2 x = (x_1 + x_2) w + (y_1 - y_2) z.
\end{equation}
\begin{equation} \label{eqlineypro}
2y = (y_1 + y_2) w + (x_2 - x_1) z.
\end{equation}
Next we investigate when two lines in $\mathbb{RP}^3$ intersect. Suppose that $l$ is defined by the equations
\begin{equation} \label{eql}
2x = a_x w + b_x z; 2y = a_y w + b_y z.
\end{equation}
\noindent and $\bar l$ is defined by the equations
\begin{equation} \label{eqbarl}
2x = \bar a_x w + \bar b_x z; 2y = \bar a_y w + \bar b_y z.
\end{equation}
The lines $l$ and $\bar l$ intersect in $\mathbb{RP}^3$ if and only if the following system of two equations in $w,z$ has a non-zero solution:
\begin{equation} \label{lbarlinter}
a_x w + b_x z = \bar a_x w + \bar b_x z ; a_y w + b_y z = \bar a_y w + \bar b_y z
\end{equation}
\noindent By standard linear algebra, this system of equations has a non-zero solution if and only if an appropriate determinant vanishes, which we can rewrite as the following equation:
\begin{equation} \label{lbarlniceeq}
(a_x - \bar a_x) (b_y - \bar b_y) = (a_y - \bar a_y) (b_x - \bar b_x).
\end{equation}
Now we take $l = l_{p_1, p_3}$ and $\bar l = l_{p_2, p_4}$. Using equations \ref{eqlinexpro} and \ref{eqlineypro}, we can find the values of $a_x$ etc. In particular, we see that $a_x = x_1 + x_3$, $a_y = y_1 + y_3$, $b_x = y_1 - y_3$ and $b_y = x_3 - x_1$, and similarly $\bar a_x = x_2 + x_4$, $\bar a_y = y_2 + y_4$, $\bar b_x = y_2 - y_4$, and $\bar b_y = x_4 - x_2$. When we plug these values into equation \ref{lbarlniceeq}, we get a homogeneous quadratic equation in $x_i$ and $y_i$. We claim that this equation is equivalent to $(x_1 - x_2)^2 + (y_1 - y_2)^2 = (x_3 - x_4)^2 + (y_3 - y_4)^2$. Here is the computation. Plugging the values of $a_x$ etc. into equation \ref{lbarlniceeq}, we immediately get:
$$ \left[ (x_1 + x_3) - (x_2 + x_4) \right] \left[ (x_3 - x_1) - (x_4 - x_2) \right] = \left[ (y_1 + y_3) - (y_2 + y_4) \right] \left[ (y_1 - y_3) - (y_2 -y_4) \right]. $$
Rearranging the terms inside each bracket, this is equivalent to
$$ \left[ (x_3 - x_4) + (x_1 - x_2) \right] \left[ (x_3 - x_4) - (x_1 - x_2) \right] = \left[ (y_1 - y_2) + (y_3 - y_4) \right] \left[ (y_1 - y_2) - (y_3 - y_4) \right] $$
Expanding both sides, this is equivalent to
$$ (x_3 - x_4)^2 - (x_1 - x_2)^2 = (y_1 - y_2)^2 - (y_3 - y_4)^2. $$
Moving the negative terms to the other sides, this is equivalent to
$$ (x_3 - x_4)^2 + (y_3 - y_4)^2 = (x_1 - x_2)^2 + (y_1 - y_2)^2. $$
This is equivalent to $|p_3 - p_4| = |p_1 - p_2|$.
\end{proof}
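The computation in this proof can be sanity-checked numerically, using the coefficients $a_x, b_x, a_y, b_y$ read off from equations \ref{eqlinexpro} and \ref{eqlineypro} (the sample points below are our own):

```python
def line3d(p1, p2):
    """Coefficients (a_x, b_x, a_y, b_y) of l_{p1,p2}: in projective
    coordinates, 2x = a_x*w + b_x*z and 2y = a_y*w + b_y*z."""
    (x1, y1), (x2, y2) = p1, p2
    return (x1 + x2, y1 - y2, y1 + y2, x2 - x1)

def meet_or_parallel(l, m):
    """Equation (lbarlniceeq): l and m meet in RP^3 (i.e. are
    intersecting or parallel in R^3) iff this determinant vanishes."""
    ax, bx, ay, by = l
    Ax, Bx, Ay, By = m
    return (ax - Ax) * (by - By) == (ay - Ay) * (bx - Bx)

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

# A distance quadruple: |p1 - p2| = |p3 - p4| = 5.
p1, p2, p3, p4 = (0, 0), (3, 4), (1, 1), (4, 5)
good = meet_or_parallel(line3d(p1, p3), line3d(p2, p4))
# Replacing p4 by (4, 6) breaks the distance equality.
bad = meet_or_parallel(line3d(p1, p3), line3d(p2, (4, 6)))
```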
Because of Lemma \ref{quadinter}, each distance quadruple $(p_1, p_2, p_3, p_4) \in Q(P)$ can be labelled as an intersecting quadruple or a parallel quadruple, depending on whether $l_{p_1, p_3}$ and $l_{p_2, p_4}$ are intersecting or parallel.
The number of parallel quadruples is straightforward to bound. If $l_{p_1,p_3}$ and $l_{p_2, p_4}$ are parallel, then equations \ref{eqlinex} and \ref{eqliney} imply that $y_1 - y_3 = y_2 - y_4$ and $x_3 - x_1 = x_4 - x_2$. In other words, $l_{p_1, p_3}$ and $l_{p_2, p_4}$ are parallel if and only if $p_1 - p_2 = p_3 - p_4$. For any $p_1, p_2, p_3$, there is at most one $p_4 \in P$ so that $p_1 - p_2 = p_3 - p_4$, and so there are at most $N^3$ parallel distance quadruples.
From now on, we sometimes abbreviate $\frak L(P)$ by $\frak L$.
The number of intersecting distance quadruples can be counted as follows. We let $P_{=r}(\frak L)$ denote the set of points that lie in exactly $r$ lines of $\frak L$. At each point of $P_{=r}(\frak L)$ there are $r^2 - r$ intersecting pairs $(l_1, l_2) \in \frak L^2$. Therefore, the number of intersecting distance quadruples is
$$ |Q(P)_{inter}| = \sum_{r \ge 2} (r^2 - r) |P_{=r}(\frak L)|. $$
Since $|P_{=r}(\frak L)| = |P_r(\frak L)| - |P_{r+1}(\frak L)|$, we can rewrite this formula as
\begin{equation} \label{|Q|rrich} |Q(P)_{inter}| = \sum_{r \ge 2} (2r - 2) |P_r(\frak L)|. \end{equation}
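The summation-by-parts step can be verified on sample counts (the numbers below are illustrative only):

```python
# Illustrative counts: P_ge[r] = |P_r|, points lying on at least r lines.
P_ge = {2: 40, 3: 17, 4: 9, 5: 3, 6: 1, 7: 0}

# Since |P_{=r}| = |P_r| - |P_{r+1}|, the two sums in the text agree.
lhs = sum((r * r - r) * (P_ge[r] - P_ge.get(r + 1, 0)) for r in range(2, 7))
rhs = sum((2 * r - 2) * P_ge[r] for r in range(2, 7))
```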
Therefore, a bound on $|P_r(\frak L)|$ gives a bound on $|Q(P)|$.
To bound $|P_r(\frak L)|$ the paper \cite{GK2} proves the following result (Proposition 2.8 in \cite{GK2}):
\begin{lemma} \label{nonclustergk} If $P \subset \mathbb{R}^2$ is a set of $N$ points, then $\frak L(P)$ contains at most $C N$ lines in any plane or regulus, and at most $N$ lines of $\frak L(P)$ contain any point.
\end{lemma}
\noindent With this lemma in hand, \cite{GK2} can apply Theorem \ref{gkthm}, giving the bound $|P_r(\frak L)| \le C N^3 r^{-2}$ for all $2 \le r \le N$. (And for $r > N$, Lemma \ref{nonclustergk} says that $|P_r(\frak L)| = 0$.) Plugging these bounds into equation \ref{|Q|rrich} shows that $|Q(P)| \le N^3 + \sum_{r=2}^N C N^3 r^{-1} \le C N^3 \log N$.
We will use Theorem \ref{mainincid} in place of Theorem \ref{gkthm} to give a slightly weaker bound on the number of distance quadruples. In order to apply Theorem \ref{mainincid} we need a slightly stronger lemma.
\begin{lemma} \label{noncluster} For any degree $D \ge 1$ there is a constant $C_D$ so that the following holds. If $P \subset \mathbb{R}^2$ is a set of $N$ points, then $\frak L(P)$ contains at most $C_D N$ lines in any algebraic surface of degree at most $D$. Also $\frak L(P)$ contains at most $N$ lines that pass through any point.
\end{lemma}
\noindent We will give the proof of Lemma \ref{noncluster} below. Using Lemma \ref{noncluster}, we can apply Theorem \ref{mainincid}, giving the following bound: for any $\epsilon > 0$, there is a constant $C_\epsilon$ so that
$$|P_r(\frak L)| \le C_\epsilon N^{3 + \epsilon} r^{-2}. $$
Plugging this bound into equation \ref{|Q|rrich}, we see that
$$|Q(P)| \le N^3 + \sum_{r=2}^N (2r - 2) |P_r(\frak L)| \le N^3 + \sum_{r=2}^N C_\epsilon N^{3 + \epsilon} r^{-1} \le C_\epsilon N^{3 + \epsilon}. $$
Plugging this bound into equation \ref{dvsQ}, we see that $|d(P)| \ge c_\epsilon N^{1 - \epsilon}$ for any $\epsilon > 0$. This proves Theorem \ref{distbound}.
\subsection{The proof of the non-clustering lemma}
It only remains to prove Lemma \ref{noncluster}. Suppose that $P \subset \mathbb{R}^2$ is a set of $N$ points.
We first observe that if $p \in \mathbb{R}^2$ and $q_1 \not=q_2 \in \mathbb{R}^2$ then the lines $l_{p, q_1}$ and $l_{p, q_2}$ are skew. By Lemma \ref{quadinter}, $l_{p, q_1}$ and $l_{p, q_2}$ are non-skew if and only if $| p - p | = |q_1 - q_2|$. But $|p-p| = 0$ and $|q_1 - q_2| \not= 0$.
From this observation, we can quickly establish two parts of Lemma \ref{noncluster}. First, for any plane in $\mathbb{R}^3$ and each $p \in P$, at most one of the lines $\{ l_{p, q} \}_{q \in P}$ can lie in the plane, since two lines in a common plane are never skew. Therefore, any plane contains at most $N$ lines of $\frak L(P)$. Second, for any point of $\mathbb{R}^3$ and each $p \in P$, at most one of the lines $\{ l_{p, q} \}_{q \in P}$ can contain the point. Therefore, any point of $\mathbb{R}^3$ lies in at most $N$ lines of $\frak L(P)$.
Now consider an irreducible polynomial $Q$ with $1 < \Deg Q \le D$. We will prove that $Z(Q)$ contains $\le 3 D^2 N$ lines of $\frak L(P)$, and this will finish the proof of Lemma \ref{noncluster}.
We let $\frak L_p := \{ l_{p, q} \}_{q \in \mathbb{R}^2}$. We would like to understand how many lines of $\frak L_p$ may lie in $Z(Q)$.
\begin{lemma} \label{onebadpointZ(Q)} If $Q$ is an irreducible polynomial with $1 < \Deg Q \le D$, then there is at most one point $p \in \mathbb{R}^2$ so that $Z(Q)$ contains at least $2 D^2$ lines of $\frak L_p$.
\end{lemma}
Given Lemma \ref{onebadpointZ(Q)}, we now check that $Z(Q)$ contains at most $3 D^2 N$ lines of $\frak L(P)$. For $N-1$ of the points $p \in P$, $Z(Q)$ contains at most $2 D^2$ of the lines $\{ l_{p, p'} \}_{p' \in P}$. For the last point $p \in P$, $Z(Q)$ contains at most all $N$ of the lines $\{ l_{p, p'} \}_{p' \in P}$. In total, $Z(Q)$ contains at most $(2 D^2 + 1) N$ lines of $\frak L(P)$, and $2 D^2 + 1 \le 3 D^2$ since $D \ge 1$.
The proof of Lemma \ref{onebadpointZ(Q)} is based on a more technical lemma which describes the algebraic structure of the set of lines $\{ l_{p,q} \}$ in $\mathbb{R}^3$.
\begin{lemma} \label{lemV_p} For each $p$, each point of $\mathbb{R}^3$ lies in a unique line from the set $\{ l_{p,q} \}_{q \in \mathbb{R}^2}$. Moreover, for each $p$, there is a non-vanishing vector field $V_p(x_1,x_2,x_3)$, so that at each point, $V_p(x)$ is tangent to the unique line $l_{p,q}$ through $x$. Moreover, $V_p(x)$ is a polynomial in $p$ and $x$, with degree at most 1 in the $p$ variables and degree at most 2 in the $x$ variables.
\end{lemma}
Let us assume this technical lemma for the moment and use it to prove Lemma \ref{onebadpointZ(Q)}.
Fix a point $p \in \mathbb{R}^2$. Suppose $Z(Q)$ contains at least $2 D^2$ lines from the set $\frak L_p := \{ l_{p,q} \}_{q \in \mathbb{R}^2}$. On each of these lines, $Q$ vanishes identically, and $V_p$ is tangent to the line. Therefore, $V_p \cdot \nabla Q$ vanishes on all these lines. But $V_p \cdot \nabla Q$ is a polynomial in $x$ of degree at most $2 D - 2$. If $V_p \cdot \nabla Q$ and $Q$ have no common factor, then the Bezout theorem for lines, Theorem \ref{bezoutlines}, implies that there are at most $2 D^2 - 2D$ lines where the two polynomials vanish. Therefore, $V_p \cdot \nabla Q$ and $Q$ have a common factor. Since $Q$ is irreducible, $Q$ must divide $V_p \cdot \nabla Q$, and we see that $V_p \cdot \nabla Q$ vanishes identically on $Z(Q)$.
Now suppose that $Z(Q)$ contains at least $2 D^2$ lines from $\frak L_{p_1}$ and from $\frak L_{p_2}$. We see that $V_{p_1} \cdot \nabla Q$ and $V_{p_2} \cdot \nabla Q$ vanish on $Z(Q)$. For each fixed $x$, the expression $V_p \cdot \nabla Q$ is a degree 1 polynomial in $p$. Therefore, for any point $p$ in the affine span of $p_1$ and $p_2$, $V_p \cdot \nabla Q$ vanishes on $Z(Q)$.
Suppose that $Z(Q)$ has a non-singular point $x$, which means that $\nabla Q(x) \not= 0$. In this case, $x$ has a smooth neighborhood $U_x \subset Z(Q)$ where $\nabla Q$ is non-zero. If $V_p \cdot \nabla Q$ vanishes on $Z(Q)$, then $V_p$ restricts to a vector field tangent to $U_x$, and so its integral curves through points of $U_x$ lie in $U_x$. But the integral curves of $V_p$ are exactly the lines of $\frak L_p$. Therefore, for each $p$ on the line connecting $p_1$ and $p_2$, the line of $\frak L_p$ through $x$ lies in $Z(Q)$. Since $x$ is a smooth point, all of these lines must lie in the tangent plane $T_x Z(Q)$, and we see that $Z(Q)$ contains infinitely many lines in a plane. Using Bezout's theorem, Theorem \ref{bezoutlines}, again, we see that $Z(Q)$ is a plane, and that $Q$ is a degree 1 polynomial. This contradicts our assumption that $\Deg Q > 1$.
We have now proven Lemma \ref{onebadpointZ(Q)} in the case that $Z(Q)$ contains a non-singular point. But if every point of $Z(Q)$ is singular, then we get an even stronger estimate on the lines in $Z(Q)$:
\begin{lemma} Suppose that $Q$ is a non-zero irreducible polynomial of degree $D$ on $\mathbb{R}^3$. If $Z(Q)$ has no non-singular point, then $Z(Q)$ contains at most $D^2$ lines.
\end{lemma}
\begin{proof} Since every point of $Z(Q)$ is singular, $\nabla Q$ vanishes on $Z(Q)$. In particular, each partial derivative $\partial_i Q$ vanishes on $Z(Q)$. We suppose that $Z(Q)$ contains more than $D^2$ lines and derive a contradiction. Since $\partial_i Q = 0$ on $Z(Q)$ and $Z(Q)$ contains more than $D^2$ lines, Bezout's theorem, Theorem \ref{bezoutlines}, implies that $Q$ and $\partial_i Q$ have a common factor. Since $Q$ is irreducible, $Q$ must divide $\partial_i Q$. Since $\Deg \partial_i Q < \Deg Q$, it follows that $\partial_i Q$ is identically zero for each $i$. This implies that $Q$ is constant. By assumption, $Q$ is not the zero polynomial and so $Z(Q)$ is empty. But we assumed that $Z(Q)$ contains at least $D^2 + 1$ lines, giving a contradiction.
\end{proof}
This finishes the proof of Lemma \ref{onebadpointZ(Q)} assuming Lemma \ref{lemV_p}.
It only remains to prove Lemma \ref{lemV_p}.
First we check that each point $x \in \mathbb{R}^3$ lies in exactly one of the lines $\{ l_{p,q} \}_{q \in \mathbb{R}^2}$. Suppose $p = (p_1, p_2)$ and $q=(q_1, q_2)$ are points in $\mathbb{R}^2$. Using Equation \ref{eqlinex} and \ref{eqliney}, we see that $(x_1, x_2, x_3) \in l_{p,q}$ if and only if:
\begin{equation} \label{eqlinex'}
2 x_1 = (p_1 + q_1) + (p_2 - q_2) x_3.
\end{equation}
\begin{equation} \label{eqliney'}
2 x_2 = (p_2 + q_2) + (q_1 - p_1) x_3.
\end{equation}
We can rewrite these equations as a matrix equation for $q$ as follows:
$$\left( \begin{array}{cc}
1 & - x_3 \\
x_3 & 1 \end{array} \right) \left( \begin{array}{c} q_1 \\ q_2 \end{array} \right) = \left( \begin{array}{c} 2x_1 - p_1 - x_3 p_2 \\ 2 x_2 - p_2 + p_1 x_3 \end{array} \right) =: a_p(x). $$
Note that $a_p(x)$ is a vector whose entries are polynomials in $x, p$ of degree $\le 1$ in $x$ and degree $\le 1$ in $p$. Since the determinant of the matrix on the left-hand side is $1 + x_3^2 > 0$, we can uniquely solve this equation for $q_1$ and $q_2$. The solution has the form
\begin{equation} \label{qfromxp} q_1 = (x_3^2 + 1)^{-1} b_{1,p}(x), \qquad q_2 = (x_3^2 + 1)^{-1} b_{2,p}(x), \end{equation}
\noindent where $b_1, b_2$ are polynomials in $x, p$ of degree $\le 2$ in $x$ and degree $\le 1$ in $p$.
We have now proven that each point of $\mathbb{R}^3$ lies in a unique line from the set $\{ l_{p,q} \}_{q \in \mathbb{R}^2}$. Now we can construct the vector field $V_p$. From Equations \ref{eqlinex'} and \ref{eqliney'}, we see that the vector $( {p_2 - q_2}, {q_1 - p_1}, 2)$ is tangent to $l_{p,q}$. If $x \in l_{p,q}$, then we can use Equation \ref{qfromxp} to expand $q$ in terms of $x, p$, and we see that the following vector field is tangent to $l_{p,q}$ at $x$:
$$ v_p(x) := (p_2 - (x_3^2+1)^{-1} b_{2,p}(x), (x_3^2+1)^{-1} b_{1,p}(x) - p_1, 2). $$
The coefficients of $v_p(x)$ are not polynomials because of the $(x_3^2+1)^{-1}$. We define $V_p(x) = (x_3^2 +1) v_p(x)$, so
$$ V_p(x) = \left( p_2 (x_3^2+1) - b_{2,p}(x), b_{1,p}(x) - p_1( x_3^2 + 1), 2 x_3^2 + 2 \right). $$
The vector field $V_p(x)$ is tangent to the family of lines $\{ l_{p,q} \}_{q \in \mathbb{R}^2}$. Moreover, $V_p$ never vanishes because its last component is $2 x_3^2 +2$. Therefore, the integral curves of $V_p$ are exactly the lines $\{ l_{p,q} \}_{q \in \mathbb{R}^2}$. Moreover, each component of $V_p$ is a polynomial of degree $\le 2$ in $x$ and degree $\le 1$ in $p$.
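The construction is concrete enough to verify numerically. In the sketch below (ours), we recover $q$ from a point $x \in l_{p,q}$ by solving the matrix equation above, and check that $V_p(x)$ is parallel to the direction vector $(p_2 - q_2, q_1 - p_1, 2)$ of the line:

```python
from fractions import Fraction as F

def q_from_xp(p, x):
    # Solve [[1, -x3], [x3, 1]] q = a_p(x); the determinant is 1 + x3^2 > 0.
    p1, p2 = p
    x1, x2, x3 = x
    a1 = 2 * x1 - p1 - x3 * p2
    a2 = 2 * x2 - p2 + p1 * x3
    det = 1 + x3 * x3
    return ((a1 + x3 * a2) / det, (a2 - x3 * a1) / det)

def V(p, x):
    # V_p(x) = (1 + x3^2) * (p2 - q2, q1 - p1, 2), with q recovered from x.
    q1, q2 = q_from_xp(p, x)
    s = 1 + x[2] * x[2]
    return (s * (p[1] - q2), s * (q1 - p[0]), 2 * s)

# Build a point x on l_{p,q} from the two defining equations, with x3 = t.
p, q, t = (F(1), F(-2)), (F(3), F(5)), F(7, 2)
x = (((p[0] + q[0]) + (p[1] - q[1]) * t) / 2,
     ((p[1] + q[1]) + (q[0] - p[0]) * t) / 2,
     t)
assert q_from_xp(p, x) == q  # x determines q uniquely

v = V(p, x)
d = (p[1] - q[1], q[0] - p[0], F(2))  # tangent direction of l_{p,q}
cross = (v[1] * d[2] - v[2] * d[1],
         v[2] * d[0] - v[0] * d[2],
         v[0] * d[1] - v[1] * d[0])
assert cross == (0, 0, 0)  # V_p(x) is parallel to the line's direction
```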
This finishes the proof of Lemma \ref{lemV_p} and hence the proof of Lemma \ref{noncluster}.
| {
"timestamp": "2014-11-12T02:06:19",
"yymm": "1404",
"arxiv_id": "1404.2321",
"language": "en",
"url": "https://arxiv.org/abs/1404.2321",
"abstract": "We give a shorter proof of a slightly weaker version of a theorem of Nets Katz and the author. We prove that if a set of $L$ lines in $\\mathbb{R}^3$ contains at most $L^{1/2}$ lines in any low degree algebraic surface, then the number of $r$-rich points is at most $C_\\epsilon L^{(3/2) + \\epsilon} r^{-2}$. Nets and I used this estimate to prove a distinct distance estimate for points in the plane. With the slightly weaker theorem in this paper, we get a slightly weaker distinct distance estimate: any set of $N$ points in $\\mathbb{R}^2$ determines at least $c_\\epsilon N^{1 - \\epsilon}$ distinct distances.",
"subjects": "Combinatorics (math.CO)",
"title": "Distinct distance estimates and low degree polynomial partitioning"
} |
https://arxiv.org/abs/2202.02384 | A proof of the Erdős primitive set conjecture | A set of integers greater than 1 is primitive if no member in the set divides another. Erdős proved in 1935 that the series $f(A) = \sum_{a\in A}1/(a \log a)$ is uniformly bounded over all choices of primitive sets $A$. In 1986 he asked if this bound is attained for the set of prime numbers. In this article we answer in the affirmative. As further applications of the method, we make progress towards a question of Erdős, Sárközy, and Szemerédi from 1968. We also refine the classical Davenport-Erdős theorem on infinite divisibility chains, and extend a result of Erdős, Sárközy, and Szemerédi from 1966. | \section{Introduction}
A set of integers $A\subset \mathbb{Z}_{>1}$ is {\it primitive} if no member in $A$ divides another. For example, the integers in a dyadic interval $(x,2x]$ form a primitive set. Similarly the set of primes is primitive, along with the set $\mathbb{N}_k$ of numbers with exactly $k$ prime factors (with multiplicity), for each $k\ge1$. Another well-known example is the set of perfect numbers.\footnote{
Since Ancient Greece, a number $n$ is classified as `perfect,' `abundant,' or `deficient,' depending on whether the sum of its proper divisors equals $n$, is greater than $n$, or is less than $n$, respectively.}
The study of primitive sets emerged in the 1930s as a generalization of one special problem. A classical theorem of Davenport asserts that the set of abundant numbers has a positive asymptotic density. This was originally proved by sophisticated analytic methods, but Erd\H{o}s soon found an elementary proof by using primitive abundant numbers.\footnote{More precisely, `primitive non-deficient numbers'} The proof ideas led people to introduce the abstract definition of primitive sets and study them for their own sake. See Hall \cite{Hsetmult} or Halberstam--Roth \cite[\S 5]{HalbRoth} for detailed introductions to the subject.
There are a number of interesting and sometimes unexpected theorems about primitive sets. For instance, in 1934 Besicovitch \cite{Besicovitch} showed that the upper asymptotic density of a primitive set can be arbitrarily close to $1/2$, whereas in 1935 Behrend \cite{Behrend} and Erd\H{o}s \cite{Erdos35} proved the lower asymptotic density is always $0$. In fact, Erd\H{o}s proved the stronger result that
\begin{align*}
f(A) := \sum_{a\in A}\frac{1}{a\log a} \ < \ \infty,
\end{align*}
uniformly over all primitive sets $A$. Later in 1988 Erd\H{o}s famously asked if the maximum is attained by the primes $\mathcal{P}$.
\begin{conjecture}[Erd\H{o}s primitive set conjecture] \label{conj:EPS}
For any primitive set $A$, \
$f(A) \le f(\mathcal P)$.
\end{conjecture}
The prime sum is $f(\mathcal{P}) = \sum_p 1/(p\log p)=1.6366\cdots$ after computations of Cohen \cite{Cohen}. In 1993, Erd\H{o}s and Zhang \cite{EZ} proved the bound $f(A) < 1.84$ for all primitive $A$. Recently in 2019, Lichtman and Pomerance \cite{LPprim} improved the bound to $f(A) < e^\gamma=1.781\cdots$, where $\gamma$ is the Euler-Mascheroni constant. Note the tail of the series for $f(\mathcal{P})$ beyond $x$ decays quite slowly, like $O(1/\log x)$, and moreover there are sets $A\subset[x,\infty)$ for which $f(A)\sim 1$ as $x\to\infty$ (in this connection see Conjecture \ref{conj:ESS} below). As such, Conjecture \ref{conj:EPS} is not susceptible to direct attack by computing partial sums up to $x$.
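To make the slow convergence concrete, a small sketch of ours sums $1/(p\log p)$ over primes up to $10^6$; the partial sum is still well short of $1.6366\cdots$, consistent with a tail of size roughly $1/\log 10^6 \approx 0.07$:

```python
import math

def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

partial = sum(1 / (p * math.log(p)) for p in primes_up_to(10**6))
# The full sum is f(P) = 1.6366...; the tail past x decays like 1/log x.
assert 1.5 < partial < 1.6366
```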
One potential strategy to approach Conjecture \ref{conj:EPS} is via integration. Namely,
\begin{align*}
f(A) = \sum_{a\in A}\frac{1}{a\log a} = \sum_{a\in A}\int_1^\infty a^{-t}\dd{t} = \int_1^\infty f_t(A)\dd{t},
\end{align*}
letting $f_t(A) = \sum_{a\in A} a^{-t}$. So Conjecture \ref{conj:EPS} would follow if, for every $t>1$ and every primitive set $A$,
\begin{align}\label{eq:ftA}
f_t(A) \ \le \ f_t(\mathcal P).
\end{align}
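The interchange rests on the elementary identity $\int_1^\infty a^{-t}\,\dd{t} = 1/(a\log a)$ for $a>1$; a quick numerical sketch (ours):

```python
import math

def f_int(a, T=100.0, steps=100000):
    # Midpoint rule for \int_1^T a^{-t} dt; the tail beyond T is negligible.
    h = (T - 1.0) / steps
    return h * sum(a ** -(1.0 + (k + 0.5) * h) for k in range(steps))

# Compare with the closed form 1/(a log a) for several values of a.
for a in (2, 3, 10, 97):
    assert abs(f_int(a) - 1.0 / (a * math.log(a))) < 1e-6
```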
However, it was shown in \cite{BM} that \eqref{eq:ftA} holds if and only if
\begin{align*}
t \ge \tau:=1.1403\cdots,
\end{align*}
where $t=\tau$ is the unique real solution to the equation
\begin{align*}
\sum_p p^{-t} = 1+\Big(1-\sum_p p^{-2t}\Big)^{1/2}.
\end{align*}
The fact that $\tau$ is markedly larger than 1 gives some indication as to why the Erd\H{o}s primitive set conjecture has remained open.
Similar analysis actually enables a disproof of a natural analogue of Conjecture \ref{conj:EPS} for the translated sum $f(A,h)=\sum_{a\in A}1/\big(a(\log a+h)\big)$, in that there are primitive $A$ for which $f(A,h)>f(\mathcal P,h)$ once $h\ge81$ \cite{LDM}, \cite{Laib}. This was refined down to just $h\ge1.04$ in \cite{Ltrans}, and suggests that the original conjecture (when $h=0$), if true, is only `barely' so.
Concerning \eqref{eq:ftA}, we also note Chan et al. \cite{CLP2prim} proved $f_t(A) \le f_t(\mathcal P)$ for all $t\ge 0.7983$ and all 2-primitive sets $A$, thereby resolving Conjecture \ref{conj:EPS} in this special case (also see \cite{CLPkprim}). Here a set $A$ is 2-primitive if no member of $A$ divides the product of 2 others.
A separate strategy for the problem is to split up $A$ according to the smallest prime factor. That is, for each prime $p$ let
$$A_p = \{ n\in A : n \text{ has least prime factor }p\}.$$
As in \cite{LPprim}, we say $p$ is {\it Erd\H{o}s strong} if the singleton set $\{p\}$ maximizes $f(A)$ among all primitive sets $A$ all of whose elements have least prime factor $p$. That is, $f(A_p)\le f(\{p\})=:f(p)$ for all primitive $A$. Conjecture \ref{conj:EPS} would follow if every prime is Erd\H{o}s strong, since then $f(A) = \sum_p f(A_p) \le f(\mathcal{P})$.
By a short argument (see Lemma \ref{lem:LMertfA}), a sufficient condition for a prime $p$ to be Erd\H{o}s strong is that
\begin{align}\label{eq:Mertprim}
e^{\gamma}\prod_{q<p}\Big(1-\frac{1}{q}\Big) \ \le \ \frac{1}{\log p}.
\end{align}
Here $q$ runs over primes. Note the two sides of this inequality are asymptotically equal by Mertens' prime product theorem. By direct computation, \eqref{eq:Mertprim} is satisfied by the first $10^8$ odd primes, but fails for $p=2$ since $\log 2> e^{-\gamma}$.
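This computation is easy to reproduce in miniature; the following sketch (ours) verifies \eqref{eq:Mertprim} for all odd primes below $1000$, and its failure at $p=2$:

```python
import math

def primes_up_to(n):
    # Simple trial-division prime list.
    ps = []
    for m in range(2, n + 1):
        if all(m % p for p in ps if p * p <= m):
            ps.append(m)
    return ps

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant
prod, failures = 1.0, []
for p in primes_up_to(1000):
    # At this point prod = prod_{q < p} (1 - 1/q).
    if math.exp(GAMMA) * prod > 1.0 / math.log(p):
        failures.append(p)
    prod *= 1.0 - 1.0 / p
assert failures == [2]  # only p = 2 fails in this range
```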
Moreover $99.999973\%$ of primes\footnote{More precisely, the set of such primes has discrete, logarithmic density equal to $0.99999973\cdots $ within $\mathcal P$.} satisfy \eqref{eq:Mertprim}, assuming the Riemann Hypothesis and the Linear Independence hypothesis\footnote{Namely, the sequence of numbers $\gamma_n > 0$ such that $\zeta(\frac{1}{2} + i\gamma_n) = 0$ is linearly independent over ${\mathbb Q}$.} \cite{LPrace}. This result is intimately related to the celebrated work of Rubinstein and Sarnak \cite{RubSarn} on the prime number race between $\pi(x)$ and $\textnormal{li}(x)$. On the Riemann Hypothesis alone \eqref{eq:Mertprim} fails for a positive proportion of primes $p$ (in log density), and even unconditionally \eqref{eq:Mertprim} is known to fail for infinitely many primes $p$. This perhaps suggests Conjecture \ref{conj:EPS} might be false, or at least beyond the reach of unconditional tools.
In this article we establish Conjecture \ref{conj:EPS}.
\begin{theorem}\label{thm:EPS}
For any primitive set $A$, we have
$f(A) \le f(\mathcal P)$.
\end{theorem}
Moreover, we show that every odd prime is Erd\H{o}s strong.
\begin{theorem}\label{thm:Estrongodd}
For any primitive set $A$ and any prime $p>2$, we have $f(A_p) \le f(p)$.
\end{theorem}
It remains an open question whether $p=2$ is Erd\H{o}s strong.
In another question related to Conjecture \ref{conj:EPS}, Erd\H{o}s, S\'ark\"ozy, and Szemer\'edi posed the following in 1968 \cite[eq. (11)]{ESS68}.
\begin{conjecture}[Erd\H{o}s--S\'ark\"ozy--Szemer\'edi] \label{conj:ESS}
We have
\begin{align*}
\lim_{x\to\infty}\sup_{\substack{A\subset[x,\infty)\\A\textnormal{ primitive}}}f(A) \ \le \ 1.
\end{align*}
\end{conjecture}
This also appears in \cite[p.\,244]{MathErdos} as Problem 2.2, and in \cite[p.\,224]{MathErdosII} as Problem 2.
Not much has been proven in this direction until very recently. Recall the set $\mathbb{N}_k$ of numbers with exactly $k$ prime factors (with multiplicity) lies in $[2^k,\infty)$. Lichtman and Pomerance \cite{LPprim} proved $f({\mathbb N}_k)\gg 1$, and in \cite{Lalmost} it was shown $f({\mathbb N}_k)\sim 1$ as $k\to\infty$. This means that if Conjecture \ref{conj:ESS} holds, then the limit must attain an {\it equality} of 1. We note \cite[Theorem 4.1]{Lalmost} gives for all ${\epsilon}>0$,
\begin{align}\label{eq:fNksim1}
f({\mathbb N}_k) = 1 + O_{\epsilon}(k^{{\epsilon}-1/2}).
\end{align}
Moreover, computations up to $k=20$ suggest the true rate of decay may be exponential $O(2^{-k})$, see \cite{Lalmost}.
The methods in this paper enable the following progress towards Conjecture \ref{conj:ESS}.
\begin{theorem}\label{thm:ESS}
We have
\begin{align*}
\lim_{x\to\infty}\sup_{\substack{A\subset[x,\infty)\\A\textnormal{ primitive}}}f(A) \ \le \ e^\gamma\frac{\pi}{4} \approx 1.399.
\end{align*}
\end{theorem}
\subsection*{Notation}
Let $p(a),P(a)$ denote the smallest and largest prime factors of $a\in{\mathbb Z}_{>1}$, respectively, and denote $a^*=a/P(a)$. Let $\Omega(n)$ denote the number of prime factors of $n$ (with multiplicity) and let ${\mathbb N}_k = \{n: \Omega(n)=k\}$. Define $f(a) = 1/(a\log a)$ and $f(A) = \sum_{a\in A}f(a)$ for $A\subset {\mathbb Z}_{>1}$. Let $\mathcal P$ be the set of prime numbers, whose elements we denote by $p$ and $q$, unless otherwise stated. Also $p^k\| n$ means $p^k\mid n$ and $p^{k+1}\nmid n$.
\subsection{Proof outline of Theorem \ref{thm:EPS}}
The proof is a refinement of the argument of \cite{LPprim}. The key new idea is to exploit the fact that $A$ cannot contain too many elements $a$ with $P(a)$ just slightly less than $a$. This improves the critical case in the argument of \cite{LPprim}, and ultimately leads to an improvement by a factor of $\pi/4$ in the contribution from each $a\in A$ which is not prime. Since $e^\gamma\pi/4<f(\mathcal P)$, this ultimately means that $f(A)$ is maximized when all elements are prime. (Additional care is needed for small numbers, using explicit bounds.)
Let us recall the rough argument of \cite{LPprim} (suppressing details for primes and small numbers). By Mertens' product theorem,
\begin{align}\label{eq:sumofden}
f(A)=\sum_{a\in A}\frac{1}{a\log{a}} < \sum_{a\in A}\frac{1}{a\log P(a)}\approx e^\gamma\sum_{a\in A}\frac{1}{a}\prod_{p<P(a)}\Big(1-\frac{1}{p}\Big).
\end{align}
But $a^{-1}\prod_{p<P(a)}(1-p^{-1})$ is the natural density of ${\rm L}_a=\{ba\;:\;p\mid b \Rightarrow p\ge P(a)\}$, and these sets turn out to be disjoint by primitivity of $A$ (Lemma \ref{lem:trichot}). So the sum of densities in \eqref{eq:sumofden} is trivially at most 1, leading to the bound $f(A)<e^\gamma$ for primitive $A$. This is inspired by the original 1935 argument of Erd\H{o}s \cite{Erdos35}.
There is a loss in the above argument when bounding $a$ by $P(a)$, and this loss is largest when $a$ is far from prime. We can save an additional factor of $\log{P(a)}/\log a$ for any individual $a\in A$, and this would be a significant improvement in the case $P(a)^2<a$, say. Therefore the critical case to handle is when $a\in A$ is composite with $P(a)$ close to $a$ in size. The key new ingredient (Proposition \ref{prop:maindensity}) shows that if $P(a)^{1+v}>a$ uniformly for all $a\in A$ (so the savings factor is $\log P(a)/\log a > 1/(1+v)$), then we can bound the sum of densities in \eqref{eq:sumofden} by $\sqrt{v}$. This refines the trivial bound of 1 in the range $0<v<1$, and quantifies the earlier statement that $A$ contains few elements $a$ with $P(a)$ slightly less than $a$. As the savings $1/(1+v)$ improves with $v$, the worst-case scenario is when the subset of $a\in A$ with $P(a)^{1+v}\approx a$ contributes about $\frac{\dd}{\dd v}\sqrt{v}=\frac{1}{2\sqrt{v}}$ to the sum of densities in \eqref{eq:sumofden}. Combining these ingredients ultimately leads to a savings of $\int_0^1\frac{\dd v}{2\sqrt{v}(1+v)}=\frac{\pi}{4}$, as desired.
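For the record, the closing integral evaluates exactly: substituting $v=u^2$ gives $\int_0^1 \frac{\dd v}{2\sqrt{v}(1+v)} = \int_0^1 \frac{\dd u}{1+u^2} = \arctan 1 = \frac{\pi}{4}$. A numerical sketch of ours:

```python
import math

# Midpoint rule for the substituted integral \int_0^1 du / (1 + u^2);
# the original \int_0^1 dv / (2 sqrt(v) (1 + v)) equals it via v = u^2.
steps = 100000
h = 1.0 / steps
I = h * sum(1.0 / (1.0 + ((k + 0.5) * h) ** 2) for k in range(steps))
assert abs(I - math.pi / 4) < 1e-8
```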
Lastly, the key Proposition \ref{prop:maindensity} relies on the following observation (Lemma \ref{lem:disjointLac}): not only are the sets ${\rm L}_a$ disjoint, but so too are the ${\rm L}_{ac}$ for many choices of integers $c$ (in fact, for all choices of $c$ with prime factors between $P_2(a)$ and $P_2(a)^{1/\sqrt{v}}$). Thus the sum of densities of these ${\rm L}_{ac}$ must be at most 1. But these sets ${\rm L}_{ac}$ are self-similar to the ${\rm L}_a$, and so the sum of their densities is roughly $1/\sqrt{v}$ times that of the ${\rm L}_a$, giving the desired bound $\sqrt{v}$.
\subsection{{\rm L}-primitive sets}
As outlined above, the subset of multiples of each $a\in A$,
\begin{align}\label{eq:defLa}
{\rm L}_a & := \big\{ba\in {\mathbb N} \;:\; p\mid b \implies p\ge P(a) \big\},
\end{align}
arises naturally in our proof. As such we shall introduce `{\rm L}' refinements of our common notions (here {\rm L} alludes to `lexicographic'). Specifically, if $n\in {\rm L}_a$ we say $n$ is an {\it {\rm L}-multiple} of $a$, and $a$ is an {\it{\rm L}-divisor} of $n$. Most importantly, we introduce the following key definition.
\begin{definition}
A set $A\subset {\mathbb Z}_{>1}$ is {\it {\rm L}-primitive} if $a'\notin {\rm L}_a$ for all distinct $a,a'\in A$.
\end{definition}
That is, $A$ is {\rm L}-primitive if no member of $A$ is an {\rm L}-multiple of another. In particular this definition is weaker than primitive.
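To ground the definition, here is a small sketch of ours that lists ${\rm L}_a\cap[1,N]$ and compares its empirical density with ${\rm d}({\rm L}_a)=\frac{1}{a}\prod_{p<P(a)}\big(1-\frac{1}{p}\big)$; for $a=6$ one has $P(6)=3$ and ${\rm d}({\rm L}_6)=\frac16\cdot\frac12=\frac1{12}$:

```python
def least_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def greatest_prime_factor(n):
    d, m, p = 2, n, 1
    while d * d <= m:
        while m % d == 0:
            p, m = d, m // d
        d += 1
    return max(p, m)

def L(a, N):
    # L_a ∩ [1, N]: L-multiples b*a with b = 1 or least prime factor of b >= P(a).
    P = greatest_prime_factor(a)
    return [b * a for b in range(1, N // a + 1)
            if b == 1 or least_prime_factor(b) >= P]

assert L(6, 40) == [6, 18, 30]  # multiples 6b with b odd

N = 10**6
density = len(L(6, N)) / N
assert abs(density - 1 / 12) < 1e-4  # matches d(L_6) = (1/6)(1 - 1/2)
```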
One may apply the basic argument as in \eqref{eq:sumofden} more generally for {\rm L}-primitive sets $A$, leading to the same bound $f(A) < e^\gamma$ (again ignoring small numbers). Moreover, {\rm L}-primitive sets play a central role in the proof of Theorem \ref{thm:EPS}. However, it turns out the bound $e^\gamma$ is essentially best possible for {\rm L}-primitive sets (see Proposition \ref{prop:Lprimsup}), which is markedly larger than $f(\mathcal P)$. This further highlights the subtlety of Conjecture \ref{conj:EPS}.
\subsection{Density and divisibility chains}
Recall the natural (asymptotic) density ${\rm d}(S)=\lim_{x\to\infty} |S\cap [1,x]|/x$ of a set $S\subset {\mathbb N}$. We also consider $\log$ density $\delta(S)$ and $\log\log$ density $\Delta(S)$, given by
\begin{align}\label{eq:densitydefs}
\delta(S) & = \lim_{x\to\infty} \frac{1}{\log x}\sum_{n\in S, n\le x}\frac{1}{n},\qquad\text{and}\quad
&\Delta(S) = \lim_{x\to\infty} \frac{1}{\log\log x}\sum_{n\in S, 1<n\le x}\frac{1}{n\log n},
\end{align}
provided these limits exist. Recall the corresponding upper densities $\overline{\rm d}(S)$, $\overline{\delta}(S)$, $\overline{\Delta}(S)$ always exist, by replacing $\lim_{x\to\infty}$ with $\limsup_{x\to\infty}$ (and similarly $\liminf_{x\to\infty}$ for lower densities).
Taking an abstract view, a primitive set is an antichain for the partial ordering of integers by divisibility. As such this naturally leads to the dual notion of a chain in this context. Namely, an infinite sequence of integers $1<d_1<d_2<\cdots$ is a {\it divisibility chain} if $d_j\mid d_{j+1}$ for all $j\ge1$. A classical 1937 theorem of Davenport and Erd\H{o}s \cite{DE37} asserts that if a set $A\subset {\mathbb N}$ has upper $\log$ density $\overline{\delta}(A)>0$, then it contains an infinite divisibility chain $D\subset A$.
Analogously, we introduce the following refinement.
\begin{definition}
An infinite sequence of integers $1<d_1<d_2<\cdots$ is an {\it{\rm L}-divisibility chain} if $d_{j+1}\in {\rm L}_{d_j}$ for all $j\ge1$.
\end{definition}
That is, $d_{j+1}$ is an {\rm L}-multiple of $d_j$ for all $j\ge1$. In particular this definition is stronger than a (mere) divisibility chain.
We refine the Davenport--Erd\H{o}s theorem to {\rm L}-divisibility chains.
\begin{theorem}\label{thm:LDavenErdos}
If a set $A\subset {\mathbb N}$ has upper $\log$ density $\overline{\delta}(A)>0$, then $A$ contains an infinite {\rm L}-divisibility chain.
\end{theorem}
In 1966 Erd\H{o}s, S\'ark\"ozy, and Szemer\'edi \cite[Theorem 1]{ESS66} quantified the Davenport--Erd\H{os} theorem by showing such a divisibility chain $D$ satisfies $\limsup_{y\to\infty}\sum_{d\in D,d\le y}1/\sqrt{\log\log y} > 0$, and proved such growth rate is best possible.
They also studied the analogous question for upper $\log\log$ density, which they write ``seems more interesting to us.'' Namely, in \cite[Theorem 2]{ESS66} they established the following quantitative result.
\begin{theorem}[Erd\H{o}s--S\'ark\"ozy--Szemer\'edi] \label{thm:ESSchain}
If $A\subset {\mathbb N}$ has upper $\log\log$ density $\overline{\Delta}(A)>0$, then there is an infinite divisibility chain $D\subset A$ of growth
\begin{align}\label{eq:ESSloglog}
\limsup_{y\to\infty}\sum_{\substack{d\in D\\d\le y}}\frac{1}{\log\log y} \ \ge \ \frac{\overline{\Delta}(A)}{e^\gamma}.
\end{align}
\end{theorem}
Analogously, we quantify Theorem \ref{thm:LDavenErdos} in the case of $\log\log$ density, thereby refining Theorem \ref{thm:ESSchain} of Erd\H{o}s--S\'ark\"ozy--Szemer\'edi to {\rm L}-divisibility chains.
\begin{theorem}\label{thm:ESSLchain}
If $A\subset {\mathbb N}$ has upper $\log\log$ density $\overline{\Delta}(A)>0$, then there is an infinite {\rm L}-divisibility chain $D\subset A$ of growth
\begin{align*}
\limsup_{y\to\infty}\sum_{\substack{d\in D\\d\le y}}\frac{1}{\log\log y} \ \ge \ \frac{\overline{\Delta}(A)}{e^\gamma}.
\end{align*}
\end{theorem}
In view of Proposition \ref{prop:Lprimsup} we believe that the lower bound $\overline{\Delta}(A)/e^\gamma$ above is best possible for {\rm L}-divisibility chains, though we are unable to settle this. Notably, this contrasts the situation in Theorem \ref{thm:ESSchain}, as Erd\H{o}s--S\'ark\"ozy--Szemer\'edi conjectured $\overline{\Delta}(A)/e^\gamma$ in \eqref{eq:ESSloglog} might be improved to $\overline{\Delta}(A)$, which would be best possible for divisibility chains, if true \cite[eq. (5)]{ESS66}.
\section{Preliminaries on {\rm L}-primitive sets}
Recall the set of {\rm L}-multiples ${\rm L}_a:=\{ba\in{\mathbb N} \,:\,p\mid b \Rightarrow p\ge P(a)\}$ from \eqref{eq:defLa}. In particular $a\in {\rm L}_a$ for $b=1$, and $p(b) \ge P(a)$ for $b>1$. For $A\subset {\mathbb N}$ define ${\rm L}_A :=\bigcup_{a\in A}{\rm L}_a$. Also let $A_a =A\cap {\rm L}_a$ so that ${\mathbb N}_a={\rm L}_a$ and $A_q = \{a\in A : p(a)=q\}$ for prime $q$.\footnote{Note the notation for $A_q$ differs slightly from what is used in \cite{EZ}, \cite{LPprim}}
Observe that $a\in {\rm L}_{a'}$ if and only if ${\rm L}_a\subset {\rm L}_{a'}$, as well as the following trichotomy.
\begin{lemma}\label{lem:trichot}
For any integers $a,a'>1$, if ${\rm L}_a\cap {\rm L}_{a'}\neq\emptyset$ then $a\in {\rm L}_{a'}$ or $a'\in {\rm L}_a$. Thus ${\rm L}_a\cap {\rm L}_{a'}=\emptyset$ or ${\rm L}_a\subset {\rm L}_{a'}$ or ${\rm L}_a\supset {\rm L}_{a'}$.
\end{lemma}
\begin{proof}
Suppose $ba=b'a' \in {\rm L}_a\cap {\rm L}_{a'}$. If $b=1$ or $b'=1$ then $a\in {\rm L}_{a'}$ or $a'\in {\rm L}_a$. Otherwise $b,b'>1$, so $P(a) \le p(b)$ and $P(a')\le p(b')$ imply $b\mid b'$ or $b'\mid b$. Thus $a' = a(b/b')\in {\rm L}_a$ or $a = a'(b'/b)\in {\rm L}_{a'}$ as well.
\end{proof}
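Lemma \ref{lem:trichot} is also easy to stress-test by brute force (a sketch of ours; note $a'\in {\rm L}_a$ is decided by checking $a\mid a'$ and that the cofactor $a'/a$ is $1$ or has least prime factor at least $P(a)$):

```python
def least_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def greatest_prime_factor(n):
    d, m, p = 2, n, 1
    while d * d <= m:
        while m % d == 0:
            p, m = d, m // d
        d += 1
    return max(p, m)

def in_L(x, a):
    # x in L_a  <=>  x = b*a with b = 1 or p(b) >= P(a).
    if x % a:
        return False
    b = x // a
    return b == 1 or least_prime_factor(b) >= greatest_prime_factor(a)

# If L_a and L_{a'} share a witness x, the lemma forces a in L_{a'} or a' in L_a.
M = 400
for a in range(2, 40):
    for a2 in range(2, 40):
        if a != a2 and any(in_L(x, a) and in_L(x, a2) for x in range(2, M)):
            assert in_L(a, a2) or in_L(a2, a)
```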
As such we see $A$ is {\rm L}-primitive if and only if the sets $\{{\rm L}_a\}_{a\in A}$ are pairwise disjoint.
\begin{corollary}\label{cor:Lsplit}
If $A$ is an {\rm L}-primitive set, then ${\rm L}_a$ and ${\rm L}_{a'}$ are disjoint for distinct $a,a'\in A$.
\end{corollary}
Recall ${\rm L}_a$ has natural density ${\rm d}({\rm L}_a) = \frac{1}{a}\prod_{p<P(a)}(1-\frac{1}{p})$. And by Mertens' product theorem $\prod_{p<x}(1-\frac{1}{p})\sim 1/(e^\gamma \log x)$, where $\gamma=.57721\cdots$ is the Euler-Mascheroni constant. By a short argument below, we relate $f(A)$ to the density of {\rm L}-multiples. This is essentially based on Erd\H{o}s \cite{Erdos35} (also see \cite[Lemma 1]{ESS67}, \cite[Proposition 2.1]{LPprim}).
\begin{lemma}\label{lem:LMertfA}
For an {\rm L}-primitive set $A$ and an integer $1<n\notin A$, we have $f(A_n) < e^\gamma\,{\rm d}({\rm L}_n)$.
\end{lemma}
\begin{proof}
We may assume $A=A_n$ is finite, since $f(A) = \lim_{x\to\infty} f(A\cap[1,x])$. As $n\notin A$ all elements of $A$ are composite. Also $A$ is {\rm L}-primitive so ${\rm d}({\rm L}_A)=\sum_{a\in A}{\rm d}({\rm L}_a)$ by Corollary \ref{cor:Lsplit}.
Next, Theorem 7 in \cite{RS1} implies $\prod_{p<x}\frac{p}{p-1} < e^\gamma\log(2x)$ for all $x>1$. Thus for any composite integer $a>1$, we have $a\ge 2P(a)$ so that
\begin{align*}
f(a) = \frac{1}{a\log a} \ \le \ \frac{1}{a\log(2P(a))} \ < \ \frac{e^\gamma}{a}\prod_{p<P(a)}\Big(1-\frac{1}{p}\Big) \ = \ e^\gamma\,{\rm d}({\rm L}_a).
\end{align*}
Hence $f(A) = \sum_{a\in A}f(a) < e^\gamma\,{\rm d}({\rm L}_A) \le e^\gamma\,{\rm d}({\rm L}_n)$ since $A\subset {\rm L}_n$.
\end{proof}
We shall also need a technical refinement of Lemma \ref{lem:LMertfA}. For this, we rewrite Mertens' product theorem as $\mu_x\sim 1$, where we denote
\begin{align}\label{eq:muxdef}
\mu_x := e^\gamma \log x\prod_{p<x}\Big(1-\frac{1}{p}\Big).
\end{align}
In particular, for a prime $q$ we have
\begin{align}\label{eq:fqdLq}
f(q) = \frac{1}{q\log q} = \frac{1}{q}\frac{e^\gamma}{\mu_q}\prod_{p<q}\Big(1-\frac{1}{p}\Big) =\frac{e^\gamma}{\mu_q}{\rm d}({\rm L}_q).
\end{align}
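The quantities $\mu_q$ of \eqref{eq:muxdef} are directly computable; a small sketch of ours reproduces, e.g., $\mu_2 = 1.235\cdots$ and $\mu_7 = 0.9242\cdots$ (cf. the table in the proof of Lemma \ref{lem:monotonic} below):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def mu(x):
    # mu_x = e^gamma * log(x) * prod_{p < x} (1 - 1/p), primes by trial division.
    prod = 1.0
    for p in range(2, math.ceil(x)):
        if all(p % d for d in range(2, int(p**0.5) + 1)):
            prod *= 1.0 - 1.0 / p
    return math.exp(GAMMA) * math.log(x) * prod

assert abs(mu(2) - 1.235) < 1e-3   # mu_2: empty product over p < 2
assert abs(mu(7) - 0.9242) < 1e-3  # mu_7 = e^gamma * log(7) * (1/2)(2/3)(4/5)
```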
We have the following explicit bounds for $\mu_x$, which critically are monotonic. We give upper bounds which hold for all real $x$, but for the lower bounds it turns out that it suffices to restrict to the subsequence of primes $q\in\mathcal P$.
\begin{lemma}[Monotonic bounds] \label{lem:monotonic}
For $q\in\mathcal P$ and $x\in {\mathbb R}$, define
\begin{align*}
m_q := \inf_{\substack{p\ge q\\ p\in \mathcal P}}\mu_p, \quad\text{and}\quad
M_x := \sup_{\substack{y\ge x\\ y\in {\mathbb R}}}\mu_y.
\end{align*}
Then we have
\[m_q \ge \begin{cases}
\mu_7 = 0.9242\cdots & q \le 7\\
\mu_{19} = 0.9467\cdots & 7< q \le 300\\
1- \frac{1}{2(\log q)^2} & q> 300.
\end{cases}
\quad\text{and}\quad
M_x \le \begin{cases}
\mu_2 = 1.235\cdots & x \le 2\\
1 + \frac{1}{2(\log(2\cdot10^9))^2} & 2< x \le 2\cdot10^9\\
1 + \frac{1}{2(\log x)^2} & x> 2\cdot10^9.
\end{cases}
\]
\end{lemma}
\begin{proof}
First, Rosser--Schoenfeld \cite[Theorem 7]{RS1} implies that $\mu_x$ satisfies the bounds
\begin{align*}
1-\frac{1}{2(\log x)^2} \ \overset{(x>285)}{\le} \ e^\gamma \log x\prod_{p< x}\Big(1-\frac{1}{p}\Big)
\ \overset{(x>1)}{\le} \ 1+\frac{1}{2(\log x)^2}.
\end{align*}
Note $\mu_x$ is increasing on $x\in(p,p']$ for consecutive primes $p,p'$. So the upper bound follows by computing $\mu_p<1$ for the first $10^8$ odd primes $p$ (note $p_{10^8} \ge 2\cdot10^9$). Hence $\mu_x<1$ for real $2< x\le 2\cdot10^9$. Below we display $\mu_q$ for the first few primes $q$, rounded to 4 significant digits.
\[\begin{array}{cc|cc|cc|cc|cc}
q & \mu_q & q & \mu_q & q & \mu_q & q & \mu_q & q & \mu_q\\
\hline
2 & 1.235 & 31 &{\bf0.9660}& 73 & 0.9766 & 127 & 0.9902 & 179 & 0.9909 \\
3 & 0.9784 & 37 & 0.9831 & 79 & 0.9809 & 131 & 0.9887 & 181 & 0.9874 \\
5 & 0.9555 & 41 & 0.9836 & 83 & 0.9795 & 137 & 0.9902 & 191 & 0.9921 \\
7 &{\bf0.9242}& 43 & 0.9720 & 89 & 0.9829 & 139 & 0.9858 & 193 & 0.9889 \\
11 & 0.9762 & 47 &{\bf0.9718} & 97 & 0.9906 & 149 & 0.9925 & 197 & 0.9876 \\
13 & 0.9492 & 53 & 0.9808 & 101 & 0.9890 & 151 & 0.9885 & 199 &{\bf 0.9844} \\
17 & 0.9679 & 59 & 0.9883 & 103 & 0.9834 & 157 & 0.9896 & & \\
19 &{\bf0.9467} & 61 & 0.9795 & 107 & 0.9818 & 163 & 0.9906 & & \\
23 &{\bf0.9551}& 67 & 0.9854 & 109 & 0.9765 & 167 & 0.9892 & & \\
29 & 0.9811 & 71 & 0.9841 & 113 &{\bf0.9749}& 173 & 0.9900 & &
\end{array}\]
The lower bound follows by identifying the primes $q$ for which $\mu_q = \inf_{p\ge q} \mu_p$ (in bold above), and then computing $\mu_{199} < \mu_p$ for $199<p\le 300$, as well as checking $\mu_{199} < 0.9846 < 1-\frac{1}{2(\log x)^2}$ for $x>300$. (In practice we shall only need $\mu_q$ for $q=7,19$.)
\end{proof}
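As a sanity check, the tabulated values of $\mu_q$ and the identity \eqref{eq:fqdLq} are easy to recompute; the sketch below verifies the two entries actually used later, $\mu_7$ and $\mu_{19}$, along with $\mu_2$:

```python
import math

EULER_GAMMA = 0.5772156649015329
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def mu(x):
    """mu_x = e^gamma * log(x) * prod_{p < x} (1 - 1/p), per the definition of mu_x."""
    prod = 1.0
    for p in PRIMES:
        if p >= x:
            break
        prod *= 1 - 1 / p
    return math.exp(EULER_GAMMA) * math.log(x) * prod

# Table entries, rounded to 4 significant digits.
assert abs(mu(7) - 0.9242) < 5e-4
assert abs(mu(19) - 0.9467) < 5e-4
assert abs(mu(2) - 1.235) < 5e-4

# The identity f(q) = (e^gamma / mu_q) * d(L_q), checked at q = 7.
d_L7 = (1 / 7) * (1 - 1/2) * (1 - 1/3) * (1 - 1/5)
assert abs(1 / (7 * math.log(7)) - math.exp(EULER_GAMMA) / mu(7) * d_L7) < 1e-12
```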
We may now prove a technical refinement of Lemma \ref{lem:LMertfA} using $\mu_q$.
\begin{lemma}\label{lem:fMertPnu}
Let $A$ be an {\rm L}-primitive set. Take $v\ge0$, an integer $1<n\notin A$, and denote $q=P(n)$. If $P(a)^{1+v}\le a$ for all $a\in A_n$, then
\begin{align*}
f(A_n) = \sum_{a\in A_n}\frac{1}{a\log a} \ &\le \ \frac{e^{\gamma}}{m_q}\,\frac{{\rm d}({\rm L}_{A_n})}{1+v}.
\end{align*}
\end{lemma}
\begin{proof}
We may assume $A = A_n$ is finite, since $f(A) = \lim_{x\to\infty} f(A\cap[1,x])$. As $n\notin A$ all elements of $A$ are composite. Also $A$ is {\rm L}-primitive so ${\rm d}({\rm L}_A)=\sum_{a\in A}{\rm d}({\rm L}_a)$ by Corollary \ref{cor:Lsplit}. Moreover $(1+v)\log P(a) \le \log a$ for all $a\in A$. Thus by definition of $\mu_{P(a)}$ in \eqref{eq:muxdef},
\begin{align*}
\frac{1}{a\log a} \ \le \ \frac{1}{1+v}\frac{1}{a\log P(a)} = \frac{e^\gamma}{\mu_{P(a)}}\frac{1}{(1+v)a}\prod_{p<P(a)}\Big(1-\frac{1}{p}\Big) = \frac{e^\gamma}{\mu_{P(a)}}\frac{{\rm d}({\rm L}_a)}{1+v}.
\end{align*}
By monotonicity $\mu_{P(a)}\ge m_{P(a)}\ge m_q$ for $a\in A\subset {\rm L}_n$. Hence we conclude
\begin{align*}
f(A) = \sum_{a\in A}\frac{1}{a\log a} \ \le \ \frac{e^\gamma}{m_q}\frac{1}{1+v}\sum_{a\in A}{\rm d}({\rm L}_a) = \frac{e^\gamma}{m_q}\frac{{\rm d}({\rm L}_A)}{1+v}.
\end{align*}
\end{proof}
\section{Primitive sets}
Given $v\in(0,1)$, we shall be interested in elements $a\in A$ for which $P(a)^{1+v}>a$, and their multiples $ac$, where $c\in C_a^v$ for
\begin{align}\label{eq:defCa}
C_a^v \ := \ \big\{c\in{\mathbb N} \;: \; p\mid c \implies p\in [P(a^*), P(a^*)^{1/\sqrt{v}})\big\}.
\end{align}
Note $c=1\in C_a^v$. Recall $a^*=a/P(a)$, so $P(a^*)$ is the second largest prime of $a$. Also if $1<c\in C_a^v$ then $P(c)\le P(a^*)^{1/\sqrt{v}}$ is markedly smaller than $P(a) \ge P(a^*)^{1/v}$.
The following key lemma provides an upgrade to Corollary \ref{cor:Lsplit} in the case when $A$ is primitive, not just ${\rm L}$-primitive. Namely, the ${\rm L}_{ac}$ are disjoint, and so the larger set $\{ac : a\in A, c\in C_a^v\}$ is ${\rm L}$-primitive.
\begin{lemma}\label{lem:disjointLac}
Let $A$ be a primitive set of composite numbers, and take $v\in (0,1)$. If $P(a)^{1+v}>a$ for all $a\in A$, then the collection of sets ${\rm L}_{ac}$, ranging over $a\in A, c\in C_a^v$, are pairwise disjoint.
\end{lemma}
\begin{proof}
Suppose ${\rm L}_{ac}\cap {\rm L}_{a'c'}\neq\emptyset$ for some $a, a'\in A$ and $c\in C_a^v$, $c'\in C_{a'}^v$. Without loss, by Lemma \ref{lem:trichot} we may assume $ac\in {\rm L}_{a'c'}$. Note if $c=1$ then $a\in {\rm L}_{a'c'}$ implies $a'\mid a'c'\mid a$, which forces $a=a'$ and $c'=1$ by primitivity of $A$. So assuming $(a,c)\neq (a',c')$ we deduce $c>1$.
We factor $ac=p_1\cdots p_k$ into primes $p_1\ge\cdots\ge p_k$, so $ac\in {\rm L}_{a'c'}$ implies $a'c'=p_j\cdots p_k$ for some index $1<j<k$. Since $P(a)>P(c)$ and $p(c)\ge P(a^*)$, we also have $a^* = p_i\cdots p_k$ for some $2<i\le k$. If $i\le j$ then $a'c'\mid a^*$ so $a'\mid a$, contradicting the primitivity of $A$. Hence $i>j$ so $a^*\mid a'c'$. Write $da^*=a'c'$ where $d=p_j\cdots p_{i-1}$, and note $P(d)=p_j=P(a')$. By definition of $1<c\in C_a^v$, we have
\begin{align}\label{eq:qPastar1}
p_j = P(d) \le P(c)< P(a^*)^{1/\sqrt{v}}.
\end{align}
Recall $P(a')^{v}> (a')^* \ge P((a')^*)$ for $a'\in A$. Now consider cases $c'>1$ and $c'=1$. When $1<c'\in C_{a'}^v$, we have $P(c')=p_{j+1}\ge p_i= P(a^*)$. Thus
\begin{align}\label{eq:qPastar2}
p_j = P(a') > P((a')^*)^{1/v} > P(c')^{1/\sqrt{v}} \ge P(a^*)^{1/\sqrt{v}}.
\end{align}
But \eqref{eq:qPastar2} contradicts \eqref{eq:qPastar1}, so ${\rm L}_{ac}$ and ${\rm L}_{a'c'}$ are disjoint.
Similarly when $c'=1$, we have $P((a')^*) = p_{j+1} \ge p_i = P(a^*)$ and so
\begin{align*}
p_j = P(a') > P((a')^*)^{1/v} \ge P(a^*)^{1/v}.
\end{align*}
This also contradicts \eqref{eq:qPastar1} (indeed $v<\sqrt{v}$). Hence ${\rm L}_{ac}$ and ${\rm L}_{a'}$ are disjoint in both cases.
\end{proof}
\begin{remark}
The exponent $1/\sqrt{v}$ in the definition of $C_a^v$ in \eqref{eq:defCa} is chosen as large as possible, constrained by the final steps \eqref{eq:qPastar1}, \eqref{eq:qPastar2} above. If one established a larger exponent in Lemma \ref{lem:disjointLac}, this would improve the final savings factor $\int_0^1\frac{\dd}{\dd v}[v^{1/2}]\frac{\dd{v}}{1+v} = \pi/4$.
\end{remark}
In the following proposition, we use Lemma \ref{lem:disjointLac} in order to bound the density of ${\rm L}_{A_n}$ by essentially a savings factor $\sqrt{v}$ from the trivial bound ${\rm d}({\rm L}_{n})$, when $P(a)^{1+v}>a$ for all $a\in A_n$.
\begin{proposition}\label{prop:maindensity}
Let $A$ be a finite primitive set. Take $v\in (0,1)$, an integer $n>1$ with $n\notin A$, and denote $q=P(n)$. If $P(a)^{1+v}>a$ for all $a\in A_n$ then
\begin{align}\label{eq:maindensity}
{\rm d}({\rm L}_{A_n}) \ \le \ \sqrt{v}\; r_q\;{\rm d}({\rm L}_n)
\end{align}
for the ratio $r_q := M_q/m_q$ when $q\ge3$, and $r_2:=r_3$.
\end{proposition}
\begin{proof}
Without loss assume $A = A_n$. Then $ac\in {\rm L}_n$ for all $a\in A$, $c\in C_a^v$ (recall $p(ac)=p(a)$), and so ${\rm L}_{ac} \subset {\rm L}_n$. Note the condition $P(a)^{1+v} > a$ is equivalent to $P(a)^v > a^*$, and $v<1$ implies $P(a)\nmid a^*$. By Lemma \ref{lem:disjointLac}, we have the following (finite) disjoint union,
\begin{align}\label{eq:disjointunion}
{\rm L}_n \ \supset \ \bigcup_{a\in A}\bigcup_{c\in C_a^v}{\rm L}_{ac}.
\end{align}
Thus taking the density of \eqref{eq:disjointunion}, we obtain
\begin{align}\label{eq:denprop}
{\rm d}({\rm L}_n) & \ge {\rm d}\bigg(\bigcup_{a\in A} \bigcup_{c\in C_a^v}{\rm L}_{ac}\bigg)
= \sum_{a\in A}\sum_{c\in C_a^v}{\rm d}({\rm L}_{ac}) = \sum_{a\in A}{\rm d}({\rm L}_a)\,\sum_{c\in C_a^v}\frac{1}{c},
\end{align}
noting $P(a)>P(c)$ for $1<c\in C_a^v$, so ${\rm L}_{ac} = \{bac: p(b) \ge P(a)\} = c\cdot{\rm L}_{a}$. Then by definitions of $C_a^v$ and $\mu_q$ in \eqref{eq:defCa} and \eqref{eq:muxdef},
\begin{align}\label{eq:sumCa}
\sum_{c\in C_a^v}\frac{1}{c}\ & = \prod_{p\in [P(a^*), P(a^*)^{1/\sqrt{v}})}\Big(1-\frac{1}{p}\Big)^{-1} = \prod_{p<P(a^*)^{1/\sqrt{v}}}\Big(1-\frac{1}{p}\Big)^{-1}\prod_{p<P(a^*)}\Big(1-\frac{1}{p}\Big) \nonumber\\
&= \frac{\log P(a^*)^{1/\sqrt{v}}}{\mu_{P(a^*)^{1/\sqrt{v}}}} \frac{\mu_{P(a^*)}}{\log P(a^*)} = \frac{\mu_{P(a^*)}}{\mu_{P(a^*)^{1/\sqrt{v}}}}\frac{1}{\sqrt{v}}.
\end{align}
When $q\ge3$, we use $\mu_{P(a^*)}/\mu_{P(a^*)^{1/\sqrt{v}}} \ge m_q/M_q=1/r_q$, which follows by monotonicity of $m_q,M_q$ in Lemma \ref{lem:monotonic}, and that $P(a^*),q\in\mathcal P$. Hence plugging \eqref{eq:sumCa} back into \eqref{eq:denprop},
\begin{align*}
{\rm d}({\rm L}_n) & \ge \frac{1}{\sqrt{v}\,r_q}\sum_{a\in A}{\rm d}({\rm L}_a) = \frac{1}{\sqrt{v}\,r_q}\,{\rm d}({\rm L}_A)
\end{align*}
as desired.
The result similarly holds when $q=2$: if $P(a^*)\ge 3$ then $\mu_{P(a^*)}/\mu_{P(a^*)^{1/\sqrt{v}}} \ge m_3/M_3=1/r_3$ as before. And if $P(a^*)= 2$ then $\mu_{2}/\mu_{2^{1/\sqrt{v}}} \ge 1$ also suffices.
\end{proof}
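The Euler-product evaluation of $\sum_{c\in C_a^v}1/c$ in \eqref{eq:sumCa} can be confirmed by direct enumeration for a toy prime window (a numerical sketch, truncating the infinite sum at a large cutoff):

```python
def restricted_sum(Q, limit, i=0, c=1):
    """Sum of 1/c over integers c <= limit whose prime factors all lie in Q,
    each generated exactly once (primes taken in nondecreasing index order)."""
    total = 1 / c
    for j in range(i, len(Q)):
        if c * Q[j] <= limit:
            total += restricted_sum(Q, limit, j, c * Q[j])
    return total

# Toy prime window standing in for [P(a*), P(a*)^{1/sqrt(v)}):
Q = [3, 5]
euler_product = 1.0
for p in Q:
    euler_product /= 1 - 1 / p    # prod_p (1 - 1/p)^{-1} = 1.875 here

assert abs(restricted_sum(Q, 10 ** 7) - euler_product) < 1e-3
```

The truncation error is negligible here since the reciprocal sum over $\{3,5\}$-smooth numbers converges rapidly.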
\section{Deduction of Theorems \ref{thm:EPS}, \ref{thm:Estrongodd}, \ref{thm:ESS}}
We now apply our analysis of the density of L-multiples to our original sum of interest $f(A) = \sum_{a\in A}\frac{1}{a\log a}$. First we need a simple lemma on bounding certain monotonic sequences.
\begin{lemma}\label{lem:mass}
For $k\ge1$, let $c_0\ge c_1\ge\cdots\ge c_k\ge 0$ and $0=D_0\le D_1\le \cdots \le D_k$. If $d_1,\ldots,d_k\ge0$ satisfy $\sum_{j\le i}d_j \le D_i$ for all $i\le k$, then we have
\begin{align*}
\sum_{i\le k}c_i d_i \ \le \ \sum_{i\le k}c_i(D_i - D_{i-1}).
\end{align*}
\end{lemma}
\begin{proof}
By rearranging sums,
\begin{align*}
\sum_{i\le k}c_id_i=\sum_{i\le k}c_i\Big(\sum_{j\le i}d_j-\sum_{j\le i-1}d_j\Big) =\sum_{i\le k-1}(c_i-c_{i+1})\sum_{j\le i}d_j \ + \ c_k\sum_{i\le k}d_i.
\end{align*}
Since $c_i \ge c_{i+1}$ and $\sum_{j\le i} d_j \le D_i$, we conclude
\begin{align*}
\sum_{i\le k}c_id_i \le \sum_{i\le k-1}(c_i-c_{i+1})D_i \ + \ c_kD_k = \sum_{i\le k}c_i (D_i-D_{i-1}).
\end{align*}
\end{proof}
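The lemma is a summation-by-parts bound; a randomized numerical check (a sketch with arbitrary feasible data, not tied to the quantities in the paper) is below:

```python
import random

# Randomized check of the lemma: with c_0 >= ... >= c_k >= 0,
# 0 = D_0 <= ... <= D_k, and d_i >= 0 whose partial sums satisfy
# sum_{j <= i} d_j <= D_i, we must have
#     sum_i c_i d_i  <=  sum_i c_i (D_i - D_{i-1}).
random.seed(1)
for _ in range(500):
    k = 6
    c = sorted((random.random() for _ in range(k + 1)), reverse=True)
    D = [0.0] + sorted(random.random() for _ in range(k))
    d, partial = [0.0], 0.0
    for i in range(1, k + 1):
        step = random.uniform(0.0, D[i] - partial)  # keeps partial sums feasible
        d.append(step)
        partial += step
    lhs = sum(c[i] * d[i] for i in range(1, k + 1))
    rhs = sum(c[i] * (D[i] - D[i - 1]) for i in range(1, k + 1))
    assert lhs <= rhs + 1e-12
```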
To motivate the remainder of the proof, we offer a probabilistic interpretation of Proposition \ref{prop:maindensity}: for $v\ge0$, consider $D(v):=\sup_A{\rm d}({\rm L}_{A_n})/{\rm d}({\rm L}_n)$, ranging over primitive sets $A$ such that $P(a)^{1+v} > a$ for all $a\in A$. Note $D(v)$ may be viewed as a `cumulative distribution function', since $D(0)=0$ and $D(v)\to 1$ as $v\to\infty$. Now Proposition \ref{prop:maindensity} essentially bounds $D(v)$ by $\sqrt{v}$. Using the corresponding bound $1/2\sqrt{v}$ for the `probability density function', we establish quantitative bounds below.
\begin{proposition}\label{prop:oddstrongdLn}
For any primitive set $A$, and any integer $n\notin A$ with $q=P(n)\ge3$,
\begin{align*}
f(A_n) \ \le \ \frac{\pi}{4}\frac{M_q}{m_q^2}\;e^\gamma {\rm d}({\rm L}_n).
\end{align*}
\end{proposition}
\begin{proof}
Without loss, we may assume $A=A_n$ is finite, since $f(A) = \lim_{x\to\infty} f(A\cap[1,x])$. Also $n\notin A$ implies all elements of $A$ are composite.
Take $k\ge1$ and any sequence $0=v_0<v_1<\cdots<v_k=1$ and partition the set $A = \bigcup_{0\le i\le k} A_{(i)}$, where $A_{(k)}=\{a\in A: P(a)^2 \le a\}$ and for $0\le i< k$,
\begin{align*}
A_{(i)} = \{a\in A : P(a)^{1+v_{i}} \le a < P(a)^{1+v_{i+1}}\}.
\end{align*}
Then applying Lemma \ref{lem:fMertPnu} to each $A_{(i)}$,
\begin{align}\label{eq:fAdLAweight}
f(A) & = \sum_{0\le i\le k} f(A_{(i)}) \ \le \ \frac{e^\gamma}{m_q} \sum_{0\le i\le k} \frac{{\rm d}({\rm L}_{A_{(i)}})}{1+v_i}.
\end{align}
Note since $A$ is primitive, $\{{\rm L}_{A_{(i)}}\}_{i\le k}$ are pairwise disjoint. Also for each $j< k$, the first $j$ components are $\bigcup_{0\le i\le j}A_{(i)}=\{a\in A:a<P(a)^{1+v_{j+1}}\}=:A^{(j)}$, so by Proposition \ref{prop:maindensity} they have density
\begin{align*}
\sum_{0\le i\le j}{\rm d}({\rm L}_{A_{(i)}}) = {\rm d}({\rm L}_{A^{(j)}})\le \sqrt{v_{j+1}} \,r_q\,{\rm d}({\rm L}_n).
\end{align*}
Also for $j=k$ we have $\sum_{0\le i\le k}{\rm d}({\rm L}_{A_{(i)}}) = {\rm d}({\rm L}_{A}) \le {\rm d}({\rm L}_n)$, which is trivially less than $r_q{\rm d}({\rm L}_n)$. Let $c_i = \frac{1}{1+v_{i}}$, $d_i = {\rm d}({\rm L}_{A_{(i)}})$, $D_i = \sqrt{v_{i+1}} \,r_q\,{\rm d}({\rm L}_n)$ (here we let $v_{k+1}=v_k$ so that $D_k - D_{k-1}=0$). Thus by Lemma \ref{lem:mass} we have
\begin{align*}
\sum_{0\le i\le k} \frac{{\rm d}({\rm L}_{A_{(i)}})}{1+v_{i}} = \sum_{0\le i\le k}c_id_i \ \le \ \sum_{0\le i\le k}c_i(D_i - D_{i-1})
=r_q\,{\rm d}({\rm L}_n)\sum_{0\le i\le k} \frac{\sqrt{v_{i+1}}-\sqrt{v_{i}}}{1+v_{i}}.
\end{align*}
Hence the weighted sum in \eqref{eq:fAdLAweight} is bounded by
\begin{align}\label{eq:Riemannsum}
f(A) & \le \frac{r_q}{m_q}\,e^\gamma\,{\rm d}({\rm L}_n)\sum_{1\le i\le k} \frac{\sqrt{v_{i}}-\sqrt{v_{i-1}}}{1+v_{i-1}}.
\end{align}
As \eqref{eq:Riemannsum} holds for any partition $0=v_0<v_1<\cdots<v_k=1$, we may set $v_i=\frac{i}{k}$ and obtain the corresponding integral,
\begin{align*}
\lim_{k\to\infty}\sum_{1\le i\le k} \frac{\sqrt{v_i}-\sqrt{v_{i-1}}}{1+v_{i-1}} = \lim_{k\to\infty}\sum_{1\le i\le k}\int_{v_{i-1}}^{v_i} \frac{\dd}{\dd v}\Big[\sqrt{v}\Big]\frac{\dd{v}}{1+v_{i-1}}
= \int_0^1 \frac{\dd{v}}{2\sqrt{v}(1+v)} = \frac{\pi}{4}.
\end{align*}
Hence we conclude $f(A) \le \frac{\pi}{4}\,\frac{r_q}{m_q}\,e^\gamma\,{\rm d}({\rm L}_n)$ as desired.
\end{proof}
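The limiting value $\pi/4$ of the Riemann sums in the proof can be confirmed numerically (a quick sketch):

```python
import math

def riemann_sum(k):
    """sum_{1 <= i <= k} (sqrt(v_i) - sqrt(v_{i-1})) / (1 + v_{i-1}), with v_i = i/k."""
    total = 0.0
    for i in range(1, k + 1):
        v_prev, v_cur = (i - 1) / k, i / k
        total += (math.sqrt(v_cur) - math.sqrt(v_prev)) / (1 + v_prev)
    return total

# The sums decrease toward int_0^1 dv / (2 sqrt(v)(1+v)) = pi/4 as k grows,
# since the weight 1/(1+v) is evaluated at the left endpoint.
assert riemann_sum(10 ** 5) > math.pi / 4
assert abs(riemann_sum(10 ** 5) - math.pi / 4) < 1e-4
```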
We illustrate the value of these bounds by deducing Theorem \ref{thm:Estrongodd} in quantitative form.
\begin{corollary}
Let $A$ be a primitive set, and take an odd prime $p$. If $p\notin A$ then we have $f(A_p) <.901\,f(p)$, and moreover $f(A_p) \le (\frac{\pi}{4}+o(1))f(p)$ as $p\to\infty$. In addition, if $p>23$ and $2p\notin A$ then $f(A_{2p}) < f(2p)$.
\end{corollary}
\begin{proof}
For an odd prime $q$ define $b_q:=\frac{\pi}{4}\frac{M_q}{m_q^2}\mu_q$. Then Proposition \ref{prop:oddstrongdLn} shows that if $n\notin A$ we have
\begin{align*}
f(A_n) \le \frac{\pi}{4}\frac{M_q}{m_q^2}e^\gamma {\rm d}({\rm L}_n) = \frac{q}{n}\,b_q f(q)
\end{align*}
with $q=P(n)\ge3$, recalling ${\rm d}({\rm L}_n) = \frac{q}{n}{\rm d}({\rm L}_q)$ and \eqref{eq:fqdLq}. In particular for $n=q,2q$ we have $f(A_q)\le b_qf(q)$ and $f(A_{2q})\le \frac{1}{2}b_qf(q)$. Note $\mu_q,m_q,M_q\sim 1$ implies $b_q \sim\frac{\pi}{4}$ as claimed. Also the first few values of $b_q$ are displayed below.
\[\begin{array}{cc|cc}
q & b_q & q & b_q\\
\hline
3 & 0.9006 & 23 & 0.8232\\
5 & 0.8795 & 29 & 0.8266\\
7 & 0.8507 & 31 & 0.8139\\
11 & 0.8564 & 37 & 0.8184\\
13 & 0.8327 & 41 & 0.8189\\
17 & 0.8491 & 43 & 0.8092\\
19 & 0.8305 & 47 & 0.8090
\end{array}\]
Observe for $q>7$, we have
\begin{align*}
f(A_q) \le \frac{\pi}{4} \Big(\frac{M_q}{m_{11}}\Big)^2 f(q) \le \frac{\pi}{4} \Big(\frac{1+\frac{1}{2(\log(2\cdot 10^9))^2}}{\mu_{19}}\Big)^2 f(q) < .879 f(q).
\end{align*}
In particular, with the table, we see $f(A_q) < .901 f(q)$ for all $q>2$ as claimed.
Finally, we note $f(A_{2q}) < f(2q)$ whenever $b_q < \frac{\log q}{\log(2q)}$. The result then follows since
\begin{align}\label{eq:logp2p}
b_q = \frac{\pi}{4}\frac{M_q}{m_q^2} \mu_q \ > \ \frac{\log q}{\log(2q)} \quad\qquad\text{iff} \quad q\le 23.
\end{align}
Indeed, this may be checked directly for $q< 47$. And for $q\ge47$ we observe that $\log q/\log(2q)\ge\log47/\log94\ge.847$ exceeds $b_q \le \frac{\pi}{4}(M_{2\cdot 10^9}/m_{47})^2 \le .834$.
\end{proof}
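The table of $b_q$ can be reproduced from the definitions; in the sketch below, $m_q$ is realized as a minimum of $\mu_p$ over the tabulated primes $p\in[q,199]$ (valid for $q\le199$ by the monotonic-bounds lemma), and $M_q$ is replaced by the upper bound $M = 1+\frac{1}{2(\log(2\cdot10^9))^2}$:

```python
import math

EULER_GAMMA = 0.5772156649015329

def sieve(n):
    flags = [True] * n
    flags[0] = flags[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            flags[p * p::p] = [False] * len(flags[p * p::p])
    return [p for p, f in enumerate(flags) if f]

PRIMES = sieve(200)

def mu(x):
    prod = 1.0
    for p in PRIMES:
        if p >= x:
            break
        prod *= 1 - 1 / p
    return math.exp(EULER_GAMMA) * math.log(x) * prod

# Upper bound M >= M_q for q <= 2*10^9, from the monotonic-bounds lemma.
M = 1 + 1 / (2 * math.log(2e9) ** 2)

def b(q):
    """b_q = (pi/4) (M_q / m_q^2) mu_q, with m_q taken over tabulated primes
    p in [q, 199] and M_q replaced by the bound M."""
    m_q = min(mu(p) for p in PRIMES if p >= q)
    return math.pi / 4 * M / m_q ** 2 * mu(q)

assert abs(b(3) - 0.9006) < 5e-4
assert abs(b(23) - 0.8232) < 5e-4
assert abs(b(47) - 0.8090) < 5e-4
assert max(b(q) for q in PRIMES if q > 2) < 0.91   # b_q < 1 for these odd primes
```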
Importantly $b_q<1$ for all odd $q$, which means every odd prime is Erd\H os strong. However it remains an open question whether $q=2$ is Erd\H os strong. Now if $2\in A$ we immediately deduce $f(A) \le f(\mathcal P)=1.6366\cdots$. Thus to complete the proof of Theorem \ref{thm:EPS}, it suffices to assume $2\notin A$.
We achieve this in the result below. The argument is somewhat similar in spirit to that of Theorem 1.1 and Lemma 2.4 in \cite{LPprim}.
\begin{theorem}\label{thm:evenPS}
For any primitive set $A$ with $2\notin A$, we have $f(A) < 1.60$.
\end{theorem}
\begin{proof}
As $2\notin A$, denote by $K\ge 2$ the exponent for which $2^K \in A$; note $K$ is unique by primitivity. (If $2^k\notin A$ for all $k$, let $K=\infty$, in which case set $f(2^K)=0$.) Partition $A$ into sets $A^0=\{a\in A : 2\nmid a\}$ and $A^k = \{a\in A: 2^k\| a\}$ for $k\ge1$, and let $B^k = \{a/2^{k}: a\in A^k\}$. We have
\begin{align}\label{eq:fAsplit2kp}
f(A) & = f(2^K)+\sum_{p\in A}f(p) +\sum_{p\notin A}f(A_{p}) \nonumber\\
&\le f(2^K) + \sum_{\substack{p>2\\p\in A}} f(p) + \sum_{\substack{p>2\\p\notin A}} b_pf(p) + \sum_{\substack{p>2\\p\notin A}}\sum_{k=1}^{K-1} f((A^k)_{2^k p}),
\end{align}
since $f((A^0)_p) \le b_pf(p)$ if $p\notin A$ by Proposition \ref{prop:oddstrongdLn}. More generally, if $2^kp\notin A$ then
\begin{align*}
f((A^k)_{2^k p}) \le 2^{-k}f((B^k)_p) \le 2^{-k} b_p f(p).
\end{align*}
By comparison if $2^kp\in A$ then $f((A^k)_{2^k p}) = f(2^kp) \le 2^{-k}f(p)\frac{\log p}{\log(2p)}$.
Observe that either $2^kp\notin A$ for all $k\ge1$, or $2^Jp\in A$ for a (unique) $J=J_p\in [1,K)$, in which case $(A^k)_{2^k p}=\emptyset$ for all $k>J$ by primitivity. Thus by \eqref{eq:logp2p}, it suffices to assume $2^kp\notin A$ for all $k\ge1$ when $p\le 23$, and $2^Jp\in A$ for some $J\in [1,K)$ when $p> 23$, so
\begin{align*}
\sum_{\substack{p>2\\p\notin A}}\sum_{k=1}^{K-1} f((A^k)_{2^k p}) & \le (1-2^{1-K})\sum_{\substack{2<p\le 23\\p\notin A}} b_pf(p) + \sum_{\substack{p> 23\\p\notin A}}f(p)\Big((1-2^{1-J})b_p + 2^{-J}\frac{\log p}{\log(2p)} \Big)\\
& \le (1-2^{1-K})\sum_{\substack{2<p\le 23\\p\notin A}} b_pf(p) + \sum_{\substack{p> 23\\p\notin A}}b_p f(p),
\end{align*}
since $2b_p > 1 > \frac{\log p}{\log(2p)}$ for all $p>2$. Moreover $(2-2^{1-K})b_p \ge (2-1/2)\frac{\pi}{4} > 1.1$, so \eqref{eq:fAsplit2kp} becomes
\begin{align}\label{eq:fAC12}
f(A)
& \le f(2^K) + \sum_{\substack{p>2\\p\in A}} f(p) + (2-2^{1-K})\sum_{\substack{2<p\le 23\\p\notin A}} b_pf(p) + 2\sum_{\substack{p> 23\\p\notin A}}b_p f(p) \nonumber\\
& \le f(2^K) + (2-2^{1-K})\sum_{2<p\le 23} b_pf(p) + 2\sum_{p> 23}b_p f(p)\nonumber\\
& =: \ f(2^K) + (2-2^{1-K})C_1 + 2C_2.
\end{align}
Now we compute the constants $C_1,C_2$. First, let $M=M_{2\cdot10^9}=1.001\cdots$. Recalling $\mu_p f(p) = e^\gamma {\rm d}({\rm L}_p)$,
\begin{align}\label{eq:C2}
C_2 :=\sum_{p>23}b_p f(p) = \frac{\pi}{4} e^\gamma \sum_{p> 23}\frac{M_p}{m_p^2}{\rm d}({\rm L}_p) \le \frac{\pi}{4}\frac{Me^\gamma}{\mu_{23}^2} \prod_{p\le 23}\big(1-\tfrac{1}{p}\big) = 0.251135\cdots,
\end{align}
since $\sum_{p> q}{\rm d}({\rm L}_{p}) = \prod_{p\le q}(1-\frac{1}{p})$. Similarly we have
\begin{align}\label{eq:C1}
C_1 :=\sum_{2<p\le 23}b_p f(p)= \sum_{2< p\le 23} \frac{\pi}{4}\frac{M}{m_p^2} e^\gamma{\rm d}({\rm L}_p) = \frac{\pi}{4}\,M e^\gamma\cdot 0.39012\cdots = 0.5463\cdots
\end{align}
Here we computed
\begin{align*}
\sum_{2< p\le 23} \frac{1}{m_p^2} {\rm d}({\rm L}_{p}) & = \frac{1}{\mu_7^2}\sum_{2< p\le 7}{\rm d}({\rm L}_{p}) + \frac{1}{\mu_{19}^2}\sum_{7< p\le 19}{\rm d}({\rm L}_{p}) + \frac{1}{\mu_{23}^2} {\rm d}({\rm L}_{23})
= 0.390126\cdots,
\end{align*}
using $\sum_{q< p\le q'}{\rm d}({\rm L}_{p}) = \prod_{p\le q}(1-\frac{1}{p})-\prod_{p\le q'}(1-\frac{1}{p})$.
Hence plugging \eqref{eq:C2} and \eqref{eq:C1} back into \eqref{eq:fAC12},
\begin{align}
f(A) & \le f(2^K) + (2-2^{1-K})C_1 + 2C_2 \nonumber\\
& \le 2^{-K}\Big(\frac{1}{\log 4} - 2C_1\Big) + 2(C_1 + C_2)
\le 2(C_1 + C_2) \le 1.595.
\end{align}
Here we used $2C_1 > .722 > 1/\log 4$. This completes the proof.
\end{proof}
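The constants $C_1,C_2$ and the final numerical step may be verified directly (a sketch using the same Mertens products as the proof):

```python
import math

EULER_GAMMA = 0.5772156649015329
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23]

def prod_upto(q):
    """prod_{p <= q} (1 - 1/p)."""
    out = 1.0
    for p in PRIMES:
        if p <= q:
            out *= 1 - 1 / p
    return out

def mu(q):
    out = 1.0
    for p in PRIMES:
        if p < q:
            out *= 1 - 1 / p
    return math.exp(EULER_GAMMA) * math.log(q) * out

M = 1 + 1 / (2 * math.log(2e9) ** 2)   # upper bound for M_{2*10^9}

# sum_{q < p <= q'} d(L_p) = prod_{p <= q}(1-1/p) - prod_{p <= q'}(1-1/p)
inner = ((prod_upto(2) - prod_upto(7)) / mu(7) ** 2
         + (prod_upto(7) - prod_upto(19)) / mu(19) ** 2
         + (prod_upto(19) - prod_upto(23)) / mu(23) ** 2)
C1 = math.pi / 4 * M * math.exp(EULER_GAMMA) * inner
C2 = math.pi / 4 * M * math.exp(EULER_GAMMA) / mu(23) ** 2 * prod_upto(23)

assert abs(inner - 0.390126) < 1e-5
assert abs(C1 - 0.5463) < 5e-4
assert abs(C2 - 0.251135) < 2e-5
assert 2 * (C1 + C2) <= 1.595
assert 2 * C1 > 0.722 > 1 / math.log(4)
```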
\begin{remark}
An argument similar to that of Theorem \ref{thm:evenPS} shows $f(A_2) < C_1+C_2 < 0.80$ when $2\notin A$; we leave this to the interested reader. Note this bound improves on $f(A_2) < e^\gamma/2\approx 0.89$ from \cite[Proposition 2.1]{LPprim}, but unfortunately still exceeds $f(2) \approx 0.72$.
\end{remark}
\subsection{Proof of Theorem \ref{thm:ESS}}
Take ${\epsilon}>0$. We shall introduce large parameters $y=y_{\epsilon}$, $k=k_{{\epsilon},y}$, and $x=x_{{\epsilon},k}$.
By Lemma \ref{lem:LMertfA}, we have $f(A_n) \le e^\gamma{\rm d}({\rm L}_n)$ for any integer $n\notin A$, $n>1$, and when $y=y_{\epsilon}\in{\mathbb R}$ is sufficiently large by Proposition \ref{prop:oddstrongdLn} we have the sharper bound
\begin{align}\label{eq:AnPny}
f(A_n) \ \le \ (\frac{\pi}{4}e^\gamma + {\epsilon}){\rm d}({\rm L}_n) \qquad\text{provided}\quad P(n) > y.
\end{align}
Next for $k=k_{{\epsilon}}=k_{{\epsilon},y}\in{\mathbb N}$ sufficiently large we have the crude bound
\begin{align}\label{eq:NkPnley}
\sum_{\substack{n\in {\mathbb N}_{k}\\P(n)\le y}}{\rm d}({\rm L}_n) \le \sum_{\substack{n\ge 2^k\\P(n)\le y}}\frac{1}{n} < {\epsilon}.
\end{align}
Indeed, using Rankin's trick,
\begin{align}\label{eq:rankin}
R(u) := \sum_{\substack{n\le u\\P(n)\le y}}1 \le u^{1/2}\sum_{\substack{n\le u\\P(n)\le y}}n^{-1/2} \le u^{1/2}\prod_{p\le y}(1-p^{-1/2})^{-1} \ll_y u^{1/2},
\end{align}
so by partial summation,
\begin{align*}
\sum_{\substack{n>u\\P(n)\le y}}\frac{1}{n} = -\frac{R(u)}{u}+\int_u^\infty R(t)t^{-2}\dd{t} \ll_y u^{-1/2} + \int_u^\infty t^{-3/2}\dd{t} \ll_y u^{-1/2}.
\end{align*}
Then setting $u = 2^k-1$ above, we obtain \eqref{eq:NkPnley} for $k=k_{{\epsilon},y}$ sufficiently large.
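Rankin's bound \eqref{eq:rankin} can be checked by brute force for a toy smoothness level (a sketch with $y=5$; the helper names are ours):

```python
def smooth_count(u, y_primes):
    """R(u): count of n <= u all of whose prime factors lie in y_primes,
    each n generated once by multiplying primes in nondecreasing index order."""
    def rec(i, c):
        total = 1   # count c itself (n = 1 is smooth and counted)
        for j in range(i, len(y_primes)):
            if c * y_primes[j] <= u:
                total += rec(j, c * y_primes[j])
        return total
    return rec(0, 1)

Y_PRIMES = [2, 3, 5]   # toy smoothness level y = 5
for u in (10 ** 3, 10 ** 5, 10 ** 6):
    bound = u ** 0.5
    for p in Y_PRIMES:
        bound /= 1 - p ** -0.5   # Rankin factor prod_{p <= y} (1 - p^{-1/2})^{-1}
    assert smooth_count(u, Y_PRIMES) <= bound
```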
Finally, since $f({\mathbb N}_j)<2$ crudely for all $j$, there exists $x=x_{{\epsilon},k}\in{\mathbb R}$ sufficiently large so that $f\big(\bigcup_{j \le k}{\mathbb N}_j\cap [x,\infty)\big)<{\epsilon}$.
Now take a primitive set $A\subset [x,\infty)$, and consider the partition $A = A'\cup \bigcup_{n\in{\mathbb N}_k \setminus A}A_n$, where $A'$ consists of elements $a\in A$ with at most $k$ prime factors, and each other element $a\in A$ (with at least $k+1$ prime factors) then lies in $A_n=A\cap {\rm L}_n$, where $n\notin A$ is the product of the smallest $k$ primes of $a$. Hence we conclude
\begin{align*}
f(A) & = f(A') + \sum_{\substack{n\in {\mathbb N}_{k}\setminus A}}f(A_n)\\
& \le f\big(\bigcup_{j \le k}{\mathbb N}_j\cap [x,\infty)\big) + \sum_{\substack{n\in {\mathbb N}_{k}\setminus A\\P(n)\le y}}f(A_n)+ \sum_{\substack{n\in {\mathbb N}_{k}\setminus A\\P(n)> y}}f(A_n)\\
& \le {\epsilon} + e^\gamma\sum_{\substack{n\in {\mathbb N}_{k}\\P(n)\le y}}{\rm d}({\rm L}_n)+ (\frac{\pi}{4}e^\gamma + {\epsilon})\sum_{\substack{n\in {\mathbb N}_{k}\\P(n)>y}}{\rm d}({\rm L}_n) \ \le \ {\epsilon} + e^\gamma\,{\epsilon}+ (\frac{\pi}{4}e^\gamma + {\epsilon}),
\end{align*}
by \eqref{eq:AnPny}, \eqref{eq:NkPnley}, and noting $\sum_{n\in {\mathbb N}_{k}, P(n)>y}{\rm d}({\rm L}_n) \le 1$. Hence letting ${\epsilon}\to 0$ completes the proof of Theorem \ref{thm:ESS}.
\section{{\rm L}-primitive sets revisited}
\subsection{Upper density} As mentioned in the introduction, one of the striking early results in the study of primitive sets was due to Besicovitch \cite{Besicovitch}, who showed
\begin{align*}
\sup_{A\textnormal{ primitive}}\overline{{\rm d}}(A) \;=\; \frac{1}{2}.
\end{align*}
This came as quite a surprise at the time, in particular disproving a conjecture of Davenport. We shall extend this phenomenon further to {\rm L}-primitive sets, in Proposition \ref{prop:LBesicovitch}.
To proceed, we recall a result of Erd\H{o}s \cite{ErdosBes}, which bounds the density of the set of multiples of an interval. Also see Hall--Tenenbaum \cite[Theorem 21]{HTdivisor} for quantitatively stronger results. Denote the set of (all) multiples of $A\subset {\mathbb N}$ as ${\rm M}_A = \{na : n\in{\mathbb N}, a\in A\}$.
\begin{proposition}[Erd\H{o}s, 1936] \label{prop:ErdBes}
Let $\varepsilon(x)$ be any function with $\varepsilon(x)\to 0$ as $x\to\infty$. Then the upper density of ${\rm M}_{(x^{1-\varepsilon(x)},x]}$ tends to zero as $x\to\infty$.
\end{proposition}
We prove a Besicovitch-type result for {\rm L}-primitive sets, notably with full upper density.
\begin{proposition}\label{prop:LBesicovitch}
We have $\sup_A \overline{\rm d}(A) = 1$ over {\rm L}-primitive sets $A$.
\end{proposition}
\begin{proof}
Take $h\in{\mathbb Z}_{>1}$, ${\epsilon}>0$ and let $S = \{n\in{\mathbb N} : P(n)\le h\}$ be the set of $h$-smooth numbers. For a sequence of indices $k_1,k_2,\ldots$ to be determined, define intervals $I_i = (h^{k_i-1}, h^{k_i}]$. Let $S_i:=I_i\setminus S$, and note for $a\in S_i$ and $n\in {\rm L}_a$ with $n\neq a$ we have $n\ge P(a)a >h^{k_i}$, so $n\notin I_i\supset S_i$. In particular $a'\notin {\rm L}_a$ for distinct $a,a'\in S_i$, so each set $S_i$ is {\rm L}-primitive. Now define the {\rm L}-primitive set
\begin{align}
A = \bigcup_{j\ge1}S_j\setminus\bigcup_{1\le i<j}{\rm M}_{I_i}.
\end{align}
Recall $|S\cap [1,x]| \ll_h \sqrt{x}$ by \eqref{eq:rankin}. For each fixed $h>1$, by Proposition \ref{prop:ErdBes} we see $\overline{\rm d}({\rm M}_{(x/h,x]})\to0$ as $x\to\infty$. So for $k_i$ large enough, we may assume $\overline{\rm d}({\rm M}_{I_i}) < {\epsilon}/2^i$.
For each $i$ the set of multiples ${\rm M}_{I_{i}}$ is a periodic set with period (dividing) $(h^{k_i})!$. So assuming
$k_{i+1} \ge (h^{k_{i}})!$ the relative density of ${\rm M}_{I_i}$ inside $I_{i+1}$ is at most $2\overline{\rm d}({\rm M}_{I_i})$. Hence
\begin{align*}
|A\cap [1,h^{k_j}]| &\ge |I_j| - \big|S\cap [1,h^{k_j}]\big| - 2h^{k_j}\sum_{1\le i<j}\overline{\rm d}({\rm M}_{I_i})\\
&\ge (h^{k_j}-h^{k_j-1}) - O_h\big(h^{k_j/2}\big) - 2{\epsilon} h^{k_j}\sum_{i\ge1}2^{-i}.
\end{align*}
Thus dividing by $x=h^{k_j}$ we see $\overline{\rm d}(A)=\limsup_{x\to\infty}|A\cap [1,x]|/x \ge 1 - 1/h - 2{\epsilon}$. Taking $h\to\infty$ and ${\epsilon}\to0$ completes the proof.
\end{proof}
\subsection{The Erd\H{o}s {\rm L}-primitive set conjecture}
Sets of {\rm L}-multiples play a central role in our proof of Theorem \ref{thm:EPS}, as the mathematical structures arising from a probabilistic interpretation of \eqref{eq:sumofden},\footnote{In a variant from \cite{ESS67}, a set $A$ `possesses property I' if there is no solution to $a'=ba$ for $a,a'\in A$ with $p(b)>P(a)$. This is similar to $A$ being {\rm L}-primitive, but the latter imposes the inclusive inequality $p(b)\ge P(a)$, which arises naturally from a probabilistic viewpoint. This inclusivity leads to key structural properties, notably the trichotomy in Lemma \ref{lem:trichot}.} and implicit in the original 1935 argument of Erd\H{o}s \cite{Erdos35}.\footnote{The author was also recently shown the `prefix-free sets' of \cite{AKS}, which coincide with {\rm L}-primitive sets in the case of squarefree numbers.}
As such it is natural to pose the {\rm L}-primitive analogue of Conjecture \ref{conj:EPS}, namely that $f(A) \le f(\mathcal P)$ for all {\rm L}-primitive sets $A$.
However, this conjecture turns out to be false.
\begin{proposition}\label{prop:Lprimsup}
We have
\begin{align}\label{eq:Lprimsupmax}
\sup_{A\textnormal{ L-primitive}} f(A) \ &= \ \sum_p\max\{f(p),e^\gamma{\rm d}({\rm L}_p)\},
\quad \text{and} \qquad
\lim_{x\to\infty}\sup_{\substack{A\subset [x,\infty)\\A\textnormal{ L-primitive}}} f(A) \ = \ e^\gamma.
\end{align}
\end{proposition}
Note the prime sum in \eqref{eq:Lprimsupmax} above is at least (and well-approximated by) $f(\mathcal P)-f(2) + e^\gamma/2 \approx 1.805$. In particular it exceeds $f(\mathcal P) \approx 1.636$. As such Proposition \ref{prop:Lprimsup}, along with Conjecture \ref{conj:BM} and related work in the literature, highlights how the Erd\H{o}s primitive set conjecture is quite fragile under certain seemingly natural directions of generalization.
We now proceed to set up the proof of Proposition \ref{prop:Lprimsup}. First, the trichotomy in Lemma \ref{lem:trichot} leads to the following.
\begin{lemma}\label{lem:genset}
Every set $S\subset {\mathbb N}$ has a unique {\rm L}-primitive subset $\langle S\rangle$ with ${\rm L}_{\langle S\rangle} = {\rm L}_S$. In particular $\langle S\rangle=S$ if $S$ is {\rm L}-primitive.
\end{lemma}
\begin{proof}
For any $s_1,s_2\in S$, by Lemma \ref{lem:trichot} either ${\rm L}_{s_1}\cap {\rm L}_{s_2}=\emptyset$ or ${\rm L}_{s_1}\subset {\rm L}_{s_2}$ (or vice versa). Thus each $s\in S$ has a (unique) smallest {\rm L}-divisor $s'\in S$, inducing a map $S\to S:s\mapsto s'$. We define $\langle S\rangle$ as the image of this map. Explicitly this is
\begin{align}
\langle S\rangle := \{s\in S : s\notin {\rm L}_t \;\forall\; t<s, t\in S\}.
\end{align}
By minimality ${\rm L}_{s_1}\cap {\rm L}_{s_2}=\emptyset$ for all $s_1,s_2\in \langle S\rangle$, so $\langle S\rangle$ is {\rm L}-primitive. Moreover ${\rm L}_S = \bigcup_{s\in S}{\rm L}_s = \bigcup_{s'\in \langle S\rangle}{\rm L}_{s'} = {\rm L}_{\langle S\rangle}$, where the latter union over $\langle S\rangle$ is disjoint by {\rm L}-primitivity. This completes the proof.
\end{proof}
Next take $v>0$, $n\in{\mathbb Z}_{>1}$, and consider the set $D_v(n)$ of prime divisors of $n$ whose induced {\rm L}-divisor is not smooth, i.e.
\begin{align}
D_v(n) = \Big\{ p\mid n \; :\, \prod_{q^e\| n, q< p} q^e \ \le \ p^v \Big\}.
\end{align}
We cite the following result of Bovey, based on earlier work of Erd\H{o}s \cite[\S1.2]{HTdivisor}.
\begin{proposition}[Bovey, 1977] \label{prop:Bovey}
For each $v>0$ there is a set $N_v\subset {\mathbb N}$ of full density with
\begin{align}\label{eq:Dickman}
\frac{|D_v(n)|}{\log\log n} \ \to \ e^{-\gamma}\int_0^{v} \rho(x)\dd{x}
\end{align}
as $n\to\infty$ on $N_v$. Here $\rho$ is the Dickman--de Bruijn function.
\end{proposition}
\begin{remark}
In probability, the right-hand side of \eqref{eq:Dickman} is known as the Dickman distribution.
\end{remark}
In particular, $|D_v(n)|\gg_v \log\log n$ for all $n\in N_v$. Now we may define a map $\beta: N_v\to {\mathbb N}$ sending $n$ to its {\rm L}-divisor $\beta(n)=p\prod_{q^e\| n, q< p} q^e$, where $p$ is the largest prime in $D_v(n)$.
Define the {\rm L}-primitive generating set $B(v) := \langle \beta(N_v)\rangle$ as in Lemma \ref{lem:genset}. By construction ${\rm L}_{B(v)} = {\rm L}_{\beta(N_v)} \supset N_v$ has full density. Also, by definition of $\beta, D_v$,
\begin{align}\label{eq:P1u}
B(v)\subset \beta(N_v) \subset \{n\in{\mathbb N} : n \le P(n)^{1+v}\}.
\end{align}
We are now prepared to establish a local version of Proposition \ref{prop:Lprimsup}.
\begin{proposition}\label{prop:limsupLfAq}
For each prime $q$, we have
\begin{align*}
\lim_{y\to\infty}\sup_{\substack{A\subset[y,\infty)\\\textnormal{L-primitive }A\not\ni q}}f(A_q) \ = \ \sup_{\textnormal{L-primitive }A\not\ni q}f(A_q) \ = \ e^\gamma {\rm d}({\rm L}_q).
\end{align*}
\end{proposition}
\begin{proof}
By Lemma \ref{lem:LMertfA} we have $f(A_p) < e^\gamma {\rm d}({\rm L}_p)$ for all {\rm L}-primitive $A$ not containing $p$. It now suffices to provide {\rm L}-primitive sets $B\subset[y,\infty)$ with $f(B_q) \to e^\gamma {\rm d}({\rm L}_q)$ as $y\to\infty$.
Fix $v>0$. The {\rm L}-primitive set $B(v)$ in \eqref{eq:P1u} satisfies
\begin{align}\label{eq:fBuq}
f(B(v)_q) = \sum_{b\in B(v)_q}\frac{1}{b\log b} \ & \ge \ \frac{1}{1+v}\sum_{b\in B(v)_q}\frac{1}{b\log P(b)}.
\end{align}
Next, for $x>e^{e^{e^y}}$ we may assume $N_v\subset [x,\infty)$ while retaining full density. Observe then that $B(v)\subset [y,\infty)$ is our candidate {\rm L}-primitive set. Indeed, each $b\in B(v)$ has the form $b=\beta(n)$ for some $n\in N_v$, and by construction $\beta(n)$ is divisible by all primes $q\in D_v(n)$, so $b$ is composite with $b\ge |D_v(n)| \gg_v \log\log n \ge \log\log\log x > y$, for $y$ sufficiently large. And note Mertens' product theorem gives
\begin{align*}
{\rm d}({\rm L}_b) = \frac{1}{b}\prod_{p< P(b)}\Big(1-\frac{1}{p}\Big) = \frac{e^{-\gamma}+o_y(1)}{b\log P(b)}.
\end{align*}
Plugging back into \eqref{eq:fBuq}, we obtain
\begin{align*}
f(B(v)_q) \ge \frac{e^\gamma+o_y(1)}{1+v}\sum_{b\in B(v)_q}{\rm d}({\rm L}_b).
\end{align*}
Recall ${\rm L}_{B(v)} = {\rm L}_{\beta(N_v)} \supset N_v$ has full density, which implies $({\rm L}_{B(v)})_q={\rm L}_{B(v)_q}$ has full relative density ${\rm d}({\rm L}_{B(v)_q}) = {\rm d}({\rm L}_q)$. Hence by Corollary \ref{cor:Lsplit} this latter sum is
\begin{align*}
\sum_{b\in B(v)_q}{\rm d}({\rm L}_b) = {\rm d}({\rm L}_{B(v)_q}) = {\rm d}({\rm L}_q).
\end{align*}
Thus taking $y\to\infty$ and $v\to0$ gives $f(B(v)_q)\to e^\gamma{\rm d}({\rm L}_q)$ as desired.
\end{proof}
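The Mertens-type density formula ${\rm d}({\rm L}_b)=\frac{1}{b}\prod_{p<P(b)}(1-\frac1p)$ invoked in the proof above is easy to sanity-check by brute force. The following Python sketch assumes the convention (implicit in the factorizations later in the paper) that $n\in{\rm L}_a$ if and only if $n=am$ with least prime factor $p(m)\ge P(a)$; all function names here are ours, chosen for illustration only.

```python
def P(n):
    """Largest prime factor of n (with P(1) = 1 by convention)."""
    f, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            f, n = d, n // d
        d += 1
    return n if n > 1 else f

def p_least(n):
    """Least prime factor of n (p(1) treated as +infinity)."""
    if n == 1:
        return float('inf')
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def count_L_a(a, N):
    """Brute-force count of L-multiples n = a*m <= N with p(m) >= P(a)."""
    Pa = P(a)
    return sum(1 for m in range(1, N // a + 1) if p_least(m) >= Pa)

def dL(a):
    """Predicted density d(L_a) = (1/a) * prod_{p < P(a)} (1 - 1/p)."""
    prod = 1.0
    for p in range(2, P(a)):
        if p_least(p) == p:  # p is prime
            prod *= 1.0 - 1.0 / p
    return prod / a

N = 10**4
for a in (3, 4, 6, 15):
    # empirical density up to N matches the Euler-product prediction
    assert abs(count_L_a(a, N) / N - dL(a)) < 1e-3
```

For instance $a=15$ gives ${\rm d}({\rm L}_{15})=\frac1{15}\cdot\frac12\cdot\frac23=\frac1{45}$, matching the brute-force count to within the stated tolerance.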
\begin{proof}[Proof of Proposition \ref{prop:Lprimsup}]
Take {\rm L}-primitive $A\subset [x,\infty)$, so that $a\ge x$ for all $a\in A$; in particular $p\ge x$ for every prime $p\in A$. Then by Lemma \ref{lem:LMertfA},
\begin{align*}
f(A) = \sum_p f(A_p) & = \sum_{\substack{p< x\\\text{or }p\notin A}} f(A_p) + \sum_{p\in A, p\ge x}f(p)\\
& \le e^\gamma\sum_{\substack{p< x\\\text{or }p\notin A}} {\rm d}({\rm L}_p) + \sum_{p\in A, p\ge x}f(p)\\
& \le e^\gamma\sum_{p} {\rm d}({\rm L}_p) + \big(e^\gamma+o_x(1)\big)\sum_{p\ge x}{\rm d}({\rm L}_p) \ \le \ e^\gamma+o_x(1)
\end{align*}
by Mertens' theorem, and noting $\sum_p {\rm d}({\rm L}_p)=1$. Thus $\lim_x\sup_{A\subset[x,\infty)}f(A) \le e^\gamma$. Equality in the limsup holds for the choice of $B = \bigcup_{q} B(v)_q$ and taking $v\to0$ as in Proposition \ref{prop:limsupLfAq}. Observe such $B$ inherits {\rm L}-primitivity from the $B(v)_q$. Note in general, a union $B=\bigcup_q B_q$ is {\rm L}-primitive if each $B_q$ is {\rm L}-primitive. (By contrast $B=\bigcup_q B_q$ is not necessarily primitive even if each $B_q$ is primitive, e.g. $B=\{3,6\}$.)
Next, consider the primes $\mathcal Q = \{q : f(q) > e^\gamma {\rm d}({\rm L}_q)\} = \{q : 1/\log q > e^\gamma\prod_{p<q}(1-\frac{1}{p})\}$. By Lemma \ref{lem:LMertfA}, $f(A_q) < e^\gamma {\rm d}({\rm L}_q)$ when $q\notin A$, so in general $f(A_q) \le \max\{f(q),e^\gamma {\rm d}({\rm L}_q)\}$ for all {\rm L}-primitive $A$ and all primes $q$. Hence
\begin{align*}
f(A) = \sum_q f(A_q) < \sum_q\max\{f(q),e^\gamma {\rm d}({\rm L}_q)\} = f(\mathcal Q) + e^\gamma\big(1-{\rm d}({\rm L}_{\mathcal Q})).
\end{align*}
This bound is approached by the choice $B' = \mathcal Q\cup\bigcup_{q\notin \mathcal Q} B(v)_q$, taking $v\to0$ as in Proposition \ref{prop:limsupLfAq}. Again $B'$ inherits {\rm L}-primitivity from the $B(v)_q$, as desired.
\end{proof}
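The threshold set $\mathcal Q=\{q : 1/\log q > e^\gamma\prod_{p<q}(1-\frac1p)\}$ from the proof above can be computed explicitly for small primes. A hedged Python sketch (the helper names are ours; the constant is Euler's $\gamma$):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i in range(2, n + 1) if sieve[i]]

def in_Q(q, primes):
    """Membership test: 1/log q > e^gamma * prod_{p < q} (1 - 1/p)."""
    prod = 1.0
    for p in primes:
        if p >= q:
            break
        prod *= 1.0 - 1.0 / p
    return 1.0 / math.log(q) > math.exp(GAMMA) * prod

primes = primes_up_to(50)
Q = [q for q in primes if in_Q(q, primes)]
```

In this range $2\notin\mathcal Q$, since $1/\log 2\approx 1.44$ falls below $e^\gamma\approx 1.78$ (the empty product), while every odd prime up to $50$ turns out to satisfy the inequality.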
\section{Deduction of Theorems \ref{thm:LDavenErdos}, \ref{thm:ESSLchain}}
Our study of sets of {\rm L}-multiples leads to Theorem \ref{thm:LDavenErdos}, refining Davenport--Erd\H{o}s. This in turn enables the proof of Theorem \ref{thm:ESSLchain}, by a modification of the argument in \cite[Theorem 2]{ESS66}, with greater care given to the constants involved.
To proceed we first establish some lemmas.
\begin{lemma}\label{lem:densitysumdLa}
For any {\rm L}-primitive $A\subset {\mathbb N}$, we have $\underline{{\rm d}}({\rm L}_A)\ge \sum_{a\in A}{\rm d}({\rm L}_a)$. Moreover if $\sum_{a\in A}1/a < \infty$ then the natural density ${\rm d}({\rm L}_A)$ exists and equals $\sum_{a\in A}{\rm d}({\rm L}_a)$.
\end{lemma}
\begin{proof}
For each $a\in A$ we have $\underline{{\rm d}}({\rm L}_a) = {\rm d}({\rm L}_a)$. So taking the lower density of the finite (disjoint) union $\bigcup_{a\in A, a\le x}{\rm L}_a \subset {\rm L}_A$, we have $\sum_{a\in A, a\le x}{\rm d}({\rm L}_a) \le \underline{{\rm d}}({\rm L}_A)$ for all $x>1$. Thus $\sum_{a\in A}{\rm d}({\rm L}_a) \le \underline{{\rm d}}({\rm L}_A)$. Moreover if $\sum_{a\in A}1/a < \infty$, then for all $y>1$
\begin{align*}
\frac{1}{x}\sum_{\substack{n\le x\\ n\in {\rm L}_{A\cap (y,\infty)}}}1 \le \frac{1}{x}\sum_{a\in A,a> y}\left\lfloor \frac{x}{a}\right\rfloor \le \sum_{a\in A,a> y}\frac{1}{a} = o_y(1).
\end{align*}
Thus $\overline{{\rm d}}({\rm L}_{A\cap (y,\infty)}) \to 0$ as $y\to\infty$, and so combining with ${\rm d}({\rm L}_{A\cap [1,y]})=\sum_{a\in A, a\le y}{\rm d}({\rm L}_a)$ completes the proof.
\end{proof}
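Taking $A=\mathcal P$ in the lemma (an {\rm L}-primitive set, since no prime is an {\rm L}-multiple of another) recovers the identity $\sum_p {\rm d}({\rm L}_p)=1$ used earlier: the sum telescopes against the Euler product. A quick numerical confirmation in Python (our own variable names):

```python
partial_sum = 0.0  # running sum of d(L_p) = (1/p) * prod_{p' < p} (1 - 1/p')
prod = 1.0         # running product prod_{p <= x} (1 - 1/p)
x = 10**4

sieve = [True] * (x + 1)
for p in range(2, x + 1):
    if sieve[p]:
        for j in range(p * p, x + 1, p):
            sieve[j] = False
        partial_sum += prod / p   # d(L_p), product taken over primes p' < p
        prod *= 1.0 - 1.0 / p

# Telescoping: sum_{p <= x} d(L_p) = 1 - prod_{p <= x} (1 - 1/p) -> 1 as x -> oo.
assert abs(partial_sum - (1.0 - prod)) < 1e-9
```

At $x=10^4$ the partial sum is already about $0.94$, consistent with the remaining product being $\sim e^{-\gamma}/\log x$ by Mertens' theorem.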
The following lemma shows that sets of {\rm L}-multiples have a $\log$ density, refining Davenport--Erd\H{o}s' elementary proof for sets of (all) multiples \cite{DE51}.
\begin{lemma}
For any {\rm L}-primitive $A\subset {\mathbb N}$, the $\log$ density $\delta({\rm L}_A)$ exists and equals $\sum_{a\in A}{\rm d}({\rm L}_a)$.
\end{lemma}
\begin{proof}
In general $\underline{{\rm d}}(S) \; \le\; \underline{\delta}(S) \;\le \; \overline{\delta}(S) \; \le \; \overline{{\rm d}}(S)$ for any $S\subset {\mathbb N}$. So for $S={\rm L}_A$, by Lemma \ref{lem:densitysumdLa} it suffices to show
\begin{align}\label{eq:sumdLageDelt}
\sum_{a\in A}{\rm d}({\rm L}_a) \ \ge \ \overline{\delta}({\rm L}_A).
\end{align}
To this end, for $y>1$ let $A^y = \{a\in A : P(a)\le y\}$ and $L^y = \{n\in {\rm L}_A : P(n)\le y\}$. Note $L^y \subset {\rm L}_{A^y}$. Also $\sum_{a\in A^y}\frac{1}{a} \le \prod_{p\le y}(1-\frac{1}{p})^{-1} = O_y(1)$, so by Lemma \ref{lem:densitysumdLa} ${\rm d}({\rm L}_{A^y})$ exists and equals $\sum_{a\in A^y}{\rm d}({\rm L}_a)$ for all $y>1$. In particular ${\rm d}({\rm L}_{A^y})\to \sum_{a\in A}{\rm d}({\rm L}_a)$ as $y\to\infty$.
Now observe each $n\in {\rm L}_{A^y}$ is an {\rm L}-multiple of a unique $a\in A^y$, so for $x\ge y>1$ we have
\begin{align}\label{eq:recipLx}
\sum_{n\in L^x\cap {\rm L}_{A^y}}\frac{1}{n} &= \sum_{a\in A^y}\frac{1}{a}\prod_{P(a)\le p\le x}(1-\tfrac{1}{p})^{-1} = \sum_{a\in A^y}\frac{1}{a}\prod_{p<P(a)}(1-\tfrac{1}{p})\prod_{p\le x}(1-\tfrac{1}{p})^{-1} \nonumber\\
&= {\rm d}({\rm L}_{A^y})\prod_{p\le x}(1-\tfrac{1}{p})^{-1}.
\end{align}
In particular for $x=y$ we have $\sum_{n\in L^x}\frac{1}{n}={\rm d}({\rm L}_{A^x})\prod_{p\le x}(1-\tfrac{1}{p})^{-1}$.
Then for all $x\ge y>1$, by \eqref{eq:recipLx} and Mertens' theorem
\begin{align}\label{eq:notrecipLy}
\sum_{n\in L^x \setminus {\rm L}_{A^y}}\frac{1}{n}
&= \ \sum_{n\in L^x}\frac{1}{n} \ - \sum_{n\in L^x \cap {\rm L}_{A^y}}\frac{1}{n} \nonumber\\
&= \big({\rm d}({\rm L}_{A^x})-{\rm d}({\rm L}_{A^y})\big)\prod_{p\le x}(1-\tfrac{1}{p})^{-1}
\ \ll \ (\log x)\big({\rm d}({\rm L}_{A^x})-{\rm d}({\rm L}_{A^y})\big).
\end{align}
Recall the natural density ${\rm d}({\rm L}_{A^y})$ exists, and hence equals the $\log$ density $\delta({\rm L}_{A^y})$. Hence by \eqref{eq:notrecipLy}, for each $y>1$ the upper $\log$ density is
\begin{align}
\overline{\delta}({\rm L}_{A}) \ = \ \limsup_{x\to\infty}\frac{1}{\log x}\sum_{\substack{n\le x\\n\in {\rm L}_{A}}}\frac{1}{n}
& \le \lim_{x\to\infty}\frac{1}{\log x}\sum_{\substack{n\le x\\n\in {\rm L}_{A^y}}}\frac{1}{n} \ + \ \limsup_{x\to\infty}\frac{1}{\log x}\sum_{n\in L^x \setminus {\rm L}_{A^y}}\frac{1}{n} \nonumber\\
& \ = \delta({\rm L}_{A^y}) \ + \ \lim_{x\to\infty}O\big({\rm d}({\rm L}_{A^x})-{\rm d}({\rm L}_{A^y})\big) \nonumber\\
& \ = {\rm d}({\rm L}_{A^y}) \ + \ O\Big(\sum_{a\in A}{\rm d}({\rm L}_a)-{\rm d}({\rm L}_{A^y})\Big).
\end{align}
Hence ${\rm d}({\rm L}_{A^y})\to \sum_{a\in A}{\rm d}({\rm L}_a)$ as $y\to\infty$ implies $\overline{\delta}({\rm L}_{A})\le \sum_{a\in A}{\rm d}({\rm L}_a)$, giving \eqref{eq:sumdLageDelt}.
\end{proof}
\vspace{.5em}
\noindent
{\bf Theorem \ref{thm:LDavenErdos}.} {\it If $\overline{\delta}(A)>0$, then $A$ contains an infinite {\rm L}-divisibility chain.}
\begin{proof}
We claim all such $A\subset {\mathbb N}$ contain an element $a\in A$ such that $A\cap {\rm L}_a$ has positive upper $\log$ density. (In other words, if $\overline{\delta}(A) > 0$ then there exists an element $a\in A$ such that $\overline{\delta}(A\cap {\rm L}_a)>0$.)
Assume this claim holds. Let $A^1=A$ and $a_1=a$, and for $i\ge1$ suppose $\overline{\delta}(A^i)>0$. By the claim there exists $a_i\in A^i$ such that $A^{i+1} := A^i\cap {\rm L}_{a_i}$ has positive upper $\log$ density. Hence by induction we obtain an {\rm L}-divisibility chain $a_1,a_2,\ldots$, as desired.
Thus it remains to establish the above claim. For the sake of contradiction, suppose $A\cap {\rm L}_a$ has zero $\log$ density for all $a\in A$. Next, for the {\rm L}-primitive generating set $B = \langle A\rangle$, the preceding lemma shows that $\delta({\rm L}_B)=\sum_{b\in B}{\rm d}({\rm L}_b)$ exists. Then for $z>1$ large enough we have $\delta({\rm L}_{B\cap(z,\infty)}) = \sum_{b\in B, b>z}{\rm d}({\rm L}_b)< \overline{\delta}(A)$.
Now by assumption $\overline{\delta}(A\cap {\rm L}_b)=0$ for all $b\le z$, $b\in B$, and so
\begin{align*}
\overline{\delta}(A) = \overline{\delta}(A\cap {\rm L}_{B\cap (z,\infty)}) \ \le \ \delta({\rm L}_{B\cap (z,\infty)}) < \overline{\delta}(A),
\end{align*}
a contradiction. Hence there exists $a\in A$ such that $A\cap {\rm L}_a$ has positive upper $\log$ density.
\end{proof}
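To make the notion concrete: under the convention that $n\in{\rm L}_a$ if and only if $n=am$ with $p(m)\ge P(a)$ (as in the factorizations in the next proof), one can greedily extract an {\rm L}-divisibility chain from a set of full density. A toy Python sketch on $A=\{2,\dots,200\}$, with helper names of our own choosing:

```python
def P(n):
    """Largest prime factor of n (P(1) = 1)."""
    f, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            f, n = d, n // d
        d += 1
    return n if n > 1 else f

def p_least(n):
    """Least prime factor of n (p(1) = +infinity)."""
    if n == 1:
        return float('inf')
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def is_L_multiple(n, a):
    """n in L_a iff n = a*m with p(m) >= P(a)."""
    if n % a:
        return False
    return p_least(n // a) >= P(a)

A = range(2, 201)
chain = [2]
while True:
    # greedily append the smallest L-multiple of the current endpoint
    nxt = next((n for n in A if n > chain[-1] and is_L_multiple(n, chain[-1])), None)
    if nxt is None:
        break
    chain.append(nxt)

# Chain condition: every later element is an L-multiple of every earlier one.
assert all(is_L_multiple(chain[j], chain[i])
           for i in range(len(chain)) for j in range(i + 1, len(chain)))
```

Starting from $2$, the greedy choice doubles at each step, producing the chain $2,4,8,\dots,128$ inside $[2,200]$.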
\vspace{.5em}
\noindent
{\bf Theorem \ref{thm:ESSLchain}.} {\it If $\overline{\Delta}(A)>0$, then there is an infinite {\rm L}-divisibility chain $D\subset A$ of growth}
\begin{align*}
\limsup_{y\to\infty}\sum_{\substack{d\in D\\d\le y}}\frac{1}{\log\log y} \ \ge \ \frac{\overline{\Delta}(A)}{e^\gamma}.
\end{align*}
\begin{proof}
Take ${\epsilon}>0$. Without loss, we may suppose $A\subset [x_{\epsilon},\infty)$ for $x_{\epsilon}$ sufficiently large, so that by Proposition \ref{prop:Lprimsup} $f(A')\le e^\gamma+{\epsilon}$ for all {\rm L}-primitive subsets $A'\subset A$.
By definition of upper $\log\log$ density $\Delta:=\overline{\Delta}(A)>0$, there exists an unbounded sequence $(x_j)_{j=0}^\infty\subset{\mathbb R}$ such that for all $j\ge0$,
\begin{align}\label{eq:deltaAxj}
f(A\cap [1,x_j]) = \sum_{\substack{a\in A\\ a\le x_j}}\frac{1}{a\log a} > (\Delta-{\epsilon})\log\log x_j.
\end{align}
Recall the {\rm L}-primitive generating set $\langle S\rangle=\{s\in S: s\notin {\rm L}_t \; \forall t<s,t\in S\}$ of a set $S\subset {\mathbb N}$ from Lemma \ref{lem:genset}. We partition $A = \bigcup_{i\ge0} A^i$ into a disjoint collection of {\rm L}-primitive subsets, where $A^0 = \langle A\rangle$ and inductively $A^l = \langle A\setminus \bigcup_{i<l} A^i\rangle$. By construction each $a=a_l\in A^l$ has a (finite) chain of {\rm L}-divisors $a_i\in A^i$ with ${\rm L}_{a_0}\supset \cdots \supset {\rm L}_{a_l}={\rm L}_a$. Also note $f(A^i)\le e^\gamma+{\epsilon}$ by assumption, so in particular $A^i$ has zero $\log\log$ density. Hence \eqref{eq:deltaAxj} implies each $A^i$ in $A=\bigcup_{i\ge0} A^i$ is non-empty. Next, define the subset $B = \bigcup_{j\ge0}B_j$ for
\begin{align*}
B_j \ : = \ A\cap [1,x_j]\setminus \bigcup_{i<r_j} A^i, \qquad\text{where}\quad r_j := \frac{\Delta-2{\epsilon}}{e^\gamma+{\epsilon}}\log\log x_j.
\end{align*}
Note the sets $B_j$ are pairwise disjoint: Indeed, since $A = \bigcup_{i\ge0} A^i$, for each $j$ we have $A\cap [1,x_j]\subset \bigcup_{i<s_j} A^i$ for some finite $s_j$, as determined by $x_j$. Then since $(x_j)_j$ is unbounded, (passing to a subsequence) we have $r_{j+1} > s_j$ and so $A\cap [1,x_j]\subset \bigcup_{i<r_{j+1}}A^i$. Thus $B_j=A\cap [1,x_j]\setminus \bigcup_{i<r_j} A^i \; \subset\; \bigcup_{r_j\le i<r_{j+1}} A^i$ inherits disjointness from the $A^i$, as claimed.
Since $B = \bigcup_{j\ge0}B_j$ forms a disjoint union, for each $b\in B$ there is a unique index $J(b)$ such that $b\in B_{J(b)}$, that is,
\begin{align}
b \ \le \ x_{J(b)} \quad \text{and}\quad b\notin \bigcup_{i<r_{J(b)}} A^i.
\end{align}
In addition, $B$ has positive upper $\log\log$ density, since by definitions of $B$, $r_j$, and \eqref{eq:deltaAxj},
\begin{align*}
f(B\cap [1,x_j]) \ \ge \ f(B_j)
& \ \ge \ f(A\cap [1,x_j]) \ - \ \sum_{i<r_j}f(A^i)\\
& \ > \ (\Delta-{\epsilon})\log\log x_j - r_j(e^\gamma+{\epsilon}) \ = \ {\epsilon}\log\log x_j.
\end{align*}
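The bookkeeping in the last display reduces to one line of algebra: with $r_j=\frac{\Delta-2\epsilon}{e^\gamma+\epsilon}\log\log x_j$ one has $(\Delta-\epsilon)\log\log x_j - r_j(e^\gamma+\epsilon)=\epsilon\log\log x_j$ identically. A throwaway numerical check (variable names ours, with $L$ standing in for $\log\log x_j$):

```python
import math
import random

random.seed(0)
e_gamma = math.exp(0.5772156649015329)  # e^gamma

for _ in range(100):
    Delta = random.uniform(0.1, 1.0)
    eps = random.uniform(1e-3, Delta / 4)
    L = random.uniform(10.0, 1e6)
    r = (Delta - 2 * eps) / (e_gamma + eps) * L
    lhs = (Delta - eps) * L - r * (e_gamma + eps)
    # identity: (Delta - eps)L - r(e^gamma + eps) = eps * L
    assert abs(lhs - eps * L) < 1e-6 * L
```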
In particular $B$ has positive upper $\log$ density, so by Theorem \ref{thm:LDavenErdos} there exists an {\it infinite} {\rm L}-divisibility chain $D\subset B$. Since $D:=(d_k)_{k=0}^\infty$ is unbounded, by passing to a subchain we may assume $J(d_k) < J(d_{k+1})$ for all $k\ge0$.
Recall each $a\in A^i$ is at the end of an ${\rm L}$-divisibility chain of length $i$. As $b\in B_{J(b)}$ and $B_j$ is contained in $\bigcup_{r_j\le i<r_{j+1}}A^i$, we infer each $d\in D\subset B$ is at the end of an ${\rm L}$-divisibility chain of length (at least) $r_{J(d)}$. Write it as $c_0^{(k)}\mid c_1^{(k)}\mid \cdots\mid c_{r_{J(d_k)}}^{(k)}=d_k$, with
\begin{align*}
{\rm L}_{c_0^{(k)}} \supset \cdots \supset {\rm L}_{d_k}.
\end{align*}
Now let $i_k$ be the least index such that $c_{i_k}^{(k)}>d_{k-1}$ and define
\begin{align*}
C := \{d_{k-1}< c_i^{(k)} \le d_k \ : \ k,i\ge0\} \; =\; \bigcup_{k\ge0}\{c_i^{(k)} \ : \ i\in [i_k, r_{J(d_k)}]\} .
\end{align*}
We may assume $(r_j)_j$ grows fast enough, so that $\lfloor{\epsilon}\, r_{J(d_k)}\rfloor > d_{k-1}$.
Then the trivial bound $c_i^{(k)} > i$ implies $c_{\lfloor{\epsilon}\, r_{J(d_k)}\rfloor}^{(k)} > d_{k-1}$, and so $\lfloor{\epsilon} \,r_{J(d_k)}\rfloor \ge i_k$. Thus
\begin{align}\label{eq:supE}
\big|C\cap [1,x_{J(d_k)}]\big|
\;\ge\; \big|C\cap [d_{k-1},d_k]\big|
\;\ge\; (1-{\epsilon})r_{J(d_k)} = (1-{\epsilon})\frac{\Delta-2{\epsilon}}{e^\gamma+{\epsilon}}\log\log x_{J(d_k)}.
\end{align}
Hence taking ${\epsilon}\to 0$ in \eqref{eq:supE} above gives $\limsup_{x\to\infty} \sum_{c\in C,c\le x}1/\log\log x \ge \Delta/e^\gamma$ as desired.
Finally note $C$ forms an infinite {\rm L}-divisibility chain: for each $k$ we have $c_j^{(k)}\in{\rm L}_{c_i^{(k)}}$ for all $i_k\le i<j$, in particular $d_k\in{\rm L}_{c_i^{(k)}}$.
Also $d_k\in{\rm L}_{d_{k-1}}$ since $D$ is an {\rm L}-divisibility chain, so there exist factorizations
\begin{align*}
d_k = gc_i^{(k)} = hd_{k-1},
\end{align*}
with $p(g)\ge P(c_i^{(k)})$ and $p(h)\ge P(d_{k-1})$. As $c_i^{(k)}> d_{k-1}$, we deduce $c_i^{(k)}\in{\rm L}_{d_{k-1}}$. Thus the $k$th and $(k-1)$th pieces of $C$ are linked together. Hence $C$ is indeed an {\rm L}-divisibility chain.
\end{proof}
\section{Closing remarks}
In this discussion, we attempt to sample just a few of the multitude of open questions that have quickly arisen in connection with the Erd\H{o}s primitive set conjecture. We have already described a few in the introduction, including Conjecture \ref{conj:ESS}, as well as whether $p=2$ is Erd\H{o}s strong. We also note recent work has studied variants of the problem in function fields $\mathbb{F}_q[x]$, see \cite{funcfield}, \cite{funcfield2}. In addition, it would be interesting to further extend the classical study of sets of (all) multiples and of primitive sets, e.g. see Hall \cite{Hsetmult} or Halberstam--Roth \cite[\S 5]{HalbRoth}, to sets of {\rm L}-multiples and {\rm L}-primitive sets.
We conclude with a related question of Banks and Martin, which offers a potential unified framework to view the results described in this article. For $k\ge1$, recall $\mathbb{N}_k=\{n : \Omega(n)=k\}$, in particular $\mathbb{N}_1=\mathcal{P}$. In 1993, Zhang \cite{zhang2} proved $f(\mathbb{N}_k)<f(\mathcal{P})$ for each $k>1$. Later Bayless, Kinlaw, and Klyve \cite{BKK} showed that $f({\mathbb N}_2) > f({\mathbb N}_3)$. Banks and Martin \cite{BM} predicted $f(\mathbb{N}_k)<f(\mathbb{N}_{k-1})$ for each $k>1$.
In fact, they posed a vast generalization to Conjecture \ref{conj:EPS}.
\begin{conjecture}[odd Banks--Martin] \label{conj:BM}
Let $k\ge1$ and suppose $A$ is a primitive set with $\Omega(n)\ge k$ for all $n\in A$. Then for any set of odd primes $\mathcal Q$, we have
\begin{align}\label{eq:BMAQ}
f(A(\mathcal Q)) \ \le \ f\big(\mathbb{N}_k(\mathcal Q)\big).
\end{align}
Here $A(\mathcal Q)$ denotes the set of members of $A$ composed of primes in $\mathcal Q$.
\end{conjecture}
Banks and Martin managed to show \eqref{eq:BMAQ} in the special case when the set of primes $\mathcal Q$ is quite sparse, namely $\sum_{p\in \mathcal Q}1/p <1.74$ (even when $2\in \mathcal Q$). We note the original formulation of Conjecture \ref{conj:BM} included the case $2\in \mathcal Q$, but this turns out to be false. Indeed, when $\mathcal Q=\mathcal P$ it was shown that $f(\mathbb{N}_k) > f(\mathbb{N}_6)$ for each $k\neq6$ \cite{Lalmost}. Moreover, numerical evidence suggests that the reverse inequality $f(\mathbb{N}_k)>f(\mathbb{N}_{k-1})$ holds for $k>6$. Nevertheless for $\mathcal Q=\mathcal P\setminus \{2\}$, the desired inequality $f(\mathbb{N}_k(\mathcal Q))<f(\mathbb{N}_{k-1}(\mathcal Q))$ holds up to at least $k=20$.
Observe that Theorem \ref{thm:Estrongodd} implies Conjecture \ref{conj:BM} in the special case $k=1$. Indeed, if $p\notin \mathcal Q$ then $A(\mathcal Q)_p=\emptyset$, so we deduce $f(A(\mathcal Q)) = \sum_{p\in \mathcal Q}f(A(\mathcal Q)_p) \le \sum_{p\in \mathcal Q}f(p) = f(\mathcal Q)$.
Moreover, if true, Conjecture \ref{conj:BM} implies Conjecture \ref{conj:ESS} of Erd\H{o}s--S\'ark\"ozy--Szemer\'edi. This follows by an argument similar to Theorem \ref{thm:ESS}, and using $f({\mathbb N}_k(\mathcal Q))\to 1/2$ as $k\to\infty$ when $\mathcal Q=\mathcal P\setminus \{2\}$, see \cite[Corollary 4.2]{Lalmost}. We leave this to the interested reader.
\section*{Acknowledgments}
The author expresses deep gratitude to Carl Pomerance for many discussions, and to Paul Kinlaw and James Maynard for careful readings and feedback. The author more broadly thanks Tsz Ho Chan and Scott Neville for engaging conversations over the years. The author also thanks Andr\'as S\'ark\"ozy for bringing \cite{MathErdosII} and references therein to his attention. The author is supported by a Clarendon Scholarship at the University of Oxford.
\bibliographystyle{amsplain}
\bigskip
\noindent\hrulefill
\bigskip

\noindent
{\bf Orbifold braid groups} (https://arxiv.org/abs/2301.02043)

\medskip

\noindent
{\bf Abstract.} The orbifold braid groups of two dimensional orbifolds were defined in [1] (arXiv:math/9907194) to understand certain Artin groups as subgroups of suitable orbifold braid groups. We studied orbifold braid groups in more detail in [17] (arXiv:2006.07106) and [18] (arXiv:2106.08110), proving the Farrell--Jones isomorphism conjecture for orbifold braid groups and, as a consequence, for some Artin groups. In this article we apply the results from [17] and [18] to study two aspects of the orbifold braid groups. First, we show that the homomorphisms induced on the orbifold braid groups by the inclusion maps of a generic class of sub-orbifolds of an orbifold are injective. Then, we prove that the centers of most of the orbifold braid groups are trivial.

\section{Introduction}
Let $M$ be a connected two dimensional orbifold (see \cite{Sco83}). Consider the following
hyperplane complement in $M^n$.
$$PB_n(M)=\{(x_1,x_2,\ldots, x_n)\in M^n\ |\ x_i\neq x_j\ \text{for}\ i\neq j\}.$$
\noindent
$PB_n(M)$ is an orbifold, called the {\it configuration space} of ordered $n$-tuples
of pairwise distinct points of $M$. The symmetric group $S_n$ on $n$
letters acts on $PB_n(M)$ by permuting the coordinates. The quotient orbifold $PB_n(M)/S_n$ is denoted by
$B_n(M)$. The quotient map $PB_n(M)\to B_n(M)$ is an orbifold covering map with
$S_n$ as the group of orbifold covering transformations. This gives the following
exact sequence of orbifold fundamental groups.
\begin{align}\label{1.1}
\xymatrix{1\ar[r]&\pi_1^{orb}(PB_n(M))\ar[r]&\pi_1^{orb}(B_n(M))\ar[r]&S_n\ar[r]&1.}\end{align}
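The orbifold covering $PB_n(M)\to B_n(M)$ has degree $|S_n|=n!$. As a purely combinatorial toy model, one can replace $M$ by a finite set of six points, so that only the counting survives (none of the topology); this is an illustration of ours, not a construction from the paper.

```python
from itertools import combinations, permutations
from math import factorial

M = range(6)   # toy "surface": just six labelled points
n = 3

PB = list(permutations(M, n))   # ordered n-tuples of pairwise distinct points
B = list(combinations(M, n))    # unordered n-point subsets: the S_n-quotient

# The quotient map PB -> B is n!-to-1, mirroring the covering degree |S_n| = n!.
assert len(PB) == factorial(n) * len(B)
```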
\begin{defn}(\cite{All02}){\rm The orbifold fundamental group
$\pi_1^{orb}(PB_n(M))$, denoted by ${\mathcal {PB}}_n(M)$, is called the
{\it pure orbifold braid group} of $M$ on $n$ strings, and
the orbifold fundamental group $\pi_1^{orb}(B_n(M))$, denoted by ${\mathcal B}_n(M)$, is called the
{\it orbifold braid group} of $M$ on $n$ strings.}\end{defn}
In the particular case $M={\mathbb R}^2$, the corresponding
groups are classically called the {\it pure braid groups} and
the {\it braid groups}, respectively. The
(pure) braid groups were first defined by
Artin in \cite{Ar47}, and subsequently generalized to the case of
$2$-manifolds by Fox and Neuwirth in \cite{FoN62}. In \cite{All02}, Allcock
made a further generalization and considered two dimensional
orbifolds, as described above, to give a braid type pictorial representation of certain Artin
groups. See \cite{Tit66}, \cite{BS72} and \cite{Bri71} for some background on Artin groups.
The study of the (pure) braid groups is an important subject
in mathematics. An enormous amount of work has been done during the
last several decades, exposing its connections with several areas of
mathematics and physics. For some of the early fundamental works, other than the
ones mentioned above, see
\cite{FN62}, \cite{FB62}, \cite{Bir69}, \cite{Bir73}, \cite{Ga69} and \cite{Gol73}.
Let ${\mathcal C}'_0$ be the class of all genus zero connected $2$-manifolds with at least one
puncture or one boundary component, and ${\mathcal C}'_1$ be the class
of all connected $2$-manifolds of
genus greater than or equal to one. In both cases, if there are infinitely
many punctures, then we assume that they are obtained by removing a
discrete subset, so that any compact part of the manifold contains only
finitely many punctures.
Let ${\mathcal C}_0$ and ${\mathcal C}_1$ be the
classes of all two dimensional orbifolds with
only cone type singularities, and whose underlying spaces
belong to ${\mathcal C}'_0$ and ${\mathcal C}'_1$, respectively.
The celebrated fibration theorem of Fadell and Neuwirth for manifolds (see
\cite{FN62}) gave an important method
for better understanding of the configuration spaces of a manifold.
As a consequence of the fibration theorem, for each $S\in {\mathcal C}'_0\cup {\mathcal C}'_1$ the
following exact sequence can be deduced.
\begin{align}\label{1.2}
\xymatrix{1\ar[r]&\pi_1(PB_{n-r}(\widetilde S))\ar[r]&\pi_1(PB_n(S))\ar[r]&\pi_1(PB_r(S))\ar[r]&1.}\end{align}
\noindent
Above, the homomorphism
$\pi_1(PB_n(S))\to \pi_1(PB_r(S))$ is induced by
the restriction to $PB_n(S)$ of the projection $S^n\to S^r$
onto the first $r$ coordinates, and $\widetilde S=S-\{r\ \text{points}\}$ is
the fiber over a base point in $PB_r(S)$. Recall that, by the Fadell-Neuwirth fibration
theorem (see \cite{FN62}) the map $PB_n(S)\to PB_r(S)$ is a fibration and an important consequence,
obtained from the long exact sequence of homotopy
groups of this fibration, is that $\pi_k(PB_n(S))=1$, for
all $k\geq 2$. This is the reason the
above long exact sequence took the short form, as in (\ref{1.2}).
But in the case of $M\in {\mathcal C}_0\cup {\mathcal C}_1$,
the corresponding statement that $\pi^{orb}_k(PB_n(M))=1$, for
all $k\geq 2$, is not yet known. See the Asphericity conjecture in \cite{Rou21}. We gave some
examples of orbifolds where this is true (see Proposition 2.4 in
\cite{Rou21}).
Furthermore, in the orbifold category there does not exist any
suitable notion of a
fibration which would make $PB_n(M)\to PB_r(M)$ a fibration.
Rather, it is expected that $PB_n(M)\to PB_r(M)$ is a kind of quasifibration (see
\cite{Rou20} and the Quasifibration conjecture in \cite{Rou21}). Nevertheless,
in \cite{Rou20} and \cite{Rou21} we proved the existence of the following
exact sequence of pure orbifold braid groups, analogous to (\ref{1.2}),
for $M\in {\mathcal C}_0$ and $M\in {\mathcal C}_1$, respectively.
\begin{align}\label{1.3}
\xymatrix{1\ar[r]&{\mathcal {PB}}_{n-r}(\widetilde M)\ar[r]&{\mathcal {PB}}_n(M)\ar[r]&{\mathcal {PB}}_r(M)\ar[r]&1.}\end{align}
\noindent
Here, $\widetilde M$ is obtained from $M$ by removing $r$ regular points, that is, it is the
fiber over a regular (base) point of $PB_r(M)$.
Together with the work of Allcock (\cite{All02}),
the above exact sequences were required to prove the
Farrell-Jones Isomorphism conjecture for all (pure) orbifold braid
groups, and as a consequence for a class of Artin groups of
finite, complex and affine types (see \cite{Rou20} and
\cite{Rou21}).
For another application of (\ref{1.3}), see \cite{Rou22} or Remark \ref{last}.
In this article, using the exact sequences (\ref{1.1}) and
(\ref{1.3}), we will study some more properties of the (pure)
orbifold braid groups. More specifically, we will prove that,
for $M\in {\mathcal C}_0\cup {\mathcal C}_1$,
the homomorphisms induced on the (pure) orbifold braid groups by the
inclusion maps of connected sub-orbifolds $N$ of $M$ are injective whenever $\pi_1^{orb}(N)\to
\pi_1^{orb}(M)$ is injective (Theorem \ref{thm1}).
Furthermore, we will show that the centers of ${\mathcal {PB}}_n(M)$
and ${\mathcal {B}}_n(M)$ are
trivial except for few cases (Theorem \ref{center}). These results and their proofs are motivated by
some analogous results of Paris and Rolfsen (\cite{PR99}) for
$2$-manifolds.
\section{Main results}
We start with the following definition of a sub-orbifold of a two
dimensional orbifold. First, recall that in dimension $2$, the
underlying space of an orbifold has a manifold structure (see \cite{Sco83}).
\begin{defn}{\rm
Let $M\in {\mathcal C}_0\cup {\mathcal C}_1$. A {\it sub-orbifold} $N$ of $M$
is a
two dimensional sub-manifold of
the underlying space of $M$ satisfying the following conditions.
$\bullet$ $\overline N-N$ is a $1$-dimensional manifold, that is, $M-N$ does
not contain any isolated point. We call the components of
$\overline N-N$ the {\it boundary components} of $N$ (or of $\overline N$).
$\bullet$ No boundary component of $\overline N$ contains
any of the cone points of $M$.
$\bullet$ A boundary component of $\overline N$ either lies in the interior of $M$
or is a boundary component of $M$.
$\bullet$ $N$ has the induced orbifold structure from $M$.}\end{defn}
\subsection{Injectivity}
We first identify the sub-orbifolds $N$ of $M$, so that the inclusion $N\subset M$
will induce injective homomorphisms on their (pure) orbifold braid groups.
A necessary condition is that on the orbifold fundamental group
level the inclusion $N\subset M$ induces an injective
homomorphism, since ${\mathcal {PB}}_1(X)=\pi_1^{orb}(X)$ for any orbifold $X$.
We will see that this condition is also sufficient.
\begin{defn}\label{good} {\rm Let $M\in {\mathcal C}_0\cup {\mathcal C}_1$. A sub-orbifold $N$ of
$M$ is called {\it nice} if
the components of $\overline {M-N}$ satisfy the following conditions.
$\bullet$ If a component does not contain any cone point, then it is not
simply connected.
$\bullet$ If the underlying space of a component is simply connected, then it
contains at least two cone points.}\end{defn}
\begin{rem}\label{injective}{\rm In Proposition \ref{sub}
we will see that if $\pi_1^{orb}(N)$ is infinite,
then we need $N$ to be nice, to make
sure that $\pi_1^{orb}(N)\to \pi_1^{orb}(M)$ is injective.
Note that, in Definition \ref{good}, if $\pi^{orb}_1(N)$ is finite,
then $N$ is either a smooth disc or a disc with a single cone
point in its interior. There is nothing to prove when $N$ is a
smooth disc. In the second case we will prove in
Proposition \ref{sub} that, $\pi^{orb}_1(N)\to \pi_1^{orb}(M)$ is
again injective.}\end{rem}
We make one more relevant remark which we need later on.
\begin{rem}\label{nice-remark}{\rm Let $N$ be a sub-orbifold of $M$
and $p\in N$ be an interior point. Then,
clearly $\widetilde N:=N-\{p\}$ is a sub-orbifold of $\widetilde M:=M-\{p\}$.
Furthermore, since $\overline {\widetilde M-\widetilde N}=\overline {M-N}$, if
$N$ is nice in $M$, then $\widetilde N$ is also nice in $\widetilde M$.}\end{rem}
Let $n\leq m$ and $N\subset M$ be a
connected sub-orbifold. Choose $m$ regular points $s_1,s_2,\ldots,
s_m\in M$ in the interior of $M$,
such that $s_1,s_2,\ldots, s_n$ lie in the interior of $N$, and $s_{n+1}, s_{n+2},\ldots, s_m$ lie
in the interior of $M-N$.
We consider the orbifold fundamental groups of $PB_m(M)$ ($B_m(M)$) and $PB_n(N)$ ($B_n(N)$) with
respect to the base points $(s_1,s_2,\ldots, s_m)$ and $(s_1,s_2,\ldots, s_n)$, respectively.
\begin{thm}\label{thm1} Let $M\in {\mathcal C}_0\cup {\mathcal C}_1$ and
$N$ be a connected nice sub-orbifold of $M$. Then, the
following inclusion induced homomorphisms are injective.
\begin{align}\label{PB}
\xymatrix{{\mathcal {PB}}_n(N)\to {\mathcal {PB}}_m(M).}\end{align}
\begin{align}\label{B}
\xymatrix{{\mathcal B}_n(N)\to {\mathcal B}_m(M).}\end{align}
\end{thm}
Paris and Rolfsen proved the above theorem in \cite{PR99} for $2$-manifolds
$M$ and sub-manifolds $N$, assuming that no component of $\overline {M-N}$ is a disc.
See \cite{PR99} for more details.
\subsection{Center} We start with the following definition.
\begin{defn}{\rm A {\it simple surface} $S$ is a connected $2$-manifold whose
fundamental group has a non-trivial center. In
other words, $S$ is one of the following manifolds or their interiors.
$${\mathbb {RP}}^2, C:={\mathbb S}^1\times I, T:={\mathbb S}^1\times {\mathbb S}^1,$$
$$Mb:={\mathbb S}^1\hat{\times} I (\text{M\"{o}bius band}), K:={\mathbb S}^1\hat{\times} {\mathbb S}^1 (\text{Klein bottle}).$$}\end{defn}
Note that, ${\mathbb {RP}}^2$ does not belong to ${\mathcal C}_0\cup {\mathcal C}_1$.
Paris and Rolfsen (\cite{PR99}) proved that the centers of the (pure) surface braid groups of
compact large surfaces (see the definition in $\S$ 1.3 of \cite{PR99}) are trivial. We
prove an analogous result for (pure) orbifold braid groups, in the following theorem.
\begin{thm}\label{center} Let $M\in {\mathcal C}_0\cup {\mathcal C}_1$ and assume the following.
$\bullet$ If $M$ has no cone point, then it is not a simple surface.
$\bullet$ If the underlying space of $M$ is simple,
then it has at least one cone point.
$\bullet$ If the underlying space of $M$ is simply connected (note that
$M\neq {\mathbb S}^2$), then it has at least two cone points.
Then, the
center of ${\mathcal {PB}}_n(M)$ is trivial. Furthermore, assuming that, either $n=1$ or
$n\geq 3$, the center of ${\mathcal B}_n(M)$ is trivial.\end{thm}
Using the fact that ${\mathcal B}_n(M)$ is torsion free, for compact
large surfaces $M$, it was shown in \cite{PR99} that the center of ${\mathcal B}_n(M)$ is
trivial. Since the orbifold braid groups are in general not
torsion free, we will use
the fact that the symmetric group $S_n$ has trivial center for
$n\geq 3$, to draw this conclusion. We do not know much about
the center of ${\mathcal B}_2(M)$ for $M\in {\mathcal C}_0\cup {\mathcal C}_1$.
Also Theorem \ref{center} is not true when $M$ is a disc or a disc with one cone point
of order $q\geq 2$. The center of ${\mathcal B}_n({\mathbb D}^2)$,
for $n\geq 2$, is known to be infinite cyclic (see \cite{Cho48}).
For the disc $M$ with one cone point,
${\mathcal {PB}}_1(M)={\mathcal B}_1(M)={\mathbb Z}_q$. Computations of the center of
${\mathcal B}_n(M)$ for the cylinder and for the torus are
explicitly done in \cite{PR99}.
\section{Some basic results}
In this section we prove two propositions which are required to prove Theorems
\ref{thm1} and \ref{center}. This will help us to start a method of induction
for the proofs.
\subsection{Injectivity for $n=1$}
It is well known that if $N$ is a connected non-simply connected sub-manifold of a
connected $2$-manifold $M$, then the inclusion induced homomorphism
$\pi_1(N)\to \pi_1(M)$ is injective if and only if no component of $M-N$ is
simply connected. The main idea behind the proof is that a boundary component
of $N$ is $\pi_1$-injective. Therefore, by the van Kampen theorem a component of $M-N$ is
simply connected if and only if $\pi_1(N)\to \pi_1(M)$ is not injective.
The same idea can be used to prove the
following proposition.
\begin{prop}\label{sub} Let $M\in {\mathcal C}_0\cup {\mathcal C}_1$ and
$N\subset M$ be
a nice connected sub-orbifold. Then, the inclusion induced homomorphism
$\pi_1^{orb}(N)\to \pi_1^{orb}(M)$ is injective.\end{prop}
For the proof of the proposition we will need the following lemma.
\begin{lemma}\label{lemmasub} Let $M\in {\mathcal C}_0\cup {\mathcal C}_1$. If the underlying
space of $M$ is simply connected (that is, if it is a disc), then assume that it has at least
two cone points. Let $\partial$ be a circle boundary component of $M$. Then,
$\pi_1(\partial)\to \pi_1^{orb}(M)$ is injective, that is $\partial$ is
$\pi_1^{orb}$-injective.\end{lemma}
\begin{proof} First, suppose that the underlying space of $M$ is a disc and
there are $k$ cone points of orders $q_1,q_2,\ldots, q_k$,
($k\geq 2$), in its interior. Then,
$$\pi_1^{orb}(M)\simeq \langle \alpha_1\ |\ \alpha_1^{q_1}=1\rangle* \langle
\alpha_2\ |\ \alpha_2^{q_2}=1\rangle*\cdots *\langle \alpha_k\ |\
\alpha_k^{q_k}=1\rangle.$$
\noindent
Here, for $i=1,2,\ldots, k$, $\alpha_i$ is the loop around
the $i$-th cone point. See Figure 1. Then, the boundary $\partial$ represents
the element $\alpha_1\alpha_2\cdots\alpha_k$, which is a cyclically reduced word of
length $k\geq 2$ and hence of infinite order, since any finite-order element of a free
product is conjugate into one of the factors. Therefore, $\partial$ is $\pi_1^{orb}$-injective.
\medskip
\centerline{\includegraphics[height=4cm,width=4cm,keepaspectratio]{disc-cone.eps}}
\centerline{Figure 1: Disc with $k$ cone points.}
\medskip
Next, assume that the underlying space $\check{M}$ of $M$ is either
of genus zero and non-simply connected or of
genus $\geq 1$. Then, it is well known that $\pi_1(\partial)\to
\pi_1(\check{M})$ is injective.
\begin{align}\label{p-injective}
\xymatrix{&\pi_1^{orb}(M)\ar[d]&\pi_1(\breve{M})\ar[l]\ar[dl]\\
\pi_1(\partial)\ar[ur]\ar[r]&\pi_1(\check{M}).&}\end{align}
Now, we have the diagram (\ref{p-injective}) with commutative
triangles. Here $\breve{M}$ is equal to $M$ minus the cone
points. Note that $\pi_1^{orb}(M)$ is obtained from
$\pi_1(\breve {M})$ by adding the relations $\alpha^q=1$,
one for each loop $\alpha$ around a puncture of $\breve {M}$ that
was obtained by removing a cone point of order $q$ (see \cite{Thu91}).
This describes the top horizontal homomorphism. Clearly, the right hand
side slanted homomorphism is obtained by adding the
relation $\alpha=1$ to $\pi_1(\breve {M})$, where
$\alpha$ is as above. Furthermore, the
vertical (surjective) homomorphism is obtained by equating the
elements
of $\pi_1^{orb}(M)$, which are loops around
the cone points, to the trivial element.
Since, the bottom horizontal homomorphism is injective, so is the left
hand side slanted one.
This completes the proof of the lemma.
\end{proof}
Now, we come to the proof of Proposition \ref{sub}.
\begin{proof} [Proof of Proposition \ref{sub}] First,
we assume that $\pi_1^{orb}(N)$ is infinite. Then, $N$ satisfies the
hypothesis of Lemma \ref{lemmasub}. Hence, the boundary
components of $N$ are $\pi_1^{orb}$-injective. Furthermore,
again by Lemma \ref{lemmasub}, the boundary of a disc
with cone points in its interior is $\pi_1^{orb}$-injective if and
only if there are at least two cone points in the disc. The
rest of the argument follows from the Van-Kampen theorem for
orbifolds.
Next, assume that $\pi_1^{orb}(N)$ is finite. Then, clearly $N$
is either a smooth disc or a disc with one cone point of order $q$ (say).
If $N$ is a smooth disc, then there is nothing to prove. So
assume the latter and let $\partial$ be the boundary circle of $N$.
Then, by Lemma \ref{lemmasub}, $\partial$ is $\pi_1^{orb}$-injective in $\overline {M-N}$.
Furthermore, $\pi_1^{orb}(M)$ is obtained from
$\pi_1^{orb}(\overline {M-N})$, by attaching the relation
$\alpha^q=1$, where $\alpha$ represents $\partial$. Since $\partial$ is
$\pi_1^{orb}$-injective, $\alpha$ has order $q$ in
$\pi_1^{orb}(M)$. On the other hand
$\pi_1^{orb}(N)=\langle\alpha\ |\ \alpha^q=1\rangle$. It is now clear that
$\pi_1^{orb}(N)\to \pi_1^{orb}(M)$ is injective.
\end{proof}
\subsection{Center for $n=1$}
First we state a couple of easy-to-prove lemmas, on the center of an amalgamated free
product and on the center of an extension of groups. Then, we recall a well-known theorem on
the center of one-relator groups. Let $Z(-)$ denote the center of a group.
\begin{lemma} \label{amal} If $G$ is an amalgamated free product of
a non-trivial group with trivial center and another group, then the center
of $G$ is trivial.\end{lemma}
\begin{proof} Let $G=G_1*_HG_2$, then using the normal form of elements of
a generalized free product, it can be deduced that
$Z(G)=Z(G_1)\cap Z(G_2)\cap H$. Therefore, if one of $G_1$ and $G_2$
is non-trivial and has trivial center, then $G$ also has trivial center.\end{proof}
\begin{lemma}\label{extension} Consider a short exact sequence of groups.
\begin{align}\label{center-extension}
\xymatrix{1\ar[r] & K\ar[r] & G\ar[r]^p&H\ar[r]&1.}\end{align}
If $Z(K)=Z(H)=\langle 1\rangle$, then $Z(G)=\langle 1\rangle$.\end{lemma}
\begin{proof} Let $g\in Z(G)$, then $p(g)\in Z(H)$ since $p$ is
surjective. Since $Z(H)=\langle 1\rangle$, $p(g)=1$, and hence, $g\in
K$. Consequently, $g\in Z(K)$ and hence, $g=1$, since $Z(K)=\langle
1\rangle$.\end{proof}
\begin{thm}\label{one-relator} (\cite{KMS60}, \cite{N73}) Let $G$ be a non-cyclic one-relator
group with a non-trivial element of finite order. Then, the center of $G$ is trivial.\end{thm}
In the following proposition we describe exactly when the center of
the orbifold fundamental group of an orbifold in
${\mathcal C}_0\cup {\mathcal C}_1$ is trivial.
\begin{prop}\label{center-trivial} Let $M\in {\mathcal C}_0\cup {\mathcal C}_1$ and assume the
following.
$\bullet$ If $M$ has no cone point, then it is not a simple surface.
$\bullet$ If the underlying space of $M$ is a simple surface,
then it has at least one cone point.
$\bullet$ If the underlying space of $M$ is simply connected (note that
$M\neq {\mathbb S}^2$), then it has at least two cone points.
Then,
$\pi_1^{orb}(M)$ has trivial center.\end{prop}
\begin{proof}
If $M$ is smooth and not a simple surface, then it is well known
that $\pi_1^{orb}(M)$ has trivial center: the only
$2$-manifolds $M\in {\mathcal C}_0\cup {\mathcal C}_1$ with nontrivial center are among the following
$2$-manifolds and their interiors: the cylinder ($C$), the M\"{o}bius band ($Mb$), the
torus ($T$) and the Klein bottle ($K$), that is, exactly the simple surfaces.
The proof in Case 2 below also
gives a direct proof of this fact, in the particular case when $M$ is smooth and
not a simple surface.
We are now ready to complete the proof of the proposition in the case when
$M$ has at least one cone point. We divide the proof in the
following two cases.
\noindent
{\bf Case 1.} Assume that the underlying space of $M$ is a simple surface.
Hence, it is one
of the manifolds $C,T,Mb\ \text{or}\ K$ or their interiors.
First, assume that $M$ has at least
two cone points. Take a disc $D\subset M$ in the interior of
$M$ which contains the
cone points in its interior. Then, split $M$ into two pieces along
the
boundary of $D$. Consequently, by Lemma
\ref{lemmasub} and by the
Van-Kampen theorem, $\pi_1^{orb}(M)$ satisfies
the hypothesis of Lemma \ref{amal}, and hence has a trivial center.
\medskip
\centerline{\includegraphics[height=8cm,width=12cm,keepaspectratio]{orbifold-one-cone.eps}}
\centerline{Figure 2: Orbifolds with one cone point and simple underlying space.}
\medskip
Next, assume that $M$ has only one cone point of order $q\geq 2$.
We denote these four compact orbifolds by $Cc$, $Tc$, $Mbc$ and $Kc$. Since the
orbifold fundamental groups of the non-compact
cylinder and M\"{o}bius band coincide with those of the corresponding compact cases, we need
to consider only the above four compact cases.
We write down their orbifold fundamental groups explicitly as follows.
$$\pi_1^{orb}(Cc)=\langle a,b,c\ |\ (aba^{-1}c)^q=1\rangle, \quad \pi_1^{orb}(Mbc)=\langle a,b,c\ |\ (abac)^q=1\rangle,$$
$$\pi_1^{orb}(Tc)=\langle a,b\ |\ (aba^{-1}b^{-1})^q=1\rangle, \quad \pi_1^{orb}(Kc)=\langle a,b\ |\ (aba^{-1}b)^q=1\rangle.$$
The above calculation can be done using the pictorial presentation in
Figure 2 of
the orbifolds and the Van-Kampen theorem.
It is now easy to
see that $\pi_1^{orb}(M)$ is a non-cyclic group, by computing the
abelianization of $\pi_1^{orb}(M)$ from the above presentations.
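For instance, passing to abelianizations (a routine check from the presentations above: the relator of $Tc$ dies, while the relators of $Cc$, $Mbc$ and $Kc$ become $q(b+c)=0$, $q(2a+b+c)=0$ and $2qb=0$ respectively), one finds
$$\pi_1^{orb}(Tc)^{ab}\simeq {\mathbb Z}^2,\quad
\pi_1^{orb}(Cc)^{ab}\simeq \pi_1^{orb}(Mbc)^{ab}\simeq {\mathbb Z}^2\oplus {\mathbb Z}_q,\quad
\pi_1^{orb}(Kc)^{ab}\simeq {\mathbb Z}\oplus {\mathbb Z}_{2q},$$
and none of these abelian groups is cyclic.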
Therefore, in all four cases above the corresponding groups are
non-cyclic one-relator groups, and each has an element of finite order.
Hence, by Theorem \ref{one-relator}, the
center of $\pi_1^{orb}(M)$ is trivial.
{\bf Case 2.} Assume that the underlying space of $M$ is not a simple surface. Therefore,
by hypothesis the underlying space of $M$ is one
of the following types.
$\bullet$ It is simply connected (but not ${\mathbb S}^2$) and it has
at least two cone points.
$\bullet$ It has genus zero, has at least two punctures (or boundary
components)
in the non-orientable case, and
has at least three punctures (or boundary components) in the orientable case.
$\bullet$ It has genus
$\geq 2$ in the orientable case.
$\bullet$ It has genus $\geq 3$ in the non-orientable
case.
\medskip
\centerline{\includegraphics[height=12cm,width=12cm,keepaspectratio]{orbifold.eps}}
\centerline{Figure 3: Orientable and non-orientable orbifolds.}
\medskip
In the first possibility, clearly, $\pi_1^{orb}(M)$ has trivial center, since
it is isomorphic to the free product of at least two finite cyclic
groups (see the proof of Lemma \ref{lemmasub}). In
the second case, it is clear that $\pi_1^{orb}(M)$ has an amalgamated
free product structure over an infinite cyclic group, with one of
the factors non-abelian free.
Hence, it has trivial center, by Lemma \ref{amal}.
In the last two cases,
$\pi_1^{orb}(M)$ also has an amalgamated free product structure over
an infinite cyclic group, with one of the factors non-abelian free, and hence has
trivial center by Lemma \ref{amal}. To prove this, we will split $M$
along a separating $\pi_1^{orb}$-injective embedded circle such
that one piece contains all the cone points, and
the other one is smooth and has non-abelian free fundamental group. This can be done
by choosing a nice orientable sub-orbifold of genus one with non-empty
boundary and which
contains no cone point.
We describe this choice of a nice sub-orbifold in Figure 3.
Note
that, in the orientable case it has a torus as a connected sum
factor, since it has genus $\geq 2$. Next, choose a disc on this torus
which contains all the cone points lying on the left of the neck of
the torus as in Figure 3,
and then take the connected sum $C$ of the boundary $B$ of this disc and a
separating closed curve $E$ around the neck. Then, $C$ is the desired
separating circle, by applying Lemma \ref{lemmasub}.
In the non-orientable case it has genus $\geq 3$, and hence there are
at least three cross caps. We can replace three of these cross caps by
an orientable genus one surface with a single cross cap (see
\cite{D88}).
Then, similar to the orientable case,
we can choose a separating circle $C$ as shown in Figure 3.
Then, again we apply the Van-Kampen theorem and Lemma \ref{amal} to see
that the center of $\pi_1^{orb}(M)$ is trivial.
\end{proof}
An immediate corollary of Proposition \ref{center-trivial} is the following.
\begin{cor}\label{center-cor} Let $M\in {\mathcal C}_0\cup {\mathcal C}_1$
be as in Proposition \ref{center-trivial}, and let $\widetilde M$ be
obtained from $M$ by removing a finite number of regular
points. Then, $\widetilde M$ also satisfies the hypothesis of Proposition
\ref{center-trivial}, and hence the center of $\pi_1^{orb}(\widetilde M)$ is
trivial.\end{cor}
\section{Proofs of the theorems}
We now prove the two theorems of Section 2.
\begin{proof}[Proof of Theorem \ref{thm1}] Recall that $N$ is a nice
sub-orbifold of $M\in {\mathcal C}_0\cup {\mathcal C}_1$. First, we prove the injectivity
of (\ref{PB}), that is, of ${\mathcal {PB}}_n(N)\to {\mathcal {PB}}_m(M)$, when $n=m$. The proof is by induction
on $n$. Recall that $PB_1(X)=X$ for any orbifold $X$.
Hence, by Proposition \ref{sub}, ${\mathcal {PB}}_1(N)\to {\mathcal {PB}}_1(M)$ is injective.
Therefore, assume that (\ref{PB}) is injective for $n=m=k-1$ and for any nice sub-orbifold of an
orbifold from ${\mathcal C}_0\cup {\mathcal C}_1$.
Next, consider the exact sequence
(\ref{1.3}) for $r=1$. Then, we have the following commutative diagram of exact sequences.
\begin{align}\begin{gathered}\label{PBProof}
\xymatrix@-.7pc{1\ar[r]&{\mathcal {PB}}_{k-1}(\widetilde N)\ar[r]\ar[d]&{\mathcal
{PB}}_k(N)\ar[r]\ar[d]&{\mathcal {PB}}_1(N)\ar[r]\ar[d]&1\\
1\ar[r]&{\mathcal {PB}}_{k-1}(\widetilde M)\ar[r]&{\mathcal {PB}}_k(M)\ar[r]&{\mathcal {PB}}_1(M)\ar[r]&1.}\end{gathered}\end{align}
Note that, $\widetilde N$ is a nice sub-orbifold of $\widetilde M$, since $\widetilde N$ is obtained from $N$
after removing one regular point and $\widetilde M$ is obtained from $M$ after removing the same
regular point (see Remark \ref{nice-remark}).
Therefore, in the commutative diagram (\ref{PBProof}), the first vertical homomorphism is injective
by the induction hypothesis, and the last one is injective from Proposition \ref{sub}. Hence, by a simple
diagram chase it follows that the middle homomorphism is injective as well. This proves
the injectivity of (\ref{PB}) when
$n=m$.
Now, we come to the proof of the injectivity of (\ref{PB}) when $n\leq m$. Consider the
following diagram. Here, $p:PB_m(M)\to PB_n(M)$ is the projection map to the first $n$ coordinates.
\begin{align}\begin{gathered}\label{PBProofG}
\xymatrix{{\mathcal {PB}}_n(N)\ar[r]\ar[dr]&{\mathcal {PB}}_m(M)\ar[d]^{p_*}\\
&{\mathcal {PB}}_n(M).}\end{gathered}\end{align}
The slanted homomorphism is injective by the previous case, and hence the top homomorphism is also injective. This
proves the injectivity of (\ref{PB}).
Next, we come to the proof of the injectivity of (\ref{B}), that is of ${\mathcal B}_n(N)\to {\mathcal B}_m(M)$.
Consider the following commutative diagram of the exact sequences (\ref{1.1}) for
$N$ and $M$. Here, $i:S_n\to S_m$ is the inclusion map.
\begin{align}\begin{gathered}\label{BProof}
\xymatrix{1\ar[r]&{\mathcal {PB}}_n(N)\ar[r]\ar[d]&{\mathcal B}_n(N)\ar[r]\ar[d]&S_n\ar[r]\ar[d]^{i}&1\\
1\ar[r]&{\mathcal {PB}}_m(M)\ar[r]&{\mathcal B}_m(M)\ar[r]&S_m\ar[r]&1.}\end{gathered}\end{align}
Once again, a simple diagram chase using the injectivity of (\ref{PB}) shows that the
middle vertical homomorphism (that is, (\ref{B})) in the above diagram is injective.
This completes the proof of Theorem \ref{thm1}.
\end{proof}
Now, we come to the proof of the triviality of the center of the (pure) orbifold braid groups.
\begin{proof}[Proof of Theorem \ref{center}]
First, we prove the theorem for ${\mathcal {PB}}_n(M)$, that is, we
show that ${\mathcal {PB}}_n(M)$ has trivial center. The proof is by induction
on $n$. By Proposition \ref{center-trivial}, for $n=1$, ${\mathcal {PB}}_1(M)=\pi_1^{orb}(M)$ has
trivial center. Therefore, we assume that ${\mathcal {PB}}_{n-1}(M)$ has trivial center.
Now, consider the exact sequence (\ref{1.3}) for $r=n-1$.
\begin{align}\label{center-exact}
\xymatrix@-.5pc{1\ar[r]&{\mathcal {PB}}_1(\widetilde M)\ar[r]&{\mathcal
{PB}}_n(M)\ar[r]&{\mathcal
{PB}}_{n-1}(M)\ar[r]&1.}\end{align}
Since, by hypothesis, ${\mathcal {PB}}_{n-1}(M)$ has trivial center and
by Corollary \ref{center-cor}, ${\mathcal {PB}}_1(\widetilde M)$ has trivial
center, we can apply Lemma \ref{extension} to conclude
that ${\mathcal {PB}}_n(M)$ has trivial center.
Next, we prove that the center of ${\mathcal B}_n(M)$ is trivial for
$n\geq 3$. Recall the exact sequence (\ref{1.1}).
\begin{align}\label{Bcenter}
\xymatrix{1\ar[r]&{\mathcal {PB}}_n(M)\ar[r]&{\mathcal B}_n(M)\ar[r]&S_n\ar[r]&1.}\end{align}
Note that, $S_n$ has trivial center for $n\geq 3$, and from the
previous case ${\mathcal {PB}}_n(M)$ has trivial center. Hence, by Lemma
\ref{extension} ${\mathcal {B}}_n(M)$ has trivial center.
Finally, for
$n=1$, $\pi_1^{orb}(M)={\mathcal {PB}}_1(M)={\mathcal {B}}_1(M)$ and hence
by Proposition \ref{center-trivial}, its center is trivial.
This completes the proof of Theorem \ref{center}.
\end{proof}
We conclude with the following remark.
\begin{rem}\label{last}{\rm For an action of a discrete group $G$
on a connected $2$-manifold $M$, one considers the {\it orbit
configuration space} ${\mathcal O}_n(M,G)$ of $n$ points.
By definition, ${\mathcal O}_n(M,G)$ is the space of all
$n$-tuples of points of $M$ with pairwise
distinct orbits. The orbit configuration space is an immediate
generalization of the configuration space, since
$PB_n(M)={\mathcal O}_n(M, \langle 1\rangle )$. During the last couple of
decades orbit configuration spaces have attracted much interest, and
much work has been done assuming that the action is free and properly
discontinuous, since the Fadell-Neuwirth fibration theorem still holds
in this case.
We now assume that the action is properly discontinuous and effective
(but not necessarily free),
with isolated fixed points. The Fadell-Neuwirth
fibration theorem does not hold in this
generality of non-free actions (see Lemma 2.3 in \cite{Rou22}).
Nevertheless, using (\ref{1.3}), the following exact sequence was proved in \cite{Rou22}, assuming that
$M\neq {\mathbb S}^2, {\mathbb {RP}}^2$, and that when $M/G$ has genus zero,
it either has a puncture or has nonempty boundary. See
\cite{Rou21} and \cite{Rou22} for more on this matter.
\begin{align}
\xymatrix@-.5pc{1\ar[r]&\pi_1({\mathcal O}_{n-r}(M_r, G))\ar[r]&
\pi_1({\mathcal O}_n(M, G))\ar[r]&
\pi_1({\mathcal O}_r(M, G))\ar[r]&1.}\end{align}
\noindent
Here, $M_r$ is the complement in $M$ of the orbits of $r$ points, which
are not fixed points of the action.
From this exact sequence we can also prove the triviality of the center of
$\pi_1({\mathcal O}_n(M,G))$ for most such pairs $(M,G)$, following the
proof of Theorem \ref{center}.}\end{rem}
\newpage
\bibliographystyle{plain}
% https://arxiv.org/abs/math/0510604
\title{Monte Carlo comparisons of the self-avoiding walk and SLE as parameterized curves}
\begin{abstract}
The scaling limit of the two-dimensional self-avoiding walk (SAW) is believed to be given by the Schramm-Loewner evolution (SLE) with the parameter $\kappa$ equal to $8/3$. The scaling limit of the SAW has a natural parameterization and SLE has a standard parameterization using the half-plane capacity. These two parameterizations do not correspond with one another. To make the scaling limit of the SAW and SLE agree as parameterized curves, we must reparameterize one of them. We present Monte Carlo results that show that if we reparameterize the SAW using the half-plane capacity, then it agrees well with SLE with its standard parameterization. We then consider how to reparameterize SLE to make it agree with the SAW with its natural parameterization. We argue using Monte Carlo results that the so-called $p$-variation of the SLE curve with $p=1/\nu=4/3$ provides a parameterization that corresponds to the natural parameterization of the SAW.
\end{abstract}
\section{Introduction}
\label{intro}
The scaling limit of the two-dimensional self-avoiding walk (SAW)
is believed to be the Schramm-Loewner evolution (SLE)
with $\kappa=8/3$ \cite{lsw_saw}.
Previous Monte Carlo studies have compared the scaling limit of the SAW and
SLE$_{8/3}$ as unparameterized curves \cite{tk_saw_sle_one,tk_saw_sle_two}.
These studies considered random variables that did not depend on how
the curve was parameterized and found excellent agreement between
the distributions for the SAW and SLE$_{8/3}$.
The goal of this paper is to use Monte Carlo simulations
to understand how the SAW and SLE$_{8/3}$ curves
should be parameterized so that they agree as parameterized curves.
We give a brief definition of the SAW. For a detailed treatment we refer
the reader to \cite{ms}. The relationship of the SAW to SLE$_{8/3}$
is the subject of \cite{lsw_saw}.
We restrict our attention to the SAW on a two-dimensional lattice
in the upper half plane. Fix a positive integer $N$.
We consider all nearest neighbor walks which begin at the origin,
have exactly $N$ steps, remain in the upper half plane (except for
the starting point) and never visit the same site more than once.
There are a finite number of such walks, so we can put the uniform probability
measure on this set of walks. We then attempt to construct the scaling limit
by the following double limit. We first let $N \rightarrow \infty$.
For the case of the half-plane this limit was proved to exist in
\cite{lsw_saw}, but there is no general proof of its existence.
We then take the lattice spacing to zero. This limit has not been proved
to exist, but it is believed that it does and gives a measure on
curves in the upper half plane which start at the origin and go to $\infty$.
The SAW curves are simple (do not intersect themselves) by definition.
This does not ensure that in the scaling limit the measure is
supported on simple curves, but it is believed that it is.
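The finite-$N$ uniform measure described above can be generated exactly for small $N$ by brute-force enumeration. The following sketch (ours, not the method used for the large-scale simulations reported later, which require much more sophisticated algorithms) counts the $N$-step half-plane SAWs that carry the uniform measure:

```python
def count_half_plane_saws(n):
    """Count n-step nearest-neighbour self-avoiding walks on the square
    lattice that start at the origin, stay strictly in the upper half
    plane after the starting point, and never revisit a site.
    Brute-force recursion: feasible only for small n."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(pos, visited, steps_left):
        if steps_left == 0:
            return 1
        x, y = pos
        total = 0
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            # the walk must stay strictly above the real axis after its start
            if nxt[1] > 0 and nxt not in visited:
                visited.add(nxt)
                total += extend(nxt, visited, steps_left - 1)
                visited.remove(nxt)
        return total

    return extend((0, 0), {(0, 0)}, n)
```

For example, there are $1$, $3$ and $7$ such walks with $N=1,2,3$ steps; sampling uniformly from the enumerated list realizes the finite-$N$ measure before the double limit is taken.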
SLE was introduced in \cite{schramm} and studied further in \cite{rs}.
Expositions may be found in \cite{lawler} and \cite{werner}.
In the case of $\kappa=8/3$, SLE satisfies the restriction property and
so belongs to the one parameter family of restriction measures
studied by Lawler, Schramm and Werner \cite{lsw_restrict}.
They showed that the probabilities of certain
events for SLE$_{8/3}$ may be computed from certain conformal maps. In
some cases they may be computed explicitly. Past Monte Carlo simulations
have tested the equivalence of the scaling limit of the SAW and SLE$_{8/3}$
by computing these probabilities for the SAW by simulation and comparing
with the exact result for SLE$_{8/3}$. Here are two examples.
Let $P$ be a point on the horizontal axis and consider the minimum distance
of the SLE$_{8/3}$ curve to the point.
This is a random variable whose distribution
may be computed explicitly for SLE$_{8/3}$. For the second example
consider a vertical line given by $x=c$ for some $c \neq 0$.
The SLE$_{8/3}$ curve intersects it. Let $Y$ be the distance from
the horizontal axis to the lowest intersection. The distribution of $Y$
may be explicitly computed for SLE$_{8/3}$.
Note that for both of these random variables the time parameterization
of the curve plays no role. For these two examples and
several others, excellent agreement was found between the SLE$_{8/3}$
exact results and the simulation results for the scaling limit of the SAW
\cite{tk_saw_sle_one,tk_saw_sle_two}.
The SLE curve is usually parameterized so that its half-plane capacity
grows linearly with the time parameter $t$. Before we take the scaling
limit, the SAW has a natural parameterization given by the number of steps,
or equivalently the distance along the curve since all steps are
nearest neighbor. With a suitable scaling this should give a natural
parameterization of the scaling limit of the SAW.
Equipped with these natural parameterizations, the scaling limit of
the SAW and the SLE$_{8/3}$ parameterized curves are not the same.
We illustrate this with a simulation in section \ref{need}.
How should these curves be reparameterized so that they agree?
There are two ways to ask this question. We can ask how to
reparameterize the scaling limit of the SAW to make it agree
with SLE$_{8/3}$ with its parameterization using half-plane capacity.
The answer is obvious: one should use the half-plane capacity of the scaling limit of
the SAW to parameterize it. We study this in section \ref{reparamSAW}.
A less obvious question is to ask how we should reparameterize
SLE$_{8/3}$ to make it agree with the scaling limit of the SAW
with its natural parameterization.
In section \ref{fvar} we give simulation results that
indicate that for the SAW with its natural parameterization,
the $p$-variation of the walk exists and is a deterministic, linear
function of time if we take $p=1/\nu=4/3$. In other words, one can
recover the natural parameterization of the SAW by computing its
$1/\nu$ variation.
Some care is needed here since different definitions of the
variation give different values. However, it appears they differ
only by a multiplicative constant.
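One concrete choice of definition, the mesh sum of $p$-th powers of increment moduli along the sampled curve, can be sketched as follows (our illustration; the function name is ours):

```python
import numpy as np

def mesh_p_variation(points, p=4/3):
    """Sum of |increments|^p along consecutive sample points of a curve
    given as an array of complex numbers. This is one of several
    inequivalent definitions of the p-variation; they appear to differ
    only by a multiplicative constant."""
    increments = np.abs(np.diff(np.asarray(points, dtype=complex)))
    return float(np.sum(increments ** p))
```

For a SAW read off at its lattice vertices every nearest-neighbour increment has modulus one, so with $p=4/3$ this sum is exactly the number of steps, i.e., linear in the natural time, which is the behavior that the simulations in section \ref{fvar} test for the scaling limit.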
In section \ref{reparamSLE} we use the
$1/\nu$ variation of the SLE to reparameterize the SLE curve and
compare this with the SAW with its natural parameterization.
Section \ref{simulating} discusses various computational aspects
of our simulations of the SAW and the SLE.
\section{Need for a random reparameterization}
\label{need}
We review some facts about SLE. We refer the reader to \cite{lawler} for
more detail. We restrict our attention to chordal SLE in the upper half plane.
It is defined by the differential equation
\begin{equation}
\dot{g}_t(z) = {2 \over g_t(z) - \sqrt{\kappa} B_t}
\label{sle_eq}
\end{equation}
and the initial condition $g_0(z)=z$.
The dot denotes differentiation with respect to time $t$,
$B_t$ is a standard real-valued Brownian motion, and
$z$ belongs to the upper half plane $\mathbb{H}$.
For some initial points $z$, the solution only exists for a finite
time interval. The set of initial points for which the solution
no longer exists at time $t$ is a random subset $K_t$ of $\mathbb{H}$.
For $\kappa \le 4$, $K_t$ is just a simple curve $\gamma(t)$, i.e.,
a curve that does not intersect itself.
We use $\gamma[0,t]$ to denote the image of the curve for the
time interval $[0,t]$. So $\gamma[0,t]=K_t$.
The parameterization of the curve $\gamma$ that results from
eq. \reff{sle_eq} corresponds to the half-plane capacity of the
curve which is defined as follows.
Since $\gamma$ is simple, $\mathbb{H} \setminus \gamma[0,t]$
is simply connected. So there is a conformal map of this domain onto $\mathbb{H}$.
If we require the map to send $\infty$ to itself, then
this map has an expansion about $\infty$ of the form
\begin{equation}
g(z)= c z + a_0 + \sum_{n=1}^\infty a_n z^{-n}
\end{equation}
The map is uniquely determined if we require $c=1$ and $a_0=0$.
The half-plane capacity of $\gamma[0,t]$ is then the coefficient $a_1$.
We denote it by $hcap(\gamma[0,t])$.
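As a concrete example, for a vertical slit $[0,ia]$ the map $g(z)=\sqrt{z^2+a^2}$ takes $\mathbb{H}\setminus[0,ia]$ conformally onto $\mathbb{H}$ with the normalization above, and expanding at infinity,
\begin{equation}
g(z) = z\sqrt{1+a^2/z^2} = z + \frac{a^2}{2z} + O(z^{-3}),
\end{equation}
so $hcap([0,ia])=a^2/2$.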
The above definition applies to any simple curve $\gamma$. For the
SLE curve we have
\begin{equation}
hcap(\gamma[0,t]) = 2t
\end{equation}
If we were to parameterize SLE so that
\begin{equation}
hcap(\gamma[0,t])=b(t)
\end{equation}
for some differentiable, increasing function $b(t)$, then the $2$ in
the Schramm-Loewner equation would be replaced by $\dot{b}(t)$.
The scaling property of Brownian motion leads to a scaling property for
SLE: $\gamma[0,t]$ has the same distribution as $\sqrt{t} \, \gamma[0,1]$.
(Note that if we rescale a set by $r$, its half-plane capacity scales
by a factor of $r^2$.)
Thus
\begin{equation}
E[ |\gamma(t)|^2] = c \, t
\end{equation}
where $c=E[|\gamma(1)|^2]$.
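For readers who want to experiment, eq. \reff{sle_eq} can be discretized by freezing the driving function $\sqrt{\kappa} B_t$ on each small time interval, so that the curve grows by one conformal slit map per step; the tip is then recovered by composing the inverse slit maps. The following minimal sketch (our illustration, not the production algorithm behind the simulations in this paper) generates an approximate trace:

```python
import numpy as np

def sle_trace(kappa=8/3, n_steps=300, dt=1e-3, seed=1):
    """Approximate a chordal SLE_kappa trace by composing discrete slit maps.

    On each step the driving function is frozen at U_{j-1}; the inverse
    incremental map f_j(w) = U_{j-1} + sqrt((w - U_{j-1})^2 - 4*dt), with the
    square root chosen in the upper half plane, opens a vertical slit of
    half-plane capacity 2*dt.  The tip at time t_k is f_1 ∘ ... ∘ f_k(U_k).
    """
    rng = np.random.default_rng(seed)
    # driving function values U_j = sqrt(kappa) * B_{t_j} on a uniform grid
    U = np.concatenate(([0.0],
                        np.cumsum(np.sqrt(kappa * dt) * rng.standard_normal(n_steps))))
    trace = np.empty(n_steps + 1, dtype=complex)
    trace[0] = 0.0
    for k in range(1, n_steps + 1):
        w = complex(U[k])
        for j in range(k, 0, -1):      # apply f_k first, then f_{k-1}, ..., f_1
            u = U[j - 1]
            s = np.sqrt((w - u) ** 2 - 4 * dt)
            if s.imag < 0:             # select the root in the upper half plane
                s = -s
            w = u + s
        trace[k] = w
    return trace
```

This direct composition costs $O(n^2)$ work for $n$ steps and is adequate only for short traces, but each step adds half-plane capacity $2\,\Delta t$, so the output is parameterized by capacity exactly as in the discussion above.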
The SAW has a natural parameterization.
Let $W(n)$ be the infinite SAW in the upper half plane on a lattice
with unit lattice spacing. We extend $W(t)$ to all $t \ge 0$ by
linearly interpolating between the integer times.
The average distance from the origin to $W(n)$
is believed to grow like $n^\nu$ with $\nu=3/4$.
The limit of taking the lattice
spacing to zero can then be obtained by defining
\begin{equation}
\omega(t) = \lim_{n \rightarrow \infty} n^{-\nu} W(nt)
\label{scalinglimit}
\end{equation}
This limit is believed to exist and give a probability measure on
simple curves in the upper half plane starting at the origin, but this
has not been proved.
This definition of the scaling limit is analogous to Brownian motion
as the scaling limit of an ordinary random walk. If we take $W(n)$ to be
an ordinary random walk in the plane and $\nu=1/2$, then \reff{scalinglimit}
would converge to two-dimensional Brownian motion.
We refer to this parameterization of the scaling limit of
the SAW as its natural parameterization.
If this limit exists, then $\omega(t)$ has the same distribution
as $t^\nu \omega(1)$. Thus
\begin{equation}
E [|\omega(t)|^2] = b \, t^{2 \nu}
\end{equation}
where $b=E [|\omega(1)|^2]$.
The behavior of $E[|\gamma(t)|^2]$ and $E[|\omega(t)|^2]$ shows that
to make the scaling limit of the SAW and SLE$_{8/3}$
agree we must at least do a deterministic reparameterization.
If there is a reparameterization, $\hat{\gamma}(t)=\gamma(\phi(t))$,
of SLE$_{8/3}$ with a deterministic $\phi(t)$ that makes $\hat{\gamma}(t)$
and $\omega(t)$ agree in distribution, then we must have
$ E[|\hat{\gamma}(t)|^2] = E[|\omega(t)|^2]$ and so the reparameterization must
be
\begin{equation}
\hat{\gamma}(t) = \gamma(a \, t^{2 \nu})
\end{equation}
with $a=b/c$.
\begin{figure}[tbh]
\includegraphics{saw_sle_fix_time_dist}
\caption{ The distribution, $P(R \le t)$, for the distance $R$ from the
origin to $\gamma(1)$ (labeled by SLE) and to $\omega(1)$ (labeled by
SAW). Both random variables have been rescaled so that they have mean one.
}
\label{saw_sle_fix_time_dist}
\end{figure}
\begin{figure}[tbh]
\includegraphics{saw_sle_fix_time_angle}
\caption{ The distribution, $P(\Theta \le t)$, for the polar angle $\Theta$
of the points $\hat{\gamma}(1)$ (labeled by SLE) and $\omega(1)$ (labeled by
SAW).
}
\label{saw_sle_fix_time_angle}
\end{figure}
It is easy to see from simulations that this deterministic reparameterization
does not work, i.e., $\hat{\gamma}(t)$ and $\omega(t)$ do not have the
same distribution as parameterized curves. To show this we consider
$\hat{\gamma}(1)$ and $\omega(1)$. If they have the same distribution, then
$\gamma(1)/E[|\gamma(1)|]$ and $\omega(1)/E[|\omega(1)|]$ have
the same distribution.
For each of these two random points, we
compute two random variables: the distance $R$ from the origin to the random
point and the polar angle $\Theta$ of the random point.
Our rescaling means that $E[R]=1$ for both the SLE and the SAW.
The distributions of $R$ for the SLE and the SAW are shown
in figure \ref{saw_sle_fix_time_dist}. The distributions of $\Theta$ are
shown in figure \ref{saw_sle_fix_time_angle}.
We do not show any error bars in the figures. The width of the error bars
would be hard to see on this scale.
Clearly the distributions are different.
Throughout this paper our plots of the
distributions of random variables are plots of their cumulative
distribution functions rather than their densities. The cumulative
distribution is the function that is actually computed in the simulation.
Computing the density would require taking a numerical derivative.
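In code, the empirical cumulative distribution function of a Monte Carlo sample is just the sorted sample plotted against equally spaced heights; a minimal sketch (ours):

```python
import numpy as np

def empirical_cdf(samples):
    """Return (x, F) such that plotting F against x gives the empirical
    cumulative distribution function of the sample -- computed directly
    from the data, with no numerical differentiation."""
    x = np.sort(np.asarray(samples, dtype=float))
    F = np.arange(1, x.size + 1) / x.size
    return x, F
```

Two such curves (say, one for the SAW and one for SLE) can then be overlaid, or differenced after interpolating one onto the other's abscissas with `np.interp`.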
\section{Reparameterizing the SAW}
\label{reparamSAW}
In this section we reparameterize the scaling limit of the SAW
so that it agrees with SLE with its usual parameterization.
For $t>0$ we define $C_t$ to be the random time for the scaling limit
of the SAW when
\begin{equation}
hcap(\omega[0,C_t])=2t
\end{equation}
We define $\hat{\omega}(t)=\omega(C_t)$ so that
$hcap(\hat{\omega}[0,t])=2t$. Then $\hat{\omega}(t)$ and $\gamma(t)$
should have the same distribution as parameterized curves.
We test this by comparing the distributions of $R$ and $\Theta$ for
$\gamma(1)$ and $\hat{\omega}(1)=\omega(C_1)$. As before, $R$ is the distance
from the random point to the origin and $\Theta$ is the polar angle of
the random point.
If we plot the distributions of $R$ for $\gamma(1)$ and
$\hat{\omega}(1)$, the two curves are virtually indistinguishable.
So in figure \ref{saw_sle_fix_cap_dist} we plot the difference of
these two distributions.
The important thing to note about this figure is the scale on the
vertical axis. The difference between the two distributions is typically
on the order of a tenth of a percent.
We caution the reader that in figure \ref{saw_sle_fix_time_dist}
the random variable $R$ was rescaled so that its mean was 1.
This is not done in figure \ref{saw_sle_fix_cap_dist}. The mean of $R$
here is slightly greater than $2$. In particular, the sharp spikes in
figure \ref{saw_sle_fix_cap_dist} just left of $R=2$ correspond to where
the distribution is increasing sharply. In figure \ref{saw_sle_fix_time_dist}
this sharp increase occurs just left of $R=1$.
The difference of the distributions of $\Theta$
for $\gamma(1)$ and $\hat{\omega}(1)$ is shown in
figure \ref{saw_sle_fix_cap_angle}. Again, the most important feature
of this plot is the scale on the vertical axis. These two distributions
also differ on the order of a tenth of a percent.
The error bars in figures \ref{saw_sle_fix_cap_dist} and
\ref{saw_sle_fix_cap_angle} and subsequent figures are only statistical
errors, i.e., the error arising from not running the Monte Carlo
simulations forever. The error bars shown are two standard deviations.
The standard deviation is estimated using the technique of batched
means. The simulations of the SAW and the SLE both
contain systematic errors. These are discussed in section
\ref{simulating}.
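Batched means replaces the naive i.i.d. error bar, which would be biased for the highly correlated Markov chain samples; a minimal sketch (ours):

```python
import numpy as np

def batched_means_std(samples, n_batches=100):
    # estimate the standard deviation of the sample mean from the
    # spread of the means of consecutive (weakly dependent) batches
    m = len(samples) // n_batches
    batch_means = np.asarray(samples)[:m * n_batches].reshape(n_batches, m).mean(axis=1)
    return batch_means.std(ddof=1) / np.sqrt(n_batches)

# sanity check on i.i.d. samples, where the answer is sigma/sqrt(n)
rng = np.random.default_rng(2)
est = batched_means_std(rng.standard_normal(100_000))
print(est)  # near 1/sqrt(100000), about 0.0032
```

For correlated samples the batch length must be large compared to the autocorrelation time, so that the batch means are nearly independent.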
\bigskip
\begin{figure}[tbh]
\includegraphics{saw_sle_fix_cap_dist_dif}
\caption{The difference of the distributions of $R$ for $\hat{\omega}(1)$,
the SAW parameterized using capacity, and $\gamma(1)$, the SLE with its
usual parameterization.}
\label{saw_sle_fix_cap_dist}
\end{figure}
\begin{figure}[tbh]
\includegraphics{saw_sle_fix_cap_angle_dif}
\caption{The difference of the distributions of $\Theta$ for $\hat{\omega}(1)$,
the SAW parameterized using capacity, and $\gamma(1)$, the SLE with its
usual parameterization.}
\label{saw_sle_fix_cap_angle}
\end{figure}
To further test the equivalence of $\gamma(t)$ and $\hat{\omega}(t)$,
we consider the covariances of these two processes. We think of
these processes as taking values in $\mathbb{R}^2$ rather than the complex
plane and let $\gamma_i(t)$ and $\hat{\omega}_i(t)$ denote their coordinates
with $i=1,2$. The covariance of SLE is given by
\begin{equation}
C_{ij}(s,t) = E \, [\gamma_i(s) \gamma_j(t)]
- E \, [\gamma_i(s)] \, E \, [\gamma_j(t)]
\end{equation}
By scaling, the dependence of the last term on $s$ and $t$ is trivial:
\begin{equation}
E \, [\gamma_i(s)] \, E \, [\gamma_j(t)] = \sqrt{st} \,
E \, [\gamma_i(1)] \, E \, [\gamma_j(1)]
\end{equation}
So we will only study $E \, [\gamma_i(s) \gamma_j(t)]$.
By scaling,
\begin{equation}
E \, [\gamma_i(s) \gamma_j(t)] = s \, E \, [\gamma_i(1) \gamma_j(t/s)]
= t \, E \, [\gamma_i(s/t) \gamma_j(1)]
\end{equation}
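Both equalities are instances of the scale invariance of the chordal SLE trace:
$(\gamma(\lambda t))_{t \ge 0}$ and $(\sqrt{\lambda} \, \gamma(t))_{t \ge 0}$
are equal in distribution. Taking $\lambda = s$,
\begin{equation}
E \, [\gamma_i(s) \gamma_j(t)]
= E \, [\gamma_i(s \cdot 1) \, \gamma_j(s \cdot (t/s))]
= s \, E \, [\gamma_i(1) \gamma_j(t/s)]
\end{equation}
and taking $\lambda = t$ gives the second equality.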
So it suffices to study this quantity with $s=1$ and $0 \le t \le 1$.
We can rewrite this as
\begin{equation}
E \, [\gamma_i(1) \gamma_j(t)]
=E \, [\gamma_i(t) \gamma_j(t)]
+ E \, [(\gamma_i(1) - \gamma_i(t)) \gamma_j(t)]
\end{equation}
By scaling, the first term is just $c_{ij} \, t$ where
$c_{ij}= E \, [\gamma_i(1) \gamma_j(1)]$.
So we simulate just the second term. So let
\begin{equation}
\rho_{ij}(t)= E \, [(\gamma_i(1) - \gamma_i(t)) \gamma_j(t)]
\end{equation}
For the SAW parameterized by capacity, we define
\begin{equation}
\hat{\sigma}_{ij}(t)= E \, [(\hat{\omega}_i(1) - \hat{\omega}_i(t))
\hat{\omega}_j(t)]
\end{equation}
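The Monte Carlo estimator for these covariance functions is a straightforward sample average over the simulated curves. A sketch (ours; the array layout is an assumption), checked against one-dimensional Brownian motion, where independent increments force the answer to vanish:

```python
import numpy as np

def rho_hat(paths, k):
    # Monte Carlo estimate of E[(X(1) - X(t)) X(t)] at t = t_k,
    # from sampled paths stored as an array of shape (n_samples, n_times)
    xt = paths[:, k]
    return np.mean((paths[:, -1] - xt) * xt)

# sanity check: Brownian motion has independent increments, so rho(t) = 0
rng = np.random.default_rng(3)
n_samp, n_time = 100_000, 100
paths = np.cumsum(np.sqrt(1.0 / n_time) * rng.standard_normal((n_samp, n_time)), axis=1)
print(rho_hat(paths, n_time // 2))  # statistically consistent with 0
```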
The functions $\rho_{ij}(t)$ and $\hat{\sigma}_{ij}(t)$ are shown
in figure \ref{saw_sle_fix_cap_covar} for $i=j=1$ and $i=j=2$.
(It is not hard to see that by symmetry these functions are zero
when $i \ne j$.)
The functions agree so well that we have plotted individual points
for $\hat{\sigma}_{ij}(t)$ and an interpolating curve for $\rho_{ij}(t)$
so that the two functions may be distinguished.
It is much harder to simulate the SAW at a constant capacity than to
simulate the SLE. Consequently the statistical errors for the SAW are
larger. So we have only shown error bars for the SAW points.
The difference between the covariances of SLE and the SAW parameterized
by capacity is small, but appears to be greater than the statistical
error shown by the error bars in the figure. We emphasize again
that these error bars do not include the systematic errors.
Computing the capacity of a SAW is time-consuming. This limits
the SAW simulation to walks of modest length, which is one of the sources of
systematic error.
\begin{figure}[tbh]
\includegraphics{saw_sle_fix_cap_covar}
\caption{ The covariance $\rho_{ij}(t)$ for SLE and
the covariance $\hat{\sigma}_{ij}(t)$ for the SAW using capacity as its
parameterization.
The top curve is $i=j=2$, the bottom curve is $i=j=1$.
The SLE covariance is plotted with a curve and no error bars, while
individual points with error bars are plotted for the SAW.
}
\label{saw_sle_fix_cap_covar}
\end{figure}
\section{$p$-variation of the SAW}
\label{fvar}
We now consider the question of how to reparameterize SLE$_{8/3}$ so that it
agrees with the scaling limit of the SAW with its natural parameterization.
If one plots points on a SAW that are equally spaced in time, then the
points appear to be equally spaced along the SAW path. In other words,
the natural parameterization of the SAW corresponds to using the length
of the walk as the parameter. Of course, this is only a heuristic
statement since the SAW is not expected to have finite variation.
However, we will see that the so called ``$p$-variation'' provides
a way to make sense of the length along the SAW.
If $X(s)$ is a stochastic process, then the $p$-variation is defined
as follows. Let $0=t_0^n < t_1^n < t_2^n < \cdots < t_{k_n}^n = t$ be a
sequence of partitions of $[0,t]$. Let $\Pi_n$ denote the $n$th partition
and let $||\Pi_n||$ be the width of the largest subinterval of $\Pi_n$.
We assume that $||\Pi_n||$ goes to zero as $n$ goes to infinity.
We define
\begin{equation}
var_p((X(s))_{0 \le s \le t},\Pi_n)=
\sum_{j=1}^{k_n} |X(t^n_j)-X(t^n_{j-1})|^p
\label{fvar_def}
\end{equation}
With some abuse of notation we will write
$var_p((X(s))_{0 \le s \le t},\Pi_n)$ as just $var_p(X(0,t),\Pi_n)$.
We define the $p$-variation to be the limit as the partition gets finer:
\begin{equation}
var_p(X(0,t))=
\lim_{n \rightarrow \infty} var_p(X(0,t),\Pi_n)
\end{equation}
We have not specified the nature of the convergence in the above
definition, nor have we given any conditions on the sequence of partitions.
In our simulations of the SAW and SLE we will use a sequence of
uniform partitions for the $\Pi_n$. Changing the parameterization of
the process is equivalent to changing the partitions used. As we will see,
changing the parameterization can change the value of the $p$-variation.
It is important to note that we do not take a supremum over all partitions
in the definition of the $p$-variation. If one takes a supremum one
obtains a different quantity which, unfortunately, is also called the
$p$-variation in the literature. (It is sometimes called the strong
$p$-variation.)
For Brownian motion and $p=2$ the variation we have defined is the quadratic
variation studied by L\'evy \cite{levy}. If the sequence of partitions
satisfies $||\Pi_n|| = o(1/\log(n))$, then the quadratic variation of Brownian
motion converges with probability one to $t$ \cite{dud}. But
if the condition is relaxed to $||\Pi_n|| = O(1/\log(n))$ this
is not true \cite{vega}.
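This convergence is easy to observe numerically; a sketch (ours) of the partition sum for Brownian motion with $p=2$ and a uniform partition:

```python
import numpy as np

def var_p(x, p):
    # the partition sum  sum_j |X(t_j) - X(t_{j-1})|^p  for the sampled partition
    return np.sum(np.abs(np.diff(x)) ** p)

# quadratic variation of Brownian motion over [0,1] on a uniform partition
rng = np.random.default_rng(1)
n = 1_000_000
b = np.concatenate([[0.0], np.cumsum(np.sqrt(1.0 / n) * rng.standard_normal(n))])
print(var_p(b, 2.0))  # close to t = 1
```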
For other values of $p$, the $p$-variation has been shown to exist for
certain Gaussian processes including fractional Brownian motion and the
local times associated with a symmetric stable L\'evy process.
See the recent references \cite{fg, marrosa, marrosb, shao}
and references therein.
Now consider the SAW. If the scaling limit exists then
$|\omega(t_j)-\omega(t_{j-1})|$ is of size $(t_j-t_{j-1})^\nu$.
So it is natural to consider the $p$-variation with $p=1/\nu$.
We conjecture that
\begin{equation}
var_{1/\nu}(\omega(0,t))=c \, t
\end{equation}
for some constant $c$.
If the $1/\nu$ variation exists for the scaling limit of the SAW,
then it is trivial to see that
\begin{equation}
E [var_{1/\nu}(\omega(0,t))]=c \, t
\end{equation}
for some constant $c$. So the content of the conjecture is that the
$1/\nu$ variation exists and is non-random.
For the remainder of the paper, $p$ will be $1/\nu=4/3$, and so we will
drop the subscript $1/\nu$ on $var$.
We support the conjecture with a simulation. We only consider
a uniform partition with intervals of width $\Delta t$, and we take $t=1$.
In this case we denote $var(\omega(0,1),\Pi)$ by $var(\omega(0,1),\Delta t)$.
The distribution of this random variable
is shown in figure \ref{label_saw_fvar} for several values of $\Delta t$.
Two different sets of curves are shown in this figure.
The curves on the right side of the figure are for the variation
defined above.
Note that the scale on the horizontal axis begins at $0.92$, not $0$.
The figure clearly indicates that the distribution is converging to
a step function as it should if the $1/\nu$ variation is constant.
To study the convergence quantitatively, in figure
\ref{label_saw_fvar_variance} we plot the variance
of the random variable $var(\omega(0,1),\Delta t)$
as a function of $\Delta t$.
Three curves are shown in this figure.
The variance of $var(\omega(0,1),\Delta t)$ is the line of points
labeled ``var.''
In this log-log plot the data is very well fit by a line with slope $1$.
This corresponds to the variance being proportional to $\Delta t$.
Note that this is what we would find if the process had independent,
stationary increments. We do not expect the increments of the SAW to
be independent, but this result suggests that the lengths of these increments
are weakly correlated.
The fractional variation defined above involves the parameterization of the
SAW. A different parameterization could give a different value to this
variation. (We will see an example of this later.) Another definition of
the variation that does not depend on the parameterization is the
following. Let $\Delta t>0$. We define times $t_i$ as follows.
$t_0=0$. Given $t_i$, we let $t_{i+1}$ be the first time after $t_i$
such that $|\omega(t_{i+1})-\omega(t_i)|=(\Delta t)^\nu$. The variation
over the time interval $[0,t]$ is then defined to be $n \Delta t$,
where $n$ is the largest index with $t_n \le t$. We denote this variation
by $var_{no}(\omega(0,t),\Delta t)$. The subscript $no$ stands for
``no parameterization,'' to emphasize that this definition does not
depend on how the curve is parameterized.
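A direct implementation of this definition on a densely sampled curve (our sketch; the exponent `nu` would be $3/4$ for the SAW, while the sanity check below uses a deterministic unit-speed segment with `nu` $=1$, for which the answer is the length of the segment):

```python
import numpy as np

def var_no(points, dt, nu):
    # parameterization-free variation: count first passages of size dt**nu
    step = dt**nu
    anchor = points[0]
    n = 0
    for z in points[1:]:
        if abs(z - anchor) >= step:
            anchor = z          # approximate first-passage point
            n += 1
    return n * dt

# unit-speed straight segment with nu = 1: the crossings are spaced
# dt apart, so the variation recovers the length 1 of the segment
line = np.linspace(0.0, 1.0, 200_001).astype(complex)
print(var_no(line, 0.01, 1.0))  # ~1.0
```

The value depends only on the set of sampled points, not on how they are indexed by time.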
The curves on the left of figure \ref{label_saw_fvar} show the variation
$var_{no}(\omega(0,1),\Delta t)$ for several values of $\Delta t$.
The figure shows this variation is also converging to a constant.
It also shows that the constant $var_{no}(\omega[0,t])$ is slightly
less than the constant $var(\omega[0,t])$.
The variance of $var_{no}(\omega[0,t],\Delta t)$ is the line of points
labeled ``var$_{\rm no}$''
in figure \ref{label_saw_fvar_variance}.
This data is also very well fit by a line with slope 1.
We now return to our first definition of the $p$-variation,
eq. \reff{fvar_def}, and consider what can happen if we use a different
parameterization. We are particularly interested in what happens when
we use the half-plane capacity to reparameterize the SAW.
We denote the resulting variation by $var_{cap}(\omega(0,t),\Pi_n)$.
We can numerically compute the capacity along a SAW and so we can simulate
this random variable.
We have simulated it with a uniform partition.
Computing the capacity is difficult and this limits the lengths of
the SAW's we can study. This in turn restricts the simulations to
larger values of $\Delta t$.
The distributions of this random variable for several choices of
$\Delta t$ are shown in figure \ref{label_saw_cp_fvar}.
The first thing to note about this figure is the scale on the horizontal
axis. These distributions have considerably larger variance than the
curves in figure \ref{label_saw_fvar} with the same values of $\Delta t$.
Nonetheless, it appears that this random variable is also converging
to a constant as $\Delta t$ goes to zero. We also see that
$var_{cap}(\omega(0,1))$ is slightly less than $var_{no}(\omega(0,1))$.
The variance of $var_{cap}(\omega(0,1),\Delta t)$ as a function of $\Delta t$
is the set of points labeled ``var$_{\rm cap}$''
in figure \ref{label_saw_fvar_variance}.
This data indicates the variance is converging to zero as $\Delta t$ goes
to zero, but it is not well fit by any line, at least for the range of
$\Delta t$ shown.
For each of the three variations we have studied, the plots of their
distributions for three values of $\Delta t$ very nearly have a common
point of intersection. This provides a simple way to estimate the values
of these variations in the limit $\Delta t \rightarrow 0$. We
estimate
\begin{eqnarray}
var(\omega(0,1)) &=& 0.987 \pm 0.001 \nonumber \\
var_{no}(\omega(0,1)) &=& 0.972 \pm 0.001 \nonumber \\
var_{cap}(\omega(0,1)) &=& 0.92 \pm 0.01
\end{eqnarray}
The error bars we have given are just guesses. As we discuss in
section \ref{simulating} there are systematic errors in the simulations
which make it difficult to give reliable error bars.
\bigskip
\begin{figure}[tbh]
\includegraphics{saw_fvar}
\caption{ The set of curves on the right are the distributions of
$var(\omega(0,1),\Delta t)$; those on the left are the distributions of
$var_{no}(\omega(0,1),\Delta t)$.
}
\label{label_saw_fvar}
\end{figure}
\begin{figure}[tbh]
\includegraphics{saw_fvar_variance}
\caption{ The variance of the three different definitions of the
$p$-variation as functions of $\Delta t$.
}
\label{label_saw_fvar_variance}
\end{figure}
\begin{figure}[tbh]
\includegraphics{saw_cp_fvar}
\caption{The distribution of $var_{cap}(\omega(0,1),\Delta t)$, the random
variable that converges to the $p$-variation computed using the
parameterization of the SAW by capacity.
}
\label{label_saw_cp_fvar}
\end{figure}
\section{Reparameterizing the SLE}
\label{reparamSLE}
If the scaling limit of the SAW is indeed SLE, then the results of the
previous section suggest that each of the $1/\nu$ variations we considered
should exist for SLE and give a parameterization which corresponds to
the natural parameterization of the SAW.
Of course, there is no way to compute the variation $var(\gamma[0,t])$
for SLE since it requires knowing the parameterization we are trying to find.
We must either use $var_{no}(\gamma[0,t])$ or $var_{cap}(\gamma[0,t])$.
We use the former since its variance for a nonzero $\Delta t$ is smaller.
Let $V_t$ be the random time on the SLE where
\begin{equation}
var_{no}(\gamma[0,V_t])=ct
\end{equation}
where $c=var_{no}(\omega[0,1])$.
We define $\hat{\gamma}(t)=\gamma(V_t)$ so that
$var_{no}(\hat{\gamma}[0,t])=var_{no}(\omega[0,t])$.
Then $\hat{\gamma}(t)$ and $\omega(t)$
should have the same distribution as parameterized curves.
We test this by comparing the distributions of $\omega(1)$ and
$\hat{\gamma}(1)=\gamma(V_1)$.
We must compute the constant $c=var_{no}(\omega[0,1])$ by simulation.
This requires computing the variation $var_{no}(\omega(0,1),\Delta t)$
for several $\Delta t$ and then estimating the limit as
$\Delta t \rightarrow 0$.
However, when we compute the variation $var_{no}(\gamma[0,V_t])$ we must
also use a nonzero $\Delta t$ and then attempt to take a limit as
$\Delta t \rightarrow 0$. These extrapolations have some error in them.
We attempt to minimize this error by the following trick.
For a given $\Delta t$ we define $V_t$ by
\begin{equation}
var_{no}(\gamma[0,V_t],\Delta t)= var_{no}(\omega[0,1],\Delta t) \, t
\end{equation}
and then let $\hat{\gamma}(t)=\gamma(V_t)$. In other words, for the
constant $c$ we use the estimate of $c$ that comes from the SAW simulation
using the same value of $\Delta t$ that we use to compute the $p$-variation
for the SLE. (Note that $V_t$ now has some small dependence on $\Delta t$.)
\bigskip
\begin{figure}[tbh]
\includegraphics{saw_sle_fix_fvar_dist_dif}
\caption{The difference of the distributions of $R$ for $\omega(1)$,
the SAW with its natural parameterization, and $\hat{\gamma}(1)$, the
SLE parameterized by the variation $var_{no}$.
}
\label{saw_sle_fix_fvar_dist}
\end{figure}
\begin{figure}[tbh]
\includegraphics{saw_sle_fix_fvar_angle_dif}
\caption{The difference of the distributions of $\Theta$ for $\omega(1)$,
the SAW with its natural parameterization, and $\hat{\gamma}(1)$, the
SLE parameterized by the variation $var_{no}$.
}
\label{saw_sle_fix_fvar_angle}
\end{figure}
As before we compute the distributions of the random variables $R$
and $\Theta$.
In figure \ref{saw_sle_fix_fvar_dist} we show the difference between
the distributions of $R$ for $\omega(1)$ and $\hat{\gamma}(1)$.
Two plots are shown corresponding to two different values of $\Delta t$.
The difference is larger than that seen in figure \ref{saw_sle_fix_cap_dist}.
However, the figure also shows that the difference depends strongly
on $\Delta t$.
In figure \ref{saw_sle_fix_fvar_angle} we show the difference between
the distributions of $\Theta$ for $\omega(1)$ and $\hat{\gamma}(1)$.
For the angle we see almost no dependence on the choice of $\Delta t$.
Next we test the equivalence of $\hat{\gamma}(t)$ and $\omega(t)$
by comparing their covariances. As we saw before, the non-trivial
part of these covariances is given by
\begin{equation}
\hat{\rho}_{ij}(t)= E \, [(\hat{\gamma}_i(1)
- \hat{\gamma}_i(t)) \hat{\gamma}_j(t)]
\end{equation}
and
\begin{equation}
\sigma_{ij}(t)= E \, [(\omega_i(1) - \omega_i(t)) \omega_j(t)]
\end{equation}
These functions are shown in figure \ref{saw_sle_fix_fvar_covar}
for $i=j=1$ and $i=j=2$. The SAW and SLE covariances agree reasonably well.
As in figure \ref{saw_sle_fix_fvar_dist} there is significant
dependence on the $\Delta t$ used to compute the $p$-variation of the SLE.
For some values of $\Delta t$ the agreement between the two covariances
is not as good as that shown in the figure.
\begin{figure}[tbh]
\includegraphics{saw_sle_fix_fvar_covar}
\caption{ The covariance $\sigma_{ij}(t)$ for the SAW and the covariance
$\hat{\rho}_{ij}(t)$ for SLE using the variation $var_{no}$
for its parameterization.
The top curve is $i=j=2$, the bottom curve is $i=j=1$.
The SLE covariance is plotted with a curve and no error bars, while
individual points with error bars are plotted for the SAW.
}
\label{saw_sle_fix_fvar_covar}
\end{figure}
It is tempting to compare the plots in figures
\ref{saw_sle_fix_cap_covar} and \ref{saw_sle_fix_fvar_covar}.
The first thing to keep in mind is that in figure
\ref{saw_sle_fix_cap_covar}, $t=1$ corresponds to $hcap=2$ while
in figure \ref{saw_sle_fix_fvar_covar}, $t=1$ corresponds to
$var_{no} = 1$. Simulations show that the average capacity for a SAW
with $var_{no} = 1$ is close to $0.5$. Changing the capacity by a factor
of $4$ is the same as changing spatial scales by a factor of $2$, and
so corresponds to a change in the covariance by a factor of $4$.
Indeed the vertical scales in the two figures differ by a factor of
approximately $4$.
However, we should emphasize that
the two plots use completely different parameterizations
of the processes and so comparing them is not particularly meaningful.
These two plots should be different even if they are rescaled to have the
same vertical scale.
\section{Simulating SAW and SLE}
\label{simulating}
One method for simulating SLE is to approximate the conformal
map $g_t$ by the composition of a large number of conformal maps
which are known explicitly.
A particular implementation was studied by R. Bauer \cite{bauer}.
This method is also briefly discussed for the radial case in
\cite{mr}. Here we give a brief overview.
A more detailed review may be found in \cite{tk_sle}.
We consider the time interval $[0,1]$ and
divide it into $N$ subintervals.
(We could use a uniform partition of $[0,1]$, but one obtains a more
uniform distribution of points along the SLE curve by using a non-uniform
partition of the time interval. See \cite{tk_sle} for details.)
The conformal map $g_t$ can be written as the composition of
$N$ conformal maps, each of which corresponds to the solution of
the SLE equation \reff{sle_eq} over one of the small time intervals.
Each of these $N$ conformal maps is approximated by a simple
conformal map. We use the conformal map that takes the half plane
minus a slit onto the half plane. The parameters for these conformal
maps (e.g., the length and angle of the slit) are chosen so that
the conformal maps have the correct capacity and so that the
driving function corresponding to the composition of these conformal maps
approximates the original driving function $\sqrt{\kappa} B_t$.
For example, this can be done so that the driving function of the
approximation agrees with $\sqrt{\kappa} B_t$ at the times which are
endpoints of the $N$ subintervals.
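A minimal version of this scheme (ours, not the optimized implementation of \cite{tk_sle}): uniform time steps, each incremental map taken to be the vertical slit map $g_j(z) = u_j + \sqrt{(z-u_j)^2 + 4\Delta t}$, and a trace point obtained by applying the inverse maps to the driving point:

```python
import numpy as np

def sle_trace(kappa, n_steps, t=1.0, seed=None):
    # points on the SLE trace via composition of vertical-slit maps
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    u = np.cumsum(np.sqrt(kappa * dt) * rng.standard_normal(n_steps))
    trace = np.empty(n_steps, dtype=complex)
    for k in range(n_steps):
        z = complex(u[k])            # g_{t_k} maps the tip to the driving point
        for j in range(k, -1, -1):   # undo the incremental maps, most recent first
            w = np.sqrt((z - u[j])**2 - 4.0 * dt)
            if w.imag < 0:
                w = -w               # pick the root mapping into the upper half plane
            z = u[j] + w
        trace[k] = z
    return trace
```

Each point costs $O(N)$ map applications, so computing all $N$ points this way costs $O(N^2)$.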
To compute a point on the SLE trace, one must apply $O(N)$ of these
conformal maps. So the time to compute a single point is $O(N)$.
If one wants to compute $N$ points on the SLE trace this will
take a time $O(N^2)$. The paper \cite{tk_sle} shows how to implement
this method of simulating the SLE so that the time required to
compute a single point is approximately $O(N^{0.4})$ rather than $O(N)$.
Certain features of the SLE are relatively easy to study by
simulation. For example, to compute the distribution of the SLE trace
at a fixed time, one need only compute a single point on the SLE trace.
One can use a relatively large value of $N$ and generate
a large number of samples.
By contrast, computing the $1/\nu$ variation of the curve
involves a double limit. One needs to let $N \rightarrow \infty$ and
then let the time interval, $\Delta t$, used to compute the
variation go to zero. In practice this means that one must take a
small $\Delta t$ and then take $N$ large enough that $1/N$ is small
compared to $\Delta t$. Thus the simulations that require computing
the variation are among the most difficult.
We simulate the SAW using the pivot algorithm
\cite{ms}. The particular implementation of the pivot algorithm we use
is found in \cite{tk_pivot}. This algorithm is fast in two dimensions, and
one can study quantities like the location of the SAW at a fixed time
for walks with a million steps. The difficult part of the SAW
simulations is computing the capacity of a walk. This can only be
done for much shorter walks.
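A stripped-down pivot algorithm (our sketch; production implementations such as \cite{tk_pivot} are far faster) illustrates the move:

```python
import numpy as np

def pivot_saw(n_steps, n_iters, seed=None):
    # pivot algorithm for the 2d SAW: start from a straight rod and
    # repeatedly apply a random lattice symmetry to one end of the walk
    rng = np.random.default_rng(seed)
    R = np.array([[0, -1], [1, 0]])            # 90 degree rotation
    F = np.array([[1, 0], [0, -1]])            # reflection
    I = np.eye(2, dtype=int)
    syms = [np.linalg.matrix_power(R, k) @ M for k in range(4) for M in (I, F)]
    walk = np.column_stack([np.arange(n_steps + 1), np.zeros(n_steps + 1, dtype=int)])
    for _ in range(n_iters):
        p = rng.integers(1, n_steps)           # pivot site
        S = syms[rng.integers(len(syms))]
        head = walk[:p + 1]
        tail = walk[p] + (walk[p + 1:] - walk[p]) @ S.T
        occupied = set(map(tuple, head))
        if all(tuple(x) not in occupied for x in tail):
            walk = np.vstack([head, tail])     # accept: the new walk is self-avoiding
    return walk
```

Since the pivoted tail is an isometric image of a self-avoiding piece, only intersections between the two halves need to be checked; our simple set lookup does this in $O(N)$ per attempted move.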
We compute the capacity of a SAW using the zipper algorithm \cite{kuh,mr}.
We give a brief explanation of how the algorithm
works in our context and refer the reader to \cite{mr} for more
detail. Consider a curve $\omega(t)$ in the upper half plane which
starts at the origin and ends at $P$, e.g., a self-avoiding walk.
To compute the capacity we need to find the conformal map $g(z)$ that
takes the half plane $\mathbb{H}$ minus the curve onto $\mathbb{H}$. It should
be normalized so that $g(\infty)=\infty$ and $g^\prime(\infty)=1$.
This determines the map up to a translation.
The capacity is the coefficient of $1/z$ in the expansion of $g(z)$
about $\infty$. It does not depend on the choice of translation.
Let $0=z_0,z_1,z_2, \cdots, z_n=P$ be points along the curve.
Let $C_i$ denote the portion of the curve from $z_{i-1}$ to $z_i$.
Let $g_1(z)$ be the conformal map that takes $\mathbb{H} \setminus C_1$
onto $\mathbb{H}$. In addition to the above normalizations,
we require $g_1(z_1)=0$. Then $g_1$ maps $C_2 \cup \cdots \cup C_n$
to a curve that starts at $0=g_1(z_1)$ and ends at $g_1(z_n)$ and
passes through the points $g_1(z_i)$, $i=2,\cdots, n-1$.
Now we let $g_2(z)$ be the conformal map that takes
$\mathbb{H} \setminus g_1(C_2)$ onto $\mathbb{H}$. Then $g_2 \circ g_1(z)$ takes
$C_3 \cup \cdots \cup C_n$ to a curve that starts at
$0=g_2 \circ g_1(z_2)$, ends at $g_2 \circ g_1(z_n)$
and passes through the points $g_2 \circ g_1(z_i)$, $i=3,\cdots, n-1$.
We continue to remove one $C_i$ at a
time. $g_i(z)$ is the conformal map that takes
$\mathbb{H} \setminus g_{i-1} \circ \cdots \circ g_2 \circ g_1(C_i)$
onto $\mathbb{H}$.
The composition $g(z)=g_n \circ \cdots \circ g_2 \circ g_1(z)$
is then the desired conformal map.
The capacity of $g$ is the sum of the capacities of the $g_i$.
Note that even if the original curve is piece-wise linear, the
segments of the form $g_1(C_2)$, $g_2 \circ g_1(C_3), \cdots$ are not.
The key idea of the zipper algorithm is to approximate $g_i$ by
the conformal map $h_i$
that takes $\mathbb{H} \setminus D_i$ onto $\mathbb{H}$ where $D_i$ is
a curve that starts at $0$ and ends at
$g_{i-1} \circ \cdots \circ g_2 \circ g_1(z_i)$.
So $D_i$ has the same endpoints as
$g_{i-1} \circ \cdots \circ g_2 \circ g_1(C_i)$.
The curve $D_i$ is chosen so that the map $h_i$ is relatively easy to compute.
One possibility is to take $D_i$ to be a line segment from $0$ to
$g_{i-1} \circ \cdots \circ g_2 \circ g_1(z_i)$.
Then there is an explicit expression
for the inverse of $h_i$, and $h_i$ itself may be found by Newton's
method. (Some care is needed in implementing Newton's method \cite{mr}.)
Another choice is to take $D_i$ to be the arc of the circle that is
perpendicular to the real axis and connects $0$ and
$g_{i-1} \circ \cdots \circ g_2 \circ g_1(z_i)$.
This has the advantage that $h_i$
only involves linear fractional transformations and a square root.
Because of the need to compute
$g_{i-1} \circ \cdots \circ g_2 \circ g_1(z_i)$ for every point
$z_i$, the zipper algorithm requires a time $O(n^2)$. The idea in
\cite{tk_sle} that speeds up the SLE simulation may be applied here
to dramatically reduce the time required by this algorithm.
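A bare-bones version of the straight-slit variant (our illustrative sketch, not the implementation of \cite{mr}): the explicit map $f_\alpha(z)=(z+1-\alpha)^{1-\alpha}(z-\alpha)^\alpha$ takes $\mathbb{H}$ onto $\mathbb{H}$ minus a straight slit from $0$ of angle $\pi\alpha$ and length $(1-\alpha)^{1-\alpha}\alpha^\alpha$, and contributes capacity $\alpha(1-\alpha)/2$ before rescaling.

```python
import numpy as np

def unzip(w, alpha, tol=1e-13, maxit=60):
    # invert the slit map f(z) = (z+1-alpha)**(1-alpha) * (z-alpha)**alpha
    # by Newton's method, starting from the target point itself
    z = w
    for _ in range(maxit):
        f = (z + 1 - alpha)**(1 - alpha) * (z - alpha)**alpha
        step = (f - w) / (f * ((1 - alpha) / (z + 1 - alpha) + alpha / (z - alpha)))
        z -= step
        if abs(step) < tol:
            break
    return z

def hcap_zipper(points):
    # half-plane capacity of the curve 0 -> points[0] -> points[1] -> ...,
    # unzipping one straight slit per point and summing the slit capacities
    ws = [complex(p) for p in points]
    total = 0.0
    for i, w in enumerate(ws):
        alpha = np.angle(w) / np.pi                    # slit angle / pi
        scale = abs(w) / ((1 - alpha)**(1 - alpha) * alpha**alpha)
        total += scale**2 * alpha * (1 - alpha) / 2    # capacity of this slit
        for j in range(i + 1, len(ws)):
            ws[j] = scale * unzip(ws[j] / scale, alpha)
    return total

# a vertical slit [0, i] has hcap = 1/2; the straight-slit zipper is exact here
print(hcap_zipper([0.5j, 1.0j]))  # ~0.5
```

In this sketch each unzipped piece is approximated by a single straight slit through the image point and the approximation is never refined by adding points, matching the simplification described in the text.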
The curve $D_i$ starts and ends at the same points as
$g_{i-1} \circ \cdots \circ g_2 \circ g_1(C_i)$,
but does not necessarily approximate it well. For a fixed curve one
improves the approximation to the capacity by using more points
$z_1, \cdots, z_n$ along the curve. We do not do this. Thus the capacity
we compute is not exactly the capacity of the SAW. However, it is
the exact capacity (up to numerical errors)
of some curve that passes through the same lattice points as the SAW.
We expect that this curve will have the same scaling limit as the SAW.
As in any Monte Carlo simulation there is error because we do not run
the simulation forever. We refer to this as ``statistical error.''
It is relatively easy to estimate (we use batched means).
There are also systematic errors in the simulations which we now discuss.
For the SAW, the scaling limit is given by a double limit. First we should
let the length of the walk go to infinity and then take the lattice
spacing to zero. In practice we fix a large $N$ and simulate SAW's
with $N$ steps but only use the first $N^\prime$ steps where
$N^\prime$ is significantly smaller than $N$.
The simulation is done with a unit lattice, but then we rescale the
walks by a factor of $(N^\prime)^{-\nu}$. So the portion of the
rescaled SAW that we use has a
size of order $1$ and approximates the scaling limit of the SAW
with the natural parameterization running from $t=0$ to $t=1$.
For the SAW simulations we will give the
number of iterations of the Monte Carlo Markov Chain. This number is
typically very large, but the samples generated are highly correlated.
For the SLE, $N$ denotes the number of conformal maps used in the
approximation.
For the SLE simulations we will give the number of SLE's generated.
These samples are independent.
We end this section with a discussion of the parameters used in the
various simulations.
In figures \ref{saw_sle_fix_time_dist} and \ref{saw_sle_fix_time_angle},
the SAW is simulated using $N=1,000,000$ and $N^\prime=200,000$.
The simulation ran for 1 billion iterations.
The SLE simulation used $N=100,000$, and 1 million samples were generated.
These large numbers are possible because all we need to compute for the
SLE is the position of the trace at $t=1$.
In figures \ref{saw_sle_fix_cap_dist} and \ref{saw_sle_fix_cap_angle}
the SLE simulation used is the same as that in figures
\ref{saw_sle_fix_time_dist} and \ref{saw_sle_fix_time_angle}.
The SAW simulation requires computing the capacity of the walk.
This limits the simulation to significantly shorter walks.
We take $N=100,000$ and rescale the walk by a factor of $N^{-\nu}$.
For this rescaled walk we compute
the capacity along the walk up until $hcap=0.1$. (The number of
steps at which this occurs is random but well below $N$. The mean number
of steps at which the capacity is $0.1$ is about a third of the total number
of steps.) We then rescale the SAW again by a factor of $\sqrt{20}$
so that the random time we are finding is where the capacity is $2$.
As we have noted the samples generated by the SAW simulation are highly
correlated. If we are studying an observable which is trivial to compute,
we might as well compute the value of the observable at every time step.
The samples will be highly correlated, but there is very little cost in
terms of computation time.
For observables that are not trivial to compute, e.g., observables that
involve computing the capacity, if we computed the observable at every
time step we would spend most of the computation time on computing the
observable. For such observables we only compute it
every $s$ time steps. In this SAW simulation
we took $s=100,000$. The simulation was run for 8 billion iterations, so
$80,000$ samples of the observables were computed.
For the first covariance plot, figure \ref{saw_sle_fix_cap_covar},
the SLE simulation is done with $N=10,000$. We compute the SLE covariance
at $50$ equally spaced times. 200,000 samples of the SLE were
generated. For the SAW simulation, we take $N=100,000$, rescale the
walk by a factor of $N^{-\nu}$, and then compute
the capacity along the walk up until $hcap=0.1$.
We then rescale the SAW again by a factor of $\sqrt{20}$
so that the random time we are finding is where the capacity is $2$.
Computing the capacity of the SAW is slow, so we only compute it
every $100,000$ iterations of the Markov chain. We ran the chain for
$8$ billion iterations, so we generated $80,000$ samples of the SAW
covariance.
There are two simulations of the SAW in figure \ref{label_saw_fvar}.
In both of them $N=1,000,000$ and we use $N^\prime=500,000$ and
$N^\prime=1,000,000$. For most observables, taking $N^\prime$ at or near
$N$ is a bad idea - it will change the distribution of the observable.
However, for the two variations shown in the figure we find no difference
between the variations computed using $N^\prime=500,000$
and $N^\prime=1,000,000$. The curves shown use the latter value.
Both simulations were run for $1$ billion iterations.
In the simulation of the SAW used for figure \ref{label_saw_cp_fvar},
we take $N=100,000$ and $N^\prime=10,000$. The relatively small value
of $N^\prime$ is because of the difficulty in computing the capacity of a SAW.
We run the Markov chain for $1$ billion iterations, but only
compute $var_{cap}$ every $100,000$ iterations for a total of
only $10,000$ samples.
In figures \ref{saw_sle_fix_fvar_dist} and \ref{saw_sle_fix_fvar_angle}
the SAW simulation used is the same as that in figures
\ref{saw_sle_fix_time_dist} and \ref{saw_sle_fix_time_angle}.
For the SLE simulation we took $N=500,000$. However, we only
computed every fifth point along the SLE, so we compute a total of $100,000$
points on the SLE. We generated 146,000 samples. This is the longest
simulation in this paper. It required approximately 150 cpu-days.
The time needed for this simulation is significantly reduced by the
following trick. We compute the variation $var_{no}$ up to the time
when it reaches $1$. The time on the SLE at which this happens is random,
but it occurs well before the end of the SLE we are computing. So we
actually only need to compute some initial fraction of the $100,000$ points on
the SLE.
Finally, for figure \ref{saw_sle_fix_fvar_covar} the SAW simulation
uses $N=1,000,000$ and $N^\prime=400,000$. The SAW simulation was
run for $1$ billion iterations. For the SLE simulation, $N=250,000$
but only every fifth point was computed. Again, as discussed in the
preceding paragraph we need only compute an initial portion of the SLE.
127,000 samples of the SLE were generated.
\section{Conclusions}
\label{conclusion}
The main conclusion of this paper is that the $p$-variation with
$p=1/\nu$ of the SLE trace provides a parameterization that corresponds to
the natural parameterization of the SAW. Simulations show that the
$p$-variation of the SAW is non-random. A trivial scaling argument then
implies it is proportional to the natural parameterization of the SAW.
Thus the SLE with the $p$-variation as its parameterization and the
SAW with its natural parameterization should agree as parameterized curves.
Two tests were done to check this agreement.
The distributions of the random curves at a fixed time were compared, and
the covariances of the two processes were compared. Good agreement
was found.
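To make the central quantity concrete, the discrete $p$-variation of a sampled curve is simply a block sum of increment norms. The sketch below is a minimal illustration with our own naming (\texttt{p\_variation}, \texttt{block}); it is not the code used for the simulations in this paper.

```python
import numpy as np

def p_variation(path, p, block=1):
    """Discrete p-variation of a sampled planar curve.

    path  : (N, 2) array of points along the curve.
    p     : variation exponent; p = 1/nu = 4/3 for the SAW.
    block : use only every `block`-th sample point (mimicking the trick of
            computing only every fifth point along the SLE).
    """
    pts = np.asarray(path)[::block]
    steps = np.diff(pts, axis=0)         # increments along the curve
    return float(np.sum(np.linalg.norm(steps, axis=1) ** p))

# A straight segment of length 1 sampled in 1000 pieces has p-variation
# 1000 * (1/1000)**p = 1000**(1 - p), which vanishes as the mesh refines
# when p > 1; a genuinely fractal curve keeps a nontrivial limit.
line = np.column_stack([np.linspace(0.0, 1.0, 1001), np.zeros(1001)])
v = p_variation(line, p=4.0 / 3.0)
```

For a smooth curve this quantity tends to zero under mesh refinement, which is why the fractal exponent $p=1/\nu$ is the interesting choice.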
A secondary conclusion of this paper concerns using the half-plane capacity
to reparameterize the SAW. With this reparameterization it should agree with
the SLE as parameterized curves.
The same two tests of their equivalence as parameterized curves
showed good agreement.
For random fractal curves like the SAW or the loop-erased random walk
the exponent $\nu$ should be equal to the reciprocal of the Hausdorff
dimension of the curve. Beffara \cite{befa,befb} has proved that
if $\gamma$ is the SLE trace with parameter $\kappa$,
then the Hausdorff dimension of $\gamma$ is $1+\kappa/8$ a.s.
This suggests that the $p$-variation of SLE should exist for
$p=\nu^{-1}=1+\kappa/8$.
A natural question is to consider this variation for other discrete models
and see if it exists and is non-random as it appears to be for the SAW.
Preliminary simulations of the loop-erased random walk indicate that its
$1/\nu$-variation exists and is non-random for $\nu=4/5$.
This was one of the models considered by Schramm in his original paper
\cite{schramm}. Lawler, Schramm and Werner \cite{lsw_lerw}
have proved that its scaling limit converges to SLE with $\kappa=2$.
\bigskip
\bigskip
\noindent {\bf Acknowledgments:}
The Banff International Research Station made possible many useful
interactions. In particular, the author thanks
David Brydges, Greg Lawler, Don Marshall, Daniel Meyer, Yuval Peres,
Stephen Rohde, Oded Schramm, Wendelin Werner and Peter Young for
useful discussions.
This work was supported by the National Science Foundation (DMS-0201566
and DMS-0501168).
\bigskip
\bigskip
% https://arxiv.org/abs/math/0510604 -- ``Monte Carlo comparisons of the self-avoiding walk and SLE as parameterized curves''
% https://arxiv.org/abs/2212.13251
\title{Robust computation of optimal transport by $\beta$-potential regularization}
\begin{abstract}
Optimal transport (OT) has become a widely used tool in the machine learning field to measure the discrepancy between probability distributions. For instance, OT is a popular loss function that quantifies the discrepancy between an empirical distribution and a parametric model. Recently, an entropic penalty term and the celebrated Sinkhorn algorithm have been commonly used to approximate the original OT in a computationally efficient way. However, since the Sinkhorn algorithm runs a projection associated with the Kullback--Leibler divergence, it is often vulnerable to outliers. To overcome this problem, we propose regularizing OT with the $\beta$-potential term associated with the so-called $\beta$-divergence, which was developed in robust statistics. Our theoretical analysis reveals that the $\beta$-potential can prevent the mass from being transported to outliers. We experimentally demonstrate that the transport matrix computed with our algorithm helps estimate a probability distribution robustly even in the presence of outliers. In addition, our proposed method can successfully detect outliers from a contaminated dataset.
\end{abstract}
\section{Introduction}
Many machine learning problems, such as density estimation and generative modeling, are often formulated in terms of a discrepancy between probability distributions \citep{leastsquare,GAN}. As a common choice, the Kullback--Leibler (KL) divergence \citep{KLdivergence} has been widely used, since minimizing the KL-divergence of an empirical distribution from a parametric model corresponds to maximum likelihood estimation. However, the KL-divergence suffers from some problems. For instance, the KL-divergence of $p$ from $q$ is not well-defined when the support of $p$ is not completely included in the support of $q$. Moreover, the KL-divergence does not satisfy the axioms of a metric on a probability space. On the other hand, \emph{optimal transport} (OT) \citep{OTbook} does not suffer from these problems. OT does not require any conditions on the supports of the probability distributions and is thus expected to be more stable than the KL-divergence; in particular, its estimator is less prone to diverging to infinity. In addition, OT between two distributions is a metric on a probability space and therefore defines a proper distance between histograms and probability measures \citep{OTformulation}. Owing to these properties, OT has found many applications, such as image processing \citep{Rabin} and color modifications \citep{Solomon}. \par
However, the ordinary OT suffers from heavy computation. To cope with this problem, one of the common approaches is to regularize the ordinary OT problem with an entropic penalty term (Boltzmann--Shannon entropy \citep{ROTandRMD}) and use the Sinkhorn algorithm \citep{Sinkhorn1967} to approximate OT \citep{Sinkhorn}. The entropic penalty makes the objective strictly convex, ensuring the existence of the unique global optimal solution, and the Sinkhorn algorithm projects this global optimal solution onto a set of couplings in terms of the KL-divergence, a divergence associated with the Boltzmann--Shannon entropy \citep{ROTandRMD}.
Unfortunately, the KL projection in statistical estimation is often not robust in the presence of outliers \citep{Basu}. In our pilot study, we experimentally confirmed that the Sinkhorn algorithm is easily affected by outliers (Figure \ref{toyexperiment}). As can be seen in Table \ref{table_toyexperiment}, the output value of the Sinkhorn algorithm drastically increases even when only a small number of outliers are included in the dataset.
\begin{figure}[t]
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width = 4.0cm]{figures/toyexperiment1/introduction.pdf}
\subcaption{Sets of samples without outliers}
\label{figure(a)}
\end{minipage} &
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width = 4.0cm]{figures/toyexperiment1/introduction_contamination.pdf}
\subcaption{Sets of samples with outliers}
\label{figure(b)}
\end{minipage}
\end{tabular}
\caption{(a) 500 samples (red) are drawn from $\mathcal{N}([0, 0]^{\top}, I)$ and 500 samples (blue) are from $\mathcal{N}([5, 5]^{\top}, I)$. $I$ is the two-dimensional identity matrix. (b) 10 samples from two-dimensional uniform distribution $\mathrm{U}\{(x, y)|-50\le x, y \le 50 \}$ are added to the red samples.}
\label{toyexperiment}
\end{figure}
\begin{table}[t]
\caption{The output values of the Sinkhorn algorithm and of our algorithm. The exact OT value is 50.13 for the sets of samples in Figure \ref{figure(a)}.}
\centering
\begin{tabular}{ccc}
\toprule & Figure~\ref{figure(a)} & Figure~\ref{figure(b)} \\
\midrule
The Sinkhorn algorithm & 50.74 & 92.19 \\
Our algorithm & 50.10 & 50.00 \\
\bottomrule
\end{tabular}
\label{table_toyexperiment}
\end{table}
The high sensitivity of the Sinkhorn algorithm may lead to undesired solutions in probabilistic modeling when we deal with noisy and adversarial datasets \citep{adversarial_attack}. Several existing works have tackled this challenge. \citeauthor{Staerman} (\citeyear{Staerman}) proposed a median-of-means estimator of the $1$-Wasserstein dual to suppress outlier sensitivity. However, the obtained solution is hard to interpret as an approximation of OT because the corresponding primal problem is unclear. On the other hand, the following works robustly approximate OT by sending only a small probability mass to outliers, allowing some violation of the coupling constraint: \citeauthor{Balaji} (\citeyear{Balaji}) used unbalanced OT \citep{Chizat2017} with the $\chi^{2}$-divergence as the $f$-divergence penalty on marginal violation to compute OT robustly. This formulation requires access to the outlier proportion, which is usually not available. Moreover, in Section \ref{outlierdetection_experiment}, we show that their method, which relies on the optimization package CVXPY \citep{CVXPY}, does not scale to large sample sizes. \citeauthor{Mukherjee} (\citeyear{Mukherjee}) mainly focused on outlier detection by truncating the distance matrix in OT. As a downside, one needs to set an appropriate threshold to use their method, which is hardly known in advance. Hence, we still lack a robust OT formulation that is independent of sensitive hyperparameters and provides easily accessible primal transport matrices.\par
In this work, we propose to mitigate the outlier sensitivity of the Sinkhorn algorithm by regularizing OT with the $\beta$-potential term instead of the Boltzmann--Shannon entropy. This formulation can be regarded as a projection based on the $\beta$-divergence~\citep{Basu,Futami}.
With some computational tricks, our algorithm is guaranteed not to move any probability mass to outliers (Figure~\ref{P_beta_figures}). This property also suggests that our algorithm computes an approximate OT between the inliers. The approximate OT computed by our method was 50.10 and 50.00 in the settings of Figures~\ref{figure(a)} and~\ref{figure(b)}, respectively (Table \ref{table_toyexperiment}), indicating that our method is less affected by outliers. Through numerical experiments, we demonstrate that our proposed method can measure a distance between datasets more robustly than the Sinkhorn algorithm. As a practical application, we show our proposed method can be applied to an outlier detection task.
\begin{figure}[t]
\centering
\includegraphics[width = 10cm]{figures/toyexperiment1/P_beta_heatmap.pdf}
\caption{The heatmap of the transport matrix computed with our algorithm. The horizontal histogram is a set of 500 samples from a 1-dimensional standard normal distribution. The vertical histogram is a set of 495 samples from a 1-dimensional standard normal distribution, together with 5 outliers at the value 70. As can be seen from the heatmap, no mass was transported from the outliers included in the source histogram.}
\label{P_beta_figures}
\end{figure}
\section{Background}
In this section, we first show the formulation of ordinary discrete OT. Subsequently, we review the Bregman divergence. Finally, we introduce the convex regularized discrete OT (CROT) formulation and the alternate Bregman projection to obtain the solution to the CROT.
\subsection{Optimal Transport (OT)}
We introduce OT in a discrete setting. In this case, OT can be regarded as the cheapest plan to deliver items from $m$ suppliers to $n$ consumers, where each supplier has supply $\frac{1}{m}$ and each consumer has demand $\frac{1}{n}$. In this work, we mainly focus on measuring the transportation cost between two probability distributions. Suppose we have two sets of independent samples $\{ \boldsymbol{x}_i \}_{i = 1}^{m}$ and $\{ \boldsymbol{y}_j\}_{j = 1}^{n}$ drawn from two distributions $P_x$ and $P_y$, respectively. We write the corresponding empirical measures as $\hat{P}_{x} \coloneqq \frac{1}{m} \sum_{i = 1}^{m} \delta_{\boldsymbol{x}_i}$ and $\hat{P}_{y} \coloneqq \frac{1}{n} \sum_{j = 1}^{n} \delta_{\boldsymbol{y}_j}$, where $\delta_{\boldsymbol{x}}$ is the delta function at position $\boldsymbol{x}$. Let $\boldsymbol{\gamma} \in \mathbb{R}_{+}^{m \times n}$ be the distance matrix, where $\gamma_{ij}$ denotes the distance between $\boldsymbol{x}_i$ and $\boldsymbol{y}_j$. Transport matrices are confined to
\begin{align}\label{coupling_constraint}
\left\{ \boldsymbol{\Pi} \in \mathbb{R}^{m \times n}_{+} \,\middle|\, \boldsymbol{\Pi} \boldsymbol{1}_n = \frac{\boldsymbol{1}_m}{m}, \boldsymbol{\Pi}^{\top} \boldsymbol{1}_m = \frac{\boldsymbol{1}_n}{n} \right\} \eqqcolon \mathcal{G}\left(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n}\right),
\end{align}
where $\mathbb{R}^{m \times n}_{+}$ is the set of $m \times n$ matrices with non-negative entries. We call $\mathcal{G}(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n})$ the \emph{coupling constraint}.
In order to keep the notation concise, the Frobenius inner product between two matrices $\boldsymbol{\pi}, \boldsymbol{\gamma} \in \mathbb{R}_{+}^{m \times n}$ is denoted by
\begin{equation}
\langle \boldsymbol{\pi}, \boldsymbol{\gamma} \rangle \coloneqq \sum_{i, j}\pi_{ij}\gamma_{ij}.
\end{equation}
Then, OT between the two empirical distributions $\hat{P}_{x}$ and $\hat{P}_{y}$ is defined as follows \citep{OTformulation}:
\begin{eqnarray}\label{DiscreteOT}
\mathrm{OT}(\hat{P}_{x} \| \hat{P}_{y}) &\coloneqq& \min_{\boldsymbol{\pi}\in \mathcal{G}(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n})} \langle\boldsymbol{\pi}, \boldsymbol{\gamma} \rangle.
\end{eqnarray}
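Problem (\ref{DiscreteOT}) is a finite linear program, so small instances can be solved exactly. The following sketch (assuming SciPy's \texttt{linprog} is available; naming is ours) is purely illustrative and is not part of the proposed method.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(gamma):
    """Exact OT with uniform marginals 1_m/m and 1_n/n via linear programming.

    gamma : (m, n) distance matrix.  Returns (optimal value, transport matrix).
    """
    m, n = gamma.shape
    # Equality constraints on the vectorized plan x = pi.ravel() (row-major).
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0    # row sums:    pi 1_n   = 1_m / m
    for j in range(n):
        A_eq[m + j, j::n] = 1.0             # column sums: pi^T 1_m = 1_n / n
    b_eq = np.concatenate([np.full(m, 1.0 / m), np.full(n, 1.0 / n)])
    res = linprog(gamma.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun, res.x.reshape(m, n)

# Matching point clouds: all mass can stay on the diagonal at zero cost.
cost = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
val, plan = discrete_ot(cost)
```

The number of variables is $mn$, which is why such exact solvers do not scale and regularized approximations are preferred.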
\subsection{Bregman divergence}
Let $\mathcal{E}$ be a Euclidean space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$.
Let $\phi:\mathcal{E}\to\mathbb{R}$ be a strictly convex function on $\mathcal{E}$ that is differentiable on $\mathrm{int}(\dom\phi) \neq \emptyset$. The \emph{Bregman divergence} generated by $\phi$ is defined as follows:
\begin{equation}
B_{\phi}(\boldsymbol{x} \| \boldsymbol{y}) := \phi(\boldsymbol{x}) - \phi(\boldsymbol{y}) - \langle \boldsymbol{x} - \boldsymbol{y}, \nabla \phi(\boldsymbol{y})\rangle,
\end{equation}
for all $\boldsymbol{x} \in \dom\phi$ and $\boldsymbol{y} \in \dom\phi$. In this paper, for the sake of simplicity, we consider the so-called \emph{separable} Bregman divergences \citep{ROTandRMD} over the set of transport matrices $\mathcal{G}\left( \frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n} \right)$, which can be decomposed as the element-wise summation:
\begin{eqnarray}
B_{\phi}(\boldsymbol{\pi} \| \boldsymbol{\xi}) &=& \sum_{i = 1}^{m} \sum_{j = 1}^{n} B_{\phi}(\pi_{ij}\|\xi_{ij}), \\
\phi(\boldsymbol{\pi}) &=& \sum_{i = 1}^{m} \sum_{j = 1}^{n} \phi(\pi_{ij}),
\end{eqnarray}
where, with a slight abuse of notation, we used $\phi: \mathbb{R} \to \mathbb{R}$ to denote the generator function, which is the same across all elements.
Suppose now that $\phi$ is of the Legendre type \citep{Heinz}, and let $\mathcal{C} \subseteq \mathcal{E}$ be a closed convex set such that $\mathcal{C}\cap\mathrm{int}( \dom \phi ) \neq \emptyset$. Then, for any point $\boldsymbol{y} \in \mathrm{int}(\dom\phi)$, the following problem,
\begin{equation}
T_{\mathcal{C}}(\boldsymbol{y}) = \operatornamewithlimits{argmin}_{\boldsymbol{x} \in \mathcal{C}} B_{\phi}(\boldsymbol{x} \| \boldsymbol{y}),
\end{equation}
has a unique solution. $T_{\mathcal{C}}(\boldsymbol{y})$ is called the \emph{Bregman} projection of $\boldsymbol{y}$ onto $\mathcal{C}$ \citep{ROTandRMD}.
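As a concrete instance (our example, not part of the main development): the generator $\phi(t) = t\log t - t + 1$ yields the KL-divergence, and the Bregman projection of a positive vector onto the probability simplex is then simply its normalization.

```python
import numpy as np

def bregman_kl(x, y):
    """Bregman divergence generated elementwise by phi(t) = t log t - t + 1,
    i.e. the generalized KL-divergence  sum_i x_i log(x_i/y_i) - x_i + y_i."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(x * np.log(x / y) - x + y))

# Bregman (KL) projection of a positive vector y onto the probability
# simplex C = {x >= 0 : sum(x) = 1}: the Lagrangian condition
# log(x/y) + mu = 0 gives x = y * exp(-mu), hence T_C(y) = y / sum(y).
y = np.array([0.2, 0.5, 1.3])
proj = y / y.sum()
```

Comparing the divergence at \texttt{proj} with that at random simplex points confirms numerically that no other feasible point does better.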
\subsection{Formulation of CROT}
\begin{table}[t]
\centering
\begin{tabular}{ccc}
\toprule Regularization term & dom $\phi$ & dom $\psi$ \\
\midrule
$\beta$-potential ($\beta > 1$) & $\mathbb{R}_{+}$ & $(\frac{1}{1 - \beta}, \infty)$ \\
Boltzmann--Shannon entropy & $\mathbb{R}_{+}$ & $\mathbb{R}$ \\
\bottomrule
\end{tabular}
\caption{Domains of each regularizer and its Fenchel conjugate.}
\label{domain_of_regularizer}
\end{table}
Here, we give the formulation of the CROT and show that obtaining the optimal solution of the CROT corresponds to minimizing the Bregman divergence between two matrices. \par
The CROT is formulated as a regularized version of (\ref{DiscreteOT}) by $\phi$ as follows:
\begin{equation}\label{CROT}
L_{\phi} \coloneqq \min_{\boldsymbol{\pi}\in \mathcal{G}(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n})} \langle \boldsymbol{\pi}, \boldsymbol{\gamma} \rangle + \lambda \phi(\boldsymbol{\pi}),
\end{equation}
where $\lambda > 0$ is a regularization parameter. Subsequently, we often work on the dual variable of $\boldsymbol{\pi}$. The dual variable $\boldsymbol{\theta}$ satisfies the following conditions:\footnote{This mapping based on gradients (not subgradients) is legitimate only when $\phi$ is of the Legendre type.}
\begin{eqnarray}
\boldsymbol{\pi} &=& \nabla\psi(\boldsymbol{\theta}), \\
\boldsymbol{\theta} & = & \nabla\phi(\boldsymbol{\pi}),
\end{eqnarray}
where $\psi$ is the Fenchel conjugate of $\phi$ \citep{ROTandRMD}.
The optimal solution of (\ref{CROT}) can be understood via the Bregman projection.
Let us consider the unconstrained version of (\ref{CROT}):
\begin{eqnarray}\label{non_constraint_CROT}
\min_{\boldsymbol{\pi} \in \mathbb{R}^{m \times n}} \langle \boldsymbol{\pi}, \boldsymbol{\gamma} \rangle + \lambda \phi(\boldsymbol{\pi}).
\end{eqnarray}
Since $\langle \boldsymbol{\pi}, \boldsymbol{\gamma} \rangle$ is linear and $\phi$ is strictly convex with respect to $\boldsymbol{\pi}$, there is a unique optimal solution $\boldsymbol{\xi}$ for (\ref{non_constraint_CROT}):
\begin{equation}\label{xi}
\boldsymbol{\xi} = \nabla\psi(-\boldsymbol{\gamma} / \lambda),
\end{equation}
which can be obtained by solving the first-order optimality condition of (\ref{non_constraint_CROT}) with the dual relationship $(\nabla \phi)^{-1} = \nabla\psi$:
\begin{equation}
\boldsymbol{\gamma} + \lambda \nabla\phi(\boldsymbol{\xi}) = 0.
\end{equation}
Then,
\begin{align}
\boldsymbol{\pi}_{\lambda}^{*} &\coloneqq \operatornamewithlimits{argmin}_{\boldsymbol{\pi} \in \mathcal{G}(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n})} L_{\phi}(\boldsymbol{\pi})\\
&= \operatornamewithlimits{argmin}_{\boldsymbol{\pi} \in \mathcal{G}(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n})} B_{\phi}(\boldsymbol{\pi}\|\boldsymbol{\xi}),
\end{align}
where the last equality is due to the following equation:
\begin{equation}
\langle \boldsymbol{\pi}, \boldsymbol{\gamma} \rangle + \lambda \phi(\boldsymbol{\pi}) - \lambda\phi(\boldsymbol{\xi}) - \langle \boldsymbol{\xi}, \boldsymbol{\gamma} \rangle = \lambda B_{\phi}(\boldsymbol{\pi} \| \boldsymbol{\xi}).
\end{equation}
Therefore, the solution of (\ref{CROT}) can be interpreted as the Bregman projection of the unconstrained solution $\boldsymbol{\xi}$ onto $\mathcal{G}(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n})$.
The Sinkhorn algorithm can be used to obtain a solution to OT regularized with the negative of Boltzmann--Shannon entropy
\begin{math}
\phi(\pi) = \pi \log \pi - \pi + 1
\end{math}
(Table \ref{domain_of_regularizer}), and runs a projection associated with the KL-divergence where $B_{\phi}(\pi\|\xi) = \pi \log\frac{\pi}{\xi} - \pi + \xi$.
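A minimal sketch of this entropic special case (variable names ours): starting from the unconstrained solution $\boldsymbol{\xi} = \nabla\psi(-\boldsymbol{\gamma}/\lambda) = \exp(-\boldsymbol{\gamma}/\lambda)$, the Sinkhorn algorithm alternates the row and column rescalings that enforce the two marginals.

```python
import numpy as np

def sinkhorn(gamma, lam, iters=1000):
    """Entropy-regularized OT with uniform marginals via Sinkhorn iterations."""
    m, n = gamma.shape
    K = np.exp(-gamma / lam)            # unconstrained solution xi
    u, v = np.ones(m), np.ones(n)
    for _ in range(iters):
        u = (1.0 / m) / (K @ v)         # enforce pi 1_n   = 1_m / m
        v = (1.0 / n) / (K.T @ u)       # enforce pi^T 1_m = 1_n / n
    pi = u[:, None] * K * v[None, :]
    return float(np.sum(pi * gamma)), pi

gamma = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
val, plan = sinkhorn(gamma, lam=0.05)   # small lam: close to unregularized OT
```

Each rescaling is exactly a KL projection onto $\mathcal{C}_1$ or $\mathcal{C}_2$, which is why no inner Newton solve is needed in the entropic case.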
\subsection{Alternate Bregman projection}
Here, we demonstrate how the Bregman projection onto $\mathcal{G}(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n})$ is executed, following \citeauthor{ROTandRMD} (\citeyear{ROTandRMD}). \par
Let $\mathcal{C}_0, \mathcal{C}_1, \mathcal{C}_2$ be the following convex sets:
\begin{eqnarray}
\mathcal{C}_0 &=& \mathbb{R}_{+}^{m \times n}, \\
\mathcal{C}_1 &=& \left\{ \boldsymbol{\pi} \in \mathbb{R}^{m \times n} \,\middle|\, \boldsymbol{\pi}\boldsymbol{1}_n = \textstyle{\frac{\boldsymbol{1}_m}{m}} \right\}, \\
\mathcal{C}_2 &=& \left\{ \boldsymbol{\pi} \in \mathbb{R}^{m \times n} \,\middle|\, \boldsymbol{\pi}^{\top} \boldsymbol{1}_m = \textstyle{\frac{\boldsymbol{1}_n}{n}} \right\}.
\end{eqnarray}
Then, $\mathcal{G}(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n})$ can be written as follows:
\begin{equation}
\mathcal{G}(\textstyle{\frac{\boldsymbol{1}_m}{m}}, \textstyle{\frac{\boldsymbol{1}_n}{n}}) = \mathcal{C}_0 \cap \mathcal{C}_1 \cap \mathcal{C}_2.
\end{equation}
We can get the Bregman projection onto $\mathcal{G}(\frac{\boldsymbol{1}_m}{m}, \frac{\boldsymbol{1}_n}{n})$ by alternately performing projections onto $\mathcal{C}_0$, $\mathcal{C}_1$, and $\mathcal{C}_2$. \par
Next, let us consider the projection of a given matrix $\overline{\boldsymbol{\pi}} \in \mathrm{int}(\dom \phi)$ onto $\mathcal{C}_0$, $\mathcal{C}_1$, and $\mathcal{C}_2$. The corresponding projection onto each set is denoted by $\boldsymbol{\pi}_{0}^{*}$, $\boldsymbol{\pi}_{1}^{*}$, and $\boldsymbol{\pi}_{2}^{*}$, respectively. Subsequently, we show how to obtain them \citep{ROTandRMD} (see Section A in the supplementary file for details).
\subsubsection{Projection onto $\mathcal{C}_0$}
When considering the separable Bregman divergence, the projection onto $\mathcal{C}_{0}$ can be performed with a closed-form expression in terms of primal parameters:
\begin{equation}\label{non-negativity_constraint}
\pi^{*}_{0, ij} = \max \{ 0, \overline{\pi}_{ij} \},
\end{equation}
where $\pi^{*}_{0, ij}$ is the $(i, j)$-element of the matrix $\boldsymbol{\pi}_{0}^{*}$.
Since $\phi'$ is increasing, this is equivalently expressed in terms of the dual parameters of $\boldsymbol{\pi}_0^*$, $\boldsymbol{\theta}_0^*$, as
\begin{equation} \label{projectionC0}
\theta_{0, ij}^{*} = \max \{ \phi'(0), \overline{\theta}_{ij} \}.
\end{equation}
Here, the dual coordinate of the input matrix $\overline{\boldsymbol{\pi}}$ is denoted by $\overline{\boldsymbol{\theta}} = \nabla \phi (\overline{\boldsymbol{\pi}})$.
\subsubsection{Projections onto $\mathcal{C}_1$ and $\mathcal{C}_2$}
Next, we consider the Bregman projection onto $\mathcal{C}_{1}$. The projection onto $\mathcal{C}_2$ can be executed in the same way and thus omitted here. The Lagrangian associated to the Bregman projection $\boldsymbol{\pi}_{1}^{*}$ of a given matrix $\overline{\boldsymbol{\pi}} \in \mathrm{int}(\dom\phi)$ onto $\mathcal{C}_1$ is given as follows:
\begin{align}
\mathcal{L}_1 (\boldsymbol{\pi}, \boldsymbol{\mu}) = \phi(\boldsymbol{\pi}) - \langle \boldsymbol{\pi},& \nabla\phi(\overline{\boldsymbol{\pi}}) \rangle \nonumber + \boldsymbol{\mu}^{\top}(\boldsymbol{\pi} \boldsymbol{1}_n - \textstyle{\frac{\boldsymbol{1}_m}{m}}),
\end{align}
where $\boldsymbol{\mu} \in \mathbb{R}^{m}$ are Lagrange multipliers.
Their gradients are given on $\mathrm{int}(\dom\phi)$ by
\begin{align}
\nabla_{\boldsymbol{\pi}}\mathcal{L}_{1}(\boldsymbol{\pi}, \boldsymbol{\mu}) = \nabla_{\boldsymbol{\pi}}\phi(\boldsymbol{\pi}) - \nabla_{\boldsymbol{\pi}}\phi(\overline{\boldsymbol{\pi}}) + \boldsymbol{\mu}{\boldsymbol{1}_{n}}^{\top},
\end{align}
and by noting $(\nabla\phi)^{-1} = \nabla\psi$, we have $\nabla_{\boldsymbol{\pi}}\mathcal{L}_1(\boldsymbol{\pi}_{1}^{*}, \boldsymbol{\mu}) = \boldsymbol{0}_{m\times n}$ if and only if \citep{ROTandRMD},
\begin{align}\label{vanish_condition}
\boldsymbol{\pi}_{1}^{*} = \nabla\psi( \nabla\phi(\overline{\boldsymbol{\pi}}) - \boldsymbol{\mu}{\boldsymbol{1}_{n}}^{\top}).
\end{align}
By right-multiplying both sides of (\ref{vanish_condition}) by $\boldsymbol{1}_n$ and imposing the constraint $\boldsymbol{\pi}_{1}^{*} \boldsymbol{1}_n = \frac{\boldsymbol{1}_m}{m}$, the following equation system is obtained:
\begin{align}
\nabla\psi (\nabla\phi(\overline{\boldsymbol{\pi}}) - \boldsymbol{\mu}{\boldsymbol{1}_n}^{\top}) \boldsymbol{1}_n = \textstyle{\frac{\boldsymbol{1}_m}{m}}.
\end{align}
Due to the separability, the projection onto $\mathcal{C}_{1}$ can be divided into $m$ scalar subproblems, one for each coordinate of the dual variable, as follows:
\begin{align}\label{m_subproblem}
\sum_{j = 1}^{n} \psi'(\overline{\theta}_{ij} - \mu_{i}) = \frac{1}{m}.
\end{align}
To solve equation (\ref{m_subproblem}) with respect to $\mu_{i}$, we use the Newton--Raphson method \citep{NewtonRaphson}. \par
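One such scalar solve might be sketched as follows (naming ours); in the entropic case $\psi'(t) = e^{t}$, the closed-form root $\mu_i = \log ( m \sum_j e^{\overline{\theta}_{ij}} )$ is available as a check.

```python
import numpy as np

def solve_mu(theta_row, target, psi_p, psi_pp, iters=50):
    """Newton-Raphson for mu in  sum_j psi'(theta_j - mu) = target."""
    mu = 0.0
    for _ in range(iters):
        g = np.sum(psi_p(theta_row - mu)) - target    # residual
        dg = -np.sum(psi_pp(theta_row - mu))          # its derivative in mu
        mu -= g / dg
    return mu

# Entropic sanity check: psi'(t) = psi''(t) = exp(t), so the exact root
# is mu = log(m * sum_j exp(theta_j)).
theta_row = np.array([-0.3, 0.1, 0.4])
m = 5
mu = solve_mu(theta_row, 1.0 / m, np.exp, np.exp)
```

In the $\beta$-potential case the same loop applies with $\psi'$ and $\psi''$ replaced accordingly, subject to the domain issue discussed below.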
\begin{algorithm}[t]
\caption{Non-negative alternate scaling algorithm for $\beta$-divergence when $\beta > 1$}
\label{ouralgorithm}
\begin{algorithmic}[1]
\STATE $\tilde{\boldsymbol{\theta}} \leftarrow - \boldsymbol{\gamma} / \lambda$
\STATE\label{line2} $\boldsymbol{\theta^{*}} \leftarrow \max \{ \nabla\phi(\boldsymbol{0}_{m \times n}), \tilde{\boldsymbol{\theta}} \}$
\FOR{$t$ = 1, 2, \ldots, $T$} \label{line3}
\STATE\label{line4} $\boldsymbol{\tau} = \frac{\nabla\psi(\boldsymbol{\theta^{*}})\boldsymbol{1}_n - \frac{\boldsymbol{1}_m}{\mathit{m}}}{\nabla^{2}\psi(\boldsymbol{\theta}^{*}) \boldsymbol{1}_n}$
\STATE\label{line5} $\boldsymbol{\tau} \leftarrow \max (\boldsymbol{\tau},\ \hat{\boldsymbol{\theta}}^{*} - \nabla\phi(\frac{\boldsymbol{1}_m}{m}))$
\STATE\label{line6} $\boldsymbol{\tilde{\theta}} \leftarrow \boldsymbol{\tilde{\theta}} - \boldsymbol{\tau} {\boldsymbol{1}_n}^{\top}$
\STATE\label{line7} $\boldsymbol{\theta^{*}} \leftarrow \max \{\nabla\phi(\bf{0}), \boldsymbol{\tilde{\theta}}\}$
\STATE\label{line8} $\boldsymbol{\sigma} = \frac{{\boldsymbol{1}_m}^{\top} \nabla \psi(\boldsymbol{\theta^{*}}) - (\frac{\boldsymbol{1}_n}{\mathit{n}})^{\top}}{{\boldsymbol{1}_m}^{\top} \nabla^{2}\psi(\boldsymbol{\theta}^{*})}$
\STATE\label{line9} $\boldsymbol{\sigma} \leftarrow \max (\boldsymbol{\sigma},\hat{\boldsymbol{\theta}}^{*} - \nabla\phi(\frac{\boldsymbol{1}_n}{\mathit{n}}))$
\STATE\label{line10} $\boldsymbol{\tilde{\theta}} \leftarrow \boldsymbol{\tilde{\theta}} - \boldsymbol{1}_m \boldsymbol{\sigma} $
\STATE\label{line11} $\boldsymbol{\theta^{*}} \leftarrow \max \{\nabla\phi(\boldsymbol{0}_{m \times n}), \boldsymbol{\tilde{\theta}}\}$
\ENDFOR\label{line12}
\STATE $\boldsymbol{\pi}^{*} \leftarrow \nabla \psi(\boldsymbol{\theta}^{*})$
\end{algorithmic}
\end{algorithm}
\section{Outlier-robust CROT}
In this section, we first formalize a model of outliers. To make the CROT robust against outliers under the model, we propose the CROT with the $\beta$-potential ($\beta > 1$) and introduce how to compute the CROT with the $\beta$-potential. Finally, we show its theoretical properties.
\subsection{Definition of outliers}
In this paper, outliers are formally defined as follows. Suppose we have two datasets $\{ \boldsymbol{x}_i\}_{i = 1}^{m}$ and $\{\boldsymbol{y}_j\}_{j = 1}^{n}$. We assume $\{\boldsymbol{x}_i\}_{i = 1}^{m}$ are samples from a clean distribution, while $\{\boldsymbol{y}_j\}_{j = 1}^{n}$ are samples that are contaminated by outliers. Let $\boldsymbol{\gamma}$ be the distance matrix.
\begin{definition} \label{outlier_definition}
For $z>0$, the indices of outliers $J$ are defined as follows:
\begin{equation}
\forall j \in J, \ \forall i \in \{1, \ldots, m\}, \gamma_{ij} \geq z.
\end{equation}
\end{definition}
This means that any point in $\{\boldsymbol{y}_j\}_{j = 1}^{n}$ whose distance from every point in $\{\boldsymbol{x}_i\}_{i = 1}^{m}$ is at least $z$ is considered an outlier.\par
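Definition \ref{outlier_definition} is easy to evaluate directly from the distance matrix; a minimal sketch (function name ours):

```python
import numpy as np

def outlier_indices(gamma, z):
    """Indices j with gamma[i, j] >= z for every i, as in Definition 1."""
    return np.flatnonzero(np.asarray(gamma).min(axis=0) >= z)

# Column 2 is at distance >= 10 from every x_i, so it is flagged.
gamma = np.array([[1.0, 2.0, 50.0],
                  [3.0, 1.0, 60.0]])
J = outlier_indices(gamma, z=10.0)
```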
\subsection{$\beta$-potential regularization}
We use the $\beta$-potential
\begin{equation}
\phi(\pi) = \frac{1}{\beta(\beta - 1)} (\pi^{\beta} - \beta \pi + \beta - 1),
\end{equation}
associated with the $\beta$-divergence,
\begin{equation}
B_{\phi}(\pi \| \xi) = \frac{1}{\beta(\beta - 1)} (\pi^{\beta} + (\beta - 1) \xi^{\beta} - \beta \pi \xi^{\beta - 1}),
\end{equation}
to robustify the CROT, where $\beta > 1$.
The domains of primal $\phi$ and its Fenchel conjugate $\psi$ are shown in Table \ref{domain_of_regularizer}. \par
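For the $\beta$-potential, the dual maps are $\phi'(\pi) = \frac{\pi^{\beta-1}-1}{\beta-1}$ and $\psi'(\theta) = (1+(\beta-1)\theta)^{\frac{1}{\beta-1}}$. The following quick numerical check (ours, not from the main text) confirms that they are mutually inverse and that the boundary $\frac{1}{1-\beta}$ of $\dom\psi$ corresponds to zero transported mass.

```python
import numpy as np

beta = 1.5
phi_p = lambda p: (p ** (beta - 1.0) - 1.0) / (beta - 1.0)           # phi'
psi_p = lambda t: (1.0 + (beta - 1.0) * t) ** (1.0 / (beta - 1.0))   # psi'

# (phi')^{-1} = psi' on the interior of dom(phi):
p = np.linspace(0.1, 3.0, 30)
theta = phi_p(p)
```

Note that $\phi'(0) = \frac{1}{1-\beta}$, which is exactly the clipping threshold used in lines 2, 7 and 11 of Algorithm \ref{ouralgorithm}.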
Our proposed algorithm is shown in Algorithm \ref{ouralgorithm}. The dual coordinate of the unconstrained CROT solution is denoted by $\boldsymbol{\tilde{\theta}} = \nabla \phi (\boldsymbol{\xi})$. We execute the projections in the cyclic order of $\mathcal{C}_0 \rightarrow \mathcal{C}_1 \rightarrow \mathcal{C}_0 \rightarrow \mathcal{C}_2\rightarrow \mathcal{C}_0 \rightarrow \mathcal{C}_1 \rightarrow \mathcal{C}_0 \rightarrow \mathcal{C}_2 \rightarrow \cdots$. \par
Lines \ref{line2}, \ref{line7}, and \ref{line11} in Algorithm \ref{ouralgorithm} enforce the dual constraint $\theta_{ij}^* \ge \frac{1}{1-\beta}$ corresponding to $\dom\psi = (\frac{1}{1-\beta}, \infty)$ (Table \ref{domain_of_regularizer}).
Lines \ref{line4}--\ref{line6} correspond to the projection onto $\mathcal{C}_1$, implemented in the dual coordinates. Since the dual variable must satisfy $\theta_{ij}^* \ge \frac{1}{1-\beta}$ due to $\dom\psi = (\frac{1}{1-\beta}, \infty)$, we perform only a single Newton--Raphson update (line \ref{line4}): $\theta_{ij}^* \ge \frac{1}{1-\beta}$ is no longer guaranteed after the first update, so the projection onto $\mathcal{C}_0$ must be applied again before updating further. Similarly, the projection onto $\mathcal{C}_2$ is shown in lines \ref{line8}--\ref{line10}. \par
The procedure in line~\ref{line5} is based on Section 4.6 of \citeauthor{ROTandRMD} (\citeyear{ROTandRMD}); it accelerates the convergence of Algorithm~\ref{ouralgorithm} by truncating the optimization variable $\boldsymbol{\tau}$, as we describe next. Recall that, for any $i$, we have the following condition,
\begin{equation}
\label{condition_of_pi}
\forall j, \ 0\le \pi_{1, ij}^* \le \textstyle{\frac{1}{m}},
\end{equation}
implicitly from the coupling constraint (\ref{coupling_constraint}). Since a naive Newton--Raphson update can ``overshoot'', we truncate $\boldsymbol{\tau}$ so that (\ref{condition_of_pi}) is satisfied after each update. Below, we show mathematically that this truncation suffices. Let $\hat{\boldsymbol{\theta}}^{*}$ be the $m$-dimensional vector whose $i$th element is the largest value in the $i$th row of $\boldsymbol{\theta}^{*}$, defined as follows:
\begin{eqnarray}
\hat{\theta}^{*}_{i} &:=& \mathrm{max}\{ \theta^{*}_{ij} \}_{1 \le j \le n}.
\end{eqnarray}
Since $\phi$ is strictly convex, $\phi'$ is increasing, and hence
\begin{eqnarray}
&0\le \pi_{1, ij}^* \le \frac{1}{m}& \nonumber\\
\iff & \phi'(0) \le \phi'(\pi_{1, ij}^{*}) = \theta^{*}_{1, ij} \le \phi'(\frac{1}{m})&
\end{eqnarray}
holds.
Hence, for every $i$, if we lower-bound the Newton--Raphson decrement $\tau_{i}$ for the $i$th row of $\boldsymbol{\theta}^{*}$ as
\begin{eqnarray}
\tau_{i} &\leftarrow& \max\{\tau_{i}, \hat{\theta}^{*}_{i} - \phi'\left(\textstyle{\frac{1}{m}} \right)\},
\end{eqnarray}
then, for any $j$,
\begin{eqnarray}
\tilde{\theta}_{ij} - \tau_{i} &\leq& \theta^{*}_{ij} - \tau_{i}\\
&\leq& \hat{\theta}^{*}_{i} - \tau_{i}\\
&\leq& \phi'\left(\textstyle{\frac{1}{m}}\right).
\end{eqnarray}
This means that every element in the $i$th row of $\tilde{\boldsymbol{\theta}}$ computed in line \ref{line6} in Algorithm \ref{ouralgorithm} is no larger than $\phi'(\frac{1}{m})$. After line~\ref{line7}, $\boldsymbol{\theta}^{*}$ satisfies the condition (\ref{condition_of_pi}). Similarly, we force $\pi_{2, ij}$ to satisfy the following conditions:
\begin{equation}
\forall i, \ 0\leq \pi_{2, ij} \leq \textstyle{\frac{1}{n}}.
\end{equation}
After line \ref{line11}, this condition is satisfied.
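As an illustration, the truncation of $\boldsymbol{\tau}$ derived above can be sketched as follows. This is a minimal sketch, not the authors' implementation; \texttt{dphi} stands for the derivative $\phi'$, whose concrete form depends on the chosen potential.

```python
import numpy as np

def truncate_tau(tau, theta_star, dphi, m):
    """Lower-bound each tau_i by max_j theta*_{ij} - phi'(1/m), so that every
    entry of theta*_{ij} - tau_i stays at most phi'(1/m) after the update."""
    theta_hat = theta_star.max(axis=1)  # row-wise maxima, i.e. hat{theta}*_i
    return np.maximum(tau, theta_hat - dphi(1.0 / m))
```

After this truncation, every entry of \texttt{theta\_star - tau[:, None]} is at most \texttt{dphi(1/m)}, matching the chain of inequalities above.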
\subsection{Theoretical analysis}
In the presence of outliers, we expect to approximate the OT by preventing mass transport to outliers.
This property is formalized below.
\begin{definition}
Suppose $\boldsymbol{\pi} \in \mathbb{R}_{+}^{m \times n}$ and a set of indices $O \subseteq \{ 1, \ldots, n \}$ satisfies the following condition:
\begin{equation}\label{transportnomass}
\forall i, \pi_{ij} = 0 \ \ \mathrm{if} \ \ j \in O.
\end{equation}
Then, we say $\boldsymbol{\pi}$ transports no mass to $O$.
\end{definition}
Although we do not expect to transport any mass to outliers, the optimal solution of the CROT must satisfy the coupling constraint, so the condition (\ref{transportnomass}) is never satisfied exactly. To ensure (\ref{transportnomass}), we instead consider running only a finite number of updates. An intermediate solution can then satisfy (\ref{transportnomass}), although the coupling constraint is not satisfied. This is in stark contrast to the previous works \citep{Chizat2017,Balaji}, which cannot avoid transporting some mass to outliers.\par
The following proposition provides sufficient conditions on the number of iterations $T$ to ensure the condition (\ref{transportnomass}). Refer to Section B in the supplementary file for the proof.
\begin{proposition}\label{proposition}
For a given $z \ (>\frac{\lambda}{\beta - 1})$, let $J\subseteq\{1, \ldots, n \}$ be a subset of indices which satisfies the condition shown in Definition \ref{outlier_definition}.
Suppose we obtained a transport matrix $\boldsymbol{\pi}^{\mathrm{output}}$ by running the algorithm for $T$ iterations, where $T$ satisfies the following condition:
\begin{equation}\label{robust_condition}
T < \frac{\frac{z}{\lambda}(\beta - 1) - 1}{(\frac{1}{m})^{\beta - 1} + (\frac{1}{n})^{\beta - 1}}.
\end{equation}
Then, $\boldsymbol{\pi}^{\mathrm{output}}$ transports no mass to $J$.
\label{theoreticalanalysis}
\end{proposition}
Here, $z>\frac{\lambda}{\beta - 1}$ is necessary so that $T$ is upper-bounded by a positive number.
Intuitively, this means that the transport matrix obtained by Algorithm \ref{ouralgorithm} disregards points whose distance from the inliers is at least $z$.
Note that the condition (\ref{robust_condition}) tells us that a sufficiently small number of iterations $T$ leads to an approximate CROT solution that does not transport any mass to outliers. \par
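To make the condition concrete, the following sketch computes the largest admissible number of iterations from the bound in (\ref{robust_condition}); the function name is ours, not from the paper.

```python
import math

def max_iterations(z, lam, beta, m, n):
    """Floor of the right-hand side of the sufficient condition on T;
    returns 0 when the bound is nonpositive (the condition cannot hold)."""
    num = (z / lam) * (beta - 1) - 1
    den = (1.0 / m) ** (beta - 1) + (1.0 / n) ** (beta - 1)
    return max(math.floor(num / den), 0)
```

As noted above, the bound is positive only when $z > \lambda/(\beta - 1)$; otherwise the function returns 0.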
We discuss the selection of the hyperparameters $\beta$ and $\lambda$ in Section \ref{hyperparameter_selection}.
\section{Experiments}
Here, we show two applications of our method to demonstrate its practical effectiveness. In both of the experiments in Sections~\ref{dataset_distance_section} and \ref{outlierdetection_experiment}, we set the hyperparameters of the proposed method to $\beta = 1.2$ and $\lambda = 2$. We discuss the selection of these hyperparameters in Section~\ref{hyperparameter_selection}.
\subsection{Measuring distance between datasets}\label{dataset_distance_section}
In the first experiment, we numerically confirm that our method computes the distance more robustly than the Sinkhorn algorithm. We used the following benchmark datasets: MNIST \citep{MNIST}, FashionMNIST \citep{FashionMNIST}, KMNIST \citep{KMNIST}, and EMNIST (Letters) \citep{EMNIST}. From each benchmark dataset, we randomly sampled 10000 data points and split them into two subsets of 5000 data points each. We regarded these data points as inliers. Then a portion of one subset was replaced by data from another benchmark dataset, which we regarded as outliers. We computed CROT and outlier-robust CROT between the two subsets, and investigated how they changed when the outlier ratio is 5\%, 10\%, 15\%, 20\%, 25\%, and 30\%. We simply used the raw data to compute the distance matrix $\gamma_{ij} = \| \boldsymbol{x}_i - \boldsymbol{y}_j \|_{2}^{2}$, i.e., the squared Euclidean distance between raw data points, and used its median value as the threshold $z$. In this way, we expect outliers to be distinguished from inliers.
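The cost matrix and the median threshold described above can be computed directly; a straightforward sketch (the function name is ours):

```python
import numpy as np

def cost_and_threshold(X, Y):
    """gamma_ij = ||x_i - y_j||_2^2 and the median of gamma as threshold z."""
    diff = X[:, None, :] - Y[None, :, :]   # pairwise differences, shape (m, n, d)
    gamma = (diff ** 2).sum(axis=2)        # squared Euclidean distances
    return gamma, np.median(gamma)
```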
The results are shown in Figure~\ref{dataset_distance}. While the distance computed by the Sinkhorn algorithm changes drastically as the outlier ratio grows, the output of our algorithm changes far less on every dataset. Hence, our algorithm computes the distance between datasets more stably than the Sinkhorn algorithm.
\begin{figure}
\centering
\includegraphics[width = 15cm]{figures/dataset_distance/dataset_distance.pdf}
\caption{The mean and standard deviation of the output values of the Sinkhorn algorithm (red) and our algorithm (blue) over 20 runs. $\bigcirc$, $\triangle$, $\triangledown$, and $\diamondsuit$ represent the cases where the outlier dataset is MNIST, FashionMNIST, KMNIST, and EMNIST, respectively. The dotted line is the output value when the dataset is clean.}
\label{dataset_distance}
\end{figure}
\subsection{Applications to outlier detection}\label{outlierdetection_experiment}
\begin{table}
\centering
\begin{tabular}{ccc}
\toprule
& Outliers & Inliers \\
\midrule
One-class SVM & 49.78 $\pm$ 1.83 \% & 49.99 $\pm$ 0.10 \% \\
Local outlier factor & 49.43 $\pm$ 3.68 \% & 99.13 $\pm$ 0.11 \% \\
Isolation forest & 42.78 $\pm$ 6.95 \% & 72.81 $\pm$ 3.34 \% \\
Elliptical envelope & 95.57 $\pm$ 2.77 \% & 69.37 $\pm$ 5.61 \% \\
\midrule
Baseline technique (95th) & 92.78 $\pm$ 1.67 \% & 92.66 $\pm$ 0.44 \% \\
Baseline technique (97.5th) & 84.19 $\pm$ 2.10 \% & 96.41 $\pm$ 0.32 \% \\
Baseline technique (99th) & 65.04 $\pm$ 2.75 \% & 98.60 $\pm$ 0.16 \% \\
\midrule
ROBOT (95th) & 99.96 $\pm$ 0.08 \% & 68.76 $\pm$ 0.49 \% \\
ROBOT (97.5th) & 99.89 $\pm$ 0.14 \% & 77.22 $\pm$ 0.63 \% \\
ROBOT (99th) & 99.48 $\pm$ 0.31 \% & 84.79 $\pm$ 0.47 \% \\
\midrule
Our Method (95th) & 98.98 $\pm$ 0.66 \% & 86.72 $\pm$ 0.72 \% \\
Our Method (97.5th) & 96.96 $\pm$ 1.71 \% & 91.58 $\pm$ 0.38 \% \\
Our Method (99th) & 92.25 $\pm$ 1.53 \% & 95.73 $\pm$ 0.34 \% \\
\bottomrule
\end{tabular}
\caption{The percentage of true outliers/inliers detected as outliers/inliers over 50 runs. The numbers show the mean and standard deviation. ``(Xth)" means $X$th percentile was used in its subsampling phase.}
\label{outlierdetection}
\end{table}
Our algorithm enables us to detect outliers. Let $\mu_m$ be a clean dataset and $\nu_n$ be a dataset polluted with outliers. We regard the $j$th data point in $\nu_n$ as an outlier if Algorithm~\ref{ouralgorithm} outputs a transport matrix whose $j$th column is all zeros.\par
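Under this criterion, reading off the outliers from a transport matrix is immediate; a minimal sketch (the function name is ours):

```python
import numpy as np

def detect_outliers(pi):
    """Indices j whose column of the transport matrix contains no mass."""
    return np.flatnonzero(~pi.any(axis=0))
```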
In this experiment, we used Fashion-MNIST \citep{FashionMNIST} as a clean dataset and MNIST \citep{MNIST} as outliers. $\nu_n$ consists of 9500 images from Fashion-MNIST and 500 images from MNIST. $\mu_m$ consists of 10000 images from Fashion-MNIST. We computed the transport matrix between the two datasets and identified the outlying MNIST images. We simply used the raw data to compute the distance matrix $\gamma_{ij} = \|\boldsymbol{x}_i - \boldsymbol{y}_j\|^2_2$, i.e., the squared Euclidean distance between raw data points. \par
We compared the proposed method with the ``ROBust Optimal Transport'' (ROBOT) method \citep{Mukherjee} and the method proposed by \citeauthor{Balaji} (\citeyear{Balaji}), both of which are existing methods for computing OT robustly.
We also compared our method with a variety of popular outlier detection algorithms available in scikit-learn \citep{Scikit-learn}: the one-class support vector machine (SVM) \citep{oneclassSVM}, local outlier factor \citep{localoutlierfactor}, isolation forest \citep{isolationforest}, and elliptical envelope \citep{ellipticalenvelope}. In the ROBOT method, we set the cost truncation hyperparameter to the 95th, 97.5th, and 99th percentiles of the distance matrix in the subsampling phase \citep{Mukherjee}. \par
For our method, the distance tolerance parameter $z$ in Definition~\ref{outlier_definition} is needed to detect outliers by leveraging Proposition~\ref{theoreticalanalysis}. Once $z$ is chosen, after running the algorithm $\left \lfloor \frac{\frac{z}{\lambda}(\beta - 1) - 1}{(\frac{1}{m})^{\beta - 1} + (\frac{1}{n})^{\beta - 1}} \right \rfloor$ times, which satisfies the condition (\ref{robust_condition}), points in $\nu_n$ whose distance from every point in $\mu_m$ is at least $z$ are regarded as outliers.
To determine $z$, we need a subsampling phase using the clean dataset, similar to \citeauthor{Mukherjee} (\citeyear{Mukherjee}). We propose the following heuristic: since we know that $\mu_m$ is clean, we subsample two datasets from it and compute their distance matrix. Then, we take the minimum value in each row and use the largest of these minima as $z$. This procedure essentially estimates the maximum distance between two samples in the clean dataset. To avoid subsampling noise, we used the 95th, 97.5th, and 99th percentiles instead of the maximum. Additionally, we compared our method with a natural baseline that identifies a data point as an outlier if its minimum distance to the clean dataset is larger than the distance computed in the subsampling phase. We call this method ``the baseline technique''.
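The subsampling heuristic for $z$ can be sketched as follows, using squared Euclidean distances as in our experiments; the function name and signature are ours, not from the paper.

```python
import numpy as np

def estimate_z(clean, rng, subsample, percentile=95.0):
    """Split two disjoint subsamples from the clean dataset, take the minimum
    distance in each row of their distance matrix, and return a high
    percentile of these minima as the tolerance z."""
    idx = rng.permutation(len(clean))[: 2 * subsample]
    A, B = clean[idx[:subsample]], clean[idx[subsample:]]
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.percentile(d.min(axis=1), percentile)
```

Using a percentile below the 100th stands in for the maximum of the row minima, mitigating subsampling noise as described above.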
The results are shown in Table \ref{outlierdetection}. One can see that our method detects both outliers and inliers with high accuracy. \par
\begin{table}[t]
\centering
\begin{tabular}{ccc}
\toprule
& Outliers & Inliers \\
\midrule
\citeauthor{Balaji} (\citeyear{Balaji}) & 89.0 $\pm$ 16.9 \% & 67.0 $\pm$ 8.9 \% \\
Our Method & 96.6 $\pm$ 2.0 \% & 88.0 $\pm$ 0.7 \% \\
\bottomrule
\end{tabular}
\caption{Comparison with \citeauthor{Balaji} (\citeyear{Balaji}) with 1000 data points. The numbers show the mean and standard deviation of the percentage of true outliers/inliers detected as outliers/inliers over 10 runs.}
\label{Balaji_table}
\end{table}
We also tried the code of \citeauthor{Balaji} (\citeyear{Balaji}) based on CVXPY \citep{CVXPY}, which does not scale well: its computation time is considerable even with 1000 data points. As in the previous experiments, the clean dataset $\mu_m$ consists of 1000 Fashion-MNIST data points, and the polluted dataset $\nu_n$ consists of 950 Fashion-MNIST data points as inliers and 50 MNIST data points as outliers. Table \ref{Balaji_table} shows the mean accuracy and standard deviations over 10 runs. The run-time of their method was $820\pm17$ seconds, while that of our method was $6\pm0.2$ seconds. Our method outperforms the method by \citeauthor{Balaji} (\citeyear{Balaji}) in terms of not only outlier detection performance but also computation time.\par
\subsection{The selection of hyperparameters $\beta$ and $\lambda$}\label{hyperparameter_selection}
Here, we discuss the selection of the hyperparameters $\beta$ and $\lambda$. Figure~\ref{hp_sensitivity_MNIST} shows the sensitivity to the hyperparameters for the outlier detection task above. We see that $2 \leq \lambda \leq 14$ and $1.2 \leq \beta \leq 1.5$ are good choices. \par
What about other values of $\beta$ or $\lambda$? Since we run the algorithm $\left \lfloor \frac{\frac{z}{\lambda}(\beta - 1) - 1}{(\frac{1}{m})^{\beta - 1} + (\frac{1}{n})^{\beta - 1}} \right \rfloor$ times, choosing $\beta$ excessively large or $\lambda$ excessively small harmfully increases the computation time. On the other hand, if we choose $\beta$ excessively small or $\lambda$ excessively large, $\left \lfloor \frac{\frac{z}{\lambda}(\beta - 1) - 1}{(\frac{1}{m})^{\beta - 1} + (\frac{1}{n})^{\beta - 1}} \right \rfloor$ becomes less than or equal to 0, which means that we cannot even start running the algorithm.
However, this discussion assumes that $z$ is fixed. If we scale the raw data by a constant factor, $z$ changes accordingly. By scaling $z$, we can adjust the number of iterations so that it is larger than 0 and, at the same time, not too large.
In the MNIST detection task, we scaled the raw data so that the number of iterations lies in $(0, 20)$ when $(\beta, \lambda)$ = (1.4, 14), (1.3, 10), (1.3, 12), (1.3, 14), (1.2, 6), (1.2, 8), (1.2, 10), (1.2, 12), (1.2, 14). We can see that scaling the raw data by a constant factor causes no problem in detecting outliers (Figure~\ref{hp_sensitivity_MNIST}). Therefore, since we can scale $z$, we can adjust the number of iterations while restricting $\lambda\in [2, 14]$ and $\beta\in[1.2, 1.5]$.
\par
Below, we confirm that the proposed method is sufficiently stable in the above range of hyperparameters by using the credit card fraud detection dataset\footnote{https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud}. We experimentally observe the sensitivity of the proposed method to the choice of $\beta$ and $\lambda$ on this dataset. \par
The credit card fraud detection dataset contains transactions made by credit cards in 2013 by European cardholders. Due to confidentiality issues, it provides neither the original features nor further background information about the data. Instead, it contains 28-dimensional numerical feature vectors, which are the result of a principal component analysis transformation. We used these feature vectors to compute the cost matrix, namely the $L_2$ distance between them. The task is to detect 450 frauds out of 9000 transactions. We conducted ten experiments for each pair of $\beta$ and $\lambda$. \par
We show the results in Figure \ref{hp_sensitivity_CreditCard}. We can see that the detection accuracy is sufficiently stable when $1.2\leq \beta \leq 1.5$ and $1\leq \lambda \leq 14$.
\begin{figure}[t]
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width = 7.5cm]{figures/sensitivity/MNIST_3D.pdf}
\subcaption{The hyperparameter sensitivity in the Fashion-MNIST detection task. (Blue) The inlier detection accuracy. (Red) The outlier detection accuracy. Error bars represent the mean and standard deviation.}
\label{hp_sensitivity_MNIST}
\end{minipage} &
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width = 7.5cm]{figures/sensitivity/CreditCard_3D.pdf}
\subcaption{The hyperparameter sensitivity in the credit card fraud detection task. (Blue) The inlier detection accuracy. (Red) The outlier detection accuracy. Error bars represent the mean and standard deviation.}
\label{hp_sensitivity_CreditCard}
\end{minipage}
\end{tabular}
\caption{The hyperparameter ($\beta$ and $\lambda$) sensitivity in the Fashion-MNIST dataset and in the credit card fraud detection dataset.}
\end{figure}
\section{Conclusion}
In this work, we proposed to robustly approximate OT by regularizing the ordinary OT with the $\beta$-potential term. By leveraging the domain of the Fenchel conjugate of the $\beta$-potential, our algorithm does not move any probability mass to outliers. We demonstrated that our method can estimate a probability distribution robustly even in the presence of outliers and can successfully detect outliers from a contaminated dataset.
\acks{}
SN was supported by JST SPRING, Grant Number JPMJSP2108. MS was supported by JST CREST Grant Number JPMJCR18A2.
% arXiv:2212.13251: Robust computation of optimal transport by $\beta$-potential regularization
% arXiv:1606.08393: Location of the Adsorption Transition for Lattice Polymers
\begin{abstract}
We consider various lattice models of polymers: lattice trees, lattice animals, and self-avoiding walks. The polymer interacts with a surface (hyperplane), receiving a unit energy reward for each site in the surface. There is an adsorption transition of the polymer at a critical value of $\beta$, the inverse temperature. We present a new proof of the result of Hammersley, Torrie, and Whittington (1982) that the transition occurs at a strictly positive value of $\beta$ when the surface is impenetrable, i.e.\ when the polymer is restricted to a half-space. In contrast, for a penetrable surface, it is an open problem to prove that the transition occurs at $\beta=0$ (i.e., infinite temperature). We reduce this problem to showing that the fraction of $N$-site polymers whose span is less than $N/\log^2 N$ is not too small.
\end{abstract}
\section{Introduction}
\label{sec.intro}
We shall work in the $d$-dimensional hypercubic lattice $\mathbb{L}^d$ ($d\geq 2$),
with sites $x=(x_1,\ldots,x_d)\in \mathbb{Z}^d$ and edges connecting nearest neighbours.
Let $\mathbb{L}^d_+$ be the part of $\mathbb{L}^d$ in the half-space $x_1\geq 0$.
Here is our ``big picture'' of adsorption for lattice polymer models. We have
a surface in our space $\mathbb{L}^d$ (in our case, the hyperplane $x_1=0$).
For each $N\geq 1$, we have a finite set ${\cal P}_N$ of possible configurations of a
polymer molecule of size $N$
attached to a fixed site in the surface (the origin). In this paper, ${\cal P}_N$ will be the
set of lattice trees or lattice animals (representing branched polymers) or self-avoiding
walks (representing linear polymers) with $N$ sites (representing monomers).
These are classical lattice models of polymer configurations (see for example
de Gennes 1979 and Vanderzande 1998).
Each polymer $\rho$ is rewarded according to the number $\sigma(\rho)$
of sites of $\rho$ that lie in the surface. For real $\beta$, we define the partition function
\begin{equation}
\label{eq.Zgen}
Z_N(\beta) \; :=\; \sum_{\rho\in {\cal P_N} } \exp(\beta \sigma(\rho)) \,.
\end{equation}
The absolute value of $\beta$ represents the inverse temperature; the sign of $\beta$ tells
us whether the surface is attractive or repulsive. In our cases, there exists a
\textit{limiting free energy}
\begin{equation}
\label{eq.Fgen}
{\cal F}(\beta) \;:=\; \lim_{N\rightarrow\infty}\frac{1}{N}\,\log Z_N(\beta) \,.
\end{equation}
The limit ${\cal F}(\beta)$ is a finite non-decreasing function of $\beta$
that is automatically convex (e.g.\ Lemma 4.1.2 of Madras and Slade 1993)
and hence continuous.
In particular, we have $\lim_{N\rightarrow\infty}|{\cal P}_N|^{1/N}\,=\, \exp ({\cal F}(0))$
(where the cardinality of a set $A$ is denoted $|A|$).
In our models, we also find that ${\cal F}(\beta)\,=\,{\cal F}(0)$ for every negative $\beta$,
which says that in the repulsive regime, the energy imparted by surface interaction is
negligible---i.e., the polymer desorbs and most of it does not lie in the surface.
We say that $\{\beta: {\cal F}(\beta)\,=\,{\cal F}(0)\}$ is the \textit{desorbed} regime,
and $\{\beta: {\cal F}(\beta)\,>\,{\cal F}(0)\}$ is the \textit{adsorbed} regime.
There is an \textit{adsorption transition} at the critical point $\beta_c$ which is the right
endpoint of the desorbed regime. We know that $\beta_c$ is finite (Hammersley, Torrie and Whittington, 1982).
In the context of polymer modelling, the surface could either be
impenetrable (e.g., the wall of a container) or penetrable (e.g., an interfacial
layer between two fluids). We shall always represent the surface by the
hyperplane $x_1=0$. In the impenetrable case, the polymer configurations
will be restricted to the half-space $\mathbb{L}^d_+$. We shall write
$\beta_c^+$ and $\beta_c^P$ to denote the adsorption critical points for the
impenetrable and penetrable models respectively.
A basic qualitative question about the adsorption transition
is whether $\beta_c$ is zero or nonzero---i.e., whether the transition occurs at infinite
or at finite temperature. It turns out that when the surface is impenetrable,
then $\beta_c^+>0$. This had been proven
by other authors (Hammersley et al.\ 1982, for self-avoiding walks;
Janse van Rensburg and You, 1998, for lattice trees),
but we present a new and shorter proof.
In the case of a penetrable surface, with the polymers not restricted to a half-space,
it is generally believed that $\beta_c^P=0$. It is an open problem to prove this
rigorously. We do not fully solve this problem, but we show that it is a rigorous consequence of
a weak assertion about the diameter of polymers which seems to be beyond reasonable doubt.
Specifically, let the span of the polymer $\rho$ be the maximum value of $|u_1-v_1|$
where $u$ and $v$ range over all sites of $\rho$. Let $f_N$ be the
fraction of polymers in ${\cal P}_N$ whose span is at most $N/\log^2N$. We prove that if
$f_N$ is bounded below by $N^{-\delta}$ for some fixed $\delta$, then $\beta_c^P$ must be
zero. This condition is much weaker than the standard scaling assumption about
polymers, which is that the average span of members of ${\cal P}_N$ scales
as $N^{\nu}$ for some $\nu<1$.
It is worth remarking that the methods of Hammersley et al.\ (1982) and Janse van Rensburg and
You (1998) yield an explicit positive lower bound on $\beta^+_c-\beta^P_c$; the strict
positivity of $\beta^+_c$ is then a corollary of this result and
the relatively easy observation that $\beta_c^P\geq 0$. In contrast, the method of the
present paper provides an explicit positive lower bound on $\beta^+_c$ but does not
give a direct proof that $\beta^+_c>\beta^P_c$.
Beaton et al.\ (2014) considered the important special case of self-avoiding walks
on the hexagonal lattice, and proved that $\beta^{+}_c=\ln (1+\sqrt{2})$, thus verifying
a prediction of Batchelor and Yung (1995). This result depends on special properties
of the hexagonal lattice, and seems difficult to generalize.
We note that when ${\cal P}_N$ is the set of $N$-step nearest-neighbour random walk paths
(not necessarily self-avoiding), then a relatively straightforward application of generating
functions shows that $\beta_c$ is 0 in the penetrable case and is strictly positive (in fact
equal to $\ln(2d/(2d-1))$) in the impenetrable case (see for example Hammersley, 1982).
The book of Giacomin (2007) deals extensively with related random walk models.
Our proofs are simplest in the case of lattice trees and lattice animals. The same methods
work for self-avoiding walks, but some technical modifications are necessary.
Here is the organization of the rest of the paper. The results are stated formally in Section
\ref{sec.results}. After Section \ref{sec.defs} sets up the basic framework and some terminology,
Sections \ref{sec.BP1} and \ref{sec.SAW1} present the results for lattice trees (and lattice
animals) and for self-avoiding walks respectively.
Section \ref{sec.BP2} presents
the proofs for lattice trees, as well as the minor modifications needed for lattice animals.
Section \ref{sec.SAW2} presents the proofs for self-avoiding walks.
\section{Results}
\label{sec.results}
\subsection{Basic Background and Notation}
\label{sec.defs}
We denote the standard basis of $\mathbb{R}^d$ by $u^{(1)},\ldots,u^{(d)}$; that is, $u^{(i)}$ is the
unit vector in the $+x_i$ direction.
We write $\mathbb{Z}^d$ for
the set of points $(x_1,\ldots,x_d)$ in $\mathbb{R}^d$ whose coordinates $x_i$ are all integers.
The $d$-dimensional hypercubic lattice $\mathbb{L}^d$
is the infinite graph embedded in $\mathbb{R}^d$,
whose sites are the points of $\mathbb{Z}^d$ and whose edges join each pair of sites that
are distance 1 apart.
Let $\mathbb{L}_+^d$ be the part of $\mathbb{L}^d$ that lies in the half-space $\{x:x_1\geq 0\}$.
If $A\subset \mathbb{R}^d$ (or if $A$ is a subgraph of $\mathbb{L}^d$)
and $x\in \mathbb{Z}^d$, then the translation of $A$ by the vector $x$ is denoted $A+x$.
For a subgraph $\rho$ of $\mathbb{L}^d$, let $\mathcal{H}(\rho)$ be the set of sites $x$ of $\rho$
such that $x_1=0$. Thus, referring to Equation (\ref{eq.Zgen}), the quantity
$\sigma(\rho)$ equals $|\mathcal{H}(\rho)|$, the cardinality of $\mathcal{H}(\rho)$.
We shall frequently use superscripts $+$ and $P$ to denote
impenetrable and penetrable surfaces respectively. Also, we shall use $T$, $A$, and $W$
superscripts to denote trees, animals, and (self-avoiding) walks.
\subsection{Branched Polymers: Trees and animals}
\label{sec.BP1}
A lattice animal is a finite connected subgraph of $\mathbb{L}^d$, and a lattice tree is a
lattice animal with no cycles. Each corresponds to a standard discrete model
of the configuration of a branched polymer.
Let ${\cal{T}}_N$ be the set of all $N$-site lattice trees that contain the origin.
Let $\bar{\cal{T}}_N$ be the set of $N$-site lattice trees whose lexicographically smallest
site is the origin. (The elements of $\bar{\cal{T}}_N$ correspond to equivalence classes
of all $N$-site lattice trees up to translation.)
Then $|{\cal{T}}_N|\,=\, N\,|\bar{\cal{T}}_N|$.
Let $t_N=|\bar{\cal{T}}_N|$. It is well known (Klarner, 1967; Klein, 1981)
that $t_Nt_M\leq t_{N+M}$
for all $N,M\geq 1$, and that $t_N^{1/N}$ has a finite limit $\lambda_d$ with the
property that
\begin{equation}
\label{eq.tNleq}
t_N\leq \lambda_d^N \hspace{5mm} \hbox{for every $N$}.
\end{equation}
The notation and results for lattice animals are exactly analogous: ${\cal{A}}_N$,
$\bar{\cal{A}}_N$, $a_N=|\bar{\cal{A}}_N|=|{\cal A}_N|/N$,
$\lambda_{d,A}:= \lim_{n\rightarrow\infty} a_N^{1/N}$,
and $a_N\leq \lambda_{d,A}^N$.
Let ${\cal{T}}_N^+$ be the set of all trees $\tau\in {\cal{T}}_N$ such that
$\tau\subset \mathbb{L}^d_+$.
Then for every site $x$ of every tree $\tau$ in ${\cal T}_N^+$, we have $x_1\geq 0$.
Observe that $\bar{\cal{T}}_N\subset {\cal{T}}_N^+\subset {\cal{T}}_N$.
We now consider the ensemble of lattice trees in the half-space $\mathbb{L}^d_+$ in which
each site in the boundary plane $x_1=0$ receives unit energy reward.
For real $\beta$, define the partition function
\begin{equation}
\label{eq.Zdef}
Z^{T+}_N(\beta) \; := \; \sum_{\tau\in {\cal{T}}_N^+} \exp(\beta |{\cal H}(\tau)|) \,.
\end{equation}
As shown in Theorem 6.23 of Janse van Rensburg (2000), a concatenation argument
can be used to prove that the limiting free energy
\begin{equation}
\label{eq.defF}
{\cal F}^{T+}(\beta) \;:= \; \lim_{N\rightarrow\infty}\frac{1}{N}\,\log Z^{T+}_N(\beta)
\end{equation}
exists and is finite for every real $\beta$.
It is not hard to see that
the number of trees $\tau$ in ${\cal T}_N^+$ with $|{\cal H}(\tau)|=1$ is exactly
$|{\cal T}_{N-1}^+|$
for every $N$, and hence
\begin{equation}
\label{eq.tN1ZN}
t_{N-1}e^{\beta} \; \leq \; Z^{T+}_N(\beta) \,.
\end{equation}
For $\beta\leq 0$, we also have $Z^{T+}_N(\beta)\leq |{\cal T}_N^+| \leq N t_N$,
and combining this with Equation (\ref{eq.tN1ZN}) shows that
\begin{equation}
\label{eq.Zlimneg}
{\cal F}^{T+}(\beta)
\;=\; \log \lambda_d \hspace{5mm}
\hbox{for every $\beta\leq 0$}.
\end{equation}
This says that the polymer desorbs from the surface whenever $\beta$ is nonpositive---that is,
we have $\beta_c^{T+}\geq 0$.
The following result tells us that, in fact, the polymer desorbs
whenever $\beta \leq \lambda_d^{-1}$.
\begin{thm}
\label{prop.tree}
For lattice trees,
we have ${\cal F}^{T+}(\beta)\,=\,\log \lambda_d$ for every $\beta \leq \lambda_d^{-1}$.
\end{thm}
Theorem \ref{prop.tree} says that for adsorption of lattice trees to an impenetrable surface, the
critical point satisfies $\beta^{T+}_c\geq \lambda_d^{-1}$. This result is somewhat
better than the
bound $\beta_c^{T+}\geq \beta^{T+}_c-\beta_c^{TP}\geq \frac{1}{2}\log(1+\lambda_d^{-1})$
that follows from
Theorem 4.7 of Janse van Rensburg and You (1998) (which however applies to a larger class of
tree models). However, the main contribution of our Theorem \ref{prop.tree} is the new method of
proof, rather than the improved numerical value of the bound.
\smallskip
We now consider adsorption at a penetrable surface, and
the relevant ensemble ${\cal T}_N$ of all $N$-site trees that contain the origin.
The corresponding partition function is
\begin{equation}
\label{eq.ZPdef}
Z^{TP}_N(\beta) \; := \; \sum_{\tau\in {\cal{T}}_N} \exp(\beta |{\cal H}(\tau)|) \,.
\end{equation}
As in the impenetrable case, a concatenation argument (see Theorem 6.23 of
Janse van Rensburg 2000) shows that the limit
\begin{equation}
\label{eq.defFP}
{\cal F}^{TP}(\beta) \;:= \; \lim_{N\rightarrow\infty}\frac{1}{N}\,\log Z^{TP}_N(\beta)
\end{equation}
exists and is finite for every real $\beta$. As was the case for ${\cal F}^{T+}$,
\begin{equation}
\label{eq.ZPlimneg}
{\cal F}^{TP}(\beta)
\;=\; \log \lambda_d \hspace{5mm}
\hbox{for every $\beta\leq 0$}.
\end{equation}
It is not hard to show that $0\leq \beta_c^{TP}\leq \beta_c^{T+}\leq \ln(\lambda_d/\lambda_{d-1})$
(see Hammersley et al., 1982, or Janse van Rensburg and You, 1998).
However, in marked contrast to the situation for ${\cal F}^{T+}$, it is generally
believed that ${\cal F}^{TP}(\beta)\,>\, \log \lambda_d$ for every $\beta>0$ ---
i.e., that $\beta^{TP}_c=0$.
Proving this is a challenging open problem. We shall show that it is a consequence of a
different
property that has not been proven rigorously but is widely believed to be true.
In the following, we let $\Pr_A$ denote the uniform probability distribution
on the set $A$. Define the $x_1$-span of a tree $\tau$
to be the number of integers $j$ such that $\tau$ contains a site $v$
with $v_1=j$. We write $\textrm{Span}(\tau)$ to denote the $x_1$-span of $\tau$.
Since trees are connected, we have
\[ \textrm{Span}(\tau) \;=\; 1\,+\, \max\{ |u_1-v_1| \,: u,v\in \tau\} \,. \]
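For a polymer given as a list of its lattice sites, the $x_1$-span above can be computed directly (an illustrative sketch, ours):

```python
def x1_span(sites):
    """x1-span: the number of integers j such that some site has first
    coordinate j, which for a connected polymer equals 1 + max |u_1 - v_1|."""
    xs = [s[0] for s in sites]
    return max(xs) - min(xs) + 1
```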
\begin{thm}
\label{prop.treeperm}
Assume there exists $\delta \in (0,\infty)$ such that
\begin{equation}
\label{eq.spancond}
\Pr\!{}_{{\cal T}_N}\left(\left\{ \tau : \,\textrm{Span}(\tau) \,\leq \, \frac{N}{\log^2 N} \right\} \right)
\;\geq \; \frac{1}{N^{\delta}}
\end{equation}
for all sufficiently large $N$. Then ${\cal F}^{TP}(\beta)\,>\, \log \lambda_d$ for every $\beta>0$
(that is, $\beta_c^{TP}=0$).
\end{thm}
\begin{rem}
\label{rem.tree}
(\textit{i})
It is generally believed that the expected value of $\textrm{Span}(\tau)$ over ${\cal T}_N$
scales as $N^{\nu}$ for some (dimension-dependent) critical exponent $\nu<1$ (e.g.\ see
section 9.2 of Vanderzande 1998). This would imply the truth of Equation (\ref{eq.spancond});
indeed, it would imply that the left-hand side of (\ref{eq.spancond})
converges to 1 as $N$ tends to $\infty$.
\\
(\textit{ii}) It will be seen from the proof that the statement of Theorem \ref{prop.treeperm}
can be strengthened
slightly, e.g.\ by replacing the square (of the logarithm) by a power greater than 1.
\\
(\textit{iii}) The direct analogues of Theorems \ref{prop.tree} and \ref{prop.treeperm} also hold
for lattice animals (see Remarks \ref{rem.treeimp} and \ref{rem.treeperm}).
\\
(\textit{iv}) There are other ways to define the span of a tree, but the choice of method
will not substantially affect the statement of the theorem. Our choice, using the $x_1$
coordinate, is for convenience.
\end{rem}
\subsection{Linear polymers: Self-avoiding walks}
\label{sec.SAW1}
An $N$-step self-avoiding walk (SAW) in $\mathbb{L}^d$ is a sequence
$\omega=(\omega(0),\omega(1),\ldots,\omega(N))$
of $N+1$ distinct points of $\mathbb{Z}^d$ such that $\omega(i)$ is a nearest
neighbour of $\omega(i-1)$ for $i=1,\ldots,N$. We write $\omega_j(i)$ to denote the
$j^{th}$ coordinate of the $i^{th}$ point of $\omega$. The self-avoiding walk is a
classical model of the configuration of a linear polymer.
Let ${\cal S}_N$ be the set of all $N$-step self-avoiding walks in $\mathbb{L}^d$ that start
at the origin, and let $c_N=|{\cal S}_N|$.
Then the limit $\mu_d=\lim_{N\rightarrow\infty}c_N^{1/N}$ exists (Hammersley and Morton 1954;
or see Section 1.2 of Madras and Slade 1993).
Our notation for SAWs is very similar to our notation for trees.
Let ${\cal S}_N^+$ be the set of all SAWs in ${\cal S}_N$ that are contained in $\mathbb{L}^d_+$.
Then $|{\cal S}_N^+|^{1/N}$
also converges to $\mu_d$ (e.g., by Corollary 3.1.6 of Madras and Slade 1993).
The partition function for adsorption at an impenetrable surface is defined to be
\begin{equation}
Z^{W+}_N(\beta) \; :=\; \sum_{\omega\in {\cal S}_N^+} \exp(\beta|{\cal H}(\omega)|) \,.
\label{eq.ZNsaw}
\end{equation}
Hammersley et al.\ (1982) proved the existence of the limit
\begin{equation}
\label{eq.defFwalk}
{\cal F}^{W+}(\beta) \;:= \; \lim_{N\rightarrow\infty}\frac{1}{N}\,\log Z^{W+}_N(\beta)
\end{equation}
for every real $\beta$.
The following result is the analogue of Theorem \ref{prop.tree} for SAWs, proving
that $\beta^{W+}_c\geq \frac{1}{2}\mu_d^{-2}$.
\begin{thm}
\label{prop.sawimp}
We have ${\cal F}^{W+}(\beta)\,=\,\log \mu_d$ for every $\beta \leq \frac{1}{2}\mu_d^{-2}$.
\end{thm}
For the case of a penetrable surface, let
\begin{equation}
\label{eq.ZWPdef}
Z^{WP}_N(\beta) \; := \; \sum_{\omega\in {\cal{S}}_N} \exp(\beta |{\cal H}(\omega)|) \,.
\end{equation}
Hammersley et al.\ (1982) proved that the limit
\begin{equation}
\label{eq.defFWP}
{\cal F}^{WP}(\beta) \;:= \; \lim_{N\rightarrow\infty}\frac{1}{N}\,\log Z^{WP}_N(\beta)
\end{equation}
exists and is finite for every real $\beta$, and equals $\log\mu_d$ whenever $\beta\leq 0$.
We define the $x_1$-span of a SAW exactly as for trees:
\[ \textrm{Span}(\omega) \;:=\; 1\,+\, \max\{ |\omega_1(i)-\omega_1(j)| \,: 0\leq i,j\leq N\} \,. \]
We define an $N$-step bridge to be an $N$-step self-avoiding walk with the property that
\[ \omega_d(0)<\omega_d(i)\leq \omega_d(N) \hspace{5mm}\hbox{for $i=1,\ldots,N$.}
\]
Let ${\cal S}_N^B$ be the set of all bridges in ${\cal S}_N$, and let $b_N=|{\cal S}^B_N|$.
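These two definitions are easy to encode directly. The following Python sketch is an illustration only (lattice points as integer tuples, the span taken in the first coordinate, and the bridge condition imposed on the last coordinate, as above):

```python
def span(walk):
    # x_1-span: 1 + (max - min) of the first coordinates along the walk
    xs = [p[0] for p in walk]
    return 1 + max(xs) - min(xs)

def is_bridge(walk):
    # bridge condition: w_d(0) < w_d(i) <= w_d(N) for i = 1, ..., N
    w0, wN = walk[0][-1], walk[-1][-1]
    return all(w0 < p[-1] <= wN for p in walk[1:])

# a 3-step walk in Z^2: (0,0) -> (0,1) -> (-1,1) -> (-1,2)
w = [(0, 0), (0, 1), (-1, 1), (-1, 2)]
assert span(w) == 2      # first coordinates take the two values 0 and -1
assert is_bridge(w)
```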
The following result provides a sufficient condition for $\beta^{WP}_c$ to be zero,
analogously to Theorem \ref{prop.treeperm}.
\begin{thm}
\label{prop.sawperm}
Assume there exists $\delta \in (0,\infty)$ such that
\begin{equation}
\label{eq.Wspancond}
\Pr\!{}_{{\cal S}^B_N}\left(\left\{ \omega : \,\textrm{Span}(\omega) \,\leq \,
\frac{N}{\log^2 N} \right\} \right)
\;\geq \; \frac{1}{N^{\delta}}
\end{equation}
for all sufficiently large $N$. Then ${\cal F}^{WP}(\beta)\,>\, \log \mu_d$ for every $\beta>0$.
\end{thm}
Similarly to Remark \ref{rem.tree}(\textit{i}), it is generally believed that the left side
of Equation (\ref{eq.Wspancond}) converges to 1 as $N$ tends to infinity.
\section{Branched Polymers: Proofs}
\label{sec.BP2}
\subsection{Branched Polymers at an Impenetrable Boundary}
\label{sec.treeimp}
\begin{rem}
\label{rem.treeimp}
Everything in this subsection holds if lattice trees are replaced by lattice animals.
\end{rem}
For $\tau\in {\cal{T}}_N^+$, we
think of the set of sites ${\cal H}(\tau)$ as
the ``left side of $\tau$''. The set ${\cal H}(\tau)$ is not empty
because $\tau$ contains the origin. For $1\leq k\leq N$, let
\[ \textrm{left}_N(k) \;=\; \left|\{ \tau\in {\cal T}^+_N\,: \, |{\cal H}(\tau)|=k\, \}\right| \,.
\]
Then we can write (recalling
Equation (\ref{eq.Zdef}))
\begin{equation}
\label{eq.Zdef2}
|{\cal T}^+_N| \;=\; \sum_{k=1}^N \textrm{left}_N(k)
\hspace{5mm} \hbox{and} \hspace{5mm}
Z^{T+}_N(\beta)
\;=\; \sum_{k=1}^N \textrm{left}_N(k)\, e^{\beta k}\,.
\end{equation}
\noindent
\textbf{Proof of Theorem \ref{prop.tree} :}
Fix $\beta$ such that $0<\beta<\lambda_d^{-1}$.
From Equation (\ref{eq.Zdef2}) we have
\begin{equation}
\label{eq.ZkNexp}
Z^{T+}_N(\beta)
\;=\; \sum_{k=1}^{N} \, \sum_{j=0}^{\infty} \, \frac{\beta^jk^j}{j!} \,\textrm{left}_N(k) \,.
\end{equation}
For any $j\geq 0$ and $k\geq 1$, we have
\begin{equation}
\label{eq.combin}
\frac{k^j}{j!} \; \leq \; \binom{k+j-1}{j} \,.
\end{equation}
The right hand side of inequality (\ref{eq.combin}) is the number of ways to put $j$ identical balls
into $k$ distinct boxes. More formally, it is the number of $k$-tuples $(w_1,\ldots,w_k)$
of nonnegative integers such that $w_1+\cdots+w_k=j$.
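Both the stars-and-bars count and the inequality $k^j/j! \leq \binom{k+j-1}{j}$ can be verified by brute force for small $k$ and $j$; the following Python snippet is a quick sanity check, not part of the proof.

```python
from math import comb, factorial
from itertools import product

for k in range(1, 5):
    for j in range(0, 6):
        # number of k-tuples of nonnegative integers summing to j
        tuples = sum(1 for w in product(range(j + 1), repeat=k) if sum(w) == j)
        assert tuples == comb(k + j - 1, j)
        # the inequality k^j / j! <= C(k+j-1, j)
        assert k**j / factorial(j) <= comb(k + j - 1, j)
```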
We shall define a \textit{marked tree} (with $N$ sites) to be a tree $\tau$ in ${\cal{T}}_N^+$
that has a nonnegative integer $w(\tau;v)$ assigned to each site $v$ of ${\cal H}(\tau)$.
(We think of $w(\tau;v)$ as the number of ``marks'' on the site $v$ of $\tau$.)
Let ${\cal T}_N^{(j)}$ be the set of all marked trees $\tau$ with $N$ sites such that the total
number of marks on the sites of $\tau$ is $j$ (that is, $\sum_{v\in {\cal H}(\tau)}w(\tau;v)=j$).
See Figure \ref{fig1}.
Then
\begin{equation}
\label{eq.taumark}
\left| {\cal T}_N^{(j)} \right| \;=\; \sum_{k=1}^N \binom{k+j-1}{j} \, \textrm{left}_N(k) \,.
\end{equation}
\setlength{\unitlength}{.45mm}
\begin{figure}
\newsavebox{\dink}
\savebox{\dink}(0,0){\line(0,1){0.8}}
\begin{center}
\begin{picture}(190,90)(0,0)
\put(0,0){
\begin{picture}(70,80)(0,0)
\put(30,30){\line(1,0){10}}
\put(40,20){\line(0,1){10}}
\put(10,20){\line(1,0){40}}
\put(20,10){\line(0,1){10}}
\put(40,50){\line(0,1){10}}
\put(10,40){\line(1,0){20}}
\put(20,60){\line(1,0){20}}
\put(10,20){\line(0,1){20}}
\put(40,50){\line(1,0){30}}
\put(50,10){\line(0,1){10}}
\put(20,40){\line(0,1){20}}
\put(10,70){\line(1,0){20}}
\put(30,60){\line(0,1){10}}
\put(60,30){\line(0,1){30}}
\put(50,40){\line(1,0){10}}
\put(10.1,70){\circle*{1.7}}
\put(10.1,40){\circle*{1.7}}
\put(10.1,30){\circle*{1.7}}
\put(10.1,20){\circle*{1.7}}
\put(12,29.5){\tiny{0}}
\multiput(9,-1)(0,2){42}{\usebox{\dink}}
\multiput(11,-1)(0,2){42}{\usebox{\dink}}
\end{picture}
}
\put(120,0){
\begin{picture}(70,90)(0,0)
\put(30,30){\line(1,0){10}}
\put(40,20){\line(0,1){10}}
\put(10,20){\line(1,0){40}}
\put(20,10){\line(0,1){10}}
\put(40,50){\line(0,1){10}}
\put(10,40){\line(1,0){20}}
\put(20,60){\line(1,0){20}}
\put(10,20){\line(0,1){20}}
\put(40,50){\line(1,0){30}}
\put(50,10){\line(0,1){10}}
\put(20,40){\line(0,1){20}}
\put(10,70){\line(1,0){20}}
\put(30,60){\line(0,1){10}}
\put(60,30){\line(0,1){30}}
\put(50,40){\line(1,0){10}}
\put(10.1,70){\circle*{1.7}}
\put(10.1,40){\circle*{1.7}}
\put(10.1,30){\circle*{1.7}}
\put(10.1,20){\circle*{1.7}}
\put(3,67){3}
\put(3,38){0}
\put(3,28){5}
\put(3,18){3}
\put(12,29.5){\tiny{0}}
\multiput(9,-1)(0,2){42}{\usebox{\dink}}
\multiput(11,-1)(0,2){42}{\usebox{\dink}}
\put(-10,88){number of marks}
\put(5,86){\vector(0,-1){10}}
\end{picture}
}
\end{picture}
\end{center}
\caption{\label{fig1} \textit{Left:} A tree $\tilde{\tau}$ in ${\cal T}^+_{28}$.
The vertical dashed double line
denotes the surface $\{x_1=0\}$. Here, $|{\cal H}(\tilde{\tau})|=4$.
\textit{Right:} A marked tree $\tilde{\tau}$ in ${\cal T}_{28}^{(11)}$. The numbers show the values
of $w(\tilde{\tau};v)$ for each site $v$ in ${\cal H}(\tilde{\tau})$.
}
\end{figure}
Combining Equations (\ref{eq.ZkNexp}--\ref{eq.taumark}) shows that
\begin{equation}
\label{eq.ZbdTj}
Z^{T+}_N(\beta) \;\leq \; \sum_{j=0}^{\infty} \beta^j \,\left| {\cal T}_N^{(j)} \right| \,.
\end{equation}
Now, consider an arbitrary marked tree $\tau\in {\cal T}_N^{(j)}$.
For every site $v$ in ${\cal H}(\tau)$, enlarge the tree by attaching a segment of length
$w(\tau;v)$ from $v$ to $v-w(\tau;v)u^{(1)}$.
The result is a tree $f(\tau)$ in ${\cal{T}}_{N+j}$ (with no marks). See Figure \ref{fig2}.
\setlength{\unitlength}{.45mm}
\begin{figure}[h]
\savebox{\dink}(0,0){\line(0,1){0.8}}
\begin{center}
\begin{picture}(210,85)(0,0)
\put(0,0){
\begin{picture}(70,80)(0,0)
\put(30,30){\line(1,0){10}}
\put(40,20){\line(0,1){10}}
\put(10,20){\line(1,0){40}}
\put(20,10){\line(0,1){10}}
\put(40,50){\line(0,1){10}}
\put(10,40){\line(1,0){20}}
\put(20,60){\line(1,0){20}}
\put(10,20){\line(0,1){20}}
\put(40,50){\line(1,0){30}}
\put(50,10){\line(0,1){10}}
\put(20,40){\line(0,1){20}}
\put(10,70){\line(1,0){20}}
\put(30,60){\line(0,1){10}}
\put(60,30){\line(0,1){30}}
\put(50,40){\line(1,0){10}}
\put(10.1,70){\circle*{1.7}}
\put(10.1,40){\circle*{1.7}}
\put(10.1,30){\circle*{1.7}}
\put(10.1,20){\circle*{1.7}}
\put(3,67){3}
\put(3,38){0}
\put(3,28){5}
\put(3,18){3}
\put(12,29.5){\tiny{0}}
\multiput(9,-1)(0,2){42}{\usebox{\dink}}
\multiput(11,-1)(0,2){42}{\usebox{\dink}}
\end{picture}
}
\put(140,0){
\begin{picture}(70,80)(0,0)
\put(30,30){\line(1,0){10}}
\put(40,20){\line(0,1){10}}
\put(10,20){\line(1,0){40}}
\put(20,10){\line(0,1){10}}
\put(40,50){\line(0,1){10}}
\put(10,40){\line(1,0){20}}
\put(20,60){\line(1,0){20}}
\put(10,20){\line(0,1){20}}
\put(40,50){\line(1,0){30}}
\put(50,10){\line(0,1){10}}
\put(20,40){\line(0,1){20}}
\put(10,70){\line(1,0){20}}
\put(30,60){\line(0,1){10}}
\put(60,30){\line(0,1){30}}
\put(50,40){\line(1,0){10}}
\multiput(-19.5,70)(10,0){3}{\circle*{1.3}}
\multiput(-39.5,30)(10,0){5}{\circle*{1.3}}
\multiput(-19.5,20)(10,0){3}{\circle*{1.3}}
\put(-20,70){\line(1,0){30}}
\put(-40,30){\line(1,0){50}}
\put(-20,20){\line(1,0){30}}
\put(10.1,70){\circle*{1.7}}
\put(10.1,40){\circle*{1.7}}
\put(10.1,30){\circle*{1.7}}
\put(10.1,20){\circle*{1.7}}
\put(12,29.5){\tiny{0}}
\multiput(9,-1)(0,2){42}{\usebox{\dink}}
\multiput(11,-1)(0,2){42}{\usebox{\dink}}
\end{picture}
}
\put(80,40){$\lhook\joinrel\relbar\joinrel\rightarrow$}
\end{picture}
\end{center}
\caption{\label{fig2} \textit{Left:} A marked tree $\tau$ in ${\cal T}_{28}^{(11)}$ (see
Figure \ref{fig1}).
\textit{Right:} The tree $f(\tau)$ in ${\cal T}_{39}$.}
\end{figure}
The mapping $f: {\cal T}_N^{(j)} \rightarrow {\cal{T}}_{N+j}$ is
clearly one-to-one (since $\tau\,=\,f(\tau) \cap \mathbb{L}^d_+$ and the marks are easily
recovered from the segments of $f(\tau)$ outside of $\mathbb{L}^d_+$), and hence
$| {\cal T}_N^{(j)} | \,\leq \,|{\cal T}_{N+j}|\,=\, (N+j)\,t_{N+j}$.
Combining this with Equations (\ref{eq.ZbdTj}) and (\ref{eq.tNleq}) gives
\begin{eqnarray}
\nonumber
Z^{T+}_N(\beta) \; \;\leq \;\; \sum_{j=0}^{\infty} (N+j) \beta^j \lambda_d^{N+j}
& = & \frac{N \,\lambda_d^N}{1-\beta\lambda_d} \,+\,
\frac{\lambda_d^N\,(\beta \lambda_d)}{(1-\beta\lambda_d)^2} \\
\label{eq.Zbdgeom}
& \leq & \frac{N \,\lambda_d^N}{(1-\beta\lambda_d)^2}
\end{eqnarray}
(the above series converge because $0<\beta<\lambda_d^{-1}$).
Equations (\ref{eq.Zbdgeom}) and (\ref{eq.tN1ZN}) imply that
${\cal F}^{T+}(\beta)\,=\,\log \lambda_d$.
This proves that Equation (\ref{eq.Zlimneg}) extends to every $\beta<\lambda_d^{-1}$.
The extension to $\beta=\lambda_d^{-1}$ holds by continuity of ${\cal F}^{T+}$ (see
Equation (\ref{eq.Fgen}) and the comments below it).
\hfill$\Box$
\subsection{Branched Polymers at a Penetrable Boundary}
\label{sec.treeperm}
\medskip
\noindent
\textbf{Proof of Theorem \ref{prop.treeperm}:}
A mean-field bound due to Bovier, Fr\"{o}hlich, and Glaus (1986) (see Section 7.2 of
Slade 2006 for a more detailed proof) says that there exists a constant
$A$ such that
\begin{equation}
\label{eq.mfbound}
1+\sum_{N=1}^{\infty}N^2t_Nz^N \;\geq \; \frac{A}{\sqrt{1-\lambda_dz}}
\hspace{5mm}\hbox{for all $z\in [0,\lambda_d^{-1})$.}
\end{equation}
In particular, the power series on the left diverges at $z=1/\lambda_d$.
It follows that
\begin{equation}
\label{eq.iobound}
t_n \;\geq \; n^{-4}\lambda_d^n \hspace{5mm}\hbox{for infinitely many values of $n$},
\end{equation}
for otherwise we would have $t_n<n^{-4}\lambda_d^n$ for all sufficiently large $n$, and the
series $\sum_N N^2t_N\lambda_d^{-N}$ would be dominated by a convergent multiple of
$\sum_N N^{-2}$.
Let ${\cal B}_N$ be the set of trees in $\bar{\cal T}_N$ whose $x_1$-span is at most
$N/\log^2N$. Observe that the left-hand side of Equation (\ref{eq.spancond}) does not
change if we replace $\Pr_{{\cal T}_N}$ by $\Pr_{\bar{\cal T}_N}$.
Thus Equation (\ref{eq.spancond}) says that $|{\cal B}_N|\,\geq \, t_N/N^{\delta}$.
By Equation (\ref{eq.iobound}), we obtain
\begin{equation}
\label{eq.iobound2}
|{\cal B}_n| \;\geq \; n^{-(4+\delta)} \lambda_d^n
\hspace{5mm}\hbox{for infinitely many values of $n$}.
\end{equation}
For every $N>1$, let
\begin{equation*}
{\cal T}^*_N \;:=\; \left\{ \tau\in {\cal T}_N: \hbox{0 is the lexicographically smallest site of
${\cal H}(\tau)$}\, \right\}
\end{equation*}
and
\[
{\cal D}_N \;:=\; \{\tau\in {\cal T}^*_N: |{\cal H}(\tau)|\geq \log^2N \} \,.
\]
Consider an arbitrary $\tau$ in ${\cal B}_N$. Since the $N$ sites of $\tau$ take at most
$N/\log^2N$ distinct values of the $x_1$-coordinate, the pigeonhole principle gives an
integer $j\in [0,(N/\log^2N)-1]$ such that
$\tau$ has at least $\log ^2N$ sites $x$ satisfying $x_1=j$.
Let $\hat{x}$ be the lexicographically smallest site in $\{x\in \tau: x_1=j\}$,
and let $\hat{\tau}$ be the translation of $\tau$ by the vector $-\hat{x}$.
Then $\hat{\tau}\in {\cal D}_N$.
Observe that each $\hat{\tau}$ uniquely determines $\tau$, since ${\cal B}_N\subset \bar{\cal T}_N$
and no two trees in $\bar{\cal T}_N$ can be translations of one another.
Therefore
\begin{equation}
\label{eq.DNBN}
|{\cal D}_N|\,\geq \,|{\cal B}_N|.
\end{equation}
Now fix $\beta>0$. By Equation (\ref{eq.iobound2}), there exists an integer $n$ for which
\begin{equation}
\label{eq.fixn}
|{\cal B}_n|\exp(\beta \log^2 n) \,>\,\lambda_d^n.
\end{equation}
Fix this $n$ for the rest of the proof.
We can concatenate members of ${\cal D}_n$ by translating them along vectors in
the hyperplane $x_1=0$. Details are given in Section \ref{sec.cat} below.
For any integer $k\geq 2$, we can concatenate any $k$ members of ${\cal D}_n$ in this way to
produce a member $\tilde{\tau}$ of ${\cal T}_{kn}$ with $|{\cal H}(\tilde{\tau})|\,\geq \, k\log^2n$.
Moreover, this map $({\cal D}_n)^k\rightarrow {\cal T}_{kn}$ is injective (see Section \ref{sec.cat}).
Therefore, using Equation (\ref{eq.DNBN}), we have
\begin{align}
Z_{kn}^{TP}(\beta) \;& \geq \; |{\cal D}_n|^k \exp(\beta k \log^2n)
\nonumber \\
& \geq \; |{\cal B}_n|^k \exp(\beta k \log^2n)
\hspace{5mm}(k=1,2,\ldots).
\label{eq.ZPconcat}
\end{align}
Take the $(kn)^{th}$ root of Equation (\ref{eq.ZPconcat}) and let $k\rightarrow\infty$.
Since the limit of the left-hand side exists, we obtain
\[
\exp({\cal F}^{TP}(\beta) ) \;\geq \; \left( |{\cal B}_n| \exp(\beta \log^2n) \right)^{1/n},
\]
and the right hand side is strictly greater than $\lambda_d$ by Equation (\ref{eq.fixn}).
This proves that ${\cal F}^{TP}(\beta) \,>\, \log \lambda_d$.
\hfill $\Box$
\begin{rem}
\label{rem.treeperm}
The analogue of Equation (\ref{eq.mfbound}) for lattice animals appears in
Section 1.3 of Hara and Slade (1990). Everything else in this section extends
immediately to lattice animals.
\end{rem}
\subsection{Concatenation of Lattice Branched Polymers}
\label{sec.cat}
This section describes a concatenation procedure that preserves the number of
sites in the surface $x_1=0$. We shall discuss trees, but the argument for
animals is essentially the same.
Let $N$ and $M$ be positive integers. We shall describe an operation $\oplus$
such that, for every pair of trees $\tau\in{\cal T}^*_N$ and $\psi\in{\cal T}^*_M$,
we obtain a tree $\tau\oplus\psi\in {\cal T}^*_{N+M}$ such that
$|{\cal H}(\tau\oplus\psi)| \,=\, |{\cal H}(\tau)|\,+\,|{\cal H}(\psi)|$. Moreover, the operation
$\oplus: {\cal T}^*_N\times {\cal T}^*_M \rightarrow{\cal T}^*_{N+M}$
is one-to-one.
Let $\tau\in{\cal T}^*_N$ and $\psi\in{\cal T}^*_M$. Let
\[ K \;=\; \max\left\{ k\in \mathbb{Z}: (\psi+ku^{(2)})\cap \tau \neq \emptyset \right\} \,.
\]
Since $\psi\cap\tau$ contains the origin, we see that $K\geq 0$.
Let $v$ be a site in $(\psi+Ku^{(2)})\cap \tau$, and let $b$ be the edge from $v$ to $v+u^{(2)}$.
Observe that $\psi+(K+1)u^{(2)}$ contains $v+u^{(2)}$ but contains no point of $\tau$.
Therefore $(\psi+(K+1)u^{(2)})\cup \tau \cup b$ is a tree, which we shall call $\theta$.
We define $\tau\oplus\psi$ to be $\theta$. We shall now check that $\theta$ has
the claimed properties of $\oplus$.
First observe that the construction ensures that we have
\begin{verse}
\textbf{Property A:} \hspace{1mm} ${\cal H}(\theta)$ is the disjoint union of
${\cal H}(\psi)+(K+1)u^{(2)}$ and ${\cal H}(\tau)$.
\end{verse}
It is clear that $\theta\in {\cal T}_{N+M}$. To show that $\theta\in {\cal T}^*_{N+M}$, we
must show that 0 is the lexicographically smallest site of ${\cal H}(\theta)$.
But this follows from Property A, the fact that 0 is the lexicographically smallest site of
${\cal H}(\tau)$ and of ${\cal H}(\psi)$, and our earlier observation that $K\geq 0$.
The relation $|{\cal H}(\tau\oplus\psi)| \,=\, |{\cal H}(\tau)|\,+\,|{\cal H}(\psi)|$ also
follows from Property A.
It remains to show that $\oplus$ is one-to-one, i.e.\ that we can recover $\tau$ and $\psi$
knowing $\theta$ (for given $N$ and $M$).
To do this, we first observe that for the edge $b$ in our construction, the
following property holds with $e=b$:
\begin{verse}
\textbf{Property B:} \hspace{1mm} Deleting the edge $e$ from $\theta$ creates
two components, and the component containing the origin has exactly $N$ sites.
\end{verse}
In general, there may be two or more edges $e$ of $\theta$ that satisfy Property B, so
we need to decide which of them is $b$.
Let $J=\max\{j\in \mathbb{Z}: ju^{(2)}\in \theta\}$. Since $(K+1)u^{(2)}\in \theta$, we see
that $J\geq K+1$. Thus, whatever $\tau$ and $\psi$ are, we know that $0\in \tau$
and $Ju^{(2)}\not\in \tau$ (by the definition of $K$ and the fact that $Ju^{(2)}\in \psi+Ju^{(2)}$).
Therefore the edge $b$ belongs to $\pi$, where $\pi$
is any path in $\theta$ from 0 to $Ju^{(2)}$.
(When $\theta$ is a tree, there is only one such path.)
Furthermore, it is not hard to see that at most one edge of $\pi$ can satisfy Property B.
Therefore the edge $b$ is determined from $\theta$, and hence $\tau$ and $\psi$ are determined.
This proves that $\oplus$ is one-to-one.
\section{Linear Polymers}
\label{sec.SAW2}
\subsection{Self-Avoiding Walks at an Impenetrable Boundary}
\label{sec.SAWimp}
\noindent
\textbf{Proof of Theorem \ref{prop.sawimp}:}
Hammersley et al.\ (1982) proved that ${\cal F}^{W+}(\beta)=\log\mu_d$
for every $\beta\leq 0$, so we shall only consider positive $\beta$.
The general idea of the proof is the same as for trees (Theorem \ref{prop.tree}),
but there is a technical
difficulty when it comes to proving the analogue of $|{\cal T}_N^{(j)}| \,\leq |{\cal T}_{N+j}|$.
To get around this, we introduce a slightly different model of adsorption, in which we weight
a walk according to the number of edges in the surface. For $\omega\in{\cal S}_N^+$,
define ${\cal H}\Left(\omega)$ to be the set of edges of $\omega$ that have both endpoints
in $\{x\in \mathbb{Z}^d:x_1=0\}$, and define
\[
Z^{WW+}_N(\beta) \; :=\; \sum_{\omega\in {\cal S}_N^+} \exp(\beta|{\cal H}\Left(\omega)|) \,.
\]
Then $|{\cal H}(\omega)| \;\leq \; 2\,|{\cal H}\Left(\omega)|$
for every $\omega\in {\cal S}_N^+$,
and hence for every $\beta\geq 0$ we have
\begin{equation}
\label{eq.Zww}
\; Z_N^{W+}(\beta) \;\leq \; Z_N^{WW+}(2\beta) \,.
\end{equation}
We define a \textit{marked walk} (with $N$ sites) to be a SAW $\omega$ in ${\cal S}_N^+$ that
has a nonnegative integer $m(\omega;b)$ assigned to each edge $b$ of ${\cal H}\Left(\omega)$.
Let ${\cal S}_N^{(j)}$ be the set of all marked walks $\omega$ with $N$ sites such that
$\sum_{b\in{\cal H}\Left(\omega)}m(\omega;b)\,=\,j$. Then the same argument as in the
proof of Theorem \ref{prop.tree} shows that
\begin{equation}
\label{eq.ZbdSj}
Z^{WW+}_N(2\beta) \;\leq \; \sum_{j=0}^{\infty} (2\beta)^j \,\left| {\cal S}_N^{(j)} \right| \,.
\end{equation}
Now, fix a positive $\beta< \frac{1}{2}\mu_d^{-2}$. Choose $\epsilon>0$ small enough so that
$2\beta(\mu_d+\epsilon)^2<1$. Then there exists a constant $A$ such that
\begin{equation}
\label{eq.sawsum}
\sum_{n=0}^Mc_n \;\leq \; A(\mu_d+\epsilon)^M \hspace{5mm}\mbox{for all $M\geq 0$}.
\end{equation}
Consider an arbitrary marked walk $\omega$ in ${\cal S}_N^{(j)}$. Let $E_1$ be the
set of edges of $\omega$ that are not in ${\cal H}\Left(\omega)$. Let $E_2$ be the
set of edges in ${\cal H}\Left(\omega)$ after each edge is translated in the $-x_1$
direction by a distance equal to the number of marks on that edge:
\[ E_2 \;=\; \{b-m(\omega;b)u^{(1)}: b\in {\cal H}\Left(\omega) \} \,.
\]
Let $f(\omega)$ be the shortest SAW starting at the origin that contains all edges
of $E_1\cup E_2$ and all of whose remaining edges are parallel to $\pm u^{(1)}$.
Observe that $f(\omega)$ is obtained by adding at most $2j$ edges
to $E_1\cup E_2$.
It is not hard to see that the function $f:{\cal S}_N^{(j)}\rightarrow \bigcup_{n=N}^{N+2j}{\cal S}_n$
is one-to-one, so by Equation (\ref{eq.sawsum})
\[ |{\cal S}_N^{(j)}| \;\leq \; A(\mu_d+\epsilon)^{N+2j}.
\]
From this and Equation (\ref{eq.ZbdSj}), and our choice of $\epsilon$, we obtain
\[
Z_N^{WW+}(2\beta) \;\leq\; \frac{A (\mu_d+\epsilon)^N}{1-2\beta(\mu_d+\epsilon)^2} \,.
\]
Combining this with Equation (\ref{eq.Zww}) proves that
${\cal F}^{W+}(\beta) \,\leq\,\log(\mu_d+\epsilon)$.
Since $\epsilon$ can be made arbitrarily small, and since
${\cal F}^{W+}(\beta)\,\geq \,{\cal F}^{W+}(0)\,=\,\log\mu_d$, this proves the result for
every $\beta<\frac{1}{2}\mu_d^{-2}$; the case $\beta=\frac{1}{2}\mu_d^{-2}$ then follows by
continuity of ${\cal F}^{W+}$, as in the proof of Theorem \ref{prop.tree}.
\hfill $\Box$
\subsection{Self-Avoiding Walks at a Penetrable Boundary}
\label{sec.sawpen}
\noindent
\textbf{Proof of Theorem \ref{prop.sawperm}:}
First observe that if $\omega\in {\cal S}_N^B$, then $\omega(1)=(0,\ldots,0,1)$ and
$|\omega_1(N)|\leq N-1$.
It is known
that the series $\sum_{n=1}^{\infty}b_nz^n$ diverges at $z=\mu_d^{-1}$
(Kesten, 1963; or Corollary 3.1.8 of Madras and Slade 1993). Therefore we have
\begin{equation}
\label{eq.iobound3}
b_n \;\geq \; n^{-2} \mu_d^n
\hspace{5mm}\hbox{for infinitely many values of $n$}.
\end{equation}
For every $N>1$, let
\[
{\cal D}_N \;:=\; \{\omega\in {\cal S}^B_N: \textrm{Span}(\omega) \,\leq \,N/\log^2N \} \,.
\]
By the assumption (\ref{eq.Wspancond}), $|{\cal D}_N|/b_N \,\geq \,N^{-\delta}$ for sufficiently large $N$. Therefore by (\ref{eq.iobound3}),
\begin{equation}
\label{eq.iobound4}
|{\cal D}_n| \;\geq \; n^{-(2+\delta)} \mu_d^n
\hspace{5mm}\hbox{for infinitely many values of $n$}.
\end{equation}
Fix $\beta>0$.
Fix a positive integer $n$ such that $\frac{\beta}{2}\log^2n > \log(4n^{4+\delta})$ and
the inequality of (\ref{eq.iobound4}) holds.
For integers $j$ and $m$ let
\[
{\cal D}_{n,j,m} \;:=\; \{\omega\in {\cal D}_n: |\{i:\omega_1(i)=j\}|\geq \log^2n,\,
\omega_1(n)=m\}\,.
\]
Since
\[ {\cal D}_n\,=\,\bigcup_{j=-(n-1)}^{n-1}\bigcup_{m=-(n-1)}^{n-1}{\cal D}_{n,j,m}
\]
(every $\omega\in{\cal D}_n$ satisfies $\textrm{Span}(\omega)\leq n/\log^2n$, so by the
pigeonhole principle some value $j$ of $\omega_1$ occurs at least $\log^2n$ times along $\omega$)
and by symmetry,
there exist integers $J\geq 0$ and $M$ such that
$|{\cal D}_{n,J,M}|\,\geq \,|{\cal D}_n|/(2n-1)^2$.
By this and (\ref{eq.iobound4}),
\begin{equation}
\label{eq.DnJMbd}
|{\cal D}_{n,J,M}|
\;\geq \; \frac{\mu_d^n}{4n^{4+\delta}} \,.
\end{equation}
For two SAWs $\omega=(\omega(0),\ldots,\omega(N))$ and $\psi=(\psi(0),\ldots,\psi(M))$,
we define the concatenation $\omega\oplus\psi$ to be the $(N+M)$-step walk
$\theta$ defined by
\begin{align*}
\theta(i) \; & =\; \omega(i) \hspace{25mm} \hbox{for $i=0,\ldots, N$, and } \\
\theta(N+j) \;&=\; \omega(N)+\psi(j)-\psi(0)
\hspace{5mm} \hbox{for $j=1,\ldots, M$} \,.
\end{align*}
In general, $\theta$ need not be self-avoiding. However, if $\omega$ and $\psi$ are both
bridges, then $\theta$ is self-avoiding---indeed, $\theta$ is a bridge. Thus
$\oplus$ defines a one-to-one map from ${\cal S}^B_N\times {\cal S}^B_M$ into
${\cal S}^B_{N+M}$.
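The concatenation $\oplus$ is straightforward to implement. The following Python sketch checks, on one small pair of two-dimensional bridges, that the concatenation of two bridges is again a self-avoiding bridge (an illustration, not a proof).

```python
def concat(omega, psi):
    # theta(N + j) = omega(N) + psi(j) - psi(0)
    shift = tuple(a - b for a, b in zip(omega[-1], psi[0]))
    return omega + [tuple(x + s for x, s in zip(p, shift)) for p in psi[1:]]

def is_bridge(walk):
    # bridge condition on the last coordinate
    w0, wN = walk[0][-1], walk[-1][-1]
    return all(w0 < p[-1] <= wN for p in walk[1:])

w1 = [(0, 0), (0, 1), (1, 1), (1, 2)]
w2 = [(0, 0), (0, 1), (-1, 1), (-1, 2)]
theta = concat(w1, w2)
assert is_bridge(w1) and is_bridge(w2) and is_bridge(theta)
assert len(set(theta)) == len(theta)        # theta is self-avoiding
assert len(theta) == len(w1) + len(w2) - 1  # an (N + M)-step walk
```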
Suppose now that $\omega\in {\cal D}_{n,J,M}$ and $\psi\in {\cal D}_{n,-J,-M}$, and
let $\theta\,=\,\omega\oplus\psi$. Then $\theta$ is a $(2n)$-step bridge
such that $\theta_1(2n)=0$ and $|\{i:\theta_1(i)=J\}|\geq \log^2 n$ (the inequality
is due only to sites in the first half of $\theta$).
We shall use these observations in the construction that follows.
For any positive integer $k$, let $\omega^{[1]},\ldots,\omega^{[k]}$ be bridges in
${\cal D}_{n,J,M}$ and let $\psi^{[1]},\ldots,\psi^{[k]}$ be bridges in ${\cal D}_{n,-J,-M}$.
Consider the bridge $\pi$ obtained by repeated concatenation of these bridges:
\[
\pi \;:=\; \omega^{[1]}\oplus\psi^{[1]} \oplus \omega^{[2]} \oplus \psi^{[2]} \oplus
\cdots \oplus \omega^{[k]}\oplus \psi^{[k]} \,.
\]
Then $|\{i:\pi_1(i)=J\}|\geq k\log^2 n$. Next,
let $\xi$ be the $(J+1)$-step bridge with $\xi(0)=0$ and $\xi(J{+}1)=(-J,0,\ldots,0,1)$.
For $\zeta:=\xi\oplus\pi$, we have
$\zeta\in {\cal S}^B_{J+1+2kn}$ and $|{\cal H}(\zeta)| \,\geq \, k\log^2 n$.
Since
$\zeta$ unambiguously determines the $\omega^{[i]}$'s and $\psi^{[i]}$'s, it follows that
\begin{align*}
|\{\zeta \in {\cal S}^B_{J+1+2kn}: |{\cal H}(\zeta)| \,\geq \, k\log^2 n \}|
\; & \geq \; \left( |{\cal D}_{n,J,M}|\,|{\cal D}_{n,-J,-M}|\right)^k \\
& = \; |{\cal D}_{n,J,M}|^{2k} \hspace{8mm}\hbox{(by symmetry).}
\end{align*}
Using this and Equation (\ref{eq.DnJMbd}), we see that
\[
Z^{WP}_{J+1+2kn}(\beta) \;\geq\;
\exp(\beta k\log^2 n) \,\frac{\mu_d^{2kn} }{(4n^{4+\delta})^{2k}} \,.
\]
Therefore
\[
\frac{\log Z^{WP}_{J+1+2kn}(\beta) }{J+1+2kn} \;\geq\;
\frac{\beta k \log^2n -2k\log(4n^{4+\delta}) +2kn\log\mu_d}{J+1+2kn} \,.
\]
Now let $k\rightarrow\infty$, and we obtain
\begin{align*}
{\cal F}^{WP}(\beta) \; & \geq \;
\frac{1}{n} \left( \frac{\beta \log^2 n} {2} \,-\, \log(4n^{4+\delta}) \right) \,+\, \log \mu_d \\
& > \; \log\mu_d \,,
\end{align*}
where the strict inequality follows from our choice of $n$. This proves the result.
\hfill $\Box$
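The arithmetic of the final limit can be checked numerically. In the following Python sketch the values of $\beta$, $\delta$, $J$, $n$ and $\mu_d$ are purely illustrative (chosen so that the proof's condition on $n$ holds); they are not the actual constants of the model.

```python
import math

# Illustrative values only -- NOT the actual constants of the model.
beta, delta, J = 2.0, 1.0, 3
n, mu = 1000, 4.68

L2 = math.log(n) ** 2
penalty = math.log(4 * n ** (4 + delta))

# The choice of n in the proof requires (beta/2) log^2 n > log(4 n^{4+delta}).
assert beta / 2 * L2 > penalty

# Limit of the lower bound as k -> infinity.
target = (beta / 2 * L2 - penalty) / n + math.log(mu)
assert target > math.log(mu)  # the strict inequality of the theorem

# The pre-limit lower bound on (log Z)/(J+1+2kn) converges to this limit.
for k in (10**4, 10**6, 10**8):
    ratio = (beta * k * L2 - 2 * k * penalty
             + 2 * k * n * math.log(mu)) / (J + 1 + 2 * k * n)
    assert abs(ratio - target) < 10 / k
```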
\section*{Acknowledgments}
This research was supported in part by a Discovery Grant from the Natural Sciences and
Engineering Research Council of Canada. Part of this work was done while the author was
visiting the Fields Institute for Research in Mathematical Sciences.
\makeatletter
\renewcommand\section{\@startsection{section}{1}{\z@}
{-30pt \@plus -1ex \@minus -.2ex}
{2.3ex \@plus.2ex}
{\normalfont\normalsize\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries}}
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname. }
\makeatother
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}{Lemma}
\newtheorem{conjecture}{Conjecture}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\newcommand{\mathbb{N}}{\mathbb{N}}
\begin{document}
\begin{center}
\uppercase{\bf Consecutive Integers and the Collatz Conjecture}
\vskip 20pt
{\bf Marcus Elia}\\
{\smallit Department of Mathematics, SUNY Geneseo, Geneseo, NY}\\
{\tt mse1@geneseo.edu}\\
\vskip 10pt
{\bf Amanda Tucker}\\
{\smallit Department of Mathematics, University of Rochester, Rochester, NY}\\
{\tt amanda.tucker@rochester.edu}\\
\end{center}
\vskip 30pt
\vskip 30pt
\centerline{\bf Abstract}
\noindent
Pairs of consecutive integers have the same height in the Collatz problem with surprising frequency. Garner gave a conjectural family of conditions for exactly when this occurs. Our main result is an infinite family of counterexamples to Garner's conjecture.
\pagestyle{myheadings}
\markright{\smalltt INTEGERS: 15 (2015)\hfill}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\section{Introduction} \label{intro}
The Collatz function $C$ is defined on the positive integers; its iterates are given recursively by the following definition.
\[
C^k(n)=
\begin{cases}
n, \text{ if } k=0\\
C^{k-1}(n)/2, \text{ if } C^{k-1}(n)\text{ is even}\\
3\,C^{k-1}(n)+1, \text{ if } C^{k-1}(n)\text{ is odd}.
\end{cases}
\]
The famed Collatz conjecture states that, under the Collatz map, every positive integer converges to one~\cite{Lagarias}.
The \emph{trajectory} of a number is the path it takes to reach one. For example, the trajectory of three is
\begin{equation*}
3\rightarrow 10\rightarrow 5\rightarrow 16\rightarrow 8\rightarrow 4\rightarrow 2\rightarrow 1.
\end{equation*}
The \emph{parity vector} of a number is its trajectory considered modulo two. So the parity vector of three is
\begin{equation*}
\langle 1,0,1,0,0,0,0,1\rangle.
\end{equation*}
Because applying the map $n \mapsto 3n+1$ to an odd number will always yield an even number, it is sometimes more convenient to use the following alternate definition of the Collatz map, often called $T$ in the literature.
\[
T^k(n)=
\begin{cases}
n, \text{ if } k=0\\
T^{k-1}(n)/2, \text{ if } T^{k-1}(n)\text{ is even}\\
(3\,T^{k-1}(n)+1)/2, \text{ if } T^{k-1}(n)\text{ is odd}.
\end{cases}
\]
With this new definition, the trajectory of three becomes
\begin{equation*}
3\rightarrow 5\rightarrow 8\rightarrow 4\rightarrow 2\rightarrow 1
\end{equation*}
and its $T$ parity vector is
\begin{equation*}
\langle 1,1,0,0,0,1\rangle.
\end{equation*}
Since the Collatz conjecture states that, for every positive integer $n$, there exists a non-negative integer $k$ such that $C^k(n)=1$, it is natural to ask for the smallest such value of $k$. This $k$ is called the \emph{height} of $n$ and denoted H$(n)$. So, for example, the height of three is seven because it requires seven iterations of the map $C$ for three to reach one. In this paper, height is used only in association with the map $C$, never the map $T$.
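The maps $C$ and $T$, trajectories, parity vectors, and heights are easy to compute directly; the following Python sketch reproduces the examples above.

```python
def C(n):
    """One step of the Collatz map C."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def T(n):
    """One step of the accelerated map T."""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def trajectory(n, step=C):
    """Trajectory of n down to 1 under the given map."""
    traj = [n]
    while n != 1:
        n = step(n)
        traj.append(n)
    return traj

def height(n):
    """Smallest k with C^k(n) = 1."""
    return len(trajectory(n, C)) - 1

assert trajectory(3, C) == [3, 10, 5, 16, 8, 4, 2, 1]
assert [x % 2 for x in trajectory(3, T)] == [1, 1, 0, 0, 0, 1]
assert height(3) == 7
assert height(12) == height(13) == 9  # first pair with equal height
```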
It turns out that consecutive integers frequently have the same height. Garner made a conjecture that attempts to predict, in terms of the map $T$ and its parity vectors, exactly which pairs have the same height~\cite{Garner}. He proved that his condition is sufficient to guarantee two consecutive numbers will have the same height, but only surmised that it is a necessary condition.
The main idea in this paper is that phrasing Garner's conjecture in terms of the map $C$ reveals an easier-to-verify implication of Garner's conjecture, namely, that if two consecutive integers have the same height, then they must reach $4$ and $5\pmod 8$ at the same step of their trajectory (see Proposition~\ref{GarnerEquivalent}). Because this condition is much easier to check than the conclusion of Garner's conjecture, we were able to find an infinite family of pairs of consecutive integers that do not satisfy this condition, and, hence, constitute counterexamples to Garner's conjecture (see Theorem~\ref{MainTheorem}).
\begin{bf}Acknowledgements:\end{bf}
This research was made possible by an Undergraduate Research Fellowship from the Research Foundation for SUNY. In addition, we would like to thank Jeff Lagarias and Steven J. Miller for helpful conversations.
We would also like to thank the referee for careful reading and helpful suggestions and corrections.
\section{Heights of consecutive integers}\label{heights}
Recall that the smallest non-negative $k$ such that $C^k(n)=1$ is called the \emph{height} of $n$ and denoted H$(n)$. The following is a graph of the height $H$ as a function of $n$.
\begin{center}
\includegraphics[height=2.4in]{StoppingTime1200highlightBW.jpg}
\end{center}
The striking regularity in the above graph is the starting point for our studies, but remains largely elusive. If one na\"ively searches for curves of best fit to the visible curves therein, one quickly runs into a problem. What appear to be distinct points in the above graph are actually clusters of points, as can be seen below. Thus, it is not entirely clear which points one ought to work with when trying to find a curve of best fit.
\begin{center}
\includegraphics[height=2.4in]{ExplanationBW.jpg}\\
\end{center}
This leads to the surprising observation that many consecutive integers have the same height. This is counterintuitive because if two integers are consecutive then they are of opposite parity, so the Collatz map initially causes one to increase ($n \mapsto 3n+1$) and the other to decrease ($n \mapsto n/2$). How, then, do they reach one in the same number of iterations? We give a sufficient congruence condition to guarantee two consecutive numbers will have the same height, and show that an all-encompassing theorem of the kind Garner conjectured in~\cite{Garner} is not possible. In fact, we show the situation is much more complicated than Garner originally thought.
The first pair of consecutive integers with the same height is twelve and thirteen. We see that for both numbers, $C^3(n)=10$. Clearly, once their trajectories coincide, they will stay together and have the same height. This happens because twelve follows the path
\begin{equation*}
12\rightarrow 6\rightarrow 3 \rightarrow 10,
\end{equation*}
and thirteen follows the path
\begin{equation*}
13\rightarrow 40\rightarrow 20\rightarrow 10.
\end{equation*}
Now we seek to generalize this. It turns out that twelve and thirteen merely form the first example of a general phenomenon, namely, numbers that are $4$ and $5\pmod{8}$ always coincide after the third iteration. The following result agrees with what Garner found using parity vectors~\cite{Garner}.
\begin{theorem}
If $n>4$ is congruent to $4\pmod{8}$, then $n$ and $n+1$ coincide at the third iteration and, hence, have the same height.
\end{theorem}
\begin{proof}
Suppose $n>4$ and $n\equiv 4\pmod{8}$. Then $n=8k+4$, for some $k\in\mathbb{N}$. Then, because $8k+4$ and $4k+2$ are even, while $2k+1$ is odd, the trajectory of $n$ under the map $C$ is
\[
8k + 4 \rightarrow 4k+2 \rightarrow 2k+1 \rightarrow 6k+4.
\]
Because $n+1=8k+5$ is odd, and $24k+16$ and $12k+8$ are even, the trajectory of $n+1$ under the map $C$ is
\[
8k+5 \rightarrow 24k+16 \rightarrow 12k+8 \rightarrow 6k+4.
\]
Therefore, $n$ and $n+1$ coincide at the third iteration.
\end{proof}
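The algebra in this proof is easy to confirm numerically. The following Python check (ours, not from the paper) verifies that every pair $8k+4$, $8k+5$ in a test range lands on the common value $6k+4$ after three applications of $C$.

```python
def C(n):
    """One step of the classical Collatz map."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def C_iter(n, k):
    """C^k(n): the k-th iterate of the map C."""
    for _ in range(k):
        n = C(n)
    return n

# n = 8k + 4 and n + 1 = 8k + 5 both reach 6k + 4 at the third step.
for k in range(1, 1000):
    n = 8 * k + 4
    assert C_iter(n, 3) == C_iter(n + 1, 3) == 6 * k + 4
```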
\section{Garner's conjecture}
Garner wanted to generalize this to predict all possible pairs of consecutive integers that coincide. Since he used the map $T$ (defined in Section~\ref{intro}) instead of the map $C$, we will do the same in this section except during the proof of Proposition~\ref{GarnerEquivalent}. He observed that whenever two consecutive integers have the same height, their parity vectors appear to end in certain pairs of corresponding \emph{stems} immediately before coinciding. He defined a \emph{stem} as a parity vector of the form
\begin{equation*}
s_i = \langle 0,\underbrace{ 1, 1, ..., 1}_{i\:\: 1's}, 0, 1\rangle,
\end{equation*}
and the \emph{corresponding stem} as
\begin{equation*}
s_i'=\langle 1,\underbrace{1,1,...,1}_{i\: \: 1's},0,0\rangle.
\end{equation*}
LaTourette used the following definitions of a stem and a block in her senior thesis~\cite{LaTourette}, which we adhere to here as well. In what follows, we write $T_w(n)$ for the result of applying the sequence of steps indicated by the parity vector $w$ to the input $n$ using the map $T$.
\begin{definition}
(LaTourette) A pair of parity sequences $s$ and $s'$ of length $k$ are \emph{corresponding stems} if, for any integer $x$, $T_s(x) = T_{s'}(x+1)$ and, for any initial subsequences $v$ and $v'$ of $s$ and $s'$ of equal length, $|T_v(x)-T_{v'}(x+1)|\ne 1$ and $T_v(x)\ne T_{v'}(x+1)$.
\end{definition}
\begin{definition}
(LaTourette) A \emph{block prefix} is a pair of parity sequences $b$ and $b'$, each of length $k$, such that for all positive integers $x$, $T_b(x)+1 = T_{b'}(x+1)$.
\end{definition}
In his conclusion, Garner conjectured that all corresponding stems will be of the form $s_i$ and $s_i'$ listed above. LaTourette conjectured the same.
\begin{conjecture}~\label{GarnConj}
(Garner) Any pair of consecutive integers of the same height will have parity vectors for the non-overlapping parts of their trajectories ending in $s_i$ and $s_i'$~\cite{Garner}.
\end{conjecture}
Garner gave no bound on the length of stem involved, though, so searching for counterexamples by computer was a lengthy task. The big innovation in this paper is that using the map $C$ instead of the map $T$ yields a much simpler implication of Garner's conjecture, which makes it possible to search for counterexamples.
\begin{proposition}\label{GarnerEquivalent}
If $n$ and $n+1$ have parity vectors for the non-overlapping parts of their trajectories ending in $s_i$ and $s_i'$, and $k$ is the smallest positive integer such that $C^k(n)=C^k(n+1)$, then $C^{k-3}(n)\equiv 4\pmod{8}$ and $C^{k-3}(n+1)=C^{k-3}(n)+1$ or $C^{k-3}(n+1)\equiv 4\pmod{8}$ and $C^{k-3}(n)=C^{k-3}(n+1)+1$.
\end{proposition}
\begin{proof}
To see this, we must change the Garner stems to be consistent with the map $C$. Converting the parity vectors simply involves inserting an extra `0' after each `1'. So Garner's stems in terms of the map $C$ now look like
\begin{equation*}
s_i = \langle 0,\underbrace{ 1,0, 1,0, ..., 1,0}_{i\:\: 1,0's}, 0, 1,0\rangle,
\end{equation*}
and
\begin{equation*}
s_i'=\langle 1,0,\underbrace{1,0,1,0,...,1,0}_{i\: \: 1,0's},0,0\rangle.
\end{equation*}
Now we will rearrange this more strategically. We have
\begin{equation*}
s_i = \langle \underbrace{0, 1,0, 1, ..., 0,1}_{i\:\: 0,1's},0, 0, 1,0\rangle,
\end{equation*}
and
\begin{equation*}
s_i'=\langle \underbrace{1,0,1,0,...,1,0}_{i\: \: 1,0's},1,0,0,0\rangle.
\end{equation*}
The point of these stems is that the trajectories coincide right after this vector. Since both end with a `0', they have coincided one step before the end, so we can simply omit the last `0'. Now the corresponding stems are only $\langle 0,0,1\rangle$ and $\langle 1,0,0\rangle$, with repeated blocks in front of them. Terras~\cite{Terras} proved that there is a bijection between the set of integers modulo $2^k$ and the set of parity vectors of length $k$. The algorithm to get from a parity vector of length 3 to an integer modulo 8 is explicit, so we can easily determine that numbers with those parity vectors are congruent to 4 and 5$\pmod{8}$, respectively.
Let $j$ be the point at which they coincide, so $C^k(n) = C^k(n+1) = j$. Applying $C^{-1}$ to $j$ as prescribed by both $\langle 0,0,1\rangle$ and $\langle 1,0,0\rangle$ yields $\frac{4j-1}{3}-1$ and $\frac{4j-1}{3}$, respectively. Thus, we see that $C^{k-3}(n+1)=C^{k-3}(n)+1$. An identical argument yields the case where $C^{k-3}(n+1)\equiv 4\pmod{8}$, and we get $C^{k-3}(n)=C^{k-3}(n+1)+1$ in that case as well.
\end{proof}
So, written in terms of the map $C$, all of Garner's other stems are simply repeated blocks of `$01$' and `$10$' in front of the stems $\langle 0,0,1\rangle$ and $\langle 1,0,0\rangle$. This is the benefit of applying the map $C$ in this situation. It is now feasible to check if a pair of consecutive integers is a counterexample to Garner's conjecture. Suppose $n$ and $n+1$ have the same height. According to Garner's conjecture, $n$ and $n+1$ would have $T$ parity vectors before coinciding that end in $s_i$ and $s_i'$. By Proposition \ref{GarnerEquivalent}, this would in turn imply that $n$ and $n+1$ have $C$ parity vectors ending in $\langle 0,0,1\rangle$ and $\langle 1,0,0\rangle$. Therefore, if we find a pair of positive integers $n$ and $n+1$ such that their parity vectors do not end in $\langle 0,0,1\rangle$ and $\langle 1,0,0\rangle$, we have found a counterexample to Garner's conjecture.
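This check is straightforward to mechanize. The Python sketch below (ours; the function names are our own) runs both trajectories to the first common value and tests the necessary condition of the proposition above: three steps before coinciding, the pair must sit at consecutive values $8j+4$ and $8j+5$.

```python
def C(n):
    """One step of the classical Collatz map."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def trajectory_to_1(n):
    """The C-trajectory of n, down to 1."""
    traj = [n]
    while traj[-1] != 1:
        traj.append(C(traj[-1]))
    return traj

def merge_step(n):
    """Smallest k with C^k(n) = C^k(n+1), or None if the heights
    of n and n+1 differ (in which case they never coincide)."""
    a, b = trajectory_to_1(n), trajectory_to_1(n + 1)
    for k in range(min(len(a), len(b))):
        if a[k] == b[k]:
            return k
    return None

def violates_garner(n):
    """True iff n and n+1 coincide at some step k >= 3 but are NOT
    at a pair {8j+4, 8j+5} three steps earlier -- i.e. the pair
    would be a counterexample to Garner's conjecture."""
    k = merge_step(n)
    if k is None or k < 3:
        return False
    a, b = trajectory_to_1(n), trajectory_to_1(n + 1)
    lo, hi = sorted((a[k - 3], b[k - 3]))
    return not (hi == lo + 1 and lo % 8 == 4)

# 12 and 13 coincide at step 3 and obey the condition,
# since 12 is already congruent to 4 mod 8:
assert merge_step(12) == 3 and not violates_garner(12)
```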
\section{A counterexample to Garner's conjecture}
We initially believed Garner's conjecture, but have since found many counterexamples. The first counterexample is the pair 3067 and 3068. The $C$-parity vector of $3067$ before coinciding with $3068$ is
\begin{eqnarray*}
\langle 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1,
0, 0, 0, 1 \rangle,
\end{eqnarray*}
and that of $3068$ is
\begin{eqnarray*}
\langle 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1,
0, 0, 0, 0\rangle.
\end{eqnarray*}
By inspection, the parity vectors do not end with $\langle 0,0,1\rangle$ and $\langle 1,0,0\rangle$ as Garner predicted. Thus, Garner's conjecture is false.
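The displayed vectors are easy to regenerate. In the Python sketch below (ours), the two trajectories are run side by side until they meet, recording parities along the way.

```python
def C(n):
    """One step of the classical Collatz map."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

a, b = 3067, 3068
va, vb = [], []              # C-parity vectors before coinciding
while a != b:
    va.append(a % 2)
    vb.append(b % 2)
    a, b = C(a), C(b)

# The two trajectories first meet at 1384, after 27 steps:
assert a == b == 1384 and len(va) == 27

# The parity vectors agree with those displayed above ...
assert va == [1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0,
              0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1]
assert vb == [0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0,
              1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]

# ... and as a pair they do not end in the stems <0,0,1>, <1,0,0>:
assert {tuple(va[-3:]), tuple(vb[-3:])} != {(0, 0, 1), (1, 0, 0)}
```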
A computer search found that there are 946 counterexample pairs less than a million. For numbers less than 5 billion, $0.214$\% of pairs of consecutive integers of the same height are counterexamples. By a simple argument, we can see that there must be infinitely many counterexample pairs.
\begin{theorem}\label{MainTheorem}
There are infinitely many counterexamples to Garner's conjecture.
\end{theorem}
\begin{proof}
Consider the parity vectors of 3067 and 3068 up to the point where they coincide. We know that there will be a pair with the same parity vectors for every integer of the form $2^{19}m + 3067$ by Terras's bijection~\cite{Terras}. Each of these pairs will coincide in the same way that 3067 and 3068 do and, thus, have the same height. Therefore, there are infinitely many counterexamples to Garner's conjecture.
\end{proof}
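The family in this proof can be spot-checked directly. The following Python sketch (ours) verifies, for small $m$, that each pair $2^{19}m+3067$, $2^{19}m+3068$ coincides after exactly 27 steps, just as 3067 and 3068 do, and hence has a common height.

```python
def C(n):
    """One step of the classical Collatz map."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def merge(n):
    """The first step k at which the trajectories of n and n+1
    meet, together with the common value."""
    a, b, k = n, n + 1, 0
    while a != b:
        a, b, k = C(a), C(b), k + 1
    return k, a

# The base pair coincides at 1384 after 27 steps, and every shifted
# pair coincides after the same 27 steps (checked here for small m):
assert merge(3067) == (27, 1384)
for m in range(1, 25):
    k, _ = merge(2**19 * m + 3067)
    assert k == 27
```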
\section{Conclusion}
At this point, we look at those numbers that do not have the stems Garner predicted to see why they coincide. To salvage Garner's conjecture, we seek to expand the list of possible stems. To see what is going on, we have no choice but to examine the trajectories of 3067 and 3068, side by side (see Appendix A).
We can see that there are no other places within the trajectories where their values have a difference of one. Therefore, by the current definition of a stem, the entire parity vector of length 27 (up until they coincide at 1384) is a new stem. However, by this logic, the next counterexample, 4088 and 4089, has a new stem of length 30. The next pair, 6135 and 6136, has a stem of length 28. It would be ridiculous to have only one stem (of length 3) before 3067 and to suddenly add dozens more of varying lengths. Instead, we look for some new type of stem within these counterexamples, stems that do not start with consecutive integers. The trajectories of all three pairs listed above coincide at 1384. In fact, they have the same 22 elements leading up to that. Thus, it is tempting to label that beginning as the stem. But if we look further, the consecutive integers 32743 and 32744 join that group just 5 steps before coinciding at 1384. Therefore, the situation is much more complicated than Garner's stems suggest. It would be interesting to know if there is some pattern similar to what Garner conjectured, perhaps with a much-expanded list of stems, that explains every pair of consecutive numbers that converges together. However, we have found no such simple salvage of Garner's conjecture.
We have shown that pairs of integers of the form $8m+4$ and $8m+5$ have coinciding trajectories after 3 steps (and therefore have the same height). We have also shown that all pairs that obey Garner's conjecture ultimately reduce down to the $4$ and $5 \pmod{8}$ case before coinciding. This allowed us to find that 3067 and 3068 form the smallest of an infinite family of counterexamples to Garner's longstanding conjecture~\cite{Garner}.
| {
"timestamp": "2015-12-01T02:15:04",
"yymm": "1511",
"arxiv_id": "1511.09141",
"language": "en",
"url": "https://arxiv.org/abs/1511.09141",
"abstract": "Pairs of consecutive integers have the same height in the Collatz problem with surprising frequency. Garner gave a conjectural family of conditions for exactly when this occurs. Our main result is an infinite family of counterexamples to Garner's conjecture.",
"subjects": "Number Theory (math.NT)",
"title": "Consecutive Integers and the Collatz Conjecture",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9752018398044142,
"lm_q2_score": 0.7279754489059775,
"lm_q1q2_score": 0.7099229971055536
} |
https://arxiv.org/abs/1108.1938 | The classification of weighted projective spaces | We obtain two classifications of weighted projective spaces; up to homeomorphism and up to homotopy equivalence. We show that the former coincides with Al Amrani's classification up to isomorphism of algebraic varieties, and deduce the latter by proving that the Mislin genus of any weighted projective space is rigid. | \section{Introduction}
Weighted projective spaces are the simplest projective toric varieties
that exhibit orbifold singularities. They have been extensively
investigated by algebraic geometers, but have attracted only fleeting
attention from algebraic topologists
since Kawasaki's pioneering work~\cite{Kawasaki:1973}, in which he
computed their integral cohomology rings. Subsequently, their
$K$-theory was determined by Al~Amrani~\cite{AlAmrani:1994}, and the
study of their $KO$-theory was initiated by
Nishimura--Yosimura~\cite{NishimuraYosimura:1997}.
In toric geometry, weighted projective spaces are classified by their
fans. Here, we give two classifications that are fundamental to
algebraic topology: up to homeo\-morphism, and up to homotopy
equivalence. We obtain the second as a consequence of the
fact that the Mislin genus of a weighted projective space is rigid.
Our results are stated below, following summaries of the definitions and
notation.
A \emph{weight vector}~$\chi=(\chi_{0},\dots,\chi_{n})$ is a finite
sequence of positive integers. It gives rise to a weighted action
of~$S^{1}$ on~$S^{2n+1}\subset\mathbb C^{n+1}$,
\begin{equation}
\label{eq:weighted-action}
g\cdot z = \bigl(g^{\chi_{0}}z_{0},\dots,g^{\chi_{n}}z_{n}\bigr)
\qquad \hbox{for~$g\in S^{1}, z\in S^{2n+1}$.}
\end{equation}
The quotient~$S^{2n+1}/S^{1}\langle \chi\rangle$ is the weighted
projective space~$\mathbb P(\chi)$. Alternatively, $\mathbb P(\chi)$ may be
defined as the quotient of~$\mathbb C^{n+1}\setminus\{0\}$ by the same
weighted action of~$\C^{\times}$; this exhibits $\mathbb P(\chi)$ as a complex
projective variety.
Scaling the weight vector~$\chi$ leads to isomorphic weighted projective
spaces~$\mathbb P(\chi)$ and~$\mathbb P(m\chi)$, for any integer~$m\geq 1$.
Moreover,
if all weights except, say, $\chi_{0}$ are divisible by some prime~$p$,
then the map
\begin{equation}
\label{eq:normiso}
\mathbb P(\chi) \to \mathbb P(\chi_{0},\chi_{1}/p,\dots,\chi_{n}/p),
\quad
\bigl[z_{0}:\dots:z_{n}\bigr] \mapsto \bigl[z_{0}^{p}:z_{1}:\dots:z_{n}\bigr]
\end{equation}
is an isomorphism as well, \emph{cf.}~\cite[\S 5.7]{Fletcher:2000}. This
leads to the notion of \emph{normalized weights}: a weight
vector~$\chi$ is normalized if for any prime~$p$ at least two weights
in~$\chi$ are not divisible by~$p$. Any weight vector can be
transformed to a unique normalized vector by repeated application of
scaling and \eqref{eq:normiso}. Consequently, two weighted projective
spaces are isomorphic as algebraic varieties and homeomorphic
as topological spaces if they have the same normalized weights,
up to order. We prove that the converse is also true.
In particular, we recover Al~Amrani's classification up to
isomorphism of algebraic varieties \cite[\S 8.1]{AlAmrani:1989}.
\begin{theorem}
\label{thm:classification-homeomorphism}
The following are equivalent for any weight vectors $\chi$~and~$\chi'$:
\begin{enumerate}
\item \label{thm:a1}
The normalizations
of $\chi$~and~$\chi'$ are the same, up to order.
\item \label{thm:a2}
$\mathbb P(\chi)$~and~$\mathbb P(\chi')$ are isomorphic as algebraic varieties.
\item \label{thm:a3}
$\mathbb P(\chi)$~and~$\mathbb P(\chi')$ are homeomorphic.
\end{enumerate}
\end{theorem}
For any prime $p$, the \emph{$p$-content}~$\pcont{\chi}{p}$ of~$\chi$
is the vector made up of the highest powers of~$p$ dividing the
individual weights. For example, $\pcont{(1,2,3,4)}{2}=(1,2,1,4)$.
Let $\chi$~and~$\chi'$ be two normalized weight vectors. It follows
from Kawasaki's result that the cohomology rings
$H^{*}(\mathbb P(\chi);\mathbb Z)$~and~$H^{*}(\mathbb P(\chi');\mathbb Z)$ are isomorphic if and
only if, for all primes~$p$, the $p$-contents
$\pcont{\chi}{p}$~and~$\pcont{\chi'}{p}$ are the same up to order.
The same phenomenon can be observed in $K$-theory and $KO$-theory. In
fact, no cohomology theory can tell such spaces apart:
\begin{theorem}
\label{thm:classification-homotopy}
Two weighted projective spaces are homotopy equivalent
if and only if for all primes~$p$, the $p$-contents of
their normalized weights are the same, up to order.
\end{theorem}
The torus~$T = (S^{1})^{n+1}/S^{1}\langle\chi\rangle \cong (S^{1})^{n}$
and its complexification~$T_{\mathbb C}$ act on~$\mathbb P(\chi)$ in a canonical
way, and the resulting equivariant homotopy type is a
finer invariant. As shown in~\cite[Thm.~5.1]{BahriFranzRay:2009},
the equivariant cohomology ring~$H_{T}^{*}(\mathbb P(\chi);\mathbb Z)$
determines the normalized weights up to order.
Let $\pcont{\chi^{*}}{p}$ be the vector obtained from $\pcont{\chi}{p}$
by ordering its coordinates as a
non-decreasing sequence, and let $\chi^{*}$ denote the product of
the~$\pcont{\chi^{*}}{p}$, taken coordinatewise. For
example, $(1,2,3,4)^*=(1,1,2,12)$.
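Both operations are elementary to compute. The Python sketch below (ours; the helper names are our own) reproduces the two examples in the text.

```python
def p_content(chi, p):
    """The highest power of p dividing each weight of chi."""
    out = []
    for w in chi:
        q = 1
        while w % (q * p) == 0:
            q *= p
        out.append(q)
    return tuple(out)

def chi_star(chi):
    """chi*: the coordinatewise product, over all primes p, of the
    sorted (non-decreasing) p-contents of chi."""
    star = [1] * len(chi)
    for p in range(2, max(chi) + 1):
        if any(p % q == 0 for q in range(2, p)):
            continue                     # p is not prime
        for i, c in enumerate(sorted(p_content(chi, p))):
            star[i] *= c
    return tuple(star)

assert p_content((1, 2, 3, 4), 2) == (1, 2, 1, 4)
assert chi_star((1, 2, 3, 4)) == (1, 1, 2, 12)
```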
By Theorem~\ref{thm:classification-homotopy},
$\mathbb P(\chi)$ is homotopy equivalent to~$\mathbb P(\chi^{*})$.
The weights in~$\chi^{*}$ form a
\emph{divisor chain}, in the sense that each weight divides the next.
As a consequence, the space~$\mathbb P(\chi^{*})$ is particularly easy to
work with because the differences
\begin{equation}
\label{eq:cell-decomposition}
* = \mathbb P(\chi^{*}_{n}), \;\;
\mathbb P(\chi^{*}_{n-1},\chi^{*}_{n}) \setminus \mathbb P(\chi^{*}_{n}),\;\;
\dots,\;\;
\mathbb P(\chi^{*}_{0},\dots,\chi^{*}_{n}) \setminus
\mathbb P(\chi^{*}_{1},\dots,\chi^{*}_{n})
\end{equation}
form a cell decomposition of~$\mathbb P(\chi^{*})$
(see Remark~\ref{rem:cell-decomposition} below), and
\begin{equation}
* = \mathbb P(\chi^{*}_{0}) \subset \mathbb P(\chi^{*}_{0},\chi^{*}_{1})
\subset \cdots \subset
\mathbb P(\chi^{*}_{0},\dots,\chi^{*}_{n-1}) \subset
\mathbb P(\chi^{*}_{0},\dots,\chi^{*}_{n})
\end{equation}
displays $\mathbb P(\chi^{*})$ as an iterated Thom space
\cite[Cor.~3.8]{BahriFranzRay:2011}.
The \emph{Mislin genus} of a weighted projective space~$\mathbb P(\chi)$ is
the set of all homotopy classes of simply connected CW~complexes~$Y$
of finite type such that for all primes~$p$ the $p$-localizations
of~$Y$~and~$\mathbb P(\chi)$ are homotopy equivalent. The Mislin genus of a
space is \emph{rigid} if it contains only the class of the space itself.
\begin{theorem}\label{thm:genus-wps}
The Mislin genus of any weighted projective space is rigid.
\end{theorem}
For~$\mathbb{CP}^{n}$, this has been established by
McGibbon~\cite[Thm.~4.2\,(ii)]{McGibbon:1982}.
\medbreak
In Section~\ref{sec:kawasaki} we review Kawasaki's results on which
our work is based. Theorem~\ref{thm:classification-homeomorphism} is
proved in Section~\ref{sec:homeo}, and Theorems
\ref{thm:classification-homotopy} and \ref{thm:genus-wps} in
Section~\ref{sec:homotopy}; necessary conditions for the
rigidity of the Mislin genus are established in Section \ref{sec:mislin}.
\section{Kawasaki's results}
\label{sec:kawasaki}
From now on, $\chi=(\chi_{0},\dots,\chi_{n})$ always denotes a
a normalized weight vector, and cohomology is taken with integer
coefficients unless otherwise stated. In order to make Kawasaki's
description of $H^{*}(\mathbb P(\chi))$ explicit, it is convenient to recall
his notation $(r_{0}(\chi;p),\dots,r_{n}(\chi;p))$ for the
non-decreasing weight vector $\pcont{\chi^{*}}{p}$; given any
$0\le i\le n$, we then set
\begin{equation}
l_{i} = l_{i}(\chi) =
\prod_{\text{$p$ prime}}\! r_{n-i+1}(\chi;p)\cdots r_{n}(\chi;p).
\end{equation}
We also consider the map
\begin{equation}
\label{eq:definition-phi}
\phi=\phi_{\chi}\colon \mathbb{CP}^{n}\to\mathbb P(\chi),
\quad
[z_{0}:\dots:z_{n}] \mapsto [z_{0}^{\chi_{0}}:\dots:z_{n}^{\chi_{n}}].
\end{equation}
\begin{theorem}[{\cite[Thm.~1]{Kawasaki:1973}}]
\label{thm:kawasaki-wps}
Additively, $H^{*}(\mathbb P(\chi))\cong H^{*}(\mathbb{CP}^{n})$. Furthermore,
there exist generators $\xi_{i}\in H^{2i}(\mathbb P(\chi))$ and
$\eta\in H^{2}(\mathbb{CP}^{n})$ such that $\phi^{*}(\xi_{i})=l_{i}\eta^{i}$
for $0\le i\le n$; the multiplicative structure is specified by
\begin{equation*}
\xi_{i}\xi_{j} = \frac{l_{i}l_{j}}{l_{i+j}}\,\xi_{i+j}
\end{equation*}
in $H^{2(i+j)}(\mathbb P(\chi))$, for $0\le i+j\le n$.
\end{theorem}
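Kawasaki's integers $l_i$ and the structure constants of the ring can be computed directly from the definition. The Python sketch below (ours, for illustration) does so for $\chi=(1,2,3,4)$ and confirms that the coefficients $l_il_j/l_{i+j}$ are integers in this example.

```python
from math import prod

def r_sorted(chi, p):
    """Kawasaki's (r_0(chi;p), ..., r_n(chi;p)): the p-contents of
    the weights, arranged as a non-decreasing sequence."""
    out = []
    for w in chi:
        q = 1
        while w % (q * p) == 0:
            q *= p
        out.append(q)
    return sorted(out)

def l(chi, i):
    """l_i(chi) = prod over primes p of r_{n-i+1}(chi;p)...r_n(chi;p)."""
    n = len(chi) - 1
    result = 1
    for p in range(2, max(chi) + 1):
        if all(p % q for q in range(2, p)):      # p is prime
            result *= prod(r_sorted(chi, p)[n - i + 1:])
    return result

chi = (1, 2, 3, 4)                                # chi* = (1,1,2,12)
ls = [l(chi, i) for i in range(4)]
assert ls == [1, 12, 24, 24]

# The structure constants l_i l_j / l_{i+j} are integers:
for i in range(4):
    for j in range(4 - i):
        assert (ls[i] * ls[j]) % ls[i + j] == 0
```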
\begin{remarks}\label{rem:simfree}
Kawasaki's proof of Theorem \ref{thm:kawasaki-wps} shows that the
integral homology groups $H_*(\mathbb P(\chi))$ are finitely generated and
torsion-free, and therefore isomorphic to $\Hom(H^*(\mathbb P(\chi)),\mathbb Z)$
by the Universal Coefficient Theorem.
Moreover, \cite[Sec.~3.2]{Fulton:1993} and
\cite[Cor.~7.2]{Illman:1983} confirm that $\mathbb P(\chi)$ is a simply
connected finite CW complex, for every choice of $\chi$.
\end{remarks}
Kawasaki also determined the cohomology of the generalized lens
space~$L(k;\chi) = S^{2n+1}/\mathbb Z_{k}\langle\chi\rangle$, where in this
case $\chi$ describes the weights of the $k$-th roots of unity. The
answer depends on the augmented weight
vector~$(\chi,k)=(\chi_{0},\dots,\chi_{n},k)$.
\begin{theorem}[{\cite[Thm.~2]{Kawasaki:1973}}]
\label{thm:kawasaki-lens}
The non-zero cohomology groups of~$L=L(k;\chi)$ are
$H^{0}(L)\cong H^{2n+1}(L)\cong\mathbb Z$ and
$H^{2i}(L) \cong \mathbb Z_{q}$ for~$1\le i\le n$, where
$q = l_{i}(\chi,k)/l_{i}(\chi)$.
\end{theorem}
\section{Classification up to homeomorphism}
\label{sec:homeo}
\defq{q}
\defq'{q'}
Consider a point~$z\in\mathbb P(\chi)$.
Let $I$~and~$J$ be the subsets of~$\{0,\dots,n\}$ corresponding
to the zero and non-zero homogeneous coordinates of~$z$, respectively,
and let $q=\gcd\{\chi_{i}:i\in J\}$.
Also, let $U_{I}=\{ [z_{0}:\dots:z_{n}]:\text{$z_{i}\ne 0$ for $i\notin I$}\}$,
and write $\chi_{I}\in\mathbb Z^{I}$ for the weights indexed by~$I$.
\begin{lemma}[{\emph{cf.}~\cite[\S 5.15]{Fletcher:2000}}]
\label{wps-to-lens-general}
There is an isomorphism of algebraic varieties
\begin{equation*}
U_{I} \cong (\C^{\times})^{|J|-1}\times \mathbb C^{I} \!/\,
\mathbb Z_{q}\langle \chi_{I}\rangle,
\end{equation*}
sending
$z$ to a point of the form~$(\tilde z,0)$.
\end{lemma}
Observe that $\mathbb C^{I} / \mathbb Z_{q}\langle \chi_{I}\rangle$ is the
unbounded cone over $L(q;\chi_{I})$.
\begin{proof}
The weight vector~$\chi_{J}$ determines a morphism~$\C^{\times}\to(\C^{\times})^{J}$
with kernel~$\mathbb Z_{q}$. Let $T'$ be its image and $T''\cong(\C^{\times})^{|J|-1}$
a torus complement. Then
\begin{equation*}
U_{I} =
\bigl( (\C^{\times})^{J}\times \mathbb C^{I}\bigr)\,\big/\,\C^{\times}\langle \chi\rangle
= \bigl(T''\times T'\times\mathbb C^{I} \bigr)\,\big/\,\C^{\times}\langle \chi\rangle
= T''\times \mathbb C^{I}\!/\,\mathbb Z_{q}\langle \chi_{I}\rangle.
\qedhere
\end{equation*}
\end{proof}
\begin{remark}
\label{rem:cell-decomposition}
If $\chi_{0}=1$ and $z=[1:0:\dots:0]$, then $U_{I}\cong\mathbb C^{n}$. If
the weights form a divisor chain, we have $\mathbb P(\chi)\setminus
U_{I}=\mathbb P(\chi_{1},\dots,\chi_{n})=
\mathbb P(1,\chi_{2}/\chi_{1},\dots,\chi_{n}/\chi_{1})$;
hence we obtain an inductive decomposition of~$\mathbb P(\chi)$ into
$n+1$~cells~$*$,~$\mathbb C$,~$\mathbb C^{2}$,~\dots,~$\mathbb C^{n}$.
\end{remark}
\begin{lemma}
\label{local-homology-wps}
There is an isomorphism
$H^{2n-1}\bigl(\mathbb P(\chi),\mathbb P(\chi)\setminus\{z\}\bigr)
\cong \mathbb Z_{q}$.
\end{lemma}
\begin{proof}
Set $X = (\C^{\times})^{|J|-1}$, $Y = \mathbb C^{I} / \mathbb Z_{q}\langle \chi_{I}\rangle$
and $m=|I|-1$.
Note that $X$ is a manifold of dimension~$2(n-m-1)$,
so that $H^{*}(X,X\setminus\{\tilde z\})$ is isomorphic to~$\mathbb Z$
in dimension~$2(n-m-1)$ and zero otherwise.
Excision, Lemma~\ref{wps-to-lens-general} and the Künneth formula for
relative cohomology therefore imply
\begin{align*}
H^{*}\bigl(\mathbb P(\chi),\mathbb P(\chi)\setminus\{z\}\bigr)
&\cong H^{*}\bigl(U_{I},U_{I}\setminus\{z\}\bigr) \\
&\cong H^{*}(X\times Y, (X\setminus\{\tilde z\})
\times Y \cup X\times(Y\setminus\{0\})) \\
&\cong H^{*}(X,X\setminus\{\tilde z\})\otimes H^{*}(Y,Y\setminus\{0\}),\\
\intertext{because $H^{*}(X,X\setminus\{\tilde z\})$ is free.
In particular,}
H^{2n-1}\bigl(\mathbb P(\chi),\mathbb P(\chi)\setminus\{z\}\bigr)
&\cong H^{2m+1}\bigl(Y,Y\setminus\{0\}\bigr)
\cong \tilde H^{2m}(L(q;\chi_{I})).
\end{align*}
If $m=0$, then $q=1$ because $\chi$ is normalized, and the claim holds.
Otherwise, Theorem~\ref{thm:kawasaki-lens} gives
$H^{2m}(L(q;\chi_{I})) \cong \mathbb Z_{q'}$, where
the $p$-content of~$q'$ is given by
\begin{equation}
\label{eq:product-r}
\hbox{$p$-content of} \;\; \frac{l_{m}(\chi_{I},q)}{l_{m}(\chi_{I})}
= \prod_{i=1}^{m} \frac{r_{m+2-i}(\chi_{I},q;p)}{r_{m+1-i}(\chi_{I};p)}.
\end{equation}
We have to show $q'=q$, which means that $q'$~and~$q$ have the
same $p$-content for all~$p$. This is clearly true if $q$ is not
divisible by~$p$. Otherwise, $\chi_{I}$ inherits from the
normalized weight vector~$\chi$ two weights not divisible by~$p$.
(Recall that $q$ is the gcd of the weights appearing in~$\chi$,
but not in~$\chi_{I}$.) Hence, $r_{1}(\chi_{I};p)=1$, and the numerator
of~\eqref{eq:product-r} differs from the denominator by the
$p$-content of~$q$.
This finishes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:classification-homeomorphism}]
By the remarks preceding the theorem, we only have to prove the
implication~$\eqref{thm:a3}\Rightarrow\eqref{thm:a1}$. In order to
do so, we show how to read off the normalized weights from
topological invariants of a weighted projective space~$\mathbb P(\chi)$.
For~$z\in\mathbb P(\chi)$, let $q'(z)$ be the order of the finite
group~\smash{$H^{2n-1}\bigl(\mathbb P(\chi),\mathbb P(\chi)\setminus\{z\}\bigr)$}.
Lemma~\ref{local-homology-wps} implies that for all~$d\ge 1$ the
space
\begin{equation*}
X(d) = \bigl\{ z\in\mathbb P(\chi) : d \mid q'(z) \bigr\}
\end{equation*}
is again a weighted projective space or empty. In fact,
\begin{equation*}
X(d) = \bigl\{ [z_{0}:\dots:z_{n}] \in\mathbb P(\chi) :
\text{$z_{i} = 0$ if $d \nmid \chi_{i}$} \bigr\}
\end{equation*}
because $d$ divides $q'(z)=q$ if and only if it divides
$\chi_{i}$ for all~$i$ such that $z_{i}\ne0$. For each~$d$, the
dimension of~$X(d)$ (which equals the degree of the highest
non-vanishing cohomology group) therefore tells us the number of
weights divisible by~$d$. This determines the normalized weights
completely up to order.
\end{proof}
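The combinatorial heart of this proof is that the multiset of weights is determined by the counting function $d\mapsto\#\{i : d\mid\chi_i\}$ (which the proof reads off as one more than the dimension of $X(d)$). The Python sketch below (ours, for illustration) demonstrates this inversion: the largest $d$ with a non-zero count is the largest weight, and one removes it and repeats.

```python
def div_counts(chi, dmax):
    """f(d) = number of weights divisible by d; in the proof this is
    one more than the complex dimension of the subspace X(d)."""
    return {d: sum(1 for w in chi if w % d == 0)
            for d in range(1, dmax + 1)}

def recover_weights(f):
    """Reconstruct the multiset of weights from f alone: the largest
    d with f(d) > 0 must be the largest weight; remove it (i.e.
    decrement f at each of its divisors) and repeat."""
    f = dict(f)
    weights = []
    while any(f.values()):
        w = max(d for d, c in f.items() if c > 0)
        weights.append(w)
        for d in f:
            if w % d == 0:
                f[d] -= 1
    return sorted(weights)

chi = (1, 2, 3, 5, 6)
assert recover_weights(div_counts(chi, max(chi))) == sorted(chi)
```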
\section{The Mislin genus}\label{sec:mislin}
This section relies heavily on the theory of localization and
homotopy pullbacks. We refer readers to
\cite{HiltonMislinRoitberg:1975}, especially Chapter~II, and
to \cite[Chap.~7]{Strom:2011}, for background information.
Throughout the section, $X$, $Y$, and $Z$ denote simply connected
CW~complexes. A map~$f\colon X\to Y$ is therefore a homotopy equivalence
(written $X\simeq Y$) if and only if it induces an isomorphism~$H_*(f)$
of integral homology; in this case, $f^{-1}$ denotes a homotopy inverse
for~$f$.
Given any set~$\mathcal P$ of primes, the algebraic localization of~$\mathbb Z$
is denoted by~$\mathbb Z_{\mathcal P}$, and the homotopy theoretic localization of
$X$ by $X_{\mathcal P}$; the latter is also a CW complex. Every~$X$ admits a
localization map~$l_{\mathcal P}\colon X\to X_{\mathcal P}$, which induces an
isomorphism $H_*(l_{\mathcal P};\mathbb Z_{\mathcal P})$, and every $f$ admits a
localization $f_{\mathcal P}\colon X_{\mathcal P}\to Y_{\mathcal P}$, for which the square
\begin{equation*}
\begin{CD}
X@>f>>Y\\
@Vl_{\mathcal P}VV@VVl_{\mathcal P}V\\
X_{\mathcal P}@>>f_{\mathcal P}>Y_{\mathcal P}
\end{CD}
\end{equation*}
is homotopy commutative. Any map~$g\colon X_{\mathcal P}\to Y_{\mathcal P}$ of
localized spaces is a homotopy equivalence if and only if it
induces an isomorphism $H_*(g;\mathbb Z_{\mathcal P})$.
If $\mathcal P$ is empty, then the localization map $l_\emptyset$ is
\emph{rationalization}, and is denoted by
$l_0\colon X\to X_0$; likewise, $f_\emptyset$ is denoted by
$f_0\colon X_0\to Y_0$. If $\mathcal P$ consists of a single prime $p$, then
the localization map $l_{\{p\}}$ is abbreviated to $l_p\colon X\to X_p$,
and $f_{\{p\}}$ is abbreviated to~$f_p\colon X_p\to Y_p$. If
$\mathcal P$ contains all primes, then $l_{\mathcal P}$ is a homotopy equivalence.
The \emph{homotopy pullback} of a diagram~$X\to Z\leftarrow Y$
may be constructed by replacing either map with a fibration,
and pulling it back along the other in the standard fashion.
The resulting square is unique up to equivalence of diagrams in the
homotopy category \cite[\S 7.3]{Strom:2011}; in particular, the
homotopy pullback is well-defined up to homotopy equivalence.
For any set of primes $\mathcal P$, the rationalization of $X_{\mathcal P}$ is
homotopy equivalent to $X_0$, so the rationalization map may be
expressed as $l_0\colon X_{\mathcal P}\to X_0$. If $\mathcal P$~and~$\mathcal Q$ are
disjoint, then the homotopy pullback of
\begin{equation}\label{eq:locpq}
\begin{CD}
X_{\mathcal Q}@>{l_0}>>X_0@<l_0<<X_{\mathcal P}
\end{CD}\,
\end{equation}
is $X_{\mathcal P\cup\mathcal Q}$ (see \cite[Prop.~2.9.3]{Neisendorfer:2010}
or~\cite[proof of Thm.~7.13]{HiltonMislinRoitberg:1975}).
\begin{lemma}\label{thm:homotopy-pullback}
Given two disjoint sets $\mathcal P$ and~$\mathcal Q$ of primes,
let $f\colon Y_{\mathcal P}\larrow{\simeq} Z_{\mathcal P}$ and
$g\colon Y_{\mathcal Q}\larrow{\simeq} Z_{\mathcal Q}$ be
homotopy equivalences, and define $h=f_0g_0^{-1}$;
then $Y_{\mathcal P\cup\mathcal Q}$ is the homotopy pullback of the diagram
\begin{equation}\label{eq:homotopy-pullback}
\begin{CD}
Z_{\mathcal Q}@>{hl_0}>>Z_0@<l_0<<Z_{\mathcal P}
\end{CD}\,.
\end{equation}
If also there exist homotopy equivalences
$d\colon Z_{\mathcal P}\to Z_{\mathcal P}$ and~$e\colon Z_{\mathcal Q}\to Z_{\mathcal Q}$
such that $h\simeq d_0e_0^{-1}$, then $Y_{\mathcal P\cup\mathcal Q}$ and
$Z_{\mathcal P\cup\mathcal Q}$ are homotopy equivalent.
\end{lemma}
\begin{proof}
The vertical maps in the homotopy commutative ladder
\begin{equation}\label{eq:ladder}
\begin{CD}
Y_{\mathcal Q}@>{l_0}>>Y_0@<{l_0}<<Y_{\mathcal P}\\
@VgVV@VV{f_0}V@VVfV\\
Z_{\mathcal Q}@>>{hl_0}>Z_0@<<{l_0}<Z_{\mathcal P}
\end{CD}
\end{equation}
are homotopy equivalences, and the homotopy pullback of the upper
row is $Y_{\mathcal P\cup\mathcal Q}$, by analogy with \eqref{eq:locpq}.
So the ladder induces a homotopy equivalence of homotopy pullbacks
following \cite[\S7.3]{Strom:2011}, and the first claim follows.
Substituting $Y=Z$, $f=d$ and $g=e$ into \eqref{eq:ladder}
creates an upper row with homotopy pullback $Z_{\mathcal P\cup\mathcal Q}$. The
second claim is then immediate.
\end{proof}
The following proposition gives criteria for ensuring that the genus of
a finite CW~complex is rigid.
\begin{proposition} \label{thm:genus-rigid}
Let $Z$ be a simply connected finite CW~complex satisfying
\begin{itemize}
\item[(i)] for any space~$Y$ in the Mislin genus of~$Z$, there exists
a rational homotopy equivalence $k\colon Y\to Z$, and
\item[(ii)] for any disjoint sets~$\mathcal P$ and $\mathcal Q$ of primes, and any
rational homotopy equivalence~$h\colon Z_0\to Z_0$, there exist
homotopy equivalences $d\colon Z_{\mathcal P}\to Z_{\mathcal P}$
and $e\colon Z_{\mathcal Q}\to Z_{\mathcal Q}$ such that $h\simeq d_0e^{-1}_0$;
\end{itemize}
then the genus of~$Z$ is rigid.
\end{proposition}
\begin{proof}
Let $Y$ belong to the Mislin genus of~$Z$. It follows from
\cite[p.\,105]{HiltonMislinRoitberg:1975} that there is an
isomorphism $H_*(Y;\mathbb Z)\cong H_*(Z;\mathbb Z)$ of graded abelian
groups. Since $H_*(k;\mathbb Q)$ is an isomorphism, there exists a
maximal set $\mathcal Q$ of primes for which $H_*(k_{\mathcal Q})$ is also an
isomorphism. Since $H_*(Y;\mathbb Z)$ and $H_*(Z;\mathbb Z)$ are finitely
generated in each dimension (and vanish in large dimensions),
the complement $\mathcal P$ of~$\mathcal Q$ is finite. If $\mathcal P$ is
non-empty, write its elements as $p_1$, \dots, $p_s$ and define
$\mathcal Q_i=\mathcal Q\cup\{p_1,\dots ,p_i\}$; otherwise, take $\mathcal Q_0=\mathcal Q$.
Since $\mathcal Q_s$ contains \emph{all} primes, it suffices to show
that $Y_{\mathcal Q_s}\simeq Z_{\mathcal Q_s}$.
In fact we prove that $Y_{\mathcal Q_i}\simeq Z_{\mathcal Q_i}$ for
every~$0\le i\le s$, using induction on $i$. The base case is
$i=0$; it holds because $H_*(k)$ is an isomorphism when
$\mathcal P=\emptyset$, so $k$ is a homotopy equivalence.
Now assume that $g\colon Y_{\mathcal Q_i}\to Z_{\mathcal Q_i}$ is a homotopy
equivalence, and write $p=p_{i+1}$. By choice of~$Y$, there is a
homotopy equivalence~$f\colon Y_p\to Z_p$, so we may apply
the first claim of Lemma~\ref{thm:homotopy-pullback}. This
identifies $Y_{\mathcal Q_{i+1}}$ as the homotopy pullback of
\[
\begin{CD}
Z_{\mathcal Q_i}@>{hl_0}>>Z_0@<l_0<<Z_p
\end{CD}\,,
\]
where $h$ is the homotopy equivalence
$f_0g_0^{-1}\colon Z_0\to Z_0$. By assumption, there exist
homotopy equivalences $d\colon Z_{p}\to Z _{p}$ and
$e\colon Z_{\mathcal Q_i}\to Z_{\mathcal Q_i}$ such that $h\simeq d_0e_0^{-1}$.
The second claim of Lemma~\ref{thm:homotopy-pullback}
then confirms that $Y_{\mathcal Q_{i+1}}\simeq Z_{\mathcal Q_{i+1}}$,
and completes the inductive step.
\end{proof}
\section{Classification up to homotopy equivalence}\label{sec:homotopy}
Finally, we return to the case of weighted projective space.
In Theorem~\ref{thm:kawasaki-wps} we selected a generator~$\xi_{1}$
for $H^2(\mathbb P(\chi))\cong\mathbb Z$. Given any set of primes $\mathcal P$, its localization
in~$H^{2}(\mathbb P(\chi)_{\mathcal P};\mathbb Z_{\mathcal P})\cong\mathbb Z_{\mathcal P}$ must also be a
generator. We therefore define the \emph{degree}~$\deg(h)$ of any self-map
$h$ of $\mathbb P(\chi)_{\mathcal P}$ to be the $\mathcal P$-local integer satisfying
$H^*(h;\mathbb Z_{\mathcal P})(\xi_1)=\deg(h)\,\xi_1$; this determines a multiplicative
function
\begin{equation}
\label{eq:degree}
{\deg}\colon [\mathbb P(\chi)_{\mathcal P},\mathbb P(\chi)_{\mathcal P}] \to \mathbb Z_{\mathcal P}.
\end{equation}
Remark \ref{rem:simfree} shows that any such $h$ is a homotopy
equivalence if and only if $H^*(h;\mathbb Z_{\mathcal P})$ is an isomorphism.
\begin{proposition} \label{thm:degree} \hfill
\begin{enumerate}
\item \label{thm:degree-unit}
A self-map of~$\mathbb P(\chi)_{\mathcal P}$ is a homotopy equivalence
if and only if its degree is a unit in~$\mathbb Z_{\mathcal P}$.
\item \label{thm:degree-surjective}
The degree function~\eqref{eq:degree} is surjective.
\item \label{thm:degree-CPn}
If $\mathcal P$ contains no divisor of any~$\chi_j$, then the degree
function is a bijection.
\end{enumerate}
\end{proposition}
\begin{proof}
Since $\deg$ is multiplicative and the degree of the identity map is $1$,
it maps homotopy equivalences to units. Let
$h$ be any self-map of~$\mathbb P(\chi)_{\mathcal P}$, and assume that it has
degree~$a$. By Theorem~\ref{thm:kawasaki-wps}, $H^*(h;\mathbb Z_{\mathcal P})$
induces multiplication by $a^k$
on~$H^{2k}(\mathbb P(\chi)_{\mathcal P};\mathbb Z_{\mathcal P})\cong\mathbb Z_{\mathcal P}$, for
every~$1\leq k\leq n$. If $a$ is a unit, then $H^*(h;\mathbb Z_{\mathcal P})$ is an
isomorphism, so $h$ is a homotopy equivalence. Thus
\eqref{thm:degree-unit} holds.
Fix a positive integer~$a$, and define the self-map
$m_a\colon\mathbb P(\chi)\to\mathbb P(\chi)$ by raising each homogeneous
coordinate to the power~$a$; in particular, write
$m'_a\colon\mathbb{CP}^{n}\to\mathbb{CP}^{n}$ for the standard case. Thus
$m_a$~and~$m'_a$ commute with the map~$\phi$ of
\eqref{eq:definition-phi}, leading to the commutative diagram
\begin{equation*}
\begin{CD}
H^{*}(\mathbb P(\chi))@>H^*(\phi)>>H^{*}(\mathbb{CP}^{n})\\
@VH^*(m_a)VV@VVH^*(m'_a)V\\
H^{*}(\mathbb P(\chi))@>>H^*(\phi)>H^{*}(\mathbb{CP}^{n})
\end{CD}
\end{equation*}
Since $H^2(m'_a)$ is multiplication by~$a$, it follows that
$\deg(m_a)=a$. But every element~$c\in \mathbb Z_{\mathcal P}$ may be written
as a
quotient~$c=b/a$ of integers, where $a$ is a positive unit in
$\mathbb Z_{\mathcal P}$. Then \eqref{thm:degree-surjective} follows
from~\eqref{thm:degree-unit},
combined with the observations that complex conjugation on
a single coordinate has degree~$-1$, and constant self-maps
have degree~$0$.
If $\mathcal P$ contains no divisor of any weight, then
$\phi_{\mathcal P}\colon\mathbb{CP}^{n}_{\mathcal P}\to\mathbb P(\chi)_{\mathcal P}$ is a homotopy
equivalence by Theorem~\ref{thm:kawasaki-wps}. To prove
\eqref{thm:degree-CPn}, it therefore suffices to consider maps
$h_1$,~$h_2\colon \mathbb{CP}^{n}_{\mathcal P}\to\mathbb{CP}^{n}_{\mathcal P}$ of equal
degree; in other words, we may restrict attention to the special case
$\mathbb{CP}^n$. Since $\mathbb{CP}^\infty_{\mathcal P}$ is an Eilenberg--Mac\,Lane space~%
$K(\mathbb Z_{\mathcal P},2)$, the compositions of
$i_{\mathcal P}\colon\mathbb{CP}^n_{\mathcal P}\to\mathbb{CP}^\infty_{\mathcal P}$ with $h_1$ and $h_2$
are homotopic. Moreover, $\mathbb{CP}^{n}_{\mathcal P}$ is $2n$-dimensional and its
image is the $(2n+1)$-skeleton of $\mathbb{CP}^\infty_{\mathcal P}$, so the
homotopy corestricts to a homotopy $h_1\simeq h_2$. Thus
$\deg$ is injective, and \eqref{thm:degree-CPn} follows.
\end{proof}
The special case $\mathbb{CP}^n$ of part~\eqref{thm:degree-CPn} is well-known
\cite[Thm.~2.2]{McGibbon:1982}, but is stated there without proof.
To complete the proof of Theorem~\ref{thm:genus-wps}, it remains
only to show that the criteria of Proposition \ref{thm:genus-rigid}
apply to $\mathbb P(\chi)$.
\begin{proof}[Proof of Theorem~\ref{thm:genus-wps}]
Let $Y$ be an element of the Mislin genus of~$\mathbb P(\chi)$.
Since $H_{*}(Y)\cong H_{*}(\mathbb P(\chi))$ as graded abelian groups,
$Y$ is homotopy equivalent to a CW complex of dimension~$2n$,
by \cite[Prop.~4C.1]{Hatcher:2001}.
Furthermore, $H^*(\mathbb P(\chi);\mathbb Q)$ is multiplicatively generated by
a single element of degree~$2$, so any of the homotopy
equivalences $Y_p\simeq\mathbb P(\chi)_p$ induces the corresponding
structure on $H^*(Y;\mathbb Q)$. A multiplicative generator $\gamma$ may
be chosen to be integral in $H^2(Y;\mathbb Q)$, because the Universal
Coefficient Theorem confirms that $H^2(Y;\mathbb Z)\cong\mathbb Z$. Also,
$\gamma$ is represented by a
map~$j\colon Y\to\mathbb{CP}^\infty\simeq K(\mathbb Z,2)$, for which $H^2(j;\mathbb Z)$
is an isomorphism. Up to homotopy, $j$ factors through
$\mathbb{CP}^{n}\subset\mathbb{CP}^{\infty}$, so its corestriction
$j'\colon Y\to\mathbb{CP}^n$ is a rational homotopy equivalence.
Since $\phi\colon\mathbb{CP}^n\to\mathbb P(\chi)$ is a rational homotopy
equivalence by Theorem~\ref{thm:kawasaki-wps}, the same holds
for the composition $\phi j'\colon Y\to\mathbb P(\chi)$. Criterion (i)
of Proposition \ref{thm:genus-rigid} is therefore satisfied by
$k=\phi j'$.
Now let $h\colon \mathbb P(\chi)_0 \to \mathbb P(\chi)_0$ be a homotopy
equivalence, let $\deg(h)=\pm a/b$ where $a,b\in \mathbb N$, and
let $\mathcal P$ and $\mathcal Q$
be two disjoint sets of primes.
Write $a=a'a''$ and $b=b'b''$, where $a'$, $b'$ are divisible
only by primes not contained in $\mathcal P$, and $a''$, $b''$ are
divisible only by primes contained in $\mathcal P$.
Then $a'/b'\in \mathbb Z_{\mathcal P}$ and $b''/a''\in \mathbb Z_{\mathcal Q}$ are units.
So Proposition~\ref{thm:degree}\,\eqref{thm:degree-surjective}
guarantees the existence of homotopy equivalences
$d\colon \mathbb P(\chi)_{\mathcal P}\to \mathbb P(\chi)_{\mathcal P}$ and
$e\colon \mathbb P(\chi)_{\mathcal Q}\to \mathbb P(\chi)_{\mathcal Q}$
of degrees $\pm a'/b'$~and~$b''/a''$ respectively, and
$h\simeq d_0e_0^{-1}$ by
Proposition~\ref{thm:degree}\,\eqref{thm:degree-CPn}.
Criterion (ii) of Proposition \ref{thm:genus-rigid} is
therefore satisfied, as required.
\end{proof}
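The prime-support splitting used in the proof above can be checked concretely. The following sketch (the function and variable names are ours, introduced only for illustration) separates the part of an integer supported on a prime set $\mathcal P$ by trial division:

```python
def split_by_primes(n, primes):
    """Factor n = n_out * n_in, where every prime factor of n_in lies
    in `primes` and no prime factor of n_out does (trial division)."""
    n_in = 1
    for p in primes:
        while n % p == 0:
            n //= p
            n_in *= p
    return n, n_in  # (n', n'') in the notation of the proof

# Example: deg(h) = a/b = 12/35 with P = {2, 3} (so Q avoids 2 and 3).
P = {2, 3}
a_out, a_in = split_by_primes(12, P)  # a' = 1,  a'' = 12
b_out, b_in = split_by_primes(35, P)  # b' = 35, b'' = 1
# a'/b' = 1/35 is a unit in Z_P (numerator and denominator prime to P);
# b''/a'' = 1/12 is a unit in Z_Q (12 is supported on P, disjoint from Q).
```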
\begin{proof}[Proof of Theorem~\ref{thm:classification-homotopy}]
If $\chi$~and~$\chi'$ have the same $p$-content up to order, then
some permutation of homogeneous coordinates defines a
homeomorphism~$\mathbb P(\pcont{\chi}{p})\cong\mathbb P(\pcont{\chi'}{p})$
for each prime~$p$. This homeomorphism may be localized at~$p$.
Now consider the map
\begin{equation*}
g\colon\mathbb P(\pcont{\chi}{p})\to\mathbb P(\chi), \quad
[z_{0}:\dots:z_{n}]\mapsto
[z_{0}^{\alpha(0)}:\dots:z_{n}^{\alpha(n)}],
\end{equation*}
where $\alpha(j)=\chi_j/\pcont{\chi}{p}_j$ for~$0\leq j\leq n$.
Theorem~\ref{thm:kawasaki-wps} implies that $H^*(g;\mathbb Z_p)$ is an
isomorphism, and Remark \ref{rem:simfree} confirms that $g_p$ is
a homotopy equivalence. So $g_p^{-1}$~and~$g'_p$ determine a
chain of maps
\begin{equation*}
\mathbb P(\chi)_p\simeq\mathbb P(\pcont{\chi}{p})_p\cong
\mathbb P(\pcont{\chi'}{p})_p\simeq\mathbb P(\chi')_p
\end{equation*}
for any prime~$p$, and the result follows from Theorem~\ref{thm:genus-wps}.
\end{proof}
\subsection*{Acknowledgements}
The authors are particularly grateful to the referee, whose suggestions led
to several improvements in the structure and exposition of this work. A.\,B.\ was
supported in part by a Rider University Summer Research Fellowship and
Grant \#210386 from the Simons Foundation, and M.\,F.\ by an NSERC Discovery
Grant.
% arXiv:1204.1718: Computational complexity and memory usage for
% multi-frontal direct solvers in structured mesh finite elements
\section{Introduction}\input{intro.tex}
\section{Multi-frontal direct solver algorithm}\input{alg.tex}
\section{Computational complexity and memory usage}\input{complexity.tex}
\section{Numerical Results}\input{results.tex}
\section{Conclusions}\input{conc.tex}
\section{Acknowledgements}
DP has been partially supported by the Spanish Ministry of Sciences
and Innovation Grant MTM2010-16511. MRP has been partially supported
by the Polish MNiSW grant no. NN 519 405737 and NN519 447 739.
\bibliographystyle{elsarticle-num}
\subsection{Single level example: two element mesh}\label{s:sc}
Consider the partitioning of the elemental matrices in
equation~(\ref{eqn:e1}). We reorder the elemental matrices by first
listing those degrees of freedom that are fully assembled on the
element level, $x_e$, followed by those that are shared with other
elements, $y_e$, where subscript $e$ refers to the element
number. Thus the element matrix can be blocked accordingly to
represent interactions between fully assembled degrees of freedom and
those shared with other elements. Note that $A_e$ represents the block
of interactions which are fully assembled at the element level, blocks
$B_e$ and $C_e$ represent the interactions of fully assembled and
shared degrees of freedom, and block $D_e$ represents interactions of
shared degrees of freedom. Particularizing this for the two element
mesh, we obtain
\begin{equation}
\begin{bmatrix}
A_1 & B_1\\
C_1 & D_1
\end{bmatrix}\cdot\begin{bmatrix}
x_1\\
y_1
\end{bmatrix} = \begin{bmatrix}
f_1\\
g_1
\end{bmatrix},\hspace{0.5in}\begin{bmatrix}
A_2 & B_2\\
C_2 & D_2
\end{bmatrix}\cdot\begin{bmatrix}
x_2\\
y_2
\end{bmatrix} = \begin{bmatrix}
f_2\\
g_2
\end{bmatrix}\label{eqn:e1}
\end{equation}
Because the block $A_e$ is fully assembled, we may begin the
$LU$-factorization early and at the element level. Thus for each
element, we can multiply the top row by $C_eA_e^{-1}$ and subtract
from the bottom row,
\begin{equation}
\begin{bmatrix}
A_e & B_e\\
0 & D_e-C_eA_e^{-1}B_e
\end{bmatrix}\cdot\begin{bmatrix}
x_e\\
y_e
\end{bmatrix} = \begin{bmatrix}
f_e\\
g_e-C_eA_e^{-1}f_e
\end{bmatrix}\label{eqn:sc}
\end{equation}
Then, for each element $e=1,2$, the block matrix $D_e-C_eA_e^{-1}B_e$
and the vector $g_e-C_eA_e^{-1}f_e$ are assembled. After the
contributions from both elements are assembled, we can solve for
$y$. Once $y$ is computed, we can resort to backward substitution at
the element level to compute $x_e$.
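As an illustration, the element-level condensation and recovery described above can be sketched in a few lines of NumPy. This is only an indicative fragment (the matrix sizes and helper names are ours, chosen for the example), not the paper's implementation:

```python
import numpy as np

def condense(A, B, C, D, f, g):
    """Element Schur complement D - C A^{-1} B and condensed
    right-hand side g - C A^{-1} f (static condensation)."""
    AinvB = np.linalg.solve(A, B)
    Ainvf = np.linalg.solve(A, f)
    return D - C @ AinvB, g - C @ Ainvf

def recover_interior(A, B, f, y):
    """Backward substitution at the element level: solve A x = f - B y."""
    return np.linalg.solve(A, f - B @ y)

# Two-element sketch with 3 interior and 2 shared unknowns per element.
rng = np.random.default_rng(0)
n_int, n_sh = 3, 2
elems = []
for _ in range(2):
    M = rng.standard_normal((n_int + n_sh,) * 2)
    M = M @ M.T + (n_int + n_sh) * np.eye(n_int + n_sh)  # make SPD
    f, g = rng.standard_normal(n_int), rng.standard_normal(n_sh)
    elems.append((M[:n_int, :n_int], M[:n_int, n_int:],
                  M[n_int:, :n_int], M[n_int:, n_int:], f, g))

# Assemble the condensed interface system, solve for the shared y, then
# recover each element's interior unknowns x_e by backward substitution.
S = sum(condense(*e)[0] for e in elems)
rhs = sum(condense(*e)[1] for e in elems)
y = np.linalg.solve(S, rhs)
xs = [recover_interior(A, B, f, y) for (A, B, C, D, f, g) in elems]
```

Solving the fully assembled two-element system directly yields the same $y$ and $x_e$, which is the content of the Schur-complement identity.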
\subsection{Multilevel example: eight element mesh in three dimensions}
The procedure can be recursively generalized into multiple levels. For
example, the procedure for an eight element mesh is shown in
figure~\ref{f:multifrontal}. The elimination proceeds as follows:
\begin{enumerate}
\item Perform the local elimination of fully assembled degrees of
freedom in each element as described in section~\ref{s:sc}. Note
that the degrees of freedom eliminated, if any, are those which have
support only on the element, the so-called bubble functions.
\item Pair the eight elements into four clusters, where each pair of
  elements shares a common face. To these frontal matrices, we apply
the algorithm again, that is, we eliminate the fully assembled
degrees of freedom in terms of the remaining degrees of freedom
shared with other elements. At this level, the degrees of freedom
eliminated are those with support on the shared face.
\item At the next level, we pair the four element clusters again into
two which share a common interface. The recursive elimination
procedure is applied at this level and repeated until we obtain a
single cluster whose degrees of freedom are fully assembled (the top
level in figure~\ref{f:multifrontal}).
\end{enumerate}
The connectivity graph describing the order of elimination and
clustering is called the {\em elimination tree}. At this point, with
the solution to the fully assembled system, we can move down the
elimination tree, using backward substitution to recover the
remaining unknown degrees of freedom (those that were fully assembled
at each elimination level).
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{./figs/multifrontal.pdf}
\caption{The four levels of the elimination tree for a cube-shaped mesh with eight finite elements}\label{f:multifrontal}
\end{figure}
\subsection{Cost of the Schur complement}
In order to estimate the FLOPS and memory required to perform the
above partial $LU$ factorization, we will count operations and memory
used in forming the Schur complement, as shown in
equation~(\ref{eqn:sc}). We denote the dimension of the square matrix
$A$ by $q$. We denote the number of columns in $B$ and rows in $C$ as
$r$, where $r$ is assumed to be constant. Then we have:
\begin{equation}
\begin{tabular}{l}
\mbox{FLOPS} = ${\cal O}( q^3 + q^2 r + q r^2) = {\cal O}( q^3 + q r^2)$\\[0.03in]
\mbox{Memory}= ${\cal O}( q^2 + q r)$
\end{tabular}\label{eqn:schur}
\end{equation}
The FLOPS estimate is obtained by counting the operations needed to
find the $LU$ factors of $A$, $\mathcal{O}(q^3)$. To this we add the
FLOPS required to perform $r$ back-substitutions to form $A^{-1}B$,
$\mathcal{O}(rq^2)$. Finally, we add the cost of matrix multiplication
of $C$ to $A^{-1}B$, $\mathcal{O}(q r^2)$. The memory estimate is
obtained by adding the memory needed to store the $LU$ factors of the
matrix $A$, $\mathcal{O}(q^2)$, to that required to store $A^{-1}B$
and $CA^{-1}$, $\mathcal{O}(rq)$.
In the above memory estimate, we are only concerned with the space
required to store $L$ and $U$, since it is well known that the memory
needed to store the original matrix never exceeds that needed to store
the factors $L$ and $U$. In particular, we have not
included the memory required to store the Schur complement, since this
is replaced in the next steps of $LU$ factorization by additional
Schur complement operations.
\subsection{Cost of the multi-frontal solver}
We divide our computational domain in $N_c$ clusters of elements. For
the $C^0$ case, each cluster is simply an element, while for
$C^{p-1}$, each cluster is a set of $p+1$ consecutive elements in each
spatial dimension. We assume for simplicity that the number of
clusters in our computational domain is $(2^d)^s$, where $s$ is a
positive integer which represents the number of levels of the
multi-frontal algorithm. Notice that even if this assumption is not
satisfied, the final result still holds provided that the number
of degrees of freedom is sufficiently large.
The multi-frontal direct solver algorithm is summarized in
algorithm~\ref{a:multifrontal}. The FLOPS and memory required by
algorithm~\ref{a:multifrontal} can be expressed as
\begin{equation}
\sum_{i=0}^{s-1} N_c(i)S(i)\label{eqn:cost}
\end{equation}
where $S(i)$ is the cost (either FLOPS or memory) of performing each
Schur complement at the $i^{th}$ level. Using the notation of the
previous subsection on the Schur complement, we define $q=q(i)$ as the
number of interior unknowns of each cluster at the $i^{th}$ step, and
$r=r(i)$ as the number of interacting unknowns at the $i^{th}$
step. We construct estimates for these numbers and summarize them in
table~\ref{tab:q_r_estimates}.
\begin{algorithm}
\caption{Multi-Frontal Algorithm\label{a:multifrontal}}
\begin{algorithmic}[1]
\For{$i=0 \mbox{ to } s-1$}
\State $N_c=N_c(i)=(2^d)^{s-i}$
\If{$i=0$}
\State Define $N_c(0)$ clusters
\Else
\State Join the old $N_c(i-1)$ clusters
\State Eliminate interior degrees of freedom
\State Define $N_c(i)$ new clusters
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
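To make the summation \eqref{eqn:cost} concrete, the level-by-level accumulation can be sketched as follows. This is an illustrative Python fragment of our own (not from the paper), with all hidden constants set to~1, using the $C^0$ entries of table~\ref{tab:q_r_estimates}:

```python
def total_flops_c0(s, p, d=3):
    """Accumulate sum_i N_c(i) * S(i) for C^0 spaces: q(0) = p^d, and
    q(i) = r(i) = 2^{(d-1) i} p^{d-1} for i >= 1 (all constants = 1)."""
    total = (2 ** d) ** s * (p ** d) ** 3        # level 0: N_c(0) * q(0)^3 (leading term)
    for i in range(1, s):
        n_c = (2 ** d) ** (s - i)                # clusters at level i
        q = r = 2 ** ((d - 1) * i) * p ** (d - 1)
        total += n_c * (q ** 3 + q * r ** 2)     # Schur cost O(q^3 + q r^2)
    return total
```

For $d=3$ the last levels dominate and the count grows like $2^{6s}p^6$, in agreement with the 3D $C^0$ FLOPS estimate derived below.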
\begin{table}[htp]
\centering
\caption{\label{tab:q_r_estimates} Number of interior ($q$) and interacting ($r$)
unknowns at each level $i$ of the multi-frontal solver.}
\begin{tabular}{lcccccc}
\hline
& & $q(0)$ & $r(0)$ & & $q(i),\ i \neq 0$ & $r(i),\ i \neq 0$ \\
\hline
$C^0$ & & ${\cal O}(p^d)$ & ${\cal O}( p^{d-1})$ & &
${\cal O}( 2^{(d-1)i} p^{d-1})$ & ${\cal O}( 2^{(d-1)i} p^{d-1})$ \\
$C^{p-1}$ & & ${\cal O}( 1)$ & ${\cal O}(p^d)$ & &
${\cal O}(2^{(d-1)i} p^d)$ & ${\cal O}( 2^{(d-1)i} p^d)$ \\
\hline
\end{tabular}
\end{table}
Let $N$ be the total number of unknowns in the original system. We use
the results from table~\ref{tab:q_r_estimates} with the FLOPS and
memory estimates in equation~(\ref{eqn:schur}) to develop
table~\ref{tab:S_i_estimates}. This table describes the cost in FLOPS
and memory of each level of the multi-frontal algorithm.
\begin{table}[htp]
\centering
\caption{\label{tab:S_i_estimates}FLOPS and memory estimates at each
level $i$ of the multi-frontal solver.}
\begin{tabular}{lcccccc}
\hline
& & FLOPS & Memory & & FLOPS & Memory\\
& & S(0) & S(0) & &
S(i) , $i \neq 0$ & S(i) , $i \neq 0$ \\
\hline
$C^0$ & & ${\cal O}(p^{9})$ & ${\cal O}(p^{6})$ & & ${\cal O}(
2^{6i} p^{6})$ & ${\cal O}( 2^{4i} p^{6})$ \\
$C^{p-1}$ & & ${\cal O}( p^{6})$ & ${\cal O}(p^3)$ & & ${\cal O}(
2^{6i} p^{9})$ & ${\cal O}( 2^{4i} p^{6})$ \\
\hline
\end{tabular}
\end{table}
Finally we use equation~(\ref{eqn:cost}) and table~\ref{tab:S_i_estimates}
to specialize estimates for $C^0$ and $C^{p-1}$ B-splines in one to
three spatial dimensions.
\paragraph{Estimates for 1D $C^0$ B-splines}
\begin{equation*}
\begin{tabular}{ll}
FLOPS = & $\displaystyle 2^{s} p^3 + \sum_{i=1}^{s-1} 2^{s-i} = {\cal O}(2^s p^3) = {\cal
O}(N_p p^3) = {\cal O}(N p^2)$\\
Memory = & $\displaystyle 2^{s} p^2 + \sum_{i=1}^{s-1} 2^{s-i} = {\cal O}(2^s p^2) = {\cal
O}(N_p p^2) = {\cal O}(N p)$
\end{tabular}
\end{equation*}
\paragraph{Estimates for 1D $C^{p-1}$ B-splines}
\begin{equation*}
\begin{tabular}{ll}
FLOPS = & $\displaystyle 2^{s} p^2 + \sum_{i=1}^{s-1} 2^{s-i} p^3 = {\cal O}(2^s p^3) = {\cal
O}(N_p p^3) = {\cal O}(N p^2)$,\\
Memory = & $\displaystyle 2^{s} p + \sum_{i=1}^{s-1} 2^{s-i} p^2 = {\cal O}(2^s p^2) = {\cal
O}(N_p p^2) = {\cal O}(N p)$.
\end{tabular}
\end{equation*}
\paragraph{Estimates for 2D $C^0$ B-splines}
\begin{equation*}
\begin{tabular}{ll}
FLOPS =& $\displaystyle 2^{2s} p^6 + \sum_{i=1}^{s-1} 2^{2(s-i)} 2^{3i} p^3 = {\cal
O}(2^{2s} p^6 + 2^{3s} p^3) =$\\
& $\displaystyle {\cal O}(N_p^2 p^6 + N_p^3 p^3) =
{\cal O}(N p^4 + N^{1.5})$\\
Memory =& $\displaystyle 2^{2s} p^4 + \sum_{i=1}^{s-1} 2^{2(s-i)} 2^{2i} p^2 = {\cal
O}(2^{2s} p^4 + s^2 2^{2s} p^2) =$\\
& $\displaystyle {\cal O}(N_p^2 p^4 + N_p^2 p^2 \log (N_p^2/p^2)) = {\cal O}(N p^2 + N \log (N/p^2))$
\end{tabular}
\end{equation*}
\paragraph{Estimates for 2D $C^{p-1}$ B-splines}
\begin{equation*}
\begin{tabular}{ll}
FLOPS =& $\displaystyle 2^{2s} p^4 + \sum_{i=1}^{s-1} 2^{2(s-i)} 2^{3i} p^6 = {\cal
O}(2^{2s} p^4 + 2^{3s} p^6) =$\\
& $\displaystyle {\cal O}(N_p^3 p^6) = {\cal O}(N^{1.5} p^3)$\\
Memory =& $\displaystyle 2^{2s} p^2 + \sum_{i=1}^{s-1} 2^{2(s-i)} 2^{2i} p^4 = {\cal
O}(2^{2s} p^2 + s^2 2^{2s} p^4) =$\\
& $\displaystyle {\cal O}(N_p^2 p^4 \log (N_p^2/p^2)) = {\cal O}(p^2 N \log (N/p^2))$
\end{tabular}
\end{equation*}
\paragraph{Estimates for 3D $C^{0}$ B-splines}
\begin{equation*}
\begin{tabular}{ll}
FLOPS =& $\displaystyle 2^{3s} p^9 + \sum_{i=1}^{s-1} 2^{3(s-i)} 2^{6i} p^6 = {\cal
O}(2^{3s} p^9 + 2^{6s} p^6) =$\\
& $\displaystyle {\cal O}(N_p^3 p^9 + N_p^6 p^6) =
{\cal O}(N p^6 + N^2)$\\
Memory =&$\displaystyle 2^{3s} p^6 + \sum_{i=1}^{s-1} 2^{3(s-i)} 2^{4i} p^4 = {\cal
O}(2^{3s} p^6 + 2^{4s} p^4) =$\\
& $\displaystyle {\cal O}(N_p^3 p^6 + N_p^4 p^4) =
{\cal O}(N p^3 + N^{4/3})$
\end{tabular}
\end{equation*}
\paragraph{Estimates for 3D $C^{p-1}$ B-splines}
\begin{equation*}
\begin{tabular}{ll}
FLOPS =& $\displaystyle 2^{3s} p^6 + \sum_{i=1}^{s-1} 2^{3(s-i)} 2^{6i} p^9 = {\cal
O}(2^{3s} p^6 + 2^{6s} p^9) =$\\
& $\displaystyle {\cal O}(N_p^3 p^6 + N_p^6 p^9) =
{\cal O}(N^2 p^3)$\\
Memory =&$\displaystyle 2^{3s} p^4 + \sum_{i=1}^{s-1} 2^{3(s-i)} 2^{4i} p^6 = {\cal
O}(2^{3s} p^4 + 2^{4s} p^6) =$\\
& $\displaystyle {\cal O}(N_p^4 p^6) =
{\cal O}(p^2 N^{4/3})$
\end{tabular}
\end{equation*}
% arXiv:1304.1312: On nonlinear potential theory, and regular boundary
% points, for the p-Laplacian in N space variables
\part{Foreword}\label{foreword}
\vspace{0.2cm}
\section{Introduction}\label{introduction}
At the very beginning of the seventies we proved a set of results
concerning nonlinear potential theory related to the so-called
$\,p-$Laplace operator. Following \cite{ricmat}, here we use the
symbol $"\,t\,"$ instead of the nowadays more common $"\,p\,"$, to
denote the leading integrability exponent (see \eqref{zeroquatro}).
In the 1972 paper \cite{ricmat} (see also \cite{c3}) we considered,
in a non-linear setting, notions such as barriers, order
preservation, capacitary potentials, regular boundary points, and so
on. This contribution seems almost forgotten in the subsequent
literature. However, we believe that its topicality and interest
still remain, or have even grown. In fact, the basic ideas on which
the theory is founded were emphasized by the original simplicity of
its broad lines. In Part I, we turn back to the results published in
reference \cite{ricmat}. We keep the presentation as close as
possible to the original paper. However, addition of suitable
remarks, together with some changes in notation, may help the
reader. By the way, we warn the reader that \cite{ricmat} is full of
small misprints, luckily very easy to single out and correct. In
Part II we turn back to an unpublished proof of a result stated in
reference \cite{ricmat} (theorem \ref{teo-unpub} below), and to a
related result proved in reference \cite{b1} (theorem \ref{teo-bvc}
below), both concerning regularity of boundary points for
$\,p$-Laplacian equations. The contribution of \cite{b1} to this
last problem was to prove H\"older continuity of the solutions to
the obstacle problem in the lower dimension $\,N-\,1\,.$ Below we
merely prove the continuity of the above solutions,
since this weaker result is sufficient here.
\vspace{0.2cm}
The main object of this work is the Dirichlet boundary value problem
\eqref{zeroseis}, whose prototype is the following problem
\begin{equation}\label{zerozero}\left\{
\begin{array}{ll}\vspace{1ex}
div \,\big(\,|\,{\nabla} \,u\,|^{t-\,2}\,{\nabla}\,u\,\big) =\,0 \ \mbox{ in }
\Omega\,,
\\%
u=\,\phi\ \mbox{ on } \partial \Omega\,.
\end{array}\right .
\end{equation}
For $\,t=\,2\,$ we get the classical Laplace equation. It is worth
noting that the theory developed in references \cite{ricmat} and
\cite{b1} could have been extended to similar, but more general,
equations. However, at that time, we were only interested in the
basic picture. Regular boundary points for the above Dirichlet
problem is here the core subject, since it establishes at any time
the direction to follow to get to its resolution. In equation
\eqref{zeroseis}, arbitrary continuous boundary data $\,\phi\,$ are
allowed. This leads us to consider two distinct notions of
solutions, generalized and variational.
\vspace{0.2cm}
We recall that a boundary point $\,y\,$ is said to be regular if, for
each continuous boundary datum $\,\phi\,,$ the corresponding solution
is continuous at $\,y\,.$ Theorem \ref{teo-A} below (called theorem
A, in reference \cite{ricmat}) states that a point $\,y\,$ is
regular if and only if there is at $\,y\,$ a system of non-linear
barriers, see definition \ref{defzeroum}. By appealing to this last
result, we prove the theorem \ref{teo-B} (called theorem B in
reference \cite{ricmat}), which establishes that a point
$\,y\in\,\partial\,{\Omega}\,$ is regular if and only if the $\,t-$capacitary
potentials of the sets $\,E_{\rho}\,$ satisfy \eqref{zerodezasseis},
for each positive real $\,m\,$, and each sufficiently small radius
$\,\rho\,,$ where
$$
\,E_{\rho}=\,(\complement \,{\Omega})(y,\,\rho)\,,
$$
denotes the complementary set of $\,{\Omega}\,$ with respect to the
closed ball $\,\overline{I(y,\,\rho)}\,$.%
\vspace{0.2cm}
In part II, by appealing to the theorem \ref{teo-B}, we establish
two explicit geometrical sufficient conditions for regularity. Let
us briefly illustrate these results.\par%
Denote by
\begin{equation}\label{densmedida}
\sigma(\rho)=\,\frac{|\,E_\rho\,|}{|\,I(y,\,\rho)\,|}%
\end{equation}
the density (with respect to the $N$-dimensional Lebesgue measure)
of $\,E_\rho\,$ with respect to the ball $\,I(y,\,\rho)\,$. In
theorem \ref{teo-unpub} it is stated that there is a positive
constant $\,\Lambda\,$ such that if
\begin{equation}\label{densasr}
\big[\,\sigma(\rho)\,\big]^{\frac{t}{t-\,1}}\geq\,\Lambda\,(\log\,\log\,\rho^{-\,1}\,)^{-1}\,,
\end{equation}
for small, positive, values of $\,\rho\,,$ then the boundary point
$\,y\,$ is regular. Note that
\begin{equation}\label{melhor}
\lim_{\rho \rightarrow \,0} \,\sigma(\rho) =\,0
\end{equation}
is allowed; hence the above condition is less restrictive than the usual
$\,N-$dimensional external cone property and similar notions. This
result was already stated in the introduction of reference
\cite{ricmat} (due to a misprint, the second exponent $\,-\,1\,$ in
\eqref{densasr} was overlooked). At that time we did not publish the
proof, since we had used similar ideas in reference \cite{b1}, where
it was proved (still, appealing to theorem \ref{teo-B}) that a
boundary point $\,y\,$ is regular if a $\,(N-\,1)-$dimensional
external cone property is satisfied at the point $\,y\,$ (a
Lipschitz image of such a cone being sufficient).
See theorem \ref{teo-bvc} below.\par%
In fact, theorems \ref{teo-unpub} and \ref{teo-bvc} are corollaries
of the same result, theorem \ref{teo-nopub}, where it is proved that
the necessary and sufficient condition for regularity stated in
theorem \ref{teo-B} holds under the assumption \eqref{e62b}. The
proofs of theorems \ref{teo-nopub}, \ref{teo-unpub} and
\ref{teo-bvc} are shown in Part II below.
\vspace{0.2cm}
Our proofs do not require knowledge of particularly specialized
results. They appeal, in particular, to a suitable extension of De
Giorgi's truncation method to non-linear variational inequalities
with obstacles, following in particular reference \cite{c3} (see also
\cite{c1}). De Giorgi's truncation method was also used in reference
\cite{ziemer} to obtain the following sufficient condition for
regularity:
\begin{equation}\label{ziemer}
\limsup_{\rho \rightarrow \,0}
\,\textrm{cap}(E_{\rho}\,)\,\rho^{t-\,N}
>\,0\,,
\end{equation}
where $\,\textrm{cap}\equiv\, \textrm{cap}_{\,t}\,$ denotes (here
and in the sequel) the capacity of order $\,t\,.$ Since
$$
|\,E\,|^{\frac{N-\,t}{N}} \leq\, C\,\,\textrm{cap}_{\,t}\,E\,,
$$
condition \eqref{ziemer} leads to
\begin{equation}\label{ziemer2}
\limsup_{\rho \rightarrow \,0}
\,\frac{|\,E_\rho\,|}{|\,I(y,\,\rho)\,|} >\,0\,,
\end{equation}
which, basically, is equivalent to the $\,N-$dimensional external
cone property, as well as to the corkscrew condition stated in
\cite{heinonen}, theorem 6.31. This treatise furnishes a
wide-ranging excursion into the above and related results. See, in
particular, chapter 9.
\vspace{0.2cm}
Readers interested in a quick overview of the main results may go
directly to definition \ref{defzeroum} and theorem \ref{teo-A}; to
definitions \ref{defzerodois} and \ref{demac}, and theorem
\ref{teo-B}; and, in Part II, to theorems \ref{teo-unpub} and
\ref{teo-bvc}.
\part{}
\section{Some definitions and main results}\label{defas}
We are concerned with the differential operator
\begin{equation}\label{zeroum}
\mathcal{L}\, u =:\,div \,A(\nabla\,u)\,,
\end{equation}
where $\,A(p)\,$ denotes a continuous map from ${\mathbb R}^N$ into itself,
$u$ is a real function defined on an open subset of $\,{\mathbb R}^N\,,$ and
$\,\nabla\,u$ is its gradient. We assume the following conditions on
$\,A(p)\,$:
\begin{equation}\label{zerodois}
A(0)=\,0\,,
\end{equation}
\begin{equation}\label{zerotres}
\big(\,A(p)-\,A(q)\,\big)\cdot\,(p-\,q) > \,0\,,\quad \textrm{if}
\quad p \neq\,q\,,
\end{equation}
\begin{equation}\label{zeroquatro}
A(p)\cdot\,p \geq\,a\,|\,p\,|^t,\quad \textrm{if} \quad
|p|\geq\,p_{\,0}\,,
\end{equation}
\begin{equation}\label{zerocinco}
|A(p)|\leq\,a^{-\,1}\, |p\,|^{t-\,1},\quad \textrm{if} \quad
|p|\geq\,p_{\,0}\,,
\end{equation}
where $\,a>\,0$, $\,p_{\,0} \geq \,0\,,$ and $\,t>\,0$ are
constants. Further, $\,|\,x\,|\,$ and $\,x\cdot\,y\,$ denote,
respectively, the norm and the scalar product in $\,{\mathbb R}^N\,.$ Note
that the above assumptions imply $\,A(p)\cdot\,p >\,0\,$ for every
$\,p \neq\,0\,.$\par%
\vspace{0.2cm}
In the following, $\,{\Omega}\,$ is an open bounded subset of ${\mathbb R}^N$,
with boundary denoted by $\,\partial\,{\Omega}\,$. We define
$H^{1,\,t}({\Omega})\,$ as the completion of $\,C^1(\overline{\Omega})$
(or equivalently, $\,Lip\,(\overline{\Omega})\,)\,$ with respect to
the norm $\,\|\,v\,\|_{1,\,t}=\,\|\,v\,\|_t +\,\|\nabla\,v\,\|_t\,$.
$\,C^1(\overline{\Omega})$ is the set of functions which belong to
$\,C^0(\overline{\Omega})\,,$ and have continuous first order
partial derivatives in $\,{\Omega}\,,$ which can be extended continuously
to $\,\overline{\Omega}\,.$ Furthermore, $\,H^{1,\,t}_0({\Omega})\,$
denotes the closure in $\,H^{1,\,t}({\Omega})\,$ of
$\,C^1_0(\overline{\Omega})$, the set of the
$\,C^1(\overline{\Omega})$ functions, with compact support in
$\,{\Omega}\,$. See, for instance \cite{b9}. Furthermore,
$\,H^{1,\,t}_{loc}({\Omega})\,$ denotes the set consisting of functions
defined in $\,{\Omega}\,$, whose restriction to any $\,{\Omega}'
\subset\,\subset \,{\Omega}\,$ belongs to $\,H^{1,\,t}({\Omega}')\,.$\par%
We recall here the following property. Let $\,\phi(s)\,$ be a real,
Lipschitz continuous function of the real variable $\,s\,$, with, at
most, a finite number of points of non-differentiability. Further,
let $\,v \in\,H^{1,\,t}({\Omega})\,.$ Then
$\,\phi(v(x))\in\,H^{1,\,t}({\Omega})\,;$ moreover, $
\partial_i\,\phi(v(x))=\,\phi'(v(x))\,\partial_i\,v(x)\,, $ a.e. in $\,{\Omega}\,.$
In particular
\begin{equation}\label{rq}%
\partial_i\,max\{v(x),\,k\}=\, \left\{\begin{array}{ll}\displaystyle \partial_i\,v(x)
&\displaystyle \mbox{ if }\ v(x)\geq\,k\,,\\
\hskip1cm 0 & \displaystyle \mbox{ if }\ v(x)\leq\,k\,,%
\end{array}\right.\end{equation}
a.e. in $\,{\Omega}\,.$\par%
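For later use, here is the basic instance of this chain rule, the truncation underlying De Giorgi's method (the characteristic-function notation $\,\chi_{\{v\geq\,k\}}\,$ is ours):

```latex
% \phi(s) = \max\{s-k, 0\} is Lipschitz and non-differentiable only at
% s = k. By the property above, for v \in H^{1,t}(\Omega) and k real:
\max\{v-\,k,\,0\} \,\in\, H^{1,\,t}(\Omega)\,,
\qquad
\nabla\,\max\{v-\,k,\,0\} \,=\, \chi_{\{v\geq\,k\}}\,\nabla\,v
\quad \text{a.e. in } \Omega\,,
% whence the level-set energy identity
\int_{\Omega} |\,\nabla\,\max\{v-\,k,\,0\}\,|^t\,dx
\,=\, \int_{\{v\geq\,k\}} |\,\nabla\,v\,|^t\,dx\,.
```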
For convenience, we set
$$
\mathbb V=\,\mathbb V({\Omega})=\,H^{1,\,t}({\Omega})\,, \quad
\mathbb V_0=\,\mathbb V_0({\Omega})=\,H^{1,\,t}_0({\Omega})\,,
$$
and so on.\par%
In the sequel we are interested in the Dirichlet problem
\begin{equation}\label{zeroseis}\left\{
\begin{array}{ll}\vspace{1ex}
\mathcal{L}\,u =\,0 \ \mbox{ in } \Omega\,,
\\%
u=\,\phi\ \mbox{ on } \partial \Omega\,,
\end{array}\right .
\end{equation}
where $\,\mathcal{L}\,u\,$ is defined by \eqref{zeroum}, and $\,\phi \in
\,C^0(\partial\,{\Omega})\,.$ Below we show that to each $\,\phi \in
\,C^0(\partial\,{\Omega})\,$ there corresponds a unique solution $\,u
\in\,H^{1,\,t}_{loc}({\Omega})\cap \,C^0({\Omega})\,$ to the problem
\eqref{zeroseis}, see theorem \ref{existesum}. This solution will be
called the generalized solution.\par%
Since $\,A(p)\,$ may be merely continuous, local solutions of
problem $\,\mathcal{L}\,u =\,0\,$ in $\, \Omega\,$ are understood in the
following, well known, weak sense. One considers the form
\begin{equation}\label{aform}
{\mathfrak{a}}(v,\,\psi)=:\,\int_{{\Omega} }\, A({\nabla}\,v\,)\cdot\,{\nabla}\,\psi\, dx\,,
\end{equation}
defined on $\,\mathbb V\times\,\mathbb V\,$, or on
$\,H^{1,\,t}_{loc}({\Omega})\times\,\mathcal{D}({\Omega})\,,$ and give the following
definition.
\begin{definition}\label{nig}
We say that a function $\,u\,$ is a weak solution in $\,{\Omega}\,$ of
problem
\begin{equation}\label{doisum} \mathcal{L} \,u \equiv \,div
\,A({\nabla}\,u)=\,0
\end{equation}
if $\,u \,$ belongs to $\, H^{1,\,t}_{loc}({\Omega})\,$ and satisfies the
condition
\begin{equation}\label{doisdois}
{\mathfrak{a}}(u,\,\psi) =\,0\,, \quad \forall \,\psi\,\in \,\mathcal{D}({\Omega})\,.
\end{equation}
\end{definition}
Note that it immediately follows that \eqref{doisdois} holds for all
$\,\psi \in\, H^{1,\,t}({\Omega})\,$ with
compact support in $\,{\Omega}\,.$\par%
The above definition does not take into account boundary values. The
definition of generalized solution to the boundary value problem
\eqref{zeroseis}, where $\phi \in\,\,C^0(\partial\,{\Omega})\,,$ is given
below, see definition \ref{saneg}. Generalized solutions to the
boundary value problem are defined as limits of suitable sequences
of variational solutions. In reference \cite{ricmat} we have used in
both cases the term ``solution''. However, for clarity, in these
notes we use the two terms, ``variational'' and ``generalized'',
to denote related but distinct concepts.\par%
Next, we recall the definition of variational solution. Let $\,\phi
\in \,\mathbb V({\Omega})\,.$ We set
\begin{equation}\label{zerodez}
\mathbb V_{\phi}({\Omega})=\,\big\{\,v \in\,\mathbb V({\Omega})\,:\,v-\,\phi \in
\,\mathbb V_0({\Omega})\,\big\}\,.
\end{equation}
Properties (i) to (iv) below are easily shown.%
\vspace{0.2cm}
(i) $\quad {\mathfrak{a}}(v,\,v-\,u) -\, {\mathfrak{a}}(u,\,v-\,u)\geq\,0\,,$ for every pair
$u,\,v \in\, \mathbb V({\Omega})\,$ (monotonicity);\par%
(ii) $\quad {\mathfrak{a}}(u+\,s\,v,\,w)$ is a continuous function of the real
variable $\,s\,$, for every triad $\,u,\,v,\,w \in\,\mathbb V({\Omega})\,$
(hemicontinuity);\par%
(iii) $\quad {\mathfrak{a}}(v,\,v-\,u) -\, {\mathfrak{a}}(u,\,v-\,u)=\,0\,$ implies
$\,\nabla\,u=\,\nabla\,v\,$ in $\,{\Omega}\,$; moreover, if $\,u-\,v
\in\,\mathbb V_0({\Omega})\,$ then $\,u=\,v\,$;\par%
(iv) One has (coercivity)
\begin{equation}\label{zerodoze}
\lim_{\|\,v\,\|_{1,\,t} \rightarrow \,\infty}\,
\frac{{\mathfrak{a}}(v,\,v)}{\|\,v\,\|_{1,\,t}}=\,+\,\infty\,,
\end{equation}
where $\,v\in\,\mathbb V_{\phi}({\Omega})\,$.\par%
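For completeness, a sketch of the verification of (iv), under the additional (standard) assumption $\,t>\,1\,$; the constants $\,C_P\,$ (a Poincar\'e constant for $\,\mathbb V_0({\Omega})\,$) and $\,c_0,\,c_1,\,c_2\,$ are ours:

```latex
% By \eqref{zerodois}-\eqref{zeroquatro}, A(p).p >= a|p|^t - a p_0^t
% for every p, hence
\mathfrak{a}(v,\,v) \,\geq\, a\,\|\nabla v\|_t^t \,-\, a\,p_0^{\,t}\,|\Omega|\,.
% On V_\phi one has v - \phi \in V_0(\Omega), and Poincare's
% inequality gives
\|v\|_t \,\leq\, \|\phi\|_t \,+\, C_P\,\|\nabla(v-\,\phi)\|_t
\,\leq\, c_0\,(1+\,\|\nabla v\|_t)\,,
% so \|v\|_{1,t} -> infinity forces \|\nabla v\|_t -> infinity, and
\frac{\mathfrak{a}(v,\,v)}{\|v\|_{1,\,t}}
\,\geq\, \frac{a\,\|\nabla v\|_t^t-\,c_1}{c_2\,(1+\,\|\nabla v\|_t)}
\,\longrightarrow\,+\,\infty \qquad (t>\,1)\,.
```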
Existence and uniqueness of the solution to the following
variational problem is well known:
\begin{equation}\label{zerotreze}
u_1 \in\,\mathbb V_{\phi}({\Omega})\,, \quad {\mathfrak{a}}(u_1,\,v)=\,0 \quad \forall
v\in\,\mathbb V_0({\Omega})\,.
\end{equation}
Clearly, these solutions are weak solutions of \eqref{doisum}
in $\,{\Omega}\,$. All this was already classical in the sixties.\par%
\begin{definition}\label{vasol}
The function $\,u=\,u_1 \,$ in \eqref{zerotreze} is, by definition,
the \emph{variational solution} to the Dirichlet problem
\eqref{zeroseis} when the boundary data is defined by means of an
element $\,\phi \in \,H^{1,\,t}({\Omega})\,.$ In this case,
$\,u=\,\phi\,$ on $\,\partial \Omega\,$ means that $\,u-\,\phi \in
\,H^{1,\,t}_0({\Omega})\,.$
\end{definition}
In the sequel, our first step is to extend to all continuous
boundary data $\,\phi\,$ the notion of solution. This will be done
as in reference \cite{ricmat}. Given $\,\phi \in\,\,C^0(\partial\,{\Omega})\,$
we consider an arbitrary sequence of functions $\,\phi_n \in
\,C^1({\overline{\Omega}})\,,$ which converges uniformly to $\,\phi\,$ on
$\,\partial\,{\Omega}\,$, and we consider the sequence $\,u_n(x)\,$ consisting
of the variational solutions to the Dirichlet problem
\eqref{zeroseis}, with boundary data $\,\phi_n\,.$ Then we prove
(theorem \ref{teodoisquatro}) that the sequence $\,u_n(x)\,$
converges uniformly in $\,{\Omega}\,$ to a function $\,u(x) \in
\,H^{1,\,t}_{loc}({\Omega}) \cap\,C^0(\,{\Omega})\,.$ Moreover, we show that
$\,u(x)\,$ is a weak solution in $\,{\Omega}\,$ of problem
\eqref{doisum}, and also that it does not depend on the particular
sequence $\,\phi_n\,.$ So, to each continuous boundary data
$\,\phi\,$ there corresponds a unique element $\,u(x) \in
\,H^{1,\,t}_{loc}({\Omega}) \cap\,C^0(\,{\Omega})\,,$ obtained by the above
procedure. The above argument leads to the following, natural, definition.\par%
\begin{definition}\label{saneg}
Let $\,\phi \in\,\,C^0(\partial\,{\Omega})\,$ be given. By definition, the
above, unique, element $\,u(x) \in \,H^{1,\,t}_{loc}({\Omega})
\cap\,C^0(\,{\Omega})\,$ is the generalized solution to the Dirichlet
problem \eqref{zeroseis} with the continuous boundary data
$\,\phi\,.$
\end{definition}
We anticipate the following result.
\begin{theorem}\label{existesum}
To each boundary value $\,\phi \in\,\,C^0(\partial\,{\Omega})\,$ there
corresponds a unique generalized solution to the Dirichlet problem
\eqref{zeroseis}.
\end{theorem}
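The mechanism behind this convergence and uniqueness is worth displaying: it is the comparison estimate \eqref{umquatro}, proved in the next section. A sketch (the primed sequence $\,\phi'_n,\,u'_n\,$ is our notation for a second admissible approximation):

```latex
% The \phi_n converge uniformly on \partial\Omega, hence form a uniform
% Cauchy sequence there. The estimate \eqref{umquatro}, applied to the
% variational solutions u_n, u_k, yields
\textrm{Sup}_{\,\Omega}\,|\,u_n-\,u_k\,|
\,\leq\, \sup_{\partial\,\Omega}\,|\,\phi_n-\,\phi_k\,|
\,\longrightarrow\,0 \qquad (n,\,k \rightarrow\,\infty)\,,
% so the u_n converge uniformly in \Omega to some u \in C^0(\Omega).
% If \phi'_n is another admissible sequence, with variational
% solutions u'_n, then
\textrm{Sup}_{\,\Omega}\,|\,u_n-\,u'_n\,|
\,\leq\, \sup_{\partial\,\Omega}\,|\,\phi_n-\,\phi'_n\,|
\,\longrightarrow\,0\,,
% so the limit does not depend on the approximating sequence.
```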
It is worth noting that the auxiliary variational solutions
$\,u_n(x)\,$ used above are not necessarily continuous up to the
boundary, even though $\,\phi_n \in \,C^1({\overline{\Omega}})\,.$ Moreover, the same
negative situation may occur for generalized solutions. Hence, a crucial
problem is to study the possible continuity up to a boundary point
$\,y\,$ of the solutions to the Dirichlet problem. In this direction
we give the following definition.
\begin{definition}\label{regpoint}
We say that a point $\,y \in \,\partial\,{\Omega}\,$ is regular, with respect
to $\,{\Omega}$ and $\,\mathcal{L}\,,$ if given an arbitrary data $\phi
\in\,\,C^0(\partial\,{\Omega})\,,$ the corresponding generalized solution $u$
of Dirichlet problem \eqref{zeroseis} satisfies the condition
\begin{equation}\label{zerosete}%
\lim_{x \,\in {\Omega}\,, \,x\rightarrow \,y}\, u(x)=\,\phi(y)\,.%
\end{equation}
\end{definition}
As proved in theorem \ref{teoquatrodois} below, the notion of
regular point has a local character.\par%
We remark that in the above definition, as in the following, we do
not assume (in any sense) that the continuous boundary data
$\,\phi\,$ is the trace on $\,\partial\,{\Omega}\,$ of an element of
$\,H^{1,\,t}({\Omega})\,.$%
\vspace{0.2cm}
For the Laplace operator, $\,A(p)=\,p\,,$ regular points have been
characterized by Wiener; see \cite{b14}, \cite{b15}, and Frostman
\cite{b5}. For linear operators with discontinuous coefficients,
$$
A_i(p)=\,\sum_{j} a_{i,\,j}(x)\,p_j\,,
$$
where
$$
\sum_{j} a_{i,\,j}(x)\,\xi_i\,\xi_j \geq\,\nu\,|\,\xi\,|^2\,,
$$
$\nu>\,0\,$, and $\,a_{i,\,j} \in\,L^{\infty}({\Omega})\,,
i,\,j=\,1,...,N\,,$ such a characterization was given by Littman,
Stampacchia, and Weinberger in \cite{b9}.%
\vspace{0.2cm}
The following definitions are crucial to the theory
(see \cite{b12}, definition 1.1, and remarks).
\begin{definition}\label{defourier}
Let $\,{\Sigma}\,$ be an open, bounded, set and $\,E \subset \overline
{\Sigma}\,$ be a measurable set. We say that $\,v \in\,H^{1,\,t}({\Sigma})\,$
satisfies the inequality $\,v \geq\,0\,$ on $\,E\,$ in the
$H^{1,\,t}({\Sigma})\,$ sense if there is a sequence $\,v_n \in
C^1(\overline {\Sigma})\,$, convergent to $\,v\,$ in $H^{1,\,t}({\Sigma})\,$,
and satisfying $\,v_n \geq \,0\,$ on $\,E\,.$ Similarly, we define
$\,v \leq\,0\,$ on $\,E\,$, in the $H^{1,\,t}({\Sigma})\,$ sense.
Further, $\,v=\,0\,$ on $\,E\,$ if, simultaneously, $\,v \geq\,0\,$
and $\,v \leq\,0\,$. Finally, $\,v\geq\,w\,$ on $\,E\,$, in the
$H^{1,\,t}({\Sigma})\,$ sense, if
$\,v-\,w\geq\,0\,$ on $\,E\,$, and so on.\par%
Furthermore, we denote respectively by $\,\sup_{E} \,v$ and
$\,\inf_{E} \,v$ the upper bound and the lower bound of $\,v\,$ on
$\,E\,$ in the $\,H^{1,\,t}({\Sigma})\,$ sense. Essential upper bounds
and lower bounds (i.e., up to sets of zero Lebesgue measure) are
denoted by the symbols $\,\textrm{Sup}_{E} \,v$ and
$\,\textrm{Inf}_{E} \,v\,$, respectively.
\end{definition}
It is worth noting that the above definition is meaningless if the
$\,(N-\,2)-$dimensional measure of the set $\,E\,$ vanishes. This claim
is in general not true if we replace $\,N-\,2\,$ by $\,N-\,1\,.$ Let
us consider the following specific example, related to our results.
Assume that $\,E\,$ is an $\,(N-\,1)-$dimensional truncated cone
(see, for instance \eqref{cumum}) contained in a given sphere
$\,{\Sigma}\,$. Since elements $\,v \in\,H^{1,\,t}({\Sigma})\,$ do have a
trace (for instance, in the usual Sobolev space sense) on the
surface $\,E\,$, it follows that if $\,v \in\,H^{1,\,t}({\Sigma})\cap
\,C^0({\Sigma})\,$ satisfies $\,v \geq\,m>\,0\,$ on $\,E\,,$ in the
$H^{1,\,t}({\Sigma})\,$ sense, then $\,v \geq\,m\,$ pointwise on
$\,E\,$. However, if $\,E\,$ is an $\,(N-\,2)-$dimensional cone and
$\,t <\,N\,$, the result is not true in general. For instance, the
continuous, constant, function $\,v=\,0\,$ in $\,{\Sigma}\,$ satisfies
$\,v \geq\,m>\,0\,$ on $\,E\,$, in the sense of definition \ref{defourier}.\par%
\vspace{0.2cm}
To illustrate the results obtained in this work, we need additional
definitions and results. Given $y\in\,{\mathbb R}^N$ and $\,\rho>\,0\,$, we
denote by $\,I(y,\,\rho)\,$ the open sphere with center $y\,$ and
radius $\,\rho\,.$ If $\,B \subset\,{\mathbb R}^N\,,$ we set
$\,B(y,\,\rho)=\,B\,\cap\,I(y,\,\rho)\,.$ By $\,\complement \,B\,$
and $\,\overline B\,$ we denote the complementary set and the
closure of $\,B\,$ in $\,{\mathbb R}^N\,,$ respectively.
\vspace{0.2cm}
As in \cite{ricmat}, we give the following definitions.
\begin{definition}\label{emcima}
We say that $\,v \in \,H^{1,\,t}_{loc}({\Omega})\,$ is a
\emph{supersolution} [resp., a \emph{subsolution}] in $\,{\Omega}\,$,
with respect to the operator $\,\mathcal{L}$, if
\begin{equation}\label{zerooito}
{\mathfrak{a}}(v,\,\psi)\geq\,0\,, \quad \forall \,\psi \in \mathcal{D}({\Omega})\,, \quad
\psi\geq\,0 \quad [\textrm{resp}.\, \psi\leq\,0\,]\,.
\end{equation}
\end{definition}
Obviously, if $\,v \in\,\mathbb V=\,H^{1,\,t}({\Omega})\,$, then $\,\mathcal{D}({\Omega})\,$
may be replaced by $\,\mathbb V_0\,.$ Formally, a supersolution satisfies
$\,\mathcal{L} \,v \leq\,0\,$ in $\,{\Omega}\,.$\par%
The following definition generalizes Perron's notion of barrier (see
Perron \cite{perron} and Courant-Hilbert \cite{b2}, pp. 306-312
and 341).%
\begin{definition}\label{defzeroum}
We say that there is a system of barriers at a point $\,y \in
\partial\,{\Omega}\,$ with respect to $\mathcal{L}\,$ if, given two positive arbitrary
reals $\,\rho\,$ and $\,m\,$, there exist a supersolution
$\,V\geq\,0\,$ and a subsolution $\,U\leq\,0\,$, which belong to
$\,\mathbb V \cap \,\,C^0({\Omega})\,,$ and satisfy the following
conditions:%
\vspace{0.2cm}
$(j) \quad V\geq\,m \quad \textrm{and} \quad \,U\leq\,-\,m\, \quad
\textrm{on} \quad (\partial\,{\Omega})\,\cap\,\complement \,I(y,\,\rho),$
\vspace{0.2cm}
$(jj) \quad \lim_{x \rightarrow\,y}\,V(x)=\,\lim_{x
\rightarrow\,y}\,U(x)=\,0\,.$
\end{definition}
In definition \ref{defzeroum}, and in the sequel, inequalities like
$\,V\geq\,m\,,$ $\,U\leq\,-\,m\,$, and so on, are to be understood in
the sense introduced in definition \ref{defourier}. Note that the
above definition does not change by restriction of the range of the
radius $\,\rho\,$ to values smaller than some positive
$\,\rho_0(y)\,$.\par%
Under suitable symmetry conditions, definition \ref{defzeroum} may
be simplified, as follows.
\begin{remark}\label{rem-1.1}
\rm{Define
\begin{equation}\label{umum}
B(p)=\,-\,A(-\,p)\,.
\end{equation}
The continuous function $\,B(p)\,$ inherits the properties
\eqref{zerodois},...,\,\eqref{zerocinco}. Furthermore, consider the
operator $\,\overline{\mathcal{L}} \,w=\,div \,B({\nabla}\,w)\,.$ The
transformation $\,w \rightarrow\, -\,w\,$ maps solutions of
\eqref{zerocatorze}, relative to one of the operators $\mathcal{L}$ or
$\,\overline{\mathcal{L}}\,,$ onto the solutions of \eqref{zeroquinze}
relative to the other operator, and reciprocally. Further, the same
transformation maps supersolutions, solutions, and subsolutions,
relative to one of the operators onto, respectively, subsolutions,
solutions, and supersolutions, relative
to the other operator.\par%
In particular, if
\begin{equation}\label{umdois}
A(-\,p)=\,-\,A(p)\,,
\end{equation}
the transformation $\,w \rightarrow\, -\,w\,$ maps supersolutions
onto subsolutions, and reciprocally. In this case, it is sufficient
in definition \ref{defzeroum} to consider supersolutions
$\,V\,$.\par%
Finally, if the function $\,A(p)\,$ is positively homogeneous
\begin{equation}\label{tressete}
A(s\,p)=\,s^{t-\,1}\,A(p), \quad \forall \,s>\,0\,,
\end{equation}
it is sufficient, in definition \ref{defzeroum}, to consider the
value $\,m=\,1\,.$}
\end{remark}
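The last reduction can be checked directly on the variational inequality \eqref{zerocatorze} below; a sketch (uniqueness of the solution is used in the final step):

```latex
% Let u solve the obstacle problem with m = 1:
u \,\in\, {\mathbb K}_1({\Sigma})\,, \qquad
\mathfrak{a}(u,\,v-\,u) \,\geq\,0 \quad \forall\, v \in\,{\mathbb K}_1({\Sigma})\,.
% For m > 0 one has w \in K_m if and only if w/m \in K_1. By the
% homogeneity \eqref{tressete}, A(m\,p) = m^{t-1} A(p), so, for every
% w \in K_m,
\mathfrak{a}(m\,u,\,w-\,m\,u)
\,=\, m^{t-1}\int_{\Sigma} A(\nabla u)\cdot\nabla(w-\,m\,u)\,dx
\,=\, m^{t}\,\mathfrak{a}\big(u,\,\tfrac{w}{m}-\,u\big) \,\geq\,0\,.
% Hence m u solves the problem relative to m; by uniqueness, the
% capacitary potential relative to m is m times the one relative to 1.
```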
Main examples:
$\,A(p)=\,(\,1+\,|\,p\,|^2\,)^{\frac{(t-\,2)}{2}}\,p\,$ satisfies
\eqref{umdois}, and $\,A(p)=\,|\,p\,|^{t-\,2}\,p\,$ satisfies both
\eqref{umdois} and \eqref{tressete}. The differential equations
associated to these functions are the Euler equations for the
extremals of the integrals $\int \, (\,1+\,|\,{\nabla}\,u\,|^2\,)^{\frac{t}{2}}
\,dx\,$ and $\int \, |\,{\nabla}\,u\,|^t \,dx\,,$ respectively.
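For the second example, all the structure conditions can be checked by direct computation, with $\,a=\,1\,$ and $\,p_0=\,0\,$ (the quantitative monotonicity bound below is the classical one for $\,t\geq\,2\,$ and is not needed in the sequel):

```latex
% A(p) = |p|^{t-2} p:
A(0)=\,0\,, \qquad A(p)\cdot p=\,|p|^t\,, \qquad |A(p)|=\,|p|^{t-1}\,,
% so \eqref{zerodois}, \eqref{zeroquatro}, \eqref{zerocinco} hold with
% a = 1, p_0 = 0; moreover
A(-\,p)=\,-\,A(p)\,, \qquad A(s\,p)=\,s^{t-1}A(p) \quad (s>\,0)\,,
% i.e. \eqref{umdois} and \eqref{tressete}. The strict monotonicity
% \eqref{zerotres} holds for every t > 1; for t >= 2 one even has the
% classical quantitative bound
\big(|p|^{t-2}p-\,|q|^{t-2}q\big)\cdot(p-\,q)
\,\geq\, 2^{\,2-\,t}\,|p-\,q|^t\,, \qquad p \neq\,q\,.
```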
\vspace{0.2cm}
In section \ref{steoa} we prove the following result (see
\cite{ricmat}, theorem A):
\begin{theorem}\label{teo-A}
A point $\,y\,$ is regular if and only if there is at $\,y\,$ a
system of barriers.
\end{theorem}
As in reference \cite{ricmat}, the symbols $\,{\Omega}\,$ and
$\,\Sigma\,$ denote suitable open sets. However, in this rewriting,
we make the reading easier by a better use of the above symbols.
\vspace{0.2cm}
Let $\,{\Sigma}\,$ be an open bounded set, $\,E \subset \Sigma\,$ be a
closed set, and $\,m\,$ be a positive constant (the fact that
$\,{\Sigma}\,$ is assumed to be a sphere is not necessary here). We
introduce the following convex, closed, subsets of
$\,\mathbb V_0(\Sigma)\,$.
\begin{equation}\label{zeroonze}
{\mathbb K}_m(\Sigma)=\,\big\{\,v \in\,\mathbb V_0(\Sigma)\,:\,v \geq\,m \quad
\mbox{on}\quad E\,\big\}\,,
\end{equation}
and
\begin{equation}\label{zeroonze2}
{\mathbb K}_{-\,m}(\Sigma)=\,-\,{\mathbb K}_m(\Sigma)=\,\big\{\,v
\in\,\mathbb V_0(\Sigma)\,:\,v \leq\,-\,m \quad \mbox{on}\quad E\,\big\}\,.
\end{equation}
Inequalities are in the $\,H^{1,\,t}(\Sigma)\,$ sense.\par%
Obviously, properties (i) to (iii) hold with $\,{\Omega}\,$ replaced by
$\,{\Sigma}\,.$ Moreover, as easily shown, the coercivity property (iv)
holds by replacing $\,v\in\,\mathbb V_{\phi}({\Omega})\,$ by
$\,v\in\,{\mathbb K}_m({\Sigma})\,,$ or by $\,v\in\,{\mathbb K}_{-\,m}({\Sigma})\,.$ Hence, from
properties (i) to (iv), together with well known general theorems
(see Hartman-Stampacchia \cite{b6} and J.-L. Lions \cite{b8}),
existence and uniqueness of solutions to the following
two problems follow.\par%
\begin{equation}\label{zerocatorze}
u_2 \in\,{\mathbb K}_m({\Sigma})\,, \quad {\mathfrak{a}}(u_2,\,v-\,u_2)\geq\,0\,, \quad
\forall v\in\,{\mathbb K}_m({\Sigma})\,;
\end{equation}
\begin{equation}\label{zeroquinze}
u_3 \in\,{\mathbb K}_{-\,m}({\Sigma})\,, \quad {\mathfrak{a}}(u_3,\,v-\,u_3)\geq\,0\,, \quad
\forall v\in\,{\mathbb K}_{-\,m}({\Sigma})\,.
\end{equation}
\vspace{0.2cm}
Next we introduce the $\,t-$capacitary potentials. The following
definition is related to the notion of capacity used by Serrin in
\cite{b11}.%
\begin{definition}\label{defzerodois}
Let $\,\Sigma\,$ be an open sphere, $\,E\subset \Sigma\,$ be a
closed set, and $\,m\,$ be a positive real. The solutions to the
problems \eqref{zerocatorze} and \eqref{zeroquinze} are called
$t-$capacitary potentials of the set $\,E\,$ with respect to the
non-linear operator $\,\mathcal{L}\,$, the real $\,m\,$ and the sphere
$\,\Sigma\,$. Since $\,t\,$ is fixed, we drop the label $\,t\,$.
\end{definition}
In definition \ref{defzerodois}, the dependence on the particular
fixed sphere $\,{\Sigma}\,$ is immaterial. In particular, the
numerical values of the related capacities remain comparable
provided that the distances from the sets $\,E\,$ to the boundary
$\,\partial\,{\Sigma}\,$ have a positive, fixed, lower bound. From now on we
fix, once and for all, a sphere
$$\,{\Sigma} =\,I(y_0,\,2\,R)\,
$$
such that
$$
{\Omega} \subset \,\,I(y_0,\,R)\,.
$$
So
$$
\textrm{dist}\,({\Omega},\,\partial\,{\Sigma}) \geq\,R\,.
$$%
Further, for each couple $\,y,\,\rho\,,$ where $\,y \in\,\partial\,{\Omega}
\,$ and $\, 0<\,\rho<\,\frac{R}{2} \,,$ we set
\begin{equation}\label{erro}
E_{\rho}=\,(\complement\,{\Omega})\cap\,\overline{I(y,\,\rho)}\,.
\end{equation}
\begin{definition}\label{demac}
We denote by $\,u_{m,\,\rho\,}\,$ and $\,u_{-m,\,\rho\,}\,$ the
capacitary potentials of the above sets $\,E_{\rho}\,$ relative to
the values $m$ and $-\,m$ respectively.%
\end{definition}
\vspace{0.2cm}
In section \ref{steob} we prove the following result (see
\cite{ricmat}, theorem B):
\begin{theorem}\label{teo-B}
A point $\,y\in\,\partial\,{\Omega}\,$ is regular if and only if the
capacitary potentials of the sets $\,E_{\rho}\,$ are continuous in
$\,y\,$. More precisely, if and only if
\begin{equation}\label{zerodezasseis}\left\{
\begin{array}{ll}\vspace{1ex}
\lim_{x \rightarrow \,y} u_{m,\,\rho}(x)=\,m\,,
\\%
\lim_{x \rightarrow \,y} u_{-m,\,\rho}(x)=\,-\,m\,,
\end{array}\right.
\end{equation}
for each couple $\,\rho\,$, $\,m\,$ as above (or, equivalently, for
a sequence $\,(\rho_n,\,m_n)\,$ such that
$\,(\rho_n,\,m_n)\rightarrow\, (0,\,+\,\infty)\,$).
\end{theorem}
From theorem \ref{teo-B}, together with the embedding of
$\,H^{1,\,t}({\Sigma})\,$ into $\,C^{0,\,1-\,\frac{N}{t}}(\overline{\Sigma})\,$,
one gets the following result.
\begin{corollary}\label{coro-C}
Any boundary point is regular with respect to the operator $\,\mathcal{L}\,$
if $\,t>\,N\,$.
\end{corollary}
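A sketch of the proof of corollary \ref{coro-C} (we only use lemma \ref{lemumcinco} and the embedding just recalled):

```latex
% For t > N, H^{1,t}(\Sigma) embeds into C^{0,1-\frac{N}{t}}(\overline\Sigma),
% so each capacitary potential u_{m,\rho} has a Holder-continuous
% representative on \overline\Sigma. The approximants in definition
% \ref{defourier} then converge uniformly, hence u_{m,\rho} >= m holds
% pointwise on E_\rho; together with u_{m,\rho} <= m (lemma
% \ref{lemumcinco}), u_{m,\rho} = m at every point of E_\rho.
% Since \Omega is open, y \in \partial\Omega \subset \complement\Omega,
% so y \in E_\rho = (\complement\Omega) \cap \overline{I(y,\rho)}, and
\lim_{x \rightarrow\,y} u_{m,\,\rho}(x)\,=\,u_{m,\,\rho}(y)\,=\,m\,,
\qquad
\lim_{x \rightarrow\,y} u_{-m,\,\rho}(x)\,=\,-\,m\,.
% This is condition \eqref{zerodezasseis}, and theorem \ref{teo-B} applies.
```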
\section{Maximum principles and related results}
In this section we state some results concerning maximum principles,
order preservation, and similar notions. Related results may be
found, for
instance, in \cite{b13}, \cite{b3}, and \cite{b10}.\par%
The section is divided into two subsections. The first one concerns
variational solutions in $\,{\Omega}\,$ to the non-linear boundary value
problem \eqref{zeroseis}. The second one concerns solutions to the
variational inequalities \eqref{zerocatorze} and \eqref{zeroquinze},
which describe obstacle problems in $\,{\Sigma}\,$.
\subsection{Variational solutions in $\,{\Omega}\,.$ }
We denote by $\,|B|\,$ the Lebesgue measure of a set $\,B\,.$ By
$\,c,\,c_0,\,c_1\,,$ etc., we denote positive constants that depend,
at most, on $\,t,\,N,\,a\,,$ and $\,p_0\,.$ The same symbol may be
used to denote different constants of the same type.\par%
One has the following \emph{maximum principle}.
\begin{lemma}\label{lemumdois}
The (variational) solution $\,u=\, u_1\,$ of problem
\eqref{zerotreze} satisfies the estimates
\begin{equation}\label{umtres}
\inf_{\partial\,{\Omega}} \phi \leq\,\textrm{Inf}_{\,{\Omega}} \,u
\leq\,\textrm{Sup}_{\,{\Omega}}\, u \leq\,\sup_{\partial\,{\Omega}} \phi\,.
\end{equation}
\end{lemma}
\begin{proof}
We prove that $\,\textrm{Sup}_{\,{\Omega}}\, u \leq\,k\,,$ where
$\,k=\,\sup_{\partial\,{\Omega}} \phi\,.$ For convenience we set
$$
{\Omega}_k=\,\{x \in\,{\Omega}:\,u(x) \geq\,k\,\}\,.
$$
If $|\,{\Omega}_k\,|=\,0\,$ the thesis is obvious. Assume that
$|\,{\Omega}_k\,|>\,0\,,$ and set $\,v=\,\max \{u-\,k,\,0\}\,$. Since
$\,v\in \,H^{1,\,t}_0({\Omega})\,,$ it follows from \eqref{zerotreze}
that
$$
\int_{{\Omega}_k} \,A({\nabla}\,u)\cdot\,{\nabla}\,u \,dx=\,0\,.
$$
This equation, together with \eqref{zerodois} and \eqref{zerotres},
shows that $\,{\nabla}\,u=\,0\,$ a.e. on $\,{\Omega}_k\,$, hence
$\,{\nabla}\,v=\,0\,$ a.e. in $\,{\Omega}\,$. Since $\,v \in
\,H^{1,\,t}_0({\Omega})\,,$ it follows that $\,v=\,0\,,$ that is
$\,u\leq\,k\,$ a.e. in $\,{\Omega}\,$. This proves our thesis. A similar
argument proves the first inequality in \eqref{umtres}.
\end{proof}
\begin{lemma}\label{lemumtres}
\emph{Order Preserving}: Let $w$ be a subsolution, $z$ a
supersolution, and assume that $\,w\leq\,z\,$ on $\,\partial\,{\Omega}\,$.
Then $\,w(x)\leq\,z(x)\,$ almost everywhere in $\,{\Omega}\,$.
\end{lemma}
\begin{proof}
Set $\,\eta=\,min\{0,\,z-\,w\}\,.$ It follows that $\,\eta\in
\,H^{1,\,t}_0({\Omega})\,,$ moreover $\,\eta(x) \leq\,0\,.$ By taking
into account definition \ref{emcima}, we may write
\begin{equation}\label{dezass}
\int_{{\Omega}} \,A({\nabla}\,w)\cdot\,{\nabla}\,\eta \,dx=\, \int_{\{w\geq\,z\}}
\,A({\nabla}\,w)\cdot\,{\nabla}\,(z-\,w)\,dx \geq\,0\,,
\end{equation}
and
\begin{equation}\label{dezaset}
\int_{\{w\geq\,z\}} \,A({\nabla}\,z)\cdot\,{\nabla}\,(z-\,w)\,dx \leq\,0\,,
\end{equation}
where $\,\{w\geq\,z\}=\,\{x\in\,{\Omega} :\,w(x) \geq\,z(x)\,\}\,.$ From
\eqref{dezass} and \eqref{dezaset} it follows that
\begin{equation}\label{dezota}
\int_{\{w\geq\,z\}} \,(\,A({\nabla}\,z) -\,A({\nabla}\,w)\,)
\cdot\,(\,{\nabla}\,z-\,{\nabla}\,w\,) \,dx \leq\,0\,.
\end{equation}
This inequality, together with the strict monotonicity \eqref{zerotres}, implies
$\,{\nabla}\,(w-\,z\,)=\,0\,$ on $\,\{w-\,z\,\geq\,0\}\,.$ By
appealing to the hypothesis $\,w-\,z\,\leq\,0\,$ on $\,\partial\,{\Omega}\,,$
the thesis follows.
\end{proof}
\begin{corollary}\label{coroumquatro}
If $\,u\,$ and $\,v\,$ are two (variational) solutions, which belong
respectively to $\,\mathbb V_{\phi}\,$ and $\,\mathbb V_{\psi}\,,$ then
\begin{equation}\label{umquatro}
\textrm{Sup}_{{\Omega}} \,|\,u-\,v\,| \leq\,\sup_{\partial\,{\Omega}}\, |\,\phi-\,\psi\,|\,.
\end{equation}
\end{corollary}
\begin{proof}
Set $\,\eta=\,\sup_{\partial\,{\Omega}}\, |\,\phi-\,\psi\,|\,.$ The function
$\,w=\,v+\,\eta\,$ is a variational solution in $
\,\mathbb V_{\psi+\,\eta}({\Omega})\,,$ moreover $\,u \leq\,w\,$ on
$\,\partial\,{\Omega}\,.$ By lemma \ref{lemumtres} it follows that $\,u
\leq\,w=\,v+\,\eta\,$ a.e. in $\,{\Omega}\,,$ that is $\,u-\,v\,
\leq\,\eta\,$ a.e. in $\,{\Omega}\,.\,$ Similarly, one proves that $\,
v-\,u\, \leq\,\eta\,,$ a.e. in $\,{\Omega}\,.$ These two relations yield
the thesis.
\end{proof}
\subsection{Variational inequalities in $\,{\Sigma}\,.$ }
In this subsection $\,{\Sigma}\,,$ $\,E\,$ and $\,m\,,$ are as in
definition \ref{defzerodois}.
\begin{lemma}\label{lemumcinco}
Let $\,u=\,u_2\,$ be the solution of problem \eqref{zerocatorze}.
Then $\,u(x)\leq\,m\,$ almost everywhere in $\,\Sigma\,$. In
particular, $\,u=\,m\,$ on $\,E\,$.\par%
Analogously, the solution $\,u=\,u_3\,$ of \eqref{zeroquinze}
satisfies the inequality $\,u(x)\geq\,-\,m\,$ almost everywhere in
$\,\Sigma\,$. In particular, $\,u=\,-m\,$ on $\,E\,$.
\end{lemma}
\begin{proof}
Let $\,u=\,u_2\,$, and set $\,v=\,\min\{u,\,m\}\,.$ Since $\,v\in
{\mathbb K}_m({\Sigma})\,,$ from \eqref{zerocatorze} we get
$$
\int_{{\Sigma}} \,A({\nabla}\,u)\cdot\,{\nabla}\,(v-\,u)\, dx \geq\,0\,,
$$
that is, setting $\,B_m=\,\{x \in\,{\Sigma}:\,u(x) \geq\,m\,\}\,,$
\begin{equation}\label{dosdos}
\int_{B_m}\,A({\nabla}\,u)\cdot\,{\nabla}\,u \,dx \leq\,0\,.
\end{equation}
From \eqref{dosdos}, \eqref{zerodois}, and \eqref{zerotres} it
follows that $\,{\nabla}\,u=\,0\,$ a.e. on the set $\,\{x \in\,{\Sigma}:\,u(x)
\geq\,m\,\}\,$. From this last property, since $\,u\in {\mathbb K}_m({\Sigma})\,$,
it readily follows that $\,u=\,m\,$ on $\,E\,.$ The second part of
the lemma may be proved in a similar way, or as a consequence of the
first part, together with the remark \ref{rem-1.1}.
\end{proof}
\begin{lemma}\label{lemumseis}
The solution $\,u=\,u_2\,$ of problem \eqref{zerocatorze} solves, in
$\,\Sigma-\,E\,$, the problem
\begin{equation}\label{umcinco}
\int_{\Sigma -\,E} \,A({\nabla}\,u)\cdot\,{\nabla}\,v \,dx=\,0\,, \quad
\forall \,v\,\in H^{1,\,t}_0(\Sigma -\,E)\,.
\end{equation}
Moreover, $\,u_2\,$ is a super-solution in $\,\Sigma\,$. Similarly,
the solution $\,u=\,u_3\,$ of \eqref{zeroquinze} solves, in
$\,\Sigma-\,E\,$, the problem \eqref{umcinco}, and is a sub-solution
in $\,\Sigma\,$.
\end{lemma}
\begin{proof}
Equation \eqref{zerocatorze} may be written in the form%
\begin{equation}\label{doqas}%
\int_{{\Sigma}} \,A({\nabla}\,u)\cdot\,{\nabla}\,(w-\,u)\, dx \geq\,0\,, \quad
\forall w\in\,{\mathbb K}_m({\Sigma})\,.%
\end{equation}
Given $\,v\,\in H^{1,\,t}_0(\Sigma -\,E)\,$, denote by $\,{\overline{v}}\,$
the function equal to $\,v\,$ in $\,{\Sigma} -\,E\,,$ and vanishing on
$\,E\,.$ By construction, the functions $\,u+\,{\overline{v}}\,$ and
$\,u-\,{\overline{v}}\,$ belong to $\,{\mathbb K}_m({\Sigma})\,.$ By replacing these
functions in equation \eqref{doqas} we obtain \eqref{umcinco}.\par%
Furthermore, $\,u\,$ is a super-solution. In fact, let $\,\psi
\in\,C^{\infty}_0({\Sigma})\,$ be non-negative. Then the function
$\,w=\,u+\,\psi\,$ belongs to $\,{\mathbb K}_m({\Sigma})\,.$ By using it as test
function in equation \eqref{doqas}, one proves \eqref{zerooito}.\par%
The second part of the lemma may be obtained similarly or,
alternatively, by appealing to remark \ref{rem-1.1}.
\end{proof}
\section{A convergence result. Proof of the existence theorem \ref{existesum}}\label{sectres}
In this section we associate to each boundary data $\,\phi\,\in
\,C^0(\partial\,{\Omega})\,$ a weak solution $\,u\,$ in $\,{\Omega}\,$ of equation
\eqref{doisum}. Recall that, by definition, $\,u\,$ is a weak
solution of \eqref{doisum} in $\,{\Omega}\,$ if $\,u \in\,
H^{1,\,t}_{loc}({\Omega})\,$ satisfies \eqref{doisdois}, namely
$$
\int_{{\Omega}} \,A({\nabla}\,u)\cdot\,{\nabla}\,\psi \,dx=\,0\,, \quad \forall
\,\psi\,\in \,\mathcal{D}({\Omega})\,.
$$
As already remarked, it immediately follows that \eqref{doisdois}
holds for all $\,\psi \in\, H^{1,\,t}({\Omega})\,,$ with compact support
in $\,{\Omega}\,.$
\begin{remark}\label{remdoisum}
$\,L^{\infty}({\Omega})$ solutions to equation \eqref{doisdois}
necessarily belong to $\,C^0({\Omega})\,.$
\end{remark}
In fact, such a solution is locally H\"older continuous in
$\,{\Omega}\,$, see Ladyzhenskaya-Ural'tseva \cite{b7}. Actually,
continuity may be proved by appealing to a simplification of the
argument used in Part II below.
\begin{lemma}\label{lemdoisdois}
A family of solutions to equation \eqref{doisdois}, equi-bounded in
$\,L^{\infty}({\Omega})\,,$ is necessarily equi-bounded in
$\,H^{1,\,t}({\Omega}')\,,$ for each $\,{\Omega}' \subset\,\subset \,{\Omega}\,.$
\end{lemma}
\begin{proof}
From the properties of $\,A(p)\,$ it immediately follows that
\begin{equation}\label{doistres}\left\{
\begin{array}{ll}\vspace{1ex}
A(p) \cdot\,p \geq\,a\,|p|^t -\,a\,p_{\,0}^{\,t}\,,
\\%
|\,A(p)\,| \leq\,a^{-\,1}\,|p|^{t-\,1}+\,d_0\,,
\end{array}\right.
\end{equation}
where $\,d_0\,$ is a non-negative constant. Let $\,k>\,0\,$, and
consider the family $\,\mathcal{F}\,$ consisting of the solutions to
\eqref{doisdois} for which $\,\textrm{Sup}_{{\Omega}}\,|\,u(x)\,| \leq\,k\,$.
Equi-boundedness of $\,\|\,u\,\|_{t,\,{\Omega}}\,$ is obvious. Let us
prove the equi-boundedness of $\,\|\,{\nabla}\,u\,\|_{t,\,{\Omega}'}\,.$ Let
$\,\Lambda\,$ be an open set such that $\,{\Omega}' \subset\,\subset
\Lambda \,\subset\,\subset \,{\Omega}\,$, and let $\,\phi\,$ be a regular
function, $\,0\leq\,\,\phi(x) \leq\,1\,$, equal to $\,1\,$ in
$\,{\Omega}'\,$, and vanishing on $\,{\Omega} -\,\Lambda\,$. One easily shows
that
\begin{equation}\label{doisquatro}
\int_{\Lambda} \,A({\nabla}\,u)\cdot\,({\nabla}\,u\,)\,\,\phi^t \,dx \leq\,t\,
\int_{\Lambda} \,|\,A({\nabla}\,u)\,| \,|\,{\nabla}\,\phi\,|\,|\,u\,|
\,\,\phi^{t-\,1} \,dx\,.
\end{equation}
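For the reader's convenience, \eqref{doisquatro} follows from the Caccioppoli-type choice of test function $\,\psi=\,u\,\phi^t\,$ (admissible in \eqref{doisdois}, since $\,u\,\phi^t \in H^{1,\,t}({\Omega})\,$ has compact support):

```latex
% \nabla(u\,\phi^t) = \phi^t \nabla u + t\,u\,\phi^{t-1}\nabla\phi, so
0 \,=\, \int_{\Lambda} A(\nabla u)\cdot\nabla(u\,\phi^t)\,dx
\,=\, \int_{\Lambda} A(\nabla u)\cdot\nabla u\;\phi^t\,dx
\,+\, t\int_{\Lambda}\big(A(\nabla u)\cdot\nabla\phi\big)\,u\,\phi^{t-1}\,dx\,.
% Moving the second integral to the right-hand side and estimating it
% in absolute value gives \eqref{doisquatro}.
```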
By appealing to H\"older's inequality one gets
$$
\int_{\Lambda} \,A({\nabla}\,u)\cdot\,({\nabla}\,u\,)\,\,\phi^t \,dx
\leq\,C\,\big(\, \int_{\Lambda}
\,|\,A({\nabla}\,u)\,|^{\frac{t}{t-\,1}}\,\,\,\phi^t
\,dx\,\big)^{\frac{t-\,1}{t}}\,,
$$
where $\,C=\,t\,k\,\|\,{\nabla}\,\phi\,\|_{t,\,\Lambda}\,.$ The last inequality together with \eqref{doistres}
leads to
$$
a\,\int_{\Lambda} \,|\,{\nabla}\,u\,|^t\,\phi^t \,dx \leq\,C_0\,\big(\,
\int_{\Lambda} \,|\,{\nabla}\,u\,|^t\,\phi^t
\,dx\,\big)^{\frac{t-\,1}{t}}\,+\,C_1\,.
$$
Since $\,\frac{t-\,1}{t}<\,1\,,$ it readily follows that the
integral in the left hand side of the above inequality is bounded by
a constant $\,C_2\,.$ So
$$
\int_{{\Omega}'} \,|\,{\nabla}\,u\,|^t \,dx \leq \,\int_{\Lambda}
\,|\,{\nabla}\,u\,|^t\,\phi^t \,dx \leq \,C_2\,.
$$
\end{proof}
\begin{lemma}\label{lemdoistres}
Let $\,\{u_n\}\,$ be a sequence of solutions to \eqref{doisdois}, equi-bounded in
$\,L^{\infty}({\Omega})\,,$ and uniformly convergent in $\,{\Omega}\,$ to a function $\,u(x)\,.$ Then
$\,u(x)\,$ is a solution to \eqref{doisdois}\,.
\end{lemma}
\begin{proof}
Note that $\,u_n\,\in C^0({\Omega})\,,$ as follows from the remark
\ref{remdoisum}. Lemma \ref{lemdoisdois} shows that $\,u
\in\,H^{1,\,t}({\Omega}')\,,$ for each $\,{\Omega}'\,$ as above. Let $\,u^0
\in\,H^{1,\,t}({\Omega}')\,$ be the variational solution in $\,{\Omega}'\,$ of
the problem $\,\mathcal{L}\,u^0=\,0\,$ in $\,{\Omega}'\,,$ $\,u^0-\,u\,\in\,
H^{1,\,t}_0({\Omega}')\,.$ By applying the corollary \ref{coroumquatro}
to the functions $\,u^0\,$ and $\,u_n\,$ it follows that
\begin{equation}\label{doiscinco}
\sup_{\,{\Omega}'}\,|\,u_n-\,u^0\,| \leq\,\sup_{\partial\,{\Omega}'}\,|\,u_n-\,u\,|\,.
\end{equation}
Since $\,u_n(x) \rightarrow\,u(x)\,$ uniformly in $\,{\Omega}\,$, from
\eqref{doiscinco} it follows that $\,u_n(x) \rightarrow\,u^0(x)\,$
uniformly in $\,{\Omega}'\,.$ So, $\,u^0(x)=\,u(x)\,$ in $\,{\Omega}'\,.$ In
particular, $\,\mathcal{L}\,u=\,0\,$ in $\,{\Omega}'\,.$ Since $\,{\Omega}'\,$ is
arbitrary, the thesis follows (note that local uniform convergence
in $\,{\Omega}\,$ would be sufficient here).
\end{proof}
\vspace{0.2cm}
The following statement corresponds to the Theorem 2.4 in reference
\cite{ricmat}.
\begin{theorem}\label{teodoisquatro}
To each $\,\phi\,\in C^0(\partial\,{\Omega})\,$ there corresponds a (unique)
function $\,u(x)\,$ such that the following holds:\par%
Let $\,\{\,\phi_n\,\}\,$ be an arbitrary sequence of functions in
$\,C^1({\overline{\Omega}})\,$ uniformly convergent to $\,\phi\,$ on $\,\partial\,{\Omega}\,$
(it is well known that such sequences exist). Further, denote by
$\,u_n(x)\,$ the variational solutions to the problem
$\,\mathcal{L}\,u_n=\,0\,$, $\,u_n \in\,\mathbb V_{\phi_n}\,.$ Then the sequence
$\,\{\,u_n\}\,$ converges uniformly in $\,{\Omega}\,$ to a function
$\,u(x)\,.$ Moreover, the function $\,u(x)\,$, which belongs to
$\,H^{1,\,t}_{loc}({\Omega}) \cap\,C^0(\,{\Omega})\,,$ is a weak solution in
$\,{\Omega}\,$, i.e. $\,u\,$ solves \eqref{doisdois}.
\end{theorem}
\begin{proof}
Let $\,\phi\,$, $\,\phi_n\,,$ and $\,\,u_n\,$ be as in the above
statement. The variational solutions $\,\,u_n\,$ are continuous in
$\,\Omega\,$, see the remark \ref{remdoisum}. Clearly, they are also
equi-bounded. By corollary \ref{coroumquatro} it follows that, for
every pair of indices $\,m,\,n\,,$
\begin{equation}\label{umquatro}
\sup_{\,{\Omega}} \,|\,u_n-\,u_m\,| \leq\,\sup_{\partial\,{\Omega}}\,
|\,\phi_n-\,\phi_m\,|\,.
\end{equation}
So the sequence $\,\{\,u_n(x)\}\,$ is uniformly convergent in
$\,\Omega\,$ to some $\,u(x)\,\in C^0({\Omega})\,.$ Lemma \ref{lemdoistres}
shows that $\,u(x)\,$ is a weak solution in $\,\Omega\,.$ Moreover, by
appealing to lemma \ref{lemdoisdois}, we get $\,u
\in\,H^{1,\,t}_{loc}({\Omega})\,.$ Furthermore, the limit $\,u\,$ is
independent of the particular sequence $\,\{\,\phi_n\}\,$, as
follows from \eqref{umquatro} applied to two distinct, arbitrary,
sequences $\,(\phi_n,\,u_n)\,$ and $\,(\psi_n,\,v_n)\,$. This
argument also proves the uniqueness of the solution $\,u\,.$
\end{proof}
The theorem \ref{teodoisquatro} justifies the definition \ref{saneg}
of generalized solution given in section \ref{defas}, and also
proves the existence and uniqueness Theorem \ref{existesum}.
\vspace{0.2cm}
It is worth noting that from definition \ref{saneg}, lemma
\ref{lemumdois}, and corollary \ref{coroumquatro}, it follows that
if $\,u\,$ and $\,v\,$ are the solutions corresponding to the
continuous data $\,\phi\,$ and $\,\psi\,$, then
$$
\min_{\partial\,{\Omega}} \phi \leq\,\inf_{\,{\Omega}}\, u \leq\,\sup_{\,{\Omega}} \,u
\leq\,\max_{\partial\,{\Omega}} \phi\,,
$$
and
$$
\sup_{\,{\Omega}} \,|\,u -\,v\,| \leq\,\max_{\partial\,{\Omega}} |\,\phi-\,\psi\,|\,.
$$
Minimum and maximum are used here in the classical sense.
\section{Proof of Theorem \ref{teo-A}}\label{steoa}
In this section we prove the theorem \ref{teo-A}. We denote by
$\,C^1(\partial\,{\Omega})\,$ the function space consisting of the
restrictions to $\,\partial\,{\Omega}\,$ of functions in $\,C^1({\overline{\Omega}})\,.$
\begin{lemma}\label{lemtresum}
A point $\,y \in\,\partial\,{\Omega}\,$ is regular if and only if condition
\eqref{zerosete} holds for each $\,\phi\in \,C^1(\partial\,{\Omega})\,.$
\end{lemma}
\begin{proof}
Let $\,u\,$ be the solution corresponding to a given data $\,\phi\in
\,C^0(\partial\,{\Omega})\,,$ and let $\,\{\,\phi_n\,\}\,$ and
$\,\{\,u_n\,\}\,$ be as in theorem \ref{teodoisquatro} (by the way,
note that the solutions $\,\,u_n\,$ are variational and
generalized). Define $\,\overline{u}_n(x)\,$ by
$\,\overline{u}_n(x)=\,u_n(x)\,$ in $\,{\Omega}\,$,
$\,\overline{u}_n(y)=\,\phi_n(y)\,,$ and define
$\,\overline{u}(x)\,$ by $\,\overline{u}(x)=\,u(x)\,$ in $\,{\Omega}\,$,
$\,\overline{u}(y)=\,\phi(y)\,.$ The functions
$\,\overline{u}_n(x)\,$ are, by the assumptions, continuous in
$\,{\Omega} \cup\,\{y\}\,$, and uniformly convergent in $\,{\Omega}
\cup\,\{y\}\,$ to the function $\,\overline{u}(x)\,.$ So,
$\,\overline{u}(x)\,$ is continuous in $\,{\Omega} \cup\,\{y\}\,$.
\end{proof}
\begin{proof} \emph{of theorem} \ref{teo-A}.\par%
Necessary condition: assume that $\,y \in\,\partial\,{\Omega}\,$ is regular.
Given $\,\rho\,$ and $\,m\,$, consider the restriction to
$\,\partial\,{\Omega}\,$ of the function
$\,h(x)=\,m\,\frac{|\,x-\,y|^2}{\rho^2}\,.$ This function belongs to
$\,C^1(\partial\,{\Omega})\,.$ Let $\,V(x)\,$ be the solution with $\,h(x)\,$
as boundary data. By the construction, $\,V(x)\,$ satisfies
condition (j) in definition \ref{defzeroum}. Further, by the
definition of regular point, $\,V(x)\,$ satisfies the condition
(jj). Similarly, by considering the data $\,-\,h(x)\,,$ one proves
the existence of the function $\,U(x)\,$ required in definition \ref{defzeroum}.\par%
Sufficient condition: Assume that, at point $\,y\,,$ there exists a
system of barriers. By lemma \ref{lemtresum} we may assume that
$\,k(x) \in \,C^1(\partial\,{\Omega})\,$. Let $\,u(x)\,$ be the corresponding
solution and set
\begin{equation}\label{tresum}
M =\,\sup_{\partial\,{\Omega}}\,|\,k(x)\,|\,.
\end{equation}
Given $\,\epsilon \,>0\,$, there is $\,\rho_{\epsilon}>\,0\,$ such that
\begin{equation}\label{tresdois}
|\,k(x)-\,k(y)\,| <\,\frac{\epsilon}{2} \quad \textrm{if} \quad
|\,x-\,y\,| <\,\rho_{\epsilon}\,, \quad x \in\,\partial\,{\Omega}\,.
\end{equation}
Let $\,V\,$ and $\,U\,$ be barriers related to the values
$\,\rho=\,\rho_{\epsilon}\,$ and $\,m=\,M\,$. Then (see also \cite{b9})
\begin{equation}\label{trestres}
V(x) \geq\,M\,, \quad \textrm{and} \quad U(x) \leq\,-\,M\,, \quad
\textrm{on} \quad (\partial\,{\Omega}) \cap\,\complement \,I(y,\,\rho)\,.
\end{equation}
By appealing to \eqref{tresum}, \eqref{tresdois}, and
\eqref{trestres}, we show that
\begin{equation}\label{tresquatro}\left\{
\begin{array}{ll}\vspace{1ex}
k \leq\,k(y) +\,\frac{\epsilon}{2}+\,V\,,
\\%
k \geq\,k(y) -\,\frac{\epsilon}{2}+\,U\,,
\end{array}\right.
\end{equation}
on $\partial\,{\Omega}\,$.\par%
From \eqref{tresquatro} and lemma \ref{lemumtres} it follows that
\begin{equation}\label{trescinco}\left\{
\begin{array}{ll}\vspace{1ex}
u(x) \leq\,k(y) +\,\frac{\epsilon}{2}+\,V(x)\,,
\\%
u(x) \geq\,k(y) -\,\frac{\epsilon}{2}+\,U(x)\,,
\end{array}\right.
\end{equation}
almost everywhere in $\,{\Omega}\,$, since $\,k(y)
+\,\frac{\epsilon}{2}+\,V(x)\,$ is a super-solution and $\,k(y)
-\,\frac{\epsilon}{2}+\,U(x)\,$ is a sub-solution. Furthermore,
the property (jj) in definition \ref{defzeroum} implies the
existence of $\,\overline{\rho}_{\epsilon}>\,0 \,$ such that
\begin{equation}\label{tresseis}
x \in\,{\Omega}\, \cap\,I(y,\,\overline{\rho}_{\epsilon})\quad \Longrightarrow
\quad |\,V(x)\,| \leq\,\frac{\epsilon}{2} \quad \textrm{and} \quad
|\,U(x)\,| \leq\,\frac{\epsilon}{2}\,.
\end{equation}
From \eqref{trescinco} and \eqref{tresseis} we show that
$$
\,x \in\,{\Omega}\, \cap\,I(y,\,\overline{\rho}_{\epsilon})\,\Longrightarrow
\,-\,\epsilon \leq\,u(x)-\,k(y)\leq\,\epsilon\,.
$$
So, $\,\lim_{x \rightarrow\,y}\,u(x)=\,k(y)\,.$ Hence $\,y\,$ is
regular.
\end{proof}
\section{Proof of Theorem \ref{teo-B}}\label{steob}
We start with some preliminary results.
\begin{lemma}\label{lemquatroum}
The Lipschitz continuous function
$$
u(x)=\,\,{\alpha} \,|\,x-\,y\,| +\, \beta\,,
$$
where ${\alpha}$ and $\beta$ are constants, is a sub-solution if
${\alpha}>\,0$, and a super-solution if ${\alpha}<\,0$.
\end{lemma}
\begin{proof}
Without loss of generality we assume that $\,u(x)=\,{\alpha}\,r\,$, where
$\,r=\,|\,x\,|\,.$ We start by assuming that $\,A(p)\,$ is
indefinitely differentiable. By taking into account the monotonicity
assumptions, we easily show (for instance, by appealing to the first
order Taylor's formula with Lagrange form of the remainder) that the
Jacobian matrix $\,D\,A(p)\,$ of the transformation $\,A(p)\,$ is
positive semi-definite at each point $\,p\,\in\,{\mathbb R}^N\,$. So, for
each unit vector $\,\xi \in \,{\mathbb R}^N\,$,
\begin{equation}\label{quatroum}
DA(\,p)\, \xi \cdot\,\xi\leq\,tr\,DA(p), \quad \forall
\,\xi\in\,{\mathbb R}^N \ \textrm{with} \ |\,\xi\,|=\,1\,,
\end{equation}
since the trace coincides with the sum of the eigenvalues.\par%
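The inequality \eqref{quatroum} can be checked as follows (a standard
linear algebra remark; the symmetric matrix $\,S\,$ below is auxiliary
notation, not used elsewhere). Writing
$\,S=\,\frac{1}{2}\,\big(\,D\,A(p)+\,D\,A(p)^T\,\big)\,,$ one has, for
every unit vector $\,\xi\,$,
$$
DA(\,p)\, \xi \cdot\,\xi=\,S\,\xi \cdot\,\xi \leq\,\lambda_{max}(S)
\leq\,\sum_i\,\lambda_i(S)=\,tr\,S=\,tr\,DA(p)\,,
$$
since $\,S\,$ is positive semi-definite, hence all its eigenvalues
$\,\lambda_i(S)\,$ are non-negative.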
Let us denote the generic element of the Jacobian matrix
$\,D\,A(p)\,$ by $\,A_{i\,j}(p)\,.$ By setting $\,p=\,{\nabla}\,u\,$ one
has
\begin{equation}\label{amais}
div\,A(\,{\nabla}\,u(x)\,)=\,\sum_{i,\,j} \,A_{i\,j}(p) \,\partial_i\,p_j\,.
\end{equation}
Since $\,p=\,{\alpha}\,x\, r^{-\,1}\,$, it follows that
$\,\partial_i\,p_j=\,{\alpha}\ r^{-\,1} \,(\,\delta_{i\,j} -
\,r^{-\,2}\,x_i\,x_j\,)\,$. So, from \eqref{amais}, we get
\begin{equation}\label{quatrodois}\begin{array}{ll}\vspace{1ex}\displaystyle%
div\,A(\,{\nabla}\,u(x)\,)= \\
\displaystyle {\alpha}\ r^{-\,1} \, \big\{\,tr\,DA({\alpha}\,x\,
r^{-\,1})-\,DA({\alpha}\,x\,
r^{-\,1})\,(\,x\, r^{-\,1})\cdot\,(\,x\, r^{-\,1})\,\big\}\,,%
\end{array}%
\end{equation}
for each $\,x\neq\,0\,.$ From \eqref{quatrodois} and
\eqref{quatroum} it follows that $\,div\,A(\,{\nabla}\,u(x)\,)$ has the
sign of the constant $\,{\alpha}\,$, for each $\,x\neq\,0\,.$\par%
Let $\,\phi\,$ be a non-negative, indefinitely differentiable
function in $\,{\mathbb R}^N\,.$ Fix $\,R>\,0\,$ such that
$$
\, supp \, \,\phi\,\subset\, I(0,\,R)\,.
$$
Next, fix a function $\gamma(x)\in \,D({\mathbb R}^N)\,$ such that $\,0 \leq
\,\gamma(x)\leq\,1\,$, and $\,\gamma(x)=\,1\,$ for $\,|x|\leq\,1\,$.
To fix ideas, assume that $\,supp \,\, \gamma \,\subset I(0,\,2)\,.$
Further define, for each $s>\,0\,,$
$$
\gamma_s (x)=\,\gamma(s^{-\,1}\,x)\,, \quad \textrm{and} \quad
\phi_{s}(x)=\,\phi(x) \,(\,1-\,\gamma_{s}(x)\,).
$$
Note that, for all $s \in \,(0,\,R)\,,$
$$
supp\,\, \phi_{s} \subset \, I(0,\,R) -\,I(0,\,s)\,.
$$
Hence, by an integration by parts,
$$
{\alpha}\,\int_{I(0,\,R)} \,A({\nabla}\,u(x)) \cdot \,{\nabla}\,\phi_s(x)
\,dx=\,-\,{\alpha}\, \int_{\complement I(0,\,s)}
\,div\,A({\nabla}\,u(x))\,\phi_s(x) \,dx \leq\,0\,,
$$
where $\,u(x)=\,{\alpha}\,r\,$. Note that, on the left hand side, we may
replace $\,I(0,\,R)\,$ by $\,{\mathbb R}^N\,.$ We want to show that%
\begin{equation}\label{quatrotres}
\lim_{s \rightarrow \,0} \, \int_{I(0,\,R)} \,A({\nabla}\,u(x)) \cdot
\,{\nabla}\,\phi_s(x) \,dx=\, \int_{I(0,\,R)} \,A({\nabla}\,u(x)) \cdot
\,{\nabla}\,\phi(x) \,dx\,.
\end{equation}
This proves that
$$
{\alpha} \, \int_{{\mathbb R}^N} \,A(\,{\nabla}\,u)\cdot\,{\nabla}\,\phi \,dx \leq\,0\,,
$$
which is our thesis.\par%
A straightforward calculation shows that
\begin{equation}\label{eunem}
{\nabla}\,\phi_s(x)=\,(1-\,\gamma_s
(x)\,)\,{\nabla}\,\phi(x)-\,s^{-1}\,\phi(x)\,({\nabla}\,\gamma)(s^{-1}\,x)\,.
\end{equation}
Since $\,(1-\,\gamma_s (x)\,)\,{\nabla}\,\phi(x)\,$ converges
point-wisely to $\,{\nabla}\,\phi(x)\,,$ $\,x\neq\,0\,,$ as $\,s
\rightarrow\,0\,,$ it readily follows, by Lebesgue's dominated
convergence theorem, that \eqref{quatrotres} holds by replacing, in
the left hand side, $\,{\nabla}\,\phi_s\,$ by $\,(1-\,\gamma_s
(x)\,)\,{\nabla}\,\phi(x)\,.$\par%
Let us show that, in the left hand side of \eqref{quatrotres}, the
contribution due to the second term in the right hand side of
\eqref{eunem} tends to zero. One has
$$
s^{-1}\,\int_{I(0,\,R)} \,|\,\phi(x)\,({\nabla}\,\gamma)(s^{-1}\,x)\,|
\,dx \leq\,s^{-1}\,\int_{I(0,\,2)}
\,|\,\phi(s\,y)\,({\nabla}\,\gamma)(y)\,| s^N \,dy\,
$$
$$
\phantom{aaaaaaaaaaaaaaaaaaaaaaaaaa}\leq 2^N\,V_N\,
s^{N-\,1}\,\|\,\phi\,\|_{L^{\infty}({\mathbb R}^N)}\,\|\,{\nabla}\,\gamma\,\|_{L^{\infty}({\mathbb R}^N)}
\,,
$$
where $\,V_N\,$ denotes the volume of the unit sphere. Since $
\,A({\nabla}\,u(x))\,$ is uniformly bounded in $\,{I(0,\,R)}\,$, the
thesis follows.
\vspace{0.2cm}
Assume now that $\,A(p)$ is merely continuous. Let
$\,j_{\epsilon}(\eta)\,$ be, for each $\epsilon>\,0\,,$ a real, nonnegative
function, indefinitely differentiable, with compact support
contained in the sphere $\,I(0,\,\epsilon)\,$, and integral equal to
$\,1\,$. Set
$$
A_{\epsilon}(p)=\,\int \,A(\eta)\,j_{\epsilon}(p-\,\eta\,)\,d\eta\,.
$$
These functions are indefinitely differentiable. Furthermore,
\begin{equation}\label{quatroquatro}
A_{\epsilon}(p)-\,A_{\epsilon}(q)=\,\int\,\big[\,A(p-\,\xi)-\,A(q-\,\xi)\,\big]\,j_{\epsilon}(\xi)\,d\xi\,.
\end{equation}
In particular, this last identity implies that $\,A_{\epsilon}(p)\,$
satisfies the monotonicity hypothesis \eqref{zerotres} (note that
assumptions \eqref{zerodois}, \eqref{zeroquatro}, and
\eqref{zerocinco} were not used here). From the first part of the
proof it follows that
\begin{equation}\label{quatrocinco}
{\alpha} \, \int \,A_{\epsilon}(\,{\nabla}\,u)\cdot\,{\nabla}\,\phi \,dx \leq\,0\,,
\end{equation}
for all nonnegative $\,\phi \in \,D({\Omega})\,.$ Further, since%
$$
A_{\epsilon}(p)-\,A(p)=\,\int\,\big[\,A(\eta)-\,A(p)\,\big]\,j_{\epsilon}(p-\,\eta)\,d\eta\,,
$$
and since $\,A(p)\,$ is uniformly continuous on compact sets, it follows that
$\,A_{\epsilon}(p) \rightarrow \,A(p)\,$ uniformly on compact sets. So, letting $\,\epsilon \rightarrow\,0\,$ in equation
\eqref{quatrocinco}, one gets the thesis.
\end{proof}
\vspace{0.2cm}
The next result concerns the local character of the notion of regular point.
\begin{theorem}\label{teoquatrodois}
Let ${\Omega}\,$ and $\,\Lambda\,$ be two open bounded sets, and let $\,y\,\in\, \partial\,{\Omega} \cap\,\partial\,\Lambda\,.$
Assume, moreover, that there exists a sphere $\,I(y,\,r)\,$ such that
\begin{equation}\label{quatroseis}
I(y,\,r)\cap\,{\Omega}=\,I(y,\,r)\cap\,\Lambda\,.
\end{equation}
Then $\,y\,$ is regular with respect to $\,{\Omega}\,$ if and only if it
is regular with respect to $\,\Lambda\,.$
\end{theorem}
\begin{proof}
Due to Theorem \ref{teo-A}, it is sufficient to show that there is a
system of barriers with respect to $\,\Lambda\,$ if and only if
there is a system of barriers with respect to $\,{\Omega}\,.$\par%
Let $y$ be regular with respect to $\,\Lambda\,.$ Assume, for the
time being, that $\,\Lambda \subset {\Omega}\,$. Given $\,\rho\,$ and
$\,m\,$, $\,0<\,\rho<\,r\,$ and $\,0<\,m\,$, let $\,V(x)\,$ be the
variational solution in $\,{\Omega}\,$ to the problem \eqref{zeroseis}
with boundary data given by
$$
h(x)=\,m\,|\,x-\,y\,|^2\,\rho^{-\,2}\,.
$$
By the construction, $\,V(x)\,$ satisfies the condition (j) in
definition \ref{defzeroum}. Let us show that it also satisfies
condition (jj). Let
$$
M\geq\,\max\{\,1,\,m^{-\,1}\,\sup_{\,{\Omega}}\,|\,V(x)\,|\,\}\,,
$$
and let $\,V'(x)\,$ be the solution in $\,\Lambda\,$ with boundary
data $\,h'(x)=\,M\,m\,|\,x-\,y\,|^2\,\rho^{-\,2}\,$. Clearly,
$\,V(x)\,$ is a solution in $\,\Lambda\,$. Furthermore, from the
definition of $\,M\,$, it follows that $\,V'\geq\,V\,$ on
$\,\partial\,\Lambda\,.$ From this last inequality, together with lemma
\ref{lemumtres}, we show that $\,V'(x)\geq\,V(x)\,$ almost
everywhere in $\,\Lambda\,$. From this last assertion, together
with the regularity of $\,y\,$ with respect to $\,\Lambda\,$, it
follows that
$$
0\leq\,\lim_{ x\in\,{\Omega} ,\, x\rightarrow\,y}\,V(x) \leq\,\lim_{
x\in\,\Lambda ,\, x\rightarrow\,y}\,V'(x)=\,0\,.
$$
This proves the assumption (jj). By appealing to the theorem
\ref{teo-A} we conclude that $\,y\,$ is regular with respect to
$\,{\Omega}\,$. The existence of the function $\,U(x)\,$ referred to in
definition \ref{defzeroum} may be shown by a similar argument, or by
appealing to remark \ref{rem-1.1}.\par%
Reciprocally, assume that $\,y\,$ is regular with respect to
$\,{\Omega}\,$. Given $\,\rho\,$ and $\,m\,$, $\,0<\,\rho<\,r\,$ and
$\,0<\,m\,$, we construct below the corresponding barrier $\,V(x)\,$
in $\,\Lambda\,$, according to definition \ref{defzeroum}. Let
$\,V(x)\,$ be the solution in $\,{\Omega}\,$ with boundary data
$\,h(x)=\,m\,|\,x-\,y\,|\,\rho^{-\,1}\,$ on $\,\partial\,{\Omega}\,.$
$\,V(x)\,$ is a solution in $\,\Lambda\,$, and satisfies the
condition (jj) since $y$ is regular with respect to $\,{\Omega}\,$ and
$\,h(y)=\,0\,$. Further, since $\,V=\,h\,$ on $\,\partial\,{\Omega}\,$ and
$\,h(x)\,$ is a sub-solution in $\,{\Omega}\,$ (lemma \ref{lemquatroum}),
it follows that $\,V(x) \geq\,h(x)\,$ almost everywhere in $\,{\Omega}\,$.
In particular, $\,V \geq\,h\,$, so $\,V \geq\,m\,$, on $\,\partial\,\Lambda \cap\,\complement I(y,\,\rho)\,,$ as desired.\par%
Finally, if $\Lambda\,$ is not contained in $\,{\Omega}\,$, consider the open set $\,D=\,I(y,\,r)\cap\,\Lambda=\,I(y,\,r)\cap\,{\Omega}\,,$
and take into account that $\,D\subset\,\Lambda\,$ and $\,D\subset\,{\Omega}\,.$
\end{proof}
\vspace{0.2cm}
We end this section by proving the theorem \ref{teo-B}.%
\vspace{0.2cm}
\emph{Necessary condition}: Let $\,y\,$ be regular. By theorem
\ref{teoquatrodois} it follows that $\,y\,$ is regular with respect
to $\,{\Sigma} -\,E_{\rho}\,.$ Since the capacitary potential
$\,u_{\rho,\,m}\,$ is the solution in $\,{\Sigma} -\,E_{\rho}\,$ (lemma
\ref{lemumseis}) with data $\,m\,$ on $\,\partial\,E_{\rho}\,$ and
$\,0\,$ on $\,\partial\,{\Sigma}\,$ (lemma \ref{lemumcinco}), the first
equation \eqref{zerodezasseis} follows. A similar argument applies
to $\,u_{\rho,\,-\,m}\,.$%
\vspace{0.2cm}
\emph{Sufficient condition:} We assume that the hypothesis
\eqref{zerodezasseis} holds, and we prove the existence of a system of
barriers at $\,y\,$. Given $\,\rho>\,0\,$ and $\,m>\,0\,,$ we
construct the function $\,V(x)\,$ referred to in the definition
\ref{defzeroum}. Let $\,R_0\,$ be such that $\,{\Sigma} \subset
\,I(y,\,R_0)\,$, and define $\,k>\,0\,$ by
\begin{equation}\label{quatrosete}
\frac{(k+\,2\,m)\,\rho}{2\,R_0}=\,m\,.
\end{equation}
For convenience, we denote by $\,u\,$ the capacitary potential
$\,u=\,u_{\frac{\rho}{2},\,-(m+\,k)}\,.$ Furthermore, we define in
$\,{\Sigma}\,$ the function $\,V=\,u+\,(m+\,k)\,$. $\,V\,$ is a solution
in $\,{\Sigma}-\,E_{\frac{\rho}{2}}$ (lemma \ref{lemumseis}) and, in
particular, it is a solution in $\,{\Omega}\,.$ Since $\,\lim_{x
\rightarrow\,y}\,u(x)=\,-\,(m+\,k)\,,$ $\,V\,$ satisfies the
condition (jj) in definition \ref{defzeroum}. Obviously
$\,V(x)\geq\,0\,$ a.e. in $\,{\Sigma}\,$, as follows from lemma
\ref{lemumcinco}.
\vspace{0.2cm}
Next we prove the condition (j). Consider in $\,I(y,\,R_0)\,$ the
function $\,f(x)=\,(k+\,2\,m)\,
R_0^{-\,1}\,|\,x-\,y\,|-\,(k+\,2\,m)\,.$ This function is a
sub-solution in $\,I(y,\,R_0)\,$ (lemma \ref{lemquatroum}) and, in
particular, is a sub-solution in
$\,I(y,\,R_0)-\,I(y,\,\frac{\rho}{2})\,$. Since $\,{\Sigma}
\subset\,I(y,\,R_0)\,,$ it follows that $\,f\leq\,0\,,$ on
$\,\partial\,{\Sigma}\,.$ Further, from \eqref{quatrosete}, it follows that
$\,f=\,-\,(k+\,m)\,$ on $\,\partial\,I(y,\,\frac{\rho}{2})\,.$ So
\begin{equation}\label{quatrooito}
f\leq\,u \quad \textrm{on} \quad \partial\,I(y,\,\frac{\rho}{2})\,.
\end{equation}
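For the reader's convenience, the value of $\,f\,$ on
$\,\partial\,I(y,\,\frac{\rho}{2})\,$ is a direct computation from
\eqref{quatrosete}: if $\,|\,x-\,y\,|=\,\frac{\rho}{2}\,,$ then
$$
f(x)=\,\frac{(k+\,2\,m)\,\rho}{2\,R_0}-\,(k+\,2\,m)=\,m-\,(k+\,2\,m)=\,-\,(k+\,m)\,.
$$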
By appealing to \eqref{quatrooito}, to the inequality $\,f\leq\,0\,$
on $\,\partial\,{\Sigma}\,$, and to the lemma \ref{lemumtres} applied in
$\,{\Sigma}-\,I(y,\,\frac{\rho}{2})\,,$ it follows that $\,f(x)
\leq\,u(x)\,$ almost everywhere in this last set. So,
\begin{equation}\label{quatronove}
V(x) \geq\,f(x)+\,(m+\,k) \geq\,m, \quad \textrm{ a.e. on} \quad {\Sigma}-\,I(y,\,\rho)\,.
\end{equation}
In particular, \eqref{quatronove} implies that $\,V\geq\,m\,$ on $
\partial\,{\Omega}-\,I(y,\,\rho)\,,$ hence the condition (j) holds.%
\part{}
\section{Main results}\label{intpart2}
We start by remarking that the proofs presented in Part II strongly
rely on ideas and techniques used in reference \cite{b12}, to
which the reader is referred.\par%
The aim of the second part of this work is to state sufficient
conditions for regularity of a given boundary point $\,y\,$. This
task is done by appealing to the theorem \ref{teo-B}. The sufficient
conditions obtained here consist of assumptions on the sets%
\begin{equation}\label{erocompl}%
E_{\rho}=\,(\complement \,{\Omega})(y,\,\rho)\,,%
\end{equation}%
the complementary sets of $\,{\Omega}\,$ with respect to the closed balls
$\,\overline{I(y,\,\rho)}\,.$ They always concern sufficiently small
values of the radius $\,\rho\,.$\par%
The cornerstone result of part II, the theorem \ref{teo-nopub}, has
an ``abstract'' feature due to the assumption \eqref{e62b}. However, we
show that this assumption holds if simple geometrical conditions are
fulfilled. This leads to the statements in theorems
\ref{teo-unpub} and \ref{teo-bvc} below.\par%
\begin{definition}\label{assumpsigma}
Let $\,y \in\,\partial\,{\Omega}\,$ be a boundary point. Given $\,\rho>\,0\,$,
we denote by $\,{\widehat{\sigma}}(\rho)\,$ a positive real number such that
the estimate%
\begin{equation}\label{e62}%
|\,v(x)\,|\leq \,{\widehat{\sigma}}(\rho)^{\,-\,1} \,\int_{I(y,\,\rho\,)} \,
\frac{|\,{\nabla}\,v(z)\,|}{|\,x-\,z\,|^{\,N-\,1}}\, dz%
\end{equation}%
holds almost everywhere in $\,I(y,\,\rho\,)\,$, for all
$\,v\in\,H^{\,1,\,t}(I(y,\,\rho\,)\,)\,$ vanishing identically on
$\,E_{\rho}\,.$\par%
If such a positive value does not exist, we set
$\,{\widehat{\sigma}}(\rho)=\,0\,.$ %
\end{definition}
We assume that there is a strictly positive function
$\,\sigma(\rho)\,$, and a constant $\,C\,$, such that \eqref{densasr}
holds for each positive $\,\rho\,$ in an arbitrarily small
neighborhood of zero.
\vspace{0.2cm}
The next theorem, and the related theorems \ref{teo-unpub} and
\ref{teo-bvc} below, are the main results in part II.
\begin{theorem}\label{teo-nopub}
There is a positive constant $\,\Lambda\,$, which depends only on
$\,t,\,N,\,a,\,$ and $p_0$, such that if%
\begin{equation}\label{e62b}%
\big[\,{\widehat{\sigma}}(\rho)\,\big]^{\frac{t}{t-\,1}}\geq\,\Lambda\,(\log\,\log\,\rho^{-\,1}\,)^{-1}\,,%
\end{equation}%
for small, positive, values of $\,\rho\,,$ then the point
$\,y\in\,\partial\,{\Omega}\,$ is regular with respect to the operator $\,\mathcal{L}\,$ .%
\end{theorem}
The next two theorems are corollaries of the theorem
\ref{teo-nopub}.
\begin{theorem}\label{teo-unpub}
There is a positive constant $\,\Lambda\,$, which depends only on
$\,t,\,N,\,a,\,$ and $p_0$, such that if
$$
\Big(\,\frac{|\,E_\rho\,|}{|\,I(y,\,\rho)\,|}\,\Big)^{\frac{t}{t-\,1}}\geq\,\Lambda\,(\log\,\log\,\rho^{-\,1}\,)^{-1}\,,
$$
for small, positive, values of $\,\rho\,,$ then the point
$\,y\in\,\partial\,{\Omega}\,$ is regular with respect to the operator $\,\mathcal{L}\,$ .%
\end{theorem}
Note that this condition is weaker than the usual cone condition
since the right hand side goes to zero with $\,\rho\,.$\par%
The next statement is theorem 5.5 in reference \cite{b1} (see also
\cite{ricmat}, page 5).
\begin{theorem}\label{teo-bvc}
A point $\,y\in\,\partial\,{\Omega}\,$ is regular with respect to the operator
$\,\mathcal{L}\,$ if $\,y\,$ satisfies an $\,(N-\,1)$-dimensional external
cone property. The $\,(N-\,1)$-dimensional external cone property
may be replaced by a generalized $\,(N-\,1)$-dimensional external
cone property.
\end{theorem}
In this section, by assuming that theorem \ref{teo-nopub} holds, we
prove theorems \ref{teo-unpub} and \ref{teo-bvc}. This is done by
proving that \eqref{e62b} follows from the geometrical assumptions
required both in theorem \ref{teo-unpub} and in theorem
\ref{teo-bvc}. So, as soon as this purpose is fulfilled, our task
will be to present the proof of theorem \ref{teo-nopub}. This proof
is postponed to the next two sections.\par%
We start with theorem \ref{teo-unpub}. This theorem follows
immediately from theorem \ref{teo-nopub}, by appealing to the
following result.
\begin{lemma}\label{lestamfourier}
Let $\,\sigma(\rho)\,$ be defined by \eqref{densmedida} and assume
that $\,|\,E_{\rho}\,|>\,0\,.$ Then
$$
{\widehat{\sigma}}(\rho)^{-\,1} \leq\,C\, \sigma(\rho)^{-\,1}\,,%
$$
where $\,C=\,\frac{\,2^N}{N\,V_N}\,.$
\end{lemma}
Let us prove this lemma. The proof strictly follows the proof of
theorem 6.2, shown in reference \cite{b12}. We denote by $\,S\,$ the
surface of the $\,N$-dimensional unit sphere. Further, if
${\Theta}\subset\,S$, we denote by $\,|\,\sphericalangle{\Theta}\,|\,$ the $\,(N-\,1)$-dimensional
spherical measure of $\,{\Theta}\,$,
$$
|\,\sphericalangle{\Theta}\,|=\,\int_{{\Theta}} \,dS\,.
$$
\begin{lemma}\label{misas}%
Set $\,I=\,I(y,\,\rho)\,,$ and let $\,E=\,E_\rho\,$ be given by
\eqref{erocompl}. Furthermore, let a point $x\in I\,, $ $x\notin
E\,$, be given, and denote by $S$ the surface of the unit sphere
centered in $x$. Finally, consider the set
$$
{\Theta}=\,{\Theta}(x)=\,\{\xi \in S:\,\exists \,t=\,t(\xi)\in\,{\mathbb R}
\,,\,x+\,t\,\xi \in\,E\,\}\,.
$$
Then the estimate
$$
|v(x)|\leq\,\frac{1}{|\sphericalangle{\Theta}(x)|}\,\,\int_{I} \,
\frac{|\,{\nabla}\,v(z)\,|}{|\,x-\,z\,|^{\,N-\,1}}\, dz%
$$
holds for any function $v\in\,C^1(I)$ vanishing on $E$.
\end{lemma}
\begin{proof}
Let $\xi \in \,\Theta\,$ and $t(\xi)\in\,{\mathbb R}\,$ be such that
$\,x+\,t(\xi)\,\xi \in\,E\,$. Since
$$
|\,v(\,x+\,t(\xi)\,\xi\,)-\,v(x)\,|\leq\,\int_{0}^{t(\xi)}
\,|\,{\nabla}\,v(x+\,r\,\xi)\,|\,dr\,,
$$
and $\,|x-\,z|^{N-\,1}\,dS\,dr=\,dz\,,$ it follows that
$$
|\,\sphericalangle\,\Theta\,|\,|v(x)|=\,\int_{\Theta}\,|v(x)|\, dS
\leq\,\int_{{\Theta}}\,\int_{0}^{t(\xi)}
\,|\,{\nabla}\,v(x+\,r\,\xi)\,|\,dr\,\,dS\leq\, \int_{I} \,
\frac{|\,{\nabla}\,v(z)\,|}{|\,x-\,z\,|^{\,N-\,1}}\, dz\,.%
$$
\end{proof}
\begin{corollary}\label{misacorol}%
Let $v\in\,C^1(I)$ vanish on $E$. Assume that
\begin{equation}\label{liminf}%
|\,\sphericalangle\,{\Theta}\,|\equiv \inf_{x}\,|\,\sphericalangle\,{\Theta}(x)\,| \,>\,0\,.
\end{equation}%
Then \begin{equation}\label{liminfb}%
|v(x)|\leq\,\frac{1}{|\,\sphericalangle\,{\Theta}\,|}\,\,\int_{I} \,
\frac{|\,{\nabla}\,v(z)\,|}{|\,x-\,z\,|^{\,N-\,1}}\, dz\,,%
\end{equation}%
for all $x\in\,I\,.$ Furthermore, if $\,v \in H^{1,\,t}(I)\,$
vanishes on $\,E\,$ in the $\,H^{1,\,t}(I)\,$ sense, then
\eqref{liminfb} holds a.e. in $\,I\,.$
\end{corollary}
Note that the estimate \eqref{liminfb} is obvious if $x\in \,E\,.$
The last assertion in the corollary follows from well known results
on the continuity of the linear map defined by convolution with the
kernel $\,|\,z\,|^{-\,(N-\,1)}\,$. Actually, this map is continuous
from $\,L^r\,$ to $\,L^{r^*}\,,$ where $\,1/r^*=\,1/r-\,1/N\,.$ See,
for instance, \cite{stein}, Chap.V.\par%
Lemma \ref{lestamfourier} follows by appealing to the ``volumetric''
estimate
$$(1/N) \,|\sphericalangle\,{\Theta}(x)| \, (\textrm{diam}\,I)^N \geq\,|E|\,.$$
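Indeed, assuming (as the statement of theorem \ref{teo-unpub} suggests)
that $\,\sigma(\rho)\,$ is the volume ratio
$\,|\,E_\rho\,|/|\,I(y,\,\rho)\,|\,,$ and since
$\,\textrm{diam}\,I=\,2\,\rho\,$ and $\,|\,I(y,\,\rho)\,|=\,V_N\,\rho^N\,,$
the above estimate gives
$$
|\sphericalangle\,{\Theta}(x)| \geq\,\frac{N\,|E_\rho|}{(2\,\rho)^N}=\,\frac{N\,V_N}{2^N}\,
\frac{|E_\rho|}{|\,I(y,\,\rho)\,|}=\,\frac{N\,V_N}{2^N}\,\sigma(\rho)\,,
$$
and lemma \ref{lestamfourier} follows from corollary \ref{misacorol}
with $\,C=\,\frac{2^N}{N\,V_N}\,.$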
\vspace{0.2cm}
Next, we prove theorem \ref{teo-bvc}. Some details are left to the
reader. By ``cone'' (in any dimension) we mean a right circular cone,
truncated by a sphere with center the vertex of the cone. For
instance, the $\,(N-\,1)$-dimensional ``truncated cones'' with vertex
$\,y=\,0\,$ have the form
\begin{equation}\label{cumum}
C_{\rho,\,{\omega}}=\,\{\,x\in\,{\mathbb R}^N\,:\,x_1\geq\,0\,,\,x_N=\,0\,,
\,|\,x\,| \leq\,\rho\,,\, |\,x\,|^2 \leq\, (1+\,{\omega})\,{x_1}^2\,\}\,,
\end{equation}
where $\,\rho\,$ and $\,{\omega} \,$ are positive constants. Note that,
by setting $\,x=\,(x_1,\,x',\,x_N\,)\,,$ the above
condition means that $\,|\,x'\,|^2 \leq\,\,{\omega}\,\,x^2_1\,.$\par%
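Indeed, since $\,x_N=\,0\,$ implies $\,|\,x\,|^2=\,x_1^2+\,|\,x'\,|^2\,,$
the last condition in \eqref{cumum} reads
$$
x_1^2+\,|\,x'\,|^2 \leq\,(1+\,{\omega})\,x_1^2\,, \quad \textrm{that is} \quad
|\,x'\,|^2 \leq\,{\omega}\,x_1^2\,.
$$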
\begin{definition}\label{s-cone}
We say that a point $\,y\in\,\partial\,{\Omega}\,$ satisfies an
$\,(N-1)-$dimensional external cone property if there exists an
$\,(N-1)-$dimensional cone $\,C\,$ with vertex in $\,y\,$ and
contained in $\,\complement \,{\Omega}\,.$ Similarly, we define the
generalized $\,(N-1)$-dimensional cone property at the point
$\,y\,$, by replacing the cone $\,C\,$ by a Lipschitz image of
itself.
\end{definition}
\vspace{0.2cm}
The proof of theorem \ref{teo-bvc} follows immediately from theorem
\ref{teo-nopub} and corollary \ref{misacorol}, by a small
modification of the argument used to prove the theorem
\ref{teo-unpub}. As above, we appeal to the corollary
\ref{misacorol}. Roughly speaking, as for the theorem
\ref{teo-unpub}, we would like to show that there is a positive
lower bound $\,|\,\sphericalangle\,{\Theta}\,|\,$ for the values of the solid angles
$\,|\,\sphericalangle\,{\Theta}(x)\,|\,$ under which the set $\,E_\rho\,$ can be
``watched'' from points $\,x\in I(y,\,\rho)\,.$ Clearly, this is false
in general, since (for instance) $\,x\,$ and $\,E_\rho \,$ may belong
to an $\,(N-\,1)$-dimensional hyperplane. However, a similar argument
applies here. Let us prove that equation \eqref{e62} holds with
$\,{\widehat{\sigma}}(\rho)\,$ bounded from below by a positive
constant independent of $\,\rho\,.$ To show this
claim, note that geometry and estimates for a generic value
$\,\rho\,$ can immediately be brought back to the case $\,\rho=\,1\,$,
by a suitable homothety. Next, note that the estimates in play are
invariant under Lipschitz maps, up to multiplication by positive
constants. So, we may fold up the original $\,(N-1)$-dimensional
cone into a ``non-flat'' $\,(N-1)$-dimensional ``twisted cone'', which
contains $\,N\,$ distinct pieces of surface, each one orthogonal to
a single $\,x_i\,$ direction, $\,i=\,1,...,\,N\,.$ Now, from each
point $\,x \in I(y,\,1)\,,$ one ``watches'', at least, one of the
above pieces of surface, from a positive solid angle
$\,|\,\sphericalangle\,{\Theta}(x)\,|\,.$ Moreover, the lower bound
$\,|\,\sphericalangle\,{\Theta}\,|\,$ of the values of solid angles is positive.
This proves theorem \ref{teo-bvc}.\par%
Note that it would be sufficient to prove that the lower bound
behaves like $\,\sigma(\rho)\,$ in equation \eqref{densasr}, as
$\,\rho\,$ goes to zero.
\section{A recursive estimate for the local oscillation}\label{regcrit}
In the sequel, to avoid unessential devices, we assume in equations
\eqref{zeroquatro} and \eqref{zerocinco} that $\,p_0=\,0\,$. One
easily extends the proof to the general situation by appealing to
\eqref{doistres}. This leads to the appearance of "lower order"
terms, easy to control.
\vspace{0.2cm}
We prove the theorem \ref{teo-nopub} by showing that
\eqref{zerodezasseis} holds. More precisely, we fix a couple of
positive constants $\,\rho_0\,$ and $\,m\,,$ and prove that
$$
\lim_{x \rightarrow \,y} u_{m,\,\rho_0}(x)=\,m\,.
$$
The proof of the second equation \eqref{zerodezasseis} is absolutely
identical. Alternatively, we may appeal to the remark \ref{rem-1.1},
to reduce the proof to that of the first equation.\par
\vspace{0.3cm}
In the sequel the ``large'' ball $\,{\Sigma}\,$, the point $\,y\in \partial
\,{\Omega}\,$, and the positive constants $m$ and $\,\rho_0\,$ are
assumed to be fixed, once and for all. The capacitary potential
$u_{m,\,\rho_0}(x)$ of $\,E_{\rho_0} \,$ will be simply denoted by
$u(x)$. Furthermore, without loss of generality, we place the
origin at $\,y\,$, so%
$$
y=\,0\,.
$$
We set $\,I(r)=\,I(0,\,r)\,.$
The following result is well known.
\begin{lemma}\label{L51}
One has
\begin{equation}\label{52}
\|\,v\,\|_{t^*,\,r} \leq\,c\,\|\,{\nabla}\,v\,\|_{t,\,r}\,,\quad \forall
\, \,v \in H^{1,\,t}_0(r)\,,
\end{equation}
where $ \frac{1}{t^*}=\,\frac{1}{t}-\,\frac{1}{N}\,. $
\end{lemma}
We define the sets
\begin{equation}\label{asbas}%
\,B(k,\,r)= \,\{x \in\,{\Omega}(y,\,r)\,: u(x) \leq\,k\,\}\,,%
\end{equation}%
and introduce the cut-off function
\begin{equation}\label{fixis}%
\phi(x)=\, \left\{\begin{array}{ll}\displaystyle \,1
&\displaystyle \mbox{ if }\ |x|\leq\,\rho\,,\\
\displaystyle \frac{R-\,|x|}{R-\,\rho} & \displaystyle \mbox{ if }\ \rho
\leq\,|\,x\,|\leq\,R\,,\\%
\,0 &\displaystyle \mbox{ if }\ R \leq\,|x|\,.
\end{array}\right.\end{equation}
In the sequel, $\,0<\,\rho <\,R <\,\rho_0\,.$ For brevity, we set
$$
B(k)=\,B(k,\,R)\,.
$$
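As a quick numerical aside (not part of the original argument), the cut-off function \eqref{fixis} can be checked directly: it equals $1$ on the inner ball, vanishes outside the outer one, and the slope of its affine part is exactly $-(R-\rho)^{-1}$, which is the gradient bound used repeatedly below. A minimal Python sketch, with illustrative values of $\rho$ and $R$:

```python
def phi(x_norm, rho, R):
    """Radial cut-off of eq. (fixis): 1 for |x| <= rho, affine in between, 0 for |x| >= R."""
    if x_norm <= rho:
        return 1.0
    if x_norm >= R:
        return 0.0
    return (R - x_norm) / (R - rho)

rho, R = 0.1, 0.3
# Finite-difference slope in the annulus rho < |x| < R: it equals -1/(R - rho),
# so |grad phi| <= (R - rho)**(-1) everywhere.
h = 1e-6
slope = (phi(0.2 + h, rho, R) - phi(0.2, rho, R)) / h
```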
The following kind of estimate is well known.
\begin{lemma}\label{L52}
Assume that $\,0<\,\rho\, <R <\,\overline{R}\,,$ and
$\,0<\,h<\,k\,.$ Let $\,v \in \, H^{1,\,t}(\overline{R})\,.$
Then, the following estimates hold.
\begin{equation}\label{exle64}%
\left\{\begin{array}{ll}\displaystyle
\int_{B(h,\,\rho)} (h-\,u)^t \,dx\leq\\%
c\,\Big(\,(R-\,\rho)^{-\,t} \int_{B(k)} \,(k-\,u)^t \,dx\,+
\int_{B(k)} \, |\,{\nabla}\,u\,|^t \,\phi^{\,t}
\,dx\,\Big)\,|B(k)|^\frac{t}{N}\,,\\
\\
\displaystyle |B(h,\,\rho)|\,(k-\,h)^t \leq \, \int_{B(h,\,\rho)} \,(k-\,u)^t
\,dx\, \leq \, \int_{B(k)} \,(k-\,u)^t \,dx\,.
\end{array}\right.\end{equation}
\end{lemma}
For the proof of the first estimate see, for instance, the proof of
the first inequality (6.12) in reference \cite{c2}. The second
estimate in \eqref{exle64} is obvious.
\begin{theorem}\label{T51}
Let $\,\phi\,$ be given by \eqref{fixis}. Then, for each real $k$,%
\begin{equation}\label{eq54}%
\int_{B(k,\,R)}\, |\,{\nabla}\,u\,|^t \,\phi^{\,t}\,dx \leq\,
c\,(R-\,\rho)^{-\,t} \int_{B(k,\,R)} \,|\,u-\,k\,|^{\,t} \,dx\,.%
\end{equation}
\end{theorem}
\begin{proof}
By the definition of $u_{m,\,\rho_0}(x)\,$ one has
\begin{equation}\label{Eq55}
\int_{{\Sigma}} \,\big(\,A({\nabla}\,u\,),\,{\nabla}\,(\,v-\,u\,)\,\big) \,dx
\geq\,0\,, \quad \forall \, v \in\,{\mathbb K}_m(\Sigma)\,,
\end{equation}
where (recall \eqref{zeroonze})
$$
{\mathbb K}_m(\Sigma)=\,\big\{\,v \in\,H^{1,\,t}_0({\Sigma})\,:\,v \geq\,m \quad
\mbox{ on }\quad E_{\rho_0}\,\big\}\,.
$$
By setting in equation \eqref{Eq55} $\,v=\,u-\,\phi^{\,t}
\,\min(u-\,k,\,0)\,$ it follows that
\begin{equation}\label{Eq56}
\int_{B(k)} \,\big(\,A({\nabla}\,u\,),\,{\nabla}\,u\,\,\big) \,\phi^t \,dx
\leq\,-t\,\int_{B(k)}
\,\big(\,A({\nabla}\,u\,),\,{\nabla}\,\phi\,\big)\,(\,u-\,k)\,\phi^{\,t-\,1}
\,dx.
\end{equation}
From \eqref{Eq56}, by appealing to H\"older's inequality and to
properties enjoyed by $\,\phi\,$ and $\,A(p)\,,$ we show that
\begin{equation}\label{eq57}\begin{array}{ll}\displaystyle%
a\, \int_{B(k)} \, |\,{\nabla}\,u\,|^t \,\phi^{\,t} \,dx \leq \\
\\
t\,a^{\,t-1}\,\Big(\,\int_{B(k)} \,|\,{\nabla}\,u\,|^t \,\phi^{\,t}
\,dx\,\Big)^{\frac{t-\,1}{\,t}}\, \Big(\, \int_{B(k)} \,
|\,u-\,k\,|^t \,|\,{\nabla}\,\phi\,|^{\,t}
\,dx\,\Big)^{\frac{\,1}{\,t}}\,.%
\end{array}\end{equation}%
Equation \eqref{eq57} leads to
$$
\int_{B(k)} \, |\,{\nabla}\,u\,|^t \,\phi^{\,t} \,dx \leq\,c\,
\int_{B(k)} \, |\,u-\,k\,|^t \,|\,{\nabla}\,\phi\,|^{\,t} \,dx\,.
$$
Since $\,|\,{\nabla}\,\phi\,| \leq\,(R-\,\rho\,)^{\,-1}\,,$ the claim
follows.
\end{proof}
The next result follows by appealing to theorem \ref{T51} and lemma
\ref{L52}.
\begin{lemma}\label{L53}
Assume that $\,0<\,\rho\, <R\,,$ and $\,0<\,h<\,k\,.$ The following
estimates hold.
\begin{equation}\label{exle64a}%
\left\{\begin{array}{ll}\displaystyle \int_{B(h,\,\rho)} (h-\,u)^t \,dx\leq \,
c_1\,|B(k)|^\frac{t}{N}\,(R-\,\rho)^{-\,t} \int_{B(k)} \,(k-\,u)^t \,dx\,,\\
\\
\displaystyle |B(h,\,\rho)|\,(k-\,h)^t \leq \, \int_{B(k)} \,(k-\,u)^t \,dx\,.
\end{array}\right.\end{equation}
\end{lemma}
For brevity we set
\begin{equation}\label{exle64b}%
\left\{\begin{array}{ll}\displaystyle%
u(h,\,\rho)=\,\int_{B(h,\,\rho)} (h-\,u)^t \,dx\,,\\
\\
\displaystyle b(h,\,\rho)=\,|B(h,\,\rho)|\,.
\end{array}\right.\end{equation}
In this notation, the estimates of lemma \ref{L53} take the form
\begin{equation}\label{exle64c}%
\left\{\begin{array}{ll}\displaystyle%
u(h,\,\rho)\leq \,
c_1\,b(k,\,R)^\frac{t}{N}\,(R-\,\rho)^{-\,t}\,u(k,\,R)\,,\\
\\
\displaystyle b(h,\,\rho)\,(k-\,h)^t \leq \,u(k,\,R)\,.
\end{array}\right.\end{equation}
Next, we define
\begin{equation}\label{psistamp}%
\psi (h,\,\rho)=\,u(h,\,\rho)^{\,\theta \frac{N}{t}} \, b(h,\,\rho)\,,%
\end{equation}%
where
$$
\theta=\,\frac12 +\,\sqrt{\frac{1}{4}+\,\frac{t}{N}}>\,1\,.
$$
Straightforward calculations show that
\begin{equation}\label{psisminus}%
\psi(h,\,\rho)\leq\,c_1^{\,\frac{N}{t}\,\theta\,}\,
\,\frac{1}{(\,R-\,\rho\,)^{\,N\,\theta}}\,
\frac{1}{(\,k-\,h\,)^{\,t}}\,\psi(k,\,R)^\theta \,.%
\end{equation}%
Note that
$\,\frac{t}{N}+\,\theta=\,\theta^{\,2}\,.$ We point out that the above choice
of $\,\theta\,$ is the only possible one that yields an estimate of the form \eqref{psisminus}.\par%
\vspace{0.2cm}
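As a sanity check (ours, not in the original), the defining property of $\theta$ is elementary to verify numerically: $\theta=\frac12+\sqrt{\frac14+\frac{t}{N}}$ is the positive root of $\theta^2=\theta+\frac{t}{N}$, and it exceeds $1$ for all $t,N>0$. A short Python sketch, with a few arbitrary pairs $(t,N)$:

```python
import math

def theta(t, N):
    """Positive root of theta**2 = theta + t/N, the exponent chosen in the text."""
    return 0.5 + math.sqrt(0.25 + t / N)

# The identity t/N + theta = theta**2 is precisely what makes the exponent of
# u(k, R) produced by substituting the two estimates into psi(h, rho) match
# the exponent theta**2 * N/t required by psi(k, R)**theta.
checks = [theta(t, N) ** 2 - theta(t, N) - t / N for t, N in [(2, 3), (2, 5), (3, 4)]]
```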
\begin{lemma}\label{L55bis}
Let $\,0<\,r_0 \leq\,\frac{\,\rho_0}{2}\,,$ $\,k_0\in\,{\mathbb R}\,,$ and
$\,d>\,0\,.$ For each non-negative integer
$m$, define the following quantities:
\begin{equation}\label{e514bis}%
\left\{\begin{array}{ll}%
r_m=\,\frac{r_0}{2}+\,\frac{r_0}{2^{m+1}}\,,\\
\\
k_m=\,k_0-\,d +\,\frac{d}{2^{\,m}}\,,%
\end{array}\right.%
\end{equation}
\begin{equation}\label{e515bis}%
\left\{\begin{array}{ll}%
b_m=\,|B(k_m,\,r_m)|\,,\\
\\
u_m=\,\int_{B(k_m,\,r_m)} \,(k_m-\,u)^t \,dx\,,%
\end{array}\right.%
\end{equation}
and
\begin{equation}\label{e516biss}%
\psi_m=\,u_m^{\,\theta \frac{N}{t}} \, b_m\,.%
\end{equation}%
Then
\begin{equation}\label{demeias}%
\big|B(k_0-\,d,\,\frac{r_0}{2})\big|=\,0%
\end{equation}%
if
\begin{equation}\label{ddt}%
d \geq c_1^{\,\frac{N \,\theta}{\,t^2} }\, \frac{
2^{\frac{\beta\,\theta}{t}}}{(2\,r_0)^{\frac{\,N\,\theta}{t}}}\,
\,\psi_0^{\frac{\,\theta-1}{t}}\equiv \,
C\,\frac{\psi_0^{\frac{\,\theta-1}{t}}}{r_0^{\frac{\,N\,\theta}{t}}}\,.%
\end{equation}%
\end{lemma}
\begin{proof}
Note that $\,b_m\,,\,u_m\,,$ and $\,\psi_m\,$ are non-increasing
sequences. By setting in equation \eqref{psisminus}
$\,(k,\,R)=\,(k_m,\,r_m)\,,$ and $
\,(h,\,\rho)=\,(k_{m+\,1},\,r_{m+\,1})\,,$
one shows that%
\begin{equation}\label{bicas}%
\psi_{m+1}\leq\,c_1^{\,\frac{N}{t}
\theta\,}\,\frac{1}{d^t}\,\frac{1}{(2\,r_0)^{\,N\,\theta}}\,
\,2^{(\,m+\,1)\,(\,t+\,N\,\theta)}\,\psi_m^{\theta}\,.
\end{equation}%
We want to prove, by induction, that
\begin{equation}\label{inducao}%
\psi_m\leq\,\frac{\psi_0}{2^{\,\beta\,m}}\,,\quad \forall \,
m\geq\,0\,,
\end{equation}%
where
$$
\beta=\,\frac{\,t+\,N\,\theta}{\theta-\,1}\,.
$$
For $m=0\,,$ \eqref{inducao} is obvious. Assume it for some
$\,m\geq\,0\,.$ By appealing to \eqref{bicas} and \eqref{inducao}
straightforward calculations show that
\begin{equation}\label{bicasb}%
\psi_{m+1}\leq\,c_1^{\,\frac{N}{t}
\theta\,}\,\frac{1}{d^t}\,\frac{2^{\beta\,\theta}}{(2\,r_0)^{\,N\,\theta}}\,
\,\psi_0^{\theta-\,1}\,\frac{\psi_0}{2^{\,\beta\,(m+\,1)}}\,.
\end{equation}%
This proves \eqref{inducao}, under the assumption \eqref{ddt}. In
particular, $\,\psi_m \rightarrow \,0\,,$ as $\,m \rightarrow
\,\infty\,.$ Since
$$
\big|\,B(k_0 -\,d,\,\frac{r_0}{2})\,\big| \,\Big\{\,\int_{B(k_0
-\,d,\,\frac{r_0}{2})} \,\big(\,(\,k_0 -\,d\,)-\,u\,\big)^t
\,dx\,\Big\}^{\,\theta \frac{N}{t}} \leq\,\psi_m\,,
$$
the conclusion of the lemma follows.%
\end{proof}
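The mechanism of the proof can be observed numerically (this sketch is ours, not part of the paper; the values of $c_1$, $r_0$, $\psi_0$, $t$, $N$ are arbitrary choices). If the recursion \eqref{bicas} is iterated with equality and $d$ is taken exactly at the threshold \eqref{ddt}, the iterates reproduce the induction bound \eqref{inducao} with equality, $\psi_m=\psi_0\,2^{-\beta m}$, and in particular $\psi_m\to0$:

```python
import math

t, N = 2.0, 3.0
theta = 0.5 + math.sqrt(0.25 + t / N)          # t/N + theta = theta**2
beta = (t + N * theta) / (theta - 1.0)
c1, r0, psi0 = 1.0, 0.25, 1.0                  # illustrative values only

# d exactly at the threshold of eq. (ddt).
d = (c1 ** (N * theta / t ** 2) * 2.0 ** (beta * theta / t)
     * psi0 ** ((theta - 1.0) / t) / (2.0 * r0) ** (N * theta / t))

# Iterate eq. (bicas) with equality:
#   psi_{m+1} = c1**(N*theta/t) / (d**t * (2 r0)**(N*theta))
#               * 2**((m+1)*(t + N*theta)) * psi_m**theta.
A = c1 ** (N * theta / t) / (d ** t * (2.0 * r0) ** (N * theta))
psi, trace = psi0, [psi0]
for m in range(10):
    psi = A * 2.0 ** ((m + 1) * (t + N * theta)) * psi ** theta
    trace.append(psi)
# At the threshold the bound (inducao) is attained: psi_m = psi0 * 2**(-beta*m).
```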
\begin{corollary}\label{corinf}%
There is a constant $C$, independent of $r_0$ and $k_0$, such that
\begin{equation}\label{minimus}%
\textrm{Inf}_{I(\frac{r_0}{2})} \,u \geq\,k_0 -\,
C\,\Big\{\,\frac{1}{r_0^N}\,\int_{B(k_0,\,r_0)} \,
\big(\,k_0-\,u\,\big)^t \,dx\,\Big\}^\frac{1}{t}\,
\Big\{\,\frac{1}{r_0^N}\,|\,B(k_0,\,r_0)\,|\Big\}^\frac{\theta-1}{t}\,.
\end{equation}%
In particular,
\begin{equation}\label{minimus2}%
\textrm{Inf}_{I(\frac{r_0}{2})} \,u \geq\,k_0 -\,
C\,\Big\{\,\frac{1}{r_0^N}\,\int_{B(k_0,\,r_0)} \,
\big(\,k_0-\,u\,\big)^t \,dx\,\Big\}^\frac{1}{t}\,.
\end{equation}%
\end{corollary}
The proof of the first estimate follows immediately from
\eqref{demeias}, by taking into account that the constant $C$ on the
right hand side of \eqref{minimus} is equal to the constant $C$ on the
right hand side of \eqref{ddt}. The second estimate follows from the
first one (here, we change the value of the constant $C$). Since $C$
does not depend on $r_0$ and $k_0\,,$ we drop the index $0$.
Further, we define
$$
i(r)=\,\textrm{Inf}_{I(r)}\,u \,, \quad s(r)=\,\textrm{Sup}_{I(r)}\,u \,,\quad
{\omega}(r)=\,s(r) -\,i(r)\,.
$$
By setting in \eqref{minimus2} $k=\,i(2\,r) +\,\eta\,{\omega}(2\,r)\,,$
where $\,\eta>\,0\,,$ and by
taking into account that for $\,x\,\in B(k,\,r)\,$ one has%
$$
0\leq\,k-\,u(x) \leq\,\eta\,{\omega}(2\,r)\,,
$$
it follows that
$$
i(\frac{r}{2})\geq\,i(2r) +\,\eta\,{\omega}(2r)-\,
C\,\Big\{\,\frac{1}{r^N}\,|\,B(k,\,r)\,|\,\Big\}^\frac{1}{t}\,\eta\,{\omega}(2r)\,.
$$
Hence,
$$
{\omega}(\frac{r}{2})\leq\,\Big\{\,1-\,\eta \Big[\,1-\,
C\,\Big(\,\frac{1}{r^N}\,|\,B(k,\,r)\,|\,\Big)^\frac{1}{t}\,\Big]\,\Big\}\,{\omega}(2r)\,.
$$
\vspace{0.2cm}
For convenience we replace $r$ by $2r$ in the next result.
\begin{proposition}\label{propas}
Let $\,k=\,i(4\,r) +\,\eta\,{\omega}(4\,r)\,.$ Then
\begin{equation}\label{E521}%
{\omega}(r)\leq\,\Big\{\,1-\,\eta \, \Big[\,1-\,
C\,\Big(\,\frac{1}{r^N}\,|\,B(k,\,2r)\,|\,\Big)^\frac{1}{t}\,\Big]\,\Big\}\,{\omega}(4r)\,.
\end{equation}%
\end{proposition}%
\begin{remark}\label{melhas}
\rm{In reference \cite{c2} (equation (6.21) therein) it was proved that%
\begin{equation}\label{exnovo}%
\begin{array}{ll}\displaystyle
|B(h,\,\rho)|\,(k-\,h)^t \leq \\
c\,\Big(\,(R-\,\rho)^{-\,t} \int_{B(k)} \,(k-\,u)^t \,dx\,+
\int_{B(k)} \, |\,\nabla\,u\,|^t \,\phi^{\,t}
\,dx\,\Big)\,|B(h,\,\rho)|^\frac{t}{N}\,.%
\end{array}\end{equation}%
This estimate, together with \eqref{eq54}, shows that
\begin{equation}\label{exle64bb}%
|B(h,\,\rho)|^{\,1-\,\frac{\,t}{N}}\,(k-\,h)^t \leq \,
c_1\,(R-\,\rho)^{-\,t}
\int_{B(k)} \,(k-\,u)^t \,dx\,.%
\end{equation}%
If we appeal to this estimate (instead of appealing to the second
estimate \eqref{exle64}) we get \eqref{minimus} with the exponent
$\,\frac{\theta-1}{t}\,$ replaced by $\,\frac{1}{N\,\theta_1}\,,$
where $\,\frac{t}{N-\,t}+\,\theta_1=\,\theta_1^{\,2}\,.$}%
\end{remark}
\section{Proof of theorem \ref{teo-nopub}}\label{regcrit2}
We start this section by stating a well known potential theory
result.
\begin{lemma}\label{L62}
Let $\,\mu\,$ be a compactly supported measure of bounded variation in
$\,{\mathbb R}^N\,,$ and let%
\begin{equation}\label{e63}%
U^{\mu}_1(x)=\,\int \, \frac{\,d\,\mu(z)}{|\,x-\,z|^{N-\,1}}%
\end{equation}%
be the potential of order $\,1\,$ generated by $\,\mu\,.$ Then,
there is a positive constant $\,c\,$ such that
\begin{equation}\label{e64}%
|\,\{\,x \in\,{\mathbb R}^N: \,|\,U^{\mu}_1(x)\,| \geq\,\tau\,\}\,|
\leq\,\Big(\,\frac{c\,\int
\,|\,d\,\mu\,|}{\tau}\,\Big)^{\frac{\,N}{N-\,1}}\,,%
\end{equation}%
for each $\,\tau>\,0\,$.
\end{lemma}
For potentials of order $\,2\,$, the above result is essentially due
to E. Cartan, see \cite{cartan}, lemma 4. The result is easily
extended to potentials of arbitrary order $\,\alpha\,.$ For
$\,\alpha=\,1\,,$ it claims that
$$
\textrm{cap}^{\,*}_1\{\,x \in\,{\mathbb R}^N: \,|\,U^{\mu}_1(x)\,|
\geq\,\tau\,\}\leq\,\frac{2^{N-\,1}\,\,\int \,|\,d\,\mu\,|}{\tau}\,,
$$
for each $\,\tau>\,0\,,$ where $\,\textrm{cap}^{\,*}_1 (E)\,$
denotes the internal capacity of order $\,1\,$ of the set $\,E\,$.
Equation \eqref{e64} follows by appealing to the classical estimate
$$
|\,E\,|\leq\, c(N)\,(\,\textrm{cap}^{\,*}_1
(E)\,)^{\frac{\,N}{N-\,1}}\,.
$$
Next we prove the following result.
\begin{lemma}\label{L63}
Let $\,0\leq\,h<\,k\leq\,m\,,$ and $\,0<\,r<\,\frac{\rho_0}{2}\,.$
Then \begin{equation}\label{e65}%
\begin{array}{ll}
|\,B(h,\,2\,r)\,|^{\frac{\,t\,(N-\,1)}{N\,(\,t-\,1)}}\leq\,c\,[\,(k-\,h)\,\sigma(2\,r)\,]^{\,-
\frac{\,t}{\,t-\,1}}\,(\,|\,B(k,\,2\,r)\,|-\,|\,B(h,\,2\,r)\,|\,)\\
\\
\Big(\,(\,2\,r\,)^{\,-t} \,\int_{B(k,\,4\,r)} \, |\,u-\,k\,|^t \,dx
\,\Big)^{\frac{1}{t-\,1}}\,.%
\end{array}\end{equation}
\end{lemma}
\begin{proof}
Set
$$%
v=\,\left\{%
\begin{array}{ll}%
k-\,h\ \quad \textrm{if} \quad u\leq\,h\,,\\
k-\,u \quad \textrm{if} \quad h \leq\,u\leq\,k\,,\\
0 \quad \textrm{if} \quad k\leq\,u\,,\\
\end{array}%
\right.%
$$%
and%
\begin{equation}%
\mu(z)=\,\left\{%
\begin{array}{ll}%
|\,{\nabla}\,v(z)\,| \quad \textrm{on} \quad I(0,\,2\,r)\,,\\
\\
0 \quad \textrm{on} \quad (\complement \,I)(0,\,2\,r)\,.
\end{array}%
\right.%
\end{equation}%
Since $\,v\,$ vanishes on $\,E_{2\,r}\,$, from assumption
\ref{assumpsigma} it follows that $\,|\,v(x)\,|
\leq\,c\,\sigma(2\,r)^{\,-\,1}\,\,U^{\mu}_1(x)\,$ on $\,I(2\,r)\,.$
Hence, by lemma \ref{L62}, we show that
\begin{equation}\label{e66}\begin{array}{ll}%
|\,\{\,x\in\,I(2\,r)\,:\,|\,v(x)\,|\geq\,\tau\,\}\,|
\leq\,c\,\Big(\,(\sigma(2\,r)\,\tau\,)^{\,-\,1}\,\int_{I(\,2\,r)}
\,|\,{\nabla}\,v(z)\,| \, dz \,\Big)^{\frac{\,N}{N-\,1}}\,,%
\end{array}\end{equation}%
for each $\,\tau>\,0\,.$ Let $\,\tau=\,k-\,h-\,\epsilon\,$, where
$\,\epsilon>\,0\,.$\par%
By appealing to the definition of $\,v\,$ we prove that
$$
\begin{array}{ll}%
|\,B(h,\,2\,r)\,| \leq \,|\,\{\,x \in\,I(2\,r): v(x)
\geq\,\tau\,\}\,|
\\
\leq\,c\,\Big(\,\big[\sigma(2\,r)\,(k-\,h-\,\epsilon)\,\big]^{\,-\,1}\,\int_{B(k,\,2\,r)-\,B(h,\,2\,r)}
\, |\,{\nabla}\,v(z)\,| \, dz \,\Big)^{\frac{\,N}{N-\,1}}\,.%
\end{array}
$$
Further, by letting $\,\epsilon \rightarrow 0\,$ in the last equation,
and by appealing to H\"older's inequality, we obtain the estimate
\begin{equation}\label{e67}
\begin{array}{ll}%
|\,B(h,\,2\,r)\,|^{\frac{N-\,1}{\,N}}
\leq\,c\,\Big(\,\big[\sigma(2\,r)\,(k-\,h)\,\big]^{\,-\,1}\,
\Big(\,\int_{B(k,\,2\,r)} \, |\,{\nabla}\,u\,|^t \, dx \,\Big)^{\frac{\,1}{\,t}}\cdot\\
\\
\,\big(\,|\,B(k,\,2\,r)\,|-\,|\,B(h,\,2\,r)\,|\,\big)^{\frac{\,t-\,1}{\,t}}\,\Big)\,.%
\end{array}\end{equation}%
Finally, by raising both sides of the last equation to the power
$\,\frac{\,t}{\,t-\,1}\,,$ and by appealing to theorem \ref{T51}
(with $\,\rho=\,2\,r\,,$ and $\,R=\,4\,r\,$), the claim follows.
\end{proof}
\begin{theorem}\label{T61}
Let $\,0<\,r<\,4^{\,-\,1}\rho_0\,.$ There is a constant
$\,C_1\,$, which depends at most on $\,a,\,p_0,\,d,\,t\,,$ and
$\,N\,,$ such that if $\,n_0=\,n_0(r) \,$ satisfies \eqref{enezero}
below, then%
\begin{equation}\label{e68}%
{\omega}(r)\leq\,(\,1-\,2^{\,-\,1}\,\eta_{\,n_0}\,)\,{\omega}(4\,r)\,,%
\end{equation}%
where
$$
\eta_{\,n_0}=\,2^{\,-(\,n_0+\,1)}\,.
$$
\end{theorem}
\begin{proof}
Let $\,l=\,i(4r)\,,$ $\,{\omega}=\,{\omega}(4r)\,,$ and set, for each non-negative integer $\,j\,,$%
\begin{equation}\label{E520}%
\left\{%
\begin{array}{ll}
\eta_j=\,2^{\,-\,(j+\,1)}\,,\\
\\
k_j=\,i(4r)+\,\eta_j \,{\omega}(4r)\,,
\end{array}%
\right.%
\end{equation}%
and $\,b_j=\,|\,B(k_j,\,2\,r)\,|\,.$ By lemma \ref{L63} with
$\,k=\,k_j \,$ and $\,h=\,k_{\,j+\,1} \,,$ we obtain
$$
\begin{array}{ll}%
{b_{j+1}}^{\frac{t(\,N-\,1)}{\,N(t-1)}}\leq\,c\,\big[\,2^{\,-(\,j+\,2)}\,{\omega}\,\sigma(2r)
\,\big]^{-\,\frac{t}{\,t-\,1}}\,(\,b_{j}-\,b_{j+1}\,)\cdot \\
\\
\big[\,(2r)^{\,-t}\,V_N\,(4\,r)^N\,(2^{\,-(\,j+\,1)}\,{\omega})^t
\,\big]^{\frac{\,1}{\,t-1}}\,.
\end{array}%
$$
Straightforward calculations show that%
\begin{equation}\label{e610}%
\begin{array}{ll}%
{b_{j+1}}^{\frac{t(\,N-\,1)}{\,N(t-1)}}\leq\,c\,r^{\frac{N-\,t}{\,t-\,1}}\,\sigma(2r)^{\,-
\frac{\,t}{\,t-1}}\,(\,b_{j}-\,b_{j+1}\,)\,,%
\end{array}\end{equation}%
where, for convenience, the value of the constant $c$ may change
from equation to equation (clearly, it depends only on fixed quantities like $N$, $t$, etc.).\par%
Denote by $\,n_0=\,n_0(r)\,$ an arbitrary positive integer, to be
fixed later on. From \eqref{e610} it follows that
$$
{b_{n_0}}^{\frac{t(\,N-\,1)}{\,N(t-1)}}\leq\,{b_{j+1}}^{\frac{t(\,N-\,1)}{\,N(t-1)}}\leq\,
c\,r^{\frac{N-\,t}{\,t-\,1}}\,\sigma(2r)^{\,-
\frac{\,t}{\,t-1}}\,(\,b_{j}-\,b_{j+1}\,)\,,
$$
for each $j\,$, $\,0\leq\,j\leq\,n_0 -\,1\,.$ Consequently,
$$
\begin{array}{ll}%
n_0\,{b_{n_0}}^{\frac{t(\,N-\,1)}{\,N(t-1)}}\leq\,
c\,r^{\frac{N-\,t}{\,t-\,1}}\,\sigma(2r)^{\,- \frac{\,t}{\,t-1}}\,
\sum_{j=\,0}^{n_0-\,1} \, (\,b_{j}-\,b_{j+1}\,)\\%
\\
\qquad \qquad \qquad \leq\,c_0\,\sigma(2r)^{\,- \frac{\,t}{\,t-1}}\,
(2r)^{\frac{\,t(N-\,1)}{\,t-1}}\,.%
\end{array}%
$$%
Hence,
\begin{equation}\label{e612}%
\Big(\,{\frac{b_{n_0}}{(2\,r)^N}}\Big)^{\frac{1}{t}}
\leq\,C\,{n_0}^{-\frac{N(t-1)}{t^2\,(N-1)}}
\,\sigma(2r)^{\,- \frac{\,N}{\,t(N-1)}}\,.%
\end{equation}%
On the other hand, from \eqref{E521}, one has
\begin{equation}\label{E521bis}%
{\omega}(r)\leq\,\Big\{\,1-\,2^{-(n_0+1)} \, \Big[\,1-\,
C\,\Big(\,\frac{b_{n_0}}{r^N}\,\,\Big)^\frac{1}{t}\,\Big]\,\Big\}\,{\omega}(4r)\,.
\end{equation}%
Finally, from \eqref{e612} and \eqref{E521bis},
\begin{equation}\label{E521tis}%
{\omega}(r)\leq\,\Big\{\,1-\,2^{-(n_0+1)} \, \Big[\,1-\,
\,C_0\,{n_0}^{-\frac{N(t-1)}{t^2\,(N-1)}} \,\sigma(2r)^{\,-
\frac{\,N}{\,t(N-1)}}\,\Big]\,\Big\}\,{\omega}(4r)\,.
\end{equation}%
Next, we want to single out an index $n_0=\,n_0(r)$ such that the
expression in square brackets is less than or equal to $\frac12\,,$
for each positive (small) radius $r$. This leads to
\begin{equation}\label{enezero}%
n_0(r)\geq\,C_1\,\sigma(2r)^{\,- \frac{\,t}{\,t-1}}\,,%
\end{equation}%
where $C_1$ is a constant which depends at most on
$\,a,\,p_0,\,d,\,t\,,$ and $\,N\,$. In the sequel we denote by
$n_0(r)$ the smallest integer for which \eqref{enezero} holds. Hence
\begin{equation}\label{enezero3}%
C_1\,\sigma(2r)^{\,- \frac{\,t}{\,t-1}}\leq\,n_0(r)<\,1+\,C_1\,\sigma(2r)^{\,- \frac{\,t}{\,t-1}}\,.%
\end{equation}%
\end{proof}
\begin{lemma}\label{L64}
Let $\,C_1\,$ be the constant in equation \eqref{enezero}, and assume that
\begin{equation}\label{densasr2}
\big[\,\sigma(r)\,\big]^{\frac{t}{t-\,1}}\geq\,C_1\,(\log
2)\,(\log\,\log\,(r^{-\,1})\,)^{-1}\,,
\end{equation}
for each positive $\,r\,$ in an arbitrarily
small neighborhood of zero (clearly, $\,r<\,1\,$ is assumed). Then%
\begin{equation}\label{e613}%
\lim_{r \rightarrow \,0} {\omega}(r)=\,0\,.%
\end{equation}%
In particular, the boundary point $\,y\,$ is regular.
\end{lemma}
\begin{proof}
Fix a positive $\,r_0\,$ such that
\begin{equation}\label{e614}%
{\omega}(r)\leq\,(1-\,4^{\,-1}\,\eta_{\,n_0}\,)\,{\omega}(4\,r)\,, \quad
\forall \, r<\,r_0\,.%
\end{equation}%
This choice is possible, by \eqref{e68}. Further, define,
for each non-negative index $\,i\,$,%
\begin{equation}\label{e615}%
r_i=\,4^{\,-\,i}\,r_0\,.%
\end{equation}%
Furthermore, set $\,n_0(i)=\,n_0(r_i)\,.$ From \eqref{e614} it
follows that
$\,{\omega}(r_i)\leq\,(1-\,4^{\,-1}\,\eta_{\,n_0(i)}\,)\,{\omega}(r_{\,i-1})\,,$
for each $\,i\geq\,1\,,$ so%
\begin{equation}\label{e616}
{\omega}(r_i)\leq\,\prod_{k=\,1}^{i}(1-\,4^{\,-1}\,\eta_{\,n_0(k)}\,)\,{\omega}(r_0)\,.%
\end{equation}%
From \eqref{enezero3}, and \eqref{densasr2}, it follows that
$$
n_0(r) <\,1+\,(\log
2\,)^{-1}\,\log\big(\,\log(\,2\,r)^{\,-\,1}\,\big)\,.
$$
Hence,%
$$
2^{\,n_0(k) +\,1} \leq\,4\,
e^{\log\big(\,\log(\,2\,r)^{\,-\,1}\,\big)}=\,4\,\log(\,2\,r)^{\,-\,1}\,,
$$
where $\,r=\,r_k\,.$ It follows that%
\begin{equation}\label{e618}%
\eta_{n_0(k)}\geq\,4^{\,-1}\,\big(\,\log\,(2\,r_k)^{-1}\,\big)^{\,-1}\,,
\quad \forall \,k \geq\,1\,.%
\end{equation}%
Further, by appealing to \eqref{e615}, one gets
\begin{equation}\label{e618b}%
\eta_{n_0(k)}\geq\,\frac{1}{4\,\big(k\,\log\,4-\,\log\,(2\,r_0)\,\big)}\,.%
\end{equation}%
Since $\,\log(1-\,x) \leq\,-\,x\,$ we get
$$
\log(\,1-\,4^{\,-1}\,\eta_{n_0(k)}\,)\leq\,\frac{-1}{4^2\,\big(k\,\log\,4-\,\log\,(2\,r_0)\,\big)}\,.%
$$
So,
$$
\sum_{k=\,1}^{+\,\infty}\,
\log\,(1-\,4^{\,-1}\,\eta_{\,n_0(k)}\,)\,=\,-\,\infty\,.
$$
Hence%
\begin{equation}\label{e619}
\prod_{k=\,1}^{+\,\infty}(1-\,4^{\,-1}\,\eta_{\,n_0(k)}\,)=\,0\,.%
\end{equation}%
Equation \eqref{e613} follows from \eqref{e616} and \eqref{e619}\,.
\end{proof}
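The divergence argument closing the proof is easy to watch numerically (our sketch, not from the paper; $r_0=0.1$ is an arbitrary choice). With the lower bound $\eta_{n_0(k)}\geq\,[4\,(k\,\log 4-\log(2\,r_0))]^{-1}$, the series $\sum_k\log(1-4^{-1}\eta_{n_0(k)})$ diverges to $-\infty$ logarithmically in $k$, so the partial products tend to zero, albeit very slowly:

```python
import math

r0 = 0.1                                     # illustrative radius, r0 < 1/2

def q(k):
    # Lower bound for eta_{n_0(k)} / 4 used in the proof.
    return 1.0 / (16.0 * (k * math.log(4.0) - math.log(2.0 * r0)))

S, partial = 0.0, {}
for k in range(1, 10 ** 5 + 1):
    S += math.log(1.0 - q(k))                # log of the k-th partial product
    if k in (10, 10 ** 3, 10 ** 5):
        partial[k] = S

# S(K) behaves like -(16 log 4)**(-1) * log K, so exp(S(K)) -> 0 as
# K -> infinity: the infinite product (e619) vanishes.
```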
\begin{remark}
\rm{ In the more general situation \eqref{zerotres},
\eqref{zeroquatro}, \eqref{zerocinco}, one has to appeal to
\eqref{doistres}. In this case \eqref{e68} is replaced by
\begin{equation}\label{e68b}%
{\omega}(r)\leq\,(\,1-\,2^{\,-\,1}\,)\,{\omega}(4\,r)+\,(\,c+\,\eta_{\,n_0}^{-1})\,r\,.%
\end{equation}%
So, in the proof of lemma \ref{L64}, one must also consider the
possibility that no positive $\,r_0\,$ exists for which
\eqref{e614} holds.}%
\end{remark}
\begin{proof}[Proof of theorem \ref{teo-nopub}]
From lemma \ref{L64} we conclude that the capacitary potentials
$\,u_{m,\,\rho}(x)\,$ are continuous at the point $\,y$. Since
$\,u_{m,\,\rho}=\,m\,$ on $\,E_{\rho_0}\,,$ and
$\,|\,E_{\rho}\,|>\,0\,$ for each positive $\,\rho\,,$ we must have
$\,u_{m,\,\rho}(y)=\,m\,.$ The continuity of the potentials
$\,u_{-\,m,\,\rho}(x)\,$ at $\,y\,$, and the equality
$\,u_{-\,m,\,\rho}(y)=\,-\,m\,,$ are proved in a totally similar way
or, alternatively, by appealing to remark \ref{rem-1.1}.\par%
Finally, the regularity of the boundary point $\,y\,$ follows from
theorem \ref{teo-B}.
\end{proof}
| {
"timestamp": "2013-04-05T02:01:54",
"yymm": "1304",
"arxiv_id": "1304.1312",
"language": "en",
"url": "https://arxiv.org/abs/1304.1312",
"abstract": "We turn back to some pioneering results concerning, in particular, nonlinear potential theory and non-homogeneous boundary value problems for the so called p-Laplacian operator. Unfortunately these results, obtained at the very beginning of the seventies, were kept in the shade. We believe that our proofs are still of interest, in particular due to their extreme simplicity. Moreover, some contributions seem to improve the results quoted in the current literature.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "On nonlinear potential theory, and regular boundary points, for the p-Laplacian in N space variables",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9752018426872777,
"lm_q2_score": 0.7279754430043072,
"lm_q1q2_score": 0.7099229934488877
} |
https://arxiv.org/abs/2008.06267 | On the homology of independence complexes | The independence complex $\mathrm{Ind}(G)$ of a graph $G$ is the simplicial complex formed by its independent sets. This article introduces a deformation of the simplicial boundary map of $\mathrm{Ind}(G)$ that gives rise to a double complex with trivial homology. Filtering this double complex in the right direction induces a spectral sequence that converges to zero and contains on its first page the homology of the independence complexes of $G$ and various subgraphs of $G$, obtained by removing independent sets and their neighborhoods from $G$. It is shown that this spectral sequence may be used to study the homology of $\mathrm{Ind}(G)$. Furthermore, a careful investigation of the sequence's first page exhibits a relation between the cardinality of maximal independent sets in $G$ and the vanishing of certain homology groups of the independence complexes of some subgraphs of $G$. This relation is shown to hold for all paths and cyclic graphs. | \section{Introduction}
An \textit{independent set} in a graph $G=(V,E)$ is a subset of its vertices $I \subset V$ such that no two elements in $I$ are adjacent. More generally, a subset $I\subset V$ is \textit{$r$-independent} if every connected component of the \textit{induced subgraph} $G[I]:=(I,E')$ with $E'=\{\{u,v\} \in E \mid u,v\in I\}$ has at most $r$ vertices.
Since the property of being $r$-independent is closed under taking subsets, the set of all $r$-independent sets of $G$ forms a simplicial complex, the \textit{$r$-independence complex} $\mathrm{Ind}_r( G)$ of $G$;
the vertex set of $\mathrm{Ind}_r( G)$ is $V$ and $I\subset V$ forms a simplex if and only if $I$ is $r$-independent in $G$. In the following we write $\I G$ for $\mathrm{Ind}_1( G)$. See Figure \ref{fig:g,icg,2icg} for an example.
\begin{figure}[h]\label{fig:g,icg,2icg}
$G$ \ \begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\fill[blue] (v1) circle (.0666cm) node[left]{$v_1$};
\fill[red] (v2) circle (.0666cm) node[above]{$v_2$};
\fill[cyan] (v3) circle (.0666cm) node[right]{$v_3$};
\end{tikzpicture}
\hspace{1cm}
$\I G$ \ \begin{tikzpicture}[scale=1]
\coordinate (v1) at (0,0.5);
\coordinate (v2) at (0,-0.5);
\coordinate (v3) at (0.5,0);
\draw (v1) -- (v2) ;
\fill[blue] (v1) circle (.0666cm) node[left]{$v_1$};
\fill[cyan] (v2) circle (.0666cm) node[left]{$v_3$};
\fill[red] (v3) circle (.0666cm) node[right]{$v_2$};
\end{tikzpicture}
\hspace{1cm}
$\mathrm{Ind}_2(G)$ \ \begin{tikzpicture}[scale=1]
\coordinate (v1) at (-0.5,0);
\coordinate (v2) at (0.5,0);
\coordinate (v3) at (0,1);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\draw (v3) -- (v1);
\fill[cyan] (v1) circle (.0666cm)node[left]{$v_3$};
\fill[red] (v2) circle (.0666cm)node[right]{$v_2$};
\fill[blue] (v3) circle (.0666cm)node[right]{$v_1$};
\end{tikzpicture}
\caption{A graph, its independence complex and its 2-independence complex}\label{fig:g,icg,2icg}
\end{figure}
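Small examples like the one in the figure are easy to verify by machine. The following Python sketch (ours, not part of the paper) lists the simplices of $\mathrm{Ind}(P_n)$ for a path $P_n$ and computes its Betti numbers over $\mathbb{Z}/2$; the outputs agree with the well-known homotopy types of independence complexes of paths (a point or a sphere, depending on $n \bmod 3$).

```python
from itertools import combinations

def rank_mod2(rows):
    """Rank over GF(2); each row of the matrix is encoded as an int bitmask."""
    basis = {}                      # pivot bit -> reduced row
    for r in rows:
        while r:
            p = r.bit_length() - 1
            if p in basis:
                r ^= basis[p]
            else:
                basis[p] = r
                break
    return len(basis)

def ind_path(n):
    """Simplices of Ind(P_n): nonempty independent sets of the path 0-1-...-(n-1)."""
    return [s for k in range(1, n + 1) for s in combinations(range(n), k)
            if all(b - a > 1 for a, b in zip(s, s[1:]))]

def betti_mod2(faces):
    """Unreduced Betti numbers over Z/2 via ranks of the simplicial boundary maps."""
    by_dim = {}
    for f in faces:
        by_dim.setdefault(len(f) - 1, []).append(f)
    index = {d: {f: i for i, f in enumerate(fs)} for d, fs in by_dim.items()}
    top = max(by_dim)
    rank = {0: 0}
    for d in range(1, top + 1):
        rows = []
        for f in by_dim[d]:
            mask = 0
            for v in f:             # boundary: sum of the codimension-1 faces
                mask |= 1 << index[d - 1][tuple(x for x in f if x != v)]
            rows.append(mask)
        rank[d] = rank_mod2(rows)
    return [len(by_dim[d]) - rank[d] - rank.get(d + 1, 0) for d in range(top + 1)]

betti = {n: betti_mod2(ind_path(n)) for n in range(1, 6)}
```

Replacing `ind_path` by an enumeration of the independent sets of an arbitrary graph gives the same computation for any $G$.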
These complexes are a special instance of a great variety of simplicial complexes associated to graphs. See \cite{jonsson} for an overview. Some of these complexes are related to each other. For example, the independence complex of a graph is the \textit{matching complex} of its \textit{line graph} and the \textit{clique complex} of its \textit{complement graph}.
The topology of these types of simplicial complexes is a well-studied topic. For independence complexes, most work has been done on connectivity and homotopy-type, cf.\
\cite{kozlov,ehrenborg,szabo-tardost,engstrom2,engstrom,csorba,ADAMASZEK2012,adamaszek}, with applications, for example, to the study of graph colorings \cite{babson-kozlov,barmak}. For the case of higher independence complexes, see \cite{szabo-tardost,deshp-singh}, as well as references therein.
The homology groups of independence complexes have also been investigated, see \cite{meshulam,jonsson2,dao-schweig}.
Here, applications reach from statistical physics, where the Euler characteristic of $\I L$ for $L$ a periodic lattice is referred to as its \textit{Witten index} \cite{bm-sv-ne,hui-schout,adamaszek2}, to group theory, where certain local homology groups of the classical braid groups are related to the homology of certain (higher) independence complexes \cite{salvetti,paolini-salvetti}.
\newline
The main result of this article is a computational recipe for calculating the homology groups of $\I G$ (and to some extent $\mathrm{Ind}_r(G)$, see the discussion in Remark \ref{rem:higher}). It is implied by Corollary \ref{cor:specseq} of Theorem \ref{thm:acyclic}, and strengthened by the properties established in Proposition \ref{thm:properties}.
\begin{thm}\label{thm:1}
Let $G$ be a finite simple graph. There exists a spectral sequence whose $E^1$-page contains a copy of the homology of $\I G$. Its other entries are given by homology groups of independence complexes of graphs $G \! - \! N[U]$, obtained from $G$ by deleting all vertices in $U \subset V$ together with their neighbors, $U$ running over all independent sets of $G$. The sequence collapses to $E^\infty$ which has one entry isomorphic to $\mb Z$ and all other vanishing. Moreover, the differential $d^1:E^1 \to E^1$ is explicitly given and easy to compute.
\end{thm}
This allows one to study the homology of $\I G$ by monitoring the typical dramatic action associated with spectral sequences.\footnote{quoting J.F.\ Adams \cite{ug-specseq}: `` \ldots \ the behavior of this spectral sequence \ldots \ is a bit like an Elizabethan drama, full of action, in which the business of each character is to kill at least one other character, so that at the end of the play one has the stage strewn with corpses and only one actor left alive (namely the one who has to speak the last few lines).'' }
It effectively reduces the computation of the homology of $\I G$ to the problem of determining the homology of independence complexes of the graphs $G \! - \! N[U]$ and inspecting the first page(s) of the above mentioned spectral sequence. In good cases this allows one to determine all homology groups of $\I G$. In general one finds at least some relations between them (and the homology of the building blocks $G\! - \! N[U]$).
Moreover, the ``empirical data'' hints at a rather peculiar property of independence complexes that is satisfied by a large family of examples, including all paths and cyclic graphs; it is however not true in general.\footnote{Thanks to Dmitry Feichtner-Kozlov for pointing this out. The reader is invited to find a counter-example on her/his own. \textit{Hint:} Start with a disconnected graph...}
\begin{thm}\label{thm:2}
Let $G$ be a path or a cyclic graph. If $G$ has no maximal independent set of cardinality $p$, then
\begin{equation*}
\tilde H_{p-q-1}(\I {G\! - \! N[U]}) \cong 0
\end{equation*}
holds for all $q>0$ and all independent sets $U\subset V$ with $|U|=q$.
\end{thm}
This is proven in Theorem \ref{thm:vp}, using the observations made in Proposition \ref{thm:properties}.
\newline
The basic idea to set up the desired spectral sequence is to consider a deformation of the simplicial boundary map $d$ of $\I G$.
For this we model the simplicial chain complex of $\I G$ by a chain complex generated by certain decorations of the vertices of $G$. These decorations, hereafter called \textit{markings}, are given by maps $m: V \to \{ 0,1\}$ with $m^{-1}(1)$ independent in $G$. The differential $d$ is given by a signed sum over all ways of removing the marking of a single vertex $v \in m^{-1}(1)$.
We then enhance this picture by introducing a second type of marking, i.e.\ we now consider maps $m:V \to \{ 0,1,2 \}$ with $m^{-1}(\{ 1,2 \})$ independent. This allows one to define a second differential $\delta$ that changes the first type into the second. The two differentials anticommute, so that we can form a double complex of markings on $G$, graded by the number of markings of the first and second type, called \textit{1-} and \textit{2-markings}, respectively. Setting $D:=d + \delta$ defines a differential on the total complex
\begin{equation*}
T(G):=\bigoplus_n T_n(G) ,\ T_n(G):=\bigoplus_{i-j=n}T_{i,j}(G),
\end{equation*}
where $T_{i,j}(G)$ is the free abelian group generated by markings $m:V \to \{ 0,1,2 \}$ with $i$ marked vertices in total and $j$ 2-marked vertices.
It turns out that this total complex $(T(G),D)$ is acyclic.
Filtering it by the number of 2-markings induces a spectral sequence with first page
\begin{equation*}
E^1_{p,q}= H_p(T_{\bullet,q}(G), d).
\end{equation*}
Thus, the row $q=0$ contains the homology of $\I G$, while the entries with $q>0$ are identified with the homology groups of graphs $G\! - \! N[U]$ for $U\subset V$ independent and $|U|=q$. Since the spectral sequence converges to zero, this allows one to apply standard techniques from homological algebra to study the homology of $\I G$.
\newline
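The construction just outlined can be prototyped directly (our sketch, not from the paper). Since the sign conventions for $d$ and $\delta$ are only sketched in this introduction, the computation works over $\mathbb{Z}/2$, where signs disappear: it enumerates all markings $m:V\to\{0,1,2\}$ with independent support, grades them by $n=i-j$ (the number of 1-marked vertices), lets $D$ either erase a 1-marking or promote it to a 2-marking, and computes the homology of $(T(G),D)$:

```python
from itertools import combinations, product

def rank_mod2(rows):
    """Rank over GF(2); rows encoded as int bitmasks."""
    basis = {}
    for r in rows:
        while r:
            p = r.bit_length() - 1
            if p in basis:
                r ^= basis[p]
            else:
                basis[p] = r
                break
    return len(basis)

def total_homology_mod2(n_vertices, edges):
    """Mod-2 homology of the total complex (T(G), D); degree = # of 1-marked vertices."""
    adj = {frozenset(e) for e in edges}
    def independent(supp):
        return all(frozenset(p) not in adj for p in combinations(supp, 2))
    markings = [m for m in product((0, 1, 2), repeat=n_vertices)
                if independent([v for v in range(n_vertices) if m[v]])]
    by_deg = {}
    for m in markings:
        by_deg.setdefault(sum(1 for x in m if x == 1), []).append(m)
    index = {d: {m: i for i, m in enumerate(ms)} for d, ms in by_deg.items()}
    top = max(by_deg)
    rank = {0: 0}
    for d in range(1, top + 1):
        rows = []
        for m in by_deg[d]:
            mask = 0
            for v in range(n_vertices):
                if m[v] == 1:
                    for new in (0, 2):   # d erases the 1-marking, delta promotes it
                        mask ^= 1 << index[d - 1][m[:v] + (new,) + m[v + 1:]]
            rows.append(mask)
        rank[d] = rank_mod2(rows)
    return [len(by_deg[d]) - rank[d] - rank.get(d + 1, 0) for d in range(top + 1)]
```

For the small graphs we tried, the only surviving class sits in degree zero (the class of the empty marking), consistent with the acyclicity statement above.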
The whole construction is based on the article \cite{ksvs} where two similar complexes (of edge- and cycle-markings) were used to encode consistency conditions in the perturbative quantization of non-abelian gauge theories.
The master's thesis \cite{knispel} by Knispel studied the cohomology of these complexes in detail; he showed that every variant of marking can be pulled back to the case of marking vertices in an associated simple graph $G$, allowing one to compute the cohomology of all such complexes at once (a streamlined version of this construction, using the above introduced spectral sequence in a slightly different disguise, can be found in \cite{mb-ak}). He then continued to study the nontrivial part $d$ of the differential $D=d+\delta$, relating it to the notion of independent sets and cliques in a graph $G$.
In this article we show that this relation can in fact be pushed much further. Firstly, the map $d$ really \emph{is} the boundary map of the independence complex of $G$, and secondly, the total complex $(T(G),D)$ may be used to study its homology.
\newline
The exposition is organized as follows. In Section \ref{sec:ismark} we define the notion of markings to model independent sets in a graph $G$. We then introduce two differentials $d$ and $\delta$ to set up a double complex $(T(G),D)$ that contains a copy of the simplicial chain complex of $\I G$.
The next two sections recapitulate the results of \cite{mb-ak} (Sections 3.1 and 3.2 therein). In Section \ref{sec:delta} the vertical differential $\delta$ of $(T(G),D)$ is studied and its homology is shown to be trivial, except in bidegree $(0,0)$ where it is isomorphic to $\mb Z$. We use this in Section \ref{sec:doublecomplex} to compute the homology of the total complex $(T(G),D)$, showing that it is acyclic as well.
Section \ref{sec:homology} contains the heart of this article. It introduces the spectral sequence of Theorem \ref{thm:1} that allows us to study the homology of $\I G$. We then investigate its most important properties. The section finishes with a discussion of the property mentioned in Theorem \ref{thm:2}, the relation between the nonexistence of maximal independent sets and the vanishing of certain homology groups of independence complexes of subgraphs of $G$. We prove this for the case of paths and cycle graphs. The extension of this statement to other families of graphs (and to higher independence complexes) is left as an open problem.
In Section \ref{sec:eg} we look at some more elaborate examples.
\section{Independent sets and markings}\label{sec:ismark}
Let $G=(V,E)$ be a finite, simple graph.
We start by introducing a model for independent sets $I \subset V$ of $G$. For this we simply label the vertices in $I$, and call such a labeling a \textit{marking} of $G$. The raison d’être is that this point of view allows us to
\begin{enumerate}
\item model the simplicial boundary map of $\I G$ as a map on $G$ that removes labels on vertices,
\item introduce a second kind of label which gives rise to a deformation of the simplicial boundary map of $\I G$.
\end{enumerate}
In what follows everything will depend on the chosen graph $G$, but whenever there is no risk of confusion, this dependence is dropped from notation.
Throughout this paper $H$ and $\tilde H$ denote homology and reduced homology, respectively, with integer coefficients.
\begin{defn}\label{defn:marking}
Let $G$ be a graph. A \textit{marking} of $G$ is a map $m:V \to \{0,1,2\}$ such that $V_m:=m^{-1}(\{1,2\})$ is an independent set in $G$. For $i=1,2$ we refer to the elements of $V_i:=m^{-1}(i)$ as \textit{$i$-marked} and to the elements of $V_0:=m^{-1}(0)$ as \textit{unmarked}.
\end{defn}
\begin{defn}\label{defn:T}
Choose an order on $V$ such that $V=\{v_1,\ldots, v_n\}$ with $v_i<v_j$ if and only if $i<j$.
Let $T_{i,j}=T_{i,j}(G)$ be the free abelian group generated by all markings of $G$ with $i$ marked and $j$ 2-marked vertices,
\begin{equation*}
T_{i,j}:= \mb Z \big\langle m:V \to \{0,1,2\} \mid |V_m|=i, |V_2|=j \big\rangle.
\end{equation*}
Define linear maps $d: T_{i,j}\to T_{i-1,j}$ and $\delta: T_{i,j}\to T_{i,j+1}$ by
\begin{align*}
d m & :=\sum_{v \in V_1} (-1)^{\# \{ w \in V_1 \mid w<v \} } m_{v\mapsto 0},
\\
\delta m & :=\sum_{v \in V_1} (-1)^{\# \{ w \in V_1 \mid w<v \} } m_{v\mapsto 2},
\end{align*}
where
\begin{equation*}
m_{v\mapsto i}(x):=
\begin{cases} m(x) & \text{ if } x\neq v,\\
i & \text{ if } x=v,
\end{cases}
\end{equation*}
changes the marking $m$ by relabeling the vertex $v$ with $i$.
\end{defn}
\begin{eg}\label{eg:firstexample}
Let $G=P_3=
\raisebox{0.023cm}{\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\filldraw[fill=black] (v1) circle (.0666);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=black] (v3) circle (.0666);
\end{tikzpicture}}$
with $V=\{v_1,v_2,v_3\}$ ordered from left to right. Let us denote 1-marked and 2-marked vertices by orange and white filled circles. For instance, the marking
\begin{equation*}
m: v_1 \longmapsto 1, v_2 \longmapsto 0,v_3 \longmapsto 1
\end{equation*}
is represented by
\begin{equation*}
m=
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=orange] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}.
\end{equation*}
Computing the differentials gives
\begin{align*}
d \
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=orange] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}
&=
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\filldraw[fill=black] (v1) circle (.0666);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}
-
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\filldraw[fill=orange] (v1) circle (.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=black] (v3) circle (.0666);
\end{tikzpicture}
, \\
\delta \
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=orange] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}
& =
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\filldraw[fill=white] (v1) circle (.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}
-
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\filldraw[fill=orange] (v1) circle (.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=white] (v3) circle (.1);
\end{tikzpicture},
\\
\delta d \
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=orange] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}
& =
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\filldraw[fill=black] (v1) circle (.0666);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=white] (v3) circle (.1);
\end{tikzpicture}
-
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\filldraw[fill=white] (v1) circle (.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=black] (v3) circle (.0666);
\end{tikzpicture}
= - d \delta \
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=orange] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}.
\end{align*}
If on the other hand $m=
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\filldraw[fill=black] (v1) circle (.0666);
\filldraw[fill=white] (v2) circle (.1) ;
\filldraw[fill=black] (v3) circle (.0666);
\end{tikzpicture}$, then $d m = \delta m = 0$.
\end{eg}
Our goal is to define a deformation of $d$ using the map $\delta$, that is, we want $d+\delta$ to be a differential as well.
\begin{prop}\label{prop:differential}
$d^2=\delta^2=0$ and $d\delta + \delta d=0$.
\end{prop}
\begin{proof}
The first statement follows by a standard computation,
\begin{align*}
d dm & = d \sum_{v \in V_1 }(-1)^{\# \{ w \in V_1 \mid w<v \} } m_{v\mapsto 0} \\
&= \sum_{v \in V_1 }(-1)^{\# \{ w \in V_1 \mid w<v \} }\sum_{v' \in V_1 \! \setminus \! \{v\} }(-1)^{ \# \{ w' \in V_1 \! \setminus \! \{v\} \mid w' < v' \}} m_{v,v' \mapsto 0} \\
&= \sum_{v \in V_1 }\sum_{v' \in V_1 \! \setminus \! \{v\}} (-1)^{ \# \{ w \in V_1 \mid w < v\} + \# \{ w' \in V_1 \! \setminus \! \{v\} \mid w' < v'\}} m_{v,v' \mapsto 0} \\
& = \sum_{v,v' \in V_1,v'<v }(-1)^{\# \{ u \in V_1 \mid v'< u < v\}+1 }m_{v,v' \mapsto 0} \\
& \quad + \sum_{v,v' \in V_1, v'>v } (-1)^{\# \{ u \in V_1 \mid v< u < v' \}} m_{v,v' \mapsto 0}= 0,
\end{align*}
and similarly for $\delta$.
The same argument shows that $d$ and $\delta$ anticommute,
\begin{align*}
d \delta m & = d \sum_{v \in V_1 }(-1)^{\# \{ w \in V_1 \mid w < v\} } m_{v \mapsto 2} \\
&= \sum_{v \in V_1} \sum_{ v' \in V_1\! \setminus \! \{v\} }(-1)^{ \# \{ w \in V_1 \mid w < v\}+ \# \{ w' \in V_1 \! \setminus \! \{v\} \mid w' < v'\}} m_{v \mapsto 2,v' \mapsto 0} \\
&=\sum_{v,v' \in V_1, v'<v} (-1)^{ \# \{ u \in V_1 \mid v' < u < v \} +1} m_{v\mapsto 2,v' \mapsto 0} \\
& \quad + \sum_{v,v' \in V_1,v'>v }(-1)^{ \# \{ u \in V_1 \mid v < u < v' \} } m_{v \mapsto 2,v' \mapsto 0} \\
&= (-1) \Big( \sum_{v,v' \in V_1, v'<v} (-1)^{ \# \{ u \in V_1 \mid v' < u < v \} } m_{v\mapsto 2,v' \mapsto 0} \\
& \quad + \sum_{v,v' \in V_1,v'>v }(-1)^{ \# \{ u \in V_1 \mid v < u < v' \} +1} m_{v \mapsto 2,v' \mapsto 0} \Big) \\
&= (-1) \Big( \sum_{v,v' \in V_1,v < v' }(-1)^{ \# \{ u \in V_1 \mid v < u < v' \} +1} m_{v\mapsto 2,v' \mapsto 0} \\
& \quad + \sum_{v \in V_1, v > v'} (-1)^{ \# \{ u \in V_1 \mid v' < u < v \} } m_{v \mapsto 2,v' \mapsto 0} \Big) \\
& = -\delta d m.
\end{align*}
\end{proof}
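The identities of Proposition \ref{prop:differential} are also easy to verify mechanically. The following Python sketch (our own illustrative encoding, not part of the construction: markings become tuples over $\{0,1,2\}$ and chains become dictionaries mapping markings to integer coefficients) implements $d$ and $\delta$ from Definition \ref{defn:T} and lets one check the relations on concrete markings such as the one from Example \ref{eg:firstexample}.

```python
# A marking of a graph on vertices 0..n-1 is a tuple m with m[v] in {0,1,2};
# chains are dicts {marking: integer coefficient}.  This encoding is a
# hypothetical illustration, not notation from the text.

def boundary(chain, target):
    """Sum over 1-marked vertices v, relabeling v to `target` (0 for d,
    2 for delta) with sign (-1)^{# 1-marked w < v}."""
    out = {}
    for m, c in chain.items():
        sign = 1
        for v, label in enumerate(m):
            if label == 1:
                key = m[:v] + (target,) + m[v + 1:]
                out[key] = out.get(key, 0) + sign * c
                sign = -sign
    return {k: c for k, c in out.items() if c != 0}

def d(chain):
    return boundary(chain, 0)

def delta(chain):
    return boundary(chain, 2)

# The marking m: v1 -> 1, v2 -> 0, v3 -> 1 of P_3 from the running example.
m = {(1, 0, 1): 1}
```

Applied to the chain `m` above, both `d(d(m))` and `delta(delta(m))` come out empty, and `d(delta(m))` equals `delta(d(m))` with all coefficients negated, as the proposition asserts.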
Note that the complex $(T_{\bullet,0},d)$ models the simplicial chain complex of $\mathrm{Ind}(G)$. More precisely, for any choice of order on $V$ and orientation of $\mathrm{Ind}(G)$ there exists a unique isomorphism of chain complexes
\begin{equation}\label{eq:zerocomplex}
(T_{\bullet,0},d) \cong (C_{\bullet-1}(\mathrm{Ind}(G)), \partial)
\end{equation}
where $(C_\bullet(\mathrm{Ind}(G)), \partial)$ denotes the augmented simplicial chain complex of $\mathrm{Ind}(G)$. On the level of chain groups this isomorphism is given by simply mapping every independent set $I$ in $G$ to the marking $m_I$ that marks the vertices in $I$ by 1 and every other vertex by 0. Since an orientation of $\mathrm{Ind}(G)$ is the same as a linear order on its vertex set, which is equal to $V$, this correspondence defines a chain map. Thus, $H_n(T_{\bullet,0},d)$ is isomorphic to the reduced simplicial homology $\tilde H_{n-1}(\mathrm{Ind}(G))$ of the independence complex of $G$.
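This identification also makes the homology of $\I G$ directly computable for small graphs. The following Python sketch (illustrative code of our own, working over $\mb Q$, so it computes Betti numbers rather than integral homology; edges are encoded as sorted vertex pairs) enumerates the independent sets of a graph, builds the boundary matrices of the augmented simplicial chain complex, and returns the reduced Betti numbers of $\I G$.

```python
from itertools import combinations
from fractions import Fraction

def independent_sets(n, edges):
    """Independent sets of the graph on vertices 0..n-1, grouped by size.
    Size 0 is the empty set, giving the augmented complex."""
    by_size = {0: [()]}
    for k in range(1, n + 1):
        sets = [s for s in combinations(range(n), k)
                if all((u, v) not in edges for u, v in combinations(s, 2))]
        if sets:
            by_size[k] = sets
    return by_size

def rank(mat):
    """Rank over Q by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col]:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def reduced_betti(n, edges):
    """Reduced Betti numbers of Ind(G) over Q, indexed by simplex dimension."""
    by_size = independent_sets(n, edges)
    ranks = {}
    for k in sorted(by_size):
        if k - 1 not in by_size:
            continue
        idx = {s: i for i, s in enumerate(by_size[k - 1])}
        mat = []
        for s in by_size[k]:
            row = [0] * len(by_size[k - 1])
            for j in range(k):   # delete the j-th vertex, with sign (-1)^j
                row[idx[s[:j] + s[j + 1:]]] = (-1) ** j
            mat.append(row)
        ranks[k] = rank(mat)
    return {k - 1: len(by_size[k]) - ranks.get(k, 0) - ranks.get(k + 1, 0)
            for k in by_size}
```

For $P_3$ (where $\I{P_3}$ consists of the edge $\{v_1,v_3\}$ and the isolated vertex $v_2$, hence two components) this yields $\tilde b_0 = 1$ and $\tilde b_1 = 0$; for $K_3$ it yields $\tilde b_0 = 2$, in line with $\tilde H_0(\I{K_n}) \cong \mb Z^{n-1}$.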
\newline
What about the complexes with 2-marked vertices, i.e.\ the case $j\neq 0$? In this case we can relate the complex $(T_{\bullet,j},d)$ to independence complexes of graphs obtained from $G$ by removing $j$ vertices together with their neighborhoods.
\begin{defn}
The \textit{neighborhood} of a vertex $v$ in a graph $G$ is $N(v):= \{ w \in V \mid \{v,w\} \in E \}$, the set of vertices of $G$ adjacent to $v$. The \textit{closed neighborhood} of $v$ is $N[v]:= \{v\} \cup N(v)$. Likewise, for a subset $U \subset V$ we define
\begin{equation*}
N(U):= \bigcup_{v \in U} N(v) \ \text{ and } \ N[U]:= \bigcup_{v \in U} N[v].
\end{equation*}
For any subset $W\subset V$ we write $G - W$ for the graph obtained from $G$ by deleting all elements in $W$, i.e.\ for the graph $G- W:=(V',E')$ with
\begin{equation*}
V'=V \! \setminus \! W, \quad E'= \{ e=\{x,y\} \in E \mid x,y \in V' \},
\end{equation*}
also known as the \textit{induced subgraph} $G[V-W]$.
\end{defn}
\begin{prop}\label{prop:dcomplex}
For each $j\geq 0$ there is an isomorphism of chain complexes,
\begin{equation}\label{eq:dcomplex}
(T_{\bullet, j}(G),d) \cong \bigoplus_{ \substack{ U\subset V \text{ independent} \\ |U|=j } } ( C_{\bullet-1}(G \! - \! N[U]),\partial ),
\end{equation}
where $(C_{\bullet-1}(G\! - \! N[U]),\partial)$ denotes the augmented (and degree-shifted) simplicial chain complex of $\I {G\! - \! N[U]}$, with the convention
\begin{equation*}
(C_{\bullet}(\emptyset),\partial) := 0 \overset{\partial_0}{\longrightarrow} \mb Z \overset{\partial_{-1}}{\longrightarrow} 0.
\end{equation*}
\end{prop}
\begin{proof}
The case $j=0$ has been discussed above, so let $j>0$. By definition every set of 2-marked vertices forms an independent set in $G$. Conversely, every independent set of size $j$ can be modeled by an appropriate 2-marking. Since $d$ acts only on 1-marked vertices, the complex $(T_{\bullet,j},d)$ splits into a direct sum of complexes with one summand for each independent/2-marked set of size $j$.
Given such an independent set $U$, the remaining vertices that can be marked are precisely the non-neighbors of vertices in $U$, i.e.\ the vertices of the graph $G\! - \! N[U]$. If $G\! - \! N[U]$ is not empty, then \eqref{eq:dcomplex} follows from the interpretation \eqref{eq:zerocomplex}.
In the special case that $U$ is a maximal independent set there is no way to place any 1-markings on the remaining vertices, so $G\! - \! N[U]=\emptyset$. The corresponding chain complex has only one nontrivial chain group in degree zero, generated by a single element, the marking that marks every vertex in $U$ by 2.
\end{proof}
The previous proposition may be rephrased in terms of chain complexes of markings $\left(T(\cdot),d\right)$. It states that
\begin{equation} \label{eq:dcomplex2}
(T_{\bullet, j}(G),d) \cong \bigoplus_{ \substack{ U\subset V \text{ independent} \\ |U|=j } } ( T_{\bullet-j,0}(G\! - \! N[U]),d ).
\end{equation}
This leads to two important observations.
Firstly, even in the presence of 2-marked vertices the complexes $(T_{\bullet,j},d)$ can be interpreted as chain complexes of independence complexes of graphs (more precisely, of subgraphs of $G$).
Secondly, as the number $j$ of 2-markings increases, the complexes $(T_{\bullet, j},d)$ split into simpler and simpler building blocks.
We will see that these two observations -- together with the simplicity of the second differential $\delta$ -- allow us to set up a spectral sequence to study the homology of $(T_{\bullet, 0},d)$, and thus $H_\bullet(\mathrm{Ind}(G))$.
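As a small worked instance of the splitting \eqref{eq:dcomplex2} (added here for illustration), take $G=P_3$ with vertices $v_1,v_2,v_3$ as in Example \ref{eg:firstexample} and $j=1$:

```latex
\begin{equation*}
(T_{\bullet,1}(P_3),d) \cong
( T_{\bullet-1,0}(P_3\! - \! N[v_1]),d ) \oplus
( T_{\bullet-1,0}(P_3\! - \! N[v_2]),d ) \oplus
( T_{\bullet-1,0}(P_3\! - \! N[v_3]),d ).
\end{equation*}
```

Here $P_3\! - \! N[v_1]$ is the single vertex $v_3$, $P_3\! - \! N[v_2]=\emptyset$ since $\{v_2\}$ is a maximal independent set, and $P_3\! - \! N[v_3]$ is the single vertex $v_1$.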
\begin{rem}\label{rem:higher}
The whole construction outlined in this paper works also in the case of higher independence complexes (as well as for more general markings where a given set of subgraphs is allowed to be marked, cf.\ \cite{mb-ak}).
One simply requires in Definition \ref{defn:marking} the set $V_m$ of marked elements to be $r$-independent. Then everything works exactly as presented here for the case $r=1$, except for one crucial difference. The splitting of $(T_{\bullet, j},d)$ in \eqref{eq:dcomplex} or \eqref{eq:dcomplex2} becomes more complicated: the direct sum now runs over $r$-independent sets in $G$, and the appropriate replacements of the graphs $G\! - \! N[U]$ are not necessarily subgraphs of $G$ anymore due to the varying cardinality of $r$-independent sets. Only for $r=2$ can a similar-looking formula be recovered, in which the summands in \eqref{eq:dcomplex2} have to be replaced by 1-independence complexes of appropriately associated graphs. See Examples \ref{eg:2ind} and \ref{eg:2ind2}.
\end{rem}
\section{The second differential $\delta$}\label{sec:delta}
While $d$ models the boundary map of independence complexes, the map $\delta: T_{i,j} \to T_{i,j+1}$ acts by relabeling already marked vertices. Thus, it is effectively independent of the topology of $G$.
However, this differential may also be interpreted as the (co-)boundary map of a simplicial complex, albeit of a very simple one, the standard simplex $\Delta^{i-1}$ on $i$ vertices.
\begin{rem}
Note that $\delta$ has bidegree $(0,1)$, going in the ``wrong'' direction. Nevertheless, we will use homological terminology for both maps, $d$ and $\delta$. This avoids awkwardly changing between homology and cohomology. For the purists this choice of convention may be justified by flipping the sign in the second part of the grading of $T_{i,j}$, i.e.\ by defining
\begin{equation*}
T_{i,j}:= \mb Z \big\langle m:V \to \{0,1,2\} \mid |V_m|=i, |V_2|=-j \big\rangle.
\end{equation*}
\end{rem}
\begin{prop}\label{prop:deltahomtrivial}
All homology groups $H_n( T_{i,\bullet} , \delta )$ are trivial unless $i=0$ and $n=0$. In this case $H_0( T_{0,\bullet} , \delta )\cong \mb Z$, generated by the trivial marking
\begin{equation}
m_0:V \longrightarrow \{0,1,2\}, \ m_0(v) = 0 \text{ for all } v\in V.
\end{equation}
\end{prop}
The proof of this proposition is not needed in the sequel and may be omitted on a first read; the following example should give an intuitive idea why the statement holds.
\begin{eg}
Let $G=P_3$ as in Example \ref{eg:firstexample}. For $i=2$ we have
\begin{align*}
\delta \
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=orange] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}
& =
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=white] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}
-
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=orange] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=white] (v3) circle (.1);
\end{tikzpicture}
, \\
\delta \
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=white] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (.1);
\end{tikzpicture}
& =
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=white] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=white] (v3) circle (.1);
\end{tikzpicture}
=
\delta \
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=orange] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=white] (v3) circle (.1);
\end{tikzpicture}
, \\
\delta \
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0);
\coordinate (v3) at (1,0);
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\filldraw[fill=white] (v1) circle (0.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=white] (v3) circle (.1);
\end{tikzpicture}
& = 0.
\end{align*}
\end{eg}
A formal proof of Proposition \ref{prop:deltahomtrivial} relies on the following two lemmata.
\begin{lem}
For $k\geq 1$ let $(C_\bullet(k),\partial)$ denote the augmented and degree shifted simplicial chain complex of the standard simplex on $k$ vertices, $(C_\bullet(k),\partial):=(C_{\bullet-1}(\Delta^{k-1}),\partial)$.
Then for $i>0$
\begin{equation*}
(T_{i,\bullet},\delta) \cong \bigoplus_{ \substack{U \subset V \text{ independent} \\ |U|=i} } (C_\bullet(i),\partial).
\end{equation*}
\end{lem}
\begin{proof}
Fix $i>0$. The map $\delta$ neither creates any new marked vertices nor does it see adjacency relations in $G$; it simply operates on the set of marked vertices, regardless of their distribution in $G$. Therefore, the complex $(T_{i,\bullet},\delta)$ splits into a direct sum
\begin{equation*}
(T_{i,\bullet},\delta) \cong \bigoplus_{ \substack{U \subset V \text{ independent} \\ |U|=i} } (T^{f}(U)_\bullet,\delta),
\end{equation*}
where
\begin{equation*}
T^{f}(U):=\mb Z \langle m:U\to \{1,2\} \rangle
\end{equation*}
denotes the free abelian group generated by all ``full'' markings of the edgeless graph on the $i=|U|$ vertices of $U$, graded by the number of 1-marked elements.
Identifying the vertices in $U$ with the vertices of $\Delta^{i-1}$, there is a unique orientation preserving bijection between the elements $m \in T^{f}(U)$ and the simplices in $\Delta^{i-1}$, sending $m$ to the (oriented) simplex $m^{-1}(1) \subset U$. This is clearly a chain map (after shifting the degree by one), so that
\begin{equation*}
(T^{f}(U)_\bullet,\delta) \cong (C_{\bullet-1}(\Delta^{i-1}),\partial)
\end{equation*}
and the claim follows.
\end{proof}
\begin{lem}
For each $i>0$ and all $n\in \mb N$ the groups $H_n \big( C_\bullet(i),\partial \big)$ are trivial.
\end{lem}
\begin{proof}
A simplex is contractible, so its reduced homology vanishes.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:deltahomtrivial}]
Combining the two lemmata shows the first assertion of the proposition; the second follows by direct computation: clearly, $m_0 \in \ker \delta$, and since $\delta$ keeps the number of marked elements constant, $m_0$ cannot be an element of $\mathrm{im} \ \! \delta$.
\end{proof}
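For small graphs Proposition \ref{prop:deltahomtrivial} can also be confirmed by brute force. The following Python sketch (our own illustrative tuple encoding of markings; homology is computed over $\mb Q$, so only ranks are detected) assembles the complexes $(T_{i,\bullet},\delta)$ for $G=P_3$ and computes the ranks of their homology groups.

```python
from itertools import product, combinations
from fractions import Fraction

EDGES = {(0, 1), (1, 2)}   # the path v1 - v2 - v3; edges as sorted pairs
N = 3

def markings():
    """All markings: a tuple m with m[v] in {0,1,2}, marked set independent."""
    for m in product((0, 1, 2), repeat=N):
        marked = [v for v in range(N) if m[v]]
        if all(tuple(sorted(p)) not in EDGES
               for p in combinations(marked, 2)):
            yield m

def delta(m):
    """delta relabels one 1-marked vertex to 2, with the usual sign."""
    out, sign = {}, 1
    for v in range(N):
        if m[v] == 1:
            out[m[:v] + (2,) + m[v + 1:]] = sign
            sign = -sign
    return out

def rank(mat):
    """Rank over Q by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col]:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def delta_homology(i):
    """Ranks of H_j(T_{i,.}, delta) for fixed i, indexed by j = #2-marks."""
    by_j = {}
    for m in markings():
        if sum(1 for x in m if x) == i:
            by_j.setdefault(m.count(2), []).append(m)
    ranks = {}
    for j in by_j:
        if j + 1 not in by_j:
            continue
        idx = {m: k for k, m in enumerate(by_j[j + 1])}
        mat = [[0] * len(by_j[j + 1]) for _ in by_j[j]]
        for r, m in enumerate(by_j[j]):
            for key, c in delta(m).items():
                mat[r][idx[key]] = c
        ranks[j] = rank(mat)
    return {j: len(by_j[j]) - ranks.get(j, 0) - ranks.get(j - 1, 0)
            for j in by_j}
```

All ranks vanish except for $i=0$, $j=0$, where the trivial marking survives, exactly as the proposition predicts.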
Our next task is to study the differential $d+\delta$, viewed as a deformation of $d$, and to use it to extract information about the differential $d$, especially when restricted to the subcomplex of markings with no 2-marked vertices.
\section{The total complex and its homology}\label{sec:doublecomplex}
To study the homology of $(T_{\bullet, 0},d)$ we first consider the total complex associated to $T_{i,j}$ with $d$ as horizontal and $\delta$ as vertical differential.
For this let
\begin{equation*}
T:= \bigoplus_n T_n, \quad T_n:= \bigoplus_{i-j=n} T_{i,j}.
\end{equation*}
Note that the total grading is given by the number of 1-marked vertices. Define a differential on $T$ by
\begin{equation*}
D_n: T_n \longrightarrow T_{n-1}, \ D_n:=d+\delta .
\end{equation*}
Proposition \ref{prop:differential} implies that $(T,D)$ is a chain complex. Its homology is given by
\begin{thm}\label{thm:acyclic}
The complex $(T,D)$ is acyclic,
\begin{equation*}
H_n(T,D)\cong\begin{cases}
\mb Z & n=0, \\
0 & n \neq 0.
\end{cases}
\end{equation*}
\end{thm}
\begin{proof}
Consider an ascending exhaustive filtration
\begin{equation*}
0=F_{-1}T \subset \ldots \subset F_{p-1}T \subset F_pT \subset \ldots \subset T,
\end{equation*}
defined by
\begin{equation}\label{eq:filtration}
F_pT_n:= \bigoplus_{ \substack{ i-j=n, \\ i\leq p}} T_{i,j}.
\end{equation}
It induces an associated spectral sequence which starts with
\begin{align*}
& E^0_{p,q}:= F_p T_{p-q} / F_{p-1} T_{p-q} = \bigoplus_{ \substack{ i-j=p-q, \\ i = p}} T_{i,j} = T_{p,q} ,
\\
& d^0_{p,q} : E^0_{p,q} \longrightarrow E^0_{p,q+1} = \delta : T_{p,q} \longrightarrow T_{p,q+1}.
\end{align*}
On its first page we have $E^1_{p,q}=H_q(T_{p,\bullet},\delta)$ which by Proposition \ref{prop:deltahomtrivial} vanishes for $(p,q)\neq (0,0)$, while for $p=q=0$ we have $H_0(T_{0,\bullet},\delta) \cong \mb Z$.
It follows that all differentials on the next page
\begin{equation*}
d^1_{p,q}: E^1_{p,q} \longrightarrow E^1_{p-1,q}
\end{equation*}
are trivial and the sequence collapses with $E^\infty=E^1$.
By standard spectral sequence arguments\footnote{We follow the conventions in \cite{gm-homalg}. For nice introductions see \cite{chow-specseq} as well as \cite{ramos-specseq}, and \cite{ug-specseq} for a concise treatment of the subject.} we can reconstruct from $E^\infty$ the (associated graded pieces of the) homology of $(T,D)$. In the present case this is simple,
\begin{equation*}
H_0(T,D)\cong E^1_{0,0} \cong \mb Z, \quad H_n(T,D) \cong 0 \text{ for all }n>0.
\end{equation*}
\end{proof}
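Theorem \ref{thm:acyclic} can likewise be checked by direct computation for small graphs. The following Python sketch (again an illustrative brute-force computation over $\mb Q$ with an ad hoc tuple encoding of markings, so it detects ranks rather than integral homology) builds the total complex of $G=P_3$, graded by the number of 1-marked vertices, and computes the ranks of $H_n(T,D)$.

```python
from itertools import product, combinations
from fractions import Fraction

EDGES = {(0, 1), (1, 2)}   # P_3; edges stored as sorted pairs
N = 3

def markings():
    """All markings of the graph: marked sets must be independent."""
    for m in product((0, 1, 2), repeat=N):
        marked = [v for v in range(N) if m[v]]
        if all(tuple(sorted(p)) not in EDGES
               for p in combinations(marked, 2)):
            yield m

def D(m):
    """D = d + delta on a single marking: each 1-marked vertex is relabeled
    to 0 (for d) and to 2 (for delta), with the same sign."""
    out, sign = {}, 1
    for v in range(N):
        if m[v] == 1:
            for i in (0, 2):
                key = m[:v] + (i,) + m[v + 1:]
                out[key] = out.get(key, 0) + sign
            sign = -sign
    return out

def rank(mat):
    """Rank over Q by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col]:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def total_homology_ranks():
    """Ranks of H_n(T, D), where n counts 1-marked vertices."""
    basis = {}
    for m in markings():
        basis.setdefault(sum(1 for x in m if x == 1), []).append(m)
    ranks = {}
    for n in basis:
        if n - 1 not in basis:
            continue
        idx = {m: i for i, m in enumerate(basis[n - 1])}
        mat = [[0] * len(basis[n - 1]) for _ in basis[n]]
        for r, m in enumerate(basis[n]):
            for key, c in D(m).items():
                mat[r][idx[key]] = c
        ranks[n] = rank(mat)
    return {n: len(basis[n]) - ranks.get(n, 0) - ranks.get(n + 1, 0)
            for n in basis}
```

The result is rank $1$ in total degree $0$ and rank $0$ elsewhere, as the theorem predicts.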
In the next section we will study a spectral sequence associated to the other canonical filtration of $T$, obtained by filtering in the horizontal direction. This spectral sequence converges to the same limit. Furthermore, its first page will be populated by the homology groups of $\mathrm{Ind}(G)$ and of the independence complexes of the graphs $G\! - \! N[U]$ for $U\subset V$ independent.
\section{The homology of $\mathrm{Ind}(G)$}\label{sec:homology}
We now turn our attention to the homology groups $H_n(T_{\bullet,0},d)\cong \tilde{H}_{n-1}(\mathrm{Ind}(G))$. The proof of Theorem \ref{thm:acyclic} implies the following
\begin{cor}\label{cor:specseq}
There is a spectral sequence converging to $H_n(T,D)$ with its first page containing a copy of the (reduced) homology of the independence complex of $G$.
\end{cor}
\begin{proof}
Consider the spectral sequence associated to a filtration of $T$, ``orthogonal'' to the one in \eqref{eq:filtration},
\begin{equation*}
T=F_{0}T \supset \ldots \supset F_pT \supset F_{p+1}T \supset \ldots \supset 0
\end{equation*}
with
\begin{equation*}
F_pT_n := \bigoplus_{ \substack{ i-j=n \\ j\geq p } } T_{i,j}.
\end{equation*}
Since there are only finitely many nonvanishing $T_{i,j}$, the associated spectral sequence converges to the same limit as the one in the proof of Theorem \ref{thm:acyclic} (this is a standard fact; see, for instance, Proposition 3.5.1 in \cite{gm-homalg}). Its starting page $E^0$ is given by
\begin{align*}
& E^0_{p,q} = F_pT_{p-q} / F_{p+1}T_{p-q}= \bigoplus_{ \substack{ i-j=p-q, \\ j = p} } T_{i,j} = T_{2p-q,p}, \\
& d^0_{p,q}: E^0_{p,q} \longrightarrow E^0_{p,q+1}=d: T_{2p-q,p} \longrightarrow T_{2p-q-1,p}.
\end{align*}
Here it is important to note that these unusual index shifts arise because $d$ is of bidegree $(-1,0)$ while $\delta$ is of bidegree $(0,1)$. If we reshuffle $p$ and $q$ (or tilt and stretch our sheet of paper), then we may return to the conventional picture with
\begin{equation*}
E^0_{p,q} = T_{p,q} \ \text{ and } \ d^0_{p,q}: T_{p,q} \to T_{p-1,q}.
\end{equation*}
With these conventions we find on the next page
\begin{equation*}
E^1_{p,q}= H_p(T_{\bullet,q},d),
\end{equation*}
and the maps $d^1_{p,q}:H_p(T_{\bullet,q},d)\to H_p(T_{\bullet,q+1},d)$ are induced by $\delta$,
\begin{equation*}
d^1[x] = [\delta x].
\end{equation*}
Proposition \ref{prop:dcomplex} and Equation \eqref{eq:dcomplex2} identify the $q=0$ row with $\tilde{H}_{\bullet}(\mathrm{Ind}(G))$. The other rows are populated by direct sums of the homology groups of $\I {G\! - \! N[U]}$ for $U \subset V$ independent with $|U|=q$.
\end{proof}
The usefulness of this corollary lies in the simplicity of the spectral sequence's limit $E^\infty$. If we know the homology of the complexes $(T_{\bullet,j},d)$ for $j>0$, we can deduce information about $H_\bullet(\mathrm{Ind}(G))$ by studying the $E^1$ (or $E^2$) page of this spectral sequence. Since we know that it eventually collapses, every entry except for a single copy of $\mb Z$ must disappear at some stage.
In more dramatic words, as the spectral sequence progresses, all but one of the entries of $E^1$ are eventually paired up, both partners doomed to kill each other.
There will be only one lucky survivor, a representative from the group of maximum independent sets of $G$.
\begin{eg}
Let us look at a very simple example, $G=K_n$, the complete graph on $n$ vertices (more sophisticated examples can be found in Section \ref{sec:eg}). The $E^1$ page of the spectral sequence from Corollary \ref{cor:specseq} has two nontrivial rows, $E^1_{p,0}=H_p(T_{\bullet,0},d)$ containing the homology of $\I {K_n}$ and
\begin{equation*}
E^1_{p,1}=H_p(T_{\bullet,1},d) \overset{ \eqref{eq:dcomplex2}}{ \cong } \bigoplus_{ \substack{ U\subset V \text{ independent} \\ |U|=1 } } H_{p-1}( T_{\bullet,0}(G\! - \! N[U]),d ).
\end{equation*}
Here $G-N[U]= \emptyset$ because each vertex of $K_n$ by itself already forms a maximal (and maximum) independent set. By Proposition \ref{prop:dcomplex} the row $E^1_{p,1}$ is thus given by $E^1_{1,1} \cong \mb Z^n$ and $E^1_{p,1} \cong 0$ for $p\neq 1$:
\begin{equation*}
\begin{tikzpicture}
\draw (0,0) node {0} (1,0) node {$E^1_{1,0}$} (1,1) node {$\mb Z^n$} (2,0) node {0} (2,1) node {0} (2,2) node {0} ;
\draw[->] (1,0.3) -- (1,.8) node[pos=0.5,right] {$d^1$};
\draw[->] (2,0.3) -- (2,.8);
\draw[->] (2,1.3) -- (2,1.8);
\end{tikzpicture}
\end{equation*}
The only way for this spectral sequence to exhibit the expected convergence is to have the differential $d^1_{1,0}: E^1_{1,0} \to E^1_{1,1}$ satisfy
\begin{equation*}
\mathrm{im} \ \! d^1_{1,0} \cong \mb Z^{n-1} \text{ and } \ker d^1_{1,0} \cong 0.
\end{equation*}
This implies $E^1_{1,0}\cong \tilde H_0(\I {K_n})\cong \mb Z^{n-1}$ and $E^1_{p,0}\cong \tilde H_{p-1}(\I {K_n})\cong 0$ for $p\neq 1$.
\end{eg}
In the general case there is a similar behavior with respect to maximum (and maximal) independent sets.
\begin{prop}\label{thm:properties}
Let $\alpha(G)$ denote the \textit{independence number} of $G$, i.e.\ the cardinality of a maximum independent set in $G$. Then the spectral sequence from Corollary \ref{cor:specseq} has the following properties:
\begin{enumerate}
\item $E^\infty_{p,q} \cong \begin{cases}
\mb Z & p=q= \alpha(G) \\
0 & \text{ else}
\end{cases}$
\item $E^1_{p,p}\cong \mb Z^{n_p}$ where $n_p$ is the number of maximal independent sets of size $p$ in $G$
\item all diagonal entries with $p<\alpha(G)$ vanish already on the $E^2$ page,
\begin{equation*}
E^2_{p,p}=H_p(E^1_{p,\bullet},d^1) \cong H_p(H_p(T_{\bullet,\bullet},d),\delta)\cong0
\end{equation*}
\end{enumerate}
\end{prop}
\begin{proof}
Assertion (1) holds by construction because the spectral sequence converges to the associated graded piece of the homology of $(T,D)$. It also follows from (3) together with Theorem \ref{thm:acyclic}, which ensures that the surviving entry must lie on the diagonal; the diagonal entries represent total degree 0, in which the only nontrivial homology of $(T,D)$ is concentrated.
To prove (2) we consider the diagonal entries $E^1_{p,p}=H_p(T_{\bullet,p},d)$. Proposition \ref{prop:dcomplex} shows that this group is nonzero if and only if $G$ has a maximal independent set of cardinality $p$ (because every nonempty graph has $H_0(T_{\bullet,0},d) \cong 0$). Since $d$ does not act on 2-marked vertices, the complex $(T_{\bullet,p},d)$ splits into a direct sum of complexes, one for each 2-marking of size $p$. Thus, we find one generator for each maximal independent set of cardinality $p$.
In particular, if $G$ does not have such maximal independent sets, then $E^1_{p,p}\cong 0$.
For (3) let now $p<\alpha(G)$. The map $d^1$ on $E^1$ is given by the restriction of $\delta$ to the homology classes of $(T,d)$. Our goal is therefore to find for each generator of $H_p(T_{\bullet,p},d)$ a homology class in $H_p(T_{\bullet,p-1},d)$ that gets mapped to it by $\delta$. Note that, if it were not for the restriction to homology classes of $d$, this task would be trivial as the homology with respect to $\delta$ vanishes (Proposition \ref{prop:deltahomtrivial}).
By (2) the generators of $H_p(T_{\bullet,p},d)$ are represented by maximal independent sets $I$ with $|I|=p$. For each such $I$ there exists a vertex $z\in I$ such that the graph $G\! - \! N[I\! \setminus \! \{z\}]$ has at least two vertices that can be simultaneously marked -- otherwise $I$ would be not only maximal, but also maximum independent.
Choose two such vertices, denote them by $x$ and $y$, and consider the markings
\begin{equation*}
m_0: v \mapsto \begin{cases}
2 & \text{ if } v \in I, \\ 0 & \text{ else,}
\end{cases}
\quad
m_z: v \mapsto \begin{cases}
1 & \text{ if }v=z, \\ 2 & \text{ if } v \in I\! \setminus \!\{ z\}, \\ 0 & \text{ else,}
\end{cases}
\end{equation*}
\begin{equation*}
m_x: v \mapsto \begin{cases}
1 & \text{ if }v=x, \\ 2 & \text{ if } v \in I\! \setminus \! \{x,z\}, \\ 0 & \text{ else,}
\end{cases}
\quad
m_{x,y}: v \mapsto \begin{cases}
1 & \text{ if }v =y, \\ 2 & \text{ if } v \in (I\! \setminus \! \{z\}) \cup \{x\}, \\ 0 & \text{ else.}
\end{cases}
\end{equation*}
See Figure \ref{fig:zeroclass} for an example.
We have $d(m_z - m_x)=0$ and, since $\{v,z\} \in E$ holds for every vertex $v\neq z$ of $G\! - \! N[I\! \setminus \! \{z\}]$, the element $m_z - m_x$ cannot lie in the image of $d$. Hence,
\begin{equation*}
0 \neq [m_z - m_x] \in H_p(T_{\bullet,p-1},d).
\end{equation*}
Moreover, since $\delta m_x= d m_{x,y}$, we have found a $\delta$-preimage of $[m_0]$,
\begin{equation*}
[\delta (m_z-m_x)] = [m_0- \delta m_x] = [m_0] \in H_p(T_{\bullet,p},d).
\end{equation*}
\end{proof}
\begin{figure}
\begin{tikzpicture}[scale=0.5]
\coordinate (v1) at (-2,-2);
\coordinate (v2) at (2,-2);
\coordinate (v3) at (2,2);
\coordinate (v4) at (-2,2) ;
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\draw (v3) -- (v4);
\draw (v4) -- (v1);
\coordinate (w1) at (-1,-1);
\coordinate (w2) at (1,-1);
\coordinate (w3) at (1,1);
\coordinate (w4) at (-1,1);
\draw (w1) -- (w2);
\draw (w2) -- (w3);
\draw (w3) -- (w4);
\draw (w4) -- (w1);
\draw (v1) -- (w1);
\draw (v2) -- (w2);
\draw (v3) -- (w3);
\draw (v4) -- (w4);
\filldraw[fill=black] (v1)node[above left,cyan]{$y$} node[xshift=-.5cm,yshift=.9cm]{$m_0=$}circle (.0666) ;
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=black] (v3)node[right,cyan]{$x$} circle (.0666) ;
\filldraw[fill=white] (v4)node[left,cyan]{$z$} circle (0.13);
\filldraw[fill=black] (w1) circle (.0666) ;
\filldraw[fill=white] (w2) circle (.13);
\filldraw[fill=black] (w3) circle (.0666) ;
\filldraw[fill=black] (w4) circle (.0666) ;
\end{tikzpicture}
\begin{tikzpicture}[scale=0.5]
\coordinate (v1) at (-2,-2);
\coordinate (v2) at (2,-2);
\coordinate (v3) at (2,2);
\coordinate (v4) at (-2,2) ;
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\draw (v3) -- (v4);
\draw (v4) -- (v1);
\coordinate (w1) at (-1,-1);
\coordinate (w2) at (1,-1);
\coordinate (w3) at (1,1);
\coordinate (w4) at (-1,1);
\draw (w1) -- (w2);
\draw (w2) -- (w3);
\draw (w3) -- (w4);
\draw (w4) -- (w1);
\draw (v1) -- (w1);
\draw (v2) -- (w2);
\draw (v3) -- (w3);
\draw (v4) -- (w4);
\filldraw[fill=black] (v1)node[xshift=-.5cm,yshift=.9cm]{$m_z=$} circle (.0666) ;
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=black] (v3) circle (.0666) ;
\filldraw[fill=orange] (v4) circle (0.13);
\filldraw[fill=black] (w1) circle (.0666) ;
\filldraw[fill=white] (w2) circle (.13);
\filldraw[fill=black] (w3) circle (.0666) ;
\filldraw[fill=black] (w4) circle (.0666) ;
\end{tikzpicture}
\begin{tikzpicture}[scale=0.5]
\coordinate (v1) at (-2,-2);
\coordinate (v2) at (2,-2);
\coordinate (v3) at (2,2);
\coordinate (v4) at (-2,2) ;
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\draw (v3) -- (v4);
\draw (v4) -- (v1);
\coordinate (w1) at (-1,-1);
\coordinate (w2) at (1,-1);
\coordinate (w3) at (1,1);
\coordinate (w4) at (-1,1);
\draw (w1) -- (w2);
\draw (w2) -- (w3);
\draw (w3) -- (w4);
\draw (w4) -- (w1);
\draw (v1) -- (w1);
\draw (v2) -- (w2);
\draw (v3) -- (w3);
\draw (v4) -- (w4);
\filldraw[fill=black] (v1)node[xshift=-.5cm,yshift=.9cm]{$m_x=$} circle (.0666) ;
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=black] (v4) circle (.0666) ;
\filldraw[fill=orange] (v3) circle (0.13);
\filldraw[fill=black] (w1) circle (.0666) ;
\filldraw[fill=white] (w2) circle (.13);
\filldraw[fill=black] (w3) circle (.0666) ;
\filldraw[fill=black] (w4) circle (.0666) ;
\end{tikzpicture}
\begin{tikzpicture}[scale=0.5]
\coordinate (v1) at (-2,-2);
\coordinate (v2) at (2,-2);
\coordinate (v3) at (2,2);
\coordinate (v4) at (-2,2) ;
\draw (v1) -- (v2);
\draw (v2) -- (v3);
\draw (v3) -- (v4);
\draw (v4) -- (v1);
\coordinate (w1) at (-1,-1);
\coordinate (w2) at (1,-1);
\coordinate (w3) at (1,1);
\coordinate (w4) at (-1,1);
\draw (w1) -- (w2);
\draw (w2) -- (w3);
\draw (w3) -- (w4);
\draw (w4) -- (w1);
\draw (v1) -- (w1);
\draw (v2) -- (w2);
\draw (v3) -- (w3);
\draw (v4) -- (w4);
\filldraw[fill=orange] (v1)node[xshift=-.6cm,yshift=.9cm]{$m_{x,y}=$} circle (.13) ;
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=black] (v4) circle (.0666) ;
\filldraw[fill=white] (v3) circle (0.13);
\filldraw[fill=black] (w1) circle (.0666) ;
\filldraw[fill=white] (w2) circle (.13);
\filldraw[fill=black] (w3) circle (.0666) ;
\filldraw[fill=black] (w4) circle (.0666) ;
\end{tikzpicture}
\caption{Examples for the markings constructed in the proof of Proposition \ref{thm:properties}. Vertices marked by 1 are colored in orange, 2-marked vertices in white.}
\label{fig:zeroclass}
\end{figure}
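Statement (2) of the proposition is easy to sanity-check by brute force on small graphs. The following sketch (plain Python; the vertex labeling and helper names are ours, not from the paper) tallies the maximal independent sets of $C_6$ by cardinality; by (2), these sizes are precisely the $p$ with $E^1_{p,p} \not\cong 0$, and the multiplicities give the ranks of the diagonal entries.

```python
from itertools import combinations

def cycle_edges(n):
    # Edges of the cyclic graph C_n on vertices 0, ..., n-1.
    return {frozenset((i, (i + 1) % n)) for i in range(n)}

def is_independent(S, edges):
    return all(frozenset(p) not in edges for p in combinations(S, 2))

def maximal_independent_sets(n_vertices, edges):
    """All independent sets that cannot be enlarged by any further vertex."""
    verts = range(n_vertices)
    result = []
    for k in range(1, n_vertices + 1):
        for S in combinations(verts, k):
            if not is_independent(S, edges):
                continue
            if all(not is_independent(set(S) | {v}, edges)
                   for v in verts if v not in S):
                result.append(frozenset(S))
    return result

# C_6: maximal independent sets come in sizes 2 and 3 only,
# so E^1_{p,p} is nonzero exactly for p = 2, 3.
mis = maximal_independent_sets(6, cycle_edges(6))
by_size = {}
for S in mis:
    by_size[len(S)] = by_size.get(len(S), 0) + 1
print(by_size)  # {2: 3, 3: 2}
```

For $C_6$ this reproduces the diagonal entries $\mb Z^3$ and $\mb Z^2$ found in the example of Section \ref{sec:eg}.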
The preceding proposition improves the computational capability of the spectral sequence from Corollary \ref{cor:specseq}. In ``good'' cases the spectral sequence already contains enough information to fully determine the homology of $\mathrm{Ind}(G)$ (or at least to find relations between the groups in different dimensions). In ``not so good'' cases one has to examine the differential $d^1$ or bring in some additional information. Fortunately, $d^1$ is induced by the map $\delta$ and is therefore rather simple.
Here the term ``good'' essentially means that we know or are able to compute the homology of the independence complexes of the graphs $G\! - \! N[U]$ for $U \subset V$ independent. For instance, if $G$ has many vertices of high valence or is a very symmetric graph, then the graphs $G\! - \! N[U]$ become very simple as the cardinality of $U$ grows. This is demonstrated by the examples in the next section.
\newline
Last, but not least, there is one peculiar property of the spectral sequence that holds for a large class of examples, including all paths and cyclic graphs.
\begin{thm} \label{thm:vp}
Let $G$ be a path or cyclic graph.
If $E^1_{p,p}\cong 0$, i.e.\ $G$ has no maximal independent set of cardinality $p$, then all the entries $E^1_{p,q}$ with $q>0$ in that column vanish.
\end{thm}
This theorem has strong implications. Together with
\begin{equation*}
E^1_{p,q} = H_p(T_{\bullet,q},d) \overset{\eqref{eq:dcomplex2}}{\cong} \bigoplus_{ \substack{ U\subset V \text{ independent} \\ |U|=q } } H_{p-q}(T_{\bullet,0}(G\! - \! N[U]),d)
\end{equation*}
it allows one to deduce the vanishing of the homology groups in degree $p-q-1$ of the independence complex of \emph{every} subgraph of $G$ that can be obtained by deleting $q$ independent vertices and their neighborhoods.\footnote{In a way reflecting the ancient calculus tables in which solutions of integrals were listed that had been found by differentiating all kinds of functions.}
\begin{proof}
Throughout this proof we abbreviate $H_i(T_{\bullet,0},d)$ by $H_i$ and we drop set brackets in the notation for induced subgraphs.
Let $G$ be a path or cyclic graph and $p>1$ such that $G$ has no maximal independent sets of size $p$. Let $I\subset V$ be independent with $\vert I \vert =p$. Note that it suffices to discuss the case of paths since $G\! - \! N[I]$ is a union of paths in both cases.
We will show for all $k\in \{2,\ldots, p\}$ and all $U\subset I$ with $ |U|=k$ that
$$
H_{p-k}( G \! - \! N[U] ) \cong 0
$$
implies
$$
H_{p-k+1}( G \! - \! N[U\! \setminus \! v]) \cong 0
$$
for each $v \in U$.
To prove this we need three ingredients:
\begin{itemize}
\item A cofiber sequence, introduced in \cite{ADAMASZEK2012},
\begin{equation}\label{eq:cofiber}
\I {G\! - \! N[v]} \hookrightarrow \I {G \! - \! \{v\}} \hookrightarrow \I G \to
\Sigma \I {G\! - \! N[v]} \to \ldots,
\end{equation}
where $\Sigma$ denotes unreduced suspension. It expresses $\I G$ as the mapping cone of the subcomplex inclusion $\I {G\! - \! N[v]} \hookrightarrow \I {G\! - \! v}$. See \cite{ADAMASZEK2012} for the details.
\item The homotopy type of $\I {P_n}$, where $P_n$ denotes the path on $n$ vertices, shown in \cite{kozlov} to be
\begin{equation}\label{eq:homtype}
\I {P_n} \simeq
\begin{cases}
S^{k-1} & \text{ if } n=3k \text{ or } n=3k-1,\\
\{\mathrm{pt}\} & \text{ else.}
\end{cases}
\end{equation}
\item The fact that $\I {G \sqcup G'}= \I G * \I {G'}$ with $*$ denoting the topological join operation. In that regard, recall also that $S^i * S^j $ is homeomorphic to $S^{i+j+1}$.
\end{itemize}
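The homotopy types in \eqref{eq:homtype} can be cross-checked numerically via the reduced Euler characteristic: $\tilde\chi(S^{k-1})=(-1)^{k-1}$, $\tilde\chi(\mathrm{pt})=0$, and for an independence complex $\tilde\chi$ is an alternating sum of counts of independent sets. The sketch below (our own helper names; not part of the proof) verifies this for small paths.

```python
from itertools import combinations

def path_edges(n):
    # Edges of the path P_n on vertices 0, ..., n-1.
    return {frozenset((i, i + 1)) for i in range(n - 1)}

def reduced_euler_char(n_vertices, edges):
    """chi~ of the independence complex: a size-k independent set is a
    (k-1)-face, and the empty face contributes -1."""
    chi = -1
    for k in range(1, n_vertices + 1):
        for S in combinations(range(n_vertices), k):
            if all(frozenset(p) not in edges for p in combinations(S, 2)):
                chi += (-1) ** (k - 1)
    return chi

def expected(n):
    # S^{k-1} has chi~ = (-1)^{k-1}; a point has chi~ = 0.
    if n % 3 == 0:
        return (-1) ** (n // 3 - 1)
    if n % 3 == 2:
        return (-1) ** ((n + 1) // 3 - 1)
    return 0

for n in range(1, 12):
    assert reduced_euler_char(n, path_edges(n)) == expected(n)
print("homotopy types of Ind(P_n) consistent for n = 1..11")
```

Of course $\tilde\chi$ only detects the ranks up to alternating cancellation, but it agrees with \eqref{eq:homtype} on every path tested.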
The sequence \eqref{eq:cofiber} gives rise to a long exact sequence in homology,\footnote{using that the mapping cone of a subcomplex inclusion $A \hookrightarrow B$ is $B$ with a cone attached over $A$ and thus homotopy equivalent to $B/A$.}
\begin{equation*}
\ldots \to \tilde{H}_n(\I {G\! - \! N[v]}) \to \tilde{H}_n(\I {G \! - \! v}) \to \tilde{H}_n( \I G ) \to
\tilde{H}_{n-1}(\I {G\! - \! N[v]} )\to \ldots
\end{equation*}
Applying this to $G\! - \! N[U\! \setminus \! v]=G \! - \! N[U] \! + \! N[v]$ with $U$ as above and $v\in U$ (here $+$ means ``putting a set of vertices back into the graph", $G \! - \! Y \! + \! X := G \! - \! (Y \! \setminus \! X)$ for $X\subset Y\subset V$), the cofiber sequence reads
\begin{equation*}
\I {G\! - \! N[U]} \hookrightarrow \I {G\! - \! N[U] \! + \! N(v) } \hookrightarrow \I {G\! - \! N[U] \! + \! N[v]} =\I {G\! - \! N[U\! \setminus \! v]} \to \ldots
\end{equation*}
The associated long exact sequence in homology is
\begin{equation} \label{eq:les}
\ldots \to H_n(G\! - \! N[U]) \to H_n( G\! - \! N[U] +N(v) ) \to H_n( G\! - \! N[U\! \setminus \! v] ) \overset{\partial}{\to}
H_{n-1}( G\! - \! N[U] )\to \ldots
\end{equation}
where we used the identification $H_i(T_{\bullet,0}(G),d) \cong \tilde H_{i-1}(\I G)$.
Our goal is thus to show that for $n=p-k+1$ and $H_{n-1}( G\! - \! N[U] )\cong 0$ the map $\partial$ is an isomorphism.
\newline
1.\ Case $k=p$, $U=I$:
Let $v\in U$. By assumption, $H_0( G\! - \! N[U] )\cong 0$, so the graph $G\! - \! N[U]$ is nonempty. Putting $N[v]$ back into $G\! - \! N[U]$ ``creates'' at least two more vertices.
If the resulting graph $G\! - \! N[U]+N[v]=G\! - \! N[U\! \setminus \! v]$ has more than three vertices, then by \eqref{eq:homtype} its independence complex is homotopy equivalent to either a point or a sphere $S^m$ with $m>0$, hence $H_1( G\! - \! N[U\! \setminus \! v] )\cong 0$.
The case that $G\! - \! N[U\! \setminus \! v]$ has exactly three vertices is possible in only one configuration: it is a path $P_3$ containing the first or last vertex of $G$, which must have been $v$. If we mark the neighbor $v'$ of $v$ instead of $v$, then $G\! - \! N[(U\! \setminus \! v) \cup v']=\emptyset$, a contradiction to the assumption that $G$ does not admit a maximal independent set of size $p$.
See Figure \ref{fig:egcase1} for an example.
\begin{figure}[h]
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (0,0);
\coordinate (v2) at (1,0);
\coordinate (v3) at (2,0);
\coordinate (v4) at (3,0);
\coordinate (v5) at (4,0);
\coordinate (v6) at (5,0);
\coordinate (v7) at (6,0);
\coordinate (v8) at (7,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\draw (v3) -- (v4) ;
\draw (v4) -- (v5);
\draw (v5) -- (v6) ;
\draw (v6) -- (v7) ;
\draw[red] (v7) -- (v8) ;
\filldraw[fill=white] (v1) circle (.1);
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=red] (v3) circle (.0666);
\filldraw[fill=black] (v4) circle (.0666);
\filldraw[fill=white] (v5) circle (.1);
\filldraw[fill=black] (v6) circle (.0666);
\filldraw[fill=red] (v7) circle (.0666);
\filldraw[fill=red] (v8) circle (.0666);
\end{tikzpicture}
\caption{An example for the case $k=p=2$ in the graph $P_8$. 2-marked vertices are colored white, the graph $G\! - \! N[U]$ is colored red. For a non-example remove the two rightmost vertices and keep the same marking.}
\label{fig:egcase1}
\end{figure}
2.\ Case $2\leq k < p$:
Let $U \subset I$ with $|U|=k$ and $v \in U$. The relevant part of the long exact sequence \eqref{eq:les} is given by
$$
H_{p-k+1}( G\! - \! N[U] ) \overset{i_*}{\to}
H_{p-k+1}( G\! - \! N[U] +N(v) ) \overset{j_*}{\to} H_{p-k+1}( G\! - \! N[U\! \setminus \! v] ) \overset{\partial}{\to}
H_{p-k}( G\! - \! N[U] ).
$$
Putting $N(v)$ back into the graph $ G\! - \! N[U]$ has one of the following effects:
\begin{enumerate}
\item[(a)] It creates one or two isolated vertices in $G\! - \! N[U] \! + \! N(v)$.
\item[(b)] $G\! - \! N[U]$ is a disjoint union of paths and $G\! - \! N[U] \! + \! N(v)$ is as well, but with one or two components lengthened by one vertex.
\item[(c)] $G\! - \! N[U] = G\! - \! N[U] \! + \! N(v)$. This happens if and only if $v$ is the first or last vertex of $G$ and its neighbor's neighbor is already marked.
\end{enumerate}
In all three cases it follows that $H_{p-k+1}( G\! - \! N[U\! \setminus \! v] )\cong 0$, because
\begin{enumerate}
\item[(a)] $G\! - \! N[U] \! + \! N(v) \simeq \{\mathrm{pt}\}$, since $\I {G' \sqcup v'}\simeq \I {G'}*\{ \mathrm{pt}\}$ is a cone on $\I {G'}$ for any isolated vertex $v'$ in any graph $G'$.
\item[(b)] $G\! - \! N[U] = P_{n_1} \sqcup \ldots \sqcup P_{n_i}$ and $G\! - \! N[U] \! + \! N(v)$ is isomorphic to
\begin{equation*}
\text{either } P_{n_1+1} \sqcup P_{n_2} \sqcup \ldots \sqcup P_{n_i} \text{ or } P_{n_1+1} \sqcup P_{n_2+1}\sqcup P_{n_3} \sqcup \ldots \sqcup P_{n_i}.
\end{equation*}
From \eqref{eq:homtype} we see that $G\! - \! N[U] \! + \! N(v)$ is homotopy equivalent to a point if $n_1$ and $n_2$ are both not congruent to 2 modulo 3. On the other hand, if they are, then
\begin{equation*}
\I {G\! - \! N[U]} \simeq \I {G\! - \! N[U]\! + \! N(v)},
\end{equation*}
so
\begin{equation*}
H_\bullet(G\! - \! N[U]) \cong H_\bullet(G\! - \! N[U]\! + \! N(v)).
\end{equation*}
This means $i_*$ is an isomorphism. Since \eqref{eq:les} is exact, we get $\ker{\partial} \cong \image{j_*}\cong 0 $.
\item[(c)] $i_*$ is the identity map. Again, by exactness, $\ker{\partial} \cong 0 $.
\end{enumerate}
See Figure \ref{fig:egcase2} for an example.
\begin{figure}[h]
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (0,0);
\coordinate (v2) at (1,0);
\coordinate (v3) at (2,0);
\coordinate (v4) at (3,0);
\coordinate (v5) at (4,0);
\coordinate (v6) at (5,0);
\coordinate (v7) at (6,0);
\coordinate (v8) at (7,0);
\coordinate (v9) at (8,0);
\coordinate (v10) at (9,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\draw (v3) -- (v4) ;
\draw[red] (v4) -- (v5);
\draw (v5) -- (v6) ;
\draw (v6) -- (v7) ;
\draw (v7) -- (v8) ;
\draw (v8) -- (v9) ;
\draw[red] (v9) -- (v10) ;
\filldraw[fill=black] (v1) circle (.0666) node[left, xshift=-.5cm] {(a), (b)};
\filldraw[fill=white] (v2) circle (.1) ;
\filldraw[fill=black] (v3) circle (.0666);
\filldraw[fill=red] (v4) circle (.0666);
\filldraw[fill=red] (v5) circle (.0666);
\filldraw[fill=black] (v6) circle (.0666);
\filldraw[fill=white] (v7) circle (.1);
\filldraw[fill=black] (v8) circle (.0666);
\filldraw[fill=red] (v9) circle (.0666);
\filldraw[fill=red] (v10) circle (.0666);
\end{tikzpicture}
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (0,0);
\coordinate (v2) at (1,0);
\coordinate (v3) at (2,0);
\coordinate (v4) at (3,0);
\coordinate (v5) at (4,0);
\coordinate (v6) at (5,0);
\coordinate (v7) at (6,0);
\coordinate (v8) at (7,0);
\coordinate (v9) at (8,0);
\coordinate (v10) at (9,0);
\draw (v1) -- (v2) ;
\draw (v2) -- (v3) ;
\draw (v3) -- (v4) ;
\draw (v4) -- (v5);
\draw[red] (v5) -- (v6) ;
\draw[red] (v6) -- (v7) ;
\draw[red] (v7) -- (v8) ;
\draw[red] (v8) -- (v9) ;
\draw[red] (v9) -- (v10) ;
\filldraw[fill=white] (v1) circle (.1) node[left, xshift=-1cm] {(c)};
\filldraw[fill=black] (v2) circle (.0666) ;
\filldraw[fill=white] (v3) circle (.1);
\filldraw[fill=black] (v4) circle (.0666);
\filldraw[fill=red] (v5) circle (.0666);
\filldraw[fill=red] (v6) circle (.0666);
\filldraw[fill=red] (v7) circle (.0666);
\filldraw[fill=red] (v8) circle (.0666);
\filldraw[fill=red] (v9) circle (.0666);
\filldraw[fill=red] (v10) circle (.0666);
\end{tikzpicture}
\caption{Examples for the cases (a), (b) and (c) with $k=2<p=3$ in the graph $P_{10}$. 2-marked vertices are colored in white, the graph $G\! - \! N[U]$ is colored in red.}
\label{fig:egcase2}
\end{figure}
\end{proof}
Although the preceding proof is specifically tailored to the case of paths and cyclic graphs, the statement holds also for many other graphs, including the examples in the next section, some cubic graphs and possibly all ladder graphs (checked for up to six rungs).
It is therefore natural to ask the following
\begin{q}
For which (families of) graphs $G$ does the statement of Theorem \ref{thm:vp} hold?
\end{q}
For instance, the independence complex of a forest is homotopy equivalent to either a point or a sphere (Corollary 6.1 in \cite{ehrenborg}, see also \cite{engstrom}), but a concrete characterization as in \eqref{eq:homtype} is not known. Nevertheless, this suggests that Theorem \ref{thm:vp} could very well hold for forests as well.
An obvious further generalization is
\begin{q}
Does the statement of Theorem \ref{thm:vp} hold for higher independence complexes? If yes, for which (families of) graphs?
\end{q}
\section{Examples}\label{sec:eg}
In this section we look at some examples to see the machinery at work.
Our first goal is to compute the homology of $\I P$ for $P$ the Petersen graph. For this we need a preparatory calculation. Throughout this section let $H_n$ denote $H_n(T_{\bullet,0},d)$.
\begin{eg}[$C_6$, the cyclic graph on six vertices]
To set up the spectral sequence for $H(\I {C_6})$ we need to calculate the homology of the complexes in \eqref{eq:dcomplex2} for $j=1,2,3$.
Removing any vertex with its neighbors from $C_6$ gives a path $P_3$ on three vertices, so
\begin{equation*}
(T_{\bullet,1},d) \cong \bigoplus_{k=1}^6 (T_{\bullet-1}(P_3),d).
\end{equation*}
The homology of $\I{P_3}$ may be calculated directly, or simply read off from Figure \ref{fig:g,icg,2icg}. We find
\begin{equation*}
H_n(T_{\bullet,1},d) \cong \begin{cases} \mb Z^6 & n=1, \\ 0 & n \neq 1. \end{cases}
\end{equation*}
Removing two independent vertices together with their neighborhoods produces either an empty graph or a single vertex. The latter has trivial homology while the former adds a copy of $\mb Z$ in degree zero. There are three different maximal independent sets of size two, so
\begin{equation*}
H_n(T_{\bullet,2},d) \cong \begin{cases} \mb Z^3 & n=0, \\ 0 & n \neq 0. \end{cases}
\end{equation*}
Finally, $C_6$ has two maximal independent sets of size three,
\begin{equation*}
H_n(T_{\bullet,3},d) \cong \begin{cases} \mb Z^2 & n=0, \\ 0 & n \neq 0. \end{cases}
\end{equation*}
Filling out the first page of the associated spectral sequence
\vspace{.35cm}
\begin{equation*}
\begin{sseq}[grid=none,labels=none,entrysize=.8cm]{0...3}{0...3}
\ssdrop{0} \ssmove{1}{0} \ssdrop{H_1} \ssmove{1}{0} \ssdrop{H_2} \ssmove{1}{0} \ssdrop{H_3}
\ssmoveto{1}{1} \ssdrop{0} \ssmove{1}{0} \ssdrop{\mb Z^6} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{2}{2} \ssdrop{\mb Z^3} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{3}{3} \ssdrop{\mb Z^2}
\end{sseq}
\end{equation*}
we immediately deduce that $H_3$ must vanish and $H_2 \cong \mathrm{im}\ \! d^1_{2,0} \leq \mb Z^6$. The next page is
\vspace{.35cm}
\begin{equation*}
\begin{sseq}[grid=none,labels=none,entrysize=.8cm]{0...3}{0...3}
\ssdrop{0} \ssmove{1}{0} \ssdrop{H_1} \ssmove{1}{0} \ssdrop{0} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{1}{1} \ssdrop{0} \ssmove{1}{0} \ssdrop{E^2_{2,1} } \ssmove{1}{0} \ssdrop{0}
\ssmoveto{2}{2} \ssdrop{E^2_{2,2}} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{3}{3} \ssdrop{\mb Z^2}
\end{sseq}
\end{equation*}
with $E^2_{2,1} \cong \ker d^1_{2,1} / H_2$ and $E^2_{2,2} \cong \mb Z^3 / \mathrm{im}\ \! d^1_{2,1}$. From Proposition \ref{thm:properties} we know that $E^2_{2,2}\cong 0$. Hence, $\image d^1_{2,1} \cong \mb Z^3$ and $\ker d^1_{2,1} \cong \mb Z^3$.
Since this page's differential $d^2$ goes one step to the right and two steps up, we must have $E^2_{2,1} \cong \mb Z$ and $H_1 \cong 0$ for the spectral sequence to exhibit its expected convergence behavior.
Putting everything together we conclude
\begin{equation*}
H_1 \cong 0, \ H_2 \cong \mb Z^2 \ \Longrightarrow \tilde{H}_n(\I {C_6}) \cong \begin{cases}
\mb Z^2 & \text{ if } n=1, \\
0 & \text{ else.}
\end{cases}
\end{equation*}
\end{eg}
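As a quick consistency check of this result (not needed for the argument), the reduced Euler characteristic of $\I{C_6}$ can be computed directly by counting independent sets: $\tilde H_1 \cong \mb Z^2$ with all other groups vanishing forces $\tilde\chi = -2$. A minimal sketch (our own helper function, plain Python):

```python
from itertools import combinations

def reduced_euler_char(n, edges):
    # chi~ of the independence complex: alternating sum over independent
    # sets, with the empty face contributing -1.
    chi = -1
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if all(frozenset(p) not in edges for p in combinations(S, 2)):
                chi += (-1) ** (k - 1)
    return chi

c6 = {frozenset((i, (i + 1) % 6)) for i in range(6)}
# C_6 has 6 independent singletons, 9 independent pairs and 2 triples,
# so chi~ = -1 + 6 - 9 + 2 = -2 = (-1)^1 * rank H~_1.
print(reduced_euler_char(6, c6))  # -2
```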
\begin{eg}[The Petersen graph $P$]
\begin{figure}
\begin{tikzpicture}[scale=1]
\coordinate (v1) at (-1,0);
\coordinate (v2) at (0,0.7);
\coordinate (v3) at (1,0);
\coordinate (v4) at (0.7,-1);
\coordinate (v5) at (-0.7,-1);
\draw (v1) -- (v3) ;
\draw (v1) -- (v4) ;
\draw (v2) -- (v4) ;
\draw[red] (v2) -- (v5) ;
\draw (v3) -- (v5) ;
\fill[] (v1) circle (.0666cm) node[above]{$v_1$};
\fill[] (v2) circle (.0666cm) node[right]{$v_2$};
\fill[] (v3) circle (.0666cm) node[above]{$v_3$};
\fill[] (v4) circle (.0666cm) node[right]{$v_4$};
\fill[] (v5) circle (.0666cm) node[left]{$v_5$};
\coordinate (w1) at (-1.8,0);
\coordinate (w2) at (0,1.55);
\coordinate (w3) at (1.8,0);
\coordinate (w4) at (1,-1.6);
\coordinate (w5) at (-1,-1.6);
\draw (v1) -- (w1) ;
\draw (v3) -- (w3) ;
\draw (v4) -- (w4) ;
\draw (w1) -- (w2) ;
\draw (w5) -- (w1) ;
\fill[] (w1) circle (.0666cm) node[left]{$w_1$};
\fill[] (w2) circle (.0666cm) node[above]{$w_2$};
\fill[] (w3) circle (.0666cm) node[right]{$w_3$};
\fill[] (w4) circle (.0666cm) node[right]{$w_4$};
\fill[] (w5) circle (.0666cm) node[left]{$w_5$};
\draw[red] (w2) -- (w3) ;
\draw[red] (w3) -- (w4) ;
\draw[red] (w4) -- (w5) ;
\draw[red] (v5) -- (w5) ;
\draw[red] (v2) -- (w2) ;
\fill[blue] (v1) circle (.0666cm);
\fill[blue] (w1) circle (.0666cm);
\fill[blue] (v3) circle (.0666cm);
\fill[blue] (v4) circle (.0666cm);
\fill[red] (w2) circle (.0666cm);
\fill[red] (v2) circle (.0666cm);
\fill[red] (v5) circle (.0666cm);
\fill[red] (w5) circle (.0666cm);
\fill[red] (w4) circle (.0666cm);
\fill[red] (w3) circle (.0666cm);
\end{tikzpicture}
\caption{The Petersen graph $P$. The closed neighborhood $N[v_1]$ is depicted in blue, its complement graph $P\! - \! N[v_1]$ in red.}\label{fig:peter}
\end{figure}
Removing the ball around each of the vertices of $P$ produces a cyclic graph $C_6$ on six vertices. See Figure \ref{fig:peter} for the case of an ``interior'' vertex and note that we get an isomorphic graph if we do the same with one of the $w_i$, $i=1, \ldots, 5$.
Using the previous example we have thus
\begin{equation*}
H_n(T_{\bullet,1},d) \cong \begin{cases} \mb Z^{20} & n=2, \\ 0 & n \neq 2. \end{cases}
\end{equation*}
Deleting another vertex and its neighbors in the remaining graph produces $P_3$, a path on three vertices. There are thirty different ways of doing so, hence
\begin{equation*}
H_n(T_{\bullet,2},d) \cong \begin{cases} \mb Z^{30} & n=1, \\ 0 & n \neq 1. \end{cases}
\end{equation*}
Arguing for the remaining $P_3$ as in the previous example and counting all different ways of deleting three vertices and their neighborhoods in $P$, we find
\begin{equation*}
H_n(T_{\bullet,3},d) \cong \begin{cases} \mb Z^{10} & n=0, \\ 0 & n \neq 0.\end{cases}
\end{equation*}
The Petersen graph has five maximum independent sets of size four, so that
\begin{equation*}
H_n(T_{\bullet,4},d) \cong \begin{cases} \mb Z^{5} & n=0, \\ 0 & n \neq 0. \end{cases}
\end{equation*}
The first page of the associated spectral sequence is then given by
\vspace{.35cm}
\begin{equation*}
\begin{sseq}[grid=none,labels=none,entrysize=.8cm]{0...4}{0...4}
\ssdrop{0} \ssmove{1}{0} \ssdrop{H_1} \ssmove{1}{0} \ssdrop{H_2} \ssmove{1}{0} \ssdrop{H_3} \ssmove{1}{0} \ssdrop{H_4}
\ssmoveto{1}{1} \ssdrop{0} \ssmove{1}{0} \ssdrop{0} \ssmove{1}{0} \ssdrop{\mb Z^{20} } \ssmove{1}{0} \ssdrop{0}
\ssmoveto{2}{2} \ssdrop{0} \ssmove{1}{0} \ssdrop{\mb Z^{30}} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{3}{3} \ssdrop{\mb Z^{10}} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{4}{4} \ssdrop{\mb Z^{5}}
\end{sseq}
\end{equation*}
We deduce $H_3 \cong \image d^1_{3,0}$ and $H_4 \cong 0$. Furthermore, it must hold that $\image d^1_{3,3}\cong \mb Z^{10}$ because $E^2_{3,3}\cong 0$ by Proposition \ref{thm:properties}. This implies $\ker d^1_{3,3}\cong \mb Z^{20}$.
Therefore, we find on the $E_2$-page
\begin{equation*}
\begin{sseq}[grid=none,labels=none,entrysize=.8cm]{0...4}{0...4}
\ssdrop{0} \ssmove{1}{0} \ssdrop{H_1} \ssmove{1}{0} \ssdrop{H_2} \ssmove{1}{0} \ssdrop{0} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{1}{1} \ssdrop{0} \ssmove{1}{0} \ssdrop{0} \ssmove{1}{0} \ssdrop{X/H_3 } \ssmove{1}{0} \ssdrop{0}
\ssmoveto{2}{2} \ssdrop{0} \ssmove{1}{0} \ssdrop{\mb Z^{20}/Y} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{3}{3} \ssdrop{0} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{4}{4} \ssdrop{\mb Z^{5}}
\end{sseq}
\end{equation*}
for $X:= \ker d^1_{3,1}$ and $Y:=\image d^1_{3,1}$, satisfying $X \oplus Y \cong \mb Z^{20}$. The only nontrivial differentials are
\begin{equation*}
d^2_{1,0}:H_1 \to 0,\quad d^2_{2,0}: H_2 \to \mb Z^{20}/Y, \quad d^2_{3,2}: \mb Z^{20}/Y \to \mb Z^5, \quad d^2_{3,1}: X/H_3 \to 0.
\end{equation*}
Convergence of the spectral sequence implies
\begin{equation*}
H_1 \cong 0,\quad \image d^2_{3,2} \cong \mb Z^4, \quad \ker d^2_{3,2} \cong H_2, \quad X \cong H_3,
\end{equation*}
and, using $X \oplus Y \cong \mb Z^{20}$, this is equivalent to $H_3 \cong \mb Z^4 \oplus H_2$.
We see that the spectral sequence does not always solve the problem completely; additional calculations may be necessary. In the present case one finds $H_2\cong 0$, so that
\begin{equation*}
\tilde H_n(\I P) \cong \begin{cases} \mb Z^{4} & n=2, \\ 0 & n \neq 2. \end{cases}
\end{equation*}
\end{eg}
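Both inputs to this example can be verified by brute force. The sketch below uses one standard labeling of the Petersen graph (outer 5-cycle $0,\ldots,4$, inner pentagram $5,\ldots,9$; this encoding is our choice, not the paper's): it checks that $P\! - \! N[v]$ is a six-cycle for every vertex and that the independent-set counts yield $\tilde\chi = 4$, consistent with $\tilde H_2(\I P)\cong \mb Z^4$ being the only nonvanishing group.

```python
from itertools import combinations

# Outer 5-cycle, spokes i -- i+5, inner pentagram edges i+5 -- ((i+2) mod 5)+5.
outer = {frozenset((i, (i + 1) % 5)) for i in range(5)}
spokes = {frozenset((i, i + 5)) for i in range(5)}
inner = {frozenset((5 + i, 5 + (i + 2) % 5)) for i in range(5)}
petersen = outer | spokes | inner

# P - N[v] is 2-regular on six vertices for every v; since the Petersen
# graph has girth 5, this graph must be a 6-cycle.
for v in range(10):
    nbhd = {v} | {w for w in range(10) if frozenset((v, w)) in petersen}
    rest = [w for w in range(10) if w not in nbhd]
    assert len(rest) == 6
    assert all(sum(1 for u in rest if frozenset((w, u)) in petersen) == 2
               for w in rest)

# Independent-set counts by size (independence polynomial coefficients).
counts = {}
for k in range(1, 11):
    c = sum(1 for S in combinations(range(10), k)
            if all(frozenset(p) not in petersen for p in combinations(S, 2)))
    if c:
        counts[k] = c
print(counts)  # {1: 10, 2: 30, 3: 30, 4: 5}

# Reduced Euler characteristic: chi~ = (10 - 30 + 30 - 5) - 1 = 4.
chi = sum((-1) ** (k - 1) * c for k, c in counts.items()) - 1
print(chi)  # 4
```

Note that the size-4 count recovers the five maximum independent sets on the diagonal of the $E^1$-page.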
\begin{eg}[$K$, the 1-skeleton of a three dimensional cube] Here the $E^1$-page of the associated spectral sequence looks like
\begin{equation*}
\begin{sseq}[grid=none,labels=none,entrysize=.8cm]{0...4}{0...4}
\ssdrop{0} \ssmove{1}{0} \ssdrop{H_1} \ssmove{1}{0} \ssdrop{H_2} \ssmove{1}{0} \ssdrop{H_3} \ssmove{1}{0} \ssdrop{H_4}
\ssmoveto{1}{1} \ssdrop{0} \ssmove{1}{0} \ssdrop{\mb Z^8} \ssmove{1}{0} \ssdrop{0 } \ssmove{1}{0} \ssdrop{0}
\ssmoveto{2}{2} \ssdrop{\mb Z^4} \ssmove{1}{0} \ssdrop{0} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{3}{3} \ssdrop{0} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{4}{4} \ssdrop{\mb Z^{2}}
\end{sseq}
\end{equation*}
from which it follows that
\begin{equation*}
\tilde H_n(\I K) \cong \begin{cases} \mb Z^{3} & n=1, \\ 0 & n \neq 1. \end{cases}
\end{equation*}
The details of the computation are left to the reader.
\end{eg}
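Since the details were left to the reader, here is at least a numerical consistency check (our encoding of the cube graph as 3-bit strings): if $\tilde H_1(\I K)\cong \mb Z^3$ is the only nonvanishing group, the reduced Euler characteristic must be $-3$.

```python
from itertools import combinations

# Q_3: vertices are 3-bit strings 0..7, edges between Hamming distance 1.
verts = list(range(8))
edges = {frozenset((a, b)) for a in verts for b in verts
         if a < b and bin(a ^ b).count("1") == 1}

counts = {}
for k in range(1, 9):
    c = sum(1 for S in combinations(verts, k)
            if all(frozenset(p) not in edges for p in combinations(S, 2)))
    if c:
        counts[k] = c
print(counts)  # {1: 8, 2: 16, 3: 8, 4: 2}

# chi~ = (8 - 16 + 8 - 2) - 1 = -3, consistent with H~_1 = Z^3 only.
chi = sum((-1) ** (k - 1) * c for k, c in counts.items()) - 1
print(chi)  # -3
```

The two independent 4-sets are the bipartition classes, matching the entry $\mb Z^2$ at position $(4,4)$ of the $E^1$-page.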
We finish with two examples on the homology of 2-independence complexes. Note that Proposition \ref{prop:dcomplex} still applies, but in \eqref{eq:dcomplex} we have to replace the graphs $G- N[U]$ appropriately.
In the following let $T_{i,j}$ be given as in Definition \ref{defn:T}, except that markings $m:V \to \{0,1,2\}$ are now defined by requiring that $V_m=m^{-1}(\{1,2\})$ is a 2-independent set in $G$.
\begin{eg}[$C_4$, the cyclic graph on four vertices] \label{eg:2ind}
We consider the 2-independence complex $\mathrm{Ind}_2(C_4)$ whose geometric realization is homeomorphic to the 1-skeleton of a tetrahedron $\Delta^3$.
To fill out the first page of the associated spectral sequence we need to know the homology of the complexes $(T_{\bullet,j},d)$ for $j=1,2$.
Let a single vertex $v$ of $C_4$ be 2-marked. We may still mark any of the remaining three vertices without violating the condition of 2-independence, but not more. This means that in \eqref{eq:dcomplex} we have to replace each $G\! - \! N[v]$ by a $K_3$, the complete graph on the vertex set $V\! \setminus \! \{ v\}$,
\begin{equation*}
(T_{\bullet,1},d) \cong \bigoplus_{k=1}^4 (C_{\bullet-1}(K_3),\partial).
\end{equation*}
Now let $j=2$, i.e.\ two vertices be 2-marked. Every such 2-independent set is already maximum, so
\begin{equation*}
(T_{\bullet,2},d) \cong \bigoplus_{ \substack{ U\subset V \text{ maximum} \\ \text{2-independent}}} (C_{\bullet-1}(\emptyset),\partial).
\end{equation*}
The homology groups of the latter two complexes are easy to compute: For the first one observe that $\I {K_3}$ consists of three disjoint vertices, for the second one note that there are six maximum 2-independent sets in $C_4$. Thus,
\begin{equation*}
H_n(T_{\bullet,1},d) \cong \begin{cases} (\mb Z^2)^4 & n=1, \\ 0 & n \neq 1, \end{cases} \text{ and } H_n(T_{\bullet,2},d) \cong \begin{cases} \mb Z^6 & n=0, \\ 0 & n \neq 0. \end{cases}
\end{equation*}
The first page of the associated spectral sequence is then
\vspace{.35cm}
\begin{equation*}
\begin{sseq}[grid=none,labels=none,entrysize=.8cm]{0...2}{0...2}
\ssdrop{0} \ssmove{1}{0} \ssdrop{H_1} \ssmove{1}{0} \ssdrop{H_2}
\ssmoveto{1}{1} \ssdrop{0} \ssmove{1}{0} \ssdrop{\mb Z^8}
\ssmoveto{2}{2} \ssdrop{\mb Z^6}
\end{sseq}
\end{equation*}
which implies $H_1\cong 0$ and $H_2\cong \mb Z^3$ (the other possible solution, $H_1\cong \mb Z^5$ and $H_2\cong \mb Z^8$, cannot be true -- the rank of $H_1$ is always less than the number of vertices). We conclude
\begin{equation*}\tilde{H}_n(\mathrm{Ind}_2(C_4))\cong \begin{cases}
\mb Z^3 & \text{ if } n=1, \\
0 & \text{ else.}
\end{cases} \end{equation*}
\end{eg}
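This computation can be checked by enumerating the 2-independent sets directly. The sketch below takes a vertex set to be 2-independent when each of its vertices has at most one neighbor inside the set (our reading of 2-independence; helper names are ours); it recovers the face counts of the 1-skeleton of $\Delta^3$ and the value $\tilde\chi = -3$ forced by $\tilde H_1 \cong \mb Z^3$.

```python
from itertools import combinations

def is_2_independent(S, edges):
    # Each vertex of S may have at most one neighbor inside S.
    return all(sum(1 for u in S if frozenset((v, u)) in edges) <= 1
               for v in S)

c4 = {frozenset((i, (i + 1) % 4)) for i in range(4)}
counts = {}
for k in range(1, 5):
    c = sum(1 for S in combinations(range(4), k) if is_2_independent(S, c4))
    if c:
        counts[k] = c
# 4 vertices and all 6 pairs, no triples: the 1-skeleton of a tetrahedron.
print(counts)  # {1: 4, 2: 6}

chi = sum((-1) ** (k - 1) * c for k, c in counts.items()) - 1
print(chi)  # -3, consistent with H~_1 = Z^3
```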
\begin{eg}[$C_5$, the cyclic graph on five vertices] \label{eg:2ind2}
$C_5$ admits 2-independent sets of cardinality up to three, so we need to find the homology of the complexes $(T_{\bullet,j},d)$ for $j=1,2,3$.
Let a single vertex of $C_5$ be 2-marked, say $v_1$. Ordering the vertices of $C_5$ cyclically, its maximal 2-independent sets containing $v_1$ are
\begin{equation*}
\{v_1,v_2,v_4 \}, \{v_1,v_3,v_4 \},\{v_1,v_3,v_5 \}.
\end{equation*}
To model all allowed 1-markings if $v_1$ is 2-marked, we have to replace in the decomposition formula \eqref{eq:dcomplex} the summand corresponding to $C_5 - N[v_1]$ by a path $P_4$ on four vertices $v_2,v_3,v_4,v_5$ with edge set
\begin{equation*}
E(P_4)= \big\{ \{v_3,v_2 \}, \{v_2,v_5 \}, \{v_5,v_4 \} \big\}.
\end{equation*}
Thus, using that $\I {P_4}$ is contractible,
\begin{equation*}
(T_{\bullet,1},d) \cong \bigoplus_{k=1}^5 (C_{\bullet-1}(P_4),\partial) \quad \Longrightarrow \quad H_n(T_{\bullet,1},d) \cong 0 \text{ for all } n \in \mb N.
\end{equation*}
Now let $j=2$, i.e.\ two vertices be 2-marked. The graphs encoding the remaining possible markings consist of either a single vertex or a $K_2$, two vertices connected by an edge (if we start with $v_1$ these cases correspond to 2-marking the sets $\{v_1,v_2\}$, $ \{v_1,v_5\}$ or $\{v_1,v_3\}$, $\{v_1,v_4\}$, respectively),
\begin{equation*}
(T_{\bullet,2},d) \cong \Big( \bigoplus_{ \{v_i,v_j\} \in E} (C_{\bullet-1}(*),\partial) \Big) \oplus
\Big( \bigoplus_{ \substack{ \{v_i,v_j\} \subset V \\ |i-j| \in \{2,3\} }} (C_{\bullet-1}(K_2),\partial) \Big)
\end{equation*}
The former case has trivial homology, while the latter contributes a copy of $\mb Z$ in degree one,
\begin{equation*}
H_n(T_{\bullet,2},d) \cong \begin{cases} \mb Z^5 & n=1, \\ 0 & n \neq 1. \end{cases}
\end{equation*}
Lastly, there are five maximal 2-independent sets of size three,
\begin{equation*}
H_n(T_{\bullet,3},d) \cong \begin{cases} \mb Z^5 & n=0, \\ 0 & n \neq 0. \end{cases}
\end{equation*}
Filling out the first page of the associated spectral sequence gives
\vspace{.35cm}
\begin{equation*}
\begin{sseq}[grid=none,labels=none,entrysize=.8cm]{0...3}{0...3}
\ssdrop{0} \ssmove{1}{0} \ssdrop{H_1} \ssmove{1}{0} \ssdrop{H_2} \ssmove{1}{0} \ssdrop{H_3}
\ssmoveto{1}{1} \ssdrop{0} \ssmove{1}{0} \ssdrop{0} \ssmove{1}{0} \ssdrop{0} \ssmove{1}{0}
\ssmoveto{2}{2} \ssdrop{0} \ssmove{1}{0} \ssdrop{\mb Z^5}
\ssmoveto{3}{3} \ssdrop{\mb Z^5}
\end{sseq}
\end{equation*}
so that $H_3 \cong 0$.
The next page $E^2$ reads
\vspace{.35cm}
\begin{equation*}
\begin{sseq}[grid=none,labels=none,entrysize=.8cm]{0...3}{0...3}
\ssdrop{0} \ssmove{1}{0} \ssdrop{H_1} \ssmove{1}{0} \ssdrop{H_2} \ssmove{1}{0} \ssdrop{0}
\ssmoveto{1}{1} \ssdrop{0} \ssmove{1}{0} \ssdrop{0} \ssmove{1}{0} \ssdrop{0} \ssmove{1}{0}
\ssmoveto{2}{2} \ssdrop{0} \ssmove{1}{0} \ssdrop{X}
\ssmoveto{3}{3} \ssdrop{\mb Z^5/Y}
\end{sseq}
\end{equation*}
with $X:=\ker d^1_{3,2}$ and $Y:= \image d^1_{3,2}$, $X\oplus Y \cong \mb Z^5$.
For the spectral sequence to converge accordingly, we must have $H_2 \cong X \ncong 0$ (if $\ker d^1_{3,2}\cong 0$, then $Y\cong 0$) and $( \mb Z^5/Y ) / H_1 \cong \mb Z$. This shows $ H_2 \cong H_1 \oplus\mb Z$. Now, either by inspecting the differential $d^1_{3,2}$ more closely, or by simply noting that $\mathrm{Ind}_2(C_5)$ is connected, we find $H_1 \cong 0$ and therefore
\begin{equation*}
\tilde{H}_n(\mathrm{Ind}_2(C_5))\cong \begin{cases}\mb Z & n=1, \\ 0 & n\neq 1. \end{cases}
\end{equation*}
\end{eg}
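The same brute-force check applies here (same reading of 2-independence as before, with our own helper names): the 2-independent sets of $C_5$ have sizes at most three, the five maximal triples reappear, and the reduced Euler characteristic comes out as $-1$, consistent with $\tilde H_1(\mathrm{Ind}_2(C_5)) \cong \mb Z$ being the only nonvanishing group.

```python
from itertools import combinations

def is_2_independent(S, edges):
    # Each vertex of S may have at most one neighbor inside S.
    return all(sum(1 for u in S if frozenset((v, u)) in edges) <= 1
               for v in S)

c5 = {frozenset((i, (i + 1) % 5)) for i in range(5)}
counts = {}
for k in range(1, 6):
    c = sum(1 for S in combinations(range(5), k) if is_2_independent(S, c5))
    if c:
        counts[k] = c
# Triples fail exactly when they consist of three consecutive vertices.
print(counts)  # {1: 5, 2: 10, 3: 5}

# chi~ = (5 - 10 + 5) - 1 = -1, consistent with H~_1 = Z only.
chi = sum((-1) ** (k - 1) * c for k, c in counts.items()) - 1
print(chi)  # -1
```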
| {
"timestamp": "2020-10-01T02:13:40",
"yymm": "2008",
"arxiv_id": "2008.06267",
"language": "en",
"url": "https://arxiv.org/abs/2008.06267",
"abstract": "The independence complex $\\mathrm{Ind}(G)$ of a graph $G$ is the simplicial complex formed by its independent sets. This article introduces a deformation of the simplicial boundary map of $\\mathrm{Ind}(G)$ that gives rise to a double complex with trivial homology. Filtering this double complex in the right direction induces a spectral sequence that converges to zero and contains on its first page the homology of the independence complexes of $G$ and various subgraphs of $G$, obtained by removing independent sets and their neighborhoods from $G$. It is shown that this spectral sequence may be used to study the homology of $\\mathrm{Ind}(G)$. Furthermore, a careful investigation of the sequence's first page exhibits a relation between the cardinality of maximal independent sets in $G$ and the vanishing of certain homology groups of the independence complexes of some subgraphs of $G$. This relation is shown to hold for all paths and cyclic graphs.",
"subjects": "Algebraic Topology (math.AT); Combinatorics (math.CO)",
"title": "On the homology of independence complexes"
} |
https://arxiv.org/abs/1806.02408 | On the existence of symmetric minimizers | In this note we revisit a less known symmetrization method for functions with respect to a topological group $G$, which we call $G$-averaging. We note that, although quite non-technical in nature, this method yields $G$-invariant minimizers of functionals satisfying some relaxed convexity properties. We give an abstract theorem and show how it can be applied to the $p$-Laplace and polyharmonic Poisson problem in order to construct symmetric solutions. We also pose some open problems and explore further possibilities where the method of $G$-averaging could be applied to. |
\section{Introduction}
Identifying symmetries has always been important for the deeper understanding of mathematical and physical problems. As Kawohl wrote in \cite{Kawohl1998}: ``Many problems in analysis appear symmetric, yet in fact their solutions are sometimes nonsymmetric.'' It has been a subject of thorough study in the theory of partial differential equations to find conditions under which solutions inherit the symmetries of the problem. For example, Dacorogna \textit{et al.} in \cite{DacorognaGangboEtAl1992} and subsequently Belloni and Kawohl in \cite{BelloniKawohl1999} studied symmetry properties of minimizers of problems related to best Sobolev constants and isoperimetric problems in crystallography. Symmetries of systems of semilinear equations in the context of criticality were studied by Bozhkov and Mitidieri in \cite{BozhkovMitidieri2007}, using Poho\v zaev's and Noether's identities and Lie group theory. When it comes to quasilinear equations, D'Ambrosio \textit{et al.} in \cite{DAmbrosioFarinaEtAl2013} studied the symmetry properties of distributional solutions. Lastly, Kr\"omer studied the radial symmetries of non-convex functionals in \cite{Kroemer2008}.
A number of methods exist to prove inheritance of symmetries; many of them are described in \cite{Kawohl1998} and in the more recent survey \cite{Weth2010}. Here we give a short description of the available tools that have so far been developed and implemented; this list of references is by far not exhaustive.
Perhaps the most widely developed is Alexandrov's moving plane method \cite{Alexandrov1962}, adapted to pdes by Serrin in \cite{Serrin1971} and further used by Gidas, Ni and Nirenberg in \cite{GidasNiNirenberg1979,GidasNiNirenberg1981} to prove that positive solutions to the semilinear Poisson problem in a ball are radial. This method has been further refined by Cl\'ement and Sweers in \cite{ClementSweers1989} to include subsolutions and by Kawohl and Sweers in \cite{KawohlSweers2002} to include Steiner-symmetric domains. In the case of exterior domains, we refer to the work \cite{Reichel1997} of Reichel and to the work \cite{Sirakov2001} of Sirakov. For semilinear systems in bounded domains see Troy \cite{Troy1981} and for the unbounded case the work \cite{BuscaSirakov2000} of Busca and Sirakov. For polyharmonic semilinear equations see Gazzola, Grunau and Sweers' monograph \cite{GazzolaGrunauEtAl2010}. For quasilinear equations we refer to Serrin and Zou \cite{SerrinZou1999}. Da Lio and Sirakov in \cite{DaLioSirakov2007} extended the method to viscosity solutions of fully nonlinear equations. Fleckinger and Reichel in \cite{FleckingerReichel2005} proved radiality of global solution branches for problems involving the $p$-Laplacian on balls. Herzog and Reichel treated elliptic systems in ordered Banach spaces in \cite{HerzogReichel2012}. For more details and for an application to $p$-Laplace equations see also Brezis' survey \cite{Brezis1999}, Pacella and Ramaswamy's survey \cite{PacellaRamaswamy2008} or Fraenkel's monograph \cite{Fraenkel2000}. For more details on the maximum principle and related symmetry results we also refer to Pucci and Serrin's monograph \cite{PucciSerrin2007}. One can also use symmetry methods to prove nonexistence results, as did Poho\v zaev (\cite{Pohozaev1965} or \cite[Chapter III, 1]{Struwe2008} and references therein) or Reichel and Zou in \cite{ReichelZou2000}.
Moreover, a reflection method was considered by Lopes in \cite{Lopes1996_1,Lopes1996_2} and further developed by Mari\c{s} in \cite{Maris2009} in order to prove symmetry of constrained minimizers that possess either some continuation properties or increased regularity. Later works using this method include the one by Jeanjean and Squassina \cite{JeanjeanSquassina2009} and references therein.
Another method consists in the so-called technique of symmetrization and symmetric rearrangements and foliations; see for example Kawohl's monograph \cite{Kawohl1985} and the surveys of Brock \cite{Brock2007} and Pacella and Ramaswamy \cite{PacellaRamaswamy2008}. Symmetrization techniques can also find application in minimax problems, as Van Schaftingen proved in \cite{VanSchaftingen2005}. Concerning Schwarz symmetrization, we would also like to mention Hajaiej and Kr\"omer's recent work \cite{HajaiejKroemer2012} and the references therein.
All the above methods are quite technical and often make assumptions which are similar in nature. However, in some cases symmetry can be obtained via short and elegant arguments. For example, uniqueness of solutions to a symmetric problem leads directly to invariance; otherwise, any symmetric transformation of a solution would be a further solution. Similarly, minimizers of strictly convex functionals must inherit the symmetry of the problem, as shown by Montenegro and Valdinoci in \cite{MontenegroValdinoci2012}.
Finally we would also like to refer to the work of Pacella and Weth \cite{PacellaWeth2007} for proving symmetry results via Morse theory, and to the work of Jarohs and Weth \cite{JarohsWeth2016} for nonlocal problems.
The present article is of a similar nature. We consider an averaging-type symmetrization of a function and its properties when it comes to inheritance of symmetry. This idea was first used by Nicolaescu in \cite{Nicolaescu1988} in order to prove radiality of optimal controls for increasing cost functionals controlled by a semilinear Poisson state equation. In \cite{Kroemer2008}, Kr\"omer applied it also to prove radiality of minimizers. It was also used by Struwe in his monograph \cite[3.3 Remark, p.82]{Struwe2008} to construct symmetric pseudo-gradient flows for functionals on Banach spaces. To the best of our knowledge, it has not been used in its full generality to prove inheritance of symmetry to solutions, despite its compact and elegant form. (The method in the radial cases above follows from the one presented here if one assumes a mean value theorem for integrals; see Section \ref{mean_value}.) The aim of this note is to explore the possibilities of this method when it comes to finding symmetric solutions.
\section{Existence of invariant minimizers}
We begin by making clear what we mean by the $G$-averaging of an element of a general Banach space. We will later make this more concrete by applying the method on appropriate function spaces.
\begin{definition}
Let $X$ be a Banach space, $C\subseteq X$ closed and convex and $G$ a compact topological group acting continuously on $C$. Moreover, let $\theta$ denote the Haar-measure on $G$ (the unique left- and right-invariant probability measure with respect to the Borel $\sigma$-algebra on $G$) and $u\in C$. Define the $G$-average of $u$ by
\begin{equation}\label{ug}
u_G := \int_G g\cdot u\; d\theta(g),
\end{equation}
where the integral is in the sense of Bochner.
\end{definition}
\begin{example}[{\protect\cite[Proposition 4.2]{Nicolaescu1988}}] For $X=C[-1,1]$ we get
\begin{equation*}
u_{O(1)}(x)=\frac{1}{2}\big(u(x)+u(-x)\big),
\end{equation*}
whereas for $X=C(\overline \Omega)$ with $\Omega:=\{z\in \mathbb C: |z|<1\}$ we get
\begin{equation*}
u_{O(2)}(z)=\frac{1}{4\,\pi}\int_0^{2\pi}\Big(u\big(e^{i\phi}\,z\big)+u\big(e^{i\phi}\,\overline z\big)\Big)\;d\phi.
\end{equation*}
\end{example}
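For a finite group the Haar integral in \eqref{ug} reduces to a uniform average over the group elements, which makes the first example easy to check numerically. The following sketch (our own illustration, not part of the paper's framework) verifies that the $O(1)$-average removes the odd part of $u$ and is reflection-invariant:

```python
# Finite-group analogue of the G-average: the Haar integral becomes a
# uniform average over the group elements. Here G = O(1) = {id, -id}
# acts on C[-1, 1] by (g . u)(x) = u(g x).
def g_average(u, group_actions):
    n = len(group_actions)
    return lambda x: sum(u(g(x)) for g in group_actions) / n

u = lambda x: x**3 + x**2                       # neither even nor odd
uG = g_average(u, [lambda x: x, lambda x: -x])  # u_{O(1)}(x) = (u(x) + u(-x))/2

assert abs(uG(0.7) - 0.7**2) < 1e-12    # the odd part x^3 averages away
assert abs(uG(0.7) - uG(-0.7)) < 1e-12  # u_G is G-invariant
```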
Still, it is not immediately clear that the $G$-average is well-defined in this abstract setting. This issue is dealt with in the next lemma.
\begin{lemma}\label{inC}
Let $X$ be a Banach space, $C\subseteq X$ closed and convex and $G$ a compact topological group acting continuously on $C$. Moreover, let $u\in C$. Then the $G$-average of $u$ (defined by \eqref{ug}) satisfies $u_G\in C$.
\end{lemma}
\begin{proof}
Since $G$ acts on $C$ we have $g\cdot u\in C$ for all $g\in G$, and since $C$ is closed and convex it holds that $\overline{\textrm{conv}}\, G u\subseteq C$. Moreover, due to the continuity of the $G$-action, the real function $g\mapsto \left\| g\cdot u\right\|_X$ is continuous. Since $G$ is compact we estimate
\begin{equation}
\int_G\left\| g\cdot u\right\|_X\;d\theta(g)\leq \max_{g\in G} \left\| g\cdot u\right\|_X\, \int_G\;d\theta(g)<\infty,
\end{equation}
since $\theta$ is a probability measure and the function $g\mapsto \left\| g\cdot u\right\|_X$ attains its maximum in the compact space $G$. Thus the mapping $g\mapsto g\cdot u$ is Bochner integrable. But then \cite[Proposition 1.2.12]{HytoenenVanNeervenEtAl2016} implies that $u_G\in \overline{\textrm{conv}} G u$ and thus $u_G\in C$.
\end{proof}
We can now prove the main abstract result on the inheritance of symmetry from a minimization problem to its minimizers. The importance of the result lies in the fact that we impose neither regularity nor any kind of maximum principle; our assumptions only involve some convexity and invariance. Although assumptions $1.$ and $2.$ in the theorem below partially overlap, this formulation illustrates two different approaches: Jensen's inequality is not indispensable to the proof.
\begin{theorem}\label{th1}
Let $X$ be a Banach space, $C\subseteq X$ closed and convex and $G$ a compact topological group acting continuously on $C$. Assume that $F:X\strongly \mathbb R$ has a (local) minimizer $u\in C$ and that $F$ is $G$-invariant along the orbit of the minimizer, i.e., $F(g\cdot u)=F(u)$ for all $g\in G$. Moreover assume that either
\begin{enumerate}
\item $F$ is convex, and either continuous at $u_G$ or lower semi-continuous in $C$, or
\item $F$ is lower semi-continuous and convex in $\overline{\textrm{conv}} G u$.
\end{enumerate}
Then $u_G$ is a $G$-invariant minimizer of $F$ in $C$.
\end{theorem}
\begin{proof}
First note that $u_G$ is $G$-invariant: due to the (left) invariance of the Haar measure we have for any $\tilde g\in G$ that
\begin{equation*}
\tilde g \cdot u_G=\int_G (\tilde g\cdot g)\cdot u\; d\theta(g)=\int_G g\cdot u\; d\theta(g)=u_G.
\end{equation*}
We consider the cases separately:
\textit{1.} Since $F$ is a convex functional we can either apply a version of Jensen's inequality for the Bochner integral (\cite[Proposition 1.2.11]{HytoenenVanNeervenEtAl2016}):
\begin{align*}
F(u_G)={}&F\Big(\int_G g\cdot u\; d\theta(g)\Big)\leq \int_G F(g\cdot u)\; d\theta(g)\\
={}&\int_G F(u)\; d\theta(g)=F(u)\int_G\; d\theta(g)=F(u),
\end{align*}
or use the fact that the subdifferential of $F$ at $u_G$ is not empty:
\begin{equation*}
F(g\cdot u)\geq F(u_G)+\langle \beta^*,g\cdot u-u_G\rangle_{X^*,X}
\end{equation*}
for all $g\in G$, for some $\beta^*\in \partial F(u_G)$. Integrating the above over $G$ with respect to the Haar measure we obtain $F(u_G)\leq F(u)$ and, together with Lemma \ref{inC}, that $u_G$ is a minimizer in $C$.
\textit{2.} Since $u_G\in \overline{\textrm{conv}} G u$, there exists a sequence $\{t_n\,g_{1,n}\cdot u+(1-t_n)\,g_{2,n}\cdot u\}_{n\in\mathbb N}$, where $g_{1,n},g_{2,n}\in G$, strongly converging to $u_G$. Due to the semi-continuity and convexity of $F$ we get
\begin{align*}
F(u_G)={}& F\Big(\lim_{n\rightarrow\infty} \big(t_n\,g_{1,n}\cdot u+(1-t_n)\,g_{2,n}\cdot u\big)\Big)\\
\leq{}&\lim_{n\rightarrow\infty} F\big(t_n\,g_{1,n}\cdot u+(1-t_n)\,g_{2,n}\cdot u\big)\\
\leq{}& \lim_{n\rightarrow\infty}\Big(t_n\,F(g_{1,n}\cdot u)+(1-t_n)\,F(g_{2,n}\cdot u)\Big)=F(u).
\end{align*}
Again, using Lemma \ref{inC}, we infer the minimality of $u_G$.
\end{proof}
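A minimal finite-dimensional sketch of the theorem, with all choices ours: take $G=\mathbb Z/2$ acting on $\mathbb R^2$ by swapping coordinates and a convex, swap-invariant $F$ whose minimizers are not unique. Averaging an asymmetric minimizer over the group yields a $G$-invariant minimizer, exactly as in part 1 of the proof:

```python
# Theorem 1 in miniature: G = Z/2 acts on R^2 by swapping coordinates,
# F is convex and swap-invariant, and its minimizers (the line x + y = 2)
# are not unique. The Haar average over a finite group is the uniform
# average over the orbit.
def F(v):
    x, y = v
    return (x + y - 2)**2        # convex, F(swap(v)) = F(v), minimum value 0

swap = lambda v: (v[1], v[0])
u = (2.0, 0.0)                   # an asymmetric minimizer, F(u) = 0
uG = tuple((a + b) / 2 for a, b in zip(u, swap(u)))

assert uG == swap(uG)            # u_G is G-invariant: u_G = (1, 1)
assert F(uG) <= F(u)             # Jensen's inequality: F(u_G) <= F(u)
```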
Next we apply the abstract theorem above in a more concrete setting. Namely, we prove the inheritance of symmetry from the domain to weak solutions of the $p$-Laplace Poisson problem, assuming only a concave nonlinearity.
\begin{corollary}\label{cor1}
Let $\Omega\subset \mathbb R^n$ be a bounded Lipschitz domain and let $G$ denote the subgroup of $O(n)$ corresponding to the symmetries of $\Omega$. Assume that $G$ is compact and that the functional $F:W^{1,p}_0(\Omega)\strongly \mathbb R$ defined by
\begin{equation}\label{eq1}
F(u) := \int_\Omega\Big(\frac{1}{p}\,|\nabla u|^p-f(u)\Big)\;dx,
\end{equation}
with $f:\mathbb R\strongly\mathbb R$ concave and $p>1$, possesses a minimizer. Then $F$ possesses a $G$-invariant minimizer.
\end{corollary}
\begin{proof}
Define the action of $G$ on $W^{1,p}_0(\Omega)$ by composition (pullback) as
\begin{equation*}
(g\cdot u)(x) := u(g\,x)
\end{equation*}
and note that the standard norm in $W^{1,p}_0(\Omega)$ is $G$-invariant: Since orthogonal matrices preserve lengths we calculate
\begin{equation*}
\big|\nabla\big(u(g\,x)\big)\big|=\big|g^\top\,\nabla u(g\,x)\big|=|\nabla u(g\,x)|
\end{equation*}
and keep in mind that $|\det g|=1$. We only need to prove the continuity of the action, which, although similarly elementary in nature, we include for the sake of completeness. Let $v\in C_0^\infty(\Omega)$ and $g_1,g_2\in G$. Since $v$ and its first derivatives are Lipschitz continuous we get that
\begin{equation*}
\left\| g_1\cdot v-g_2\cdot v\right\|_{1,p}\leq C_v\,d(g_1,g_2),
\end{equation*}
for some constant $C_v>0$. Here $d$ denotes the standard (Euclidean) metric in $O(n)$. Let $\varepsilon>0$ and pick any $u_1,u_2\in W^{1,p}_0(\Omega)$ with $\left\| u_1-u_2\right\|_{1,p}<\varepsilon/4$, $v\in C_0^\infty(\Omega)$ with $\left\| u_1-v\right\|_{1,p}<\varepsilon/4$ and $g_1,g_2\in G$ with $d(g_1,g_2)< \varepsilon/(4\,C_v)$. We then obtain that
\begin{align*}
{}&\left\| g_1\cdot u_1-g_2\cdot u_2\right\|_{1,p}\leq \left\| g_1\cdot u_1-g_2\cdot u_1\right\|_{1,p}+\left\| g_2\cdot u_1-g_2\cdot u_2\right\|_{1,p}\\
{}&\qquad\qquad\leq \left\| g_1\cdot u_1-g_1\cdot v\right\|_{1,p}+\left\| g_1\cdot v-g_2\cdot v\right\|_{1,p}+\left\| g_2\cdot v-g_2\cdot u_1\right\|_{1,p}\\
{}&\qquad\qquad\qquad+\left\| g_2\cdot u_1-g_2\cdot u_2\right\|_{1,p}\\
{}&\qquad\qquad=\left\| u_1-v\right\|_{1,p}+C_v\,d(g_1,g_2)+\left\| v-u_1\right\|_{1,p}+\left\| u_1-u_2\right\|_{1,p}\\
{}&\qquad\qquad<\frac{\varepsilon}{4}+C_v\,\frac{\varepsilon}{4\,C_v}+\frac{\varepsilon}{4}+\frac{\varepsilon}{4}=\varepsilon,
\end{align*}
i.e., the action of $G$ on $W^{1,p}_0(\Omega)$ is continuous. Thus, applying Theorem \ref{th1} we obtain the existence of a $G$-invariant minimizer.
\end{proof}
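The corollary can be probed on a crude discretization. In the sketch below (our own toy setup: $p=2$, $f(s)=-s^2$, a uniform grid on $[-1,1]$, $G=O(1)$ acting by reflecting the grid), the discrete energy is convex and reflection-invariant, so averaging any grid function with its reflection never increases the energy:

```python
# Discrete sketch of Corollary 1 with p = 2 and f(s) = -s^2 (concave):
# the G-average of a grid function with its reflection is G-invariant
# and, by convexity of the energy, never increases it.
N = 8
h = 2.0 / N                                   # uniform grid on [-1, 1]

def energy(u):                                # u[0] = u[N] = 0 (zero boundary)
    grad = sum((u[i + 1] - u[i])**2 for i in range(N)) / (2 * h)
    return grad + h * sum(s * s for s in u)   # the -f(u) term with f(s) = -s^2

u = [0.0, 0.3, 0.9, 0.4, 0.1, 0.0, 0.2, 0.1, 0.0]   # asymmetric grid function
uG = [(a + b) / 2 for a, b in zip(u, reversed(u))]  # average with reflection

assert uG == list(reversed(uG))               # u_G is reflection-symmetric
assert energy(uG) <= energy(u) + 1e-12        # F(u_G) <= F(u)
```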
\begin{remark}
The group $G$ is supposed to include all symmetries of the domain, that is, rotations by a fixed angle around an axis, reflections with respect to any hyperplane, and their compositions. As long as all these symmetries form a compact subgroup of $O(n)$ (which is for example the case for any finite subgroup, or for any subgroup corresponding to rotations around any number of vectors, since it will be isomorphic to some $SO(m)$, $m\leq n$), the $G$-average of a minimizer will be symmetric with respect to all of the symmetries of the domain.
\end{remark}
\begin{remark}
The moving plane method relies strongly on the existence of some kind of comparison principle for solutions. But such results are generally not available for higher order pdes. Still, if one restricts the problem to balls, it is possible to substitute the use of comparison principles by kernel estimates and monotonicity properties of the biharmonic Green function (see \cite[Section 7.1]{GazzolaGrunauEtAl2010}). It is not clear how to extend this to general domains, since a formula for the Green function is explicitly available only for balls (\cite[Section 1.2]{GazzolaGrunauEtAl2010}). The $G$-averaging method does not rely on such results and yields symmetric minimizers directly, as shown in the next corollary.
\end{remark}
\begin{corollary}
Let $\Omega\subset \mathbb R^n$ be a bounded Lipschitz domain and let $G$ denote the subgroup of $O(n)$ corresponding to the symmetries of $\Omega$. Assume that $G$ is compact and that the functional $\displaystyle F:H^{2\,m}_0(\Omega)\strongly \mathbb R$ defined by
\begin{equation}\label{eq2}
F(u) := \int_\Omega\Big(\frac{1}{2}\,(\Delta^m u)^2-f(u)\Big)\;dx,
\end{equation}
with $f:\mathbb R\strongly\mathbb R$ concave and $m\in\mathbb N$, possesses a minimizer. Then $F$ possesses a $G$-invariant minimizer.
\end{corollary}
\begin{proof}
Define as in the proof of Corollary \ref{cor1} the action of the group via composition and note that it is continuous in $\displaystyle H^{2\,m}_0(\Omega)$. Moreover, since the Laplacian is invariant with respect to orthogonal transformations, so is its $m$-th power. Finally, $F$ is a convex functional so that Theorem \ref{th1} proves the claim.
\end{proof}
\begin{remark}
Since the method of $G$-averaging works for minimizers, the assumptions on the right-hand sides are relaxed compared to the moving plane method. Writing the strong Euler-Lagrange equations for \eqref{eq1} and \eqref{eq2}:
\begin{equation*}
\bigg\{
\begin{aligned}
-\Delta_p u ={}& f'(u) {}&& \text{ in } \Omega\\
u={}&0{}&& \text{ on }\partial \Omega
\end{aligned}\quad\text{ and }\quad
\bigg\{
\begin{aligned}
\Delta^{2\,m} u ={}& f'(u) {}&& \text{ in } \Omega\\
\partial^\alpha u|_{\partial\Omega}={}&0{}&& \text{ for }|\alpha|\leq 2\,m-1
\end{aligned}
\end{equation*}
we remark that we only assume a decreasing $f'$, whereas one would normally assume a Lipschitz continuous right-hand side when applying the moving plane method. Still, there are refinements of the latter assumption; discontinuous nonlinearities satisfying some technical assumptions are treated in \cite[Section 3.2]{Fraenkel2000}.
\end{remark}
\section{Further possible applications of the $G$-averaging}
In what follows we will present some open questions and possible generalizations and applications of the presented method.
\subsection{A mean value theorem in Banach spaces}\label{mean_value} If one assumes that a ``strong'' mean value theorem holds, that is, that there exists $g_u\in G$ such that $u_G=g_u\cdot u$ for a minimizer $u$, then we directly see that
$$F(u_G)=F(g_u\cdot u)=F(u), \text{ and } u=(g_u^{-1}\cdot g_u)\cdot u=g_u^{-1}\cdot u_G=u_G.$$
In general, it is not possible to obtain anything better than the assertion
\begin{equation*}
\frac{1}{\mu(A)}\int_A f(x)\;d\mu(x)\in\overline{\textrm{conv}} f(A)
\end{equation*}
for a Bochner integrable function $f:\Omega\strongly X$ and a $\mu$-measurable $A\subset \Omega$. One way to prove this is by using the Hahn-Banach separation theorem to arrive at a contradiction (see \cite[Proposition 2.1.21]{GasinskiPapageorgiou2006}). Would it be possible to use non-convex separation theorems involving separating functionals in normal cones, like the one in \cite{BorweinJofre1998}, together with specific connectedness assumptions on the group $G$, to obtain a more accurate result for the $G$-average? Having such a result, one could relax the convexity assumptions on the functional.
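The failure of the ``strong'' mean value theorem is already visible in a toy example of our own making: for a point $u$ on the unit circle and $G$ the rotation group of order $4$, the $G$-average is the center of the circle, which lies in $\overline{\textrm{conv}}\, G u$ but is not of the form $g_u\cdot u$ for any $g_u\in G$:

```python
import math

# u on the unit circle, G = rotations by multiples of pi/2 (order 4).
# The G-average is the center of the circle: it lies in conv(Gu), but the
# whole orbit stays on the circle, so no g_u with u_G = g_u . u can exist.
def rotate(v, angle):
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

u = (1.0, 0.0)
orbit = [rotate(u, k * math.pi / 2) for k in range(4)]
uG = tuple(sum(coord) / 4 for coord in zip(*orbit))

assert all(abs(c) < 1e-12 for c in uG)                       # u_G = (0, 0)
assert all(abs(math.hypot(*p) - 1) < 1e-12 for p in orbit)   # |g.u| = 1 for all g
```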
\subsection{Less convexity, more structure} As already mentioned in the introduction, there are methods to prove the symmetry of minimizers that do not assume convexity (but still either some regularity or positivity) for solutions to pdes. Is it possible to obtain a result just by directly substituting the $G$-average into the functional? What is the relation of the $G$-average to other symmetrizations? Is it possible to deal with more general functionals? Answers to these questions would enable the study not only of minimizers, but also of critical points in the spirit of \cite{VanSchaftingen2005}. Taking $G$-averages of a Palais-Smale sequence would directly lead to critical points, since the action of the group $G$ on the underlying space is continuous.
\subsection{Polyconvexity} An interesting application of the $G$-average would be in the context of nonlinear elasticity. This is a vectorial case and, as is well known (see e.g.\ Ciarlet's classical monograph \cite{Ciarlet1988}), convexity turns out to be quite restrictive: so-called hyperelastic materials are modelled by non-convex energies. This is where the so-called polyconvex functionals are of interest. This notion was first introduced by Ball in \cite{Ball1976} and provides a sufficient condition for the weak lower semi-continuity of the energies. Still, the proof of Theorem \ref{th1} does not directly work for polyconvex functionals. We shortly describe the problem, beginning with the definition:
\begin{definition}[{\protect \cite[Definition 5.1]{Dacorogna2008}}]
A function $f:\mathbb R^{k\times n}\strongly\mathbb R$ is called \emph{polyconvex} if there exists $g:\mathbb R^{\tau(k,n)}\strongly\mathbb R$ convex, such that
\begin{equation*}
f(\xi) = g \big(T(\xi)\big),\ \text{ where }\ T(\xi) := (\xi, \adj_2 \xi,\dots,\adj_{\min\{k,n\}} \xi),
\end{equation*}
$\adj_s \xi$ denotes the matrix of all $s\times s$ minors of the matrix $\xi\in\mathbb R^{k\times n}$ with $2\leq s \leq \min \{k,n\}$, and
\begin{equation*}
\tau(k,n) := \sum_{s=1}^{\min\{k,n\}}\binom{k}{s}\,\binom{n}{s}.
\end{equation*}
\end{definition}
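As a quick sanity check on the dimension count (our own, using only the formula above): $\tau(2,2)=5$, the four entries of $\xi$ plus $\det\xi$, and $\tau(3,3)=19$, nine entries, nine $2\times 2$ minors and the determinant.

```python
from math import comb

# The dimension tau(k, n) of the target of T, straight from the definition.
def tau(k, n):
    return sum(comb(k, s) * comb(n, s) for s in range(1, min(k, n) + 1))

assert tau(2, 2) == 5    # xi (4 entries) + det(xi): T maps R^{2x2} -> R^5
assert tau(3, 3) == 19   # 9 entries + 9 two-by-two minors + 1 determinant
```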
Let $\Omega\subset \mathbb R^n$ be a bounded Lipschitz domain and for $p>1$ define the functional
\begin{equation*}
F:W^{1,p}(\Omega;\mathbb R^k)\strongly \mathbb R,\ \text{ by }\ F(u) := \int_\Omega W(x,u,\nabla u)\;dx,
\end{equation*}
where $W(x,\cdot,\xi)$ is a.e. convex and $G$-invariant and $W(x,s,\cdot)$ is a.e. polyconvex and $G$-invariant (one should also assume some integrability and continuity on $W$, for example that it is a so-called normal integrand). One would like to prove that if $F$ possesses a minimizer in some closed and convex set $C\subseteq W^{1,p}(\Omega;\mathbb R^k)$ (incorporating the boundary conditions), then $F$ possesses a $G$-invariant minimizer.
Arguing as in the proof of Corollary \ref{cor1}, we obtain that the action of $G$ on $C$ is continuous and thus, by Lemma \ref{inC}, the $G$-average of the minimizer $u$ satisfies $u_G\in C$. It remains to show that $F(u_G)\leq F(u)$, and for that one would hope that polyconvexity allows one to use Jensen's inequality pointwise. For this, however, one would need to prove that
\begin{equation*}
\adj_s \nabla u_G=\adj_s \int_G (g\cdot \nabla u)\,g\; d\theta(g)= \int_G \adj_s\big((g\cdot \nabla u)\,g\big)\; d\theta(g),
\end{equation*}
but this does not work, since integrals do not respect multiplication. A way out would be either to prove an assertion in the spirit of part $2.$ of Theorem \ref{th1} or, assuming that more is known about the minimizer $u$, to provide a more explicit description of its $G$-average.
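The obstruction can be made completely concrete in a two-by-two example of our own: averaging two rotation matrices and then taking the determinant (the $s=2$ minor) does not give the average of the determinants.

```python
# Minors do not commute with averaging: det((A + B)/2) vs (det A + det B)/2
# for the two rotations A = identity and B = rotation by pi.
A = [[1.0, 0.0], [0.0, 1.0]]
B = [[-1.0, 0.0], [0.0, -1.0]]
avg = [[(A[i][j] + B[i][j]) / 2 for j in range(2)] for i in range(2)]

det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

assert det(avg) == 0.0                 # determinant of the average
assert (det(A) + det(B)) / 2 == 1.0    # average of the determinants
```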
\def$'${$'$}
| {
"timestamp": "2018-07-03T02:16:58",
"yymm": "1806",
"arxiv_id": "1806.02408",
"language": "en",
"url": "https://arxiv.org/abs/1806.02408",
"abstract": "In this note we revisit a less known symmetrization method for functions with respect to a topological group $G$, which we call $G$-averaging. We note that, although quite non-technical in nature, this method yields $G$-invariant minimizers of functionals satisfying some relaxed convexity properties. We give an abstract theorem and show how it can be applied to the $p$-Laplace and polyharmonic Poisson problem in order to construct symmetric solutions. We also pose some open problems and explore further possibilities where the method of $G$-averaging could be applied to.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "On the existence of symmetric minimizers"
} |
https://arxiv.org/abs/0910.0281 | Hypergraphic LP Relaxations for Steiner Trees | We investigate hypergraphic LP relaxations for the Steiner tree problem, primarily the partition LP relaxation introduced by Koenemann et al. [Math. Programming, 2009]. Specifically, we are interested in proving upper bounds on the integrality gap of this LP, and studying its relation to other linear relaxations. Our results are the following. Structural results: We extend the technique of uncrossing, usually applied to families of sets, to families of partitions. As a consequence we show that any basic feasible solution to the partition LP formulation has sparse support. Although the number of variables could be exponential, the number of positive variables is at most the number of terminals. Relations with other relaxations: We show the equivalence of the partition LP relaxation with other known hypergraphic relaxations. We also show that these hypergraphic relaxations are equivalent to the well studied bidirected cut relaxation, if the instance is quasibipartite. Integrality gap upper bounds: We show an upper bound of sqrt(3) ~ 1.729 on the integrality gap of these hypergraph relaxations in general graphs. In the special case of uniformly quasibipartite instances, we show an improved upper bound of 73/60 ~ 1.216. By our equivalence theorem, the latter result implies an improved upper bound for the bidirected cut relaxation as well. | \section{Introduction}\label{sec:intro}
In the {\em Steiner tree} problem, we are given an undirected graph
$G=(V,E)$, non-negative costs $c_e$ for all edges $e \in E$, and a set
of {\em terminal} vertices $R \subseteq V$. The goal is to find a
minimum-cost tree $T$ spanning $R$, and possibly some {\em Steiner
vertices} from $V\setminus R$. We can assume that the graph is
complete and that the costs induce a metric. The problem takes a
central place in the theory of combinatorial optimization and has
numerous practical applications. Since
the Steiner tree problem is \ensuremath{\mathsf{NP}}-hard\footnote{Chleb{\'i}k and
Chleb{\'i}kov{\'a} show that no $(96/95-\epsilon)$-approximation
algorithm can exist for any positive $\epsilon$ unless \ensuremath{\mathsf{P}}=\ensuremath{\mathsf{NP}}
~\cite{CC02}.} we are interested in approximation algorithms for it.
The best published approximation algorithm for the Steiner tree
problem is due to Robins and Zelikovsky \cite{RZ05}, which for any
fixed $\epsilon > 0$, achieves a performance ratio of $1+\frac{\ln
3}{2}+\epsilon \doteq 1.55$ in polynomial time; an improvement is
currently in press~\cite{BGRS10}, see also \prettyref{remark:byrka}.
In this paper, we study linear programming (LP) relaxations for the
Steiner tree problem, and their properties. Numerous such formulations
are known \myifthen{(e.g., see
\cite{An80,CR94a,CR94b,DB+01,Ed67,GM93,Pol03,PVd01,War98,Wo84}),}{(e.g., see \cite{Ed67,GM93,Pol03,PVd01,War98,Wo84}),} and
their study has led to impressive running time improvements for
integer programming based methods. Despite the significant body of
work in this area, none of the known relaxations is known to exhibit
an {\em integrality gap} provably smaller\myifthen{\footnote{Achieving an integrality gap
of $2$ is relatively easy for most relaxations by showing that the
minimum spanning tree restricted on the terminals is within a factor
$2$ of the LP.}}{}
than $2$. The integrality gap of a relaxation is the maximum
ratio of the cost of integral and fractional optima, over all
instances. It is commonly regarded as a measure of strength of
a formulation. One of the contributions of this paper are
improved bounds on the integrality gap for a number of Steiner
tree LP relaxations.
A Steiner tree relaxation of particular interest is the {\em
bidirected cut relaxation} \cite{Ed67,Wo84} (precise definitions
will follow in \prettyref{sec:lpss}). This relaxation has a flow formulation
using $O(|E||R|)$ variables and constraints, which is much more compact
than the other relaxations we study.
It is also widely believed to have an
integrality gap significantly smaller than $2$ (e.g., see
\cite{CDV08,RV99,Va00}). The largest lower bound on the integrality
gap known is $8/7$ (by Martin Skutella, reported in \cite{KPT09}), and
Chakrabarty et al. \cite{CDV08} prove an upper bound of $4/3$ in so
called {\em quasi-bipartite} instances (where Steiner vertices form an
independent set).
Another class of formulations are the so called {\em hypergraphic} LP
relaxations for the Steiner tree problem. These relaxations are
inspired by the observation that the minimum Steiner tree problem can
be encoded as a minimum cost hyper-spanning tree (see
\prettyref{sec:hyp}) of a certain hypergraph on the terminals. They
are known to be stronger than the bidirected cut
relaxation~\cite{PVd03}, and it is therefore natural to try to use
them to get better approximation algorithms, by drawing on the large
corpus of known LP techniques.
In this paper, we focus on one hypergraphic LP in particular: the
{\em partition} LP of K\"{o}nemann et al.~\cite{KPT09}.
\subsection{Our Results and Techniques}
There are three classes of results in this paper: structural results,
equivalence results, and integrality gap upper bounds.
\smallskip
\noindent {\bf Structural results}, \prettyref{sec:up}: We extend the
powerful technique of {\em uncrossing}, traditionally applied to
families of sets, to families of {\em partitions}. Set uncrossing has
been very successful in obtaining exact and approximate algorithms for
a variety of problems (for instance, \cite{EG77,J98,SL07}). Using
partition uncrossing, we show that any basic feasible solution to the
partition LP has at most $(|R|-1)$ positive variables (even though it
can have an exponentially large number of variables and constraints).
\smallskip
\noindent {\bf Equivalence results}, \prettyref{sec:equiv}: In
addition to the partition LP, two other hypergraphic LPs have been
studied before: one based on \emph{subtour elimination} due to Warme
\cite{War98}, and a \emph{directed hypergraph relaxation} of Polzin
and Vahdati Daneshmand \cite{PVd03}; these two are known to be
equivalent \cite{PVd03}. We prove that in fact \emph{all three
hypergraphic relaxations are equivalent} (that is, they have the same objective value
for any Steiner tree instance).
\myifthen{We
give two proofs (for completeness and to demonstrate our new
techniques), one showing the equivalence of the partition LP and the subtour LP via
partition uncrossing, and one showing the equivalence of the partition LP to the
directed LP via hypergraph orientation results of Frank et
al.~\cite{FKK03b}.}{}
We also show that, on {\em quasibipartite instances}, the
hypergraphic and the bidirected cut LP relaxations are
equivalent.
\myifthen {
We find this surprising for the following reasons.
Firstly, instances are known where the hypergraphic relaxations
are {\em strictly} stronger than the bidirected cut
relaxation~\cite{PVd03}. Secondly, the bidirected cut relaxation
seems to resist uncrossing techniques; e.g.\ even in quasi-bipartite
graphs extreme points for bidirected cut can have as many as
$\Omega(|V|^2)$ positive
variables~\cite[Sec.~4.9]{P09thesis}. Thirdly, the known approaches
to exploiting the bidirected cut relaxation (mostly primal-dual and
local search algorithms \cite{RV99,CDV08}) are very different from
the combinatorial hypergraphic algorithms for the Steiner tree
problem (almost all of them employ greedy strategies). In short,
there is no qualitative similarity to suggest why the two
relaxations should be equivalent!
}{
This result is surprising since we are aware of no qualitative similarity to suggest why the two
relaxations should be equivalent.
}
We believe a better understanding
of the bidirected cut relaxation is important because it is central in theory \emph{and}
practical for implementation.
\smallskip
\noindent {\bf Improved integrality gap upper bounds},
\prettyref{sec:gapbounds}: For \emph{uniformly quasibipartite
instances} (quasibipartite instances where for each Steiner vertex,
all incident edges have the same cost), we show that the integrality
gap of the hypergraphic LP relaxations is upper bounded by $73/60
\doteq 1.216$. Our proof uses the approximation algorithm of
Gr\"{o}pl et al.~\cite{GH+02} which achieves the same ratio with
respect to the (integral) optimum. We show, via a simple dual fitting
argument, that this ratio is also valid with respect to the LP
value. To the best of our knowledge, this is the only nontrivial class
of instances where the best currently known approximation ratio and
integrality gap upper bound are the same.
For general graphs, we give simple upper bounds of $2\sqrt{2}-1 \doteq
1.83$ and $\sqrt{3} \doteq 1.73$ on the integrality gap of the
hypergraph relaxation. Call a graph {\em gainless} if the minimum {\em
spanning} tree of the terminals is the optimal Steiner tree. To
obtain these integrality gap upper bounds, we use the following key
property of the hypergraphic relaxation which was implicit in
\cite{KPT09}: on gainless instances, the LP value equals the cost of the
minimum terminal spanning tree and the integrality gap is 1. Such a theorem was known
for quasibipartite instances and the bidirected cut relaxation
(implicitly in \cite{RV99}, explicitly in \cite{CDV08}); we
extend techniques of \cite{CDV08} to obtain improved integrality gaps
on all instances.
\begin{remark}\label{remark:byrka}
The recent independent work of Byrka et al.~\cite{BGRS10}, which gives
an improved approximation for Steiner trees in general graphs, also shows an
integrality gap bound of $1.55$ on the hypergraphic directed cut
LP.
This is stronger than our integrality gap bounds and was obtained prior
to the completion of our paper;
yet we include our bounds because they are obtained
using fairly different methods which might be of independent
interest in certain settings.
The proof in \cite{BGRS10} can be easily modified to show an
integrality gap upper bound of $1.28$ in quasibipartite
instances. Then using our equivalence result, we get an integrality
gap upper bound of $1.28$ for the bidirected cut relaxation on
quasibipartite instances, improving the previous best of $4/3$.
\end{remark}
\subsection{Bidirected Cut and Hypergraphic Relaxations}\label{sec:lpss}
\subsubsection{The Bidirected Cut Relaxation}\label{sec:bcr}
The first bidirected LP was given by Edmonds \cite{Ed67} as an exact
formulation for the spanning tree problem. Wong \cite{Wo84} later
extended this to obtain the bidirected cut relaxation for the Steiner
tree problem, and gave a dual ascent heuristic based on the
relaxation. For this relaxation, introduce two arcs $(u,v)$ and
$(v,u)$ for each edge $uv \in E$, and let both of their costs be
$c_{uv}$. Fix an arbitrary terminal $r \in R$ as the root. Call a
subset $U \subseteq V$ {\em valid} if it contains a terminal but not
the root, and let $\mathrm{valid}(V)$ be the family of all valid sets.
Clearly, for every valid $U$, the in-tree rooted at $r$ of a Steiner tree
$T$ (the directed tree in which every vertex except the root has
out-degree exactly $1$) must have at least one arc with tail in $U$ and
head outside $U$. This leads to the bidirected cut relaxation
\eqref{eq:LP-B} (shown in \prettyref{fig:B} with dual) which has a
variable for each arc $a \in A$, and a constraint for every valid set
$U$. Here and later, $\delta^{\mathrm{out}}(U)$ denotes the set of arcs in $A$ whose
tail is in $U$ and whose head lies in $V\setminus U$. When there are
no Steiner vertices, Edmonds' work~\cite{Ed67} implies this relaxation
is exact.
\begin{figure}[h]
\begin{minipage}{\lpbox} \begin{align}
\min \sum_{a \in A} c_ax_a: \quad& x \in \mathbf{R}^A_{\ge 0}
\tag{\ensuremath{\mathcal{B}}}\label{eq:LP-B} \\
\sum_{a\in \delta^{\mathrm{out}}(U)} x_a \ge 1, \quad& \forall
U \in {\mathrm{valid}(V)} \myifthen{\label{eq:LP-B2}}{\notag}
\end{align} \end{minipage}
\hfill \vline \hfill
\begin{minipage}{\lpbox} \begin{align}
\max \sum_{U} z_U: \quad& z \in
\mathbf{R}^{\mathrm{valid}(V)}_{\ge 0}
\tag{\ensuremath{\mathcal{B}_D}}\label{eq:LP-BD} \\
\sum_{U:a\in \delta^{\mathrm{out}}(U)} z_U \le c_a, \quad&\forall a\in A \myifthen{\label{eq:LP-BD1}}{\notag}
\end{align}\end{minipage}
\caption{\small The bidirected cut relaxation \eqref{eq:LP-B} and its dual \eqref{eq:LP-BD}.}\label{fig:B}
\end{figure}
Goemans \& Myung~\cite{GM93} made significant progress in
understanding the LP, by showing that the bidirected cut LP has the
same value independent of which terminal is chosen as the root, and by
showing that a whole ``catalogue" of very different-looking LPs also
has the same value; later Goemans~\cite{Go94} showed that if the graph
is series-parallel, the relaxation is exact. Rajagopalan and Vazirani
\cite{RV99} were the first to show a non-trivial integrality gap upper
bound of $3/2$ on quasibipartite graphs; this was subsequently
improved to $4/3$ by Chakrabarty et al.~\cite{CDV08}, who gave yet
another alternative formulation for \eqref{eq:LP-B}.
\subsubsection{Hypergraphic Relaxations}\label{sec:hyp}
Given a Steiner tree $T$, a \emph{full component} of $T$ is a maximal
subtree of $T$ all of whose leaves are terminals and all of whose
internal nodes are Steiner nodes. The edge set of any Steiner tree can
be partitioned in a {\em unique} way into full components by splitting
at internal terminals; see \prettyref{fig:decomp} for an example.
\begin{figure}[htb]
\begin{center} \leavevmode
\myifthen{\begin{pspicture}(0,0)(4,2.4)\psset{unit=0.8}}{\begin{pspicture}(0,0)(3.8,2)\psset{unit=0.65}}
\terminal{0,2}{t1}
\terminal{0.6,0.8}{t2}
\steiner{0.6,1.3}{s1}
\ncline{s1}{t1}
\ncline{s1}{t2}
\steiner{1.05,2.45}{s2}
\steiner{1.45,0.65}{s3}
\steiner{1.21,1.55}{s4}
\ncline{s1}{s4}
\terminal{1.8,2}{t3}
\ncline{s4}{t3}
\terminal{2.4,1}{t4}
\ncline{s4}{t4}
\terminal{2.5,0.1}{t5}
\steiner{2.46,2.2}{s5}
\ncline{t4}{t5}
\steiner{3.6,1.45}{s6}
\terminal{3.6,2.88}{t6}
\ncline{t4}{s6}
\ncline{s6}{t6}
\terminal{4.7,0.9}{t8}
\terminal{4.6,0.2}{t9}
\ncline{s6}{t8}
\ncline{t8}{t9}
\steiner{4.2,0.56}{s7}
\end{pspicture}
\myifthen{\hfill}{}
\myifthen{\begin{pspicture}(0,0)(4,2.4)\psset{unit=0.8}}{\begin{pspicture}(0,0)(3.8,2)\psset{unit=0.65}}
\terminal{0,2}{t1}
\terminal{0.6,0.8}{t2}
\steiner{0.6,1.3}{s1}
\ncline{s1}{t1}
\ncline{s1}{t2}
\steiner{1.21,1.55}{s4}
\ncline{s1}{s4}
\terminal{1.8,2}{t3}
\ncline{s4}{t3}
\terminal{2.4,1}{t4l}
\terminal{2.8,1}{t4r}
\terminal{2.6,0.75}{t4b}
\ncline{s4}{t4l}
\terminal{2.7,-0.15}{t5}
\ncline{t4b}{t5}
\steiner{4.0,1.45}{s6}
\terminal{4.0,2.88}{t6}
\ncline{t4r}{s6}
\ncline{s6}{t6}
\terminal{5.1,0.9}{t8l}
\terminal{5.1,0.6}{t8r}
\terminal{5.0,-0.1}{t9}
\ncline{s6}{t8l}
\ncline{t8r}{t9}
\end{pspicture}
\myifthen{\hfill}{}
\myifthen{\begin{pspicture}(0,0)(4,2.4)\psset{unit=0.8}}{\begin{pspicture}(0,0)(3.8,2)\psset{unit=0.65}}
\pspolygon[linearc=.2](-0.3,2.2)(0.5,0.6)(2.7,0.84)(1.9,2.2)
\pspolygon[linearc=.2](2.1,0.8)(5,0.7)(3.6,3.3)
\psellipse(2.45,0.55)(0.3,0.75)
\psellipse(4.65,0.55)(0.3,0.75)
\terminal{0,2}{t1}
\terminal{0.6,0.8}{t2}
\terminal{1.8,2}{t3}
\terminal{2.4,1}{t4}
\terminal{2.5,0.1}{t5}
\terminal{3.6,2.88}{t6}
\terminal{4.7,0.9}{t8}
\terminal{4.6,0.2}{t9}
\end{pspicture}
\end{center}
\caption{\small Black nodes are terminals and white nodes are
Steiner nodes. Left: a Steiner tree for this instance.
Middle: the Steiner tree's edges are partitioned into full
components; there are four full components. Right: the
hyperedges corresponding to these full
components.} \label{fig:decomp} \end{figure}
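Since full components are exactly the maximal groups of edges that are connected to one another through Steiner nodes, the unique decomposition can be computed with a simple union-find pass over the edges. The following Python sketch is our own illustration (the edge-list representation of the tree is an assumption, not from the paper):

```python
def full_components(edges, terminals):
    """Group the edges of a Steiner tree into full components: two
    edges lie in the same full component iff they are connected
    through Steiner (non-terminal) vertices."""
    parent = {e: e for e in edges}

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e

    def union(a, b):
        parent[find(a)] = find(b)

    # For every Steiner vertex, merge all edges incident to it.
    incident = {}
    for (u, v) in edges:
        for w in (u, v):
            if w not in terminals:
                incident.setdefault(w, []).append((u, v))
    for edge_list in incident.values():
        for e in edge_list[1:]:
            union(edge_list[0], e)

    groups = {}
    for e in edges:
        groups.setdefault(find(e), []).append(e)
    return list(groups.values())
```

On the path $t_1$--$s$--$t_2$--$t_3$ with single Steiner vertex $s$, this returns two full components: one containing the two edges through $s$, and one containing the terminal-terminal edge $t_2t_3$, matching the splitting-at-internal-terminals rule.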
Let $\ensuremath{\mathcal{K}}$ be the set of all nonempty subsets of terminals
(\emph{hyperedges}). We associate with each $K \in \ensuremath{\mathcal{K}}$ a fixed full
component spanning the terminals in $K$, and let $C_K$ be its
cost\footnote{We choose a minimum-cost full component if there is
more than one. If no full component spans $K$, we let $C_K$ be
infinity. Such a minimum-cost component can be found in polynomial
time if $|K|$ is a constant.}. The problem of finding a
minimum-cost Steiner tree spanning $R$ now reduces to that of finding
a minimum-cost hyper-spanning tree in the hypergraph $(R,\ensuremath{\mathcal{K}})$.
Spanning trees in (normal) graphs are well understood and there are
many different exact LP relaxations for this problem. These exact LP
relaxations for spanning trees in graphs inspire the {\em hypergraphic
relaxations} for the Steiner tree problem. Such relaxations have a
variable $x_K$ for every\footnote{Observe that there could be
exponentially many hyperedges. This computational issue is
circumvented by considering hyperedges of size at most $r$, for some
constant $r$. By a result of Borchers and Du~\cite{BD97}, this leads
to only a $(1+\Theta(1/\log r))$ factor increase in the optimal
Steiner tree cost.} $K\in \ensuremath{\mathcal{K}}$, and the different relaxations are
based on the constraints used to capture a hyper-spanning tree, just
as constraints on edges are used to capture a spanning tree in a
graph.
The oldest hypergraphic LP relaxation is the subtour LP introduced by
Warme \cite{War98} which is inspired by Edmonds' subtour elimination
LP relaxation \cite{Ed71} for the spanning tree polytope. This LP
relaxation uses the fact that a hyper-spanning tree contains no
hypercycles and spans all terminals. More formally, let
$\rho(X) := \max(0,|X|-1)$ be the {\em rank} of a set $X$ of
vertices. Then a sub-hypergraph $(R,\ensuremath{\mathcal{K}}')$ is a hyper-spanning tree iff
$\sum_{K \in \ensuremath{\mathcal{K}}'} \rho(K)=\rho(R)$ and $\sum_{K \in \ensuremath{\mathcal{K}}'} \rho (K \cap
S) \leq \rho(S)$ for every subset $S$ of $R$. The corresponding LP
relaxation, denoted below as \eqref{eq:LP-S}, is called the {\em subtour elimination} LP relaxation.
\begin{align}
\min \Big\{\sum_{K \in \ensuremath{\mathcal{K}}} C_Kx_K: ~ & x \in \mathbf{R}^\ensuremath{\mathcal{K}}_{\ge 0}, ~
\sum_{K \in \ensuremath{\mathcal{K}}} x_K\rho(K) = \rho(R), \tag{\ensuremath{\mathcal{S}}}\label{eq:LP-S} \\
& \sum_{K \in \ensuremath{\mathcal{K}}} x_K\rho(K \cap S) \leq \rho(S), ~\forall S
\subset R \Big\} \notag
\end{align}
Warme showed that if the maximum hyperedge size $r$ is bounded by a
constant, the LP can be solved in polynomial time.
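For small instances, Warme's characterization of hyper-spanning trees can be checked directly by brute force over all subsets $S \subseteq R$. The following Python sketch is our own illustration (exponential in $|R|$, so purely for intuition):

```python
from itertools import combinations

def rho(X):
    # Rank of a vertex set: max(0, |X| - 1).
    return max(0, len(X) - 1)

def is_hyper_spanning_tree(R, hyperedges):
    """Check Warme's characterization: the total rank of the
    hyperedges equals rho(R), and no subset S of R is over-covered
    (which would indicate a hypercycle)."""
    if sum(rho(K) for K in hyperedges) != rho(R):
        return False
    elems = list(R)
    for r in range(len(elems) + 1):
        for S in combinations(elems, r):
            S = set(S)
            if sum(rho(K & S) for K in hyperedges) > rho(S):
                return False
    return True
```

For example, on $R = \{1,2,3,4\}$ the hyperedges $\{1,2,3\}$ and $\{3,4\}$ form a hyper-spanning tree, while $\{1,2\}, \{1,2\}, \{3,4\}$ do not (the subset $\{1,2\}$ is over-covered).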
\def\Delta^{\mbox{\scriptsize {\ensuremath{\mathrm{in}}}}}{\Delta^{\mbox{\scriptsize {\ensuremath{\mathrm{in}}}}}}
\def\Delta^{\mbox{\scriptsize {\ensuremath{\mathrm{out}}}}}{\Delta^{\mbox{\scriptsize {\ensuremath{\mathrm{out}}}}}}
The next hypergraphic LP introduced for Steiner tree was a directed
hypergraph formulation \eqref{eq:LP-PUDir}, introduced by Polzin and
Vahdati Daneshmand \cite{PVd03}, and inspired by the bidirected cut
relaxation. Given a full component $K$ and a terminal $i\in K$, let
$K^i$ denote the arborescence obtained by directing all the edges of
$K$ towards $i$. Think of this as directing the hyperedge $K$ towards
$i$ to get the directed hyperedge $K^i$. Vertex $i$ is called the
\emph{head} of $K^i$, while the terminals in $K\setminus \{i\}$ are the
\emph{tails} of $K^i$. The cost of each directed hyperedge $K^i$ is
the cost of the corresponding undirected hyperedge $K$. In the
directed hypergraph formulation, there is a variable $x_{K^i}$ for
every directed hyperedge $K^i$. As in the bidirected cut relaxation,
there is a root terminal $r\in R$, and, analogously to before, a
subset $U\subseteq R$ of terminals is valid if it is nonempty and does
not contain the root. We let $\Delta^{\mbox{\scriptsize {\ensuremath{\mathrm{out}}}}}(U)$ be
the set of directed full components coming out of $U$, that is all
$K^i$ such that $U\cap K\neq \varnothing$ but $i\notin U$. Let
$\overrightarrow{\ensuremath{\mathcal{K}}}$ be the set of all directed hyperedges. We show
the directed hypergraph relaxation and its dual in \prettyref{fig:PUDir}.
\begin{figure}[h]
\begin{minipage}{\lpbox}
\begin{align} \min \Big\{\sum_{K \in \ensuremath{\mathcal{K}},i\in
K} C_{K}x_{K^i}: & \,\, x \in \mathbf{R}^{\overrightarrow{\ensuremath{\mathcal{K}}}}_{\ge 0}
\tag{\ensuremath{\mathcal{D}}}\label{eq:LP-PUDir} \\
\sum_{K^i \in \Delta^{\mbox{\scriptsize {\ensuremath{\mathrm{out}}}}}(U)} x_{K^i} \geq 1, \quad& \forall \mbox{
valid }~ U\subseteq R
\Big\} \myifthen{\label{eq:LP-PUDir2}}{\notag}
\end{align} \end{minipage}
\hfill \vline \hfill
\begin{minipage}{\lpbox} \begin{align}
\max \Big\{\sum_{U} z_U:~~~~ \quad \myifthen{}{\hspace{1cm}}&\myifthen{}{\hspace{-1cm}}z \in
\mathbf{R}^{\textrm{valid}(R)}_{\ge 0}\!\!\!\! \tag{\ensuremath{\mathcal{D}_D}}\label{eq:LP-A} \\
\sum_{U:K \cap U \neq \varnothing, i \notin U} z_U \leq C_K,
\quad&\forall K\in \ensuremath{\mathcal{K}}, \myifthen{\forall}{} i\in K \Big\}\myifthen{}{\!\!} \myifthen{\label{eq:LP-A2}}{\notag}
\end{align}\end{minipage}
\caption{\small The directed hypergraph relaxation \eqref{eq:LP-PUDir} and its
dual \eqref{eq:LP-A}.}\label{fig:PUDir} \end{figure}
Polzin \& Vahdati Daneshmand~\cite{PVd03} showed that
$\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PUDir}=\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-S}$. Moreover they observed
that this directed hypergraphic relaxation strengthens the bidirected
cut relaxation.
\begin{lemma}[\cite{PVd03}]\label{lem:simple}
For any instance, $\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PUDir} \ge \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-B}$.
\myifthen{}{There are instances for which this inequality is strict.}
\end{lemma}
\myifthen{
\begin{proof}[Proof sketch.]
It suffices to show that any solution $x$ of \eqref{eq:LP-PUDir} can
be converted to a feasible solution $x'$ of \eqref{eq:LP-B} of the
same cost. For each arc $a$, let $x'_a$ be the sum of $x_{K^i}$ over
all directed full components $K^i$ that (when viewed as an
arborescence) contain $a$. Now for any valid subset $U$ of $V$, it
is not hard to see that every directed full component leaving $R
\cap U$ has at least one arc leaving $U$, hence $\sum_{a\in
\delta^{\mathrm{out}}(U)} {x'}_a \ge \sum_{K^i \in \Delta^{\mbox{\scriptsize {\ensuremath{\mathrm{out}}}}}(R \cap U)} x_{K^i} \ge
1$ and $x'$ is feasible as needed.
\end{proof}
\noindent See \cite{PVd03} for an example where the strict inequality
$\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PUDir} > \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-B}$ holds.
}{}
K\"onemann et al.~\cite{KPT09}, inspired by the work of Chopra
\cite{Cho89}, described a partition-based relaxation capturing the fact
that, for any partition of the terminals, a hyper-spanning tree
must contain sufficiently many ``crossing'' hyperedges. More formally, a
partition, $\pi$, is a collection of pairwise disjoint nonempty
terminal sets $\{\pi_1, \ldots, \pi_q\}$ whose union equals $R$. The
number of {\em parts} $q$ of $\pi$ is referred to as the partition's
{\em rank} and denoted as $r(\pi)$. Let $\Pi_R$ be the set of all
partitions of $R$. Given a partition $\pi = \{\pi_1, \ldots, \pi_q\}$,
define the {\em rank contribution} $\mathtt{rc}_K^\pi$ of hyperedge $K \in \ensuremath{\mathcal{K}}$
for $\pi$ as the rank reduction of $\pi$ obtained by merging the parts
of $\pi$ that are touched by $K$; i.e., $\mathtt{rc}_K^{\pi} := |\{i \,:\,
K\cap \pi_i \neq \varnothing \}| - 1.$ Then a hyper-spanning tree
$(R,\ensuremath{\mathcal{K}}')$ must satisfy $\sum_{K\in \ensuremath{\mathcal{K}}'} \mathtt{rc}^\pi_K \ge r(\pi) - 1$.
The partition based LP of \cite{KPT09} and its dual are given in
\prettyref{fig:PU}.
\begin{figure}[h]
\begin{minipage}{\lpbox} \begin{align}
\min \Big\{\sum_{K \in \ensuremath{\mathcal{K}}} C_Kx_K: \quad& x \in \mathbf{R}^\ensuremath{\mathcal{K}}_{\ge 0}
\tag{\ensuremath{\mathcal{P}}}\label{eq:LP-PU} \myifthen{\!\!\!}{} \\
\sum_{K \in \ensuremath{\mathcal{K}}} x_K\mathtt{rc}_K^\pi \geq r(\pi)-1, \quad& \forall \pi
\in \Pi_R \Big\} \myifthen{\label{eq:LP-PU2}}{\notag}
\end{align} \end{minipage}
\hfill \vline \hfill
\begin{minipage}{\lpbox} \begin{align}
\max \Big\{\sum_{\pi} (r(\pi)-1)\myifthen{\cdot}{}y_\pi: \quad& y \in
\mathbf{R}^{\Pi_R}_{\ge 0} \tag{\ensuremath{\mathcal{P}_D}}\label{eq:LP-PUD} \\
\sum_{\pi \in \Pi_R} y_\pi\mathtt{rc}_K^\pi\leq C_K,
\quad&\forall K\in \ensuremath{\mathcal{K}} \Big\} \myifthen{\label{eq:LP-PUD3}}{\notag}
\end{align}\end{minipage}
\caption{\small The unbounded partition relaxation \eqref{eq:LP-PU} and its
dual \eqref{eq:LP-PUD}.}\label{fig:PU}
\end{figure}
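The rank contribution $\mathtt{rc}_K^\pi$ and the constraints of \eqref{eq:LP-PU} are straightforward to evaluate; the following Python sketch is our own illustration (partitions as lists of disjoint sets, hyperedges as frozensets):

```python
def rank_contribution(K, pi):
    # rc_K^pi: number of parts of pi touched by K, minus one.
    return sum(1 for part in pi if part & K) - 1

def partition_constraint_holds(x, pi, tol=1e-9):
    """The constraint of the partition LP for a partition pi reads
        sum_K x_K * rc_K^pi >= r(pi) - 1,
    where x maps hyperedges (frozensets of terminals) to LP values
    and the rank r(pi) is the number of parts of pi."""
    lhs = sum(v * rank_contribution(K, pi) for K, v in x.items())
    return lhs >= len(pi) - 1 - tol
```

For instance, on $R=\{1,2,3,4\}$, setting $x_K = 1$ for the single hyperedge $K = \{1,2,3,4\}$ satisfies the constraint for every partition, since $\mathtt{rc}_K^\pi = r(\pi)-1$ for all $\pi$.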
The feasible region of \eqref{eq:LP-PU} is \emph{unbounded}, since if
$x$ is a feasible solution for \eqref{eq:LP-PU} then so is any $x' \ge
x$. We obtain a \emph{bounded} partition LP relaxation, denoted by
\eqref{eq:LP-P2} and shown below, by adding a valid equality
constraint to the LP.
\begin{align}\tag{$\mathcal P'$}\label{eq:LP-P2}
\min \Big\{\sum_{K \in \ensuremath{\mathcal{K}}} C_Kx_K: x \in \eqref{eq:LP-PU}, \sum_{K \in \ensuremath{\mathcal{K}}} x_K (|K|-1) = |R|-1 \Big\}
\end{align}
\myifthen{
\subsubsection{Discussion of Computational Issues}\label{sec:discussion}
The bidirected cut relaxation is very attractive from the perspective of
computational implementation. Although the formulation given in
\prettyref{sec:bcr} has an exponential number of constraints, an equivalent
compact flow formulation with $O(|E||R|)$ variables and constraints is
well-known.
What is known regarding solving the hypergraphic LPs? They are good
enough to get theoretical results but less attractive in practice, as
we now explain. Using a separation oracle, Warme showed~\cite{War98}
that for any chosen family $\ensuremath{\mathcal{K}}$ of full components, the subtour LP can
be optimized in time $\textrm{poly}(|V|,|\ensuremath{\mathcal{K}}|)$. For the common
\emph{$r$-restricted setting} of $\ensuremath{\mathcal{K}}$ to be all possible full
components of size at most $r$ for constant $r$, we have $|\ensuremath{\mathcal{K}}| \le
\tbinom{|R|}{r}$. This is polynomial for any fixed $r$, and the
relative error caused by this choice of $r$ is at most the
\emph{$r$-Steiner ratio} $\rho_r = 1+\Theta(1/\log
r)$~\cite{BD97}. But this is not so practical: to get relative error
$1+\epsilon$, we apply the ellipsoid algorithm to an LP with
$|R|^{\exp(\Theta(1/\epsilon))}$ variables!
In the \emph{unrestricted setting} where $\ensuremath{\mathcal{K}}$ contains all possible
full components without regard to size, it is an open problem to
optimize any of the hypergraphic LPs exactly in polynomial time. We
make some progress here: in quasibipartite instances, the proof method
of our hypergraphic-bidirected equivalence theorem
(\prettyref{sec:lifting}) implies that one can exactly compute the LP
optimal value, and a dual optimal solution. Regarding this open
problem, we note that the $r$-restricted LP optimum is at most
$\rho_r$ times the unrestricted optimum, and wonder whether there
might be some advantage gained by using the fact that the hypergraphic
LPs have sparse optima.
We reiterate our feeling that it is important to obtain practical
algorithms and understand the bidirected cut relaxation as well as
possible, e.g.\ we know now that it has an integrality gap of at most
1.28 on quasi-bipartite instances, but obtaining such a bound {\em
directly} could give new insights.
}{}
\myifthen{
\subsubsection{Other Related Work}
In the special case of $r$-restricted instances for $r=3$, the partition hypergraphic LP is essentially a special case of an LP introduced by Vande Vate~\cite{Vate92} for matroid matching, which is totally dual half-integral~\cite{GP08}. Additional facts about the hypergraphic relaxations appear in the thesis of the third author~\cite{P09thesis}, e.g.~a combinatorial ``gainless tree formulation" for the LPs similar in flavour to the ``1-tree bound" for the Held-Karp TSP relaxation.
}{}
\section{Uncrossing Partitions}
\label{sec:up} \label{sec:setuncrossing}
In this section we are interested in {\em uncrossing} a minimal set of {\em tight partitions}
that uniquely defines a basic feasible solution to \eqref{eq:LP-PU}. We start with a few preliminaries necessary to state our result formally.
\subsection{Preliminaries}
We introduce some well-known properties of partitions, arising in combinatorial lattice theory~\cite{Stan86}, that we will need.
\begin{definition}
We say that a partition $\pi'$ \emph{refines} another partition $\pi$ if each part of $\pi'$ is contained in some
part of $\pi$. We also say $\pi$ {\em coarsens} $\pi'$. Two partitions \emph{cross} if neither refines the other.
A family of partitions forms a \emph{chain} if no pair of them cross. Equivalently, a chain is any family $\pi^1, \pi^2, \dotsc, \pi^t$ such that $\pi^i$ refines $\pi^{i-1}$ for each $1 < i \leq t$.
\end{definition}
The family $\Pi_R$ of all partitions of $R$ forms a \emph{lattice}
with a \emph{meet operator} $\wedge : \Pi_R^2 \to \Pi_R$ and a
\emph{join operator} $\vee : \Pi_R^2 \to \Pi_R$. The meet $\pi \wedge
\pi'$ is the coarsest partition that refines both $\pi$ and $\pi'$,
and the join $\pi \vee \pi'$ is the most refined partition that
coarsens both $\pi$ and $\pi'$. See \prettyref{fig:partitions} for an
illustration.
\begin{definition}[Meet of partitions]\label{definition:meet}Let the parts of $\pi$ be $\pi_1, \dotsc, \pi_t$ and let the
parts of $\pi'$ be $\pi'_1, \dotsc, \pi'_u$. Then the parts of the
meet $\pi \wedge \pi'$ are the nonempty intersections of parts of
$\pi$ with parts of $\pi'$,
$$\pi \wedge \pi' = \{\pi_i \cap \pi'_j \mid 1 \leq i \leq t, 1 \leq
j \leq u \textrm{ and } \pi_i \cap \pi'_j \neq \varnothing\}.$$
\end{definition}
\noindent
Given a graph $G$ and a partition $\pi$ of $V(G)$, we say that $G$
\emph{induces} $\pi$ if the parts of $\pi$ are the vertex sets of
the connected components of $G$.
\begin{definition}[Join of partitions]\label{definition:join}
Let $(R, E)$ be a graph that induces $\pi$, and let $(R, E')$ be a
graph that induces $\pi'$. Then the graph $(R, E \cup E')$ induces
$\pi \vee \pi'$.
\end{definition}
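The meet and join operations translate directly into code. The following Python sketch is our own illustration: the meet collects nonempty pairwise intersections of parts, and the join merges overlapping parts until the connected components of the union stabilize:

```python
from itertools import product

def meet(pi1, pi2):
    # Coarsest common refinement: nonempty pairwise intersections.
    return [p & q for p, q in product(pi1, pi2) if p & q]

def join(pi1, pi2):
    # Finest common coarsening: connected components of the union of
    # all parts, obtained by repeatedly merging overlapping parts.
    parts = [set(p) for p in pi1] + [set(q) for q in pi2]
    changed = True
    while changed:
        changed = False
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                if parts[i] & parts[j]:
                    parts[i] |= parts.pop(j)
                    changed = True
                    break
            if changed:
                break
    return parts
```

For $\pi = \{\{1,2\},\{3,4\}\}$ and $\pi' = \{\{1,3\},\{2,4\}\}$ this yields the meet $\{\{1\},\{2\},\{3\},\{4\}\}$ and the join $\{\{1,2,3,4\}\}$, as in \prettyref{fig:partitions}.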
\begin{figure}[htb]
\begin{center} \leavevmode
\begin{pspicture}(-2,-2)(2,2)\psset{unit=0.66}
\terminal{0,1}{t1}\terminal{1,0}{t2}\terminal{-1,0}{t3}\terminal{0,-1}{t4}
\terminal{1,2}{t1a}\terminal{2,1}{t2a}\terminal{-1,2}{t3a}\terminal{-2,1}{t4a}
\terminal{1,-2}{t1b}\terminal{2,-1}{t2b}\terminal{-1,-2}{t3b}\terminal{-2,-1}{t4b}
\psccurve(-2.2,1)(-0.9,-0.2)(-0.6,1)(-0.9,2.2)
\psccurve(-2.2,-0.8)(-1.7,-1.7)(-0.8,-2.2)(-1.3,-1.3)
\psccurve(-0.2,0.8)(0.2,1.8)(1.2,2.2)(0.8,1.2)
\psccurve(-0.2,-1.2)(2.2,1.2)(2.2,-1.2)(1,-2.2)
\psset{linestyle=dashed}
\psccurve(2.2,0.8)(1.8,1.8)(0.8,2.2)(1.2,1.2)
\psccurve(1.2,-2.2)(0.8,-1.2)(-0.2,-0.8)(0.2,-1.8)
\psccurve(-2,-1.3)(-2.4,0)(-2,1.3)(-1.6,0)
\psccurve(-1,-2.3)(-1.5,0)(-1,2.3)(-0.5,0)
\psccurve(-0.2,1.2)(1.3,0.3)(2.2,-1.2)(0.7,-0.3) \rput*(0,-3){(a)}
\end{pspicture}\hfill
\begin{pspicture}(-2,-2)(2,2)\psset{unit=0.66}
\pscircle(-2,-1){0.3}
\pscircle(-1,-2){0.3}
\pscircle(-2,1){0.3}
\pscircle(0,1){0.3}
\pscircle(1,2){0.3}
\pscircle(2,1){0.3}
\psccurve(-1,-0.3)(-1.4,1)(-1,2.3)(-0.6,1)
\psccurve(1.2,-2.2)(0.8,-1.2)(-0.2,-0.8)(0.2,-1.8)
\psccurve(2.2,-1.2)(1.8,-0.2)(0.8,0.2)(1.2,-0.8)
\terminal{0,1}{t1}\terminal{1,0}{t2}\terminal{-1,0}{t3}\terminal{0,-1}{t4}
\terminal{1,2}{t1a}\terminal{2,1}{t2a}\terminal{-1,2}{t3a}\terminal{-2,1}{t4a}
\terminal{1,-2}{t1b}\terminal{2,-1}{t2b}\terminal{-1,-2}{t3b}\terminal{-2,-1}{t4b}
\rput*(0,-3){(b)}
\end{pspicture}\hfill
\begin{pspicture}(-2,-2)(2,2)\psset{unit=0.66}
\psccurve(-2.2,1)(-2.2,-1)(-1,-2.2)(-0.45,0)(-1,2.2)
\psccurve(-0.2,1)(-0.3,0)(-0.2,-1)(1,-2.2)(2.1,-1.1)(2.1,1.1)(1,2.2)
\terminal{0,1}{t1}\terminal{1,0}{t2}\terminal{-1,0}{t3}\terminal{0,-1}{t4}
\terminal{1,2}{t1a}\terminal{2,1}{t2a}\terminal{-1,2}{t3a}\terminal{-2,1}{t4a}
\terminal{1,-2}{t1b}\terminal{2,-1}{t2b}\terminal{-1,-2}{t3b}\terminal{-2,-1}{t4b}
\rput*(0,-3){(c)}
\end{pspicture}
\end{center}
\caption{\small Illustrations of some partitions. The black dots are the
terminal set $R$. (a): two partitions; neither refines the other. (b): the meet of the partitions from (a). (c): the join of the partitions from (a).} \label{fig:partitions}
\end{figure}
\def\wedge{\wedge}
\def\vee{\vee}
\noindent
Given a feasible solution $x$ to \eqref{eq:LP-PU}, a partition $\pi$
is \emph{tight} if $\sum_{K\in \ensuremath{\mathcal{K}}}x_K\mathtt{rc}^\pi_K = r(\pi) - 1$. Let
$\mathop{{\tt tight}}(x)$ be the set of all tight partitions. We are interested in
{\em uncrossing} this set of partitions. More precisely, we wish to
find a cross-free set of partitions (chain) which uniquely defines
$x$. One way would be to prove the following.
\begin{ppty}\label{ppty:quote}
If two crossing partitions $\pi$ and $\pi'$ are in $\mathop{{\tt tight}}(x)$, then
so are $\pi \wedge \pi'$ and $\pi \vee \pi'$.
\end{ppty}
This type of property is already well-used~\myifthen{\cite{CNF85,EG77,J98,SL07}}{\cite{EG77,J98,SL07}}
for sets (with meets and joins replaced by intersections and unions,
respectively), and the standard proof considers the two constraints in
\eqref{eq:LP-PU} corresponding to $\pi$ and $\pi'$ and uses the
``supermodularity'' of the right-hand sides and the ``submodularity'' of
the coefficients on the left-hand sides. In particular, if
the following is true,
\begin{align}
\forall \pi, \pi':~ r(\pi \vee \pi')+r(\pi \wedge \pi') &~~~\geq~~~ r(\pi) + r(\pi') \label{eq:naive1} \\
\forall K, \pi, \pi': ~ \mathtt{rc}_K^\pi + \mathtt{rc}_K^{\pi'} &~~~\ge~~~
\mathtt{rc}_K^{\pi \vee \pi'} + \mathtt{rc}_K^{\pi \wedge \pi'} \label{eq:naive}
\end{align}
then \prettyref{ppty:quote} can be proved easily by writing a
string of inequalities.\footnote{In this hypothetical scenario we get
$r(\pi) + r(\pi') - 2 = \sum_{K} x_K(\mathtt{rc}_K^{\pi}+\mathtt{rc}_K^{\pi'}) \ge
\sum_{K} x_K(\mathtt{rc}_K^{\pi\wedge\pi'}+\mathtt{rc}_K^{\pi\vee\pi'}) \ge
r(\pi\wedge\pi')+r(\pi\vee\pi') - 2 \ge r(\pi)+r(\pi')-2$; thus the
inequalities hold with equality, and the middle one shows $\pi
\wedge \pi'$ and $\pi \vee \pi'$ are tight.}
Inequality \eqref{eq:naive1} is indeed true (see, for example,
\cite{Stan86}), but unfortunately inequality \eqref{eq:naive} is not
true in general, as the following example shows.
\begin{example}\label{example:nonsubm}Let $R = \{1, 2, 3, 4\}$, $\pi =
\{\{1, 2\}, \{3, 4\}\}$ and $\pi' = \{\{1, 3\}, \{2, 4\}\}.$ Let $K$
denote the full component $\{1, 2, 3, 4\}$. Then $\mathtt{rc}_K^\pi +
\mathtt{rc}_K^{\pi'} = 1 + 1 < 0 + 3 = \mathtt{rc}_K^{\pi \vee \pi'} + \mathtt{rc}_K^{\pi
\wedge \pi'}.$
\end{example}
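The failure of inequality \eqref{eq:naive} in this example is easy to verify numerically. In the following Python sketch (our own illustration), the meet and join of $\pi$ and $\pi'$ are written out by hand:

```python
def rc(K, pi):
    # Rank contribution of hyperedge K with respect to partition pi.
    return sum(1 for part in pi if part & K) - 1

K = {1, 2, 3, 4}
pi1 = [{1, 2}, {3, 4}]
pi2 = [{1, 3}, {2, 4}]
pi_join = [{1, 2, 3, 4}]        # the join, written out by hand
pi_meet = [{1}, {2}, {3}, {4}]  # the meet, written out by hand

lhs = rc(K, pi1) + rc(K, pi2)          # 1 + 1
rhs = rc(K, pi_join) + rc(K, pi_meet)  # 0 + 3
```

Here `lhs < rhs`, so the submodularity inequality \eqref{eq:naive} fails for this pair of partitions.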
Nevertheless, \prettyref{ppty:quote} is true; its correct proof is
given in \myifthen{\prettyref{sec:pui}}{the full version of this paper
\cite{CKP09}} and depends on a simple though subtle extension of the
usual approach. The key insight needed to fix the approach is to consider
not \emph{pairs} of constraints in \eqref{eq:LP-PU}, but rather multi-sets
that may contain more than two inequalities. Using
this uncrossing result, we can prove the following theorem (details
are given in \myifthen{\prettyref{sec:pbu}}{\cite{CKP09}}). Here, we let
$\underline{\pi}$ denote $\{R\}$, the unique partition with (minimal)
rank 1; later we use $\overline{\pi}$ to denote $\{\{r\}\mid r\in
R\}$, the unique partition with (maximal) rank $|R|$.
\begin{theorem}\label{thm:1}
Let $x^*$ be a basic feasible solution of \eqref{eq:LP-PU}, and let
\ensuremath{\mathcal{C}}\ be an inclusion-wise maximal chain in $\mathop{{\tt tight}}(x^*) \setminus \{\underline{\pi}\}$. Then
$x^*$ is uniquely defined
by
\begin{equation} \label{eq:chain}
\sum_{K \in \ensuremath{\mathcal{K}}} \mathtt{rc}_K^\pi x^*_K = r(\pi)-1 \quad \forall \pi \in \ensuremath{\mathcal{C}}.
\end{equation}
\end{theorem}
Any chain of distinct partitions of $R$ that does not contain
$\underline{\pi}$ has size at most $|R|-1$, and this is an upper bound
on the rank of the system in \eqref{eq:chain}. Elementary linear
programming theory immediately yields the following corollary.
\begin{corollary}\label{corollary:structure}
Any basic solution $x^*$ of \eqref{eq:LP-PU} has at most $|R|-1$ non-zero coordinates.
\end{corollary}
\myifthen{
\subsection{Partition Uncrossing Inequalities}\label{sec:pui}
We start with the following definition.
\begin{definition}\label{definition:merged}
Let $\pi \in \Pi_R$ be a partition and let $S \subset R$. Define the
\emph{merged partition} $m(\pi, S)$ to be the most refined partition
that coarsens $\pi$ and contains all of $S$ in a single part. See
\prettyref{fig:merged} for an example. Informally, $m(\pi,S)$ is
obtained by merging all parts of $\pi$ which intersect $S$.
Formally, $m(\pi,S) = \{\pi_j : \pi_j\cap
S=\varnothing\} \cup \bigl\{\bigcup_{j:\pi_j\cap S\neq \varnothing}
\pi_j\bigr\}$.
\end{definition}
\begin{figure}[htb]
\myifthen{}{\psset{unit=0.8cm}}
\begin{center} \leavevmode
\begin{pspicture}(-3,-2.5)(3,2.5)
\terminal{0,1}{t1}\terminal{1,0}{t2}\terminal{-1,0}{t3}\terminal{0,-1}{t4}
\terminal{1,2}{t1a}\terminal{2,1}{t2a}\terminal{-1,2}{t3a}\terminal{-2,1}{t4a}
\terminal{1,-2}{t1b}\terminal{2,-1}{t2b}\terminal{-1,-2}{t3b}\terminal{-2,-1}{t4b}
\psccurve(2.2,0.8)(1.8,1.8)(0.8,2.2)(1.2,1.2)
\psccurve(1.2,-2.2)(0.8,-1.2)(-0.2,-0.8)(0.2,-1.8)
\psccurve(-2,-1.3)(-2.4,0)(-2,1.3)(-1.6,0)
\psccurve(-1,-2.3)(-1.5,0)(-1,2.3)(-0.5,0)
\psccurve(-0.2,1.2)(1.3,0.3)(2.2,-1.2)(0.7,-0.3)
\psccurve[linestyle=dashed](-1.2,0.1)(1.2,0.2)(2.2,-1)(1,-2.2)
\end{pspicture}
\hspace{0.8cm}
\begin{pspicture}(-3,-2.5)(3,2.5)
\terminal{0,1}{t1}\terminal{1,0}{t2}\terminal{-1,0}{t3}\terminal{0,-1}{t4}
\terminal{1,2}{t1a}\terminal{2,1}{t2a}\terminal{-1,2}{t3a}\terminal{-2,1}{t4a}
\terminal{1,-2}{t1b}\terminal{2,-1}{t2b}\terminal{-1,-2}{t3b}\terminal{-2,-1}{t4b}
\psccurve(2.2,0.8)(1.8,1.8)(0.8,2.2)(1.2,1.2)
\psccurve(-2,-1.3)(-2.4,0)(-2,1.3)(-1.6,0)
\psccurve(-1,-2.3)(-1.4,0)(-1,2.3)(0.8,0.8)(2.2,-1.2)(1,-2.2)
\end{pspicture}
\end{center}
\caption{Illustration of merging. The left figure shows a (solid)
partition $\pi$ along with a (dashed) set $S$. The right figure
shows the merged partition $m(\pi, S)$.} \label{fig:merged}
\end{figure}
\noindent
We will use the following straightforward fact later:
\begin{equation}\label{eq:rdrop}\mathtt{rc}_K^\pi = r(\pi)-r(m(\pi,
K)).\end{equation}
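As a sanity check (ours, purely illustrative and not part of the development), the merge operation and rank contributions are easy to implement, and \eqref{eq:rdrop} can then be verified directly; partitions are represented as lists of frozensets.

```python
# Minimal sketch (not from the paper): partitions as lists of frozensets.

def rank(pi):
    """r(pi): the number of parts of the partition pi."""
    return len(pi)

def rc(K, pi):
    """Rank contribution rc_K^pi: number of parts of pi met by K, minus one."""
    return sum(1 for part in pi if part & K) - 1

def merge(pi, S):
    """m(pi, S): merge all parts of pi that intersect S."""
    hit = [p for p in pi if p & S]
    rest = [p for p in pi if not (p & S)]
    return rest + [frozenset().union(*hit)] if hit else list(pi)

# Example: pi = {1,2}{3,4}{5,6} and K = {1,3}: K meets two parts, so
# rc_K^pi = 1, and merging the parts K meets drops the rank from 3 to 2.
pi = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
K = frozenset({1, 3})
assert rc(K, pi) == rank(pi) - rank(merge(pi, K))  # fact (eq:rdrop)
```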
We now state the (true) inequalities which replace the false inequality \eqref{eq:naive}. Later, we show how one uses these to obtain
partition uncrossing, e.g.\ to prove \prettyref{ppty:quote}.
\begin{lemma}[Partition Uncrossing Inequalities]\label{lemma:pu}
Let $\pi, \pi' \in \Pi_R$ and let the parts of $\pi$ be $\pi_1,
\pi_2, \dotsc, \pi_{r(\pi)}$.
\begin{eqnarray}
\label{eq:const} r(\pi) \left[r(\pi')-1\right] +
\left[r(\pi)-1\right] &=& \left[r(\pi \wedge \pi')-1\right] +
\sum_{i=1}^{r(\pi)} \left[r(m(\pi', \pi_i))-1\right]\\
\label{eq:coeff}
\forall K \in \ensuremath{\mathcal{K}}: \quad r(\pi) \Bigl[\mathtt{rc}_K^{\pi'}\Bigr] +
\Bigl[\mathtt{rc}_K^\pi\Bigr] &\geq& \Bigl[\mathtt{rc}_K^{\pi \wedge \pi'}\Bigr] +
\sum_{i=1}^{r(\pi)} \Bigl[\mathtt{rc}_K^{m(\pi', \pi_i)}\Bigr]
\end{eqnarray}
\end{lemma}
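Since the lemma is purely combinatorial, it can be checked exhaustively for small ground sets. The following sketch (ours, for illustration only; $\wedge$ is implemented as the common refinement, consistent with its use in the proof below) verifies the equality \eqref{eq:const} and the inequality \eqref{eq:coeff} over all pairs of partitions of a four-element set.

```python
from itertools import combinations

def partitions(elems):
    """Enumerate all set partitions of a list, as lists of frozensets."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [p[i] | {first}] + p[i + 1:]
        yield p + [frozenset({first})]

def rank(pi): return len(pi)
def rc(K, pi): return sum(1 for part in pi if part & K) - 1

def merge(pi, S):
    """m(pi, S): merge the parts of pi that intersect the nonempty set S."""
    hit = [p for p in pi if p & S]
    return [p for p in pi if not (p & S)] + [frozenset().union(*hit)]

def meet(pi, pi2):
    """pi /\ pi2: the common refinement (nonempty pairwise intersections)."""
    return [a & b for a in pi for b in pi2 if a & b]

R = [1, 2, 3, 4]
Ks = [frozenset(c) for k in range(2, len(R) + 1) for c in combinations(R, k)]
for pi in partitions(R):
    for pi2 in partitions(R):
        lhs = rank(pi) * (rank(pi2) - 1) + (rank(pi) - 1)
        rhs = (rank(meet(pi, pi2)) - 1) + sum(rank(merge(pi2, p)) - 1 for p in pi)
        assert lhs == rhs  # equality (eq:const)
        for K in Ks:
            assert (rank(pi) * rc(K, pi2) + rc(K, pi)
                    >= rc(K, meet(pi, pi2))
                    + sum(rc(K, merge(pi2, p)) for p in pi))  # (eq:coeff)
```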
\noindent
Before giving the proof of the above lemma, let us first show how it can be used to prove the statement \prettyref{ppty:quote}. \\
\noindent
{\em Proof of \prettyref{ppty:quote}}.
Since $\pi$ and $\pi'$ are tight,
\begin{align*}
r(\pi)[r(\pi') - 1] + [r(\pi) - 1] = r(\pi)\Bigl[\sum_{K} x_K\mathtt{rc}_K^{\pi'}\Bigr] + \Bigl[\sum_K x_K\mathtt{rc}_K^\pi\Bigr] = \sum_K x_K \biggl(r(\pi)\Bigl[\mathtt{rc}_K^{\pi'}\Bigr] + \Bigl[\mathtt{rc}_K^\pi\Bigr]\biggr) \\
\ge \sum_K x_K\biggl(\Bigl[\mathtt{rc}_K^{\pi \wedge \pi'}\Bigr] +
\sum_{i=1}^{r(\pi)} \Bigl[\mathtt{rc}_K^{m(\pi', \pi_i)}\Bigr]\biggr) = \sum_K x_K\Bigl[\mathtt{rc}_K^{\pi \wedge \pi'}\Bigr] + \sum_{i=1}^{r(\pi)} \sum_K x_K \Bigl[\mathtt{rc}_K^{m(\pi', \pi_i)}\Bigr] \\
\ge \left[r(\pi \wedge \pi')-1\right] +
\sum_{i=1}^{r(\pi)} \left[r(m(\pi', \pi_i))-1\right] = r(\pi) \left[r(\pi')-1\right] +
\left[r(\pi)-1\right]
\end{align*}
where the first inequality follows from \eqref{eq:coeff} and the second from~\eqref{eq:LP-PU2} (as $x$ is feasible); the last equality is \eqref{eq:const}. Since the first and last terms are equal, all the inequalities are equalities; in particular, our application of \eqref{eq:LP-PU2} shows that $\pi \wedge \pi'$ and each $m(\pi',\pi_i)$ are tight.
Iterating the latter fact, we see that $m(\dotsb m(m(\pi', \pi_1), \pi_2), \dotsb)=\pi \vee \pi'$ is also tight. $\square$\\
\noindent
To prove the inequalities in \prettyref{lemma:pu} we need the following lemma that relates the rank of sets and the rank contribution of partitions. Recall $\rho(X) := \max(0,|X|-1)$.
\begin{lemma} \label{lemma:rc} For a partition $\pi = \{\pi_1, \dotsc, \pi_t\}$ of $R$, where $t = r(\pi)$, and for any $K\subseteq R$, we have
$$\rho(K) = \mathtt{rc}_K^\pi + \sum_{i=1}^t\rho(K \cap \pi_i).$$
\end{lemma}
\begin{proof}
By definition, $K \cap \pi_i \neq \varnothing$ for exactly $1 +
\mathtt{rc}_K^\pi$ values of $i$. Also, $\rho(K \cap \pi_i)=0$ for all other
$i$. Hence \begin{equation}\sum_{i=1}^t\rho(K \cap \pi_i) =
\sum_{i:K \cap \pi_i \neq \varnothing} (|K \cap \pi_i|-1) =
\left(\sum_{i:K \cap \pi_i \neq \varnothing} |K \cap
\pi_i|\right)-(\mathtt{rc}_K^\pi+1).\label{eq:houston}\end{equation} Observe
that $ \sum_{i:K \cap \pi_i \neq \varnothing} |K \cap
\pi_i|=|K|=\rho(K)+1$; using this fact together with Equation
\eqref{eq:houston} we obtain
$$\sum_{i=1}^t\rho(K \cap \pi_i) =
\left(\sum_{i:K \cap \pi_i \neq \varnothing} |K \cap
\pi_i|\right)-(\mathtt{rc}_K^\pi+1) = \rho(K)+1-(\mathtt{rc}_K^\pi+1) = \rho(K)-\mathtt{rc}_K^\pi.$$
Rearranging, the proof of \prettyref{lemma:rc} is complete.
\end{proof}
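The identity of \prettyref{lemma:rc} is likewise easy to confirm by brute force; the following check (ours, not part of the paper) runs over every partition and every subset of a five-element ground set.

```python
from itertools import combinations

def rho(X):
    """rho(X) = max(0, |X| - 1)."""
    return max(0, len(X) - 1)

def rc(K, pi): return sum(1 for part in pi if part & K) - 1

def partitions(elems):
    """Enumerate all set partitions of a list, as lists of frozensets."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [p[i] | {first}] + p[i + 1:]
        yield p + [frozenset({first})]

R = [1, 2, 3, 4, 5]
for pi in partitions(R):
    for k in range(1, len(R) + 1):
        for K in map(frozenset, combinations(R, k)):
            # Lemma rc: rho(K) = rc_K^pi + sum_i rho(K intersect pi_i)
            assert rho(K) == rc(K, pi) + sum(rho(K & part) for part in pi)
```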
\noindent
{\em Proof of \prettyref{lemma:pu}}.
First, we argue that $\pi \wedge \pi' = \overline{\pi}$ holds without loss of generality.
In the general case, for each part $p$ of $\pi \wedge \pi'$ with
$|p|\ge 2$, contract $p$ into one pseudo-vertex and define
the new $K$ to include the pseudo-vertex corresponding to $p$ if and
only if $K \cap p \neq \varnothing$. This contraction does not affect
the value of any of the terms in Equations \eqref{eq:coeff} and
\eqref{eq:const}, so is without loss of generality. After contraction, for any part $\pi_i$ of
$\pi$ and part $\pi'_j$ of $\pi'$, we have $|\pi_i \cap \pi'_j| \leq
1$, so indeed $\pi \wedge \pi' = \overline{\pi}$.
\begin{proof}[Proof of Equation \eqref{eq:const}]
Fix $i$. Since $|\pi_i \cap \pi'_j| \leq 1$ for all $j$, the rank
contribution $\mathtt{rc}_{\pi_i}^{\pi'}$ is equal to $|\pi_i|-1.$ Then
using Equation \eqref{eq:rdrop} we know that $r(m(\pi', \pi_i)) =
r(\pi') - |\pi_i|+1$. Thus adding over all $i$, the right-hand side
of Equation \eqref{eq:const} is equal to
$$|R|-1 + \sum_{i=1}^{r(\pi)}(r(\pi')-|\pi_i|) = |R|-1 + r(\pi)r(\pi')-|R|$$
and this is precisely the left-hand side of Equation
\eqref{eq:const}.
\end{proof}
\begin{proof}[Proof of Equation \eqref{eq:coeff}]
Fix $i$. Since $|\pi_i \cap \pi'_j| \leq 1$ for all $j$, we have
\begin{equation}
\mathtt{rc}_K^{\pi'} - \mathtt{rc}_K^{m(\pi', \pi_i)} \geq \rho(\pi_i \cap
K)\label{eq:tight}
\end{equation}
because, when we merge the parts of $\pi'$ intersecting $\pi_i$, we
make $K$ span at least $\rho(\pi_i \cap K)$ fewer parts. Note that the inequality could be strict if both $\pi_i$ and $K$ intersect a part of $\pi'$ without having a common vertex in that part.
Adding Equation \eqref{eq:tight} over all $i$
gives
\begin{equation}\sum_{i=1}^{r(\pi)}(\mathtt{rc}_K^{\pi'} - \mathtt{rc}_K^{m(\pi', \pi_i)}) \geq \sum_{i=1}^{r(\pi)}\rho(\pi_i \cap K) = \rho(K)-\mathtt{rc}_K^\pi,\label{eq:snore}\end{equation}
where the last equality follows from \prettyref{lemma:rc}. To finish
the proof we observe $\rho(K) = \mathtt{rc}_K^{\pi\wedge\pi'}$, since $\pi
\wedge \pi' = \overline{\pi}$.
\end{proof}
\noindent
This completes the proof of \prettyref{lemma:pu}. $\hfill \Box$\\
\subsection{Sparsity of Basic Feasible Solutions: Proof of \prettyref{thm:1}}\label{sec:pbu}
\begin{proof}
Let $\mathop{{\tt supp}}(x^*)$ be the set of full components $K$ with $x^*_K>0$. Consider
the constraint submatrix with rows corresponding to the
tight partitions and columns corresponding to the full components in
$\mathop{{\tt supp}}(x^*)$. Since $x^*$ is a basic feasible solution, any full-rank
subset of rows uniquely defines $x^*$. We now show that any maximal chain $\ensuremath{\mathcal{C}}$ in $\mathop{{\tt tight}}(x^*)$
corresponds to such a subset.
Let ${\tt row}(\pi) \in \mathbf{R}^{\mathop{{\tt supp}}(x^*)}$ denote the row corresponding to partition $\pi$ of
this matrix, i.e., ${\tt row}(\pi)_K = \mathtt{rc}^\pi_K$, and given a collection $\mathcal{R}$ of partitions (rows), let
$\mathop{{\tt span}}(\mathcal{R})$ denote the linear span of the rows in $\mathcal{R}$.
We now prove that for any tight partition $\pi \notin \ensuremath{\mathcal{C}}$, we have
${\tt row}(\pi) \in \mathop{{\tt span}}(\ensuremath{\mathcal{C}})$; this will complete the proof of the theorem.
For sake of contradiction, suppose ${\tt row}(\pi) \not\in \mathop{{\tt span}}(\ensuremath{\mathcal{C}})$. Choose $\pi$ to be the
counterexample partition with smallest rank $r(\pi)$. Firstly, since
$\ensuremath{\mathcal{C}}$ is maximal, $\pi$ must cross some partition $\sigma$ in $\ensuremath{\mathcal{C}}$.
Choose $\sigma$ to be the most refined partition in $\ensuremath{\mathcal{C}}$ which crosses
$\pi$. Let the parts of $\sigma$ be $(\sigma_1,\ldots,\sigma_t)$. The
following claim uses the partition uncrossing inequalities to derive a
linear dependence between the rows corresponding to $\sigma,\pi$ and
the partitions formed by merging parts of $\sigma$ with $\pi$.
\begin{claim}\label{claim:talon}
We have ${\tt row}(\sigma) + r(\sigma)\cdot {\tt row}(\pi) = {\tt row}(\pi\wedge\sigma) + \sum_{i=1}^t {\tt row}(m(\pi,\sigma_i))$.
\end{claim}
\begin{proof}
Since $\sigma$ and $\pi$ are both tight partitions, the proof of
\prettyref{ppty:quote} shows that the partition inequality
\eqref{eq:coeff} holds with equality for all $K\in \mathop{{\tt supp}}(x^*)$,
$\pi$ and $\sigma$, implying the claim.
\end{proof}
Let $\mathtt{cp}_\pi(\sigma)$ be the parts of $\sigma$ which intersect at
least two parts of $\pi$; i.e., merging the parts of $\pi$ that
intersect $\sigma_i$, for any $\sigma_i \in \mathtt{cp}_\pi(\sigma)$,
decreases the rank of $\pi$. Formally,
$$\mathtt{cp}_\pi(\sigma) := \{ \sigma_i \in \sigma: ~~ m(\pi,\sigma_i) \neq \pi \}.$$
Note that one can modify \prettyref{claim:talon} by subtracting $(r(\sigma) - |\mathtt{cp}_\pi(\sigma)|){\tt row}(\pi)$ from both sides to get
\begin{equation}\label{eq:lincomb}
{\tt row}(\sigma) + |\mathtt{cp}_\pi(\sigma)|\cdot {\tt row}(\pi) = {\tt row}(\pi\wedge\sigma) + \sum_{\sigma_i\in \mathtt{cp}_\pi(\sigma)} {\tt row}(m(\pi,\sigma_i))
\end{equation}
Now if ${\tt row}(\pi) \notin \mathop{{\tt span}}(\ensuremath{\mathcal{C}})$, then by \eqref{eq:lincomb} either ${\tt row}(\pi\wedge\sigma) \notin \mathop{{\tt span}}(\ensuremath{\mathcal{C}})$, or ${\tt row}(m(\pi,\sigma_i)) \notin \mathop{{\tt span}}(\ensuremath{\mathcal{C}})$ for some $i$.
We show that either case leads to the needed contradiction, which will prove the theorem.
\begin{description}
\item[Case 1:]${\tt row}(\pi \wedge \sigma) \notin \mathop{{\tt span}}(\ensuremath{\mathcal{C}})$. Note there is $\sigma'\in \ensuremath{\mathcal{C}}$ which crosses $\pi\wedge\sigma$, since $\pi\wedge\sigma$ is not in the maximal chain $\ensuremath{\mathcal{C}}$.
Since $\sigma', \sigma \in \ensuremath{\mathcal{C}}$ are comparable in the refinement order, it is easy to see that $\sigma'$ (strictly) refines $\sigma$ and that $\sigma'$ crosses $\pi$. This contradicts our choice of $\sigma$ as the most refined partition in $\ensuremath{\mathcal{C}}$ crossing $\pi$, since $\sigma'$ was also a candidate.
\item[Case 2:]
${\tt row}(m(\pi, \sigma_i)) \not\in \mathop{{\tt span}}(\ensuremath{\mathcal{C}})$. Note $m(\pi, \sigma_i)$ is also tight.
Since $\sigma_i \in \mathtt{cp}_\pi(\sigma)$, $m(\pi, \sigma_i)$ has smaller rank than
$\pi$. This contradicts our choice of $\pi$.
\end{description}
This completes the proof of \prettyref{thm:1}.
\end{proof}}{}
\section{Equivalence of Formulations}\label{sec:equiv}
In this section we describe our equivalence results. A summary of the
known and new results is given in \prettyref{fig:overview}.
\begin{figure}[htb]
\begin{center} \leavevmode
\myifthen{}{\psset{unit=0.8cm}}
\begin{pspicture}(-2,-2)(14,2.5)
\rput(3,-2){\rnode{S}{\psframebox{\ensuremath{\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-S}}}}}
\rput(3,2){\rnode{PU}{\psframebox{\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}}}}
\rput(0,0){\rnode{P}{\psframebox{\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-P2}}}}
\rput(6,0){\rnode{D}{\psframebox{\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PUDir}}}}
\rput(12,0){\rnode{B}{\psframebox{\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-B}}}}
\psset{arcangle=30}
\ncline{-}{PU}{P}\mput*{$=$ {[Thm.~\ref{theorem:p-pu}]}}
\ncline{-}{P}{S}\mput*{$=$ {[Thm.~\ref{theorem:spe}]}}
\ncline{-}{S}{D}\mput*{$=$ {\cite{PVd03}}}
\ncline{-}{PU}{D}\mput*{$=$ {\myifthen{[Appendix \ref{app:reproof}]}{\cite{CKP09}}}}
\ncarc{-}{D}{B}\mput*{$\ge$ {[Lemma \ref{lem:simple}],\cite{PVd03}}}
\ncarc{-}{B}{D}\mput*{$\le$ in quasi-bipartite {[Thm.~\ref{theorem:lifting}]}}
\end{pspicture}
\end{center}
\caption{Summary of relations among various LP relaxations}
\label{fig:overview} \end{figure}
\myifthen{
As we mentioned in the introduction, we give a redundant set of proofs
for completeness and to demonstrate novel techniques. The proof that \eqref{eq:LP-PU} and \eqref{eq:LP-PUDir} have the same value, which appears in Appendix \ref{app:reproof}, is a consequence of hypergraph orientation results of Frank et al.~\cite{FKK03b}.
\subsection{Bounded and Unbounded Partition Relaxations}
\begin{theorem}\label{theorem:p-pu}
The LPs \eqref{eq:LP-P2} and \eqref{eq:LP-PU} have the same optimal
value.
\end{theorem}
\noindent We actually prove a stronger statement.
\begin{definition}The collection $\ensuremath{\mathcal{K}}$ of hyperedges is
\emph{down-closed} if whenever $S \in \ensuremath{\mathcal{K}}$ and $\varnothing \neq T
\subset S$, then $T \in \ensuremath{\mathcal{K}}.$ For down-closed $\ensuremath{\mathcal{K}}$, the cost function
$C: \ensuremath{\mathcal{K}} \to \mathbf{R}_+$ is \emph{non-decreasing} if $C_S \leq C_T$ whenever
$S \subset T$.\end{definition}
\begin{theorem}\label{theorem:p-pu-app}
If the set of hyperedges is down-closed and the cost function is
non-decreasing, then \eqref{eq:LP-P2} and \eqref{eq:LP-PU} have the
same optimal value.
\end{theorem}
\prettyref{theorem:p-pu-app} implies \prettyref{theorem:p-pu} since
the hypergraph and cost function derived from instances of the Steiner
tree problem are down-closed and non-decreasing (e.g.~$C_{\{k\}} = 0$
for every $k \in R$; we remark that the variables $x_{\{k\}}$ act just
as placeholders). Our proof of \prettyref{theorem:p-pu-app} relies on the
following operation which we call {\em shrinking}.
\begin{definition}
Given an assignment $x:\ensuremath{\mathcal{K}} \to \mathbf{R}_+$ to the full components, suppose $x_K > 0$ for some $K$. The operation ${\tt Shrink}(x,K,K',\delta)$, where $K' \subseteq K$, $|K'| = |K|-1$ and $0 < \delta \le x_K$, changes $x$ to $x'$ by decreasing $x'_K := x_K - \delta$ and increasing $x'_{K'} := x_{K'} + \delta$. \end{definition}
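A {\tt Shrink} step is straightforward to express in code. The sketch below (ours, with $x$ stored as a dictionary from frozensets to nonnegative weights) also records the quantity $\sum_K x_K|K|$, which the proof below drives down.

```python
def shrink(x, K, Kp, delta):
    """Shrink(x, K, K', delta): move delta units of weight from component K
    to K' = K minus one terminal. Requires K' a subset of K with
    |K'| = |K| - 1 and 0 < delta <= x[K]."""
    assert Kp < K and len(Kp) == len(K) - 1 and 0 < delta <= x[K]
    y = dict(x)
    y[K] = x[K] - delta
    y[Kp] = x.get(Kp, 0.0) + delta
    return y

def weighted_size(x):
    """sum_K x_K |K|, the potential minimized in the proof below."""
    return sum(v * len(k) for k, v in x.items())

x = {frozenset({1, 2, 3}): 1.0}
y = shrink(x, frozenset({1, 2, 3}), frozenset({1, 2}), 0.5)
assert weighted_size(y) < weighted_size(x)  # shrinking strictly reduces it
```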
\noindent
Note that shrinking is defined only for down-closed hypergraphs. Also note that a shrinking operation cannot increase the cost of the solution when the cost function is non-decreasing. The theorem is proved by taking an optimum solution to \eqref{eq:LP-PU} minimizing the sum $\sum_{K\in \ensuremath{\mathcal{K}}} x_K|K|$, and then showing that it must satisfy the equality in \eqref{eq:LP-P2}, since otherwise a shrinking operation could be performed. Now we give the details.
\begin{proof}[Proof of \prettyref{theorem:p-pu-app}]
It suffices to exhibit an optimum solution of \eqref{eq:LP-PU} which satisfies the equality in \eqref{eq:LP-P2}.
Let $x$ be an optimal solution to \eqref{eq:LP-PU} which minimizes the sum $\sum_{K\in \ensuremath{\mathcal{K}}} x_K|K|$.
\begin{claim}\label{claim:inter}
For every $K$ with $x_K>0$ and for every $r\in K$, there exists a tight partition (w.r.t.\ $x$) $\pi$ such that the part of $\pi$ containing
$r$ contains no other vertex of $K$.
\end{claim}
\begin{proof}
Let $K'=K\setminus\{r\}$. If the claim fails for this $K$ and $r$, then for every tight partition $\pi$ we have $\mathtt{rc}_K^\pi = \mathtt{rc}_{K'}^\pi$.
We now claim that there is a $\delta > 0$ such that we can perform ${\tt Shrink}(x,K,K',\delta)$ while retaining feasibility in \eqref{eq:LP-PU}. This is a contradiction since the shrink operation
strictly reduces $\sum_K |K|x_K$ and doesn't increase cost.
Specifically, take
\begin{center}
$\delta := \min\bigl\{x_K,\; \min_{\pi:\, \mathtt{rc}_{K'}^\pi \neq \mathtt{rc}_K^\pi} \bigl(\textstyle\sum_{J\in\ensuremath{\mathcal{K}}} \mathtt{rc}_J^\pi x_J - r(\pi)+1\bigr)\bigr\}$
\end{center}
which is positive: any partition $\pi$ with $\mathtt{rc}_{K'}^\pi \neq \mathtt{rc}_K^\pi$ is not tight, so its constraint in \eqref{eq:LP-PU} has positive slack.
\end{proof}
Let $\mathop{{\tt tight}}(x)$ be the set of tight partitions, and $\pi^* :=
\bigwedge \{ \pi \mid \pi \in \mathop{{\tt tight}}(x)\}$ the meet of all tight
partitions. By \prettyref{ppty:quote}, $\pi^*$ is tight. By
\prettyref{claim:inter}, for any $K$ with $x_K > 0$, we have
$\mathtt{rc}_K^{\pi^*} = |K|-1$. Thus, $r(\pi^*)-1 = \sum_{K\in \ensuremath{\mathcal{K}}}
x_K\mathtt{rc}_K^{\pi^*} = \sum_{K\in\ensuremath{\mathcal{K}}} x_K(|K| - 1) \ge r(\overline{\pi}) -
1$. But since $\overline{\pi}$ is the unique maximal-rank partition,
this implies $\pi^* = \overline{\pi}$. Thus $\overline{\pi}$ is
tight. This implies that $x$ is feasible for \eqref{eq:LP-P2}.
\end{proof}
\subsection{Partition and Subtour Elimination Relaxations}\label{sec:partsub}
\begin{theorem}\label{theorem:spe}
The feasible regions of \eqref{eq:LP-P2} and \eqref{eq:LP-S} are the
same.
\end{theorem}
\begin{proof}
Let $x$ be any feasible solution
to the LP \eqref{eq:LP-S}. Note that the equality constraint of \eqref{eq:LP-P2}
is the same as that of \eqref{eq:LP-S}.
We now show that $x$ satisfies \eqref{eq:LP-PU2}.
Fix a partition $\pi=\{\pi_1, \dotsc, \pi_t\}$, so $t=r(\pi)$. For
each $1 \leq i \leq t$, subtract the inequality constraint in \eqref{eq:LP-S}
with $S = \pi_i$, from the equality constraint in \eqref{eq:LP-S} to obtain
\begin{equation}
\sum_{K \in \ensuremath{\mathcal{K}}} x_K \Bigl( \rho(K)-\sum_{i=1}^t\rho(K \cap \pi_i) \Bigr)
\geq \rho(R)-\sum_{i=1}^t\rho(\pi_i). \label{eq:derived}\end{equation}
From \prettyref{lemma:rc}, $\rho(K)-\sum_{i=1}^t \rho(K \cap \pi_i) =
\mathtt{rc}_K^\pi$. We also have $\rho(R)-\sum_{i=1}^t\rho(\pi_i) =
|R|-1-(|R|-r(\pi)) = r(\pi)-1$. Thus $x$ is a feasible solution to the LP \eqref{eq:LP-P2}. \\
\noindent
Now let $x$ be a feasible solution to \eqref{eq:LP-P2}; it suffices to show that $x$
satisfies the inequality constraints of \eqref{eq:LP-S}. Fix a set $S\subset R$.
When $S = \varnothing$ the inequality constraint is vacuously true, so we may assume $S \neq
\varnothing$. Let $R \backslash S = \{r_1, \dotsc, r_u\}$. Consider the partition $\pi =
\{\{r_1\}, \dotsc, \{r_u\}, S\}$. Subtract \eqref{eq:LP-PU2} for this
$\pi$ from the equality constraint in \eqref{eq:LP-P2}, to obtain
\begin{equation}\label{eq:donut} \sum_{K \in \ensuremath{\mathcal{K}}} x_K
(\rho(K)-\mathtt{rc}_K^{\pi}) \leq \rho(R)-r(\pi)+1.\end{equation} Using
\prettyref{lemma:rc} and the fact that $\rho(K \cap \{r_j\}) = 0$ (the set is either empty or a singleton), we get
$\rho(K)-\mathtt{rc}_K^{\pi} = \rho(K \cap S)$. Finally, as $\rho(R)-r(\pi)+1 = |R|-1-(|R\backslash S|+1)+1 = \rho(S),$ the inequality \eqref{eq:donut} is the same as the constraint needed.
Thus $x$ is a feasible solution to \eqref{eq:LP-S}, proving the theorem.
\end{proof}
\subsection{Partition and Bidirected Cut Relaxations in Quasibipartite Instances}\label{sec:lifting}
\begin{theorem}\label{theorem:lifting}
On quasibipartite Steiner tree instances, $\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-B} \ge \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PUDir}$.
\end{theorem}
\noindent
To prove \prettyref{theorem:lifting}, we
look at the duals of the two LPs and we show $\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-BD}
\ge \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-A}$ in quasibipartite instances.
Recall that the support of a solution to \eqref{eq:LP-A} is the family
of sets with positive $z_U$. A family of sets is called \emph{laminar} if for
any two of its sets $A,B$ we have $A\subseteq B, B\subseteq A$, or $A\cap B=\varnothing$.
\begin{lemma} \label{lemma:3lps}
There exists an optimal solution to \eqref{eq:LP-A} whose support is a laminar family of sets.
\end{lemma}
\begin{proof}
Choose an optimal solution $z$ to \eqref{eq:LP-A} which maximizes
$\sum_U z_U|U|^2$ among all optimal solutions. We claim that the
support of this solution is laminar. Suppose not: then there exist
crossing sets $U$ and $U'$ (so $U\cap U' \neq \varnothing$, $U \not\subseteq U'$
and $U' \not\subseteq U$) with $z_U > 0$ and
$z_{U'} > 0$. Define $z'$ to be the same as $z$
except $z'_{U} = z_U - \delta$, $z'_{U'} = z_{U'} - \delta$,
$z'_{U\cup U'} = z_{U\cup U'} + \delta$ and $z'_{U\cap U'} =
z_{U\cap U'} + \delta$; we will show for small $\delta > 0$, $z'$ is feasible. Note that
$U\cap U'$ is not empty and $U\cup U'$ doesn't contain $r$, and the
objective value remains unchanged. Also note that for any $K$ and
$i\in K$, if $z_{U\cup U'}$ or $z_{U\cap U'}$ appears in the summand of a constraint,
then at least one of $z_{U}$ or $z_{U'}$ also appears. If both
$z_{U\cup U'}$ and $z_{U\cap U'}$ appear, then both $z_U$ and
$z_{U'}$ appear. Thus $z'$ is an optimal solution with $\sum_U
z'_U|U|^2 > \sum_U z_U|U|^2$, contradicting the choice of $z$.
\end{proof}
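Both ingredients of this proof, the laminarity condition and the strict increase of the potential $\sum_U z_U|U|^2$ under uncrossing, are easy to check in code (our sketch, not part of the paper):

```python
def is_laminar(family):
    """Any two sets of the family nest or are disjoint."""
    sets = [frozenset(s) for s in family]
    return all(a <= b or b <= a or not (a & b)
               for i, a in enumerate(sets) for b in sets[i + 1:])

assert is_laminar([{1, 2, 3}, {1, 2}, {3}, {4, 5}])
assert not is_laminar([{1, 2}, {2, 3}])  # {1,2} and {2,3} cross

# Uncrossing two crossing sets U, U' into their union and intersection
# preserves the total dual but strictly increases sum_U z_U |U|^2:
U, Up = frozenset({1, 2}), frozenset({2, 3})
assert len(U | Up) ** 2 + len(U & Up) ** 2 > len(U) ** 2 + len(Up) ** 2
```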
\begin{lemma}\label{lem:main-qb}
For quasibipartite instances, given a solution of \eqref{eq:LP-A}
with laminar support, we can get a feasible solution to
\eqref{eq:LP-BD} of the same value.
\end{lemma}
\begin{proof}
This lemma is the heart of the theorem, and is a little technical to
prove. We first give a sketch of how we convert a feasible
solution $z$ of \eqref{eq:LP-A} into a feasible solution to
\eqref{eq:LP-BD} of the same value.
Comparing \eqref{eq:LP-A} and \eqref{eq:LP-BD} one first notes that
the former has a variable for every valid subset of the terminals,
while the latter assigns values to all valid subsets of the entire
vertex set.
We say that an edge $uv$ is \emph{satisfied} for a candidate
solution $z$, if both a) $\sum_{U:u\in U, v\notin U} z_{U} \le
c_{uv}$ and b) $\sum_{U:v\in U, u\notin U} z_{U} \le c_{uv}$
hold; $z$ is then feasible for \eqref{eq:LP-BD} if {\em all} edges
are satisfied.
Let $z$ be a feasible solution to \eqref{eq:LP-A}.
One easily verifies that all terminal-terminal edges are
satisfied. On the other hand, terminal-Steiner edges may
initially not be satisfied. To see this consider the Steiner vertex
$v$ and its neighbours depicted in \prettyref{fig:lift} below.
Initially, none of the sets in $z$'s support contains $v$, and
the load on the edges incident to $v$ is quite {\em skewed}:
the left-hand side of condition a) above may be large, while
the left-hand side of condition b) is initially $0$.
To construct a valid solution for \eqref{eq:LP-BD}, we therefore
{\em lift} the initial value $z_S$ of each terminal subset $S$ to
supersets of $S$, by adding Steiner vertices. The lifting
procedure processes each Steiner vertex $v$ one at a time; when
processing $v$, we change $z$ by moving dual from some sets $U$ to
$U \cup \{v\}$. Such a dual transfer decreases the left-hand side
of condition a) for edge $uv$, and increases the
(initially $0$) left-hand sides of condition b) for edges connecting $v$ to
neighbours of $v$ outside $U$.
We will soon see that there is a way of carefully lifting duals
around $v$ that ensures that all edges incident to $v$ become
satisfied. The definition of our procedure will ensure
that these edges remain satisfied for the rest of the lifting
procedure. Since there are no Steiner-Steiner edges, all edges will
be satisfied once all Steiner vertices are processed.
\piccaptioninside
\piccaption{\label{fig:lift} Lifting variable $z_U$.}
\parpic(5.5cm,4.5cm)[fr]{\includegraphics[scale=.8]{lift.eps}}
Throughout the lifting procedure, we will maintain that $z$ remains
unchanged, when projected to the terminals. Formally, we maintain
the following crucial {\em projection invariant}:
\begin{equation}\label{eq:pi}\tag{PI}
\mbox{\begin{minipage}{7cm}
The quantity
$\sum_{U: S\subseteq U
\subseteq S\cup (V\setminus R)} z_{U} $\\[1ex]
remains constant, for all terminal
sets $S$.
\end{minipage}}
\end{equation}
This invariant leads to two observations: first, the
constraint \eqref{eq:LP-A2} is satisfied by $z$ at all times, even
when it is defined on subsets of all vertices; second,
$\sum_{U\subseteq V} z_U$ is constant throughout, and
the objective value of $z$ in \eqref{eq:LP-BD} is not affected
by the lifting. The existence of a lifting of duals around Steiner vertex $v$ such
that \eqref{eq:pi} is maintained, and such that all edges incident
to $v$ are satisfied, can be phrased as a feasibility problem for
a linear system of inequalities. We will use
Farkas' lemma and the feasibility of $z$ for \eqref{eq:LP-A2}
to complete the proof.
We now fill in the proof details. Let $\Gamma(v)$ denote the
set of neighbours of vertex $v$ in the given graph $G$. In each
iteration, where we process Steiner node $v$, let
$$ \mathcal U_v := \{U: z_U > 0 ~~\textrm{and} ~~U\cap \Gamma(v) \neq \varnothing \} $$
be the sets in $z$'s support that contain neighbours of $v$. Note
that $U\in \mathcal U_v$ could contain Steiner vertices on which the lifting
procedure has already taken place. However, by \eqref{eq:pi} and by
\prettyref{lemma:3lps} the multi-family $\{U\cap R: U\in\mathcal U_v\}$ is
laminar. In the lifting process, we will transfer $x_U$ units of the
$z_U$ units of dual of each set $U \in \mathcal U_v$ to the set $U'=U \cup
\{v\}$; this decreases the dual load (LHS of \eqref{eq:LP-BD1}) on
arcs from $U \cap \Gamma(v)$ to $v$ (e.g.~$uv$ in \prettyref{fig:lift}) and
increases the dual load on arcs from $v$ to $\Gamma(v) \backslash U$
(e.g.~$vu'$ in the figure). The following system of inequalities
describes the set of feasible liftings.
\begin{align}
\forall U\in \mathcal U_v: &\qquad x_U \le z_U \tag{L1} \label{eq:L1}\\
\forall u \in \Gamma(v): &\qquad \sum_{U: u\in U} (z_U - x_U) \le c_{uv} \tag{L2} \label{eq:L2}\\
\forall u \in \Gamma(v):&\qquad \sum_{U: u\notin U} x_U \le c_{uv} \tag{L3} \label{eq:L3}
\end{align}
\begin{claim}\label{claim:primal}
If \eqref{eq:L1}, \eqref{eq:L2}, \eqref{eq:L3} have a feasible
solution $x \ge 0$, then the lifting procedure can be performed at Steiner
vertex $v$, while maintaining the projection invariant property.
\end{claim}
\begin{proof}
Define the new solution to be $z_U := z_U - x_U$ and $z_{U\cup
\{v\}} := x_U$ for all $U\in \mathcal U_v$, with $z_U$ unchanged for
all other $U$. It is easy to check that all edges which were
satisfied remain satisfied, and \eqref{eq:L2} and \eqref{eq:L3}
imply that all edges incident to $v$ are satisfied. Also note that
the projection invariant property is maintained.
\end{proof}
By Farkas' lemma, if \eqref{eq:L1}, \eqref{eq:L2}, \eqref{eq:L3} do {\em
not} have a feasible solution $x \ge 0$, then there exist non-negative multipliers ---
$\lambda_U$ for all $U\in \mathcal U_v$, and $\alpha_u,\beta_u$ for all $u\in
\Gamma(v)$ --- satisfying the following dual set of linear inequalities:
\begin{align}
\sum_{U\in \mathcal U_v} \lambda_Uz_U + \sum_{u\in \Gamma(v)} \alpha_u \bigl(c_{uv}
- \sum_{U:u\in U} z_U\bigr) + \sum_{u\in \Gamma(v)} \beta_u c_{uv}
& \quad < \quad 0 \label{eq:D1} \tag{D1} \\
\forall U \in \mathcal U_v: \lambda_U - \sum_{u\in U} \alpha_u +
\sum_{u\notin U} \beta_u & \quad \ge \quad 0 \label{eq:D2} \tag{D2}
\end{align}
As a technicality, note that the sub-system
$\{\eqref{eq:L1},\eqref{eq:L2},x\ge0\}$ is feasible --- take $x=z$.
Thus any $\alpha, \beta, \lambda$ satisfying \eqref{eq:D1} and
\eqref{eq:D2} has $\sum_u \beta_u > 0$, so by dividing all $\alpha,
\beta, \lambda$ by $\sum_u \beta_u$, we may assume without loss of
generality that
\begin{align}
\sum_{u \in \Gamma(v)} \beta_u = 1 \tag{D3}. \label{eq:D3}
\end{align}
Subtracting \eqref{eq:D3} from \eqref{eq:D2} allows us to rewrite
the latter set of constraints conveniently as
\begin{align}
\forall U \in \mathcal U_v: &\qquad \lambda_U - \sum_{u\in U} (\alpha_u + \beta_u) + 1\ge 0 \tag{D2'}. \label{eq:D2'}
\end{align}
The following claim shows that \eqref{eq:L1}, \eqref{eq:L2},
\eqref{eq:L3} does have a feasible solution, and thus by
\prettyref{claim:primal}, lifting can be done, which completes the
proof of \prettyref{lem:main-qb}.
\begin{claim}\label{claim:dual}
There exists no feasible solution to $\{\alpha,\beta,\lambda \ge 0: \eqref{eq:D1},\eqref{eq:D2'}, \textrm{and } \eqref{eq:D3}\}$.
\end{claim}
\begin{proof}
Consider the linear program which minimizes the LHS of \eqref{eq:D1}
subject to the constraints \eqref{eq:D2'} and \eqref{eq:D3}. We show that the LP
has value at least $0$, which will complete the proof.
Let $(\lambda^*,\alpha^*,\beta^*)$ be an
optimal solution to the LP. In \prettyref{lemma:tu} we will show that the
constraint matrix of the LP is
totally unimodular; hence, since the right-hand side of the given
system is integral, we may assume that $\lambda^*, \alpha^*$, and
$\beta^*$ are non-negative and integral. From \eqref{eq:D3} we infer
\begin{equation}
\textrm{There is a unique $\bar{u} \in \Gamma(v)$ for which
$\beta^*_{\bar{u}}=1$; for all $u \neq \bar{u}$, $\beta^*_u=0$.}\label{eq:olda}\end{equation}
Moreover, since each $\lambda_U$ appears only in the two constraints \eqref{eq:D2'} and $\lambda_U \ge 0$, and since $\lambda_U$ has nonnegative coefficient in the objective, we may assume \begin{equation} \lambda^*_U = \lambda^*_U(\alpha^*, \beta^*) := \max\{\sum_{u\in U} (\alpha^*_u + \beta^*_u) - 1, 0\} \label{redu}\end{equation}
for all $U$.
Next, we establish the following:
\begin{equation}
\textrm{$\alpha^*_u+\beta^*_u \in \{0,1\}$ for all $u \in \Gamma(v)$.}\label{eq:oldb}\end{equation}
Suppose for the sake of contradiction that property \eqref{eq:oldb} does not hold for our solution.
Let $u$ be such that $\alpha^*_u+\beta^*_u \geq 2$. By \eqref{eq:olda}, $\alpha^*_u \ge 1$. We propose the following
update to our solution:
decrease $\alpha^*_u$ by $1$ (which by \eqref{redu} will decrease $\lambda^*_U$ by
$1$ for all $U\in \mathcal U_v$ containing $u$). This maintains the
feasibility of \eqref{eq:D2'}, and the objective value
decreases by
$$ \sum_{U\in \mathcal U_v : u \in U}z_U + \Bigl(c_{uv} - \sum_{U: u\in U} z_U\Bigr) $$
which is non-negative as $c \geq 0$.
By repeating this operation, we may clearly ensure property \eqref{eq:oldb}.
Let $K\subseteq \Gamma(v)$ be the set $\{u \mid \alpha^*_u + \beta^*_u =
1\}$ and recall $\bar{u}$ is the unique terminal with
$\beta^*_{\bar{u}}=1$; $\bar{u}$ is clearly a member of $K$.
At $(\alpha^*, \beta^*, \lambda^*)$, we evaluate the objective and collect like terms to get value
\begin{align*}
\sum_{U\in \mathcal U_v} z_U\rho(U\cap K) + \sum_{u\in K\setminus \bar{u}}
(c_{uv} - \sum_{U: u\in U}z_U) + c_{\bar{u}v}
&=
\sum_{u\in K} c_{uv} + \sum_{U\in \mathcal U_v}z_U(\rho(U\cap K) -|(K \backslash \bar{u}) \cap U|) \\
&= \sum_{u\in K} c_{uv} - \sum_{U \in \mathcal U_v: U \cap K \neq \varnothing, \bar{u} \not\in U}z_U
\end{align*}
where the last equality follows by considering cases. Finally, combining the fact that
$\sum_{u\in K} c_{uv} \ge C_K$ (since these edges form one possible full component on terminal set $K$) together with \eqref{eq:LP-A2} for the
pair $(K,\bar{u})$, it follows that the LP's optimal value is non-negative as needed.
\end{proof}
\begin{lemma}\label{lemma:tu}
The incidence matrix defined by \eqref{eq:D2'} and \eqref{eq:D3} is totally unimodular.
\end{lemma}
\begin{proof}
The incidence matrix has $|\mathcal U_v|+1$ rows ($|\mathcal U_v|$ corresponding to
\eqref{eq:D2'} and one last row corresponding to \eqref{eq:D3}) and
$|\mathcal U_v| + 2|\Gamma(v)|$ columns. Furthermore, the columns corresponding to $\alpha_u$'s are the same
as those corresponding to $\beta_u$'s, except for the last row,
where there are $0$'s in the $\alpha$-columns and $1$'s in the
$\beta$-columns.
To show that this matrix is totally unimodular
we use Ghouila-Houri's characterization of total unimodularity (e.g.\ see \cite[Thm. 19.3]{Sc86}):
\begin{theorem}[Ghouila-Houri 1962] A matrix is totally unimodular iff the following holds for \emph{every} subset $\mathcal{R}$ of rows: we can assign weights $w_r \in \{-1, +1\}$ to each row $r \in \mathcal{R}$ such that $\sum_{r\in \mathcal{R}} w_r r$ is a $\{0, \pm1\}$-vector.\end{theorem}
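For small matrices the Ghouila-Houri condition can be verified by brute force. The sketch below (our own illustration; the function name and example matrices are not from the text) enumerates every row subset and every $\pm 1$ weighting:

```python
from itertools import combinations, product

def ghouila_houri_ok(rows):
    # Brute-force check of Ghouila-Houri's condition: for every subset of
    # rows there must exist weights in {-1, +1} whose weighted row sum is a
    # {0, +1, -1}-vector.  Exponential time; only for tiny matrices.
    for k in range(1, len(rows) + 1):
        for subset in combinations(rows, k):
            if not any(
                all(abs(sum(w * row[j] for w, row in zip(signs, subset))) <= 1
                    for j in range(len(rows[0])))
                for signs in product((1, -1), repeat=k)
            ):
                return False
    return True

# An interval matrix (consecutive ones per row) is totally unimodular ...
tu = [(1, 1, 0), (0, 1, 1), (1, 1, 1)]
# ... while this matrix has determinant 2 and is not.
not_tu = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]
print(ghouila_houri_ok(tu), ghouila_houri_ok(not_tu))  # True False
```

For the second matrix the full row set admits no $\pm 1$ weighting with entrywise sums in $\{0,\pm1\}$, so the check fails, as expected.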
Note that we can safely ignore the columns corresponding to the
variables $\lambda_U$ for sets $U \in \mathcal U_v$, since each such
column contains a single $1$, in the row of constraint \eqref{eq:D2'}
for the set $U$.
The row subset $\mathcal{R}$ corresponds to a subset of $\mathcal U_v$ --- which we will denote $\mathcal{R} \cap \mathcal U_v$ --- plus possibly the single row corresponding to \eqref{eq:D3}. Each row in $\mathcal{R} \cap \mathcal U_v$ has its values determined by the characteristic vector of $U\cap \Gamma(v)$. So long as any set appears more than once in $\{U\cap \Gamma(v) \mid U \in \mathcal{R} \cap \mathcal U_v\}$ we can assign one copy weight $+1$ and the other copy weight $-1$; these rows cancel out. Thus, henceforth we assume $\{U\cap \Gamma(v) \mid U \in \mathcal{R} \cap \mathcal U_v\}$ has no duplicate sets.
There is a standard
representation of a laminar family as a forest of rooted trees,
where there is a node corresponding to each set, with containment in the family corresponding to ancestry in the forest. Given the
forest for the laminar family $\{U\cap \Gamma(v) \mid U \in \mathcal{R} \cap \mathcal U_v\}$, the assignment of weights to the rows of the matrix is as
follows. Let the root nodes of all trees be at height $0$ with
height increasing as one goes to children nodes. Give weight $-1$
to rows corresponding to nodes at even height, and weight $+1$ to rows
corresponding to nodes at odd height. If $\mathcal{R}$ contains
the row corresponding to \eqref{eq:D3}, give it weight $+1$.
Finally, let us argue that these weights have the needed property.
Consider first a column corresponding to $\alpha_u$ for any $u$.
The rows of $\mathcal{R}$ with $1$ in this column form a path, from the largest set containing $u$ (which is a root node) to the smallest set containing $u$.
The weighted sum in this column is an alternating sum $-1+1-1+1\dotsb$, which is either $-1$ or $0$, which is in $\{0, \pm1\}$ as needed. Second, in a column for some $\beta_u$, if $\mathcal{R}$ doesn't contain (resp.\ contains) the row corresponding to \eqref{eq:D3}, the weighted sum is the same as for $\alpha_u$ (resp.\ plus 1); in either case its weighted sum is in $\{0, \pm1\}$ as needed.
\end{proof}
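The height-parity weighting used in the proof can be made concrete. The toy sketch below (our own code and naming, assuming the input family is laminar) assigns weight $-1$ at even forest height and $+1$ at odd height, and checks that every column sum lies in $\{-1, 0\}$:

```python
def laminar_weights(family):
    # Height of a set = number of proper supersets within the laminar
    # family (= depth of its node in the forest representation).
    # Weight -1 at even height, +1 at odd height.
    fam = [frozenset(s) for s in family]
    return {S: (1 if sum(T > S for T in fam) % 2 else -1) for S in fam}

# Toy laminar family over ground set {1,...,6}.
fam = [{1, 2, 3, 4}, {1, 2}, {3}, {1}, {5, 6}]
w = laminar_weights(fam)
# The sets containing a fixed element u form a root-to-node path in the
# forest, so the weighted column sum alternates -1, +1, -1, ...
cols = {u: sum(wt for S, wt in w.items() if u in S) for u in range(1, 7)}
print(cols)  # every value is -1 or 0
```

Element $1$ lies in three nested sets, giving the alternating sum $-1+1-1=-1$; element $2$ lies in two, giving $-1+1=0$.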
This finishes the proof of \prettyref{lem:main-qb}, and hence also
that of \prettyref{theorem:lifting}.
\end{proof}
\iffalse
We use $z$ to construct a feasible solution of the same total
value.
Note that initially, for an edge $(u,v)$ with $u$ a terminal and $v$ a Steiner vertex,
the total $z$-value faced by $u$, that is, $\sum_{U:u\in
U, v\notin U} z_{U}$, can be much larger than $c_{uv}$, while the total $z$-value faced by $v$ is
$0$.
To construct a valid solution,
$\beta$, for \eqref{eq:LP-BD}, we {\em lift} the value of $z_U$ to
subsets $U'$ possibly containing Steiner vertices, such that $U
\subset U' \subset U \cup (V \backslash R)$. We do so in a manner such that
all edges $(u,v)$ are satisfied, and we get feasibility. This
proves the lemma.
The lifting algorithm starts with $\beta_U = z_U$ for all valid $U$
and proceeds Steiner vertex by Steiner vertex. Quasibipartiteness is
crucial in that we can treat every Steiner vertex independently in the
{\em lifting procedure} and do not have to bother about the
feasibility of edges between two Steiner vertices. For each Steiner
vertex $v$, we use the laminarity of the $z$ solution to get a tree
structure on the subsets $U\subseteq \Gamma(v)$, where $\Gamma(v)$ are
the neighbors of $v$. The algorithm proceeds in a top-down fashion in
these trees, transferring dual from $U$ to $U\cup v$, stopping when
the total dual value faced by each terminal $u$ in $\Gamma(v)$ on
$(u,v)$ edges is at most $c_{uv}$. Although this process might
possibly make the $z$-value faced by $v$ for some edge $(u,v)$ too
high, we use the feasibility of $z$ in \eqref{eq:LP-A} to show this
cannot be the case. A separate easy calculation shows that this
process also satisfies the constraints corresponding to
terminal-terminal edges. We now proceed to the details.
\begin{proof}
In \eqref{eq:LP-BD}, we have two constraints for each bidirected arc of an edge $(u,v)$: \\
one is \mbox{$\sum_{U:u\in U,v\notin U} \beta_U \le c_{uv}$} and the other
$\sum_{U: u\notin U, v\in U} \beta_U \le c_{uv}$. Given $\beta$'s, we say an edge $(u,v)$ is \emph{$u$-satisfied} if the first inequality holds and \emph{$v$-satisfied}
if the second holds. We say an edge is \emph{$u$-tight} or \emph{$v$-tight} if the inequalities are satisfied as equalities.
We say $(u,v)$ is satisfied if it is both $u$-satisfied and $v$-satisfied. Note $\beta \geq 0$ is feasible if and only if every edge is satisfied.
Our approach starts with $\beta_U = z_U$ for $U \subset R$ and $\beta_U=0$ otherwise. Observe that for every Steiner node $v$ and every edge $(u, v)$, $(u, v)$ is $v$-satisfied since $z$ is nonzero only on subsets of $R$, but $(u,v)$ is not necessarily $u$-satisfied. Also observe that all edges $(u,v)$ where $u$ and $v$ are both {\em terminals} are both $u$-satisfied and $v$-satisfied by this initial $\beta$; to see this, note that constraint in \eqref{eq:LP-A} for the full component $(u,v)$ precisely says that the edge $(u,v)$ is satisfied.
In the rest of our proof, we iterate through each Steiner vertex $v$ and
\emph{lift $\beta$ at $v$}. After lifting at $v$, all edges incident to $v$ become satisfied. Lifting consists of repeated application of the following transfer operation: reduce the value $\beta_U$ for some set $U$ not containing $v$, and increase $\beta_{U \cup \{v\}}$ by an equal amount. Our proof depends crucially on the laminarity of $\mathop{{\tt supp}}(z)$. Although $\mathop{{\tt supp}}(\beta)$ does not remain laminar during the algorithm, we maintain that its restriction to $R$, $\{U \cap R \mid U \in \mathop{{\tt supp}}(\beta)\}$, remains laminar. Note this is clearly true in the beginning since $z$'s support is laminar.
We now precisely define the lifting operation at a Steiner vertex $v$.
Let $\Gamma(v)$ be the set of neighbors of $v$ in $G$.
We will represent
$\{U \in \mathop{{\tt supp}}(\beta) \mid U \cap \Gamma(v) \neq \varnothing\}$ by a forest of rooted trees.
In more detail, we start with the standard representation of the laminar family $\{U \cap \Gamma(v) \mid U \in \mathop{{\tt supp}}(\beta)\} \backslash \{\varnothing\}$: the root nodes of the trees in the forest are sets with no proper supersets, and the parent of any other set is its minimal proper superset. Note that multiple $U\in \mathop{{\tt supp}}(\beta)$ could have the same intersection $S$ with $\Gamma(v)$.
We expand the node representing $S \subset \Gamma(v)$ into a directed path containing one node for each $U \in \mathop{{\tt supp}}(\beta)$ with $U \cap \Gamma(v) = S$, and with the children of $S$ made children of the node furthest from the root.
Now we describe the procedure for transferring duals with respect to $v$.\\
\hspace{-8mm}
\begin{boxedminipage}{7in}
Procedure {\sc Lift}$(v)$
\begin{enumerate}
\item We maintain a family $\bf A$ of \emph{active sets}. Initially $\bf A$ is the set of all root sets of the trees in the forest.
\item We decrease $\beta_U$ and increase $\beta_{U \cup \{v\}}$ for all $U\in \bf A$ at a uniform rate until either
\begin{enumerate}
\item all edges of the form $\{(u,v): u\in U \cap \Gamma(v)\}$ become $u$-satisfied for some $U\in \bf A$,
\item or $\beta_U$ becomes $0$ for some set $U\in \bf A$.
\end{enumerate}
\item In Case (a), we remove $U$ from $\bf A$ and repeat Step $2$.
\item In Case (b), we remove $U$ from $\bf A$ and put all children of $U$ (with respect to the forest) into $\bf A$. \\
Go back to Step 2.
\end{enumerate}
\end{boxedminipage}
We terminate when there are no active sets; the process must terminate since $\mathop{{\tt supp}}(\beta)$ is finite.
Call two sets $S, T$ \emph{$\Gamma(v)$-disjoint} if $S \cap T \cap \Gamma(v) = \varnothing$.
Note that the active sets are always $\Gamma(v)$-disjoint, and if at some point some vertex $u \in \Gamma(v)$ has $u \not\in \bigcup\bf A$, then $u \not\in \bigcup\bf A$ will continue to hold until termination.
We first make the following observations.
For any $S\subset R$, lifting preserves the invariant
\begin{equation}
\sum_{U \in \textrm{valid}(V): U \cap R = S} \beta_U = z_S. \label{eq:lifteq}
\end{equation}
This is because the drop in the value of a set $U$ is compensated by a
corresponding increase in $U\cup v$. Thus any edge between two
terminals remains satisfied throughout the whole algorithm, since it
was satisfied before the liftings.
We will show next in \prettyref{claim:claim-1} and \prettyref{claim:claim-2} that
after lifting at $v$, all edges incident to $v$ become satisfied.
Furthermore, when we lift at a Steiner vertex $v$, it does not affect
$\sum_{U:u\in U,v'\notin U} \beta_U$ or $\sum_{U: u\notin U, v'\in U}
\beta_U$ for any edge $(u, v')$ where $v'$ is another Steiner vertex.
Hence once we show both claims, it will follow that all edges are satisfied at termination, and
the proof of \prettyref{lem:main-qb} will be complete.
\begin{claim}\label{claim:claim-1}
When \textsc{Lift}$(v)$ terminates, all edges of the form $(u,v)$ are $u$-satisfied.
\end{claim}
\begin{proof}
Consider how the active set containing $u$ evolves over time (if any exists). If $u$ is not a member of any initial active set (i.e.\ the roots) then clearly $\beta_U = 0$ for all sets $U$ containing $u$, so $(u, v)$ is $u$-satisfied. If $u$ leaves $\bigcup\bf A$ due to Step 3, then $(u, v)$ is $u$-satisfied. If $u$ leaves $\bigcup\bf A$ due to Step 4, then we have reduced $\beta_U$ to 0 for all sets $U$ containing $u$, so $(u, v)$ is $u$-satisfied.
\end{proof}
\begin{claim}\label{claim:claim-2}
When \textsc{Lift}$(v)$ terminates, all edges of the form $(u,v)$ are $v$-satisfied.
\end{claim}
\begin{proof}
We first sketch the proof idea. Fix an edge $(u,v)$ and look at the
total $\beta$-value faced by $v$, that is, $\sum_{U: u\notin U}
\beta_{U \cup \{v\}}$. Since the increase in $\beta_{U\cup v}$ is
precisely the decrease in $\beta_U$, we will be done if we show that
the total drop in $\sum_{U:u\notin U}\beta_U$ is at most $c_{uv}$.
At a high level, in the {\sc Lift} procedure, the ``decrease'' of
$\beta_U$ for a set $U$ stops when some edge $(u',v)$ with $u'
\in U$ becomes $u'$-tight, that is, the ``new'' $\beta$-value faced
by
$u'$ is precisely $c(u',v)$. Therefore, using the laminarity of the
support of $\beta$, we can charge the ``new'' $\sum_{U:u\notin
U}\beta_U$ to the costs of a series of edges of the form
$(u_1,v),(u_2,v),\cdots,(u_k,v)$. So, the total {\em drop} is the
``old" $\sum_{U:u\notin U}\beta_U$ minus $\sum_{i=1}^k c(u_i,v)$,
which equals
$$\sum_{U:u\notin U}z_U - \sum_{i=1}^k c(u_i,v)$$
from \eqref{eq:lifteq}.
Now for the full component $K$ induced by $v$ and $(u_1,\cdots,u_k,u)$, one can show that the constraint in \eqref{eq:LP-A} with $u\in K$ implies that the first term above is {\em at most} $[c(u,v) + \sum_{i=1}^k c(u_i,v)]$, which completes the proof. We now give details of this rather technical proof.\\
\noindent
We must show that for any vertex $u \in \Gamma(v)$, at termination, we have $\sum_{U: u\notin U} \beta_{U \cup \{v\}} \le c_{uv}$. Let $\beta'_U$ be the value of $U$
before $v$ was lifted.
Since $\beta_{U \cup \{v\}}$ is precisely
the decrease in $\beta_U$, we have $\beta_{U \cup \{v\}} = \beta'_U - \beta_U$.
Thus we need to show
\begin{equation}\label{eq:toshow}
\sum_{U: u\notin U} (\beta'_U - \beta_U) \le c_{u,v}.
\end{equation}
Let $\mathcal Z$ denote the family of all sets $U$ for which $\beta_U$ was reduced to 0. Let $\mathcal F$ denote the family of all sets $U$ for which $\beta_U$ was reduced, but remained nonzero at termination; such sets must have left $\bf A$ due to Step 3. Sets in $\mathcal F$ are $\Gamma(v)$-disjoint since once Step 3 executes on a set $U$, none of the vertices in $U \cap \Gamma(v)$ will belong to any active sets in any further iterations. Furthermore, due to the condition in Step 2(a), each set $F\in\mathcal F$ contains a vertex $u_F\in F \cap \Gamma(v)$ such that the edge $(u_F,v)$ is $u_F$-tight, i.e.\
\begin{equation}\sum_{U:u_F\in U} \beta_U = c_{u_F,v}\label{eq:lala}\end{equation}
Let $K$ be the set of all such $u_F$'s.
Now look at the left hand side of the inequality \eqref{eq:toshow}. This can be rewritten as follows:
\begin{equation}
\sum_{U: u\notin U} (\beta'_U - \beta_U) = \sum_{U\in \mathcal Z: u\notin U} (\beta'_U - \beta_U) + \sum_{U\in \mathcal F: u\notin U} (\beta'_U - \beta_U) + \sum_{U\notin \mathcal Z\cup\mathcal F: u\notin U} (\beta'_U - \beta_U)
\label{eq:breakdown}\end{equation}
The summand in the third term of the right-hand side of \prettyref{eq:breakdown} is zero, but we preserve this term for useful manipulations.
Next, define the set $K'$ as follows:
\begin{itemize}
\item If $u \in F^*$ for some $F^* \in \mathcal F$, define $K' := \{u_F\}_{F \in \mathcal F, F \neq F^*}$;
\item otherwise, if $u \not\in \bigcup \mathcal F$, define $K' := \{u_F\}_{F \in \mathcal F}$.
\end{itemize}
Note that in any case, each set in $\mathcal Z$ contains at least one element of $K'$, each set $F \in \mathcal F$ with $F \neq F^*$ contains exactly one element of $K'$, $F^*$ (if it exists) contains no element of $K'$,
and each set in $\mathop{{\tt supp}}(\beta) \setminus (\mathcal Z \cup \mathcal F)$ contains at most one element of $K'$.
Define $\mathcal X := \{ U \in \mathop{{\tt supp}}(\beta) \setminus (\mathcal Z \cup \mathcal F) \mid U \cap K' \neq \varnothing \}$.
Since the sets in $\mathcal F$ are $\Gamma(v)$-disjoint and each $X \in \mathcal X$ is a subset of some $F\in \mathcal F$, each $X \in \mathcal X$ is contained in a set $F$ that does {\em not} contain $u$, and $|X \cap K'|=1$.
Recall that $\beta'_U - \beta_U = 0$ for all $U$ not in $\mathcal Z\cup\mathcal F$, so \prettyref{eq:breakdown} yields
\begin{align}
\sum_{U: u\notin U} (\beta'_U - \beta_U) &= \sum_{U\in \mathcal Z: u\notin U} (\beta'_U - \beta_U) + \sum_{U\in \mathcal F: u\notin U} (\beta'_U - \beta_U) + \sum_{U\in \mathcal X: u\notin U} (\beta'_U - \beta_U) \notag \\
&= \sum_{U\in \mathcal Z\cup\mathcal F\cup\mathcal X:u\notin U} \beta'_U - \sum_{U\in \mathcal Z\cup\mathcal F\cup\mathcal X:u\notin U} \beta_U\label{eq:toshow2}
\end{align}
Using the fact that $\beta_U = 0$ for all $U\in \mathcal Z$, the right summand in the RHS of \prettyref{eq:toshow2} is
\begin{equation}\sum_{U\in \mathcal Z\cup\mathcal F\cup\mathcal X:u\notin U} \beta_U = \sum_{U\in \mathcal F\cup\mathcal X:u\notin U} \beta_U = \sum_{u_F\in K'} \sum_{U:u_F\in U} \beta_U = \sum_{u_F\in K'} c_{u_F,v}\label{eq:pizza}\end{equation}
where the rightmost equality uses \prettyref{eq:lala}.
To interpret the left summand in the RHS of \prettyref{eq:toshow2}, take the set $K := K' \cup \{u\}$ and the full component/star formed by the edges $\{(v, w) \mid w \in K\}$. Then constraint in \eqref{eq:LP-A} for $K$ and $u$ implies
\begin{equation}\sum_{U: K\cap U\neq \varnothing, u\notin U} z_U \le C_K \le \sum_{u_F\in K'} c_{u_F,v} + c_{u,v}.\label{eq:burger}\end{equation}
Furthermore, since all sets $U$ in $\mathcal Z\cup\mathcal F\cup\mathcal X$ with $u \notin U$ have non-empty intersection with $K$, and using \prettyref{eq:lifteq}, we find
\begin{equation}\sum_{U \subset R: K \cap U \neq \varnothing, u \not\in U} z_U = \sum_{U \subset V: K \cap U \neq \varnothing, u \not\in U} \beta'_U \geq \sum_{U \in \mathcal Z\cup\mathcal F\cup\mathcal X, u \notin U} \beta'_U.\label{eq:cheese}\end{equation}
Combining Equations \eqref{eq:toshow2} to \eqref{eq:cheese}, we get \prettyref{eq:toshow} as needed.
\end{proof}
\end{proof}
\fi
}{
For lack of space, we present only sketches for our main equivalence
results in this extended abstract, and refer the reader to
\cite{CKP09} for details.
\begin{theorem}\label{theorem:p-pu}
The LPs \eqref{eq:LP-P2} and \eqref{eq:LP-PU} have the same optimal
value.
\end{theorem}
\noindent{\em Proof sketch.} To show this, it suffices
to find an optimum solution of \eqref{eq:LP-PU} which
satisfies the equality in \eqref{eq:LP-P2}; i.e., we want
to find a solution for which the maximal-rank partition
$\overline{\pi}$ is tight. We pick the optimum
solution to \eqref{eq:LP-PU} which minimizes the sum $\sum_{K\in
\ensuremath{\mathcal{K}}} x_K|K|$. Using \prettyref{ppty:quote}, we show
that either $\overline{\pi}$ is tight or there is a {\em
shrinking} operation which decreases $\sum_{K\in \ensuremath{\mathcal{K}}} x_K|K|$
without increasing the cost. Since the latter is impossible, the
theorem is proved.
\begin{theorem}\label{theorem:spe}
The feasible regions of \eqref{eq:LP-P2} and \eqref{eq:LP-S} are the
same.
\end{theorem}
\noindent{\em Proof sketch.} We show that the inequalities defining
\eqref{eq:LP-P2} are valid for \eqref{eq:LP-S}, and
vice-versa. Note that both have the same equality and
non-negativity constraints. To show that the partition inequality
of \eqref{eq:LP-P2} for $\pi$ holds for any $x \in
\eqref{eq:LP-S},$ we use the subtour inequalities in
\eqref{eq:LP-S} for every part of $\pi$. For the other direction,
given any subset $S\subseteq R$, we invoke the inequality in
\eqref{eq:LP-P2} for the partition $\pi$ that has $S$ as one part
and each remaining terminal as a singleton part.
\begin{theorem}\label{theorem:lifting}
On quasibipartite Steiner tree instances,
$\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-B} \ge \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PUDir}$.
\end{theorem}
\noindent{\em Proof sketch.} We look at
the duals of the two LPs and we show $\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-BD} \ge
\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-A}$ in quasibipartite instances. Recall that the
support of a solution to \eqref{eq:LP-A} is the family of sets with
positive $z_U$. A family of sets is called \emph{laminar} if for any
two of its sets $A,B$ we have $A\subseteq B, B\subseteq A$, or
$A\cap B=\varnothing$. The following fact follows from a standard ``set uncrossing'' argument.
\begin{lemma} \label{lemma:3lps}
There is an optimal solution to
\eqref{eq:LP-A} with laminar support.
\end{lemma}
Given the above result, we may now assume that we have a solution
$z$ to \eqref{eq:LP-A} whose support is laminar. The heart of the
proof of \prettyref{theorem:lifting} is to show that $z$ can be
converted into a feasible solution to \eqref{eq:LP-BD} of the same
value.
Comparing \eqref{eq:LP-A} and \eqref{eq:LP-BD} one first notes that
the former has a variable for every valid subset of the terminals,
while the latter assigns values to all valid subsets of the entire
vertex set. We say that an edge $uv$ is \emph{satisfied} for a
candidate solution $z$, if both a) $\sum_{U:u\in U, v\notin U} z_{U}
\le c_{uv}$ and b) $\sum_{U:v\in U, u\notin U} z_{U} \le c_{uv}$
hold; $z$ is then feasible for \eqref{eq:LP-BD} if {\em all} edges
are satisfied.
Let $z$ be a feasible solution to \eqref{eq:LP-A}.
One easily verifies that all terminal-terminal edges are
satisfied. On the other hand, terminal-Steiner edges may
initially not be satisfied; e.g., consider the Steiner vertex
$v$ and its neighbours depicted in \prettyref{fig:lift} below.
Initially, none of the sets in $z$'s support contains $v$, and
the load on the edges incident to $v$ is quite {\em skewed}:
the left-hand side of condition a) above may be large, while
the left-hand side of condition b) is initially $0$.
\piccaptioninside
\piccaption{\label{fig:lift} Lifting variable $z_U$.}
\parpic(4.5cm,4cm)[fr]{\includegraphics[scale=.7]{lift.eps}}
To construct a valid solution for \eqref{eq:LP-BD}, we therefore
{\em lift} the initial value $z_S$ of each terminal subset $S$ to
supersets of $S$, by adding Steiner vertices. The lifting
procedure processes each Steiner vertex $v$ one at a time; when
processing $v$, we change $z$ by moving dual from some sets $U$ to
$U \cup \{v\}$. Such a dual transfer decreases the left-hand side
of condition a) for edge $uv$, and increases the
(initially $0$) left-hand sides of condition b) for edges connecting $v$ to
neighbours other than $u$.
We are able to show that there is a way of carefully lifting duals
around $v$ that ensures that all edges incident to $v$ become
satisfied. The definition of our procedure will ensure
that these edges remain satisfied for the rest of the lifting
procedure. Since there are no Steiner-Steiner edges, all edges will
be satisfied once all Steiner vertices are processed.
Throughout the lifting procedure, we will maintain that $z$ remains
unchanged, when projected to the terminals. The main consequence
of this is that the objective value
$\sum_{U\subseteq V} z_U$ remains constant throughout, and
the objective value of $z$ in \eqref{eq:LP-BD} is not affected
by the lifting. This yields \prettyref{theorem:lifting}.
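As a toy illustration of the dual transfer, consider the sketch below; it is a deliberately simplified, sequential variant of the procedure described above (the full proof transfers dual at a uniform rate over a forest of active sets), and all names in it are ours:

```python
def lift_at(v, beta, c, gamma):
    # Simplified lifting at Steiner vertex v: for each terminal u in
    # gamma, transfer dual from sets U (largest first) to U | {v} until
    # edge (u, v) is u-satisfied, i.e. the load
    #   sum of beta_U over U with u in U, v not in U
    # drops to at most c[(u, v)].  Total dual mass is preserved.
    beta = dict(beta)
    for u in gamma:
        for U in sorted((U for U in list(beta) if u in U and v not in U),
                        key=len, reverse=True):
            load = sum(b for S, b in beta.items() if u in S and v not in S)
            t = min(max(load - c[u, v], 0.0), beta[U])  # dual to transfer
            beta[U] -= t
            beta[U | {v}] = beta.get(U | {v}, 0.0) + t
    return beta

b0 = {frozenset({1}): 5.0, frozenset({1, 2}): 2.0}
b1 = lift_at('v', b0, {(1, 'v'): 4.0, (2, 'v'): 10.0}, [1, 2])
# Total dual is preserved and edge (1, v) is now 1-satisfied.
print(sum(b1.values()),
      sum(b for S, b in b1.items() if 1 in S and 'v' not in S))  # 7.0 4.0
```

In the example, $3$ units of dual move from the sets containing terminal $1$ up to their liftings by $v$, so the load on edge $(1,v)$ falls from $7$ to its cost $4$ while $\sum_U \beta_U$ stays $7$.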
}
\section{Improved Integrality Gap Upper Bounds}\label{sec:gapbounds}
\myifthen{ We first show the improved bound of $73/60$ for uniformly
quasibipartite graphs. We then show the $(2\sqrt{2} - 1) \doteq
1.828$ upper bound on general graphs, which contains the main ideas,
and then end by giving a $\sqrt{3} \doteq 1.732$ upper bound.
}{
In this extended abstract, we show the improved bound of $73/60$ for
uniformly quasibipartite graphs, and due to space restrictions, we
only show the weaker $(2\sqrt{2} - 1) \doteq 1.828$ upper bound on
general graphs.
}
\subsection{Uniformly Quasibipartite Instances}
Uniformly quasibipartite instances of the Steiner tree problem are
quasibipartite graphs where the costs of the edges incident to any
given Steiner vertex are all equal. They were first studied by Gr\"opl et
al.~\cite{GH+02}, who gave a $73/60$ factor approximation
algorithm.
\myifthen{In the following, we show that the cost of the returned
tree is no more than $\frac{73}{60} \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$, which
upper-bounds the integrality gap by $\frac{73}{60}$.
}{}
We start by describing the algorithm of Gr\"opl et al.~\cite{GH+02} in
terms of full components. A collection $\ensuremath{\mathcal{K}}'$ of full components is
acyclic if there is no list of $t > 1$ distinct terminals and
hyperedges in $\ensuremath{\mathcal{K}}'$ of the form $r_1 \in K_1 \ni r_2 \in K_2 \dotsb
\ni r_t \in K_t \ni r_1$ --- i.e.~there are no \emph{hypercycles}.
\vspace{3ex}\noindent
\begin{boxedminipage}{\algobox}
Procedure \textsc{RatioGreedy}
\begin{algorithmic}[1]
\STATE Initialize the set of acyclic components $\mathcal L$ to $\varnothing$. \\
\STATE Let $L^*$ be a minimizer of $\frac{C_L}{|L| - 1}$ over all full components $L$ such that $|L| \ge 2$ and $\mathcal L\cup\{L\}$ is acyclic. \\
\STATE Add $L^*$ to $\mathcal L$. \\
\STATE Continue until $(R, \mathcal L)$ is a hyper-spanning tree and return $\mathcal L$.
\end{algorithmic}
\end{boxedminipage}
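A compact executable sketch of \textsc{RatioGreedy} (our own rendering; the acyclicity condition is implemented by requiring the terminals of a candidate component to lie in pairwise distinct connected components, tracked with union-find):

```python
class DSU:
    # Minimal union-find over a fixed item set.
    def __init__(self, items):
        self.p = {x: x for x in items}
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def ratio_greedy(terminals, comps):
    # comps: {frozenset_of_terminals: C_K}, each |K| >= 2.  Adding K keeps
    # the chosen family acyclic iff K's terminals lie in pairwise distinct
    # connected components; repeat until one component spans all terminals.
    dsu, chosen, parts = DSU(terminals), [], len(terminals)
    while parts > 1:
        best = min((K for K in comps
                    if len({dsu.find(u) for u in K}) == len(K)),
                   key=lambda K: comps[K] / (len(K) - 1))
        chosen.append(best)
        us = list(best)
        for u in us[1:]:
            dsu.union(us[0], u)
        parts -= len(best) - 1
    return chosen

comps = {frozenset('ab'): 1.0, frozenset('bc'): 1.0, frozenset('abc'): 1.5}
print(ratio_greedy(set('abc'), comps))  # picks the ratio-0.75 component
```

On the toy instance the three-terminal component has ratio $1.5/2 = 0.75$, beating both edges of ratio $1$, so a single pick spans all terminals.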
\vspace{0.75ex}
\begin{theorem} \label{theorem:uniformly}
On a uniformly quasibipartite instance \textsc{RatioGreedy}\ returns a
Steiner tree of cost at most $\frac{73}{60}\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$.
\end{theorem}
\myifthen{
\begin{proof}
Let $t$ denote the number of iterations and $\mathcal L :=
\{L_1,\ldots,L_t\}$ be the ordered sequence of full components
obtained. We now define a dual solution to \eqref{eq:LP-PUD}. Let
$\pi(i)$ denote the partition induced by the connected components of
$\{L_1, \dotsc, L_i\}$. Let $\theta(i)$ denote $C_{L_i}/(|L_i| - 1)$
and note that $\theta$ is nondecreasing. Define $\theta(0)=0$ for
convenience. We define a dual solution $y$ with
$$y_{\pi(i)} = \theta(i+1)-\theta(i)$$
for $0 \le i < t$, and all other coordinates of $y$ set to zero; $y$
is not generally feasible, but we will scale it down to make it
so. By evaluating a telescoping sum, it is not hard to find that
$\sum_i y_{\pi(i)} (r(\pi(i))-1) = C(\mathcal L)$. In the rest of the proof
we will show for any $K\in \ensuremath{\mathcal{K}}$, $\sum_i y_{\pi(i)} \mathtt{rc}^{\pi(i)}_K
\le 73/60\cdot C_K$ --- by scaling, this also proves that
$\frac{60}{73} y$ is a feasible dual solution, and hence completes
the proof.
Fix any $K \in \ensuremath{\mathcal{K}}$ and let $|K| = k$. Since the instance in question
is uniformly quasi-bipartite, the full component $K$ is a star with
a Steiner centre and edges of a fixed cost $c$ to each terminal in
$K$. For $1 \le i < k$, let $\tau(i)$ denote the last iteration $j$
in which $\mathtt{rc}_K^{\pi(j)} \ge k-i$. Let $K_i$ denote any subset of
$K$ of size $k-i+1$ such that $K_i$ contains at most one element
from each part of $\pi(\tau(i))$; i.e., $|K_i| = k-i+1$ and
$\mathtt{rc}_{K_i}^{\pi(\tau(i))} = k-i$.
Our analysis hinges on the fact that $K_i$ was a valid choice for
$L_{\tau(i)+1}$. More specifically, note that $\{L_1, \dotsc,
L_{\tau(i)}, K_i\}$ is acyclic, hence by the greedy nature of the
algorithm, for any $1 \le i < k,$ $$\theta(\tau(i)+1) =
C_{L_{\tau(i)+1}}/(|L_{\tau(i)+1}|-1) \le C_{K_i}/(|K_i|-1) \le
\frac{c \cdot (k-i+1)}{k-i}.$$ Moreover, using the definition of
$\tau$ and telescoping we compute
$$\sum_\pi y_\pi \mathtt{rc}_K^\pi = \sum_{i=0}^{t-1} (\theta(i+1)-\theta(i))\mathtt{rc}_K^{\pi(i)} = \sum_{i=1}^{k-1} \theta(\tau(i)+1)
\le \sum_{i=1}^{k-1} \frac{c \cdot (k-i+1)}{k-i} = c\cdot
(k-1+H(k-1)),$$ where $H(n)=\sum_{j=1}^n 1/j$ denotes the $n$th harmonic
number. Finally, note that $(k-1+H(k-1)) \le \frac{73}{60}k$ for all $k \ge 2$
(with equality at $k=5$). Therefore, $\frac{60}{73}y$ is a valid solution
to \eqref{eq:LP-PUD}.
\end{proof}
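The constant can be checked in exact arithmetic; the short sketch below (our own code) confirms that $(k-1+H(k-1))/k$ is maximized at $k=5$, where it equals $73/60$:

```python
from fractions import Fraction

def H(n):
    # n-th harmonic number, in exact rational arithmetic
    return sum((Fraction(1, j) for j in range(1, n + 1)), Fraction(0))

# (k - 1 + H(k-1)) / k for k = 2, ..., 99; the ratio tends to 1 for
# large k, so this range suffices to locate the maximum.
ratios = {k: (k - 1 + H(k - 1)) / k for k in range(2, 100)}
best = max(ratios, key=ratios.get)
print(best, ratios[best] == Fraction(73, 60))  # 5 True
```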
}
{
\noindent{\emph{Proof sketch.}} Let $t$ denote the number of iterations and $\mathcal L :=
\{L_1,\ldots,L_t\}$ be the ordered sequence of full components
obtained. We now define a dual solution $y$ to
\eqref{eq:LP-PUD}. Let $\pi(i)$ denote the partition induced by the
connected components of $\{L_1, \dotsc, L_i\}$. Let $\theta(i)$
denote $C_{L_i}/(|L_i| - 1)$ and note that $\theta$ is
nondecreasing. Define $\theta(0)=0$ for convenience. We define a
dual solution $y$ with
$$y_{\pi(i)} = \theta(i+1)-\theta(i)$$
for $0 \le i < t$, and all other coordinates of $y$ set to zero. It
is straightforward to verify that the objective value $\sum_i y_{\pi(i)} (r(\pi(i))-1)$ of $y$ in \eqref{eq:LP-PUD} equals $C(\mathcal L)$.
The key is to show that for all $K \in \ensuremath{\mathcal{K}}$,
\begin{equation}
\label{eq:bamz}
\sum_i y_{\pi(i)} \mathtt{rc}^{\pi(i)}_K \le
(|K|-1+H(|K|-1))/|K|\cdot C_K,
\end{equation}
where $H(n)=\sum_{j=1}^n 1/j$ is the $n$th harmonic number; this is obtained by using the greedy nature of the algorithm and the
fact that, in uniformly quasi-bipartite graphs, $C_{K'} \le C_K
\frac{|K'|}{|K|}$ whenever $K' \subset K$. Now,
$(|K|-1+H(|K|-1))/|K|$ is always at most $\frac{73}{60}$. Thus \eqref{eq:bamz} implies that $\frac{60}{73}\cdot y$
is a feasible dual solution, which completes the proof.
}
\comment{This method also gives a 5/4 integrality gap bound on
instances where every full component has size at most 3, and an
$H(t-1)$ integrality gap bound for \eqref{eq:LP-PU} on general
hypergraphs of maximum hyperedge size $t$ (i.e.\ ones not obtained
from instances of the Steiner tree problem) --- see \cite{CKP09}.}
\comment{we also get an integrality gap bound of $H(t-1)$ on
\eqref{eq:LP-PU} as an LP relaxation for the min-cost spanning
sub-hypergraph problem when all hyperedges have size at most
$t$. This complements the observation by Baudis et al.~\cite{BG+00}
that \textsc{RatioGreedy}\ has approximation ratio $H(t-1)$, which is in turn a
generalization of the submodular set cover framework of
Wolsey~\cite{Wo82}. This is nearly best possible for $t \ge 4$ since
``set cover with maximum set size $k$'' can be reduced to
``spanning connected hypergraph with maximum edge size $k+1$'' by
creating a new root vertex and adding it to all sets. This set cover
problem is \ensuremath{\mathsf{APX}}-hard for $k \ge 3$ and Trevisan~\cite{Trev01} showed
$\ln k - O(\ln \ln k)$ inapproximability unless \ensuremath{\mathsf{P}}=\ensuremath{\mathsf{NP}}.
We also can extend these results, employing computational power, to
get an integrality gap bound $\beta_r$ in the case of Steiner tree
instances with at most $r$ terminals per full component. We obtain
integrality gap upper bounds $\beta_3 = 5/4, \beta_4 = 11/8, \beta_5
= 119/82, \beta_6 = 3/2$ respectively.}
\subsection{General graphs}
\def\drop{{\tt drop}} \def\gain{{\tt gain}}
\myifthen{
We start with a few definitions and notations in order to prove the
$2\sqrt{2}-1$ and $\sqrt{3}$ integrality gap bounds on \eqref{eq:LP-PU}. Both results use
similar algorithms, and the latter is a more complex version of the
former.
}{}
For conciseness we let a ``graph'' be a triple $G = (V, E, R)$ where $R \subset V$ are $G$'s terminals. In the following, we let $\ensuremath{{\mathtt{mtst}}}(G; c)$ denote the minimum
\emph{terminal spanning tree}, i.e.~the minimum spanning tree of the terminal-induced subgraph $G[R]$ under edge-costs $c : E \to \mathbf{R}$. We will abuse notation and let $\ensuremath{{\mathtt{mtst}}}(G; c)$ mean both the tree and its cost under $c$.
When contracting an edge $uv$ in a graph, the new merged node resulting from contraction is defined to be a terminal iff at least one of $u$ or $v$ was a terminal; this is natural since a Steiner tree in the new graph is a minimal set of edges which, together with $uv$, connects all terminals in the old graph. Our algorithm performs contraction, which may introduce parallel edges, but one may delete all but the cheapest edge from each parallel class without affecting the analysis.
Our \myifthen{first}{} algorithm proceeds in stages. In each stage we apply the operation $G \mapsto G/K$ which denotes contracting all edges in some full component $K$. To describe and analyze the algorithm we introduce some notation. For a minimum terminal spanning tree $T=\ensuremath{{\mathtt{mtst}}}(G;c)$
define $\drop_{T}(K;c) := c(T) - \ensuremath{{\mathtt{mtst}}}(G/K;c)$. We also define
$\gain_{T}(K;c):= \drop_{T}(K;c) - c(K)$, where $c(K)$ is the cost of
full component $K$. A tree $T$ is called \emph{gainless} if for every
full component $K$ we have $\gain_T(K;c) \le 0$. The following useful
fact is implicit in~\cite{KPT09} (see also
\myifthen{\prettyref{app:app2}}{\cite{CKP09}}).
\setcounter{thm_locopt}{\value{theorem}}
\begin{theorem}[Implicit in \cite{KPT09}] \label{thm:locopt}
If $\ensuremath{{\mathtt{mtst}}}(G; c)$ is gainless, then
$\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$ equals the cost of $\ensuremath{{\mathtt{mtst}}}(G; c)$.
\end{theorem}
We now give the \myifthen{first}{} algorithm and its analysis, which
uses a reduced cost trick introduced by Chakrabarty et
al.~\cite{CDV08}.
\medskip
\noindent
\begin{boxedminipage}{\algobox}
Procedure {\sc Reduced One-Pass Heuristic}
\begin{algorithmic}[1]
\STATE Define costs $c'_e$ by $c'_e := c_e/\sqrt{2}$ for all
terminal-terminal edges $e$, and $c'_e = c_e$ for all other edges. Let $G_1 := G,$ $T_i := \ensuremath{{\mathtt{mtst}}}(G_i; c')$, and $i:=1$.
\STATE The algorithm
considers the full components in any order. When we examine a full component
$K$, if $\gain_{T_i}(K;c') > 0$, let
$K_i := K$, $G_{i+1} := G_i/K_i$, $T_{i+1} :=
\ensuremath{{\mathtt{mtst}}}(G_{i+1};c')$, and $i:=i+1$.
\STATE Let $f$ be the final value of $i$. Return the tree $T_{alg} :=
T_f \cup \bigcup_{i=1}^{f-1} K_i$.
\end{algorithmic}
\end{boxedminipage}
\medskip
\noindent
Note that the full components are scanned in an arbitrary order and
need not be known in advance. Hence the algorithm works just as well if
the full components arrive ``online,'' which might be useful for some
applications.
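An executable sketch of the heuristic restricted to the terminal graph (our own simplified rendering, not the paper's implementation: full components are given as terminal sets with costs, contractions are tracked with union-find, and $\mathtt{mtst}$ is computed by Kruskal on the quotient graph):

```python
import math

def mtst(edges, find):
    # Minimum terminal spanning tree cost on the quotient graph: nodes are
    # union-find roots, class-to-class weight is the cheapest edge between
    # the classes, then Kruskal.
    q = {}
    for (u, v), c in edges.items():
        a, b = find(u), find(v)
        if a != b:
            key = frozenset((a, b))
            q[key] = min(q.get(key, math.inf), c)
    parent, cost = {}, 0.0
    def root(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    for key in sorted(q, key=q.get):
        a, b = map(root, key)
        if a != b:
            parent[a] = b
            cost += q[key]
    return cost

def one_pass(edges, comps):
    # edges: {(u, v): cost} between terminals (cheapest per pair);
    # comps: ordered list of (terminal_set, cost) full components.
    s = 1 / math.sqrt(2)                       # reduced terminal costs c'
    scaled = {e: s * c for e, c in edges.items()}
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    picked = []
    for K, cK in comps:
        saved = dict(parent)
        before = mtst(scaled, find)
        K = list(K)
        for t in K[1:]:                        # trial contraction of K
            parent[find(K[0])] = find(t)
        if before - mtst(scaled, find) > cK:   # gain under c' is positive
            picked.append((set(K), cK))
        else:                                  # undo the trial contraction
            parent.clear(); parent.update(saved)
    return picked, mtst(edges, find) + sum(c for _, c in picked)

E = {('a', 'b'): 2.0, ('b', 'c'): 2.0, ('a', 'c'): 2.0}
picked, cost = one_pass(E, [(frozenset('abc'), 2.5)])
print(len(picked), cost)  # 1 2.5
```

On the toy instance the single full component has drop $2\sqrt2 \approx 2.83$ under $c'$, exceeding its cost $2.5$, so it is contracted and the returned tree costs $2.5$ instead of the terminal spanning tree's $4$.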
\begin{theorem}\label{theorem:bound1828}
$c(T_{alg}) \leq (2\sqrt{2} - 1) \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$.
\end{theorem}
\begin{proof}
First we claim that $\gain_{T_f}(K;c') \le 0$ for every full component $K$. There are two cases. If $K=K_i$ for some $i$, then ${\tt drop}_{T_j}(K;c') = 0$ for all $j > i$, so $\gain_{T_f}(K;c') = -c'(K) \le 0$. Otherwise ($K \neq K_i$ for all $i$), $K$ had nonpositive gain when the algorithm examined it, and the well-known \emph{contraction lemma} (e.g., see~\cite[\S 1.5]{GH+01b}) implies that $\gain_{T_i}(K;c')$ is nonincreasing in $i$; hence $\gain_{T_f}(K;c') \le 0$.
By \prettyref{thm:locopt},
$c'(T_f)$ equals the value of \eqref{eq:LP-PU} on the
graph $G_f$ with costs $c'$. Since $c' \le c$, and since at each
step we only contract terminals, the value of this optimum must be
at most $\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$. Using the fact that $c(T_f) =
\sqrt{2}c'(T_f)$, we get
\begin{align}\label{eq:dracula}
c(T_f) = \sqrt{2}c'(T_f) \le \sqrt{2} \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}
\end{align}
\noindent
Furthermore, for every $i$ we have $\gain_{T_i}(K_i;c') > 0$, that is,
${\tt drop}_{T_i}(K_i;c') > c'(K_i) = c(K_i)$. The equality holds since $K_i$
contains no terminal-terminal edges. Moreover, ${\tt drop}_{T_i}(K_i;c') =
\frac{1}{\sqrt{2}} {\tt drop}_{T_i}(K_i;c)$ because all edges of $T_i$ are
terminal-terminal. Thus, for every $i=1,\dots,f-1$,
~${\tt drop}_{T_i}(K_i;c) > \sqrt{2}\cdot c(K_i)$.
Since ${\tt drop}_{T_i}(K_i;c) = \ensuremath{{\mathtt{mtst}}}(G_i;c) - \ensuremath{{\mathtt{mtst}}}(G_{i+1};c)$, we have
$$\sum_{i=1}^{f-1} {\tt drop}_{T_i}(K_i;c)=\ensuremath{{\mathtt{mtst}}}(G;c) - c(T_f).$$
Thus, we have
\myifthen{
\begin{equation*}
\sum_{i=1}^{f-1} c(K_i) \le \frac{1}{\sqrt{2}} \sum_{i=1}^{f-1}
{\tt drop}_{T_i}(K_i;c) = \frac{1}{\sqrt{2}} (\ensuremath{{\mathtt{mtst}}}(G;c) - c(T_f))
\le \frac{1}{\sqrt{2}}(2\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU} - c(T_f))
\end{equation*}
}
{
\begin{align*}
\sum_{i=1}^{f-1} c(K_i) \le \frac{1}{\sqrt{2}} \sum_{i=1}^{f-1}
{\tt drop}_{T_i}(K_i;c) &= \frac{1}{\sqrt{2}} (\ensuremath{{\mathtt{mtst}}}(G;c) - c(T_f)) \\
&\le \frac{1}{\sqrt{2}}(2\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU} - c(T_f))
\end{align*}
}
where we use the fact that $\ensuremath{{\mathtt{mtst}}}(G, c)$ is at most twice
$\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$\footnote{This follows using standard arguments, and can be
seen, for instance, by applying \prettyref{thm:locopt} to the
cost-function with all terminal-terminal costs divided by 2, and
using short-cutting.}. Therefore
$$c(T_{alg}) = c(T_f) + \sum_{i=1}^{f-1} c(K_i) \le \Bigl(1 - \frac{1}{\sqrt{2}}\Bigr) c(T_f) + \sqrt{2}\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}.$$
Finally, using $c(T_f) \le \sqrt{2}\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$ from
\eqref{eq:dracula}, the proof of \prettyref{theorem:bound1828} is
complete. \end{proof}
\myifthen{
\subsubsection{Improving to $\sqrt{3}$}
\def\loss{{\tt loss}} To get the improved factor of $\sqrt{3}$, we use
a more refined iterated contraction approach. The crucial new concept
is that of the {\em loss} of a full component, introduced by Karpinski
and Zelikovsky \cite{KZ97}. The intuition is as follows. In each
iteration, the $(2\sqrt{2}-1)$-factor algorithm contracts a full
component $K$, and thus commits to include $K$ in the final solution;
the new algorithm makes a smaller commitment, by contracting a
\emph{subset} of $K$'s edges, which allows for a possibility of better
recovery later.
Given a full component $K$ (viewed as a tree with leaf set $K$ and
internal Steiner nodes), $\loss(K)$ is defined to be the minimum-cost
subset of $E(K)$ such that $(V(K), \loss(K))$ has at least one
terminal per connected component --- i.e.~the cheapest way in $K$ to
connect each Steiner node to the terminal set. We also use $\loss(K)$
to denote the total \emph{cost} of these edges. Note that no two
terminals are connected by $\loss(K)$.
A very useful theorem of Karpinski and Zelikovsky \cite{KZ97}
is that for any full component $K$, $\loss(K) \le c(K)/2$.
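The loss of a single full component admits a compact computation: identify all terminals of $K$ into one super-node and take a minimum spanning tree of the result, which connects every Steiner node to the terminal set as cheaply as possible. Below is a hedged Python sketch of this computation; it is our own helper (not from the paper), and it assumes the component is given as an explicit edge list.

```python
def loss_cost(edges, terminals):
    # edges: list of (u, v, cost) forming the full component (a tree whose
    # leaves are the terminals). Identify all terminals into one super-node
    # and run Kruskal: the resulting MST connects each Steiner node to the
    # terminal set as cheaply as possible, which is exactly loss(K).
    STAR = '__terminals__'
    lab = lambda x: STAR if x in terminals else x
    nodes = {lab(x) for u, v, _ in edges for x in (u, v)}
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0.0
    for u, v, c in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(lab(u)), find(lab(v))
        if ru != rv:
            parent[ru] = rv
            total += c
    return total
```

For a star component with a single Steiner center joined to three terminals at unit cost, the loss is 1 (one edge suffices to anchor the center), consistent with the bound $\loss(K) \le c(K)/2 = 3/2$.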
Now we have the ingredients to give our new algorithm. In the
description below, $\alpha > 1$ is a parameter (which will be set to
$\sqrt{3}$). In each iteration, the algorithm contracts the loss of a
single full component $K$; note that the Steiner nodes of $\loss(K)$ are contracted \emph{into} terminals, so the terminal set keeps the same size over all iterations.
\medskip
\noindent
\begin{boxedminipage}{\algobox}
Procedure {\sc Reduced One-Pass Loss-Contracting Heuristic}
\begin{algorithmic}[1]
\STATE Initially $G_1 := G$, $T_1 := \ensuremath{{\mathtt{mtst}}}(G;c)$, and $i:=1$.
\STATE
The algorithm considers the full components in any order. When we examine a full component
$K$, if
$$\gain_{T_i}(K;c) > (\alpha - 1)\loss(K),$$
let
$K_i := K$, $G_{i+1} := G_i/\loss(K_i)$, $T_{i+1} :=
\ensuremath{{\mathtt{mtst}}}(G_{i+1};c)$, and $i:=i+1$.
\STATE Let $f$ be the final value of $i$. Return the tree $T_{alg} :=
T_f \cup \bigcup_{i=1}^{f-1} \loss(K_i).$
\end{algorithmic}
\end{boxedminipage}
\medskip
We now analyze the algorithm.
\noindent
\begin{claim}\label{claim:tf}
$c(T_f) \le (\frac{1+\alpha}{2}) \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$.
\end{claim}
\begin{proof}
Using the contraction lemma again, $\gain_{T_f}(K;c) \le (\alpha - 1)\loss(K)$ for all $K$, so
\begin{align}\label{eq:wtf}
{\tt drop}_{T_f}(K;c) \le c(K) + (\alpha - 1)\loss(K) \le \Big(\frac{1+\alpha}{2}\Big)c(K)
\end{align}
since $\loss(K) \le c(K)/2$.
To finish the proof of \prettyref{claim:tf}, we proceed as in the
proof of Equation \eqref{eq:dracula}. Define $c'_e :=
c_e/(\frac{1+\alpha}{2})$ for all edges $e$ which join two vertices of the original terminal set $R$, and $c'_e = c_e$ for all other edges. Note that \eqref{eq:wtf} implies that $T_f$
is gainless with respect to $c'$. Thus, by \prettyref{thm:locopt},
the value of LP \eqref{eq:LP-PU} on $(G_f, c')$ equals
$c'(T_f)$. Since we only reduce costs (as $\alpha \ge 1$), this
optimum is no more than the original $\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$ giving us
$c'(T_f) \le \ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$. Now using the definition of
$c'$, the proof of the claim is complete.
\end{proof}
\begin{claim}\label{claim:drop}
For any $i\ge 1$, we have $c(T_i) - c(T_{i+1}) \ge \gain_{T_i}(K_i;c) + \loss(K_i)$.
\end{claim}
\begin{proof}
Recall that $T_{i+1}$ is a minimum terminal spanning tree of
$G_{i+1}$ under $c$. Consider an alternative terminal
spanning tree $T$ of $G_{i+1}$: take $T$ to be the union of $K_i /
\loss(K_i)$ with $\ensuremath{{\mathtt{mtst}}}(G_i/K_i; c)$. Hence $c(T_{i+1}) \le
c(T) = \ensuremath{{\mathtt{mtst}}}(G_i/K_i; c) + c(K_i) - \loss(K_i)$. Rearranging,
and using the definition of gain, we obtain:
\begin{equation*}
c(T_i) - c(T_{i+1}) \ge c(T_i) - \ensuremath{{\mathtt{mtst}}}(G_i/K_i; c) - c(K_i) + \loss(K_i) = \gain_{T_i}(K_i; c) +\loss(K_i),
\end{equation*}
and this completes the proof.
\end{proof}
\noindent
Now we are ready to prove the integrality gap upper bound of $\sqrt{3}$.
\begin{theorem}
$c(T_{alg}) \le \sqrt{3}\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$.
\end{theorem}
\begin{proof}
By the algorithm, we have for all $i$ that $\gain_{T_i}(K_i;c) >
(\alpha - 1)\loss(K_i)$, and thus $\gain_{T_i}(K_i; c) +\loss(K_i)
\ge \alpha \loss(K_i)$. Thus, from \prettyref{claim:drop}, we get
$$\sum_{i=1}^{f-1} \loss(K_i) \le \frac{1}{\alpha} \sum_{i=1}^{f-1} \Big(c(T_i) - c(T_{i+1}) \Big)$$
\noindent
The right-hand sum telescopes to give us $c(T_1) - c(T_f) = \ensuremath{{\mathtt{mtst}}}(G;c) - c(T_f)$. Thus,
\begin{align*}
c(T_{alg}) &= c(T_f) + \sum_{i=1}^{f-1} \loss(K_i) \le c(T_f) + \frac{1}{\alpha}(\ensuremath{{\mathtt{mtst}}}(G;c) - c(T_f)) = \frac{1}{\alpha}\ensuremath{{\mathtt{mtst}}}(G;c) + \frac{\alpha-1}{\alpha} c(T_f) \\
&\le \Big(\frac{2}{\alpha} + \frac{(\alpha -1)(1+\alpha)}{2\alpha}\Big)\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}
= \Big(\frac{\alpha^2 + 3}{2\alpha}\Big)\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}
\end{align*}
which follows from $\ensuremath{{\mathtt{mtst}}}(G;c) \le 2\ensuremath{\mathop{\mathrm{OPT}}}\eqref{eq:LP-PU}$ and \prettyref{claim:tf}.
Setting $\alpha = \sqrt{3}$, the proof of the theorem is complete.
\end{proof}
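As a quick numeric sanity check on the choice of $\alpha$ (our own verification, not part of the proof): the factor $(\alpha^2+3)/(2\alpha)$ is minimized at $\alpha=\sqrt{3}$, where it equals $\sqrt{3} \approx 1.732$ and improves on the first algorithm's $2\sqrt{2}-1 \approx 1.828$.

```python
import math

# the analysis yields the approximation factor (alpha^2 + 3) / (2 alpha);
# by the AM-GM inequality it is minimized at alpha = sqrt(3), with value sqrt(3)
bound = lambda a: (a * a + 3.0) / (2.0 * a)
```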
}{}
\myifthen{
\section{Conclusion}
In this paper we looked at several hypergraphic LP relaxations for the
Steiner tree problem, and showed they all have the same objective
value. Furthermore, we noted some connections to the bidirected cut
relaxation for Steiner trees: although hypergraphic relaxations are
stronger than the bidirected cut relaxation in general, in
quasibipartite graphs all these relaxations are equivalent. We
obtained structural results about the hypergraphic relaxations showing
that basic feasible solutions have sparse support. We also showed
improved upper bounds on the integrality gaps of the hypergraphic
relaxations via simple algorithms.
Reiterating the comments in Section \ref{sec:discussion}, the
hypergraphic LPs are powerful (e.g.~as evidenced by Byrka et
al.~\cite{BGRS10}) but may not be manageable for computational
implementation. Some interesting areas for future work include:
non-ellipsoid-based algorithms to solve the hypergraphic LPs in the
$r$-restricted setting; resolving the complexity of optimizing them in
the unrestricted setting; and directly using the bidirected cut
relaxation to achieve good results (e.g.~in quasi-bipartite
instances).
}{}
https://arxiv.org/abs/math/0612104 | Representations of finite groups | This book is an introduction to a fast developing branch of mathematics - the theory of representations of groups. It presents classical results of this theory concerning finite groups.

\chapter{1}
REPRESENTATIONS OF GROUPS
\endtitle
\endtopmatter
\leftheadtext{CHAPTER \uppercase\expandafter{\romannumeral 1}.
REPRESENTATIONS OF GROUPS.}
\document
\headline=\truehead
\head
\SectionNum{1}{5} Representations of groups and their homomorphisms.
\endhead
\rightheadtext{\S\,1. Representations and their homomorphisms.}
It is clear that matrix groups are in some sense simpler than abstract
groups: the multiplication rule in them is given explicitly, and when
dealing with matrix groups one can use the methods of linear algebra and
calculus. The theory of group representations grew out of the aim of
reproducing an abstract group in a matrix form.\par
Let $V$ be a linear vector space over the field of complex numbers
$\Bbb C$. By $\operatorname{End}(V)$ we denote the set of linear operators mapping $V$
into $V$. The subset of non-degenerate operators within $\operatorname{End}(V)$ is denoted
by $\operatorname{Aut}(V)$. It is easy to see that $\operatorname{Aut}(V)$ is a group. The operation of
composition, i\.\,e\. applying two operators successively, is the group
multiplication in $\operatorname{Aut}(V)$.
\mydefinition{1.1}A representation $f$ of a group $G$ in a linear vector
space $V$ is a group homomorphism $f\!:\,G\to\operatorname{Aut}(V)$.
\enddefinition
If $f$ is a representation of a group $G$ in $V$, this fact is briefly
written as $(f,G,V)$. If $g\in G$ is an element of the group $G$, then
$f(g)$ is a non-degenerate operator acting within the space $V$. It is
called the {\it representation operator\/} corresponding to the element
$g\in G$. By $f(g)\bold x$ we denote the result of applying this operator
to a vector $\bold x\in V$. The notation with two pairs of parentheses
$f(g)(\bold x)$ will also be used when it is clearer in a given
context, for instance, $f(g)(\bold x+\bold y)$. Representation operators
satisfy the following evident relationships:
\roster
\item $f(g_1\,g_2)=f(g_1)\,f(g_2)$;
\item $f(1)=1$;
\item $f(g^{-1})=f(g)^{-1}$.
\endroster
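Although the book works with abstract operators, the relationships (1)--(3) are easy to verify numerically for a concrete representation. The small Python sketch below is our own illustration, not an example from the text: the cyclic group $\Bbb Z_3$ is represented in $\operatorname{Aut}(\Bbb R^2)$ by plane rotations.

```python
import math

def rep(k, n=3):
    # the cyclic group Z_n represented by plane rotations:
    # f(k) = rotation through the angle 2*pi*k/n, written as a 2x2 matrix
    a = 2.0 * math.pi * (k % n) / n
    return ((math.cos(a), -math.sin(a)), (math.sin(a), math.cos(a)))

def matmul(A, B):
    # composition of two operators written in the standard basis
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))
```

One checks that `rep(g1 + g2)` coincides with `matmul(rep(g1), rep(g2))`, that `rep(0)` is the identity matrix, and that `rep(-g)` inverts `rep(g)`, exactly as properties (1)--(3) require.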
\mydefinition{1.2} Let $(f,G,V)$ and $(h,G,W)$ be two representations
of the same group $G$. A linear mapping $A\!:\,V\to W$ is called a
{\it homomorphism\/} sending the representation $(f,G,V)$ to the
representation $(h,G,W)$ if the following condition is fulfilled:
$$
\hskip -2em
A\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A\text{\ \ for all \ }g\in G.
\mytag{1.1}
$$
The mapping $A$ in \mythetag{1.1}, which performs a homomorphism of two
representations, sometimes is called an {\it interlacing map}.
\enddefinition
\mydefinition{1.3} A homomorphism $A$ interlacing two representations
$(f,G,V)$ and $(h,G,W)$ is called an {\it isomorphism\/} if it is
bijective as a linear mapping $A\!:\,V\to W$.
\enddefinition
It is easy to verify that the relation of being isomorphic is an
equivalence relation for representations. Two isomorphic representations
are also called {\it equivalent representations}. In the theory of
representations two isomorphic representations are treated as identical
because all essential properties of two such representations coincide.
\mytheorem{1.1} If $(f,G,V)$ is a representation of a group $G$ in a
space $V$ and if $A\!:\,V\to W$ is a bijective linear mapping, then $A$
induces a unique representation of the group $G$ in $W$ which is equivalent
to $(f,G,V)$ and for which $A$ is an interlacing map.
\endproclaim
\demo{Proof} The proof of the theorem is trivial. Let's define the operators
of a representation $h$ in $W$ as follows:
$$
\hskip -2em
h(g)=A\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A^{-1}.
\mytag{1.2}
$$
It is easy to verify that the formula \mythetag{1.2} does actually define
a representation of the group $G$ in $W$. Multiplying \mythetag{1.2} on
the right by $A$, we get \mythetag{1.1}. Hence $A$ is an isomorphism
of $f$ and $h$. Moreover, any representation of $G$ in $W$ for which $A$
is an interlacing isomorphism should coincide with $h$. This fact is proved
by multiplying \mythetag{1.1} on the right by $A^{-1}$.
\qed\enddemo
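The construction in the proof can likewise be checked numerically. In the Python sketch below (our own illustration, not from the text), $f(g)$ is a plane rotation and $A$ an invertible shear; the induced operators $h(g)=A\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A^{-1}$ of formula \mythetag{1.2} satisfy the interlacing condition \mythetag{1.1}.

```python
def matmul(A, B):
    # composition of two 2x2 matrices
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv2(A):
    # inverse of a 2x2 matrix; the interlacing map A must be non-degenerate
    (a, b), (c, d) = A
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def induced(f_g, A):
    # formula (1.2): h(g) = A o f(g) o A^{-1}
    return matmul(matmul(A, f_g), inv2(A))
```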
From the theorem proved just above we conclude that if a representation
$f$ in $V$ is given, in order to construct an equivalent representation in
$W$ it is sufficient to have a linear bijection from $V$ to $W$. However,
in practice the problem is stated somewhat differently. Two representations
$f$ and $h$ in $V$ and $W$ are already given. The problem is to figure out
if they are equivalent and if so to find an interlacing operator. In this
statement this is one of the basic problems of the theory of representations.
\head
\SectionNum{2}{7} Finite dimensional representations.
\endhead
\rightheadtext{\S\,2. Finite-dimensional representations.}
\mydefinition{2.1} A representation $(f,G,V)$ is called
finite-dimensional if its space $V$ is a finite-dimensional
linear vector space, i\.\,e\. $\dim V<\infty$.
\enddefinition
Below in this book we consider only finite-dimensional
representations, though many facts proved for this case can then be
transferred or generalized for the case of infinite-dimensional
representations.\par
Note that any finite-dimensional linear vector space over the
field of complex numbers $\Bbb C$ can be bijectively mapped onto the
standard arithmetic coordinate vector space $\Bbb C^n$, where\linebreak
$n=\dim V$. Moreover, $\operatorname{Aut}(\Bbb C^n)=\operatorname{GL}(n,\Bbb C)$. Therefore each
finite-dimensional representation is equivalent to some matrix
representation $f\!:\,G\to\operatorname{GL}(n,\Bbb C)$. This fact follows from
the theorem~\mythetheorem{1.1}. In spite of this fact we shall consider
finite-dimensional representations in abstract vector spaces because
all statements in this case are more elegant and their proofs are sometimes
even simpler than the proofs of the corresponding matrix statements.\par
\head
\SectionNum{3}{8} Invariant subspaces. Restriction and factorization of
representations.
\endhead
\rightheadtext{\S\,3. Invariant subspaces.}
\mydefinition{3.1} Let $(f,G,V)$ be a representation of a group $G$
in a linear vector space $V$. A subspace $W\subseteq V$ is called an
{\it invariant subspace\/} if for any $g\in G$ and for any $\bold x\in
W$ the result of applying $f(g)$ to $\bold x$ belongs to $W$, i\.\,e\.
$f(g)\bold x\in W$.
\enddefinition
The concept of {\it irreducibility\/} is introduced in terms of invariant
subspaces. This is the central concept in the theory of representations.
\mydefinition{3.2} A representation $(f,G,V)$ of the group $G$ is called
{\it irreducible\/} if it has no invariant subspaces other than $W=\{0\}$
and $W=V$. Otherwise the representation $(f,G,V)$ is called {\it reducible}.
\enddefinition
Assume that $(f,G,V)$ is an irreducible representation. Let's choose
some vector $\bold x\neq 0$ of $V$ and consider its orbit:
$$
\operatorname{Orb}_f(\bold x)=\{\,\bold y\in V\!:\,\bold y=f(g)\bold x\text{\ \ for
some \ }g\in G\,\}.
$$
The orbit $\operatorname{Orb}_f(\bold x)$ is a subset of the space $V$ invariant under
the action of the representation operators. In the general case, however,
it is not a linear subspace. Let's consider its linear span
$$
W=\langle \operatorname{Orb}_f(\bold x)\rangle.
$$
The subspace $W$ is invariant and $W\neq\{0\}$ since it possesses the
non-zero vector $\bold x$. Then due to the irreducibility of $f$ we get
$W=V$. As a result one can formulate the following criterion of
irreducibility.
\mytheoremwithtitle{3.1}{ ({\bf irreducibility criterion})} A representation
$(f,G,V)$ is irreducible if and only if the orbit of an arbitrary non-zero
vector $\bold x\in V$ spans the whole space $V$.
\endproclaim
The necessity of this condition was proved above. Let's prove its
sufficiency. Let $W\subseteq V$ be an invariant subspace such that
$W\neq\{0\}$. Let's choose a nonzero vector $\bold x\in W$. Due to the
invariance of $W$ we have $\operatorname{Orb}_f(\bold x)\subseteq W$. Hence,
$\langle \operatorname{Orb}_f(\bold x)\rangle\subseteq W$. But $\langle
\operatorname{Orb}_f(\bold x)\rangle=V$, therefore $W=V$. The criterion is proved.
\par
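The criterion can be seen in action on a concrete example (our own Python sketch, not from the text): the permutation representation of the symmetric group $S_3$ in $\Bbb C^3$. The vector $(1,1,1)$ is fixed by every representation operator, so its orbit spans only a one-dimensional subspace; by the criterion the representation is reducible, with $\langle(1,1,1)\rangle$ a proper invariant subspace.

```python
from itertools import permutations

def act(p, x):
    # permutation representation of S_3 on C^3: the operator f(p) sends
    # the basis vector e_i to e_{p(i)}
    y = [0] * 3
    for i in range(3):
        y[p[i]] = x[i]
    return tuple(y)

def compose(p, q):
    # (p o q)(i) = p(q(i)), so that f(p o q) = f(p) f(q)
    return tuple(p[q[i]] for i in range(3))
```

One checks directly that `act` is a homomorphism and that $(1,1,1)$ is fixed by all six operators, while a generic vector such as $(1,2,3)$ is moved.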
Irreducible representations are similar to chemical elements: one
cannot extract other simpler representations of a given group from
them. Any reducible representation in some sense splits into irreducible
ones. Therefore in the theory of representations the following two
problems are solved:
\rosteritemwd=0pt
\roster
\item to find and describe all irreducible representations of a given
group;
\item to suggest a method for splitting an arbitrary representation
into its irreducible components.
\endroster
The first problem is analogous to building Mendeleev's periodic table in
chemistry, the second problem is analogous to chemical analysis of
substances.\par
Let's consider some reducible representation $(f,G,V)$ of a group
$G$. Assume that $W$ is an invariant subspace such that $\{0\}\subsetneq
W\subsetneq V$. Let's denote by $\varphi(g)$ the restriction of the
operator $f(g)$ to the subspace $W$:
$$
\hskip -2em
\varphi(g)=f(g)\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,W}.
\mytag{3.1}
$$
For the operators $\varphi(g)$ we have the following relationships:
$$
\align
&\hskip -2em\varphi(g)\,\varphi(g^{-1})=(f(g)\,f(g^{-1}))\,\hbox{\vrule
height 8pt depth 10pt width 0.5pt}_{\,W}=1;
\mytag{3.2}\\
&\hskip -2em\varphi(g_1)\,\varphi(g_2)=(f(g_1)\,f(g_2))\,\hbox{\vrule
height 8pt depth 10pt width 0.5pt}_{\,W}=\varphi(g_1\,g_2).
\mytag{3.3}
\endalign
$$
From the relationship \mythetag{3.2} we conclude that the operator
$\varphi(g)$ is invertible and $\varphi(g)^{-1}=\varphi(g^{-1})$.
Hence, $\varphi(g)\in\operatorname{Aut}(W)$. The relationship \mythetag{3.3} in
its turn shows that the mapping\linebreak $\varphi\!:\,G\to\operatorname{Aut}(W)$
is a group homomorphism defining a representation.\par
\mydefinition{3.3} The representation $(\varphi,G,W)$ of a group
$G$ obtained by restricting the operators of a representation
$(f,G,V)$ to its invariant subspace $W\subseteq V$ according to
the formula \mythetag{3.1} is called the {\it restriction\/} of
$f$ to $W$.
\enddefinition
The presence of an invariant subspace $W$ lets us define
factoroperators in the factorspace $V/W$:
$$
\hskip -2em
\psi(g)=f(g)\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,V/W}.
\mytag{3.4}
$$
Let's recall that the action of the operator $\psi(g)$ upon a coset
$\operatorname{Cl}_W(\bold x)$ in $V/W$ is defined as follows:
$$
\hskip -2em
\psi(g)\operatorname{Cl}_W(\bold x)=\operatorname{Cl}_W(f(g)\bold x).
\mytag{3.5}
$$
The correctness of the definition \mythetag{3.5} is verified by direct
calculations (see \mybookcite{1}). The factoroperators \mythetag{3.5}
obey the following relationships:
$$
\align
&\hskip -2em
\gathered
\psi(g)\,\psi(g^{-1})\operatorname{Cl}_W(\bold x)=\operatorname{Cl}_W(f(g)f(g^{-1})\bold x)=\\
=\operatorname{Cl}_W(f(g\,g^{-1})\bold x)=\operatorname{Cl}_W(\bold x),
\endgathered
\mytag{3.6}\\
\vspace{2ex}
&\hskip -2em
\gathered
\psi(g_1)\,\psi(g_2)\operatorname{Cl}_W(\bold x)=\operatorname{Cl}_W(f(g_1)f(g_2)\bold x)=\\
=\operatorname{Cl}_W(f(g_1\,g_2)\bold x)=\psi(g_1\,g_2)\operatorname{Cl}_W(\bold x).
\endgathered
\mytag{3.7}
\endalign
$$
From \mythetag{3.6} and \mythetag{3.7} we conclude that the factoroperators
\mythetag{3.4} satisfy the relationships similar to \mythetag{3.2} and
\mythetag{3.3}. They define a representation $(\psi,G,V/W)$ which is usually
called a {\it factorrepresentation}.\par
The representations $(\varphi,G,W)$ and $(\psi,G,V/W)$ are generated
by the representation $f$. Each of them inherits a part of the information
contained in the representation $f$. In order to understand which part of
the information is kept in $\varphi$ and $\psi$ let's study the
representation operators $f(g)$ in some special basis. Let's choose a
basis $\bold e_1,\,\ldots,\,\bold e_s$ within the invariant subspace $W$.
Then we complete this basis up to a basis in the space $V$. This construction
is based on the theorem on completing a basis of a subspace
(see \mybookcite{1}). The matrix of the operator $f(g)$ in such a composite
basis is a block-triangular matrix:
$$
\hskip -2em
F(g)=\Vmatrix
\hskip 0.2em \varphi^i_j\hskip 0.2em & \hskip -0.7em
\hbox{\vrule height 1.8ex depth 5.5ex}\hskip -0.4em & u^i_j\\
\vspace{-4.0ex}
\hskip -0.8em\vbox{\hsize 4em\hrule width 3.6em}\hskip -2.8em\\
0&\hskip -5em &\psi^i_j
\endVmatrix.
\mytag{3.8}
$$
The upper left diagonal block coincides with the matrix of the operator
$\varphi(g)$ in the basis $\bold e_1,\,\ldots,\,\bold e_s$. The lower right
diagonal block coincides with the matrix of the factoroperator $\psi(g)$ in
the basis $\bold E_1,\,\ldots,\,\bold E_{n-s}$, where
$$
\bold E_1=\operatorname{Cl}_W(\bold e_{s+1}),\quad\bold E_2=\operatorname{Cl}_W(\bold e_{s+2}),
\quad.\ .\ .\ ,\quad\bold E_{n-s}=\operatorname{Cl}_W(\bold e_n).
$$
From \mythetag{3.8} we conclude that when passing from $f$ to its restriction
$\varphi$ and to its factorrepresentation $\psi$ the amount
of the lost information is determined by the upper right non-diagonal
block $u^i_j$ in the matrix \mythetag{3.8}.\par
Despite the loss of information, the passage from $f$ to the
pair of representations $\varphi$ and $\psi$ can be treated as splitting
$f$ into more simple components. If $\varphi$ and $\psi$ are also
reducible, they can be split further. However, this process of splitting
is finite since in each step we have the reduction of the dimension:
$\dim(W)<\dim(V)$ and $\dim (V/W)<\dim(V)$. The process will terminate
when we reach irreducible representations.\par
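The information lost in the block $u^i_j$ of the matrix \mythetag{3.8} is visible in the smallest example: a Python sketch, assuming the shear representation $f(n)$ of the additive group of integers, which is reducible, with invariant subspace $W=\langle\bold e_1\rangle$, but whose restriction and factorrepresentation are both trivial.

```python
import numpy as np

# The shear representation f(n) = [[1, n], [0, 1]] of the additive group
# of integers (an assumed example).  W = span(e1) is invariant, and in the
# standard basis f(n) is block-triangular as in (3.8).
def f(n):
    return np.array([[1.0, float(n)], [0.0, 1.0]])

F = f(5)
phi, u, psi = F[0, 0], F[0, 1], F[1, 1]   # blocks of (3.8)
# The restriction phi and the factorrepresentation psi are both trivial;
# the upper right block u = n is exactly the information lost when
# passing from f to the pair (phi, psi).
print(phi, psi, u)
print(np.allclose(f(2) @ f(3), f(5)))     # homomorphism property
```

Here both diagonal blocks are the number $1$, so the pair $(\varphi,\psi)$ retains nothing of $n$; the whole content of the representation sits in the upper right block.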
\head
\SectionNum{4}{11} Completely reducible representations.
\endhead
\rightheadtext{\S\,4. Completely reducible representations.}
As we have seen above, the process of fragmentation of a reducible
representation leads to the loss of information. However, there is a
special class of representations for which the loss of information is
absent. This is the class of {\it completely reducible representations}.
\mydefinition{4.1} A representation $(f,G,V)$ of a group $G$ is called
{\it completely reducible\/} if each its invariant subspace $W$ has an
invariant direct complement $U$, i\.\,e\. $V$ is a direct sum of two
invariant subspaces $V=W\oplus U$.
\enddefinition
Note that an irreducible representation is a trivial example of a
completely reducible one. Here we have $V=V\oplus\{0\}$.\par
Let $(f,G,V)$ be a reducible and completely reducible representation.
Let $W$ be an invariant subspace for $f$ and $U$ be its invariant direct
complement. Then we have the following isomorphisms of the restrictions
and factors:
$$
\xalignat 2
&\hskip -2em f\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,U}\cong f\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,V/W},
&f\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,W}\cong f\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,V/U}.
\mytag{4.1}
\endxalignat
$$
In order to prove \mythetag{4.1} let's consider again a basis $\bold e_1,
\,\ldots,\,\bold e_s$ in $W$ and complement it with a basis $\bold h_1,
\,\ldots,\,\bold h_{n-s}$ in $U$. Let's denote $\bold e_{s+1}=\bold h_1,
\,\ldots,\,\bold e_n=\bold h_{n-s}$. As a result of such a union of
bases we get a basis in $V$. Let's define a mapping $A\!:\,U\to V/W$ by
setting
$$
\hskip -2em
A\bold x=\operatorname{Cl}_W(\bold x)\text{\ \ for all \ }\bold x\in U.
\mytag{4.2}
$$
From \mythetag{4.2} one easily derives the values of the mapping $A$ on
basis vectors
$$
\hskip -2em
A(\bold h_1)=\bold E_1,
\quad.\ .\ .\ ,\quad A(\bold h_{n-s})=\bold E_{n-s}.
\mytag{4.3}
$$
The mapping $A$ establishes a bijective correspondence between bases of $U$ and
$V/W$. For this reason it is bijective. Let's verify that it implements
an isomorphism of representations, i\.\,e\. let's verify the relationship
\mythetag{1.1} for $A$:
$$
A(f(g)\bold h_i)=\operatorname{Cl}_W(f(g)\bold h_i)=\psi(g)\operatorname{Cl}_W(\bold h_i)=
\psi(g)A(\bold h_i).
$$
This relationship proves the first isomorphism in \mythetag{4.1}. It is
implemented by the mapping $A$ defined in \mythetag{4.2}.\par
From the invariance of the subspace $U$ we derive $f(g)\bold h_i
\in U$. This fact and the relationship \mythetag{4.3} let us write the
matrix of the operator $f(g)$:
$$
\hskip -2em
F(g)=\Vmatrix
\hskip 0.2em \varphi^i_j\hskip 0.2em & \hskip -0.7em
\hbox{\vrule height 1.8ex depth 5.5ex}\hskip -0.4em & 0\\
\vspace{-4.0ex}
\hskip -0.8em\vbox{\hsize 4em\hrule width 3.6em}\hskip -2.8em\\
0&\hskip -5em &\psi^i_j
\endVmatrix.
\mytag{4.4}
$$
The matrix \mythetag{4.4} is block-diagonal, therefore no loss of
information occurs when passing from $f$ to the representations
$\varphi$ and $\psi$ in the case of completely reducible representation
$f$. Due to \mythetag{4.1} the representation $\psi$ can be treated
as the restriction of $f$ to the invariant complement $U$. The
representations $(\varphi,G,W)$ and $(\psi,G,U)$ are not linked to each
other, they are defined in their own spaces which intersect trivially
and their sum is the complete space $V$. This situation is described by
the following definition.
\mydefinition{4.2} A representation $(f,G,V)$ of a group $G$ is called
an {\it inner direct sum\/} of the representations $(\varphi,G,W)$ and
$(\psi,G,U)$ if $V=W\oplus U$, while the subspaces $W$ and $U$ are
invariant subspaces for $f$ and the restrictions of $f$ to these
subspaces coincide with $\varphi$ and $\psi$.
\enddefinition
Note that the splitting of $f$ into a direct sum $f=\varphi\oplus
\psi$ can occur even in that case where $f$ is not a completely reducible
representation. However in this case such a splitting is rather an
exception than a rule.\par
Having a pair of representations $(\varphi,G,W)$ and $(\psi,G,U)$
of the same group $G$ in two distinct spaces, we can construct their
{\it exterior direct sum}. Let's consider the external direct sum
$W\oplus U$. Let's recall that it is the set of ordered pairs $(\bold
w,\bold u)$, where $\bold w\in W$ and $\bold u\in U$, with the algebraic
operations
$$
\aligned
&(\bold w_1,\bold u_1)+(\bold w_2,\bold u_2)=(\bold w_1+\bold w_2,
\bold u_1+\bold u_2),\\
&\alpha\,(\bold w,\bold u)=(\alpha\,\bold w,\alpha\,\bold u)
\text{, \ \ where \ }\alpha\in\Bbb C.
\endaligned
$$
The subspaces $W$ and $U$ in the external direct sum $W\oplus U$ are
treated as disjoint even in the case where they have a non-zero
intersection or coincide. Let's define the operators $f(g)$ in
$W\oplus U$ as follows:
$$
\hskip -2em
f(g)(\bold w,\bold u)=(\varphi(g)\bold w,\psi(g)\bold u).
\mytag{4.5}
$$
The representation $(f,G,W\oplus U)$ constructed according to the
formula \mythetag{4.5} is called the {\it external direct sum\/}
of the representations $(\varphi,G,W)$ and $(\psi,G,U)$. It is denoted
$f=\varphi\oplus\psi$. The difference between the inner and the external
direct sums is rather formal; their properties mostly coincide.\par
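The construction \mythetag{4.5} amounts to forming block-diagonal matrices as in \mythetag{4.4}. A minimal Python sketch, assuming two one-dimensional representations of the group $\Bbb Z/2\Bbb Z$ (the trivial one and the sign one) as the summands:

```python
import numpy as np

# External direct sum f = phi (+) psi realized by block-diagonal matrices
# acting on pairs (w, u), cf. formulas (4.4) and (4.5).  Assumed example:
# the trivial and the sign one-dimensional representations of Z/2Z.
phi = {0: np.array([[1.0]]), 1: np.array([[1.0]])}    # trivial
psi = {0: np.array([[1.0]]), 1: np.array([[-1.0]])}   # sign

def direct_sum(A, B):
    n, m = A.shape[0], B.shape[0]
    C = np.zeros((n + m, n + m))
    C[:n, :n] = A        # upper left block: phi(g)
    C[n:, n:] = B        # lower right block: psi(g)
    return C

f = {g: direct_sum(phi[g], psi[g]) for g in (0, 1)}
# The block-diagonal matrices again form a representation:
print(all(np.allclose(f[g1] @ f[g2], f[(g1 + g2) % 2])
          for g1 in (0, 1) for g2 in (0, 1)))
```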
Assume that the space $V$ of a representation $(f,G,V)$ is expanded
into a direct sum of subspaces $V=W\oplus U$ (not necessarily invariant
ones). Each such expansion is uniquely associated with two projection
operators $P$ and $Q$. The projector $P$ is the projection operator that
projects onto the subspace $W$ parallel to the subspace $U$, while $Q$
projects onto $U$ parallel to $W$. They satisfy the following
relationships:
$$
\xalignat 3
&\hskip -2em
P^2=P,
&&Q^2=Q,
&&P+Q=1.
\quad
\mytag{4.6}
\endxalignat
$$
Moreover, $W=\operatorname{Im} P$ and $U=\operatorname{Im} Q$. These properties of the projection
operators are well known (see \mybookcite{1}).\par
\mylemma{4.1} The subspace $W$ in an expansion $V=W\oplus U$ is invariant
with respect to the operators of a representation $(f,G,V)$ if and only if
the corresponding projector $P$ obeys the relationship
$$
\hskip -2em
(P\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)-f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P)\,\raise 1pt\hbox{$\sssize\circ$} \, P=0\text{\ \ for all \ }g\in G.
\mytag{4.7}
$$
The invariance of both subspaces $W$ and $U$ in an expansion $V=W\oplus U$
is equivalent to the relationship
$$
\hskip -2em
P\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P\text{\ \ for all \ }g\in G,
\mytag{4.8}
$$
which means that the projector $P$ commutes with all operators of the
representation $f$.
\endproclaim
Let $\bold x$ be an arbitrary vector. Then $P\bold x\in W$. In the
case of an invariant subspace $W$ the vector $\bold y=f(g)P\bold x$ also
belongs to $W$. For the vector $\bold y\in W$ we have $P\bold y=\bold y$.
Therefore
$$
\hskip -2em
Pf(g)P\bold x=f(g)P\bold x=f(g)P^2\bold x.
\mytag{4.9}
$$
Comparing the left and right hand sides of the equality \mythetag{4.9}
and taking into account the arbitrariness of the vector $\bold x$, we
easily derive the relationship \mythetag{4.7} in the statement of the
lemma.\par
And conversely, from the relationship \mythetag{4.7}, using the
property $P^2=P$ from \mythetag{4.6}, we easily derive \mythetag{4.9}.
From \mythetag{4.9} we derive that the vector $f(g)P\bold x$ belongs
to $W$ for an arbitrary vector $\bold x$. Let's choose $\bold x\in W$
and from $P\bold x=\bold x$ for such a vector $\bold x$ we find that
$f(g)\bold x$ belongs to $W$. The invariance of $W$ is established. In
order to prove the second statement of the lemma let's write the
relationship \mythetag{4.7} for the projector $Q$:
$$
\hskip -2em
(Q\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)-f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, Q)\,\raise 1pt\hbox{$\sssize\circ$} \, Q=0.
\mytag{4.10}
$$
From $Q=1-P$ we get $Q\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)-f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, Q=f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P-P
\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)$. Therefore the relationship \mythetag{4.10} is rewritten
as
$$
f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P-P\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)+(P\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)-f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P)\,\raise 1pt\hbox{$\sssize\circ$} \, P=0.
$$
Then, taking into account \mythetag{4.7}, we reduce it to the relationship
\mythetag{4.8}, which means that the operators $P$ and $f(g)$ do commute.
The lemma is proved.\par
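The commutation condition \mythetag{4.8} is easy to test numerically. A sketch, assuming the permutation representation of $S_3$ on $\Bbb R^3$, where both $W=\langle(1,1,1)\rangle$ and its orthogonal complement are invariant:

```python
import numpy as np
from itertools import permutations

# Assumed example: the permutation representation of S3 on R^3.  Both
# W = span((1,1,1)) and its orthogonal complement are invariant, so by
# the lemma the projector P onto W must commute with every f(g), cf. (4.8).
def perm_matrix(p):
    M = np.zeros((3, 3))
    for i, j in enumerate(p):
        M[j, i] = 1.0          # sends e_i to e_{p(i)}
    return M

ops = [perm_matrix(p) for p in permutations(range(3))]
P = np.full((3, 3), 1.0 / 3.0)  # projector onto span((1,1,1))
print(all(np.allclose(P @ F, F @ P) for F in ops))
```

Since the permutation matrices are orthogonal, the complement of $W$ here is invariant automatically, which is why the full commutation relationship \mythetag{4.8}, and not only \mythetag{4.7}, holds.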
The second proposition of the lemma~\mythelemma{4.1} can be
generalized for the case where $V$ is expanded into a direct sum of
several subspaces. Assume that $V=W_1\oplus\,\ldots\,\oplus W_s$.
Let's recall (see \mybookcite{1}) that this expansion uniquely fixes
a concordant family of projection operators $P_1,\,\ldots,\,P_s$.
They obey the following concordance relationships:
$$
\xalignat 2
&(P_i)^2=P_i,&& P_1+\ldots+P_s=1,\\
&P_i\,\raise 1pt\hbox{$\sssize\circ$} \, P_j=0\text{\ \ for \ }i\neq j,
&&P_i\,\raise 1pt\hbox{$\sssize\circ$} \, P_j=P_j\,\raise 1pt\hbox{$\sssize\circ$} \, P_i.
\endxalignat
$$
Moreover, $W_i=\operatorname{Im} P_i$. The condition of invariance of all subspaces
$W_i$ in the expansion $V=W_1\oplus\,\ldots\,\oplus W_s$ with respect to
representation operators of a representation $(f,G,V)$ in terms of the
corresponding projection operators is formulated in the following lemma.
We leave its proof to the reader.
\mylemma{4.2} The expansion $V=W_1\oplus\ldots\oplus W_s$ is an expansion
of $V$ into a direct sum of invariant subspaces of the representation
$(f,G,V)$ if and only if all projection operators $P_i$ associated with the
expansion $V=W_1\oplus\ldots\oplus W_s$ commute with the representation
operators $f(g)$.
\endproclaim
Let's consider a completely reducible finite-dimensional representation
$(f,G,V)$ split into a direct sum of its restrictions to invariant subspaces
$f=\varphi\oplus\psi$. Assume that the restriction $\varphi$ of $f$ to the
subspace $W$ is reducible and assume that $W_1$ is its non-trivial invariant
subspace: $\{0\}\subsetneq W_1\subsetneq W$. Then the subspace $W_1$ is
invariant with respect to $f$. It has an invariant complement $U_1$. Let's
consider the expansions
$$
\xalignat 2
&\hskip -2em
V=W\oplus U,
&&V=W_1\oplus U_1.
\mytag{4.11}
\endxalignat
$$
Note that $W_1\subset W$, hence, $W+U_1=V$. Therefore the dimensions of the
subspaces in \mythetag{4.11} are related as follows:
$$
\dim(W)+\dim(U_1)-\dim(V)=\dim(W)-\dim(W_1)>0.
$$
From this relationship, we see that the intersection $W_2=W\cap U_1$ is
non-zero and its dimension is given by the formula
$$
\hskip -2em
\dim(W\cap U_1)=\dim(W)-\dim(W_1).
\mytag{4.12}
$$
Now note that $W_1\cap W_2=\{0\}$ since $W_2\subset U_1$. Therefore, due to
\mythetag{4.12} the subspace $W$ is expanded into the direct sum
$$
\pagebreak
W=W_1\oplus W_2,
$$
each summand in which is invariant with respect to $f$ and, hence, with
respect to $\varphi$. Thus, we have proved the following important
theorem.
\mytheorem{4.1} The restriction of a completely reducible finite-dimensional
representation to an invariant subspace is completely reducible.
\endproclaim
The next theorem on the expansion into a direct sum is an immediate
consequence of the theorem~\mythetheorem{4.1}.
\mytheorem{4.2} Each finite-dimensional completely reducible representation
$f$ is expanded into a direct sum of irreducible representations
$$
\xalignat 2
&\hskip -2em
f=f_1\oplus\ldots\oplus f_k,
&&V=W_1\oplus\ldots\oplus W_k,
\quad
\mytag{4.13}
\endxalignat
$$
where each $f_i$ is a restriction of $f$ to the corresponding invariant
subspace $W_i$.
\endproclaim
Note that in general the expansion \mythetag{4.13} is not unique. Let's
consider two expansions of $f$ into irreducible components:
$$
\xalignat 2
&\hskip -2em
f=f_1\oplus\ldots\oplus f_k,
&&V=W_1\oplus\ldots\oplus W_k,
\quad\\
\vspace{-1.2ex}
&&&\mytag{4.14}\\
\vspace{-1.2ex}
&\hskip -2em
f=\tilde f_1\oplus\ldots\oplus\tilde f_q,
&&V=\tilde W_1\oplus\ldots\oplus\tilde W_q.
\quad
\endxalignat
$$
The extent of differences in two expansions \mythetag{4.14} is determined
by the following Jordan-H\"older theorem.
\mytheorem{4.3} The numbers of irreducible components in the expansions
\mythetag{4.14} are the same: $q=k$, and there is a permutation
$\sigma \in S_k$ such that $(f_i,G,W_i)\cong (\tilde f_{\sigma i},G,
\tilde W_{\sigma i})$.
\endproclaim
The expansions \mythetag{4.14} are isomorphic up to a permutation
of components. However, we should emphasize that the isomorphism does
not mean the coincidence of these expansions.\par
We shall prove the Jordan-H\"older theorem by induction on the number
of components $k$ in the first expansion \mythetag{4.14}.\par
{\bf The base of the induction}: $k=1$, $V=W_1$. In this case the
representation $f=f_1$ is irreducible. Therefore $q=1=k$ and $\tilde W_1
=V$, $\tilde f_1=f=f_1$. The base of the induction is proved.\par
{\bf The inductive step}. Assume that the theorem is valid for
representations possessing at least one expansion of the form
\mythetag{4.13} with the length $k-1$. For the representation $f$ we
introduce the following notations:
$$
\hskip -2em
\aligned
&\tilde V_i=\tilde W_1\oplus\ldots\oplus\tilde W_i\text{\ \ where \ }
i=1,\,\ldots,\,q,\\
&U=W_1\oplus\ldots\oplus W_{k-1}\text{\ \ and \ }\tilde U_i=\tilde V_i
\cap U.
\endaligned
\mytag{4.15}
$$
All of the subspaces in \mythetag{4.15} are invariant with respect to $f$.
Moreover, $V=U\oplus W_k$ and there are two chains of inclusions:
$$
\hskip -2em
\aligned
&\{0\}\subsetneq\tilde V_1\subsetneq\ldots\subsetneq\tilde V_q=V,\\
&\{0\}\subsetneq\tilde U_1\subsetneq\ldots\subsetneq\tilde U_q=U.
\endaligned
\mytag{4.16}
$$
Let $h\!:\,V\to V/U$ be a canonical projection onto the factorspace. Let's
denote by $\varphi$ the factorrepresentation in the factorspace $V/U$ and
consider the chain of invariant subspaces for it
$$
\{0\}\subseteq h(\tilde V_1)\subseteq\ldots\subseteq h(\tilde V_q)
=h(V)=V/U\cong W_k.
$$
Due to the isomorphism $\varphi\cong f_k$ we conclude that $\varphi$ is
irreducible. Hence the above chain does actually look like
$$
\{0\}=h(\tilde V_1)=\ldots=h(\tilde V_s)\subsetneq h(\tilde V_{s+1})
=\ldots=h(\tilde V_q)=h(V).
$$
Therefore $\tilde V_i\subseteq U$ and $\tilde U_i=\tilde V_i$ for
$i\leqslant s$. For $i\geqslant s+1$ we use the isomorphisms
$\tilde V_i/\tilde U_i\cong h(\tilde V_i)=h(\tilde V_{i+1})\cong
\tilde V_{i+1}/\tilde U_{i+1}$. But $\tilde V_i\subsetneq\tilde V_{i+1}$,
hence, $\tilde U_i\subsetneq\tilde U_{i+1}$. Then from \mythetag{4.16}
we get
$$
\hskip -2em
\{0\}\subsetneq\tilde U_1\subsetneq\ldots\subsetneq\tilde U_s
=\tilde U_{s+1}\subsetneq\ldots\subsetneq\tilde U_q=U.
\mytag{4.17}
$$
The equality $\tilde U_s=\tilde U_{s+1}$ follows from the irreducibility
of the factorrepresentation $\varphi_{s+1}\cong\varphi\cong f_k$ in the
factorspace $\tilde V_{s+1}/\tilde V_s$ and from the inclusions $\tilde
V_s=\tilde U_s=\tilde U_{s+1}\subsetneq\tilde V_{s+1}$. Relying upon
complete reducibility of $f$, we choose invariant complements $\tilde
W_{i+1}$ for $\tilde U_i$ in $\tilde U_{i+1}$, i\.\,e\. $\tilde U_{i+1}
=\tilde U_i\oplus\tilde W_{i+1}$. Then $\tilde V_i+\tilde U_{i+1}=\tilde
V_i\oplus\tilde W_{i+1}$ is an invariant subspace in $\tilde V_{i+1}$
containing $\tilde V_i$ and not coinciding with it. But the
factorrepresentation in $\tilde V_{i+1}/\tilde V_i$ is isomorphic to
$\tilde f_{i+1}$ and, hence, is irreducible. Therefore $\tilde V_i
\oplus\tilde W_{i+1}=\tilde V_{i+1}$ and the restriction of $f$ to
$\tilde W_{i+1}$ is isomorphic to $\tilde f_{i+1}$. From \mythetag{4.15}
and \mythetag{4.17} we get
$$
\hskip -2em
\aligned
&U=W_1\oplus\ldots\oplus W_s\oplus W_{s+1}\oplus\ldots\oplus W_{k-1},\\
&U=\tilde W_1\oplus\ldots\oplus\tilde W_s\oplus\tilde W_{s+2}\oplus
\ldots\oplus\tilde W_q.
\endaligned
\mytag{4.18}
$$
Now it is sufficient to apply the inductive hypothesis to \mythetag{4.18}.
As a result we get $k=q$ and find $\sigma i$ for $i=1,\,\ldots,\,k-1$. The
isomorphism of $f_k$ and the factorrepresentation $\varphi_{s+1}$ in
$\tilde V_{s+1}/\tilde V_s$ yields $\sigma k=s+1$ since $\varphi_{s+1}
\cong\tilde f_{s+1}$ by construction of the subspaces \mythetag{4.15}.
The Jordan-H\"older theorem is proved.\par
\smallskip
Due to the theorems~\mythetheorem{4.1} and \mythetheorem{4.2} it is
natural to introduce the following terminology.\par
\mydefinition{4.3} An invariant subspace $W\subseteq V$ is called an
{\it irreducible subspace\/} for a representation $(f,G,V)$ if the
restriction of $f$ to $W$ is irreducible.
\enddefinition
The following theorem yields a tool for verifying the complete
reducibility of representations.\par
\mytheorem{4.4} A finite-dimensional representation $(f,G,V)$ is
completely reducible if and only if the set of all its irreducible
subspaces span the whole space $V$.
\endproclaim
\demo{Proof} Let $\{W_\alpha\}_{\alpha\in A}$ be the set of all irreducible
subspaces in $V$. The number of such subspaces could be infinite even in the
case of a finite-dimensional representation. \pagebreak Let's denote by $W$
the sum of all irreducible subspaces $W_\alpha$:
$$
W=\sum_{\alpha\in A}W_\alpha=\lower 3pt\hbox{$\left<\raise 3pt
\hbox{$\dsize\bigcup_{\alpha\in A}W_\alpha$}\right>$}.
$$
The theorem~\mythetheorem{4.4} says that the condition $W=V$ is necessary
and sufficient for the representation $f$ to be completely reducible. The
necessity of this condition follows from the theorem~\mythetheorem{4.2}.
Let's prove its sufficiency. Let $U=U_0$ be an invariant subspace for $f$
and $\{0\}\neq U_0\neq V$. Then for each irreducible subspace $W_\alpha$
we have two mutually exclusive options:
$$
W_\alpha\subseteq U_0\text{\ \ or \ }W_\alpha\cap U_0=\{0\}.
$$
And there is at least one subspace $W_{\alpha_1}$ for which the second
option is valid: $W_{\alpha_1}\cap U_0=\{0\}$. Indeed, otherwise we would
have $W\subseteq U_0$, which contradicts $W=V$. Let's consider the sum
$$
U_1=U_0+W_{\alpha_1}=U_0\oplus W_{\alpha_1}.
$$
The subspace $U_1$ is invariant with respect to $f$. If $U_1\neq V$, we
repeat our considerations with $U_1$ instead of $U_0$. As a result we get
a new invariant subspace
$$
U_2=U_1\oplus W_{\alpha_2}=U_0\oplus W_{\alpha_1}\oplus W_{\alpha_2}.
$$
The process of adding new direct summands to $U_0$ will terminate at some
step since $\dim V<\infty$. As a result we get
$$
U_k=U_0\oplus(W_{\alpha_1}\oplus\ldots\oplus W_{\alpha_k})=V.
$$
Then the subspace $W=W_{\alpha_1}\oplus\ldots\oplus W_{\alpha_k}$ is a
required invariant direct complement for $U=U_0$. The theorem is proved.
\qed\enddemo
\head
\SectionNum{5}{21} Schur's lemma and some corollaries of it.
\endhead
\rightheadtext{\S\,5. Schur's lemma and some corollaries \dots}
The theorem~\mythetheorem{4.2} proved in previous section says that
each completely reducible representation is expanded into a direct sum of
its irreducible components. In this section we study these irreducible
components by themselves. Schur's lemma plays the central role in
this. We formulate two versions of this lemma, the second version being
a strengthening of the first one, but for a more special case.
\mylemmawithtitle{5.1}{ ({\bf Schur's lemma})} Let $(f,G,V)$ and $(h,G,W)$
be two irreducible representations of a group $G$. Each homomorphism $A$
relating these two representations either is identically zero or is an
isomorphism.
\endproclaim
Before proving Schur's lemma we consider the following theorem
having a separate value.
\mytheorem{5.1} If a linear mapping $A\!:\,V\to W$ is a homomorphism of
representations from $(f,G,V)$ to $(h,G,W)$, then its kernel $\operatorname{Ker} A$ is
an invariant subspace for $f$, while its image $\operatorname{Im} A$ is an invariant
subspace for $h$.
\endproclaim
\demo{Proof} Let $\bold y=A\bold x$ be a vector from the image of the
mapping $A$. We apply the operator $h(g)$ to it and use the relationship
\mythetag{1.1}. Then we get
$$
h(g)\bold y=h(g)A\bold x=Af(g)\bold x.
$$
Now it is clear that $h(g)\bold y\in\operatorname{Im} A$, i\.\,e\. $\operatorname{Im} A$ is an
invariant subspace for $h$. The invariance of $\operatorname{Ker} A$ is proved similarly.
Assume that $\bold x\in\operatorname{Ker} A$. Let's apply the operator $f(g)$ to $\bold x$
and then apply the mapping $A$. Taking into account \mythetag{1.1}, we get
$$
Af(g)\bold x=h(g)A\bold x=0
$$
since $A\bold x=0$. Hence, $f(g)\bold x\in\operatorname{Ker} A$. The invariance of the
kernel $\operatorname{Ker} A$ with respect to $f$ is proved.
\qed\enddemo
Now let's proceed to proving Schur's lemma~\mythelemma{5.1}. The case
where $A=0$ is trivial. In order to exclude this case assume that
$A\neq 0$. Then $\operatorname{Im} A\neq\{0\}$. According to the
theorem~\mythetheorem{5.1} proved just above, $\operatorname{Im} A$ is invariant with
respect to the representation $h$. Since $h$ is irreducible, we get
$\operatorname{Im} A=W$. This means that the homomorphism $A$ is a surjective linear
mapping $A\!:\,V\to W$.\par
The kernel $\operatorname{Ker} A$ of the mapping $A\!:\,V\to W$ is invariant with
respect to $f$. Since $f$ is an irreducible representation, we have two
options $\operatorname{Ker} A=\{0\}$ or $\operatorname{Ker} A=V$. The second option leads to the
trivial case $A=0$, which is excluded. Therefore, $\operatorname{Ker} A=\{0\}$. This
means that $A\!:\,V\to W$ is an injective linear mapping. Being surjective
and injective simultaneously, this mapping is bijective. Hence, it is
an isomorphism of $(f,G,V)$ and $(h,G,W)$. Schur's lemma is proved.\par
Now let's consider an operator $A\!:\,V\to V$ that intertwines an
irreducible representation $(f,G,V)$ with itself. The relationship
\mythetag{1.1} written for this case means that $A$ commutes with all
representation operators $f(g)$. The second version of Schur's lemma
describes this special case $f=h$.
\mylemmawithtitle{5.2}{ ({\bf Schur's lemma})} Each operator $A\!:\,V
\to V$ that commutes with all operators of an irreducible
finite-dimen\-sional representation $(f,G,V)$ in a linear vector
space $V$ over the field of complex numbers $\Bbb C$ is a scalar
operator, i\.\,e\. $A=\lambda\cdot 1$, where $\lambda\in\Bbb C$.
\endproclaim
\demo{Proof} Let $\lambda$ be an eigenvalue of the operator $A$ and
let $V_\lambda\neq\{0\}$ be the corresponding eigenspace. If $\bold x
\in V_\lambda$ then $A\bold x=\lambda\,\bold x$. Moreover, from the commutation
condition of the operators $A$ and $f(g)$ we obtain
$$
Af(g)\bold x=f(g)A\bold x=\lambda f(g)\bold x.
$$
Hence, $f(g)\bold x$ is also a vector of the eigenspace $V_\lambda$. In
other words, $V_\lambda$ is an invariant subspace. Since $V_\lambda\neq
\{0\}$ and since $f$ is irreducible, we get $V_\lambda=V$. Hence,
$A\bold x=\lambda\,\bold x$ for an arbitrary vector $\bold x\in V$. This means
that $A=\lambda\cdot 1$.
\qed\enddemo
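The commutant of a representation, i.e. the space of all operators $A$ commuting with every $f(g)$, can be computed as the null space of a linear system: the condition $A\,f(g)=f(g)\,A$ is linear in $A$. A sketch in Python, assuming the two-dimensional irreducible representation of $S_3$, for which the commutant is scalar even over the reals:

```python
import numpy as np

# The commutation condition A F = F A is linear in A; in column-major
# vectorized form it reads (I (x) F - F^T (x) I) vec(A) = 0 for each
# generator F.  Assumed example: the 2-dimensional irreducible
# representation of S3 over the reals, whose commutant is scalar even
# over R (Schur's lemma as stated needs C in general).
c, sn = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -sn], [sn, c]])        # rotation through 120 degrees
s = np.array([[1.0, 0.0], [0.0, -1.0]])  # reflection

M = np.vstack([np.kron(np.eye(2), F) - np.kron(F.T, np.eye(2))
               for F in (r, s)])
null_dim = 4 - np.linalg.matrix_rank(M)
print(null_dim)   # dimension of the commutant: scalar operators only
```

It suffices to impose the condition for the generators $r$ and $s$, since an operator commuting with the generators commutes with every element of the group.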
Note that the condition of finite-dimensionality of the representation
$f$ and the condition that the vector space $V$ is complex in Schur's
lemma~\mythelemma{5.2} are essential. Without these conditions one cannot
guarantee the existence of a non-trivial eigenspace.\par
Now we apply Schur's lemma to investigate tensor products of
representations of some special sort. For this reason we need to define
the concept of tensor product for representations.\par
Let $(f,G,V)$ and $(h,G,W)$ be two representations of a group $G$. We
define the operators $\varphi(g)$ acting within the tensor product
$V\otimes W$ by setting
$$
\hskip -2em
\varphi(g)(\bold v\otimes\bold w)=(f(g)\bold v\otimes h(g)\bold w)
\text{\ \ for all \ }g\in G.
\mytag{5.1}
$$
The operators $\varphi(g)$ constitute a new representation of the group $G$.
The proof of this proposition and verifying the correctness of the formula
\mythetag{5.1} are left to the reader.
\mydefinition{5.1} The representation $(\varphi,G,V\otimes W)$ given by
the operators \mythetag{5.1} is called the {\it tensor product\/} of the
representations $f$ and $h$. It is denoted $\varphi=f\otimes h$.
\enddefinition
Note that the construction \mythetag{5.1} is easily generalized for
the case of several representations.\par
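In matrix form the operators \mythetag{5.1} are Kronecker products. A sketch, assuming two copies of the rotation representation of $\Bbb Z/4\Bbb Z$ on $\Bbb R^2$ as the factors:

```python
import numpy as np

# On decomposable vectors v (x) w the operator (5.1) acts as the
# Kronecker product f(g) (x) h(g) of matrices.  Assumed example: two
# copies of the rotation representation of Z/4Z on R^2.
R = np.array([[0.0, -1.0], [1.0, 0.0]])        # rotation through 90 degrees
f = lambda g: np.linalg.matrix_power(R, g % 4)
phi = lambda g: np.kron(f(g), f(g))            # (f (x) f)(g)

# By the mixed-product property of the Kronecker product,
# phi(g1) phi(g2) = phi(g1 + g2), so phi is again a representation.
print(all(np.allclose(phi(g1) @ phi(g2), phi(g1 + g2))
          for g1 in range(4) for g2 in range(4)))
```

The mixed-product property $(A\otimes B)(C\otimes D)=(AC)\otimes(BD)$ is precisely what makes $\varphi=f\otimes h$ a homomorphism.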
Assume that the representation $f$ in the construction \mythetag{5.1}
is irreducible. As a second tensorial multiplicand in \mythetag{5.1} we
choose the trivial representation $i$ given by the identity operators
$i(g)=1$ for all $g\in G$. In the case where $\dim W>1$ the tensor product
$f\otimes i$ is a reducible representation. Let's prove this fact. Assume
that $\bold e_1,\,\ldots,\,\bold e_m$ is a basis in the space $W$. Let's
denote by $W_k$ one-dimensional subspaces spanned by basis vectors
$\bold e_k$ and then consider the subspaces
$$
\hskip -2em
V\otimes W_k,\quad k=1,\,\ldots,\,m.
\mytag{5.2}
$$
The subspaces \mythetag{5.2} are invariant within $V\otimes W$ with
respect to the operators of the representation $f\otimes i$. The
restrictions of $f\otimes i$ to $V\otimes W_k$ are all isomorphic to
$f$. Hence, the subspaces \mythetag{5.2} are irreducible. These facts
are simple and are verified immediately.\par
Now note that $V\otimes W=V\otimes W_1\oplus\ldots\oplus V\otimes W_m$.
The space $V\otimes W$ is a sum of its irreducible subspaces $V\otimes W_k$.
Hence it is spanned by these subspaces. Therefore, it is sufficient to
apply the theorem~\mythetheorem{4.4}. As a result we get the following
proposition.
\mytheorem{5.2} The tensor product $f\otimes i$ of a finite-dimensional
irreducible representation $f$ and a trivial finite-dimensional
representation $i$ is completely reducible.
\endproclaim
By means of this theorem one can describe the structure of all invariant
subspaces of the tensor product $f\otimes i$.
\mytheorem{5.3} If under the assumptions of the theorem~\mythetheorem{5.2}
$f$ and $i$ are representations in complex linear vector spaces $V$ and $W$,
then each invariant subspace of the representation $f\otimes i$ is
a tensor product $U=V\otimes \tilde W$, where $\tilde W$ is some subspace
of $W$.
\endproclaim
\demo{Proof} Let $U$ be some invariant subspace of $f\otimes i$ in
$V\otimes W$. Due to the theorem~\mythetheorem{5.2} the representation
$f\otimes i$ is completely reducible. Therefore $U$ has an invariant
complement $\tilde U$. The expansion $V\otimes W=U\oplus\tilde U$ means
that we can define a projection operator $P$ that projects onto $U$
parallel to $\tilde U$. The invariance of both subspaces $U$ and $\tilde U$
means that $P$ commutes with all operators of the representation
$\varphi=f\otimes i$.\par
Let $A\!:\,V\otimes W\to V\otimes W$ be some operator in the tensor
product $V\otimes W$. In our proof $A=P$, however, we return to this
special case a little bit later. Let's apply the operator $A$ to
$\bold x\otimes\bold e_j$ and write the result as
$$
\hskip -2em
A(\bold x\otimes\bold e_j)=\sum^m_{k=1}A^k_j(\bold x)\otimes\bold e_k.
\mytag{5.3}
$$
Here $\bold e_1,\,\ldots,\,\bold e_m$ is some basis in $W$. Since basis
vectors are linearly independent, the coefficients $A^k_j(\bold x)\in V$
in the expansion \mythetag{5.3} are unique. They are changed only if we
change a basis, the transformation rule for them being analogous to that
for components of a tensor. This fact is not important for us, since in
our considerations we will not change the basis $\bold e_1,\,\ldots,\,
\bold e_m$.\par
Let's study the dependence of $A^k_j(\bold x)$ on $\bold x$. It is
clear that it is linear. Therefore each coefficient $A^k_j(\bold x)$ in
\mythetag{5.3} defines a linear operator $A^k_j\!:\,V\to V$. Let's write
the commutation relationship for $A$ and $\varphi(g)$. It is sufficient
to write this relationship as applied to the vectors of the form
$\bold x\otimes\bold e_j$. From \mythetag{5.3} we derive
$$
\hskip -2em
\aligned
&A\,\raise 1pt\hbox{$\sssize\circ$} \,\varphi(g)(\bold x\otimes\bold e_j)=\sum^m_{k=1}
A^k_j\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)(\bold x)\otimes \bold e_k,\\
&\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A(\bold x\otimes\bold e_j)=\sum^m_{k=1}
f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A^k_j(\bold x)\otimes \bold e_k.
\endaligned
\mytag{5.4}
$$
To write \mythetag{5.4} it is sufficient to recall that $\varphi=
f\otimes i$, where $i$ is a trivial representation. From \mythetag{5.4},
it is easy to see that the commutation relationship $A\,\raise 1pt\hbox{$\sssize\circ$} \,\varphi(g)
=\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A$ is equivalent to
$$
\hskip -2em
f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A^k_j=A^k_j\,\raise 1pt\hbox{$\sssize\circ$} \, f(g).
\mytag{5.5}
$$
Thus, all of the linear operators $A^k_j$ commute with the operators
$f(g)$ of the representation $f$.\par
The next step in the proof is to return to the projection operator $A=P$
(see above) and to apply Schur's lemma to the operators $A^k_j=P^k_j$. The
projector $P$ commutes with $\varphi(g)$, therefore the relationships
\mythetag{5.5} hold for $A^k_j=P^k_j$. Applying Schur's
lemma~\mythelemma{5.2}, we get
$$
\hskip -2em
P^k_j=\lambda^k_j\cdot 1,
\mytag{5.6}
$$
where $\lambda^k_j$ are some complex numbers. Substituting \mythetag{5.6}
into the expansion \mythetag{5.3}, for the operator $P$ we derive:
$$
\hskip -2em
P(\bold x\otimes\bold e_j)=\sum^m_{k=1}P^k_j(\bold x)\otimes\bold e_k=
\bold x\otimes\left(\,\shave{\sum^m_{k=1}}\lambda^k_j\,\bold e_k\right)\!.
\mytag{5.7}
$$
The relationship \mythetag{5.7} shows that it is natural to define a linear
operator $Q\!:\,W\to W$ given by its values on basis vectors:
$$
\hskip -2em
Q(\bold e_j)=\sum^m_{k=1}\lambda^k_j\,\bold e_k.
\mytag{5.8}
$$
Due to \mythetag{5.8} we can rewrite \mythetag{5.7} as
$$
\hskip -2em
P(\bold x\otimes\bold e_j)=\bold x\otimes Q(\bold e_j).
\mytag{5.9}
$$
\par
Now let's remember that $P$ is a projection operator. Hence, $P^2=P$
(see \mybookcite{1}). Combining this relationship with \mythetag{5.9},
we derive $Q^2=Q$. Therefore, $Q$ is also a projection operator. Let's
denote $\tilde W=\operatorname{Im} Q\subseteq W$. Then for the invariant subspace
$U\subseteq V\otimes W$ we get
$$
\hskip -2em
U=\operatorname{Im} P=V\otimes\operatorname{Im} Q=V\otimes\tilde W.
\mytag{5.10}
$$
The formula \mythetag{5.10} describes the structure of all invariant
subspaces of the representation $f\otimes i$. Thus, the
theorem~\mythetheorem{5.3} is proved.
\qed\enddemo
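The statement of the theorem can be tested numerically. Below is a minimal
sketch (assuming Python with NumPy; the two-dimensional irreducible
representation of the symmetric group $S_3$ serves as a sample $f$, and
$i$ is the trivial representation on $W=\Bbb C^3$). Every operator commuting
with all $\varphi(g)=f(g)\otimes 1$ should have the form $1\otimes Q$, so
the commutant must have dimension $\dim\operatorname{End}(W)=9$:

```python
import numpy as np

# Sample irreducible f: the 2-dim representation of S_3 (illustrative choice).
S = np.array([[-1., 1.], [0., 1.]])   # image of a transposition
C = np.array([[0., -1.], [1., -1.]])  # image of a 3-cycle
I6 = np.eye(6)

# Generators of f (x) i acting on V (x) W = C^2 (x) C^3 (i trivial on W).
gens = [np.kron(S, np.eye(3)), np.kron(C, np.eye(3))]

# Solve X M = M X for both generators via row-major vectorization:
# vec(X M) = (I (x) M^T) vec(X) and vec(M X) = (M (x) I) vec(X).
A = np.vstack([np.kron(I6, M.T) - np.kron(M, I6) for M in gens])
commutant_dim = 36 - np.linalg.matrix_rank(A)
print(commutant_dim)  # 9 = dim End(W): commuting operators are 1 (x) Q
```

The dimension $9=\dim\operatorname{End}(W)$ matches the description of the
invariant projectors given by \mythetag{5.9}.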
The theorem~\mythetheorem{5.3} can be applied for proving the
following extremely useful fact.
\mytheorem{5.4} Let $f$ be a finite-dimensional irreducible representation
of some group $G$ in a complex linear vector space $V$. Then the set of
representation operators $f(g)$ spans the whole space of linear operators
$\operatorname{End}(V)$.
\endproclaim
\demo{Proof} Each representation $f$ in $V$ generates an associated
representation in the space of linear operators $\operatorname{End}(V)$. Indeed, if
$A\in\operatorname{End}(V)$, then we can define the action of $\psi(g)$ to $A$ as
the composition of operators $f(g)$ and $A$:
$$
\hskip -2em
\psi(g)(A)=f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A\text{\ \ for all \ }A\in\operatorname{End}(V).
\mytag{5.11}
$$
Let $F$ be the span of the set of all operators $f(g)$, i\.\,e\.
$$
F=\left<\{f(g)\!:\,g\in G\}\right>.
$$
It is easy to verify that the subspace $F$ is invariant with respect
to the representation $\psi$ defined by the formula \mythetag{5.11}.
\par
In order to apply the theorem~\mythetheorem{5.3} let's remember
that there is a canonical isomorphism $V\otimes V^*\cong\operatorname{End}(V)$,
where $V^*$ is the dual space for $V$ (the space of linear functionals
in $V$). This isomorphism is established by the mapping
$\sigma\!:\,V\times V^*\to\operatorname{End}(V)$ which is defined as follows:
$$
\sigma(\bold x\otimes\lambda)\bold y=\lambda(\bold y)\bold x
\text{\ \ for all \ }\bold x,\bold y\in V\text{ \ \ and \ }
\lambda\in V^*.\quad
\mytag{5.12}
$$
The proof of the correctness of the definition \mythetag{5.12} and
verifying that $\sigma$ is an isomorphism are left to the reader.
\par
It is easy to verify that the canonical isomorphism $\sigma$
is an isomorphism interlacing the representation $\psi$ from
\mythetag{5.11} and the representation $f\otimes i$, where $i$ is
the trivial representation of the group $G$ in $V^*$. The subspace
$F$ is mapped by $\sigma$ onto some invariant subspace $U_F\subseteq
V\otimes V^*$. Since $f$ is irreducible, now we can apply the
theorem~\mythetheorem{5.3}. It yields $U_F=V\otimes\tilde W$, where
$\tilde W\subseteq V^*$ is a subspace in $V^*$.\par
If we assume that $\tilde W\neq V^*$, then there is some vector
$\bold x\neq 0$ in $V$ such that $\lambda(\bold x)=0$ for all $\lambda
\in\tilde W$. Applying this fact to $F$ and taking into account
\mythetag{5.11} and \mythetag{5.12}, we find that this vector $\bold x
\neq 0$ belongs to the kernel of any operator $A\in F$. But the operators
$f(g)\in F$ are non-degenerate, their kernels are zero. This contradiction
shows that $\tilde W=V^*$ and $U_F=V\otimes V^*$. Due to the isomorphism
$\sigma$ then we derive $F=\operatorname{End}(V)$.
\qed\enddemo
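The theorem admits a direct numerical check. The sketch below (assuming
Python with NumPy) takes the two-dimensional irreducible representation of
$S_3$ as a sample $f$ and verifies that its six operators span the
four-dimensional space $\operatorname{End}(V)$:

```python
import numpy as np

S = np.array([[-1., 1.], [0., 1.]])   # image of the transposition (1 2)
C = np.array([[0., -1.], [1., -1.]])  # image of the 3-cycle (1 2 3)

# All six operators of the representation: e, c, c^2, s, s c, s c^2.
ops = [np.eye(2), C, C @ C, S, S @ C, S @ C @ C]

# Flatten each operator into a vector of length 4 and compute the rank.
span_dim = np.linalg.matrix_rank(np.stack([g.flatten() for g in ops]))
print(span_dim)  # 4 = dim End(C^2)
```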
\head
\SectionNum{6}{27} Irreducible representations of the direct product
of groups.
\endhead
\rightheadtext{\S\,6. Representations of the direct product \dots}
The direct product is the simplest construction for building
new groups from those already available. \pagebreak Let's recall
that the group $G_1\times G_2$ is the set of ordered pairs $(g_1,g_2)$
with the multiplication rule
$$
\align
(g_1,g_2)\,\cdot\,&(\tilde g_1,\tilde g_2)=(g_1\cdot\tilde g_1,g_2
\cdot\tilde g_2),\\
&\text{where \ }g_1,\tilde g_1\in G_1
\text{\ \ and \ }g_2,\tilde g_2\in G_2.
\endalign
$$
The construction of direct product of groups is in good agreement
with the construction of tensor product of their representations. Let
$(f_1,G_1,V_1)$ and $(f_2,G_2,V_2)$ be representations of the groups
$G_1$ and $G_2$ respectively. Let's define a representation of the
group $G=G_1\times G_2$ in the space $V_1\otimes V_2$ by the formula
$$
\hskip -2em
\gathered
f(g)(\bold x\otimes\bold y)=
f(g_1,g_2)(\bold x\otimes\bold y)=\\
=(f_1(g_1)\bold x)\otimes(f_2(g_2)\bold y).
\endgathered
\mytag{6.1}
$$
It is easy to verify that the definition \mythetag{6.1} is correct.
\mydefinition{6.1} The representation $(f,G_1\times G_2,V_1\otimes
V_2)$ given by the formula \mythetag{6.1} is called the {\it tensor
product\/} of the representations $(f_1,G_1,V_1)$ and $(f_2,G_2,V_2)$.
It is denoted $f=f_1\otimes f_2$.
\enddefinition
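The correctness of the definition reduces to the mixed-product property
of the Kronecker product: $(A\otimes B)(C\otimes D)=(AC)\otimes(BD)$.
A minimal sketch (assuming Python with NumPy; the four matrices are
arbitrary invertible samples standing in for $f_1(g_1)$, $f_1(\tilde g_1)$,
$f_2(g_2)$, $f_2(\tilde g_2)$):

```python
import numpy as np

# Hypothetical sample matrices; any invertible matrices work here.
A1, B1 = np.array([[0., -1.], [1., 0.]]), np.array([[1., 1.], [0., 1.]])
A2, B2 = np.array([[2., 0.], [0., 0.5]]), np.array([[1., 0.], [3., 1.]])

def f(g1, g2):
    return np.kron(g1, g2)   # (f1 (x) f2)(g1, g2) = f1(g1) (x) f2(g2)

# Homomorphism property, i.e. the correctness of the definition (6.1):
lhs = f(A1 @ B1, A2 @ B2)    # f applied to the product of group elements
rhs = f(A1, A2) @ f(B1, B2)  # product of the representation operators
assert np.allclose(lhs, rhs)
```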
Note that the earlier construction of the tensor product given
by the definition~\mythedefinition{5.1} is embedded into the
construction~\mythedefinition{6.1}. Indeed, let's consider the
diagonal in the direct product $G\times G$:
$$
D=\{(g_1,g_2)\in G\times G\!:\ g_1=g_2\}.
$$
It is easy to see that $D\cong G$. The restriction of the representation
\mythetag{6.1} to the diagonal $D$ coincides with the representation
\mythetag{5.1}, where $f=f_1$ and $h=f_2$.
\mytheorem{6.1} The tensor product $(f,G_1\times G_2,V_1\otimes V_2)$ of
two finite-dimensional representations $(f_1,G_1,V_1)$ and $(f_2,G_2,V_2)$
in complex vector spaces $V_1$ and $V_2$ is irreducible if and only if
both multiplicands $f_1$ and $f_2$ are irreducible.
\endproclaim
\demo{Proof} Let's begin with proving the necessity in the formulated
proposition. Assume that $(f,G_1\times G_2,V_1\otimes V_2)$ is irreducible.
And assume that the irreducibility condition for $(f_1,G_1,V_1)$ and
$(f_2,G_2,V_2)$ is broken. For the sake of certainty assume that the second
representation $(f_2,G_2,V_2)$ is reducible. Then $f_2$ has a non-trivial
invariant subspace $\{0\}\subsetneq W_2\subsetneq V_2$. But in this case
$V_1\otimes W_2$ is a non-trivial invariant subspace for $f=f_1\otimes f_2$.
This contradicts the irreducibility of the representation $f$. The necessity
is proved.\par
Let's prove the sufficiency. Assume that $(f_1,G_1,V_1)$ and
$(f_2,G_2,V_2)$ are irreducible. In order to prove the irreducibility of
$(f,G_1\times G_2,V_1\otimes V_2)$ we use the irreducibility criterion in
form of the theorem~\mythetheorem{3.1}. Let's choose an arbitrary vector
$\bold u\neq 0$ in $V_1\times V_2$ and consider its orbit. The vector
$\bold u$ can be written as
$$
\hskip -2em
\bold u=\bold x_1\otimes\bold y_1+\ldots+\bold x_k\otimes\bold y_k.
\mytag{6.2}
$$
Without loss of generality we can assume that the vectors $\bold y_1,
\,\ldots,\,\bold y_k$ in \mythetag{6.2} are linearly independent. The
expansion \mythetag{6.2} is not unique. However, if the linearly
independent vectors $\bold y_1,\,\ldots,\,\bold y_k$ are fixed, then
the corresponding vectors $\bold x_1,\,\ldots,\,\bold x_k$ are
determined uniquely. Without loss of generality we can assume them
to be nonzero.\par
Now let's apply the theorem~\mythetheorem{5.4}. Let $A\!:\,V_2\to
V_2$ be a linear operator satisfying the following condition:
$$
\hskip -2em
A\bold y_1=\bold y_1,\ A\bold y_2=0,\ \ldots,\ A\bold y_k=0.
\mytag{6.3}
$$
Since the vectors $\bold y_1,\,\ldots,\,\bold y_k$ in \mythetag{6.3}
are linearly independent, such an operator $A$ does exist. Applying
the theorem~\mythetheorem{5.4} to the representation $f_2$, we conclude
that the operator $A$ belongs to the span of the representation
operators, i\.\,e\.
$$
\pagebreak
A=\sum^q_{i=1}\alpha_i\,f_2(g_i)\text{, \ where \ }g_i\in G_2.
$$
Let's apply the operator $1\otimes A$ to the vector \mythetag{6.2}.
This yields
$$
\hskip -2em
(1\otimes A)\bold u=\sum^k_{i=1}\bold x_i\otimes A\bold y_i=
\bold x_1\otimes\bold y_1.
\mytag{6.4}
$$
On the other hand, for the same quantity we get
$$
\hskip -2em
(1\otimes A)\bold u=\sum^q_{i=1}\alpha_i\,(1\otimes f_2(g_i))\bold u
=\sum^q_{i=1}\alpha_i\,f(e_1,g_i)\bold u.
\mytag{6.5}
$$
Here $e_1$ is the unit element of the group $G_1$. Comparing
\mythetag{6.4} and \mythetag{6.5}, we see that the vector
$\bold x_1\otimes\bold y_1$ belongs to the orbit of the vector
$\bold u$ from \mythetag{6.2}. Due to the irreducibility of the
representation $f_1$ the orbit of the vector $\bold x_1$ spans
$V_1$. For the similar reasons the orbit of the vector $\bold y_1$
spans $V_2$. These facts mean that any two vectors $\bold x\in V_1$
and $\bold y\in V_2$ can be obtained as
$$
\hskip -2em
\aligned
&\bold x=\sum^r_{i=1}\beta_i\,f_1(g_i)\bold x_1\text{, \ where \ }
g_i\in G_1;\\
&\bold y=\sum^s_{j=1}\gamma_j\,f_2(g_j)\bold y_1\text{, \ where \ }
g_j\in G_2.
\endaligned
\mytag{6.6}
$$
From \mythetag{6.6} we immediately derive
$$
\bold x\otimes\bold y=\sum^r_{i=1}\sum^s_{j=1}\beta_i\,
\gamma_j\,f(g_i,g_j)(\bold x_1\otimes\bold y_1),
$$
where $(g_i,g_j)\in G_1\times G_2$. Hence, an arbitrary vector of the
form $\bold x\otimes\bold y$ belongs to the orbit of the vector
$\bold x_1\otimes\bold y_1$ from \mythetag{6.4}, and this vector in
turn belongs to the orbit of the vector $\bold u$ from \mythetag{6.2}.
However, we know that the vectors of the form $\bold x\otimes\bold y$
span the whole space $V_1\otimes V_2$. As a result we have proved that
the orbit of an arbitrary vector $\bold u\in V_1\otimes V_2$ spans the
whole space of the representation $f=f_1\otimes f_2$. According to the
theorem~\mythetheorem{3.1}, this representation is irreducible. Thus,
the theorem~\mythetheorem{6.1} is proved.
\qed\enddemo
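Combining this theorem with the theorem~\mythetheorem{5.4}, the operators
of an irreducible tensor product must span the whole space
$\operatorname{End}(V_1\otimes V_2)$. A numerical sketch (assuming Python
with NumPy; both factors are sampled by the two-dimensional irreducible
representation of $S_3$, so that $G_1\times G_2=S_3\times S_3$ acts on
$\Bbb C^4$):

```python
import numpy as np
from itertools import product

S = np.array([[-1., 1.], [0., 1.]])
C = np.array([[0., -1.], [1., -1.]])
G = [np.eye(2), C, C @ C, S, S @ C, S @ C @ C]  # irreducible 2-dim S_3 rep

# Representation of S_3 x S_3 on C^2 (x) C^2 = C^4 via Kronecker products.
ops = [np.kron(a, b) for a, b in product(G, G)]

# The 36 operators span the full 16-dimensional End(C^4).
rank = np.linalg.matrix_rank(np.stack([M.flatten() for M in ops]))
print(rank)  # 16 = dim End(C^4)
```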
\mytheorem{6.2} Any finite-dimensional irreducible representation
$\varphi$ of the direct product of two groups $G_1$ and $G_2$ in
a complex space $U$ is isomorphic to the tensor product of two
irreducible representations $(f_1,G_1,V_1)$ and $(f_2,G_2,V_2)$
of the groups $G_1$ and $G_2$.
\endproclaim
Let $\varphi(g_1,g_2)$ be the representation operators for the
representation $\varphi$ of the group $G_1\times G_2$ in the space $U$.
Then the operators of the form $\varphi(g_1,e_2)$, where $e_2$ is the
unit element of the group $G_2$, define a representation of the group
$G_1$. In general case it is reducible. Let $V_1\subseteq U$ be some
irreducible subspace in $U$. Denote
$$
\hskip -2em
\varphi_1(g_1)=\varphi(g_1,e_2)\text{, \ where \ }g_1\in G_1.
\mytag{6.7}
$$
The restrictions of the operators \mythetag{6.7} to $V_1$ define some
irreducible representation of the group $G_1$. We denote them as
$$
f_1(g_1)=\varphi_1(g_1)\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,V_1}=\varphi(g_1,e_2)\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,V_1}\text{, \ where \ } g_1\in G_1.
$$
By analogy to \mythetag{6.7} we introduce the following operators defining
some representation of the group $G_2$ in $U$:
$$
\hskip -2em
\varphi_2(g_2)=\varphi(e_1,g_2)\text{, \ where \ }g_2\in G_2.
\mytag{6.8}
$$
Then we denote by $F_2$ the span of the set of all operators \mythetag{6.8}.
It is a subspace in the space of the operators $\operatorname{End}(U)$:
$$
F_2=\left<\{\varphi_2(g_2)\!:\ g_2\in G_2\}\right>.
$$
The operators from $F_2$ commute with all operators \mythetag{6.7} since
the operators $\varphi_2(g_2)$ spanning $F_2$ commute with $\varphi_1(g_1)$.
For each operator $A\in F_2$ we denote by $\tilde A$ the restriction of $A$
to the subspace $V_1$. The operators $\tilde A$ should be treated as the
elements of the linear space $\tilde F_2\subseteq\operatorname{Hom}(V_1,U)$:
$$
\tilde A\!:\,V_1\to U.
$$\par
The operators $A\in F_2$ and $\tilde A\in\tilde F_2$ deserve a special
consideration. Let's define a subspace $V_A=AV_1=\tilde AV_1=\operatorname{Im}\tilde A
\subseteq U$. Since $A$ commutes with the operators \mythetag{6.7}, the subspace
$V_A$ is invariant with respect to the operators $\varphi_1(g_1)$.
Therefore we have a representation of the group $G_1$ in $V_A$:
$$
f_A(g_1)=\varphi_1(g_1)\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,V_A}=\varphi(g_1,e_2)\,\hbox{\vrule height 8pt
depth 10pt width 0.5pt}_{\,V_A}\text{, \ where \ }g_1\in G_1.
$$
The mapping $\tilde A\!:\,V_1\to V_A$ interlaces the representations $f_1$
and $f_A$ in $V_1$ and $V_A$. Indeed, we have
$$
\tilde A\,\raise 1pt\hbox{$\sssize\circ$} \, f_1(g_1)=A\,\raise 1pt\hbox{$\sssize\circ$} \,\varphi(g_1)\,\hbox{\vrule height 8pt
depth 10pt width 0.5pt}_{\,V_1}=\varphi(g_1)\,\raise 1pt\hbox{$\sssize\circ$} \, A\,\hbox{\vrule
height 8pt depth 10pt width 0.5pt}_{\,V_1}=f_A(g_1)\,\raise 1pt\hbox{$\sssize\circ$} \,\tilde A.
$$
The mapping $\tilde A\!:\,V_1\to V_A$ is surjective by its definition.
The kernel $\operatorname{Ker}\tilde A\subseteq V_1$ of this mapping is invariant with
respect to the representation $f_1$. Since $f_1$ is irreducible, we have
two mutually excluding options:
$$
\aligned
\operatorname{Ker}\tilde A=V_1\ &\Rightarrow\ \tilde A=0
\text{\ \ and \ }V_A=\{0\};\\
\operatorname{Ker}\tilde A=\{0\}\ &\Rightarrow\ \tilde A\text{\ \ is an
isomorphism and \ }f_1\cong f_A.
\endaligned\quad
\mytag{6.9}
$$
Let's study the second option in \mythetag{6.9}. Denote $W=V_1\cap V_A$.
The operators $f_1(g_1)$ and $f_A(g_1)$ upon restricting to $W$ do coincide.
Therefore $W\subseteq V_1$ is invariant with respect to $f_1$. Applying the
irreducibility of $f_1$ again, we get the following two options:
$$
\aligned
W=V_1\ &\Rightarrow\ V_A=V_1\text{\ \ and \ }f_A=f_1;\\
W=\{0\}\ &\Rightarrow\ V_A\cap V_1=\{0\}\text{\ \ and \ }f_A\cong f_1.
\endaligned\quad
\mytag{6.10}
$$
Combining \mythetag{6.9} and \mythetag{6.10}, we find
$$
\align
&V_A=\{0\},\quad f_A=0,\quad\tilde A=0;\\
&V_A=V_1,\quad f_A=f_1,\quad\tilde A=\lambda\cdot 1;
\mytag{6.11}\\
&V_A\cap V_1=\{0\},\quad f_A\cong f_1,\quad\tilde A\text{\ \ is an
isomorphism.}\qquad
\endalign
$$
The condition $\tilde A=\lambda\cdot 1$ in the second option of
\mythetag{6.11} follows from Schur's lemma~\mythelemma{5.2}.\par
Let $\bold u$ be some nonzero vector in $V_1$. We fix this vector
and consider the subspace $V_2\subseteq U$ obtained by applying the
operators $A\in F_2$ upon the vector $\bold u$:
$$
V_2=F_2\bold u=\{\bold v\in U\!:\ \bold v=A\bold u\text{\ \ for some \ }
A\in F_2\}.\quad
\mytag{6.12}
$$
The subspace $V_2$ is invariant with respect to the operators
\mythetag{6.8}. Therefore we have a representation $(f_2,G_2,V_2)$
of the group $G_2$. It is given by the operators
$$
\hskip -2em
f_2(g_2)=\varphi_2(g_2)\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,V_2}=\varphi(e_1,g_2)\,\hbox{\vrule height 8pt depth 10pt
width 0.5pt}_{\,V_2}.
\mytag{6.13}
$$\par
Due to the definition \mythetag{6.12} for any vector $\bold y
\in V_2$ there is a mapping $\tilde A\in\tilde F_2$ such that
$\bold y=\tilde A\bold u$. Let's prove that such a mapping is uniquely
fixed by the vector $\bold y\in V_2$. According to \mythetag{6.11}, we
study three possible options.\par
If $\bold y=0$, then $\operatorname{Ker}\tilde A\neq 0$. Due to \mythetag{6.9}
the only operator $\tilde A\in\tilde F_2$ satisfying the condition
$\bold y=\tilde A\bold u$ is the identically zero mapping $\tilde A=0$.
This case corresponds to the first option in \mythetag{6.11}.\par
If $\bold y\neq 0$ and $\bold y\in V_1$, then from $\bold y=
\tilde A\bold u$ we derive that $\bold y\in V_1\cap V_A$. Hence the
intersection $V_1\cap V_A$ is nonzero and we have the first option
in \mythetag{6.10}, which is equivalent to the second option of
\mythetag{6.11}. Hence, $\tilde A=\lambda\cdot 1$ and $\bold y=\lambda
\bold u$. The number $\lambda$ relating two collinear vectors is
uniquely fixed by these two vectors. Therefore, the mapping
$\tilde A=\lambda\cdot 1$ is also unique.\par
And finally, the third case, where $\bold y\notin V_1$. Due to
\mythetag{6.11} in this case we have $V_1\cap V_A=\{0\}$ and the
mapping $\tilde A\!:\,V_1\to V_A$ is bijective. Assume for a while
that the condition $\bold y=\tilde A\bold u$ does not fix the mapping
$\tilde A\in\tilde F_2$ uniquely. Let $\tilde A_1$ and $\tilde A_2$ be two
such mappings. Their associated subspaces $V_{A_1}$ and $V_{A_2}$ do
coincide. Indeed, $\bold y\in V_{A_1}\cap V_{A_2}\neq\{0\}$. Hence,
$V_{A_1}\cap V_{A_2}$ is a non-trivial invariant subspace for the
irreducible representations $f_{A_1}\cong f_1$ and $f_{A_2}\cong f_1$.
So, $V_{A_1}\cap V_{A_2}=V_{A_1}=V_{A_2}$. Using $V_{A_1}=V_{A_2}$ and
the bijectivity of the mappings
$$
\xalignat 2
&\tilde A_1\!:V_1\to V_{A_1},
&&\tilde A_2\!:V_1\to V_{A_2},
\endxalignat
$$
we invert one of them and consider the operator
$\tilde A_3=\tilde A_2^{-1}\,\raise 1pt\hbox{$\sssize\circ$} \,\tilde A_1$. This is a non-degenerate
operator in $V_1$. It implements the automorphism of the representation
$f_1$, i\.\,e\. it interlaces the operators $f_1(g_1)$ with themselves:
$$
\tilde A_3\,\raise 1pt\hbox{$\sssize\circ$} \, f_1(g_1)=f_1(g_1)\,\raise 1pt\hbox{$\sssize\circ$} \,\tilde A_3.
$$
Using the irreducibility of $f_1$ and applying Schur's
lemma~\mythelemma{5.2}, we get $\tilde A_3=\lambda\cdot 1$. This
yields $\tilde A_1=\lambda\tilde A_2$. Now from the conditions
$\bold y=\tilde A_1\bold u$ and $\bold y=\tilde A_2\bold u$ we
derive $\lambda=1$. Hence, $\tilde A_2=\tilde A_1$. Thus, the
uniqueness of $\tilde A$ is established.\par
For the mapping $\tilde A\in\tilde F_2$, which we uniquely
determine from the condition $\bold y=\tilde A\bold u$, we use
the notation $\tilde A=\tilde A(\bold y)$. The dependence of
$\tilde A$ on the vector $\bold y$ can be treated as a mapping
$\tilde A\!:\,V_2\to\operatorname{Hom}(V_1,U)$. It is easy to verify that
this mapping is linear. It satisfies the equality
$$
\tilde A(f_2(g_2)\bold y)=\varphi_2(g_2)\,\raise 1pt\hbox{$\sssize\circ$} \,\tilde A(\bold y),
\mytag{6.14}
$$
where the operator $f_2(g_2)$ is given by \mythetag{6.13}. Let's prove
the equality \mythetag{6.14}. Remember that $\tilde A(\bold y)$ is
the restriction to $V_1$ of some operator $A_1\in F_2$ such that
$$
A_1\bold u=\tilde A(\bold y)\bold u=\bold y.
$$
But the operator $A_2=\varphi_2(g_2)\,\raise 1pt\hbox{$\sssize\circ$} \, A_1$ also belongs to $F_2$
(see the definition of the space $F_2$ above). For $A_2$ we derive
$$
\pagebreak
A_2\bold u=\varphi_2(g_2)A_1\bold u=\varphi_2(g_2)\bold y=
f_2(g_2)\bold y.
$$
Therefore the restriction of $A_2$ to $V_1$ coincides with
$\tilde A(f_2(g_2)\bold y)$. The equality \mythetag{6.14} is
proved.\par
The next step in proving the theorem~\mythetheorem{6.2} is to
apply the mapping $\tilde A(\bold y)$ for building the isomorphism
of the representation $f=f_1\otimes f_2$ and the representation
$\varphi$. But before doing it note that we have no information
on whether the representation
$f_2$ in \mythetag{6.13} is irreducible or not. Fortunately we can
assume $f_2$ to be irreducible due to the following reasons. Let
$\tilde V_2\subseteq V_2$ be some irreducible invariant subspace
for the representation \mythetag{6.13}. If $\bold u\in\tilde V_2$, then
$\tilde V_2=V_2$. This fact follows from the theorem~\mythetheorem{3.1}.
In the case where $\bold u\notin\tilde V_2$, we choose a nonzero vector
$\tilde\bold u\in\tilde V_2$ and take a mapping $\tilde A\in\tilde F_2$ such that
$\tilde A\bold u=\tilde\bold u$. We have already proved the existence and
uniqueness of such a mapping $\tilde A=\tilde A(\tilde\bold u)$. In our
case $\tilde A\neq 0$. Therefore, due to \mythetag{6.11} we see that the
mapping $\tilde A$ is bijective, it establishes the isomorphism of
representations $f_1\cong f_A$. Because of the isomorphism $f_1\cong f_A$
we can replace $f_1$ by $f_A$, which is also irreducible. The latter
representation is preferable since its space $V_A$ comprises the vector
$\tilde\bold u$. The orbit of the vector $\tilde\bold u$ spans the
irreducible subspace $\tilde V_2$ within the space of the representation
$\varphi_2$. For this reason we should come back to the beginning of
our constructions and assume that $V_1$ is exactly that irreducible
subspace of $\varphi_1$ which comprises some vector $\bold u$ generating
an irreducible subspace of the representation $\varphi_2$. Just above
we have demonstrated that such a choice of the subspace $V_1$ is
possible.\par
Thus, under a proper choice of the subspace $V_1$ both representations
$(f_1,G_1,V_1)$ and $(f_2,G_2,V_2)$ are irreducible. We consider their
tensor product $f=f_1\otimes f_2$ and then construct the mapping
$\sigma\!:\,V_1\otimes V_2\to U$ by means of the following formula:
$$
\hskip -2em
\sigma(\bold x\otimes\bold y)=\tilde A(\bold y)\bold x
\text{, \ where \ }\bold x\in V_1,\ \bold y\in V_2.
\mytag{6.15}
$$
Let's show that the mapping \mythetag{6.15} is an interlacing mapping for
the representations $(f_1\otimes f_2,G_1\times G_2,V_1\otimes V_2)$ and
$(\varphi,G_1\times G_2,U)$. Indeed, we easily derive
$$
\aligned
\hskip -2em
&\gathered
\varphi(g_1,g_2)\,\sigma(\bold x\otimes\bold y)=\varphi_1(g_1)
\varphi_2(g_2)\tilde A(\bold y)\bold x=\\
=\varphi_2(g_2)\tilde A(\bold y)\,\varphi_1(g_1)\bold x,
\endgathered\\
\vspace{4ex}
&\gathered
\sigma f(g_1,g_2)(\bold x\otimes\bold y)=\sigma((f_1(g_1)\bold x)
\otimes(f_2(g_2)\bold y))=\\
=\tilde A(f_2(g_2)\bold y)f_1(g_1)\bold x.
\endgathered
\endaligned
\mytag{6.16}
$$
The values of the right hand sides in two above formulas \mythetag{6.16}
do coincide due to \mythetag{6.14}. Therefore, from \mythetag{6.16} we
extract
$$
\varphi(g_1,g_2)\,\raise 1pt\hbox{$\sssize\circ$} \,\sigma=\sigma\,\raise 1pt\hbox{$\sssize\circ$} \, f(g_1,g_2).
$$
This is exactly the equality \mythetag{1.1} written for the representations
$f$ and $\varphi$. The mapping $\sigma$ implements an isomorphism of these
two representations. Note that
$$
\sigma(\bold u\otimes\bold u)=\tilde A(\bold u)\bold u=\bold u\neq 0.
$$
Therefore $\sigma\neq 0$. Now it is sufficient to use the irreducibility
of representations $f=f_1\otimes f_2$ and $\varphi$. Applying Schur's
lemma~\mythelemma{5.1}, we conclude that $\sigma$ is an isomorphism.
The irreducibility of $f$ is derived from the irreducibility of $f_1$
and $f_2$ due to the previous theorem. Thus, the proof of the
theorem~\mythetheorem{6.2} is completed.
\head
\SectionNum{7}{36} Unitary representations.
\endhead
\rightheadtext{\S\,7. Unitary representations.}
\mydefinition{7.1} A finite-dimensional complex linear vector space $V$
equipped with a symmetric positive sesquilinear form is called a {\it
Hermitian space}.
\enddefinition
Let's recall that a {\it sesquilinear form\/} in $V$ is a
complex-valued numeric function $\varphi(\bold x,\bold y)$ with two
vectorial arguments $\bold x,\bold y\in V$ such that it satisfies
the following four conditions:
\roster
\item\quad $\varphi(\bold x_1+\bold x_2,\bold y)=\varphi(\bold x_1,\bold y)
+\varphi(\bold x_2,\bold y)$;
\item\quad $\varphi(\alpha\,\bold x,\bold y)=\overline\alpha\,
\varphi(\bold x,\bold y)$;
\item\quad $\varphi(\bold x,\bold y_1+\bold y_2)=\varphi(\bold x,\bold y_1)
+\varphi(\bold x,\bold y_2)$;
\item\quad $\varphi(\bold x,\alpha\,\bold y)=\alpha\,
\varphi(\bold x,\bold y)$.
\endroster
The bar sign over $\alpha$ in the second condition is the complex
conjugation sign. The conditions \therosteritem{1}--\therosteritem{4}
are usually complemented with the conditions of symmetry
and positivity:
\roster
\item[5]\quad $\varphi(\bold x,\bold y)
=\overline{\varphi(\bold y,\bold x)}$;
\item\quad $\varphi(\bold x,\bold x)>0\text{\ \ for all \ }
\bold x\neq 0$.
\endroster
The condition \therosteritem{5} implies that $\varphi(\bold x,
\bold x)$ is a real number. The condition \therosteritem{6} strengthens
condition \therosteritem{5} requiring $\varphi(\bold x,\bold x)$ to be
a positive number. A form $\varphi(\bold x,\bold y)$ is called
{\it non-degenerate} if $\varphi(\bold x,\bold y)=0$ for all $\bold y\in V$
implies $\bold x=0$. Note that the positivity of a form implies its
non-degeneracy.\par
The symmetric positive form declared in the definition of a Hermitian
space is called the {\it Hermitian scalar product}. For this form we fix
the following notation:
$$
\varphi(\bold x,\bold y)=\langle\bold x|\bold y\rangle.
$$\par
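In coordinates the Hermitian scalar product of $\Bbb C^n$ is
$\langle\bold x|\bold y\rangle=\sum_i\overline{x_i}\,y_i$. The following
sketch (assuming Python with NumPy, whose `vdot` conjugates its first
argument) verifies the conditions \therosteritem{2}, \therosteritem{4},
\therosteritem{5}, and \therosteritem{6} on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
a = 2.0 - 1.0j

dot = np.vdot   # <x|y> = sum(conj(x_i) * y_i), conjugate-linear in x

assert np.isclose(dot(a * x, y), np.conj(a) * dot(x, y))       # condition (2)
assert np.isclose(dot(x, a * y), a * dot(x, y))                # condition (4)
assert np.isclose(dot(x, y), np.conj(dot(y, x)))               # symmetry (5)
assert dot(x, x).real > 0 and np.isclose(dot(x, x).imag, 0.0)  # positivity (6)
```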
Let $\bold e_1,\,\ldots,\,\bold e_n$ be a basis in a space $V$. The
quantities $g_{ij}=\langle\bold e_i|\bold e_j\rangle$ compose the Gram
matrix of the basis $\bold e_1,\,\ldots,\,\bold e_n$. They satisfy the
relationship $g_{ij}=\overline{g_{ji}\vphantom{\vrule height 5pt}}$.
It follows from the symmetry of the scalar product.\par
A basis, the Gram matrix of which is the unit matrix, is called an
{\it orthonormal basis}. Orthonormal bases do exist because each symmetric
sesquilinear form in a finite-dimensional space is diagonalizable.
\mydefinition{7.2} A linear operator $A\!:\,V\to V$ in a Hermitian space
$V$ is called a {\it Hermitian operator\/} if $\langle\bold x|A\bold y
\rangle=\langle A\bold x|\bold y\rangle$ for any two vectors $\bold x$
and $\bold y$ in $V$.
\enddefinition
There is the standard theory of Hermitian operators in
finite-dimensional Hermitian spaces. We give basic facts of this theory
without proofs for the reader to recall them.
\mytheorem{7.1} Hermitian operators of a finite-dimensional\linebreak
Hermitian space are in a one-to-one correspondence with symmetric
sesquilinear forms:
$$
\hskip -2em
\varphi_A(\bold x,\bold y)=\langle\bold x|A\bold y\rangle.
\mytag{7.1}
$$
Non-degenerate operators correspond to non-degenerate forms.
\endproclaim
\mydefinition{7.3} A Hermitian operator $A$ is called a {\it positive
operator\/} if the corresponding form $\varphi_A$ is positive.
\enddefinition
\mytheorem{7.2} Each Hermitian operator $A$ is diagonalizable, its
eigenvalues are real numbers, and eigenvectors corresponding to
distinct eigenvalues $\lambda_i\neq\lambda_j$ are perpendicular to
each other.
\endproclaim
\mytheorem{7.3} An operator $A$ is a Hermitian operator if and only
if its eigenvalues are real numbers and it is diagonalized in some
orthonormal basis.
\endproclaim
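The theorems~\mythetheorem{7.2} and \mythetheorem{7.3} can be illustrated
numerically. A sketch (assuming Python with NumPy; a sample Hermitian
matrix is produced by symmetrizing a random complex matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                       # a Hermitian operator

lam, U = np.linalg.eigh(A)                     # theorem 7.2 in matrix form
assert np.allclose(lam.imag, 0)                # eigenvalues are real
assert np.allclose(U.conj().T @ U, np.eye(4))  # orthonormal eigenbasis
assert np.allclose(U @ np.diag(lam) @ U.conj().T, A)
```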
The proofs of the theorems~\mythetheorem{7.1}, \mythetheorem{7.2},
and \mythetheorem{7.3} can be found in many standard textbooks on
linear algebra. Apart from them, we need one more theorem, which also
can be found in some textbooks, but it is less standard.
\mytheorem{7.4} Let $A$ be a diagonalizable operator such that its
eigenvalues $\lambda_1,\,\ldots,\,\lambda_n$ are real non-negative
numbers. Then there is a unique operator $B$ with eigenvalues $\mu_i
\geqslant 0$ such that $B^2=A$ and $B$ commutes with any operator $C$
that commutes with $A$. If $A$ is a Hermitian operator, then the
corresponding operator $B$ is a Hermitian operator too.
\endproclaim
The operator $B$ declared in the theorem~\mythetheorem{7.4} is
naturally called the {\it square root\/} of the operator $A$. Let's
prove its existence. Let $\bold e_1,\,\ldots,\,\bold e_n$ be a basis
composed by eigenvectors of the operator $A$ corresponding to its
eigenvalues $\lambda_1,\,\ldots,\,\lambda_n$. The operator $B$ is
defined through its action upon basis vectors:
$$
\hskip -2em
B\bold e_i=\sqrt{\lambda_i}\,\bold e_i,\qquad i=1,\,\ldots,\,n.
\mytag{7.2}
$$
Due to this definition the operator $B$ is diagonalized in the same
basis as the operator $A$, its eigenvalues $\mu_i=\sqrt{\lambda_i}$
are real and non-negative numbers.\par
Let's study the problem of commuting for the operators $B$ and $C$.
If the operator $A$ commutes with $C$, this means that
$$
\hskip -2em
(\lambda_i-\lambda_j)\,C^i_j=0,
\mytag{7.3}
$$
where $C^i_j$ are the matrix elements for the operator $C$ in the
basis $\bold e_1,\,\ldots,\,\bold e_n$. The condition \mythetag{7.3}
is equivalent to $C^i_j=0$ for all $\lambda_i\neq\lambda_j$. But
$\lambda_i\neq\lambda_j$ implies $\mu_i\neq\mu_j$. Therefore the
operator $B$ defined by the formula \mythetag{7.2} commutes with
any operator $C$ that commutes with $A$. If the operator $A$ is a
Hermitian operator, then the basis $\bold e_1,\,\ldots,\,\bold e_n$
can be chosen to be an orthonormal basis. In this case, applying
the theorem~\mythetheorem{7.3}, we find that $B$ is a Hermitian
operator too.\par
Now we need to prove the uniqueness of the operator $B$ declared
in the theorem~\mythetheorem{7.4}. The condition $C^i_j=0$ for all
$\lambda_i\neq\lambda_j$, which follows from $A\,\raise 1pt\hbox{$\sssize\circ$} \, C=C\,\raise 1pt\hbox{$\sssize\circ$} \, A$,
can be formulated in an invariant (basis-free) way.
\myproposition{7.1} An operator $C$ commutes with a diagonalizable
operator $A$ if and only if all eigenspaces of the operator $A$ are
invariant with respect to the operator $C$.
\endproclaim
Under the assumptions of the theorem~\mythetheorem{7.4} let's
take $C=A$ and apply the proposition~\mytheproposition{7.1} to the
operator $B$. From $B\,\raise 1pt\hbox{$\sssize\circ$} \, A=A\,\raise 1pt\hbox{$\sssize\circ$} \, B$ in this case we derive that
all eigenspaces of the operator $A$ are invariant under the action of
the operator $B$. The requirement that $B$ is diagonalizable now means
that both $A$ and $B$ can be diagonalized simultaneously in some basis.
The conditions $B^2=A$ and $\mu_i\geqslant 0$ then fix the unique choice
of the operator $B$ defined by the relationships \mythetag{7.2}.\par
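The construction \mythetag{7.2} of the square root translates directly
into a computation. A sketch (assuming Python with NumPy; the function
name `sqrt_psd` and the clipping guard against round-off are illustrative
choices):

```python
import numpy as np

def sqrt_psd(A):
    """Square root B with B @ B = A for a positive semidefinite Hermitian A,
    built as in formula (7.2): B e_i = sqrt(lambda_i) e_i."""
    lam, U = np.linalg.eigh(A)
    lam = np.clip(lam, 0.0, None)       # guard against tiny negative round-off
    return U @ np.diag(np.sqrt(lam)) @ U.conj().T

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = M @ M.conj().T                      # Hermitian and positive semidefinite
B = sqrt_psd(A)
assert np.allclose(B @ B, A)
assert np.allclose(B, B.conj().T)       # B is Hermitian too
Cc = 2 * A @ A + A + np.eye(3)          # a polynomial in A commutes with A
assert np.allclose(B @ Cc, Cc @ B)      # ... and hence with B
```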
\mydefinition{7.4} A linear mapping $T\!:\,V\to W$ from some Hermitian
space $V$ to another Hermitian space $W$ is called an {\it isometry\/}
if $\langle T\bold x|T\bold y\rangle=\langle\bold x|\bold y\rangle$ for
all $\bold x,\bold y\in V$, i\.\,e\. if it preserves the scalar product.
\enddefinition
Due to the non-degeneracy of the sesquilinear forms determining
the scalar products in $V$ and $W$, each isometry $T\!:\,V\to W$ is an
injective mapping.
\mydefinition{7.5} A linear operator $T\in\operatorname{End}(V)$ is called a {\it
unitary operator\/} if it implements an isometry $T\!:\,V\to V$.
\enddefinition
Unitary operators are non-degenerate. Their determinants and their
eigenvalues satisfy the following relationships:
$$
\xalignat 2
&|\det T|=1,&&|\lambda|=1.
\endxalignat
$$
Unitary operators in a Hermitian space $V$ form the group $\operatorname{U}(V)$,
which is a subgroup in $\operatorname{Aut}(V)$. Unitary operators with unit determinant,
in turn, form the group $\operatorname{SU}(V)\subsetneq\operatorname{U}(V)$.
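These facts are easy to check numerically. A minimal sketch, assuming numpy; a random unitary matrix is produced via the QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random unitary operator T: the factor Q in the QR decomposition
# of a random complex matrix is unitary.
Z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
T, _ = np.linalg.qr(Z)

assert np.allclose(T.conj().T @ T, np.eye(3))          # isometry: T* T = 1
assert np.isclose(abs(np.linalg.det(T)), 1.0)          # |det T| = 1
assert np.allclose(np.abs(np.linalg.eigvals(T)), 1.0)  # |lambda| = 1
```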
\mydefinition{7.6} A representation $(f,G,V)$ of a group $G$ in
a Hermitian space $V$ is called a {\it unitary representation\/}
if all operators of this representation $f(g)$ are unitary operators.
\enddefinition
Unitary representations constitute an important subclass in the class
of general representations of groups. This is primarily because unitary
representations arise in applications of representation theory to
quantum mechanics. The following useful fact also plays a substantial
role.
\mytheorem{7.5} Each unitary representation $(f,G,V)$ is completely
reducible.
\endproclaim
Indeed, let $U\subseteq V$ be an invariant subspace for the operators
of the representation $f$. In the case of a unitary operator $f(g)$ the
orthogonal complement to an invariant subspace
$$
U_{\sssize\perp}=\{\bold x\in V\!:\ \langle\bold x|\bold y\rangle=0
\text{\ \ for all \ }\bold y\in U\}
$$
is also an invariant subspace. The subspaces $U$ and $U_{\sssize\perp}$
intersect trivially (i\.\,e\. at the zero vector only), and their direct
sum coincides with $V$. Therefore, $U_{\sssize\perp}$ is the required
invariant direct complement for $U$. The complete reducibility of $f$
is proved.
\mycorollary{7.1} Each representation $f$ which is equivalent to some
unitary representation $h$ is completely reducible.
\endproclaim
Let $A\!:\,V\to W$ be an interlacing mapping which implements the
isomorphism of $f$ and $h$. Then each invariant subspace $U$ of $f$ has
the invariant direct complement $A^{-1}((AU)_{\sssize\perp})$.\par
Along with the concept of equivalence, in the class of unitary
representations we have the concept of {\it unitary equivalence}.
\mydefinition{7.7} Two unitary representations $(f,G,V)$ and $(h,G,W)$
are called unitary equivalent, if there is an isometry $A\!:\,V\to W$
implementing an isomorphism of them.
\enddefinition
The following theorem shows that despite the difference in definitions
the concepts of equivalence and unitary equivalence do coincide.
\mytheorem{7.6} If two unitary representations $f$ and $h$ are equivalent,
then they are unitary equivalent.
\endproclaim
In order to prove this theorem we need an auxiliary fact which is
formulated as a lemma.
\mylemma{7.1} Let $A\!:\,V\to W$ be a bijective linear mapping from a
Hermitian space $V$ to another Hermitian space $W$. Then it can be expanded
as a composition $A=T\,\raise 1pt\hbox{$\sssize\circ$} \, B$, where $T\!:\,V\to W$ is an isometry and
$B$ is a positive Hermitian operator in $V$.
\endproclaim
\demo{Proof of the lemma} Let's consider the following sesquilinear form
in the space $V$:
$$
\hskip -2em
\varphi(\bold x,\bold y)=\langle A\bold x|A\bold y\rangle.
\mytag{7.4}
$$
It is easy to see that the form \mythetag{7.4} is symmetric and positive.
We apply the theorem~\mythetheorem{7.1} in order to define a Hermitian
positive operator $D$ in $V$. The associated sesquilinear form \mythetag{7.1}
of this operator coincides with \mythetag{7.4}. This condition yields
$$
\hskip -2em
\langle\bold x|D\bold y\rangle=\langle A\bold x|A\bold y\rangle.
\mytag{7.5}
$$
Using the operator $D$ and applying the theorem~\mythetheorem{7.4} to it,
we construct a positive Hermitian operator $B$ being the square root of $D$,
i\.\,e\. $B^2=D$. Now let's consider a mapping $T\!:\,V\to W$ defined as
the composition $T=A\,\raise 1pt\hbox{$\sssize\circ$} \, B^{-1}$. Note that $B$ is non-degenerate since
it is positive. Therefore it is invertible. The rest is to show that
$T$ is an isometry. Indeed, we have
$$
\langle T\bold x|T\bold y\rangle=\langle A\,\raise 1pt\hbox{$\sssize\circ$} \, B^{-1}\bold x|
A\,\raise 1pt\hbox{$\sssize\circ$} \, B^{-1}\bold y\rangle=\langle B^{-1}\bold x|D\,\raise 1pt\hbox{$\sssize\circ$} \, B^{-1}
\bold y\rangle.\quad
\mytag{7.6}
$$
The last equality in the chain \mythetag{7.6} is provided by
\mythetag{7.5}. The further calculations are obvious:
$$
\langle B^{-1}\bold x|D\,\raise 1pt\hbox{$\sssize\circ$} \, B^{-1}\bold y\rangle
=\langle B^{-1}\bold x|B\bold y\rangle
=\langle B\,\raise 1pt\hbox{$\sssize\circ$} \, B^{-1}\bold x|\bold y\rangle
=\langle\bold x|\bold y\rangle.
$$
Combining this equality with \mythetag{7.6}, we get
$\langle T\bold x|T\bold y\rangle=\langle\bold x|\bold y\rangle$
for any two vectors $\bold x,\bold y\in V$. Hence, $T$ is an
isometry. The lemma is proved.
\qed\enddemo
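The expansion $A=T\,\raise 1pt\hbox{$\sssize\circ$} \, B$ constructed in this lemma is known as the polar decomposition. In an orthonormal basis the condition \mythetag{7.5} means that the matrix of $D$ is $A^*A$. The following numerical sketch (assuming numpy, with a randomly chosen bijective mapping) reproduces the proof step by step:

```python
import numpy as np

rng = np.random.default_rng(2)

# A random bijective linear mapping A between 4-dimensional spaces.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# (7.5) reads <x|Dy> = <Ax|Ay>, i.e. D = A* A in matrix form.
D = A.conj().T @ A

# B is the positive square root of D (theorem 7.4).
lam, E = np.linalg.eigh(D)
B = E @ np.diag(np.sqrt(lam)) @ E.conj().T

# T = A o B^{-1} is the isometric factor.
T = A @ np.linalg.inv(B)

assert np.allclose(T.conj().T @ T, np.eye(4))  # T is an isometry
assert np.allclose(T @ B, A)                   # A = T o B
assert np.allclose(B, B.conj().T)              # B is Hermitian
```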
\demo{Proof of the theorem~\mythetheorem{7.6}} The mapping $A=T\,\raise 1pt\hbox{$\sssize\circ$} \, B$
in this case implements an isomorphism of two unitary representations
$f$ and $h$. Therefore, we have
$$
\hskip -2em
T\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, T\,\raise 1pt\hbox{$\sssize\circ$} \, B
\text{\ \ for all \ }g\in G.
\mytag{7.7}
$$
Let's show that the operator $B$ commutes with $f(g)$. For this purpose
we show that $D=B^2$ commutes with $f(g)$:
$$
\langle\bold x|f(g)B^2\bold y\rangle=\langle f(g^{-1})\bold x|B
B\bold y\rangle=\langle Bf(g^{-1})\bold x|B\bold y\rangle.
$$
Here we used the facts that $f(g)$ is a unitary operator and $B$ is a
Hermitian operator. Let's continue our calculations using the isometry
of the mapping $T$:
$$
\langle Bf(g^{-1})\bold x|B\bold y\rangle=\langle TBf(g^{-1})\bold x|
TB\bold y\rangle.
$$
But $T\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(g^{-1})=h(g^{-1})\,\raise 1pt\hbox{$\sssize\circ$} \, T\,\raise 1pt\hbox{$\sssize\circ$} \, B$. This fact
follows from \mythetag{7.7}. Taking into account this equality and taking
into account that $h(g)$ is a unitary operator, we get
$$
\langle TBf(g^{-1})\bold x|TB\bold y\rangle=\langle h(g^{-1})TB\bold x
|TB\bold y\rangle=\langle TB\bold x|h(g)TB\bold y\rangle.
$$
Now let's use again the relationship \mythetag{7.7} written as
$h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, T\,\raise 1pt\hbox{$\sssize\circ$} \, B=T\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)$. Then we take
into account the isometry of $T$:
$$
\langle TB\bold x|h(g)TB\bold y\rangle=\langle TB\bold x|TBf(g)
\bold y\rangle=\langle T^{-1}TB\bold x|Bf(g)\bold y\rangle.
$$
In order to complete this series of calculations, we remember that
$B$ is a Hermitian operator:
$$
\gathered
\langle T^{-1}TB\bold x|Bf(g)\bold y\rangle=\langle B\bold x|Bf(g)
\bold y\rangle=\\
=\langle\bold x|BBf(g)\bold y\rangle=
\langle\bold x|B^2f(g)\bold y\rangle.
\endgathered
$$
As a result we have got $\langle\bold x|f(g)B^2\bold y\rangle
=\langle\bold x|B^2f(g)\bold y\rangle$. Since $\bold x$ and $\bold y$
are arbitrary two vectors, we conclude that the operators $f(g)$ commute
with the operator $D=B^2$. But the positive Hermitian operator $B$ is
a square root of the positive Hermitian operator $D$. Therefore $B$
commutes with all operators that commute with the operator $D$ (see
theorem~\mythetheorem{7.4}). As a result we get $f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, B=B\,\raise 1pt\hbox{$\sssize\circ$} \,
f(g)$. Substituting this equality into \mythetag{7.7}, we derive
$T\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, B=h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, T\,\raise 1pt\hbox{$\sssize\circ$} \, B$. Canceling the
non-degenerate operator $B$, we find
$$
\hskip -2em
T\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, T\text{\ \ for all \ }g\in G.
\mytag{7.8}
$$
The equality \mythetag{7.8} means that the isometric mapping $T$
implements an isomorphism of the unitary representations $f$
and $h$. Hence, the representations $f$ and $h$ are unitary equivalent.
Thus, the theorem~\mythetheorem{7.6} is proved.
\qed\enddemo
\newpage
\global\firstpage@true
\topmatter
\title\chapter{2}
Representations of finite groups
\endtitle
\endtopmatter
\leftheadtext{CHAPTER \uppercase\expandafter{\romannumeral 2}.
REPRESENTATIONS OF FINITE GROUPS.}
\document
\chapternum=2
\head
\SectionNum{1}{44} Regular representations of finite groups.
\endhead
\rightheadtext{\S\,1. Regular representations of finite groups.}
Let $G$ be a finite group and $N=|G|$ be the number of elements
in this group. Let's consider the set of complex-valued numeric functions on
$G$. We denote it by $L_2(G)$. It is clear that $L_2(G)$ is a complex
linear vector space of dimension $\dim(L_2(G))=N$. Let's equip $L_2(G)$
with the structure of a Hermitian space. For this purpose we consider the
scalar product of two functions $u(g)$ and $v(g)$ given by the formula
$$
\hskip -2em
\langle u|v\rangle=\frac{1}{N}\sum_{g\in G}\overline{u(g)}\,v(g).
\mytag{1.1}
$$
Now we define an action of the group $G$ in $L_2(G)$ by defining linear
operators $R(g)\!:\,L_2(G)\to L_2(G)$. Let's set
$$
R(g)v(a)=v(a\,g)\text{\ \ for all \ } a,g\in G\text{\ \ and \ }
v\in L_2(G).\quad
\mytag{1.2}
$$
The operators $R(g)$ act upon functions of $L_2(G)$ by means of right
shifts of their arguments. It is easy to verify that these operators
satisfy the following relationship:
$$
R(g_1)\,\raise 1pt\hbox{$\sssize\circ$} \, R(g_2)=R(g_1\,g_2).
$$
Hence, the operators $R(g)$, which act according to \mythetag{1.2},
form a representation of the group $G$ in the space $L_2(G)$. This
representation is called the {\it right regular representation\/}
of the group $G$.\par
Along with the right regular representation there is the {\it left
regular representation\/} $(L,G,L_2(G))$ of the group $G$. Its operators
are defined as follows:
$$
L(g)v(a)=v(g^{-1}\,a)\text{\ \ for all \ } a,g\in G\text{\ \ and \ }
v\in L_2(G).\quad
\mytag{1.3}
$$
\mytheorem{1.1} The right regular representation defined by the formula
\mythetag{1.2} and the left regular representation defined by the formula
\mythetag{1.3} are unitary representations with respect to the Hermitian
structure given by the scalar product \mythetag{1.1}.
\endproclaim
\demo{Proof} Let's verify by means of direct calculations that $R(g)$ and
$L(g)$ are unitary operators. Assume that $u$ and $v$ are two arbitrary
functions from $L_2(G)$. Then we have
$$
\align
&\gathered
\langle R(g)u|R(g)v\rangle=\frac{1}{N}\sum_{a\in G}
\overline{u(a\,g)}\,v(a\,g)=\\
=\frac{1}{N}\sum_{b\in G}
\overline{u(b)}\,v(b)=\langle u|v\rangle;
\endgathered\\
\vspace{2ex}
&\gathered
\langle L(g)u|L(g)v\rangle=\frac{1}{N}\sum_{a\in G}
\overline{u(g^{-1}\,a)}\,v(g^{-1}\,a)=\\
=\frac{1}{N}\sum_{b\in G}
\overline{u(b)}\,v(b)=\langle u|v\rangle.
\endgathered
\endalign
$$
Here we used the fact that the right shift $a\mapsto b=a\,g$ and the
left shift $a\mapsto g^{-1}\,a$ are two bijective mappings of the group
$G$ onto itself.
\qed\enddemo
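For a small group the regular representation can be realized explicitly. The sketch below (an illustration assuming numpy, with the symmetric group $S_3$ as an example) builds the matrices of $R(g)$ in the basis of delta-functions and checks the homomorphism property and the unitarity proved above:

```python
import numpy as np
from itertools import permutations

# The symmetric group S3; the product p*q acts as i -> p[q[i]]
# (first apply q, then p).
G = list(permutations(range(3)))
N = len(G)
idx = {g: k for k, g in enumerate(G)}

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def R(g):
    # Matrix of R(g) in the basis of delta-functions:
    # (R(g)v)(a) = v(a g), so row a takes the coordinate at a*g.
    M = np.zeros((N, N))
    for a in G:
        M[idx[a], idx[mul(a, g)]] = 1.0
    return M

# Homomorphism property: R(g1) R(g2) = R(g1 g2).
for g1 in G:
    for g2 in G:
        assert np.allclose(R(g1) @ R(g2), R(mul(g1, g2)))

# Unitarity w.r.t. (1.1): R(g) is a permutation matrix and the
# factor 1/N in the scalar product cancels, so R(g)* R(g) = 1.
for g in G:
    assert np.allclose(R(g).T @ R(g), np.eye(N))
```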
\mytheorem{1.2} The right regular representation and the left regular
representation are unitary equivalent to each other.
\endproclaim
\demo{Proof} In order to prove the theorem it is necessary to
construct the unitary operator $A\!:\,L_2(G)\to L_2(G)$ interlacing
the representations $(R,G,L_2(G))$ and $(L,G,L_2(G))$. We define
this operator as follows:
$$
\hskip -2em
Av(g)=v(g^{-1})\text{\ \ for all \ }g\in G\text{\ \ and \ }
v\in L_2(G).
\mytag{1.4}
$$
The fact that $A$ is a unitary operator with respect to the Hermitian
structure \mythetag{1.1} is shown by the following calculations:
$$
\langle Au|Av\rangle=\frac{1}{N}\sum_{a\in G}
\overline{u(a^{-1})}\,v(a^{-1})=\frac{1}{N}
\sum_{b\in G}\overline{u(b)}\,v(b)=\langle u|v\rangle.
$$
In carrying out these calculations we used the fact that the inversion
operation $a\mapsto b=a^{-1}$ is a bijective mapping of the group $G$
onto itself.\par
Now let's show that the operator $A$ introduced by means of the
formula \mythetag{1.4} interlaces right and left regular representations.
Let $v$ be some arbitrary function from $L_2(G)$. Assume that $u=L(g)v$
and $w=Av$. Then we have
$$
\gather
AL(g)v(a)=Au(a)=u(a^{-1})=v(g^{-1}\,a^{-1})=\\
=v((a\,g)^{-1})=w(a\,g)=R(g)w(a)=R(g)Av(a).
\endgather
$$
Since $v$ is an arbitrary function from $L_2(G)$, this sequence of
calculations shows that $A\,\raise 1pt\hbox{$\sssize\circ$} \, L(g)=R(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A$. The proof
is over.
\qed\enddemo
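The interlacing property $A\,\raise 1pt\hbox{$\sssize\circ$} \, L(g)=R(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A$ can also be verified explicitly for a small group. A sketch assuming numpy, with $G=S_3$ and functions represented by their value vectors:

```python
import numpy as np
from itertools import permutations

# S3: the product p*q acts as i -> p[q[i]]; inv(p) is the inverse.
G = list(permutations(range(3)))
N = len(G)
idx = {g: k for k, g in enumerate(G)}

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

def R(g):  # (R(g)v)(a) = v(a g), formula (1.2)
    M = np.zeros((N, N))
    for a in G:
        M[idx[a], idx[mul(a, g)]] = 1.0
    return M

def L(g):  # (L(g)v)(a) = v(g^{-1} a), formula (1.3)
    M = np.zeros((N, N))
    for a in G:
        M[idx[a], idx[mul(inv(g), a)]] = 1.0
    return M

# (1.4): (Av)(g) = v(g^{-1}).
A = np.zeros((N, N))
for g in G:
    A[idx[g], idx[inv(g)]] = 1.0

# A interlaces the left and the right regular representations.
for g in G:
    assert np.allclose(A @ L(g), R(g) @ A)
```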
\head
\SectionNum{2}{46} Invariant averaging over a finite group.
\endhead
\rightheadtext{\S\,2. Invariant averaging \dots}
In the previous section we have noted that the idea of considering
numeric functions on a finite group is quite fruitful. The finiteness
of a group $G$ provides the opportunity to define the operation of
invariant averaging for such functions. For an arbitrary function
$v\in L_2(G)$ we denote by $M[v]$ the number determined by the following
relationship:
$$
\hskip -2em
M[v]=\frac{1}{N}\sum_{g\in G}v(g)\text{, \ where \ }N=|G|.
\mytag{2.1}
$$
We used the symbol $M$ for denoting the operation of invariant
averaging \mythetag{2.1} since it is analogous to the mathematical
expectation or the mean value in probability theory.\par
Note that the operation of invariant averaging \mythetag{2.1}
can be applied not only to numeric functions, but to vector-valued,
operator-valued, and matrix-valued functions on a group $G$. In
order to apply this operation to a function, the function should take
its values in some linear vector space. Then the result of the averaging
$M[v]$ is an element of the same linear vector space. The operation
of invariant averaging satisfies the following obvious conditions
of linearity:
\roster
\item\quad $M[u+v]=M[u]+M[v]$;
\item\quad $M[\alpha\,u]=\alpha\,M[u]$, where $\alpha$ is a number.
\endroster
The invariance of the averaging \mythetag{2.1} is expressed by the
following relationships:
\roster
\item[3]\quad\kern -1.2cm\vtop{\hsize=9.6cm\noindent $M[R(g)u]=M[u]$, the
invariance with respect to right shifts;}
\vskip 0.5ex
\item\quad\kern -1.2cm\vtop{\hsize=9.6cm\noindent $M[L(g)u]=M[u]$, the
invariance with respect to left shifts;}
\vskip 0.5ex
\item\quad\kern -1.2cm\vtop{\hsize=9.6cm\noindent $M[Au]=M[u]$, the
invariance with respect to the inversion.}
\vskip 0.5ex
\endroster
The proof of the properties \therosteritem{3}--\therosteritem{5}
reduces to verifying the following relationships:
$$
\align
M[R(g)u]&=\frac{1}{N}\sum_{a\in G}u(a\,g)=\frac{1}{N}\sum_{b\in G}
u(b)=M[u],\\
M[L(g)u]&=\frac{1}{N}\sum_{a\in G}u(g^{-1}\,a)=\frac{1}{N}\sum_{b\in G}
u(b)=M[u],\\
M[Au]&=\frac{1}{N}\sum_{a\in G}u(a^{-1})=\frac{1}{N}\sum_{b\in G}
u(b)=M[u].
\endalign
$$
Remember that the inversion operator $A$ in the property \therosteritem{5}
above is defined by the relationship \mythetag{1.4}.\par
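The properties \therosteritem{3}--\therosteritem{5} can be confirmed by a direct computation for a concrete group. A sketch assuming numpy, with the cyclic group of order $5$ (its elements are residues modulo $5$):

```python
import numpy as np

# Invariant averaging M[u] = (1/N) sum_g u(g) on the cyclic
# group of order 5 (elements: residues 0, ..., 4 modulo 5).
N = 5
rng = np.random.default_rng(3)
u = rng.normal(size=N) + 1j * rng.normal(size=N)  # a function on the group

def M(v):
    return v.sum() / N

for g in range(N):
    Ru = np.array([u[(a + g) % N] for a in range(N)])  # right shift by g
    Lu = np.array([u[(a - g) % N] for a in range(N)])  # left shift by g
    assert np.isclose(M(Ru), M(u))  # property (3)
    assert np.isclose(M(Lu), M(u))  # property (4)

Au = np.array([u[(-a) % N] for a in range(N)])  # inversion, cf. (1.4)
assert np.isclose(M(Au), M(u))                  # property (5)
```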
If $u$ is a vector-valued function with the values in a linear vector
space $V$, then the properties \therosteritem{1}--\therosteritem{5} can be
complemented with one more property:
\roster
\item[6]\quad\kern -1.2cm\vtop{\hsize=9.6cm\noindent $M[Bu]=BM[u]$, where
$B$ is an arbitrary linear mapping with the domain $V$.}
\endroster
A relationship similar to \therosteritem{6} is fulfilled for
operator-valued functions with the values in $\operatorname{End}(V)$:
\roster
\item[6]\quad\kern -1.2cm\vtop{\hsize=9.6cm\noindent $M[B\,\raise 1pt\hbox{$\sssize\circ$} \, u]
=B\,\raise 1pt\hbox{$\sssize\circ$} \, M[u]$, where $B$ is an arbitrary linear mapping with the
domain $V$.}
\endroster
Moreover, for such functions with values in $\operatorname{End}(V)$ we can write the
following two additional properties:
\roster
\item[7]\quad $\operatorname{tr} M[u]=M[\operatorname{tr} u]$;
\item\quad\kern -1.2cm\vtop{\hsize=9.6cm\noindent $M[u\,\raise 1pt\hbox{$\sssize\circ$} \, B]
=M[u]\,\raise 1pt\hbox{$\sssize\circ$} \, B$, where $B$ is an arbitrary linear mapping with the
target space $V$.}
\endroster
The operation of invariant averaging \mythetag{2.1} plays an important
role in the theory of representations of finite groups. As the first
example of its usage we prove the following fact.
\mytheorem{2.1} Each finite-dimensional representation of a finite group
is equivalent to some unitary representation of it.
\endproclaim
Let $(f,G,V)$ be some finite-dimensional representation of a finite
group $G$. Generally speaking, in order to prove the theorem we should
construct a unitary representation $(h,G,W)$ of the same group in some
Hermitian space $W$ and find a linear mapping $A\!:\,V\to W$ being an
isomorphism of representations $f$ and $h$. Assume for a while that we
managed to do it. Then we have the following relationships:
$$
\xalignat 2
&A\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A,
&&\langle h(g)\bold u|h(g)\bold v\rangle=\langle\bold u|\bold v\rangle.
\endxalignat
$$
The space $V$ is not equipped with its own scalar product. However, we
equip it with a scalar product as follows:
$$
\hskip -2em
\langle\bold u|\bold v\rangle=\langle A\bold u|A\bold v\rangle.
\mytag{2.2}
$$
All properties of a scalar product for the sesquilinear form
\mythetag{2.2} are verified immediately. The positivity holds
because $A$ is a bijective mapping and $\operatorname{Ker} A=\{0\}$.
The representation $f$ appears to be a unitary representation
with respect to the Hermitian scalar product \mythetag{2.2}:
$$
\gather
\langle f(g)\bold u|f(g)\bold v\rangle=\langle Af(g)\bold u|Af(g)\bold v
\rangle=\\
=\langle h(g)A\bold u|h(g)A\bold v\rangle=
\langle A\bold u|A\bold v\rangle=\langle\bold u|\bold v\rangle,
\endgather
$$
while $A$ establishes the unitary equivalence for $f$ and $h$. These
considerations show that in order to prove the theorem~\mythetheorem{2.1}
there is no need to construct a separate unitary representation
$(h,G,W)$ and find an isomorphism $A$. It is sufficient to define a proper
scalar product in $V$ such that $f$ is a unitary representation with
respect to it.\par
Let $\langle\!\langle\bold u|\bold v\rangle\!\rangle$ be some
arbitrary scalar product in $V$. For instance, it can be defined using
the coordinates of vectors $\bold u$ and $\bold v$ in some fixed basis:
$$
\langle\!\langle\bold u|\bold v\rangle\!\rangle
=\sum^n_{i=1}\overline{u^i}\,v^i.
$$
The operators $f(g)$ need not be unitary operators with respect to
such a scalar product. So, we need to improve it. Let's define another
scalar product in $V$ by means of the operation of invariant averaging:
$$
\langle\bold u|\bold v\rangle=
M[\langle\!\langle f(g)\bold u|f(g)\bold v\rangle\!\rangle]
=\frac{1}{N}\sum_{g\in G}\langle\!\langle f(g)\bold u|f(g)
\bold v\rangle\!\rangle.\quad
\mytag{2.3}
$$
It is easy to see that the form \mythetag{2.3} is sesquilinear and
symmetric. It is also a positive form:
$$
\langle\bold u|\bold u\rangle=
\sum_{g\in G}\frac{\langle\!\langle f(g)\bold u|f(g)
\bold u\rangle\!\rangle}{N}=\sum_{g\in G}\frac{\Vert f(g)\bold u\Vert^2}
{N}>0\text{\ \ for all \ }\bold u\neq 0.
$$
The operators $f(g)$ are unitary operators with respect to
the scalar product \mythetag{2.3}. This fact follows from the
property~\therosteritem{3} of the invariant averaging. Indeed,
we have
$$
\gather
\langle f(g)\bold u| f(g)\bold v\rangle=\sum_{a\in G}\frac{\langle
\!\langle f(a)f(g)\bold u|f(a)f(g)\bold v\rangle\!\rangle}{N}=\\
=\sum_{a\in G}\frac{\langle\!\langle f(a\,g)\bold u|f(a\,g)\bold v
\rangle\!\rangle}{N}=\sum_{b\in G}\frac{\langle\!\langle f(b)\bold u
|f(b)\bold v\rangle\!\rangle}{N}=\langle\bold u|\bold v\rangle.
\endgather
$$
The above considerations prove that each finite-dimensional representation
of a finite group can be transformed to a unitary representation by means
of the proper choice \mythetag{2.3} of a scalar product. Thus, the
theorem~\mythetheorem{2.1} is proved.\par
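The averaging trick used in this proof is easy to demonstrate numerically. The sketch below (assuming numpy) takes a deliberately non-unitary two-dimensional representation of the group of order two and checks that the averaged scalar product \mythetag{2.3} is positive and invariant:

```python
import numpy as np

# A non-unitary representation of the group of order two on C^2:
# f(0) = I and f(1) = P, where P^2 = I but P* P != I.
P = np.array([[1.0, 1.0], [0.0, -1.0]])
assert np.allclose(P @ P, np.eye(2))
reps = [np.eye(2), P]

# (2.3): the Gram matrix H of the averaged scalar product
# <u|v> = (1/N) sum_g <<f(g)u|f(g)v>>, i.e. <u|v> = u* H v.
H = sum(f.conj().T @ f for f in reps) / len(reps)

assert np.all(np.linalg.eigvalsh(H) > 0)       # positive-definite
for f in reps:
    assert np.allclose(f.conj().T @ H @ f, H)  # f(g) unitary w.r.t. H
```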
As an immediate corollary of the theorem~\mythetheorem{2.1} we
get the following important proposition concerning finite-dimensional
representations of finite groups.\par
\mytheorem{2.2} Each finite-dimensional representation of a finite
group is completely reducible.
\endproclaim
The proof of this theorem is based on the
theorem~\mythetheoremchapter{7.5}{1} from
Chapter~\uppercase\expandafter{\romannumeral 1}. This theorem says
that each unitary representation is completely reducible. As for the
finite-dimensional representations of finite groups, we have proved
their equivalence to some unitary representations.
\head
\SectionNum{3}{50} Characters of group representations.
\endhead
\rightheadtext{\S\,3. Characters of group representations.}
Let $(f,G,V)$ be some finite-dimensional representation of a
group $G$. Each such representation is associated with the numeric
function $\chi_f$ on the group $G$ defined through the traces of
representation operators:
$$
\hskip -2em
\chi_f(g)=\operatorname{tr} f(g).
\mytag{3.1}
$$
The numeric function $\chi_f(g)$ on $G$ introduced by the formula
\mythetag{3.1} is called the {\it character\/} of the representation
$f$.
\mytheorem{3.1} The characters of finite-dimensional representations
possess the following properties:
\roster
\rosteritemwd=2pt
\item the characters of two equivalent representations do coincide;
\item a character is constant within each conjugacy class;
\item if $f$ is a unitary representation, then $\chi_f(g^{-1})
=\overline{\chi_f(g)}$;
\item the character of the direct sum of representations is equal to the
sum of characters of separate direct summands;
\item the character of the tensor product of representations is equal to
the product of characters of its multiplicands.
\endroster
\endproclaim
Let's begin with proving the first item of the theorem. Assume that
we have two equivalent representations $(f,G,V)$ and $(h,G,W)$ and let
$A\!:\,V\to W$ be an isomorphism of these representations. Let's choose
a basis $\bold e_1,\,\ldots,\,\bold e_n$ in $V$. Then the vectors
$\tilde\bold e_1=A\bold e_1,\,\ldots,\,\tilde\bold e_n=A\bold e_n$
constitute a basis in the space $W$. Let's calculate the matrices of the
operators $f(g)$ and $h(g)$ in these bases. They are defined by the
following relationships:
$$
\xalignat 2
&f(g)\bold e_i=\sum^n_{j=1}F^j_i(g)\,\bold e_j,
&&h(g)\tilde\bold e_i=\sum^n_{j=1}H^j_i(g)\,\tilde\bold e_j.
\qquad
\mytag{3.2}
\endxalignat
$$
Let's substitute $\tilde\bold e_i=A\bold e_i$ and $\tilde\bold e_j
=A\bold e_j$ into the second formula \mythetag{3.2} and take into
account the relationship $A\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A$. As a result
we obtain
$$
\hskip -2em
Af(g)\bold e_i=\sum^n_{j=1}H^j_i(g)\,A\bold e_j.
\mytag{3.3}
$$
The mapping $A$ is bijective. Therefore, we can cancel it in
\mythetag{3.3}. Upon canceling $A$, we compare \mythetag{3.3} with the
first formula \mythetag{3.2}. This comparison yields $H^j_i(g)=F^j_i(g)$,
i\.\,e\. the matrices of the operators $f(g)$ and $h(g)$ do coincide.
Hence, $\operatorname{tr} f(g)=\operatorname{tr} h(g)$ and $\chi_f(g)=\chi_h(g)$. The first item
in the theorem~\mythetheorem{3.1} is proved.\par
Assume that $g$ and $\tilde g$ are two elements of the same conjugacy
class in $G$. Then $\tilde g=a\,g\,a^{-1}$ for some $a\in G$. Therefore,
we get
$$
f(\tilde g)=f(a\,g\,a^{-1})=f(a)\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, f(a)^{-1}.
$$
Now it is sufficient to apply the formula $\operatorname{tr}(B\,\raise 1pt\hbox{$\sssize\circ$} \, A \,\raise 1pt\hbox{$\sssize\circ$} \, B^{-1})
=\operatorname{tr}(A)$ setting $A=f(g)$ and $B=f(a)$ in it. The second
item in the theorem~\mythetheorem{3.1} is proved.\par
In order to prove the third item we consider a unitary representation
$(f,G,V)$ and choose some orthonormal basis in $V$. The condition that $f$
is unitary is written as $\langle f(g)\bold u|\bold v\rangle=\langle
\bold u|f(g)^{-1}\bold v\rangle$. Upon substituting $\bold u=\bold e_i$
and $\bold v=\bold e_j$ we take into account the first formula \mythetag{3.2}.
As a result we get
$$
\hskip -2em
\overline{F^j_i(g)}=F^i_j(g^{-1}).
\mytag{3.4}
$$
The relationship \mythetag{3.4} means that the matrices $\overline{F(g)}$
and $F(g^{-1})$ are obtained from each other by transposing. The traces of
any two such matrices do coincide. The third item is proved.\par
The fourth item is trivial. Let $f=f_1\oplus f_2$ and $V=V_1\oplus
V_2$. We choose a basis in $V$ composed by two bases in $V_1$ and $V_2$
respectively. The matrix of the operator $f(g)$ in such a basis is a
block-diagonal matrix whose diagonal blocks coincide with the
matrices of the operators $f_1(g)$ and $f_2(g)$. Therefore,
$\operatorname{tr} f(g)=\operatorname{tr} f_1(g)+\operatorname{tr} f_2(g)$.\par
Now let's proceed to the fifth item of the theorem. Let's denote
$\varphi=f\otimes h$ and $V=U\otimes W$. We choose some basis $\bold e_1,
\,\ldots,\,\bold e_n$ in $U$ and some basis $\tilde\bold e_1,
\,\ldots,\,\tilde\bold e_m$ in $W$. The matrices of the operators $f(g)$
and $h(g)$ are determined by the relationships
$$
\xalignat 2
&f(g)\bold e_i=\sum^n_{j=1}F^j_i(g)\,\bold e_j,
&&h(g)\tilde\bold e_q=\sum^m_{p=1}H^p_q(g)\,\tilde\bold e_p,
\qquad
\mytag{3.5}
\endxalignat
$$
which are analogous to \mythetag{3.2}. The vectors $\bold E_{iq}=
\bold e_i\otimes\tilde\bold e_q$ constitute a basis in the tensor
product $V=U\otimes W$. The vectors of this basis are enumerated
by two indices, therefore the matrix of the operator $\varphi(g)$
in this basis is represented by a four-index array. It is determined
by the following relationships:
$$
\hskip -2em
\varphi(g)\bold E_{iq}=\sum^n_{j=1}\sum^m_{p=1}\varPhi^{jp}_{iq}(g)
\,\bold E_{jp}.
\mytag{3.6}
$$
The action of the operator $\varphi(g)$ upon the basis vectors
$\bold E_{iq}=\bold e_i\otimes\tilde\bold e_q$ is determined by the
formula \mythetagchapter{5.1}{1} from
Chapter~\uppercase\expandafter{\romannumeral 1}:
$$
\hskip -2em
\varphi(g)(\bold e_i\otimes\tilde\bold e_q)=(f(g)\bold e_i)\otimes
(h(g)\tilde\bold e_q).
\mytag{3.7}
$$
Combining \mythetag{3.7} with the formulas \mythetag{3.5}, we find
$$
\hskip -2em
\varphi(g)\bold E_{iq}=\sum^n_{j=1}\sum^m_{p=1} F^j_i(g)\,
H^p_q(g)\,\bold E_{jp}.
\mytag{3.8}
$$
Now, comparing the relationships \mythetag{3.6} and \mythetag{3.8}, we
determine the matrix components for the operator $\varphi(g)$:
$$
\hskip -2em
\varPhi^{jp}_{iq}(g)=F^j_i(g)\,H^p_q(g).
\mytag{3.9}
$$
The rest is to calculate the trace of the operator $\varphi(g)$ as the
trace of a matrix array \mythetag{3.9}:
$$
\gather
\operatorname{tr}\varphi(g)=\sum^n_{i=1}\sum^m_{q=1}\varPhi^{iq}_{iq}(g)=
\sum^n_{i=1}\sum^m_{q=1}F^i_i(g)\,H^q_q(g)=\\
\left(\,\shave{\sum^n_{i=1}}F^i_i(g)\right)
\left(\,\shave{\sum^m_{q=1}}H^q_q(g)\right)=
\operatorname{tr} f(g)\,\operatorname{tr} h(g).
\endgather
$$
This relationship completes the proof of the fifth item
in the theorem~\mythetheorem{3.1} and the proof of the theorem
in whole.\par
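The trace identities used in the fourth and the fifth items correspond to the block-diagonal sum and the Kronecker product of matrices. A numerical sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
F = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Direct sum: a block-diagonal matrix, its trace is the sum of traces.
S = np.block([[F, np.zeros((3, 2))], [np.zeros((2, 3)), H]])
assert np.isclose(np.trace(S), np.trace(F) + np.trace(H))

# Tensor product: formula (3.9) in matrix form is the Kronecker
# product, and its trace is the product of traces.
assert np.isclose(np.trace(np.kron(F, H)), np.trace(F) * np.trace(H))
```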
At the end of this section one should remark that the properties of
representation characters considered above are valid for
finite-dimensional representations of arbitrary groups, not for
finite groups only.
\head
\SectionNum{4}{54} Orthogonality relationships.
\endhead
\rightheadtext{\S\,4. Orthogonality relationships.}
Let $(f,G,V)$ and $(h,G,W)$ be two complex finite-dimensional
representations of a finite group $G$. We choose some linear mapping
$B\!:\,V\to W$ and, using it, we define a function $\varphi_B(g)$ on
$G$ with the values in $\operatorname{Hom}(V,W)$. Let's set
$$
\varphi_B(g)=h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(g^{-1}).
$$
The result of invariant averaging of $\varphi_B(g)$ over the group $G$
is some element $C\in\operatorname{Hom}(V,W)$:
$$
\hskip -2em
C=M[\varphi_B(g)]=\frac{1}{N}\sum_{a\in G}
h(a)\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(a^{-1}).
\mytag{4.1}
$$
It is easy to verify that the mapping $C\!:\,V\to W$ is a homomorphism
of the representations $f$ and $h$. Indeed, we have
$$
\gather
C\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=M[\varphi_B(g)]\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=\frac{1}{N}\sum_{a\in G}
h(a)\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(a^{-1})\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=\\
=\frac{1}{N}\sum_{a\in G}h(a)\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(a^{-1}\,g)=
\frac{1}{N}\sum_{b\in G}h(g\,b)\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(b^{-1})=\\
=\frac{1}{N}\sum_{b\in G}h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, h(b)\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(b^{-1})=
h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, M[\varphi_B(g)]=h(g)\,\raise 1pt\hbox{$\sssize\circ$} \, C.
\endgather
$$\par
Assume that the representations $(f,G,V)$ and $(h,G,W)$
are irreducible. If they are not equivalent, applying Schur's
lemma~\mythelemmachapter{5.1}{1}, we get $C=0$.\par
Let's study the case $f\cong h$. For each pair of equivalent
irreducible representations we fix some bijective mapping $A_{fh}\!:
\,V\to W$ implementing an isomorphism of these representations. Then
the following lemma determines the structure of the mapping $C$ in
\mythetag{4.1}.
\mylemma{4.1} A homomorphism $C\!:\,V\to W$ of two equivalent irreducible
finite-dimensional complex representations $(f,G,V)$ and $(h,G,W)$ is fixed
up to a numeric factor, i\.\,e\. $C=\lambda\,A_{fh}$.
\endproclaim
\demo{Proof} Let's consider the operator $A=A_{fh}^{-1}\,\raise 1pt\hbox{$\sssize\circ$} \, C$ in the
space $V$. Being the composition of two homomorphisms, this operator
implements an isomorphism of $f$ with itself. Therefore we have
$A\,\raise 1pt\hbox{$\sssize\circ$} \, f(g)=f(g)\,\raise 1pt\hbox{$\sssize\circ$} \, A$ for all $f(g)$. Applying Schur's
lemma~\mythelemmachapter{5.2}{1}, we get $A=\lambda\cdot 1$. Hence,
$C=\lambda\,A_{fh}$. The lemma is proved.
\qed\enddemo
In order to calculate the numeric factor $\lambda$ we use the
trace of the operator $A$, which is its numeric invariant:
$$
\lambda=\frac{\operatorname{tr} A}{\operatorname{tr} 1}=\frac{\operatorname{tr} A}{\dim V}=\frac{1}{\dim V}
\,\operatorname{tr}(A_{fh}^{-1}\,\raise 1pt\hbox{$\sssize\circ$} \, C).
$$
Let's substitute the expression \mythetag{4.1} for $C$ into this formula:
$$
\gather
\lambda=\frac{1}{N\,\dim V}\sum_{a\in G}\operatorname{tr}(A_{fh}^{-1}\,\raise 1pt\hbox{$\sssize\circ$} \,
h(a)\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(a^{-1}))=\\
=\frac{1}{N\,\dim V}\sum_{a\in G}\operatorname{tr}(f(a)\,\raise 1pt\hbox{$\sssize\circ$} \, A_{fh}^{-1}\,\raise 1pt\hbox{$\sssize\circ$} \,
B\,\raise 1pt\hbox{$\sssize\circ$} \, f(a^{-1}))=\frac{\operatorname{tr}(A_{fh}^{-1}\,\raise 1pt\hbox{$\sssize\circ$} \, B)}
{\dim V}.
\endgather
$$
Here again we used the formula $\operatorname{tr}(F\,\raise 1pt\hbox{$\sssize\circ$} \, D\,\raise 1pt\hbox{$\sssize\circ$} \, F^{-1})=\operatorname{tr}(D)$
with $F=f(a)$ and $D=A_{fh}^{-1}\,\raise 1pt\hbox{$\sssize\circ$} \, B$. The result of calculating
the parameter $\lambda$ enables us to formulate the following proposition.
\mytheorem{4.1} For arbitrary two irreducible finite-dimensional complex
representations $(f,G,V)$ and $(h,G,W)$ of a finite group $G$ the
relationship
$$
\pagebreak
\sum_{a\in G}\frac{h(a)\,\raise 1pt\hbox{$\sssize\circ$} \, B\,\raise 1pt\hbox{$\sssize\circ$} \, f(a^{-1})}{N}
=\cases\qquad 0&\text{\ \ for \ }f\not\cong h;\\
\vspace{2ex}
\dfrac{\operatorname{tr}(A_{fh}^{-1}\,\raise 1pt\hbox{$\sssize\circ$} \, B)\,A_{fh}}
{\dim V}&\text{\ \ for \ }f\cong h
\endcases\quad
\mytag{4.2}
$$
is fulfilled. It is valid for an arbitrary choice of a linear mapping
$B\!:\,V\to W$ in $\operatorname{Hom}(V,W)$.
\endproclaim
The relationship \mythetag{4.2} is the basic orthogonality
relationship in the theory of representations of finite groups.
Let's consider the matrix form of this relationship. Assume that the
bases $\bold e_1,\,\ldots,\,\bold e_n$ and $\tilde\bold e_1,\,\ldots,
\,\tilde\bold e_m$ in the spaces $V$ and $W$ are chosen. They determine
the matrices $F^p_i(a)$ and $H^j_q(a)$ for the operators $f(a)$ and
$h(a)$ respectively. They also determine the matrix $B^q_p$ for the
mapping $B\in\operatorname{Hom}(V,W)$. In the case $f\not\cong h$ the bases in $V$
and $W$ are not related to each other. In the case $f\cong h$ it is
convenient to choose the first basis arbitrarily and then define the
other by means of the relationship
$$
\hskip -2em
\tilde\bold e_i=A_{fh}\bold e_i,\quad i=1,\,\ldots,\,n.
\mytag{4.3}
$$
Under such a choice of bases the mapping $A_{fh}$ is represented by
the unit matrix, while the matrices of the operators $f(a)$ and $h(a)$
do coincide: $F^p_i(a)=H^p_i(a)$. As for the mapping $B$, we choose it
so that the only nonzero element of its matrix, $B^q_p=1$, is placed at
the intersection of the $q$-th row and the $p$-th column. If all these
provisions are made, then the orthogonality relationship \mythetag{4.2}
is rewritten as follows:
$$
\hskip -2em
\frac{1}{N}\sum_{a\in G}H^j_q(a)\,F^p_i(a^{-1})
=\cases\quad 0 &\text{for \ }f\not\cong h,\\
\vspace{1ex}
\dfrac{\delta^j_i\,\delta^p_q}{n}&\text{for \ }f\cong h.
\endcases
\mytag{4.4}
$$\par
Assume that the representations $f$ and $h$ are unitary ones.
Above we have already proved that any finite-dimensional complex
representation of a finite group can be replaced by some unitary
representation equivalent to it. And if $f$ and $h$ are equivalent,
then they are unitary equivalent as well. For this reason we can
assume that the mapping $A_{fh}$ is an isometry, while the bases
$\bold e_1,\,\ldots,\,\bold e_n$ and $\tilde\bold e_1,\,\ldots,
\,\tilde\bold e_m$ are orthonormal bases. Then the relationship
\mythetag{4.4} is rewritten as follows:
$$
\hskip -2em
\frac{1}{N}\sum_{a\in G}H^j_q(a)\,\overline{F^i_p(a)}
=\cases\quad 0 &\text{for \ }f\not\cong h,\\
\vspace{1ex}
\dfrac{\delta^j_i\,\delta^p_q}{n}&\text{for \ }f\cong h.
\endcases
\mytag{4.5}
$$
Note that the equality \mythetag{4.3} is compatible with the
orthonormality of the bases $\bold e_1,\,\ldots,\,\bold e_n$
and $\tilde\bold e_1,\,\ldots,\,\tilde\bold e_m$ since $A_{fh}$
is an isometry. In writing \mythetag{4.5} we used the relationship
$$
F^p_i(a^{-1})=\overline{F^i_p(a)}
$$
because the matrices of unitary operators $f(a)$ in an orthonormal
basis are unitary matrices.\par
Let's set $q=j$ and $p=i$ in the formula \mythetag{4.5} and
then sum over the indices $i$ and $j$. As a result we derive from
\mythetag{4.5} the following relationship for the characters of
irreducible representations $f$ and $h$ of a finite group:
$$
\hskip -2em
\frac{1}{N}\sum_{a\in G}\operatorname{tr}(h(a))\,\overline{\operatorname{tr}(f(a))}
=\cases 0 &\text{for \ }f\not\cong h,\\
1 &\text{for \ }f\cong h.
\endcases
\mytag{4.6}
$$
Note that the representations $f$ and $h$ in \mythetag{4.6} need not
be unitary ones. The point is that the characters of equivalent
representations do coincide, while $f$ and $h$, according to the
theorem~\mythetheorem{2.1}, are equivalent to some unitary
representations.
\mytheorem{4.2} The characters of two non-equivalent irredu\-cible
finite-dimensional complex representations of a finite group $G$
are orthogonal as the elements of the space $L_2(G)$.
\endproclaim
The relationship \mythetag{4.6} is a proof of the
theorem~\mythetheorem{4.2}. In order to see it one should compare
this relationship with \mythetag{1.1}. From the finiteness
$\dim L_2(G)=N<\infty$ we conclude that the number of
non-equivalent irreducible finite-dimensional complex
representations of a finite group $G$ is finite. Therefore, it
makes sense to consider a {\it complete set\/} of such
representations.
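As an illustrative aside (a numerical sketch added here, not part of the original exposition), the orthogonality relationship \mythetag{4.6} can be checked by direct computation for the symmetric group $S_3$, using its well-known irreducible characters: the trivial one, the sign, and the character of the standard two-dimensional representation.

```python
from itertools import permutations

# Elements of S3 as permutations of (0, 1, 2); N = |G| = 6.
G = list(permutations(range(3)))
N = len(G)

def fixed(p):
    """Number of fixed points of the permutation p."""
    return sum(p[i] == i for i in range(3))

# The three irreducible characters of S3, written through fixed points:
# trivial, sign (transpositions have exactly one fixed point), standard.
chi_triv = lambda p: 1
chi_sgn  = lambda p: -1 if fixed(p) == 1 else 1
chi_std  = lambda p: fixed(p) - 1

chars = [chi_triv, chi_sgn, chi_std]

# Check (4.6): (1/N) sum_a tr h(a) conj(tr f(a)) is 0 for f != h, 1 for f = h.
for r, chi_r in enumerate(chars):
    for s, chi_s in enumerate(chars):
        inner = sum(chi_r(a) * chi_s(a) for a in G) / N  # characters are real here
        assert inner == (1 if r == s else 0)
```

All nine inner products come out exactly $\delta_{rs}$, in agreement with the theorem proved above.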
\mydefinition{4.1} The representations $f_1,\,\ldots,\,f_m$ form
a complete set of non-equivalent irreducible finite-dimensional
complex representations of a finite group $G$ if
\roster
\rosteritemwd=5pt
\item any two of them are not equivalent to each other;
\item each irreducible finite-dimensional complex representation
of $G$ is equivalent to one of the representations $f_1,\,\ldots,
\,f_m$.
\endroster
\enddefinition
The number $m$ of the representations in a complete set is a
numeric invariant of a finite group $G$. It is not greater than
the order of the group $N=|G|$.\par
Let $(f_1,G,V_1),\,\ldots,\,(f_m,G,V_m)$ be a complete set of
non-equivalent irreducible representations. Without loss of generality
we can assume these representations to be unitary ones. Let $n_1,\,
\ldots,\,n_m$ be the dimensions of these representations. We choose
some orthonormal basis in each of the spaces $V_1,\,\ldots,\,V_m$.
Then we have a set of matrices with the components
$$
F^j_i(g,r),\quad r=1,\,\ldots,\,m;\quad 1\leqslant i,j\leqslant n_r.
$$
Each component of these matrices depends on $g\in G$; therefore, it can
be treated as a function from $L_2(G)$. From \mythetag{4.5} we derive
the following orthogonality relationships for these functions:
$$
\hskip -2em
\frac{1}{N}\sum_{a\in G}F^j_q(a,r)\,\overline{F^i_p(a,s)}
=\frac{1}{n_r}\,\delta_{rs}\,\delta_{ij}\,\delta_{p\kern 0.5pt q}.
\mytag{4.7}
$$
The relationships \mythetag{4.7} mean that the functions $F^j_i(g,r)$
treated as the elements of the space $L_2(G)$ are pairwise orthogonal
to each other. Apparently, they are not only orthogonal, but form a
complete orthogonal set of functions in this space.
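The relationships \mythetag{4.7} can also be verified numerically. The sketch below (an added illustration; it assumes the standard realization of the two-dimensional irreducible representation of $S_3$ by the six orthogonal symmetry matrices of an equilateral triangle) checks \mythetag{4.7} for $r=s$ over all index combinations.

```python
import numpy as np

# The 2-dimensional irreducible unitary representation of S3, realized by
# the symmetry matrices of an equilateral triangle: rotations by 0, 120,
# 240 degrees and three reflections; all six matrices are real orthogonal.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
rot = np.array([[c, -s], [s, c]])
ref = np.array([[1.0, 0.0], [0.0, -1.0]])
F = [np.linalg.matrix_power(rot, k) @ m
     for m in (np.eye(2), ref) for k in range(3)]
N, n = len(F), 2

# Check (4.7) with r = s: (1/N) sum_a F^j_q(a) conj(F^i_p(a))
# equals delta_ij * delta_pq / n for every combination of indices.
for i in range(n):
    for j in range(n):
        for p in range(n):
            for q in range(n):
                lhs = sum(M[j, q] * np.conj(M[i, p]) for M in F) / N
                rhs = 1 / n if (i == j and p == q) else 0.0
                assert abs(lhs - rhs) < 1e-12
```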
\mytheorem{4.3} For an arbitrary complete set $f_1,\,\ldots,\,f_m$ of
irreducible unitary representations of a finite group $G$ the matrix
elements of the operators $f_r(g)$ calculated in some orthonormal
bases form a complete orthogonal system of functions in $L_2(G)$.
\endproclaim
\demo{Proof} The orthogonality of the functions $F^i_j(g,r)$ follows
from the relationship \mythetag{4.7}. We need to prove their completeness.
For this purpose we consider the right regular representation
$(R,G,L_2(G))$. It is unitary with respect to the Hermitian structure
given by the scalar product \mythetag{1.1} (see
theorem~\mythetheorem{1.1}). For this reason the right regular
representation $(R,G,L_2(G))$ is completely reducible, it is expanded
into the direct sum of unitary irreducible representations:
$$
\hskip -2em
R=R_1\oplus\ldots\oplus R_k.
\mytag{4.8}
$$
The expansion \mythetag{4.8} of the right regular representation $R$
is associated with the expansion of the space $L_2(G)$ into the direct
sum of irreducible $R$-invariant subspaces
$$
L_2(G)=W_1\oplus\ldots\oplus W_k.
$$
Each of the irreducible representations $R_q$ in \mythetag{4.8} is
equivalent to one of the irreducible unitary representations
$(f_{r(q)},G,V_{r(q)})$ from our complete set $f_1,\,\ldots,\,f_m$.
Applying the theorem~\mythetheoremchapter{7.6}{1} from
Chapter~\uppercase\expandafter{\romannumeral 1}, we conclude that
the representations $R_q$ and $f_{r(q)}$ are unitary equivalent.
Therefore, in each subspace $W_q\subseteq L_2(G)$ we can choose
some orthonormal basis of functions
$$
\hskip -2em
\varphi_i(g,q),\quad 1\leqslant i\leqslant n_{r(q)},
\mytag{4.9}
$$
such that the matrices of the operators $R_q(g)$ coincide with the
matrix components $F^j_i(g,r(q))$ of the operators for the corresponding
representation $f_{r(q)}$ in the complete set. Let's write this fact
as a formula:
$$
\hskip -2em
R_q(g)\varphi_i(a,q)=\sum^{n_{r(q)}}_{j=1}F^j_i(g,r(q))\,\varphi_j(a,q).
\mytag{4.10}
$$
But $R_q(g)$ is the restriction of the operator $R(g)$ from \mythetag{4.8}
to its invariant subspace $W_q$, \pagebreak while $\varphi_i(a,q)$ is an
element of this subspace. Therefore, we have
$$
\hskip -2em
R_q(g)\varphi_i(a,q)=R(g)\varphi_i(a,q)=\varphi_i(a\,g,q).
\mytag{4.11}
$$
Let's substitute \mythetag{4.11} into \mythetag{4.10}. Then in the
relationship obtained we set $a=e$. The quantities $\varphi_i(e,q)$
are some constant numbers, we denote them $c_{iq}=\varphi_i(e,q)$.
As a result we get
$$
\hskip -2em
\varphi_i(g,q)=\sum^{n_{r(q)}}_{j=1}c_{jq}\,F^j_i(g,r(q)).
\mytag{4.12}
$$
The formula \mythetag{4.12} is an expansion of the function
$\varphi_i(g,q)$ in the set of functions $F^j_i(g,r(q))$. But the
set of functions $\varphi_i(g,q)$ in \mythetag{4.9} constitutes a basis
in $L_2(G)$. It is a complete set; each element $\varphi(g)\in L_2(G)$
has an expansion in this set of functions. Due to the formula \mythetag{4.12}
such an expansion can be transformed into an expansion of $\varphi(g)$
in the set of functions $F^j_i(g,r(q))$. Hence, the set of functions
$F^j_i(g,r(q))$ also is a complete set for $L_2(G)$. The
theorem~\mythetheorem{4.3} is proved.
\qed\enddemo
Let $(\varphi,G,V)$ be some finite-dimensional complex
representation of a finite group $G$. It is completely reducible
(see theorem~\mythetheorem{2.2}). It is expanded into
a sum of irreducible representations:
$$
\hskip -2em
\varphi=\varphi_1\oplus\ldots\oplus\varphi_\nu.
\mytag{4.13}
$$
Each of the irreducible representations $\varphi_q$ in \mythetag{4.13}
is equivalent to one of the representations $f_{r(q)}$ in our complete
set $f_1,\,\ldots,\,f_m$. Let's denote by $k_r$ the number of irreducible
representations in the expansion \mythetag{4.13} which are equivalent to
the representation $f_r$. Then the expansion \mythetag{4.13} is rewritten
as
$$
\hskip -2em
\varphi\cong k_1\,f_1\oplus\ldots\oplus k_m\,f_m.
\mytag{4.14}
$$
The number $k_r$ in \mythetag{4.14} is called the {\it multiplicity}
of the entry of the irreducible representation $f_r$ in $\varphi$.
The orthogonality relationship \mythetag{4.6} for characters enables
us to calculate the multiplicities without performing the expansion
\mythetag{4.13} itself:
$$
\hskip -2em
k_r=\frac{1}{N}\sum_{g\in G}\operatorname{tr}\varphi(g)\,\overline{\operatorname{tr} f_r(g)}.
\mytag{4.15}
$$
The relationship \mythetag{4.15} is derived from the following expansion
for the function $\operatorname{tr}\varphi(g)$:
$$
\hskip -2em
\operatorname{tr}\varphi(g)=k_1\,\operatorname{tr} f_1(g)+\ldots+k_m\,\operatorname{tr} f_m(g).
\mytag{4.16}
$$
The relationship \mythetag{4.16} in turn is derived from \mythetag{4.14}.
\par
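The formula \mythetag{4.15} is convenient for machine computation. As a small added sketch (the representation and characters used below are standard facts about $S_3$, not taken from the text above), one can decompose the three-dimensional permutation representation of $S_3$: its character counts fixed points, and \mythetag{4.15} yields the multiplicities $k=(1,0,1)$, i\.\,e\. the trivial representation plus the standard two-dimensional one.

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))  # S3
N = len(G)

def perm_matrix(p):
    """3x3 permutation matrix: e_i -> e_{p(i)}."""
    M = np.zeros((3, 3))
    for i in range(3):
        M[p[i], i] = 1.0
    return M

def fixed(p):
    return sum(p[i] == i for i in range(3))

# Irreducible characters of S3: trivial, sign, standard (2-dimensional).
irr_chars = [lambda p: 1,
             lambda p: -1 if fixed(p) == 1 else 1,
             lambda p: fixed(p) - 1]

# Multiplicities by (4.15): k_r = (1/N) sum_g tr phi(g) conj(tr f_r(g)).
phi_char = {p: np.trace(perm_matrix(p)) for p in G}
k = [round(sum(phi_char[g] * chi(g) for g in G) / N) for chi in irr_chars]

assert k == [1, 0, 1]  # permutation representation = trivial + standard
```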
Let's find the expansion \mythetag{4.14} in the case of the right
regular representation $R(g)$. The corresponding expansion for the left
regular representation is the same because these two representations are
equivalent. In order to calculate the trace of the operator $R(g)$ it
is necessary to choose a basis in $L_2(G)$ and calculate the matrix
elements of the operator $R(g)$ in this basis. The
theorem~\mythetheorem{4.3}, which was proved above, says that the set of
functions $F^j_i(g,r)$ corresponding to some complete set of irreducible
representations can be used as a basis in $L_2(G)$. The basis functions
$F^j_i(g,r)$ are enumerated with three indices $i$, $j$, and $r$.
Therefore, the matrix of the operator $R(g)$ in this basis is represented
by a six-index array $R^{jp}_{qi}(r,s)$. This array is determined as
follows:
$$
\hskip -2em
R(g)F^j_i(a,r)=\sum^m_{s=1}\sum^{n_s}_{q=1}\sum^{n_s}_{p=1}
R^{jp}_{qi}(r,s)\,F^q_p(a,s).
\mytag{4.17}
$$
The trace of the six-index array $R^{jp}_{qi}(r,s)$ representing a matrix
is calculated according to the following formula:
$$
\hskip -2em
\operatorname{tr} R(g)=\sum^m_{r=1}\sum^{n_r}_{i=1}\sum^{n_r}_{j=1}
R^{ji}_{ji}(r,r).
\mytag{4.18}
$$
Let's calculate the left hand side of the equality \mythetag{4.17}
directly from the relationship \mythetag{1.2} that define the operator
$R(g)$:
$$
R(g)F^j_i(a,r)=F^j_i(a\,g,r)=\sum^{n_r}_{p=1}F^j_p(a,r)\,F^p_i(g,r).
\quad
\mytag{4.19}
$$
Here, deriving the formula \mythetag{4.19}, we used the relationship
$f_r(a\,g)=f_r(a)\,\raise 1pt\hbox{$\sssize\circ$} \, f_r(g)$ written in the matrix form. Comparing
the formulas \mythetag{4.17} and \mythetag{4.19}, we find
$$
R^{jp}_{qi}(r,s)=\delta_{rs}\delta^j_q\,F^p_i(g,r).
$$
The rest is to substitute this expression into \mythetag{4.18} and
perform the summations prescribed by that formula:
$$
\hskip -2em
\operatorname{tr} R(g)=\sum^m_{r=1}n_r\,\operatorname{tr} f_r(g).
\mytag{4.20}
$$
Let's compare \mythetag{4.20} and \mythetag{4.16}. The result of this
comparison is formulated in the following theorem.
\mytheorem{4.4} Each irreducible representation $f_r$ from some complete
set $f_1,\,\ldots,\,f_m$ of irreducible finite-dimensional complex
representations of a finite group $G$ enters the right regular
representation $(R,G,L_2(G))$ with the multiplicity $k_r$ equal to its
dimension, i\.\,e\. $k_r=n_r=\dim V_r$.
\endproclaim
Exactly the same proposition is valid for the left regular
representation $(L,G,L_2(G))$ of the group $G$ as well. The
theorem~\mythetheorem{4.4} has the following immediate corollary
that follows from the fact that $\dim L_2(G)=|G|$.
\mycorollary{4.1} The order of a finite group $N=|G|$ is equal to the
sum of squares of the dimensions of all its non-equivalent irreducible
finite-dimensional complex representations $f_1,\,\ldots,\,f_m$,
i\.\,e\. $N=(n_1)^2+\ldots+(n_m)^2$.
\endproclaim
The same result can be obtained if we calculate the total number
of functions entering the orthogonality relationships \mythetag{4.7} and
forming a complete orthogonal set of functions in $L_2(G)$.\par
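Both the corollary and the theorem~\mythetheorem{4.4} admit a direct numerical check (an added sketch for $S_3$, whose irreducible dimensions $1$, $1$, $2$ are standard): the sum of squared dimensions gives the group order, and applying \mythetag{4.15} to the regular character $\operatorname{tr} R(g)$, which equals $N$ at $g=e$ and $0$ elsewhere, recovers $k_r=n_r$.

```python
from itertools import permutations

G = list(permutations(range(3)))  # S3
N = len(G)

def fixed(p):
    return sum(p[i] == i for i in range(3))

dims = [1, 1, 2]  # dimensions of the irreducible representations of S3
# Corollary 4.1: the group order equals the sum of squared dimensions.
assert N == sum(n * n for n in dims)

# Theorem 4.4 through (4.15): the character of the regular representation
# is tr R(g) = N at g = e and 0 elsewhere, hence k_r = n_r.
irr_chars = [lambda p: 1,
             lambda p: -1 if fixed(p) == 1 else 1,
             lambda p: fixed(p) - 1]
e = tuple(range(3))
reg_char = lambda g: N if g == e else 0
k = [sum(reg_char(g) * chi(g) for g in G) // N for chi in irr_chars]
assert k == dims
```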
Let's consider the set of characters $\chi_1,\,\ldots,\,\chi_m$ for
irreducible representations from some complete set $f_1,\,\ldots,\,f_m$.
Due to the theorem~\mythetheorem{4.2} and the relationships \mythetag{4.6}
they are orthogonal. But in the general case they do not form a complete set
of such functions in $L_2(G)$. From the theorem~\mythetheorem{3.1} we
know that these functions are constants within conjugacy classes of $G$.
Let's denote by $M_2(G)$ the set of complex numeric functions on $G$
constant within each conjugacy class of $G$. It is clear that $M_2(G)$
is a linear subspace in $L_2(G)$. It inherits the Hermitian scalar product
\mythetag{1.1} from the space $L_2(G)$.
\mytheorem{4.5} The characters $\chi_1,\,\ldots,\,\chi_m$ of the
representations $f_1,\,\ldots,\,f_m$ forming a complete set of irreducible
finite-dimensional representations of a finite group $G$ form a complete
set of orthogonal functions (a basis) in the space $M_2(G)\subseteq L_2(G)$.
\endproclaim
Due to the theorem~\mythetheorem{3.1} all of the characters
$\chi_1,\,\ldots,\,\chi_m$ belong to $M_2(G)$. They are orthogonal
to each other and normalized to unity, which follows from the
theorem~\mythetheorem{4.2}. Let $\varphi(g)$ be an arbitrary element
of the space $M_2(G)$. Then we can expand it in the set of functions
$F^j_i(g,r)$ (see theorem~\mythetheorem{4.3}):
$$
\hskip -2em
\varphi(g)=\sum^m_{r=1}\sum^{n_r}_{i=1}\sum^{n_r}_{j=1}c_j^{\,i}(r)
\,F^j_i(g,r).
\mytag{4.21}
$$
Let's perform the conjugation $g\mapsto a\,g\,a^{-1}$ in the argument of
the function $\varphi(g)$. This operation does not change its value since
$\varphi(g)\in M_2(G)$. This value is not changed upon averaging over
conjugations by means of all elements of the group $G$ either:
$$
\hskip -2em
\varphi(g)=\frac{1}{N}\sum_{a\in G}\varphi(a\,g\,a^{-1}).
\mytag{4.22}
$$
Let's substitute the expansion \mythetag{4.21} into \mythetag{4.22}. This
yields
$$
\varphi(g)=\sum^m_{r=1}\sum^{n_r}_{i=1}\sum^{n_r}_{j=1}c_j^{\,i}(r)
\left(\frac{1}{N}\shave{\sum_{a\in G}}F^j_i(a\,g\,a^{-1},r)\right).
\quad
\mytag{4.23}
$$
We denote by $\psi^j_i(g,r)$ the expression enclosed in round brackets
in the right hand side of \mythetag{4.23}. For this quantity we get
$$
\psi^j_i(g,r)=\frac{1}{N}\sum_{a\in G}\sum^{n_r}_{p=1}\sum^{n_r}_{q=1}
F^j_p(a,r)\,F^p_q(g,r)\,F^q_i(a^{-1},r).
$$
Here we used the relationship $f_r(a\,g\,a^{-1})=f_r(a)\,\raise 1pt\hbox{$\sssize\circ$} \, f_r(g)
\,\raise 1pt\hbox{$\sssize\circ$} \, f_r(a^{-1})$ written in the matrix form. Now let's recall that
$F^j_i(a,r)$ are unitary matrices and apply the orthogonality relationship
\mythetag{4.7}:
$$
\gather
\psi^j_i(g,r)=\frac{1}{N}\sum_{a\in G}\sum^{n_r}_{p=1}\sum^{n_r}_{q=1}
F^j_p(a,r)\,F^p_q(g,r)\,\overline{F^i_q(a,r)}=\\
=\frac{1}{n_r}\sum^{n_r}_{p=1}\sum^{n_r}_{q=1}F^p_q(g,r)
\,\delta^j_i\,\delta^q_p.
\endgather
$$
Upon performing the summation in the right hand side of the above formula
for the quantity $\psi^j_i(g,r)$ we get
$$
\psi^j_i(g,r)=\frac{1}{n_r}\operatorname{tr} f_r(g)\,\delta^j_i=\frac{1}{n_r}
\chi_r(g)\,\delta^j_i.
$$
The rest is to substitute this expression back into the formula
\mythetag{4.23}. As a result we find
$$
\hskip -2em
\varphi(g)=\sum^m_{r=1}\left(\,\shave{\sum^{n_r}_{i=1}}
\frac{c^{\,i}_i(r)}{n_r}\right)\chi_r(g).
\mytag{4.24}
$$
From \mythetag{4.24} it is clear that any function $\varphi(g)\in
M_2(G)$ is expanded in the set of functions \pagebreak $\chi_1,\,
\ldots,\,\chi_m$.\par
The following theorem on the number of irreducible representations
in a complete set of such representations for a finite group is an obvious
corollary of the theorem~\mythetheorem{4.5}, which is proved just above.
\mytheorem{4.6} The number of representations in a complete set
$f_1,\,\ldots,\,f_m$ of irreducible finite-dimensional complex
representations of a finite group $G$ coincides with the number of
conjugacy classes in the group $G$.
\endproclaim
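The theorem~\mythetheorem{4.6} is easy to test for a concrete group. In the added sketch below the conjugacy classes of $S_3$ are collected programmatically; there are three of them, matching the three irreducible representations of dimensions $1$, $1$, $2$ used in the earlier illustrations.

```python
from itertools import permutations

G = list(permutations(range(3)))  # S3

def compose(p, q):
    """Composition of permutations: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i in range(3):
        inv[p[i]] = i
    return tuple(inv)

# The conjugacy classes {a g a^{-1} : a in G}, collected as a set of sets.
classes = {frozenset(compose(compose(a, g), inverse(a)) for a in G) for g in G}

# Theorem 4.6: the number of classes equals the number of irreducible
# representations in a complete set; for S3 both numbers are 3.
assert len(classes) == 3
```

The class sizes $1$, $2$, $3$ correspond to the identity, the two $3$-cycles, and the three transpositions.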
\head
\SectionNum{5}{65} Expansion into irreducible components.
\endhead
\rightheadtext{\S\,5. Expansion into irreducible components.}
Let $G$ be a finite group. The theorem~\mythetheorem{4.6} determines
the number of irreducible representations in a complete set $f_1,\,\ldots,
\,f_m$, while the theorem~\mythetheorem{4.4} yields a way for finding such
representations. Indeed, each of the representations $f_1,\,\ldots,\,f_m$
enters the right regular representation $(R,G,L_2(G))$ at least once.
Therefore, in order to find it one should construct the expansion
\mythetag{4.13} for $\varphi=R$. For each particular finite group $G$
this could be done with the tools of linear algebra.\par
Suppose that this part of work is already done and some complete set
of unitary representations $f_1,\,\ldots,\,f_m$ is constructed. Then
upon choosing orthonormal bases in the spaces $V_1,\,\ldots,\,V_m$ of
these representations we can assume that the matrix elements $F^j_i(g,r)$
of the operators $f_r(g)$ are known. Under these assumptions we consider
the problem of expanding a given representation $(\varphi,G,V)$ into
its irreducible components. Let's begin with defining the following
operators:
$$
\hskip -2em
P^i_j(r)=\frac{n_r}{N}\sum_{a\in G}\overline{F^j_i(a,r)}\,\varphi(a).
\mytag{5.1}
$$
The number of such operators coincides with the number of functions
$F^j_i(a,r)$. However, some of these operators can be equal to zero.
The operators $P^i_j(r)$ are interpreted as the coefficients of the
Fourier expansion of an operator-valued function in the orthogonal system
of functions in $L_2(G)$. The following expansion confirms this
interpretation:
$$
\hskip -2em
\varphi(g)=\sum^m_{r=1}\sum^{n_r}_{i=1}\sum^{n_r}_{j=1}
F^j_i(g,r)\,P^i_j(r).
\mytag{5.2}
$$
It is easily derived from the orthogonality relationship \mythetag{4.7}.
On the base of the same relationship \mythetag{4.7} one can derive a
number of other relationships for the operators \mythetag{5.1}. First of
all we consider the following ones:
$$
\align
&\hskip -2em
\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_j(r)=\sum^{n_r}_{q=1}F^q_j(g,r)\,P^i_q(r),
\mytag{5.3}\\
&\hskip -2em
P^i_j(r)\,\raise 1pt\hbox{$\sssize\circ$} \,\varphi(g)=\sum^{n_r}_{q=1}F^i_q(g,r)\,P^q_j(r).
\mytag{5.4}\\
\endalign
$$
We prove the relationship \mythetag{5.3} by means of direct calculations.
In order to transform the left hand side of \mythetag{5.3} we use the
formula \mythetag{5.1} for the operator $P^i_j(r)$:
$$
\gather
\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_j(r)=\frac{n_r}{N}\sum_{a\in G}
\overline{F^j_i(a,r)}\,\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \,\varphi(a)=\\
=\frac{n_r}{N}\sum_{a\in G}\overline{F^j_i(a,r)}
\,\varphi(g\,a).
\endgather
$$
Replacing $a$ with $b=g\,a$ in summation over the group, we get
$$
\gather
\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_j(r)=\frac{n_r}{N}\sum_{b\in G}
\overline{F^j_i(g^{-1}\,b,r)}\,\varphi(b)=\\
=\frac{n_r}{N}\sum_{b\in G}\sum^{n_r}_{q=1}\overline{
F^j_q(g^{-1},r)\,F^q_i(b,r)}\,\varphi(b).
\endgather
$$
Here we used the relationship $f_r(g^{-1}\,b)=f_r(g^{-1})\,\raise 1pt\hbox{$\sssize\circ$} \,
f_r(b)$ written in the matrix form. Now the rest is to use the
relationship $f_r(g^{-1})=f_r(g)^{-1}$ and the unitarity of the
matrix $F(g,r)$:
$$
\gather
\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_j(r)=\sum^{n_r}_{q=1}F^q_j(g,r)\left(
\frac{n_r}{N}\shave{\sum_{b\in G}}\overline{F^q_i(b,r)}\,\varphi(b)
\right)=\\
=\sum^{n_r}_{q=1}F^q_j(g,r)\,P^i_q(r).
\endgather
$$
Comparing the left and right hand sides of the above equalities, we
see that the formula \mythetag{5.3} is proved. The formula \mythetag{5.4}
is proved in a similar way, therefore, we do not give its proof here.\par
Let's set $j=i$ in the formulas \mythetag{5.3} and \mythetag{5.4}
and then sum up these equalities over the index $i$. The double sums in
right hand sides of the resulting equalities do coincide. Therefore, the
result can be written as
$$
\hskip -2em
\gathered
\sum^{n_r}_{i=1}\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_i(r)=\sum^{n_r}_{i=1}
P^i_i(r)\,\raise 1pt\hbox{$\sssize\circ$} \,\varphi(g)=\\
=\sum^{n_r}_{i=1}\sum^{n_r}_{j=1}F^j_i(g,r)\,P^i_j(r).
\endgathered
\mytag{5.5}
$$
The right hand side of \mythetag{5.5} differs from that of
\mythetag{5.2} by the absence of the sum over $r$. Combining
\mythetag{5.5} and \mythetag{5.2}, we obtain
$$
\hskip -2em
\varphi(g)=\sum^m_{r=1}\sum^{n_r}_{i=1}\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_i(r)
=\sum^m_{r=1}\sum^{n_r}_{i=1}P^i_i(r)\,\raise 1pt\hbox{$\sssize\circ$} \,\varphi(g).
\mytag{5.6}
$$
Due to \mythetag{5.6} it is natural to introduce new operators by
means of the following formula:
$$
\hskip -2em
P(r)=\sum^{n_r}_{i=1}P^i_i(r)=\frac{n_r}{N}\sum_{a\in G}
\overline{\operatorname{tr} f_r(a)}\,\varphi(a).
\mytag{5.7}
$$
In terms of \mythetag{5.7} the relationship \mythetag{5.6} itself
is rewritten as
$$
\hskip -2em
\varphi(g)=\sum^m_{r=1}\varphi(g)\,\raise 1pt\hbox{$\sssize\circ$} \, P(r)
=\sum^m_{r=1}P(r)\,\raise 1pt\hbox{$\sssize\circ$} \,\varphi(g).
\mytag{5.8}
$$
Setting $g=e$ in \mythetag{5.8}, we get an expansion of the unity (of the
identical operator) in operators \mythetag{5.7}:
$$
\hskip -2em
1=\sum^m_{r=1}P(r).
\mytag{5.9}
$$
\mytheorem{5.1} The operators $P(r)\!:\,V\to V$, $r=1,\,\ldots,\,m$, given
by the formula \mythetag{5.7} possess the following properties:
\roster
\rosteritemwd=5pt
\item they satisfy the relationships $P(r)^2=P(r)$, because of which
the nonzero ones among them, $P(r)\neq 0$, are projectors onto the subspaces
$V(r)=\operatorname{Im} P(r)$;
\item they commute with the representation operators $\varphi(g)$, because
of which the subspaces $V(r)$ are invariant with respect to $\varphi(g)$;
\item they satisfy the relationship \mythetag{5.9} and the relationships
\linebreak
$P(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P(s)=0$ for $r\neq s$, because of which the expansion
$V=V(1)\oplus\ldots\oplus V(m)$ is an expansion into the direct sum of
invariant subspaces.
\endroster
\endproclaim
The relationships $P(r)^2=P(r)$ from the first item of the theorem
and the relationships $P(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P(s)=0$ for $r\neq s$ from the third
item of the theorem can be combined into one relationship:
$$
\hskip -2em
P(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P(s)=P(r)\,\delta_{rs}=\cases 0 &\text{for \ } r\neq s,\\
P(r) &\text{for \ }r=s.\endcases
\mytag{5.10}
$$
The relationship \mythetag{5.10} is easily derived from the following more
general relationship for the operators $P^i_j(r)$ defined in \mythetag{5.1}:
$$
\hskip -2em
P^i_j(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^k_q(s)=\delta_{rs}\,\delta^i_q\, P^k_j(r).
\mytag{5.11}
$$
It is convenient to prove \mythetag{5.11} by direct calculations. From the
formula \mythetag{5.1} for the product of operators in the left hand side
of \mythetag{5.11} we derive
$$
P^i_j(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^k_q(s)=\frac{n_r\,n_s}{N^2}\sum_{a\in G}\sum_{b\in G}
\overline{F^j_i(a,r)\,F^q_k(b,s)}\,\varphi(a\,b).
$$
Let's denote $c=a\,b$ and choose $c$ as a new parameter in summation over
the group $G$ in place of $b$. This yields
$$
\gather
P^i_j(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^k_q(s)=\frac{n_r\,n_s}{N^2}\sum_{a\in G}\sum_{c\in G}
\overline{F^j_i(a,r)\,F^q_k(a^{-1}\,c,s)}\,\varphi(c)=\\
=\frac{n_r\,n_s}{N^2}\sum_{a\in G}\sum_{c\in G}\overline{F^j_i(a,r)}
\sum^{n_s}_{p=1}\overline{F^q_p(a^{-1},s)\,F^p_k(c,s)}\,\varphi(c).
\endgather
$$
In the next step we use the unitarity of the matrix $F(a,s)$:
$$
P^i_j(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^k_q(s)=\sum^{n_s}_{p=1}\sum_{a\in G}
\frac{n_r\,\overline{F^j_i(a,r)}\,F^p_q(a,s)}{N}
\sum_{c\in G}\frac{n_s\,\overline{F^p_k(c,s)}\,\varphi(c)}{N}.
$$
And finally, we use \mythetag{5.1} and the orthogonality relationship
\mythetag{4.7}:
$$
P^i_j(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^k_q(s)=\sum^{n_s}_{p=1}\delta_{rs}\,\delta^p_j\,
\delta^i_q\,P^k_p(s)=\delta_{rs}\,\delta^i_q\,P^k_j(r).
$$
Comparing the left and right hand sides of this formula with
\mythetag{5.11}, we see that the formula \mythetag{5.11} is proved.
Hence, the relationship \mythetag{5.10} is also proved.\par
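The relationship \mythetag{5.11} can also be confirmed by direct computation. In the added sketch below we take the two-dimensional orthogonal realization of $S_3$ and choose $\varphi=f_r$ itself, the simplest possible case; the operators \mythetag{5.1} then turn out to be the matrix units, and \mythetag{5.11} holds exactly.

```python
import numpy as np

# The 2-dimensional irreducible representation of S3 realized by the six
# orthogonal symmetry matrices of a triangle; we take phi = f_r itself,
# so each phi(a) is one of these matrices.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
rot = np.array([[c, -s], [s, c]])
ref = np.array([[1.0, 0.0], [0.0, -1.0]])
F = [np.linalg.matrix_power(rot, k) @ m
     for m in (np.eye(2), ref) for k in range(3)]
N, n = len(F), 2

# Operators (5.1): P^i_j = (n/N) sum_a conj(F^j_i(a)) phi(a).
P = [[(n / N) * sum(np.conj(A[j, i]) * A for A in F)
      for j in range(n)] for i in range(n)]  # P[i][j] stands for P^i_j

# Check (5.11) with r = s: P^i_j o P^k_q = delta^i_q P^k_j.
for i in range(n):
    for j in range(n):
        for k in range(n):
            for q in range(n):
                lhs = P[i][j] @ P[k][q]
                rhs = P[k][j] if i == q else np.zeros((2, 2))
                assert np.allclose(lhs, rhs)
```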
The relationship $P(r)^2=P(r)$, which follows from \mythetag{5.10},
in the case of a nonzero operator $P(r)\neq 0$ means that $P(r)$ is a
projection operator. It projects the space $V$ onto the subspace
$V(r)=\operatorname{Im} P(r)$ parallel to the subspace $\operatorname{Ker} P(r)$ (see more details
in \mybookcite{1}). The relationship
$$
\hskip -2em
P(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P(s)=0=P(s)\,\raise 1pt\hbox{$\sssize\circ$} \, P(r)\text{\ \ for \ }r\neq s
\mytag{5.12}
$$
means that the projectors $P(1),\,\ldots,\,P(m)$ commute and that
$\operatorname{Im} P(r)\subseteq\operatorname{Ker} P(s)$ for $r\neq s$. Due to \mythetag{5.12}
and \mythetag{5.9} the set of projectors $P(1),\,\ldots,\,P(m)$ is
a concordant and complete set of projectors. This set of projectors
determines an expansion of $V$ into a direct sum of subspaces:
$$
\hskip -2em
V=V(1)\oplus\ldots\oplus V(m)=\bigoplus^m_{r=1}V(r).
\mytag{5.13}
$$
The second item of the theorem~\mythetheorem{5.1} claiming the
commutativity of $P(r)$ and $\varphi(g)$ follows immediately from
\mythetag{5.5}. Due to this fact all subspaces in the expansion
\mythetag{5.13} are invariant subspaces of the representation
$(\varphi,G,V)$.\par
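All three items of the theorem~\mythetheorem{5.1} can be verified numerically. The added sketch below builds the projectors \mythetag{5.7} for the three-dimensional permutation representation of $S_3$ (a standard example, not taken from the text) and checks the relationships $P(r)^2=P(r)$, $P(r)\circ P(s)=0$ for $r\neq s$, and \mythetag{5.9}.

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))  # S3
N = len(G)

def perm_matrix(p):
    M = np.zeros((3, 3))
    for i in range(3):
        M[p[i], i] = 1.0
    return M

def fixed(p):
    return sum(p[i] == i for i in range(3))

# Irreducible characters of S3 paired with their dimensions n_r.
irr = [(lambda p: 1, 1),
       (lambda p: -1 if fixed(p) == 1 else 1, 1),
       (lambda p: fixed(p) - 1, 2)]

# Projectors (5.7): P(r) = (n_r/N) sum_a conj(tr f_r(a)) phi(a),
# here for the 3-dimensional permutation representation phi of S3.
P = [(n / N) * sum(chi(a) * perm_matrix(a) for a in G) for chi, n in irr]

for r in range(3):
    assert np.allclose(P[r] @ P[r], P[r])        # P(r)^2 = P(r)
    for s in range(3):
        if r != s:
            assert np.allclose(P[r] @ P[s], 0)   # the relationship (5.12)
assert np.allclose(sum(P), np.eye(3))            # the expansion (5.9)
# dim V(r) = k_r * n_r: here k = (1, 0, 1), so the ranks are 1, 0, 2.
assert [round(np.trace(P[r])) for r in range(3)] == [1, 0, 2]
```

Note that $P(2)=0$ here: the sign representation does not enter the permutation representation, in agreement with the multiplicity computation by \mythetag{4.15}.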
The matrix elements $F^j_i(a,r)$ of the operators $f_r(a)$
depend on a choice of bases in spaces where they act. The operators
$P^i_j(r)$ calculated according to the formula \mythetag{5.1}
also depend on a choice of these bases. However, the operators
$P(r)$ do not depend on bases since the traces of the operators
$f_r(a)$ in the formula \mythetag{5.7} are their (basis-free)
scalar invariants. Therefore, the expansion \mythetag{5.13} is
also invariant: it is determined by the group $G$ itself and by its
representation $\varphi$. Let's study how the expansions \mythetag{5.13}
and \mythetag{4.14} are related to each other.
\mytheorem{5.2} The restriction of the representation $\varphi$ to
the invariant subspace $V(r)=\operatorname{Im} P(r)$ is isomorphic to the irreducible
representation $f_r$ taken with the multiplicity $k_r$, where $k_r$ is
the coefficient of $f_r$ in the expansion \mythetag{4.14}.
\endproclaim
In order to prove the theorem~\mythetheorem{5.2} we use the following
relationship whose left hand side coincides with the operator $P(r)$ in the
case of $\varphi=f_s$ (see formula \mythetag{5.7}):
$$
\hskip -2em
\frac{n_r}{N}\sum_{a\in G}\sum^{n_r}_{i=1}\overline{F^i_i(a,r)}
\,f_s(a)=\cases 0\text{\ \ for \ } s\neq r,\\ 1 \text{\ \ for \ }
s=r.\endcases
\mytag{5.14}
$$
In order to verify that \mythetag{5.14} \pagebreak is valid it
is sufficient to pass from the operators $f_s(a)$ to their matrices
$F(a,s)$ and then to apply the orthogonality relationship
\mythetag{4.7}.\par
Now let's consider the expansion \mythetag{4.13}. According to
\mythetag{4.13}, the space $V$ is a direct sum of irreducible subspaces
$V=V_1\oplus\ldots\oplus V_\nu$, the restriction of $\varphi$ to each
such subspace is isomorphic to some irreducible representation from the
complete set $f_1,\,\ldots,\,f_m$:
$$
\hskip -2em
\varphi\,\hbox{\vrule height 8pt depth 10pt width 0.5pt}_{\,V_q}
\cong f_{r(q)}.
\mytag{5.15}
$$
Substituting \mythetag{5.15} into \mythetag{5.7} and taking into account
\mythetag{5.14}, we find that $V(r)$ is the sum of those subspaces $V_q$
in the expansion $V=V_1\oplus\ldots\oplus V_\nu$ for which $r(q)=r$. The
number of such subspaces is equal to $k_r$, while the restriction of
$\varphi$ to each of them is isomorphic to $f_r$. The
theorem~\mythetheorem{5.2} is proved.\par
According to the theorem~\mythetheorem{5.2}, the operators $P(r)$
yield a constructive way to build the expansion \mythetag{4.14}, while
the expansion itself is unique up to a permutation of the summands.\par
Let's consider a separate subspace $V(r)$ corresponding to the
component $k_r\,f_r$ in the expansion \mythetag{4.14}. If $k_r=0$, the
subspace $V(r)=\{0\}$ is trivial. If $k_r=1$, the subspace $V(r)$ is
irreducible, it does not require a further expansion. It remains to consider
the case $k_r>1$, which should be treated specially. In this case the subspace
$V(r)$ is expanded into the sum of several irreducible subspaces
$$
\hskip -2em
V(r)=\bigoplus^{k_r}_{q=1}W_q\text{, \ where \ }\dim W_q=n_r.
\mytag{5.16}
$$
In contrast to the expansion \mythetag{5.13}, the expansion \mythetag{5.16}
is not unique. One of the ways for constructing such an expansion is due to
the operators $P^i_i(r)$. Their sum is equal to $P(r)$ according to the
formula \mythetag{5.7}. From \mythetag{5.11} for these operators we derive
$$
\xalignat 2
&P^i_i(r)^2=P^i_i(r),
&&P^i_i(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^j_j(r)=0\text{\ \ for \ } i\neq j.\qquad\quad
\mytag{5.17}
\endxalignat
$$
There is no summation over $i$ and $j$ in these formulas. Due to
\mythetag{5.17} the operators $P^i_i(r)$, $i=1,\,\ldots,\,n_r$ form
a concordant and complete set of projection operators. They define
an expansion of $V(r)$ into a direct sum of smaller subspaces:
$$
\hskip -2em
V(r)=\bigoplus^{n_r}_{i=1}V_i(r).
\mytag{5.18}
$$\par
The projection operators $P^i_i(r)$ do not commute with
$\varphi(g)$. Therefore, the subspaces $V_i(r)=\operatorname{Im} P^i_i(r)$
in the expansion \mythetag{5.18} are not invariant for the
representation $\varphi$. However, we can overcome this difficulty.
For this purpose we use the following equalities easily derived from
\mythetag{5.11}:
$$
\xalignat 2
&P^k_k(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_k(r)=P^i_k(r),
&&P^i_k(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_i(r)=P^i_k(r),\qquad\quad
\mytag{5.19}\\
&P^k_i(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_k(r)=P^i_i(r),
&&P^i_k(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^k_i(r)=P^k_k(r).\qquad\quad
\mytag{5.20}
\endxalignat
$$
\mytheorem{5.3} For $i\neq k$ the operator $P^i_k(r)$ performs
a bijective mapping from $V_i(r)$ onto $V_k(r)$.
\endproclaim
The proof of the theorem~\mythetheorem{5.3} is based on the
formulas \mythetag{5.19} and \mythetag{5.20}. Indeed, let $\bold v
\in V_i(r)$ and let $\bold u=P^i_k(r)\bold v$. Then, using the first
relationship \mythetag{5.19}, we derive $P^k_k(r)\bold u=\bold u$. Hence,
$\bold u\in V_k(r)$, i\.\,e\. $P^i_k(r)$ maps $V_i(r)$ into $V_k(r)$.
The mapping $P^i_k(r)\!:\,V_i(r)\to V_k(r)$ is bijective since it is
invertible. According to \mythetag{5.20}, the mapping
$P^k_i(r)\!:\,V_k(r)\to V_i(r)$ is inverse to it.\par
The equality $\dim V_i(r)=\dim V_k(r)$ is an immediate corollary
of the theorem~\mythetheorem{5.3}, which is proved just above. Using
this equality, from the expansions \mythetag{5.16} and \mythetag{5.18}
we derive
$$
\xalignat 2
&\dim V(r)=k_r\,n_r,
&&\dim V(r)=n_r\,\dim V_i(r).
\endxalignat
$$
Comparing these two formulas, we find that
$$
\dim V_i(r)=k_r,\quad i=1,\,\ldots,\,n_r.
$$
The subspaces $V_1(r),\,\ldots,\,V_{n_r}(r)$ are connected by virtue of the
mappings $P^i_k(r)$. It is easy to verify that the diagram
$$
\vcenter to 120pt{\hsize=1cm}
\vadjust{\vskip 5pt\hbox to 0pt{\kern 60pt
\special{PSfile=diagram.eps voffset=-5}\hss}\vskip -5pt}
\mytag{5.21}
$$
is commutative. Indeed, $P^j_k(r)\,\raise 1pt\hbox{$\sssize\circ$} \, P^i_j(r)=P^i_k(r)$. This
equality is derived from \mythetag{5.11}. Let's choose a basis
$\bold e^1_1(r),\,\ldots,\,\bold e^1_{k_r}(r)$ in the subspace
$V_1(r)$ and then replicate it to the other subspaces $V_i(r)$ by
means of the mappings $P^1_i(r)$:
$$
\bold e^i_1(r)=P^1_i(r)\bold e^1_1(r),\ \ldots,\ \bold e^i_{k_r}(r)=
P^1_i(r)\bold e^1_{k_r}(r).
$$
Due to the commutativity of the diagram \mythetag{5.21} we get
$$
\hskip -2em
P^i_k(r)\bold e^i_s(r)=\bold e^k_s(r),\quad s=1,\,\ldots,\,k_r.
\mytag{5.22}
$$
The whole set of vectors $\bold e^i_s(r)$, $i=1,\,\ldots,\,n_r$,
$s=1,\,\ldots,\,k_r$ is a linearly independent set since the sum of
subspaces in \mythetag{5.18} is a direct sum. Let's define new subspaces
as the spans of the following sets of vectors:
$$
U_s(r)=\langle\bold e^1_s(r),\,\ldots,\,\bold e^{n_r}_s(r)\rangle,
\quad s=1,\,\ldots,\,k_r.\qquad
\mytag{5.23}
$$
Due to the formula \mythetag{5.22} the subspaces \mythetag{5.23} are
invariant with respect to the operators $P^k_i(r)$. However, a stronger
proposition is also valid.
\mytheorem{5.4} The subspace $U_s(r)$ in \mythetag{5.23} is an invariant
subspace of the representation $(\varphi,G,V)$. It is irreducible and the
restriction of $\varphi$ to $U_s(r)$ is isomorphic to $f_r$.
\endproclaim
In order to prove the invariance of $U_s(r)$ with respect to $\varphi$
we use the relationship \mythetag{5.3}. We have proved it in the very
beginning of this section. Applying it, we get
$$
\hskip -2em
\gathered
\varphi(g)\bold e^i_s(r)=\varphi(g)P^1_i(r)\bold e^1_s(r)=\\
=\sum^{n_r}_{q=1}F^q_i(g,r)\,P^1_q(r)\bold e^1_s(r)=\sum^{n_r}_{q=1}
F^q_i(g,r)\bold e^q_s(r).
\endgathered
\mytag{5.24}
$$
Not only does the relationship \mythetag{5.24} prove the invariance of
$U_s(r)$ with respect to the operator $\varphi(g)$, but it shows that
the matrix of the operator $\varphi(g)$ in the basis
$\bold e^1_s(r),\,\ldots,\,\bold e^{n_r}_s(r)$ coincides with the
matrix of the operator $f_r(g)$. Thus, the second proposition of
the theorem~\mythetheorem{5.4} is also proved.\par
As a result we have found a constructive way of expanding
the space $V$ into a direct sum of subspaces
$$
V=\bigoplus^{m}_{r=1}\bigoplus^{k_r}_{s=1}U_s(r)
$$
that corresponds to the expansion \mythetag{4.14} of the representation
$\varphi$ into its irreducible components.\par
\newpage
\global\firstpage@true
\topmatter
\title
Contacts
\endtitle
\endtopmatter
\document
\line{\vtop{\hsize 5cm
{\bf Address:\special{html:<a name="pg75">}\special{html:</a>} }
\medskip\noindent
Ruslan A. Sharipov,\newline
Math. Department,\newline
Bashkir State University,\newline
32 Frunze street,\newline
Ufa 450074, Russia
\medskip
{\bf Phone:}\medskip
\noindent
+7-(347)-273-67-18 (Office)\newline
+7-(917)-476-93-48 (Cell)
}\hss
\vtop{\hsize 4.3cm
{\bf Home address:}\medskip\noindent
Ruslan A. Sharipov,\newline
5 Rabochaya street,\newline
Ufa 450003, Russia
\vskip 1cm
{\bf E-mails:}\medskip
\noindent
r-sharipov\@mail.ru\newline
R\hskip 0.5pt\_\hskip 1.5pt Sharipov\@ic.bashedu.ru
}
}
\bigskip
{\bf URL's:}\medskip
\noindent
\myhref{http://www.geocities.com/r-sharipov/}
{http:/\hskip -2pt/www.geocities.com/r-sharipov}\newline
\myhref{http://www.freetextbooks.boom.ru/}
{http:/\hskip -2pt/www.freetextbooks.boom.ru}\newline
\myhref{http://sovlit2.narod.ru/}
{http:/\hskip -2pt/sovlit2.narod.ru}\newline
\par
\newpage
\global\firstpage@true
\topmatter
\title
Appendix
\endtitle
\endtopmatter
\document
\rightheadtext{List of publications.}
\leftheadtext{List of publications.}
\Refs\nofrills{\special{html:<a name="pg76">}List\special{html:</a>}
of publications by the author\\ for the period 1986--2006.}
{\bf Part 1. Soliton theory.}\medskip
\ref\myrefno{1}\by Sharipov R. A.\paper Finite-gap analogs of $N$-multiplet
solutions of the KdV equation\jour Uspehi Mat. Nauk\vol 41\issue 5\yr 1986
\pages 203--204
\endref
\ref\myrefno{2}\by Sharipov R. A.\paper Soliton multiplets of the
Korteweg-de Vries equation\jour Dokladi AN SSSR\vol 292\yr 1987
\issue 6\pages 1356--1359
\endref
\ref\myrefno{3}\by Sharipov R. A.\paper Multiplet solutions of
the Kadomtsev-Petviashvili equation on a finite-gap background
\jour Uspehi Mat. Nauk\vol 42\yr 1987\issue 5\pages 221--222
\endref
\ref\myrefno{4}\by Bikbaev R. F. \& Sharipov R. A.\paper Magnetization
waves in Landau-Lifshits model\jour Physics Letters A\vol 134\yr 1988
\issue 2\pages 105--108\moreref see
\myhref{http://arxiv.org/abs/solv-int/9905008}{solv-int/9905008}
\endref
\ref\myrefno{5}\by Bikbaev R. F. \& Sharipov R. A.\paper Asymptotics
as $t\to\infty$ for a solution of the Cauchy problem for the Korteweg-de
Vries equation in the class of potentials with finite-gap behaviour as
$x\to\pm\infty$\jour Theor\. and Math\. Phys\.\vol 78\yr 1989\issue 3
\pages 345--356
\endref
\ref\myrefno{6}\by Sharipov R. A.\paper On integration of the Bogoyavlensky
chains\jour Mat\. zametki\vol 47\yr 1990\issue 1\pages 157--160
\endref
\ref\myrefno{7}\by Cherdantsev I. Yu. \& Sharipov R. A.\paper Finite-gap
solutions of the Bul\-lough-Dodd-Jiber-Shabat equation\jour Theor\. and
Math\. Phys\.\vol 82\yr 1990\issue 1\pages 155--160
\endref
\ref\myrefno{8}\by Cherdantsev I. Yu. \& Sharipov R. A.\paper Solitons
on a finite-gap background in Bullough-Dodd-Jiber-Shabat model\jour
International\. Journ\. of Modern Physics A\vol 5\yr 1990\issue 5
\pages 3021--3027\moreref see
\myhref{http://arxiv.org/abs/math-ph/0112045}{math-ph/0112045}
\endref
\ref\myrefno{9}\by Sharipov R. A. \& Yamilov R. I.\paper Backlund
transformations and the construction of the integrable boundary value
problem for the equation\linebreak $u_{xt}=e^u-e^{-2u}$\inbook
{\tencyr\char '074}Some problems of mathematical physics and asymptotics
of its solutions{\tencyr\char '076}\publ Institute of mathematics BNC UrO
AN SSSR\publaddr Ufa\yr 1991\pages 66--77\moreref see
\myhref{http://arxiv.org/abs/solv-int/9412001}{solv-int/9412001}
\endref
\ref\myrefno{10}\by Sharipov R. A.\paper Minimal tori in
five-dimensional sphere in $\Bbb C^3$\jour Theor\. and Math\.
Phys\.\vol 87\yr 1991\issue 1\pages 48--56\moreref see
\myhref{http://arxiv.org/abs/math.DG/0204253}{math.DG/0204253}
\endref
\ref\myrefno{11}\by Safin S. S. \& Sharipov R. A.\paper Backlund
autotransformation for the equation $u_{xt}=e^u-e^{-2u}$\jour Theor\.
and Math\. Phys\.\vol 95\yr 1993\issue 1\pages 146--159
\endref
\ref\myrefno{12}\by Boldin A. Yu. \& Safin S. S. \& Sharipov R. A.
\paper On an old paper of Tzitzeika and the inverse scattering
method\jour Journal of Mathematical Physics\vol 34\yr 1993\issue 12
\pages 5801--5809
\endref
\ref\myrefno{13}\by Pavlov M. V. \& Svinolupov S. I. \& Sharipov R. A.
\paper Invariant criterion of integrability for a system of equations
of hydrodynamical type\inbook {\tencyr\char '074}Integrability in dynamical
systems{\tencyr\char '076}\publ Inst. of Math. UrO RAN\publaddr Ufa
\yr 1994\pages 27--48\moreref\jour Funk\. Anal\. i Pril\.\vol 30\yr 1996
\issue 1\pages 18--29\moreref see
\myhref{http://arxiv.org/abs/solv-int/9407003}{solv-int/9407003}
\endref
\ref\myrefno{14}\by Ferapontov E. V. \& Sharipov R. A.\paper On
conservation laws of the first order for a system of equations of
hydrodynamical type\jour Theor\. and Math\. Phys\.\vol 108\yr 1996
\issue 1\pages 109--128
\endref
\medskip{\bf Part 2. Geometry of the normal shift.}\medskip
\ref\myrefno{1}\by Boldin A. Yu. \& Sharipov R. A.\paper Dynamical
systems accepting the normal shift\jour Theor\. and Math\. Phys\.
\vol 97\yr 1993\issue 3\pages 386--395\moreref see
\myhref{http://arxiv.org/abs/chao-dyn/9403003}{chao-}
\myhref{http://arxiv.org/abs/chao-dyn/9403003}{dyn/9403003}
\endref
\ref\myrefno{2}\by Boldin A. Yu. \& Sharipov R. A.\paper Dynamical
systems accepting the normal shift\jour Dokladi RAN\vol 334\yr 1994
\issue 2\pages 165--167
\endref
\ref\myrefno{3}\by Boldin A. Yu. \& Sharipov R. A.\paper Multidimensional
dynamical systems accepting the normal shift\jour Theor\. and Math\. Phys\.
\vol 100\yr 1994\issue 2\pages 264--269\moreref see
\myhref{http://arxiv.org/abs/patt-sol/9404001}{patt-sol/9404001}
\endref
\ref\myrefno{4}\by Sharipov R. A.\paper Problem of metrizability for
the dynamical systems accepting the normal shift\jour Theor\. and Math\.
Phys\.\vol 101\yr 1994\issue 1\pages 85--93\moreref see
\myhref{http://arxiv.org/abs/solv-int/9404003}{solv-int/9404003}
\endref
\ref\myrefno{5}\by Sharipov R. A.\paper Dynamical systems accepting the
normal shift\jour Uspehi Mat\. Nauk\vol 49\yr 1994\issue 4\page 105
\moreref see \myhref{http://arxiv.org/abs/solv-int/9404002}
{solv-int/9404002}
\endref
\ref\myrefno{6}\by Boldin A. Yu. \& Dmitrieva V. V. \& Safin S. S.
\& Sharipov R. A.\paper Dynamical systems accepting the normal shift on an
arbitrary Riemannian manifold\inbook {\tencyr\char '074}Dynamical systems
accepting the normal shift{\tencyr\char '076}\publ Bashkir State
University\publaddr Ufa\yr 1994\pages 4--19\moreref see also\nofrills
\jour Theor\. and Math\. Phys\.\vol 103\yr 1995\issue 2\pages 256--266
\nofrills\moreref and \myhref{http://arxiv.org/abs/hep-th/9405021}
{hep-th/9405021}
\endref
\ref\myrefno{7}\by Boldin A. Yu. \& Bronnikov A. A. \& Dmitrieva V. V.
\& Sharipov R. A.\paper Complete normality conditions for the dynamical
systems on Riemannian manifolds\inbook {\tencyr\char '074}Dynamical
systems accepting the normal shift{\tencyr\char '076}\publ Bashkir State
University\yr 1994\pages 20--30\moreref see also\nofrills\jour Theor\.
and Math\. Phys\.\vol 103\yr 1995\issue 2\pages 267--275\nofrills
\moreref and \myhref{http://arxiv.org/abs/astro-ph/9405049}
{astro-ph/9405049}
\endref
\ref\myrefno{8}\by Sharipov R. A.\paper Higher dynamical systems accepting
the normal shift\inbook {\tencyr\char '074}Dynamical systems accepting the
normal shift{\tencyr\char '076}\publ Bashkir State University\yr 1994
\pages 41--65
\endref
\ref\myrefno{9}\by Bronnikov A. A. \& Sharipov R. A.\paper Axially
symmetric dynamical systems accepting the normal shift in $\Bbb R^n$
\inbook {\tencyr\char '074}Integrability in dynamical systems{\tencyr
\char '076}\publ Inst\. of Math\. UrO RAN\publaddr Ufa\yr 1994
\pages 62--69
\endref
\ref\myrefno{10}\by Sharipov R. A.\paper Metrizability by means of
a conformally equivalent metric for the dynamical systems\inbook
{\tencyr\char '074}Integrability in dynamical systems{\tencyr\char
'076}\publ Inst\. of Math\. UrO RAN\publaddr Ufa\yr 1994\pages 80--90
\moreref see also\nofrills\jour Theor\. and Math\. Phys\.\vol 103
\yr 1995\issue 2\pages 276--282
\endref
\ref\myrefno{11}\by Boldin A. Yu. \& Sharipov R. A.\paper On the
solution of the normality equations for the dimension $n\geqslant 3$
\jour Algebra i Analiz\vol 10\yr 1998\issue 4\pages 31--61\moreref
see also \myhref{http://arxiv.org/abs/solv-int/9610006}
{solv-int/9610006}
\endref
\ref\myrefno{12}\by Sharipov R. A.\book Dynamical systems admitting
the normal shift, \rm Thesis for the degree of Doctor of Sciences in
Russia\publ \myhref{http://arxiv.org/abs/math.DG/0002202}
{math.DG/0002202}\publaddr Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2000
\pages 1--219
\endref
\ref\myrefno{13}\by Sharipov R. A.\paper Newtonian normal shift in
multidimensional Riemannian geometry\jour Mat\. Sbornik\vol 192
\yr 2001\issue 6\pages 105--144\moreref see also
\myhref{http://arxiv.org/abs/math.DG/0006125}{math.DG}
\myhref{http://arxiv.org/abs/math.DG/0006125}{/0006125}
\endref
\ref\myrefno{14}\by Sharipov R. A.\paper Newtonian dynamical systems
admitting the normal blow-up of points\jour Zap\. semin\. POMI
\vol 280\yr 2001\pages 278--298\moreref see also
\myhref{http://arxiv.org/abs/math.DG/0008081}{math.DG/0008081}
\endref
\ref\myrefno{15}\by Sharipov R. A.\paper On the solutions of the weak
normality equations in multidimensional case\jour
\myhref{http://arxiv.org/abs/math.DG/0012110}{math.DG/0012110}
in Electronic archive \myhref{http://arxiv.org}{http:/\hskip -2pt/}
\myhref{http://arxiv.org}{arxiv.org}\yr 2000\pages 1--16
\endref
\ref\myrefno{16}\by Sharipov R. A.\paper First problem of globalization
in the theory of dynamical systems admitting the normal shift of
hypersurfaces\jour International Journal of Mathematics and Mathematical
Sciences\vol 30\yr 2002\issue 9\pages 541--557\moreref see also
\myhref{http://arxiv.org/abs/math.DG/0101150}{math.DG/0101150}
\endref
\ref\myrefno{17}\by Sharipov R. A.\paper Second problem of globalization
in the theory of dyna\-mical systems admitting the normal shift of
hypersurfaces\jour \myhref{http://arxiv.org/abs/math.DG/0102141}
{math.DG} \myhref{http://arxiv.org/abs/math.DG/0102141}{/0102141}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2001\pages 1--21
\endref
\ref\myrefno{18}\by Sharipov R. A.\paper A note on Newtonian, Lagrangian,
and Hamiltonian dynamical systems in Riemannian manifolds\jour
\myhref{http://arxiv.org/abs/math.DG/0107212}{math.DG/0107212} in
Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2001\pages 1--21
\endref
\ref\myrefno{19}\by Sharipov R. A.\paper Dynamical systems admitting
the normal shift and wave equations\jour Theor\. and Math\. Phys\.
\vol 131\yr 2002\issue 2\pages 244--260\moreref see also
\myhref{http://arxiv.org/abs/math.DG/0108158}{math.DG/0108158}
\endref
\ref\myrefno{20}\by Sharipov R. A.\paper Normal shift in general
Lagrangian dynamics\jour \myhref{http://arxiv.org/abs/math.DG/0112089}
{math.DG} \myhref{http://arxiv.org/abs/math.DG/0112089}{/0112089}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2001\pages 1--27
\endref
\ref\myrefno{21}\by Sharipov R. A.\paper Comparative analysis for a pair
of dynamical systems one of which is Lagrangian\jour
\myhref{http://arxiv.org/abs/math.DG/0204161}{math.DG/0204161}
in Electronic archive \myhref{http://arxiv.org}{http:/\hskip -2pt/}
\myhref{http://arxiv.org}{arxiv.org}\yr 2002\pages 1--40
\endref
\ref\myrefno{22}\by Sharipov R. A.\paper On the concept of a normal
shift in non-metric geometry\jour
\myhref{http://arxiv.org/abs/math.DG/0208029}{math.DG/0208029}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2002\pages 1--47
\endref
\ref\myrefno{23}\by Sharipov R. A.\paper $V$-representation for
the normality equations in geometry of generalized Legendre
transformation\jour
\myhref{http://arxiv.org/abs/math.DG/0210216}{math.DG/0210216}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2002\pages 1--32
\endref
\ref\myrefno{24}\by Sharipov R. A.\paper On a subset of the normality
equations describing a generalized Legendre transformation\jour
\myhref{http://arxiv.org/abs/math.DG/0212059}{math.DG/0212059}
in Electronic ar\-chive \yr 2002\pages 1--19
\endref
\medskip{\bf Part 3. Several complex variables.}\medskip
\ref\myrefno{1}\by Sharipov R. A. \& Sukhov A. B.\paper On $CR$-mappings
between algebraic Cauchy-Riemann manifolds and the separate algebraicity
for holomorphic functions\jour Trans\. of American Math\. Society
\vol 348\yr 1996\issue 2\pages 767--780\moreref see also\nofrills
\jour Dokladi RAN\vol 350\yr 1996\issue 4\pages 453--454
\endref
\ref\myrefno{2}\by Sharipov R. A. \& Tsyganov E. N.\paper On the separate
algebraicity along families of algebraic curves\book Preprint of Bashkir
State University\publaddr Ufa\yr 1996\pages 1--7\moreref see also\nofrills
\jour Mat\. Zametki\vol 68\yr 2000\issue 2\pages 294--302
\endref
\medskip{\bf Part 4. Symmetries and invariants.}\medskip
\ref\myrefno{1}\by Dmitrieva V. V. \& Sharipov R. A.\paper On the
point transformations for the second order differential equations
\jour \myhref{http://arxiv.org/abs/solv-int/9703003}{solv-int/9703003}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 1997\pages 1--14
\endref
\ref\myrefno{2}\by Sharipov R. A.\paper On the point transformations
for the equation $y''=P+3\,Q\,y'+3\,R\,{y'}^2+S\,{y'}^3$\jour
\myhref{http://arxiv.org/abs/solv-int/9706003}{solv-int/9706003}
in Electronic archive \myhref{http://arxiv.org}
{http:/\hskip -2pt/}\linebreak\myhref{http://arxiv.org}{arxiv.org}
\yr 1997\pages 1--35\moreref see also\nofrills\jour Vestnik BashGU
\vol 1(I)\yr 1998\pages 5--8
\endref
\ref\myrefno{3}\by Mikhailov O. N. \& Sharipov R. A.\paper On the
point expansion for a certain class of differential equations of
the second order\jour Diff\. Uravneniya\vol 36\yr 2000\issue 10
\pages 1331--1335\moreref see also
\myhref{http://arxiv.org/abs/solv-int/9712001}{solv-int/9712001}
\endref
\ref\myrefno{4}\by Sharipov R. A.\paper Effective procedure of
point-classification for the equation $y''=P+3\,Q\,y'+3\,R\,{y'}^2
+S\,{y'}^3$\jour \myhref{http://arxiv.org/abs/math.DG/9802027}
{math.DG/9802027} in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 1998
\pages 1--35
\endref
\ref\myrefno{5}\by Dmitrieva V. V. \& Gladkov A. V. \& Sharipov R. A.
\paper On some equations that can be brought to the equations of
diffusion type\jour Theor\. and Math\. Phys\.\vol 123\yr 2000
\issue 1\pages 26--37\moreref see also
\myhref{http://arxiv.org/abs/math.AP/9904080}{math.AP/9904080}
\endref
\ref\myrefno{6}\by Dmitrieva V. V. \& Neufeld E. G. \& Sharipov R. A.
\& Tsaregorod\-tsev~A.~A.\paper On a point symmetry analysis for
generalized diffusion type equations\jour
\myhref{http://arxiv.org/abs/math.AP/9907130}{math.AP/9907130}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 1999\pages 1--52
\endref
\medskip{\bf Part 5. General algebra.}\medskip
\ref\myrefno{1}\by Sharipov R. A.\paper Orthogonal matrices with
rational components in composing tests for High School students
\jour \myhref{http://arxiv.org/abs/math.GM/0006230}{math.GM/0006230}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2000\pages 1--10
\endref
\ref\myrefno{2}\by Sharipov R. A.\paper On the rational extension
of Heisenberg algebra\jour \myhref{http://arxiv.org/abs/math.RA/0009194}
{math.} \myhref{http://arxiv.org/abs/math.RA/0009194}{RA/0009194}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2000\pages 1--12
\endref
\ref\myrefno{3}\by Sharipov R. A.\paper An algorithm for generating
orthogonal matrices with rational elements\jour
\myhref{http://arxiv.org/abs/cs.MS/0201007}{cs.MS/0201007} in
Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2002\pages 1--7
\endref
\medskip{\bf Part 6. Condensed matter physics.}\medskip
\ref\myrefno{1}\by Lyuksyutov S. F. \& Sharipov R. A.\paper Note
on kinematics, dynamics, and thermodynamics of plastic glassy media
\jour\myhref{http://arxiv.org/abs/cond-mat/0304190}{cond-mat/0304190}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2003\pages 1--19
\endref
\ref\myrefno{2}\by Lyuksyutov S. F. \& Sharipov R. A. \& Sigalov G.
\& Paramonov P. B.\paper Exact analytical solution for electrostatic
field produced by biased atomic force microscope tip dwelling above
dielectric-conductor bilayer\jour
\myhref{http://arxiv.org/abs/cond-mat/0408247}{cond-}
\myhref{http://arxiv.org/abs/cond-mat/0408247}{mat/0408247}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2004\pages 1--6
\endref
\ref\myrefno{3}\by Lyuksyutov S. F. \& Sharipov R. A.\paper Separation
of plastic deformations in polymers based on elements of general nonlinear
theory\jour \myhref{http://arxiv.org/abs/cond-mat/0408433}{cond-mat}
\myhref{http://arxiv.org/abs/cond-mat/0408433}{/0408433}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2004\pages 1--4
\endref
\ref\myrefno{4}\by Comer J. \& Sharipov R. A.\paper A note on the
kinematics of dislocations in crystals\jour
\myhref{http://arxiv.org/abs/math-ph/0410006}{math-ph/0410006}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2004\pages 1--15
\endref
\ref\myrefno{5}\by Sharipov R. A.\paper Gauge or not gauge?\nofrills
\jour \myhref{http://arxiv.org/abs/cond-mat/0410552}{cond-mat/0410552}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2004\pages 1--12
\endref
\ref\myrefno{6}\by Sharipov R. A.\paper Burgers space versus real space
in the nonlinear theory of dislocations\jour
\myhref{http://arxiv.org/abs/cond-mat/0411148}{cond-mat/0411148}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2004\pages 1--10
\endref
\ref\myrefno{7}\by Comer J. \& Sharipov R. A.\paper On the geometry
of a dislocated medium\jour
\myhref{http://arxiv.org/abs/math-ph/0502007}{math-ph/0502007}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2005\pages 1--17
\endref
\ref\myrefno{8}\by Sharipov R. A.\paper A note on the dynamics and
thermodynamics of dislocated crystals\jour
\myhref{http://arxiv.org/abs/cond-mat/0504180}{cond-mat/0504180}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2005\pages 1--18
\endref
\ref\myrefno{9}\by Lyuksyutov S. F. \& Paramonov P. B. \& Sharipov R. A.
\& Sigalov G.\paper Induced nanoscale deformations in polymers using
atomic force microscopy\jour Phys\. Rev\. B \vol 70\yr 2004
\issue 174110
\endref
\medskip{\bf Part 7. Tensor analysis.}\medskip
\ref\myrefno{1}\by Sharipov R. A.\paper Tensor functions of tensors
and the concept of extended tensor fields\jour
\myhref{http://arxiv.org/abs/math/0503332}{math/0503332}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2005\pages 1--43
\endref
\ref\myrefno{2}\by Sharipov R. A.\paper Spinor functions of spinors
and the concept of extended spinor fields\jour
\myhref{http://arxiv.org/abs/math.DG/0511350}{math.DG/0511350}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2005\pages 1--56
\endref
\ref\myrefno{3}\by Sharipov R. A.\paper Commutation relationships and
curvature spin-tensors for extended spinor connections\jour
\myhref{http://arxiv.org/abs/math.DG/0512396}{math.DG/0512396}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2005\pages 1--22
\endref
\medskip{\bf Part 8. Particles and fields.}\medskip
\ref\myrefno{1}\by Sharipov R. A.\paper A note on Dirac spinors in
a non-flat space-time of ge\-neral relativity\jour
\myhref{http://arxiv.org/abs/math.DG/0601262}{math.DG/0601262}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2006\pages 1--22
\endref
\ref\myrefno{2}\by Sharipov R. A.\paper A note on metric connections
for chiral and Dirac spi\-nors\jour
\myhref{http://arxiv.org/abs/math.DG/0602359}{math.DG/0602359}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2006\pages 1--40
\endref
\ref\myrefno{3}\by Sharipov R. A.\paper On the Dirac equation in a
gravitation field and the secondary quantization\jour
\myhref{http://arxiv.org/abs/math.DG/0603367}{math.DG/0603367}
in Electronic archive \myhref{http://arxiv.org}{http:/\hskip -2pt/}
\myhref{http://arxiv.org}{arxiv.org}\yr 2006\pages 1--10
\endref
\ref\myrefno{4}\by Sharipov R. A.\paper The electro-weak and color
bundles for the Standard Model in a gravitation field\jour
\myhref{http://arxiv.org/abs/math.DG/0603611}{math.DG/0603611}
in Electronic archive\linebreak\myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2006\pages 1--8
\endref
\ref\myrefno{5}\by Sharipov R. A.\paper A note on connections of
the Standard Model in a gravitation field\jour
\myhref{http://arxiv.org/abs/math.DG/0604145}{math.DG/0604145}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2006\pages 1--11
\endref
\ref\myrefno{6}\by Sharipov R. A.\paper A note on the Standard Model
in a gravitation field\jour
\myhref{http://arxiv.org/abs/math.DG/0605709}{math.DG/0605709}
in Electronic archive \myhref{http://arXiv.org}{http:/\negskp/arXiv.org}\yr 2006\pages 1--36
\endref
\medskip{\bf Part 9. Textbooks.}\medskip
\ref\myrefno{1}\by Sharipov R. A.\book Theory of representations of
finite groups\publ Bash-NII-Stroy\publaddr Ufa\yr 1995\moreref
see also \myhref{http://arxiv.org/abs/math.HO/0612104}{math.HO/0612104}
\endref
\ref\myrefno{2}\by Sharipov R. A.\book Course of linear algebra and
multidimensional geometry\publ Bashkir State University\publaddr
Ufa\yr 1996\moreref see also
\myhref{http://arxiv.org/abs/math.HO/0405323}{math.HO/0405323}
\endref
\ref\myrefno{3}\by Sharipov R. A.\book Course of differential geometry,
\publ Bashkir State University\publaddr Ufa\yr 1996\moreref see also
\myhref{http://arxiv.org/abs/math.HO/0412421}{math.HO/0412421}
\endref
\ref\myrefno{4}\by Sharipov R. A.\book Classical electrodynamics and
theory of relativity\publ Bash\-kir State University\publaddr Ufa\yr 1996
\moreref see also
\myhref{http://arxiv.org/abs/physics/0311011}{physics/0311011}
\endref
\ref\myrefno{5}\by Sharipov R. A.\book Foundations of geometry for
university students and high-school students\publ Bashkir State
University\yr 1998
\endref
\ref\myrefno{6}\by Sharipov R. A.\book Quick introduction to tensor
analysis\publ free on-line textbook\yr 2004\moreref see also
\myhref{http://arxiv.org/abs/math.HO/0403252}{math.HO/0403252}
\endref
\endRefs
\enddocument
\end
% https://arxiv.org/abs/1305.6031

\title{Congruences for Generalized Frobenius Partitions with an Arbitrarily Large Number of Colors}

\begin{abstract}
In his 1984 AMS Memoir, George Andrews defined the family of $k$--colored generalized Frobenius partition functions. These are denoted by $c\phi_k(n)$ where $k\geq 1$ is the number of colors in question. In that Memoir, Andrews proved (among many other things) that, for all $n\geq 0,$ $c\phi_2(5n+3) \equiv 0\pmod{5}.$ Soon after, many authors proved congruence properties for various $k$--colored generalized Frobenius partition functions, typically with a small number of colors. Work on Ramanujan--like congruence properties satisfied by the functions $c\phi_k(n)$ continues, with recent works completed by Baruah and Sarmah as well as the author. Unfortunately, in all cases, the authors restrict their attention to small values of $k.$ This is often due to the difficulty in finding a ``nice'' representation of the generating function for $c\phi_k(n)$ for large $k.$ Because of this, no Ramanujan--like congruences are known where $k$ is large. In this note, we rectify this situation by proving several infinite families of congruences for $c\phi_k(n)$ where $k$ is allowed to grow arbitrarily large. The proof is truly elementary, relying on a generating function representation which appears in Andrews' Memoir but has gone relatively unnoticed.
\end{abstract}

\section{Introduction}
In his 1984 AMS Memoir, George Andrews \cite{AndMem} defined the family of $k$--colored generalized Frobenius partition functions which are denoted by $c\phi_k(n)$ where $k\geq 1$ is the number of colors in question. Among many things,
Andrews \cite[Corollary 10.1]{AndMem} proved that, for all $n\geq 0,$ $c\phi_2(5n+3) \equiv 0\pmod{5}.$
Soon after, many authors proved similar congruence properties for various $k$--colored generalized Frobenius partition functions, typically for a small number of colors $k.$ See, for example, \cite{ES, GarThesis, Kol1, KolPow3, Lovejoy, Ono, PauRad, Sel1, Xiong}.
In recent years, this work has continued. Baruah and Sarmah \cite{BarSar} proved a number of congruence properties for $c\phi_4$, all with moduli which are powers of 4. Motivated by this work of Baruah and Sarmah, the author \cite{SelJIMS} further studied 4--colored generalized Frobenius partitions and proved that
for all $n\geq 0,$ $c\phi_4(10n+6) \equiv 0 \pmod{5}.$
Unfortunately, in all the works mentioned above, the authors restrict their attention to small values of $k.$ This is often due to the difficulty in finding a ``nice'' representation of the generating function for $c\phi_k(n)$ for large $k.$ Because of this, no Ramanujan--like congruences are known where $k$ is large. The goal of this brief note is to rectify this situation by proving several infinite families of congruences for $c\phi_k(n)$ where $k$ is allowed to grow arbitrarily large. The proof is truly elementary, relying on a generating function representation which appears in Andrews' Memoir but has gone relatively unnoticed.
\section{Our Congruence Results}
We begin by noting the following generating function result from Andrews' AMS Memoir \cite[Equation (5.14)]{AndMem}:
\begin{thm}
\label{AndGenFn}
For fixed $k,$ the generating function for $c\phi_k(n)$ is the constant term (i.e., the $z^0$ term) in
$$
\prod_{n=0}^\infty (1+zq^{n+1})^k(1+z^{-1}q^n)^k.
$$
\end{thm}
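Although Andrews uses Theorem \ref{AndGenFn} symbolically, it also yields a direct (if inefficient) way to compute $c\phi_k(n)$: truncate the product at $q^N$, expand each factor by the binomial theorem, and read off the $z^0$ part. The sketch below (the helper name \texttt{cphi} is ours, not from the literature) checks the classical fact $c\phi_1(n)=p(n)$ against the first partition numbers.

```python
from math import comb
from collections import defaultdict

def cphi(k, N):
    """Return [cphi_k(0), ..., cphi_k(N)]: the z^0 term of
    prod_{n>=0} (1+z*q^(n+1))^k (1+z^(-1)*q^n)^k, truncated at q^N."""
    series = {0: [1] + [0] * N}          # series[z-power] -> q-coefficients
    def mult_factor(terms):              # terms: list of (dz, dq, coeff)
        nonlocal series
        new = defaultdict(lambda: [0] * (N + 1))
        for zp, qs in series.items():
            for dz, dq, c in terms:
                row = new[zp + dz]
                for exp, a in enumerate(qs):
                    if a and exp + dq <= N:
                        row[exp + dq] += a * c
        series = dict(new)
    for n in range(N + 1):
        if n + 1 <= N:                   # (1 + z q^{n+1})^k, expanded binomially
            mult_factor([(j, (n + 1) * j, comb(k, j))
                         for j in range(k + 1) if (n + 1) * j <= N])
        mult_factor([(-j, n * j, comb(k, j))   # (1 + z^{-1} q^n)^k
                     for j in range(k + 1) if n * j <= N])
    return series[0]

# One color recovers the ordinary partition function p(n).
assert cphi(1, 10) == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
# First values with two colors; note cphi_2(3) is divisible by 5,
# consistent with Andrews' congruence cphi_2(5n+3) == 0 (mod 5).
print(cphi(2, 3))
```

Truncation is harmless here: every $q$-exponent in the product is nonnegative, so the coefficients of $q^0,\ldots,q^N$ in the constant term are unaffected by the discarded factors and terms.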
Theorem \ref{AndGenFn} is the springboard that Andrews uses to find ``nice'' representations of the generating functions for $c\phi_k(n)$ for $k=1,2,$ and $3.$ It rarely appears in the works of the various authors referenced above; however, it is extremely useful in proving the following theorem, the main result of this note.
\begin{thm}
\label{MainThm}
Let $p$ be prime and let $r$ be an integer such that $0<r<p.$ If
$$
c\phi_k(pn+r) \equiv 0\pmod{p}
$$
for all $n\geq 0,$ then
$$
c\phi_{pN+k}(pn+r) \equiv 0\pmod{p}
$$
for all $N\geq 0$ and $n\geq 0.$
\end{thm}
\begin{proof}
Assume $p$ is prime and $r$ is an integer such that $0<r<p.$ Thanks to Theorem \ref{AndGenFn}, we note that the generating function for $c\phi_{pN+k}(n)$ is the constant term (i.e., the $z^0$ term) in
\begin{equation}
\label{genfn1}
\prod_{n=0}^\infty (1+zq^{n+1})^{pN+k}(1+z^{-1}q^n)^{pN+k}.
\end{equation}
Since $p$ is prime, we know (\ref{genfn1}) is congruent, modulo $p,$ to
\begin{equation}
\label{genfn2}
\prod_{n=0}^\infty (1+(zq^{n+1})^p)^{N}(1+(z^{-1}q^n)^p)^{N}\prod_{n=0}^\infty (1+zq^{n+1})^{k}(1+z^{-1}q^n)^{k}
\end{equation}
thanks to the binomial theorem.
Note that the first product in (\ref{genfn2}) is a function of $q^p$, while the second product is exactly the product from which, thanks to Theorem \ref{AndGenFn}, we obtain the generating function for $c\phi_k(n)$. Since every power of $q$ contributed by the first product is a multiple of $p$, the terms of the form $q^{pn+r}$ with $0<r<p$ in the constant term of (\ref{genfn2}) can only arise from terms of the second product whose $q$-exponents are congruent to $r$ modulo $p.$ It follows that if
$$
c\phi_k(pn+r) \equiv 0\pmod{p}
$$
for all $n\geq 0,$ then
$$
c\phi_{pN+k}(pn+r) \equiv 0\pmod{p}
$$
for all $n\geq 0.$
\end{proof}
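The binomial-theorem step is just the ``freshman's dream'': for $p$ prime, $\binom{p}{j}\equiv 0\pmod{p}$ whenever $0<j<p$, so $(1+x)^p\equiv 1+x^p\pmod{p}$. Combined with a brute-force constant-term computation from Theorem \ref{AndGenFn} (the helper name \texttt{cphi} is our own, and this is a check, not part of the proof), this lets one spot-check the theorem: starting from Andrews' congruence $c\phi_2(5n+3)\equiv 0\pmod{5}$, the same congruence should hold with $2+5N$ colors.

```python
from math import comb
from collections import defaultdict

# The key step of the proof: middle binomial coefficients of a prime
# vanish mod p, so (1+x)^p == 1 + x^p (mod p).
p = 5
assert all(comb(p, j) % p == 0 for j in range(1, p))

def cphi(k, N):
    """[cphi_k(0), ..., cphi_k(N)] via the constant term of the product
    in Theorem AndGenFn, truncated at q^N (brute force, not efficient)."""
    series = {0: [1] + [0] * N}
    def mult_factor(terms):
        nonlocal series
        new = defaultdict(lambda: [0] * (N + 1))
        for zp, qs in series.items():
            for dz, dq, c in terms:
                row = new[zp + dz]
                for exp, a in enumerate(qs):
                    if a and exp + dq <= N:
                        row[exp + dq] += a * c
        series = dict(new)
    for n in range(N + 1):
        if n + 1 <= N:
            mult_factor([(j, (n + 1) * j, comb(k, j))
                         for j in range(k + 1) if (n + 1) * j <= N])
        mult_factor([(-j, n * j, comb(k, j))
                     for j in range(k + 1) if n * j <= N])
    return series[0]

# Andrews: cphi_2(5n+3) == 0 (mod 5).  The main theorem then predicts
# the same congruence for 2 + 5N colors; spot-check N = 1 (seven colors).
for k in (2, 7):
    vals = cphi(k, 8)
    assert vals[3] % 5 == 0 and vals[8] % 5 == 0
print("cphi_k(5n+3) == 0 (mod 5) checked for k = 2, 7 and n = 0, 1")
```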
Of course, once one knows a single congruence of the form
$$
c\phi_k(pn+r) \equiv 0\pmod{p}
$$
for all $n\geq 0,$ where $p$ is prime and $r$ is an integer such that $0<r<p,$ then one can write down an infinite family of congruences for an arbitrarily large number of colors with the same modulus $p.$ We provide a number of such examples here.
\begin{cor}
For all $N\geq 0$ and for all $n\geq 0,$
\begin{eqnarray*}
c\phi_{5N+1}(5n+4) &\equiv& 0 \pmod{5}, \\
c\phi_{7N+1}(7n+5) &\equiv& 0 \pmod{7}, \text{\ \ and} \\
c\phi_{11N+1}(11n+6) &\equiv& 0 \pmod{11}.
\end{eqnarray*}
\end{cor}
\begin{proof}
This corollary of Theorem \ref{MainThm} follows from the fact that $c\phi_1(n) = p(n)$ for all $n\geq 0$ as well as Ramanujan's well-known congruences for $p(n)$ modulo 5, 7, and 11.
\end{proof}
\begin{cor}
For all $N\geq 0$ and for all $n\geq 0,$
$$
c\phi_{5N+2}(5n+3) \equiv 0 \pmod{5}.
$$
\end{cor}
\begin{proof}
This corollary of Theorem \ref{MainThm} follows from
Andrews \cite[Corollary 10.1]{AndMem} where he proved that, for all $n\geq 0,$ $c\phi_2(5n+3) \equiv 0\pmod{5}.$
\end{proof}
\begin{cor}
For all $N\geq 1$ and all $n\geq 0,$
$$c\phi_{3N}(3n+2) \equiv 0\pmod{3}.$$
\end{cor}
\begin{proof}
This corollary of Theorem \ref{MainThm} follows from
Kolitsch's work \cite{KolPow3} where he proved that, for all $n\geq 0,$ $c\phi_3(3n+2) \equiv 0\pmod{3}.$
\end{proof}
One last comment is in order. It is also clear that one can combine corollaries like those above in order to obtain some truly unique--looking congruences. For example, we note the following:
\begin{cor}
For all $N\geq 0$ and all $n\geq 0,$
$$c\phi_{1155N+1002}(1155n+908) \equiv 0\pmod{1155}.$$
\end{cor}
\begin{proof}
The proof of this result follows from the Chinese Remainder Theorem and the fact that
$$1155 = 3\times 5\times 7\times 11$$ along with a combination of the corollaries mentioned above.
\end{proof}
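As a sanity check on the residue bookkeeping behind this combination, here is a short Python sketch (the dictionary name `required` is ours; it simply records, for each prime factor of 1155, the residues of the color count and of the arithmetic progression demanded by the corresponding corollary above):

```python
# residues (colors mod p, argument mod p) required by the corollaries:
# cphi_{3N}(3n+2), cphi_{5N+2}(5n+3), cphi_{7N+1}(7n+5), cphi_{11N+1}(11n+6)
required = {3: (0, 2), 5: (2, 3), 7: (1, 5), 11: (1, 6)}

for prime, (color_res, arg_res) in required.items():
    assert 1002 % prime == color_res  # 1155N + 1002 colors reduce correctly mod p
    assert 908 % prime == arg_res     # 1155n + 908 lies in the right progression mod p

assert 3 * 5 * 7 * 11 == 1155
```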
It is extremely gratifying to be able to explicitly identify such congruences satisfied by these generalized Frobenius partition functions.
[arXiv:1305.6031, math.NT, May 2013: "Congruences for Generalized Frobenius Partitions with an Arbitrarily Large Number of Colors", https://arxiv.org/abs/1305.6031]
https://arxiv.org/abs/1710.08480 --- The generalized Sierpiński Arrowhead Curve

Abstract: We define special Hamiltonian paths and special permutations of the up-facing dark tiles on a checked triangular grid related to the generalized Sierpiński Gasket. Our definitions and observations make it possible to generalize the Sierpiński Arrowhead Curve to all orders. We produce these symmetric recursive curves in many ways from two kinds of asymmetric paths which are in bijective relation and unambiguously transformable into each other at any order. These node-rewriting and edge-rewriting recursive curves keep their self-avoiding and simple properties after the transformation, and their cardinality specifies a new integer sequence. We give a transformation table to change the curves into each other and another table to change them into Lindenmayer-system strings, both by the absolute direction codes of their edges.

\section{The bijective relation between paths and tilings}
In this section we define two kinds of special paths related to the same generator pattern on the triangular grid. We observe their transformability into each other and prove their bijective relation.
\subsection{Checked generator patterns}
Let us consider an equilateral triangle with sides divided into $n$ equal pieces. Connecting the dividing points with line segments parallel to the sides, we get a natural partitioning into $n^2$ smaller congruent subtriangles; we colour the tiles that face upwards (like the original triangle) dark and the remaining subtriangles white.
By infinitely replacing all the dark tiles with contracted copies of the checked generator pattern, we get a symmetric fractal, the \emph{generalized Sierpi\'{n}ski Gasket}. This 2-dimensional checked generator pattern of order $n$ in the $k$-th approximation (repeating this transformation $k$ times) is usually denoted by $SG_{2,n}(k)$. We will refer to the generator pattern in the shorter form $F_n$, and to the approximations of the fractal by $F_n(k)$. See \emph{Figure 1}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{F1-generator+.pdf}
\centerline{}
\centerline{\bf Figure 1. Triangular grid of order 4 with 16 tiles,}
\centerline{\bf $F_4$ checked generator pattern and the 2nd approximation: $F_4(2)$.}
\end{figure}
This fractal family keeps the original properties of the pattern for any order $n$, unlike \emph{Pascal Triangle modulo $n$} patterns, which keep these properties only for prime orders and contain other patterns in the downward-facing white subtriangles.
Our generator pattern $F_n$ contains $n^2$ subtriangles, $T_{n-1}$ white tiles, $T_n$ dark tiles and $T_{n+1}$ grid points as consecutive triangular numbers. The centroids of the monochromatic tiles also form a triangular grid. We will only be using the dark tiles. Their centroids form the \emph{inscribed grid}, and their corners form the \emph{overall grid}. See \emph{Figure 2}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{F2-IOgrids.pdf}
\centerline{}
\centerline{\bf Figure 2. Generator pattern of order 4 consists of $T_4$ dark tiles.}
\centerline{\bf Dots define the inscribed grid (left) and the overall grid (right).}
\centerline{\bf They consist of $T_4=10$ and $T_5=15$ grid points.}
\end{figure}
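These counts are immediate to verify; a small Python sketch (the helper name `T` is ours) checks that white and dark tiles together account for all $n^2$ subtriangles:

```python
def T(n):
    """n-th triangular number; T(n) counts the dark tiles of the order-n generator."""
    return n * (n + 1) // 2

for n in range(1, 100):
    # T_{n-1} white tiles + T_n dark tiles = n^2 subtriangles
    assert T(n - 1) + T(n) == n * n

assert (T(4), T(5)) == (10, 15)   # the grid-point counts shown in Figure 2
```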
\subsection{Paths and permutations of the dark tiles}
By connecting the $T_n$ grid-points of the inscribed grid with the shortest possible edges in all the self-avoiding ways we get Hamiltonian-paths, \emph{H-paths} denoted by $H_n$.
Let us consider a self-avoiding tiling-path on the overall grid called \emph{S-path} (referring to Sierpi\'{n}ski), denoted by $S_n$ which consists of $T_n$ edges and in which all the edges must be lying on different dark subtriangles. For practical reasons we will be using the notation of McKenna: marking the tiles with little ticks in the middle of the edges [McK94]. See the right side of \emph{Figure 3}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{F3-insc+.pdf}
\centerline{}
\centerline{\bf Figure 3. Example of an $H_3$-path and the corresponding $S_3$-path}
\centerline{\bf with the same permutation of the dark tiles.}
\end{figure}
Both paths have directed edges which are originating from the leftmost grid point and terminating in the rightmost grid point.
H-paths and S-paths both connect node-neighbour dark subtriangles and describe permutations of the dark tiles, but the two kinds of paths have different cardinalities.
\subsection{Absolute direction code of the edges}
We will describe both paths with strings of the absolute direction codes of their edges.
Let us define a triangular grid $T_n$ with corners $A,B,C$, where $AB$ is the polar axis. The third corner $C$ is located in the positive (counterclockwise) direction in our polar coordinate system.
We are searching for paths from $A$, the originating leftmost grid point (the pole), to $B$, the terminating rightmost grid point (the end point of the polar axis).
Consider the edges of the path translated to the pole. Their only distinguishing property is their direction. Let us denote this absolute direction by $d$, where ${0}\leq{d}\leq{5}$; a direction code $d$ means an angle of $d\dfrac{\pi}{3}$ radians. See \emph{Figure 4}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{F4-dcodes+.pdf}
\centerline{}
\centerline{\bf Figure 4. Absolute direction codes of H-paths (middle)}
\centerline{\bf and S-paths (left and right sides) in which}
\centerline{\bf parity specifies the facing of the tiles.}
\end{figure}
We can describe the consecutive edges of our paths with a string which consists of these direction codes in both $H$-paths and $S$-paths.
Since the paths are self-avoiding, there are no immediate turn-backs; thus for every \linebreak $d_i$ and $d_{i+1}$, $\mid{d_{i+1}-d_{i}} \mid \neq 3$ holds for both kinds of paths.
The definition of the S-path also excludes turn backs to the same dark tile, therefore consecutive edges $(d_i, d_{i+1})$ of the S-path cannot be the following:
\begin{equation*}
d_{i+1}\not\equiv
\begin{cases}
(d_i +2) \mod 6 & \qquad \text{if $d_i$ is even} \\
(d_i +4) \mod 6 & \qquad \text{if $d_i$ is odd}
\end{cases}
\end{equation*}
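Both restrictions can be checked mechanically. The following minimal Python sketch (the function name `is_valid_s_turns` is ours, and it checks only the turn conditions, not whether the edges actually tile a given pattern) validates a direction-code string:

```python
def is_valid_s_turns(s):
    """Check consecutive direction codes of an S-path string for forbidden turns."""
    d = list(map(int, s))
    for a, b in zip(d, d[1:]):
        if abs(b - a) == 3:                      # no immediate turn-back
            return False
        banned = (a + 2) % 6 if a % 2 == 0 else (a + 4) % 6
        if b == banned:                          # would re-enter the same dark tile
            return False
    return True

assert is_valid_s_turns('105')        # the S-path s_2 of Figure 7
assert not is_valid_s_turns('02')     # 2 after 0 turns back onto the same tile
assert not is_valid_s_turns('03')     # |3 - 0| = 3 is a turn-back
```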
\subsection{Transformability of the paths into each other}
Here we introduce and observe the transformations between the paths in both directions. We will prove that there is a bijection between a subset of $H$-paths and $S$-paths which are related to the same generator pattern.
The transformation from an S-path into a Hamiltonian path means we connect the centroids of all the dark tiles in exactly the same order as the edges of the S-path touched them. This transformation is always possible and results in exactly one H-path, which is different for every S-path. This is an injective relation.
The transformation from an H-path into an S-path means we connect all the dark tiles, from the leftmost point of the overall grid through the contact points of the node-neighbour dark tiles to the rightmost point, with tiling-edges in exactly the same order as the H-path connected them. This is not always possible, but every S-path has a preimage in the set of H-paths; therefore it is a surjective relation.
\begin{remark}
There are at least as many Hamiltonian-paths on the inscribed grid as S-paths on the overall grid: $\mid{H_n} \mid \geqslant \mid{S_n} \mid$, because some of them are impossible to transform into an S-path.
\end{remark}
Consider 3 dark node-neighbour tiles which share one common node in the middle. By connecting their centroids we can always form a Hamiltonian path, but it is impossible to do so with S-paths. We cannot draw 3 consecutive edges which touch 3 different tiles in this arrangement, because the 3 tiles have only one contact point instead of two. See \emph{Figure 5}, where we cannot continue the S-path with an edge which touches the third tile, marked with an X.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{F5.pdf}
\centerline{}
\centerline{\bf Figure 5. Impossibility of the transformation}
\centerline{\bf from an H-path into an S-path.}
\end{figure}
These injective $(S_n \rightarrowtail H_n)$ and surjective $(H_n \twoheadrightarrow S_n)$ relations mean there is a bijection between a subset of $H_n$ and $S_n$. We call this subset the Well-formed Hamiltonian-paths (W-paths), and referring to them by $W_n$.
\subsection{Special Hamiltonian paths}
\begin{theorem}
W-paths on the inscribed grid and S-paths on the corresponding overall grid are in bijective relation if they belong to the same generator pattern: $W_n \mathrel{\rightarrowtail\kern-1.9ex\twoheadrightarrow} S_n$.
\end{theorem}
By excluding some undesirable turns we can define W-paths, a special subset of H-paths which can be bijectively transformed into S-paths and vice versa.
We use new notations ($s_n$, $h_n$ and $w_n$) for the strings of the direction codes referring to the paths of order $n$ denoted by the same capital letters.
Let us consider all possible Hamiltonian paths ($H_n$) on the inscribed grid, with the absolute direction codes described above. We would like to observe the turns of the edge-pairs in the H-path; therefore we consider a basic direction ($0$) from left to right, matching the positions of the originating and terminating points, and we supplement our $h_n$ string with a leading and an ending zero.
The forbidden turns as $(d_i, d_{i+1})$ number pairs are the following:
\begin{equation*}
d_{i+1}\not\equiv
\begin{cases}
(d_i +4) \mod 6 & \qquad \text{if $d_i$ is even} \\
(d_i +2) \mod 6 & \qquad \text{if $d_i$ is odd}
\end{cases}
\end{equation*}
where $d_i$ and $d_{i+1}$ are neighbouring elements of the direction code string $h_n$. Geometrically, this means that the next edge cannot turn $120^{\circ}$ to the right after an even direction, and it cannot turn $120^{\circ}$ to the left after an odd direction. You can see a wrong turn on the left side of \emph{Figure 6} at the 2nd and 3rd edges of the H-path, as the direction code pair 51. We call a string consisting only of well-formed turns, i.e., one satisfying this condition, a $w_n$ string.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{F6-directions+.pdf}
\centerline{}
\centerline{\bf Figure 6. H-path ($h_4=151215540$) and W-path ($w_4=111544015$).}
\end{figure}
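A corresponding checker for well-formed strings is just as short; the following sketch (the function name `is_well_formed` is ours) supplements the string with the leading and ending zero exactly as described above and rejects the forbidden turns:

```python
def is_well_formed(w):
    """Check a w-string: supplement with zeros, then reject forbidden turns."""
    d = [0] + list(map(int, w)) + [0]
    for a, b in zip(d, d[1:]):
        if abs(b - a) == 3:                      # no turn-backs at all
            return False
        banned = (a + 4) % 6 if a % 2 == 0 else (a + 2) % 6
        if b == banned:                          # 120-degree turn on the wrong side
            return False
    return True

assert is_well_formed('111544015')        # the W-path w_4 of Figure 6
assert not is_well_formed('151215540')    # the H-path h_4 contains the bad pair 51
```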
\subsection{Trivial W-paths}
\begin{remark}
Well-formed paths exist for all orders.
\end{remark}
We show a constructing algorithm for a trivial W-path, which proves their existence on the inscribed grid for all orders.
Let $j$ be equal to 1.
The iterated steps are the following: head $(n-j)$ times up-right (1), then once down-right (5), and increment $j$; then head $(n-j)$ times down-left (4), then once to the right (0), and increment $j$ again.
If $n$ is odd, we have to iterate our steps $\dfrac{n-1}{2}$ times, otherwise $\dfrac{n-2}{2}$ times, and in this case we have to add 2 more edges (an arrowhead shape) to the string in the directions up-right (1) and down-right (5) after the iterations have finished.
For the $n=4$ case of our trivial W-path, see the middle image of \emph{Figure 6}.
When $n=6$, our trivial W-path as a string (using the absolute direction codes we previously described) will be the following:
\smallskip
\centerline{$w_6=11111544440111544015$}
\smallskip
This proves that we can always construct at least one W-path on our inscribed grid for any order of $n$.
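The construction is easy to implement; a short Python sketch (the function name `trivial_w` is ours) reproduces both stated examples:

```python
def trivial_w(n):
    """Build the trivial W-path string on the order-n inscribed grid."""
    digits, j = [], 1
    iterations = (n - 1) // 2 if n % 2 == 1 else (n - 2) // 2
    for _ in range(iterations):
        digits += ['1'] * (n - j) + ['5']   # up-right (n-j) times, then down-right
        j += 1
        digits += ['4'] * (n - j) + ['0']   # down-left (n-j) times, then right
        j += 1
    if n % 2 == 0:
        digits += ['1', '5']                # closing arrowhead for even orders
    return ''.join(digits)

assert trivial_w(4) == '111544015'                    # Figure 6, middle
assert trivial_w(6) == '11111544440111544015'         # the stated w_6
assert all(len(trivial_w(n)) == n * (n + 1) // 2 - 1  # T_n - 1 edges
           for n in range(2, 30))
```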
\subsection{Transformation table of the paths}
In W-paths there are no untransformable turns. They have the same cardinality as S-paths: $\mid{W_n} \mid = \mid{S_n} \mid$. They describe the same permutations of the dark tiles. The connecting points of the dark tiles in an S-path can be transformed into the edges of the unique corresponding W-path and vice versa. We show this property and prove the bijection by a transformation table. See \emph{Table 1, Figure 4} and \emph{Figure 7}.
We always have to supplement our $w_n$ string with a leading and an ending zero before transforming a W-path into an S-path: $0 w_n 0 \rightarrow s_n$. Then every pair $(a,b)$ of the supplemented $w_n$ string (first and second digits, second and third digits, etc.) gives a new direction code, a digit of the $s_n$ string, obtained by reading \emph{Table 1} by rows and columns $(a,b)$. These pairs describe two contact points between 3 dark tiles ($P_1 P_2, P_2 P_3, P_3 P_4$ in \emph{Figure 7}). The middle tile has an entering and an exiting point, which are in one-to-one correspondence with the direction code of the actual edge in the S-path. (See the left side and the middle of \emph{Figure 7}, and for the direction codes see \emph{Figure 4}.)
\[
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
\bf
$\blacksquare$ & {\bf $b=0$} & \bf {1} & \bf {2} & \bf {3} & \bf {4} & \bf {5} \\ \hline \hline
\bf {$a=0$} & 0 & 1 & 1 & $\blacksquare$ & $\blacksquare$ & 0 \\ \hline
\bf 1 & 0 & 1 & 1 & $\blacksquare$ & $\blacksquare$ & 0 \\ \hline
\bf 2 & $\blacksquare$ & 2 & 2 & 3 & 3 & $\blacksquare$ \\ \hline
\bf 3 & $\blacksquare$ & 2 & 2 & 3 & 3 & $\blacksquare$ \\ \hline
\bf 4 & 5 & $\blacksquare$ & $\blacksquare$ & 4 & 4 & 5 \\ \hline
\bf 5 & 5 & $\blacksquare$ & $\blacksquare$ & 4 & 4 & 5 \\ \hline
\end{tabular}
\]
\smallskip
\centerline {\bf \emph{Table 1.} Transformation table for changing the direction codes} \centerline {\bf from a W-path into an S-path and vice versa.}
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{F7-trans+2+.pdf}
\centerline{\bf Figure 7. Changing a supplemented W-path into an S-path}
\centerline{\bf by reading Table 1 as values defined by $(a,b)$}
\centerline{\bf and vice versa by reading it in reverse order $(b,a)$.}
{\bf $0 w_2 0=0150 \qquad \rightarrow \qquad s_2=105 \qquad \rightarrow \qquad w_2=15$.}
\end{figure}
To change the direction codes from an S-path to a W-path ($s_n \rightarrow w_n$) we have to read \emph{Table 1} by columns and rows $(b,a)$, because this is the transpose of our matrix.
In this case consecutive edge-pairs of the S-path $(b,a)$ give us the edges of the inscribed grid (digits of the $w_n$ string). Consecutive digit-pairs of the $s_n$ string show the unique possible arrangement of 2 neighbouring dark tiles connected by a contact point ($P_2, P_3$). We can connect their centroids in only one way to get the edges of the W-path. (See the middle and the right side of \emph{Figure 7}.)
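Table 1 can be encoded directly; the sketch below (the names `T1`, `w_to_s` and `s_to_w` are ours, with `None` standing for the forbidden pairs marked $\blacksquare$) reproduces the example of Figure 7 and round-trips a longer path:

```python
T1 = [
    [0, 1, 1, None, None, 0],
    [0, 1, 1, None, None, 0],
    [None, 2, 2, 3, 3, None],
    [None, 2, 2, 3, 3, None],
    [5, None, None, 4, 4, 5],
    [5, None, None, 4, 4, 5],
]

def w_to_s(w):
    """Supplement with zeros, then map consecutive pairs (a, b) to T1[a][b]."""
    sup = '0' + w + '0'
    return ''.join(str(T1[int(a)][int(b)]) for a, b in zip(sup, sup[1:]))

def s_to_w(s):
    """Read the transpose: for consecutive S-digits (b, a), look up T1[a][b]."""
    return ''.join(str(T1[int(b)][int(a)]) for a, b in zip(s, s[1:]))

assert w_to_s('15') == '105'        # 0 w_2 0 = 0150 -> s_2 = 105 (Figure 7)
assert s_to_w('105') == '15'
assert s_to_w(w_to_s('111544015')) == '111544015'   # round trip on w_4
```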
The definitions of the paths with the rules and properties of their transformations, the presented illustrations, the constructing algorithm of the trivial paths, the transformation table and all these considerations prove \emph{Theorem 1}.
\section{Triangular self-avoiding recursive curves}
In this section we will show that these bijective pairs of the paths are the generator curves of the \emph{generalized Sierpi\'{n}ski Arrowhead Curve}. We describe their construction methods. We will call our fractal approximating extended grids the \emph{inscribed graph} and the \emph{overall graph}.
\subsection {FASS-curves}
In 1890, \emph{Peano} discovered the first \emph{space-filling curve}, which was the first continuous geometrical example for \emph{Cantor}'s surprising theorems.
In 1994, \emph{Douglas M. McKenna} proved that, unlike on the square grid, simple triangular Peano-curves (FASS-curves) do not exist [McK94]. I thought non-space-filling curves were worth observing, and in 2009 I found a new family of simple self-avoiding recursive curves.
For further study of our subject (simple \emph{recursive curves} or \emph{Peano-curves}, also known as \emph{FASS-curves}, which have the space-Filling, self-Avoiding, Simple and self-Similar properties) and of the related \emph{Lindenmayer-systems} with their rewriting methods, we recommend the following: [PL90, PLF91, Sa94, M82].
\subsection{Rewriting methods}
Recursive curves are usually described by a formal language called L-system, named after the Hungarian theoretical biologist \emph{Aristid Lindenmayer}, who lived and worked in Utrecht, the Netherlands.
There are 2 different ways to make a recursive curve in the Lindenmayer-system: the node-rewriting (NR) and the edge-rewriting (ER) method.
It is an obvious but often overlooked fact that \emph{node-rewriting} FASS-curves are Hamiltonian-paths in any approximation. In an NR path we keep the edges and substitute \emph{the nodes} in all approximations with the transformed copy of the original path.
\emph{Edge-rewriting} curves work like tessellations. The edges of this self-avoiding path do not visit all the grid points, but they touch every little tile exactly once. We substitute \emph{the edges} in all approximations with the transformed copy of the original path.
Both of our paths can be extended, by a simple self-similar definition, to larger graphs. A \emph{W-path} on the inscribed grid becomes an \emph{NR recursive curve} and yields Hamiltonian paths on the inscribed graph. An \emph{S-path} on the overall grid becomes an \emph{ER recursive curve} and yields tiling-paths on the overall graph. They are not FASS-curves because they are \emph{non-space-filling paths}, but they describe and fill the same fractal pattern $F_n(\infty)$ in the infinite approximation.
\subsection{Approximately space-filling property}
Properties of space-filling curves are often different in finite and infinite approximations. \emph{Mazurkiewicz's theorem} states that there is no continuous, one-to-one mapping of a line segment onto a square; therefore no curve can be both self-avoiding and strictly space-filling, but a finite approximation of a given space-filling curve can be both self-avoiding and approximately space-filling. Our curves pass within a small distance of all points of the dark tiles which surround the curve, and this distance can be arbitrarily reduced by carrying the recursive construction to an appropriate level [PLF91].
\subsection{Edge-rewriting construction}
Here we show how to construct approximations of the $F_n(k)$ fractal curves with the ER method. Let us start with a generator curve $S_n$ on the overall grid which consists of $T_n$ edges on a triangular grid $(T_{n+1})$. Each edge lies on exactly one side of a dark tile, and each dark tile stands in the same position as the big triangle ($F_n(0)=F_1$ generator pattern). Edges originate from the left corner and terminate in the right corner of the big triangle. In the next approximation we divide all the dark tiles into $n$ equal pieces and get $T_n^2$ dark subtriangles in the overall graph. We then substitute the dark subtriangles with contracted, rotated and reflected copies of the generator path, following the direction of the edges. Direction codes define the tiles unambiguously. See \emph{Figure 4}.
\begin{remark}
For producing recursive curves we will use upper index $k$ at $w_n$ and $s_n$ direction code strings to mark the $k$-th approximation level.
\end{remark}
For example the simplest S-path means the following assignment: $s_2^0=0 \rightarrow s_2^1=105$ on the overall grid.
To any $p$ even and $q$ odd direction code the corresponding transformations (rotation and reflection) give the following 3 digits:
\begin{center}
$p \rightarrow \qquad (p+1) \mod 6, \qquad p, \qquad (p+5) \mod 6$
$q \rightarrow \qquad (q+5) \mod 6, \qquad q, \qquad (q+1) \mod 6$
\end{center}
The next approximation contains $T_n^2$ digits (9 edges). See \emph{Figure 8}.
\smallskip
\centerline{$s_2^1=105 \implies s_2^2=012105450$}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{F8+.pdf}
\centerline{}
\centerline{\bf Figure 8. An S-path as an ER recursive curve.}
\centerline{\bf Sierpi\'{n}ski Arrowhead Curve $F_2(2)$.}
\centerline{\bf $0 \rightarrow 105 \implies 012105450$ ($s_2^0 \rightarrow s_2^1 \implies s_2^2$)}
\end{figure}
Let us see this assignment in the general case, from a direction code (one digit, an even number $p$ or an odd number $q$) to $T_n$ edges (direction codes given by the offset sets $x$ and $y$).
The generalized formula for even numbers ($p$):
$p \rightarrow (p+x_1) \mod 6, (p+x_2) \mod 6, ... (p+x_{T_n}) \mod 6$, where $x_i \in \{0,1,4,5\}$
and the corresponding assignments for odd numbers ($q$) are:
$q \rightarrow (q+y_1) \mod 6, (q+y_2) \mod 6, ... (q+y_{T_n})\mod 6$ where $y_i \in \{0,5,2,1\}$.
Missing values in set $x$ and set $y$ mean 2 forbidden turns in S-path (no turn back to the same tile).
The order of the values in set $x$ and set $y$ indicates the corresponding pairs for the other parity, i.e., the reflected turns with the same angle from an even direction and from an odd direction. For example, to an even direction we have the formula $(p+1) \mod 6$; the corresponding formula for an odd direction is $(q+5) \mod 6$, because the second value in set $x$ is a $1$ and in set $y$ it is a $5$.
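For the order $n=2$ generator this substitution is a few lines of Python; a minimal sketch (the function name `expand_s` is ours) reproduces the expansion of Figure 8:

```python
def expand_s(s):
    """One ER substitution step for the order-2 generator: each direction
    code is replaced by three codes according to its parity."""
    out = []
    for d in map(int, s):
        offsets = [1, 0, 5] if d % 2 == 0 else [5, 0, 1]
        out += [(d + t) % 6 for t in offsets]
    return ''.join(map(str, out))

assert expand_s('0') == '105'            # s_2^0 -> s_2^1
assert expand_s('105') == '012105450'    # s_2^1 -> s_2^2 (Figure 8)
```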
\subsection{Node-rewriting construction}
We will use W-paths to get NR recursive curves. The simplest one is $w_2^0=0 \rightarrow w_2^1=15$ on the inscribed grid $(T_2)$, which has 3 nodes. In the next approximations we keep all the previous edges, contract them to the current unit length, and replace all the nodes between them with contracted, rotated and reflected copies of our generator W-path, oriented as the latest approximation of the S-path indicates the facing of the dark tiles. See \emph{Figure 9}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{F9_.pdf}
\centerline{}
\centerline{\bf Figure 9. A W-path as an NR recursive curve.}
\centerline{\bf Sierpi\'{n}ski Arrowhead Curve $F_2(2)$.}
\centerline{\bf $0 \rightarrow 15 \implies 02115540$, ($w_2^0 \rightarrow w_2^1 \implies w_2^2$)}
\end{figure}
To get the next fractal approximation as a string $(w_n^{k+1})$ we have to insert these direction codes before, between and after the digits of our current $w_n^k$ string, but it is much easier to first transform the original W-path into an S-path, use the ER method to construct the required $k$-th approximation of the fractal, and at the end transform it back to the NR code with \emph{Table 1}.
We get the following NR codes for the first 3 approximations (up to $T_2^3$ inscribed graph, $w_2^0 \rightarrow w_2^1 \implies w_2^2 \implies w_2^3$). Larger sized digits show the inheritance of the edges from the previous recursive levels:
\begin {center}
{$0 \rightarrow 15 \implies 02$ {\Large $1$} $15$ {\Large $5$} $40 \implies 15$ {\Large $0$} $02$ {\Large $2$}
$31$ {\Huge $1$} $02$ {\Large $1$} $15$ {\Large $5$} $40$ {\Huge $5$} $53$ {\Large $4$} $40$ {\Large $0$} $15$}
\end{center}
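This route (run the ER substitution, then transform back with Table 1) can be checked end to end; the following self-contained Python sketch (all names ours) reproduces the $w_2^3$ string displayed above:

```python
# Table 1, with None for the forbidden pairs
T1 = [
    [0, 1, 1, None, None, 0],
    [0, 1, 1, None, None, 0],
    [None, 2, 2, 3, 3, None],
    [None, 2, 2, 3, 3, None],
    [5, None, None, 4, 4, 5],
    [5, None, None, 4, 4, 5],
]

def expand_s(s):
    """One ER step of the order-2 generator (see the previous subsection)."""
    out = []
    for d in map(int, s):
        offsets = [1, 0, 5] if d % 2 == 0 else [5, 0, 1]
        out += [(d + t) % 6 for t in offsets]
    return ''.join(map(str, out))

def s_to_w(s):
    """Read Table 1 transposed: consecutive S-digits (b, a) give T1[a][b]."""
    return ''.join(str(T1[int(b)][int(a)]) for a, b in zip(s, s[1:]))

s3 = expand_s(expand_s(expand_s('0')))      # s_2^3: 27 ER edges
w3 = s_to_w(s3)                             # w_2^3: 26 NR edges
assert w3 == '15002231102115540553440015'   # the string displayed above
```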
\subsection{Example for the least compound order}
We have shown how to construct recursive curves with both methods by W-paths and S-paths which are the generator curves of the NR and the ER recursive curves.
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{FX+.pdf}
\centerline{}
\centerline{\bf Figure 10. A bijective pair of $F_4$ generator curves (W- and S-paths)}
\centerline{\bf and their $F_4(3)$ approximations (NR and ER recursive curves).}
\end{figure}
Both of our examples in \emph{Figures 8} and \emph{9} present the single result for $n=2$, which gives the unique bilaterally symmetric permutation of the dark tiles in both methods and keeps this property in all approximations. This curve is known as the \emph{Sierpi\'{n}ski Arrowhead Curve}. The commonly known ER version is easier to describe, but we can see that the name of the curve comes from the arrowhead shape of the NR generator curve. Every NR curve contains the original arrowhead shape at its top.
This recursive curve family $F_n(\infty)$ for $n>2$ can be seen \emph{here for the first time}. See \emph{Figure 10}, where you can compare the corresponding paths and recursive curves in both construction methods.
Corresponding recursive curves map the same fractal, in the same permutation of the dark tiles, in all approximations. This is an example of the first (least) compound order ($n=4$), where the $F_n(\infty)$ fractal pattern first differs from the \emph{Pascal Triangle modulo $n$} pattern.
See \emph{Table 2} for the numbers of the Hamiltonian paths, which appear in [SEH05], and for the numbers of the generator curves of the \emph{generalized Sierpi\'{n}ski Arrowhead Curve}, which \emph{appear here for the first time as a new integer sequence}. We obtained these results with a computer program implementing a backtracking algorithm.
The \emph{generalized Sierpi\'{n}ski Arrowhead Curve} fills the \emph{generalized Sierpi\'{n}ski Gasket} $(F_n(\infty))$ in both methods. Their \emph{Hausdorff dimension} [HD] is:
\smallskip
$D=\log_n{(T_n)}=\log_n{\dfrac{n(n+1)}{2}}$ \qquad where \qquad $\lim_{n\to\infty}\log_n{(T_n)}=2$
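A quick numerical check of the dimension formula (the function name `hausdorff_dim` is ours); for $n=2$ it gives the classical Sierpiński gasket dimension $\log_2 3 \approx 1.585$:

```python
from math import log

def hausdorff_dim(n):
    """D = log_n(T_n) = log_n(n(n+1)/2)."""
    return log(n * (n + 1) / 2) / log(n)

assert abs(hausdorff_dim(2) - log(3) / log(2)) < 1e-12   # log_2 3, n = 2 case
assert all(hausdorff_dim(n) < 2 for n in range(2, 1000))
assert hausdorff_dim(10**9) > 1.96        # slowly approaches 2 from below
```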
\[
\begin{tabular}{|c||c||c||c|}
\hline
\bf
{\small n} & \bf {\small $T_n$} & \bf {\small $H_n$} & \bf {\small $W_n=S_n$ } \\ \hline \hline
\bf 2 & 3 & 1 & 1 \\ \hline
\bf 3 & 6 & 2 & 2 \\ \hline
\bf 4 & 10 & 10 & 4 \\ \hline
\bf 5 & 15 & 92 & 16 \\ \hline
\bf 6 & 21 & 1852 & 68 \\ \hline
\bf 7 & 28 & 78032 & 464 \\ \hline
\bf 8 & 36 & 6846876 & 3828 \\ \hline
\bf 9 & 45 & 1255156712 & 44488 \\ \hline
\end{tabular}
\]
\smallskip
\centerline{\bf \emph{Table 2.} Numbers of the Hamiltonian-paths $(H_n)$,}
\centerline{\bf the W-paths $(W_n)$ and the S-paths $(S_n)$}
\centerline{\bf on the generator pattern $F_n$ consisting of $T_n$ dark tiles.}
\section{Extendability of \emph{Theorem 1} to recursive curves}
In this section we will show why our bijection is also extendable to recursive curves.
Transforming between ER and NR recursive curves usually changes their properties. We show that our recursive curves, unlike others, always keep their self-avoiding property, and all their edges keep the same length after the curves are transformed into each other.
\subsection{Length of the connecting edges}
In [McK94] we can see tiling-paths of ER recursive curves which usually connect tiles facing different ways. In our case this means connections between differently coloured tiles with tiling-paths (edges touch every used tile on only one side). The connecting lengths are the same unit lengths, but by transforming them to connected centroids we get two more connecting lengths: centroids of edge-neighbour tiles are at distance $1/\sqrt{3}$, and centroids of node-neighbour tiles are at distance $2/\sqrt{3}$ from each other. Using S-paths we have chosen appropriate connections: an S-path can connect only dark tiles facing the same way, which ensures the unit length of the connecting edges independently of the transformation between the paths.
\subsection{Self-avoiding property of our recursive curves}
ER curves are based on S-paths. Consider the generator pattern as an $ABC$ triangle, where $A$ is the entering point, $B$ is the exiting point and $C$ is the top of the triangle. An S-path always avoids corner $C$. By extending the \emph{overall grid} for larger approximations ($k>1$), the overall graph consists of $T_n^k$ little tile-like triangular grids. Each little triangle consists of $T_{n+1}$ grid points, but they share common corners. The edges always avoid the $C$ corners of the little grids and use the common corners between them only once, as their exiting nodes ($B$) become the entering nodes ($A$) of different neighbouring tiles.
NR curves are based on W-paths. We extend the inscribed grid to an inscribed graph which consists of $T_n^k$ little separated triangular grids with separated paths. We connect the little triangular grids with the edges of the W-path, without the forbidden turns, which ensures that we stay inside this graph and never use the corners of the little triangular grids twice: a corner cannot be an entering and an exiting point at the same time.
\subsection{Generator curves of the same fractal $F_n(\infty)$}
The bijective pairs of the W-paths and the S-paths as recursive curves fill and touch or pass over the dark tiles in all approximations exactly in the same order. They keep their self-avoiding property and continuously map the same fractal. We can transform them anytime into each other in the same way as the paths with \emph{Table 1}.
They approximate the same fractal $F_n(\infty)$ from inside (NR curve) and from outside (ER curve), and in an arbitrarily large finite $k$-th approximation the distance between the paths becomes arbitrarily small everywhere. Contact points of the ER curve and edges of the NR curve get arbitrarily close to each other, and we can no longer distinguish the little triangles, their centroids and their touching edges.
By avoiding wrong turns in the construction of the recursive curves, and by the other considerations described above, the bijection is also valid for recursive curves.
\section {Transforming the paths into L-system codes}
We can transform our generator paths into the Lindenmayer-system to draw the fractal curves in an easy way.
\subsection{Edge-rewriting codes}
We can transform our W-paths (NR generator curves) into ER strings in the L-system by assigning L-system symbols to the digit-pairs of the $w_n$ string.
The first digit of the pair is an even $(p)$ or an odd $(q)$ number; the assignment then depends on the second digit of the pair:
\begin{multicols}{2}
$p \rightarrow A$
$(p+1) \rightarrow -B$
$(p+2) \mod{6} \rightarrow -B-$
$(p+5) \mod{6} \rightarrow A+$
$q \rightarrow B$
$(q+1) \mod{6} \rightarrow B-$
$(q+4) \mod{6} \rightarrow +A+$
$(q+5) \mod{6} \rightarrow +A$
\end{multicols}
For example, on an $F_4$ generator pattern a possible W-path (supplemented with the leading and ending zero) is $w_4=01502215550$. Transforming the digit pairs into an ER L-system string, we get: $A=-B+A+B--B-AA++A+BBB-$. Let the other rule be its inverse: $B=+A-B-A++A+BB--B-AAA+$.
\[
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
\bf
$\blacksquare$ & {\bf $b=0$} & \bf {1} & \bf {2} & \bf {3} & \bf {4} & \bf {5} \\ \hline \hline
\bf {$a=0$} & A & -B & -B- & $\blacksquare$ & $\blacksquare$ & A+ \\ \hline
\bf 1 & +A & B & B- & $\blacksquare$ & $\blacksquare$ & +A+ \\ \hline
\bf 2 & $\blacksquare$ & A+ & A & -B & -B- & $\blacksquare$ \\ \hline
\bf 3 & $\blacksquare$ & +A+ & +A & B & B- & $\blacksquare$ \\ \hline
\bf 4 & -B- & $\blacksquare$ & $\blacksquare$ & A+ & A & -B \\ \hline
\bf 5 & B- & $\blacksquare$ & $\blacksquare$ & +A+ & +A & B \\ \hline
\end{tabular}
\]
\smallskip
\centerline {\bf \emph{Table 3.} Transformation table for changing the edge-pairs}
\centerline {\bf of the W-paths to an ER string in L-system.}
\bigskip
This W-path (NR generator curve) is shown on the left side of \emph{Figure 10}. For simplicity we can use \emph{Table 3} to transform it to an ER string in L-system; the table is read by rows and columns $(a,b)$, where $a$ is the first and $b$ is the second digit of the pair. The resulting S-path (ER generator curve) is shown on the right side of \emph{Figure 10}.
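As an illustrative sketch (our own transcription of Table 3, not code from the paper; the names TABLE3 and w_to_er are ours), the lookup can be implemented and checked against the $w_4$ example above:

```python
# Sketch (our transcription of Table 3): transform a W-path direction string
# into an ER L-system string by looking up its overlapping digit pairs (a, b).
# Missing keys correspond to the blacked-out cells, i.e. the forbidden turns.
TABLE3 = {
    (0, 0): "A",   (0, 1): "-B",  (0, 2): "-B-", (0, 5): "A+",
    (1, 0): "+A",  (1, 1): "B",   (1, 2): "B-",  (1, 5): "+A+",
    (2, 1): "A+",  (2, 2): "A",   (2, 3): "-B",  (2, 4): "-B-",
    (3, 1): "+A+", (3, 2): "+A",  (3, 3): "B",   (3, 4): "B-",
    (4, 0): "-B-", (4, 3): "A+",  (4, 4): "A",   (4, 5): "-B",
    (5, 0): "B-",  (5, 3): "+A+", (5, 4): "+A",  (5, 5): "B",
}

def w_to_er(w):
    """Concatenate the Table 3 entries of the overlapping digit pairs of w."""
    d = [int(c) for c in w]
    return "".join(TABLE3[(a, b)] for a, b in zip(d, d[1:]))

print(w_to_er("01502215550"))   # -B+A+B--B-AA++A+BBB-
```

Running it on $w_4=01502215550$ reproduces the ER rule string for $A$ quoted above.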
We can make various fractal patterns by changing the inverse rule ($B=\ldots$) to another one, for example the palindrome of the $B$ string, or the inverse of any other path of the same order $n$.
\subsection{Node-rewriting codes}
To get the NR string in L-system we only have to insert $F$ symbols between the symbol groups and change all $A$ symbols to $X$ and all $B$ symbols to $Y$. In an L-system, $A$ and $B$ are drawing instructions like $F$ (they draw the edges), while $X$ and $Y$ are only variables; we need them merely to substitute the nodes which they symbolize.
Finally we get: $X=-YF+X+FY-F-Y-FXFX+F+X+FYFYFY-$ and its inverse $Y=+XF-Y-FX+F+X+FYFY-F-Y-FXFXFX+$ as strings in L-system which code the NR recursive curve.
For illustration see the left side of \emph{Figure 10}.
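The group-wise renaming and $F$-insertion described above can be sketched as follows (the group list is the sequence of Table 3 entries of the $w_4$ digit pairs, rebuilt by hand here; the function name is ours):

```python
# Sketch (our naming): build the NR rule from the ER symbol groups by renaming
# the drawing symbols A -> X, B -> Y and inserting the draw-forward symbol F
# between consecutive groups.  The groups below are the Table 3 entries of the
# digit pairs of the w_4 example.
ER_GROUPS = ["-B", "+A+", "B-", "-B-", "A", "A+", "+A+", "B", "B", "B-"]

def er_groups_to_nr(groups):
    renamed = [g.replace("A", "X").replace("B", "Y") for g in groups]
    return "F".join(renamed)

print(er_groups_to_nr(ER_GROUPS))   # -YF+X+FY-F-Y-FXFX+F+X+FYFYFY-
```

This reproduces the $X$ rule quoted above; applying the same renaming to the inverse groups gives the $Y$ rule.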
\subsection{Supplement}
L-system codes can be visualized easily with the free vector-graphics application Inkscape [IS], which has a built-in L-system extension, or with an online L-system application [OL]. In this paper we discussed recursive curves with simple axioms, in contrast to, for example, the Moore curve, but it is also possible to make cycles from them or to combine them with compound axioms.
For further information and experiments with recursive curves, beyond the sources already mentioned, we recommend the beautiful book of Jeffrey Ventrella [V12] and the paper [A16]. Gosper-curve variants [FSN01] are also related.
\section* {Summary}
We have observed the properties of Hamiltonian paths and tiling paths on the triangular grid and their transformability into each other. We have found and proved the bijective relation of their subsets in \emph{Section 1}.
In \emph{Sections 2} and \emph{3} we have used our bijectively related asymmetric paths as generator curves, by which we have constructed the \emph{generalized Sierpi\'{n}ski Arrowhead Curve} in larger graphs with both the node-rewriting and the edge-rewriting method. The cardinality of the generator curves specifies a new integer sequence.
In \emph{Section 4} we have presented a transformation table to change the direction code strings of the paths into L-system strings to draw the fractals.
Our previous papers complete this field with further details; see the Google Scholar site [KA].
\section* {Acknowledgement}
We would like to express our thanks to \emph{Mih\'aly Hujter}, professor of mathematics at TU Budapest, to \emph{Guszt\'av Ga\'al}, mathematician at E\"{o}tv\"{o}s Lor\'and University (ELTE) Budapest, and to the inspiring community of the \emph{Mathematics Museum in Budapest}.
\section* {References}
\smallskip \noindent [McK94] McKenna, Douglas M.: \emph{SquaRecurves, E-Tours, Eddies and Frenzies: Basic Families of Peano Curves on the Square Grid}, In: Guy, Richard K., Woodrow, Robert E.: \emph{The Lighter Side of Mathematics: Proceedings of the Eugene Strens Memorial Conference on Recreational Mathematics and its History}, pp. 49--73, Mathematical Association of America, 1994.
\smallskip
\smallskip \noindent [PL90] Prusinkiewicz, P. and Lindenmayer, A.: \emph{The algorithmic beauty of plants}, Springer, 1990.
\smallskip
\smallskip \noindent [PLF91] Prusinkiewicz, P., Lindenmayer, A., and Fracchia, F. D.: \emph{Synthesis of space-filling curves on the square grid}, In: H.-O. Peitgen, J. M. Henriques and L. F. Penedo, eds.: \emph{Fractals in the fundamental and applied sciences}, pp. 341--366, North-Holland, 1991.
\smallskip
\smallskip \noindent [Sa94] Sagan, H.: \emph{Space-Filling Curves}, Springer, 1994.
\smallskip
\smallskip \noindent [M82] Mandelbrot, B. B.: \emph{The Fractal Geometry of Nature}, W. H. Freeman and Company, New York, 1982.
\smallskip
\smallskip \noindent [SEH05] Staji\'c, J., Elezovi\'c-Had\v{z}i\'c, S.: \emph{Hamiltonian walks on Sierpinski and n-simplex fractals}, \url{https://arxiv.org/abs/cond-mat/0310777}, 2005.
\smallskip
\medskip \noindent [HD] \emph{List of fractals by Hausdorff dimension}, \url{https://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension}
\smallskip
\smallskip \noindent [IS] \emph{Inkscape vector graphic application with built in L-system}, \\
\url{https://inkscape.org/en/}
\smallskip
\smallskip \noindent [OL] \emph{Online Lindenmayer-system application}, \\
\url{http://www.kevs3d.co.uk/dev/lsystems/}
\smallskip
\smallskip \noindent [V12] Ventrella, J.: \emph{Brainfilling Curves --- a Fractal Bestiary}, \\
\url{http://www.brainfillingcurves.com/}, 2012.
\smallskip
\smallskip \noindent [A16] Arndt, J.: \emph{Plane-filling curves on all uniform grids}, \\
\url{https://arxiv.org/pdf/1607.02433.pdf}, 2016.
\smallskip
\smallskip \noindent [FSN01] Fukuda, H., Shimizu, M., Nakamura, G.: \emph{New Gosper Space Filling Curves}, In: Proceedings of the International Conference on Computer Graphics and Imaging (CGIM2001) \\
\url{http://kilin.clas.kitasato-u.ac.jp/museum/gosperex/343-024.pdf}, 2001.
\smallskip \noindent [KA] \emph{Google Scholar citations of Kaszanyitzky, A.},
\\ \url{https://scholar.google.hu/citations?user=i5daxSoAAAAJ}
\smallskip
\end{document}
% https://arxiv.org/abs/1811.09988
\title{Spectral form factor and semi-circle law in the time direction}
\begin{abstract}
We study the time derivative of the connected part of the spectral form factor, which we call the slope of ramp, in the Gaussian matrix model. We find a closed formula for the slope of ramp at finite $N$ with non-zero inverse temperature. Using this exact result, we confirm numerically that the slope of ramp exhibits a semi-circle law as a function of time.
\end{abstract}
\section{Introduction \label{sec:intro}}
After the seminal work \cite{Garcia-Garcia:2016mno,Cotler:2016fpe},
the spectral form factor is intensively studied as a diagnostic
of the quantum chaotic behavior of the Sachdev-Ye-Kitaev (SYK) model \cite{KitaevTalks,Sachdev,Maldacena:2016hyu}, which is a solvable example of the
holographic model of a certain two-dimensional black hole.
At late times, the spectral form factor
of SYK model exhibits a structure of the so-called \textit{ramp}
and \textit{plateau}, and
it is well-approximated by the behavior of the Gaussian Unitary Ensemble
(GUE) random matrix model
when the number of fermions mod 8 is 2 or 6 \cite{you}\footnote{See also \cite{Gharibyan:2018jrp,Hunter-Jones:2017crg,Li:2017hdt,Kanazawa:2017dpd,Saad:2018bqo,Garcia-Garcia:2018ruf,Nosaka:2018iat,Garcia-Garcia:2017bkg} for the study of spectral form factor
in SYK model and its supersymmetric generalizations.}.
In this paper, we will consider the
spectral form factor $g(\beta,t)$ in GUE matrix model
with non-zero inverse temperature $\beta$.
We will show that
$g(\beta,t)$
is written exactly as a trace of
an $N\times N$ matrix $A(z)$ defined in \eqref{eq:Amat}.
$g(\beta,t)$ consists of two parts: the disconnected part
$g_{\text{disc}}(\beta,t)$ \eqref{eq:gdisc} and the connected part
$g_{\text{conn}}(\beta,t)$ \eqref{eq:gconn}.
In Figure \ref{fig:gtotal}, we show the plot of this exact $g(\beta,t)$
for $\beta=5$ with the matrix size $N=500$.
As we can see from Figure \ref{fig:gtotal},
after the initial decay described by
the disconnected part $g_{\text{disc}}(\beta,t)$,
$g(\beta,t)$ has the structure of ramp
and plateau at late times. This late time behavior
comes from the connected part
$g_{\text{conn}}(\beta,t)$ and it was studied
extensively in the literature (see e.g. \cite{hikami,Liu:2018hlr} and references therein).
\footnote{The spectral form factor was first introduced in
\cite{jost} as a Fourier transform of the two-level correlation
function,
and it was observed that the spectral form factor exhibits a dip structure,
which was originally called the ``correlation hole'' in \cite{jost}.}
The ramp is closely related to
the short-range correlation of eigenvalues described by
the so-called sine kernel, and if we focus on the contribution from
a small window around some fixed eigenvalue, the ramp grows linearly in $t$.
However, since $g(\beta,t)$ is defined by integrating over
the whole range of the eigenvalue distribution, the actual ramp is not a linear function of $t$.
In this paper, we will study the non-linearity of ramp using the exact result at finite $N$.
To see the deviation from the linear behavior, it is natural
to consider the time derivative of $g_{\text{conn}}(\beta,t)$, which we will call the
\textit{slope of ramp}.
If the ramp were a linear function of $t$, the slope of ramp would be a constant.
However, the actual slope of ramp is not constant in time.
It turns out that the slope of ramp obeys the semi-circle law as a function of time.
This is a direct consequence of the semi-circle law of eigenvalue distribution,
of course, but there is an interesting twist:
the slope of ramp corresponds to the eigenvalues and the time
corresponds to
the eigenvalue density (see Figure \ref{fig:circle} for the details
of this correspondence). In other words,
the eigenvalue density manifests itself as the time direction
in the graph of the slope of ramp.
\begin{figure}[thb]
\centering
\includegraphics[width=10cm]{gtotal.pdf}
\caption{Plot of the exact spectral form factor $g(\beta,t)$
in GUE for $\beta=5, N=500$.
}
\label{fig:gtotal}
\end{figure}
This paper is organized as follows.
In section \ref{sec:exact}, we write down the exact
closed form expression of the slope of ramp
$\partial_t g_{\text{conn}}(\beta,t)$ at finite $N$.
In section \ref{sec:largeN}, we compute the late time behavior of
$g_{\text{conn}}(\beta,t)$ in the large $N$
limit. We point out that after an appropriate change of variable
\eqref{eq:sbt}, the slope of ramp obeys the semi-circle law as a function of time.
In section \ref{sec:plot}, we plot the slope of ramp as a function of time
using our exact result at finite $N$ for both $\beta=0$ and $\beta\ne0$ cases,
and confirm that the slope of ramp exhibits the semi-circle law.
In section \ref{sec:smallt}, we consider the slope of ramp in the small $t$ regime.
Finally, we conclude in section \ref{sec:conclusion}.
In Appendix \ref{app:mat}, we explain how to compute
$\Tr A(z)$ and $\Tr A(z_1)A(z_2)$.
\section{Exact slope of ramp at finite $N$ \label{sec:exact}}
In this paper we consider the spectral form factor in Gaussian
matrix model defined by
\begin{equation}
\begin{aligned}
g(\beta,t)=\Bigl\langle \Tr e^{-(\beta+\mathrm{i} t)H}
\Tr e^{-(\beta-\mathrm{i} t)H}\Bigr\rangle
&=\frac{\int dHe^{-\frac{N}{2}\Tr H^2}\Tr e^{-(\beta+\mathrm{i} t)H}
\Tr e^{-(\beta-\mathrm{i} t)H}}{\int dHe^{-\frac{N}{2}\Tr H^2}},
\end{aligned}
\label{eq:def-g}
\end{equation}
where the integral is over the $N\times N$ Hermitian matrix $H$.
By definition, $g(\beta,t)$ is an even function of $t$. Moreover,
since the Gaussian measure is invariant under $H\to -H$,
$g(\beta,t)$ is independent of the sign
of $\beta$.
In the following we will assume that $\beta$ and $t$
are both positive without loss of generality:
\begin{equation}
\begin{aligned}
\beta\geq0,\quad t\geq0.
\end{aligned}
\end{equation}
In the normalization of Gaussian measure
in \eqref{eq:def-g},
the eigenvalue $\mu$ of matrix $H$ is distributed along the cut $\mu\in[-2,2]$
in the large $N$ limit,
and the eigenvalue density $\rho(\mu)$ is given by the
Wigner semi-circle law
\begin{equation}
\begin{aligned}
\rho(\mu)=\frac{1}{2\pi}\rt{4-\mu^2} .
\end{aligned}
\label{eq:wigner}
\end{equation}
As pointed out in \cite{delCampo:2017bzr},
$g(\beta,t)$ in \eqref{eq:def-g}
is formally equivalent to the correlator of 1/2 BPS Wilson loops in
4d $\mathcal{N}=4$ Super Yang-Mills (SYM) theory,
which is also given by the Gaussian matrix model via
the supersymmetric localization \cite{Erickson:2000af,Drukker:2000rr,Pestun:2007rz}.
Thus, we can immediately find the exact form of $g(\beta,t)$
by borrowing the known result of $\mathcal{N}=4$ SYM
in \cite{Drukker:2000rr,Kawamoto:2008gp,Okuyama:2018aij}.
To do this, it is convenient to rescale the matrix
\begin{equation}
\begin{aligned}
H=\rt{\frac{2}{N}}M,
\end{aligned}
\end{equation}
so that the measure becomes $\int dM e^{-\Tr M^2}$.
In this normalization, $g(\beta,t)$ is written as
\begin{equation}
\begin{aligned}
g(\beta,t)=\Bigl\langle \Tr e^{\frac{\beta+\mathrm{i} t}{\rt{N}}\rt{2}M}
\Tr e^{\frac{\beta-\mathrm{i} t}{\rt{N}}\rt{2}M}\Bigr\rangle.
\end{aligned}
\label{eq:gmat}
\end{equation}
On the other hand, the correlator of 1/2 BPS Wilson loops with winding number
$k_i$ is given by \cite{Okuyama:2018aij}
\begin{equation}
\begin{aligned}
\left\langle\prod_i \Tr e^{k_i\rt{\frac{\lambda}{4N}}\rt{2}M}\right\rangle,
\end{aligned}
\label{eq:Wmat}
\end{equation}
where $\lambda$ denotes the 't Hooft coupling of $\mathcal{N}=4$ SYM.
Comparing \eqref{eq:gmat} and \eqref{eq:Wmat}, we find a dictionary between
Wilson loops in $\mathcal{N}=4$ SYM and the spectral form factor
\begin{equation}
\begin{aligned}
k_i\rt{\lambda}~\leftrightarrow~2(\beta\pm\mathrm{i} t).
\end{aligned}
\end{equation}
As shown in \cite{Fiol:2013hna,Okuyama:2018aij},
the correlator of $\Tr e^{z\rt{2}M}$ is written in terms of the $N\times N$
symmetric matrix
$A(z)$ defined by
\begin{equation}
\begin{aligned}
A(z)_{i,j}=\rt{\frac{i!}{j!}}e^{\frac{z^2}{2}}z^{j-i}
L_i^{j-i}(-z^2), \quad(i,j=0,\cdots,N-1),
\end{aligned}
\label{eq:Amat}
\end{equation}
where $L_n^\alpha(x)$ denotes the associated Laguerre polynomial.
The one-point function is given by
the trace of $A(z)$ (see Appendix \ref{app:mat} for a derivation of this result)
\begin{equation}
\begin{aligned}
\Bigl\langle \Tr e^{z\rt{2}M}\Bigr\rangle
=\Tr A(z)=e^{\frac{z^2}{2}}L_{N-1}^1(-z^2).
\end{aligned}
\end{equation}
The spectral form factor $g(\beta,t)$ in \eqref{eq:gmat}
is a two-point function
of $\Tr e^{z\rt{2}M}$ and $\Tr e^{\b{z}\rt{2}M}$ with
\begin{equation}
\begin{aligned}
z=\frac{\beta+\mathrm{i} t}{\rt{N}},\quad
\b{z}=\frac{\beta-\mathrm{i} t}{\rt{N}}.
\end{aligned}
\label{eq:z-bt}
\end{equation}
One can naturally decompose $g(\beta,t)$ into the disconnected part
$g_{\text{disc}}(\beta,t)$
and the connected part $g_{\text{conn}}(\beta,t)$
\begin{equation}
\begin{aligned}
g(\beta,t)= g_{\text{disc}}(\beta,t)+ g_{\text{conn}}(\beta,t).
\end{aligned}
\end{equation}
The disconnected part is given by a product of one-point functions
\begin{equation}
\begin{aligned}
g_{\text{disc}}(\beta,t)=\Tr A(z)\Tr A(\b{z})
=e^{\frac{z^2+\b{z}^2}{2}}L_{N-1}^1(-z^2)L_{N-1}^1(-\b{z}^2),
\end{aligned}
\label{eq:gdisc}
\end{equation}
where $z$ and $\b{z}$ are defined in \eqref{eq:z-bt}.
This part is responsible for the early time decay of $g(\beta,t)$,
which we will not consider in this paper.
The late time behavior of $g(\beta,t)$, the so-called ramp and plateau,
comes from the connected part. Using the result in
\cite{Drukker:2000rr,Kawamoto:2008gp,Okuyama:2018aij},
$g_{\text{conn}}(\beta,t)$ is written as
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)&=\Tr \Bigl[A(z+\b{z})-A(z)A(\b{z})\Bigr].
\end{aligned}
\label{eq:gconn}
\end{equation}
Since $z+\b{z}=\frac{2\beta}{\rt{N}}$,
the first term of \eqref{eq:gconn} is independent of time
and it sets the value of plateau
\begin{equation}
\begin{aligned}
g_{\text{plateau}}(\beta)&=\Tr A(z+\b{z})
=e^{\frac{2\beta^2}{N}}L_{N-1}^1\Bigl(-\frac{4\beta^2}{N}\Bigr).
\end{aligned}
\end{equation}
Using the result of Wilson loop in $\mathcal{N}=4$ SYM \cite{Erickson:2000af},
the large $N$ limit of $g_{\text{plateau}}(\beta)$
with fixed $\beta$ is given by\footnote{The initial value of the
disconnected part $g_{\text{disc}}(\beta,t=0)$ is order $N^2$
in the large $N$ limit
\begin{equation}
\begin{aligned}
g_{\text{disc}}(\beta,t=0)\approx N^2\frac{I_1(2\beta)^2}{\beta^2}.
\end{aligned}
\end{equation}
Note that this is larger than the value of plateau \eqref{eq:plateau-value}
by a factor of $N$.
}
\begin{equation}
\begin{aligned}
g_{\text{plateau}}(\beta)\approx N\frac{I_1(4\beta)}{2\beta},
\end{aligned}
\label{eq:plateau-value}
\end{equation}
where $I_n(x)$ denotes the modified Bessel function of the first kind.
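As a numerical sketch (ours, not from the paper), one can check the large $N$ approximation \eqref{eq:plateau-value} against the exact finite $N$ plateau value, evaluating the Laguerre polynomial by its three-term recurrence and $I_1$ by its power series:

```python
# Numerical sketch (ours): compare the exact finite-N plateau value
# e^{2 beta^2/N} L^1_{N-1}(-4 beta^2/N) with its large-N limit N I_1(4 beta)/(2 beta).
from math import exp, factorial

def genlag(n, alpha, x):
    """Generalized Laguerre polynomial L_n^alpha(x) via the 3-term recurrence."""
    if n == 0:
        return 1.0
    lm, l = 1.0, 1.0 + alpha - x
    for k in range(1, n):
        lm, l = l, ((2*k + 1 + alpha - x)*l - (k + alpha)*lm) / (k + 1)
    return l

def besseli(nu, x, terms=40):
    """Modified Bessel function I_nu(x) (integer nu) from its power series."""
    return sum((x/2.0)**(2*m + nu) / (factorial(m)*factorial(m + nu))
               for m in range(terms))

N, beta = 500, 1.0
exact = exp(2*beta**2/N) * genlag(N - 1, 1, -4*beta**2/N)
approx = N * besseli(1, 4*beta) / (2*beta)
print(exact, approx)   # the two values agree closely at this N
```

The subleading corrections are suppressed by powers of $1/N$ at fixed $\beta$, so the two values are already very close at $N=500$.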
The non-trivial time dependence comes from the second term of \eqref{eq:gconn}
\begin{equation}
\begin{aligned}
g_{\text{ramp}}(\beta,t)&=-\Tr \Bigl[A(z)A(\b{z})\Bigr].
\end{aligned}
\end{equation}
In what follows, we will consider the time derivative of $g_{\text{ramp}}(\beta,t)$,
which we call the \textit{slope of ramp}.
Since $g_{\text{plateau}}(\beta)$ is independent of time,
the slope of ramp is equal to the time derivative of the connected part
of spectral form factor
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{ramp}}}{\partial t}(\beta,t)=\frac{\partial g_{\text{conn}}}{\partial t}(\beta,t).
\end{aligned}
\end{equation}
As explained in Appendix \ref{app:mat},
we can write down a closed form expression of the slope of ramp
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{conn}}}{\partial t}(\beta,t)&=
\frac{N}{\beta}e^{\frac{z^2+\b{z}^2}{2}}\text{Im}
\Bigl[L_N(-z^2)L_{N-1}(-\b{z}^2)\Bigr] .
\end{aligned}
\label{eq:slope-bt}
\end{equation}
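The closed formula \eqref{eq:slope-bt} can be tested at small $N$ against a numerical $t$-derivative of the connected part built directly from the matrix \eqref{eq:Amat}. The following sketch (ours; it fills only the $j\geq i$ entries and uses the symmetry of $A$) performs this check:

```python
# Sketch (ours): test the closed formula for the slope of ramp at small N
# against a central-difference t-derivative of g_conn computed directly from
# the matrix A(z) of eq. (eq:Amat).  A(z) is symmetric, so only j >= i
# entries are filled and then mirrored.
import numpy as np
from math import factorial

def genlag(n, alpha, x):
    """L_n^alpha(x) by the three-term recurrence (works for complex x)."""
    if n == 0:
        return 1.0 + 0j
    lm, l = 1.0 + 0j, 1.0 + alpha - x
    for k in range(1, n):
        lm, l = l, ((2*k + 1 + alpha - x)*l - (k + alpha)*lm) / (k + 1)
    return l

def Amat(z, N):
    M = np.zeros((N, N), dtype=complex)
    for i in range(N):
        for j in range(i, N):
            M[i, j] = (np.sqrt(factorial(i) / factorial(j)) * np.exp(z**2 / 2)
                       * z**(j - i) * genlag(i, j - i, -z**2))
            M[j, i] = M[i, j]
    return M

def g_conn(beta, t, N):
    z, zb = (beta + 1j*t)/np.sqrt(N), (beta - 1j*t)/np.sqrt(N)
    return np.trace(Amat(z + zb, N) - Amat(z, N) @ Amat(zb, N)).real

def slope_closed(beta, t, N):
    z, zb = (beta + 1j*t)/np.sqrt(N), (beta - 1j*t)/np.sqrt(N)
    return (N/beta * (np.exp((z**2 + zb**2)/2)).real
            * (genlag(N, 0, -z**2) * genlag(N - 1, 0, -zb**2)).imag)

beta, t, N, h = 0.3, 0.7, 4, 1e-5
numeric = (g_conn(beta, t + h, N) - g_conn(beta, t - h, N)) / (2*h)
print(numeric, slope_closed(beta, t, N))   # the two values agree
```

At $N=1$ both expressions reduce analytically to $2t\,e^{\beta^2-t^2}$, which is a convenient sanity check of the conventions.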
By taking the limit $\beta\to0$ of \eqref{eq:slope-bt}, the slope of ramp for $\beta=0$ becomes
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{conn}}}{\partial t}(0,t)=
2te^{-\frac{t^2}{N}}\Biggl[L_{N-1}\Bigl(\frac{t^2}{N}\Bigr)
L_{N-1}^1\Bigl(\frac{t^2}{N}\Bigr)-L_{N}\Bigl(\frac{t^2}{N}\Bigr)
L_{N-2}^1\Bigl(\frac{t^2}{N}\Bigr)\Biggr] .
\end{aligned}
\label{eq:slope-0}
\end{equation}
We are interested in the large $N$ limit of the slope of ramp
\eqref{eq:slope-bt} and
\eqref{eq:slope-0}.
When $\beta=0$, as pointed out in \cite{hikami},
$\partial_t g_{\text{conn}}(0,t)$ in \eqref{eq:slope-0}
happens to be equal to the eigenvalue
density in the Wishart-Laguerre ensemble, which is known to
obey the semi-circle law in the large $N$ limit.\footnote{See
eq.(3.16) and eq.(3.30) in \cite{Brezin:1995dp} (see also \cite{Verbaarschot:1993pm}).
The eigenvalue density of Wishart-Laguerre ensemble
$\rho(\mu)=\mu\tilde{\rho}(\mu^2)$ in \cite{Brezin:1995dp} is equal to
$\frac{1}{2}\partial_t g_{\text{conn}}(0,t)$ under the identification $\mu=t/N$;
eq.(3.30) in \cite{Brezin:1995dp} corresponds to the exact finite $N$ result
of $\partial_t g_{\text{conn}}(0,t)$ in \eqref{eq:slope-0}, while eq.(3.16)
in \cite{Brezin:1995dp} represents its large $N$ limit.}
However, the large $N$ limit of $\partial_t g_{\text{conn}}(\beta,t)$
with non-zero $\beta$ is not well studied in the literature.
In section \ref{sec:largeN},
we will numerically study the large $N$ behavior of the exact result \eqref{eq:slope-bt} and
\eqref{eq:slope-0}.
Before doing this numerical study, in the next section we will review the
analytic derivation of the large $N$ behavior of ramp
in \cite{hikami,Liu:2018hlr}.
\section{Large $N$ limit of the slope of ramp \label{sec:largeN}}
The large $N$ limit of $g_{\text{conn}}(\beta,t)$
is written in terms of the connected part of the two-level
correlation function $\rho^{(2)}(\mu_1,\mu_2)$
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)&=\int d\mu_1d\mu_2\rho^{(2)}(\mu_1,\mu_2)
e^{(\beta+\mathrm{i} t)\mu_1}e^{(\beta-\mathrm{i} t)\mu_2}\\
&=\int d\mu_1d\mu_2\rho^{(2)}(\mu_1,\mu_2)e^{\mathrm{i} t(\mu_1-\mu_2)+\beta(\mu_1+\mu_2)}.
\end{aligned}
\end{equation}
At late times $t\gg1$, the dominant contribution comes from
the region $|\mu_1-\mu_2|\ll1$.
Thus we can use the universal form of the short-range correlation, known as the \textit{sine kernel}
(see e.g. \cite{Mehta})
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t) \approx
-N^2\int d\mu_1d\mu_2\left[ \frac{\sin \bigl[N\pi \rho\bigl(\frac{\mu_1+\mu_2}{2}\bigr) (\mu_1-\mu_2)\bigr]}
{N\pi (\mu_1-\mu_2)}\right]^2 e^{\mathrm{i} t(\mu_1-\mu_2)+\beta (\mu_1+\mu_2)}.
\end{aligned}
\label{eq:gconn-sine}
\end{equation}
Introducing the variables $\omega$ and $u$ by
\begin{equation}
\begin{aligned}
\omega=2N(\mu_1-\mu_2),\quad
u=\frac{\mu_1+\mu_2}{4},
\end{aligned}
\end{equation}
\eqref{eq:gconn-sine} is rewritten as
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t) \approx
-\frac{4N}{\pi^2}\int dud\omega \frac{\sin^2 \frac{\pi}{2} \rho(2u)\omega}{\omega^2} e^{\mathrm{i} \omega\tau+4\beta u},
\end{aligned}
\label{eq:g-omint}
\end{equation}
where $\tau$ is given by
\begin{equation}
\begin{aligned}
\tau=\frac{t}{2N} .
\end{aligned}
\label{eq:tau-def}
\end{equation}
In the large $N$ limit, the integration region of $\omega$ can be extended to $\omega\in(-\infty,\infty)$,
and the $\omega$-integral is explicitly evaluated as \cite{hikami}
\begin{equation}
\begin{aligned}
\int_{-\infty}^\infty d\omega\frac{\sin^2 \frac{\pi}{2} \rho(2u)\omega}{\omega^2}e^{\mathrm{i} \omega\tau}
=\left\{
\begin{aligned}
&\frac{\pi}{2} \big(\pi\rho(2u)-\tau\big),\quad & (\pi\rho(2u)>\tau),\\
&0, \quad & (\pi\rho(2u)<\tau).
\end{aligned}
\right.
\end{aligned}
\label{eq:relu}
\end{equation}
The condition $\pi\rho(2u)>\tau$ limits the range of $u$-integration
to $u\in[-u_\tau,u_\tau]$, where $u_\tau$ is determined by $\pi\rho(2u_\tau)=\tau$.
From the explicit form of eigenvalue density in \eqref{eq:wigner},
we find
\begin{equation}
\begin{aligned}
\pi\rho(2u_\tau)= \rt{1-u_\tau^2}=\tau,
\end{aligned}
\label{eq:u-tau}
\end{equation}
and $u_\tau$ is given by
\begin{equation}
\begin{aligned}
u_\tau=\rt{1-\tau^2}.
\end{aligned}
\label{eq:utau-sqrt}
\end{equation}
Since the maximal value of $\pi\rho(2u)$ is one,
$\tau=1$ is the critical value at which
the behavior of $g_{\text{conn}}(\beta,t)$ changes discontinuously from ramp to plateau.
In the following, we will
consider the ramp regime $\tau<1$.
When $\tau<1$, plugging \eqref{eq:relu}
into \eqref{eq:g-omint} we find that $g_{\text{conn}}(\beta,t)$ is written as
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)=\frac{2N}{\pi}\int_{-u_\tau}^{u_\tau}du \,e^{4\beta u}\bigl(
\tau-\pi \rho(2u)\bigr).
\end{aligned}
\label{eq:gconn-uint}
\end{equation}
Let us consider the time derivative of
$g_{\text{conn}}(\beta,t)$ in \eqref{eq:gconn-uint}.
The $t$-derivative of the boundary term $\pm u_\tau$ vanishes
due to the condition \eqref{eq:u-tau}.
Thus, the $t$-derivative of \eqref{eq:gconn-uint} comes only from the
derivative of integrand
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{conn}}}{\partial t}(\beta,t)&=\frac{2N}{\pi}\int_{-u_\tau}^{u_\tau}du\, e^{4\beta u}\frac{\partial\tau}{\partial t}=
\frac{1}{\pi}\int_{-u_\tau}^{u_\tau}du\, e^{4\beta u}=
\frac{\sinh 4\beta u_\tau}{2\pi\beta}.
\end{aligned}
\label{eq:delg-sinh}
\end{equation}
Let us take a closer look at the case of $\beta=0$.
By setting $\beta=0$ in \eqref{eq:delg-sinh}, one can see
that $\partial_t g_{\text{conn}}(0,t)$ is proportional
to $u_\tau$
\begin{equation}
\begin{aligned}
\frac{\partial g_{\text{conn}}}{\partial t}(0,t)=\frac{2}{\pi}u_\tau.
\end{aligned}
\end{equation}
Introducing the rescaled slope of ramp $s(0,t)$ by
\begin{equation}
\begin{aligned}
s(0,t):=\frac{\pi}{2}\frac{\partial g_{\text{conn}}}{\partial t}(0,t)=u_\tau,
\end{aligned}
\label{eq:s0-def}
\end{equation}
it follows from \eqref{eq:utau-sqrt} that $s(0,t)$ obeys the semi-circle law
\begin{equation}
\begin{aligned}
s(0,t)^2+\tau^2=1.
\end{aligned}
\label{eq:st-circle}
\end{equation}
\begin{figure}[thb]
\centering
\begin{tikzpicture}
\draw[->,thick] (-5.5,0)--(5.5,0);
\draw[->,thick] (0,-0.5)--(0,5.8);
\coordinate (u) at (5.5,0) node at (u) [right] {$u$};
\coordinate (r) at (0,5.8) node at (r) [above] {$\pi\rho(2u)$};
\draw[thick,blue] (5,0) arc (0:180:5);
\coordinate (up) at (4,0) node at (up) [below] {$u_\tau$};
\coordinate (um) at (-4,0) node at (um) [below] {$-u_\tau$};
\coordinate (O) at (0,-0.22) node at (O) [left]{$0$};
\coordinate (t) at (0,3.22) node at (t) [left] {$\tau$};
\coordinate (s) at (2,3) node at (s) [below] {$s(\beta,t)$};
\coordinate (c1) at (5,0) node at (c1) [below] {$1$};
\coordinate (c2) at (-5,0) node at (c2) [below] {$-1$};
\coordinate (c3) at (0,5.22) node at (c3) [left] {$1$};
\draw[thick,red, dotted] (-4,3)--(0,3);
\draw[<->,thick,red] (4,3)--(0,3);
\draw[thick,dashed] (-4,3)--(-4,0);
\draw[thick,dashed] (4,3)--(4,0);
\end{tikzpicture}
\caption{This figure shows the interpretation of $\tau$ and $s(\beta,t)$
in the eigenvalue distribution. The blue semi-circle is the graph of eigenvalue density $\pi\rho(2u)=\rt{1-u^2}$. The time slice $\pi\rho(2u)=\tau$ is represented by
the horizontal red line. The \textit{slope of ramp} $s(\beta,t)=u_\tau$
corresponds to the length of the solid red line.}
\label{fig:circle}
\end{figure}
When $\beta\ne0$,
one can similarly define the quantity $s(\beta,t)$
by applying
the inverse function of sinh to $\partial_t g_{\text{conn}}$ in \eqref{eq:delg-sinh}:
\begin{equation}
\begin{aligned}
s(\beta,t):=\frac{1}{4\beta}\text{arcsinh}\left(2\pi\beta \frac{\partial g_{\text{conn}}}{\partial t}(\beta,t)\right)
=u_\tau.
\end{aligned}
\label{eq:sbt}
\end{equation}
Again, from \eqref{eq:utau-sqrt} it follows that
$s(\beta,t)$ obeys the semi-circle law
\begin{equation}
\begin{aligned}
s(\beta,t)^2+\tau^2=1 .
\end{aligned}
\label{eq:sb-circle}
\end{equation}
In the rest of this paper, we will use the name ``slope of ramp''
for both $\partial_t g_{\text{conn}}(\beta,t)$ and $s(\beta,t)$ interchangeably.
In Figure \ref{fig:circle}, we show the interpretation of $s(\beta,t)$
in the Wigner semi-circle distribution.
Here we comment on some features of this figure:
\begin{itemize}
\item The time $\tau$ corresponds to the vertical axis in Figure \ref{fig:circle}.
Namely, $\tau$ probes the value of eigenvalue density (see \eqref{eq:u-tau}).
\item The \textit{slope of ramp} $s(\beta,t)$ in \eqref{eq:sbt}
corresponds to the horizontal direction in Figure \ref{fig:circle}.
In other words, $s(\beta,t)$ plays the role of eigenvalue.
\item The point $(s(\beta,t),\tau)$ lies on the unit semi-circle \eqref{eq:sb-circle}.
\end{itemize}
Before closing this section, we note in passing that
the large $N$ limit of $g_{\text{conn}}(\beta,t)$ is easily obtained by integrating
$\partial_t g_{\text{conn}}$ in \eqref{eq:delg-sinh}
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)
=g_{\text{conn}}(\beta,0)+2N\int_0^\tau d\tau' \frac{\sinh4\beta\rt{1-\tau'^2}}{2\pi \beta}.
\end{aligned}
\end{equation}
After a change of variable $\tau=\sin\th$,
this integral can be performed by using the relation
\begin{equation}
\begin{aligned}
\sinh(4\beta\cos\th)=2\sum_{n=1}^\infty I_{2n-1}(4\beta)\cos(2n-1)\th.
\end{aligned}
\end{equation}
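This expansion is the odd part of the standard generating series $e^{x\cos\th}=I_0(x)+2\sum_{k\geq1}I_k(x)\cos k\th$; a quick numerical sketch (ours) checks it, with $I_k$ computed from its power series:

```python
# Numerical sketch (ours): check the expansion
#   sinh(4 beta cos th) = 2 sum_{n>=1} I_{2n-1}(4 beta) cos((2n-1) th),
# truncated at n = 30, with I_k evaluated from its power series.
from math import cos, sinh, factorial

def besseli(k, x, terms=40):
    """Modified Bessel function I_k(x) (integer k) from its power series."""
    return sum((x/2.0)**(2*m + k) / (factorial(m)*factorial(m + k))
               for m in range(terms))

beta, th = 0.8, 0.6
lhs = sinh(4*beta*cos(th))
rhs = 2*sum(besseli(2*n - 1, 4*beta)*cos((2*n - 1)*th) for n in range(1, 31))
print(lhs, rhs)   # agree to machine precision
```

The truncation error is negligible here because $I_k(4\beta)$ decays factorially in $k$ at fixed argument.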
Then we find
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,t)=
g_{\text{conn}}(\beta,0)+
\frac{N}{\pi\beta}\left[
I_1(4\beta)\th
+\sum_{n=1}^\infty \frac{I_{2n+1}(4\beta)+
I_{2n-1}(4\beta)}{2n}\sin 2n\th\right],
\end{aligned}
\label{eq:gconn-Ibt}
\end{equation}
where $\th$ is related to time $\tau$ by
\begin{equation}
\begin{aligned}
\th=\arcsin(\tau).
\end{aligned}
\end{equation}
Note that the initial value $g_{\text{conn}}(\beta,0)$ is given by
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,0)=
\Tr\left[A\Bigl(\frac{2\beta}{\rt{N}}\Bigr)-A\Bigl(\frac{\beta}{\rt{N}}\Bigr)^2\right].
\end{aligned}
\label{eq:gc-init}
\end{equation}
When $\beta=0$ this initial value vanishes, $g_{\text{conn}}(0,0)=0$, but
it is non-zero for $\beta\ne0$. The large $N$ limit of $g_{\text{conn}}(\beta,0)$
in \eqref{eq:gc-init} can be obtained by
borrowing the result of two-point correlator of 1/2 BPS Wilson loops in $\mathcal{N}=4$
SYM \cite{Akemann:2001st,Giombi:2009ms,Okuyama:2018aij}
\begin{equation}
\begin{aligned}
g_{\text{conn}}(\beta,0)= \beta I_0(2\beta)I_1(2\beta)+\mathcal{O}(N^{-2}).
\end{aligned}
\end{equation}
When $\beta=0$, \eqref{eq:gconn-Ibt} reproduces the known result
in \cite{hikami,Liu:2018hlr}
\begin{equation}
\begin{aligned}
g_{\text{conn}}(0,t)&=\frac{2N}{\pi}\Bigl(\th+\frac{1}{2}\sin2\th\Bigr)
=\frac{2N}{\pi}\Bigl(\arcsin(\tau)+\tau\rt{1-\tau^2}\Bigr).
\end{aligned}
\end{equation}
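As a consistency check (our sketch), a numerical $t$-derivative of this closed form of $g_{\text{conn}}(0,t)$ reproduces the $\beta=0$ slope of ramp $\frac{2}{\pi}\sqrt{1-\tau^2}$:

```python
# Consistency sketch (ours): a central-difference derivative of the closed
# form of g_conn(0,t) reproduces the beta = 0 slope of ramp (2/pi) sqrt(1 - tau^2).
from math import asin, sqrt, pi

def g0(tau, N=1000):
    """Large-N closed form of g_conn(0,t), with tau = t/(2N)."""
    return 2*N/pi*(asin(tau) + tau*sqrt(1 - tau**2))

N, tau, h = 1000, 0.4, 1e-6
slope = (g0(tau + h, N) - g0(tau - h, N)) / (2*h) / (2*N)   # d/dt = (1/2N) d/dtau
print(slope, 2/pi*sqrt(1 - tau**2))   # the two values agree
```

This is the expected behavior, since $s(0,t)=\frac{\pi}{2}\partial_t g_{\text{conn}}(0,t)=u_\tau$ in the large $N$ limit.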
We have also checked that the small $\beta$ expansion of our result \eqref{eq:gconn-Ibt}
is consistent with the $\mathcal{O}(\beta^2)$ term of $g_{\text{conn}}(\beta,t)$
computed in
\cite{Liu:2018hlr}.
\section{Plot of the exact slope of ramp \label{sec:plot}}
In this section, we will study numerically
the behavior of the exact slope of ramp $s(\beta,t)$ at finite $N$.
Plugging the exact result of $\partial_t g_{\text{conn}}(\beta,t)$ \eqref{eq:slope-bt}
into \eqref{eq:sbt}, we find the exact form of
$s(\beta,t)$ at finite $N$
\begin{equation}
\begin{aligned}
s(\beta,t)= \frac{1}{4\beta}\text{arcsinh}\left(
2\pi Ne^{\frac{\beta^2-t^2}{N}}\text{Im}\Biggl[L_N\Bigl(-\frac{(\beta+\mathrm{i} t)^2}{N}\Bigr)
L_{N-1}\Bigl(-\frac{(\beta-\mathrm{i} t)^2}{N}\Bigr)\Biggr]\right).
\end{aligned}
\label{eq:sbtexact}
\end{equation}
When $\beta=0$, using the result of $\partial_t g_{\text{conn}}(0,t)$ in \eqref{eq:slope-0}
the exact form of
$s(0,t)$ at finite $N$ becomes
\begin{equation}
\begin{aligned}
s(0,t)=\pi
te^{-\frac{t^2}{N}}\Biggl[L_{N-1}\Bigl(\frac{t^2}{N}\Bigr)
L_{N-1}^1\Bigl(\frac{t^2}{N}\Bigr)-L_{N}\Bigl(\frac{t^2}{N}\Bigr)
L_{N-2}^1\Bigl(\frac{t^2}{N}\Bigr)\Biggr].
\end{aligned}
\label{eq:s0exact}
\end{equation}
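The semi-circle law can be seen directly from \eqref{eq:s0exact}. The following numerical sketch (ours) evaluates the Laguerre polynomials by a scaled three-term recurrence, absorbing one factor of $e^{-t^2/(2N)}$ into each recurrence seed so that the intermediate values stay within floating-point range, and compares $s(0,t)$ with $\sqrt{1-\tau^2}$:

```python
# Numerical sketch (ours): evaluate the exact s(0,t) at finite N and compare
# it with the semi-circle sqrt(1 - tau^2), tau = t/(2N).  The prefactor
# e^{-t^2/N} is split as e^{-x/2} per Laguerre factor (x = t^2/N) and folded
# into the recurrence seed to avoid overflow at large arguments.
from math import exp, sqrt, pi

def lag_scaled(n, alpha, x):
    """e^{-x/2} L_n^alpha(x) via the three-term recurrence."""
    if n < 0:
        return 0.0
    s = exp(-x/2)
    if n == 0:
        return s
    lm, l = s, (1 + alpha - x)*s
    for k in range(1, n):
        lm, l = l, ((2*k + 1 + alpha - x)*l - (k + alpha)*lm) / (k + 1)
    return l

def s0(t, N):
    """Exact slope of ramp s(0,t) of eq. (eq:s0exact) at finite N."""
    x = t*t / N
    return pi*t*(lag_scaled(N-1, 0, x)*lag_scaled(N-1, 1, x)
                 - lag_scaled(N, 0, x)*lag_scaled(N-2, 1, x))

N = 400
for tau in (0.3, 0.5, 0.8):
    print(tau, s0(2*N*tau, N), sqrt(1 - tau**2))   # s(0,t) tracks the semi-circle
```

The finite-$N$ values oscillate slightly around the semi-circle, with deviations that shrink as $N$ grows.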
\begin{figure}[htb]
\centering
\subcaptionbox{$s(0,t)$\label{sfig:sbt0}}{\includegraphics[width=7cm]{sbt0.pdf}}
\hskip5mm
\subcaptionbox{$s(5,t)$\label{sfig:sbt5}}{\includegraphics[width=7cm]{sbt5.pdf}}
\caption{Plot of $s(\beta,t)$
for \subref{sfig:sbt0} $\beta=0$ and \subref{sfig:sbt5} $\beta=5$.
The horizontal axis is the rescaled time $\tau=t/2N$.
The blue dots are the exact values of $s(\beta,t)$ at $N=500$ while the
red curve represents the semi-circle law $s=\rt{1-\tau^2}$.
}
\label{fig:sbt}
\end{figure}
In Figure \ref{fig:sbt}, we plot the exact slope of ramp
$s(\beta,t)$ at $N=500$
as a function of the rescaled time $\tau=t/2N$.
One can clearly see that $s(\beta,t)$ obeys the semi-circle law as predicted by the large $N$
analysis in the previous section.
We emphasize that $s(\beta,t)$ is independent of $\beta$
in the large $N$ limit and it obeys the semi-circle law for both $\beta=0$ and $\beta\ne0$
as shown in \eqref{eq:st-circle} and \eqref{eq:sb-circle}.
On the other hand, $g_{\text{conn}}(\beta,t)$ itself has a non-trivial $\beta$-dependence,
whose explicit form in the large $N$ limit is given by \eqref{eq:gconn-Ibt}.
Note that the vertical and horizontal axes in Figure \ref{fig:circle}
are flipped in Figure \ref{fig:sbt}.
As we explained in the previous section, the
$\tau$-axis corresponds to the eigenvalue density and
the $s$-axis
corresponds to the eigenvalues.
In other words, the eigenvalue density manifests itself as the time direction
in Figure \ref{fig:sbt}.
As we can see from Figure \ref{fig:sbt}, the slope of ramp vanishes beyond
the critical value $\tau=1$, which corresponds to the so-called Heisenberg time
$t_H=2N$
where the plateau regime sets in.
This critical time is determined by the maximal value of
the eigenvalue density.
\section{Small $t$ behavior of the slope of ramp \label{sec:smallt}}
In this section we will consider the small $t$
behavior of the slope of ramp $s(\beta,t)$. Since $s(\beta,t)$ is an odd function of $t$,
its Taylor expansion starts from the linear term in $t$ \footnote{In \cite{Cotler:2017jue} it was observed numerically
that in the small $t$ regime
$g_{\text{conn}}(0,t)$ behaves as $g_{\text{conn}}(0,t)\sim t^2$.
This behavior simply follows from the fact that
$g_{\text{conn}}(0,t)$ is an even function of $t$ with
the initial value $g_{\text{conn}}(0,0)=0$, hence its Taylor expansion
starts from $t^2$.}.
From the exact result of $s(\beta,t)$ at finite $N$ in \eqref{eq:sbtexact},
we can compute the coefficient of this linear term
\begin{equation}
\begin{aligned}
s(\beta,t)=
\pi e^{\frac{\beta^2}{N}}\left[
L_{N-1}\Bigl(-\frac{\beta^2}{N}\Bigr)
L_{N-1}^1\Bigl(-\frac{\beta^2}{N}\Bigr)-
L_{N}\Bigl(-\frac{\beta^2}{N}\Bigr)
L_{N-2}^1\Bigl(-\frac{\beta^2}{N}\Bigr)\right]t+\mathcal{O}(t^3).
\end{aligned}
\end{equation}
In the large $N$ limit this becomes
\begin{equation}
\begin{aligned}
s(\beta,t)=\pi \Bigl[I_0(2\beta)^2-I_1(2\beta)^2\Bigr]t+\mathcal{O}(t^3).
\end{aligned}
\end{equation}
One can in principle compute the coefficient of $t^3,t^5,\cdots,$
as a function of $\beta$ using the
exact result in \eqref{eq:sbtexact}.
However, the computation for general $\beta$
becomes tedious when we go to higher order terms.
Instead, here we focus on the $\beta=0$ case
where the higher order coefficients are easily extracted from
the exact result at finite $N$ in \eqref{eq:s0exact}
\begin{equation}
\begin{aligned}
s(0,t)= \frac{\pi}{2}\left[ 2 t-2 t^3+t^5 +
\left(-\frac{5}{18}-\frac{1}{18N^2}\right)t^7+\mathcal{O}(t^9)\right].
\end{aligned}
\label{eq:s0taylor}
\end{equation}
This expansion is valid until the first and the second terms in \eqref{eq:s0taylor}
become comparable. The order of this time scale is
\begin{equation}
\begin{aligned}
t\sim \mathcal{O}(N^0).
\end{aligned}
\end{equation}
Summing over the order $N^0$ terms in \eqref{eq:s0taylor}, we find
that the large $N$ limit of $s(0,t)$ in the small $t$ regime
is given by the Bessel function
\begin{equation}
\begin{aligned}
s(0,t)= \pi t\Bigl[J_0(2t)^2+J_1(2t)^2\Bigr]+\mathcal{O}(N^{-2}).
\end{aligned}
\label{eq:bessel-osc}
\end{equation}
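One can check \eqref{eq:bessel-osc} directly against the Taylor expansion \eqref{eq:s0taylor}. A quick numerical sketch (ours; the Bessel functions are evaluated by their Taylor series, which is adequate for small arguments):

```python
import math

def bessel_j(nu, x, terms=30):
    """Bessel function J_nu(x) for integer nu >= 0, via its Taylor series."""
    total = 0.0
    for m in range(terms):
        total += ((-1) ** m / (math.factorial(m) * math.factorial(m + nu))
                  * (x / 2) ** (2 * m + nu))
    return total

def s_bessel(t):
    """Large-N, small-t slope, eq. (bessel-osc)."""
    return math.pi * t * (bessel_j(0, 2 * t) ** 2 + bessel_j(1, 2 * t) ** 2)

def s_taylor(t):
    """Large-N limit of the Taylor expansion (s0taylor), through t^7."""
    return math.pi / 2 * (2 * t - 2 * t ** 3 + t ** 5 - 5 / 18 * t ** 7)
```

For small $t$ the two expressions agree to high accuracy, while for $t\gtrsim 1$ the Bessel expression oscillates around $s=1$, as in the figure below.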
\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{smallt.pdf}
\caption{Plot of $s(\beta=0,t)$
in the small $t$ region.
The dots are the exact values of $s(0,t)$ for $N=500$.
The blue line is the first term $s=\pi t$
in the Taylor expansion of $s(0,t)$ in \eqref{eq:s0taylor},
while the red curve represents the Bessel function in
\eqref{eq:bessel-osc}.
This figure is a closeup of the small $t$ region of Figure \ref{sfig:sbt0}.
}
\label{fig:smallt}
\end{figure}
In Figure \ref{fig:smallt}, we plot the exact $s(0,t)$ at $N=500$
in the small $t$ region. $s(0,t)$ grows linearly at very early time and then
starts to oscillate around $s=1$. The linear behavior of
$s(0,t)$ around $t=0$ comes from the first term in the
Taylor expansion \eqref{eq:s0taylor}, while the oscillating behavior
is captured by the Bessel function \eqref{eq:bessel-osc}
as discussed in \cite{hikami}.
When $t$ becomes of order $N$, the expression \eqref{eq:bessel-osc}
is no longer valid;
$s(0,t)$ is described instead by the semi-circle law \eqref{eq:st-circle} when $t\sim \mathcal{O}(N)$.
\section{Conclusion \label{sec:conclusion}}
In this paper, we have studied the slope of ramp $s(\beta,t)$,
which is related to $\partial_t g_{\text{conn}}(\beta,t)$ by \eqref{eq:sbt},
in the Gaussian matrix model.
We found the exact closed form expression of $s(\beta,t)$ in \eqref{eq:sbtexact} and
confirmed numerically that $s(\beta,t)$
obeys the semi-circle law as a function of time for both $\beta=0$ and $\beta\ne0$ cases.
Interestingly, in the plot of $s(\beta,t)$ the time direction plays the role of eigenvalue
density.
There are many interesting open questions. We list several avenues for
future research.
The relation between $g_{\text{conn}}$ and the eigenvalue density
$\rho(\mu)$ in \eqref{eq:gconn-sine} is expected to be quite universal, and
hence it is not restricted to the Gaussian matrix model.
It would be very interesting to study the slope of ramp in other models,
such as the SYK model,
and see if the eigenvalue density
manifests itself in the time direction for other models as well\footnote{See
\cite{Gaikwad:2017odv} for a study of
spectral form factor in hermitian matrix model with
a non-Gaussian potential.}.
It would be also interesting to generalize our study to the
higher point correlation function of $\Tr e^{-(\beta\pm\mathrm{i} t)H}$.
In the case of Gaussian matrix model, the exact form of the connected part of
higher point function
was recently studied in \cite{Okuyama:2018aij}.
It would be interesting to see if the multi-point correlator of
eigenvalues $\rho^{(n)}(\mu_1,\cdots,\mu_n)$
appears in the time dependence of higher point functions of $\Tr e^{-(\beta\pm\mathrm{i} t)H}$
in the large $N$ limit. To see this, we need to go beyond the ``box approximation''
used in \cite{Liu:2018hlr}.
\acknowledgments
I would like to thank Nick Hunter-Jones for correspondence
and careful reading of the manuscript.
This work was supported in part by JSPS KAKENHI Grant Number 16K05316.
{
"timestamp": "2019-02-15T02:03:10",
"yymm": "1811",
"arxiv_id": "1811.09988",
"language": "en",
"url": "https://arxiv.org/abs/1811.09988",
"abstract": "We study the time derivative of the connected part of spectral form factor, which we call the slope of ramp, in Gaussian matrix model. We find a closed formula of the slope of ramp at finite $N$ with non-zero inverse temperature. Using this exact result, we confirm numerically that the slope of ramp exhibits a semi-circle law as a function of time.",
"subjects": "High Energy Physics - Theory (hep-th)",
"title": "Spectral form factor and semi-circle law in the time direction"
}
https://arxiv.org/abs/2104.07028 | On Missing Mass Variance |
\section{Introduction}
\subsection{Background}
Let $X_1,\ldots,X_n\sim^{IID} p$ be a sample from a distribution $(p_s)_{s
\in S}$ on a countable alphabet $S$, and $f_s = \#\{i: X_i = s\}$ be the empirical (observed) frequencies. The missing mass
\begin{align}
M_0 = \sum_{s}p_s\mathbb{I}(f_s=0),
\end{align}
which quantifies how much of the population does not appear in the sample, is of interest to statistics~\cite{robbins1968estimating} and applied disciplines, such as:
ecology~\cite{shen2003predicting,chao2003nonparametric,chao2013entropy,chao2017seen}, linguistics~\cite{efron1976estimating,mcneil1973estimating,thisted1987did,gale1995good}, archaeology~\cite{myrberg2015tale}, network design~\cite{budianu2003estimation,budianu2004good}, information theory~\cite{vu2007coverage,zhang2012entropy}, microbiology~\cite{hughes2001counting}
and bio-molecular modeling~\cite{mao2002poisson,koukos2014application}. The
problem was studied first by Good and Turing~\cite{good1953population}.
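To make the object concrete, here is a small simulation (illustrative, not from the paper; the function names are ours) that draws IID samples and compares the average missing mass with its exact mean $\mathbb{E}[M_0]=\sum_s p_s(1-p_s)^n$:

```python
import random

def missing_mass(p, sample):
    """M_0 = sum of p_s over symbols s that do not occur in the sample."""
    seen = set(sample)
    return sum(ps for s, ps in enumerate(p) if s not in seen)

def simulate(p, n, trials, seed=0):
    """Monte Carlo average of M_0 over IID samples of size n from p."""
    rng = random.Random(seed)
    m = len(p)
    total = 0.0
    for _ in range(trials):
        sample = rng.choices(range(m), weights=p, k=n)
        total += missing_mass(p, sample)
    return total / trials

# Uniform distribution on m = 20 symbols, sample size n = 50.
p = [1 / 20] * 20
est = simulate(p, n=50, trials=2000)
exact = sum(ps * (1 - ps) ** 50 for ps in p)  # E[M_0]
```

The Monte Carlo average `est` matches the exact mean `exact` up to sampling noise, which scales like the standard deviation of $M_0$ divided by the square root of the number of trials.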
This work studies the maximal variance of the missing mass
\begin{align}\label{eq:mse_def}
\max_{(p_s)\in\mathbb{P}(S)} \mathbf{Var}[M_0] = ?
\end{align}
given the alphabet size (equivalently, the constrained support):
\begin{align}
\# S = m,
\end{align}
where we allow for $m=+\infty$ and denote by $\mathbb{P}(S)$ the set of probability measures on $S$.
We are interested in deriving a formula valid for any sample size $n$ and any alphabet size $m$, with the relative error of at most $o(1)$ when $n$ grows.
The motivation is two-fold. First, our work continues the line of research on extreme missing mass properties \cite{rajaraman2017minimax,minimaxa2018improved,berend2012missing,berend2017expected}. Second, more importantly, understanding the variance extremes determines no-go regimes for concentration inequalities, which have been actively studied \cite{mcallester2000convergence,mcallester2003concentration,berend2013concentration,ben2017concentration,chandra2019concentration,skorskisub,skorski2020missing}; the need for such no-go results was highlighted in prior works~\cite{rajaraman2017minimax,minimaxa2018improved}.
\subsection{Related Work}
No prior work has addressed the problem of finding \emph{sharp variance bounds} for the missing mass, even under no alphabet constraints. Below we review weaker results that are available.
The maximal variance is $\Theta(n^{-1})$ under no constraints ($m=+\infty$). This has been noticed in the line of work on concentration properties~\cite{mcallester2000convergence}. Furthermore, this asymptotic holds up to a constant for many distributions with certain regularity properties~\cite{ben2017concentration}.
The asymptotic $\Theta(n^{-1})$, when $m=+\infty$, can also be deduced from works that studied the Good-Turing estimator~\cite{rajaraman2017minimax,minimaxa2018improved}; unfortunately, the estimator variance differs from our variance already by $\Omega(n^{-1})$, and so it is insufficient to give the sharp asymptotic.
As we will see, techniques from prior works are not sufficient in the context of our problem. While certain tricks~\cite{rajaraman2017minimax,minimaxa2018improved} can be used to handle the smaller-order term, determining the leading asymptotic term involves subtle non-linear programming, mainly due to the support constraint.
Analogous programs from prior works on extreme missing mass properties luckily had simpler objectives and boundaries. For example, the problem in~\cite{berend2012missing} is straightforward due to Schur-convexity~\cite{xia2009schur}; in~\cite{minimaxa2018improved} the reduced program involves only one equality constraint; in~\cite{rajaraman2017minimax} the constant was found up to a small gap, but with elementary methods avoiding optimization.
The reader interested in missing mass beyond the scope of our problem may also refer to other works~\cite{esty1982confidence,zhang2009asymptotic,mossel2015impossibility,cohen2021non,orlitsky2003always,orlitsky2015competitive,cohen2017cardinality,ayed2019good}.
\subsection{Summary of Contribution}
The contribution of this work can be summarized as follows:
\begin{itemize}
\item A formula accurately approximating the missing mass variance, together with a poissonized version which is better suited for analysis and numerical evaluation.
\item The characterization of the worst-case variance value, along with the maximizing distribution, for any sample and alphabet size. The maximal variance behaves like $\Theta(\min\{\frac{1}{m},\frac{1}{n}\})$, with the explicit leading constant. The maximizer is a mixture of a uniform distribution and a point mass, with the explicitly given proportions.
\item An application of the extreme variance values to benchmarking results on missing mass concentration.
\end{itemize}
\section{Results}
\subsection{Accurate Variance Estimation}
We present the accurate bound on the missing mass variance, valid for any discrete distribution. The derivation is similar to that for the formula for the Good-Turing estimator, obtained in \cite{rajaraman2017minimax}, although quantitatively quite different.
\begin{theorem}[Missing Mass Variance]\label{thm:poisson}
We have:
\begin{multline}
\mathbf{Var}[M_0] =-n\left(\sum_{s}p_s^2(1-p_{s})^{n}\right)^2 + n\sum_s p_s^3(1-p_s)^{n} \\ + O(n^{-2}).
\end{multline}
\end{theorem}
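For small alphabets the variance can also be computed exactly, with no remainder term, from the indicator decomposition $M_0=\sum_s p_s\xi_s$ used in the proof in Section 4. A minimal sketch (ours; the function name is hypothetical):

```python
def var_missing_mass(p, n):
    """Exact Var[M_0] for n IID draws from the distribution p,
    via Var[M_0] = sum p_s^2 Var[xi_s] + sum_{s != t} p_s p_t Cov[xi_s, xi_t]."""
    m = len(p)
    # Diagonal part: Var[xi_s] = (1-p_s)^n - (1-p_s)^{2n}.
    var = sum(ps ** 2 * ((1 - ps) ** n - (1 - ps) ** (2 * n)) for ps in p)
    # Off-diagonal part: Cov[xi_s, xi_t] = (1-p_s-p_t)^n - (1-p_s)^n (1-p_t)^n.
    for s in range(m):
        for t in range(m):
            if s != t:
                cov = ((1 - p[s] - p[t]) ** n
                       - (1 - p[s]) ** n * (1 - p[t]) ** n)
                var += p[s] * p[t] * cov
    return var
```

The double loop costs $O(m^2)$, so this is a benchmarking tool for moderate $m$ rather than a replacement for the closed-form approximations.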
We can further approximate the variance, similarly as for moments of occupancy numbers~\cite{gnedin2007notes,barbour2009small}, with "poissonized" terms. This makes the formula easier to
analyze and also more stable numerically (via the "log-sum-exp" trick~\cite{blanchard2019accurate}).
\begin{corollary}[Missing Mass Variance, Poissonized]\label{cor:poisson}
We have:
\begin{multline}
\mathbf{Var}[M_0] =-n\left(\sum_{s}p_s^2\mathrm{e}^{-np_s}\right)^2 + n\sum_s p_s^3\mathrm{e}^{-np_s} \\ + O(n^{-2}).
\end{multline}
\end{corollary}
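When all $p_s$ are small, the binomial and poissonized expressions are numerically close; a quick sketch of both formulas (ours, for illustration):

```python
import math

def var_binomial_form(p, n):
    """The approximation of Theorem 1 (binomial weights)."""
    s2 = sum(ps ** 2 * (1 - ps) ** n for ps in p)
    s3 = sum(ps ** 3 * (1 - ps) ** n for ps in p)
    return -n * s2 ** 2 + n * s3

def var_poissonized_form(p, n):
    """The poissonized approximation of the corollary."""
    s2 = sum(ps ** 2 * math.exp(-n * ps) for ps in p)
    s3 = sum(ps ** 3 * math.exp(-n * ps) for ps in p)
    return -n * s2 ** 2 + n * s3
```

In production code the inner sums would be computed with the "log-sum-exp" trick mentioned above; the plain form suffices for moderate $n p_s$.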
\subsection{Extreme Variance Behavior}
Building on \Cref{cor:poisson}, we use non-linear optimization to characterize the maximal variance. The result is stated below:
\begin{theorem}\label{thm:main_opt}
Denote $m = \# S$, $b=\frac{m}{n}$. Consider the program:
\begin{align}
\begin{aligned}
\max_{w,c} && \alpha(w,c)=
- w^2 c^2\mathrm{e}^{-2c} + w c^2\mathrm{e}^{-c} \\
\mathrm{s.t.} &&
\begin{cases}
0\leqslant w \leqslant 1\\
w \leqslant b c,
\end{cases}
\end{aligned}
\end{align}
and let $\alpha$ be its optimal value achieved at $w,c$. Then:
\begin{align}
\max_{(p_s)\in\mathbb{P}(S)} \mathbf{Var}[M_0] = \frac{\alpha}{n} + O(n^{-2}),
\end{align}
with the maximum achieved at $(p_s)$ being the mixture of a uniform distribution on $\min\{\lceil wn/ c \rceil,m-1\}$ elements and a Dirac mass,
with proportions $w$ and $1-w$, respectively.
For $m=+\infty$, the constraints reduce to $0\leqslant w\leqslant 1,0\leqslant c$.
\end{theorem}
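The program in \Cref{thm:main_opt} is two-dimensional, so it can be explored by brute force. A sketch (ours; the grid ranges and tolerances are illustrative):

```python
import math

def alpha_max(b, c_max=8.0, steps=400):
    """Grid search for the program of Theorem 2 with ratio b = m/n:
    maximize -w^2 c^2 e^{-2c} + w c^2 e^{-c} over 0 <= w <= 1, w <= b c."""
    best = 0.0
    for i in range(steps + 1):
        w = i / steps
        for j in range(1, steps + 1):
            c = c_max * j / steps
            if w <= b * c + 1e-12:  # feasibility
                val = (-w * w * c * c * math.exp(-2 * c)
                       + w * c * c * math.exp(-c))
                if val > best:
                    best = val
    return best
```

For $b\geqslant 1/c^{*}$ the search returns the plateau value $\alpha\approx 0.477$ of \Cref{cor:phase}, attained at $w=1$; for smaller $b$ the optimal value decreases, consistent with the phase diagram below.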
The shape of the worst-case distribution, a Uniform-Dirac mixture, appears often in extreme problems of nonparametric statistics~\cite{bu2018estimation} and could be expected; the proof is, however, quite subtle. The optimization is illustrated in \Cref{fig:landscape}.
\begin{figure}[h]
\begin{tikzpicture}[scale=0.8]
\begin{axis}[
axis lines = left,
colormap/cool,
xlabel = $w$,
ylabel = $c$,
y dir = reverse,
z dir = reverse,
zmax = 0.50,
view={60}{-140},
legend style={at={(0.0,0.9)},anchor=north west},
enlargelimits = true,
colorbar
]
\addplot3[
surf,
samples=50,
domain=0:1,
domain y = 0:5
]
{ -x^2*y^2*exp(-2*y) + x*y^2*exp(-y) };
\addlegendentry{$-w^2c^2\mathrm{e}^{-2c}+wc^2\mathrm{e}^{-c}$}
\addplot3[
black,ultra thick,dotted,
domain=0:5,
samples = 50,
samples y = 0
]
(
{min(0.8*x,1)},
{x},
{-min(0.8*x,1)^2*x^2*exp(-2*x) + min(0.8*x,1)*x^2*exp(-x)}
);
\end{axis}
\end{tikzpicture}
\caption{The optimization landscape of the problem in \Cref{thm:main_opt}.\\
The boundary of the feasible region is shown with the dotted line.}
\label{fig:landscape}
\end{figure}
Finally, we give a reduction to a one-dimensional problem, which can be solved \emph{numerically} for any given $m,n$.
\begin{corollary}\label{cor:phase}
Let $c^{*}=2.26281...$ be the root of $2 - 2 e^c + c (-2 + e^c)=0$.
Then the optimal solution for \Cref{thm:main_opt} is:
\begin{align}
\alpha=
\begin{cases}
\max_{c\geqslant 0} -c^2\mathrm{e}^{-2c}+c^2\mathrm{e}^{-c} = 0.477\ldots & \frac{m}{n}\geqslant \frac{1}{c^{*}}\\
\max_{0<c\leqslant \frac{1}{b}} -b^2 c^4 \mathrm{e}^{-2c}+b c^3\mathrm{e}^{-c} & \frac{m}{n} < \frac{1}{c^{*}}
\end{cases},
\end{align}
and $w$ equals $1$ and respectively $b c$, where $c$ is the maximizer.
\end{corollary}
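The critical value $c^{*}$ is easy to locate numerically; a sketch (ours) using plain bisection:

```python
import math

def phase_transition_point(lo=2.0, hi=3.0, iters=80):
    """Root c* of 2 - 2 e^c + c(e^c - 2) = 0, located by bisection.
    The bracket [2, 3] works: g(2) = -2 < 0 and g(3) = e^3 - 4 > 0."""
    g = lambda c: 2 - 2 * math.exp(c) + c * (math.exp(c) - 2)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

The returned root matches $c^{*}=2.26281\ldots$, and plugging $c=c^{*}$, $w=1$ into the objective reproduces the plateau value $\alpha=(c^{*})^2(\mathrm{e}^{-c^{*}}-\mathrm{e}^{-2c^{*}})\approx 0.477$.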
We see that the constant in the maximal variance asymptotic depends on the ratio of the sample size and alphabet size. Furthermore, for sufficiently large $n$ the Dirac part vanishes (phase transition), as illustrated in
\Cref{fig:phase}.
\begin{figure}[h]
\begin{tikzpicture}[scale=0.83]
\begin{axis}[
enlargelimits = true,
axis lines =middle,
xlabel = {$\frac{m}{n}$},
ylabel = {$\alpha$},
xtick={0, 0.449, 2},
xticklabels={0,$\frac{1}{c^{*}}$},
yticklabels={0,$(c^{*})^2(\mathrm{e}^{-2c^{*}}-\mathrm{e}^{-c^{*}})$},
ytick={0, 0.477},
ymax = 1,
xmax = 0.9,
colormap/cool,
every axis x label/.style={
at={(ticklabel* cs:1.0)},
anchor=north west,
},
every axis y label/.style={
at={(ticklabel* cs:1.0)},
anchor=east,
},
]
\addplot[mesh,ultra thick] table [x=b,y=val,col sep=comma] {miss_mass_var.csv};
\addplot[mark=none,dashed,black] coordinates {(0,0.477) (0.449,0.477)};
\addplot[mark=none,dashed,black] coordinates {(0.449,0) (0.449,0.9)};
\path[fill=gray,fill opacity=0.05] (axis cs:0,0) -- (axis cs:0,1) -- (axis cs:0.449,1)--(axis cs:0.449,0);
\path[fill=blue,fill opacity=0.05] (axis cs:0.449,0) -- (axis cs:0.449,1) -- (axis cs:0.9,1.72)--(axis cs:0.9,0);
\node (n1) at (axis cs: 0,0.75) {};
\node (n2) at (axis cs: 0.449,0.75) {};
\node (n3) at (axis cs: 0.9,0.75) {};
\draw[<->] (n1)--(n2) node[midway,above] {Uniform+Dirac: $w<1$};
\draw[<->] (n2)--(n3) node[midway,above] {Uniform: $w=1$};
\end{axis}
\end{tikzpicture}
\caption{The constant $\alpha$ in the asymptotic $\Theta(n^{-1})$ for the maximal missing mass variance \eqref{eq:mse_def}, depending on the alphabet size $m=\#S$ (\Cref{cor:phase}).
The phase transition (the maximizer becomes uniform) occurs at $m = \frac{n}{c^{*}}$.}
\label{fig:phase}
\end{figure}
\subsection{Application: Gap in Missing Mass Concentration}
We show how our result can be used to benchmark concentration bounds. This solves the problem posed in~\cite{minimaxa2018improved}.
The state-of-the-art bounds for the concentration of $M_0$~\cite{ben2017concentration} give the following:
$M_0$ is sub-gamma with variance factor $v=\Theta(n^{-1})$ and scale $c=O(n^{-1})$, and thus satisfies
the concentration bound $\Pr[|M_0-\mathbb{E}[M_0]|>\epsilon]\leqslant 2\exp\left(\frac{-\epsilon^2}{2v+2c\epsilon}\right)$ (in particular $2\mathrm{e}^{-\Omega(n\epsilon^2)}$, as we can assume $\epsilon<1$). However, we know that $v \geqslant \mathbf{Var}[M_0]$ for the sub-gamma bound, and this is sharp~\cite{boucheron2013concentration}. Using our results, we observe that:
\begin{itemize}
\item The biggest variance is $\mathbf{Var}[M_0]\approx \frac{0.477}{n}$. In turn, by the numeric formula~\cite{boucheron2013concentration}, we have $v=\sum_s p_s^2(1-p_s)^{n} + n^{-1}\sum_s p_s(1-p_s)^{n}$ which can be as big as $v\approx \frac{0.839}{n}$. This leaves the gap of $\frac{0.362}{n}$ w.r.t. the "ideal" case.
\item Missing mass is a sum of negatively-dependent variables, and all prior works on missing mass concentration handle this by the IID majorization. However, ignoring correlation terms gives a variance-term of at least $v\approx \sum_s p_s^2 ((1-p_s)^n - (1-p_s)^{2n})$, which can be off by $\Omega(n^{-1})$ from the "ideal" value of $\mathbf{Var}[M_0]$.
\end{itemize}
\section{Preliminaries}
We will need the mean-value theorem to obtain asymptotic bounds, as well as to localize and count roots of equations.
\begin{lemma}[Mean-Value Theorem~\cite{feng2013mean}]\label{lemma:mean_val_thm}
If a function $f(u)$ is continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$, then there exists a point $c\in(a,b)$ such that $\frac{\partial f}{\partial u}(c) = \frac{f(b)-f(a)}{b-a}$.
\end{lemma}
We will also need the facts below in our estimations:
\begin{lemma}[Generalized-Bernoulli Inequality]\label{lemma:binomial_series}
Let $n\geqslant 1$ be an integer. Then for any $-1\leqslant u\leqslant 0$ it holds that:
$1+nu \leqslant (1+u)^n \leqslant 1+nu+O(n^2u^2)$.
\end{lemma}
\begin{lemma}[Taylor's Expansion with Remainder]\label{lemma:taylor_remainder}
Let $f$ be a twice differentiable real function, then $f(x) = f(x_0) + \frac{\partial f}{\partial x}(x_0)\cdot (x-x_0) + \frac{ \frac{\partial^2 f}{\partial x^2}(z)}{2}\cdot (x-x_0)^2$ for some $z\in [x,x_0]$.
\end{lemma}
Another fact handles the commonly occurring expressions:
\begin{lemma}[Occupancy-like Expressions]\label{lemma:exp_sums}
Let $k,n\geqslant 1$. Then $p^k(1-p)^n$ for $p\in [0,1]$ is maximized for $p=\frac{k}{k+n}$,
and $p^k\mathrm{e}^{-np}$ for $p\in [0,1]$ is maximized for $p=\frac{k}{n}$. In particular if $(p_s)$ is a probability distribution, then $\sum_s p_s^k(1-p_s)^{n} = O(n^{1-k})$ and $\sum_s p_s^k\mathrm{e}^{-np_s} = O(n^{1-k})$ when $k=O(1)$.
\end{lemma}
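The maximizers in \Cref{lemma:exp_sums} follow from elementary calculus and can be confirmed numerically; a throwaway sketch (ours):

```python
import math

def argmax_grid(f, lo, hi, steps=20000):
    """Location of the maximum of f on [lo, hi] by grid search."""
    best_x, best_v = lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = f(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

k, n = 3, 50
x1 = argmax_grid(lambda p: p ** k * (1 - p) ** n, 0.0, 1.0)   # expect k/(k+n)
x2 = argmax_grid(lambda p: p ** k * math.exp(-n * p), 0.0, 1.0)  # expect k/n
```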
Finally, we will need some theory of non-linear programming. For a detailed discussion we refer to books~\cite{boyd2004convex,biegler2010nonlinear,bazaraa2013nonlinear}.
\begin{lemma}[KKT Conditions]\label{lemma:KKT}
Consider the program
\begin{align}
\begin{aligned}
\max && f(x) \\
\mathrm{s.t.} &&
\begin{cases}
h_i(x) = 0,& i\in I\\
g_j(x)\leqslant 0,& i\in J
\end{cases}
\end{aligned}
\end{align}
with differentiable real functions $f$,$(h_i)_{i\in I},(g_j)_{j\in J}$ in variables $x=x_1,\ldots,x_d$.
If the maximum occurs at $x$, then:
\begin{align}
\frac{\partial}{\partial x}f(x) = \sum_{i} \lambda_i \frac{\partial}{\partial x}h_i(x) +\sum_{j} \mu_j \frac{\partial}{\partial x}g_j(x),
\end{align}
where $\frac{\partial}{\partial x} = (\frac{\partial}{\partial x_1}\ldots\frac{\partial}{\partial x_d})$,
for $\lambda_i\in\mathbb{R}$, $\mu_j\geqslant 0$ such that
\begin{align}
\lambda_i \in\mathbb{R},\ \mu_j\geqslant 0,\ \mu_j g_j(x) = 0,
\end{align}
provided that regularity conditions hold at $x$.
\end{lemma}
We briefly recall the optimization terminology. The function $f$ is called the objective, and any $x\in\mathbb{R}^d$ satisfying the constraints is called feasible. The optimal value is also called the program value. The constraint $h_i$, respectively $g_j$, is called active at $x$ when $h_i(x)=0$, respectively $g_j(x)=0$.
\begin{remark}[Constraints Qualification]\label{rem:licq}
The KKT conditions hold at an optimal $x$ when the gradients of the active constraints are linearly independent (LICQ) or when all constraints are affine (LCQ).
\end{remark}
\begin{remark}[Extreme Values~\cite{lovric2007vector}]\label{lemma:extreme_val_theorem}
A continuous function on a compact subset of $\mathbb{R}^d$ achieves its maximum and minimum.
\end{remark}
\section{Proofs}
\subsection{Proof of \Cref{thm:poisson}}
Let $\xi_s$ be the indicator that $s$ does not occur in the sample.
By \Cref{lemma:binomial_series}:
\begin{align}
\begin{split}
\mathbf{Var}[\xi_s] &= (1-p_s)^n-((1-p_s)^{2})^n \\
& = (1-p_s)^n(1-(1-p_s)^n) \\
& = n p_s(1-p_s)^n (1+O(n p_s)).
\end{split}
\end{align}
In turn, for $s\not=s'$ by
\Cref{lemma:mean_val_thm}:
\begin{align}
\begin{split}
\mathbf{Cov}[\xi_s,\xi_{s'}]&=(1-p_s-p_{s'})^n-(1-p_s)^n(1-p_{s'})^n \\
& = -np_s p_{s'}(1-p_s)^n(1-p_{s'})^{n-1} + \\
& \quad + O\left(n^2p_s^2p_{s'}^2(1-p_s)^{n-2}(1-p_{s'})^{n-2}\right).
\end{split}
\end{align}
These bounds imply that:
\begin{align}
\sum_s p_s^2\mathbf{Var}[\xi_s] = n \sum_s p_s^3(1-p_s)^n + O(n^{-2}),
\end{align}
where we used \Cref{lemma:exp_sums} to estimate the $O(\cdot)$ term, and
\begin{multline}
\sum_{s\not=s'}p_s p_{s'}\mathbf{Cov}[\xi_s,\xi_{s'}] = \\
-n\sum_{s\not=s'}p_s^2p_{s'}^2(1-p_s)^{n-2}(1-p_{s'})^{n-2} + O(n^{-2}),
\end{multline}
where we estimated the $O(\cdot)$ term using
$n^2\sum_{s\not=s'}p_s^3p_{s'}^3(1-p_s)^{n-2}(1-p_{s'})^{n-2}<(n\sum_s p_s^3(1-p_s)^{n-2})^2=O(n^{-2})$ due to \Cref{lemma:exp_sums}; since
$n\sum_s p_s^4(1-p_s)^{2(n-2)}=O(n^{-2})$ by \Cref{lemma:exp_sums}, we obtain the following simpler expression:
\begin{align}
\sum_{s\not=s'}p_s p_{s'}\mathbf{Cov}[\xi_s,\xi_{s'}] = -n(\sum_s p_s^2(1-p_s)^{n-2})^2 + O(n^{-2}).
\end{align}
Since $\sum_s p_s^2(1-p_s)^{n-2} = \sum_s p_s^2(1-p_s)^n + O(n^{-2})$, as seen by $p_s^2(1-p_s)^n = p_s^2(1-p_s)^{n-2}(1-2p_s+O(p_s^2))$ and \Cref{lemma:exp_sums}, we can replace the exponent $n-2$ with $n$.
By the identity $\mathbf{Var}[M_0] = \sum_s p_s^2\mathbf{Var}[\xi_s] + \sum_{s\not=s'}p_s p_{s'}\mathbf{Cov}[\xi_s,\xi_{s'}]$, and the bounds we derived above for the two parts of the summation:
\begin{multline}
\mathbf{Var}[M_0] = -n\left(\sum_{s}p_s^2(1-p_{s})^{n}\right)^2 + n\sum_s p_s^3(1-p_s)^{n} \\ + O(n^{-2}).
\end{multline}
At the final step we observe $|\mathrm{e}^{-n p}-(1-p)^n |\leqslant O(n p^2\mathrm{e}^{-np})$ by \Cref{lemma:mean_val_thm} applied to $f(u)=u^n$. By \Cref{lemma:exp_sums}, we conclude that in the above identity we can replace
$1-p$ by $\mathrm{e}^{-p}$ making the error of $O(n\sum_s p_s^4\mathrm{e}^{-np_s})=O(n^{-2})$ in the first sum
and $O(n\sum_s p_s^5\mathrm{e}^{-np_s})=O(n^{-3})$ in the second sum. Both sums are $O(n^{-1})$, thus the total additive error is $O(n^{-2})$.
\subsection{Proof of \Cref{thm:main_opt}}
\subsubsection{Non-linear Programming Formulation}
In view of \Cref{cor:poisson}, we need to solve the optimization program:
\begin{align}\label{eq:main_program}
\begin{aligned}
\max_{(p_s)} &&
-n\left(\sum_s p_s^2\mathrm{e}^{-n p_s}\right)^2 + n\sum_s p_s^3\mathrm{e}^{-np_s} \\
\textrm{s.t.} && \sum_s p_s = 1,\ p_s\geqslant 0.
\end{aligned}
\end{align}
Observe that $(p_s)$ may not be finite-dimensional, but rather countable,
so it is not clear whether the maximum is attained at a feasible point. However, the objective is bounded, so the supremum can be approached along a sequence of feasible points.
\subsubsection{Maximizer is 4-Mixture}
We will show that every feasible $(p_s)$ with at least 4 distinct non-zero values can be improved, that is, replaced by a feasible $(p_s)$ with a strictly bigger objective value. To this end we fix the values of $p_s$ on all indices except a finite subset $S'$ and consider the program
\begin{align}
\begin{aligned}
\max &&
-n\left(\sum_{s\in S'} p_s^2\mathrm{e}^{-n p_s}+B\right)^2 + n\sum_{s\in S'} p_s^3\mathrm{e}^{-np_s} + C \\
\textrm{s.t.} && \sum_s p_s = A,\ p_s\geqslant 0.
\end{aligned}
\end{align}
for some $A,B,C>0$. It suffices to prove that, regardless of the values of $A,B,C$, the maximum is achieved at $(p_s)_{s\in S'}$ taking at most 3 distinct non-zero values. By \Cref{lemma:extreme_val_theorem} the maximum is indeed achieved at some feasible point. By \Cref{lemma:KKT} the \emph{non-zero components} of the optimal point $(p_s)_{s\in S'}$ are solutions of the following equation in $u$:
\begin{align}
-2n \cdot D\cdot \frac{\partial}{\partial u}[u^2\mathrm{e}^{-nu}] + n
\cdot \frac{\partial}{\partial u}[u^3 \mathrm{e}^{-nu}] = \lambda,
\end{align}
where $D = 2(\sum_{s\in S'}p_s^2\mathrm{e}^{-np_s}+B)$, for some constant $\lambda$.
The equation is of the form
$P(u)\mathrm{e}^{-n u} = \lambda$ where $P(u)$ is a polynomial of degree 3.
We claim that such an equation has at most 4 solutions; to this end it suffices to show that the derivative of $P(u)\mathrm{e}^{-nu}$ has at most 3 roots; indeed, by \Cref{lemma:mean_val_thm}, between two points with equal function value there is a root of the derivative (a.k.a. Rolle's Theorem). But the derivative equals $Q(u)\mathrm{e}^{-nu}$ for the degree-3 polynomial $Q=P'-nP$, and thus has at most 3 roots by the Fundamental Theorem of Algebra.
This claim shows that the program is equivalent to
\begin{align}\label{eq:main_program_compact}
\begin{aligned}
\max_{(p_s),(k_s)} &&
-n\left(\sum_{s=1}^{4} k_s p_s^2\mathrm{e}^{-n p_s}\right)^2 + n\sum_{s=1}^{4} k_s p_s^3\mathrm{e}^{-np_s} \\
\textrm{s.t.} &&
\begin{cases}
\sum_{s=1}^{4} k_s p_s = 1\\
p_s\geqslant 0 \\
k_s\geqslant 0,\ k_s \in\mathbb{Z},
\end{cases}
\end{aligned}
\end{align}
whether the maximum in \eqref{eq:main_program} is achieved at a concrete point or in the limit along a sequence of points. Equivalently: when seeking the maximizer we can restrict the feasible set to mixtures of (at most) 4 uniform distributions.
\subsubsection{Relaxation}
We prove, as before, that \eqref{eq:main_program_compact} remains valid when $S$ is finite. Let $m=\#S$ and consider the relaxed program:
\begin{align}\label{eq:main2_program_compact_relaxed}
\begin{aligned}
\max_{(p_s),(k_s)} &&
-n\left(\sum_{s=1}^{4} k_s p_s^2\mathrm{e}^{-n p_s}\right)^2 + n\sum_{s=1}^{4} k_s p_s^3\mathrm{e}^{-np_s} \\
\textrm{s.t.} &&
\begin{cases}
\sum_{s=1}^{4} k_s p_s \leqslant 1\\
\sum_{s=1}^{4}k_s \leqslant m \\
p_s\geqslant 0 \\
k_s\geqslant 0.
\end{cases}
\end{aligned}
\end{align}
We prove that the global maximum is achieved. Consider the supremum of the objective under the constraints; it can be approached in the limit of the objective values along a sequence of feasible points $(k^j_s),(p^j_s)$ for $j=1,2,\ldots$. By the diagonal argument (passing to a subsequence) we can assume that for every $s$ the sequences $k^j_s,p^j_s$ are monotone.
Observe that if $p^j_s\to \infty$ as $j\to\infty$, then the limiting objective value and the feasibility do not change when we replace $p^j_s$ by $0$; this is because
$p^2\mathrm{e}^{-np},p^3\mathrm{e}^{-np}\to 0$ as $p\to\infty$. We can thus assume that $(p^j_s)$ are bounded. Suppose now that $k^j_s\to \infty$ for some $s$ as $j\to\infty$. Then it must be that $p^j_s\to 0$, because $k_sp_s\leqslant 1$; then the limiting objective value and the feasibility do not change when we replace $k^j_s$ by $0$ for all $j$. We can thus assume that $k_s^j$ are also bounded. Since $p^j_s$ and $k^j_s$ are monotone and bounded, they converge to finite values $p^{*}_s,k^{*}_s$. By continuity, the sequence of objective values converges to the objective at $p^{*}_s,k^{*}_s$.
Furthermore, these values are feasible, as the constraint set is closed. This shows that the supremum is achieved.
\subsubsection{Solving Relaxation}
Note that the constraints are bi-linear; thus we can apply the KKT conditions with respect to $p_s$ at the optimal $k_s$, and with respect to $k_s$ at the optimal $p_s$. At the optimal solution $(p_s),(k_s)$, for every $s$ such that $p_s k_s > 0$, the value $u=p_s$ solves the system of equations
\begin{align}\label{eq:system2}
\begin{aligned}
g(u) &= \lambda u + \mu \\
\frac{\partial g}{\partial u}(u) &= \lambda
\end{aligned},
\end{align}
where we introduced the auxiliary function
\begin{align}
g(u)\triangleq (u^3-A u^2)\mathrm{e}^{-n u}
\end{align}
with $A = 2\sum_s k_s p_s^2\mathrm{e}^{-np_s}$. The first equation in \eqref{eq:system2} is the KKT condition for $(k_s)$ divided by $n$, and the second equation in \eqref{eq:system2} is the KKT condition for $(p_s)$ divided by $n$. Geometrically, \eqref{eq:system2} means that the line $u\to \lambda u + \mu$ is tangent to the plot of $g$ at $u$.
We will now argue that this cannot happen at \emph{distinct} values of $u$, so the system has at most one solution.
This is best seen in \Cref{fig:function2}, but the formal argument is rather subtle. Suppose that \eqref{eq:system2} has two solutions $u_1<u_2$.
The derivative $\frac{\partial g}{\partial u}(u)=-\mathrm{e}^{-n u} u (A (2 - n u) + u (-3 + n u))$ has 3 roots which split the domain into
intervals $I_1,I_2,I_3$ such that $g$ is decreasing on $I_1$, increasing on $I_2$ and decreasing on $I_3$; the endpoints are the roots
$0$, $\frac{3 + A n - \sqrt{9 - 2 A n + A^2 n^2}}{2 n}$, $\frac{3 + A n + \sqrt{9 - 2 A n + A^2 n^2}}{2 n}$. We then claim that on every interval $I_j$
the derivative takes every value at most twice. Otherwise, one of the intervals contains two roots of the second derivative
$\frac{\partial^2 g}{\partial u^2}$, by \Cref{lemma:mean_val_thm} (applied to the repeating values); since every interval contains at least one root of the second derivative, by \Cref{lemma:mean_val_thm} applied to the endpoints (for $I_3$, we use $+\infty$ as the right endpoint), in total this gives 4 roots.
On the other hand, the roots of $\frac{\partial^2 g}{\partial u^2}$ satisfy a polynomial equation of degree 3, which has at most 3 roots; a contradiction.
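The shape of $g$ used in this argument is easy to check numerically. The following sketch (plain Python, with $n=10$ and $A=0.2$ as in \Cref{fig:function2}) verifies the decreasing/increasing/decreasing pattern on $I_1,I_2,I_3$; it is only an illustration, not part of the proof.

```python
import math

# Monotonicity pattern of g(u) = (u^3 - A u^2) e^{-n u} used in the
# uniqueness argument; n = 10, A = 0.2 as in the figure.
n, A = 10, 0.2

def g(u):
    return (u**3 - A * u**2) * math.exp(-n * u)

# Roots of g' given in the text: 0 and (3 + A n +/- sqrt(9 - 2 A n + A^2 n^2)) / (2 n).
disc = math.sqrt(9 - 2 * A * n + (A * n) ** 2)
r1, r2 = (3 + A * n - disc) / (2 * n), (3 + A * n + disc) / (2 * n)

def monotone(a, b, increasing, steps=1000):
    # check strict monotonicity of g on a grid over [a, b]
    xs = [a + (b - a) * i / steps for i in range(steps + 1)]
    ys = [g(x) for x in xs]
    if increasing:
        return all(y2 > y1 for y1, y2 in zip(ys, ys[1:]))
    return all(y2 < y1 for y1, y2 in zip(ys, ys[1:]))

print(r1, r2)                                  # 0.1 and 0.4 for these parameters
print(monotone(0.001, r1, increasing=False))   # g decreasing on I_1
print(monotone(r1, r2, increasing=True))       # g increasing on I_2
print(monotone(r2, 5.0, increasing=False))     # g decreasing on I_3
```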
\begin{figure}
\resizebox{0.7\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[
axis lines*=left,
xlabel = $u$,
ylabel = {$g(u)$},
yticklabel style={/pgf/number format/fixed}
]
\addplot [
domain= 0:1.5,
samples=100,
color=red,
]
{(x^3-0.2*x^2)*exp(-10*x)};
\addlegendentry{$(u^3-Au^2)\mathrm{e}^{-n u}$}
\end{axis}
\end{tikzpicture}
}
\caption{The auxiliary function $g(u)=(u^3-Au^2)\mathrm{e}^{-nu}$, for $A\in (0,1)$ and integer $n>1$. In this example $n=10$ and $A=0.2$.}
\label{fig:function2}
\end{figure}
Thus, for the optimal solution of \eqref{eq:main2_program_compact_relaxed}
$k_s p_s \not=0$ for only one $s$. In other words, the program is equivalent to:
\begin{align}\label{eq:main2_program_compact_relaxed_2d}
\begin{aligned}
\max_{p,k} &&
-n k^2 p^4\mathrm{e}^{-2n p} + n k p^3\mathrm{e}^{-np} \\
\textrm{s.t.} &&
\begin{cases}
k p \leqslant 1\\
0\leqslant k\leqslant m \\
0 \leqslant p.
\end{cases}
\end{aligned}
\end{align}
\subsubsection{Relaxation Gap}
Consider the optimal solution $(p,k)$ for \eqref{eq:main2_program_compact_relaxed_2d}. We will slightly modify it
so that it is feasible for \eqref{eq:main_program_compact}, but
the relaxed objective does not change much. Define:
\begin{align}\label{eq:best_construction}
\begin{aligned}
k_1 = \min\{\lceil k \rceil,m-1\},\ p_1 = p \\
k_2=1,\ p_2 = 1-k_1p_1\\
k_3,k_4,p_3,p_4 = 0.
\end{aligned}
\end{align}
Consider the following quantities:
\begin{align}
\begin{aligned}
P' & = -n\left(\sum_s k_sp_s^2\mathrm{e}^{-np_s}\right)^2 + n\sum_s k_sp_s^3\mathrm{e}^{-np_s} \\
P & = -n ( kp^2\mathrm{e}^{-np})^2 + n k p^3\mathrm{e}^{-np}
\end{aligned}.
\end{align}
We then have:
\begin{align}
\begin{aligned}
\Delta_1 &\triangleq n(\sum_s k_sp_s^2\mathrm{e}^{-np_s})^2- n(k p^2 \mathrm{e}^{-np})^2 \\
& \leqslant n\left(\sum_s k_sp_s^2\mathrm{e}^{-np_s}- k p^2 \mathrm{e}^{-np}\right)\cdot O(n^{-1})\\
&= ((k_1-k) p^2\mathrm{e}^{-np} + O(n^{-2}))\cdot O(1) \\
& \leqslant O(n^{-2}).
\end{aligned}
\end{align}
where in the second line we use the identity $x^2-y^2=(x-y)(x+y)$
with $x=\sum_s k_s p_s^2\mathrm{e}^{-np_s}$, $y=kp^2\mathrm{e}^{-np}$
and the fact that $x+y=O(n^{-1})$ by \Cref{lemma:exp_sums}; in the third line we use again \Cref{lemma:exp_sums} and $|k_1-k|\leqslant O(1)$ (by the definition of $k_1$).
Similarly we obtain:
\begin{align}
\begin{aligned}
\Delta_2 &\triangleq n k p^3\mathrm{e}^{-np} - n\sum_s k_s p_s^3\mathrm{e}^{-np_s} \\
& = n (k-k_1) p^3\mathrm{e}^{-np} - O(n^{-2}) \\
& \leqslant O(n^{-2}),
\end{aligned}
\end{align}
where we used \Cref{lemma:exp_sums} (twice) and $|k_1-k|\leqslant O(1)$.
Therefore, $ P- P' = \Delta_1 + \Delta_2 \leqslant O(n^{-2})$;
since also $ P' \leqslant P $ (by the relaxation), we finally obtain:
\begin{align}
|P - P'| \leqslant O(n^{-2}).
\end{align}
\subsubsection{Concluding Result}
By the change of variables $w = kp$, $c = pn$ and $b\triangleq\frac{m}{n}$ we can rewrite \eqref{eq:main2_program_compact_relaxed_2d} as follows:
\begin{align}\label{eq:simple_2d}
\begin{aligned}
\max_{w,c} &&
\frac{- w^2 c^2\mathrm{e}^{-2c} + w c^2\mathrm{e}^{-c}}{n} \\
\textrm{s.t.} &&
\begin{cases}
0\leqslant w \leqslant 1\\
w \leqslant b c.
\end{cases}
\end{aligned}
\end{align}
This proves the first part of \Cref{thm:main_opt} (the maximal value); note that the relaxation gap of $O(n^{-2})$ with respect to the program
\eqref{eq:main_program} and the difference of $O(n^{-2})$ between this program and the variance, give the total error of $O(n^{-2})$.
For the second part (the maximizing distribution) we use the construction \eqref{eq:best_construction}, which gives the stated characterization.
\subsection{Proof of \Cref{cor:phase}}
Consider the program \eqref{eq:simple_2d}. The maximum is achieved,
as inherited from \eqref{eq:main2_program_compact_relaxed} (it also follows directly: the objective is non-negative and goes to zero as $c\to \infty$, uniformly in $w$); in fact it is achieved for some $w,c>0$ (there the objective, under the constraints, is positive, while for $wc=0$ it equals zero).
We first note that the optimal solution must lie on the boundary. Otherwise, if $0<w<1$ and $0<w<bc$, the first-order condition with respect to $w$ gives
$w=\frac{\mathrm{e}^c}{2}$,
and the first-order condition with respect to $c$ gives $\frac{1}{n} c \mathrm{e}^{-2 c} w \left(2 (c -1)w -(c - 2) \mathrm{e}^c \right)=0$, hence
$2 (c -1)w -(c - 2) \mathrm{e}^c =0$; substituting $w=\frac{\mathrm{e}^c}{2}$ leads to the contradiction $c-1=c-2$.
Thus our optimal value equals $\frac{1}{n}$ times the maximum of the following two cases which determine the constant $\alpha$: either
\begin{align}\label{case:1}
\max_{c\geqslant \frac{1}{b}} -c^2\mathrm{e}^{-2c}+c^2\mathrm{e}^{-c}
\end{align}
or
\begin{align}\label{case:2}
\max_{0<c\leqslant \frac{1}{b}} -b^2 c^4 \mathrm{e}^{-2c}+b c^3\mathrm{e}^{-c}.
\end{align}
For numerical optimization below we used the software \cite{2020SciPy-NMeth}.
Let $g(c)=-c^2\mathrm{e}^{-2c}+c^2\mathrm{e}^{-c}$; this function has a unique maximizer $c^{*}\approx 2.26281$, is increasing for $0<c<c^{*}$ and decreasing for $c>c^{*}$.
Suppose first that $b\leqslant \frac{1}{c^{*}}$. Then \eqref{case:1} equals
$g(1/b)$, which matches the objective of \eqref{case:2} at $c=\frac{1}{b}$;
thus the optimal value of \eqref{case:1} is smaller than or equal to that of \eqref{case:2}.
Suppose that $b>\frac{1}{c^{*}}$. Then \eqref{case:1} equals $g(c^{*})$.
We will prove that \eqref{case:2} is smaller or equal to $g(c^{*})$.
This is true when $\frac{\mathrm{e}^c}{2}<1$, i.e., $c<\log 2$: then, using $bc\leqslant 1$,
\eqref{case:2} is at most $c^2\mathrm{e}^{-c}$ maximized over $c\leqslant \log 2$, which is $0.24\ldots$ and smaller than $g(c^{*})$. We can thus assume that $1\leqslant \frac{\mathrm{e}^c}{2}$;
note that the objective of \eqref{case:2} increases in $b$ when
$b\leqslant \frac{\mathrm{e}^{c}}{2c}$, and under the constraint we have
$0\leqslant b\leqslant \frac{1}{c} \leqslant \frac{\mathrm{e}^c}{2c}$. Thus,
\eqref{case:2} can be upper-bounded by setting $b=\frac{1}{c}$ in the objective and removing the constraint on $c$. This gives the upper bound $\max_c g(c) =g(c^{*})$, which finishes the proof.
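The constants in this proof are easy to reproduce numerically. The sketch below (illustrative, not part of the proof) recovers $c^{*}\approx 2.26281$ by ternary search, exploiting the unimodality of $g$, and checks the case comparison for a sample value $b=0.3<1/c^{*}$.

```python
import math

def g(c):
    # the case-1 objective g(c) = -c^2 e^{-2c} + c^2 e^{-c}
    return -c**2 * math.exp(-2 * c) + c**2 * math.exp(-c)

def argmax_ternary(f, lo, hi, iters=200):
    # ternary search; valid here because g is unimodal on (0, infinity)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

c_star = argmax_ternary(g, 0.1, 10.0)
print(round(c_star, 5))   # ~ 2.26281

def case2(b, c):
    # the case-2 objective -b^2 c^4 e^{-2c} + b c^3 e^{-c}
    return -b**2 * c**4 * math.exp(-2 * c) + b * c**3 * math.exp(-c)

# for b <= 1/c*, case 2 dominates case 1, as argued above
b = 0.3
case1_val = g(1 / b)   # maximum of case 1, attained at c = 1/b since 1/b > c*
case2_val = max(case2(b, j / 1000) for j in range(1, int(1000 / b) + 1))
print(case2_val >= case1_val)   # True
```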
\section{Conclusion}
This work determines the \emph{exact} behavior of the missing mass variance in the extreme case. The result can be used to benchmark concentration bounds and to understand no-go regimes. The optimization techniques can possibly be generalized to obtain similar characterizations for other occupancy numbers, such as the small counts recently studied in~\cite{battiston2021consistent}.
\newpage
\bibliographystyle{IEEEtran}
| {
"timestamp": "2021-04-16T02:00:07",
"yymm": "2104",
"arxiv_id": "2104.07028",
"language": "en",
"url": "https://arxiv.org/abs/2104.07028",
"abstract": "The missing mass refers to the probability of elements not observed in a sample, and since the work of Good and Turing during WWII, has been studied extensively in many areas including ecology, linguistic, networks and information theory.This work determines what is the \\emph{maximal variance of the missing mass}, for any sample and alphabet sizes. The result helps in understanding the missing mass concentration properties.",
"subjects": "Information Theory (cs.IT); Statistics Theory (math.ST)",
"title": "On Missing Mass Variance",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540680555949,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7099046654280888
} |
https://arxiv.org/abs/2205.07346 | Optimal Error-Detecting Codes for General Asymmetric Channels via Sperner Theory | Several communication models that are of relevance in practice are asymmetric in the way they act on the transmitted "objects". Examples include channels in which the amplitudes of the transmitted pulses can only be decreased, channels in which the symbols can only be deleted, channels in which non-zero symbols can only be shifted to the right (e.g., timing channels), subspace channels in which the dimension of the transmitted vector space can only be reduced, unordered storage channels in which the cardinality of the stored (multi)set can only be reduced, etc. We introduce a formal definition of an asymmetric channel as a channel whose action induces a partial order on the set of all possible inputs, and show that this definition captures all the above examples. Such a general approach allows one to treat all these different models in a unified way, and to obtain a characterization of optimal error-detecting codes for many interesting asymmetric channels by using Sperner theory. | \section{Introduction}
\label{sec:intro}
Several important channel models possess an intrinsic asymmetry in the
way they act on the transmitted ``objects''.
A classical example is the binary $ \mathsf{Z} $-channel in which the
transmitted $ 1 $'s may be received as $ 0 $'s, but not vice versa.
In the present article we formalize the notion of an asymmetric channel
by using order theory, and illustrate that the given definition captures
this and many more examples.
Our main aim is twofold:
\begin{inparaenum}
\item[1)]
to introduce a framework that enables one to treat many different kinds
of asymmetric channels in a unified way, and
\item[2)]
to demonstrate its usefulness and meaningfulness through examples.
In particular, the usefulness of the framework is illustrated through a
characterization of optimal error-detecting codes for a broad class of
asymmetric channels (for all channel parameters), which is obtained by
using Kleitman's theorem on posets satisfying the so-called LYM inequality.
\end{inparaenum}
The framework and its consequences will be further developed in an extended
version of this paper.
\subsection{Communication channels}
\label{sec:channels}
\begin{definition}
\label{def:channel}
Let $ \myX, \myY $ be nonempty sets.
A communication channel on $ (\myX, \myY) $ is a subset
$ \myK \subseteq \myX \times \myY $ satisfying
$ \forall x \in \myX \; \exists y \in \myY \; (x,y) \in \myK $ and
$ \forall y \in \myY \; \exists x \in \myX \; (x,y) \in \myK $.
We also use the notation $ {x \myarrow y} $, or simply $ x \rightsquigarrow y $
when there is no risk of confusion, for $ (x,y) \in \myK $.
For a given channel $ \myK \subseteq \myX \times \myY $, we define its
dual channel as $ \myK^\textnormal{d} = \{ (y, x) : (x, y) \in \myK \} $.
\end{definition}
Note that we describe communication channels purely in combinatorial terms,
as \emph{relations} in Cartesian products $ \myX \times \myY $.
Here $ \myX $ is thought of as the set of all possible inputs, and $ \myY $
as the set of all possible outputs of the channel.
The expression $ x \rightsquigarrow y $ means that the input $ x $ can
produce the output $ y $ with positive probability.
We do not assign particular values of probabilities to each pair
$ (x,y) \in \myK $ as they are irrelevant for the problems that we
intend to discuss.
\subsection{Partially ordered sets}
\label{sec:posets}
In what follows, we shall use several notions from order theory, so we
recall the basics here \cite{engel, stanley}.
A partially ordered set (or poset) is a set $ \myU $ together with a
relation $ \preceq $ satisfying, for all $ x, y, z \in \myU $:
\begin{inparaenum}
\item[1)]
reflexivity:
$ x \preceq x $,
\item[2)]
asymmetry (or antisymmetry):
if $ x \preceq y $ and $ y \preceq x $, then $ x = y $,
\item[3)]
transitivity:
if $ x \preceq y $ and $ y \preceq z $, then $ x \preceq z $.
\end{inparaenum}
Two elements $ x, y \in \myU $ are said to be comparable if either
$ x \preceq y $ or $ y \preceq x $.
They are said to be incomparable otherwise.
A chain in a poset $ (\myU, \preceq) $ is a subset of
$ \myU $ in which any two elements are comparable.
An antichain is a subset of $ \myU $ any two distinct elements of which
are incomparable.
A function $ \rho: \myU \to \mathbb{N} $ is called a rank function if
$ \rho(y) = \rho(x) + 1 $ whenever $ y $ covers $ x $, meaning that
$ x \preceq y $, $ x \neq y $, and there is no $ y' \in \myU \setminus \{x, y\} $ such that $ x \preceq y' \preceq y $.
A poset with a rank function is called graded.
In a graded poset with rank function $ \rho $ we denote
$ \myU_{[\underline{\ell}, \overline{\ell}]} =
\{ x \in \myU : \underline{\ell} \leqslant \rho(x) \leqslant \overline{\ell} \} $,
and we also write $ \myU_\ell = \myU_{[\ell,\ell]} $ (here the rank function
$ \rho $ is omitted from the notation as it is usually understood from the
context).
Hence, $ \myU = \bigcup_\ell \myU_\ell $.
A graded poset is said to have Sperner property if $ \myU_\ell $ is an antichain
of maximum cardinality in $ (\myU, \preceq) $, for some $ \ell $.
A poset is called rank-unimodal if the sequence $ |\myU_\ell| $ is unimodal
(i.e., an increasing function of $ \ell $ when $ \ell \leqslant \ell' $,
and decreasing when $ \ell \geqslant \ell' $, for some $ \ell' $).
We say that a graded poset $ (\myU, \preceq) $ possesses the LYM
(Lubell--Yamamoto--Meshalkin) property \cite{kleitman} if there exists a
nonempty list of maximal chains such that, for any $ \ell $, each of the
elements of rank $ \ell $ appears in the same number of chains.
In other words, if there are $ L $ chains in the list, then each element
of rank $ \ell $ appears in $ L/|\myU_\ell| $ of the chains.
We shall call a poset \emph{normal} if it satisfies the LYM property,
see \cite[Sec.~4.5 and Thm 4.5.1]{engel}.
A simple sufficient condition for a poset to be normal is that it be
regular \cite[Cor.~4.5.2]{engel}, i.e., that both the number of elements that
cover $ x $ and the number of elements that are covered by $ x $ depend only
on the rank of $ x $.
In Section \ref{sec:examples} we shall see that many standard examples of
posets, including the Boolean lattice, the subspace lattice, the Young's
lattice, chain products, etc., arise naturally in the analysis of
communications channels.
\section{General asymmetric channels and error-detecting codes}
\label{sec:asymmetric}
In this section we give a formal definition of asymmetric channels and
the corresponding codes which unifies and generalizes many scenarios
analyzed in the literature.
We assume hereafter that the sets of all possible channel inputs and all
possible channels outputs are equal, $ \myX = \myY $.
For a very broad class of communication channels, the relation
$ \rightsquigarrow $ is reflexive, i.e., $ x \rightsquigarrow x $ (any
channel input can be received unimpaired, in case there is no noise), and
transitive, i.e., if $ x \rightsquigarrow y $ and $ y \rightsquigarrow z $,
then $ x \rightsquigarrow z $ (if there is a noise pattern that transforms
$ x $ into $ y $, and a noise pattern that transforms $ y $ into $ z $,
then there is a noise pattern -- a combination of the two -- that transforms
$ x $ into $ z $).
Given such a channel, we say that it is \emph{asymmetric} if the relation
$ \rightsquigarrow $ is asymmetric, i.e., if $ x \rightsquigarrow y $,
$ x \neq y $, implies that $ y \not\rightsquigarrow x $.
In other words, we call a channel asymmetric if the channel action induces
a partial order on the set of all inputs $ \myX $.
\begin{definition}
\label{def:asymmetric}
A communication channel $ \myK \subseteq \myX^2 $ is said to be asymmetric
if $ (\myX, \stackrel{\sml{\myK}}{\rightsquigarrow}) $ is a partially ordered
set.
We say that such a channel is * if the poset
$ (\myX, \stackrel{\sml{\myK}}{\rightsquigarrow}) $ is *, where * stands for
an arbitrary property a poset may have (e.g., graded, Sperner, normal, etc.).
\end{definition}
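Definition \ref{def:asymmetric} can be checked mechanically on small examples; the following sketch builds the binary $ \mathsf{Z} $-channel on $ \{0,1\}^3 $ (treated in Section \ref{sec:Z}) as a relation and verifies the poset axioms.

```python
from itertools import product

# The binary Z-channel on X = {0,1}^3: x ~> y iff y_i <= x_i for every i
# (transmitted 1s may be received as 0s, never the reverse).
X = list(product((0, 1), repeat=3))
K = {(x, y) for x in X for y in X if all(b <= a for a, b in zip(x, y))}

def leadsto(x, y):
    return (x, y) in K

reflexive = all(leadsto(x, x) for x in X)
antisymmetric = all(not (leadsto(x, y) and leadsto(y, x)) for x in X for y in X if x != y)
transitive = all(
    leadsto(x, z) for x in X for y in X for z in X if leadsto(x, y) and leadsto(y, z)
)
print(reflexive and antisymmetric and transitive)   # True: (X, ~>) is a poset

# the dual channel (Definition 1) induces a partial order as well
K_dual = {(y, x) for (x, y) in K}
print(all(not ((x, y) in K_dual and (y, x) in K_dual) for x in X for y in X if x != y))
```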
Many asymmetric channels that arise in practice, including all the examples
mentioned in this paper, are graded as there are natural rank functions that
may be assigned to them.
For a graded channel $ \myK $, we denote by
$ \myK_{[\underline{\ell}, \overline{\ell}]} =
\myK \cap \myX^2_{[\underline{\ell}, \overline{\ell}]} $
its natural restriction to inputs of rank $ \underline{\ell}, \ldots, \overline{\ell} $.
\begin{definition}
\label{def:edc}
We say that $ \bs{C} \subseteq \myX $ is a code detecting up to $ t $ errors
in a graded asymmetric channel $ \myK \subseteq \myX^2 $ if, for all $ x, y \in \C $,
\begin{align}
\label{eq:detectgen}
x \myarrow y \; \land \; x \neq y \quad \Rightarrow \quad | \rank(x) - \rank(y) | > t .
\end{align}
We say that $ \bs{C} \subseteq \myX $ detects \emph{all} error patterns in
an asymmetric channel $ \myK \subseteq \myX^2 $ if, for all $ x, y \in \C $,
\begin{align}
\label{eq:detectgen2}
x \myarrow y \quad \Rightarrow \quad x = y .
\end{align}
\end{definition}
In words, $ \bs{C} $ detects all error patterns in a given asymmetric channel
if no element of $ \C $ can produce another element of $ \C $ at the channel
output.
If this is the case, the receiver will easily recognize whenever the transmission
is erroneous because the received object is not going to be a valid codeword
which could have been transmitted.
If the channel is graded, it is easy to see that the condition \eqref{eq:detectgen2}
is satisfied if and only if the condition \eqref{eq:detectgen} holds for any $ t $.
Yet another way of saying that $ \C $ detects all error patterns is the following.
\begin{proposition}
\label{thm:edc}
$ \C \subseteq \myX $ detects all error patterns in an asymmetric channel
$ \myK \subseteq \myX^2 $ if and only if $ \C $ is an antichain in the
corresponding poset $ (\myX, \stackrel{\sml{\myK}}{\rightsquigarrow}) $.
\end{proposition}
A simple example of an antichain, and hence of a code detecting all error
patterns in a graded asymmetric channel, is the level set $ \myX_\ell $,
for an arbitrary $ \ell $.
\begin{definition}
\label{def:optimal}
We say that $ \C \subseteq \myX $ is an optimal code detecting up to
$ t $ errors (resp. all error patterns) in a graded asymmetric channel
$ \myK \subseteq \myX^2 $ if there is no code of cardinality larger than
$ |\C| $ that satisfies \eqref{eq:detectgen} (resp. \eqref{eq:detectgen2}).
\end{definition}
Hence, an optimal code detecting all error patterns in an asymmetric channel
$ \myK \subseteq \myX^2 $ is an antichain of maximum cardinality in the poset
$ (\myX, \stackrel{\sml{\myK}}{\rightsquigarrow}) $.
Channels in which the code $ \myX_\ell $ is optimal, for some $ \ell $, are
called Sperner channels.
All channels treated in this paper are Sperner.
An example of an error-detecting code, of which the code $ \myX_\ell $ is
a special case (obtained for $ t \to \infty $), is given in the following
proposition.
\begin{proposition}
\label{thm:tedc}
Let $ \myK \subseteq \myX^2 $ be a graded asymmetric channel, and
$ (\ell_n)_n $ a sequence of integers satisfying $ \ell_n - \ell_{n-1} > t $,
$ \forall n $.
The code $ \C = \bigcup_{n} \myX_{\ell_n} $
detects up to $ t $ errors in $ \myK $.
\end{proposition}
If the channel is normal, an optimal code detecting up to $ t $ errors is
of the form given in Proposition~\ref{thm:tedc}.
We state this fact for channels which are additionally rank-unimodal, as
this is the case that is most common.
\begin{theorem}
\label{thm:optimal}
Let $ \myK \subseteq \myX^2 $ be a normal rank-unimodal asymmetric channel.
The maximum cardinality of a code detecting up to $ t $ errors in
$ \myK_{[\underline{\ell}, \overline{\ell}]} $ is given by
\begin{equation}
\label{eq:maxsumgen}
\max_{m} \sum^{\overline{\ell}}_{\substack{ \ell=\underline{\ell} \\ \ell \, \equiv \, m \; (\operatorname{mod}\, t+1) } } |\myX_\ell| .
\end{equation}
\end{theorem}
\begin{IEEEproof}
This is essentially a restatement of the result of Kleitman~\cite{kleitman}
(see also \cite[Cor.~4.5.4]{engel}) which states
that, in a finite normal poset $ ( \myU, \preceq ) $, the largest cardinality
of a family $ \C \subseteq \myU $ having the property that, for all distinct
$ x, y \in \C $, $ x \preceq y $ implies that $ \rank(y) - \rank(x) > t $, is
$ \max_F \sum_{x \in F} |\myU_{\rank(x)}| $.
The maximum here is taken over all chains $ F = \{x_1, x_2, \ldots, x_c\} $
satisfying $ x_1 \preceq x_2 \preceq \cdots \preceq x_c $ and
$ \rank(x_{i+1}) - \rank(x_i) > t $ for $ i = 1, 2, \ldots, c-1 $, and all
$ c = 1, 2, \ldots $.
If the poset $ ( \myU, \preceq ) $ is in addition rank-unimodal, then it is
easy to see that the maximum is attained for a chain $ F $ satisfying
$ \rank(x_{i+1}) - \rank(x_i) = t + 1 $ for $ i = 1, 2, \ldots, c-1 $, and
that the maximum cardinality of a family $ \C $ having the stated property
can therefore be written in the simpler form
\begin{equation}
\label{eq:maxsumgen2}
\max_{m} \sum_{\ell \, \equiv \, m \; (\operatorname{mod}\, t+1)} |\myU_\ell| .
\end{equation}
Finally, \eqref{eq:maxsumgen} follows by recalling that the restriction
$ ( \myU_{[\underline{\ell}, \overline{\ell}]}, \preceq ) $ of a normal
poset $ ( \myU, \preceq ) $ is normal \cite[Prop. 4.5.3]{engel}.
\end{IEEEproof}
\vspace{2mm}
An optimal value of $ m $ in \eqref{eq:maxsumgen} can be determined explicitly
in many concrete examples.
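For a small sanity check of Theorem \ref{thm:optimal}, the sketch below compares \eqref{eq:maxsumgen} with an exhaustive search over the Boolean lattice $ 2^{\{1,\ldots,4\}} $ (the channel of Section \ref{sec:subset}) for $ t = 1 $; both give $ 8 $.

```python
from math import comb

n, t = 4, 1

# the right-hand side of the theorem for the Boolean lattice 2^{[n]},
# where |X_l| = binom(n, l)
formula = max(
    sum(comb(n, l) for l in range(n + 1) if l % (t + 1) == m)
    for m in range(t + 1)
)

# brute force: a family C detects up to t errors iff comparable distinct
# members differ in rank by more than t (subsets encoded as bitmasks)
def detects(C):
    return all(
        not ((a & b) == b and a != b) or bin(a).count("1") - bin(b).count("1") > t
        for a in C for b in C
    )

best = 0
for mask in range(2 ** (2 ** n)):
    C = [e for e in range(2 ** n) if mask >> e & 1]
    if len(C) > best and detects(C):
        best = len(C)
print(formula, best)   # 8 8
```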
We conclude this section with the following claim which enables one to
directly apply the results pertaining to a given asymmetric channel to
its dual.
\begin{proposition}
\label{thm:dual}
A channel $ \myK \subseteq \myX^2 $ is asymmetric if and only if its dual
$ \myK^\textnormal{d} $ is asymmetric.
A code $ \bs{C} \subseteq \myX $ detects up to $ t $ errors in $ \myK $ if
and only if it detects up to $ t $ errors in $ \myK^\textnormal{d} $.
\end{proposition}
\section{Examples of asymmetric channels}
\label{sec:examples}
In this section we list several examples of communication channels that
have been analyzed in the literature in different contexts and that are
asymmetric in the sense of Definition \ref{def:asymmetric}.
For each of them, a characterization of optimal error-detecting codes is
given based on Theorem~\ref{thm:optimal}.
\subsection{Codes in power sets}
\label{sec:subset}
Consider a communication channel with $ \myX = \myY = 2^{\{1,\ldots,n\}} $
and with $ A \rightsquigarrow B $ if and only if $ B \subseteq A $, where
$ A, B \subseteq \{1, \ldots, n\} $.
Codes defined in the power set $ 2^{\{1,\ldots,n\}} $ were proposed in
\cite{gadouleau+goupil2, kovacevic+vukobratovic_clet} for error correction
in networks that randomly reorder the transmitted packets (where the set
$ \{1,\ldots,n\} $ is identified with the set of all possible packets), and
are also of interest in scenarios where data is written in an unordered way,
such as DNA-based data storage systems \cite{lenz}.
Our additional assumption here is that the received set is always a subset
of the transmitted set, i.e., the noise is represented by ``set reductions''.
These kinds of errors may be thought of as consequences of packet losses/deletions.
Namely, if $ t $ packets from the transmitted set $ A $ are lost in the
channel, then the received set $ B $ will be a subset of $ A $ of cardinality
$ |A| - t $.
We are interested in codes that are able to detect up to $ t $ packet
deletions, i.e., codes having the property that if $ B \subsetneq A $,
$ |A| - |B| \leqslant t $, then $ A $ and $ B $ cannot both be codewords.
It is easy to see that the above channel is asymmetric in the sense of
Definition \ref{def:asymmetric}; the ``asymmetry'' in this model is
reflected in the fact that the cardinality of the transmitted set can
only be reduced.
The poset $ ( \myX, \rightsquigarrow ) $ is the so-called Boolean lattice
\cite[Ex.~1.3.1]{engel}.
The rank function associated with it is the set cardinality:
$ \rank(A) = |A| $, for any $ A \subseteq \{1, \ldots, n\} $.
This poset is rank-unimodal, with $ |\myX_\ell| = \binom{n}{\ell} $, and
normal \cite[Ex.~4.6.1]{engel}.
By applying Theorem~\ref{thm:optimal} we then obtain the maximum cardinality
of a code $ \C \subseteq 2^{\{1,\ldots,n\}} $ detecting up to $ t $ deletions.
Furthermore, an optimal value of $ m $ in \eqref{eq:maxsumgen} can be
found explicitly in this case.
This claim was first stated by Katona~\cite{katona} in a different terminology.
\begin{theorem}
\label{thm:subset}
The maximum cardinality of a code $ \C \subseteq 2^{\{1,\ldots,n\}} $
detecting up to $ t $ deletions is
\begin{equation}
\label{eq:maxsumsets}
\sum^n_{\substack{ \ell=0 \\ \ell \, \equiv \, \lfloor \frac{n}{2} \rfloor \; (\operatorname{mod}\, t+1) } }
\binom{n}{\ell}
\end{equation}
\end{theorem}
Setting $ t \to \infty $ (in fact, $ t > \lceil n/2 \rceil $ is sufficient),
we conclude that the maximum cardinality of a code detecting any number
of deletions is $ \binom{n}{\lfloor n/2 \rfloor} = \binom{n}{\lceil n/2 \rceil} $.
This is a restatement of the well-known Sperner's theorem \cite{sperner},
\cite[Thm 1.1.1]{engel}.
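The expression \eqref{eq:maxsumsets} is easy to evaluate; a small sketch (illustrative) shows how it collapses to the central binomial coefficient once $ t $ is large, recovering Sperner's theorem.

```python
from math import comb

def katona(n, t):
    # maximum size of a code in 2^{[n]} detecting up to t deletions
    r = (n // 2) % (t + 1)
    return sum(comb(n, l) for l in range(n + 1) if l % (t + 1) == r)

print([katona(6, t) for t in range(1, 7)])   # [32, 22, 20, 20, 20, 20]
# for t large only the middle level survives: binom(6, 3) = 20
```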
For the above channel, its dual (see Definition~\ref{def:channel}) is the
channel with $ \myX = 2^{\{1, \ldots, n\}} $ in which $ A \rightsquigarrow B $
if and only if $ B \supseteq A $.
This kind of noise, ``set augmentation'', may be thought of as a consequence
of packet insertions.
Proposition~\ref{thm:dual} implies that \eqref{eq:maxsumsets} is also the
maximum cardinality of a code $ \C \subseteq \myX $ detecting up to $ t $
insertions.
\subsection{Codes in the space of multisets}
\label{sec:multiset}
A natural generalization of the model from the previous subsection,
also motivated by unordered storage or random permutation channels,
is obtained by allowing repetitions of symbols, i.e., by allowing
the codewords to be \emph{multisets} over a given alphabet \cite{multiset}.
A multiset $ A $ over $ \{1, \ldots, n\} $ can be uniquely described by its
multiplicity vector\linebreak $ \mu_A = (\mu_A(1), \ldots, \mu_A(n)) \in \N^n $,
where $ \N = \{0, 1, \ldots\} $.
Here $ \mu_A(i) $ is the number of occurrences of the symbol $ i \in \{1, \ldots, n\} $
in $ A $.
We again consider the deletion channel in which $ A \rightsquigarrow B $
if and only if $ B \subseteq A $ or, equivalently, if $ \mu_B \leqslant \mu_A $
(coordinatewise).
If we agree to use the multiplicity vector representation of multisets, we may
take $ \myX = \myY = \N^n $.
It is easy to see that the channel just described is asymmetric in the
sense of Definition~\ref{def:asymmetric}.
The rank function associated with the poset $ {(\myX, \rightsquigarrow)} $ is
the multiset cardinality: $ \rank(A) = \sum_{i=1}^n \mu_A(i) $.
We have $ |\myX_\ell| = \binom{\ell + n - 1}{n - 1} $.
The following claim is a multiset analog of Theorem~\ref{thm:subset}.
\begin{theorem}
\label{thm:multiset}
The maximum cardinality of a code $ \C \subseteq \myX_{[\underline{\ell}, \overline{\ell}]} $,
$ \myX = \N^n $, detecting up to $ t $ deletions is
\begin{align}
\label{eq:Mcodesize}
\sum^{\lfloor \frac{\overline{\ell} - \underline{\ell}}{t+1} \rfloor}_{i=0}
\binom{\overline{\ell} - i (t+1) + n - 1}{n - 1} .
\end{align}
\end{theorem}
\begin{IEEEproof}
The poset $ (\myX, \rightsquigarrow) $ is normal as it is a product of
chains \cite[Ex.~4.6.1]{engel}.
We can therefore apply Theorem~\ref{thm:optimal}.
Furthermore, since $ |\myX_\ell| = \binom{\ell + n - 1}{n - 1} $ is a
monotonically increasing function of $ \ell $, the optimal choice of $ m $
in \eqref{eq:maxsumgen} is $ \overline{\ell} $, which implies \eqref{eq:Mcodesize}.
\end{IEEEproof}
\vspace{2mm}
The dual channel is the channel in which $ A \rightsquigarrow B $ if and
only if $ B \supseteq A $, i.e., $ \mu_B \geqslant \mu_A $.
These kinds of errors -- multiset augmentations -- may be caused by
insertions or duplications.
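Theorem \ref{thm:multiset} can be sanity-checked by exhaustive search for tiny parameters; the sketch below does so for $ n = 2 $ symbols, $ \underline{\ell} = 0 $, $ \overline{\ell} = 3 $, $ t = 1 $ (both counts equal $ 6 $).

```python
from itertools import product, combinations
from math import comb

n_sym, lo, hi, t = 2, 0, 3, 1

# the closed-form count from the theorem
formula = sum(
    comb(hi - i * (t + 1) + n_sym - 1, n_sym - 1)
    for i in range((hi - lo) // (t + 1) + 1)
)

# brute force over all families of multiplicity vectors with rank in [lo, hi]
X = [mu for mu in product(range(hi + 1), repeat=n_sym) if lo <= sum(mu) <= hi]

def detects(C):
    # comparable distinct multisets must differ in rank by more than t
    return all(
        not (a != b and all(y <= x for x, y in zip(a, b))) or sum(a) - sum(b) > t
        for a in C for b in C
    )

best = max(len(C) for r in range(len(X) + 1) for C in combinations(X, r) if detects(C))
print(formula, best)   # 6 6
```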
\subsection{Codes for the binary $ \mathsf{Z} $-channel and its generalizations}
\label{sec:Z}
Another interpretation of Katona's theorem \cite{katona} in the coding-theoretic
context, easily deduced by identifying subsets of $ \{1, \ldots, n\} $
with sequences in $ \{0, 1\}^n $, is the following: the expression in
\eqref{eq:maxsumsets} is the maximum size of a binary code of length
$ n $ detecting up to $ t $ asymmetric errors (i.e., errors of the form
$ 1 \to 0 $) \cite{borden}.
By using Kleitman's result \cite{kleitman}, Borden \cite{borden} also
generalized this statement and described optimal codes over arbitrary
alphabets detecting $ t $ asymmetric errors.
To describe the channel in more precise terms, we take
$ \myX = \myY = \{0, 1, \ldots, a-1\}^n $ and we let
$ (x_1, \ldots, x_n) \rightsquigarrow (y_1, \ldots, y_n) $ if and only
if $ y_i \leqslant x_i $ for all $ i = 1, \ldots, n $.
This channel is asymmetric and the poset $ (\myX, \rightsquigarrow) $
is normal \cite[Ex.~4.6.1]{engel}.
The appropriate rank function here is the Manhattan weight:
$ \rank(x_1, \ldots, x_n) = \sum_{i=1}^n x_i $.
In the binary case ($ {a = 2} $), this channel is called the $ \mathsf{Z} $-channel
and the Manhattan weight coincides with the Hamming weight.
Let $ c(N, M, \ell) $ denote the number of \emph{compositions} of the
number $ \ell $ with $ M $ non-negative parts, each part being $ \leqslant\! N $
\cite[Sec.~4.2]{andrews}.
In other words, $ c(N, M, \ell) $ is the number of vectors from
$ \{0, 1, \ldots, N\}^M $ having Manhattan weight $ \ell $.
Restricted integer compositions are well-studied objects;
for an explicit expression for $ c(N, M, \ell) $, see \cite[p.~307]{stanley}.
\begin{theorem}[Borden \cite{borden}]
\label{thm:Z}
The maximum cardinality of a code $ \C \subseteq \{0, 1, \ldots, a-1\}^n $
detecting up to $ t $ asymmetric errors is
\begin{align}
\label{eq:Zcode}
\sum^{n(a-1)}_{\substack{ \ell=0 \\ \ell \, \equiv \, \lfloor \frac{n(a-1)}{2} \rfloor \; (\operatorname{mod}\, t+1) }}
c(a-1, n, \ell) .
\end{align}
\end{theorem}
The channel dual to the one described above is the channel in which
$ (x_1, \ldots, x_n) \rightsquigarrow (y_1, \ldots, y_n) $ if and only
if $ y_i \geqslant x_i $ for all $ i = 1, \ldots, n $.
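The quantity $ c(N, M, \ell) $ and the bound of Theorem \ref{thm:Z} are easy to compute; the sketch below uses a simple dynamic program for restricted compositions, checks that the binary case reduces to \eqref{eq:maxsumsets}, and verifies the ternary case $ a = 3 $, $ n = 2 $, $ t = 1 $ against brute force.

```python
from itertools import product, combinations
from math import comb

def c_parts(N, M, L):
    # number of vectors in {0,...,N}^M with coordinate sum L, i.e. compositions
    # of L into M non-negative parts each at most N, by dynamic programming
    row = [1] + [0] * L
    for _ in range(M):
        row = [sum(row[l - j] for j in range(min(N, l) + 1)) for l in range(L + 1)]
    return row[L]

def borden(a, n, t):
    # the maximum code size of the theorem for the alphabet {0,...,a-1}
    top = n * (a - 1)
    m = (top // 2) % (t + 1)
    return sum(c_parts(a - 1, n, l) for l in range(top + 1) if l % (t + 1) == m)

# binary case reduces to Katona's formula: c(1, n, l) = binom(n, l)
print(borden(2, 5, 1))   # 16 = binom(5,0) + binom(5,2) + binom(5,4)

# ternary sanity check against brute force over {0,1,2}^2 with t = 1
X = list(product(range(3), repeat=2))
def detects(C):
    return all(
        not (x != y and all(b <= a for a, b in zip(x, y))) or sum(x) - sum(y) > 1
        for x in C for y in C
    )
best = max(len(C) for r in range(len(X) + 1) for C in combinations(X, r) if detects(C))
print(best, borden(3, 2, 1))   # 5 5
```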
\subsection{Subspace codes}
\label{sec:subspace}
Let $ \myF_q $ denote the field of $ q $ elements, where $ q $ is a prime
power, and $ \myF_q^n $ an $ n $-dimensional vector space over $ \myF_q $.
Denote by $ \myP_q(n) $ the set of all subspaces of $ \myF_q^n $
(also known as the projective space), and by $ \myG_q(n , \ell) $
the set of all subspaces of dimension $ \ell $ (also known as the Grassmannian).
The cardinality of $ \myG_q(n , \ell) $ is expressed through
the $ q $-binomial (or Gaussian) coefficients \cite[Ch.~24]{vanlint+wilson}:
\begin{align}
\label{eq:gcoeff}
\left| \myG_q(n , \ell) \right|
= \binom{n}{\ell}_{\! q}
= \prod_{i=0}^{\ell-1} \frac{ q^{n-i} - 1 }{ q^{\ell-i} - 1 } .
\end{align}
The following well-known properties of $ \binom{n}{\ell}_{\! q} $ will
be useful:
\begin{inparaenum}
\item[1)]
symmetry: $ \binom{n}{\ell}_{\! q} = \binom{n}{n-\ell}_{\! q} $, and
\item[2)]
unimodality: $ \binom{n}{\ell}_{\! q} $ is increasing in $ \ell $ for
$ \ell \leqslant \frac{n}{2} $, and decreasing for $ \ell \geqslant \frac{n}{2} $.
\end{inparaenum}
We use the convention that $ \binom{n}{\ell}_{\! q} = 0 $ when
$ \ell < 0 $ or $ \ell > n $.
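The listed properties of $ \binom{n}{\ell}_{\! q} $ can be verified numerically with exact integer arithmetic; a small sketch:

```python
def gauss_binom(n, l, q):
    # q-binomial coefficient via the product formula; the quotient is an
    # integer, so integer division is exact
    if l < 0 or l > n:
        return 0
    num = den = 1
    for i in range(l):
        num *= q ** (n - i) - 1
        den *= q ** (l - i) - 1
    return num // den

# symmetry and unimodality, checked for n = 6, q = 3
symmetric = all(gauss_binom(6, l, 3) == gauss_binom(6, 6 - l, 3) for l in range(7))
vals = [gauss_binom(6, l, 3) for l in range(7)]
unimodal = all(vals[l] <= vals[l + 1] for l in range(3)) and all(
    vals[l] >= vals[l + 1] for l in range(3, 6)
)
print(symmetric, unimodal, gauss_binom(4, 2, 2))   # True True 35
```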
Codes in $ \myP_q(n) $ were proposed in \cite{koetter+kschischang} for error
control in networks employing random linear network coding \cite{ho}, in which
case $ \myF_q^n $ corresponds to the set of all length-$ n $ packets (over a
$ q $-ary alphabet) that can be exchanged over the network links.
We consider a channel model in which the only impairments are ``dimension
reductions'', meaning that, for any given transmitted vector space
$ U \subseteq \myF_q^n $, the possible channel outputs are subspaces of $ U $.
These kinds of errors can be caused by packet losses, unfortunate choices
of the coefficients in the performed linear combinations in the network
(resulting in linearly dependent packets at the receiving side), etc.
In the notation introduced earlier, we set $ \myX = \myY = \myP_q(n) $ and
define the channel by: $ U \rightsquigarrow V $ if and only if $ V $ is a
subspace of $ U $.
This channel is asymmetric.
The poset $ (\myX, \rightsquigarrow) $ is the so-called linear lattice
(or the subspace lattice) \cite[Ex.~1.3.9]{engel}.
The rank function associated with it is the dimension of a vector space:
$ \rank(U) = \dim U $, for $ U \in \myP_q(n) $.
We have $ |\myX_\ell| = | \myG_q(n , \ell) | = \binom{n}{\ell}_{\! q} $.
The following statement may be seen as the $ q $-analog \cite[Ch.~24]{vanlint+wilson}
of Katona's theorem \cite{katona}, or of Theorem \ref{thm:subset}.
\begin{theorem}
\label{thm:subspace}
The maximum cardinality of a code $ \C \subseteq \myP_q(n) $
detecting dimension reductions of up to $ t $ is
\begin{align}
\label{eq:codesize}
\sum^n_{\substack{ \ell=0 \\ \ell \, \equiv \, \lfloor \frac{n}{2} \rfloor \; (\operatorname{mod}\, t+1) } } \binom{n}{\ell}_{\! q} .
\end{align}
\end{theorem}
\begin{IEEEproof}
The poset $ (\myP_q(n), \subseteq) $ is rank-unimodal and normal
\cite[Ex.~4.5.1]{engel} and hence, by Theorem \ref{thm:optimal},
the maximum cardinality of a code detecting dimension reductions
of up to $ t $ can be expressed in the form
\begin{subequations}
\begin{align}
\label{eq:maxsum}
&\max_{m} \sum^n_{\substack{ \ell=0 \\ \ell \, \equiv \, m \; (\operatorname{mod}\, t+1) } }
\binom{n}{\ell}_{\! q} \\
\label{eq:maxsumr}
&\;\;\;\;= \max_{r \in \{0, 1, \ldots, t\}} \; \sum_{j \in \Z} \binom{n}{\lfloor \frac{n}{2} \rfloor + r + j(t+1)}_{\! q} .
\end{align}
\end{subequations}
(Expression \eqref{eq:maxsum} was also given in \cite[Thm~7]{ahlswede+aydinian}.)
We need to show that $ m = \lfloor n / 2 \rfloor $ is a maximizer
in \eqref{eq:maxsum} or, equivalently, that $ r = 0 $ is a maximizer in
\eqref{eq:maxsumr}.
Let us assume for simplicity that $ n $ is even; the proof for odd $ n $ is similar.
What we need to prove is that the following expression is non-negative,
for any $ r \in \{1, \ldots, t\} $,
\begin{subequations}
\label{eq:j}
\begin{align}
\nonumber
&\sum_{j \in \Z} \binom{n}{\frac{n}{2} + j(t+1)}_{\! q} -
\sum_{j \in \Z} \binom{n}{\frac{n}{2} + r + j(t+1)}_{\! q} \\
\label{eq:jpos}
&\;\;\;\;= \sum_{j > 0} \binom{n}{\frac{n}{2} + j(t+1)}_{\! q} - \binom{n}{\frac{n}{2} + r + j(t+1)}_{\! q} + \\
\label{eq:jzero}
&\;\;\;\;\phantom{=}\ \binom{n}{\frac{n}{2}}_{\! q} - \binom{n}{\frac{n}{2} + r}_{\! q} - \binom{n}{\frac{n}{2} + r - (t+1)}_{\! q} + \\
\label{eq:jneg}
&\;\;\;\;\phantom{=}\ \sum_{j < 0} \binom{n}{\frac{n}{2} + j(t+1)}_{\! q} - \binom{n}{\frac{n}{2} + r + (j-1)(t+1)}_{\! q} .
\end{align}
\end{subequations}
Indeed, since the $ q $-binomial coefficients are unimodal and maximized
at $ \ell = n / 2 $, each of the summands in the sums \eqref{eq:jpos} and
\eqref{eq:jneg} is non-negative, and the expression in \eqref{eq:jzero} is
also non-negative because%
\begin{subequations}
\begin{align}
\nonumber
&\binom{n}{\frac{n}{2}}_{\! q} - \binom{n}{\frac{n}{2} + r}_{\! q} - \binom{n}{\frac{n}{2} + r - (t+1)}_{\! q} \quad \\
\label{eq:a1}
&\;\;\;\;\geqslant \binom{n}{\frac{n}{2}}_{\! q} - \binom{n}{\frac{n}{2} + 1}_{\! q} - \binom{n}{\frac{n}{2} -1}_{\! q} \\
\label{eq:a2}
&\;\;\;\;= \binom{n}{\frac{n}{2}}_{\! q} - 2 \binom{n}{\frac{n}{2} - 1}_{\! q} \\
\label{eq:a3}
&\;\;\;\;= \binom{n}{\frac{n}{2}}_{\! q} \left( 1 - 2 \frac{q^{\frac{n}{2}} - 1}{q^{\frac{n}{2} + 1} - 1} \right) \\
\label{eq:a4}
&\;\;\;\;> \binom{n}{\frac{n}{2}}_{\! q} \left( 1 - 2 \frac{1}{q} \right) \\
\label{eq:a5}
&\;\;\;\;\geqslant 0 ,
\end{align}
\end{subequations}
where \eqref{eq:a1} and \eqref{eq:a2} follow from unimodality and symmetry
of $ \binom{n}{\ell}_{\! q} $, \eqref{eq:a3} is obtained by substituting the
definition of $ \binom{n}{\ell}_{\! q} $, \eqref{eq:a4} follows from the fact
that $ \frac{\alpha-1}{\beta-1} < \frac{\alpha}{\beta} $ when $ 1 < \alpha < \beta $,
and \eqref{eq:a5} is due to $ q \geqslant 2 $.
\end{IEEEproof}
\vspace{2mm}
As a special case when $ t \to \infty $ (in fact, $ t > \lceil n/2 \rceil $
is sufficient), we conclude that the maximum cardinality of a code detecting
arbitrary dimension reductions is $ \binom{n}{\lfloor n/2 \rfloor}_{\! q} $.
In other words, $ \myG_q(n, \lfloor n/2 \rfloor) $ is an antichain of maximum
cardinality in the poset $ (\myP_q(n), \subseteq) $ (see Prop.~\ref{thm:edc}).
This is the well-known $ q $-analog of Sperner's theorem \cite[Thm 24.1]{vanlint+wilson}.
The dual channel is the channel in which $ U \rightsquigarrow V $ if and only
if $ U $ is a subspace of $ V $.
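The claim in the proof that $m = \lfloor n/2 \rfloor$ maximizes \eqref{eq:maxsum} can be confirmed numerically for small parameters. The sketch below (helper names are ours) sums the Gaussian coefficients over each residue class modulo $t+1$:

```python
from math import prod

def q_binom(n, ell, q):
    if ell < 0 or ell > n:
        return 0
    return prod(q**(n - i) - 1 for i in range(ell)) // \
           prod(q**(ell - i) - 1 for i in range(ell))

def class_size(n, q, t, m):
    """Number of subspaces of F_q^n with dimension congruent to m mod t+1."""
    return sum(q_binom(n, l, q) for l in range(n + 1)
               if (l - m) % (t + 1) == 0)

n, q, t = 6, 2, 2
sizes = [class_size(n, q, t, m) for m in range(t + 1)]
```

For $n = 6$, $q = 2$, $t = 2$ the class of $m = 3 \equiv 0 \pmod 3$ is the largest, matching Theorem \ref{thm:subspace}.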
\subsection{Codes for deletion and insertion channels}
\label{sec:deletions}
Consider the channel with
$ \myX = \myY = \{0, 1, \ldots, a-1\}^* = \bigcup_{n=0}^\infty \{0, 1, \ldots, a-1\}^n $
in which $ x \rightsquigarrow y $ if and only if $ y $ is a subsequence
of $ x $.
This is the so-called deletion channel in which the output sequence is
produced by deleting some of the symbols of the input sequence.
The channel is asymmetric in the sense of Definition~\ref{def:asymmetric}.
The rank function associated with the poset $ (\myX, \rightsquigarrow) $ is
the sequence length: for any $ x = x_1 \cdots x_\ell $, where $ x_i \in \{0, 1, \ldots, a-1\} $,
$ \rank(x) = \ell $.
We have $ |\myX_\ell| = a^\ell $.
Given that $ \myX $ is infinite, we shall formulate the following
statement for the restriction $ \myX_{[\underline{\ell}, \overline{\ell}]} $,
i.e., under the assumption that only sequences of lengths
$ \underline{\ell}, \ldots, \overline{\ell} $ are allowed as inputs.
This is a reasonable assumption from the practical viewpoint.
\begin{theorem}
The maximum cardinality of a code
$ \C \subseteq \bigcup_{\ell=\underline{\ell}}^{\overline{\ell}} \{0, 1, \ldots, a-1\}^\ell $
detecting up to $ t $ deletions is
\begin{align}
\sum_{j=0}^{\lfloor \frac{\overline{\ell} - \underline{\ell}}{t+1} \rfloor}
a^{\overline{\ell} - j (t+1)} .
\end{align}
\end{theorem}
\begin{IEEEproof}
The poset $ (\myX_{[0, \overline{\ell}]}, \rightsquigarrow) $ is normal.
To see this, note that the list of $ a^{\overline{\ell}} $ maximal chains
of the form $ \epsilon \rightsquigarrow x_1 \rightsquigarrow x_1 x_2
\rightsquigarrow \cdots \rightsquigarrow x_1 x_2 \ldots x_{\overline{\ell}} $,
where $ \epsilon $ is the empty sequence and $ x_i \in \{0, 1, \ldots, a-1\} $,
satisfies the condition that each element of $ \myX_{[0, {\overline{\ell}}]} $
of rank $ \ell $ appears in the same number of chains, namely $ a^{\overline{\ell} - \ell} $
(see Section~\ref{sec:posets}).
The claim now follows by invoking Theorem \ref{thm:optimal} and by using
the fact that $ |\myX_\ell| = a^\ell $ is a monotonically increasing function
of $ \ell $, implying that the optimal choice for $ m $ in \eqref{eq:maxsumgen}
is $ \overline{\ell} $.
\end{IEEEproof}
\vspace{2mm}
The dual channel in this example is the insertion channel in which
$ x \rightsquigarrow y $ if and only if $ x $ is a subsequence of $ y $.
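The optimal construction behind the theorem, namely taking all sequences whose lengths lie in one residue class modulo $t+1$, can be built and verified by brute force for small parameters (helper names are ours):

```python
from itertools import product

def is_subseq(y, x):
    """True iff y can be obtained from x by deleting symbols."""
    it = iter(x)
    return all(c in it for c in y)  # membership test consumes the iterator

def deletion_code(a, lo, hi, t):
    """All a-ary sequences with length in {hi, hi-(t+1), ...} down to lo."""
    return [w for L in range(hi, lo - 1, -(t + 1))
              for w in product(range(a), repeat=L)]

a, lo, hi, t = 2, 2, 4, 1
C = deletion_code(a, lo, hi, t)
# detection: no codeword arises from another by 1..t deletions
detects = all(not (1 <= len(x) - len(y) <= t and is_subseq(y, x))
              for x in C for y in C)
```

Here the code consists of all binary sequences of lengths $4$ and $2$, of size $2^4 + 2^2 = 20$, in agreement with the formula in the theorem.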
\subsection{Codes for bit-shift and timing channels}
\label{sec:shift}
Let $ \myX = \myY = \{0, 1\}^n $, and let us describe binary sequences by
specifying the positions of $ 1 $'s in them.
More precisely, we identify $ x \in \{0,1\}^n $ with
the integer sequence $ \lambda_x = (\lambda_x(1), \ldots, \lambda_x(w)) $,
where $ \lambda_x(i) $ is the position of the $ i $'th $ 1 $ in $ x $,
and $ w $ is the Hamming weight of $ x $.
This sequence satisfies
$ 1 \leqslant \lambda_x(1) < \lambda_x(2) < \cdots < \lambda_x(w) \leqslant n $.
For example, for $ x = 1 1 0 0 1 0 1 $, $ \lambda_x = (1, 2, 5, 7) $.
In fact, it will be more convenient to use a slightly different description
of a sequence $ x $, namely $ \tilde{\lambda}_x = \lambda_x - (1, 2, \ldots, w) $,
for which it holds that $ 0 \leqslant \tilde{\lambda}_x(1) \leqslant
\tilde{\lambda}_x(2) \leqslant \cdots \leqslant \tilde{\lambda}_x(w) \leqslant n - w $.
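The two descriptions $\lambda_x$ and $\tilde{\lambda}_x$ are straightforward to compute; the running example $x = 1100101$ is reproduced below (helper names are ours):

```python
def positions(x):
    """lambda_x: 1-based positions of the 1's in the binary string x."""
    return [i + 1 for i, b in enumerate(x) if b == '1']

def shifted(x):
    """tilde-lambda_x = lambda_x - (1, 2, ..., w)."""
    return [p - (i + 1) for i, p in enumerate(positions(x))]
```

Note that `shifted` produces a non-decreasing sequence bounded by $n - w$, as stated.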
Consider a communication model in which each of the $ 1 $'s in the input
sequence may be shifted to the right \cite{shamai+zehavi}.
Such models are also of interest in describing timing channels wherein
$ 1 $'s indicate the time slots in which packets have been sent and
shifts of these $ 1 $'s are consequences of packet delays; see for
example \cite{kovacevic+popovski}.
Thus $ x \rightsquigarrow y $ if and only if $ x $ and $ y $ have the
same Hamming weight and $ \lambda_x \leqslant \lambda_y $ (coordinatewise).
Since a necessary condition for $ x \rightsquigarrow y $ is that $ x $ and
$ y $ have the same Hamming weight, we may consider the sets of inputs
$ \{0, 1\}^n_w \equiv \{ x \in \{0, 1\}^n : \sum_{i=1}^n x_i = w\} $
separately, for $ w = 0, \ldots, n $ (here $ x = x_1 \cdots x_n $).
The above channel is asymmetric.
The poset $ (\{0, 1\}^n_w, \rightsquigarrow) $ is denoted $ L(n-w, w) $
in \cite[Ex.~1.3.13]{engel}.
The rank function on this poset is defined by:
$ \rank(x) = \sum_{i=1}^{w} \tilde{\lambda}_x(i) $, where $ w $ is the
Hamming weight of $ x $.
Let $ p(N, M, \ell) $ denote the number of \emph{partitions} of the number
$ \ell $ into at most $ M $ positive parts, each part being $ \leqslant\! N $
\cite[Sec.~3.2]{andrews}.
These too are very well-studied objects.
An interesting connection between them and the Gaussian coefficients which
we encountered in Section~\ref{sec:subspace} is the following
\cite[Sec.~3.2]{andrews}, \cite[Thm 24.2]{vanlint+wilson}:
\begin{align}
\label{eq:partitions}
\sum_{\ell=0}^{MN} p(N, M, \ell) q^\ell = \binom{N+M}{M}_{\! q} .
\end{align}
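The partition numbers $p(N, M, \ell)$ satisfy the recurrence obtained by conditioning on whether some part equals $N$, and the identity \eqref{eq:partitions} can then be checked directly (helper names are ours):

```python
from functools import lru_cache
from math import prod

@lru_cache(maxsize=None)
def p(N, M, ell):
    """Partitions of ell into at most M parts, each part at most N."""
    if ell == 0:
        return 1
    if ell < 0 or M == 0 or N == 0:
        return 0
    # either some part equals N (remove one copy), or all parts are < N
    return p(N, M - 1, ell - N) + p(N - 1, M, ell)

def q_binom(n, ell, q):
    return prod(q**(n - i) - 1 for i in range(ell)) // \
           prod(q**(ell - i) - 1 for i in range(ell))

# identity (eq:partitions) for N = M = 2, q = 2
lhs = sum(p(2, 2, l) * 2**l for l in range(2 * 2 + 1))
```

For $N = M = 2$ and $q = 2$ both sides equal $\binom{4}{2}_2 = 35$.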
\begin{theorem}
The maximum cardinality of a code $ \C \subseteq \{0, 1\}^n_w $ detecting
up to $ t $ right-shifts is lower-bounded by
\begin{align}
\label{eq:shift}
\sum^{w (n-w)}_{\substack{ \ell=0 \\ \ell \, \equiv \, \lfloor \frac{w(n-w)}{2} \rfloor \; (\operatorname{mod}\, t+1) } } p(n-w, w, \ell) .
\end{align}
The maximum cardinality of a code $ \C \subseteq \{0, 1\}^n_w $ detecting
all patterns of right-shifts is $ p(n-w, w, \lfloor \frac{w(n-w)}{2} \rfloor) $.
\end{theorem}
\begin{IEEEproof}
The number of elements in $ \{0, 1\}^n_w $ of rank $ \ell $ is $ p(n-w, w, \ell) $.
These numbers are symmetric, $ p(n-w, w, \ell) = p(n-w, w, w(n-w) - \ell) $,
and unimodal, and hence maximized when $ \ell = \lfloor \frac{w(n-w)}{2} \rfloor $
\cite[Thm 3.10]{andrews}.
Furthermore, it follows from \cite[Thm 6.2.10 and Cor.~6.2.1]{engel} that the
poset $ (\{0, 1\}^n_w, \rightsquigarrow) $ is Sperner.
This implies the second statement.
The first statement follows from Proposition \ref{thm:tedc}.
\end{IEEEproof}
\vspace{2mm}
We believe the lower bound in \eqref{eq:shift} is actually the optimal value,
i.e., the maximum cardinality of a code detecting $ t $ right-shifts, but at
present we do not have a proof of this fact.
The dual channel in this example is the channel in which non-zero
symbols may be shifted only to the left.
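The construction behind the lower bound \eqref{eq:shift}, taking all weight-$w$ sequences whose rank lies in one residue class modulo $t+1$, can be verified by brute force for small parameters. In the sketch below (helper names are ours) each sequence is identified with $\lambda_x$:

```python
from itertools import combinations

def rank_of(lam):
    """rank(x) = sum of tilde-lambda_x, for lam = lambda_x."""
    return sum(p - (i + 1) for i, p in enumerate(lam))

n, w, t = 4, 2, 1
words = list(combinations(range(1, n + 1), w))
mid = (w * (n - w)) // 2
C = [x for x in words if (rank_of(x) - mid) % (t + 1) == 0]

def reaches(x, y):
    """x ~> y: y obtained from x by right-shifting some 1's."""
    return x != y and all(a <= b for a, b in zip(x, y))

# detection: no codeword reachable from another by at most t unit shifts
detects = all(not (reaches(x, y) and rank_of(y) - rank_of(x) <= t)
              for x in C for y in C)
```

For $n = 4$, $w = 2$, $t = 1$ the code has $p(2,2,0) + p(2,2,2) + p(2,2,4) = 4$ codewords and indeed detects single right-shifts.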
\section{Conclusion}
As we have seen, order theory is a powerful tool for analyzing asymmetric
channel models, particularly the error detection problem for which an optimal
solution may be obtained in many interesting cases.
Developing the introduced framework further and exploring other applications
and channel models that fit into it is a topic of ongoing investigation.
Note that we have not discussed here error-\emph{correcting} codes in the
posets we encountered.
This topic has been explored previously \cite{firer} and is also left for
future work.
\vspace{6mm}
% Polynomial Homotopies for Dense, Sparse and Determinantal Systems
% https://arxiv.org/abs/math/9907060
\begin{abstract}
Numerical homotopy continuation methods for three classes of polynomial systems are presented. For a generic instance of the class, every path leads to a solution and the homotopy is optimal. The counting of the roots mirrors the resolution of a generic system that is used to start up the deformations. Software and applications are discussed.
\end{abstract}
\section{Introduction}
Solving polynomial systems numerically means computing approximations
to all isolated solutions.
Homotopy continuation methods provide paths to approximate solutions.
The idea is to break up the original system into simpler problems.
To solve the original system, the solutions of the simpler systems
are deformed into the solutions of the original problem.
\medskip
This paper presents optimal homotopies for three different classes of
polynomial systems. Optimal means that for generic instances of the
classes there are no diverging solution paths, whence the amount of
computational work is linear in the number of solutions.
In the next section we list the principal key words, definitions and
main theorems for dense, sparse and determinantal polynomial systems.
The proofs of these theorems follow from the correctness of the homotopies.
\medskip
Path-following methods are standard numerical
techniques~(\cite{ag90,ag93,ag97}, \cite{mor87}, \cite{wat86,wat89})
to achieve global convergence when solving nonlinear systems.
For polynomial systems we can reach all isolated solutions.
In the third section we describe
the paradigm of Cheater's homotopy~(\cite{lsy89},~\cite{lw92})
or coefficient-parameter polynomial continuation~(\cite{ms89},~\cite{ms90}).
This paradigm allows one to construct homotopies for which singularities
occur only at the end of the paths.
To deal with components of solutions we use an embedding
method that leads to generic points on each component.
This method is essential to numerical algebraic geometry~\cite{sw96}.
\medskip
From~\cite{kem93} we cite: ``Algebraic geometry studies the delicate balance
between the geometrically plausible and the algebraically possible''.
By a choice of coordinates we set up an algebraic formulation for a
geometric problem that is then solved by automatic computations.
While this approach is extremely powerful, we might get trapped in
tedious wasted computations after losing the original geometric meaning
of the problem. In section four we stress the geometric intuition of
homotopy methods. Compactifications and homogeneous coordinates provide
us the tools to generate the numerically most favorable representations
for the solutions to our problem.
In section five we arrive at the heart of modern homotopy methods
where we outline specific algorithms to implement the root
counts\footnote{The term ``root count'' was coined by
Canny and Rojas~\cite{cr91} while introducing mixed volumes
to computational algebraic geometry.}.
The counting of the roots mirrors the resolution of a system in
generic position that is used as starting point in the deformations.
\medskip
Polyhedral methods occupy the central part of current research, as
they are responsible for a computational breakthrough in numerical
general-purpose solvers for polynomial systems.
Section six is devoted to numerical software with an emphasis on the
structure of the package PHC, developed by the author during the past decade.
Another novel and exciting research development concerns the numerical
Schubert calculus, which is one of the major new features in the second
public release of~PHC.
The author has gathered more than one hundred polynomial systems that
arose in various application fields.
This collection serves as a test suite for software and a gallery to
demonstrate the importance of polynomial systems to mathematical modelling.
In section seven we sample some interesting cases.
\medskip
The reference list contains a compilation of the most relevant technical
contributions to polynomial homotopy continuation. Besides those we
want to point at some other works in the literature that are of special
interest. Some user-friendly introductions to algebraic geometry appeared
in recent years: see~\cite{abh90}, \cite{fal90}, \cite{har95}, with
computational aspects in~\cite{cls97} and~\cite{cls98}.
As Newton polytopes have become extremely important,
we recommend~\cite{zie95} and the handbook chapters~\cite{gr97}.
See also~\cite{stu98} for the interplay between the combinatorics of polytopes
and the (real) roots of polynomials.
A recent survey that also covers polyhedral homotopies along with other
polynomial continuation methods appeared in~\cite{li97}.
\section{Three Classes of Polynomial Systems}
The classification in Table~\ref{tabclasses} is inspired by~\cite{hss98}.
The dense class is closest to the common algebraic description, whereas
the determinantal systems arise in enumerative geometry.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c||c|c|c|c|} \hline
system & model & theory & \multicolumn{2}{c|}{space} \\ \hline \hline
dense & highest degrees & B\'ezout & $\pp^n$ & projective \\ \hline
sparse & Newton polytopes & Bernshte\v{\i}n & $({\Bbb C}^*)^n$ & toric \\ \hline
determinantal & localization posets & Schubert & $G_{mr}$ & Grassmannian
\\ \hline
\end{tabular}
\smallskip
\caption{Key words of the three classes of polynomial systems.}
\label{tabclasses}
\end{center}
\end{table}
For the vector of unknowns ${\bf x} = (x_1,x_2,\ldots,x_n)$
and exponents ${\bf a} = (a_1,a_2,\ldots,a_n) \in {\Bbb N}^n$, denote
${\bf x}^{\bf a} = x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$.
A polynomial system $P({\bf x}) = {\bf 0}$ is given by
$P = (p_1,p_2,\ldots,p_n)$, a tuple of polynomials $p_i \in {\Bbb C}[{\bf x}]$,
$i = 1,2,\ldots,n$.
\smallskip
The complexity of a {\em dense polynomial} $p$ is
measured by its degree~$d$:
\begin{equation}
p({\bf x}) = \sum_{0 \leq a_1+a_2+\cdots+a_n \leq d}
c_{\bf a} {\bf x}^{\bf a}, \quad \quad d = \deg(p),
\end{equation}
where at least one monomial of degree $d$ should have a nonzero coefficient.
The {\em total degree} $D$ of a {\em dense system} $P$ is
$D = \prod_{i=1}^n \deg(p_i)$.
\begin{theorem}(B\'ezout~{\rm \cite{cls97}})
The system $P({\bf x}) = {\bf 0}$ has
no more than $D$ isolated solutions, counted with multiplicities.
\end{theorem}
Consider for example
\begin{equation} \label{eqexsys}
P(x_1,x_2) =
\left\{
\begin{array}{r}
x_1^4 + x_1 x_2 + 1 = 0 \\
x_1^3 x_2 + x_1 x_2^2 + 1 = 0
\end{array}
\right.
\quad {\rm with~total~degree}~D = 4 \times 4 = 16.
\end{equation}
Although $D = 16$, this system has only eight solutions because of its
sparse structure.
\medskip
The {\em support} $A$ of a {\em sparse polynomial} $p$ collects all
exponents of those monomials whose coefficients are nonzero.
Since we allow negative exponents (${\bf a} \in {\Bbb Z}^n$),
we restrict ${\bf x} \in ({\Bbb C}^*)^n$, ${\Bbb C}^* = {\Bbb C} \setminus \{ 0 \}$.
\begin{equation}
p({\bf x}) = \sum_{{\bf a} \in A} c_{\bf a} {\bf x}^{\bf a}, \quad \quad
\forall {\bf a} \in A: c_{\bf a} \not= 0, \quad
A \subset {\Bbb Z}^n, \quad \#A < \infty.
\end{equation}
The {\em Newton polytope} $Q$ of~$p$ is the convex hull of the support~$A$
of~$p$.
We model the structure of a {\em sparse system} $P$ by a tuple of Newton
polytopes ${\cal Q} = (Q_1,Q_2,\ldots,Q_n)$, spanned by
${\cal A} = (A_1,A_2,\ldots,A_n)$, the so-called {\em supports} of~$P$.
\smallskip
The volume of a positive linear combination of polytopes is a homogeneous
polynomial in the multiplication factors.
The coefficients are {\em mixed volumes}.
For instance, for $(Q_1,Q_2)$, we write:
\begin{equation} \label{eqvolpol}
2!{\rm vol}_2(\lambda_1 Q_1 + \lambda_2 Q_2)
= V_2(Q_1,Q_1) \lambda_1^2 + 2 \cdot V_2(Q_1,Q_2) \lambda_1 \lambda_2
+ V_2(Q_2,Q_2) \lambda_2^2,
\end{equation}
normalizing $V_2(Q,Q) = 2! {\rm vol}_2(Q)$.
For the Newton polytopes of the system~(\ref{eqexsys}):
$2! {\rm vol}_2(\lambda_1 Q_1 + \lambda_2 Q_2) =
4 \lambda_1^2 + 2 \cdot 8 \lambda_1 \lambda_2 + 5 \lambda_2^2$.
To interpret this we look at Figure~\ref{figmcc} and see that multiplying
$Q_1$ and $Q_2$ respectively by $\lambda_1$ and $\lambda_2$ scales their
areas by $\lambda_1^2$ and $\lambda_2^2$.
The cells in the subdivision of $Q_1 + Q_2$ whose area is scaled by
$\lambda_1 \lambda_2$ contribute to the mixed volume.
So, for the example in~(\ref{eqexsys}), the root count is eight.
\begin{figure}[hbt]
\begin{center}
\centerline{\psfig{figure=psfmcc.ps}}
\caption{Newton polytopes $Q_1$, $Q_2$, a mixed subdivision
of $Q_1 + Q_2$ with volumes.}
\label{figmcc}
\end{center}
\end{figure}
\begin{theorem} (Bernshte\v{\i}n~{\rm \cite{ber75}})
A system $P({\bf x}) = {\bf 0}$ with Newton polytopes~${\cal Q}$ has no
more than $V_n({\cal Q})$ isolated solutions in $({\Bbb C}^*)^n$,
counted with multiplicities.
\end{theorem}
The mixed volume was nicknamed~\cite{cr91} as the BKK bound to honor
Bernshte\v{\i}n~\cite{ber75}, Kushnirenko~\cite{kus76}, and
Khovanski{\v{\i}}~\cite{kho78}.
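The mixed volume of the example~(\ref{eqexsys}) can be recomputed from first principles via $V_2(Q_1,Q_2) = {\rm vol}_2(Q_1+Q_2) - {\rm vol}_2(Q_1) - {\rm vol}_2(Q_2)$, which follows from setting $\lambda_1 = \lambda_2 = 1$ in~(\ref{eqvolpol}). A minimal Python sketch (our own helpers; not the software discussed in section six) builds the Minkowski sum and uses the shoelace formula:

```python
def hull(points):
    """Andrew's monotone chain; returns convex hull vertices in CCW order."""
    pts = sorted(set(points))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and \
                  (h[-1][0]-h[-2][0])*(p[1]-h[-2][1]) - \
                  (h[-1][1]-h[-2][1])*(p[0]-h[-2][0]) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def area2(poly):
    """Twice the area of a convex polygon (shoelace formula)."""
    return abs(sum(x1*y2 - x2*y1
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])))

A1 = [(4, 0), (1, 1), (0, 0)]   # support of x1^4 + x1*x2 + 1
A2 = [(3, 1), (1, 2), (0, 0)]   # support of x1^3*x2 + x1*x2^2 + 1
msum = [(a + c, b + d) for a, b in A1 for c, d in A2]
V12 = (area2(hull(msum)) - area2(hull(A1)) - area2(hull(A2))) // 2
```

This recovers the root count eight of~(\ref{eqexsys}), half the total degree.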
\medskip
For the third class of polynomial systems we consider a matrix $[C | X]$
where $C \in {\Bbb C}^{(m+r) \times m}$ and $X \in {\Bbb C}^{(m+r) \times r}$
respectively collect the coefficients and indeterminates.
Laplace expansion of the maximal minors of $[C | X]$ in $m$-by-$m$ and
$r$-by-$r$ minors yields a {\em determinantal} polynomial
\begin{equation} \label{eqdetpol}
p({\bf x}) = \sum_{\scriptsize \begin{array}{c} I \cup J = U \\
I \cap J = \emptyset
\end{array} } {\rm sign}(I,J) C[I] X[J],
\quad U = \{ 1,2,\ldots,m+r\},
\end{equation}
where the summation runs over all distinct choices $I$ of $m$ elements of~$U$.
The partition $\{ I, J \}$ of $U$ defines the permutation $U \mapsto (I,J)$
with ${\rm sign}(I,J)$ its sign.
The symbols $C[I]$ and $X[J]$ respectively represent
coefficient minors and minors of indeterminates.
Note that for more general intersection conditions, the matrices
$[C | X]$ are not necessarily square.
\smallskip
The vanishing of a polynomial as in~(\ref{eqdetpol}) expresses
the condition that the $r$-plane $X$ meets a given $m$-plane nontrivially.
The counting and finding of all figures that satisfy certain geometric
conditions is the central theme of enumerative geometry.
For example, consider the following.
\begin{theorem} (Schubert~{\rm \cite{sch1891}})
Let $m,r \geq 2$. In~${\Bbb C}^{m+r}$ there are
\begin{equation} \label{eqgrassdeg}
d_{m,r} \quad = \quad
\frac{1! \, 2! \, 3! \cdots (r\!- \!2) ! \, (r \!-\!1)! \cdot
(mr)!}{m!\, (m \! + \! 1)! \, (m \! + \! 2)!
\cdots(m \! + \! r \! - \! 1)!}
\end{equation}
$r$-planes that nontrivially meet $mr$ given $m$-planes in general position.
\end{theorem}
This root count $d_{m,r}$ is sharp compared to other root counts,
see~\cite{sot98} and~\cite{ver98} for examples.
\smallskip
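Schubert's formula~(\ref{eqgrassdeg}) is easy to evaluate; the sketch below (the function name is ours) computes $d_{m,r}$ with exact integer arithmetic:

```python
from math import factorial

def d(m, r):
    """d_{m,r}: r-planes in C^(m+r) meeting mr general m-planes."""
    num = factorial(m * r)
    for k in range(1, r):
        num *= factorial(k)
    den = 1
    for k in range(r):
        den *= factorial(m + k)
    return num // den
```

For $m = r = 2$ this gives $d_{2,2} = 2$, the classical count of lines meeting four general lines in projective 3-space.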
We can picture the simplest case, using the fact that 2-planes in ${\Bbb C}^4$
represent lines in $\pp^3$. In Figure~2 the positive real projective 3-space
corresponds to the interior of the tetrahedron.
\bigskip
\parbox[b]{5cm}{
\centerline{\psfig{figure=psfgrass.ps,width=5cm}}
~~~~~Figure~2: $m = 2 = r$.
\addtocounter{figure}{1}
} \ \ \ \ \ \ \ \
\parbox[b]{10cm}{
\smallskip
In Figure~2, we see two thick lines, each meeting four given skew lines
in a point. When not all input planes have the same dimension,
but when the number of solutions is still finite,
Pieri's formula~\cite{pie1891} provides a root count~\cite{sot97a,sot97b}.
\smallskip
In~\cite{hss98} the problem is solved in chains of nested subspaces,
using a cellation of the Grassmannian $G_{mr}$ of $r$-planes in ${\Bbb C}^{m+r}$.
A {\em localization poset} models~\cite{hv99} the specialization of
the solution $r$-plane when the input is specialized.
}
\bigskip
Algorithmic proofs for the above theorems consist in two steps.
First we show how to construct a generic start system that has exactly
as many regular solutions as the root count.
Then we set up a homotopy for which all isolated solutions of any
particular target system lie at the end of some solution path
originating at some solution of the constructed start system.
\section{The Principles of Polynomial Homotopy Continuation Methods}
Homotopy continuation methods operate in two stages.
Firstly, homotopy methods exploit the structure of $P$ to find a root count
and to construct a start system $P^{(0)}({\bf x}) = {\bf 0}$
that has exactly as many regular solutions as the root count.
This start system is embedded in the {\em homotopy}
\begin{equation} \label{eqlinhom}
H({\bf x},t) = \gamma(1-t)P^{(0)}({\bf x}) + t P({\bf x}) = {\bf 0},
\quad t \in [0,1],
\end{equation}
with $\gamma \in {\Bbb C}$ a random number.
In the second stage, as $t$ moves from 0 to 1,
numerical continuation methods trace the paths that originate at the
solutions of the start system towards the solutions of the target system.
\smallskip
\noindent The good properties we expect from a homotopy
$H({\bf x},t) = {\bf 0}$ are (borrowed from \cite{li97}):
\begin{enumerate}
\item ({\em triviality}) The solutions for~$t=0$ are trivial to find.
\item ({\em smoothness}) No singularities along the solution paths occur.
\item ({\em accessibility}) All isolated solutions can be reached.
\end{enumerate}
\bigskip
Continuation or path-following methods are standard numerical techniques
(\cite{ag90,ag93,ag97}, \cite{mor87}, \cite{wat86,wat89}) to trace the
solution paths defined by the homotopy using {\em predictor-corrector} methods.
The smoothness property of complex polynomial homotopies implies that paths
never turn back, so that during correction the parameter $t$ stays fixed,
which simplifies the setup of path trackers.
A pseudo-code description of a path tracker is in
Algorithm~\ref{algpathfoll}.
\medskip
The {\em predictor} delivers at each step of the method
a new value for the continuation parameter and predicts an approximate
solution of the corresponding new system in the homotopy.
Figure~\ref{figsectan} shows two common predictor schemes.
The predicted approximate solution is adjusted by applying Newton's method
as {\em corrector}. The third ingredient in path-following methods is
the {\em adaptive step size control}.
The step length is determined to enforce quadratic convergence in the
corrector to avoid path crossing.
\medskip
\begin{algorithm} \label{algpathfoll}
{\rm Following one solution path by an increment-and-fix
predictor-corrector method with an adaptive step size control strategy.
\bigskip
\noindent \begin{tabular}{lcr}
Input: \ \ $H({\bf x},t)$, ${\bf x}^* \in {\Bbb C}^n$: $H({\bf x}^*,0) = {\bf 0}$,
& \ \ \ & {\em homotopy and start solution} \\
\ \ \ \ \ \ \ \ \ \ \ \ $\epsilon > 0$, $max\_it$, $max\_steps$.
& & {\em accuracy and upper bounds} \\
Output: ${\bf x}^*$, success if $||H({\bf x}^*,1)|| \leq \epsilon$.
& & {\em approximate solution if success} \\
\\
$t := 0$; \ $k := 0$; & & {\em initialization} \\
$h := max\_step\_size$; & & {\em step length} \\
$old\_t := t$; \ $old\_{\bf x}^* := {\bf x}^*$
& & {\em back up values for $t$ and ${\bf x}^*$ } \\
$previous\_{\bf x}^* := {\bf x}^*$;
& & {\em previous approximate solution} \\
stop := false; & & {\em combines stopping criteria} \\
while $t < 1$ and not stop loop & & \\
\ \ \ $t := \min(1,t + h)$; & & {\em secant predictor for $t$} \\
\ \ \ ${\bf x}^* := {\bf x}^* + h ( {\bf x}^* - previous\_{\bf x}^* )$;
& & {\em secant predictor for ${\bf x}^*$} \\
\ \ \ Newton($H({\bf x},t),{\bf x}^*,\epsilon,max\_it$,success);
& & {\em correct with Newton's method} \\
\ \ \ if success & & {\em step size control} \\
\ \ \ \ then $h := \min(Expand(h),max\_step\_size)$;
& & {\em enlarge step length} \\
\ \ \ \ \ \ \ \ \ \ \ $previous\_{\bf x}^* := old\_{\bf x}^*$;
& & {\em go further along path} \\
\ \ \ \ \ \ \ \ \ \ \ $old\_t := t$; \ $old\_{\bf x}^* := {\bf x}^*$;
& & {\em new back up values} \\
\ \ \ \ else \ $h := Shrink(h)$; & & {\em reduce step length} \\
\ \ \ \ \ \ \ \ \ \ \ $t := old\_t$; \ ${\bf x}^* := old\_{\bf x}^*$;
& & {\em step back and try again} \\
\ \ \ end if; & & \\
\ \ \ $k := k+1$; & & {\em augment counter} \\
\ \ \ stop := ($h < min\_step\_size$) or ($k > max\_steps$);
& & {\em stopping criteria} \\
end loop; & & \\
success := ($||H({\bf x}^*,1)|| \leq \epsilon$).
& & {\em report success or failure}
\end{tabular}
}
\end{algorithm}
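Algorithm~\ref{algpathfoll} can be illustrated in miniature. The following Python sketch is our own simplification: a fixed step length replaces the adaptive control, and a concrete univariate homotopy $H(x,t) = \gamma(1-t)(x^2-1) + t(x^2-4)$ is chosen for illustration, tracking the two solutions of the start system $x^2 - 1 = 0$ to those of the target $x^2 - 4 = 0$:

```python
gamma = 0.8 + 0.6j  # a fixed complex constant, standing in for a random one

def H(x, t):
    return gamma * (1 - t) * (x * x - 1) + t * (x * x - 4)

def dHdx(x, t):
    return 2 * x * (gamma * (1 - t) + t)

def newton(x, t, eps=1e-12, max_it=10):
    """Corrector: Newton's method at fixed t."""
    for _ in range(max_it):
        dx = H(x, t) / dHdx(x, t)
        x -= dx
        if abs(dx) < eps:
            break
    return x

def track(x, h=0.05):
    """Increment-and-fix tracking of one solution path from t=0 to t=1."""
    t, prev = 0.0, x
    while t < 1.0:
        t = min(1.0, t + h)
        pred = x + h * (x - prev)   # secant predictor
        prev = x
        x = newton(pred, t)         # correct at the new t
    return x
```

The random complex $\gamma$ keeps the path free of singularities for $t \in [0,1)$, so each start solution $\pm 1$ is carried smoothly to the corresponding target solution $\pm 2$.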
Following all paths can be done sequentially, one path at a time, or
in parallel, using for each solution path the same sequence of values of
the continuation parameter.
Because the paths can be traced independently of one another, the low
overhead of communication~\cite{acw89} makes path following very suitable
for multi-processor environments.
Note that the memory requirements are optimal.
\begin{figure}[hbt]
\begin{center}
\centerline{\psfig{figure=psfsectan.ps}}
\caption{The secant and tangent predictor with step length~$h$.}
\label{figsectan}
\end{center}
\end{figure}
\medskip
To solve repeatedly a polynomial system with the same coefficient structure
$P({\bf c},{\bf x}) = {\bf 0}$, the homotopy~(\ref{eqlinhom}) is applied
with $P^{(0)} = P({\bf c}^0,{\bf x}) = {\bf 0}$ a system with random
coefficients~${\bf c}^0$.
Solving $P({\bf c}^0,{\bf x}) = {\bf 0}$ is no longer trivial,
so the name {\em cheater's homotopy}~\cite{lsy89} is appropriate.
A similar idea appeared in~\cite{ms89,ms90}.
For coefficients given as functions of parameters, a refined version of
cheater's homotopy in~\cite{lw92} avoids repeated evaluation of those
functions during path following:
\begin{equation} \label{eqnonlincheat}
H({\bf x},t) = P((1-[t-t(1-t) \gamma]){\bf c}^0
+ (t-t(1-t) \gamma){\bf c},{\bf x}) = {\bf 0},
\quad t \in [0,1], \gamma \in {\Bbb C}.
\end{equation}
In~\cite{lw92} it is proven that with~(\ref{eqnonlincheat}) all isolated
solutions of~$P({\bf c},{\bf x}) = {\bf 0}$ can be reached and that
singularities can only occur at the end of the paths.
\smallskip
Typically, when using a cheater's homotopy, the computational effort spent
towards the end of the paths often accounts for most of the work.
The main numerical problem is then to distinguish irrelevant solutions
at infinity from ill-conditioned but possibly meaningful solutions.
End games~\cite{hv98}, \cite{msw91,msw92a,msw92b}, \cite{sws96}
provide several procedures to approximate the winding number of a path.
Recently, Zeuthen's rule was applied in~\cite{kss98} to determine
numerically the multiplicity of an isolated solution.
Multi-precision facilities are useful for evaluation of residuals and
root refinement for badly scaled solutions.
\medskip
In most applications, the polynomial systems have real coefficients and
invite the use of real homotopies.
In~\cite{bm84} it was conjectured and proven in~\cite{lw93}
that generically, real homotopies contain no singular points other
than a finite number of quadratic turning points.
At those bifurcation points pairs of real solution paths become imaginary
or conversely, complex conjugated solution paths join to yield two real
solution paths.
We refer to~\cite{all84}, \cite{hk90}, \cite{li97} and~\cite{lw93,lw94}
for a discussion of numerical techniques to deal with quadratic turning points.
A remarkable real-world application of real homotopies
is the finding of the parameters of a polynomial system
that maximize the number of real roots, see~\cite{die98}
for the 40 real solutions of the Stewart-Gough platform in mechanics.
\medskip
In~\cite{sw96} the use of homotopy continuation to deal with overdetermined
systems and with components of solutions is discussed.
Geometrically one slices the components of solutions with as many random
hyperplanes as the dimension of the components.
The solutions to the original polynomial system augmented with these
random linear equations for the hyperplanes are {\em generic points}
of the components, constituting the main numerical data to study those
components. In particular, the number of generic points one obtains
by this slicing procedure equals the sum of the degrees over all
top-dimensional components of solutions.
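The slicing procedure can be illustrated on a toy component. The solution set of $x^2 + y^2 - 1 = 0$ is a one-dimensional component in ${\Bbb C}^2$ of degree two; cutting it with one random hyperplane produces two generic points. The following sketch (ours, not the algorithm of~\cite{sw96} itself) solves the resulting quadratic directly:

```python
# Slice the circle x^2 + y^2 - 1 = 0 with a random complex line
# a*x + b*y + c = 0; the deg = 2 intersection points are generic points.
import random

random.seed(7)
a, b, c = (complex(random.random(), random.random()) for _ in range(3))

# substitute y = -(a*x + c)/b into the circle: quadratic in x
A = 1 + (a / b) ** 2
B = 2 * (a / b) * (c / b)
C = (c / b) ** 2 - 1

disc = (B * B - 4 * A * C) ** 0.5
generic_points = []
for sign in (+1, -1):
    x = (-B + sign * disc) / (2 * A)
    y = -(a * x + c) / b
    generic_points.append((x, y))

assert len(generic_points) == 2          # the degree of the component
for x, y in generic_points:
    assert abs(x * x + y * y - 1) < 1e-9   # on the component
    assert abs(a * x + b * y + c) < 1e-9   # on the slicing hyperplane
```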
\medskip
To make the algorithms of~\cite{sw96} more efficient, in~\cite{sv99},
the following embedding of the polynomial system~$P({\bf x}) = {\bf 0}$
is proposed:
\begin{equation} \label{eqembed}
\left\{
\begin{array}{rl}
p_i({\bf x}) + \lambda_i z = 0, & i=1,2,\ldots,n \\
{\displaystyle \sum_{j=1}^n c_j x_j + z = 0}
\end{array}
\right.
\end{equation}
where the $\lambda_i$'s and $c_j$'s are random complex numbers.
This embedding has the advantage over the algorithms in~\cite{sw96} that
fewer solution paths diverge. Solutions to the system~(\ref{eqembed})
with $z = 0$ lie on a component of solutions.
By Bertini's theorem, all solutions with $z \not= 0$ are regular.
In~\cite{sv99}, it is proven that those solutions can be used as
start solutions to reach {\em all} isolated solutions of the original
polynomial system~$P({\bf x}) = {\bf 0}$.
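A minimal numerical sketch of the embedding~(\ref{eqembed}), with names of our choosing: take a nonproper system in ${\Bbb C}^2$ whose solution set is the circle $x_1^2 + x_2^2 = 1$, a one-dimensional component. Its solutions with $z = 0$ are the generic points cut out by the random hyperplane:

```python
# Embedding p_i(x) + lambda_i z = 0, c1*x1 + c2*x2 + z = 0 for a toy
# system whose zero set is the circle (a 1-dimensional component).
import random

random.seed(3)
rnd = lambda: complex(random.random(), random.random())
l1, l2, c1, c2 = rnd(), rnd(), rnd(), rnd()

def p1(x1, x2): return x1**2 + x2**2 - 1
def p2(x1, x2): return x1 * (x1**2 + x2**2 - 1)   # same zero set, nonproper

def embedded(x1, x2, z):
    """The embedded system: p_i(x) + lambda_i z and the random hyperplane."""
    return (p1(x1, x2) + l1 * z,
            p2(x1, x2) + l2 * z,
            c1 * x1 + c2 * x2 + z)

# With z = 0 the hyperplane reads c1*x1 + c2*x2 = 0, i.e. x2 = m*x1;
# intersecting with the circle gives the two generic points.
m = -c1 / c2
x1 = (1 / (1 + m * m)) ** 0.5
for x1v in (x1, -x1):
    res = embedded(x1v, m * x1v, 0)
    assert all(abs(r) < 1e-8 for r in res)   # solves the embedding, z = 0
```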
\medskip
The embedding~(\ref{eqembed}) is performed repeatedly in the routine
`Embed' in the algorithm (copied from~\cite{sv99}) below.
\begin{algorithm} \label{algcascade}
{\rm Cascade of homotopies between embedded systems.
\bigskip
\begin{tabular}{lcr}
Input: $P$, $n$.
& & {\em system with solutions in ${\Bbb C}^n$} \\
Output: $({\cal E}_i,{\cal X}_i,{\cal Z}_i)_{i=0}^n$.
& & {\em embeddings with solutions} \\
\\
${\cal E}_0 := P$;
& & {\em initialize embedding sequence} \\
for $i$ from 1 up to $n$ do
& & {\em slice and embed} \\
\ \ \ ${\cal E}_i$ := Embed(${\cal E}_{i-1},z_{i}$);
& & {\em $z_i$ = new added variable} \\
end for; & & {\em homotopy sequence starts} \\
${\cal Z}_n$ := Solve(${\cal E}_n$);
& & {\em all roots are isolated, nonsingular, with $z_n \not= 0$}\\
for $i$ from $n-1$ down to 0 do
& & {\em countdown of dimensions} \\
\ \ \ $H_{i+1}$ := $
t {\cal E}_{i+1}
+
(1-t)\left(
\begin{array}{c}
{\cal E}_i \\
z_{i+1}
\end{array}
\right) $;
& & {\em \begin{tabular}{r}
homotopy continuation \\
$t: 1 \rightarrow 0$ to remove $z_{i+1}$ \\
\end{tabular} } \hspace{-5mm} \\
\ \ \ ${\cal X}_i$ := limits of solutions of $H_{i+1}$ \\
\ \ \ \ \ \ as $t\to 0$ with $z_i=0$;
& & {\em on component} \\
\ \ \ ${\cal Z}_i$ := $H_{i+1}({\bf x},z_{i} \not= 0,t=0)$;
& & {\em not on component: these solutions} \\
& & {\em are isolated and nonsingular} \\
end for. & & \\
\end{tabular}
}
\end{algorithm}
This embedding allows the efficient treatment of overdetermined systems
and other nonproper intersections.
By perturbing the added hyperplanes and extending the generic points by
continuation, interpolation methods can lead to equations for the components.
\section{The Geometry of the Deformations}
Homotopy methods have an intuitive geometric interpretation.
In this section we illustrate the geometry of the three types of
deformation into special position: product, toric, and Pieri deformations.
These can be regarded as three applications of the principle of
continuity or conservation of number in enumerative geometry.
\smallskip
Product homotopies deform polynomial equations into products of
linear equations. In Figure~\ref{figbasic} we see the line configuration
at the start and the ellipse-parabola intersection in the end.
Note that complex space is the natural space for deformations.
The other two complex conjugated intersection points could not be
displayed in Figure~\ref{figbasic}.
{\small
\begin{figure}[hbt]
\begin{center}
\centerline{\psfig{figure=psfbasic.ps}}
\caption{Intersection of quadrics: a degenerate and a target configuration.}
\label{figbasic}
\end{center}
\end{figure}
}
\medskip
The sparser a system, the easier it is to solve.
In Figure~\ref{figincrpoco} we illustrate the idea of making a system
sparser by setting up a so-called polyhedral homotopy that reduces this
particular system at $t=0$ to a linear system.
The lower hull of the Newton polytope of this homotopy
induces a triangulation, which is used to count the roots.
In particular, every cell in the triangulation gives rise to a homotopy
with as many paths to follow as the volume of the cell.
The other root for the example in Figure~\ref{figincrpoco} can be computed
with a homotopy obtained from~$\widehat P$ by the substitution
of variables $x_1 \leftarrow {\widetilde x}_1 t^{-1}$ and
$x_2 \leftarrow {\widetilde x}_2 t^{-1}$. This transformation pushes the
constant monomial up, so that at $t=0$ we have the nonconstant monomials
in the start system to compute the other root.
{\small
\begin{figure}[hbt]
\begin{center}
\centerline{\psfig{figure=psfincrpoco.ps}}
\caption{Triangulation of the Newton polytope of $P$ with
polyhedral homotopy $\widehat P$.}
\label{figincrpoco}
\end{center}
\end{figure}
}
\medskip
Figure~\ref{figlines} displays a special and a general configuration
of four lines. The basis has been chosen such that two of the four
input lines are spanned by standard basis vectors.
To compute all lines that meet four given lines, one of the four
given lines is moved into special position so that it intersects
two other given lines, see the left of Figure~\ref{figlines}.
The solution lines must then originate at those two intersection points
and reach to the other opposite line while meeting the line left in
general position.
\begin{figure}[hbt]
\begin{center}
\centerline{\psfig{figure=psflines.ps}}
\caption{In $\pp^3$ two thick lines meet four given lines
$L_1$, $L_2$, $L_3$, and $L_4$ in a point.
At the left we see a special configuration and the general
configuration is at the right.}
\label{figlines}
\end{center}
\end{figure}
\bigskip
The constructions above are in a sense~\cite{abh90} ``heuristic proofs''.
With the general position assumption we cheat a bit, avoiding the hard
problem of assigning multiplicities. Making this so-called~\cite{wei62}
``method of degeneration'' rigorous was an important development in
algebraic geometry.
\bigskip
To deal with solution paths diverging to ill-conditioned
roots or to infinity we need to compactify our space.
Instead of polynomials in $n$ variables we consider homogeneous forms
with coordinates subject to equivalence relations.
While mathematically all coordinate choices are equivalent,
we select the numerically most favorable representations
of the solutions.
\medskip
The usual projective transformation consists in the change of
variables $x_i := \frac{z_i}{z_0}$, for $i=1,2,\ldots,n$, which
leads to the homogeneous system $P({\bf z}) = {\bf 0}$.
To have as many equations as unknowns, we add to this system a
random hyperplane. Except for an algebraic set of the coefficients
of this added hyperplane, all solution paths are guaranteed to stay
inside ${\Bbb C}^{n+1}$ when homotopy continuation is applied.
We refer to~\cite{li97} for numerical techniques that dynamically
restrict the computations to $n$ dimensions.
\medskip
For sparse polynomial systems, we introduce as in~\cite{ver99}
a new variable for every facet of the Newton polytopes.
The advantage of this more involved compactification
is based on the observation that when paths diverge
to infinity certain coefficients of the polynomial system become dominant.
With toric homogenization the added variables that become zero identify
the faces of the Newton polytopes for the parts of the system that
become dominant. This compactification works in conjunction with
polyhedral end games~\cite{hv98} which are summarized in
Section~\ref{secsparhom}.
\medskip
The natural way to compactify $G_{mr}$ is to consider a multi-projective
homogenization according to rows or columns of the matrix representations
for the planes. In addition, we have that the planes are equivalent upon
a linear change of basis. Choosing orthonormal matrices to represent the
input planes leads to drastic improvements in the conditioning of the
solution paths, see~\cite{hv99} and~\cite{ver98} for experimental data.
\section{Root Counts and Start Systems}
The main principle is that counting roots corresponds to solving start
systems. Algorithms to illustrate this principle will be shown on small
examples for the three classes of polynomial systems.
\smallskip
For dense polynomial systems, the computation of generalized permanents
models the solution of linear-product start systems.
The algorithms to compute mixed volumes lead to polyhedral homotopies
to solve sparse polynomial systems.
The localization posets describe the structure of the cell decomposition of
the Grassmannian used to set up the Pieri deformations.
\subsection{Dense Polynomials modeled by Highest Degrees}
A polynomial in one variable has as many complex solutions as its degree.
A linear system has either infinitely many solutions or
exactly one isolated solution in projective space.
By this analogy~\cite{li87} we see that B\'ezout's theorem generalizes
these last two statements:
a polynomial system has either infinitely many solutions or exactly
as many isolated solutions in complex projective space as the total degree.
\medskip
As the above presentation of B\'ezout's theorem suggests, the simplest
cases are univariate and linear systems, which are used as start systems.
For the example system~(\ref{eqexsys}),
a start system~$P^{(0)}({\bf x}) = {\bf 0}$ based on the
total degree~$D$ is given by two univariate quartic equations
$x_1^4 - c_1 = 0 = x_2^4 - c_2$,
where $c_1$ and $c_2$ are randomly chosen complex numbers.
Note that the computation of $D = 4 \times 4$ models the structure of the
solutions of $P^{(0)}$ as four solutions for $x_1$ crossed with four solutions
for~$x_2$.
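The structure of this start system is easy to reproduce: its $16$ solutions are the pairs of fourth roots of two random complex numbers. The sketch below is illustrative code of our own, not tied to any package:

```python
# Start system based on the total degree D = 4 x 4:
#   x1^4 - c1 = 0,  x2^4 - c2 = 0,  with random complex c1, c2.
import cmath
import random

random.seed(1)
c1 = complex(random.random(), random.random())
c2 = complex(random.random(), random.random())

def fourth_roots(c):
    """The four complex solutions of x^4 = c."""
    r, phi = abs(c) ** 0.25, cmath.phase(c) / 4
    return [r * cmath.exp(1j * (phi + k * cmath.pi / 2)) for k in range(4)]

start_solutions = [(x1, x2) for x1 in fourth_roots(c1)
                            for x2 in fourth_roots(c2)]

assert len(start_solutions) == 16   # the total degree D = 4 * 4
for x1, x2 in start_solutions:
    assert abs(x1**4 - c1) < 1e-10 and abs(x2**4 - c2) < 1e-10
```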
\medskip
The earliest approaches of this homotopy
appear in~\cite{cmy79}, \cite{dre77}, \cite{gz79}, \cite{gl80},
and were further developed in~\cite{li83}, \cite{mor83}, \cite{wri85}.
The book~\cite{mor87} contains a very good introduction to the
practice of solving polynomial systems by homotopy continuation.
Regularity results can be found in~\cite{ls87} and~\cite{zul88}.
While this homotopy algorithm has a sound theoretical basis,
the total degree is too crude an upper bound on the number of
affine roots to be useful in most applications.
\medskip
Multi-homogeneous homotopies were introduced in~\cite{ms87a,ms87b} and applied
to various problems in mechanism design, see e.g.~\cite{wms90,wms92a}.
Similar are the random product homotopies~\cite{lsy87b,lsy87a},
which apply the intersection theory of~\cite{lw91}
but are less suitable for automatic procedures.
For our running example~(\ref{eqexsys}),
we follow the approach of multi-homogenization
and we group the unknowns according to the partition
$Z = \{ \{ x_1 \} , \{ x_2 \} \}$. The corresponding degree matrix $M_Z$
has in its $(i,j)$-th entry the degree of the $i$-th polynomial in the
variables of the $j$-th set of $Z$.
The 2-homogeneous B\'ezout bound $B_Z$ is the permanent of~$M_Z$.
\begin{equation}
\begin{array}{l}
P(x_1,x_2) = \\
~~~~~ \left\{
\begin{array}{r}
x_1^4 + x_1 x_2 + 1 = 0 \\
x_1^3 x_2 + x_1 x_2^2 + 1 = 0
\end{array}
\right.
\end{array}
\quad
\begin{array}{rcc}
M_Z & \! \! = \! \! & M_{\{ \{ x_1 \} , \{ x_2 \} \}} \\
& \! \! = \! \! & \left[
\begin{array}{cc}
4 & 1 \\
3 & 2
\end{array}
\right]
\end{array}
\quad
\begin{array}{rcl}
B_Z & \! \! = \! \! & {\rm per}(M_Z) \\
& \! \! = \! \! & 4 \times 2 + 3 \times 1 \\
& \! \! = \! \! & 11
\end{array}
\end{equation}
The computation of the permanent follows the expansion of the determinant,
except that all signs remain positive; it corresponds to adding up the roots
when solving the corresponding linear-product start system:
\begin{equation}
P^{(0)}({\bf x}) =
\left\{
\begin{array}{c}
{\displaystyle \prod_{i=1}^4 (x_1 - \alpha_{1i})
\prod_{i=1}^1 (x_2 - \beta_{1i})} = 0 \\
{\displaystyle \prod_{i=1}^3 (x_1 - \alpha_{2i})
\prod_{i=1}^2 (x_2 - \beta_{2i})} = 0 \\
\end{array}
\right.
\end{equation}
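The permanent expansion just described can be sketched in a few lines; this is illustrative code, not the optimized evaluation of~\cite{wam92}:

```python
# The permanent by expansion along the first row, i.e. the determinant
# expansion with all signs kept positive.
def per(M):
    """Permanent of a square integer matrix, by first-row expansion."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += entry * per(minor)
    return total

M_Z = [[4, 1],
       [3, 2]]
assert per(M_Z) == 4 * 2 + 3 * 1   # the 2-homogeneous bound B_Z = 11
```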
In most applications the grouping of variables follows from their meaning,
e.g.: for eigenvalue problems~$A {\bf x} = \lambda {\bf x}$,
$Z = \{ \{ \lambda \} , \{ x_1, x_2 ,\ldots ,x_n \} \}$.
Efficient permanent evaluations in conjunction with
exhaustive searching algorithms for finding an optimal grouping
were developed in~\cite{wam92}. In case the number of independent roots
equals the B\'ezout bound, interpolation methods~\cite{vbh91} are useful.
\medskip
Partitioned linear-product start systems were developed in~\cite{vh93a}
elaborating the idea that several different partitions can be used for
the polynomials in the system.
Motivated by symmetric applications~\cite{vc94a}, general linear-product
start systems were proposed in~\cite{vc93b}. These start systems are
based on a supporting set structure $S$ which provides a more refined model
of the degree structure of a polynomial system.
\begin{equation}
S =
\begin{array}{|c|} \hline
\{ x_1 \} \{ x_1 , x_2 \} \{ x_1 \} \{ x_1 \} \\
\{ x_1 \} \{ x_1 , x_2 \} \{ x_1 \} \{ x_2 \} \\ \hline
\end{array}
\end{equation}
To compute the bound formally, one collects all admissible $n$-tuples of sets,
picking one set out of every row in the set structure.
\begin{equation}
\begin{array}{rl}
B_S = & \! \! \! \! \# \{ ( \{ x_1 \} , \{ x_1 , x_2 \} ) ,
( \{ x_1 \} , \{ x_2 \} ) ,
( \{ x_1 , x_2 \} , \{ x_1 \} ) , \\
& ~~~ ( \{ x_1 , x_2 \} , \{ x_1, x_2 \} ) ,
( \{ x_1 , x_2 \} , \{ x_1 \} ) ,
( \{ x_1 , x_2 \} , \{ x_2 \} ) , \\
& ~~~ ( \{ x_1 \} , \{ x_2 \} ) ,
( \{ x_1 , x_2 \} , \{ x_2 \} ) ,
( \{ x_1 \} , \{ x_2 \} ) ,
( \{ x_1 \} , \{ x_2 \} ) \}
\end{array}
\end{equation}
Each admissible pair corresponds to a linear system that leads to a
solution of a generic start system:
\begin{equation}
P^{(0)}({\bf x}) =
\left\{
\begin{array}{c}
(x_1+c_{11})(x_1+c_{12} x_2+c_{13})(x_1+c_{14})(x_1+c_{15}) = 0 \\
(x_1+c_{21})(x_1+c_{22} x_2+c_{23})(x_1+c_{24})(x_2+c_{25}) = 0 \\
\end{array}
\right.
\end{equation}
This start system has $B_S = 10$ solutions.
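The count $B_S$ can be reproduced by direct enumeration, keeping the tuples whose chosen sets jointly cover both unknowns (so that the corresponding generic linear system has a unique solution). The sketch below is ours, with admissibility taken as this covering condition:

```python
# Count B_S for the set structure S above: one set per row, tuple kept
# when the union of the chosen sets contains both variables.
from itertools import product

row1 = [{"x1"}, {"x1", "x2"}, {"x1"}, {"x1"}]
row2 = [{"x1"}, {"x1", "x2"}, {"x1"}, {"x2"}]

admissible = [(s1, s2) for s1, s2 in product(row1, row2)
              if s1 | s2 == {"x1", "x2"}]

assert len(admissible) == 10   # B_S = 10 < B_Z = 11
```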
In~\cite{vc93b}, the following theorems were proven.
\begin{theorem} Except for a choice of coefficients belonging to an
algebraic set, there are exactly $B_S$ regular solutions to a random
linear-product system based on the set structure~$S$.
\end{theorem}
The proof of the theorem consists in collecting the determinants
that express the degeneracy conditions. These determinants are
polynomials in the coefficients and vanish at an algebraic set.
\medskip
\begin{theorem} All isolated solutions to~$P({\bf x}) = {\bf 0}$ lie
at the end of some solution path defined by a convex-linear homotopy
originating at a solution of a random linear-product start system,
based on a supporting set structure for~$P$.
\end{theorem}
The idea of the proof is to embed the homotopy into an appropriate
projective space and to consider the projection of the discriminant
variety as an algebraic set for the continuation parameter.
See~\cite{lww96} for an alternative proof.
\medskip
A general approach to exploit product structures was developed
in~\cite{msw95}. For systems whose polynomials are sums of products
one may arrive at a much tighter bound by replacing the products by one
simple product. An efficient homotopy to solve the nine-point problem
in mechanical design was obtained in this way.
\medskip
The complexity of this homotopy based on the total degree is
addressed in~\cite{bcss97} where $\alpha$-theory is applied
to give bounds on the number of steps needed to trace
the solution paths. A major result is that one can decide in
polynomial time whether an average polynomial system has a solution.
A similar analysis of Newton's method in multi-projective space
was recently done in~\cite{ds97}.
\medskip
While the above complexity results apply to random systems,
the problem of automatically extracting and exploiting the
degree structure of a polynomial system is a much harder problem.
Finding an optimal multi-homogeneous grouping essentially requires
the enumeration of all partitions~\cite{wam92}.
With supporting set structures one may obtain a high success rate,
see~\cite{lww96} for an efficient heuristic algorithm.
Recent software extensions for finding optimal partitioned linear-product
start systems are in~\cite{wsw98}.
\subsection{Mixed Subdivisions of Newton Polytopes to compute Mixed Volumes}
For~(\ref{eqexsys}), we collect the exponent vectors of the system
$P$ in the supports $\cal A$:
\begin{equation} \label{eqexpolysup}
\begin{array}{l}
P(x_1,x_2) = \\
~~~~~ \left\{
\begin{array}{r}
x_1^4 + x_1 x_2 + 1 = 0 \\
x_1^3 x_2 + x_1 x_2^2 + 1 = 0
\end{array}
\right.
\end{array}
\quad
\begin{array}{l}
{\cal A} = (A_1,A_2) \\
~~~~ A_1 = \{ (0,0) , (1,1) , (4,0) \} \\
~~~~ A_2 = \{ (0,0) , (1,2) , (3,1) \}
\end{array}
\end{equation}
The supports $A_1$ and $A_2$ span the respective Newton polytopes
$Q_1$ and $Q_2$.
\medskip
The Cayley trick~\cite[Proposition 1.7, page 274]{gkz94} is a method
to rewrite a certain resultant as a discriminant of one single
polynomial with additional variables.
The polyhedral version of this trick as in~\cite[Lemma 5.2]{stu94}
is due to Bernd Sturmfels.
It provides a one-to-one correspondence between the cells in a mixed
subdivision and a triangulation of the so-called Cayley polytope spanned
by the points of $A_i$ embedded in a $(2n-1)$-dimensional space.
See~\cite{hrs99} for another application besides mixed-volume computation.
As in~\cite{hrs99}, Figure~7 gives a ``one-picture proof'' of this trick,
displaying the Cayley polytope for the supports~${\cal A}$
in~(\ref{eqexpolysup}). Note that this construction provides a
definition for mixed subdivisions.
\medskip
\noindent~\parbox[t]{8cm}{
The Cayley polytope is spanned by the points in $A_1$,
where each point of $A_1$ is extended with $n-1$ zero coordinates,
and the points in $A_{i+1}$, where each point of $A_{i+1}$ is extended
with the respective $i$-th standard basis vector, for $i=1,2,\ldots,n-1$.
\smallskip
Omitting the added coordinates of this Cayley embedding, every cell
in a triangulation of the Cayley polytope is identified with a cell
in a mixed subdivision of the original tuple of polytopes.
We can see this identification geometrically when slicing the
Cayley polytope with a hyperplane that separates the embedded
polytopes. As in Figure~7, the slice contains
$\lambda_1 Q_1 + \lambda_2 Q_2$ and the cells of a mixed subdivision
are cut out by the cells in a triangulation of the Cayley polytope.
}~\hspace{-0.3cm}
\parbox[t]{10cm}{
\begin{center}
\centerline{\psfig{figure=psfcayley.ps}}
\medskip
Figure~7 : Cayley polytope of $Q_1$ and $Q_2$.
\addtocounter{figure}{1}
\end{center}
}
\medskip
The Cayley trick was
implemented in~\cite{vgc96} as an application of the dynamic
lifting algorithm to construct regular triangulations.
This method calculates the volume polynomial~(\ref{eqvolpol}) completely.
When one is only interested in the mixed volume, the method is only
efficient when the supports do not differ much from each other.
\medskip
To compute only the mixed volume,
the lift-and-prune approach was presented in~\cite{ec95},
using a primal model to prune in the tree of edge-edge combinations.
This approach operates in two stages. First the polytopes are lifted
by adding one coordinate to every point in the supports.
In the second stage, one computes the facets of the lower hull of
the Minkowski sum that are spanned by sums of edges.
These facets constitute the {\em mixed cells} in a mixed subdivision.
On the supports~$\cal A$ in~(\ref{eqexpolysup}),
we consider the lifted supports
\begin{equation} \label{eqlifted}
{\widehat {\cal A}} = ({\widehat A}_1,{\widehat A}_2) \quad
\begin{array}{l}
{\widehat A}_1 = \{ (0,0,1) , (1,1,0) , (4,0,0) \} \\
{\widehat A}_2 = \{ (0,0,0) , (1,2,0) , (3,1,1) \}
\end{array}
\end{equation}
The lower hulls of the lifted polytopes are displayed in
Figure~\ref{fig3dmcc}.
\medskip
\begin{figure}[hbt]
\begin{center}
\centerline{\psfig{figure=psf3dmcc.ps}}
\caption{Lifted polytopes ${\widehat Q}_1$, ${\widehat Q}_2$, and
a regular mixed subdivision of ${\widehat Q}_1 + {\widehat Q}_2$.}
\label{fig3dmcc}
\end{center}
\end{figure}
The two cells that contribute to the mixed volume are identified by
inner normals $\alpha$ and $\beta$ that satisfy
systems of linear equations and inequalities:
\smallskip
\begin{equation} \label{eqdual}
\begin{array}{l}
\! \! \! \! \alpha = (0,0,1) \\
\left\{
\begin{array}{rcl}
4 \alpha_1 = \alpha_1 + \alpha_2 & < & 1 \\
\alpha_1 + 2 \alpha_2 = 0 & < & 3 \alpha_1 + \alpha_2 + 1
\end{array}
\right.
\end{array}
\quad \quad
\begin{array}{l}
\! \! \! \! \beta = (2,-1,1) \\
\left\{
\begin{array}{rcl}
\beta_1 + \beta_2 = 1 & < & 4 \beta_1 \\
\beta_1 + 2 \beta_2 = 0 & < & 3 \beta_1 + \beta_2 + 1
\end{array}
\right.
\end{array}
\end{equation}
\smallskip
\noindent These systems express that the cells correspond to facets
spanned by the sum of two edges on the lower hulls of
${\widehat Q}_1$ and ${\widehat Q}_2$ respectively.
The lift-and-prune method with a dual version of the linear inequality
constraints as in~(\ref{eqdual}) was elaborated in~\cite{vgc96},
exploiting the fact that several polynomials can share the same
Newton polytope (see~\cite{hs95}) and with dimension reductions.
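For the running example, the mixed volume can be verified independently of any mixed-cell computation via the two-dimensional identity $V_2(Q_1,Q_2) = {\rm area}(Q_1+Q_2) - {\rm area}(Q_1) - {\rm area}(Q_2)$. The following self-contained sketch (not an efficient mixed-volume algorithm) confirms the value $7 + 1 = 8$ contributed by the two mixed cells:

```python
# Mixed volume of the two Newton polytopes of the running example, via
# area(Q1 + Q2) - area(Q1) - area(Q2) in the plane.
from itertools import product

def hull(points):
    """Andrew's monotone chain; hull vertices in counterclockwise order."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def chain(pts):
        h = []
        for p in pts:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    pts = sorted(set(points))
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def area(points):
    """Area of the convex hull, by the shoelace formula."""
    v = hull(points)
    s = sum(v[i][0]*v[(i+1) % len(v)][1] - v[(i+1) % len(v)][0]*v[i][1]
            for i in range(len(v)))
    return abs(s) / 2

A1 = [(0, 0), (1, 1), (4, 0)]
A2 = [(0, 0), (1, 2), (3, 1)]
minkowski = [(a[0]+b[0], a[1]+b[1]) for a, b in product(A1, A2)]

mv = area(minkowski) - area(A1) - area(A2)
assert mv == 8.0   # matches the mixed cell volumes 7 + 1
```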
\medskip
The geometric dual construction to Figure~\ref{fig3dmcc} is
displayed in Figure~\ref{figfans}.
\begin{figure}[hbt]
\begin{center}
\centerline{\psfig{figure=psffans.ps}}
\caption{On the left we see the projection of a regular mixed subdivision
of ${\widehat Q}_1 + {\widehat Q}_2$. On the right, we have the dual
construction with complexes ${\cal N}_{\vee}^{1}(Q_1)$ and
${\cal N}_{\vee}^{1}(Q_2)$
collecting the cones of all vectors normal to the edges on the lower hulls
of ${\widehat Q}_1$ and ${\widehat Q}_2$ respectively.
The intersection of the cones contains the normals to the mixed cells.}
\label{figfans}
\end{center}
\end{figure}
As in~\cite{hs95}, we assume that there are $r$ different Newton polytopes.
Given a tuple of lifted point sets
${\widehat {\cal A}}
= ( {\widehat A}_1, {\widehat A}_2, \ldots, {\widehat A}_r )$,
any lifted cell ${\widehat {\cal C}}_{\bf v}$ of a regular subdivision
can be characterized by its inner normal as
\begin{equation}
{\widehat {\cal C}}_{\bf v} = (
\partial_{\bf v} {\widehat A}_1, \partial_{\bf v} {\widehat A}_2, \ldots ,
\partial_{\bf v} {\widehat A}_r ).
\end{equation}
Since ${\rm conv}({\widehat C}_{\bf v})
= {\rm conv}(\sum_{i=1}^r \partial_{\bf v} {\widehat A}_i)$
is a facet of the lower hull,
the inner product $\langle . , {\bf v} \rangle$ attains
its minimum over ${\widehat A}_i$ at $\partial_{\bf v} {\widehat A}_i$, i.e.,
\begin{equation} \label{equa}
\forall {\widehat {\bf a}}, {\widehat {\bf b}}
\in \partial_{\bf v} {\widehat A}_i: \
\langle {\widehat {\bf a}} , {\bf v} \rangle
= \langle {\widehat {\bf b}} , {\bf v} \rangle, \quad i = 1,2,\ldots,r,
\end{equation}
\begin{equation} \label{inequa}
\forall {\widehat {\bf a}} \in {\widehat A}_i \setminus
\partial_{\bf v} {\widehat A}_i, \
\forall {\widehat {\bf b}} \in \partial_{\bf v} {\widehat A}_i: \
\langle {\widehat {\bf a}} , {\bf v} \rangle
> \langle {\widehat {\bf b}} , {\bf v} \rangle, \quad i = 1,2,\ldots,r.
\end{equation}
Algorithm~\ref{algshafac} (presented in~\cite{vgc96})
gives a way to compute all mixed cells by searching for feasible solutions
to the constraints~(\ref{equa}) and~(\ref{inequa}).
The algorithm generates a tree of all possible combinations of $k_i$-faces,
with feasibility tests to prune branches that do not lead to mixed cells.
The order of enumeration is organized so that mixed cells which
share some faces also share a part of the factorization work to be
done to solve the system defined by~(\ref{equa}).
\begin{algorithm} \label{algshafac}
{\rm Pruning algorithm with shared factorizations subject to
inequality constraints:
\begin{center}
\begin{tabular}{lcr}
Input: $({\widehat A}_1,{\widehat A}_2, \ldots, {\widehat A}_r)$,
& & {\em lifted point sets} \ \\
\ \ \ \ \ \ \ \ \ $\!$ ${\bf k} = (k_1,k_2,\ldots,k_r)$,
$n = \sum_{i=1}^r k_i$, & & {\em $A_i$ appears $k_i$ times } \ \\
\ \ \ \ \ \ \ \ \ $\!$
$({\widehat {\cal F}}_1,{\widehat {\cal F}}_2, \ldots,
{\widehat {\cal F}}_r)$.
& \ \ & {\em $k_i$-faces of lower hull of ${\rm conv}({\widehat A}_i)$} \ \\
Output: ${\widehat {\frak S}_\omega}
= \{ \ {\widehat {\cal C}} \in {\widehat S}_\omega \ |
\ V_n({\cal C},{\bf k}) > 0 \ \}$.
& & {\em collection of lifted mixed cells} \ \\
\\
{\bf At level} $i$, $1 \leq i < r$: & & \\
\ \ {\em DATA and INVARIANT CONDITIONS}: & & \\
\ \ \ \ \ $\!$ $(M_1,\kappa)$: \ \
$M_1 {\bf v} = {\bf 0} \not\Rightarrow v_{n+1} = 0$,
$\kappa = {\displaystyle \sum_{j=1}^{i-1} k_j}$
& & \begin{tabular}{r}
{\em equalities} (\ref{equa}) \\
{\em upper triangular up to row $\kappa$}
\end{tabular} \\
\ \ \ \ \begin{tabular}{ll}
$(M_2,\kappa)$: &
$M_2 {\bf v} \geq {\bf 0} \not\Rightarrow -v_{n+1} \geq 0$ \\
& $\dim(M_2) = n-\kappa$
\end{tabular}
& & \begin{tabular}{r}
{\em inequalities} (\ref{inequa}) \\
{\em still feasible and reduced}
\end{tabular} \\
\ \ {\em ALGORITHM}: & & \\
\ \ \ \ \ for each ${\widehat C}_i \in {\widehat {\cal F}}_i$ loop
& & {\em enumerate over all $k_i$-faces} \ \\
\ \ \ \ \ \ \ {\em Triangulate}$(M_1,\kappa,{\widehat C}_i)$;
& & {\em ensure invariant conditions} \ \\
\ \ \ \ \ \ \ if $M_1 {\bf v} = {\bf 0} \not\Rightarrow v_{n+1} = 0$
& & {\em test for feasibility w.r.t.} (\ref{equa}) \ \\
\ \ \ \ \ \ \ \ then {\em Eliminate}$(M_1,M_2,\kappa,{\widehat C}_i,
{\widehat A}_i)$;
& & {\em eliminate unknowns} \ \\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if $M_2 {\bf v} \geq {\bf 0} \not\Rightarrow
-v_{n+1} \geq 0$
& & {\em test for feasibility w.r.t.} (\ref{inequa}) \ \\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ then proceed to next level $i+1$; & & \\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ end if; & & \\
\ \ \ \ \ \ \ end if; & & \\
\ \ \ \ \ end for. & & \\
{\bf At level} $i=r$: & & \\
\ \ Compute $\bf v$: $M_1 {\bf v} = {\bf 0}$; & & \\
\ \ Merge the new cell ${\cal C}_{\bf v}$ with the
list ${\widehat {\frak S}_\omega}$. & &
\end{tabular}
\end{center}
}
\end{algorithm}
Note that (\ref{inequa}) has to be weakened to $\geq$ type inequalities
in order to also compute subdivisions that are not fine.
This also explains the merge operation at the end.
The feasibility tests in the algorithm allow an efficient computation
of the mixed cells.
The conditions~(\ref{equa}) and~(\ref{inequa}) are verified incrementally.
After choosing a $k_i$-face ${\widehat C}_i = \{
{\widehat {\bf c}}_{0i}, {\widehat {\bf c}}_{1i} , \ldots ,
{\widehat {\bf c}}_{k_i i} \}$ of ${\widehat A}_i$, linear programming
is used to check whether $({\widehat C}_1, \ldots , {\widehat C}_i )$
can lead to a mixed cell in the induced subdivision.
\medskip
We end this section with complexity results.
The complexity of computing mixed volumes is proven~\cite{dgh98}
to be $\#P$-hard. This complexity class is typical for all enumerative
problems, since, unlike the class $NP$, there exists no algorithm that
runs in polynomial time for arbitrary dimensions to verify that
a guessed answer is correct.
Although the current algorithmical practice suggests that computing mixed
volumes is harder than computing volumes of polytopes (which is also
known as a $\#P$-hard problem~\cite{df88}), this is not the case
from a complexity point of view as shown in~\cite{dgh98}.
In~\cite{emi96} it is shown that the mixed volume~$V_n({\cal Q})$
is bounded from below by $n! {\rm vol}_n(Q_\mu)$, $Q_\mu$ being the
polytope of minimum volume in~$\cal Q$.
\subsection{Sparse Polynomial Systems solved by Polyhedral Homotopies}
\label{secsparhom}
The simplest system in the polytope model that still has isolated solutions
in~$({\Bbb C}^*)^n$ has exactly two terms in every equation.
Polyhedral homotopies~\cite{hs95} solve systems with random complex
coefficients starting from sparser subsystems. For~(\ref{eqexsys}),
the homotopy with supports~$\widehat {\cal A}$ as in~(\ref{eqlifted}) is
\begin{equation} \label{eqexpolyhom}
{\widehat P}(x_1,x_2,t) =
\left\{
\begin{array}{r}
c_1 x_1^4 t^0 + c_2 x_1 x_2 t^0 + c_3 t^1 = 0 \\
c'_1 x_1^3 x_2 t^1 + c'_2 x_1 x_2^2 t^0 + c'_3 t^0 = 0
\end{array}
\right.
\quad {\rm with~} c_i,c'_i \in {\Bbb C}^*.
\end{equation}
The exponents of~$t$ are the values of the lifting~$\omega$
applied to the supports.
\medskip
To find the start systems, we look at Figure~\ref{fig3dmcc},
at the subdivision that is induced by this lifting process.
The start systems have Newton polytopes spanned by one edge of the first
and one edge of the second polytope.
Since the two cells that contribute to the mixed volume
are characterized by their inner normals $\alpha$ and $\beta$
satisfying~(\ref{eqdual}) we denote the start systems respectively
by $P^\alpha$ and $P^\beta$.
To compute start solutions, unimodular transformations make the
system triangular as follows.
After dividing the equations so that the constant term is present,
we apply the substitution $x_1 = y_2$, $x_2 = y_1^{-1} y_2^3$ on
$P^\alpha$ as follows:
\begin{equation} \label{eqbin}
P^{\alpha}({\bf x})
= \left\{
\begin{array}{r}
x_1^3 x_2^{-1} + c''_1 = 0 \\
x_1 x_2^2 + c''_2 = 0
\end{array}
\right.
\quad
P^{\alpha}({\bf x} = {\bf y}^U)
= \left\{
\begin{array}{r}
y_1 + c''_1 = 0 \\
y_1^{-2} y_2^7 + c''_2 = 0
\end{array}
\right.
\end{equation}
\noindent The substitution in (\ref{eqbin}) is apparent in the notation
(as used in~\cite{li97})
${\bf x}^V = ({\bf y}^U)^V = {\bf y}^{VU} = {\bf y}^L$ elaborated as
\begin{equation}
\begin{array}{rcl}
\left(
\begin{array}{c}
x_1^3 \cdot x_2^{-1} \\ x_1^1 \cdot x_2^2
\end{array}
\right)
& = &
\left(
\begin{array}{c}
(y_1^0 y_2^1)^3 \cdot (y_1^{-1} y_2^3)^{-1} \\
(y_1^0 y_2^1)^1 \cdot (y_1^{-1} y_2^3)^2
\end{array}
\right) \\
& = &
\left(
\begin{array}{c}
y_1^{3 \cdot 0 - 1 \cdot (-1)} \cdot y_2^{3 \cdot 1 - 1 \cdot 3} \\
y_1^{1 \cdot 0 + 2 \cdot (-1)} \cdot y_2^{1 \cdot 1 + 2 \cdot 3}
\end{array}
\right)
=
\left(
\begin{array}{c}
y_1^1 \cdot y_2^0 \\ y_1^{-2} \cdot y_2^7
\end{array}
\right).
\end{array}
\end{equation}
The exponents are calculated by the factorization $VU = L$:
\begin{equation}
\left[ \begin{array}{rr}
3 & -1 \\ 1 & 2
\end{array}
\right]
\left[ \begin{array}{rr}
0 & 1 \\ -1 & 3
\end{array}
\right] =
\left[ \begin{array}{rr}
1 & 0 \\ -2 & 7
\end{array}
\right] .
\end{equation}
Since $\det(U) = 1$, the matrix $U$ is called unimodular.
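The triangular form makes the binomial start system $P^{\alpha}$ trivial to solve: first $y_1 = -c''_1$, then the single binomial $y_1^{-2} y_2^7 = -c''_2$ yields seven choices of $y_2$, and the substitution is undone by ${\bf x} = {\bf y}^U$. An illustrative sketch, with names of our own:

```python
# Solve the binomial start system x1^3 x2^(-1) + c1 = 0, x1 x2^2 + c2 = 0
# through the unimodular substitution x1 = y2, x2 = y1^(-1) y2^3.
import cmath
import random

random.seed(5)
c1 = complex(random.random(), random.random())   # stands for c''_1
c2 = complex(random.random(), random.random())   # stands for c''_2

y1 = -c1                                         # from y1 + c1 = 0
rhs = -c2 * y1**2                                # y2^7 = -c2 * y1^2
r, phi = abs(rhs) ** (1 / 7), cmath.phase(rhs) / 7
solutions = []
for k in range(7):
    y2 = r * cmath.exp(1j * (phi + 2 * cmath.pi * k / 7))
    x1, x2 = y2, y2**3 / y1                      # undo x = y^U
    solutions.append((x1, x2))

assert len(solutions) == 7                       # |det(L)| = 7 start roots
for x1, x2 in solutions:
    assert abs(x1**3 / x2 + c1) < 1e-9
    assert abs(x1 * x2**2 + c2) < 1e-9
```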
\medskip
The polyhedral homotopy~(\ref{eqexpolyhom}) directly extends
the solutions of $P^{\alpha}$ to the target system.
To obtain a homotopy starting
at the solutions of~$P^{\beta}$, we substitute in~(\ref{eqexpolyhom})
$x_1 \leftarrow {\widetilde x}_1 t^{\beta_1}$,
$x_2 \leftarrow {\widetilde x}_2 t^{\beta_2}$
and clear out the lowest powers of~$t$.
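The effect of this substitution on the powers of $t$ can be checked mechanically: a term with exponent vector ${\bf a}$ and lifting value $\omega({\bf a})$ acquires the power $t^{\omega({\bf a}) + \langle {\bf a}, \beta \rangle}$, and clearing the lowest power per equation leaves the start binomials at $t^0$. A small sketch in our notation:

```python
# Exponents of t after the substitution x_i <- x~_i t^(beta_i) in the
# polyhedral homotopy; the terms left at t^0 form the start system P^beta.
beta = (2, -1)

# (exponent vector, lifting value) per equation, from the lifted supports
eq1 = [((4, 0), 0), ((1, 1), 0), ((0, 0), 1)]
eq2 = [((3, 1), 1), ((1, 2), 0), ((0, 0), 0)]

def shifted_exponents(eq):
    """Powers of t per term after clearing the lowest power."""
    raw = [w + a[0]*beta[0] + a[1]*beta[1] for a, w in eq]
    low = min(raw)
    return [e - low for e in raw]

for eq, edge in ((eq1, [(1, 1), (0, 0)]), (eq2, [(1, 2), (0, 0)])):
    start_terms = [a for (a, w), e in zip(eq, shifted_exponents(eq)) if e == 0]
    assert start_terms == edge   # the binomial edge supported by beta
```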
This construction appeared in~\cite{hs95} and provides an algorithmic
proof of the following theorem.
\begin{theorem} \label{theobera}
(Bernshte\v{\i}n~{\rm \cite[Theorem A]{ber75}})
For a general choice of coefficients for~$P$,
the system $P({\bf x}) = {\bf 0}$ has
exactly as many regular solutions as its mixed volume~$V_n({\cal Q})$.
\end{theorem}
The original algorithm Bernshte\v{\i}n used in his proof was implemented
in~\cite{vvc94}.
\medskip
For the numerical stability of polyhedral continuation, it is important
to have subdivisions induced by low lifting values, since those influence
the power of the continuation parameter.
In~\cite{vgc96} explicit lower bounds on integer lifting values were derived,
but unfortunately the dynamic lifting algorithm does not generalize that
well~\cite{lw98} if one is only interested in the mixed cells of a mixed
subdivision. A balancing method was proposed in~\cite{glvw98} to improve
the stability of homotopies induced by random floating-point lifting values.
\medskip
Once all solutions to a polynomial system with randomly generated coefficients
are computed, we use cheater's homotopy to solve any specific system with
the same Newton polytopes.
One could say that polyhedral homotopies have removed the cheating part.
The main advantage of polyhedral methods is that the mixed volume is a much
sharper root count in most applications, leading to fewer paths to trace.
They also allow more flexibility to exploit symmetry as demonstrated
in~\cite{vg95}.
\medskip
In case the system has fewer isolated solutions than the mixed volume,
we consider the face systems. Define the face of a
polynomial $p$ with support $A$ as follows:
\begin{equation}
p({\bf x}) = \sum_{{\bf a} \in A} c_{\bf a} {\bf x}^{\bf a}
\quad \mbox{has faces} \quad
\partial_{\bf v} p({\bf x})
= \sum_{{\bf a} \in \partial_{\bf v} A} c_{\bf a} {\bf x}^{\bf a}
\quad \mbox{for } {\bf v} \not= {\bf 0}.
\end{equation}
For ${\bf v} \not= {\bf 0}$, the corresponding face system of $P$
is $\partial_{\bf v} P = ( \partial_{\bf v} p_1,
\partial_{\bf v} p_2, \ldots , \partial_{\bf v} p_n )$.
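Extracting a face of a support is a one-line filter; a minimal sketch, assuming the convention that $\langle {\bf a}, {\bf v} \rangle$ attains its minimum on the face $\partial_{\bf v} A$ (the support and directions below are hypothetical):

```python
def face(support, v):
    """Exponents of the face of a support in direction v.

    Convention assumed here: the face collects the points of the support
    where the inner product with v is minimal, matching the lowest powers
    of the continuation parameter in a polyhedral homotopy.
    """
    vals = [sum(ai*vi for ai, vi in zip(a, v)) for a in support]
    m = min(vals)
    return [a for a, val in zip(support, vals) if val == m]

# hypothetical support of a polynomial in two variables
A = [(0, 0), (2, 0), (1, 1), (0, 3)]
print(face(A, (1, 0)))    # [(0, 0), (0, 3)]: an edge of the Newton polygon
print(face(A, (-1, -1)))  # [(0, 3)]: a vertex
```

Applying `face` to every support of $P$ yields the face system $\partial_{\bf v} P$ for that direction.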
\begin{theorem} \label{theoberb}
(Bernshte\v{\i}n~{\rm \cite[Theorem B]{ber75}})
Suppose $V_n({\cal Q}) > 0$. Then,
$P({\bf x}) = {\bf 0}$ has fewer than~$V_n({\cal Q})$
isolated solutions if and only if $\partial_{\bf v} P({\bf x}) = {\bf 0}$
has a solution in $({\Bbb C}^*)^n$, for~${\bf v} \not= {\bf 0}$.
\end{theorem}
As is the case for our running example~(\ref{eqexsys}),
the Newton polytopes may be in generic position such that for any
nonzero choice of the coefficients, the system has exactly as
many isolated solutions as the mixed volume.
In practical applications, however, how can we decide whether paths are
really going towards infinity?
Relying on the actual computed values is arbitrary,
because $10^4$ is as far from infinity as~$10^8$, so we need
algebraic structural data to certify the divergence.
\medskip
In the polyhedral end game~\cite{hv98} solution paths are represented
by power series expansions:
\begin{equation} \label{eqpower}
\left\{
\begin{array}{rcl}
x_i(s) & = & a_i s^{v_i} ( 1 + O(s) ) \\
t(s) & = & 1 - s^m
\end{array}
\right.
\quad \quad t \approx 1, \quad s \approx 0.
\end{equation}
The winding number $m$ is less than or equal to the multiplicity of the
solution. For a solution diverging to infinity or to a zero-component
solution we observe that $v_i \not= 0$.
According to Theorem~\ref{theoberb}, this solution vanishes at a
face system $\partial_{\bf v} P$
(same ${\bf v}$ with components $v_i$ as in~(\ref{eqpower})),
certifying the divergence.
\medskip
Checking whether a solution path really diverges thus amounts to
testing the value of~$v_i$. A first-order approximation
of $v_i$ can be computed by
\begin{equation}
\frac{\log|x_i(s_1)| - \log|x_i(s_0)|}{\log(s_1) - \log(s_0)}
= v_i + O(s_0),
\end{equation}
with $0 < s_1 < s_0$. The above formula assumes the correct value for~$m$.
To compute $m$, solution paths are sampled geometrically with ratio $h$
as $s_k = h^{k/m} s_0$. The errors on the estimates for~$v_i$ are
\begin{eqnarray}
e_i^{(k)} & = & (\log|x_i(s_k)| - \log|x_i(s_{k+1})|)
- (\log|x_i(s_{k+1})| - \log|x_i(s_{k+2})|) \\
& = & c_1 h^{k/m} s_0 (1 + O(h^{k/m})).
\end{eqnarray}
An estimate for~$m$ is derived from two consecutive errors~$e_i^{(k)}$.
Extrapolation improves this estimate.
So, by an inexpensive side calculation at the end of the paths, we obtain
important structural algebraic information about the system.
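The slope estimate for $v_i$ can be illustrated on a synthetic path; a sketch, with the path $x(s)$ and its leading exponent $v = -2$ chosen purely for illustration:

```python
import math

def slope(x, s0, s1):
    """First-order estimate of the leading exponent v from two samples,
    with 0 < s1 < s0, following the formula above."""
    return (math.log(abs(x(s1))) - math.log(abs(x(s0)))) \
         / (math.log(s1) - math.log(s0))

# synthetic path x(s) = a s^v (1 + O(s)) with leading exponent v = -2,
# mimicking a solution component diverging to infinity as s -> 0
a, v = 0.7, -2
x = lambda s: a * s**v * (1 + 0.3 * s)

print(slope(x, 1e-3, 1e-4))  # approximately -2, with error O(s0)
```

Shrinking $s_0$ tightens the estimate, consistent with the $O(s_0)$ error term.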
\medskip
Recall that $V_n({\cal Q})$ counts the roots in~$({\Bbb C}^*)^n$.
Using Newton polytopes to count affine roots
(i.e.: in~${\Bbb C}^n$ instead of~$({\Bbb C}^*)^n$)
was proposed in~\cite{roj94} with the notion of shadowed polytopes obtained by
the substitution $x_i \leftarrow x_i + c_i$ for arbitrary constants $c_i$.
To arrive at sharper bounds, it suffices (see~\cite{lw96} and~\cite{rw96})
to add a random constant to every equation.
Stable mixed volumes~\cite{hs97} provide a generically sharp affine root count.
The constructions in~\cite{ev99} and~\cite{glw98} avoid the use of
recursive liftings to compute stable mixed volumes.
Further developments and generalizations can be found in~\cite{roj99}.
\subsection{Determinantal Polynomials arising in Enumerative Geometry}
Homotopies for solving problems in enumerative geometry appeared
in~\cite{hss98}. The algorithms in the numerical Schubert calculus
originated from questions in real enumerative geometry~\cite{sot97a,sot97b}
and have their main application to the pole placement problem~\cite{byr89},
\cite{rrw96,rrw98}, \cite{ros94}, \cite{rw99} in control theory.
\medskip
The enumerative problems are formalized in some ``finiteness'' theorems,
avoiding the explicit but involved (as in~(\ref{eqgrassdeg})) formulas
for the root counts.
\begin{theorem} \label{theoprob1}
The number of $r$-planes meeting $mr$ general $m$-planes in~${\Bbb C}^{m+r}$
is a finite constant.
\end{theorem}
The first homotopy presented in~\cite{hss98} uses a Gr\"obner basis
for the ideal that defines $G_{mr}$, as is derived in~\cite{stu93}.
By Gr\"obner bases questions concerning any polynomial system are solved
by relation to monomial equations.
Every Gr\"obner basis defines a flat deformation, which preserves
the structure of the solution set~\cite{eis95}.
Geometrically, this type of deformation is used to collapse the solution
set in projective space to the coordinate hyperplanes, or in the opposite
direction, to extend the solutions of the monomial equations to those
of the original system.
The flat deformations that are obtained in this way are similar to
toric deformations in the sense that one moves from the solutions of
a subsystem to the solutions of the whole system.
\medskip
The Gr\"obner homotopies of~\cite{hss98} work in the synthetic
Pl\"ucker embedding, and need to take the large set of defining
equations of $G_{mr}$ into account.
When expanding the minors into local coordinates, these equations
are automatically satisfied, which leads to a much smaller
polynomial system. Consequently, the second type of homotopies
of~\cite{hss98}, the so-called SAGBI homotopies are more efficient.
Instead of an ideal, we now have a subalgebra and work with
a SAGBI basis,
i.e.: the Subalgebra Analogue to a Gr\"obner Basis for an Ideal.
The term order selects the monomials on the diagonal as the dominant
ones. This implies that in the flat deformation (see~\cite{stu96}
for a general description) only the diagonal monomials remain at~$t=0$.
\medskip
For $m=2=r$, the equations of the SAGBI homotopy in determinantal form are
\begin{equation} \label{eqsabi22det}
p_i({\bf x}) =
\det \left[
\left.
\begin{array}{cc}
c_{11}^{(i)} & c_{12}^{(i)} \\
c_{21}^{(i)} & c_{22}^{(i)} \\
c_{31}^{(i)} & c_{32}^{(i)} \\
c_{41}^{(i)} & c_{42}^{(i)} \\
\end{array}
\right|
\begin{array}{cc}
x_{11} ~ & x_{12} \\
x_{21} t & x_{22} \\
1 & 0 \\
0 & 1 \\
\end{array}
\right]
= 0, \quad i=1,2,3,4,
\end{equation}
where the coefficients $c_{kl}^{(i)}$ are random complex constants.
In expanding the minors of~(\ref{eqsabi22det}), the lowest power of $t$
is divided out, minor by minor.
The system at $t=0$ is solved by polyhedral continuation.
The system at $t=1$ serves as start system in the cheater's homotopy
to solve any system with particular real values for the
coefficients~$c_{kl}^{(i)}$.
Figure~\ref{figsagbi} outlines the structure of the general solver.
{\small
\begin{figure}[hbt]
\begin{center}
\begin{picture}(400,40)(0,0)
\put(0,0){\framebox(50,40)[c]
{\begin{tabular}{c} Binomial \\Systems \end{tabular}}}
\put(60,20){\vector(1,0){50}} \put(60,25){polyhedral}
\put(62,10){homotopy}
\put(120,0){\framebox(50,40)[c]
{\begin{tabular}{c} Generic \\ Complex \\ System \end{tabular}}}
\put(180,20){\vector(1,0){40}} \put(189,25){flat}
\put(183,10){deform.}
\put(230,0){\framebox(50,40)[c]
{\begin{tabular}{c} Generic \\ Complex \\ Problem \end{tabular}}}
\put(290,20){\vector(1,0){50}} \put(293,25){cheater's}
\put(292,10){homotopy}
\put(350,0){\framebox(50,40)[c]
{\begin{tabular}{c} Specific \\ Real \\ Problem \end{tabular}}}
\end{picture}
\caption{The SAGBI homotopy is at the center of the concatenation.}
\label{figsagbi}
\end{center}
\end{figure}
}
SAGBI homotopies have been implemented~\cite{ver98} to
verify some large instances of input planes for which it was conjectured
that all solution planes would be real.
We refer to~\cite{rs98} and \cite{sot98} for related work on these conjectures.
In~\cite{sot99b} an asymptotic choice of inputs is generated for which all
solutions are proven to be real.
\medskip
The third type of homotopies presented in~\cite{hss98} are the
so-called Pieri homotopies.
Since they are closer to the intrinsic geometric nature, they are
applicable to a broader class of problems. In particular, we obtain
an effective proof for the following.
\begin{theorem} \label{theoprob2}
The number of $r$-planes meeting $n$ general $(m+1-k_i)$-planes in~${\Bbb C}^{m+r}$,
with $k_1 + k_2 + \cdots + k_n = mr$, is a finite constant.
\end{theorem}
Note that when all $k_i = 1$, we arrive at Theorem~\ref{theoprob1}.
For general $k_i \not= 1$, we are not aware of any explicit formulas
for the number of roots.
\medskip
Figure~\ref{figcelldeco} shows a part of a cellular decomposition
of $G_{22}$ with the determinantal equations.
We specialize the pattern~$X$ that represents a solution line by
setting some of its coordinates to zero.
This specialization determines a specialization of the input lines
as follows: take those basis vectors not indexed by rows of~$X$
where zeroes have been introduced. The special line $S_X$ for this
example is as in~(\ref{eqpierihom}) spanned by the first and third
standard basis vector.
\begin{figure}[hbt]
\begin{center}
\begin{picture}(240,120)(20,-10)
\thicklines
\put(-50,85){\small
${\rm det}\left[
S_X
\left|
\begin{array}{cc}
x_{11} & 0 \\
0 & 0 \\
0 & x_{32} \\
0 & x_{42} \\
\end{array}
\right.
\right] = 0$}
\put(140,85){\small
${\rm det}\left[
S_X
\left|
\begin{array}{cc}
x_{11} & 0 \\
x_{21} & 0 \\
0 & x_{32} \\
0 & 0 \\
\end{array}
\right.
\right] = 0$}
\put(110,50){\line(4,1){40}}
\put(90,50){\line(-4,1){40}}
\put(10,15){\small
${\rm det}[L_3 | X]
= {\rm det}\left[
L_3
\left|
\begin{array}{cc}
x_{11} & 0 \\
x_{21} & 0 \\
0 & x_{32} \\
0 & x_{42} \\
\end{array}
\right.
\right] = 0$}
\put(260,10){[2~4]}
\put(227,30){[1~4]}
\put(293,30){[2~3]}
\put(257,18){\line(-2,1){10}}
\put(285,18){\line(2,1){10}}
\end{picture}
\caption{Part of a cellular decomposition of the Grassmannian of all 2-planes.
At the right we have the short-hand notation with brackets.
The bracket~$[2~4]$ contains the row indices to the lowest nonzero entries
in~$X$. }
\label{figcelldeco}
\end{center}
\end{figure}
\medskip
Figure~\ref{figcelldeco} pictures patterns of the moving 2-planes in the Pieri
homotopy algorithm for the case $(m,r) = (2,2)$, see Figure~\ref{figlines}.
The bottom matrix is the general representation of a solution that
intersects already the two input lines spanned by the standard basis
vectors. At the leaves of the tree we can intersect with a third input
line by linear algebra operations. Moving down the poset, we deform from
the left configuration in Figure~\ref{figlines} to the general problem.
\medskip
Denote by $L_1$ and $L_2$ the lines already met by $X$.
At the leaves of the tree in Figure~\ref{figcelldeco}
we intersect with the fourth line $L_4$.
The special position for the third line $L_3$ is represented by the
matrix $S_X$, which intersects any $X$ with coordinates as at the leaves
of the tree.
In the homotopy $H(X,t) = {\bf 0}$ we deform the line spanned by the columns
of $S_X$ to line $L_3$, for $t = 0$ to $t = 1$.
\begin{equation} \label{eqpierihom}
S_X = \left[
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
0 & 1 \\
0 & 0 \\
\end{array}
\right]
\quad \quad
H(X,t) =
\left\{
\begin{array}{l}
\det( L_4 | X ) = 0 \\
\det( (1-t) S_X + tL_3 | X ) = 0 \\
\end{array}
\right.
\quad t \in [0,1].
\end{equation}
Every solution $X(t)$ of $H(X(t),t) = {\bf 0}$ already intersects three
lines: $L_1$, $L_2$ and $L_4$.
At the end, for $t=1$, $X$ also meets the line~$L_3$ in a point.
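The determinantal equations in~(\ref{eqpierihom}) encode intersection conditions: two 2-planes in ${\Bbb C}^4$ meet nontrivially exactly when the $4 \times 4$ matrix formed by their spanning columns is singular. A quick numerical illustration with numpy (random data, not the homotopy itself):

```python
import numpy as np

rng = np.random.default_rng(7)

# two 2-planes in C^4, each spanned by the columns of a 4-by-2 matrix
L = rng.standard_normal((4, 2)) + 1j*rng.standard_normal((4, 2))
X = rng.standard_normal((4, 2)) + 1j*rng.standard_normal((4, 2))
print(abs(np.linalg.det(np.hstack([L, X]))) > 1e-8)  # True: generic planes miss

# force a common direction: one column of X now lies in the span of L
X[:, 0] = L @ np.array([1.0, 2.0])
print(abs(np.linalg.det(np.hstack([L, X]))) < 1e-8)  # True: det vanishes
```

Tracking the homotopy thus amounts to keeping such determinants zero while the special plane $S_X$ moves to $L_3$.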
\medskip
The homotopy~(\ref{eqpierihom}) deforms two solution lines,
starting at patterns which have their row indices for the lowest nonzero
entries respectively as in~$[1~4]$ and in~$[2~3]$.
The correctness of this homotopy (proven in~\cite{hss98} and~\cite{hv99})
justifies the formal root count using the localization poset.
This combinatorial root count proceeds in two stages. First we build
up the poset, from the bottom up, diminishing the entries in the brackets
under the restriction that the same entry never occurs twice or more.
Secondly, we descend from the top of the poset, collecting and adding
up the counts at the nodes in the poset.
More examples and variations are in~\cite{hv99}.
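For small cases the chain count is easy to enumerate directly. The sketch below is a minimal reading of the poset rules (decrement one bracket entry at a time, entries staying strictly increasing), not the implementation of~\cite{hv99}:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def chains(bracket):
    # number of decreasing chains from `bracket` down to (1, 2, ..., r),
    # lowering one entry by 1 per step, entries staying strictly increasing
    r = len(bracket)
    if bracket == tuple(range(1, r + 1)):
        return 1
    total = 0
    for k in range(r):
        b = list(bracket)
        b[k] -= 1
        if b[k] >= 1 and (k == 0 or b[k] > b[k - 1]):
            total += chains(tuple(b))
    return total

def grassmann_degree(m, r):
    # chains from the top bracket [m+1 ... m+r] down to [1 ... r]
    return chains(tuple(range(m + 1, m + r + 1)))

print(grassmann_degree(2, 2))  # 2: two lines meet four general lines in 3-space
print(grassmann_degree(3, 2))  # 5
```

The values reproduce the classical degrees of the Grassmannians $G_{mr}$, in agreement with Theorem~\ref{theoprob1}.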
\medskip
To solve the general intersection problem of Theorem~\ref{theoprob2}, the
special $(m+1-k_i)$-planes lie in the intersection of special $m$-planes.
In the construction of the poset one has to follow additional rules
so as to ensure a solution that meets the intersection of special $m$-planes.
We refer to~\cite{hv99} for details.
\medskip
The third enumerative problem we can solve is formalized as follows.
\begin{theorem} \label{theoprob3}
The number of all maps of degree $q$ that produce $r$-planes in ${\Bbb C}^{m+r}$
meeting $mr + q(m+r)$ general $m$-planes at specified interpolation
points is a finite constant.
\end{theorem}
In~\cite{rrw96,rrw98} and~\cite{wrr94} explicit formulas are given
for this finite constant along with other combinatorial identities.
Following a hint of Frank Sottile (see also~\cite{sot99a})
and reverse engineering on the root counts in~\cite{rrw96},
Pieri homotopies were developed in~\cite{hv99} whose correctness
yields a proof for Theorem~\ref{theoprob3}.
\medskip
The analogue to Figure~\ref{figcelldeco} for maps of degree one
into $G_{22}$ is displayed in Figure~\ref{figcurvcelldeco}.
\begin{figure}[hbt]
\begin{center}
\begin{picture}(220,120)(20,-10)
\thicklines
\put(-65,85){\small
${\rm det}\left[
S_X
\left|
\begin{array}{cc}
x^0_{11} & x^1_{12} s~~~ \\
x^0_{21} & ~~~x^0_{22} t \\
0 & ~~~x^0_{32} t \\
0 & ~~~x^0_{42} t \\
\end{array}
\right.
\right] = 0$}
\put(140,85){\small
${\rm det}\left[
S_X
\left|
\begin{array}{cc}
x^0_{11} & 0 \\
x^0_{21} & x^0_{22} \\
x^0_{31} & x^0_{32} \\
0 & x^0_{42} \\
\end{array}
\right.
\right] = 0$}
\put(110,50){\line(4,1){40}}
\put(90,50){\line(-4,1){40}}
\put(-10,15){\small
${\rm det}[L_n | X(s,t)]
= {\rm det}\left[
L_n
\left|
\begin{array}{cc}
x^0_{11} & x^1_{12} s~~~ \\
x^0_{21} & ~~~x^0_{22} t \\
x^0_{31} & ~~~x^0_{32} t \\
0 & ~~~x^0_{42} t \\
\end{array}
\right.
\right] = 0$}
\put(260,10){[3~5]}
\put(227,30){[2~5]}
\put(293,30){[3~4]}
\put(257,18){\line(-2,1){10}}
\put(285,18){\line(2,1){10}}
\end{picture}
\caption{Part of a cellular decomposition of the Grassmannian of
maps of degree~1 that produce 2-planes in projective 3-space.
The bracket notation at the right corresponds to a matrix representation
of the coefficients of the map~$X(s,t)$. }
\label{figcurvcelldeco}
\end{center}
\end{figure}
To solve the problem in Theorem~\ref{theoprob3} we need a special position
for the interpolation points. By moving those to infinity, the
dominant monomials in the maps allow us to re-use the same special
$m$-planes, whose entries should be considered modulo~$m+r$.
The homotopy to satisfy the $n$-th intersection condition is:
\begin{equation} \label{eqpieriqhom}
H(X(s,t),s,t) =
\left\{
\begin{array}{rl}
\det(L_i | X(s_i,t_i)) = 0 & i=1,2,\ldots,n-1 \\
\det((1-t) S_X + t L_n | X(s,t) ) = 0 & \\
(s-1)(1-t) + (s-s_n)t = 0 & t \in [0,1]
\end{array}
\right.
\end{equation}
Note that the continuation parameter~$t$ moves the interpolation point
from infinity, at $(s,t) = (1,0)$, to the specific value $(s,t) = (s_n,1)$.
\medskip
See~\cite{sot99a} for information on the selection of the input planes so
that all maps are real.
\medskip
As an example of another problem in enumerative geometry we mention
the 27 lines on a cubic surface in 3-space.
According to~\cite{mum76}, this is one of the gems
hidden in the rag-bag of projective geometry.
In~\cite{seg42}, the 27 lines are determined by breaking up the cubic
surface into three planes in a continuous way such that each intermediate
position is nonsingular. It is shown that this continuous variation
is also valid in the real field.
\section{Numerical Software for Solving Polynomial Systems}
In computer algebra one wants to compute exactly as long as possible
and to defer the approximate calculations to the very end.
Homotopy methods take exactly the opposite approach:
they use floating-point arithmetic throughout and only increase the
precision when needed.
\medskip
Next we mention programs with special features for polynomial systems.
See~\cite[Chapter VIII]{ag97} for a list of available software for path
following.
HOMPACK~\cite{msw89,wbm87} is a general continuation package with a
polynomial driver. It has been parallelized~\cite{acw89,hw89},
extended with an end game~\cite{sws96}, and upgraded~\cite{wsmmw97}
to Fortran~90. POLSYS\_PLP~\cite{wsw98} provides linear-product root
counts to be used in conjunction with HOMPACK90.
The Fortran code for CONSOL is contained in~\cite[Appendix 6]{mor87}.
The C-program pss~\cite{mal96} applies homotopy continuation with
verification by $\alpha$-theory. Pelican~\cite{hub95,hub96}
implements in~C the polyhedral methods of~\cite{hs95}.
Efficient Fortran software for polyhedral continuation with facilities
to compute all affine roots is used in~\cite{glw98}.
The computation of mixed volumes with the C program mvlp~\cite{emi94,ec95}
is a crucial step for sparse resultants~\cite{wem98}.
A distributed version has been created in~\cite{gio96}.
\medskip
PHC is written in Ada and originated during the doctoral research of
the author~\cite{ver96}. Executable versions were first released
at the PoSSo open workshop on software~\cite{ver95}.
The public release of the sources is described in~\cite{ver97}.
The package is organized as a toolbox, structured along the four stages
of the solver. Figure~\ref{figstages} presents the flow of the solver.
The package is menu-driven and file-oriented.
A general-purpose black-box solver is available.
{\small
\begin{figure}[hbt]
\begin{center}
\begin{picture}(360,145)(10,0)
\put(20,120){\framebox(130,20)[c]{1. Preconditioning}}
\put(50,100){$\diamond$ Coefficient Scaling}
\put(50,85){$\diamond$ Reduction of degrees}
\put(205,120){\framebox(130,20)[c]{4. Validation}}
\put(235,100){$\diamond$ Refining of the roots}
\put(235,85){$\diamond$ Analysis of condition $\#$s}
\put(20,50){\framebox(130,20)[c]{2. Root Counting}}
\put(30,35){$\diamond$ B\'ezout : degrees}
\put(30,20){$\diamond$ Bernshtein : polytopes}
\put(30,5){$\diamond$ Schubert : SAGBI/Pieri}
\put(205,50){\framebox(140,20)[c]{3. Homotopy Continuation}}
\put(215,30){$\diamond$ Fix continuation parameters}
\put(215,15){$\diamond$ Choose Predictor-Corrector}
\put(160,57){\line(1,0){25}}
\put(160,60){\line(1,0){25}}
\put(185,64){\line(2,-1){11}}
\put(185,53){\line(2,1){11}}
\put(34,113){\line(0,-1){25}}
\put(38,113){\line(0,-1){25}}
\put(31,91){\line(1,-2){5}}
\put(41,91){\line(-1,-2){5}}
\put(219,107){\line(0,-1){25}}
\put(223,107){\line(0,-1){25}}
\put(216,102){\line(1,2){5}}
\put(226,102){\line(-1,2){5}}
\end{picture}
\caption{The four stages in the flow of the PHC solver.}
\label{figstages}
\end{center}
\end{figure}
}
The new second release of PHC uses Ada~95 concepts in the construction
of the mathematical library.
It is developed with the freely available gnu-ada compiler
(currently at version 3.11p) on various platforms.
To run the software no compilation is needed,
as binaries are available for Unix workstations running SUN Solaris and
SGI IRIX, and for Pentium PCs running Linux and Solaris.
The portability of PHC is ensured by the gnu-ada compiler.
\medskip
Another main feature of the second release is the set of homotopy methods
for the Schubert calculus. Implementing those homotopies was a matter
of plugging in the equations and calling the path trackers.
The third release of the package should offer a more comprehensive
environment to construct homotopies, providing an easier access to
the two main computational engines: mixed-volume computation and
polynomial continuation.
\section{The Database of Applications}
The polynomial systems in scientific and engineering models
are a continuing source of open problems.
Systems that come from academic questions are often conjectures providing
computational evidence in a developing theory.
In various engineering disciplines polynomial systems represent a
modeling problem, e.g.: a mechanical device.
The origin of a polynomial system matters when the original problem
formulation does not admit well-conditioned solutions.
As a general method to deal with badly scaled systems, such as those
arising in computing equilibria of chemical reaction systems,
coefficient and equation scaling was developed in~\cite{mm87},
see also~\cite[Chapter 5]{mor87} and~\cite{wbm87}.
\medskip
The collection of test systems is organized as a database and is
available via the author's web pages.
A good test example reveals properties of the solution method and
has a meaningful application.
Besides the algebraic formulation it contains the fields:
title (meaningful description), references (problem source), root counts
(B\'ezout bounds and mixed volume), and solution list.
\medskip
\noindent Instead of producing a huge list with an overview, we pick
some important case studies.
\begin{description}
\item[{\bf katsura-n}] ({\em magnetism problem~{\rm \cite{kat94}}})
The number of solutions equals the total degree $D = 2^n$,
so the homotopy based on~$D$ is optimal to solve this problem.
Because the constant term is missing in all except one equation, the system
is an interesting test problem for affine polyhedral methods.
\smallskip
\item[{\bf camera1s}] ({\em computer vision~{\rm \cite{fm90}}})
The system models the displacement of a camera between two positions
in a static environment~\cite{emi94}.
The multi-homogeneous homotopy is optimal for this problem, requiring
20 solution paths to trace instead of~$D = 64$.
\smallskip
\item[{\bf game$n$two}] ({\em totally mixed Nash equilibria for $n$ players
with two strategies~{\rm \cite{mm97,mcl97}}})
This is another instance where multi-homogeneous homotopies are optimal.
The number of solutions grows like $n! e^{-1}$ as $n \rightarrow \infty$.
The largest system that is currently completely solvable is for $n = 8$
requiring 14,833 paths to trace.
Situations exist for which all solutions are meaningful.
\smallskip
\item[{\bf cassou}] ({\em real algebraic geometry})
This system illustrates the success story of polyhedral homotopies:
the total degree equals~1,344, the best known B\'ezout bound is 312
(see~\cite{lww96}), whereas the mixed volume gives~24.
Still, eight paths diverge to infinity and polyhedral end
games~\cite{hv98} are needed to separate those diverging paths from the
other finite ill-conditioned roots.
\smallskip
\item[{\bf cyclic-n}] ({\em Fourier transforms~{\rm \cite{bjo89,bf91}}})
For $n=7$, polyhedral homotopies are optimal, with all 924 paths leading
to finite solutions. For $n \geq 8$, the mixed volume overestimates the
number of roots and there are components of solutions.
In~\cite{sv99} the degrees of the components were computed for $n=8,9$.
There are 34,940 cyclic 10-roots, generated by 1,747 solutions.
\smallskip
\item[{\bf pole28sys}] ({\em pole placement problem~{\rm \cite{byr89}}})
This system illustrates the efficiency of SAGBI homotopies for verifying
a conjecture in real algebraic geometry~\cite{sot98}.
With the input planes chosen to osculate a rational normal curve,
an instance with all 1,430 solutions real and isolated was solved
in~\cite{ver98}. The problem is relevant to control theory~\cite{rw99}.
\smallskip
\item[{\bf stewgou40}] ({\em mechanism design~{\rm \cite{die98}}})
Whether the Stewart-Gough parallel platform in robotics could have all
its 40 solutions real was a notorious open problem until it was recently
solved by numerical continuation methods~\cite{die98}.
The problem formulation in~\cite{die98} is highly deficient: the
mixed volume equals 1,536 whereas only 40 solution paths will converge.
\end{description}
We emphasize that we have optimal homotopies for three classes
of polynomial systems, but not for all possible structures.
Although one can solve a modelling problem by a black-box polynomial-system
solver, knowing the origin of the problem leads in most cases to more favorable
algebraic formulations that help the resolution of a polynomial system.
To produce really meaningful solutions one often has to be close
to the source of the problems and be able to interact with the people who
formulate the polynomial systems.
\medskip
In closing this section we list some notable usages of PHC.
Charles Wampler~\cite{wam96} used a preliminary version of PHC to count
the roots of various systems in mechanical design.
Root counts for linear subspace intersections in the Schubert calculus
were computed by Frank Sottile, see~\cite{sot98} for various tables.
A third example comes from computer graphics.
To show that the 12 lines tangent to four given spheres can all be real,
Thorsten Theobald used PHC, choosing appropriate parameters in the algebraic
formulation set up by Cassiano Durand.
\section{Closing Remarks and Open Problems}
The three classes presented in this paper are by no means exhaustive, but
give an idea of what can be done with homotopies to solve polynomial systems.
The root counts constitute the theoretical backbone for general-purpose
black-box solving. Yet, the homotopy methods are flexible enough to
exploit a particular geometrical situation, with guaranteed optimal
complexity when applied to generic instances.
\medskip
From algebraic geometry formal procedures based on intersection theory
count the number of solutions to classes of polynomial systems.
Examples are the theorems of B\'ezout, Bernshte\v{\i}n and Schubert.
For these situations we construct a start system and have a homotopy to deform
the solutions of this start system to the solutions of any specific problem.
There are many other cases for which one knows how to count but not how
to deform and solve efficiently.
Research in homotopy methods is aimed at turning the formal root counts
into effective numerical methods.
As an open problem, we can ask for a meta-homotopy method that connects
formal root counting methods to generic start systems and deformation
procedures.
\medskip
In most applications, only the real solutions are important.
Once we know an optimal homotopy to solve the problem in the complex case,
we would like to know whether all solutions can be real and how the real
solutions are distributed. The reality question appears for instance in
the theory of totally mixed Nash equilibria and in the pole placement
problem. Finding well-conditioned instances of fully real problems
can be done by homotopy methods.
The finding of 40 real solutions to the Stewart-Gough platform~\cite{die98}
is perhaps the most striking example.
The question is to find an efficient procedure to deform from the complex
case to the fully real case.
\bigskip
\noindent {\bf Acknowledgments.}
The interest in homotopy methods by the FRISCO project has stimulated
the author's research. The author is deeply indebted to all his co-authors.
The interactions with Birk Huber and Frank Sottile at MSRI were influential
in this current treatment of homotopies.
\newpage
{\small
| {
"timestamp": "1999-07-13T00:47:25",
"yymm": "9907",
"arxiv_id": "math/9907060",
"language": "en",
"url": "https://arxiv.org/abs/math/9907060",
"abstract": "Numerical homotopy continuation methods for three classes of polynomial systems are presented. For a generic instance of the class, every path leads to a solution and the homotopy is optimal. The counting of the roots mirrors the resolution of a generic system that is used to start up the deformations. Software and applications are discussed.",
"subjects": "Numerical Analysis (math.NA); Algebraic Geometry (math.AG)",
"title": "Polynomial Homotopies for Dense, Sparse and Determinantal Systems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540656452213,
"lm_q2_score": 0.7248702880639792,
"lm_q1q2_score": 0.7099046636808808
} |
% Source metadata: arXiv 1910.01164, ``Insight in the Rumin Cohomology
% and Orientability Properties of the Heisenberg Group''.
\chapter{Preliminaries}
In this chapter we will first provide definitions and notions about Lie and Carnot groups (Section \ref{liegroups}), as well as introduce the Heisenberg group and some basic properties and automorphisms associated to it. Second, we will present the standard basis of
vector fields in the Heisenberg group $\mathbb{H}^n$, including the behaviour of the Lie brackets and the left invariance, which will lead to the definition of dual differential forms. Next we will mention briefly different equivalent distances on $\mathbb{H}^n$: the \emph{Carnot-Carathéodory} distance $d_{cc}$ and the \emph{Korányi} distance $d_{\mathbb{H}}$. Finally we will present the Heisenberg group's topology and dimensions (Section \ref{defH}).
The Heisenberg group is perhaps the most famous example of Sub-Riemannian geometry. We will commonly use the adjectives ``Riemannian" and ``Euclidean" as synonyms. In contrast, we will use ``Sub-Riemannian" and ``Heisenberg" for objects proper to the Sub-Riemannian structure of the Heisenberg group.\\
There exist many good references for an introduction to the Heisenberg group; we will follow section 2.1 in \cite{GCmaster}, the introduction of \cite{TRIP}, sections 2.1 and 2.2 in \cite{FSSC} and sections 2.1.3 and 2.2 in \cite{CDPT}.
\section{Lie Groups and Left Translation}\label{liegroups}
In this section we provide definitions and notions about Lie and Carnot groups. General references can be, for example, section 2.1 in \cite{GCmaster}, the introduction of \cite{TRIP}, section 2.1 in \cite{FSSC} and 2.1.3 in \cite{CDPT}.
\begin{defin}\label{lie}
A group $\mathbb{G}$ with the group operation $*$, $(\mathbb{G},*)$, is a \emph{Lie Group} if
\begin{itemize}
\item
$\mathbb{G}$ is a differentiable manifold,
\item
the map $\mathbb{G} \times \mathbb{G} \to \mathbb{G}, \ (p,q) \mapsto p*q=p q$ is differentiable,
\item
the map $ \mathbb{G} \to \mathbb{G}, \ p \mapsto p^{-1}$ is differentiable.
\end{itemize}
\end{defin}
\begin{defin}\label{lefttr}
Let $(\mathbb{G}, *)$ be a Lie group, with the following smooth operation
\begin{align*}
\mathbb{G} \times \mathbb{G} \to \mathbb{G}, \ (p, q) \mapsto p*q^{-1}.
\end{align*}
If $q \in \mathbb{G}$, then denote the \emph{left translation by $q$} as
\begin{align*}
L_q : \mathbb{G} \to \mathbb{G}, \ p \mapsto q*p.
\end{align*}
In the literature, $L_q$ is often denoted also as $\tau_q$. For this reason we will write $\tau_q$ in Chapter \ref{orient4}, when talking specifically about the Heisenberg group.
\end{defin}
\begin{obs}
It follows from the definition that
$$L_{q} \circ L_{p}=L_{q*p}.
$$
\end{obs}
\begin{defin}\label{leftinv}
A vector field $V$ on a Lie group $\mathbb{G}$ is \emph{left-invariant} if $V$ commutes with $L_g$, for every $g \in \mathbb{G}$. Specifically, $V$ is \emph{ left-invariant } if
$$
(L_q)_* V_p = V_{L_q (p)} =V_{q *p} \ \in T_{q *p} \mathbb{G},
$$
for every $p,q \in \mathbb{G}$,
where $(\cdot)_*$ expresses the standard pushforward. Equivalently, one can express the definitions as
$$
V_p (\varphi \circ L_q ) = [( V \varphi ) \circ L_q ]_p \ \in C^\infty( U_{q *p} ),
$$
for every $ p,q \in \mathbb{G}$ and $ \varphi \in C^\infty (\mathbb{G})$, where $U_{q *p} $ is a neighbourhood of $q*p$.
\end{defin}
\begin{no}\label{neighbourhood}
Often we will need to refer to neighbourhoods of points. For this reason, we introduce the notation
$$
\mathcal{U}_p := \left \{ U_p \ : \ U_p \text{ a neighbourhood of } p \right \}.
$$
\end{no}
\begin{obs}
The most important property of left invariant vector fields is that they are uniquely determined by their value at one point, which is usually taken to be the neutral element.\\
In general, to compute the value $V_p$ of a left invariant vector field $V$ at a point $p$ from its value $V_q \in T_q \mathbb{G}$, we can simply left-translate by $p*q^{-1}$:
$$
( L_{p*q^{-1}} )_* V_q = V_{p*q^{-1}*q}=V_p.
$$
\end{obs}
\noindent
There are special Lie Groups that hold additional important properties; they are called Carnot Groups. Before defining them, we need to introduce the Lie bracket operation:
\begin{defin}
The \emph{Lie bracket} or \emph{commutator} of vector fields is an operator defined as follows
\begin{align*}
[,] : \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}, \ ( V_1, V_2) \mapsto [V_1,V_2]:=V_1 V_2 - V_2 V_1,
\end{align*}
where $\mathfrak{g}$ is the algebra of smooth vector fields on the Lie group $\mathbb{G}$.
\end{defin}
\begin{defin}
A \emph{Lie algebra} $\mathfrak{g}$ of a Lie group $\mathbb{G}$ is a non-associative algebra with the following properties: for all $ V_1,V_2,V_3 \in \mathfrak{g}$,
\begin{itemize}
\item
$[V_1,V_1]=0 $ \quad (Alternativity),
\item
$[V_1+V_2,V_3]=[V_1,V_3]+[V_2,V_3] \ \text{ and } \ [V_3,V_1+V_2]=[V_3,V_1]+[V_3,V_2] $ \quad \\ (Bilinearity),
\item
$[V_1,[V_2,V_3]]+[V_2,[V_3,V_1]]+[V_3,[V_1,V_2]]=0 $ \quad (Jacobi identity).
\end{itemize}
\end{defin}
\begin{defin}
A \emph{Carnot group of step $k$} is a connected, simply connected Lie group whose Lie algebra $\mathfrak{g}$ admits a step $k$ stratification, i.e.,
$$
\mathfrak{g} = V_1 \oplus \cdots \oplus V_k,
$$
where every $V_j$ is a linear subspace of $\mathfrak{g}$ satisfying $[V_1,V_j]=V_{j+1}$ for $j=1,\dots,k-1$. Additionally, $V_k \neq \{ 0 \}$ and $V_j= \{ 0 \}$ for $j>k$.
\end{defin}
\begin{defin}\label{homdim}
Let $\mathbb{G}$ be a Carnot group and call $m_i := \dim (V_i) $ for each $V_i$ in the stratification of $\mathfrak{g}$. Then the \emph{homogeneous dimension} of $\mathbb{G}$ is
$$
Q:= \sum_{i=1}^k i m_i.
$$
\end{defin}
\section{Definition of $\mathbb{H}^n$}\label{defH}
In this section we introduce the Heisenberg group $\mathbb{H}^n$ as well as some basic properties and automorphisms associated to it. Then we present the standard basis of
vector fields in the Heisenberg group $\mathbb{H}^n$, including the behaviour of the Lie brackets and the left invariance, which will lead to the definition of dual differential forms. Next we mention briefly different equivalent distances on $\mathbb{H}^n$: the \emph{Carnot-Carathéodory} distance $d_{cc}$ and the \emph{Korányi} distance $d_{\mathbb{H}}$. Finally we mention the Heisenberg group's topology and dimensions.\\
General references are section 2.1 in \cite{GCmaster}, sections 2.1 and 2.2 in \cite{FSSC} and section 2.2 in \cite{CDPT}.
\begin{defin}\label{Heisenberg_Group}
The $n$-dimensional \emph{Heisenberg Group} $\mathbb{H}^n$ is defined as
$$
\mathbb{H}^n:= (\mathbb{R}^{2n+1}, * ),
$$
where $*$ is the following product:
$$
(x,y,t)*(x',y',t') := \left (x+x',y+y', t+t'- \frac{1}{2} \langle J
\begin{pmatrix}
x \\
y
\end{pmatrix}
,
\begin{pmatrix}
x' \\
y'
\end{pmatrix} \rangle_{\mathbb{R}^{2n}} \right ),
$$
with $x,y,x',y' \in \mathbb{R}^n$, $t,t' \in \mathbb{R}$ and $J= \begin{pmatrix}
0 & I_n \\
-I_n & 0
\end{pmatrix} $.\\\\
Notationally it is common to write $x=(x_1,\dots,x_n) \in \mathbb{R}^n$. Furthermore, with a simple computation of the matrix product, we immediately have that
$$
\langle
\begin{pmatrix}
0 & I_n \\
-I_n & 0
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
,
\begin{pmatrix}
x' \\
y'
\end{pmatrix} \rangle
=
\langle
\begin{pmatrix}
y \\
-x
\end{pmatrix}
,
\begin{pmatrix}
x' \\
y'
\end{pmatrix} \rangle
=
\sum_{j=1}^n \left ( y_j x_j' - x_j y_j' \right ),
$$
and so one can rewrite the product as
$$
(x,y,t)*(x',y',t') = \left (x+x',y+y', t+t' + \frac{1}{2} \sum_{j=1}^n \left ( x_j y_j' - y_j x_j' \right ) \right ).
$$
\end{defin}
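The group axioms for the product above can be checked symbolically; the following \texttt{sympy} sketch (our own addition, not from the references) does so for $n=1$, where points are triples $(x,y,t)$:

```python
import sympy as sp

# Group law of Definition \ref{Heisenberg_Group} for n = 1: points are (x, y, t)
def star(p, q):
    (x, y, t), (a, b, s) = p, q
    return (x + a, y + b, t + s + sp.Rational(1, 2) * (x * b - y * a))

x, y, t, a, b, s, u, v, w = sp.symbols('x y t a b s u v w')
p, q, m = (x, y, t), (a, b, s), (u, v, w)

# Associativity: (p * q) * m == p * (q * m)
assert all(sp.simplify(l - r) == 0
           for l, r in zip(star(star(p, q), m), star(p, star(q, m))))

# Neutral element (0, 0, 0) and inverse p^{-1} = (-x, -y, -t)
assert star((0, 0, 0), p) == p
assert all(sp.simplify(c) == 0 for c in star(p, (-x, -y, -t)))

# Non-commutativity: the third components of p*q and q*p differ
assert sp.simplify(star(p, q)[2] - star(q, p)[2]) != 0
```

The same computation, with sums over $j=1,\dots,n$, verifies the axioms for general $n$.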
\begin{obs}
The Heisenberg group $\mathbb{H}^n$ satisfies the conditions of Definition \ref{lie} and is hence a Lie group.
\end{obs}
\begin{obs}
One can easily see the following properties
\begin{itemize}
\item
$\mathbb{H}^n$ is a non-commutative group.
\item
The neutral element of $\mathbb{H}^n$ is $(0,0,0)$.
\item
The inverse of $(x,y,t) \in \mathbb{H}^n$ is $(x,y,t)^{-1}=(-x,-y,-t)$.
\item
The center of the group, namely the set of elements that commute with all the elements of the group, is $\{ (0,t) \in \mathbb{R}^{2n} \times \mathbb{R} \ : \ t \in \mathbb{R} \}$.
\end{itemize}
\end{obs}
\noindent
On the Heisenberg group $\mathbb{H}^n$ there are two important groups of automorphisms. The first one is the operation of left-translation (see Definition \ref{lefttr}) and the second one is the ($1$-parameter) group of the \emph{anisotropic dilatations $\delta_r$}:
\begin{defin}
\label{delta}
The ($1$-parameter) group of the \emph{anisotropic dilatations $\delta_r$}, with $r \in \mathbb{R}^+$, is defined as follows
\begin{align*}
\delta_r : \mathbb{H}^n \to \mathbb{H}^n, \ (x,y,t) \mapsto (rx,ry,r^2 t).
\end{align*}
\end{defin}
\subsection{Left Invariance and Horizontal Structure on $\mathbb{H}^n$}\label{lefthor}
In this subsection we present the standard basis of
vector fields in the Heisenberg group $\mathbb{H}^n$, including the behaviour of the Lie brackets and the left invariance. This will lead to conclude that the Heisenberg group is a Carnot group and to the definition of dual differential forms.\\
General references are section 2.1 in \cite{GCmaster} and sections 2.1 and 2.2 in \cite{FSSC}.
\begin{defin}
\label{XYT}
A basis of left invariant vector fields in $\mathbb{H}^n$
consists of the following $2n+1$ vectors:
$$
\begin{cases}
X_j = \partial_{x_j} - \frac{1}{2} y_j\partial_{t} \ \ \text{ for } \ j=1,\dots,n , \\
Y_j = \partial_{y_j} + \frac{1}{2} x_j\partial_{t} \ \ \text{ for } \ j=1,\dots,n, \\
T = \partial_{t}.
\end{cases}
$$
We will show in Lemma \ref{leftinvar} that they are indeed left invariant.
\end{defin}
\begin{obs}
One can observe the immediate property that $\{ X_1,\dots,X_n,Y_1,\dots,Y_n,T \}$ becomes $\{ \partial_{x_1},\dots, \partial_{x_n}, \partial_{y_1},\dots,\partial_{y_n}, \partial_{t} \}$ at the neutral element.
\end{obs}
\begin{lem}\label{simplecomposition}
Consider a function $g: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open. Assume then $g \in \left [ C^1(U, \mathbb{R}) \right ]^{2n+1}$, meaning that all the $2n+1$ components of $g$ are $C^1$-regular.\\
Consider a second function $f: \mathbb{H}^n \to \mathbb{R}$, $f\in C^1(\mathbb{H}^n, \mathbb{R})$.
Then the following holds:
\begin{align}
X_j (f \circ g) &= (\nabla f)_g \cdot X_j g, \label{firstone} \\
Y_j (f \circ g) &= (\nabla f)_g \cdot Y_j g , \\
T (f \circ g) &= (\nabla f)_g \cdot T g .
\end{align}
where $ j=1,\dots,n$ and $\nabla$ describes the Euclidean gradient in $\mathbb{R}^{2n+1}=\mathbb{H}^n$.
\end{lem}
\begin{proof}
Equation \eqref{firstone} holds by direct computation. Indeed, we have
\begin{align*}
X_j(f \circ g)
=&\left ( \partial_{x_j} -\frac{1}{2} y_j\partial_t \right ) (f \circ g)\\%=\\
=& \sum_{i=1}^{n} \left ( \partial_{x_i} f \partial_{x_j} g^{i}+ \partial_{y_i} f \partial_{x_j} g^{n+i} \right ) + \partial_{t} f \partial_{x_j} g^{2n+1}\\%+\\
-\frac{1}{2}y_j \left [ \sum_{i=1}^{n} \left ( \partial_{x_i} f \partial_t g^{i}+ \partial_{y_i} f \partial_t g^{n+i} \right ) + \partial_{t} f \partial_t g^{2n+1} \right ]\\%=\\
=& \sum_{i=1}^{n} \left ( \partial_{x_i} f \partial_{x_j} g^{i} -\frac{1}{2} y_{j} \partial_{x_i} f \partial_t g^{i} \right ) + \sum_{i=1}^{n} \left ( \partial_{y_i} f \partial_{x_j} g^{n+i} -\frac{1}{2} y_{j} \partial_{y_i} f \partial_t g^{n+i} \right ) \\%+\\
+ \partial_{t} f \partial_{x_j} g^{2n+1} -\frac{1}{2} y_{j} \partial_{t} f \partial_t g^{2n+1} \\%=\\
=& \sum_{i=1}^{n} \partial_{x_i} f \left ( \partial_{x_j} g^{i} -\frac{1}{2} y_{j} \partial_t g^{i} \right ) + \sum_{i=1}^{n} \partial_{y_i} f \left ( \partial_{x_j} g^{n+i} -\frac{1}{2} y_{j} \partial_t g^{n+i} \right ) \\%+\\
+ \partial_{t} f \left ( \partial_{x_j} g^{2n+1} -\frac{1}{2} y_{j} \partial_t g^{2n+1} \right ) \\%=\\
=& \sum_{i=1}^{n} \partial_{x_i} f X_j g^{i} + \sum_{i=1}^{n} \partial_{y_i} f X_j g^{n+i} + \partial_{t} f X_j g^{2n+1} \\%=\\
=&(\partial_{x_1} f, \dots, \partial_{x_n} f, \partial_{y_1} f, \dots, \partial_{y_n} f, \partial_t f)_g \cdot (X_j g^1, \dots , X_j g^{2n+1} )\\%=\\
=& (\nabla f)_g \cdot X_j g.
\end{align*}
\noindent
This proves the first part of the statement and the proof for $Y_j(f \circ g)$ is analogous. The case of $T$ is much simpler:
\begin{align*}
T(f \circ g)
=& \left ( \partial_t \right ) (f \circ g)\\%=\\
=&
\sum_{i=1}^{n}
\left (
\partial_{x_i} f \partial_t g^{i}+
\partial_{y_i} f \partial_t g^{n+i}
\right ) +
\partial_{t} f \partial_t g^{2n+1}\\%=\\
=&
\sum_{i=1}^{n}
\left (
\partial_{x_i} f T g^{i}+
\partial_{y_i} f T g^{n+i}
\right ) +
\partial_{t} f T g^{2n+1}\\%=\\
=&(\partial_{x_1} f, \dots, \partial_{x_n} f, \partial_{y_1} f, \dots, \partial_{y_n} f, \partial_t f)_g
\cdot
(T g^1, \dots , T g^{2n+1} )\\%=\\
=&(\nabla f)_g \cdot T g.
\end{align*}
\end{proof}
\begin{lem}\label{leftinvar}
The vector fields $X_j$, $Y_j$ and $T$ of Definition \ref{XYT} are indeed left invariant.
\end{lem}
\begin{proof}
For notational simplicity, we consider $n=1$ and $X_1=X = \partial_{x} - \frac{1}{2} y\partial_{t}$. The other cases and the general situation follow with hardly any change.\\
Consider $f \in C^1 (\mathbb{H}^1, \mathbb{R} )$ and $p=(x_p, y_p,t_p), q=(x_q, y_q,t_q) \in \mathbb{H}^1$. By Lemma \ref{simplecomposition},
\begin{align*}
X_p (f \circ L_q)
=& (\nabla f)_{L_q(p)} \cdot X_p (L_q) \\
=& (\partial_x f, \partial_y f, \partial_t f)_{L_q(p)} \cdot \left ( X (L_q^{(1)})(p), X (L_q^{(2)})(p), X (L_q^{(3)})(p) \right )\\%%
=& (\partial_x f, \partial_y f, \partial_t f)_{L_q(p)} \cdot \left ( 1 , 0 , - \frac {1}{2} y_q - \frac{1}{2} y_p \right )\\
=&(\partial_x f)_{L_q(p)} -\frac{1}{2} (y_q+y_p) (\partial_t f)_{L_q(p)} \\
=& \left [
\partial_x f -\frac{1}{2}y \partial_t f
\right ]_{L_q(p)} =
[ (Xf) \circ L_q ]_p,
\end{align*}
where, for the third line, we used that
$$
\begin{cases}
(L_q^{(1)})(p) =x_q + x_p,\\
(L_q^{(2)})(p) = y_q + y_p \quad \text{and}\\
(L_q^{(3)})(p) =t_q + t_p +\frac {1}{2} (x_q y_p - y_q x_p).
\end{cases}
$$
This proves the left invariance of $X$. Repeating the same argument for all $X_j$, $Y_j$ and $T$ completes the proof.
\end{proof}
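For $n=1$ the left invariance of $X$ can also be checked symbolically. The following \texttt{sympy} sketch verifies the identity $X_p(f \circ L_q) = [(Xf)\circ L_q]_p$ on a few smooth test functions of our choosing (a spot check, not a proof):

```python
import sympy as sp

x, y, t, xq, yq, tq = sp.symbols('x y t x_q y_q t_q')

def X(expr):
    # X = d/dx - (1/2) y d/dt acting on functions of (x, y, t), n = 1
    return sp.diff(expr, x) - sp.Rational(1, 2) * y * sp.diff(expr, t)

# Left translation L_q(p) = q * p in H^1, as a simultaneous substitution p -> q * p
Lq = {x: xq + x, y: yq + y, t: tq + t + sp.Rational(1, 2) * (xq * y - yq * x)}

def check_left_invariance(f):
    lhs = X(f.subs(Lq, simultaneous=True))       # X_p (f o L_q)
    rhs = X(f).subs(Lq, simultaneous=True)       # (X f) o L_q, evaluated at p
    return sp.simplify(lhs - rhs) == 0

# Spot check on a few smooth test functions
assert all(check_left_invariance(f)
           for f in (x * t, t**2, sp.sin(y) * t, sp.exp(x) + x * y * t))
```

The substitution must be simultaneous, since the replacement for $t$ itself involves $x$ and $y$.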
\begin{obs}
\label{[,]}
The only non-trivial commutators of the vector fields $X_j,Y_j$ and $T$ are
$$
[X_j,Y_j]=T \ \text{ for } \ j=1,\dots,n.
$$
This immediately implies that all higher-order commutators vanish.
\end{obs}
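These commutation relations can be verified symbolically on an arbitrary smooth $f$; a short \texttt{sympy} sketch for $n=1$:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)

# The basis fields of Definition \ref{XYT} as differential operators, n = 1
X = lambda u: sp.diff(u, x) - sp.Rational(1, 2) * y * sp.diff(u, t)
Y = lambda u: sp.diff(u, y) + sp.Rational(1, 2) * x * sp.diff(u, t)
T = lambda u: sp.diff(u, t)

# [X, Y] = T on an arbitrary smooth f ...
assert sp.simplify(X(Y(f)) - Y(X(f)) - T(f)) == 0
# ... while the commutators with T vanish, so the step of H^n is 2
assert sp.simplify(X(T(f)) - T(X(f))) == 0
assert sp.simplify(Y(T(f)) - T(Y(f))) == 0
```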
\begin{comment}
\begin{proof}
$\forall j=1,\dots,n$
\begin{align*}
[X_j,Y_j]&=\left [ \partial_{x_j} -\frac{1}{2} y_j\partial_{t}, \partial_{y_j} + \frac{1}{2} x_j\partial_{t} \right ]=\\
&
=[\partial_{x_j},\partial_{y_j}] + \frac{1}{2} [\partial_{x_j},x_j\partial_{t}] - \frac{1}{2} [y_j\partial_{t},\partial_{y_j}]
-\frac{1}{4}[y_j\partial_{t},x_j\partial_{t}]=\\
&
=0 + \frac{1}{2} \partial_{t} + \frac{1}{2} \partial_{t} +0 = \partial_{t} = T.
\end{align*}
\end{proof}
\end{comment}
\begin{rem} \label{Hcarnot}
The observation above shows that the Heisenberg group is a Carnot group of step 2. Indeed we can write its Lie algebra $\mathfrak{h}$ as:
$$
\mathfrak{h} =\mathfrak{h}_1 \oplus \mathfrak{h}_2,
$$
with
$$
\mathfrak{h}_1 = \spn \{ X_1, \ldots, X_n, Y_1, \ldots, Y_n \} \text{ and } \mathfrak{h}_2 =\spn \{ T \}.
$$
Usually one calls $\mathfrak{h}_1$ the space of \emph{horizontal vectors} and $\mathfrak{h}_2$ the space of \emph{vertical vectors}.
\end{rem}
\begin{obs}
Consider a function $f \in C^1 (U, \mathbb{R} )$, $U\subseteq \mathbb{H}^n$ open. It is useful to mention that the vector fields $\{ X_1,\dots,X_n,Y_1,\dots,Y_n\}$ are homogeneous of order $1$ with respect to the dilatation $\delta_r, \ r \in \mathbb{R}^+$, i.e.,
$$
X_j (f\circ \delta_r)=r X_j(f)\circ \delta_r \ \ \ \ \text{ and } \ \ \ \ Y_j (f\circ \delta_r)=r Y_j(f)\circ \delta_r ,
$$
for any $j=1,\dots,n$.\\
On the other hand, the vector field $T$ is homogeneous of order $2$, that is,
$$
T(f\circ \delta_r)=r^2T(f)\circ \delta_r.
$$
The proof is a simple application of Lemma \ref{simplecomposition}.\\
It is not a surprise, then, that the homogeneous dimension of $\mathbb{H}^n$ (see Definition \ref{homdim}) is
$$
Q=2n+2.
$$
\end{obs}
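A \texttt{sympy} spot check of these homogeneity identities for $n=1$, on an arbitrary test function of our choosing:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
r = sp.symbols('r', positive=True)

X = lambda u: sp.diff(u, x) - sp.Rational(1, 2) * y * sp.diff(u, t)
Y = lambda u: sp.diff(u, y) + sp.Rational(1, 2) * x * sp.diff(u, t)
T = lambda u: sp.diff(u, t)

delta = {x: r * x, y: r * y, t: r**2 * t}  # the dilatation of Definition \ref{delta}

def homogeneous_of_order(V, k, f):
    # V(f o delta_r) == r^k (V f) o delta_r, checked on one test function f
    lhs = V(f.subs(delta, simultaneous=True))
    rhs = r**k * V(f).subs(delta, simultaneous=True)
    return sp.simplify(lhs - rhs) == 0

f = sp.sin(x) * y + t * sp.exp(y)  # an arbitrary smooth test function
assert homogeneous_of_order(X, 1, f)
assert homogeneous_of_order(Y, 1, f)
assert homogeneous_of_order(T, 2, f)
```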
\begin{obs}\label{scal}
The vectors $X_1,\dots,X_n,Y_1,\dots,Y_n,T$ can be declared an orthonormal basis of $\mathfrak{h}$, which defines a scalar product $\langle \cdot , \cdot \rangle $ on $\mathfrak{h}$.\\
In the same way, the vectors $X_1,\dots,X_n,Y_1,\dots,Y_n$ form an orthonormal basis of $\mathfrak{h}_1$ with respect to a scalar product $\langle \cdot , \cdot \rangle_H $ defined purely on $\mathfrak{h}_1$.
\end{obs}
\begin{no}\label{notW}
Sometimes it will be useful to consider all the elements of the basis of $\mathfrak{h}$ with one symbol. To do so, one can notationally write
$$
\begin{cases}
W_i &:= X_i \ \text{ for } i=1,\dots,n,\\
W_{n+i} &:= Y_i\ \text{ for } i=1,\dots,n,\\
W_{2n+1}&:=T.
\end{cases}
$$
In the same way, the point $(x_1,\dots,x_n,y_1,\dots,y_n,t)$ will be denoted as $(w_1,\dots,w_{2n+1})$.
\end{no}
\begin{defin}
\label{dual_basis}
Consider $ {\prescript{}{}\bigwedge}^1 \mathfrak{h}$ as the dual space of $\mathfrak{h}$,
which inherits an inner product from the one of $\mathfrak{h}$. By duality, one can find a dual orthonormal basis of covectors $\{\omega_1,\dots,\omega_{2n+1}\}$ in $ {\prescript{}{}\bigwedge}^1 \mathfrak{h}$ such that
$$
\omega_j ( W_k ) =\langle \omega_j , W_k \rangle = \langle \omega_j \vert W_k \rangle =\delta_{jk}, \quad j,k=1,\dots,2n+1,
$$
where $W_k$ is an element of the basis of $\mathfrak{h}$ and the notation varies in the literature. Such covectors are differential forms in the Heisenberg group. It turns out that the dual orthonormal basis is given by
$$
\{dx_1,\dots,dx_n,dy_1,\dots,dy_n,\theta \},
$$
where $\theta$ is called \emph{contact form} and is defined as
$$
\theta :=dt - \frac{1}{2} \sum_{j=1}^{n} (x_j d y_j-y_j d x_j).
$$
\end{defin}
\begin{no}\label{dfnota}
As it will be useful sometimes to call all such forms by the same name, one can notationally write,
$$
\begin{cases}
\theta_i &:= dx_i\ \text{ for } i=1,\dots,n,\\
\theta_{n+i} &:= dy_i\ \text{ for } i=1,\dots,n,\\
\theta_{2n+1}&:=\theta.
\end{cases}
$$
In particular the covector $\theta_i$ is always the dual of the vector $W_i$, for all $i=1,\dots,2n+1$.
\end{no}
\begin{obs}\label{contactcontactcontact}
Note that one could have introduced the Heisenberg group $\mathbb{H}^n$ with a different approach and defined it as a \emph{contact manifold}. A contact manifold is a manifold with a contact structure, meaning that its
algebra $\mathfrak{h}$ has a $1$-codimensional subspace $Q$ that can be written as a kernel of a non-degenerate $1$-form, which is then called \emph{contact form}.\\
The just-defined $\theta$ satisfies all these requirements and is indeed the contact form of the Heisenberg group, while $Q=\mathfrak{h}_1$. The non-degeneracy condition is $\theta \wedge d \theta \neq 0$. A straightforward computation shows that
$$
d \theta =- \sum_{j=1}^{n} dx_j \wedge dy_j,
$$
and so indeed
$$
\theta \wedge d \theta =- \sum_{j=1}^{n} dx_j \wedge dy_j \wedge \theta \neq 0.
$$
\end{obs}
\begin{obs}\label{df}
As a useful example, we show here that the just-defined bases of vectors and covectors behave as one would expect when differentiating. Specifically, consider $f: U\subseteq \mathbb{H}^n \to \mathbb{R}$, $U$ open, $f \in C^1 (U, \mathbb{R} )$, then one has:
\begin{align*}
df
=& \sum_{j=1}^{n} \left ( \partial_{x_j} f d x_j + \partial_{y_j} f dy_j \right ) + \partial_t f dt \\
=& \sum_{j=1}^{n} \left ( \partial_{x_j} f d x_j + \partial_{y_j} f dy_j \right ) + (\partial_t f) \left ( \theta + \frac{1}{2}\sum_{j=1}^{n} x_j dy_j - \frac{1}{2}\sum_{j=1}^{n} y_j dx_j \right )\\
=& \sum_{j=1}^{n} \left ( X_j f d x_j + Y_j f dy_j \right ) + Tf \theta.
\end{align*}
\end{obs}
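This expansion can be checked for $n=1$ by representing $1$-forms through their coefficients on the basis $(dx,dy,dt)$; a short \texttt{sympy} sketch:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)

Xf = sp.diff(f, x) - sp.Rational(1, 2) * y * sp.diff(f, t)
Yf = sp.diff(f, y) + sp.Rational(1, 2) * x * sp.diff(f, t)
Tf = sp.diff(f, t)

# Represent a 1-form by its coefficient column on the basis (dx, dy, dt)
dx, dy, dt = sp.eye(3).col(0), sp.eye(3).col(1), sp.eye(3).col(2)
theta = dt - sp.Rational(1, 2) * (x * dy - y * dx)  # the contact form

lhs = Xf * dx + Yf * dy + Tf * theta                # Heisenberg expansion of df
rhs = sp.diff(f, x) * dx + sp.diff(f, y) * dy + sp.diff(f, t) * dt
assert all(sp.simplify(c) == 0 for c in (lhs - rhs))
```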
\noindent
The next natural step is to define vectors and covectors of higher order.
\begin{defin} \label{kdim}
We define the sets of $k$-vectors and $k$-covectors, respectively, as follows:
\begin{align*}
\Omega_k \equiv {\prescript{}{}\bigwedge}_k \mathfrak{h} &:= \spn \{ W_{i_1} \wedge \dots \wedge W_{i_k} \}_{1\leq i_1 < \dots < i_k \leq 2n+1 },
\end{align*}
and
\begin{align*}
\Omega^k \equiv {\prescript{}{}\bigwedge}^k \mathfrak{h} &:= \spn \{ \theta_{i_1} \wedge \dots \wedge \theta_{i_k} \}_{1\leq i_1 < \dots < i_k \leq 2n+1 }.
\end{align*}
The same definitions can be given for $ \mathfrak{h}_1$ and produce the spaces $ {\prescript{}{}\bigwedge}_k \mathfrak{h}_1 $ and $ {\prescript{}{}\bigwedge}^k \mathfrak{h}_1 $.
\end{defin}
\begin{defin}
For $k=1,\dots,2n+1$, if $\omega \in {\prescript{}{}\bigwedge}^k \mathfrak{h}$, then we define $\omega^* \in {\prescript{}{}\bigwedge}_k \mathfrak{h}$ so that
$$
\langle \omega^* , V \rangle = \langle \omega \vert V \rangle, \ \ \ V \in {\prescript{}{}\bigwedge}_k \mathfrak{h}.
$$
\end{defin}
\noindent
We give here the definition of Pansu differentiability for maps between Carnot groups $\mathbb{G}_1$ and $\mathbb{G}_2$. After that, we state it in the special case of $\mathbb{G}_1=\mathbb{H}^n$ and $\mathbb{G}_2=\mathbb{R}$.\\
We call a function $h : (\mathbb{G}_1,*,\delta^1) \to (\mathbb{G}_2,*,\delta^2)$ \emph{homogeneous} if $h(\delta^1_r(p))= \delta^2_r \left ( h(p) \right )$ for all $r>0$.
\begin{defin}[see \cite{PANSU} and 2.10 in \cite{FSSC}]\label{dGGG}
Consider two Carnot groups $(\mathbb{G}_1,*,\delta^1)$ and $(\mathbb{G}_2,*,\delta^2)$. A function $f: U \to \mathbb{G}_2$, $U \subseteq \mathbb{G}_1$ open, is \emph{P-differentiable} at $p_0 \in U$ if there is a (unique) homogeneous Lie group homomorphism $d_H f_{p_0} : \mathbb{G}_1 \to \mathbb{G}_2$ such that
$$
d_H f_{p_0} (p) := \lim\limits_{r \to 0} \delta^2_{\frac{1}{r}} \left ( f(p_0)^{-1} * f(p_0* \delta_r^1 (p) ) \right ),
$$
uniformly for $p$ in compact subsets of $U$.
\end{defin}
\begin{defin}\label{dHHH}
Consider $f: U \to \mathbb{R}$, $U \subseteq \mathbb{H}^n$ open. $f$ is \emph{P-differentiable} at $p_0 \in U$ if there is a (unique) homogeneous Lie group homomorphism $d_H f_{p_0} : \mathbb{H}^n \to \mathbb{R}$ such that
$$
d_H f_{p_0} (p) := \lim\limits_{r \to 0} \frac{ f \left (p_0* \delta_r (p) \right ) - f(p_0) }{r},
$$
uniformly for $p$ in compact subsets of $U$.
\end{defin}
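For a smooth $f$ the limit in the definition above is just the derivative at $r=0$, and the P-differential reduces to the horizontal part $x \, (Xf)(p_0) + y \, (Yf)(p_0)$; the vertical direction drops out, being homogeneous of order $2$. A \texttt{sympy} sketch for $n=1$, on one test function of our choosing:

```python
import sympy as sp

x0, y0, t0, x, y, t, r = sp.symbols('x0 y0 t0 x y t r')

def f(a, b, c):
    # an arbitrary smooth test function of our choosing
    return sp.sin(a) * b + a * c + c**2

# Coordinates of p_0 * delta_r(p), with the H^1 group law
px = x0 + r * x
py = y0 + r * y
pt = t0 + r**2 * t + sp.Rational(1, 2) * (x0 * r * y - y0 * r * x)

# For smooth f the limit in the definition is the derivative at r = 0
dHf = sp.diff(f(px, py, pt), r).subs(r, 0)

# Expected value: the horizontal part x (X f)(p_0) + y (Y f)(p_0)
f0 = f(x0, y0, t0)
Xf = sp.diff(f0, x0) - sp.Rational(1, 2) * y0 * sp.diff(f0, t0)
Yf = sp.diff(f0, y0) + sp.Rational(1, 2) * x0 * sp.diff(f0, t0)
assert sp.simplify(dHf - (x * Xf + y * Yf)) == 0
```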
\begin{obs}
Consider $f:U\subseteq \mathbb{H}^n \to \mathbb{H}^n = \mathbb{R}^{2n+1}$, $U$ open, and interpret it in components as $f=(f^1,\dots,f^{2n+1})$. A straightforward computation shows that, if $f$ is P-differentiable in the sense of Definition \ref{dGGG}, then $f^1,\dots,f^{2n}$ are P-differentiable in the sense of Definition \ref{dHHH}.
\end{obs}
\begin{proof}
Consider $n=1$ for simplicity; the other cases follow immediately. Consider $p_0=(x_0,y_0,t_0)$ and $p=(x,y,t)$ in $\mathbb{H}^1$. A straightforward computation shows that
\begin{align*}
&\lim_{r\to 0+} \delta_{\frac{1}{r}} \left ( f(p_0)^{-1} * f(p_0* \delta_r (p) ) \right )=\\
&= \lim_{r\to 0+} \Bigg (
\frac{f^1 (p_0 *\delta_r(p) ) - f^1 (p_0)}{r} , \frac{f^2 (p_0 *\delta_r(p) ) - f^2 (p_0)}{r} ,\\
& \frac{f^3 (p_0 *\delta_r(p) ) - f^3 (p_0) +\frac{1}{2} \left ( f^2 (p_0)f^1 (p_0 *\delta_r(p) ) - f^1 (p_0) f^2 (p_0 *\delta_r(p) ) \right ) }{r^2}
\Bigg ).
\end{align*}
By hypothesis the limit exists and the first two components give us the claim.
\end{proof}
\begin{defin}[see 2.11 in \cite{FSSC}]
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{R}$, $U$ open, be a P-differentiable function at $p \in U$. Then the \emph{Heisenberg gradient} or \emph{horizontal gradient} of $f$ at $p$ is defined as
$$
\nabla_\mathbb{H} f(p) := d_H f (p)^* \in \mathfrak{h}_1,
$$
or, equivalently,
$$
\nabla_\mathbb{H} f(p) = \sum_{j=1}^{n} \left [ (X_j f)(p) X_j + (Y_j f)(p) Y_j \right ].
$$
\end{defin}
\begin{no}[see 2.12 in \cite{FSSC}]\label{CH1}
Sets of differentiable functions can be defined with respect to the P-differentiability. Take $U \subseteq \mathbb{G}_1$ open, then
\begin{itemize}
\item
$C_{\mathbb{H}}^1 (U, \mathbb{G}_2)$ is the vector space of continuous functions $f:U \to \mathbb{G}_2 $ such that the P-differential $d_H f$ is continuous.
\item
$\left [ C_{\mathbb{H}}^1 (U, \mathbb{G}_2) \right ]^k$ is the set of $k$-tuples $f=\left (f^1,\dots, f^k \right)$ such that $f^i \in C_{\mathbb{H}}^1 (U, \mathbb{G}_2)$ for each $i=1 ,\dots , k$.
\end{itemize}
In particular, take $ U \subseteq \mathbb{H}^n$ open; then
\begin{itemize}
\item
$C_{\mathbb{H}}^1 (U, \mathbb{H}^n)$ is the vector space of continuous functions $f:U \to \mathbb{H}^n $ such that the P-differential $d_H f$ is continuous.
\item
$C_{\mathbb{H}}^1 (U, \mathbb{R})$ is the vector space of continuous functions $f:U \to \mathbb{R} $ such that $\nabla_\mathbb{H} f$ is continuous in $U$ (or, equivalently, such that the P-differential $d_H f$ is continuous).
\item
$C_{\mathbb{H}}^k (U, \mathbb{R})$ is the vector space of continuous functions $f:U \to \mathbb{R}$ such that the derivatives of the kind $W_{i_1} \dots W_{i_k}f$ are continuous in $U$, where $W_{i_h}$ is any $X_j$ or $Y_j$.
\item
$\left [ C_{\mathbb{H}}^m (U,\mathbb{R}) \right ]^k$ is the set of $k$-tuples $f=\left (f^1,\dots, f^k \right)$ such that $f^i \in C_{\mathbb{H}}^m (U,\mathbb{R})$ for each $i=1 ,\dots , k$.
\end{itemize}
\end{no}
\begin{obs}\label{diamond}
Given the notation above we have:
$$
\diamondinclusion{C^3 (U, \mathbb{R})}{C^2 (U, \mathbb{R})\\C_{\mathbb{H}}^3 (U, \mathbb{R})}{ C_{\mathbb{H}}^2 (U, \mathbb{R}) }
\subsetneq C^1 (U, \mathbb{R}) \subsetneq C_{\mathbb{H}}^1 (U, \mathbb{R}).
$$
\end{obs}
\noindent
We also define here an operator that will be useful later: the Hodge operator. The Hodge operator sends a $k$-vector to a $(2n+1-k)$-vector orthogonal to it. This will be used when talking about orientability as well as tangent and normal vector fields.
\begin{defin}[see 2.3 in \cite{FSSC} or 1.7.8 in \cite{FED}]\label{hodge}
Let $1 \leq k \leq 2n$. The \emph{Hodge operator} is the linear isomorphism
\begin{align*}
*: {\prescript{}{}\bigwedge}_k \mathfrak{h} &\rightarrow {\prescript{}{}\bigwedge}_{2n+1-k} \mathfrak{h} ,\\
\sum_I v_I V_I &\mapsto \sum_I v_I (*V_I),
\end{align*}
where
$$
*V_I:=(-1)^{\sigma(I) }V_{I^*},
$$
and, for $1 \leq i_1 < \cdots < i_k \leq 2n+1$,
\begin{itemize}
\item $I=\{ i_1,\cdots,i_k \}$,
\item $V_I= V_{i_1} \wedge \cdots \wedge V_{i_k} $,
\item $I^*=\{ i_1^*,\dots,i_{2n+1-k}^* \}=\{1, \cdots, 2n+1\} \smallsetminus I \quad $ and
\item $\sigma(I)$ is the number of couples $(i_h,i_l^*)$ with $i_h > i_l^*$.
\end{itemize}
\end{defin}
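The combinatorics of the sign $(-1)^{\sigma(I)}$ can be made concrete in a short Python sketch (the helper name \texttt{hodge} is ours), which also checks that applying $*$ twice gives the identity, since $k(2n+1-k)$ is always even in odd dimension:

```python
from itertools import combinations

def hodge(I, n):
    # *V_I = (-1)^sigma(I) V_{I*} in dimension 2n+1, as in the definition above;
    # I is an increasing tuple of indices in {1, ..., 2n+1}
    I_star = tuple(i for i in range(1, 2 * n + 2) if i not in I)
    # sigma(I): number of couples (i_h, i_l^*) with i_h > i_l^*
    sigma = sum(1 for ih in I for il in I_star if ih > il)
    return (-1) ** sigma, I_star

# In H^1 the basis is (W_1, W_2, W_3) = (X, Y, T):
assert hodge((1,), 1) == (1, (2, 3))    # *X =  Y ^ T
assert hodge((2,), 1) == (-1, (1, 3))   # *Y = -X ^ T
assert hodge((3,), 1) == (1, (1, 2))    # *T =  X ^ Y

# Applying * twice is the identity on basis k-vectors
for n in (1, 2):
    for k in range(1, 2 * n + 1):
        for I in combinations(range(1, 2 * n + 2), k):
            s1, I_star = hodge(I, n)
            s2, I_back = hodge(I_star, n)
            assert I_back == I and s1 * s2 == 1
```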
\subsection{Distances on $\mathbb{H}^n$}
In this subsection we mention briefly different equivalent distances on $\mathbb{H}^n$. General references are section 2.1 in \cite{GCmaster} and section 2.2.1 in \cite{CDPT}.\\
The usual intrinsic distance in the Heisenberg group is the \emph{Carnot} -- \emph{Carathéodory} distance $d_{cc}$, which measures the distance between any two points along shortest curves whose tangent vectors are horizontal.\\
Here we define more precisely another distance, equivalent to the first, called the \emph{Korányi} distance:
\begin{defin}
\label{norm}
We define the \emph{Korányi} distance on $\mathbb{H}^n$ by setting, for $p,q \in \mathbb{H}^n$,
$$
d_{\mathbb{H}} (p,q) := \norm{ q^{-1}*p }_{\mathbb{H}},
$$
where $ \norm{ \cdot }_{\mathbb{H}}$ is the \emph{Korányi} norm
$$
\norm{(x,y,t)}_{\mathbb{H}}:=\left ( |(x,y)|^4+16t^2 \right )^{\frac{1}{4}},
$$
with $(x,y,t) \in \mathbb{R}^{2n} \times \mathbb{R} $ and $| \cdot |$ the Euclidean norm.
\end{defin}
\begin{obs}
We show that $\norm{ \cdot }_{\mathbb{H}}$ is indeed a norm, as it satisfies the following properties:
\begin{enumerate}
\item
$\norm{(x,y,t)}_{\mathbb{H}} \geq 0, \ \norm{(x,y,t)}_{\mathbb{H}} = 0 \Leftrightarrow (x,y,t)=0$,
\item
$\norm{(x,y,t)*(x',y',t')}_{\mathbb{H}} \leq \norm{(x,y,t)}_{\mathbb{H}} + \norm{(x',y',t')}_{\mathbb{H}} $,
\item
Also $\norm{ \cdot }_{\mathbb{H}}$ is homogeneous of degree $1$ with respect to $\delta_r$:
$$
\norm{ \delta_r (x,y,t)}_{\mathbb{H}} = r \norm{(x,y,t)}_{\mathbb{H}} ,
$$
where $\delta_r$ appears in Definition \ref{delta}.
\end{enumerate}
\end{obs}
\begin{proof}
The first and third point can be verified immediately. A proof of the triangle inequality can be found in $1.F$ in \cite{KORR}.
\end{proof}
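The homogeneity of the norm, together with the left invariance and triangle inequality of the induced distance, can be spot-checked numerically; a short Python sketch with helper names of our own, for $n=1$:

```python
import math
import random

def koranyi_norm(p):
    # ||(x, y, t)||_H = (|(x, y)|^4 + 16 t^2)^(1/4), here for n = 1
    x, y, t = p
    return ((x * x + y * y) ** 2 + 16 * t * t) ** 0.25

def star(p, q):
    (x, y, t), (a, b, s) = p, q
    return (x + a, y + b, t + s + 0.5 * (x * b - y * a))

def inv(p):
    return (-p[0], -p[1], -p[2])

def d_H(p, q):
    # Koranyi distance d_H(p, q) = ||q^{-1} * p||_H
    return koranyi_norm(star(inv(q), p))

random.seed(0)
for _ in range(100):
    p, q, g = [tuple(random.uniform(-2, 2) for _ in range(3)) for _ in range(3)]
    r = random.uniform(0.1, 3.0)
    dil = (r * p[0], r * p[1], r * r * p[2])
    # homogeneity of degree 1 of the norm with respect to delta_r
    assert math.isclose(koranyi_norm(dil), r * koranyi_norm(p),
                        rel_tol=1e-9, abs_tol=1e-12)
    # left invariance of the distance
    assert math.isclose(d_H(star(g, p), star(g, q)), d_H(p, q),
                        rel_tol=1e-9, abs_tol=1e-12)
    # triangle inequality (spot check)
    assert d_H(p, q) <= d_H(p, g) + d_H(g, q) + 1e-12
```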
\begin{obs}
The Korányi distance is left invariant, namely,
$$
d_{\mathbb{H}} (p*q,p*q')=d_{\mathbb{H}} (q,q'), \quad p,q,q' \in \mathbb{H}^n.
$$
It is, moreover, homogeneous of degree $1$ with respect to $\delta_r$:
$$
d_\mathbb{H} \left ( \delta_r (p), \delta_r (q) \right ) = r d_{\mathbb{H}} (p,q) .
$$
\end{obs}
\begin{obs}
\label{8.8stein}
We already mentioned that we use $| \cdot |$ for the Euclidean norm. One can prove the following inequality:
$$
|p| \leq \norm{p}_\mathbb{H} \leq |p|^{\frac{1}{2}} \ \ \text{ when } \norm{p}_\mathbb{H} \leq 1.
$$
\end{obs}
\subsection{Dimensions and Integration on $\mathbb{H}^n$}
In this subsection we add information on the Heisenberg group's topology, dimensions and integrals. General references are section 2.1 in \cite{GCmaster} and section 2.2.3 in \cite{CDPT}.
\begin{obs}
The topology induced by the Korányi metric is equivalent to the Euclidean topology on $\mathbb{R}^{2n+1}$. The Heisenberg group $\mathbb{H}^n$ becomes, then, a locally compact topological group. As such, it has the \emph{right-invariant} and the \emph{left-invariant Haar measure}.
\end{obs}
\begin{defin}
\label{haar}
We call an outer measure $\mu$ a \emph{left-invariant} (or \emph{right-invariant}) \emph{Haar measure} on a locally compact Hausdorff topological group $G$ if the following conditions are satisfied:
\begin{itemize}
\item
$\mu(gE)=\mu(E) \text{ with } E \subseteq G \text{ and } g \in G, \text{ where } gE:=\{ga ; \ a \in E\}$ \\
$\left ( \text{or } \mu(Eg)=\mu(E) \text{ with } E \subseteq G \text{ and } g \in G, \text{ where } Eg:=\{ag ; \ a \in E\}\right ) $,
\item
$\mu(K)< \infty, \text{ for all } K \subset \subset G$,
\item
$\mu$ is outer regular: $\mu(E)=\inf \{ \mu(U) ;\ E \subseteq U \subseteq G, U \text{ open}\}, \ E \subseteq G $,
\item
$\mu$ is inner regular: $\mu(E)=\sup \{ \mu(K) ; \ K \subseteq E \subseteq G, K \text{ compact}\}, \ E \subseteq G$.
\end{itemize}
\end{defin}
\begin{obs}[see, among others, after remark 2.2 in \cite{FSSC2001}]
The ordinary Lebesgue measure on $\mathbb{R}^{2n+1}$ is invariant under both left and right translations on $\mathbb{H}^n$. In other words, the Lebesgue measure is both a left and right invariant Haar measure on $\mathbb{H}^n$.
\end{obs}
\begin{comment}
\begin{proof}
For the right translation:
$$
\int_{\mathbb{H}^n} f \left ( (x,y,t)(x',y',t') \right)dxdydt =
$$
$$
=\int_{\mathbb{H}^n} f \left ( x+x',y+y',t+t' -2(xy'-x'y) \right ) dxdydt=
$$
by the obvious change of variables, the Jacobian is equal to
$$
\begin{vmatrix}
1 & 0 & -2y'\\
0 & 1 & 2x'\\
0 & 0 & 1
\end{vmatrix}
=1,
$$
then
$$
=\int_{\mathbb{H}^n} f (x,y,t) dxdydt.
$$
\end{proof}
\end{comment}
\begin{obs}
\label{dimension}
It is easy to see that, denoting the ball of radius $r>0$ as
$$
B_\mathbb{H}(0,r):=\{ (x,y,t) \in \mathbb{H}^n ; \ \norm{(x,y,t)}_\mathbb{H} <r \},
$$
a change of variables gives
$$
|B_\mathbb{H}(0,r)|=
\int_{B_\mathbb{H}(0,r)} dxdydt=
r^{2n+2} \int_{B_\mathbb{H}(0,1)} dxdydt =
r^{2n+2}|B_\mathbb{H}(0,1)|.
$$
Thus $2n+2$ is the Hausdorff dimension of $ \left (\mathbb{H}^n, d_\mathbb{H} \right )$, which coincides with its homogeneous dimension.
\end{obs}
\begin{no}\label{CCK}
Consider $S \subseteq \mathbb{H}^n$. We denote its Hausdorff dimension with respect to the Euclidean distance as
$$
\dim_{\mathcal{H}_{E}} S,
$$
while its Hausdorff dimension with respect to the Carnot-Carathéodory and Korányi distances as
$$
\dim_{\mathcal{H}_{cc}} S= \dim_{\mathcal{H}_{\mathbb{H}}} S.
$$
\end{no}
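\begin{obs}
The two dimensions can indeed differ. A standard example, under the normalization of the Korányi norm used above, is the center $T=\{(0,0,t) ; \ t \in \mathbb{R}\}$: since $(0,0,t)^{-1}*(0,0,t')=(0,0,t'-t)$, the Korányi distance restricted to $T$ is
$$
d_\mathbb{H} \left ( (0,0,t),(0,0,t') \right )=|t'-t|^{\frac{1}{2}},
$$
a snowflaked Euclidean distance, so that
$$
\dim_{\mathcal{H}_{E}} T = 1, \quad \text{while} \quad \dim_{\mathcal{H}_{\mathbb{H}}} T = 2.
$$
\end{obs}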
\begin{comment}
\begin{proof}[Proof of Observation \ref{dimension}]
$$
\norm{(x,y,t)}_\mathbb{H} <r \ \Longleftrightarrow \
r \norm{\delta_{1/r}(x,y,t)}_\mathbb{H} <r \ \Longleftrightarrow \
\norm{ \left (\frac{x}{r},\frac{y}{r},\frac{t}{r^2} \right )}_\mathbb{H} <1
$$
so we denote new variables $x', \ y', \ t'$ such that
$$
\begin{cases}
x=rx' \ \in \mathbb{R}^n \\
y=ry' \ \in \mathbb{R}^n \\
t=r^2t' \ \in \mathbb{R}
\end{cases}
$$
The Jacobian of such change of variables is
$$
J=\left ( \frac{\partial v'_i}{\partial v_i} \right )_{i=1,\dots,2n+1}=
\begin{pmatrix}
r\\
&\ddots\\
&&r\\
&&&\ddots\\
&&&&r\\
&&&&&r^2\\
\end{pmatrix}=r^{2n+2}.
$$
\end{proof}
\noindent
We finish this chapter with a remark on the convolution.
\begin{defin} \label{conv1}
Given $f,g \in L^1(\mathbb{H}^n)$, we define the \emph{convolution} $f*g$ as
$$
f*g(x,y,t):= \int_{\mathbb{H}^n} f(x,y,t) g((x,y,t)^{-1} (x',y',t')) dx'dy'dt'
$$
\end{defin}
\begin{obs}
\label{conv2}
The following property is easy to check:
\begin{align*}
f*g(x,y,t) &= \int_{\mathbb{H}^n} f(x,y,t) g((x,y,t)^{-1} (x',y',t')) dx'dy'dt'=\\
& = \int_{\mathbb{H}^n} f((x,y,t) (x',y',t')^{-1}) g(x',y',t') dx'dy'dt'=\\
& = \int_{\mathbb{H}^n} f(x-x',y-y',t-t' - 2(xy'-x'y)) g(x',y',t') dx'dy'dt'.
\end{align*}
\end{obs}
\begin{obs}
\label{conv3}
If one sets $\check{g}(x,y,t):=g((x,y,t)^{-1})$, it is also true that
$$
\int_{\mathbb{H}^n} (f*g)(x,y,t)h (x,y,t) dxdydt = \int_{\mathbb{H}^n} f(x,y,t)(h*\check{g})(x,y,t) dxdydt
$$
provided that both sides make sense.
\end{obs}
\end{comment}
\chapter{Differential Forms and Rumin Cohomology}\label{DFaRC}
In this chapter we will present the precise definition of the Rumin complex
in any Heisenberg group (Section \ref{rumincomplex}). Then, to give a practical feeling of the difference between the de Rham and the Rumin complexes, we will write explicitly the differential complex of the Rumin cohomology in $\mathbb{H}^1$ and $\mathbb{H}^2$ (Sections \ref{cohomology1} and \ref{cohomology2}). As general references for this chapter, one can look at \cite{RUMIN} and \cite{FSSC}.\\
This chapter is connected with Appendices \ref{computationH2}, \ref{A} and \ref{B}.
Appendix \ref{computationH2} contains the proof of Proposition \ref{exH2}. Appendix \ref{A} presents the Rumin cohomology, in $\mathbb{H}^1$ and $\mathbb{H}^2$, using only one operator $d_c$, as opposed to the three operators ($d_Q, \ D, \ d_Q$ again) used more frequently in the literature. The main reference for this appendix is \cite{TRIP}. As will become clear from this chapter, direct computations of the Rumin differential operators become more and more challenging as the dimension of the space grows: Appendix \ref{B} offers formulas to compute the dimensions of the spaces involved in the Rumin complex in any dimension. There are also examples for $n=1,\dots,5$ which clearly show such computational challenges.
\section{The Rumin Complex}\label{rumincomplex}
In this section we precisely present the definition of the Rumin complex in the general Heisenberg group $\mathbb{H}^n$. We start by giving some basic definitions that can be found, for instance, in \cite{RUMIN} and \cite{FSSC}:
\begin{defin}\label{def_forms}
Consider $0\leq k \leq 2n+1$ and recall $\Omega^k$ from Definition \ref{kdim}. We denote:
\begin{itemize}
\item
$I^k := \{ \alpha \wedge \theta + \beta \wedge d \theta ; \ \alpha \in \Omega^{k-1}, \ \beta \in \Omega^{k-2} \}$,
\item
$J^k :=\{ \alpha \in \Omega^{k}; \ \alpha \wedge \theta =0, \ \alpha \wedge d\theta=0 \}$.
\end{itemize}
\end{defin}
\begin{defin}[Rumin complex]\label{complexHn}
The Rumin complex, due to Rumin in \cite{RUMIN}, is given by
$$
0 \to \mathbb{R} \to C^\infty \stackrel{d_Q}{\to} \frac{\Omega^1}{I^1} \stackrel{d_Q}{\to} \dots \stackrel{d_Q}{\to} \frac{\Omega^n}{I^n} \stackrel{D}{\to} J^{n+1} \stackrel{d_Q}{\to} \dots \stackrel{d_Q}{\to} J^{2n+1} \to 0,
$$
where $d$ is the standard differential operator and, for $k < n$,
$$
d_Q( [\alpha]_{I^k} ) := [d \alpha]_{I^{k+1}},
$$
while, for $k \geq n +1$,
$$
d_Q := d_{\vert_{J^k}}.
$$
The second order differential operator $D$ will be defined at the end of this section.
\end{defin}
\begin{rem}[proposition at page 286 in \cite{RUMIN}]
This structure indeed defines a complex. In other words, applying two consecutive operators of the chain gives zero.
\end{rem}
\begin{rem}
When $k=1$, $d_Q$ is the same as $d_H$, from Definition \ref{dHHH}.
\end{rem}
\begin{no}
The spaces of the kind $\frac{\Omega^k}{I^k} $ are called \emph{low dimensional}, while the spaces $ J^{k} $ are called \emph{high dimensional} or \emph{low codimensional}.
\end{no}
\begin{rem}
From the definition of $I^k$, $k=1,\dots,n$, one can see that $\alpha \wedge \theta \in I^k$ for any $\alpha \in \Omega^{k-1}$. This means that, modulo $I^k$, $\theta$ is never present in the low dimensional spaces $\frac{\Omega^k}{I^k} $'s.\\
On the other hand, every $\beta \in J^k$ must be of the form $\beta=\beta' \wedge \theta$, as this is the only way to satisfy the condition $\beta \wedge \theta=0$. This means that $\theta$ is always present in the high dimensional spaces $J^k$'s.
\end{rem}
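\begin{obs}
For instance, in $\mathbb{H}^1$ one can check both statements directly: $[\theta]_{I^1}=0$ in $\frac{\Omega^1}{I^1}$, since $\theta \in I^1$, while $dx \wedge \theta \in J^2$, since
$$
(dx \wedge \theta) \wedge \theta = 0 \quad \text{and} \quad (dx \wedge \theta) \wedge d\theta = 0,
$$
the latter being a $4$-form on the $3$-dimensional space $\mathbb{H}^1$.
\end{obs}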
\noindent
In order to be able to define $D$, some preliminary work is needed:
\begin{obs}\label{first}
First of all, notice that the definition, for $k < n$, of $d_Q( [\alpha]_{I^k} ) := [d \alpha]_{I^{k+1}}$ is well posed.
\end{obs}
\begin{proof}
The equality $[\alpha]_{I^k} = [\beta]_{I^k}$ means $ \beta - \alpha \in I^k$, which implies
\begin{align*}
\beta - \alpha = \sigma \wedge \theta + \tau \wedge d \theta,
\end{align*}
for some $\sigma \in \Omega^{k-1}, \ \tau \in \Omega^{k-2}$. Then one can write
\begin{align*}
d \beta - d \alpha =& d \sigma \wedge \theta +(- 1)^{k-1} \sigma \wedge d \theta +d \tau \wedge d \theta + 0\\
=& d \sigma \wedge \theta + ( (- 1)^{k-1} \sigma +d \tau ) \wedge d \theta \in I^{k+1}.
\end{align*}
Then $[d \alpha]_{I^{k+1}} = [d \beta]_{I^{k+1}}.$ This gives the well-posedness.
\end{proof}
\begin{no}\label{middleequivclass}
Let $\gamma \in \Omega^{k-1}$ and consider the quotient space
$$
{\prescript{}{}\bigwedge}^k \mathfrak{h}_1 = \left \{ \beta \in \Omega^k ; \ \beta =0 \ \text{or} \ \beta \wedge \theta \neq 0 \right \} \cong \frac{\Omega^k}{ \{ \gamma \wedge \theta \} } ,
$$
where $ {\prescript{}{}\bigwedge}^k \mathfrak{h}_1 $ appears in Definition \ref{kdim} and we write $\{ \gamma \wedge \theta \} = \{ \gamma \wedge \theta ; \ \gamma \in \Omega^{k-1} \}$ for short. The equivalence is given by $\beta \mapsto ( \beta)_{\vert_{ {\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }}$.\\
Then, given $ \alpha \in \Omega^k$, we denote by $[\alpha]_{ \{ \gamma \wedge \theta \} }$ its class in this quotient. \\
\end{no}
\begin{obs}\label{finalequivclass}
Let $\gamma \in \Omega^{k-1}$ and $\beta \in \Omega^{k-2}$. One can see, straight by the definition of $I^k$, that
$$
\frac{ \frac{\Omega^k}{ \{ \gamma \wedge \theta \} } }{ \{ \beta \wedge d \theta \} } \cong \frac{\Omega^k}{I^k}, \quad k=1,\dots,n,
$$
where we also write $\{ \beta \wedge d\theta \} = \{ \beta \wedge d\theta ; \ \beta \in \Omega^{k-2} \}$ for short.
\end{obs}
\noindent
The following lemma is necessary to define the second order differential operator $D$. Given $[\alpha]_{ \{ \gamma \wedge \theta \} } \in \frac{\Omega^n}{ \{ \gamma \wedge \theta \} }$, a lifting of $[\alpha]_{ \{ \gamma \wedge \theta \} }$ is any $\alpha' \in \Omega^n$ such that $ [\alpha]_{ \{ \gamma \wedge \theta \} } = [\alpha']_{ \{ \gamma \wedge \theta \} }$.
\begin{lem}[Rumin \cite{RUMIN}, page 286]\label{lemma}
For every form $[\alpha]_{ \{ \gamma \wedge \theta \} } \in \frac{\Omega^n}{ \{ \gamma \wedge \theta \} }$, there exists a unique lifting $\tilde{\alpha} \in \Omega^n$ of $[\alpha]_{ \{ \gamma \wedge \theta \} }$ so that $d \tilde{\alpha} \in J^{n+1}$.
\end{lem}
\begin{proof}
Note that this proof is not exactly the one given by Rumin, but still follows the same steps.\\
Let $ \alpha \in \Omega^n $ $ \left ( \text{so }[\alpha]_{ \{ \gamma \wedge \theta \} } \in \frac{\Omega^n}{ \{ \gamma \wedge \theta \} } \right )$ and define
$$
\tilde{\alpha}:= \alpha + \beta \wedge \theta \in \Omega^n,
$$
where $\beta \in {\prescript{}{}\bigwedge}^{n-1} \mathfrak{h}_1$.
Then compute
\begin{align*}
\theta \wedge d \tilde{\alpha}
=& \theta \wedge d \alpha + \theta \wedge d( \beta \wedge \theta)\\
=& \theta \wedge d \alpha + \theta \wedge d \beta \wedge \theta+(- 1)^{\vert \beta \vert} \theta \wedge \beta \wedge d \theta\\
=& \theta \wedge d \alpha + (- 1)^{\vert \beta \vert} \theta \wedge \beta \wedge d \theta\\
=& \theta \wedge d \alpha + (- 1)^{\vert \beta \vert} \theta \wedge d \theta \wedge \beta\\
=& \theta \wedge d \alpha + (- 1)^{\vert \beta \vert} \theta \wedge L(\beta) \\
=& \theta \wedge \left ( d \alpha + (- 1)^{\vert \beta \vert} L(\beta) \right ),
\end{align*}
where $L$ (see $2$ in \cite{RUMIN}) is the isomorphism
\begin{align*}
L: {\prescript{}{}\bigwedge}^{n-1} \mathfrak{h}_1 \to {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1, \ \beta \mapsto d \theta \wedge \beta .
\end{align*}
Notice that, since $d \alpha \in \Omega^{n+1}$, we can divide it as
$$
d \alpha = (d \alpha)_{\vert_{ \left ( {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 \right )^\perp }} + (d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 }},
$$
where
$$
\theta \wedge (d \alpha)_{\vert_{ \left ( {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 \right )^\perp }} =0,
$$
and, by isomorphism, there exists a unique $\beta$ so that
$$
(- 1)^{\vert \beta \vert} L(\beta)=- (d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 }}.
$$
With such a choice of $\beta$ one gets
$$
\theta \wedge \left ( d \alpha + (- 1)^{\vert \beta \vert} L(\beta) \right ) =0.
$$
Then
$$
\theta \wedge d \tilde{\alpha} =0,
$$
and, finally, also
$$
d \theta \wedge d \tilde{\alpha} = d(\theta \wedge d \tilde{\alpha}) =0.
$$
Then, by definition, $d \tilde{\alpha} \in J^{n+1}$.
\end{proof}
\begin{obs}\label{explicitalphatilde}
In the proof of Lemma \ref{lemma}, instead of $\beta$, we could have chosen $\beta':=(- 1)^{\vert \beta \vert} \beta$, which would give
$$
L(\beta')=- (d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 }},
$$
or, equivalently,
$$
\beta' = L^{-1} \left (- (d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 }} \right ).
$$
Then the lifting can be written explicitly as
$$
\tilde{\alpha} = \alpha + L^{-1} \left (- (d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 }} \right ) \wedge \theta.
$$
\end{obs}
\begin{defin}\label{D}
Using the observation above, we can finally define $D$ as
$$
D( [\alpha]_{I^n} ) := d \tilde{\alpha} = d \left ( \alpha + L^{-1} \left (- (d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 }} \right ) \wedge \theta \right ),
$$
and the definition is well-posed.
\end{defin}
\section{Cohomology of $\mathbb{H}^1$}\label{cohomology1}
In this section we explicitly write the differential complex of the Rumin cohomology in $\mathbb{H}^1$ and compare it to the de Rham cohomology.
Furthermore, this sets a method for the more challenging case of $\mathbb{H}^2$, as well as hints at the qualitative difference between the first Heisenberg group $\mathbb{H}^1$ and all the others.
\begin{obs}
In the case $n=1$, the spaces of the Rumin cohomology presented in Definition \ref{def_forms} are reduced to
\begin{align*}
\Omega^1 &= \spn \{ dx, dy, \theta \},\\
I^1&=\spn \{ \theta \},\\
\frac{\Omega^1}{I^1} &\cong \spn \{ dx, dy \} , \\
J^2&= \spn \{ dx \wedge \theta, dy \wedge \theta \},\\
J^3&= \spn \{ dx \wedge dy \wedge \theta \}.
\end{align*}
Moreover, in this case the isomorphism $L : {\prescript{}{}\bigwedge}^0 \mathfrak{h}_1 \to {\prescript{}{}\bigwedge}^2 \mathfrak{h}_1 $ is given by
\begin{align*}
L : \spn \{ f \} \to \spn \{ dx \wedge dy\}, \ f \mapsto -f dx \wedge dy.
\end{align*}
\end{obs}
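\begin{obs}
A direct count from the previous observation makes the reduction with respect to the de Rham complex explicit: in $\mathbb{H}^1$ the spaces of the Rumin complex have ranks
$$
1 \ \left ( C^\infty \right ), \quad 2 \ \left ( \tfrac{\Omega^1}{I^1} \right ), \quad 2 \ \left ( J^2 \right ), \quad 1 \ \left ( J^3 \right ),
$$
while the corresponding ranks in the de Rham complex of a $3$-dimensional manifold are $1, \ 3, \ 3, \ 1$.
\end{obs}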
\begin{comment}
\begin{obs
In the case $k=1$, the Rumin complex presented in Definition \ref{complexHn} is reduced to
$$
0 \to \mathbb{R} \to C^\infty \stackrel{d_Q}{\to} \frac{\Omega^1}{I^1} \stackrel{D}{\to} J^{2} \stackrel{d_Q}{\to} J^{3} \to 0,
$$
where
\begin{align*}
\Omega^1 &= \spn \{ dx, dy, \theta \},\\
I^1&=\spn \{ \theta \},\\
\frac{\Omega^1}{I^1} &\cong \spn \{ dx, dy \} , \\
J^2&= \spn \{ dx \wedge \theta, dy \wedge \theta \},\\
J^3&= \spn \{ dx \wedge dy \wedge \theta \}.
\end{align*}
Moreover, in this case the isomorphism $L$ can be explicitly written as
\begin{align*}
L : {\prescript{}{}\bigwedge}^0 \mathfrak{h}_1 = \spn \{ f \} &\to {\prescript{}{}\bigwedge}^2 \mathfrak{h}_1 = \spn \{ dx \wedge dy\},\\
f & \mapsto f d\theta = -f dx \wedge dy.
\end{align*}
\end{obs}
\end{comment}
\begin{comment}
\begin{equation*}
0 \to \mathbb{R} \to C^\infty
\tikz[anchor=base, baseline]{\node(d1) {$\stackrel{d_Q}{\to}$} }
\frac{\Omega^1}{I^1}
\tikz[anchor=base, baseline]{\node(d2) {$\stackrel{D}{\to}$}}
J^{2}
\tikz[anchor=base, baseline]{\node(d3){$ \stackrel{d_Q}{\to} $}}
J^{3} \to 0
\end{equation*}
\vspace{1cm}\par
\noindent\hfil\begin{tikzpicture}
\node[left=1cm, align=center] (t1) {$ f \mathbin{\color{red} \stackrel{d_Q}{\mapsto} } [Xf dx + Yf dy]_{ I^1 } $};
\node[below = 1cm,align=center] (t2) at (t1.east) {$ [\alpha_1 dx + \alpha_2 dy]_{ I^1 } \mathbin{\color{red} \stackrel{D}{\mapsto} } $ \\ $ (XX \alpha_2 - XY \alpha_1 -T\alpha_1 )dx\wedge \theta+ $ \\ $ + (YX \alpha_2 - YY\alpha_1 -T\alpha_2)dy\wedge \theta $};
\node[above=1cm, right =2cm,align=center] (t3) at (t2.east)
{$f dx\wedge \theta + g dy\wedge \theta \mathbin{\color{red} \stackrel{d_Q}{\mapsto} } (Xg -Yf) dx \wedge dy \wedge \theta$};
\end{tikzpicture}
\begin{tikzpicture}[overlay]
\draw[blue,thick,->] (d1) to [in=90,out=265] (t1.north);
\draw[blue,thick,->] (d2) to [in=90,out=265] (t2.north);
\draw[blue,thick,->] (d3) to [in=90,out=265] (t3.north);
\end{tikzpicture}
\end{comment}
\noindent
The following proposition shows the explicit action of each differential operator in the Rumin complex of $\mathbb{H}^1$.
\begin{prop}[Explicit Rumin complex in $\mathbb{H}^1$]\label{exH1}
In the case $n=1$, the Rumin complex presented in Definition \ref{complexHn} becomes
\begin{equation*}
0 \to \mathbb{R} \to C^\infty \stackrel{d_Q^{(1)}}{\to} \frac{\Omega^1}{I^1} \stackrel{D}{\to} J^{2} \stackrel{d_Q^{(3)}}{\to} J^{3} \to 0
\end{equation*}
with
\begin{align*}
f &\mathbin{ \stackrel{d_Q^{(1)}}{\mapsto} } [Xf dx + Yf dy]_{ I^1 },\\
[\alpha_1 dx + \alpha_2 dy]_{ I^1 } &\mathbin{ \stackrel{D}{\mapsto} } (XX \alpha_2 - XY \alpha_1 -T\alpha_1 )dx\wedge \theta + (YX \alpha_2 - YY\alpha_1 -T\alpha_2)dy\wedge \theta, \\
\alpha_1 dx\wedge \theta + \alpha_2 dy\wedge \theta &\mathbin{ \stackrel{d_Q^{(3)}}{\mapsto} } (X\alpha_2 -Y\alpha_1 ) dx \wedge dy \wedge \theta.
\end{align*}
\end{prop}
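\begin{obs}
As a consistency check, these formulas can be used to verify directly that $D \circ d_Q^{(1)}=0$. With the convention of this chapter ($d\theta = -dx \wedge dy$, so that $[X,Y]=T$, while $T$ commutes with $X$ and $Y$), taking $\alpha_1 = Xf$ and $\alpha_2 = Yf$ gives
\begin{align*}
XX \alpha_2 - XY \alpha_1 - T \alpha_1 &= X(XY-YX)f - TXf = XTf-TXf=0,\\
YX \alpha_2 - YY \alpha_1 - T \alpha_2 &= Y(XY-YX)f - TYf = YTf-TYf=0.
\end{align*}
\end{obs}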
\begin{comment}
\begin{tcolorbox}[
colback=white,
colframe=black,
center,
center title,
]
\begin{prop}[Explicit Rumin complex in $\mathbb{H}^1$]\label{exH1}
\begin{equation*}
0 \to \mathbb{R} \to C^\infty
\tikz[anchor=base, baseline]{\node(d4) {$\stackrel{d_Q}{\to}$} }
\frac{\Omega^1}{I^1}
\tikz[anchor=base, baseline]{\node(d5) {$\stackrel{D}{\to}$}}
J^{2}
\tikz[anchor=base, baseline]{\node(d6){$ \stackrel{d_Q}{\to} $}}
J^{3} \to 0
\end{equation*}
\vspace{1cm}\par
\noindent\hfil\begin{tikzpicture}
\node[align=center] (t1) {$ f \mathbin{\color{red} \stackrel{d_Q}{\mapsto} } [Xf dx + Yf dy]_{ I^1 } $};
\node[below = 2cm,align=center] (t2) at (t1.east) {$ [\alpha_1 dx + \alpha_2 dy]_{ I^1 } \mathbin{\color{red} \stackrel{D}{\mapsto} } $ \\ $ (XX \alpha_2 - XY \alpha_1 -T\alpha_1 )dx\wedge \theta+ $ \\ $ + (YX \alpha_2 - YY\alpha_1 -T\alpha_2)dy\wedge \theta $};
\node[below = 2cm,align=center] (t3) at (t2.east)
{$f dx\wedge \theta + g dy\wedge \theta \mathbin{\color{red} \stackrel{d_Q}{\mapsto} } (Xg -Yf) dx \wedge dy \wedge \theta$};
\end{tikzpicture}
\begin{tikzpicture}[overlay]
\draw[blue,thick,->] (d4) to [in=90,out=265] (t1.north);
\draw[blue,thick,->] (d5) to [in=90,out=265] (t2.north);
\draw[blue,thick,->] (d6) to [in=90,out=265] (t3.north);
\end{tikzpicture}
\end{prop}
\end{tcolorbox}
\end{comment}
\begin{comment}
$$
0 \to \mathbb{R} \to C^\infty \stackrel{d_Q}{\to} \frac{\Omega^1}{I^1} \stackrel{D}{\to} J^{2} \stackrel{d_Q}{\to} J^{3} \to 0
$$
$$
\ \ \ \ \ \ \ \ \ f \stackrel{d_Q}{\mapsto} Xf dx + Yf dy
$$
$$
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \alpha_1 dx + \alpha_2 dy \stackrel{D}{\mapsto} (XX \alpha_2 - XY \alpha_1 -T\alpha_1 )dx\wedge \theta
$$
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + (YX \alpha_2 - YY\alpha_1 -T\alpha_2)dy\wedge \theta
$$
$$
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ f dx\wedge \theta + g dy\wedge \theta \stackrel{d_Q}{\mapsto} (Xg -Yf) dx \wedge dy \wedge \theta
$$
\begin{align*}
0 \to \mathbb{R} \to C^\infty & \stackrel{d_Q}{\to} \frac{\Omega^1}{I^1} \stackrel{D}{\to} J^{2} \stackrel{d_Q}{\to} J^{3} \to 0\\
f & \stackrel{d_Q}{\mapsto} Xf dx + Yf dy \\
\alpha_1 dx &+ \alpha_2 dy \stackrel{D}{\mapsto} (XX \alpha_2 - XY \alpha_1 -T\alpha_1 )dx\wedge \theta \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ + (YX \alpha_2 - YY\alpha_1 -T\alpha_2)dy\wedge \theta \\
& f dx\wedge \theta + g dy\wedge \theta \stackrel{d_Q}{\mapsto} (Xg -Yf) dx \wedge dy \wedge \theta \\
\end{align*}
\begin{align*}
\begin{split}
C^\infty & \stackrel{d_Q}{\to} \frac{\Omega^1}{I^1} \\
f & \stackrel{d_Q}{\mapsto} Xf dx + Yf dy
\end{split}\\\\
\begin{split}
\frac{\Omega^1}{I^1} & \stackrel{D}{\to} J^{2} \\
\alpha_1 dx + \alpha_2 dy & \stackrel{D}{\mapsto} (XX \alpha_2 - XY \alpha_1 -T\alpha_1 )dx\wedge \theta \\
& + (YX \alpha_2 - YY\alpha_1 -T\alpha_2)dy\wedge \theta
\end{split}\\\\
\begin{split}
J^{2} & \stackrel{d_Q}{\to} J^{3}\\
f dx\wedge \theta + g dy\wedge \theta & \stackrel{d_Q}{\mapsto} (Xg -Yf) dx \wedge dy \wedge \theta
\end{split}\\
\end{align*}
\end{comment}
\begin{proof}
This proposition can be proved by simple computations. Two of the three cases are trivial.\\
Indeed, by Definition \ref{complexHn} and Observation \ref{df}, we have
$$
d_Q^{(1)} f = [Xf dx + Yf dy]_{ I^1 }.
$$
By the same definition and observation, we also get
\begin{align*}
d_Q^{(3)} ( \alpha_1 dx\wedge \theta + \alpha_2 dy\wedge \theta ) =& Y\alpha_1 dy \wedge dx \wedge \theta + X\alpha_2 dx \wedge dy \wedge \theta \\
=& (X\alpha_2 -Y\alpha_1 ) dx \wedge dy \wedge \theta .
\end{align*}
Finally we have to compute $D$. We remind that, by Observation \ref{df},
$$
d g = Xg dx + Y g dy + Tg \theta,
$$
with $g: U\subseteq \mathbb{H}^1 \to \mathbb{R}$ smooth.\\
Consider now $\alpha = \alpha_1 dx + \alpha_2 dy \in \Omega^1$. Then $[\alpha]_{ I^1 }=[\alpha_1 dx + \alpha_2 dy]_{ I^1 } \in \frac{\Omega^1}{I^1} $, and the (full) exterior derivative of $\alpha$ is:
\begin{align*}
d \alpha
=& d ( \alpha_1 dx + \alpha_2 dy ) = Y \alpha_1 dy \wedge dx + T\alpha_1 \theta \wedge dx +
X \alpha_2 dx \wedge dy + T\alpha_2 \theta \wedge dy \\
=&( X \alpha_2 - Y \alpha_1 ) dx \wedge dy - T\alpha_1 dx \wedge \theta - T\alpha_2 dy \wedge \theta.
\end{align*}
Then
$$
(d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{1} \mathfrak{h}_1 }} = ( X \alpha_2 - Y \alpha_1 ) dx \wedge dy = - ( X \alpha_2 - Y \alpha_1 ) d \theta,
$$
and
$$
L^{-1} \left (- (d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{1} \mathfrak{h}_1 }} \right ) = X \alpha_2 - Y \alpha_1 .
$$
Finally
\begin{align*}
D([\alpha]_{I^1} ) =& d \left ( \alpha + ( X \alpha_2 - Y \alpha_1 ) \theta \right )\\
=& d \alpha + d \left ( X \alpha_2 - Y \alpha_1 \right ) \wedge \theta + \left ( X \alpha_2 - Y \alpha_1 \right ) d \theta\\
=& ( X \alpha_2 - Y \alpha_1 ) dx \wedge dy - T\alpha_1 dx \wedge \theta - T\alpha_2 dy \wedge \theta \\
&+ X( X \alpha_2 - Y \alpha_1 ) dx \wedge \theta + Y ( X \alpha_2 - Y \alpha_1 ) dy \wedge \theta - ( X \alpha_2 - Y \alpha_1 ) dx \wedge dy \\
=& ( XX \alpha_2 - XY \alpha_1 - T\alpha_1 ) dx \wedge \theta + (YX \alpha_2 -YY \alpha_1 - T\alpha_2 ) dy \wedge \theta.
\end{align*}
\end{proof}
\section{Cohomology of $\mathbb{H}^2$}\label{cohomology2}
In this section, as we did in the previous one for $\mathbb{H}^1$, we explicitly write the differential complex of the Rumin cohomology in $\mathbb{H}^2$. The computation is quantitatively more challenging than the previous one, so we report it in Appendix \ref{computationH2}. In this case the bases of the spaces of the complex show more variety, as one must take into account more possible combinations than in the previous case. In a qualitative sense, this is caused by the fact that the algebra of $\mathbb{H}^n$ allows a strict subalgebra of step $2$ only for $n >1$.
\begin{obs}
\label{obsH2}
For $n=2$, the spaces of the Rumin cohomology presented in Definition \ref{def_forms} are reduced to
\begin{align*}
\Omega^1 &= \spn \{ dx_1, dx_2, dy_1, dy_2, \theta \},\\
I^1&=\spn \{ \theta \},\\
\frac{\Omega^1}{I^1} &\cong \spn \{dx_1, dx_2, dy_1, dy_2 \},\\
\Omega^2 &= \spn \{ dx_1 \wedge dx_2, dx_1 \wedge dy_1, dx_1 \wedge dy_2, dx_1 \wedge \theta, dx_2 \wedge dy_1, dx_2 \wedge dy_2, \\
& \hspace{7.8cm} dx_2 \wedge \theta, dy_1 \wedge dy_2, dy_1 \wedge \theta, dy_2 \wedge \theta \} ,\\
I^2&=\spn \{ dx_1\wedge \theta, dx_2\wedge \theta, dy_1\wedge \theta, dy_2 \wedge \theta, dx_1 \wedge dy_1 + dx_2 \wedge dy_2 \},\\
\frac{\Omega^2}{I^2} &\cong \spn \{
dx_1 \wedge dx_2, dx_1 \wedge dy_2, dx_2 \wedge dy_1, dy_1 \wedge dy_2 \} \oplus \frac{ \spn \{ dx_1 \wedge dy_1, dx_2 \wedge dy_2 \} }{ \spn \{ dx_1 \wedge dy_1 + dx_2 \wedge dy_2 \} } ,\\
&\cong \spn \{ dx_1 \wedge dx_2, dx_1 \wedge dy_2, dx_2 \wedge dy_1, dy_1 \wedge dy_2, dx_1 \wedge dy_1 - dx_2 \wedge dy_2 \} ,\\
J^3&= \spn \{ dx_1 \wedge dx_2 \wedge \theta, dx_1 \wedge dy_2 \wedge \theta, dx_2 \wedge dy_1 \wedge \theta, dy_1 \wedge dy_2 \wedge \theta,\\
&\hspace{8.8cm} dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta \},\\
J^4&= \spn \{ dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta, dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta, dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta,\\
&\hspace{10.5cm} dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta \},\\
J^5&= \spn \{ dx_1 \wedge dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta \}.
\end{align*}
Note that, in rewriting $\frac{\Omega^2}{I^2} $, we simply observe that $\{ dx_1 \wedge dy_1, dx_2 \wedge dy_2 \}$ and $\{ dx_1 \wedge dy_1 + dx_2 \wedge dy_2, dx_1 \wedge dy_1 - dx_2 \wedge dy_2 \}$ span the same subspace.
\end{obs}
\begin{comment}
\begin{itemize}
\item $\Omega^1 = \text{span}\{ dx_1, dx_2, dy_1, dy_2, \theta \}$
\item $I^1=\text{span}\{ \theta \}$
\item $\frac{\Omega^1}{I^1} \cong \text{span}\{dx_1, dx_2, dy_1, dy_2 \} $
\item $\Omega^2 = \text{span}\{ dx_1 \wedge dx_2, dx_1 \wedge dy_1, dx_1 \wedge dy_2, dx_1 \wedge \theta, dx_2 \wedge dy_1, dx_2 \wedge dy_2,$ $dx_2 \wedge \theta, dy_1 \wedge dy_2, dy_1 \wedge \theta, dy_2 \wedge \theta \}$
\item $I^2=\text{span}\{ dx_1\wedge \theta, dx_2\wedge \theta, dy_1\wedge \theta, dy_2 \wedge \theta, dx_1 \wedge dy_1 + dx_2 \wedge dy_2 \}$
\item $\frac{\Omega^2}{I^2} \cong \text{span}\{
dx_1 \wedge dx_2, dx_1 \wedge dy_2, dx_2 \wedge dy_1, dy_1 \wedge dy_2 \} \oplus \frac{ \text{span}\{ dx_1 \wedge dy_1, dx_2 \wedge dy_2 \} }{ \text{span}\{ dx_1 \wedge dy_1 + dx_2 \wedge dy_2 \} } $\\
$\cong \text{span}\{ dx_1 \wedge dx_2, dx_1 \wedge dy_2, dx_2 \wedge dy_1, dy_1 \wedge dy_2, dx_1 \wedge dy_1 - dx_2 \wedge dy_2 \} $
\item $J^3= \text{span}\{ dx_1 \wedge dx_2 \wedge \theta, dx_1 \wedge dy_2 \wedge \theta, dx_2 \wedge dy_1 \wedge \theta, dy_1 \wedge dy_2 \wedge \theta, dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta \}$
\item $J^4= \text{span}\{
dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta,
dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta,
dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta,
dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta
\}$
\item $J^5= \text{span}\{ dx_1 \wedge dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta \}$
\end{itemize}
\end{comment}
\begin{obs}
In this case the isomorphism $L$ acts as follows:
\begin{align*}
L : {\prescript{}{}\bigwedge}^1 \mathfrak{h}_1 & \to {\prescript{}{}\bigwedge}^3 \mathfrak{h}_1 ,\\
\omega & \mapsto \omega \wedge d \theta,\\
dx_1 & \mapsto -dx_1 \wedge dx_2 \wedge dy_2,\\
dy_1 & \mapsto - dy_1 \wedge dx_2 \wedge dy_2,\\
dx_2 & \mapsto - dx_2 \wedge dx_1 \wedge dy_1,\\
dy_2 & \mapsto - dy_2 \wedge dx_1 \wedge dy_1.
\end{align*}
\end{obs}
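\begin{obs}
All four images follow from one computation. With the convention of this chapter, $d \theta = -(dx_1 \wedge dy_1 + dx_2 \wedge dy_2)$, so, for instance,
$$
L(dx_1) = dx_1 \wedge d \theta = - dx_1 \wedge ( dx_1 \wedge dy_1 + dx_2 \wedge dy_2 ) = -dx_1 \wedge dx_2 \wedge dy_2,
$$
since $dx_1 \wedge dx_1 \wedge dy_1 =0$; the other three generators are computed in the same way.
\end{obs}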
\begin{rem}
Notice in particular that in the highest low dimensional space, $\frac{\Omega^2}{I^2}$, and in the lowest high dimensional space, $J^3$, there are generators that did not appear in the case $n=1$, namely $dx_1 \wedge dy_1 - dx_2 \wedge dy_2$ and $ dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta$ respectively. This is due to the fact that the first Heisenberg group $\mathbb{H}^1$ is the only Heisenberg group that is also a free Carnot group.
\end{rem}
\begin{prop}[Explicit Rumin complex in $\mathbb{H}^2$]\label{exH2}
\begin{equation*}
0 \to \mathbb{R} \to C^\infty \stackrel{d_Q^{(1)}}{\to} \frac{\Omega^1}{I^1}
\stackrel{d_Q^{(2)}}{\to} \frac{\Omega^2}{I^2}
\stackrel{D}{\to} J^{3}
\stackrel{d_Q^{(3)}}{\to} J^{4}
\stackrel{d_Q^{(4)}}{\to}
J^{5} \to 0, \quad \text{with}
\end{equation*}
\begin{align*}
f & \mathbin{ \stackrel{d_Q^{(1)}}{\mapsto} } [X_1f dx_1 + X_2f dx_2+ Y_1f dy_1+ Y_2f dy_2 ]_{I^1} ,\\
& \\
& [ \alpha_1 dx_1 + \alpha_2 dx_2 + \alpha_3 dy_1+ \alpha_4 dy_2 ]_{I^1} \\
& \mathbin{ \stackrel{d_Q^{(2)}}{\mapsto} } \Big [ ( X_1 \alpha_2 - X_2 \alpha_1 ) dx_1 \wedge dx_2 + ( X_2 \alpha_3 - Y_1 \alpha_2 ) dx_2 \wedge dy_1\\
&\ \ \ \ \ + ( Y_1 \alpha_4 - Y_2 \alpha_3 ) dy_1 \wedge dy_2 + ( X_1 \alpha_4 - Y_2 \alpha_1 ) dx_1 \wedge dy_2\\
& \ \ \ \ \ + \left ( \frac{ X_1 \alpha_3 - Y_1 \alpha_1 - X_2 \alpha_4 + Y_2 \alpha_2 }{2} \right ) ( dx_1 \wedge dy_1 - dx_2 \wedge dy_2 ) \Big ]_{I^2},\\
& \\
& [\alpha_1 dx_1 \wedge dx_2 + \alpha_3 dx_1 \wedge dy_2 + \alpha_4 dx_2 \wedge dy_1 +\alpha_6 dy_1 \wedge dy_2 \\
& + \beta (dx_1 \wedge dy_1 - dx_2 \wedge dy_2) ]_{I^2}\\
& \mathbin{\stackrel{D}{\mapsto} } \left [ ( - X_1 Y_1 -Y_2 X_2) \alpha_1 +X_2 X_2 \alpha_3 -X_1 X_1 \alpha_4 +2X_1 X_2 \beta \right ] dx_1 \wedge dx_2 \wedge \theta \\
&\ \ \ \ \ +\left [ -Y_2 Y_2 \alpha_1 +(X_2 Y_2 -X_1 Y_1 )\alpha_3 +X_1 X_1 \alpha_6 +2X_1 Y_2 \beta \right ] dx_1 \wedge dy_2 \wedge \theta \\
&\ \ \ \ \ +\left [ +Y_1 Y_1 \alpha_1 +(Y_1 X_1 -Y_2 X_2) \alpha_4 -X_2 X_2 \alpha_6 -2 X_2 Y_1 \beta \right ] dx_2 \wedge dy_1 \wedge \theta \\
&\ \ \ \ \ +\left [ -Y_1 Y_1 \alpha_3 +Y_2 Y_2 \alpha_4 +(Y_1 X_1 + X_2 Y_2) \alpha_6 + 2Y_1 Y_2 \beta \right ] dy_1 \wedge dy_2 \wedge \theta \\
&\ \ \ \ \ +\left [ -Y_1 Y_2 \alpha_1 +Y_1 X_2 \alpha_3 -X_1 Y_2 \alpha_4 -X_1 X_2 \alpha_6 \right ] ( dx_1 \wedge dy_1 \wedge \theta -dx_2 \wedge dy_2\wedge \theta),\\
& \\
& \alpha_1 dx_1 \wedge dx_2 \wedge \theta +\alpha_2 dx_1 \wedge dy_2 \wedge \theta + \alpha_3 dx_2 \wedge dy_1 \wedge \theta + \alpha_4 dy_1 \wedge dy_2 \wedge \theta \\
& + \alpha_5( dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta) \\
& \mathbin{ \stackrel{d_Q^{(3)}}{\mapsto} } ( Y_1 \alpha_1 + X_1 \alpha_3 - X_2 \alpha_5) dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta \\
&\ \ \ \ \ +( Y_2 \alpha_1 -X_2 \alpha_2 - X_1\alpha_5 )dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta\\
&\ \ \ \ \ +( -Y_1 \alpha_2 + X_1 \alpha_4 + Y_2 \alpha_5 ) dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta \\
&\ \ \ \ \ +( Y_2 \alpha_3 + X_2 \alpha_4 +Y_1 \alpha_5 ) dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta ,\\
& \\
& \alpha_1 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta + \alpha_2 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta \\
& + \alpha_3 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta + \alpha_4 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta \\
& \mathbin{ \stackrel{d_Q^{(4)}}{\mapsto} } ( -Y_2 \alpha_1 + Y_1 \alpha_2 - X_2 \alpha_3 + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta .
\end{align*}
\end{prop}
\begin{proof}
The proof is given in Appendix \ref{computationH2}.
\end{proof}
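\begin{obs}
As in $\mathbb{H}^1$, these formulas can be sanity-checked against the complex property. For instance, for $d_Q^{(4)} \circ d_Q^{(3)} = 0$: composing the last two maps, the coefficient of $dx_1 \wedge dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta$ is
\begin{align*}
& -Y_2( Y_1 \alpha_1 + X_1 \alpha_3 - X_2 \alpha_5) +Y_1( Y_2 \alpha_1 -X_2 \alpha_2 - X_1\alpha_5 ) \\
& -X_2( -Y_1 \alpha_2 + X_1 \alpha_4 + Y_2 \alpha_5 ) +X_1( Y_2 \alpha_3 + X_2 \alpha_4 +Y_1 \alpha_5 )\\
=& [Y_1,Y_2] \alpha_1 - [Y_1,X_2] \alpha_2 + [X_1,Y_2] \alpha_3 + [X_1,X_2] \alpha_4 + \left ( [X_1,Y_1]-[X_2,Y_2] \right ) \alpha_5 =0,
\end{align*}
since, in the convention of this chapter, the only nonvanishing commutators are $[X_1,Y_1]=[X_2,Y_2]=T$.
\end{obs}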
\begin{comment}
\begin{tcolorbox}[
colback=white,
colframe=black,
center,
center title,
width=15cm
]
\begin{prop}[Explicit Rumin complex in $\mathbb{H}^2$]\label{exH2}
\begin{equation*}
0 \to \mathbb{R} \to C^\infty
\tikz[anchor=base, baseline]{\node(d4) {$\stackrel{d_Q}{\to}$} }
\frac{\Omega^1}{I^1}
\tikz[anchor=base, baseline]{\node(d5){$ \stackrel{d_Q}{\to}$}}
\frac{\Omega^2}{I^2}
\tikz[anchor=base, baseline]{\node(d6) {$\stackrel{D}{\to}$}}
J^{3}
\tikz[anchor=base, baseline]{\node(d7){$ \stackrel{d_Q}{\to} $}}
J^{4}
\tikz[anchor=base, baseline]{\node(d8){$ \stackrel{d_Q}{\to} $}}
J^{5} \to 0
\end{equation*}
\vspace{1cm}\par
\noindent\hfil\begin{tikzpicture}
\node[left=1cm,align=center] (t1)
{$ f \mathbin{\color{red} \stackrel{d_Q}{\mapsto} }$ \\ $ [X_1f dx_1 + X_2f dx_2+$ \\ $ + Y_1f dy_1+ Y_2f dy_2 ]_{I^1} $};
\node[below = 1cm,align=center] (t2) at (t1.east)
{$[ \alpha_1 dx_1 + \alpha_2 dx_2 + \alpha_3 dy_1+ \alpha_4 dy_2 ]_{I^1} \mathbin{\color{red} \stackrel{d_Q}{\mapsto} }$ \\ $ \Big [ ( X_1 \alpha_2 - X_2 \alpha_1 ) dx_1 \wedge dx_2 + ( X_2 \alpha_3 - Y_1 \alpha_2 ) dx_2 \wedge dy_1+ $ \\ $+ ( Y_1 \alpha_4 - Y_2 \alpha_3 ) dy_1 \wedge dy_2 + ( X_1 \alpha_4 - Y_2 \alpha_1 ) dx_1 \wedge dy_2 + $ \\ $ + \left ( \frac{ X_1 \alpha_3 - Y_1 \alpha_1 - X_2 \alpha_4 + Y_2 \alpha_2 }{2} \right ) ( dx_1 \wedge dy_1 - dx_2 \wedge dy_2 ) \Big ]_{I^2}$};
\node[left=2cm, below = 2cm,align=center] (t3) at (t2.east)
{$[\alpha_1 dx_1 \wedge dx_2 + \alpha_3 dx_1 \wedge dy_2+$ \\
$ + \alpha_4 dx_2 \wedge dy_1 +\alpha_6 dy_1 \wedge dy_2+$ \\
$ + \beta (dx_1 \wedge dy_1 - dx_2 \wedge dy_2) ]_{I^2} \mathbin{\color{red} \stackrel{D}{\mapsto} } $ \\
$\left [ ( - X_1 Y_1 -Y_2 X_2) \alpha_1 +X_2 X_2 \alpha_3 -X_1 X_1 \alpha_4 +2X_1 X_2 \beta \right ]
dx_1 \wedge dx_2 \wedge \theta+ $ \\
$+\left [ -Y_2 Y_2 \alpha_1 +(X_2 Y_2 -X_1 Y_1 )\alpha_3 +X_1 X_1 \alpha_6 +2X_1 Y_2 \beta \right ]
dx_1 \wedge dy_2 \wedge \theta+$ \\
$+\left [ +Y_1 Y_1 \alpha_1 +(Y_1 X_1 -Y_2 X_2) \alpha_4 -X_2 X_2 \alpha_6 -2 X_2 Y_1 \beta \right ]
dx_2 \wedge dy_1 \wedge \theta +$ \\
$+\left [ -Y_1 Y_1 \alpha_3 +Y_2 Y_2 \alpha_4 +(Y_1 X_1 + X_2 Y_2) \alpha_6 + 2Y_1 Y_2 \beta \right ]
dy_1 \wedge dy_2 \wedge \theta+$ \\
$+\left [ -Y_1 Y_2 \alpha_1 +Y_1 X_2 \alpha_3 -X_1 Y_2 \alpha_4 -X_1 X_2 \alpha_6 \right ]
( dx_1 \wedge dy_1 \wedge \theta -dx_2 \wedge dy_2\wedge \theta)$};
\node[below = 5cm,left=3cm,align=center] (t4) at (t3.east)
{$\alpha_1 dx_1 \wedge dx_2 \wedge \theta +\alpha_2 dx_1 \wedge dy_2 \wedge \theta +$ \\ $+ \alpha_3 dx_2 \wedge dy_1 \wedge \theta + \alpha_4 dy_1 \wedge dy_2 \wedge \theta +$ \\ $ + \alpha_5( dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta) \mathbin{\color{red} \stackrel{d_Q}{\mapsto} } $ \\
$ ( Y_1 \alpha_1 + X_1 \alpha_3 - X_2 \alpha_5) dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta+$ \\
$+( Y_2 \alpha_1 -X_2 \alpha_2 - X_1\alpha_5 )dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta+$ \\
$+( -Y_1 \alpha_2 + X_1 \alpha_4 + Y_2 \alpha_5 ) dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta+$ \\
$+( Y_2 \alpha_3 + X_2 \alpha_4 +Y_1 \alpha_5 ) dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta$};
\node[left=2cm ,below = 2.5cm,align=center] (t5) at (t4.east)
{$ \alpha_1 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta + \alpha_2 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta + $ \\
$ + \alpha_3 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta + \alpha_4 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta \mathbin{\color{red} \stackrel{d_Q}{\mapsto} } $ \\
$ ( -Y_2 \alpha_1 + Y_1 \alpha_2 - X_2 \alpha_3 + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta $};
\end{tikzpicture}
\begin{tikzpicture}[overlay]
\draw[blue,thick,->] (d4) to [in=90,out=265] (t1.north);
\draw[blue,thick,->] (d5) to [in=90,out=265] (t2.45);
\draw[blue,thick,->] (d6) to [in=80,out=300] (t3.40);
\draw[blue,thick,->] (d7) .. controls (15.2,8) .. (t4.10);
\draw[blue,thick,->] (d8) .. controls (14.8,9) .. (t5.12);
\end{tikzpicture}
\end{prop}
\end{tcolorbox}
\end{comment}
\begin{comment}
\begin{lem}[Change of variables]\label{change}
While dealing with the equivalence classes of the complex, there will be the circumstance in which, for example, we will need to write a differential form of the kind $\alpha= f dx_1 \wedge d y_1 + g dx_2 \wedge d y_2$ with respect to a different pair of basis elements $dx_1 \wedge d y_1 +dx_2 \wedge d y_2$ and $dx_1 \wedge d y_1 -dx_2 \wedge d y_2$ (and keep only one of the two).\\
Here we write a change of coordinates to make this step smooth later.
$$
\begin{cases}
\xi=dx_1 \wedge d y_1\\
\eta=dx_2 \wedge d y_2
\end{cases}
$$
and
$$
\begin{cases}
u=\frac{\xi+\eta}{\sqrt{2}}\\
v=\frac{\xi-\eta}{\sqrt{2}}
\end{cases}
$$
Denote:
$$
M:=\left (
\begin{matrix}
\frac{\partial u}{\partial \xi } & \frac{\partial u}{\partial \eta } \\
\frac{\partial v}{\partial \xi } & \frac{\partial v}{\partial \eta }
\end{matrix} \right )=
\left (
\begin{matrix}
\frac{1}{\sqrt{2} } & \frac{1}{\sqrt{2} } \\
\frac{1}{\sqrt{2} } & \frac{-1}{\sqrt{2} }
\end{matrix} \right ).
$$
This matrix moves elements from a basis to the other. Indeed
$$
M\left (
\begin{matrix}
\xi \\
\eta
\end{matrix} \right )=
\left (
\begin{matrix}
\frac{1}{\sqrt{2} } & \frac{1}{\sqrt{2} } \\
\frac{1}{\sqrt{2} } & \frac{-1}{\sqrt{2} }
\end{matrix} \right ) \cdot
\left (
\begin{matrix}
\xi \\
\eta
\end{matrix} \right )=
\left (
\begin{matrix}
\frac{\xi+\eta}{\sqrt{2}}\\
\frac{\xi-\eta}{\sqrt{2}}
\end{matrix} \right )=
\left (
\begin{matrix}
u \\
v
\end{matrix} \right ).
$$
The following computation will be used later:
$$
(X_1 \alpha_3 - Y_1 \alpha_1 ) dx_1 \wedge dy_1 +( X_2 \alpha_4 - Y_2 \alpha_2 ) dx_2 \wedge dy_2=
$$
$$
M\left (
\begin{matrix}
X_1 \alpha_3 - Y_1 \alpha_1 \\
X_2 \alpha_4 - Y_2 \alpha_2
\end{matrix} \right )=
\left (
\begin{matrix}
\frac{1}{\sqrt{2} } & \frac{1}{\sqrt{2} } \\
\frac{1}{\sqrt{2} } & \frac{-1}{\sqrt{2} }
\end{matrix} \right ) \cdot
\left (
\begin{matrix}
X_1 \alpha_3 - Y_1 \alpha_1 \\
X_2 \alpha_4 - Y_2 \alpha_2
\end{matrix} \right )=
\left (
\begin{matrix}
(*) \\
\frac{ X_1 \alpha_3 - Y_1 \alpha_1 - X_2 \alpha_4 + Y_2 \alpha_2 }{\sqrt{2}}
\end{matrix} \right )=
$$
$$
=
(*)\frac{\xi+\eta}{\sqrt{2}} + \frac{ X_1 \alpha_3 - Y_1 \alpha_1 - X_2 \alpha_4 + Y_2 \alpha_2 }{\sqrt{2}} \cdot \frac{\xi-\eta}{\sqrt{2}}=
$$
\begin{equation}\label{variables}
= (**) + \frac{ X_1 \alpha_3 - Y_1 \alpha_1 - X_2 \alpha_4 + Y_2 \alpha_2 }{2} ( dx_1 \wedge dy_1 - dx_2 \wedge dy_2)
\end{equation}
where the first term is not relevant because it will disappear in the equivalence class.
\end{lem}
\begin{proof}[Proof of Proposition \ref{exH2}]
In this proof we will not always specify the equivalence class we are working in. This does not create ambiguity because each case is treated separately. Furthermore, one can always check the current space from Observation \ref{obsH2}.\\
\textbf{Case: $k =0$.}\\
From Observation \ref{df}:
$$
d_Q f = [ X_1f dx_1 + X_2f dx_2 + Y_1f dy_1+ Y_2f dy_2]_{I^1}.
$$
\textbf{Case: $k =1$.}\\
Consider $[\alpha]_{I^1}= [\alpha_1 dx_1 + \alpha_2 dx_2 + \alpha_3 dy_1+ \alpha_4 dy_2]_{I^1} \in \frac{\Omega^1}{I^1}$.
$$
d_Q ([\alpha]_{I^1} ) = d_Q ( [ \alpha_1 dx_1 + \alpha_2 dx_2 + \alpha_3 dy_1+ \alpha_4 dy_2]_{I^1} ) =
$$
\begin{align*}
= &[- X_2 \alpha_1 dx_1 \wedge dx_2 - Y_1 \alpha_1 dx_1 \wedge dy_1 - Y_2 \alpha_1 dx_1 \wedge dy_2 + \\
&+ X_1 \alpha_2 dx_1 \wedge dx_2 - Y_1 \alpha_2 dx_2 \wedge dy_1 - Y_2 \alpha_2 dx_2 \wedge dy_2 + \\
&+ X_1 \alpha_3 dx_1 \wedge dy_1 + X_2 \alpha_3 dx_2 \wedge dy_1 - Y_2 \alpha_3 dy_1 \wedge dy_2 + \\
&+ X_1 \alpha_4 dx_1 \wedge dy_2 + X_2 \alpha_4 dx_2 \wedge dy_2 + Y_1 \alpha_4 dy_1 \wedge dy_2 ]_{I^2} =
\end{align*}
\begin{align*}
= &[ ( X_1 \alpha_2 - X_2 \alpha_1 ) dx_1 \wedge dx_2 + ( X_2 \alpha_3 - Y_1 \alpha_2 ) dx_2 \wedge dy_1+\\
& ( Y_1 \alpha_4 - Y_2 \alpha_3 ) dy_1 \wedge dy_2 + ( X_1 \alpha_4 - Y_2 \alpha_1 ) dx_1 \wedge dy_2 +\\
& (X_1 \alpha_3 - Y_1 \alpha_1 ) dx_1 \wedge dy_1 +( X_2 \alpha_4 - Y_2 \alpha_2 ) dx_2 \wedge dy_2 ]_{I^2} =
\end{align*}
thanks to the equivalence class, the last line can be written differently using equation \eqref{variables} in Lemma \ref{change},
\begin{align*}
= & \bigg [ ( X_1 \alpha_2 - X_2 \alpha_1 ) dx_1 \wedge dx_2 + ( X_2 \alpha_3 - Y_1 \alpha_2 ) dx_2 \wedge dy_1+\\
& ( Y_1 \alpha_4 - Y_2 \alpha_3 ) dy_1 \wedge dy_2 + ( X_1 \alpha_4 - Y_2 \alpha_1 ) dx_1 \wedge dy_2 +\\
& \left ( \frac{ X_1 \alpha_3 - Y_1 \alpha_1 - X_2 \alpha_4 + Y_2 \alpha_2 }{2} \right ) ( dx_1 \wedge dy_1 - dx_2 \wedge dy_2) \bigg ]_{I^2} .
\end{align*}
\textbf{Case: $k =3$.}\\
Consider $\alpha= \alpha_1 dx_1 \wedge dx_2 \wedge \theta +\alpha_2 dx_1 \wedge dy_2 \wedge \theta + \alpha_3 dx_2 \wedge dy_1 \wedge \theta + \alpha_4 dy_1 \wedge dy_2 \wedge \theta + \alpha_5( dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta) \in J^3$.
$$
d_Q \alpha = d_Q \big ( \alpha_1 dx_1 \wedge dx_2 \wedge \theta +\alpha_2 dx_1 \wedge dy_2 \wedge \theta + \alpha_3 dx_2 \wedge dy_1 \wedge \theta + \alpha_4 dy_1 \wedge dy_2 \wedge \theta +
$$
$$
+\alpha_5( dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta) \big )=
$$
\begin{align*}
&= Y_1 \alpha_1 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta +Y_2 \alpha_1 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta+\\
& -X_2 \alpha_2 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta -Y_1 \alpha_2 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta+\\
&+ X_1 \alpha_3 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta +Y_2 \alpha_3 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta+\\
& + X_1 \alpha_4 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta + X_2 \alpha_4 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta +
\end{align*}
$$
- X_1\alpha_5 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta - X_2 \alpha_5 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta+
$$
$$
+Y_1 \alpha_5 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta + Y_2 \alpha_5 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta+
$$
$$
+ \alpha_5(\underbrace{ - dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 + dx_2 \wedge dy_2 \wedge dx_1 \wedge dy_1}_{=0}) =
$$
\begin{align*}
&= ( Y_1 \alpha_1 + X_1 \alpha_3 - X_2 \alpha_5) dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta+\\
&+( Y_2 \alpha_1 -X_2 \alpha_2 - X_1\alpha_5 )dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta+\\
&+( -Y_1 \alpha_2 + X_1 \alpha_4 + Y_2 \alpha_5 ) dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta+\\
&+( Y_2 \alpha_3 + X_2 \alpha_4 +Y_1 \alpha_5 ) dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta.
\end{align*}
\textbf{Case: $k =4$.}\\
Consider
$$
\alpha = \alpha_1 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta + \alpha_2 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta +
$$
$$
+ \alpha_3 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta + \alpha_4 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta \in J^4.
$$
Then
$$
d_Q \alpha= ( -Y_2 \alpha_1 + Y_1 \alpha_2 - X_2 \alpha_3 + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta.
$$
\textbf{Case: $k =2$.}\\
We remind that by Observation \ref{df}:
$$
d g = X_1g dx_1 + Y_1 g dy_1 +X_2g dx_2 + Y_2 g dy_2 + Tg \theta.
$$
Consider $\alpha' \in \Lambda^2 Q^*$:
$$
\alpha'= \alpha_1 dx_1 \wedge dx_2 +\alpha_2 dx_1 \wedge dy_1 + \alpha_3 dx_1 \wedge dy_2 + \alpha_4 dx_2 \wedge dy_1 + \alpha_5 dx_2 \wedge dy_2 +\alpha_6 dy_1 \wedge dy_2.
$$
Now, proceeding as in the case of equation \eqref{variables}, consider only $\alpha_2 dx_1 \wedge dy_1 + \alpha_5 dx_2 \wedge dy_2$ and write it in terms of the new basis elements:
$$
\alpha_2 dx_1 \wedge dy_1 + \alpha_5 dx_2 \wedge dy_2=
M\left (
\begin{matrix}
\alpha_2 \\
\alpha_5
\end{matrix} \right )=
\left (
\begin{matrix}
\frac{1}{\sqrt{2} } & \frac{1}{\sqrt{2} } \\
\frac{1}{\sqrt{2} } & \frac{-1}{\sqrt{2} }
\end{matrix} \right ) \cdot
\left (
\begin{matrix}
\alpha_2 \\
\alpha_5
\end{matrix} \right )=
\left (
\begin{matrix}
\frac{ \alpha_2 + \alpha_5 }{\sqrt{2}} \\
\frac{ \alpha_2 - \alpha_5 }{\sqrt{2}}
\end{matrix} \right )=
$$
$$
=
\underbrace{\frac{ \alpha_2 + \alpha_5 }{2}}_{=:\gamma}(dx_1 \wedge dy_1 + dx_2 \wedge dy_2) + \underbrace{ \frac{\alpha_2 - \alpha_5 }{2}}_{=:\beta} (dx_1 \wedge dy_1 - dx_2 \wedge dy_2).
$$
So, by the definition of $\frac{\Omega^2}{I^2}$, one has
$$
[\alpha']_{I^2}= [\alpha_1 dx_1 \wedge dx_2 + \alpha_3 dx_1 \wedge dy_2 + \alpha_4 dx_2 \wedge dy_1 +\alpha_6 dy_1 \wedge dy_2+
$$
$$
+ \beta (dx_1 \wedge dy_1 - dx_2 \wedge dy_2) ]_{I^2}.
$$
Name
$$
\alpha := \alpha_1 dx_1 \wedge dx_2 + \alpha_3 dx_1 \wedge dy_2 + \alpha_4 dx_2 \wedge dy_1 +\alpha_6 dy_1 \wedge dy_2+ \beta (dx_1 \wedge dy_1 - dx_2 \wedge dy_2)
$$
and one obviously has
$$
[\alpha']_{I^2}=[\alpha]_{I^2}.
$$
Now compute the full derivative of $\alpha$:
$$
d \alpha= d \big ( \alpha_1 dx_1 \wedge dx_2 + \alpha_3 dx_1 \wedge dy_2 + \alpha_4 dx_2 \wedge dy_1 +\alpha_6 dy_1 \wedge dy_2+
$$
$$
+ \beta (dx_1 \wedge dy_1 - dx_2 \wedge dy_2) \big ) =
$$
$$
= Y_1 \alpha_1 dy_1 \wedge dx_1 \wedge dx_2 + Y_2 \alpha_1 dy_2 \wedge dx_1 \wedge dx_2 + T \alpha_1 \theta \wedge dx_1 \wedge dx_2 +
$$
$$
+ Y_1 \alpha_3 dy_1 \wedge dx_1 \wedge dy_2 +X_2 \alpha_3 dx_2 \wedge dx_1 \wedge dy_2 + T \alpha_3 \theta \wedge dx_1 \wedge dy_2+
$$
$$
+ X_1 \alpha_4 dx_1 \wedge dx_2 \wedge dy_1 + Y_2 \alpha_4 dy_2 \wedge dx_2 \wedge dy_1 + T \alpha_4 \theta \wedge dx_2 \wedge dy_1+
$$
$$
+ X_1 \alpha_6 dx_1 \wedge dy_1 \wedge dy_2+X_2 \alpha_6 dx_2 \wedge dy_1 \wedge dy_2 + T \alpha_6 \theta \wedge dy_1 \wedge dy_2+
$$
$$
- X_1 \beta dx_1 \wedge dx_2 \wedge dy_2 - Y_1 \beta dy_1 \wedge dx_2 \wedge dy_2 +
$$
$$
+X_2 \beta dx_2 \wedge dx_1 \wedge dy_1 + Y_2 \beta dy_2 \wedge dx_1 \wedge dy_1 +
$$
$$
+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta )=
$$
$$
=(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1
+(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge dy_2 +
$$
$$
+(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_1 \wedge dy_2
+(Y_2 \alpha_4 + Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge dy_2 +
$$
$$
+ T \alpha_1 \theta \wedge dx_1 \wedge dx_2 + T \alpha_3 \theta \wedge dx_1 \wedge dy_2
+ T \alpha_4 \theta \wedge dx_2 \wedge dy_1+ T \alpha_6 \theta \wedge dy_1 \wedge dy_2+
$$
$$
+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta ).
$$
Then
$$
(d\alpha)_{\vert_Q} =(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1
+(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge dy_2 +
$$
$$
+(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_1 \wedge dy_2
+(Y_2 \alpha_4 + Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge dy_2 =
$$
$$
=(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge d \theta
-(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge d \theta +
$$
$$
-(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_2 \wedge d \theta
+(Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dy_1 \wedge d \theta.
$$
Then
$$
L^{-1} ( - (d\alpha)_{\vert_Q} ) =- (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2
+(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 +
$$
$$
+(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_2
-(Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dy_1.
$$
And so
$$
d (L^{-1} ( - (d\alpha)_Q ) ) \wedge \theta=
$$
$$
- X_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge \theta
+Y_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_1 \wedge \theta+
$$
$$
+ Y_2(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_2 \wedge \theta+
$$
$$
-X_2 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge \theta
-Y_1 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_1 \wedge \theta+
$$
$$
-Y_2(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_2 \wedge \theta+
$$
$$
+X_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_2\wedge \theta
+X_2 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_2 \wedge dy_2\wedge \theta+
$$
$$
+Y_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_1 \wedge dy_2\wedge \theta+
$$
$$
-X_1 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_1 \wedge dy_1 \wedge \theta
-X_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge \theta+
$$
$$
Y_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dy_1 \wedge dy_2 \wedge \theta.
$$
Finally
$$
D([\alpha]_{I^2}) =d \left ( \alpha + L^{-1} ( - (d\alpha)_{\vert_Q} ) \wedge \theta \right )=
$$
\[
\begin{array}{cc}
=\cancel{ (Y_1 \alpha_1 - X_2 \beta+ X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1 }+&\rdelim\}{10}{1em}[$d\alpha$]\\
+\cancel{(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge dy_2} +\\
+\cancel{(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_1 \wedge dy_2}+\\
+\cancel{(Y_2 \alpha_4 + Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge dy_2 } +\\\\
+ T \alpha_1 dx_1 \wedge dx_2 \wedge \theta+ T \alpha_3 dx_1 \wedge dy_2 \wedge \theta +\\
+ T \alpha_4 dx_2 \wedge dy_1 \wedge \theta + T \alpha_6 dy_1 \wedge dy_2 \wedge \theta +\\
+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta )+\\\\
\end{array}
\]
\[
\begin{array}{cc}
- X_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge \theta+&\rdelim\}{12}{1em}[ $d (L^{-1} ( - (d\alpha)_Q ) ) \wedge \theta$]\\
+Y_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_1 \wedge \theta+\\
+ Y_2(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_2 \wedge \theta+\\
-X_2 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge \theta+\\
-Y_1 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_1 \wedge \theta+\\
-Y_2(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_2 \wedge \theta+\\
+X_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_2\wedge \theta+\\
+X_2 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_2 \wedge dy_2\wedge \theta+\\
+Y_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_1 \wedge dy_2\wedge \theta+\\
-X_1 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_1 \wedge dy_1 \wedge \theta+\\
-X_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge \theta+\\
+Y_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dy_1 \wedge dy_2 \wedge \theta+\\
+\cancel{ (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dx_1 \wedge dy_1 }+&\rdelim\}{6}{1em}[$L^{-1} ( - (d\alpha)_Q ) \wedge d\theta$]\\
\cancel{-(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge dy_2 }+\\
\cancel{-(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_2 \wedge dx_1 \wedge dy_1}+\\
+\cancel{(Y_2 \alpha_4 + Y_1 \beta + X_2 \alpha_6 ) dy_1 \wedge dx_2 \wedge dy_2 } =\\
\end{array}
\]
$$
=( \underbrace{T}_{ =X_2 Y_2-Y_2 X_2} \alpha_1 - X_1 Y_1 \alpha_1 +X_1 X_2 \beta -X_1 X_1 \alpha_4 -X_2 Y_2 \alpha_1 +X_2 X_2 \alpha_3 + X_2 X_1 \beta )
dx_1 \wedge dx_2 \wedge \theta+
$$
$$
+( T \alpha_3 -Y_2 Y_2 \alpha_1+ Y_2 X_2 \alpha_3 + Y_2 X_1 \beta +X_1 Y_2 \beta -X_1 Y_1 \alpha_3 +X_1 X_1 \alpha_6 )
dx_1 \wedge dy_2 \wedge \theta+
$$
$$
+ (T \alpha_4 +Y_1 Y_1 \alpha_1 - Y_1 X_2 \beta +Y_1 X_1 \alpha_4 -X_2 Y_2 \alpha_4-X_2 Y_1 \beta -X_2 X_2 \alpha_6 )
dx_2 \wedge dy_1 \wedge \theta +
$$
$$
+(T \alpha_6 + Y_1 Y_2 \beta -Y_1 Y_1 \alpha_3 +Y_1 X_1 \alpha_6 +Y_2 Y_2 \alpha_4+Y_2 Y_1 \beta + Y_2 X_2 \alpha_6 )
dy_1 \wedge dy_2 \wedge \theta+
$$
$$
+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta )+
$$
$$
+ (Y_2Y_1 \alpha_1 - Y_2 X_2 \beta + Y_2X_1 \alpha_4 +X_2 Y_2 \beta -X_2 Y_1 \alpha_3 +X_2 X_1 \alpha_6 )
dx_2 \wedge dy_2\wedge \theta+
$$
$$
(-Y_1 Y_2 \alpha_1 +Y_1 X_2 \alpha_3 +Y_1 X_1 \beta -X_1 Y_2 \alpha_4 -X_1 Y_1 \beta -X_1 X_2 \alpha_6 )
dx_1 \wedge dy_1 \wedge \theta=
$$
$$
=\left [ ( - X_1 Y_1 -Y_2 X_2) \alpha_1 +X_2 X_2 \alpha_3 -X_1 X_1 \alpha_4 +2X_1 X_2 \beta \right ]
dx_1 \wedge dx_2 \wedge \theta+
$$
$$
+\left [
-Y_2 Y_2 \alpha_1
+(X_2 Y_2 -X_1 Y_1 )\alpha_3
+X_1 X_1 \alpha_6
+2X_1 Y_2 \beta
\right ]
dx_1 \wedge dy_2 \wedge \theta+
$$
$$
+\left [
+Y_1 Y_1 \alpha_1
+(Y_1 X_1 -Y_2 X_2) \alpha_4
-X_2 X_2 \alpha_6
-2 X_2 Y_1 \beta
\right ]
dx_2 \wedge dy_1 \wedge \theta +
$$
$$
+\left [
-Y_1 Y_1 \alpha_3
+Y_2 Y_2 \alpha_4
+(Y_1 X_1 + X_2 Y_2) \alpha_6
+ 2Y_1 Y_2 \beta
\right ]
dy_1 \wedge dy_2 \wedge \theta+
$$
$$
+\left [
-Y_1 Y_2 \alpha_1
+Y_1 X_2 \alpha_3
-X_1 Y_2 \alpha_4
-X_1 X_2 \alpha_6
\right ]
( dx_1 \wedge dy_1 \wedge \theta
-dx_2 \wedge dy_2\wedge \theta).
$$
\end{proof}
\end{comment}
\chapter{Pushforward and Pullback in $\mathbb{H}^n$}\label{PPHN}
In this chapter we define the pushforward and the pullback on $\mathbb{H}^n$ and, after establishing some of their properties, we prove that the pullback by a contact map commutes with the Rumin differential in every degree. Then we give explicit formulas for the pushforward and the pullback, in Heisenberg notation, in three different cases: for a general function, for a contact map, and for a contact diffeomorphism. Finally we present the same formulas in the Rumin cohomology.
References for this chapter are \cite{KORR}, from which we use some results, \cite{RUMIN} and \cite{FSSC}.\\
Appendix \ref{explicitcommutation} concerns an explicit proof of commutation between pullback and Rumin differential.
\section{Definitions and Properties}
\begin{defin}[Pushforward in $\mathbb{H}^n$]
Let $f: U\subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=\left (f^1,\dots,f^{2n+1} \right ) \in \left [ C^1 (U,\mathbb{R}) \right ]^{2n+1}$. The \emph{pushforward by $f$} is defined as follows:\\
if $k=1$, we set
\begin{align*}
f_*:=d f : \mathfrak{h} &\to \mathfrak{h},\\
v&\mapsto v(f).
\end{align*}
If $k > 1$, we set
$$
f_* \equiv \Lambda_k d f : {\prescript{}{}\bigwedge}_k \mathfrak{h} \to {\prescript{}{}\bigwedge}_k \mathfrak{h}
$$
to be the linear map satisfying
$$
\Lambda_k d f (v_1 \wedge \dots \wedge v_k) :=df (v_1) \wedge \dots \wedge d f (v_k),
$$
i.e.,
$$
f_* (v_1 \wedge \dots \wedge v_k) :=f_* v_1 \wedge \dots \wedge f_* v_k,
$$
for $v_1,\dots,v_k \in \mathfrak{h}$.
\end{defin}
\begin{defin}[Pullback in $\mathbb{H}^n$]
Let $f: U\subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=\left (f^1,\dots,f^{2n+1} \right ) \in \left [ C^1 (U,\mathbb{R}) \right ]^{2n+1}$. The \emph{pullback by $f$ }is defined by duality with respect to the pushforward as:
$$
f^* \equiv \Lambda^k d f : \Omega^k \to \Omega^k, \text{ so that}
$$
$$
\langle \Lambda^k d f (\omega) \vert v \rangle =\langle \omega \vert \Lambda_k d f (v) \rangle,
$$
i.e.,
$$
\langle f^* (\omega) \vert v \rangle =\langle \omega \vert f_* (v) \rangle.
$$
\end{defin}
\noindent
As in the Riemannian case, we have the following
\begin{obs}\label{pullprop}
Let $f: U\subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=\left (f^1,\dots,f^{2n+1} \right ) \in \left [ C^1 (U,\mathbb{R}) \right ]^{2n+1}$, and $\alpha \in \Omega^k$, $\beta \in \Omega^h$ differential forms. One can verify that
$$
f^* (\alpha \wedge \beta ) =f^* \alpha \wedge f^* \beta.
$$
\end{obs}
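\noindent
For instance, for $k=h=1$, assuming the standard determinant convention for the pairing between $2$-covectors and $2$-vectors, the identity can be verified directly from the duality definition of the pullback:
\begin{align*}
\langle f^* (\alpha \wedge \beta ) \,\vert\, v_1 \wedge v_2 \rangle
&= \langle \alpha \wedge \beta \,\vert\, f_* v_1 \wedge f_* v_2 \rangle \\
&= \langle \alpha \vert f_* v_1 \rangle \langle \beta \vert f_* v_2 \rangle - \langle \alpha \vert f_* v_2 \rangle \langle \beta \vert f_* v_1 \rangle \\
&= \langle f^* \alpha \vert v_1 \rangle \langle f^* \beta \vert v_2 \rangle - \langle f^* \alpha \vert v_2 \rangle \langle f^* \beta \vert v_1 \rangle
= \langle f^* \alpha \wedge f^* \beta \,\vert\, v_1 \wedge v_2 \rangle .
\end{align*}
The general case follows in the same way from the multilinearity of both pairings.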
\begin{defin}\label{contact}
A map $f: U\subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=\left (f^1,\dots,f^{2n+1} \right ) \in \left [ C^1 (U,\mathbb{R}) \right ]^{2n+1}$ is a \emph{contact map} if
$$
f_* \left (\spn \{ X_1, \dots, X_n,Y_1,\dots, Y_n \} \right ) \subseteq \spn \{ X_1, \dots, X_n,Y_1,\dots, Y_n \}.
$$
\end{defin}
\begin{obs}
Recalling Notation \ref{notW}, the pushforward $f_*$ can be written in matrix form with respect to the basis $\{W_1,\dots, W_{2n+1} \}$, identifying $W_j$ with the coordinate vector $e_j$ (the vector with value $1$ at position $j$ and zero elsewhere).
Then the general pushforward matrix looks like
\begin{align*}
f_*=&
\left (
\begin{matrix}
\langle dw_1 , f_* W_1 \rangle & \ldots & \langle dw_1 , f_* W_{2n+1} \rangle \\
\vdots & & \vdots\\
\langle dw_{2n+1} , f_* W_1 \rangle & \ldots & \langle dw_{2n+1} , f_* W_{2n+1} \rangle
\end{matrix}
\right ).
\end{align*}
If the function $f$ is also contact, then by definition we have
\begin{align*}
f_*=&
\left (
\begin{matrix}
\langle dw_1 , f_* W_1 \rangle & \ldots & \langle dw_1 , f_* W_{2n} \rangle & \langle dw_1 , f_* W_{2n+1} \rangle \\
\vdots & & \vdots & \vdots \\
\langle dw_{2n} , f_* W_1 \rangle & \ldots & \langle dw_{2n} , f_* W_{2n} \rangle & \langle dw_{2n} , f_* W_{2n+1} \rangle \\
0 & \ldots & 0 & \langle dw_{2n+1} , f_* W_{2n+1} \rangle
\end{matrix}
\right ).
\end{align*}
By the definition of pullback, the pullback matrix is the transpose of the pushforward matrix, so
\begin{align*}
f^*=(f_*)^T=&
\left (
\begin{matrix}
\langle dw_1 , f_* W_1 \rangle & \ldots & \langle dw_{2n} , f_* W_1 \rangle & 0 \\
\vdots & & \vdots & \vdots \\
\langle dw_1 , f_* W_{2n} \rangle & \ldots & \langle dw_{2n} , f_* W_{2n} \rangle & 0 \\
\langle dw_1 , f_* W_{2n+1} \rangle & \ldots & \langle dw_{2n} , f_* W_{2n+1} \rangle & \langle dw_{2n+1} , f_* W_{2n+1} \rangle
\end{matrix}
\right ).
\end{align*}
This shows that an equivalent condition for $f$ to be contact is
$$
f^* \theta = \lambda_f \theta, \quad \text{with} \quad \lambda_f = \langle f^* \theta , T \rangle .
$$
\end{obs}
\begin{obs}[See Proposition 6 in \cite{KORREI}]
If $f : U \subseteq \mathbb{H}^n \to \mathbb{H}^n$ is P-differentiable as in Definition \ref{dGGG}, then $f$ is a contact map.
\end{obs}
\begin{ex}\label{examplecontact}
The anisotropic dilation $\delta_r (\bar x, \bar y, t)=(r \bar x, r \bar y,r^2 t )$, $ (\bar x, \bar y, t) \in \mathbb{H}^n$, is a contact map. This will be shown later in Example \ref{contactdebt}.
\end{ex}
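\noindent
As a quick sanity check of the contact condition, one can compute $\delta_r^* \theta$ directly; here we assume the convention $\theta = dt + \frac{1}{2} \sum_{j=1}^n ( y_j \, dx_j - x_j \, dy_j )$ (other conventions differ only by signs and constants):
\begin{align*}
\delta_r^* \theta &= d(r^2 t) + \frac{1}{2} \sum_{j=1}^n \left ( r y_j \, d(r x_j) - r x_j \, d(r y_j) \right ) \\
&= r^2 \left ( dt + \frac{1}{2} \sum_{j=1}^n ( y_j \, dx_j - x_j \, dy_j ) \right ) = r^2 \theta ,
\end{align*}
so that $\delta_r^* \theta = \lambda_{\delta_r} \theta$ with $\lambda_{\delta_r} = r^2$.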
\begin{obs}[Section 2.B \cite{KORR}] \label{d_theta}
Note that, if $f$ is a contact map and $\lambda_f = \langle f^* \theta , T \rangle$, then
$$
f^* d \theta = d (f^* \theta) = d (\lambda_f \theta) = d \lambda_f \wedge \theta + \lambda_f d\theta,
$$
where $d$ is the (full) exterior Riemannian derivative.
\end{obs}
\begin{obs}[Section 2.B \cite{KORR}]\label{section2.b}
Note that, if $f$ is a contact map, $\lambda_f = \langle f^* \theta , T \rangle$ and $v_1, v_2 \in \spn \{ X_1,\dots,X_n, Y_1,\dots, Y_n \}$, then
$$
\langle d \theta \vert f_* (v_1 \wedge v_2) \rangle = \lambda_f \langle d\theta \vert v_1 \wedge v_2 \rangle .
$$
\end{obs}
\begin{proof}
By the definition of pullback and Observation \ref{d_theta}, we have
\begin{align*}
\langle d \theta \vert f_* (v_1 \wedge v_2) \rangle & = \langle f^* d \theta \vert v_1 \wedge v_2 \rangle \\
&= \langle d \lambda_f \wedge \theta \vert v_1 \wedge v_2 \rangle + \langle \lambda_f d\theta \vert v_1 \wedge v_2 \rangle = \lambda_f \langle d \theta \vert v_1 \wedge v_2 \rangle,
\end{align*}
where the first term vanishes because $\theta$ annihilates the horizontal vectors $v_1$ and $v_2$.
\end{proof}
\section{Commutation of Pullback and Rumin Complex in $\mathbb{H}^n$}
We consider a contact map $f$ on $\mathbb{H}^n$. It is known that $f^*$ commutes with the
exterior derivative $d$; here we show that the commutation also holds for the differentials of the Rumin complex.\\\\
\noindent
Recall from Definitions \ref{complexHn} and \ref{D} that the Rumin complex
is given by
$$
0 \to \mathbb{R} \to C^\infty \stackrel{d_Q}{\to} \frac{\Omega^1}{I^1} \stackrel{d_Q}{\to} \dots \stackrel{d_Q}{\to} \frac{\Omega^n}{I^n} \stackrel{D}{\to} J^{n+1} \stackrel{d_Q}{\to} \dots \stackrel{d_Q}{\to} J^{2n+1} \to 0,
$$
where, for $k<n$, $d_Q( [\alpha]_{I^k} ) := [d \alpha]_{I^k},$ while, for $k \geq n+1$, $d_Q := d_{\vert_{J^k}}.$ For $k=n$, $D$ is the second order differential operator uniquely defined as $D( [\alpha]_{I^n} ) = d \left ( \alpha + L^{-1} \left (- (d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 }} \right ) \wedge \theta \right ).$
\begin{theor}\label{Fdc=dcF}
A smooth contact map $f: \mathbb{H}^n \to \mathbb{H}^n$ satisfies
$$
f^* d_Q = d_Q f^* \ \ \ \ \text{ for } k \neq n,
$$
and
$$
f^* D = D f^* \ \ \ \ \text{ for } k = n.
$$
Namely, the pullback by a contact map $f$ commutes with the
operators of the Rumin complex.\\\\
To the best of my knowledge, this result does not appear in the literature, but the main steps were explained to me by Bruno Franchi in September 2017. We present here a complete proof.\\
A computationally more explicit proof of this statement is available in Appendix \ref{explicitcommutation}.
\end{theor}
\noindent
Before starting the proof of Theorem \ref{Fdc=dcF}, some lemmas and definitions are necessary.
\begin{lem}\label{I}
Consider a smooth contact map $f: \mathbb{H}^n \to \mathbb{H}^n$ and write $\{ \gamma \wedge \theta \}=\{ \gamma \wedge \theta ; \ \gamma \in \Omega^{k-1}\}$. Then
$$
f^* (\{ \gamma \wedge \theta \}) \subseteq \{ \gamma \wedge \theta \} \quad \text{ and } \quad f^* (I^k) \subseteq I^k \quad \text{for } k=1,\dots,n.
$$
\end{lem}
\begin{proof}
First notice, from Definition \ref{contact} and Observation \ref{d_theta}, that
$$
f^* \theta = \lambda_f \theta
\quad \text{and} \quad
f^* d \theta =
d \lambda_f \wedge \theta +\lambda_f d\theta .
$$
Then one has
$$
f^* (\alpha \wedge \theta ) = f^* \alpha \wedge f^* \theta =\lambda_f f^* \alpha \wedge \theta \in \{ \gamma \wedge \theta \}.
$$
This means $f^* (\{ \gamma \wedge \theta \} ) \subseteq \{ \gamma \wedge \theta \}$. Furthermore
\begin{align*}
f^* (\alpha \wedge \theta + \beta \wedge d \theta) &= f^* \alpha \wedge f^* \theta + f^* \beta \wedge f^* d \theta \\
&
= \lambda_f f^* \alpha \wedge \theta + f^* \beta \wedge (d \lambda_f \wedge \theta +\lambda_f d\theta)\\
&
=( \lambda_f f^* \alpha + f^* \beta \wedge d \lambda_f ) \wedge \theta + \lambda_f f^* \beta \wedge d \theta
\in I^k
\end{align*}
for $\alpha \in \Omega^{k-1}$ and $\beta \in \Omega^{k-2}$. This means that $f^* (I^k) \subseteq I^k$.
\end{proof}
\begin{defin}
Recall the equivalence class in Notation \ref{middleequivclass}:\\
$\frac{\Omega^k}{ \{ \gamma \wedge \theta \} } \cong \left \{ \beta \in \Omega^k ; \ \beta =0 \ \text{or} \ \beta \wedge \theta \neq 0 \right \}= {\prescript{}{}\bigwedge}^k \mathfrak{h}_1 $ where $[\alpha]_{ \{ \gamma \wedge \theta \} }$ is an element in this equivalence class. Then consider a smooth contact map $f: \mathbb{H}^n \to \mathbb{H}^n$. We define a pullback on such equivalence class as
$$
f^* : \frac{\Omega^k}{ \{ \gamma \wedge \theta \} } \to \frac{\Omega^k}{ \{ \gamma \wedge \theta \} },
$$
where
$$
f^* ( [\alpha]_{ \{ \gamma \wedge \theta \} } ):= [f^* (\alpha )]_{ \{ \gamma \wedge \theta \} }, \quad [\alpha]_{ \{ \gamma \wedge \theta \} } \in \frac{\Omega^k}{ \{ \gamma \wedge \theta \} }.
$$
This definition is well posed, as the following lemma shows.
\end{defin}
\begin{lem} \label{Iq}
Consider a smooth contact map $f: \mathbb{H}^n \to \mathbb{H}^n$ and $\{ \gamma \wedge \theta \}=\{ \gamma \wedge \theta ; \ \gamma \in \Omega^{k-1}\}$. Then we have
$$
[\alpha]_{ \{ \gamma \wedge \theta \} } = [\beta]_{ \{ \gamma \wedge \theta \} } \Rightarrow f^* ( [\alpha ]_{ \{ \gamma \wedge \theta \} } )= f^* ([ \beta]_{ \{ \gamma \wedge \theta \} } ).
$$
\end{lem}
\begin{proof}
By definition of equivalence class we have that $[\alpha]_{ \{ \gamma \wedge \theta \} } = [\beta]_{ \{ \gamma \wedge \theta \} } $ means
$$
\beta - \alpha \in \{ \gamma \wedge \theta \} ,
$$
which, by Lemma \ref{I}, implies
$$
f^*\beta - f^* \alpha = f^* ( \beta - \alpha ) \in \{ \gamma \wedge \theta \},
$$
which, again by definition, means
$$
[f^* (\alpha )]_{ \{ \gamma \wedge \theta \} } = [f^* (\beta )]_{ \{ \gamma \wedge \theta \} }.
$$
The claim follows by the definition of the pullback on this equivalence class.
\end{proof}
\begin{defin}\label{pushequivclass}
Consider another equivalence class as given in Observation \ref{finalequivclass}:
$
\frac{ \frac{\Omega^k}{ \{ \gamma \wedge \theta \} } }{ \{ \beta \wedge d \theta \} } = \frac{\Omega^k}{I^k}
$
and consider a smooth contact map $f: \mathbb{H}^n \to \mathbb{H}^n$. Again there is a pullback defined as
\begin{align*}
f^* : \frac{\Omega^k}{I^k} \to \frac{\Omega^k}{I^k},
\end{align*}
where
\begin{align*}
f^* ([\alpha]_{I^k} ) := [f^* (\alpha )]_{I^k}, \quad [\alpha]_{I^k} \in \frac{\Omega^k}{I^k}.
\end{align*}
Also this definition is well posed, as shown in the lemma below.
\end{defin}
\begin{lem}\label{Iqq}
Consider a smooth contact map $f: \mathbb{H}^n \to \mathbb{H}^n$. Then we have
$$
[\alpha]_{I^k} = [\beta]_{I^k} \Rightarrow [f^* (\alpha )]_{I^k} = [f^* (\beta )]_{I^k}.
$$
\end{lem}
\begin{proof}
By definition $[\alpha]_{I^k} = [\beta]_{I^k}$ means
$$
\beta - \alpha \in I^k,
$$
which implies, by Lemma \ref{I},
$$
f^* \beta - f^* \alpha = f^* (\beta - \alpha ) \in I^k.
$$
So
$$
[f^* (\alpha )]_{I^k} = [f^* (\beta )]_{I^k}.
$$
\end{proof}
\noindent
After all these lemmas about the lower order objects in the Rumin complex, we prove one concerning the higher order spaces.
\begin{lem}\label{lastone}
Consider a smooth contact map $f: \mathbb{H}^n \to \mathbb{H}^n$. Then
$$
f^* (J^k) \subseteq J^k.
$$
\end{lem}
\begin{proof}
By definition of $J^k$, $\alpha \in J^k$ means
$$
\theta \wedge \alpha = 0 \ \text{ and } \ d \theta \wedge \alpha =0.
$$
Then
$$
0=f^* ( \theta \wedge \alpha )= f^* \theta \wedge f^* \alpha = \lambda_f \theta \wedge f^* \alpha,
$$
which implies
$$
\theta \wedge f^* \alpha =0.
$$
Moreover,
\begin{align*}
0&=f^* ( d \theta \wedge \alpha )= f^* (d \theta ) \wedge f^* \alpha =d ( \lambda_f \theta ) \wedge f^* \alpha \\
&= ( d \lambda_f \wedge \theta ) \wedge f^* \alpha + \lambda_f d \theta \wedge f^* \alpha.
\end{align*}
Since $\theta \wedge f^* \alpha=0$, we get that
$$
d \theta \wedge f^* \alpha =0 .
$$
And, finally,
$$
\theta \wedge f^* \alpha =0 \ \text{ and } \ d \theta \wedge f^* \alpha =0, \ \text{ i.e. } \ f^* \alpha \in J^k.
$$
\end{proof}
\noindent
At last, we prove a lemma regarding the case $k=n$. After this we are finally ready to prove the main theorem.
\begin{lem}\label{lemmaDtilde}
Consider a smooth contact map $f: \mathbb{H}^n \to \mathbb{H}^n$. We know that, by Lemma \ref{lemma}, every form $[\alpha]_{ \{ \gamma \wedge \theta \} } \in \frac{\Omega^n}{ \{ \gamma \wedge \theta \} } $ has a unique lifting $\tilde{\alpha} \in \Omega^n$ so that $d \tilde{\alpha} \in J^{n+1}$. Then we have
$$
\widetilde{f^* \alpha}= f^* \tilde{\alpha}.
$$
\end{lem}
\begin{proof}
We prove directly that $\widetilde{f^* \alpha}= f^* \tilde{\alpha}$.\\
Following the proof of Lemma \ref{lemma}, one knows that there exists a unique $ \beta \in {\prescript{}{}\bigwedge}^{n-1} \mathfrak{h}_1 $ so that
$$
\tilde{\alpha} = \alpha + \beta \wedge \theta,
$$
and such $\beta$ is the only one for which the following condition is satisfied:
$$
\theta \wedge \left ( d \alpha + (- 1)^{\vert \beta \vert} L(\beta) \right ) =0.
$$
Thus one has that
$$
f^* \tilde{\alpha} = f^* \alpha + f^* (\beta \wedge \theta ) = f^* \alpha + \lambda_f f^* \beta \wedge \theta.
$$
On the other hand, one can repeat the lifting process for $f^* \alpha \in \frac{\Omega^n}{ \{ \gamma \wedge \theta \} } \cong {\prescript{}{}\bigwedge}^{n} \mathfrak{h}_1 $ (the isomorphism tells us that if $\alpha$ belongs to $\frac{\Omega^n}{ \{ \gamma \wedge \theta \} } $, then so does $f^* \alpha$).\\
Then there exists a unique $\gamma $ so that
$$
\widetilde{f^* \alpha} = f^* \alpha + \gamma \wedge \theta,
$$
where, as before, such $\gamma$ is the only one which satisfies
\begin{equation}\label{uniquecondition}
\theta \wedge \left ( d f^* \alpha + (- 1)^{\vert \gamma \vert} L(\gamma) \right ) =0.
\end{equation}
To prove the claim one needs to show that $ \gamma = \lambda_f f^* \beta $. We can substitute $ \lambda_f f^* \beta$ in place of $\gamma$ in the condition \eqref{uniquecondition} and, by uniqueness, it is enough to show that
\begin{equation}\label{11star}
\theta \wedge \left ( d f^* \alpha + (- 1)^{\vert \lambda_f f^* \beta \vert} L(\lambda_f f^* \beta) \right ) =0.
\end{equation}
Indeed
\begin{align*}
d ( f^* \alpha ) & =
f^* (d \alpha ) =
f^* \left [ d \alpha+ L \left ( (- 1)^{\vert \beta \vert}\beta \right ) - L \left ( (- 1)^{\vert \beta \vert}\beta \right ) \right ] \\
&
= f^* \left (d \alpha+ L \left ( (- 1)^{\vert \beta \vert}\beta \right ) \right ) - f^* \left ( L \left ( (- 1)^{\vert \beta \vert}\beta \right ) \right ) .
\end{align*}
Then
\begin{align*}
\theta \wedge d f^* \alpha & = \theta \wedge f^* \left (d \alpha+ L \left ( (- 1)^{\vert \beta \vert}\beta \right ) \right )
- \theta \wedge f^* \left ( L \left ( (- 1)^{\vert \beta \vert}\beta \right ) \right )\\
&
=
\lambda_f^{-1} f^* \left ( \theta \wedge \left (d \alpha+ (- 1)^{\vert \beta \vert} L \left ( \beta \right ) \right ) \right ) - (- 1)^{\vert \beta \vert} \theta \wedge f^* \left ( L \left ( \beta \right ) \right ).
\end{align*}
Since $ \theta \wedge \left (d \alpha+ (- 1)^{\vert \beta \vert} L \left ( \beta \right ) \right )=0$, we get
\begin{align*}
\theta \wedge d f^* \alpha & = - (- 1)^{\vert \beta \vert} \theta \wedge f^* \left ( L \left ( \beta \right ) \right ) =- (- 1)^{\vert \beta \vert} \theta \wedge f^* \left ( d\theta \wedge \beta \right ) \\
& = - (- 1)^{\vert \beta \vert} \theta \wedge f^* d\theta \wedge f^* \beta = - (- 1)^{\vert \beta \vert} \theta \wedge \lambda_f d \theta \wedge f^* \beta \\
&= - (- 1)^{\vert \beta \vert} \lambda_f \theta \wedge d \theta \wedge f^* \beta,
\end{align*}
where, in the second to last equality, we used that $ f^* d\theta = d f^* \theta= d (\lambda_f \theta)= d \lambda_f \wedge \theta +\lambda_f d \theta$.\\
On the other hand, since $\vert \lambda_f f^* \beta \vert = \vert \beta \vert$,
\begin{align*}
\theta \wedge \left ( (- 1)^{\vert \lambda_f f^* \beta \vert} L(\lambda_f f^* \beta) \right ) &=(- 1)^{\vert \beta \vert} \theta \wedge \left ( L(\lambda_f f^* \beta) \right )\\
&= (- 1)^{\vert \beta \vert} \theta \wedge \left ( d\theta \wedge \lambda_f f^* \beta \right ) \\
&= (- 1)^{\vert \beta \vert} \lambda_f \theta \wedge d\theta \wedge f^* \beta .
\end{align*}
This shows that equation \eqref{11star} holds and thus ends the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Fdc=dcF}]
Since the Rumin complex is defined by cases, we also divide this proof into cases. As one may expect, the case $k=n$ is the one that requires the most work.\\\\
\textbf{First case: $k \geq n+1$.}\\%1111111111111111111111111111111111111111111111111111111111111111111111
If $k \geq n+1$, then $d_Q = d_{\vert_{J^k}}$ and we need to consider a differential form $\alpha \in J^k$. Then we have
$$
d_Q \alpha = d \alpha \text{ on } J^k.
$$
So, since also $f^* \left ( \alpha \right ) \in J^k$ by Lemma \ref{lastone}, we can already conclude that
$$
f^* \left (d_Q \alpha \right ) = f^* \left ( d \alpha \right ) = d f^* \left ( \alpha \right ) = d_Q f^* \left ( \alpha \right ).
$$
\noindent
\textbf{Second case: $k < n$.}\\%2222222222222222222222222222222222222222222222222222222222222222222222222
For $k < n$, the definition says $d_Q \left ([ \alpha ]_{I^k} \right ) =[ d \alpha ]_{I^k}$ for any $\alpha \in \Omega^k$.
Then, by Definition \ref{pushequivclass} of $f^*$,
$$
f^*d_Q ([ \alpha ]_{I^k}) = f^* \left ( [ d \alpha ]_{I^k} \right ) = [ f^* d \alpha ]_{I^k} = [ d f^* \alpha ]_{I^k} =d_Q [ f^* \alpha ]_{I^k}= d_Q f^* ( [ \alpha ]_{I^k}).
$$
\noindent
\textbf{Third and last case: $k =n$.}\\%333333333333333333333333333333333333333333333333333333333333333333333
We know that, by Lemma \ref{lemma}, every form $[\alpha]_{ \{ \gamma \wedge \theta \} } \in \frac{\Omega^n}{ \{ \gamma \wedge \theta \} } $
has a unique lifting $\tilde{\alpha} \in \Omega^n$ so that $d \tilde{\alpha} \in J^{n+1}$. Given this existence and uniqueness, we can now define the following operator:
\begin{align*}
\tilde{D} : \frac{\Omega^n}{ \{ \gamma \wedge \theta \} } & \to J^{n+1},\\
\tilde{D} \left ( [\alpha]_{ \{ \gamma \wedge \theta \} } \right ) & := d\tilde{\alpha}.
\end{align*}
By Lemma \ref{lemmaDtilde}, we know that $\widetilde{f^* \alpha}= f^* \tilde{\alpha}$ holds. Then
$$
\tilde{D} f^* [ \alpha]_{ \{ \gamma \wedge \theta \} }= \tilde{D} [ f^* \alpha]_{ \{ \gamma \wedge \theta \} }
= d \left ( \widetilde{f^* \alpha} \right )= d ( f^* \tilde{\alpha}) = f^* d \tilde{\alpha} = f^* \tilde{D} ( [\alpha]_{ \{ \gamma \wedge \theta \} } ).
$$
In Definition \ref{D}, we defined the second-order differential operator $D$ by $D ( [\alpha]_{I^n} ) = d \tilde{\alpha}$, which means
$$
D ( [\alpha]_{I^n} ) = \tilde{D} \left ( [\alpha]_{ \{ \gamma \wedge \theta \} } \right ).
$$
Then, since $ D [\alpha ]_{I^n} \in J^{n+1}$ and using the definitions of $D$ and $f^*$, as well as the fact that $f^*\tilde{D}=\tilde{D}f^* $, we get
\begin{align*}
f^* D [\alpha ]_{I^n}
& = f^* \tilde{D} \left ( [\alpha]_{ \{ \gamma \wedge \theta \} } \right )
= \tilde{D} f^* \left ( [\alpha]_{ \{ \gamma \wedge \theta \} } \right )
= \tilde{D} \left ( [ f^*\alpha]_{ \{ \gamma \wedge \theta \} } \right )\\
& = D ( [f^* \alpha]_{I^n} ) =
D f^* [ \alpha]_{I^n}.
\end{align*}
This concludes the proof.
\end{proof}
\section{Derivative of Compositions, Pushforward and Pullback}\label{dercompuspul}
\noindent
In this section we start by writing the derivatives of compositions of functions. After that we will write explicitly the pushforward and pullback by such functions, in different situations. Unfortunately, if we ask only for regularity but no contact properties, the computation becomes quite heavy and, since this case is of limited interest (contactness being a natural assumption), we will not pursue it beyond the first derivatives. See Section 1.D in \cite{KORR} as a reference.
\subsection{General Maps}
\noindent
First we introduce the following notation:
\begin{no}
Remember Notation \ref{notW} and let $j=1,\dots,2n$. Define
$$
\tilde w_{ j}:=
\begin{cases}
w_{n+j}, \quad &j=1,\dots,n,\\
-w_{j-n}, \quad & j=n+1,\dots,2n.
\end{cases}
$$
Then we have that
$$
W_j = \partial_{w_j} -\frac{1}{2} \tilde w_{ j} \partial_t, \quad j=1,\dots,2n.
$$
\end{no}
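\noindent
For instance, when $n=1$ one has $\tilde w_{ 1}=w_2$ and $\tilde w_{ 2}=-w_1$, so that
$$
W_1 = \partial_{w_1} -\frac{1}{2} w_2 \partial_t = X \quad \text{ and } \quad W_2 = \partial_{w_2} +\frac{1}{2} w_1 \partial_t = Y.
$$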
\begin{no}\label{callA}
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=(f^1,\dots,f^{2n+1}) \in \left [ C_\mathbb{H}^2(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$. Denote
$$
\mathcal{A}(j,f):= W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{j} f^l , \quad j=1,\dots,2n+1.
$$
\end{no}
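\noindent
For instance, when $n=1$, writing $X=W_1$, $Y=W_2$ and $T=W_3$, the notation reads
$$
\mathcal{A}(1,f)= X f^{3} + \frac{1}{2} \left ( f^2 X f^1 - f^1 X f^2 \right ),
$$
and analogously for $\mathcal{A}(2,f)$ and $\mathcal{A}(3,f)$ with $Y$ and $T$ in place of $X$.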
\begin{lem}\label{composition}
Consider a map $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=(f^1,\dots,f^{2n+1}) \in \left [ C^1(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$ and $g: \mathbb{H}^n \to \mathbb{R}$, $g\in C^1(\mathbb{H}^n, \mathbb{R})$.
Then
\begin{align}
\begin{aligned}
W_j (g \circ f) &= \sum_{l=1}^{2n} ( W_l g)_f W_{j} f^l + (Tg)_f \left ( W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{j} f^l \right ) \\
& = (\nabla_{\mathbb{H}} g)_f \cdot (W_j f^1, \dots, W_j f^{2n}) + (Tg)_f \mathcal{A}(j,f) ,
\end{aligned}
\end{align}
for $j=1,\dots,2n+1$.
In particular, if $n=1$,
$$
\begin{cases}
X (g \circ f) = ( X g)_f X f^1 + ( Y g)_f X f^2 + (Tg)_f \left ( X f^{3} + \frac{1}{2} \left ( f^2 Xf^1 - f^1 X f^2 \right ) \right ) ,\\
Y (g \circ f) = ( X g)_f Y f^1 + ( Y g)_f Y f^2 + (Tg)_f \left ( Y f^{3} + \frac{1}{2} \left ( f^2 Yf^1 - f^1 Y f^2 \right ) \right ) ,\\
T (g \circ f) = ( X g)_f T f^1 + ( Y g)_f T f^2 + (Tg)_f \left ( T f^{3} + \frac{1}{2} \left ( f^2 Tf^1 - f^1 T f^2 \right ) \right ) .
\end{cases}
$$
\end{lem}
\noindent
Note that, with our regularity hypotheses, we cannot yet use the identity $T=[W_j,W_{n+j}]$, as second derivatives are not yet well-defined. We will do so later when considering contact maps.
\begin{proof}[Proof of Lemma \ref{composition}]
\begin{align*}
W_j (g \circ f) &= \left (\partial_{w_j}- \frac{1}{2} \tilde w_{ j} \partial_t \right ) (g \circ f)\\
&= \sum_{l=1}^{2n+1} (\partial_{w_l} g)_f \partial_{w_{j}} f^l - \frac{1}{2} \tilde w_{ j} \sum_{l=1}^{2n+1} (\partial_{w_l} g)_f \partial_{t} f^l \\
&= \sum_{l=1}^{2n+1} \left ( \partial_{w_{j}} f^l - \frac{1}{2} \tilde w_{ j} \partial_{t} f^l \right ) (\partial_{w_l} g)_f = \sum_{l=1}^{2n+1} W_{j} f^l (\partial_{w_l} g)_f\\
&= \sum_{l=1}^{2n} W_{j} f^l \left ( W_l g+ \frac{1}{2} \tilde w_{ l} T g \right )_f + W_{j} f^{2n+1} (Tg)_f \\
&= \sum_{l=1}^{2n} ( W_l g)_f W_{j} f^l + (Tg)_f \left ( W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{j} f^l \right ) ,
\end{align*}
for $j=1,\dots,2n$. In the case of $j=2n+1,$
\begin{align*}
T (g \circ f) &= \partial_{t} (g \circ f) = \sum_{l=1}^{2n+1} (\partial_{w_l} g)_f \partial_{t} f^l \\
&= \sum_{l=1}^{2n}\left ( W_l g+ \frac{1}{2} \tilde w_{ l} T g \right )_f T f^l + (Tg)_f T f^{2n+1} \\
&= \sum_{l=1}^{2n} ( W_l g)_f T f^l + (Tg)_f \left ( T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) T f^l \right ) .
\end{align*}
\end{proof}
\begin{prop}\label{pushforwardgeneral}
Let $f: U\subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=(f^1,\dots,f^{2n+1}) \in \left [ C^1(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$.
Then
\begin{align}
\begin{aligned}
f_* W_j
&= \sum_{l=1}^{2n+1} \langle dw_l , f_* W_j \rangle W_l \\
&= \sum_{l=1}^{2n} W_{j} f^l W_l + \left ( W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l}(f) W_{j} f^l \right ) T \\
&= \sum_{l=1}^{2n} W_{j} f^l W_l + \mathcal{A}(j,f) T ,
\end{aligned}
\end{align}
for $j=1,\dots,2n+1$. In particular, if $n=1$,
\begin{align}
\begin{aligned}
f_* X=&
\langle dx , f_* X \rangle X + \langle dy , f_* X \rangle Y + \langle \theta , f_* X \rangle T \\
= &Xf^1 X + Xf^2 Y + \left ( X f^{3} + \frac{1}{2} \left ( f^2 Xf^1 - f^1 X f^2 \right ) \right ) T,
\end{aligned}
\end{align}
\begin{align}
\begin{aligned}
f_* Y=&
\langle dx , f_* Y \rangle X + \langle dy , f_* Y \rangle Y + \langle \theta , f_* Y \rangle T \\
=& Yf^1 X + Yf^2 Y + \left ( Y f^{3} + \frac{1}{2} \left ( f^2 Yf^1 - f^1 Y f^2 \right ) \right ) T,
\end{aligned}
\end{align}
\begin{align}
\begin{aligned}
f_* T=&
\langle dx , f_* T \rangle X + \langle dy , f_* T \rangle Y + \langle \theta , f_* T \rangle T \\
=& Tf^1 X + Tf^2 Y + \left ( T f^{3} + \frac{1}{2} \left ( f^2 T f^1 - f^1 T f^2 \right ) \right ) T.
\end{aligned}
\end{align}
\end{prop}
\begin{proof}
The proof follows immediately from Lemma \ref{composition}, remembering that $(f_* W_j )h=W_j(h \circ f)$. Alternatively, one can make the computation directly, following exactly the same strategy as in Lemma \ref{composition}:
\begin{align*}
f_* W_j &=f_* \left (\partial_{w_j}- \frac{1}{2} \tilde w_{ j} \partial_t \right )\\
&= \sum_{l=1}^{2n+1} \partial_{w_{j}} f^l \partial_{w_l} - \frac{1}{2} \tilde w_{ j} \sum_{l=1}^{2n+1} \partial_{t} f^l \partial_{w_l}\\
&= \sum_{l=1}^{2n+1} \left ( \partial_{w_{j}} f^l - \frac{1}{2} \tilde w_{ j} \partial_{t} f^l \right ) \partial_{w_l} = \sum_{l=1}^{2n+1} W_{j} f^l \partial_{w_l}\\
&= \sum_{l=1}^{2n} W_{j} f^l \left ( W_l + \frac{1}{2} \tilde w_{ l} T \right ) + W_{j} f^{2n+1} T \\
&= \sum_{l=1}^{2n} W_{j} f^l W_l + \left ( W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} W_{j} f^l \right ) T ,
\end{align*}
for $j=1,\dots,2n$. Similarly for $f_* T$.
\end{proof}
\subsection{Contact Maps}
\begin{rem}
Recall that Definition \ref{contact} of a contact map says that
$$
\langle \theta , f_* W_j \rangle =0, \quad j=1,\dots,2n.
$$
Proposition \ref{pushforwardgeneral} shows that this is equivalent to asking
$$
\mathcal{A}(j,f)= W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{j} f^l =0, \quad j=1,\dots,2n.
$$
\end{rem}
\begin{ex}\label{contactdebt}
In Example \ref{examplecontact} we promised to prove that the anisotropic dilation
$$\delta_r (w_1, \dots, w_{2n},w_{2n+1})=(rw_1, \dots, r w_{2n}, r^2w_{2n+1} )$$
is a contact map. In other words, we have to show that $\mathcal{A}(j,\delta_r)(w)=0$ for $j=1,\dots,2n$. Indeed
\begin{align*}
\mathcal{A}(j,\delta_r) (w)&= W_{j} \delta_r^{2n+1} (w) + \frac{1}{2} \sum_{l=1}^{2n} [\tilde w_{ l} (\delta_r)](w) W_{j} \delta_r^l (w) \\
&= \left (\partial_{w_j} -\frac{1}{2} \tilde w_{ j} \partial_{w_{2n+1}} \right ) ( r^2w_{2n+1} ) + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (rw) \left (\partial_{w_j} -\frac{1}{2} \tilde w_{ j} \partial_{w_{2n+1}} \right ) (r w_l) \\
&= -\frac{1}{2} \tilde w_{ j} r^2 + \frac{1}{2} r \tilde w_{ j} \cdot r =0.
\end{align*}
For completeness, we also show that $\mathcal{A}(2n+1,\delta_r)(w) \neq 0$; indeed:
\begin{align*}
\mathcal{A}(2n+1,\delta_r) (w)&= W_{2n+1} \delta_r^{2n+1} (w) + \frac{1}{2} \sum_{l=1}^{2n} [\tilde w_{ l} (\delta_r)](w) W_{2n+1} \delta_r^l (w) \\
&= \left ( \partial_{w_{2n+1}} \right ) ( r^2w_{2n+1} ) + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (rw) \left ( \partial_{w_{2n+1}} \right ) (r w_l) =r^2 \neq 0 .
\end{align*}
\end{ex}
\begin{no}\label{callL}
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=(f^1,\dots,f^{2n+1}) \in \left [ C_\mathbb{H}^2(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$. Denote
$$
\lambda(j,f):= \sum_{l=1}^{n} \left ( W_j f^l W_{n+j} f^{n+l} - W_{n+j} f^{l}W_{j} f^{n+l} \right ), \quad j=1,\dots,n.
$$
\end{no}
\begin{lem}\label{T3=XY12}
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, be a contact map, $f=(f^1,\dots,f^{2n+1}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$. Then, for $j=1,\dots,n$,
$$
\mathcal{A}(2n+1,f) = T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) T f^l
= \sum_{l=1}^{n} \left ( W_j f^l W_{n+j} f^{n+l} - W_{n+j} f^{l}W_{j} f^{n+l} \right ) = \lambda (j,f).
$$
In particular, for $n=1$, one has $j=1$ and
$$
\mathcal{A}(3,f) = T f^{3} + \frac{1}{2} \left ( f^2 T f^1 - f^1 T f^2 \right ) = \left ( X f^1 Y f^2 - Y f^1 X f^2 \right ) = \lambda (1,f).
$$
\end{lem}
\begin{proof}
\begin{align*}
\mathcal{A}&(2n+1,f) = T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) T f^l \\
=& \left ( W_j W_{n+j} - W_{n+j} W_{j} \right ) f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) \left ( W_j W_{n+j} - W_{n+j} W_{j} \right ) f^l \\
=& W_j W_{n+j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_j W_{n+j} f^l
- W_{n+j} W_{j} f^{2n+1} - \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{n+j} W_{j} f^l \\
=& W_j \left ( W_{n+j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{n+j} f^l \right ) - \frac{1}{2} \sum_{l=1}^{2n} W_j \tilde w_{ l} (f) W_{n+j} f^l \\
& - W_{n+j} \left ( W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{j} f^l \right) + \frac{1}{2} \sum_{l=1}^{2n} W_{n+j} \tilde w_{ l} (f) W_{j} f^l \\
=& W_j \left ( \mathcal{A}(n+j,f) \right ) - \frac{1}{2} \sum_{l=1}^{2n} W_j \tilde w_{ l} (f) W_{n+j} f^l - W_{n+j} \left ( \mathcal{A}(j,f) \right ) + \frac{1}{2} \sum_{l=1}^{2n} W_{n+j} \tilde w_{ l} (f) W_{j} f^l \\
=& - \frac{1}{2} \sum_{l=1}^{2n} W_j \tilde w_{ l} (f) W_{n+j} f^l +\frac{1}{2} \sum_{l=1}^{2n} W_{n+j} \tilde w_{ l} (f) W_{j} f^l \\
=& - \frac{1}{2} \sum_{l=1}^{n} \left ( W_j f^{n+l} W_{n+j} f^l - W_j f^l W_{n+j} f^{n+l} \right ) + \frac{1}{2} \sum_{l=1}^{n} \left ( W_{n+j} f^{n+l} W_{j} f^l - W_{n+j} f^l W_{j} f^{n+l} \right ) \\
=& \sum_{l=1}^{n} \left ( W_j f^l W_{n+j} f^{n+l} - W_{n+j} f^{l}W_{j} f^{n+l} \right )= \lambda (j,f).
\end{align*}
\end{proof}
\noindent
The lemma shows that $ \lambda(j,f)$ does not actually depend on $j$, so from this point on we can write $ \lambda(f)= \lambda(j,f)$.
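\noindent
As a quick cross-check, for the dilation $\delta_r$ of Example \ref{contactdebt} one has $W_j \delta_r^l = r \, \delta_{jl}$ for $j,l=1,\dots,2n$ (here $\delta_{jl}$ denotes the Kronecker delta), hence
$$
\lambda(\delta_r)= \sum_{l=1}^{n} \left ( W_j \delta_r^l \, W_{n+j} \delta_r^{n+l} - W_{n+j} \delta_r^{l}\, W_{j} \delta_r^{n+l} \right ) = r \cdot r - 0 \cdot 0 = r^2,
$$
which agrees with the value $\mathcal{A}(2n+1,\delta_r)=r^2$ computed in Example \ref{contactdebt}, as Lemma \ref{T3=XY12} predicts.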
\begin{lem}\label{composition_contact}
Consider a contact map $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=(f^1,\dots,f^{2n+1}) \in \left [ C^1(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$ and a map $g: \mathbb{H}^n \to \mathbb{R}$, $g\in C^1(\mathbb{H}^n, \mathbb{R})$. \\
Then, by the definition of contactness, it follows immediately from Lemma \ref{composition} that, for $j=1,\dots,2n$,
\begin{align}
\begin{aligned}
W_j (g \circ f) &= \sum_{l=1}^{2n} ( W_l g)_f W_{j} f^l \\
& = (\nabla_{\mathbb{H}} g)_f \cdot (W_j f^1, \dots, W_j f^{2n}) .
\end{aligned}
\end{align}
If $n=1$, they become
$$
\begin{cases}
X (g \circ f) = ( X g)_f X f^1 + ( Y g)_f X f^2 ,\\
Y (g \circ f) = ( X g)_f Y f^1 + ( Y g)_f Y f^2 .
\end{cases}
$$
\end{lem}
\begin{lem}\label{nabla_comp1}
Consider a contact map $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=(f^1,\dots,f^{2n+1}) \in \left [ C^1(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$ and a map $g: \mathbb{H}^n \to \mathbb{R}$, $g \in C^1(\mathbb{H}^n, \mathbb{R})$.
Then
\begin{equation}\label{tantinabla}
\nabla_{\mathbb{H}} (g \circ f) =f_*^T (\nabla_{\mathbb{H}} g)_f .
\end{equation}
\end{lem}
\begin{proof}
Consider a horizontal vector $V$ and compute the scalar product of the Heisenberg gradient of the composition (which is horizontal by definition) against such a vector $V$:
$$
\langle \nabla_{\mathbb{H}} (g \circ f) ,V \rangle_H = \langle d_Q (g \circ f) \vert V \rangle .
$$
Note that here we can substitute $d$ for $d_Q$ (and vice versa) because the last component of the differential does not play any role, as the computation involves only horizontal objects. Formally we have
\begin{align*}
\langle d \phi \vert V \rangle
= & \langle \sum_{j=1}^{n} \left ( X_j \phi d x_j + Y_j \phi dy_j \right ) + T\phi \theta \vert \sum_{j=1}^{n} \left ( V_j X_j + V_{n+j} Y_j \right ) \rangle \\
=& \langle \sum_{j=1}^{n} \left ( X_j \phi d x_j + Y_j \phi dy_j \right ) \vert \sum_{j=1}^{n} \left ( V_j X_j + V_{n+j} Y_j \right ) \rangle \\
=& \langle d_Q \phi \vert V \rangle.
\end{align*}
We can also repeat this below for $f_* V $, since $f_* V $ is still a horizontal vector field. Then
\begin{align*}
\langle \nabla_{\mathbb{H}} (g \circ f) ,V \rangle_H =& \langle d_Q (g \circ f) \vert V \rangle = \langle d (g \circ f) \vert V \rangle =
(g \circ f)_* ( V ) \\
=& ( (g_*)_f \circ f_* ) ( V ) = (d g)_f ( f_* V ) =
\langle (d g)_f \vert f_* V \rangle \\
=& \langle (d_Q g)_f \vert f_* V \rangle = \langle (\nabla_{\mathbb{H}} g)_f , f_* V \rangle_H = \langle f_{*}^T (\nabla_{\mathbb{H}} g)_f, V \rangle_H .
\end{align*}
Then, since $V$ is an arbitrary horizontal vector,
$$
\nabla_{\mathbb{H}} (g \circ f) = f_*^T (\nabla_{\mathbb{H}} g)_f .
$$
\end{proof}
\begin{rem}\label{nabla_comp2}
Equation \eqref{tantinabla} can be rewritten as
\begin{align*}
\nabla_{\mathbb{H}} (g \circ f) =
(X_1 g, \dots, X_n g, Y_1 g, \dots, Y_n g )_f \cdot
\left (
\begin{matrix}
X_1 f^1 & \ldots & X_n f^1 & Y_1 f^1 & \ldots & Y_n f^1 \\
\vdots & & \vdots & \vdots & & \vdots\\
X_1 f^{2n} & \ldots & X_n f^{2n} & Y_1 f^{2n} & \ldots & Y_n f^{2n}
\end{matrix}
\right ).
\end{align*}
\end{rem}
\noindent
Next we compute the second derivatives of the composition of two functions. By Lemma \ref{T3=XY12}, we will recover our previous expression for $T$.
\begin{lem}\label{doublederivativecomposition}
Consider a contact map $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=(f^1,\dots,f^{2n+1}) \in \left [ C^1(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$ and a map $g: \mathbb{H}^n \to \mathbb{R}$, $g\in C_\mathbb{H}^2 (\mathbb{H}^n, \mathbb{R})$. For $j,i\in \{ 1,\dots, 2n\}$ one has:
\begin{align}
W_j W_i (g \circ f) =& \sum_{l=1}^{2n} \left [
\left ( \sum_{h=1}^{2n} \left (W_h W_l g \right )_f W_j f^h \right ) W_i f^l + \left (W_l g \right )_f W_j W_i f^l
\right ], \quad \quad \quad \quad
\end{align}
\begin{align}
\begin{aligned}\label{compoT}
\quad T(g \circ f) =& \sum_{l=1}^{2n} \left (W_l g \right )_f T f^l +
(Tg)_f \sum_{l=1}^{n} \left ( W_h f^l W_{n+h} f^{n+l} - W_{n+h} f^{l}W_{h} f^{n+l} \right )\\
=& \sum_{l=1}^{2n} \left (W_l g \right )_f T f^l +
(Tg)_f \lambda (h,f) .
\end{aligned}
\end{align}
for any $h \in \{1,\dots,n\}$ (recall that $\lambda (h,f)$ does not depend on $h$). In the case $n=1$, one gets
\begin{align}\label{compT}
\begin{aligned}
T(g \circ f) =&(Xg)_f Tf^1 + (Yg)_f Tf^2 +\lambda (1,f) (Tg)_f,\quad \quad \quad \quad \quad \quad \quad
\end{aligned}
\end{align}
where $\lambda (1,f)=
Xf^1 \, Yf^2-Yf^1 \, Xf^2$.
\end{lem}
\begin{proof}
Remember that
$$
W_j (g \circ f) = \sum_{l=1}^{2n} \left (W_l g \right )_f W_j f^l.
$$
Then
\begin{align*}
W_j W_i (g \circ f) =& \sum_{l=1}^{2n} W_j \left ( \left ( W_l g \circ f \right ) W_i f^l \right ) \\
=& \sum_{l=1}^{2n} \left [ W_j \left (W_l g \circ f \right ) W_i f^l + \left (W_l g \right )_f W_j W_i f^l
\right ]\\
=& \sum_{l=1}^{2n} \left [ \sum_{h=1}^{2n} \left (
\left (W_h W_l g \right )_f W_j f^h \right ) W_i f^l + \left (W_l g \right )_f W_j W_i f^l
\right ].
\end{align*}
\begin{align*}
T(g \circ f)
=& \left (W_j W_{n+j} - W_{n+j} W_{j} \right ) (g\circ f)\\
=& \sum_{l=1}^{2n} \Bigg [
\left ( \sum_{h=1}^{2n} \left (W_h W_l g \right )_f W_j f^h \right ) W_{n+j} f^l + \left (W_l g \right )_f W_j W_{n+j} f^l \\
&- \left ( \sum_{h=1}^{2n} \left (W_h W_l g \right )_f W_{n+j} f^h \right ) W_j f^l - \left (W_l g \right )_f W_{n+j} W_j f^l \Bigg ] \\
=& \sum_{l=1}^{2n} \left [
\sum_{h=1}^{2n} \left (W_h W_l g \right )_f \left ( W_j f^h W_{n+j} f^l - W_{n+j} f^h W_j f^l \right )
+ \left (W_l g \right )_f T f^l \right ]\\
=& \sum_{l=1}^{2n} \left (W_l g \right )_f T f^l + \sum_{l,h=1}^{2n} \left (W_h W_l g \right )_f \left ( W_j f^h W_{n+j} f^l - W_{n+j} f^h W_j f^l \right ) .
\end{align*}
Note that every term with $l=h$ vanishes. Furthermore, the term corresponding to a pair $(l,h)$ equals the one corresponding to $(h,l)$ with opposite sign. We can therefore rewrite the sum as
\begin{align*}
T(g \circ f)
=& \sum_{l=1}^{2n} \left (W_l g \right )_f T f^l +
\sum_{\mathclap{\substack{ l,h=1\\ l<h }}}^{2n}
\left (W_h W_l g - W_l W_h g \right )_f \left ( W_j f^h W_{n+j} f^l - W_{n+j} f^h W_j f^l \right ) .
\end{align*}
Notice now that all the terms in the second sum vanish except those with $h=n+l$. So we can finally write
\begin{align*}
T(g \circ f)
=& \sum_{l=1}^{2n} \left (W_l g \right )_f T f^l +
(Tg)_f \sum_{ l=1}^{n}
\left ( W_j f^l W_{n+j} f^{n+l} - W_{n+j} f^l W_j f^{n+l} \right ) .
\end{align*}
\end{proof}
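\begin{rem}
As a quick sanity check of equation \eqref{compT}, consider in $\mathbb{H}^1$ the anisotropic dilation $f=\delta_r$, $\delta_r (x,y,t)=(rx,ry,r^2t)$ with $r>0$, which is a contact map. Here $Tf^1=Tf^2=0$ and
$$
\lambda (1,f)= Xf^1 Yf^2-Yf^1Xf^2 = r \cdot r - 0 \cdot 0 = r^2 ,
$$
so equation \eqref{compT} reduces to $T(g \circ \delta_r) = r^2 (Tg)_{\delta_r}$, in agreement with the direct computation $\partial_t \left ( g(rx,ry,r^2t) \right ) = r^2 (\partial_t g)(rx,ry,r^2t)$.
\end{rem}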
\begin{prop}\label{pushforwardcontact}
Consider a contact map $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=(f^1,\dots,f^{2n+1}) \in \left [ C^1(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$.
Then the pushforward matrix can be written as
\begin{align*}
f_*=&
\left (
\begin{matrix}
W_1 f^1 & \ldots & W_{2n} f^1 & W_{2n+1} f^1 \\
\vdots & & \vdots & \vdots \\
W_1 f^{2n} & \ldots & W_{2n} f^{2n} & W_{2n+1} f^{2n} \\
0 & \ldots & 0 & \lambda (h,f)
\end{matrix}
\right )
,
\end{align*}
with $h \in\{1,\dots,n\}$.
This is the same as writing
\begin{align}
\begin{aligned}\label{pushforwardWj}
f_* W_j &= \sum_{l=1}^{2n} \langle dw_l , f_* W_j \rangle W_l = \sum_{l=1}^{2n} W_{j} f^l W_l , \quad j=1,\dots,2n, \quad \ \
\end{aligned}
\end{align}
\begin{align}
\begin{aligned}\label{pushforwardT}
f_* T
&= \sum_{l=1}^{2n+1} \langle dw_l , f_* T \rangle W_l \\
&= \sum_{l=1}^{2n} T f^l W_l + \sum_{l=1}^{n} \left ( W_h f^l W_{n+h} f^{n+l} - W_{n+h} f^{l}W_{h} f^{n+l} \right ) T \\
&= \sum_{l=1}^{2n} T f^l W_l + \lambda (h,f) T .
\end{aligned}
\end{align}
In particular, if $n=1$, we have:
\begin{align*}
f_*=
\begin{pmatrix}
Xf^1 &Yf^1 & Tf^1 \\
Xf^2 & Yf^2 & Tf^2 \\
0 & 0 & \lambda (1,f)
\end{pmatrix},
\quad \text{ i.e.,} \quad
\begin{cases}
f_* X = Xf^1 X + Xf^2 Y, \\
f_* Y = Yf^1 X + Yf^2 Y, \\
f_* T = Tf^1 X + Tf^2 Y +\lambda (1,f) T.
\end{cases}
\end{align*}
\begin{comment}
and then
\begin{align}
f_* X &=
Xf^1 X + Xf^2 Y, \label{fX} \\
f_* Y &=
Yf^1 X + Yf^2 Y, \label{fY}\\
f_* T & =
Tf^1 X + Tf^2 Y +\lambda (1,f) T.
\end{align}
\end{comment}
\end{prop}
\begin{proof}
The proof of equation \eqref{pushforwardWj} comes immediately from the definition of contactness. To prove equation \eqref{pushforwardT}, we can use Proposition \ref{pushforwardgeneral} together with Lemma \ref{T3=XY12}.
Alternatively, if one wants a direct proof, it suffices to argue exactly as in the proof of equation \eqref{compoT}.
\end{proof}
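\begin{rem}
For instance, for the dilation $f=\delta_r$, $\delta_r(x,y,t)=(rx,ry,r^2t)$, one has $Xf^1=Yf^2=r$, $Yf^1=Xf^2=Tf^1=Tf^2=0$ and $\lambda (1,f)=r^2$, so that
\begin{align*}
f_*=
\begin{pmatrix}
r & 0 & 0 \\
0 & r & 0 \\
0 & 0 & r^2
\end{pmatrix},
\quad \text{ i.e.,} \quad
f_* X = r X, \quad f_* Y = r Y, \quad f_* T = r^2 T.
\end{align*}
\end{rem}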
\noindent
In the same way, since $\langle f^* \omega\vert v \rangle =\langle \omega \vert f_* v \rangle$, we have an equivalent proposition for the pullback.
\begin{prop}
Consider a contact map $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, $f=(f^1,\dots,f^{2n+1}) \in \left [ C^1(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$.
Then
\begin{align*}
f^*=(f_*)^T=&
\left (
\begin{matrix}
W_1 f^1 & \ldots & W_1 f^{2n} & 0 \\
\vdots & & \vdots & \vdots \\
W_{2n} f^1 & \ldots & W_{2n} f^{2n} & 0 \\
W_{2n+1} f^1 & \ldots & W_{2n+1} f^{2n} & \lambda (h,f)
\end{matrix}
\right ),
\end{align*}
with $h \in\{1,\dots,n\}$.
This is the same as writing
\begin{align}
\begin{aligned}
f^* dw_j &= \sum_{l=1}^{2n+1} \langle f^* dw_j , W_l \rangle dw_l = \sum_{l=1}^{2n+1} W_l f^j dw_l , \quad j=1,\dots,2n ,
\end{aligned}
\end{align}
\begin{align}
\begin{aligned}
f^* \theta = \sum_{l=1}^{2n+1} \langle f^* \theta , W_l \rangle dw_l = \langle f^* \theta , T \rangle \theta = \lambda (h,f) \theta .
\end{aligned}
\end{align}
In particular, if $n=1$ we have:
\begin{align*}
f^*=(f_*)^T=
\begin{pmatrix}
Xf^1 & Xf^2 & 0 \\
Yf^1 & Yf^2 & 0\\
Tf^1 & Tf^2 & \lambda (1,f)
\end{pmatrix},
\quad \text{ i.e.,} \quad
\begin{cases}
f^* dx= Xf^1 dx + Y f^1 dy + T f^1 \theta, \\
f^* dy = Xf^2 dx + Y f^2 dy + T f^2 \theta,\\
f^* \theta = \lambda (1,f) \theta.
\end{cases}
\end{align*}
\begin{comment}
and then
\begin{align}
f^* dx
=& Xf^1 dx + Y f^1 dy + T f^1 \theta, \\
f^* dy
= & Xf^2 dx + Y f^2 dy + T f^2 \theta,\\
f^* \theta = &
\lambda (1,f) \theta.
\end{align}
\end{comment}
\end{prop}
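\begin{rem}
As an example with $n=1$, consider a left translation $f=L_q$, $q=(a,b,c)$; with the group law $(a,b,c)\cdot(x,y,t)= \left ( a+x, b+y, c+t+\frac{1}{2}(ay-bx) \right )$ (the sign convention may differ from the one used elsewhere), $f^1=a+x$, $f^2=b+y$, $f^3=c+t+\frac{1}{2}(ay-bx)$. Then $Xf^1=Yf^2=1$, $Yf^1=Xf^2=Tf^1=Tf^2=0$ and $\lambda (1,f)=1$, so $f^* dx=dx$, $f^* dy=dy$ and $f^* \theta=\theta$: left translations preserve the contact form.
\end{rem}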
\subsection{Contact diffeomorphisms}
\begin{lem}\label{spamT}
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, be a contact diffeomorphism such that $f=(f^1,\dots,f^{2n+1}) \in \left [ C^1(\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$. Then
$$
f_* T \in \spn \{ T \},
$$
or equivalently, given equation \eqref{pushforwardT},
$$
\langle dw_j , f_* T \rangle = T f^j=0,
$$
for all $j=1,\dots,2n$. Furthermore, for a diffeomorphism $f$ the matrix $(f_*)$ must be invertible, so we get that $ \lambda (h,f) =Tf^{2n+1} \neq 0$, with $h=1,\dots,n$.
\end{lem}
\begin{proof}
Since $f$ is a diffeomorphism, it admits an inverse mapping and therefore the matrix $(f_*)$ must be invertible. In particular, this means that $ \lambda (h,f) \neq 0$, with $h=1,\dots,n$. \\%By equation \eqref{pushforwardT}, this means that $f_* T$ must always have $T$ component.\\
Again since $f$ is a diffeomorphism, $f_* : \mathfrak{h} \to \mathfrak{h}$ is a linear isomorphism. By contactness, we have that
$$
{f_*}_{\vert_{ \mathfrak{h}_1 }} : \mathfrak{h}_1 \to \mathfrak{h}_1 ,
$$
which is still an isomorphism. Hence, since the Lie algebra splits as $\mathfrak{h}=\mathfrak{h}_1 \oplus \mathfrak{h}_2$, also
$$
{f_*}_{\vert_{ \mathfrak{h}_2 }} : \mathfrak{h}_2 \to \mathfrak{h}_2
$$
is a linear isomorphism; in particular the dimensions of its domain and codomain coincide and equal $1$. Then
$$
f_* T \in \spn \{ T \}= \mathfrak{h}_2.
$$
\begin{comment}
From equation \eqref{pushforwardWj}, we can write $f_* W_j = \sum_{l=1}^{2n} W_{j} f^l W_l $, with $ j=1,\dots,2n .$ Furthermore, since $f$ is a diffeomorphism, $f_* ([W_j,W_{n+j}]) = [f_* W_j,f_* W_{n+j}]$. Then
\begin{align*}
f_* T =&f_* ([W_j,W_{n+j}]) = [f_* W_j,f_* W_{n+j}]= \left [ \sum_{l=1}^{2n} W_{j} f^l W_l , \sum_{h=1}^{2n} W_{n+j} f^h W_h \right ]\\
=& \sum_{l=1}^{2n} W_{j} f^l W_l \sum_{h=1}^{2n} W_{n+j} f^h W_h - \sum_{h=1}^{2n} W_{n+j} f^h W_h \sum_{l=1}^{2n} W_{j} f^l W_l \\
=& \sum_{l,h=1}^{2n} \left ( ( W_{j} f^l W_l ) ( W_{n+j} f^h W_h ) - ( W_{n+j} f^h W_h ) ( W_{j} f^l W_l ) \right ) \\
=& \sum_{l,h=1}^{2n} \left ( W_{j} f^l W_{n+j} f^h W_l W_h - W_{n+j} f^h W_{j} f^l W_h W_l \right ) \\
=& \sum_{l,h=1}^{2n} W_{j} f^l W_{n+j} f^h \left ( W_l W_h - W_h W_l \right ) .
\end{align*}
The terms of this sum are all zero except in the cases $(h,n+h)$, with $h=1,\dots,n$, and $(n+l,l)$, with $l=1,\dots,n$. So
\begin{align*}
f_* T =& \sum_{h=1}^{n} W_{j} f^{n+h} W_{n+j} f^h \left ( W_{n+h} W_h - W_h W_{n+h} \right ) \\
&+ \sum_{l=1}^{n} W_{j} f^l W_{n+j} f^{n+l} \left ( W_l W_{n+l} - W_{n+l} W_l \right ) \\
=& - \sum_{l=1}^{n} W_{j} f^{n+l} W_{n+j} f^l T + \sum_{l=1}^{n} W_{j} f^l W_{n+j} f^{n+l} T \\
=& \sum_{l=1}^{n} \left ( W_{j} f^l W_{n+j} f^{n+l} - W_{j} f^{n+l} W_{n+j} f^l \right ) T
\end{align*}
\end{comment}
\end{proof}
\begin{obs}\label{cmdiff}
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, be a contact diffeomorphism such that $f=(f^1,\dots,f^{2n+1}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$. Lemma \ref{T3=XY12} then becomes
$$
T f^{2n+1} = \sum_{l=1}^{n} \left ( W_j f^l W_{n+j} f^{n+l} - W_{n+j} f^{l}W_{j} f^{n+l} \right ) = \lambda (j,f)
$$
for $j=1,\dots,n$.
Then Lemma \ref{spamT} says that
$$
\begin{cases}
\langle \theta , f_* W_j \rangle =0,\\
\langle dw_j , f_* T \rangle =0,\\
j=1,\dots,2n,\\
\end{cases}
\quad \text{ i.e.,} \quad
\begin{cases}
\mathcal{A}(j,f)= W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{j} f^l =0,\\
T f^j=0,\\
j=1,\dots,2n,\\
\end{cases}
$$
\end{obs}
\begin{rem}
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, be a contact diffeomorphism such that $f=(f^1,\dots,f^{2n+1}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$. Then equation \eqref{compoT} becomes
$$
T(g \circ f) =(Tg)_f T f^{2n+1}.
$$
\end{rem}
\begin{prop}
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, be a contact diffeomorphism such that $f=(f^1,\dots,f^{2n+1}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$. Then
\begin{align*}
f_*=&
\left (
\begin{matrix}
W_1 f^1 & \ldots & W_{2n} f^1 & 0 \\
\vdots & & \vdots & \vdots \\
W_1 f^{2n} & \ldots & W_{2n} f^{2n} & 0 \\
0 & \ldots & 0 & T f^{2n+1}
\end{matrix}
\right ).
\end{align*}
This is the same as writing
\begin{align}
\begin{aligned}
f_* W_j &= \sum_{l=1}^{2n} \langle dw_l , f_* W_j \rangle W_l = \sum_{l=1}^{2n} W_{j} f^l W_l , \quad j=1,\dots,2n ,
\end{aligned}
\end{align}
\begin{align}
\begin{aligned}
f_* T
&= \langle d \theta , f_* T \rangle T \\
&= \sum_{l=1}^{n} \left ( W_h f^l W_{n+h} f^{n+l} - W_{n+h} f^{l}W_{h} f^{n+l} \right ) T \quad \quad \quad \quad \\
&= T f^{2n+1} T .
\end{aligned}
\end{align}
In particular, if $n=1$ we have:
\begin{align*}
f_*=
\begin{pmatrix}
Xf^1 &Yf^1 & 0\\
Xf^2 & Yf^2 & 0 \\
0 & 0 & Tf^{3}
\end{pmatrix},
\quad \text{ i.e.,} \quad
\begin{cases}
f_* X = Xf^1 X + Xf^2 Y,\\
f_* Y = Yf^1 X + Yf^2 Y,\\
f_* T = Tf^3 T
\end{cases}
\end{align*}
with $Tf^{3} =Xf^1Yf^2-Xf^2Yf^1$.
\begin{comment}
which means
\begin{align}
f_* X
= &
Xf^1 X + Xf^2 Y,\\
f_* Y
= &
Yf^1 X + Yf^2 Y,\\
f_* T
= &
Tf^3 T = \left ( Xf^1Yf^2-Xf^2Yf^1 \right ) T.
\end{align}
\end{comment}
\end{prop}
\noindent
In the same way again,
\begin{prop}
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, be a contact diffeomorphism such that $f=(f^1,\dots,f^{2n+1}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$. Then
\begin{align*}
f^*=(f_*)^T=&
\left (
\begin{matrix}
W_1 f^1 & \ldots & W_1 f^{2n} & 0 \\
\vdots & & \vdots & \vdots \\
W_{2n} f^1 & \ldots & W_{2n} f^{2n} & 0 \\
0 & \ldots & 0 & T f^{2n+1}
\end{matrix}
\right ).
\end{align*}
This is the same as writing
\begin{align}
\begin{aligned}
f^* dw_j &= \sum_{l=1}^{2n+1} \langle f^* dw_j , W_l \rangle dw_l = \sum_{l=1}^{2n+1} W_l f^j dw_l , \quad j=1,\dots,2n ,
\end{aligned}
\end{align}
\begin{align}
\begin{aligned}
f^* \theta = \sum_{l=1}^{2n+1} \langle f^* \theta , W_l \rangle dw_l = \langle f^* \theta , T \rangle \theta = T f^{2n+1} \theta .
\end{aligned}
\end{align}
In particular, if $n=1$ we have:
\begin{align*}
f^*=(f_*)^T=
\begin{pmatrix}
Xf^1 & Xf^2 & 0 \\
Yf^1 & Yf^2 & 0\\
0 & 0 & Tf^3
\end{pmatrix},
\quad \text{ i.e.,} \quad
\begin{cases}
f^* dx= Xf^1 dx + Y f^1 dy,\\
f^* dy = Xf^2 dx + Y f^2 dy,\\
f^* \theta = T f^3 \theta ,
\end{cases}
\end{align*}
with $ T f^3 = Xf^1Yf^2-Xf^2Yf^1 $.
\begin{comment}
same as
\begin{align}
f^* dx
=&
Xf^1 dx + Y f^1 dy,\\
f^* dy
=&
Xf^2 dx + Y f^2 dy,\\
f^* \theta
= &
T f^3 \theta = \left ( Xf^1Yf^2-Xf^2Yf^1 \right ) \theta.
\end{align}
\end{comment}
\end{prop}
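\begin{rem}
A simple example of a contact diffeomorphism is the rotation around the vertical axis $f=R_\varphi$, $R_\varphi (x,y,t) = (x \cos \varphi - y \sin \varphi , x \sin \varphi + y \cos \varphi , t)$. Here $Tf^1=Tf^2=0$ and
$$
Tf^3 = Xf^1Yf^2-Xf^2Yf^1 = \cos^2 \varphi + \sin^2 \varphi = 1 ,
$$
so $f_* T = T$ and $f^* \theta = \theta$.
\end{rem}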
\subsection{Higher Order}
\begin{lem}\label{XCYC}
Let $f: U \subseteq \mathbb{H}^n \to \mathbb{H}^n$, $U$ open, be a contact map such that $f=(f^1,\dots,f^{2n+1}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^n, \mathbb{R}) \right ]^{2n+1}$. Then
\begin{align*}
W_j (\lambda (f))=& \sum_{l=1}^{n} \left ( W_j f^{n+l} T f^l - Tf^{n+l} W_j f^{l} \right )\\
=& \sum_{l=1}^{2n} W_j( \tilde w_{ l} (f) ) T f^l .
\end{align*}
In the case $n=1$ we get
$$
X(\lambda (f))= Xf^2 Tf^1 - Tf^2 Xf^1 \quad \text{ and } \quad Y(\lambda (f))= Yf^2 Tf^1 - Tf^2 Yf^1 .
$$
\end{lem}
\begin{proof}
Observe first that
$$
T(f^l W_j f^{n+l})=Tf^l W_j f^{n+l} + f^l TW_j f^{n+l} = Tf^l W_j f^{n+l} + f^l W_j T f^{n+l},
$$
which means
$$
- f^l W_j T f^{n+l} =- T(f^l W_j f^{n+l}) + Tf^l W_j f^{n+l} .
$$
Likewise we have
$$
f^{n+l} W_j T f^{l} = T(f^{n+l} W_j f^{l})- Tf^{n+l} W_j f^{l} .
$$
Then
\begin{align*}
W_j (\lambda (h,f))
=& W_j \left ( T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) T f^l \right )\\
=& W_j T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \left ( W_j( \tilde w_{ l} (f)) T f^l +
\tilde w_{ l} (f) W_j T f^l
\right ) \\
=& W_j T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{n} \left (
W_j( \tilde w_{ l} (f)) T f^l +\tilde w_{ l} (f) W_j T f^l
+ W_j( \tilde w_{ {n+l}} (f)) T f^{n+l} +\tilde w_{ {n+l}} (f) W_j T f^{n+l}
\right ) \\
=& W_j T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{n} \left (
W_j f^{n+l} T f^l + f^{n+l} W_j T f^l
- W_j f^l T f^{n+l} - f^l W_j T f^{n+l}
\right ) \\
=& T W_j f^{2n+1} + \frac{1}{2} \sum_{l=1}^{n} \left (
T(f^{n+l} W_j f^{l}) - T(f^l W_j f^{n+l})
\right )
+ \sum_{l=1}^{n} \left ( W_j f^{n+l} T f^l - Tf^{n+l} W_j f^{l} \right ) \\
=& \sum_{l=1}^{n} \left ( W_j f^{n+l} T f^l - Tf^{n+l} W_j f^{l} \right )\\
=& \sum_{l=1}^{2n} W_j( \tilde w_{ l} (f) ) T f^l ,
\end{align*}
where we use $ T W_j f^{2n+1} + \frac{1}{2} \sum_{l=1}^{n} \left ( T(f^{n+l} W_j f^{l}) - T(f^l W_j f^{n+l}) \right ) = T (\mathcal{A}(j,f))=0 $.
\end{proof}
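\begin{rem}
For the dilation $f=\delta_r$ of $\mathbb{H}^1$ one has $\lambda (f)=r^2$, which is constant, and indeed both sides in Lemma \ref{XCYC} vanish: since $Tf^1=Tf^2=0$, the right-hand sides are identically zero, in agreement with $X(\lambda (f))=Y(\lambda (f))=0$.
\end{rem}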
\noindent
From this point until the end of the chapter we will take $n=1$. This choice is not merely one of notational convenience: for higher $n$ the computations become considerably more involved and the results do not allow an easy interpretation.
\begin{prop}\label{highdimensioncontact}
Let $f: U \subseteq \mathbb{H}^1 \to \mathbb{H}^1$, $U$ open, be a contact map such that $f=(f^1,f^2,f^{3}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^1, \mathbb{R}) \right ]^{3}$. Then the pushforward matrix in the basis $\{ X \wedge Y, X \wedge T , Y \wedge T \}$ is
\begin{align*}
f_*=
\begin{pmatrix}
\lambda (f) & -X(\lambda (f) ) & -Y(\lambda (f) ) \\
0 & Xf^1 \lambda (f) & Yf^1 \lambda (f) \\
0 & Xf^2 \lambda (f) & Yf^2 \lambda (f)
\end{pmatrix},
\quad \text{ i.e.,} \quad
\end{align*}
\begin{align}
f_* (X \wedge Y) &=\lambda (f) X\wedge Y,\\
f_* (X \wedge T) &=-X(\lambda (f) ) X\wedge Y +Xf^1 \lambda (f) X\wedge T+ Xf^2 \lambda (f) Y\wedge T,\\
f_* (Y \wedge T)&= -Y(\lambda (f)) X\wedge Y + Yf^1 \lambda (f) X\wedge T+ Yf^2 \lambda (f) Y\wedge T,
\end{align}
and
\begin{align}
f_* (X \wedge Y \wedge T) =&\lambda (f)^2 X \wedge Y \wedge T.
\end{align}
Likewise, the pullback is:
\begin{align*}
f^*= (f_*)^T=
\begin{pmatrix}
\lambda (f) & 0 & 0 \\
-X(\lambda (f) ) & Xf^1 \lambda (f) & Xf^2 \lambda (f) \\
-Y(\lambda (f) ) & Yf^1 \lambda (f) & Yf^2 \lambda (f)
\end{pmatrix},
\quad \text{ i.e.,} \quad
\end{align*}
\begin{align}
f^* (dx \wedge dy)&= \lambda (f) dx \wedge dy - X(\lambda (f)) dx \wedge \theta - Y(\lambda (f)) dy \wedge \theta,\\
f^* (dx \wedge \theta) &=Xf^1 \lambda (f) dx \wedge \theta + Y f^1\lambda (f) dy \wedge \theta,\label{2dimx}\\
f^* (dy \wedge \theta) &= Xf^2 \lambda (f) dx \wedge \theta + Y f^2 \lambda (f) dy \wedge \theta,\label{2dimy}
\end{align}
and
\begin{align}
f^* (dx \wedge dy \wedge \theta ) =&\lambda (f)^2 dx \wedge dy \wedge \theta.\label{3dim}
\end{align}
\end{prop}
\begin{proof}
\begin{align*}
f_* (X \wedge Y)=& f_* X \wedge f_* Y =( Xf^1 Yf^2 - Yf^1 Xf^2) X\wedge Y =\lambda (1,f) X\wedge Y.\\
&\\
f_* (X \wedge T)=&f_* X \wedge f_* T = ( Xf^1 Tf^2 - Tf^1 Xf^2 ) X\wedge Y \\
&+Xf^1 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) X\wedge T\\
&+ Xf^2 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) Y\wedge T\\
=& -X(\lambda (1,f)) X\wedge Y +Xf^1 \lambda (1,f) X\wedge T+ Xf^2 \lambda (1,f) Y\wedge T.\\
&\\
f_* (Y \wedge T)=& f_* Y \wedge f_* T = ( Yf^1 Tf^2 - Tf^1 Yf^2 ) X\wedge Y \\
&+Yf^1 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) X\wedge T\\
&+ Yf^2 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) Y\wedge T\\
=&-Y(\lambda (1,f)) X\wedge Y + Yf^1 \lambda (1,f) X\wedge T+ Yf^2 \lambda (1,f) Y\wedge T.\\
&\\
f_* (X \wedge Y \wedge T)=&f_* X \wedge f_* Y \wedge f_* T \\
=&( Xf^1 Yf^2 - Yf^1 Xf^2) \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) X \wedge Y \wedge T\\
=& \lambda (1,f)^2 X \wedge Y \wedge T.\\
&\\
f^* (dx \wedge dy)=&f^* dx \wedge f^* dy = ( Xf^1 Yf^2 - Yf^1 Xf^2) dx \wedge dy \\
&+ ( Xf^1 Tf^2 - Tf^1 Xf^2 )dx \wedge \theta+ ( Yf^1 Tf^2 - Tf^1 Yf^2 )dy \wedge \theta\\
=& \lambda (1,f) dx \wedge dy - X(\lambda (1,f)) dx \wedge \theta - Y(\lambda (1,f)) dy \wedge \theta.\\
&\\
f^* (dx \wedge \theta)=& Xf^1 \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dx \wedge \theta \\
& +Y f^1 \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dy \wedge \theta\\
=&Xf^1 \lambda (1,f) dx \wedge \theta + Y f^1\lambda (1,f) dy \wedge \theta.\\
&\\
f^* (dy \wedge \theta)= &Xf^2 \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dx \wedge \theta \\
&+ Y f^2 \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dy \wedge \theta\\
=& Xf^2 \lambda (1,f) dx \wedge \theta + Y f^2 \lambda (1,f) dy \wedge \theta.\\
&\\
f^* (dx \wedge dy \wedge \theta )=& ( Xf^1 Yf^2 - Yf^1 Xf^2) \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dx \wedge dy \wedge \theta\\
=& \lambda (1,f)^2 dx \wedge dy \wedge \theta.
\end{align*}
\end{proof}
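\begin{rem}
For the dilation $f=\delta_r$, equation \eqref{3dim} gives $f^* (dx \wedge dy \wedge \theta ) = \lambda (f)^2 \, dx \wedge dy \wedge \theta = r^4 \, dx \wedge dy \wedge \theta$, in accordance with the fact that the measure of $\mathbb{H}^1$ scales with the homogeneous dimension $Q=4$ under anisotropic dilations.
\end{rem}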
\begin{obs}\label{simply}
Let $f: U \subseteq \mathbb{H}^1 \to \mathbb{H}^1$, $U$ open, be a contact diffeomorphism such that $f=(f^1,f^2,f^{3}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^1, \mathbb{R}) \right ]^{3}$. Recall
$$
\lambda (f)=Tf^3=Xf^1 Yf^2- Xf^2 Yf^1.
$$
Given the conditions in Observation \ref{cmdiff}, Lemma \ref{XCYC} then becomes
\begin{align*}
X(\lambda (f))= 0 \quad \text{and} \quad Y(\lambda (f))=0 .
\end{align*}
\end{obs}
\begin{prop}
Let $f: U \subseteq \mathbb{H}^1 \to \mathbb{H}^1$, $U$ open, be a contact diffeomorphism such that $f=(f^1,f^2,f^{3}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^1, \mathbb{R}) \right ]^{3}$. In this case, Proposition \ref{highdimensioncontact} becomes
\begin{align*}
f_*=
\begin{pmatrix}
T f^3 & 0 & 0 \\
0 & Xf^1 T f^3 & Yf^1 T f^3 \\
0 & Xf^2 T f^3 & Yf^2 T f^3
\end{pmatrix},
\quad \text{ i.e.,} \quad
\end{align*}
$$
\begin{cases}
f_* (X \wedge Y) =T f^3 X\wedge Y ,\\
f_* (X \wedge T)= Xf^1 Tf^3 X\wedge T+ Xf^2 Tf^3 Y\wedge T,\\
f_* (Y \wedge T) = Yf^1 Tf^3 X\wedge T+ Yf^2 Tf^3 Y\wedge T,
\end{cases}
$$
and
\begin{align}
f_* (X \wedge Y \wedge T) = (Tf^3)^2 X \wedge Y \wedge T .
\end{align}
Likewise, the pullback is:
\begin{align*}
f^*= (f_*)^T=
\begin{pmatrix}
T f^3& 0 & 0 \\
0 & Xf^1 T f^3 & Xf^2 T f^3 \\
0 & Yf^1 T f^3 & Yf^2 T f^3
\end{pmatrix},
\quad \text{ i.e.,} \quad
\end{align*}
$$
\begin{cases}
f^* (dx \wedge dy) = Tf^3 dx \wedge dy,\\
f^* (dx \wedge \theta)= Xf^1 T f^3 dx \wedge \theta + Y f^1 T f^3 dy \wedge \theta,\\
f^* (dy \wedge \theta)= Xf^2 T f^3 dx \wedge \theta + Y f^2 T f^3 dy \wedge \theta,
\end{cases}
$$
and
\begin{align}
f^* (dx \wedge dy \wedge \theta )= (Tf^3)^2 dx \wedge dy \wedge \theta.
\end{align}
\begin{comment}
\begin{align}
f_* (X \wedge Y) =&T f^3 X\wedge Y ,\\
f_* (X \wedge T)=& Xf^1 Tf^3 X\wedge T+ Xf^2 Tf^3 Y\wedge T,\\
f_* (Y \wedge T) =& Yf^1 Tf^3 X\wedge T+ Yf^2 Tf^3 Y\wedge T,\\
f_* (X \wedge Y \wedge T) = &(Tf^3)^2 X \wedge Y \wedge T .
\end{align}
Likewise, the pullback is
\begin{align}
f^* (dx \wedge dy) =& Tf^3 dx \wedge dy,\\
f^* (dx \wedge \theta)=& Xf^1 T f^3 dx \wedge \theta + Y f^1 T f^3 dy \wedge \theta,\\
f^* (dy \wedge \theta)= &Xf^2 T f^3 dx \wedge \theta + Y f^2 T f^3 dy \wedge \theta,\\
f^* (dx \wedge dy \wedge \theta )= & (Tf^3)^2 dx \wedge dy \wedge \theta.
\end{align}
\end{comment}
\end{prop}
\begin{rem}
Note that so far we have never used the Rumin cohomology, but only the definitions of pushforward and pullback on the whole algebra and the notions of contactness and diffeomorphicity. In the Rumin cohomology, as described in the previous chapter, some differential forms (and subsequently their dual vectors) are either zero in the equivalence class or do not appear at all. For $n=1$ these are $T$, $X\wedge Y$, $\theta$ and $dx\wedge dy$.
\end{rem}
\begin{comment}
\subsection{Rumin Cohomology}
At this point we consider the Rumin cohomology and see the effect of pushforwad and pullback into it.
\begin{prop}\label{alldimRumin}
Let $f: U \subseteq \mathbb{H}^1 \to \mathbb{H}^1$, $U$ open, be a contact map such that $f=(f^1,f^2,f^{3}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^1, \mathbb{R}) \right ]^{3}$. Then the pushforward and pullback in the Rumin cohomology are:
\begin{align}
f_* X =& Xf^1 X + Xf^2 Y,\\
f_* Y=& Yf^1 X + Yf^2 Y,\\
f_* (X \wedge T)=& Xf^1 \lambda (f) X\wedge T+ Xf^2 \lambda (f) Y\wedge T,\\
f_* (Y \wedge T)=& Yf^1 \mathcal{C}(f) X\wedge T+ Yf^2 \mathcal{C}(f) Y\wedge T,\\
f_* (X \wedge Y \wedge T)=&\mathcal{C}(f)^2 X \wedge Y \wedge T \\
f^* dx=& Xf^1 dx + Y f^1 dy,\\
f^* dy = & Xf^2 dx + Y f^2 dy,\\
f^* (dx \wedge \theta)=&Xf^1 \mathcal{C}(f) dx \wedge \theta + Y f^1 \mathcal{C}(f) dy \wedge \theta, \label{2dimx}\\
f^* (dy \wedge \theta)= &Xf^2\mathcal{C}(f) dx \wedge \theta + Y f^2 \mathcal{C}(f) dy \wedge \theta, \label{2dimy}\\
f^* (dx \wedge dy \wedge \theta )=&\mathcal{C}(f)^2 dx \wedge dy \wedge \theta. \label{3dim}
\end{align}
\end{prop}
\begin{prop}
Let $f: U \subseteq \mathbb{H}^1 \to \mathbb{H}^1$, $U$ open, be a contact diffeomorphism such that $f=(f^1,f^2,f^{3}) \in \left [ C_\mathbb{H}^2 (\mathbb{H}^1, \mathbb{R}) \right ]^{3}$. Proposition \ref{alldimRumin} becomes
\begin{align}
f_* X = &Xf^1 X + Xf^2 Y,\\
f_* Y= &Yf^1 X + Yf^2 Y,\\
f_* (X \wedge T)=& Xf^1 Tf^3 X\wedge T+ Xf^2 Tf^3 Y\wedge T,\\
f_* (Y \wedge T)= &Yf^1 Tf^3 X\wedge T+ Yf^2 Tf^3 Y\wedge T,\\
f_* (X \wedge Y \wedge T)=& ( Tf^3 )^2 X \wedge Y \wedge T \\
f^* dx=& Xf^1 dx + Y f^1 dy,\\
f^* dy = & Xf^2 dx + Y f^2 dy,\\
f^* (dx \wedge \theta)= & Xf^1 T f^3 dx \wedge \theta + Y f^1 T f^3 dy \wedge \theta,\\
f^* (dy \wedge \theta)= & Xf^2 T f^3 dx \wedge \theta + Y f^2 T f^3 dy \wedge \theta,\\
f^* (dx \wedge dy \wedge \theta )=& ( T f^3 )^2 dx \wedge dy \wedge \theta.
\end{align}
\end{prop}
\end{comment}
\chapter{Orientability} \label{orient4}
The objective of this chapter is to give a definition of orientability in the Heisenberg sense and to study which surfaces can be called orientable in such sense.
One reason to study orientability is that it is possible to define it using the cohomology of differential forms. As we have seen so far, the Rumin cohomology differs from the (usual) Riemannian one, so one can naturally ask how orientability changes in this perspective.
There are, however, other reasons: orientability plays an important role in the theory of currents. Currents are linear functionals on the space of differential forms and can be identified, with some hypotheses, with regular surfaces. This creates an important bridge between analysis and geometry. Such surfaces are usually orientable but not always: in Riemannian geometry there exists a notion of currents (currents mod $2$) made for surfaces that are not necessarily orientable (see, for instance, \cite{MORGAN2}).
Also in the Heisenberg group regular orientable surfaces can be associated to currents, but this happens under different hypotheses from the Riemannian case, and it is not clear whether it is meaningful to study currents mod $2$ in this setting. We will show that also in the Heisenberg group there exist non-orientable surfaces with enough regularity to be associated to currents. This means that, also in the Heisenberg group, it is a meaningful task to study currents mod $2$.\\
First we will state the definitions of $\mathbb{H}$-regularity for low dimensional and low codimensional surfaces. The main reference in this section is \cite{FSSC}. Next we will show that there exist surfaces with the property of being both $\mathbb{H}$-regular and non-orientable in the Euclidean sense. Then we will introduce the notion of Heisenberg orientability ($\mathbb{H}$-orientability) and prove that, under left translations and anisotropic dilations, $\mathbb{H}$-regularity is invariant for $1$-codimensional surfaces and $\mathbb{H}$-orientability is invariant for $\mathbb{H}$-regular $1$-codimensional surfaces.
Finally, we show how the two notions of orientability are related, concluding that non-$\mathbb{H}$-orientable $\mathbb{H}$-regular surfaces exist, at least when $n=1$.\\
The idea is to consider a ``Möbius strip''-like surface as target. We will restrict ourselves and take $S$, a $1$-codimensional surface in $\mathbb{H}^n$, to be $C^1$-Euclidean with $C(S) = \varnothing$, where $C(S)$ is the set of characteristic points of $S$, that is, the points where $S$ fails to satisfy the $\mathbb{H}$-regularity condition. These hypotheses imply that $S$ is $\mathbb{H}$-regular. All these terms will be made precise below. We will then show that it is possible to build such an $S$ as a ``Möbius strip''-like surface.
\begin{comment}
I THINK IT'S CLEAR THAT THE SURFACE OF KSC IS DEFINED SO THAT IT IS PART OF A LOCAL GRAPH, AND HENCE IT IS H-ORIENTABLE
\begin{que}
Is the set defined in \cite{KSC} $\mathbb{H}$-orientable? So it seems because $S= \Omega \cap \{ f=0 \}$ with $\vert \nabla_\mathbb{H} f \vert = \vert (-1,0) \vert=1$. Am I sure about this? [see the paper, page 890]\\
Then it's easily $\mathbb{H}$-orientable and $\vec{t}_\mathbb{H}=T \wedge X$, $n_\mathbb{H} = Y$.\\
Anyway, the point is that $f$ is (I think) global.
\end{que}
Currents, analytical objects,
are linear functional between the space and differential forms and the real line. With certain hypothesis, they can be identified with some class of surface as integrals over them. This creates an important bridge between analysis and geometry. The standard such procedure (non si può sentire) requires the surface to be orientable. Nevertheless, in the Riemannian setting there exists currents mod $2$, which can be interpretated as currents whose orientability is, instead, not taken into consideration (see, for instance, \cite{MORGAN2}).\\
Also in the Heisenberg group currents can be associated to surfaces, which can be asked to satisfy more restrictive conditions (the $\mathbb{H}$-regularity below) than in the Riemannian case, to the point that one could wonder whether these conditions imply also $\mathbb{H}$-orientability (also defined below). In case they do, wondering about the $\mathbb{H}$-orientability of $\mathbb{H}$-regular surfaces will have no meaning, as all $\mathbb{H}$-regular surfaces would be $\mathbb{H}$-orientable. On the other hand, if the $\mathbb{H}$-orientability is not \emph{a priori} determined, then it makes sense to consider currents mod $2$ (whose interpretation, I remind, is to ignore the orientability). To prove that this last case is indeed the right one, one needs to show that there exists a $\mathbb{H}$-regular surface who is $\mathbb{H}$-non-orientable.\\
\end{comment}
\begin{comment}
PS: in $\mathbb{R}^3$, are there other bounded not-orientable surfaces?\\
In three dimensions, there is no unbounded non-orientable surface which does not intersect itself ( http://mathworld.wolfram.com/NonorientableSurface.html ).\\
And We don't want self-intersections.
\end{comment}
\section{$\mathbb{H}$-regularity in $\mathbb{H}^n$}
We state here the definitions of $\mathbb{H}$-regularity for low dimension and low codimension. The terminology comes from Subsection \ref{lefthor}. The main reference in this section is \cite{FSSC}.
\begin{defin}[See 3.1 in \cite{FSSC}]
Let $1\leq k \leq n$. A subset $S \subseteq \mathbb{H}^n$ is a $\mathbb{H}$-\emph{regular} $k$-\emph{dimensional surface} if for all $p \in S$ there exist a neighbourhood $ U \in \mathcal{U}_p$, an open set $ V \subseteq \mathbb{R}^k$ and an injective function $\varphi : V \to U$, $ \varphi \in [C_{\mathbb{H}}^1(V,U)]^{2n+1} $, with $d_H \varphi $ injective, such that $ S \cap U = \varphi (V)$.
\end{defin}
\begin{defin}[See 3.2 in \cite{FSSC}]\label{Hreg}
Let $1\leq k \leq n$. A subset $S \subseteq \mathbb{H}^n$ is a $\mathbb{H}$-\emph{regular} $k$-\emph{codimensional surface} if for all $ p \in S $ there exists a neighbourhood $ U \in \mathcal{U}_p$ and a function $ f : U \to \mathbb{R}^k$, $ f \in [C_{\mathbb{H}}^1(U,\mathbb{R}^k)]^k$, such that $ {\nabla_\mathbb{H} f_1} \wedge \dots \wedge {\nabla_\mathbb{H} f_k} \neq 0 $ on $ U $ and $ S \cap U = \{ f=0 \} $.
\end{defin}
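\begin{rem}
A model example in $\mathbb{H}^1$ is the vertical plane $S= \{ x=0 \}$: taking $f(x,y,t)=x$, one has $\nabla_\mathbb{H} f = X \neq 0$ everywhere and $S = \{ f=0 \}$, so $S$ is a $\mathbb{H}$-regular $1$-codimensional surface.
\end{rem}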
\noindent
We will almost always work with the codimensional definition, that is, with surfaces of higher dimension.\\
If a surface is $\mathbb{H}$-regular, it is natural to associate to it, locally, a normal and a tangent vector:
\begin{defin}
Consider a $\mathbb{H}$-regular $k$-codimensional surface $S$ and $p \in S$. Then the \emph{(horizontal) normal $k$-vector field} $n_{\mathbb{H},p}$ is defined as
$$
n_{\mathbb{H},p} := \frac{ {\nabla_\mathbb{H} f_1}_p \wedge \dots \wedge {\nabla_\mathbb{H} f_k}_p }{ \vert {\nabla_\mathbb{H} f_1}_p \wedge \dots \wedge {\nabla_\mathbb{H} f_k}_p \vert } \in {\prescript{}{}\bigwedge}_{k,p} \mathfrak{h}_1 .
$$
In a natural way, the \emph{tangent $(2n+1-k)$-vector field} $t_{\mathbb{H},p}$ is defined as its dual:
$$
t_{\mathbb{H},p} := * n_{\mathbb{H},p} \in {\prescript{}{}\bigwedge}_{2n+1-k,p} \mathfrak{h},
$$
where the Hodge operator $*$ appears in Definition \ref{hodge}.
\end{defin}
\begin{no}
We defined the regularity of a surface in the Heisenberg sense.
When considering the regularity of a surface in the Euclidean sense, we will say $C^k$\emph{-regular in the Euclidean sense} or $C^k$\emph{-Euclidean} for short.
\end{no}
\begin{lem}\label{quickcomputation}
Let $S$ be a $C^1$-Euclidean surface in $\mathbb{R}^{2n+1} = \mathbb{H}^n$. Then $\dim_{\mathcal{H}_{cc}} S=Q-1=2n+1$ if and only if $\dim_{\mathcal{H}_{E}} S =2n$.
\end{lem}
\noindent
Hence, from now on we may consider a $C^1$-Euclidean surface $S$ and ask it to be $1$-codimensional without further specifications.
\begin{proof}[Proof of Lemma \ref{quickcomputation}]
Consider first $\dim_{\mathcal{H}_{cc}} S=2n+1$. The Hausdorff dimension of $S$ with respect to the Euclidean distance equals the dimension of the tangent plane, which is well defined everywhere by hypothesis; hence this dimension is an integer. By theorems $2.4$-$2.6$ (and fig. 2) in \cite{BTW} or by \cite{BRSC} (for $\mathbb{H}^1$ only), one has that:
$$
\max \{ k, 2k-2n \} \leq \dim_{\mathcal{H}_{cc}} S \leq \min \{ 2k, k+1 \} ,
$$
with $k:=\dim_{\mathcal{H}_{E}} S$.\\
Let us first check the minimum: if $2k\leq k+1$, then $k\leq 1$ and
$$
\max \{ k, 2k-2n \} \leq 2n+1 \leq 2k \leq 2 ,
$$
which is impossible because $2n+1\geq 3$. So $k+1\leq 2k$ and the minimum is $k+1$. For the maximum, if $k \leq 2k-2n$, then $k \geq 2n$ and so either $k=2n$ or $k=2n+1$. But then
$$
2k-2n \leq 2n+1 ( \leq k+1) ,
$$
which says that $2k \leq 2n+1$; this is verified only if $k=2n$. The second case of the maximum is $2k-2n\leq k$, meaning $k\leq 2n$. In this case
$$
k \leq 2n+1 \leq k+1 ,
$$
which again says that the only possibility is $k=2n$.\\
On the other hand, if we consider a $C^1$-Euclidean surface $S$ in $\mathbb{R}^{2n+1}$ with $\dim_{\mathcal{H}_{E}} S =2n$ (a hypersurface in the Euclidean sense), then it follows (see page 64 in \cite{BAL} or \cite{GROMOV}) that $\dim_{\mathcal{H}_{cc}} S=2n+1$.
\end{proof}
\begin{defin}
Denote by $ T_p S $ the space of vectors tangent to $S$ at the point $p$. Define the characteristic set $C(S)$ of a surface $S \subseteq \mathbb{H}^n$ as
$$
C(S):= \left \{ p \in S ; \ T_p S \subseteq \mathfrak{h}_{1,p} \right \}.
$$
Saying that a point $p$ belongs to $C(S)$ is the same as saying that $n_{\mathbb{H},p}=0$, which means that, in the $k$-codimensional case, it is not possible to find a map $f$ as in Definition \ref{Hreg}. Vice versa, $p \notin C(S)$ if $n_{\mathbb{H},p} \neq 0$.
\end{defin}
\begin{obs}[1.1 in \cite{BAL} and 2.16 in \cite{MAG}]
The set of characteristic points of a $k$-codimensional surface $S \subseteq \mathbb{H}^n$ always has measure zero:
$$
\mathcal{H}_{cc}^{2n+2-k} \left ( C(S) \right ) =0.
$$
\end{obs}
\begin{obs}[See page 195 in \cite{FSSC}]\label{Cpoints}
Consider a $C^1$-Euclidean surface $S$ with $C(S)= \varnothing$. It follows that $S$ is a
$\mathbb{H}$-regular surface in $\mathbb{H}^n$.
\end{obs}
\begin{proof}
Since S is $C^1$-Euclidean, it can be written as
$$
S=C(S) \cup \left ( S \ \setminus \ C(S) \right ),
$$
where $S \ \setminus \ C(S) $ is $\mathbb{H}$-regular by definition of $C(S)$. Since $C(S)= \varnothing$, $S$ is also $\mathbb{H}$-regular.
\end{proof}
\section{The Möbius Strip in $\mathbb{H}^1$}
In this section we show that there exist $\mathbb{H}$-regular surfaces which are non-orientable in the Euclidean sense. We prove it for a ``Möbius strip''-like surface. This result will be crucial later on. We define orientability in the Euclidean sense precisely later, in Definition \ref{Eorientable}; in this section we only use the fact that the Möbius strip is not orientable.\\\\
Let $\mathcal{M}$ be any Möbius strip. Is it true that $C(\mathcal{M}) = \varnothing$? Or, if this is not possible, is there a surface $\widebar{\mathcal{M}} \subseteq \mathcal{M}$, still non-orientable in the Euclidean sense, such that $C( \widebar{\mathcal{M}} ) = \varnothing$?\\
\noindent
We will make this question more precise and everything will be proved carefully.
Our strategy will be the following: considering one specific parametrization of the Möbius strip $\mathcal{M} $, we show that there is at most one characteristic point $\bar p$. Therefore the surface
$$
\widebar{\mathcal{M}} :=\mathcal{M} \ \setminus \ U_{\bar p},
$$
where $U_{\bar p}$ is a neighbourhood of $\bar p$ taken with smooth boundary, is indeed $C^1$-Euclidean with $C(\widebar{\mathcal{M}}) = \varnothing$ and non-orientable in the Euclidean sense.
In particular, to prove the existence of $\widebar{\mathcal{M}}$, some steps are needed: 1. parametrize the Möbius strip $\mathcal{M}$ as $\gamma(r,s)$, 2. write the two tangent vectors $\vec\gamma_r$ and $\vec\gamma_s$ in Heisenberg coordinates, 3. compute the components of the normal vector field $\vec N=\vec\gamma_r \times_H \vec\gamma_s$ and 4. find the points at which the first and the second components vanish simultaneously. \\\\%For the full calculations, see Appendix \ref{appMo}.
\noindent
Consider a fixed $R \in \mathbb{R}^+ $ and $w \in \mathbb{R}^+$ so that $w<R$. Then consider the map
$$
\gamma : [0,2 \pi ) \times [-w,w] \to \mathbb{R}^3
$$
defined as follows
\begin{align*}
\gamma (r,s):&=(x(r,s), y(r,s), t(r,s))\\
&= \left ( \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r , \ \left [R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r , \ s \sin \left ( \frac{r}{2} \right ) \right ).
\end{align*}
This is indeed the parametrization of a Möbius strip of half-width $w$ with midcircle of radius $R$ in $\mathbb{R}^3$. We then set $\mathcal{M} := \gamma \left ( [0,2 \pi ) \times [-w,w] \right ) \subseteq \mathbb{R}^3$.
\begin{prop}\label{Mobius}
Let $\mathcal{M}$ be the Möbius strip parametrised by the curve $\gamma$. Then $\mathcal{M}$ contains at most one characteristic point $\bar p$, and so there exists a $1$-codimensional surface $\widebar{\mathcal{M}} \subseteq \mathcal{M}$, $\bar p \notin \widebar{\mathcal{M}}$, $\widebar{\mathcal{M}}$ still non-orientable in the Euclidean sense, such that $C( \widebar{\mathcal{M}} ) = \varnothing$.
\end{prop}
\noindent
This says that $\widebar{\mathcal{M}}$ is a $\mathbb{H}$-regular surface (by Observation \ref{Cpoints}) and is non-orientable in the Euclidean sense. The proof will follow after some lemmas.
\begin{lem}[Step 1.]
Consider the parametrization $\gamma$. The two tangent vector fields of $\gamma$, in the basis $\{\partial_x, \partial_y, \partial_t \}$, are
\begin{align*}
\vec\gamma_r (r,s) = & \bigg ( - \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \cos r - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r , \\
& - \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \sin r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r, \ \frac{s}{2} \cos \left ( \frac{r}{2} \right ) \bigg ),\\
\text{and}\hspace{0.8cm} &\\
\vec\gamma_s (r,s) =& \left ( \cos \left ( \frac{r}{2} \right ) \cos r , \ \cos \left ( \frac{r}{2} \right ) \sin r , \ \sin \left ( \frac{r}{2} \right ) \right ).
\end{align*}
\end{lem}
\begin{lem}[Step 2.]
Consider the parametrization $\gamma$. The two tangent vector fields $\vec\gamma_r$ and $\vec\gamma_s$ can be written in Heisenberg coordinates as:
\begin{align*}
\vec\gamma_r (r,s)= & \left ( - \frac{1}{2}s \sin \left ( \frac{r}{2} \right ) \cos r - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r \right ) X \\
&+ \bigg ( - \frac{1}{2}s \sin \left ( \frac{r}{2} \right ) \sin r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r \bigg ) Y\\
& + \left ( s \frac{1}{2} \cos \left ( \frac{r}{2} \right ) - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \frac{1}{2} \right )T,\\
\text{and}\hspace{0.8cm} &\\
\vec\gamma_s (r,s) = & \cos \left ( \frac{r}{2} \right ) \cos r X + \cos \left ( \frac{r}{2} \right ) \sin r Y + \sin \left ( \frac{r}{2} \right ) T .
\end{align*}
\end{lem}
\noindent
The computation is at Observations \ref{Hcoordinates1} and \ref{Hcoordinates2}.\\\\
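The conversion can also be checked symbolically. The identification below, $X=\partial_x-\tfrac{y}{2}\partial_t$, $Y=\partial_y+\tfrac{x}{2}\partial_t$, $T=\partial_t$, is an assumption on the convention in use, chosen because it reproduces exactly the $T$-components stated in the lemma: a vector $a\partial_x+b\partial_y+c\partial_t$ at the point $(x,y,t)$ then reads $aX+bY+\left(c+\frac{ay-bx}{2}\right)T$. A sympy sketch:

```python
import sympy as sp

r, s, R = sp.symbols('r s R', real=True)
A = R + s * sp.cos(r / 2)
# the parametrization gamma(r, s) of the Moebius strip
x, y, t = A * sp.cos(r), A * sp.sin(r), s * sp.sin(r / 2)

# Euclidean tangent vectors, components in the basis {d_x, d_y, d_t}
gr = [sp.diff(e, r) for e in (x, y, t)]
gs = [sp.diff(e, s) for e in (x, y, t)]

# T-component of a d_x + b d_y + c d_t at (x, y, t), assuming the
# convention X = d_x - (y/2) d_t, Y = d_y + (x/2) d_t, T = d_t
def T_comp(a, b, c):
    return c + (a * y - b * x) / 2

# the X- and Y-components are unchanged; the T-components match Step 2
assert sp.simplify(T_comp(*gr) - (s * sp.cos(r / 2) / 2 - A**2 / 2)) == 0
assert sp.simplify(T_comp(*gs) - sp.sin(r / 2)) == 0
```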
Call $\vec{N} (r,s)= \vec{N}_1 (r,s) X + \vec{N}_2 (r,s) Y + \vec{N}_3 (r,s) T$ the normal vector field of $\mathcal{M}$. This vector field is given by the cross product of the two tangent vector fields $\vec\gamma_r$ and $\vec\gamma_s$. Specifically:
$$
\vec{N}= \vec{N}_1 X + \vec{N}_2 Y + \vec{N}_3 T =
\vec\gamma_r \times_\mathbb{H} \vec\gamma_s =
$$
\[ =
\begin{vmatrix}
X & Y & T \\
\frac{-s \cos r \sin \frac{r}{2} }{2} - \left [ R+s \cos \frac{r}{2} \right ] \sin r&
\frac{-s \sin r \sin \frac{r}{2} }{2} + \left [ R+s \cos \frac{r}{2} \right ] \cos r &
\frac{s \cos \frac{r}{2} }{2} - \frac{ \left [ R+s \cos \frac{r}{2} \right ]^2 }{2}\\
\cos \left ( \frac{r}{2} \right ) \cos r & \cos \left ( \frac{r}{2} \right ) \sin r & \sin \left ( \frac{r}{2} \right )
\end{vmatrix}.
\]
\begin{lem}[Step 3.]
Consider $\vec{N} (r,s)= \vec{N}_1 (r,s) X + \vec{N}_2 (r,s) Y + \vec{N}_3 (r,s) T$ the normal vector field of $\mathcal{M}$. A calculation shows that:
\begin{align*}
\vec{N}_1 =&
- \frac{1}{2}s \sin r
+ \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r \sin \left ( \frac{r}{2} \right )
+ \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \frac{1}{2} \cos \left ( \frac{r}{2} \right ) \sin r ,\\
\vec{N}_2 =&
\left ( - z^5 + \frac{1}{2} z^3 \right ) s^2
+ \left ( - 2 (R+1)z^4 + ( R+3) z^2 - \frac{1}{2} \right ) s
- ( R^2 + 2 R) z^3 \\
& + \left ( \frac{1}{2} R^2 + 2 R \right ) z ,\\
\text{and}&\\
\vec{N}_3 =& - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos \left ( \frac{r}{2} \right ) ,
\end{align*}
with $z=\cos \left ( \frac{r}{2} \right ) $, $r\in [0,2\pi)$ and $s \in [-w,w]$.
\end{lem}
\noindent
The full computation is at Observations \ref{TvecN}, \ref{XvecN} and \ref{YvecN}.
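The three components can likewise be recomputed with a computer algebra system. The sketch below uses the same (assumed) convention for $X,Y,T$ as before, and the substitution $h=r/2$ so that sympy can expand $\cos r$ and $\sin r$ in half-angle terms:

```python
import sympy as sp

# h stands for r/2, so z = cos(r/2) as in the lemma
h, s, R = sp.symbols('h s R', real=True)
z, A = sp.cos(h), R + s * sp.cos(h)
cr, sr = sp.cos(2 * h), sp.sin(2 * h)  # cos r, sin r

# Heisenberg components (X, Y, T) of gamma_r and gamma_s from Step 2
gamma_r = sp.Matrix([-s * sp.sin(h) * cr / 2 - A * sr,
                     -s * sp.sin(h) * sr / 2 + A * cr,
                     s * z / 2 - A**2 / 2])
gamma_s = sp.Matrix([z * cr, z * sr, sp.sin(h)])

# cross product of the component vectors: the formal determinant
# with first row (X, Y, T)
N = gamma_r.cross(gamma_s)

# the components stated in Step 3
N1 = -s * sr / 2 + A * cr * sp.sin(h) + A**2 * z * sr / 2
N2 = ((-z**5 + z**3 / 2) * s**2
      + (-2 * (R + 1) * z**4 + (R + 3) * z**2 - sp.Rational(1, 2)) * s
      - (R**2 + 2 * R) * z**3 + (R**2 / 2 + 2 * R) * z)
N3 = -A * z

for computed, stated in zip(N, (N1, N2, N3)):
    assert sp.simplify(sp.expand_trig(computed - stated)) == 0
```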
\begin{proof}[Proof of Proposition \ref{Mobius}]
To find pairs of parameters $(r,s)$ corresponding to characteristic points we have to impose
$$
\begin{cases}
\vec{N}_1 (r,s)= 0\\
\vec{N}_2 (r,s)= 0.
\end{cases}
$$
We find that $\vec{N}_1 (r,s)= 0$ only at the points $(x(r,s),y(r,s),t(r,s))$ with
$$
(r,s)=
\begin{cases}
(0,s), \quad &s \in [-w,w], \quad \text{or} \\
\left (r, \frac{ -(R+1) z^2 + 1 \pm \sqrt{ z^4 -( R+2 ) z^2 +1 } }{ z^3} \right ),
\quad &r \in [0, 2 \pi), \ r \neq \pi, \ z=\cos \frac{r}{2}.
\end{cases}
$$
Evaluating these possibilities on $\vec{N}_2 (r,s)= 0$ (full computation is at Observations \ref{N1=0N2=0} and \ref{partialconclusion}), we find that the system is verified only by the pair (that defines then a characteristic point)
$$
(r,s)= \left (0,\frac{ -2R+1 - \sqrt{-4R+1} }{2} \right ),\quad \text{when} \quad 0< R < \frac{1}{4},
$$
which corresponds to the point $\bar p = (\bar x,\bar y,\bar t) = \left ( \frac{1}{2} - \sqrt{-R + \frac{1}{4} } ,0,0 \right ) $:
$$
\begin{cases}
\bar x = [R+s \cos ( \frac{r}{2} ) ] \cos r =R +\frac{ -2R+1 - \sqrt{-4R+1} }{2} = \frac{ 1 - \sqrt{-4R+1} }{2} =\frac{1}{2} - \sqrt{-R + \frac{1}{4} }>0 \\
\bar y = [R+s \cos ( \frac{r}{2} ) ] \sin r=0 \\
\bar t = s \sin ( \frac{r}{2} )=0.
\end{cases}
$$
Notice that it is not strange that the number of characteristic points depends on the radius $R$, since changing the radius is not an anisotropic dilation.\\
Therefore the surface
$$
\widebar{\mathcal{M}} :=\mathcal{M} \ \setminus \ U_{\bar p},
$$
where $U_{\bar p}$ is a neighbourhood of $\bar p$ with smooth boundary, is indeed a $C^1$-Euclidean surface with $C(\widebar{\mathcal{M}}) = \varnothing$ (and so $1$-codimensional $\mathbb{H}$-regular) and not Euclidean-orientable. This completes the proof.
\end{proof}
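The characteristic pair can be sanity-checked numerically: for any sample $0<R<\frac{1}{4}$, the stated $(r,s)$ annihilates both $\vec N_1$ and $\vec N_2$ up to floating-point error. A minimal sketch using the formulas of Step 3:

```python
import math

def N1(r, s, R):
    # first Heisenberg component of the normal field (Step 3)
    A = R + s * math.cos(r / 2)
    return (-s * math.sin(r) / 2 + A * math.cos(r) * math.sin(r / 2)
            + A**2 * math.cos(r / 2) * math.sin(r) / 2)

def N2(r, s, R):
    # second component, as a polynomial in z = cos(r/2)
    z = math.cos(r / 2)
    return ((-z**5 + z**3 / 2) * s**2
            + (-2 * (R + 1) * z**4 + (R + 3) * z**2 - 0.5) * s
            - (R**2 + 2 * R) * z**3 + (R**2 / 2 + 2 * R) * z)

R = 0.2  # any sample radius with 0 < R < 1/4
s_bar = (-2 * R + 1 - math.sqrt(1 - 4 * R)) / 2
assert abs(N1(0.0, s_bar, R)) < 1e-9
assert abs(N2(0.0, s_bar, R)) < 1e-9
```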
\section{Comparing Orientabilities}
In this section we first recall the definition of Euclidean orientability, then introduce the notion of Heisenberg orientability ($\mathbb{H}$-orientability) and show the connection between differential forms and orientability. Next we prove that, under left translations and anisotropic dilations, $\mathbb{H}$-regularity is invariant for $1$-codimensional surfaces and $\mathbb{H}$-orientability is invariant for $\mathbb{H}$-regular $1$-codimensional surfaces. Finally, we show how the two notions of orientability are related, concluding that non-$\mathbb{H}$-orientable $\mathbb{H}$-regular surfaces exist, at least when $n=1$.
\begin{obs}
Recall that, if $S$ is a $\mathbb{H}$-regular $1$-codimensional surface in $\mathbb{H}^n$, Definition \ref{Hreg} becomes:
\begin{equation}\label{eq1}
\text{for all } p \in S \ \text{ there exists a neighbourhood } U \text{ and } f : U \to \mathbb{R}, \ f \in C_{\mathbb{H}}^1 (U, \mathbb{R}), \text{ so that }
\end{equation}
$$
S \cap U = \{ f=0 \} \text{ and } \nabla_{\mathbb{H}} f \neq 0 \text{ on } U.
$$
On the other hand, in case $S$ is $C^1$-Euclidean then (see for instance the introduction of \cite{BAL})
\begin{equation}\label{eq2}
\text{for all } p \in S \ \text{ there exists a neighbourhood } U \text{ and } g : U \to \mathbb{R}, \ g \in C^1(U, \mathbb{R}), \text{ so that }
\end{equation}
$$
S \cap U = \{ g=0 \} \text{ and } \nabla g \neq 0 \text{ on } U.
$$
These two notions of regularity are obviously similar. First we will connect each of them to a definition of orientability; then we will compare these last two definitions.
\end{obs}
\subsection{$\mathbb{H}$-Orientability in $\mathbb{H}^n$}\label{Horient}
In this subsection we recall the notion of Euclidean orientability and define the Heisenberg orientability. We add some observations and an explicit representation of a pair of continuous global independent vector fields tangent to an $\mathbb{H}$-orientable surface in the Heisenberg sense.
\begin{no}
Denote by $ T S $ the space of vector fields tangent to a surface $S$. A vector $v$ is \emph{normal to $S$} (written $v \perp S$) if
$$
\langle v,w \rangle =0 \ \ \forall w \in TS,
$$
where $\langle , \rangle$ appears in Definition \ref{scal}.
\end{no}
\begin{comment}
\begin{defin}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$ with $C(S) = \varnothing$. The following are equivalent:
\begin{enumerate}[label=(\roman*)]
\item
$S$ is \emph{Euclidean-orientable}
\item
there exists a continuous global $1$-vector field $n_E=\sum_{i=1}^{n} \left ( n_{E,i} \partial_{x_i} + n_{E,n+i} \partial_{y_i} \right ) + n_{E,2n+1} \partial_t \neq 0$ on $S$
which is normal to $S$.\\
Such $n_{E}$ is called \emph{Euclidean normal vector field of} $S$.
\item
there exists a continuous global $2n$-vector field $t_E$ on $S$, so that $t_E$ is tangent to $S$ in the sense that $*t_E$ is normal to $S$.
\end{enumerate}
The symbol $*$ represents the Hodge operator (see definition at \ref{hodge}).
\end{defin}
\end{comment}
\begin{defin}\label{Eorientable}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$ with $C(S) = \varnothing$. We say that $S$ is \emph{Euclidean-orientable} (or \emph{orientable in the Euclidean sense}) if there exists a continuous global $1$-vector field
$$
n_E=\sum_{i=1}^{n} \left ( n_{E,i} \partial_{x_i} + n_{E,n+i} \partial_{y_i} \right ) + n_{E,2n+1} \partial_t \neq 0 ,
$$
defined on $S$ and normal to $S$.\\
Such $n_{E}$ is called \emph{Euclidean normal vector field of} $S$.
\end{defin}
\begin{lem}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$ with $C(S) = \varnothing$. The following are equivalent:
\begin{enumerate}[label=(\roman*)]
\item
$S$ is \emph{Euclidean-orientable}
\item
there exists a continuous global $2n$-vector field $t_E$ on $S$, so that $t_E$ is tangent to $S$ in the sense that $*t_E$ is normal to $S$.
\end{enumerate}
The symbol $*$ represents the Hodge operator (see Definition \ref{hodge}).
\end{lem}
\begin{obs}
It is straightforward that, up to a choice of sign,
$$t_E = * n_E.$$
\end{obs}
\noindent
It is possible to give an equivalent definition of orientability (both Euclidean and Heisenberg) using differential forms: a manifold is orientable (in the respective sense) if and only if there exists a volume form on it, where a volume form is a never-null form of maximal order. In particular:
\begin{obs}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$ with $C(S) = \varnothing$. A fourth equivalent condition is that $S$ allows a volume form $\omega_E$, which can be chosen so that the following property holds:
$$
\langle \omega_E \vert t_E \rangle=1.
$$
\end{obs}
\begin{obs}\label{Elocally}
Both vectors $t_E$ and $n_E$ can be written in local components. For example, if $n=1$, condition \eqref{eq2} locally gives $n_{E}=\mu \nabla g$, with $\mu \in C^\infty (\mathbb{H}^1,\mathbb{R})$.
So,
$$
n_E= n_{E,1} \partial_x + n_{E,2} \partial_y + n_{E,3} \partial_t = \mu \partial_x g \partial_x +\mu \partial_y g \partial_y + \mu \partial_t g \partial_t
$$
and then
$$
t_E = * n_E = \mu \partial_x g \partial_y \wedge \partial_t - \mu \partial_y g \partial_x \wedge \partial_t + \mu \partial_t g \partial_x \wedge \partial_y.
$$
In this case, the corresponding volume form is
$$
\omega_E= \frac{ n_{E,1} }{ n_{E,1}^2+n_{E,2}^2+n_{E,3}^2 } dy \wedge dt
- \frac{ n_{E,2} }{ n_{E,1}^2+n_{E,2}^2+n_{E,3}^2 } dx \wedge dt
$$
$$
+ \frac{ n_{E,3} }{ n_{E,1}^2+n_{E,2}^2+n_{E,3}^2 } dx \wedge dy.
$$
\end{obs}
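The claimed normalisation $\langle \omega_E \vert t_E\rangle=1$ can be checked componentwise: in the basis $\{\partial_y\wedge\partial_t, \partial_x\wedge\partial_t, \partial_x\wedge\partial_y\}$ and its dual, the pairing is the dot product of the coefficient triples. A short symbolic sketch (assuming $n_E \neq 0$, so the norm below does not vanish):

```python
import sympy as sp

n1, n2, n3 = sp.symbols('n1 n2 n3', real=True)
norm2 = n1**2 + n2**2 + n3**2  # nonzero wherever n_E != 0

# coefficients of t_E and omega_E in the ordered bases
# (dy^dt, dx^dt, dx^dy) and their duals, signs included
t_E = (n1, -n2, n3)
omega_E = (n1 / norm2, -n2 / norm2, n3 / norm2)

pairing = sum(a * b for a, b in zip(omega_E, t_E))
assert sp.simplify(pairing - 1) == 0
```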
\begin{defin}
Consider two vectors $v, w \in \mathfrak{h}_1
$ in $\mathbb{H}^n$. We say that $v$ and $w$ are \emph{ orthogonal in the Heisenberg sense } $(v \perp_H w)$ if
$$
\langle v,w \rangle_H = 0,
$$
where $\langle \cdot , \cdot \rangle_H$ is the scalar product that makes $X_j$'s and $Y_j$'s orthonormal (see Observation \ref{scal}).
\end{defin}
\begin{defin}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$, with $C(S) = \varnothing$. Consider also a vector $v \in HT\mathbb{H}^n$. We say that $v$ and $S$ are \emph{orthogonal in the Heisenberg sense } $(v \perp_H S)$ if
$$
\langle v,w \rangle_H = 0, \quad \text{for all } \ w \in TS.
$$
In the same way one can say that a $k$-vector field $v$ on $S$ is \emph{tangent to} $S$ \emph{in the Heisenberg sense} if
$$
\langle *v,w\rangle_H = 0, \quad \text{for all } \ w \in TS.
$$
\end{defin}
\begin{comment}
\begin{defin}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$ with $C(S) = \varnothing$. The following are equivalent:
\begin{enumerate}[label=(\roman*)]
\item
$S$ is $\mathbb{H}$\emph{-orientable}
\item
there exists a continuous global $1$-vector field $n_{\mathbb{H}}=\sum_{i=1}^{n} \left ( n_{\mathbb{H},i} X_i + n_{\mathbb{H},n+i} Y_i \right ) \neq 0$ on $S$ so that $n_{\mathbb{H}} \perp_H S$ which is $\mathbb{H}$-normal to $S$.\\
Such $n_{\mathbb{H}}$ will be called \emph{Heisenberg normal vector field of} $S$.
\item
there exists a continuous global $2n$-vector field $t_\mathbb{H}$ on $S$ so that $t_\mathbb{H}$ is $\mathbb{H}$-tangent to S in the sense that $* t_\mathbb{H}$ is $\mathbb{H}$-normal to $S$.
\end{enumerate}
\end{defin}
\end{comment}
\begin{defin}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$ with $C(S) = \varnothing$. We say that $S$ is $\mathbb{H}$\emph{-orientable} (or \emph{orientable in the Heisenberg sense}) if there exists a continuous global $1$-vector field
$$
n_{\mathbb{H}}=\sum_{i=1}^{n} \left ( n_{\mathbb{H},i} X_i + n_{\mathbb{H},n+i} Y_i \right ) \neq 0,
$$
defined on $S$ so that $n_{\mathbb{H}}$ and $S$ are orthogonal in the Heisenberg sense ($n_{\mathbb{H}} \perp_H S$).\\
Such $n_{\mathbb{H}}$ will be called \emph{Heisenberg normal vector field of} $S$.
\end{defin}
\begin{lem}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$ with $C(S) = \varnothing$. The following are equivalent:
\begin{enumerate}[label=(\roman*)]
\item
$S$ is $\mathbb{H}$\emph{-orientable}
\item
there exists a continuous global $2n$-vector field $t_\mathbb{H}$ on $S$ so that $t_\mathbb{H}$ is $\mathbb{H}$-tangent to S in the sense that $* t_\mathbb{H}$ is $\mathbb{H}$-normal to $S$.
\end{enumerate}
\end{lem}
\begin{obs}
Again, one can easily say that, up to a choice of sign,
$$t_\mathbb{H}= * n_\mathbb{H}.$$
\end{obs}
\noindent
As before, it is possible to give an equivalent definition of Heisenberg orientability using a volume form on the Rumin complex:
\begin{obs}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$ with $C(S) = \varnothing$. A fourth equivalent condition is that $S$ allows a volume form $\omega_\mathbb{H}$ (again a never-null form of maximal order), which can be chosen so that the following property holds:
$$
\langle \omega_\mathbb{H} \vert t_\mathbb{H} \rangle=1.
$$
\end{obs}
\begin{obs}\label{Hlocally}
As in Observation \ref{Elocally}, one can write both $n_\mathbb{H}$ and $t_\mathbb{H}$ locally in $\mathbb{H}^1$: by condition \eqref{eq1}, locally $n_\mathbb{H}=\lambda \nabla_{\mathbb{H}} f$, with $\lambda \in C^\infty (\mathbb{H}^1,\mathbb{R})$.
So,
$$
n_{\mathbb{H}} = n_{\mathbb{H},1} X +n_{\mathbb{H},2} Y =\lambda X f X + \lambda Y f Y
$$
and, since $t_{\mathbb{H}} = * n_{\mathbb{H}} = n_{\mathbb{H},1} Y \wedge T - n_{\mathbb{H},2} X \wedge T $,
$$
t_{\mathbb{H}} = \lambda X f Y \wedge T - \lambda Y f X \wedge T .
$$
In this case, the corresponding volume form is
$$
\omega_\mathbb{H}= \frac{ n_{\mathbb{H},1} }{ n_{\mathbb{H},1}^2+n_{\mathbb{H},2}^2} dy \wedge \theta
- \frac{ n_{\mathbb{H},2} }{ n_{\mathbb{H},1}^2+n_{\mathbb{H},2}^2 } dx \wedge \theta.
$$
\end{obs}
\begin{ex}
Consider an $\mathbb{H}$-orientable 1-codimensional surface $S$ in $\mathbb{H}^1$. Then there exist two continuous global linearly independent vector fields $ \vec{r}$ and $ \vec{s}$ tangent to $S$, that is, $ \vec{r}_p, \vec{s}_p \in T_p S$ for each point $p \in S$. With the previous notation, we can explicitly find such a pair by solving the following list of conditions:
\begin{multicols}{2}
\begin{enumerate}[nosep]
\item
$\langle \vec{r}, \vec{s} \rangle_{H} =0$,
\item
$\langle \vec{r}, n_\mathbb{H} \rangle_{H} =0$,
\item
$\langle \vec{s}, n_\mathbb{H} \rangle_{H} =0$,
\item
$\vert \vec{r} \vert_{H} =1$,
\item
$\vert \vec{s} \vert_{H} =1$,
\item
$\vec{r} \times \vec{s} = n_\mathbb{H}$,
\item
$\vec{r} \wedge \vec{s} = t_\mathbb{H}.$
\end{enumerate}
\end{multicols}
\noindent
Furthermore, one can (but it is not necessary) choose $\vec{r} = T$ since $n_\mathbb{H} \in \spn \{X,Y\}$. Then one can take $\vec{s} = aX+bY$, so the first two conditions are satisfied.
The third condition is $\langle \vec{s}, n_\mathbb{H} \rangle_{H} =0$, meaning
$$
a n_{\mathbb{H},1} + b n_{\mathbb{H},2} =0,
$$
and the solution is
$$
\begin{cases}
a= c n_{\mathbb{H},2}\\
b = -c n_{\mathbb{H},1}.
\end{cases}
$$
with $c$ arbitrary. Then, by Observation \ref{Hlocally}, there is a local function $f$ so that $\vec{s} = c n_{\mathbb{H},2} X - c n_{\mathbb{H},1} Y$ becomes
$$
\vec{s} = c \lambda Y f X -c \lambda X f Y.
$$
The fourth condition comes for free. The fifth one is $\vert \vec{s} \vert_{\mathbb{H}} =1$, which gives
$$
1= \sqrt{c^2 \lambda^2 (Y f)^2 + c^2 \lambda^2 (X f)^2} = \vert c \lambda\vert \sqrt{ (Y f)^2 + (X f)^2} = \vert c \lambda\vert \cdot \vert \nabla_\mathbb{H} f \vert,
$$
meaning
$$
c=\pm \frac{1}{ \vert \lambda \nabla_\mathbb{H} f \vert}.
$$
So one has that
\begin{align*}
\vec{s} &= \pm \frac{1}{ \vert \lambda \nabla_\mathbb{H} f \vert} \left ( \lambda Y f X - \lambda X f Y \right )=
\pm \frac{\lambda}{ \vert \lambda \vert } \frac{ Y f X - X f Y }{ \vert \nabla_\mathbb{H} f \vert}\\
&=
\pm \operatorname{sign} (\lambda) \left (
\frac{ Y f }{ \vert \nabla_\mathbb{H} f \vert}X - \frac{ X f }{ \vert \nabla_\mathbb{H} f \vert}Y
\right ) .
\end{align*}
The sixth condition is $\vec{r} \times \vec{s} = n_{\mathbb{H}}$, so:
$$
n_{\mathbb{H}}= \vec{r} \times \vec{s} =
\begin{vmatrix}
X & Y & T \\
0 & 0 & 1 \\
c n_{\mathbb{H},2} & -c n_{\mathbb{H},1} & 0
\end{vmatrix}
=
c n_{\mathbb{H},2} Y + c n_{\mathbb{H},1} X =c n_{\mathbb{H}}.
$$
Then it is necessary to take $c=1$ and one has $\vert \lambda \nabla_\mathbb{H} f \vert = 1$, namely,
$$
\lambda = \pm \frac{1}{\vert \nabla_\mathbb{H} f \vert },
$$
and
$$
\vec{s} = \lambda Y f X - \lambda X f Y = \pm \left (
\frac{ Y f }{ \vert \nabla_\mathbb{H} f \vert}X - \frac{ X f }{ \vert \nabla_\mathbb{H} f \vert}Y
\right ) .
$$
Finally, we verify $\vec{r} \wedge \vec{s} = t_{\mathbb{H}}$ (seventh and last condition):
$$
\vec{r} \wedge \vec{s} = T \wedge ( \lambda Y f X - \lambda X f Y ) =\lambda X f Y \wedge T - \lambda Y f X \wedge T= t_{\mathbb{H}}.
$$
\end{ex}
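The concluding checks of the example reduce to component arithmetic in the frame $(X,Y,T)$: with $\vec r = T = (0,0,1)$ and $\vec s = (n_{\mathbb{H},2}, -n_{\mathbb{H},1}, 0)$, conditions 3, 5 and 6 can be verified for any unit normal. A numeric sketch with a sample direction:

```python
import math

# a sample unit Heisenberg normal n_H = n1 X + n2 Y
theta = 0.7
n1, n2 = math.cos(theta), math.sin(theta)

r_vec = (0.0, 0.0, 1.0)   # r = T
s_vec = (n2, -n1, 0.0)    # s = n2 X - n1 Y  (the case c = 1)

def cross(u, v):
    # cross product of component vectors in the frame (X, Y, T)
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# condition 3: <s, n_H>_H = 0 ; condition 5: |s|_H = 1
assert abs(s_vec[0] * n1 + s_vec[1] * n2) < 1e-12
assert abs(math.hypot(s_vec[0], s_vec[1]) - 1.0) < 1e-12
# condition 6: r x s = n_H = (n1, n2, 0)
assert all(abs(a - b) < 1e-12
           for a, b in zip(cross(r_vec, s_vec), (n1, n2, 0.0)))
```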
\subsection{Invariances}
In this subsection we prove that, for a $1$-codimensional surface, the $\mathbb{H}$-regularity is invariant under left translations and anisotropic dilations. Furthermore, for an $\mathbb{H}$-regular $1$-codimensional surface, the $\mathbb{H}$-orientability is invariant under the same two types of transformations.
\begin{prop}\label{tau_prop}
Consider the left translation map $\tau_{\bar{p}} : \mathbb{H}^n \to \mathbb{H}^n$
, $\bar{p} \in \mathbb{H}^n$, and an $\mathbb{H}$-regular $1$-codimensional surface $S$.\\
Then $\tau_{\bar{p}} S := \{ \bar{p}*p ; \ p \in S \}$ is again an $\mathbb{H}$-regular $1$-codimensional surface.
\end{prop}
\begin{proof}
Since $\tau_{\bar{p}} S = \{ \bar{p}*p ; p \in S \}$, for all $ q \in \tau_{\bar{p}} S$ there exists a point $ p \in S$ so that $q=\bar{p}*p$.
For such $p \in S $, there exists a neighbourhood $U_p$ and a function $f: U_p \to \mathbb{R}$ so that $S \cap U_p = \{ f=0 \}$ and $\nabla_\mathbb{H} f \neq 0$ on $U_p$.
Define $U_q := \tau_{\bar{p}} U_p = \bar{p} * U_p $, which is a neighbourhood of $q= \bar{p}*p$, and a function $\tilde{f}: = f \circ \tau_{\bar{p}}^{-1} : U_q \to \mathbb{R} $.
Then, for all $ q' \in U_q$, writing $q'= \bar{p}* p' $ with $p' \in U_p$,
$$
\tilde{f}(q') = (f \circ \tau_{\bar{p}}^{-1})(q') = f (\bar{p}^{-1} * \bar{p}* p' ) = f(p'),
$$
so that $\tilde{f}(q')=0$ if and only if $p' \in S \cap U_p$. Then
$$
\tau_{\bar{p}} S \cap U_q = \{ \tilde{f}=0 \}.
$$
Furthermore, on $U_q$, and using Definition \ref{leftinv},
\begin{align*}
\nabla_\mathbb{H} \tilde{f} = &\nabla_\mathbb{H} ( f \circ \tau_{\bar{p}}^{-1} ) = \nabla_\mathbb{H} ( f \circ \tau_{\bar{p}^{-1}}) =
\sum_{i=1}^n \bigg (
X_i( f \circ \tau_{\bar{p}^{-1}}) X_i + Y_i ( f \circ \tau_{\bar{p}^{-1}}) Y_i
\bigg )\\
=&
\sum_{i=1}^n \bigg (
[X_i ( f )\circ \tau_{\bar{p}^{-1}} ] X_i + [ Y_i ( f ) \circ \tau_{\bar{p}^{-1}} ] Y_i
\bigg )
\neq 0
\end{align*}
as $X_i ( f )\circ \tau_{\bar{p}^{-1}}$ and $ Y_i ( f ) \circ \tau_{\bar{p}^{-1}}$ are defined on $U_q$ and, at each point, at least one of them is nonzero by the hypothesis that $\nabla_\mathbb{H} f \neq 0$ on $U_p$.
\end{proof}
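The computation of $\nabla_\mathbb{H} \tilde{f}$ rests on the left invariance of the horizontal frame, $X_i( f \circ \tau_{\bar{p}^{-1}}) = X_i ( f )\circ \tau_{\bar{p}^{-1}}$. This can be checked symbolically in $\mathbb{H}^1$; the sketch below assumes the convention $X=\partial_x-\tfrac{y}{2}\partial_t$ with group law $(a,b,c)*(x,y,t)=(a+x,\ b+y,\ c+t+\tfrac{ay-bx}{2})$, and uses a sample polynomial in place of a generic $f$:

```python
import sympy as sp

x, y, t, a, b, c = sp.symbols('x y t a b c', real=True)

# left translation tau_{(a,b,c)} via the assumed group law
tx, ty, tt = a + x, b + y, c + t + (a * y - b * x) / 2

# horizontal field X = d_x - (y/2) d_t applied to a function of (x,y,t)
def X(g):
    return sp.diff(g, x) - y / 2 * sp.diff(g, t)

f = x**3 * y + t**2 * x + y * t   # sample test function

# X(f o tau) versus (X f) o tau
lhs = X(f.subs({x: tx, y: ty, t: tt}, simultaneous=True))
rhs = X(f).subs({x: tx, y: ty, t: tt}, simultaneous=True)
assert sp.simplify(lhs - rhs) == 0
```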
\begin{prop}\label{delta_prop}
Consider the usual anisotropic dilation $\delta_r : \mathbb{H}^n \to \mathbb{H}^n$
, $r>0$, and an $\mathbb{H}$-regular $1$-codimensional surface $S$.\\
Then $\delta_r S := \{ \delta_r(p) ; \ p \in S \}$ is again an $\mathbb{H}$-regular $1$-codimensional surface.
\end{prop}
\begin{proof}
Since $\delta_r S = \{ \delta_r(p) ; \ p \in S \}$, then for all $ q \in \delta_r S$ there exists a point $ p \in S$ so that $q=\delta_r(p)$. For such $p \in S $, there exists a neighbourhood $U_p$ and a function $f: U_p \to \mathbb{R}$ so that $S \cap U_p = \{ f=0 \}$ and $\nabla_\mathbb{H} f \neq 0$ on $U_p$.
Define $U_q := \delta_r( U_p ) $, which is a neighbourhood of $q= \delta_r ( p )$, and a function $\tilde{f}: = f \circ \delta_{1/r} : U_q \to \mathbb{R} $.
Then, for all $ q' \in U_q$, writing $q'= \delta_r (p') $ with $p' \in U_p$,
$$
\tilde{f}(q') = (f \circ \delta_{1/r} )(q') = f ( \delta_{1/r} \delta_r p' ) = f(p'),
$$
so that $\tilde{f}(q')=0$ if and only if $p' \in S \cap U_p$. Then
$$
\delta_r S \cap U_q = \{ \tilde{f}=0 \}.
$$
Furthermore, on $U_q$, using the fact that $\delta_{1/r}$ is a contact map and Lemma \ref{nabla_comp1},
\begin{align*}
\nabla_{\mathbb{H}} \tilde{f} = \nabla_\mathbb{H} ( f \circ \delta_{1/r} ) = (\delta_{1/r})_*^T (\nabla_{\mathbb{H}} f)_{\delta_{1/r}} \neq 0.
\end{align*}
\end{proof}
\begin{prop}\label{letra}
Consider a left translation map $\tau_{\bar{p}} : \mathbb{H}^n \to \mathbb{H}^n$, $\bar{p} \in \mathbb{H}^n$, the anisotropic dilation $\delta_r : \mathbb{H}^n \to \mathbb{H}^n$, $r>0$ and an $\mathbb{H}$-regular $1$-codimensional surface $S$.
Then the $\mathbb{H}$-regular $1$-codimensional surfaces $\tau_{\bar{p}} S$ and $\delta_r S $ are $\mathbb{H}$-orientable (respectively) if and only if $S$ is $\mathbb{H}$-orientable.
\end{prop}
\begin{proof}
Remember that $\tau_{\bar{p}} S = \{ \bar{p}*p ; \ p \in S \}$. From Proposition \ref{tau_prop}, one knows that for all $ q \in \tau_{\bar{p}} S$ there exists a point $ p \in S$ so that $q=\bar{p}*p$ and there exists a neighbourhood $U_p$ and a function $f: U_p \to \mathbb{R}$ so that $S \cap U_p = \{ f=0 \}$ and $\nabla_\mathbb{H} f \neq 0$ on $U_p$.
Furthermore, $U_q = \tau_{\bar{p}} U_p = \bar{p} * U_p $ is a neighbourhood of $q= \bar{p}*p$ and the function $\tilde{f}: = f \circ \tau_{\bar{p}}^{-1} : U_q \to \mathbb{R} $ is so that $\tau_{\bar{p}} S \cap U_q = \{ \tilde{f}=0 \}$ and $\nabla_\mathbb{H} \tilde{f} \neq 0$ on $U_q$.\\
Assume now that $S$ is $\mathbb{H}$-orientable, then there exists a global vector field
$$
n_{\mathbb{H}} = \sum_{j=1}^{n} \left ( n_{\mathbb{H},j} X_j + n_{\mathbb{H},n+j} Y_j \right ),
$$
that, locally, takes the form
$$
\sum_{j=1}^{n} \left (
\frac{ X_j f }{ \vert \nabla_\mathbb{H} f \vert } X_j + \frac{ Y_j f }{ \vert \nabla_\mathbb{H} f \vert } Y_j \right ).
$$
Now we consider:
\begin{align*}
(\tau_{\bar{p}}^{-1})_* n_\mathbb{H} = \sum_{j=1}^{n} \left (
n_{\mathbb{H},j} \circ \tau_{\bar{p}}^{-1}
{X_j}_{\tau_{\bar{p}}^{-1}} +
n_{\mathbb{H},n+j} \circ \tau_{\bar{p}}^{-1}
{Y_j}_{\tau_{\bar{p}}^{-1}} \right ).
\end{align*}
Locally, it becomes
\begin{align*}
(\tau_{\bar{p}}^{-1})_* n_\mathbb{H} =
\sum_{j=1}^{n} \left (
\frac{ X_j f }{ \vert \nabla_\mathbb{H} f \vert } \circ \tau_{\bar{p}}^{-1}
{X_j}_{\tau_{\bar{p}}^{-1}} +
\frac{ Y_j f }{ \vert \nabla_\mathbb{H} f \vert } \circ \tau_{\bar{p}}^{-1}
{Y_j}_{\tau_{\bar{p}}^{-1}} \right ).
\end{align*}
Note that this is still a global vector field and is defined on the whole $\tau_{\bar{p}} S$, therefore it gives an orientation to $\tau_{\bar{p}} S$.
Since we can repeat the whole proof starting from $\tau_{\bar{p}} S$ to $S= \tau_{\bar{p}}^{-1} \tau_{\bar{p}} S$, this proves both directions.\\\\
For the dilation, remember that $\delta_r S = \{ \delta_r(p) ; \ p \in S \}$. From Proposition \ref{delta_prop}, for all $ q \in \delta_r S$ there exists a point $ p \in S$ so that $q=\delta_r(p)$ and there exists a neighbourhood $U_p$ and a function $f: U_p \to \mathbb{R}$ so that $S \cap U_p = \{ f=0 \}$ and $\nabla_\mathbb{H} f \neq 0$ on $U_p$.
In the same way, $U_q = \delta_r( U_p ) $ is a neighbourhood of $q= \delta_r(p)$ and the function $\tilde{f}: = f \circ \delta_{1/r} : U_q \to \mathbb{R} $ is so that $\delta_r S \cap U_q = \{ \tilde{f}=0 \}$ and $\nabla_\mathbb{H} \tilde{f} \neq 0$ on $U_q$.\\
Assume now that $S$ is $\mathbb{H}$-orientable. Then there exists a global vector field
$$
n_\mathbb{H} = \sum_{j=1}^{n} \left ( n_{\mathbb{H},j} X_j + n_{\mathbb{H},n+j} Y_j \right ),
$$
that, locally, is written as
$$
\sum_{j=1}^{n} \left (
\frac{ X_j f }{ \vert \nabla_\mathbb{H} f \vert } X_j + \frac{ Y_j f }{ \vert \nabla_\mathbb{H} f \vert } Y_j \right ).
$$
Now
\begin{align*}
( \delta_{1/r} )_* n_\mathbb{H} = \sum_{j=1}^{n} \left (
n_{\mathbb{H},j} \circ \delta_{1/r}
{X_j}_{\delta_{1/r}} +
n_{\mathbb{H},n+j} \circ \delta_{1/r}
{Y_j}_{\delta_{1/r}} \right ).
\end{align*}
Locally, it becomes
\begin{align*}
( \delta_{1/r} )_* n_\mathbb{H} =
\sum_{j=1}^{n} \left (
\frac{ X_j f }{ \vert \nabla_\mathbb{H} f \vert } \circ \delta_{1/r}
{X_j}_{\delta_{1/r}} +
\frac{ Y_j f }{ \vert \nabla_\mathbb{H} f \vert } \circ \delta_{1/r}
{Y_j}_{\delta_{1/r}} \right ).
\end{align*}
Note that this is still a global vector field and is defined on the whole $\delta_r S$, therefore it gives an orientation to $\delta_r S$.
Since we can repeat the whole proof starting from $\delta_r S$ to $S=\delta_{1/r} \delta_r S$, this proves both directions.
\end{proof}
\subsection{Comparison}
In this subsection we show how the two notions of orientability are related, concluding that non-$\mathbb{H}$-orientable $\mathbb{H}$-regular surfaces exist, at least when $n=1$.\\
\begin{no}
Consider a $1$-codimensional $C^1$-Euclidean surface $S$ in $\mathbb{H}^n$ with $C(S) = \varnothing$. We say that a surface is $C^2_\mathbb{H}$-regular if its Heisenberg normal vector field satisfies $n_\mathbb{H} \in C^1_\mathbb{H}$.
\end{no}
\begin{prop}\label{finalmente4}
Let $S$ be a $1$-codimensional $C^1$-Euclidean surface in $\mathbb{H}^{n}$, with $C(S) = \varnothing$. Then the following holds:
\begin{enumerate}
\item Suppose $S$ is Euclidean-orientable. Recall from condition \eqref{eq2} that $C^1$-Euclidean means that for all $ p \in S$ there exists $ U \in \mathcal{U}_p$ and $g : U \to \mathbb{R}$, $g \in C^1$, so that $S \cap U = \{ g=0 \}$ and $\nabla g \neq 0$ on $U$.
If, for any such $g$, no point of $S$ belongs to the set
$$
\left \{
\left (
- \frac{2 ( \partial_{y_1} g )_p }{ ( \partial_t g )_p }
, \dots,
- \frac{2 ( \partial_{y_n} g )_p }{ ( \partial_t g )_p }
,
\frac{2 ( \partial_{x_1} g )_p }{ ( \partial_t g )_p }
,\dots,
\frac{2 ( \partial_{x_n} g )_p }{ ( \partial_t g )_p }
,t
\right )
, \text{ with } ( \partial_t g )_p \neq 0 \right \},
$$
then
\begin{equation}\label{.1}
S \text{ is } \mathbb{H}\text{-orientable}.
\end{equation}
\item
If $S$ is $C^2_\mathbb{H}$-regular,
\begin{equation}\label{.2}
S \text{ is } \mathbb{H}\text{-orientable implies } S \text{ is Euclidean-orientable } .
\end{equation}
\end{enumerate}
\end{prop}
\noindent
The proof will follow at the end of this chapter. A question arises naturally:
\begin{que}
Regarding the extra condition for the first implication in Proposition \ref{finalmente4}: what can we say about that set? Is it possible to do better?
\end{que}
\begin{rem}
Remember that, by Proposition \ref{Mobius}, $ \widebar{\mathcal{M}}$ is an $\mathbb{H}$-regular surface which is not Euclidean-orientable.
Observe also that $ \widebar{\mathcal{M}}$ satisfies the hypotheses of Proposition \ref{finalmente4}.\\
Then, by contraposition, implication \eqref{.2} in Proposition \ref{finalmente4} shows that $\widebar{\mathcal{M}}$ is not an $\mathbb{H}$-orientable $\mathbb{H}$-regular surface.
Then we can say that there exist $\mathbb{H}$-regular surfaces which are not $\mathbb{H}$-orientable, at least when $n=1$.\\
This opens the possibility, among others, to study surfaces that are, in the Heisenberg sense, regular but not orientable; for example as supports of Sub-Riemannian currents.
\end{rem}
\noindent
Before proving the proposition, we state a weaker result as a lemma.
\begin{lem}\label{question}
Let $S$ be a $1$-codimensional $C^1$-Euclidean surface in $\mathbb{H}^n$, with $C(S) = \varnothing$.\\
Recall that, by Observation \ref{Cpoints}, $S$ is $\mathbb{H}$-regular and, by condition \eqref{eq1}, this means that for all $p \in S$ there exists $ U \in \mathcal{U}_p$ and $ f : U \to \mathbb{R}$, $f \in C_{\mathbb{H}}^1$, so that $S \cap U = \{ f=0 \}$ and $ \nabla_{\mathbb{H}} f \neq 0$ on $U$. Likewise, by condition \eqref{eq2}, $C^1$-Euclidean means that for all $ p \in S$ there exists $ U' \in \mathcal{U}_p$ and $g : U' \to \mathbb{R}$, $g \in C^1$, so that $S \cap U' = \{ g=0 \}$ and $\nabla g \neq 0 $ on $ U'$.\\
Assume that for each $p \in S$ we can take $f=g$ on $U \cap U'$. Then the following holds:
$$
S \text{ is Euclidean-orientable implies } S \text{ is } \mathbb{H}\text{-orientable}.
$$
If $S$ is $C^2_\mathbb{H}$-regular,
$$
S \text{ is } \mathbb{H}\text{-orientable implies } S \text{ is Euclidean-orientable } .
$$
\end{lem}
\begin{proof}
To prove the first implication of Lemma \ref{question}, note that the Euclidean-orientability of $S$ means that there exists a global continuous vector field
$$
n_E= \sum_{i=1}^{n} \left ( n_{E,i} \partial_{x_i} + n_{E,n+i} \partial_{y_i} \right ) + n_{E,2n+1} \partial_t \neq 0
$$
so that for all $p \in S$ there exist $U \in \mathcal{U}_p $ and a function $ g : U \to \mathbb{R}$, $ g \in C^1$, so that $S \cap U = \{ g=0 \}$ and $ \nabla g \neq 0$ on $ U$, with
$$
\begin{cases}
n_{E,i} = \mu \partial_{x_i} g,\\
n_{E,n+i} = \mu \partial_{y_i} g, \\
n_{E,2n+1} = \mu \partial_t g,
\end{cases}
\quad i=1,\dots,n,
$$
where $\mu$ is simply a normalising factor so that $\vert n_E \vert =1$ which, from now on, can be ignored. By hypothesis, we also know that for all $ p \in S$ there exist $ U' \in \mathcal{U}_p$ and a function $ f : U' \to \mathbb{R}$, $ f \in C_{\mathbb{H}}^1$, so that $S \cap U' = \{ f=0 \} $ and $ \nabla_{\mathbb{H}} f \neq 0 \text{ on } U'$.\\
The extra hypothesis of this lemma is exactly that $g=f$, and so one can also take $U=U'$. The goal is to find a global $n_{\mathbb{H}}$. A natural choice is
$$
\begin{cases}
n_{\mathbb{H},i} :=n_{E,i} -\frac{1}{2}y_i \cdot n_{E,2n+1},\\
n_{\mathbb{H},n+i} := n_{E,n+i} +\frac{1}{2}x_i \cdot n_{E,2n+1},
\end{cases}
\quad i=1,\dots,n.
$$
Then
\begin{align*}
\sum_{i=1}^{n} \left ( n_{\mathbb{H},i} ^2 + n_{\mathbb{H},n+i}^2 \right ) =& \sum_{i=1}^{n} \left [ \left ( n_{E,i} -\frac{1}{2}y_i \cdot n_{E,2n+1} \right )^2 + \left ( n_{E,n+i} +\frac{1}{2}x_i \cdot n_{E,2n+1} \right )^2 \right ] .
\end{align*}
Locally, and remembering $g=f$, this means
\begin{align*}
\sum_{i=1}^{n} \left ( n_{\mathbb{H},i} ^2 + n_{\mathbb{H},n+i}^2 \right ) =& \mu^2 \sum_{i=1}^{n} \left [ \left ( \partial_{x_i} g -\frac{1}{2}y_i \partial_t g \right )^2 + \left ( \partial_{y_i} g +\frac{1}{2}x_i \partial_t g \right )^2 \right ] \\
= & \mu^2 \sum_{i=1}^{n} \left [ \left ( \partial_{x_i} f -\frac{1}{2}y_i \partial_t f \right )^2 + \left ( \partial_{y_i} f +\frac{1}{2}x_i \partial_t f \right )^2 \right ] \\
=& \mu^2 \sum_{i=1}^{n} \left [ \left ( X_i f \right )^2 + \left ( Y_if \right )^2 \right ] \neq 0.
\end{align*}
So there exists a well-defined, nowhere-vanishing vector field
$$
n_{\mathbb{H}}:=\sum_{i=1}^{n} \left ( n_{\mathbb{H},i} X_i+n_{\mathbb{H},n+i} Y_i \right )
$$
and this proves the first implication.\\\\
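The identification $n_{\mathbb{H},i}=\mu X_i g$, $n_{\mathbb{H},n+i}=\mu Y_i g$ used above can be checked symbolically. The following is a minimal sketch in Python/SymPy for $n=1$, with the frame $X=\partial_x-\frac{y}{2}\partial_t$, $Y=\partial_y+\frac{x}{2}\partial_t$ used throughout (the variable names are ours):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
g = sp.Function('g')(x, y, t)

# Euclidean normal components (unnormalised): n_E = (g_x, g_y, g_t)
nE = [sp.diff(g, x), sp.diff(g, y), sp.diff(g, t)]

# The "natural choice" from the proof (n = 1):
nH1 = nE[0] - sp.Rational(1, 2) * y * nE[2]   # n_{H,1} := n_{E,1} - (y/2) n_{E,3}
nH2 = nE[1] + sp.Rational(1, 2) * x * nE[2]   # n_{H,2} := n_{E,2} + (x/2) n_{E,3}

# Horizontal derivatives: Xg = g_x - (y/2) g_t,  Yg = g_y + (x/2) g_t
Xg = sp.diff(g, x) - sp.Rational(1, 2) * y * sp.diff(g, t)
Yg = sp.diff(g, y) + sp.Rational(1, 2) * x * sp.diff(g, t)

# The natural choice is exactly the horizontal gradient of g:
assert sp.simplify(nH1 - Xg) == 0
assert sp.simplify(nH2 - Yg) == 0
# Hence |n_H|^2 = (Xg)^2 + (Yg)^2, nonzero wherever grad_H g != 0.
```

The same computation goes through verbatim for each index $i$ when $n>1$.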
To prove the second implication of Lemma \ref{question}, first note that there exists a global continuous vector field
$$
n_{\mathbb{H}}=\sum_{i=1}^{n} \left ( n_{\mathbb{H},i} X_i + n_{\mathbb{H},n+i} Y_i \right ) \neq 0
$$
so that for all $ p \in S$ there exist $ U \in \mathcal{U}_p $ and $ f : U \to \mathbb{R}$, $ f \in C_{\mathbb{H}}^1$, so that $ S \cap U = \{ f=0 \}$ and $ \nabla_{\mathbb{H}} f \neq 0$ on $ U$, with
$$
\begin{cases}
n_{\mathbb{H},i} = \lambda X_i f,\\
n_{\mathbb{H},n+i} = \lambda Y_i f,
\end{cases}
\quad i=1,\dots,n,
$$
where $\lambda$ is just a normalising factor so that $\vert n_{\mathbb{H}} \vert =1$ that, from now on, can be ignored.\\
By hypothesis, we also know that for all $ p \in S$ there exist $ U' \in \mathcal{U}_p$ and $ g : U' \to \mathbb{R}$, $ g \in C^1$, so that $S \cap U' = \{ g=0 \} $ and $\nabla g \neq 0 $ on $U'$.\\
As before, since we are in the case $g=f$, we can also take $U=U'$. This time the goal is to find a global vector field $n_E$. A natural choice is
$$
\begin{cases}
n_{E,2n+1}:=\frac{1}{n} \sum_{j=1}^{n} \left ( X_j n_{\mathbb{H},n+j} - Y_j n_{\mathbb{H},j} \right ) ,\\
n_{E,i} := n_{\mathbb{H},i} +\frac{1}{2}y_i \cdot n_{E,2n+1} ,\\
n_{E,n+i} := n_{\mathbb{H},n+i} -\frac{1}{2}x_i \cdot n_{E,2n+1},
\end{cases}
\quad i=1,\dots,n.
$$
Note that we need $ n_{\mathbb{H},i}, n_{\mathbb{H},n+i} \in C_{\mathbb{H}}^1$ (that is, $ n_{\mathbb{H}} \in C_{\mathbb{H}}^1$) for this definition to be well posed. Indeed, this is the same as requiring $S$ to be $ C^2_\mathbb{H}$-regular.\\
Moreover, remembering that locally $g=f$ and that in each $U$ we have one such function, we get
$$
n_{E,2n+1}=\frac{1}{n} \sum_{j=1}^{n} \left ( X_j Y_j f - Y_j X_j f \right ) =\frac{1}{n} \sum_{j=1}^{n} \left ( X_j Y_j g - Y_j X_j g \right ) =\frac{1}{n}\, n\, Tg= \partial_t g.
$$
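The key step here is the commutator identity $X_jY_j - Y_jX_j = T = \partial_t$. A quick symbolic check for $n=1$ (a sketch, assuming the standard frame; the names are ours):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
g = sp.Function('g')(x, y, t)

# Horizontal frame on H^1 (n = 1), as used throughout:
X = lambda h: sp.diff(h, x) - y/2 * sp.diff(h, t)
Y = lambda h: sp.diff(h, y) + x/2 * sp.diff(h, t)

# Commutator identity behind n_{E,2n+1} = (1/n) sum_j (X_j Y_j - Y_j X_j) g:
# [X, Y] g = T g = dg/dt
commutator = sp.simplify(X(Y(g)) - Y(X(g)))
assert sp.simplify(commutator - sp.diff(g, t)) == 0
```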
Now
\begin{align*}
& \sum_{i=1}^{n} \left ( n_{E,i}^2 + n_{E,n+i}^2 \right ) + n_{E,2n+1}^2 = \\
&=
\sum_{i=1}^{n} \left [ \left ( n_{\mathbb{H},i} +\frac{1}{2}y_i n_{E,2n+1} \right )^2
+ \left ( n_{\mathbb{H},n+i} -\frac{1}{2}x_i n_{E,2n+1} \right )^2 \right ]
+ n_{E,2n+1}^2.
\end{align*}
This locally is
\begin{align*}
& \sum_{i=1}^{n} \left ( n_{E,i}^2 + n_{E,n+i}^2 \right ) + n_{E,2n+1}^2 = \\
& =
\sum_{i=1}^{n} \left [ \left ( X_i g +\frac{1}{2}y_i ( \partial_t g ) \right )^2
+ \left ( Y_i g -\frac{1}{2}x_i ( \partial_t g ) \right )^2 \right ]
+ \left ( \partial_t g \right )^2\\
&=
\sum_{i=1}^{n} \left [ \left ( \partial_{x_i} g -\frac{1}{2}{y_i} \partial_t g +\frac{1}{2}{y_i} \partial_t g \right )^2
+ \left ( \partial_{y_i} g +\frac{1}{2}{x_i} \partial_t g -\frac{1}{2}{x_i} \partial_t g \right )^2 \right ]
+ \left ( \partial_t g \right )^2\\
&=
\sum_{i=1}^{n} \left ( \left ( \partial_{x_i} g \right )^2
+ \left ( \partial_{y_i} g \right )^2 \right )
+ \left ( \partial_t g \right )^2 \neq 0.
\end{align*}
So there is a well-defined, nowhere-vanishing vector field
$$
n_{E}:=\sum_{i=1}^{n} \left ( n_{E,i} \partial_{x_i}+n_{E,n+i} \partial_{y_i} \right ) +n_{E,2n+1} \partial_t
$$
and this proves the second implication and concludes the proof.
\end{proof}
\noindent
Differently from the proof of Lemma \ref{question}, for Proposition \ref{finalmente4} we cannot assume $g=f$. We can still construct the vector field as before, but we cannot prove straight away that the newly built global vector field is never zero.
\begin{proof}[Proof of implication \eqref{.1} in Proposition \ref{finalmente4}]
We know there exists a global vector field
$$
n_{E}=\sum_{i=1}^{n} \left ( n_{E,i} \partial_{x_i} +n_{E,n+i} \partial_{y_i} \right ) + n_{E,2n+1} \partial_t \neq 0
$$
that can be written locally as
$$
n_{E}=\mu \sum_{i=1}^{n} \left ( \partial_{x_i} g \partial_{x_i} + \partial_{y_i} g \partial_{y_i} \right ) + \mu \partial_t g \partial_t
$$
so that $\nabla g \neq 0 , \ g \in C^1(U,\mathbb{R})$, $U\subseteq \mathbb{H}^n$ open.
Define
$$
\begin{cases}
n_{\mathbb{H},i} :=n_{E,i} -\frac{1}{2}y_i \cdot n_{E,2n+1},\\
n_{\mathbb{H},n+i} := n_{E,n+i} +\frac{1}{2}x_i \cdot n_{E,2n+1},
\end{cases}
\quad i=1,\dots,n.
$$
Locally (for each point $p$ there exists a neighbourhood $U$ where such $g$ is defined) this becomes
$$
\begin{cases}
n_{\mathbb{H},i} =\mu \partial_{x_i} g -\frac{1}{2} y_i \mu \partial_t g= \mu X_i g,\\
n_{\mathbb{H},n+i} = \mu \partial_{y_i} g +\frac{1}{2} x_i \mu \partial_t g= \mu Y_i g ,
\end{cases}
\quad i=1,\dots,n,
$$
where $\mu$ is simply a normalising factor that, from now on, we ignore.\\
In order to verify the $\mathbb{H}$-orientability, we have to show that $\nabla_{\mathbb{H}} g \neq 0$.
Note here that $ C^1(U,\mathbb{R}) \subsetneq C_{\mathbb{H}}^1(U,\mathbb{R}), $ so $g$ is regular enough.\\
Consider first the case in which $( \partial_t g )_p =0$. We still have $\nabla_{p} g \neq 0$, so at least one of the derivatives $ ( \partial_{x_i} g )_p , \ ( \partial_{y_i} g )_p $ must be different from zero at $p$. But, when $( \partial_t g )_p =0$, then $(X_i g)_p= ( \partial_{x_i} g )_p$ and $(Y_i g)_p= ( \partial_{y_i} g )_p $, so
$$
\norm{ \nabla_{\mathbb{H},p} g }^2=\sum_{i=1}^{n} \left [ (X_i g)_p^2 + (Y_i g)_p^2 \right ] \neq 0.
$$
\noindent
Second, consider the case when $( \partial_t g )_p \neq 0 $. In this case:
$$
\norm{ \nabla_{\mathbb{H},p} g }^2 =\sum_{i=1}^{n} \left [ (X_i g)_p^2 + (Y_i g)_p^2 \right ] = \sum_{i=1}^{n} \left [ \left ( \partial_{x_i} g -\frac{1}{2}y_{i,p} \partial_t g \right )_p^2 + \left ( \partial_{y_i} g + \frac{1}{2}x_{i,p} \partial_t g \right )_p^2 \right ] \neq 0
$$
is equivalent to the fact that there exists $i \in \{1,\dots, n\}$ such that
$$
y_{i,p} \neq \frac{2 ( \partial_{x_i} g )_p }{ ( \partial_t g )_p } \ \text{ or } \ x_{i,p} \neq - \frac{2 ( \partial_{y_i} g )_p }{ ( \partial_t g )_p }.
$$
Hence the Heisenberg gradient of $g$ at $p$ can vanish only at points of the form
$$
\left (
- \frac{2 ( \partial_{y_1} g )_p }{ ( \partial_t g )_p }
, \dots,
- \frac{2 ( \partial_{y_n} g )_p }{ ( \partial_t g )_p }
,
\frac{2 ( \partial_{x_1} g )_p }{ ( \partial_t g )_p }
,\dots,
\frac{2 ( \partial_{x_n} g )_p }{ ( \partial_t g )_p }
,t
\right )
$$
and the first implication of the proposition is proved.
\end{proof}
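The exceptional set can be made concrete in the simplest situation: for an affine $g$ on $\mathbb{H}^1$ with constant derivatives $(\partial_x g, \partial_y g, \partial_t g)=(a,b,c)$, $c\neq 0$, the horizontal gradient vanishes exactly at the point $(-2b/c,\,2a/c,\,t)$ described in the statement. A small SymPy sketch (the names are ours):

```python
import sympy as sp

x, y, t, a, b, c = sp.symbols('x y t a b c')

# Affine model: g with constant derivatives (d_x g, d_y g, d_t g) = (a, b, c)
g = a*x + b*y + c*t

# Horizontal derivatives in H^1:
Xg = sp.diff(g, x) - y/2 * sp.diff(g, t)   # a - (y/2) c
Yg = sp.diff(g, y) + x/2 * sp.diff(g, t)   # b + (x/2) c

# grad_H g vanishes exactly at the point listed in the proposition:
sol = sp.solve([Xg, Yg], [x, y], dict=True)[0]
assert sol[x] == -2*b/c and sol[y] == 2*a/c
```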
\begin{proof}[Proof of implication \eqref{.2} in Proposition \ref{finalmente4}]
In the second case \eqref{.2}, we know that there exists a global vector field
$$
n_{\mathbb{H}}= \sum_{i=1}^{n} \left ( n_{\mathbb{H},i} X_i + n_{\mathbb{H},n+i} Y_i \right ) \neq 0
$$
$$
that can be written locally as
$$
n_{\mathbb{H}}= \lambda \sum_{i=1}^{n} \left ( X_i f \, X_i + Y_i f \, Y_i \right )
$$
so that $\nabla_{\mathbb{H}} f \neq 0 , \ f \in C_{\mathbb{H}}^1(U,\mathbb{R})$, with $U\subseteq \mathbb{H}^n$ open. As before, $\lambda$ is simply a normalising factor that, from now on, we ignore.\\
Note that the components of $n_{\mathbb{H}}$ belong to $C_{\mathbb{H}}^1(U,\mathbb{R})$ (which is the same as asking $S$ to be $C_\mathbb{H}^2$-regular). Then define
$$
\begin{cases}
n_{E,2n+1}:=\frac{1}{n} \sum_{j=1}^{n} \left ( X_j n_{\mathbb{H},n+j} - Y_j n_{\mathbb{H},j} \right ), \\
n_{E,i} := n_{\mathbb{H},i} +\frac{1}{2}y_i \cdot n_{E,2n+1}, \\
n_{E,n+i} := n_{\mathbb{H},n+i} -\frac{1}{2}x_i \cdot n_{E,2n+1} ,
\end{cases}
\quad i=1,\dots,n.
$$
Locally (for each point $p$ there exists a neighbourhood $U$ where such an $f$ is defined), the first equation above becomes:
$$
n_{E,2n+1}
= \frac{1}{n} \sum_{j=1}^{n} \left ( X_j Y_j f - Y_j X_j f \right )
=\frac{1}{n} n Tf= \partial_t f.
$$
So now we have that
$$
\begin{cases}
n_{E,2n+1}= \partial_t f,\\
n_{E,i} = \partial_{x_i} f -\frac{1}{2} {y_i} \partial_t f +\frac{1}{2} {y_i} \partial_t f= \partial_{x_i} f ,\\
n_{E,n+i} = \partial_{y_i} f +\frac{1}{2} {x_i} \partial_t f -\frac{1}{2} {x_i} \partial_t f = \partial_{y_i} f ,
\end{cases}
\quad i=1,\dots,n.
$$
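The recovery of the full Euclidean gradient from the horizontal data can again be checked symbolically for $n=1$ (a sketch with our own names, assuming $f$ smooth so that all the derivatives below exist):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)

X = lambda h: sp.diff(h, x) - y/2 * sp.diff(h, t)
Y = lambda h: sp.diff(h, y) + x/2 * sp.diff(h, t)

nH1, nH2 = X(f), Y(f)                 # horizontal normal components (n = 1)

# Definitions from the proof:
nE3 = sp.simplify(X(nH2) - Y(nH1))    # n_{E,3} := X n_{H,2} - Y n_{H,1}
nE1 = nH1 + y/2 * nE3                 # n_{E,1} := n_{H,1} + (y/2) n_{E,3}
nE2 = nH2 - x/2 * nE3                 # n_{E,2} := n_{H,2} - (x/2) n_{E,3}

# They recover exactly the Euclidean gradient of f:
assert sp.simplify(nE3 - sp.diff(f, t)) == 0
assert sp.simplify(nE1 - sp.diff(f, x)) == 0
assert sp.simplify(nE2 - sp.diff(f, y)) == 0
```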
In order to verify the Euclidean-orientability, we have to show that $\nabla f \neq 0$.\\
Note that $f \in C_{\mathbb{H}}^1(U,\mathbb{R})$ and, a priori, we do not know whether $f \in C^1(U,\mathbb{R})$. However, asking $n_{\mathbb{H}} \in C_{\mathbb{H}}^1(U,\mathbb{R})$ allows us to write $\partial_{x_i}, \partial_{y_i} $ and $ \partial_t$ using only $X_i, Y_i, n_{\mathbb{H},i} $ and $ n_{\mathbb{H},n+i}$, which guarantees that $\partial_{x_i} f, \partial_{y_i} f$ and $ \partial_t f$ are well defined.\\
Now, $\nabla f \neq 0$ if and only if
\begin{align*}
\sum_{i=1}^{n} \left ( (\partial_{x_i} f)^2 + (\partial_{y_i} f)^2 \right ) + (\partial_t f)^2 \neq 0 ,
\end{align*}
which is the same as
\begin{align*}
& \sum_{i=1}^{n} \left [ \left ( X_i f + \frac{1}{2}y_i T f \right )^2 + \left ( Y_i f - \frac{1}{2}x_i T f \right )^2 \right ] + \left ( T f \right )^2 \neq 0 .
\end{align*}
In the case $Tf \neq 0$, we have that $\nabla f \neq 0$ immediately. In the case $Tf=0$, instead, we have that $\nabla f \neq 0$ if and only if
$$
\sum_{i=1}^{n} \left [ \left ( X_i f \right )^2+ \left ( Y_i f \right )^2 \right ] \neq 0,
$$
which is true because $\nabla_{\mathbb{H}} f \neq 0$. This completes the cases and shows that there actually is a global vector field $n_E$ that is continuous (by hypothesis) and never zero. So the second implication of the proposition is true.
\end{proof}
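The case distinction on $Tf$ above rests on the pointwise identity $\vert \nabla f\vert^2=\sum_{i}\left[\left(X_if+\frac{y_i}{2}Tf\right)^2+\left(Y_if-\frac{x_i}{2}Tf\right)^2\right]+(Tf)^2$. A one-line symbolic check for $n=1$ (the names are ours):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)
fx, fy, ft = sp.diff(f, x), sp.diff(f, y), sp.diff(f, t)

Xf = fx - y/2 * ft                    # horizontal derivatives in H^1
Yf = fy + x/2 * ft

lhs = fx**2 + fy**2 + ft**2                              # |grad f|^2
rhs = (Xf + y/2 * ft)**2 + (Yf - x/2 * ft)**2 + ft**2    # rewritten via Xf, Yf, Tf
assert sp.expand(lhs - rhs) == 0
# If Tf = 0 the correction terms drop, so lhs != 0 iff (Xf)^2 + (Yf)^2 != 0.
```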
\chapter{Appendices}
\renewcommand{\thesection}{\Alph{section}}
\section{Proof of the Explicit Rumin Complex in $\mathbb{H}^2$}\label{computationH2}
In this section we show a simple but useful lemma and then prove Proposition \ref{exH2}.
\begin{obs}[Change of basis]\label{change}
While dealing with the equivalence classes of the complex, there are circumstances in which we need to make a change of basis. In particular, in the case of $\frac{\Omega^2}{I^2}$, we already used in Observation \ref{obsH2} that $\{ dx_1 \wedge dy_1, dx_2 \wedge dy_2 \}$ and $\{ dx_1 \wedge dy_1 + dx_2 \wedge dy_2, dx_1 \wedge dy_1 - dx_2 \wedge dy_2 \}$ span the same subspace.
Consider
$$
\begin{cases}
\xi=dx_1 \wedge d y_1,\\
\eta=dx_2 \wedge d y_2,
\end{cases}
\quad \text{ and } \quad
\begin{cases}
u=\frac{\xi+\eta}{\sqrt{2}} = \frac{dx_1 \wedge d y_1 +dx_2 \wedge d y_2}{\sqrt{2}},\\
v=\frac{\xi-\eta}{\sqrt{2}} = \frac{dx_1 \wedge d y_1 -dx_2 \wedge d y_2}{\sqrt{2}}.
\end{cases}
$$
Denote by $\mathcal{B}_{\xi, \eta}^{u,v}$ the linear transformation that sends the coefficients of a form with respect to the basis $\{\xi, \eta\}$ to its coefficients with respect to the basis $\{u,v\}$. Then
$$
\mathcal{B}_{\xi, \eta}^{u,v}:=\left (
\begin{matrix}
\frac{\partial u}{\partial \xi } & \frac{\partial u}{\partial \eta } \\
\frac{\partial v}{\partial \xi } & \frac{\partial v}{\partial \eta }
\end{matrix} \right )=
\left (
\begin{matrix}
\frac{1}{\sqrt{2} } & \frac{1}{\sqrt{2} } \\
\frac{1}{\sqrt{2} } & \frac{-1}{\sqrt{2} }
\end{matrix} \right ).
$$
Consider now a form $\omega_{ \{\xi, \eta\} }$ with respect to the basis $\{\xi, \eta\}$:
$$
\omega_{ \{\xi, \eta\} } = f dx_1 \wedge d y_1 + g dx_2 \wedge d y_2 = f \xi + g \eta =
\left ( \begin{matrix} f \\ g \end{matrix} \right )_{ \{\xi, \eta\} }.
$$
To write the same form with respect to the basis $\{u,v\}$ (call it $\omega_{ \{u, v\} }$), we have
\begin{align*}
\omega_{ \{u, v\} } & =\mathcal{B}_{\xi, \eta}^{u,v} \omega_{ \{\xi, \eta\} }=
\mathcal{B}_{\xi, \eta}^{u,v}
\left ( \begin{matrix} f \\ g \end{matrix} \right )_{ \{\xi, \eta\} }=
\left (
\begin{matrix}
\frac{1}{\sqrt{2} } & \frac{1}{\sqrt{2} } \\
\frac{1}{\sqrt{2} } & \frac{-1}{\sqrt{2} }
\end{matrix}
\right ) \cdot
\left ( \begin{matrix} f \\ g \end{matrix} \right )_{ \{\xi, \eta\} }\\
&=
\left (
\begin{matrix}
\frac{f+g}{\sqrt{2}}\\
\frac{f-g}{\sqrt{2}}
\end{matrix}
\right )_{ \{ u,v \} }
=\frac{f+g}{\sqrt{2}}u + \frac{f-g}{\sqrt{2}} v\\
&=\frac{f+g}{2} (dx_1 \wedge d y_1 +dx_2 \wedge d y_2) + \frac{f-g}{2} (dx_1 \wedge d y_1 -dx_2 \wedge d y_2).
\end{align*}
Since the two bases span the same space, we trivially have that $\omega_{ \{\xi, \eta\} } = \omega_{ \{u, v\} } $.
\end{obs}
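The change of basis in Observation \ref{change} amounts to elementary linear algebra and can be verified mechanically (a SymPy sketch; the names are ours):

```python
import sympy as sp

f, g, xi, eta = sp.symbols('f g xi eta')

# B_{xi,eta}^{u,v} from the observation:
B = sp.Matrix([[1, 1], [1, -1]]) / sp.sqrt(2)

coeffs_uv = B * sp.Matrix([f, g])        # coefficients of omega w.r.t. {u, v}
assert sp.simplify(coeffs_uv[0] - (f + g) / sp.sqrt(2)) == 0
assert sp.simplify(coeffs_uv[1] - (f - g) / sp.sqrt(2)) == 0

# Re-expanding u = (xi+eta)/sqrt(2), v = (xi-eta)/sqrt(2) recovers f*xi + g*eta:
u, v = (xi + eta) / sp.sqrt(2), (xi - eta) / sp.sqrt(2)
assert sp.expand(coeffs_uv[0] * u + coeffs_uv[1] * v - (f * xi + g * eta)) == 0
```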
\begin{ex}\label{variables}
The following computation, with the same notation for $ \xi,\eta $ and for $u,v$ as above, will be used later. Consider
$$
\omega_{ \{ \xi,\eta \} } = (X_1 \alpha_3 - Y_1 \alpha_1 ) dx_1 \wedge dy_1 +( X_2 \alpha_4 - Y_2 \alpha_2 ) dx_2 \wedge dy_2 .
$$
Then the same form can be rewritten as
\begin{align*}
\omega_{ \{u, v\} } =& \frac{ X_1 \alpha_3 - Y_1 \alpha_1 + X_2 \alpha_4 - Y_2 \alpha_2 }{2} (dx_1 \wedge d y_1 +dx_2 \wedge d y_2) \\
&+ \frac{ X_1 \alpha_3 - Y_1 \alpha_1 - X_2 \alpha_4 + Y_2 \alpha_2 }{2} ( dx_1 \wedge dy_1 - dx_2 \wedge dy_2).
\end{align*}
\end{ex}
\begin{proof}[Proof of Proposition \ref{exH2}]
First we treat the operators $d_Q^{(1)}$, $d_Q^{(2)}$, $d_Q^{(4)}$ and $d_Q^{(5)}$; finally we discuss the second-order operator $D$.\\
By Observation \ref{df}, we immediately get that:
$$
d_Q^{(1)} f = [ X_1f dx_1 + X_2f dx_2 + Y_1f dy_1+ Y_2f dy_2]_{I^1}.
$$
Consider now $[\alpha]_{I^1}= [\alpha_1 dx_1 + \alpha_2 dx_2 + \alpha_3 dy_1+ \alpha_4 dy_2]_{I^1} \in \frac{\Omega^1}{I^1}$. In this case we first apply the definition and obtain:
\begin{align*}
d_Q^{(2)} \left ( \left [\alpha \right ]_{I^1} \right ) = & d_Q \left ( \left [ \alpha_1 dx_1 + \alpha_2 dx_2 + \alpha_3 dy_1+ \alpha_4 dy_2 \right ]_{I^1} \right ) \\
= &\big [- X_2 \alpha_1 dx_1 \wedge dx_2 - Y_1 \alpha_1 dx_1 \wedge dy_1 - Y_2 \alpha_1 dx_1 \wedge dy_2 \\
&+ X_1 \alpha_2 dx_1 \wedge dx_2 - Y_1 \alpha_2 dx_2 \wedge dy_1 - Y_2 \alpha_2 dx_2 \wedge dy_2 \\
&+ X_1 \alpha_3 dx_1 \wedge dy_1 + X_2 \alpha_3 dx_2 \wedge dy_1 - Y_2 \alpha_3 dy_1 \wedge dy_2 \\
&+ X_1 \alpha_4 dx_1 \wedge dy_2 + X_2 \alpha_4 dx_2 \wedge dy_2 + Y_1 \alpha_4 dy_1 \wedge dy_2 \big ]_{I^2} \\
= & \big [ ( X_1 \alpha_2 - X_2 \alpha_1 ) dx_1 \wedge dx_2 + ( X_2 \alpha_3 - Y_1 \alpha_2 ) dx_2 \wedge dy_1\\
&+ ( Y_1 \alpha_4 - Y_2 \alpha_3 ) dy_1 \wedge dy_2 + ( X_1 \alpha_4 - Y_2 \alpha_1 ) dx_1 \wedge dy_2 \\
& + (X_1 \alpha_3 - Y_1 \alpha_1 ) dx_1 \wedge dy_1 +( X_2 \alpha_4 - Y_2 \alpha_2 ) dx_2 \wedge dy_2 \big ]_{I^2} ,
\end{align*}
but this is not yet the final form. Thanks to the equivalence class, the last line can be rewritten using Example \ref{variables}, so we get
\begin{align*}
d_Q^{(2)} ([\alpha]_{I^1} )
= & \bigg [ ( X_1 \alpha_2 - X_2 \alpha_1 ) dx_1 \wedge dx_2 + ( X_2 \alpha_3 - Y_1 \alpha_2 ) dx_2 \wedge dy_1\\
&+ ( Y_1 \alpha_4 - Y_2 \alpha_3 ) dy_1 \wedge dy_2 + ( X_1 \alpha_4 - Y_2 \alpha_1 ) dx_1 \wedge dy_2 \\
& + \left ( \frac{ X_1 \alpha_3 - Y_1 \alpha_1 - X_2 \alpha_4 + Y_2 \alpha_2 }{2} \right ) ( dx_1 \wedge dy_1 - dx_2 \wedge dy_2) \bigg ]_{I^2} .
\end{align*}
We proceed by considering $d_Q^{(4)}$. Consider $\alpha= \alpha_1 dx_1 \wedge dx_2 \wedge \theta +\alpha_2 dx_1 \wedge dy_2 \wedge \theta + \alpha_3 dx_2 \wedge dy_1 \wedge \theta + \alpha_4 dy_1 \wedge dy_2 \wedge \theta + \alpha_5( dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta) \in J^3$.\\
Then $d_Q^{(4)}$ becomes
\begin{align*}
d_Q^{(4)} \alpha =& d_Q^{(4)} \big ( \alpha_1 dx_1 \wedge dx_2 \wedge \theta +\alpha_2 dx_1 \wedge dy_2 \wedge \theta + \alpha_3 dx_2 \wedge dy_1 \wedge \theta + \alpha_4 dy_1 \wedge dy_2 \wedge \theta \\
&+\alpha_5( dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta) \big )\\
=& Y_1 \alpha_1 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta +Y_2 \alpha_1 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta\\
& -X_2 \alpha_2 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta -Y_1 \alpha_2 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta\\
&+ X_1 \alpha_3 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta +Y_2 \alpha_3 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta\\
& + X_1 \alpha_4 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta+ X_2 \alpha_4 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta \\
&- X_1\alpha_5 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta - X_2 \alpha_5 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta\\
&+Y_1 \alpha_5 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta + Y_2 \alpha_5 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta\\
&+ \alpha_5( - dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 + dx_2 \wedge dy_2 \wedge dx_1 \wedge dy_1) \\
=& ( Y_1 \alpha_1 + X_1 \alpha_3 - X_2 \alpha_5) dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta\\
&+( Y_2 \alpha_1 -X_2 \alpha_2 - X_1\alpha_5 )dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta\\
&+( -Y_1 \alpha_2 + X_1 \alpha_4 + Y_2 \alpha_5 ) dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta\\
&+( Y_2 \alpha_3 + X_2 \alpha_4 +Y_1 \alpha_5 ) dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta.
\end{align*}
The last of the easy cases is $d_Q^{(5)}$. Consider $\alpha = \alpha_1 dx_1 \wedge dx_2 \wedge dy_1 \wedge \theta + \alpha_2 dx_1 \wedge dx_2 \wedge dy_2 \wedge \theta + \alpha_3 dx_1 \wedge dy_1 \wedge dy_2 \wedge \theta + \alpha_4 dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta \in J^4.$
Then $d_Q^{(5)}$ gives us
$$
d_Q^{(5)} \alpha= ( -Y_2 \alpha_1 + Y_1 \alpha_2 - X_2 \alpha_3 + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta.
$$
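The coefficients of $d_Q^{(5)}$ can be double-checked with a small symbolic exterior-derivative routine. The sketch below (helper names are ours) represents a form $\omega\wedge\theta$ by the sorted indices of its horizontal factors; for such forms the $d\theta$-contribution vanishes by degree (five horizontal $1$-forms in four horizontal directions), and the $T$-term of each coefficient differential produces $\theta\wedge\theta=0$, so only horizontal derivatives of the coefficients survive:

```python
import sympy as sp

x1, x2, y1, y2, t = sp.symbols('x1 x2 y1 y2 t')
V = (x1, x2, y1, y2, t)

# Frame derivatives dual to {dx1, dx2, dy1, dy2, theta} (Observation "df"):
X1 = lambda c: sp.diff(c, x1) - y1/2 * sp.diff(c, t)
X2 = lambda c: sp.diff(c, x2) - y2/2 * sp.diff(c, t)
Y1 = lambda c: sp.diff(c, y1) + x1/2 * sp.diff(c, t)
Y2 = lambda c: sp.diff(c, y2) + x2/2 * sp.diff(c, t)
D = [X1, X2, Y1, Y2]            # basis order: 0=dx1, 1=dx2, 2=dy1, 3=dy2

def d_theta_form(form):
    """d of a sum {horizontal-index-tuple I: coeff} of forms e_I ^ theta.
    Only the horizontal part of d(coeff) contributes: the T-term gives
    theta^theta = 0, and coeff * d(e_I ^ theta) vanishes here by degree."""
    out = {}
    for I, c in form.items():
        for j in range(4):
            if j in I:
                continue
            sign = (-1) ** sum(1 for i in I if i < j)  # sort dz_j into position
            J = tuple(sorted(I + (j,)))
            out[J] = out.get(J, 0) + sign * D[j](c)
    return out

a1, a2, a3, a4 = [sp.Function('a%d' % k)(*V) for k in range(1, 5)]
# alpha = a1 dx1^dx2^dy1^th + a2 dx1^dx2^dy2^th + a3 dx1^dy1^dy2^th + a4 dx2^dy1^dy2^th
alpha = {(0, 1, 2): a1, (0, 1, 3): a2, (0, 2, 3): a3, (1, 2, 3): a4}
top = d_theta_form(alpha)[(0, 1, 2, 3)]   # coefficient of dx1^dx2^dy1^dy2^theta
assert sp.simplify(top - (-Y2(a1) + Y1(a2) - X2(a3) + X1(a4))) == 0
```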
The final case is the study of the behaviour of $D$, and it is the one that requires the most effort. Remember that, by Observation \ref{df}:
$$
d g = X_1g dx_1 + Y_1 g dy_1 +X_2g dx_2 + Y_2 g dy_2 + Tg \theta.
$$
Consider a form $\alpha' \in {\prescript{}{}\bigwedge}^{2} \mathfrak{h}_1 $:
\begin{align*}
\alpha'=& \alpha_1 dx_1 \wedge dx_2 +\alpha_2 dx_1 \wedge dy_1 + \alpha_3 dx_1 \wedge dy_2 + \alpha_4 dx_2 \wedge dy_1 + \alpha_5 dx_2 \wedge dy_2 \\
&+\alpha_6 dy_1 \wedge dy_2.
\end{align*}
Now, proceeding as in Observation \ref{change}, we have two bases $\{ \xi=dx_1 \wedge d y_1, \eta=dx_2 \wedge d y_2 \}$ and $\{ u=\frac{\xi+\eta}{\sqrt{2}}, v=\frac{\xi-\eta}{\sqrt{2}} \}$ that both span the same space.\\
Consider $\omega_{ \{ \xi,\eta \} }= \alpha_2 dx_1 \wedge dy_1 + \alpha_5 dx_2 \wedge dy_2$. In the second basis the same form is
\begin{align*}
\omega_{ \{u, v\} } = \frac{ \alpha_2+ \alpha_5 }{2} (dx_1 \wedge d y_1 +dx_2 \wedge d y_2) + \frac{ \alpha_2- \alpha_5 }{2} ( dx_1 \wedge dy_1 - dx_2 \wedge dy_2).
\end{align*}
Denote $\gamma = \frac{ \alpha_2+ \alpha_5 }{2}$ and $\beta= \frac{ \alpha_2- \alpha_5 }{2} $. So we have
\begin{align*}
\omega_{ \{u, v\} } = \gamma (dx_1 \wedge d y_1 +dx_2 \wedge d y_2) + \beta ( dx_1 \wedge dy_1 - dx_2 \wedge dy_2).
\end{align*}
So, by the definition of $\frac{\Omega^2}{I^2}$, one gets
\begin{align*}
[\alpha']_{I^2}=& [\alpha_1 dx_1 \wedge dx_2 + \alpha_3 dx_1 \wedge dy_2 + \alpha_4 dx_2 \wedge dy_1 +\alpha_6 dy_1 \wedge dy_2\\
&
+ \beta (dx_1 \wedge dy_1 - dx_2 \wedge dy_2) ]_{I^2}.
\end{align*}
Then call $\alpha$ this new representative of the class $[\alpha']_{I^2}$,
\begin{align*}
\alpha :=& \alpha_1 dx_1 \wedge dx_2 + \alpha_3 dx_1 \wedge dy_2 + \alpha_4 dx_2 \wedge dy_1 +\alpha_6 dy_1 \wedge dy_2\\
&
+ \beta (dx_1 \wedge dy_1 - dx_2 \wedge dy_2) ,
\end{align*}
and one obviously has that $[\alpha']_{I^2}=[\alpha]_{I^2}.$ Then also $D([\alpha']_{I^2})=D([\alpha]_{I^2})$ and from now on we will compute the latter.\\
Remember that $D([\alpha]_{I^2}) =d \left ( \alpha + L^{-1} ( - (d\alpha)_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }} ) \wedge \theta \right )$. The full exterior derivative of $\alpha$ is:
\begin{align*}
d \alpha = & d \big ( \alpha_1 dx_1 \wedge dx_2 + \alpha_3 dx_1 \wedge dy_2 + \alpha_4 dx_2 \wedge dy_1 +\alpha_6 dy_1 \wedge dy_2\\
&
+ \beta (dx_1 \wedge dy_1 - dx_2 \wedge dy_2) \big ) \\
=& Y_1 \alpha_1 dy_1 \wedge dx_1 \wedge dx_2 + Y_2 \alpha_1 dy_2 \wedge dx_1 \wedge dx_2 + T \alpha_1 \theta \wedge dx_1 \wedge dx_2 \\
&+ Y_1 \alpha_3 dy_1 \wedge dx_1 \wedge dy_2 +X_2 \alpha_3 dx_2 \wedge dx_1 \wedge dy_2 + T \alpha_3 \theta \wedge dx_1 \wedge dy_2\\
&+ X_1 \alpha_4 dx_1 \wedge dx_2 \wedge dy_1 + Y_2 \alpha_4 dy_2 \wedge dx_2 \wedge dy_1 + T \alpha_4 \theta \wedge dx_2 \wedge dy_1\\
&+ X_1 \alpha_6 dx_1 \wedge dy_1 \wedge dy_2+X_2 \alpha_6 dx_2 \wedge dy_1 \wedge dy_2 + T \alpha_6 \theta \wedge dy_1 \wedge dy_2\\
&- X_1 \beta dx_1 \wedge dx_2 \wedge dy_2 - Y_1 \beta dy_1 \wedge dx_2 \wedge dy_2 \\
&+X_2 \beta dx_2 \wedge dx_1 \wedge dy_1 + Y_2 \beta dy_2 \wedge dx_1 \wedge dy_1 \\
&+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta )\\
=&(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1 +(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge dy_2 \\
&+(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_1 \wedge dy_2 +(Y_2 \alpha_4 + Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge dy_2 \\
&+ T \alpha_1 \theta \wedge dx_1 \wedge dx_2 + T \alpha_3 \theta \wedge dx_1 \wedge dy_2 + T \alpha_4 \theta \wedge dx_2 \wedge dy_1+ T \alpha_6 \theta \wedge dy_1 \wedge dy_2\\
&+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta ).
\end{align*}
Then we need to isolate the part belonging to ${\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1$, meaning:
\begin{align*}
(d\alpha)_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1}} =&(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1 +(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge dy_2 \\
&+(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_1 \wedge dy_2 +(Y_2 \alpha_4 + Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge dy_2 \\
=&(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge d \theta -(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge d \theta \\
&-(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_2 \wedge d \theta +(Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dy_1 \wedge d \theta.
\end{align*}
Next we have that
\begin{align*}
L^{-1} ( - (d\alpha)_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }} ) =&- (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 +(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \\
&+(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_2 -(Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dy_1,
\end{align*}
and so
\begin{align*}
d& (L^{-1} ( - (d\alpha)_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }} ) ) \wedge \theta=\\
=&
- X_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge \theta
+Y_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_1 \wedge \theta\\
&
+Y_2(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_2 \wedge \theta
-X_2 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge \theta\\
&
-Y_1 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_1 \wedge \theta
-Y_2(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_2 \wedge \theta\\
&
+X_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_2\wedge \theta
+X_2 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_2 \wedge dy_2\wedge \theta\\
&
+Y_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_1 \wedge dy_2\wedge \theta
-X_1 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_1 \wedge dy_1 \wedge \theta\\
&
-X_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge \theta
+Y_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dy_1 \wedge dy_2 \wedge \theta.
\end{align*}
Finally we can put all the pieces together and compute $D([\alpha]_{I^2}) =d \left ( \alpha + L^{-1} ( - (d\alpha)_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }} ) \wedge \theta \right ) =
d \alpha + d \left ( L^{-1} ( - (d\alpha)_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }} ) \right ) \wedge \theta + L^{-1} ( - (d\alpha)_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }} ) \wedge d \theta $. \\
First notice that part of $d \alpha$ cancels exactly with $L^{-1} ( - (d\alpha)_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }} ) \wedge d \theta $. Then we have
\begin{align*}
D&([\alpha]_{I^2}) =\\
=& T \alpha_1 dx_1 \wedge dx_2 \wedge \theta+ T \alpha_3 dx_1 \wedge dy_2 \wedge \theta + T \alpha_4 dx_2 \wedge dy_1 \wedge \theta + T \alpha_6 dy_1 \wedge dy_2 \wedge \theta \\
&+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta ) \\
& - X_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge \theta +Y_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_1 \wedge \theta\\
&+ Y_2(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_2 \wedge \theta -X_2 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge \theta\\
&-Y_1 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_1 \wedge \theta -Y_2(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_2 \wedge \theta\\
&+X_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_2\wedge \theta +X_2 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_2 \wedge dy_2\wedge \theta\\
&+Y_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_1 \wedge dy_2\wedge \theta -X_1 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_1 \wedge dy_1 \wedge \theta\\
&-X_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge \theta +Y_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dy_1 \wedge dy_2 \wedge \theta\\\\
=&( T \alpha_1 - X_1 Y_1 \alpha_1 +X_1 X_2 \beta -X_1 X_1 \alpha_4 -X_2 Y_2 \alpha_1 +X_2 X_2 \alpha_3 + X_2 X_1 \beta ) dx_1 \wedge dx_2 \wedge \theta \\
&+( T \alpha_3 -Y_2 Y_2 \alpha_1+ Y_2 X_2 \alpha_3 + Y_2 X_1 \beta +X_1 Y_2 \beta -X_1 Y_1 \alpha_3 +X_1 X_1 \alpha_6 ) dx_1 \wedge dy_2 \wedge \theta\\
&+ (T \alpha_4 +Y_1 Y_1 \alpha_1 - Y_1 X_2 \beta +Y_1 X_1 \alpha_4 -X_2 Y_2 \alpha_4-X_2 Y_1 \beta -X_2 X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge \theta \\
&+(T \alpha_6 + Y_1 Y_2 \beta -Y_1 Y_1 \alpha_3 +Y_1 X_1 \alpha_6 +Y_2 Y_2 \alpha_4+Y_2 Y_1 \beta + Y_2 X_2 \alpha_6 ) dy_1 \wedge dy_2 \wedge \theta\\
&+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta )\\
&+ (Y_2Y_1 \alpha_1 - Y_2 X_2 \beta + Y_2X_1 \alpha_4 +X_2 Y_2 \beta -X_2 Y_1 \alpha_3 +X_2 X_1 \alpha_6 )
dx_2 \wedge dy_2\wedge \theta\\
&+(-Y_1 Y_2 \alpha_1 +Y_1 X_2 \alpha_3 +Y_1 X_1 \beta -X_1 Y_2 \alpha_4 -X_1 Y_1 \beta -X_1 X_2 \alpha_6 )
dx_1 \wedge dy_1 \wedge \theta\\\\
=&\left [ ( - X_1 Y_1 -Y_2 X_2) \alpha_1 +X_2 X_2 \alpha_3 -X_1 X_1 \alpha_4 +2X_1 X_2 \beta \right ]
dx_1 \wedge dx_2 \wedge \theta \\
&
+\left [
-Y_2 Y_2 \alpha_1
+(X_2 Y_2 -X_1 Y_1 )\alpha_3
+X_1 X_1 \alpha_6
+2X_1 Y_2 \beta
\right ]
dx_1 \wedge dy_2 \wedge \theta\\
&
+\left [
+Y_1 Y_1 \alpha_1
+(Y_1 X_1 -Y_2 X_2) \alpha_4
-X_2 X_2 \alpha_6
-2 X_2 Y_1 \beta
\right ]
dx_2 \wedge dy_1 \wedge \theta \\
&
+\left [
-Y_1 Y_1 \alpha_3
+Y_2 Y_2 \alpha_4
+(Y_1 X_1 + X_2 Y_2) \alpha_6
+ 2Y_1 Y_2 \beta
\right ]
dy_1 \wedge dy_2 \wedge \theta\\
&
+\left [
-Y_1 Y_2 \alpha_1
+Y_1 X_2 \alpha_3
-X_1 Y_2 \alpha_4
-X_1 X_2 \alpha_6
\right ]
( dx_1 \wedge dy_1 \wedge \theta
-dx_2 \wedge dy_2\wedge \theta).
\end{align*}
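As a sanity check on how the $T \alpha_i$ terms were absorbed when collecting coefficients above, one can verify the underlying operator identities in a concrete coordinate model of the Heisenberg vector fields. The model below ($X_j = \partial_{x_j} - y_j \partial_t$, $Y_j = \partial_{y_j}$, $T = \partial_t$) is our own choice, consistent with $d\theta = -\sum_j dx_j \wedge dy_j$; only the relation $[X_j, Y_j] = T$ matters for the computation.

```python
import sympy as sp

x1, x2, y1, y2, t = sp.symbols('x1 x2 y1 y2 t')
xs, ys = (x1, x2), (y1, y2)

def X(j, g):  # X_j = d/dx_j - y_j d/dt  (one standard Heisenberg model)
    return sp.diff(g, xs[j - 1]) - ys[j - 1] * sp.diff(g, t)

def Y(j, g):  # Y_j = d/dy_j
    return sp.diff(g, ys[j - 1])

def T(g):     # T = d/dt, so that [X_j, Y_j] = T
    return sp.diff(g, t)

f = sp.Function('f')(x1, x2, y1, y2, t)

# the commutation relation used throughout: T = X_j Y_j - Y_j X_j
for j in (1, 2):
    assert sp.simplify(X(j, Y(j, f)) - Y(j, X(j, f)) - T(f)) == 0

# dx_1 ^ dx_2 ^ theta coefficient: T - X1 Y1 - X2 Y2 = -X1 Y1 - Y2 X2
assert sp.simplify(
    (T(f) - X(1, Y(1, f)) - X(2, Y(2, f))) - (-X(1, Y(1, f)) - Y(2, X(2, f)))
) == 0

# dx_1 ^ dy_2 ^ theta coefficient: T + Y2 X2 - X1 Y1 = X2 Y2 - X1 Y1
assert sp.simplify(
    (T(f) + Y(2, X(2, f)) - X(1, Y(1, f))) - (X(2, Y(2, f)) - X(1, Y(1, f)))
) == 0
```

The same pattern checks the remaining brackets; each $T\alpha_i$ term combines with one mixed second-order term via $[X_j, Y_j] = T$.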
\begin{comment}
\begin{align*}
D([\alpha]_{I^2}) &=d \left ( \alpha + L^{-1} ( - (d\alpha)_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }} ) \wedge \theta \right )\\
&
\begin{array}{lll}
=&\cancel{ (Y_1 \alpha_1 - X_2 \beta+ X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge dy_1 } &\rdelim\}{10}{1em}[$d\alpha$]\\
&+\cancel{(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge dy_2} \\
&+\cancel{(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_1 \wedge dy_2} \\
&+\cancel{(Y_2 \alpha_4 + Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge dy_2 } \\\\
&+ T \alpha_1 dx_1 \wedge dx_2 \wedge \theta+ T \alpha_3 dx_1 \wedge dy_2 \wedge \theta \\
&+ T \alpha_4 dx_2 \wedge dy_1 \wedge \theta + T \alpha_6 dy_1 \wedge dy_2 \wedge \theta \\
&+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta ) \\\\
\end{array}
\end{align*}
\begin{align*}
&
\begin{array}{lll}
- X_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_1 \wedge dx_2 \wedge \theta&\rdelim\}{12}{1em}[ $d \left (L^{-1} \left ( - \left (d\alpha \right )_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }} \right ) \right ) \wedge \theta$]\\
+Y_1 (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_1 \wedge \theta\\
+ Y_2(Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dy_2 \wedge \theta\\
-X_2 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge \theta\\
-Y_1 (Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_1 \wedge \theta\\
-Y_2(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dy_2 \wedge \theta\\
+X_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_1 \wedge dy_2\wedge \theta\\
+X_2 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dx_2 \wedge dy_2\wedge \theta\\
+Y_1 (Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_1 \wedge dy_2\wedge \theta\\
-X_1 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_1 \wedge dy_1 \wedge \theta\\
-X_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dx_2 \wedge dy_1 \wedge \theta\\
+Y_2 (Y_2 \alpha_4+ Y_1 \beta + X_2 \alpha_6 ) dy_1 \wedge dy_2 \wedge \theta\\
+\cancel{ (Y_1 \alpha_1 - X_2 \beta + X_1 \alpha_4 ) dx_2 \wedge dx_1 \wedge dy_1 }&\rdelim\}{6}{1em}[$L^{-1} \left ( - \left (d\alpha \right )_{\vert_{{\prescript{}{}\bigwedge}^{k} \mathfrak{h}_1 }}\right ) \wedge d\theta$]\\
\cancel{-(Y_2 \alpha_1 - X_2 \alpha_3 - X_1 \beta ) dx_1 \wedge dx_2 \wedge dy_2 }\\
\cancel{-(Y_2 \beta - Y_1 \alpha_3 + X_1 \alpha_6 ) dy_2 \wedge dx_1 \wedge dy_1}\\
+\cancel{(Y_2 \alpha_4 + Y_1 \beta + X_2 \alpha_6 ) dy_1 \wedge dx_2 \wedge dy_2 } =\\
\end{array}\\\\
=&( \underbrace{T}_{ =X_2 Y_2-Y_2 X_2} \alpha_1 - X_1 Y_1 \alpha_1 +X_1 X_2 \beta -X_1 X_1 \alpha_4 -X_2 Y_2 \alpha_1 +X_2 X_2 \alpha_3 + X_2 X_1 \beta )
dx_1 \wedge dx_2 \wedge \theta \\
&
+( T \alpha_3 -Y_2 Y_2 \alpha_1+ Y_2 X_2 \alpha_3 + Y_2 X_1 \beta +X_1 Y_2 \beta -X_1 Y_1 \alpha_3 +X_1 X_1 \alpha_6 )
dx_1 \wedge dy_2 \wedge \theta\\
&
+ (T \alpha_4 +Y_1 Y_1 \alpha_1 - Y_1 X_2 \beta +Y_1 X_1 \alpha_4 -X_2 Y_2 \alpha_4-X_2 Y_1 \beta -X_2 X_2 \alpha_6 )
dx_2 \wedge dy_1 \wedge \theta \\
&
+(T \alpha_6 + Y_1 Y_2 \beta -Y_1 Y_1 \alpha_3 +Y_1 X_1 \alpha_6 +Y_2 Y_2 \alpha_4+Y_2 Y_1 \beta + Y_2 X_2 \alpha_6 )
dy_1 \wedge dy_2 \wedge \theta\\
&
+T \beta (dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta )\\
&
+ (Y_2Y_1 \alpha_1 - Y_2 X_2 \beta + Y_2X_1 \alpha_4 +X_2 Y_2 \beta -X_2 Y_1 \alpha_3 +X_2 X_1 \alpha_6 )
dx_2 \wedge dy_2\wedge \theta\\
&
+(-Y_1 Y_2 \alpha_1 +Y_1 X_2 \alpha_3 +Y_1 X_1 \beta -X_1 Y_2 \alpha_4 -X_1 Y_1 \beta -X_1 X_2 \alpha_6 )
dx_1 \wedge dy_1 \wedge \theta\\\\
=&\left [ ( - X_1 Y_1 -Y_2 X_2) \alpha_1 +X_2 X_2 \alpha_3 -X_1 X_1 \alpha_4 +2X_1 X_2 \beta \right ]
dx_1 \wedge dx_2 \wedge \theta \\
&
+\left [
-Y_2 Y_2 \alpha_1
+(X_2 Y_2 -X_1 Y_1 )\alpha_3
+X_1 X_1 \alpha_6
+2X_1 Y_2 \beta
\right ]
dx_1 \wedge dy_2 \wedge \theta\\
&
+\left [
+Y_1 Y_1 \alpha_1
+(Y_1 X_1 -Y_2 X_2) \alpha_4
-X_2 X_2 \alpha_6
-2 X_2 Y_1 \beta
\right ]
dx_2 \wedge dy_1 \wedge \theta \\
&
+\left [
-Y_1 Y_1 \alpha_3
+Y_2 Y_2 \alpha_4
+(Y_1 X_1 + X_2 Y_2) \alpha_6
+ 2Y_1 Y_2 \beta
\right ]
dy_1 \wedge dy_2 \wedge \theta\\
&
+\left [
-Y_1 Y_2 \alpha_1
+Y_1 X_2 \alpha_3
-X_1 Y_2 \alpha_4
-X_1 X_2 \alpha_6
\right ]
( dx_1 \wedge dy_1 \wedge \theta
-dx_2 \wedge dy_2\wedge \theta).
\end{align*}
\end{comment}
\end{proof}
\section{General Rumin Differential Operator $d_c$ in $\mathbb{H}^1$ and $\mathbb{H}^2$}\label{A}
In the second chapter we introduced the Rumin complex by defining three different types of operators ($d_Q^{(k)}$ for $k<n$ and $k>n$, $D$ for $k=n$). It turns out that such a complex
can be written in terms of one general Rumin operator $d_c$. Here we show $d_c$ explicitly for the first and second Heisenberg groups. We follow the presentation of \cite{TRIP}, giving here only the minimum necessary amount of detail.
\begin{defin}[See 11.22 in \cite{TRIP}]
Recall Remark \ref{Hcarnot}, Definition \ref{dual_basis} and Notation \ref{dfnota}. We say that $\alpha \in \Omega^1, \ \alpha \neq 0$, has \emph{pure weight} $p$ if its dual vector field belongs to the $p$-layer $\mathfrak{h}_p$ of the algebra $\mathfrak{h}$. In this case write $w(\alpha)=p$.\\
More generally, $\beta \in \Omega^k, \ \beta \neq 0$, has \emph{pure weight} $p$ if $\beta$ can be expressed as a linear combination of $k$-forms $\theta_{i_1} \wedge \dots \wedge \theta_{i_k}$ such that $w(\theta_{i_1}) + \dots + w(\theta_{i_k})=p$ for all such forms.\\
Denote by $\Omega^{k,p}$ the span of the $k$-forms of pure weight $p$.
\end{defin}
\begin{ex}
In the Heisenberg group $\mathbb{H}^n$
\begin{align*}
&w(f)=0, \quad f \in \Omega^0,\\
&w(dx_j)=w(dy_j)=1, \quad j=1,\dots,n,\\
&w(\theta)=2,\\
&w(dx_1 \wedge dx_2)=2,\\
&w(d \theta)= w \left ( - \sum_{j=1}^n dx_j \wedge dx_{n+j} \right )=2,\\
&w(dx_1 \wedge \theta)=3,\\
&w(dx_1 \wedge \dots \wedge dx_n \wedge dy_1 \wedge \dots \wedge dy_n \wedge \theta)=2n+2.
\end{align*}
\end{ex}
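The weight bookkeeping in this example is purely combinatorial: each horizontal covector contributes $1$ and $\theta$ contributes $2$. A minimal sketch (the tuple encoding of basis forms is ours, not from \cite{TRIP}):

```python
# Encode a basis k-form of H^n as a tuple of covector names; its weight is
# the sum of the single-covector weights: 1 for dx_j, dy_j and 2 for theta.
def weight(form):
    return sum(2 if c == 'theta' else 1 for c in form)

assert weight(('dx1', 'dx2')) == 2        # w(dx1 ^ dx2) = 2
assert weight(('dx1', 'theta')) == 3      # w(dx1 ^ theta) = 3

n = 4                                     # any n works
top = tuple(f'dx{j}' for j in range(1, n + 1)) \
    + tuple(f'dy{j}' for j in range(1, n + 1)) + ('theta',)
assert weight(top) == 2 * n + 2           # the top form has weight 2n + 2
```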
\noindent
Recall Notation \ref{dfnota} and notice that all differential $k$-forms $ \{ \theta_{i_1} \wedge \dots \wedge \theta_{i_k} \}_{1\leq i_1 \leq \dots \leq i_k \leq 2n+1}$ are left-invariant. Denote such $k$-forms as $\theta_i^k $, $i \in \mathbb{N}$.\\
Furthermore, any other differential form with constant coefficients is left-invariant.
\begin{obs}[See 11.25 in \cite{TRIP}]\label{samew}
Let $\alpha \in \Omega^{k,p}$ be a left-invariant $k$-form of pure weight $p$ such that $d \alpha \neq 0$. Then $w(d\alpha)=w(\alpha)$.
\end{obs}
\begin{obs}[See page 90 in \cite{TRIP}]
A general form $\alpha\in \Omega^{k,p}$ can be expressed as
$$
\alpha =\sum_{i} f_i \theta_i^k,
$$
where $\spn_{i} \{ \theta_i^k \} = \Omega^{k,p}$. Then its exterior differential is
$$
d\alpha =\sum_{i} d \left ( f_i \theta_i^k \right ) = \sum_{i} \left ( \sum_{j=1}^{2n+1} W_j f_i \theta_j \wedge \theta_i^k + f_i d \theta_i^k \right ).
$$
This shows that, whenever the terms are not zero,
\begin{align*}
w \left ( \sum_{j=1}^{2n} W_j f_i \theta_j \wedge \theta_i^k \right ) &=p+1,\\
w \left ( W_{2n+1} f_i \theta_{2n+1} \wedge \theta_i^k \right ) &=p+2,\\
w(f_i d \theta_i^k) &=p.
\end{align*}
In particular, the last equality holds by Observation \ref{samew}.
\end{obs}
\noindent
This ensures that the following definition is well posed.
\begin{defin}[See definition 11.26 in \cite{TRIP}]\label{defind0}
Consider $\alpha \in \Omega^{k,p}$, then we can write:
$$
d \alpha = d_0 \alpha + d_1 \alpha + d_2 \alpha,
$$
where $d_i \alpha$ denotes the part of $d \alpha$ which increases the weight by $i$.\\
In particular $d_0 \alpha$ denotes the part of $d \alpha$ which does not increase the weight of the form. Then we can define
\begin{align*}
d_0 : \Omega^{k,p} \to \Omega^{k+1,p}, \ \sum_{i} f_i \theta_i^k \mapsto \sum_{i} f_i d \theta_i^k,
\end{align*}
whenever $d \theta_i^k \neq 0$.\\
One can easily check that $\theta$ is the only basis form whose differential does not change its weight. Therefore, considering a simple differential $k$-form $\alpha$ of the kind
$$
\alpha = f \theta_{i_1} \wedge \dots \wedge \theta_{i_{k}},
$$
for $1 \leq i_1 \leq \dots \leq i_{k} \leq 2n$, one always has
$$
d_0 (\alpha)=0.
$$
On the other hand, considering a simple differential $k$-form
$$
\alpha = f \theta_{i_1} \wedge \dots \wedge \theta_{i_{k-1}} \wedge \theta,
$$
for $1 \leq i_1 \leq \dots \leq i_{k-1} \leq 2n$, one has that
$$
d_0(\alpha) = f \theta_{i_1} \wedge \dots \wedge \theta_{i_{k-1}} \wedge d \theta.
$$
\end{defin}
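Since $d_0$ simply replaces $\theta$ (in place) by $d\theta$ in a monomial, its action is easy to mechanize. The sketch below is ours (monomials encoded as tuples in a fixed basis order), assuming the convention $d\theta = -\sum_j dx_j \wedge dy_j$ used in this text; it reproduces the $\mathbb{H}^2$ mappings computed case by case below.

```python
def basis(n):
    # ordered covector names of H^n: dx_1 < ... < dx_n < dy_1 < ... < dy_n < theta
    return [f'dx{j}' for j in range(1, n + 1)] \
         + [f'dy{j}' for j in range(1, n + 1)] + ['theta']

def sort_sign(seq, order):
    # canonically sort a wedge monomial, tracking the permutation sign;
    # returns (None, 0) if a covector repeats (the monomial vanishes)
    fs, sign = list(seq), 1
    for _ in range(len(fs)):
        for j in range(len(fs) - 1):
            if order.index(fs[j]) > order.index(fs[j + 1]):
                fs[j], fs[j + 1] = fs[j + 1], fs[j]
                sign = -sign
    if len(set(fs)) < len(fs):
        return None, 0
    return tuple(fs), sign

def d0(form, n):
    # form: dict {monomial tuple: coefficient}; d_0 replaces theta in place by
    # dtheta = -(dx_1^dy_1 + ... + dx_n^dy_n) and kills theta-free monomials
    order, out = basis(n), {}
    for mono, c in form.items():
        if 'theta' not in mono:
            continue
        pos = mono.index('theta')
        for j in range(1, n + 1):
            seq = mono[:pos] + (f'dx{j}', f'dy{j}') + mono[pos + 1:]
            key, s = sort_sign(seq, order)
            if s:
                out[key] = out.get(key, 0) - c * s
    return {k: v for k, v in out.items() if v}

# the k = 1 and k = 2 computations for H^2:
assert d0({('theta',): 1}, 2) == {('dx1', 'dy1'): -1, ('dx2', 'dy2'): -1}
assert d0({('dx1', 'theta'): 1}, 2) == {('dx1', 'dx2', 'dy2'): -1}
```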
\begin{defin}[11.32 in \cite{TRIP}]
Thanks to the isomorphism in definition 11.32 in \cite{TRIP}
\begin{equation}\label{isom}
d_{0_{\vert_{ \frac{\Omega^k}{\Ker d_0}}}} : \frac{\Omega^k}{\Ker d_0} \stackrel{\cong}{\to} \Ima d_0,
\end{equation}
one has that, for all $\beta \in \Omega^{k+1} $, there exists a unique $\alpha \in \Omega^k$, $\alpha \bot \Ker d_0$, such that $d_0 \alpha =\beta+\xi$, with $\xi \in (\Ima d_0)^\bot$.
So one can define $d_0^{-1}$ as
\begin{align*}
d_0^{-1}: \Omega^{k+1} \to (\Ker d_0)^\bot, \ \beta \mapsto d_0^{-1} \beta := \alpha.
\end{align*}
\end{defin}
\noindent
One can also define the operator
$$
D := d_0^{-1} (d-d_0),
$$
which is nilpotent (Remark 11.33.1 in \cite{TRIP}): there exists $N\in \mathbb{N}$ such that $D^N\equiv 0$. This operator is not to be confused with the second-order differential operator $D$ in the Rumin complex. Call $k_{max}$ the maximum non-trivial exponent for $D$. Then the following operators are well-defined:
\begin{align*}
P &:=\sum_{k=0}^{k_{max}} (-D)^k,\\
Q &:=P d_0^{-1},\\
{\prescript{}{}\prod}_{E} &:= \text{Id} -Qd- dQ,\\
{\prescript{}{}\prod}_{\mathcal{E}_0} &:= \text{Id} - d_0^{-1} d_0- d_0 d_0^{-1}.
\end{align*}
\begin{prop}[See theorem 11.40 of \cite{TRIP}]
The Rumin complex of the Heisenberg group $\mathbb{H}^n$ can be written as:
$$
0 \to \mathbb{R} \to C^\infty \stackrel{d_c^{(0)}}{\to} \mathcal{E}_0^1 \stackrel{d_c^{(1)}}{\to} \dots \stackrel{d_c^{(n-1)}}{\to} \mathcal{E}_0^n \stackrel{d_c^{(n)}}{\to}\mathcal{E}_0^{n+1} \stackrel{d_c^{(n+1)}}{\to} \dots \stackrel{d_c^{(2n)}}{\to} \mathcal{E}_0^{2n+1} \to 0,
$$
$$
\text{i.e.,} \quad ( \mathcal{E}_0^*, d_c),
$$
where
$$
\mathcal{E}_0^* = \bigoplus_{k=1}^{k_{max}} \mathcal{E}_0^k ,\quad \mathcal{E}_0^k := \Ker d_0^{(k)} \cap \left ( \Ima d_0^{(k-1)} \right )^\bot \quad \text{and} \quad d_c:= {\prescript{}{}\prod}_{\mathcal{E}_0} d {\prescript{}{}\prod}_E.
$$
\end{prop}
\noindent
We now show how this construction acts in practice, and when the operators just defined play an active role. In particular:
\begin{obs}\label{nullll}
Note that if $d_0 \equiv 0$ at some degree $k=0,\dots,2n$, then all the other operators just defined collapse to:
$$
\begin{cases}
D = 0,\\
P =0,\\
Q = 0,\\
{\prescript{}{}\prod}_{E} = \text{Id},\\
{\prescript{}{}\prod}_{\mathcal{E}_0} = \text{Id}.
\end{cases}
$$
In such a case $d_c$, on its own domain of definition, acts simply as $d$.
\end{obs}
\begin{obs}
In the case of the first Heisenberg group, $n=1$, we have three operators $d_c^{(k)}$, for $k\in \{0,1,2\}$, and so three corresponding operators $d_0^{(k)}$ with, by \eqref{isom},
\begin{align*}
d_{0_{\vert_{ \frac{\Omega^k}{\Ker d_0^{(k)} }}}}^{(k)} : \frac{\Omega^k}{\Ker d_0^{(k)} } \stackrel{\cong}{\to} \Ima d_0^{(k)} .
\end{align*}
$\blacktriangleright$ If $k=0$, by Definition \ref{defind0} of $d_0$, it follows immediately that $\Ker d_0^{(0)}=\Omega^0$ and so $\frac{\Omega^0}{\Ker d_0^{(0)}}=\{ 0 \} $. Then $ d_0^{(0)} \equiv 0$.\\\\
$\blacktriangleright$ If $k=1$, again by Definition \ref{defind0}, we have that
$$
\Ker d_0^{(1)}=\spn \{dx,dy\}, \quad \text{and so} \quad \frac{\Omega^1}{\Ker d_0^{(1)}}=\spn \{ \theta \} \quad \text{and} \quad \Ima d_0^{(1)} = \spn \{ dx \wedge dy \} .
$$
So we get
\begin{align*}
d_{0_{\vert_{ \frac{\Omega^1}{\Ker d_0^{(1)} }}}}^{(1)} : \frac{\Omega^1}{\Ker d_0^{(1)} } & \stackrel{\cong}{\to} \Ima d_0^{(1)} , \\
\theta & \mapsto - dx \wedge dy.
\end{align*}
If we were to write $d_c^{(1)}$ explicitly with the operators introduced here, the final result would give back $D$ as in Proposition \ref{exH1}.\\\\
$\blacktriangleright$ If $k=2$ it is easy to check that $\Ker d_0^{(2)}=\Omega^2$ and so $\frac{\Omega^2}{\Ker d_0^{(2)}}=\{ 0 \}$. Then, again, $ d_0^{(2)} \equiv 0$.
\end{obs}
\begin{comment}
\noindent
The case of $\mathbb{H}^1$ will be rather trivial but it is useful as an example.\\\\
\textbf{First Heisenberg group $\mathbb{H}^1$}\\\\
In the first Heisenberg group there are three cases of $d_c^{(k)}$, with $k=0,1,2$.
\begin{obs}
\textbf{Case $k=0$ in $\mathbb{H}^1$.}\\
In this case $d_0$ plays no role in the complex:
\begin{align*}
d_0: \frac{\Omega^0}{\Ker d_0} & \to \Ima d_0\\
0 & \mapsto 0
\end{align*}
where, easily, $\Ker d_0=\Omega^0$, $\frac{\Omega^0}{\Ker d_0}=\{ 0 \}$ and $\Ima d_0= \{ 0 \}\subseteq \Omega^1$.\\
By Observation \ref{nullll}, it follows then that
$d_c$ acts as $d$ on $: C^\infty \to \mathcal{E}_0^1=\spn \{dx,dy \}$.
\end{obs}
\begin{obs}
\textbf{Case $k=1$ in $\mathbb{H}^1$.}\\
In this case $d_0$ does play a role, indeed:
\begin{align*}
d_0: \frac{\Omega^1}{\Ker d_0} & \to \Ima d_0\\
\theta & \mapsto - dx \wedge dy
\end{align*}
where $\Ker d_0=\spn \{dx,dy\}$, $\frac{\Omega^1}{\Ker d_0}=\spn \{ \theta \}$ and $\Ima d_0 = \spn \{ dx \wedge dy \} \subseteq \Omega^2.$\\
In the same way, also $d_0^{-1}$ plays a role
\begin{align*}
d_0^{-1}: \Omega^2 & \to (\Ker d_0)^\bot = \spn \{ \theta \} \\
dx \wedge dy (\in \Ima d_0) & \mapsto -\theta \\
dx \wedge \theta & \mapsto 0\\
dy \wedge \theta & \mapsto 0
\end{align*}
and so $d_c$ will act differently from $d$. If we were to write $d_c$ explicitly with the operators introduced here, the final result would give us back $D$ as in \ref{exH1}.
\end{obs}
\begin{obs}
\textbf{Case $k=2$ in $\mathbb{H}^1$.}\\
This is the last case in $\mathbb{H}^1$. This case is similar to the first one:
\begin{align*}
d_0: \frac{\Omega^2}{\Ker d_0} & \to \Ima d_0\\
0 & \mapsto 0
\end{align*}
where $\Ker d_0=\Omega^2$, $\frac{\Omega^2}{\Ker d_0}=\{ 0 \}$ and $\Ima d_0 = \{ 0 \} \subseteq \Omega^3.$
Consequently, by observation \ref{nullll},
$d_c$ acts as $d$ on $:\mathcal{E}_0^2=\spn \{dx \wedge \theta,dy \wedge \theta \} \to \mathcal{E}_0^3=\spn \{dx \wedge dy \wedge \theta \}$.\\
\end{obs}
\end{comment}
\begin{obs}
In the case of the second Heisenberg group, $n=2$, we have five operators $d_c^{(k)}$, for $k\in \{0,\dots,4\}$, with five corresponding operators $d_0^{(k)}$. Again, by \eqref{isom},
\begin{align*}
d_{0_{\vert_{ \frac{\Omega^k}{\Ker d_0^{(k)} }}}}^{(k)} : \frac{\Omega^k}{\Ker d_0^{(k)} } \stackrel{\cong}{\to} \Ima d_0^{(k)} .
\end{align*}
$\blacktriangleright$ If $k=0$, as before, by Definition \ref{defind0} it follows that $\Ker d_0^{(0)}=\Omega^0$ and so $\frac{\Omega^0}{\Ker d_0^{(0)}}=\{ 0 \} $. Then $ d_0^{(0)} \equiv 0$.\\\\%1111111111111111111111111111111111111111111111111111111
$\blacktriangleright$ If $k=1$, take $\alpha = f_1 dx_1 + f_2 dx_2 + f_3 dy_1 + f_4 dy_2 + f_5 \theta \in \Omega^1$ and compute
$$
d_0 \alpha= - f_5 (dx_1 \wedge dy_1 + dx_2 \wedge dy_2).
$$
Consequently we have that $\Ker d_0^{(1)}=\spn \{dx_1, dx_2,dy_1,dy_2\}$, and so $ \frac{\Omega^1}{\Ker d_0^{(1)}}=\spn \{ \theta \} $ and $\Ima d_0^{(1)} = \spn \{ dx_1 \wedge dy_1 + dx_2\wedge dy_2 \} .$ \\
The action of $d_0^{(1)}$ is then
\begin{align*}
d_{0_{\vert_{ \frac{\Omega^1}{\Ker d_0^{(1)} }}}}^{(1)} : \frac{\Omega^1}{\Ker d_0^{(1)}} & \to \Ima d_0^{(1)},\\
\theta & \mapsto -dx_1 \wedge dy_1 - dx_2 \wedge dy_2.
\end{align*}
Writing $d_c^{(1)}$ explicitly would give $d_Q^{(1)}$ as in Proposition \ref{exH2}.\\\\%222222222222222222222222222222
$\blacktriangleright$ If $k=2$, one can compute that $\Ker d_0=\spn \{ dx_1 \wedge dx_2, dx_1 \wedge dy_1, dx_1 \wedge dy_2, dx_2 \wedge dy_1, dx_2 \wedge dy_2, dy_1 \wedge dy_2 \}$, $ \frac{\Omega^2}{\Ker d_0}=\spn \{ dx_1 \wedge \theta, dy_1 \wedge \theta, dx_2 \wedge \theta, dy_2 \wedge \theta \}$ and $ \Ima d_0 = \spn \{ dx_1 \wedge dx_2 \wedge dy_2, dy_1 \wedge dx_2 \wedge dy_2, dx_2 \wedge dx_1 \wedge dy_1, dy_2 \wedge dx_1 \wedge dy_1 \} \subseteq \Omega^3.$\\
The action of $d_0^{(2)}$ is then
\begin{align*}
d_{0_{\vert_{ \frac{\Omega^2}{\Ker d_0^{(2)} }}}}^{(2)} : \frac{\Omega^2}{\Ker d_0^{(2)}} & \to \Ima d_0^{(2)},\\
dx_1 \wedge \theta & \mapsto - dx_1 \wedge dx_2 \wedge dy_2,\\
dy_1 \wedge \theta & \mapsto - dy_1 \wedge dx_2 \wedge dy_2,\\
dx_2 \wedge \theta & \mapsto - dx_2 \wedge dx_1 \wedge dy_1,\\
dy_2 \wedge \theta & \mapsto - dy_2 \wedge dx_1 \wedge dy_1.
\end{align*}
Writing $d_c^{(2)}$ explicitly would give $D$ as in Proposition \ref{exH2}.\\\\%33333333333333333333333333333333333333
$\blacktriangleright$ If $k=3$, by computation one can see that $\Ker d_0=\spn \{ dx_1 \wedge dx_2 \wedge dy_1, dx_1 \wedge dx_2 \wedge dy_2, dx_1 \wedge dx_2 \wedge \theta, dx_1 \wedge dy_1 \wedge dy_2, dx_1 \wedge dy_2 \wedge \theta, dx_2 \wedge dy_1 \wedge dy_2, dx_2 \wedge dy_1 \wedge \theta, dy_1 \wedge dy_2 \wedge \theta, dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta \}$, $\frac{\Omega^3}{\Ker d_0}=\spn \{ dx_1 \wedge dy_1 \wedge \theta + dx_2 \wedge dy_2 \wedge \theta \}$ and $\Ima d_0 = \spn \{ dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 \} \subseteq \Omega^4.$\\
The action of $d_0^{(3)}$ is then
\begin{align*}
d_{0_{\vert_{ \frac{\Omega^3}{\Ker d_0^{(3)} }}}}^{(3)} : \frac{\Omega^3}{\Ker d_0^{(3)}} & \to \Ima d_0^{(3)},\\
dx_1 \wedge dy_1 \wedge \theta + dx_2 \wedge dy_2 \wedge \theta & \mapsto -dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2.
\end{align*}
Once again, also in this case writing $d_c^{(3)}$ explicitly would give $d_Q^{(3)}$ in Proposition \ref{exH2}.\\\\%444444444
$\blacktriangleright$ If $k=4$, we are in the last case. Take $\alpha = f_1 \widehat{dx_1} + f_2 \widehat{dx_2} + f_3 \widehat{dy_1}+ f_4 \widehat{dy_2} + f_5 \widehat{\theta}$, where the hat denotes the wedge product of all the basis $1$-forms except the one under the hat. Then
$$
d_0 \alpha=0,
$$
and so $ \Ker d_0=\Omega^4$, $ \frac{\Omega^4}{\Ker d_0}=\{ 0 \}$ and $ \Ima d_0 = \{ 0 \} \subseteq \Omega^5$. Then, again, $ d_0^{(4)} \equiv 0$.
\end{obs}
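The kernels and images of $d_0^{(k)}$ listed in this observation can be cross-checked numerically: on a basis $k$-form $\beta \wedge \theta$ (with $\beta$ horizontal), $d_0$ acts as $\beta \mapsto \beta \wedge d\theta$, and it kills $\theta$-free forms, so its rank is the rank of the corresponding Lefschetz-type matrix. The sketch below (encoding ours) recovers the dimensions above for $\mathbb{H}^1$ and $\mathbb{H}^2$.

```python
from itertools import combinations
from math import comb

import sympy as sp

def inv_sign(seq):
    # sign of the permutation sorting seq; 0 if an index repeats
    if len(set(seq)) < len(seq):
        return 0
    inv = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
              if seq[a] > seq[b])
    return (-1) ** inv

def d0_dims(n, k):
    # (dim Ker d_0^(k), dim Im d_0^(k)) in H^n, for k >= 1.
    # Horizontal covectors are indexed 0..2n-1 (dx_j -> j-1, dy_j -> n+j-1);
    # d_0 sends a basis form beta^theta to beta^dtheta, dtheta = -sum dx_j^dy_j.
    H = list(range(2 * n))
    dom = list(combinations(H, k - 1))   # the beta's, i.e. forms containing theta
    cod = list(combinations(H, k + 1))
    col = {m: i for i, m in enumerate(cod)}
    M = sp.zeros(len(cod), max(len(dom), 1))
    for c, beta in enumerate(dom):
        for j in range(n):
            seq = beta + (j, n + j)
            s = inv_sign(seq)
            if s:
                M[col[tuple(sorted(seq))], c] -= s
    rank = M.rank()
    return comb(2 * n + 1, k) - rank, rank

# H^2: the dimensions computed case by case above
assert d0_dims(2, 1) == (4, 1)   # Ker = span{dx1,dx2,dy1,dy2}
assert d0_dims(2, 2) == (6, 4)
assert d0_dims(2, 3) == (9, 1)
assert d0_dims(2, 4) == (5, 0)
# H^1: Ker d_0^(1) = span{dx,dy}, Im d_0^(1) = span{dx^dy}
assert d0_dims(1, 1) == (2, 1)
```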
\begin{comment}
\noindent
\textbf{Second Heisenberg group $\mathbb{H}^2$}\\\\
The second Heisenberg group has more cases: $k=0,1,2,3,4$.
\begin{obs}
\textbf{Case $k=0$ in $\mathbb{H}^2$.}\\%0000000000000000000000000000000000000000000000000000000000000
As before, in the first case $d_0$ plays no role:
\begin{align*}
d_0: \frac{\Omega^0}{\Ker d_0} & \to \Ima d_0\\
0 & \mapsto 0
\end{align*}
where $\Ker d_0=\Omega^0$, $\frac{\Omega^0}{\Ker d_0}=\{ 0 \} $ and $\Ima d_0 = \{ 0 \} \subseteq \Omega^1.$\\
By Observation \ref{nullll}
$d_c$ acts as $d$ on $: C^0 \to \mathcal{E}_0^1=\spn \{dx_1, dx_2, dy_1, dy_2 \}$.
\end{obs}
\begin{obs}
\textbf{Case $k=1$ in $\mathbb{H}^2$.}\\%1111111111111111111111111111111111111111111111111111111111111111
Take $\alpha = f_1 dx_1 + f_2 dx_2 + f_3 dy_1 + f_4 dy_2 + f_5 \theta$ and compute
$$
d_0 \alpha= - f_5 (dx_1 \wedge dy_1 + dx_2 \wedge dy_2).
$$
The action of $d_0$ in this case is then
\begin{align*}
d_0: \frac{\Omega^1}{\Ker d_0} & \to \Ima d_0\\
\theta & \mapsto -dx_1 \wedge dy_1 - dx_2 \wedge dy_2
\end{align*}
where $\Ker d_0=\spn \{dx_1, dx_2,dy_1,dy_2\}$, $\frac{\Omega^1}{\Ker d_0}=\spn \{ \theta \}$ and $\Ima d_0 = \spn \{ dx_1 \wedge dy_1 + dx_2\wedge dy_2 \} \subseteq \Omega^2.$\\
Consequently
\begin{align*}
d_0^{-1}: \Omega^2 & \to (\Ker d_0)^\bot = \spn \{ \theta \} \\
dx_1 \wedge dy_1 + dx_2\wedge dy_2 (\in \Ima d_0) & \mapsto -\theta \\
\gamma \wedge \theta & \mapsto 0\\
dx_1 \wedge dy_1 - dx_2\wedge dy_2 & \mapsto 0
\end{align*}
Finally not only in the middle case but also in this case $d_c$ acts differently than $d$. Writing $d_c$ explicitly would give us the proper $d_Q$ as in Proposition \ref{exH2}.
\end{obs}
\begin{obs}
\textbf{Case $k=2$ in $\mathbb{H}^2$.}\\%2222222222222222222222222222222222222222222222222222222222222222
This is the middle case of $\mathbb{H}^2$, and one can already guess it is the most elaborated:
\begin{align*}
d_0: \frac{\Omega^2}{\Ker d_0} & \to \Ima d_0\\
dx_1 \wedge \theta & \mapsto - dx_1 \wedge dx_2 \wedge dy_2\\
dy_1 \wedge \theta & \mapsto - dy_1 \wedge dx_2 \wedge dy_2\\
dx_2 \wedge \theta & \mapsto - dx_2 \wedge dx_1 \wedge dy_1\\
dy_2 \wedge \theta & \mapsto - dy_2 \wedge dx_1 \wedge dy_1
\end{align*}
where\\
$\Ker d_0=\spn \{ dx_1 \wedge dx_2, dx_1 \wedge dy_1, dx_1 \wedge dy_2, dx_2 \wedge dy_1, dx_2 \wedge dy_2, dy_1 \wedge dy_2 \}$, $ \frac{\Omega^2}{\Ker d_0}=\spn \{ dx_1 \wedge \theta, dy_1 \wedge \theta, dx_2 \wedge \theta, dy_2 \wedge \theta \}$ and $ \Ima d_0 = \spn \{ dx_1 \wedge dx_2 \wedge dy_2, dy_1 \wedge dx_2 \wedge dy_2, dx_2 \wedge dx_1 \wedge dy_1, dy_2 \wedge dx_1 \wedge dy_1 \} \subseteq \Omega^3.$
Consequently
\begin{align*}
d_0^{-1}: \Omega^3 & \to (\Ker d_0)^\bot = \spn \{ dx_1 \wedge \theta, dy_1 \wedge \theta, dx_2 \wedge \theta, dy_2 \wedge \theta \} \\
dx_1 \wedge dx_2 \wedge dy_2 & \mapsto -dx_1 \wedge \theta \\
dy_1 \wedge dx_2 \wedge dy_2 & \mapsto - dy_1 \wedge \theta \\
dx_2 \wedge dx_1 \wedge dy_1 & \mapsto - dx_2 \wedge \theta\\
dy_2 \wedge dx_1 \wedge dy_1 & \mapsto - dy_2 \wedge \theta\\
\gamma \wedge \theta & \mapsto 0
\end{align*}
Also in this case, $d_c$ acts differently than $d$. Writing $d_c$ explicitly would give us the proper $D$ as in Proposition \ref{exH2}.
\end{obs}
\begin{obs}
\textbf{Case $k=3$ in $\mathbb{H}^2$.}\\%3333333333333333333333333333333333333333333333333333333333333333
Take
\begin{align*}
\alpha =& f_1 dx_1 \wedge dx_2 \wedge dy_1 + f_2 dx_1 \wedge dx_2 \wedge dy_2 + f_3 dx_1 \wedge dx_2 \wedge \theta\\
&+ f_4 dx_1 \wedge dy_1 \wedge dy_2 + f_5 dx_1 \wedge dy_1 \wedge \theta + f_6 dx_1 \wedge dy_2 \wedge \theta \\
& + f_7 dx_2 \wedge dy_1 \wedge dy_2 + f_8 dx_2 \wedge dy_1 \wedge \theta + f_9 dx_2 \wedge dy_2 \wedge \theta +f_{10} dy_1 \wedge dy_2 \wedge \theta
\end{align*}
and compute
$$
d_0 \alpha= - ( f_5 + f_9) (dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2) \ \ \ [=0 \Longleftrightarrow f_5 = - f_9].
$$
Then one has that
\begin{align*}
d_0: \frac{\Omega^3}{\Ker d_0} & \to \Ima d_0\\
dx_1 \wedge dy_1 \wedge \theta + dx_2 \wedge dy_2 \wedge \theta & \mapsto -dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2
\end{align*}
where $\Ker d_0=\spn \{ dx_1 \wedge dx_2 \wedge dy_1, dx_1 \wedge dx_2 \wedge dy_2, dx_1 \wedge dx_2 \wedge \theta, dx_1 \wedge dy_1 \wedge dy_2, dx_1 \wedge dy_2 \wedge \theta, dx_2 \wedge dy_1 \wedge dy_2, dx_2 \wedge dy_1 \wedge \theta, dy_1 \wedge dy_2 \wedge \theta, dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta \}$, $\frac{\Omega^3}{\Ker d_0}=\spn \{ dx_1 \wedge dy_1 \wedge \theta + dx_2 \wedge dy_2 \wedge \theta \}$ and $\Ima d_0 = \spn \{ dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 \} \subseteq \Omega^4.$\\
Consequently
\begin{align*}
d_0^{-1}: \Omega^3 & \to (\Ker d_0)^\bot = \spn \{ dx_1 \wedge dy_1 \wedge \theta + dx_2 \wedge dy_2 \wedge \theta \} \\
dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 & \mapsto -dx_1 \wedge dy_1 \wedge \theta - dx_2 \wedge dy_2 \wedge \theta \\
\gamma \wedge \theta & \mapsto 0
\end{align*}
Once again, also in this case $d_c$ acts differently than $d$. $d_c$ explicitly would give us the proper $d_Q$ as in Proposition \ref{exH2}.
\end{obs}
\begin{obs}
\textbf{Case $k=4$ in $\mathbb{H}^2$.}\\%4444444444444444444444444444444444444444444444444444444444444444
This is the last case. Take $\alpha = f_1 \widehat{dx_1} + f_2 \widehat{dx_2} + f_3 \widehat{dy_1}+ f_4 \widehat{dy_2} + f_5 \widehat{\theta}$, where the hat indicates the presence of all other basis elements except the one written below it. Then
$$
d_0 \alpha=0
$$
and so
\begin{align*}
d_0: \frac{\Omega^4}{\Ker d_0} & \to \Ima d_0\\
0 & \mapsto 0
\end{align*}
where $ \Ker d_0=\Omega^4$, $ \frac{\Omega^4}{\Ker d_0}=\{ 0 \}$ and $ \Ima d_0 = \{ 0 \} \subseteq \Omega^5$.\\
By Observation \ref{nullll}
$d_c$ acts as $d$ on $:\mathcal{E}_0^4=\spn \{ \widehat{dx_1}, \widehat{dx_2}, \widehat{dy_1}, \widehat{dy_2} \} \to \mathcal{E}_0^5= \Omega^5$.
\end{obs}
\end{comment}
\begin{comment}
\subsection{Explicit $d_c$ in $\mathbb{H}^1$}
$$
\mathcal{E}_0^0= \underbrace{ \Ker \underbrace{ d_0}_{k=0} }_{\Omega^0} = \Omega^0
$$
$$
\mathcal{E}_0^1= \underbrace{ \Ker \underbrace{ d_0}_{k=1} }_{dx, dy} \cap \underbrace{ (\Ima \underbrace{ d_0}_{k=0})^\bot }_{\Omega^1} = \spn \{dx,dy\}
$$
$$
\mathcal{E}_0^2= \underbrace{ \Ker \underbrace{ d_0}_{k=2} }_{\Omega^2} \cap \underbrace{ (\Ima \underbrace{ d_0}_{k=1})^\bot }_{dx\wedge \theta,dy\wedge \theta} = \spn \{dx\wedge \theta,dy\wedge \theta\}
$$
$$
\mathcal{E}_0^3= \underbrace{ \Ker \underbrace{ d_0}_{k=3} }_{\Omega^3} \cap \underbrace{ (\Ima \underbrace{ d_0}_{k=2})^\bot }_{\Omega^3} = \spn \{dx\wedge dy\wedge \theta\} = \Omega^3
$$
We could write here all the $d_c$'s explicitly but it would turn out to be just what we did already in Propositions \ref{exH1} and \ref{exH2}, both for $\mathbb{H}^1$ and $\mathbb{H}^2$.
\end{comment}
\section{Dimension of the Rumin Complex in $\mathbb{H}^n$}\label{B}
We present formulas to compute the dimensions of the spaces appearing in the Rumin complex. The intent is to see how fast they grow with respect to $n$.
\begin{rem}
Recall from Definition \ref{def_forms} that:
\begin{itemize}
\item $\Omega^k$ is the space of differential $k$-forms in $\mathbb{H}^{n}$;
\item $I^k= \{ \beta \wedge \theta + \gamma \wedge d \theta \ / \ \beta \in \Omega^{k-1}, \ \gamma \in \Omega^{k-2} \}$;
\item $J^k=\{ \alpha \in \Omega^{k} \ / \ \alpha \wedge \theta =0, \ \alpha \wedge d\theta=0 \}$.
\end{itemize}
\end{rem}
\begin{obs}\label{C?}
\begin{itemize}
\item
It is an immediate observation that $\dim \Omega^k = \binom{2n+1}{k}$.
\item
Up to coefficients, a simple differential form $\beta \in \Omega^{k-1}$ such that $\beta \wedge \theta \neq 0$ can be chosen in $C_{k-1}^{2n} = \binom{2n}{k-1}$ ways.
\item
Up to coefficients, a simple differential form $\gamma \in \Omega^{k-2}$ such that $\gamma \wedge d \theta \neq 0$ can be chosen in $C_{k-2}^{2n+1} = \binom{2n+1}{k-2}$ ways.
\end{itemize}
\end{obs}
\begin{proof}
The first point of the observation is straightforward. For the second point, $\beta \in \Omega^{k-1}$ cannot contain $\theta$, otherwise $\beta \wedge \theta$ would be null.\\
For the third point, $ \gamma \in \Omega^{k-2}$ can contain anything. Indeed, to have $d \theta \wedge \gamma =0$ with $\gamma$ simple, one needs $\gamma$ of degree at least $n$, while $k-2$ is always lower than $n$ in the range $k \leq n$ considered below.
\end{proof}
\begin{obs}
To compute the dimension of $I^k$, it is important to know how many elements of its basis can be written both in the form $\beta \wedge \theta$ and in the form $\gamma \wedge d \theta$, with $\beta \in \Omega^{k-1}$, $\beta \wedge \theta \neq 0$, $\gamma \in \Omega^{k-2}$ and $\gamma \wedge d \theta \neq 0$.\\
With these hypotheses, $\beta$ can be written as $\beta = d \theta \wedge \tau$, with $\tau \in \Omega^{k-3}$, while $\gamma$ can be written as $\gamma = \theta \wedge \tau'$, with $\tau' \in \Omega^{k-3}$.\\
Setting
$$
\beta \wedge \theta= \gamma \wedge d \theta \neq 0
$$
one gets $\tau = \tau' \in \Omega^{k-3}$, with $\tau \wedge d\theta \wedge \theta \neq 0$. This means that $\tau$ cannot contain $\theta$. Furthermore, this is the only condition, since it is impossible for a simple $\tau$ to annihilate $d \theta$: as before, that would require degree at least $n$, while $k-3$ is always lower than $n$.\\
Then, up to coefficients, the possibilities for a simple $\tau$ are $C_{k-3}^{2n} = \binom{2n}{k-3}$ .
\end{obs}
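As a concrete instance (added for illustration), take $n=3$ and $k=3$: the three counts above combine by inclusion-exclusion to

```latex
\dim I^3 \;=\; \binom{6}{2} + \binom{7}{1} - \binom{6}{0} \;=\; 15 + 7 - 1 \;=\; 21 \;=\; \binom{7}{2},
```

in agreement with the value for $\mathbb{H}^3$, $k=3$ in the table of the example below.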
\begin{comment}
\begin{obs}[see 2.9 in \cite{FSSC}] For $k \leq n$,
$$
\dim \frac{\Omega^k}{I^k} = \dim \Ker I^k
$$
where $\Ker I^k : =\left \{ V \in \Omega_k \ / \ V(\omega)=0 \ \forall \omega \in I^k \right \}$. Also
$$
\dim J^{2n+1-k} = \dim \frac{\Omega_{2n+1-k}}{\Ker J^{2n+1-k}}
$$
where $\Ker J^{2n+1-k}:=\left \{ V \in \Omega_{2n+1-k} \ / \ V(\omega)=0 \ \forall \omega \in J^{2n+1-k} \right \}$.
\end{obs}
\end{comment}
\begin{obs} [See 2.3 and 2.5 in \cite{FSSC}] For $k \leq n$,
$$
\dim \frac{\Omega^k}{I^k} = \dim J^{2n+1-k} .
$$
\end{obs}
\begin{prop}\label{22star}
Summing up the previous considerations, for $k \leq n$, we obtain
\begin{equation}\label{star}
\dim I^k =\binom{2n}{k-1} + \binom{2n+1}{k-2} - \binom{2n}{k-3} = \binom{2n+1}{k-1}
\end{equation}
\begin{equation}\label{2star}
\dim J^{2n+1-k} = \dim \frac{\Omega^k}{I^k} = \binom{2n+1}{k} - \binom{2n+1}{k-1} = \binom{2n+1}{k} \cdot \frac{2n+2-2k}{2n+2-k}
\end{equation}
The proof is given after the example below.
\end{prop}
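As a sanity check (not part of the original argument), the identities \eqref{star} and \eqref{2star} can be verified numerically; a minimal Python sketch:

```python
from math import comb

def C(m, j):
    # binomial coefficient, extended by zero outside the usual range
    return comb(m, j) if 0 <= j <= m else 0

for n in range(1, 13):
    for k in range(1, n + 1):
        # identity (star): dim I^k
        assert C(2*n, k-1) + C(2*n+1, k-2) - C(2*n, k-3) == C(2*n+1, k-1)
        # identity (2star): dim J^{2n+1-k}, checked without division
        lhs = C(2*n+1, k) - C(2*n+1, k-1)
        assert lhs * (2*n + 2 - k) == C(2*n+1, k) * (2*n + 2 - 2*k)
print("identities verified for n <= 12")
```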
\begin{ex}
Here are some cases as examples.
\begin{center}
\begin{tabular}{c|c|c|c|c|}
& & $\dim \Omega^k$ & $\dim I^k$ & $ \dim \frac{\Omega^k}{ I^k} = \dim J^{2n+1-k}$ \\ \hline
$\mathbb{H}^1 $ & $k=1$ & 3 & 1 & 2 \\ \arrayrulecolor{red}\specialrule{.1em}{.05em}{.05em}
$\mathbb{H}^2 $ & $k=1$ & 5 & 1 & 4 \\ \arrayrulecolor{black}\hline
& $k=2$ & 10 & 5 & 5 \\ \arrayrulecolor{red}\specialrule{.1em}{.05em}{.05em}
$\mathbb{H}^3$ & $k=1$ & 7 & 1 & 6 \\ \arrayrulecolor{black}\hline
& $k=2$ & 21 & 7 & 14 \\ \arrayrulecolor{black}\hline
& $k=3$ & 35 & 21 & 14 \\ \arrayrulecolor{red}\specialrule{.1em}{.05em}{.05em}
$\mathbb{H}^4$ & $k=1$ & 9 & 1 & 8 \\ \arrayrulecolor{black}\hline
& $k=2$ & 36 & 9 & 27 \\ \arrayrulecolor{black}\hline
& $k=3$ & 84 & 36 & 48 \\ \arrayrulecolor{black}\hline
& $k=4$ & 126 & 84 & 42 \\ \arrayrulecolor{red}\specialrule{.1em}{.05em}{.05em}
$\mathbb{H}^5$ & $k=1$ & 11 & 1 & 10 \\ \arrayrulecolor{black}\hline
& $k=2$ & 55 & 11 & 44 \\ \arrayrulecolor{black}\hline
& $k=3$ & 165 & 55 & 110 \\ \arrayrulecolor{black}\hline
& $k=4$ & 330 & 165 & 165 \\ \arrayrulecolor{black}\hline
& $k=5$ & 462 & 330 & 132 \\ \arrayrulecolor{black}\hline
\end{tabular}
\end{center}
\end{ex}
\begin{proof}[Proof of Proposition \ref{22star}]
First we prove equation \eqref{star}. To do so, we rewrite the three terms as follows:
\begin{align*}
\binom{2n}{k-1} &=\frac{(2n)!}{(k-1)!(2n-(k-1))!} \cdot \frac{2n+1}{2n+1} \cdot \frac{2n-(k-1)+1}{2n-(k-1)+1} \\
&
=\frac{(2n+1)!}{(k-1)!(2n+1-(k-1))!} \cdot \frac{1}{2n+1} \cdot (2n-(k-1)+1) \\
&
= \binom{2n+1}{k-1} \cdot \frac{2n-k+2}{2n+1}.
\end{align*}
The second term:
\begin{align*}
\binom{2n+1}{k-2} &=\frac{(2n+1)!}{(k-2)!(2n+1-(k-2))!} \cdot \frac{k-1}{k-1} \cdot \frac{2n+1-(k-2)}{2n+1-(k-2)} \\
&
=\frac{(2n+1)!}{(k-1)!(2n+1-(k-1))!} \cdot (k-1) \cdot \frac{1}{2n+1-(k-2)} \\
&
= \binom{2n+1}{k-1} \cdot \frac{k-1}{2n-k+3}.
\end{align*}
And the last term:
\begin{align*}
\binom{2n}{k-3}&= \frac{(2n)!}{(k-3)!(2n-(k-3))!} \cdot \frac{2n+1}{2n+1} \cdot \frac{(k-2)(k-1)}{(k-2)(k-1)} \cdot \frac{2n-(k-3)}{2n-(k-3)} \\
&
= \frac{(2n+1)!}{(k-1)!(2n-(k-3)-1)!} \cdot \frac{1}{2n+1} \cdot (k-2)(k-1) \cdot \frac{1}{2n-(k-3)} \\
&
= \frac{(2n+1)!}{(k-1)!(2n+1-(k-1))!} \cdot \frac{(k-2)(k-1)}{(2n+1)(2n-(k-3))} \\
&
= \binom{2n+1}{k-1} \cdot \frac{(k-2)(k-1)}{(2n+1)(2n-k+3)} .
\end{align*}
Combining the three terms, we obtain
$$
\binom{2n}{k-1} + \binom{2n+1}{k-2} - \binom{2n}{k-3}
$$
$$
=\binom{2n+1}{k-1} \cdot \left [
\frac{2n-k+2}{2n+1} + \frac{k-1}{2n-k+3} - \frac{(k-2)(k-1)}{(2n+1)(2n-k+3)}
\right ]
$$
$$
=\binom{2n+1}{k-1} \cdot \left [
\frac{ (2n-k+2) (2n-k+3) + (k-1) (2n+1) -(k-2)(k-1)}{(2n+1)(2n-k+3)}
\right ]
$$
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
=\binom{2n+1}{k-1} \cdot \left [
\frac{ 4n^2 -2n(k-3) + 2n-(k-3) - 2n(k-1) + k^2 -4k +3 +2nk+k-2n-1 -k^2 +3k-2
}{(2n+1)(2n-k+3)}
\right ]
\end{align*}
\end{minipage}
}
$$
=
\binom{2n+1}{k-1} \cdot \left [
\frac{ 4n^2 +n( 8 -2k ) +(3-k)
}{(2n+1)(2n-k+3)}
\right ]=\binom{2n+1}{k-1},
$$
where the last equality comes from the polynomial division
\[
\begin{array}{r|r}
4n^2 +n( 8 -2k ) +(3-k) & 2n+1 \\ \cline{2-2}
4n^2 +2n \phantom{ 8 -2k ) +(3-k) } & 2n +3 -k \\ \cline{1-1} \\[\dimexpr-\normalbaselineskip+\jot]
2n( 3 -k ) +(3-k) \\
2n( 3 -k ) +(3-k) \\ \cline{1-1} \\[\dimexpr-\normalbaselineskip+\jot]
/ /
\end{array}
\]
\noindent
Now we prove equation \eqref{2star}.
\begin{align*}
\binom{2n+1}{k} - \binom{2n+1}{k-1} &= \binom{2n+1}{k} - \frac{(2n+1)!}{(k-1)!(2n+1-(k-1))!}=\\
&
= \binom{2n+1}{k} - \frac{(2n+1)!}{(k-1)!(2n+1-(k-1))!} \cdot \frac{k}{k} \cdot \frac{2n + 1 - (k-1)}{2n + 1 - (k-1)} \\
&
= \binom{2n+1}{k} - \frac{(2n+1)!}{k!(2n+1-k)!} \cdot k \cdot \frac{1}{2n + 1 - (k-1)} \\
&
= \binom{2n+1}{k} \left [ 1 - \frac{k}{2n+2-k} \right ] = \binom{2n+1}{k} \cdot\frac{2n+2-2k}{2n+2-k} .
\end{align*}
\end{proof}
\mycomment
\section{Basis of the Rumin Complex in $\mathbb{H}^3$}\label{C}
We write explicitly the standard bases of the Rumin complex in $\mathbb{H}^3$. The purpose is to show the exponential complexity of explicit calculations on the Heisenberg group, as soon as the dimension is high enough.\\\\
Recall the definitions of $\Omega^k$, $I^k$, $J^k$ and of the Rumin complex from \ref{def_forms}, \ref{complexHn} and \ref{D}.
\begin{comment}
\begin{rec}
Recall from \ref{def_forms} and \ref{complexHn} that:
\begin{itemize}
\item $\Omega^k = k$-differential forms in $\mathbb{R}^{2n+1}$
\item $I^k= \{ \alpha \wedge \theta + \beta \wedge d \theta \ / \ \alpha \in \Omega^{k-1}, \ \beta \in \Omega^{k-2} \}$
\item $J^k=\{ \alpha \in \Omega^{k} \ / \ \alpha \wedge \theta =0, \ \alpha \wedge d\theta=0 \}$.
\end{itemize}
Furthermore, the Rumin complex, is given by
$$
0 \to \mathbb{R} \to C^\infty \stackrel{d_Q}{\to} \frac{\Omega^1}{I^1} \stackrel{d_Q}{\to} \dots \stackrel{d_Q}{\to} \frac{\Omega^n}{I^n} \stackrel{D}{\to} J^{n+1} \stackrel{d_Q}{\to} \dots \stackrel{d_Q}{\to} J^{2n+1} \to 0
$$
where, for $k<n$, one has
$$
d_Q( [\alpha]_{I^*} ) := [d \alpha]_{I^*}.
$$
For $k \geq n$
$$
d_Q := d_{\vert_{J^*}}.
$$
And, $D$ is a second order differential operator as defined in \ref{D}.
\end{rec}
\subsection{Rumin Complex in $\mathbb{H}^3$}
\begin{no}
In this section we write explicitly the standard bases (or a standard bases, in certain cases) of the spaces concerning the Rumin Complex in $\mathbb{H}^3$. Given the high dimension of the sets (indicated on the left of each of them), we used a not-standard notation to write everything explicitly and, at the same time, avoid unreadability.\\
When looking at the basis elements of a space, the elements on the first line are always written completely. On the other lines, when some elements look incomplete, the eyes can complete them just "lowering" what is immediately on top of the empy space.
\end{no}
\begin{ex}
\begin{align*}
& dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ } dy_2 \\
& \phantom{dx_1 \wedge dx_2 \wedge \ } d y_1 \wedge dy_2 \\
& \phantom{dx_1 \wedge dx_2 \wedge d y_1 \wedge \ } \theta \\
& \phantom{dx_1 \wedge \ } dx_3 \wedge d y_1 \wedge dy_2 \\
& \phantom{dx_1 \wedge dx_3 \wedge d y_1 \wedge \ } \theta \\
& \phantom{dx_1 \wedge dx_3 \wedge \ } d y_2 \wedge dy_3 \\
& \phantom{dx_1 \wedge dx_3 \wedge d y_2 \wedge \ } \theta \\
& \phantom{dx_1 \wedge dx_3 \wedge \ } d y_3 \wedge \theta
\end{align*}
would stand for
\begin{align*}
& dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ } } dy_2 \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} d y_1 \wedge dy_2 \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d y_1 \wedge \ }} \theta \\
& \mathbin{\color{red}{dx_1 \wedge \ }} dx_3 \wedge d y_1 \wedge dy_2 \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge d y_1 \wedge \ }} \theta \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} d y_2 \wedge dy_3 \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge d y_2 \wedge \ }} \theta \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} d y_3 \wedge \theta
\end{align*}
\end{ex}
\end{comment}
\begin{obs}
When $n=3$, the Rumin complex becomes:\\
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
0 \to \mathbb{R} \to \underbrace{C^\infty}_{dim=1} \stackrel{d_Q}{\to} \underbrace{\frac{\Omega^1}{I^1}}_{dim=6} \stackrel{d_Q}{\to} \underbrace{\frac{\Omega^2}{I^2}}_{dim=14} \stackrel{d_Q}{\to} \underbrace{\frac{\Omega^3}{I^3}}_{dim=14} \stackrel{D}{\to} \underbrace{J^{4}}_{dim=14} \stackrel{d_Q}{\to} \underbrace{J^{5}}_{dim=14} \stackrel{d_Q}{\to} \underbrace{J^{6}}_{dim=6}\stackrel{d_Q}{\to} \underbrace{J^{7}}_{dim=1} \to 0
\end{align*}
\end{minipage}
}
\end{obs}
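The dimensions displayed above follow from \eqref{2star}; a minimal Python sketch (added for verification) reproduces the whole sequence:

```python
from math import comb

def C(m, j):
    # binomial coefficient, extended by zero outside the usual range
    return comb(m, j) if 0 <= j <= m else 0

n = 3
# dim Omega^k/I^k for k = 0..n; by duality dim J^{2n+1-k} is the same number
half = [C(2*n + 1, k) - C(2*n + 1, k - 1) for k in range(n + 1)]
dims = half + half[::-1]        # degrees 0, 1, ..., 2n+1
print(dims)                     # [1, 6, 14, 14, 14, 14, 6, 1]
```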
\noindent
We write explicitly the standard bases of $\Omega^k$, $I^k$ and $J^k$ for $k=1,\dots,6$.
\begin{obs}
\textbf{Case $k=1$.}
$$
\dim \Omega^1 =7, \ \dim I^1 =1 \text{ and } \dim \frac{\Omega^1}{I^1} =6.
$$
\begin{align*}
\Omega^1 &= \spn \{ dx_1, dx_2, dx_3, dy_1, dy_2, dy_3, \theta \}\\
I^1 &=\spn \{ \theta \}\\
\frac{\Omega^1}{I^1} &= \spn \{dx_1, dx_2, dx_3, dy_1, dy_2, dy_3 \} .
\end{align*}
\end{obs}
\begin{obs}
\textbf{Case $k=2$.}
$$
\dim \Omega^2 =21, \ \dim I^2 =7 \text{ and } \dim \frac{\Omega^2}{I^2} =14.
$$
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
\Omega^2 = \spn
\begin{Bmatrix}
dx_1 \wedge dx_2, & dx_2 \wedge dx_3 , & dx_3 \wedge dy_1, & dy_1 \wedge dy_2, & dy_2 \wedge dy_3 , & dy_3 \wedge \theta \\
dx_1 \wedge dx_3, & dx_2 \wedge dy_1, & dx_3 \wedge dy_2, & dy_1 \wedge dy_3, & dy_2 \wedge \theta, & \\
dx_1 \wedge d y_1, & dx_2 \wedge dy_2, & dx_3 \wedge dy_3, & dy_1 \wedge \theta, &\\
dx_1 \wedge dy_2, & dx_2 \wedge dy_3, & dx_3 \wedge \theta, & &\\
dx_1 \wedge dy_3, & dx_2 \wedge \theta, & & &\\
dx_1 \wedge \theta & & & &
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
I^2 = \spn
\begin{Bmatrix}
\theta \wedge dx_1, & dx_1 \wedge dy_1+ dx_2 \wedge dy_2 + dx_3 \wedge dy_3 \\
\theta \wedge dx_2,\\
\theta \wedge dx_3,\\
\theta \wedge dy_1,\\
\theta \wedge dy_2,\\
\theta \wedge dy_3
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
\frac{\Omega^2}{I^2} = \spn
\begin{Bmatrix}
dx_1 \wedge dx_2, & dx_2 \wedge dx_3, & dx_3 \wedge dy_1, & dy_1 \wedge dy_2, & dx_1 \wedge dy_1 - dx_2 \wedge dy_2,\\
dx_1 \wedge dx_3, & dx_2 \wedge dy_1, & dx_3 \wedge dy_2, & dy_1 \wedge dy_3, & dx_1 \wedge dy_1 - dx_3 \wedge dy_3, \\
dx_1 \wedge dy_2, & dx_2 \wedge dy_3, & &dy_2 \wedge dy_3, & \\
dx_1 \wedge dy_3 & & & &
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\end{obs}
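The numbers in this case can also be checked by direct enumeration; a small Python sketch (the generator names are our own illustrative labels):

```python
from itertools import combinations

# the seven generating 1-forms of Omega^1 in H^3
gens = ['dx1', 'dx2', 'dx3', 'dy1', 'dy2', 'dy3', 'theta']

monomials = list(combinations(gens, 2))          # monomial basis of Omega^2
with_theta = [m for m in monomials if 'theta' in m]

dim_omega2 = len(monomials)          # binom(7, 2) = 21
dim_I2 = len(with_theta) + 1         # theta ^ Omega^1, plus the single form d(theta)
print(dim_omega2, dim_I2, dim_omega2 - dim_I2)   # 21 7 14
```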
\begin{obs}
\textbf{Case $k=3$.}
$$
\dim \Omega^3 =35, \ \dim I^3 =21 \text{ and } \dim \frac{\Omega^3}{I^3} =14.
$$
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
\Omega^3 = \spn
\begin{Bmatrix}
dx_1 \wedge dx_2 \wedge dx_3 , & dx_2 \wedge dx_3 \wedge dy_1 , & dx_3 \wedge dy_1 \wedge dy_2, & dy_1 \wedge dy_2 \wedge dy_3 , & dy_2 \wedge dy_3 \wedge \theta \\
dx_1 \wedge dx_2 \wedge d y_1, & dx_2 \wedge dx_3 \wedge dy_2, & dx_3 \wedge dy_1 \wedge dy_3, & dy_1 \wedge dy_2 \wedge \theta, &\\
dx_1 \wedge dx_2 \wedge dy_2 , & dx_2 \wedge dx_3 \wedge dy_3 , & dx_3 \wedge dy_1 \wedge \theta, & &\\
dx_1 \wedge dx_2 \wedge dy_3 , & dx_2 \wedge dx_3 \wedge \theta, & & &\\
dx_1 \wedge dx_2 \wedge \theta , & & & &\\
dx_1 \wedge dx_3 \wedge d y_1 , & dx_2 \wedge dy_1 \wedge dy_2, & dx_3 \wedge dy_2 \wedge dy_3, & dy_1 \wedge dy_3 \wedge \theta, &\\
dx_1 \wedge dx_3 \wedge dy_2 , & dx_2 \wedge dy_1 \wedge dy_3, & dx_3 \wedge dy_2 \wedge \theta, & &\\
dx_1 \wedge dx_3 \wedge dy_3 , & dx_2 \wedge dy_1 \wedge \theta , & & &\\
dx_1 \wedge dx_3 \wedge \theta , & & & &\\
dx_1 \wedge dy_1 \wedge dy_2 , & dx_2 \wedge dy_2 \wedge dy_3 , & dx_3 \wedge dy_3 \wedge \theta, & &\\
dx_1 \wedge dy_1 \wedge dy_3, & dx_2 \wedge dy_2 \wedge \theta, & & &\\
dx_1 \wedge dy_1 \wedge \theta, & & & &\\
dx_1 \wedge dy_2 \wedge dy_3 , & dx_2 \wedge dy_3 \wedge \theta, & & &\\
dx_1 \wedge dy_2 \wedge \theta, & & & &\\
dx_1 \wedge dy_3 \wedge \theta & & & &
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
I^3 = \spn
\begin{Bmatrix}
\theta \wedge dx_1 \wedge dx_2 , & dx_1 \wedge dy_1 \wedge dx_2 + dx_3 \wedge dy_3 \wedge dx_2 , \\
\theta \wedge dx_1 \wedge d x_3, & dx_1 \wedge dy_1 \wedge dy_2 + dx_3 \wedge dy_3 \wedge dy_2 , \\
\theta \wedge dx_1 \wedge d y_1 , & dx_1 \wedge dy_1 \wedge dx_3 + dx_2 \wedge dy_2 \wedge dx_3 , \\
\theta \wedge dx_1 \wedge dy_2, & dx_1 \wedge dy_1 \wedge dy_3 + dx_2 \wedge dy_2 \wedge dy_3 , \\
\theta \wedge dx_1 \wedge dy_3, & dx_2 \wedge dy_2 \wedge dx_1 + dx_3 \wedge dy_3 \wedge dx_1, \\
\theta \wedge dx_2 \wedge dx_3 , & dx_2 \wedge dy_2 \wedge dy_1 + dx_3 \wedge dy_3 \wedge dy_1, \\
\theta \wedge dx_2 \wedge dy_1 , & \\
\theta \wedge dx_2 \wedge dy_2 , & \\
\theta \wedge dx_2 \wedge dy_3 , & \\
\theta \wedge dx_3 \wedge dy_1 , & \\
\theta \wedge dx_3 \wedge dy_2 , & \\
\theta \wedge dx_3 \wedge dy_3 , & \\
\theta \wedge dy_1 \wedge dy_2 , & \\
\theta \wedge dy_1 \wedge dy_3 , & \\
\theta \wedge dy_2 \wedge dy_3 &
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
\frac{\Omega^3}{I^3} = \spn
\begin{Bmatrix}
dx_1 \wedge dx_2 \wedge dx_3 , & dx_2 \wedge dx_3 \wedge dy_1 , & dx_1 \wedge dy_1 \wedge dx_2 - dx_3 \wedge dy_3 \wedge dx_2 , \\
dx_1 \wedge dx_2 \wedge dy_3 , & dx_2 \wedge dy_1 \wedge dy_3 , & dx_1 \wedge dy_1 \wedge dy_2 - dx_3 \wedge dy_3 \wedge dy_2 , \\
dx_1 \wedge dx_3 \wedge d y_2, & dx_3 \wedge dy_1 \wedge dy_2, & dx_1 \wedge dy_1 \wedge dx_3 - dx_2 \wedge dy_2 \wedge dx_3 , \\
dx_1 \wedge dy_2 \wedge dy_3 , & dy_1 \wedge dy_2 \wedge dy_3 , & dx_1 \wedge dy_1 \wedge dy_3 - dx_2 \wedge dy_2 \wedge dy_3, \\
& & dx_2 \wedge dy_2 \wedge dx_1 - dx_3 \wedge dy_3 \wedge dx_1, \\
& & dx_2 \wedge dy_2 \wedge dy_1 - dx_3 \wedge dy_3 \wedge dy_1
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\end{obs}
\begin{obs}
\textbf{Case $k=4$.}
$$
\dim \Omega^4 =35 \text{ and } \dim J^4 =14.
$$
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
\Omega^4 = \spn
\begin{Bmatrix}
dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 , & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 , & dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3, \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_2 , & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_3, & dx_3 \wedge dy_1 \wedge dy_2 \wedge \theta, \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_3, & dx_2 \wedge dx_3 \wedge dy_1 \wedge \theta, & \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge \theta, & & \\
dx_1 \wedge dx_2 \wedge d y_1 \wedge dy_2 , & dx_2 \wedge dx_3 \wedge dy_2 \wedge dy_3, & dx_3 \wedge dy_1 \wedge dy_3 \wedge \theta, \\
dx_1 \wedge dx_2 \wedge d y_1 \wedge dy_3 , & dx_2 \wedge dx_3 \wedge dy_2 \wedge \theta, & \\
dx_1 \wedge dx_2 \wedge d y_1 \wedge \theta , & & \\
dx_1 \wedge dx_2 \wedge d y_2 \wedge dy_3 , & dx_2 \wedge dx_3 \wedge dy_3 \wedge \theta, & \\
dx_1 \wedge dx_2 \wedge d y_2 \wedge \theta, & & \\
dx_1 \wedge dx_2 \wedge d y_3 \wedge \theta , & & \\
dx_1 \wedge dx_3 \wedge d y_1 \wedge dy_2 , & dx_2 \wedge dy_1 \wedge dy_2 \wedge dy_3, & dx_3 \wedge dy_2 \wedge dy_3 \wedge \theta, \\
dx_1 \wedge dx_3 \wedge d y_1 \wedge dy_3 , & dx_2 \wedge dy_1 \wedge dy_2 \wedge \theta , & \\
dx_1 \wedge dx_3 \wedge d y_1 \wedge \theta , & & \\
dx_1 \wedge dx_3 \wedge d y_2 \wedge dy_3 , & dx_2 \wedge dy_1 \wedge dy_3 \wedge \theta, & \\
dx_1 \wedge dx_3 \wedge d y_2 \wedge \theta, & & \\
dx_1 \wedge dx_3 \wedge d y_3 \wedge \theta, & & \\
dx_1 \wedge d y_1 \wedge dy_2 \wedge dy_3 , & dx_2 \wedge dy_2 \wedge dy_3 \wedge \theta, & dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta , \\
dx_1 \wedge d y_1 \wedge dy_2 \wedge \theta , & & \\
dx_1 \wedge d y_1 \wedge dy_3 \wedge \theta, & & \\
dx_1 \wedge d y_2 \wedge dy_3 \wedge \theta & &
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
J^4 = \spn
\begin{Bmatrix}
\theta \wedge dx_1 \wedge dx_2 \wedge dx_3, & \theta \wedge dx_1 \wedge dy_1 \wedge dx_2 - \theta \wedge dx_3 \wedge dy_3 \wedge dx_2 , \\
\theta \wedge dx_1 \wedge dx_2 \wedge dy_3, & \theta \wedge dx_1 \wedge dy_1 \wedge dy_2 - \theta \wedge dx_3 \wedge dy_3 \wedge dy_2, \\
\theta \wedge dx_1 \wedge dx_3 \wedge dy_2, & \theta \wedge dx_1 \wedge dy_1 \wedge dx_3 - \theta \wedge dx_2 \wedge dy_2 \wedge dx_3 ,\\
\theta \wedge dx_1 \wedge dy_2 \wedge d y_3 , & \theta \wedge dx_1 \wedge dy_1 \wedge dy_3 - \theta \wedge dx_2 \wedge dy_2 \wedge dy_3 ,\\
\theta \wedge dx_2 \wedge dx_3 \wedge d y_1 , & \theta \wedge dx_2 \wedge dy_2 \wedge dx_1 - \theta \wedge dx_3 \wedge dy_3 \wedge dx_1 , \\
\theta \wedge dx_2 \wedge d y_1 \wedge dy_3, & \theta \wedge dx_2 \wedge dy_2 \wedge dy_1 - \theta \wedge dx_3 \wedge dy_3 \wedge dy_1 , \\
\theta \wedge dx_3 \wedge dy_1 \wedge d y_2 , & \\
\theta \wedge dy_1 \wedge dy_2 \wedge dy_3 &
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\end{obs}
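As a cross-check (added here, not part of the original text), $\dim J^4 = 14$ can be verified by linear algebra, since $J^4 = \theta \wedge \ker(\,\cdot \wedge d\theta : \Lambda^3 \to \Lambda^5)$ on the horizontal covectors. A self-contained Python sketch, with our own indexing convention ($e_1,\dots,e_3$ for the $dx_i$ and $e_4,\dots,e_6$ for the $dy_i$):

```python
from itertools import combinations
from fractions import Fraction

N = 6                              # horizontal covectors e_1..e_6 in H^3
PAIRS = [(1, 4), (2, 5), (3, 6)]   # d(theta) = e1^e4 + e2^e5 + e3^e6 (up to sign)

def insert_with_sign(subset, v):
    """Wedge e_subset ^ e_v; returns (sorted subset, sign) or (None, 0)."""
    if v in subset:
        return None, 0
    greater = sum(1 for x in subset if x > v)   # transpositions needed to re-sort
    return tuple(sorted(subset + (v,))), (-1 if greater % 2 else 1)

def wedge_dtheta(subset):
    """Coefficients of e_subset ^ d(theta) in the monomial basis."""
    out = {}
    for a, b in PAIRS:
        s1, sg1 = insert_with_sign(subset, a)
        if s1 is None:
            continue
        s2, sg2 = insert_with_sign(s1, b)
        if s2 is None:
            continue
        out[s2] = out.get(s2, 0) + sg1 * sg2
    return out

dom = list(combinations(range(1, N + 1), 3))    # basis of Lambda^3, dim 20
cod = list(combinations(range(1, N + 1), 5))    # basis of Lambda^5, dim 6
row = {t: i for i, t in enumerate(cod)}
M = [[Fraction(0)] * len(dom) for _ in cod]     # matrix of wedging with d(theta)
for j, s in enumerate(dom):
    for t, c in wedge_dtheta(s).items():
        M[row[t]][j] = Fraction(c)

def rank(mat):
    """Gaussian elimination over the rationals."""
    mat, r = [row_[:] for row_ in mat], 0
    for c in range(len(mat[0])):
        piv = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                f = mat[i][c] / mat[r][c]
                mat[i] = [x - f * y for x, y in zip(mat[i], mat[r])]
        r += 1
    return r

dim_J4 = len(dom) - rank(M)
print(dim_J4)    # 14
```

The map is surjective onto $\Lambda^5$ (rank 6), so the kernel has dimension $20 - 6 = 14$, matching the basis listed above.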
\begin{obs}
\textbf{Case $k=5$.}
$$
\dim \Omega^5 =21 \text{ and } \dim J^5 =14.
$$
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
\Omega^5 = \spn
\begin{Bmatrix}
dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge dy_2 , & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3 , \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge dy_3, & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge \theta \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge \theta, & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_3 \wedge \theta \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_2 \wedge dy_3, & dx_2 \wedge dx_3 \wedge dy_2 \wedge dy_3 \wedge \theta \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_2 \wedge \theta, & dx_2 \wedge dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_3 \wedge \theta, & dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta , \\
dx_1 \wedge dx_2 \wedge d y_1 \wedge dy_2 \wedge dy_3 , & \\
dx_1 \wedge dx_2 \wedge d y_1 \wedge dy_2 \wedge \theta, & \\
dx_1 \wedge dx_2 \wedge d y_1 \wedge dy_3 \wedge \theta, & \\
dx_1 \wedge dx_2 \wedge d y_2 \wedge dy_3 \wedge \theta , & \\
dx_1 \wedge dx_3 \wedge d y_1 \wedge dy_2 \wedge dy_3 , & \\
dx_1 \wedge dx_3 \wedge d y_1 \wedge dy_2 \wedge \theta, & \\
dx_1 \wedge dx_3 \wedge d y_1 \wedge dy_3 \wedge \theta, & \\
dx_1 \wedge dx_3 \wedge d y_2 \wedge dy_3 \wedge \theta, & \\
dx_1 \wedge dy_1 \wedge d y_2 \wedge dy_3 \wedge \theta &
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
J^5 = \spn
\begin{Bmatrix}
\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 , & \theta \wedge dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 - \theta \wedge dx_1 \wedge dy_1 \wedge dx_3 \wedge dy_3, \\
\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge dy_2, & \theta \wedge dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 - \theta \wedge dx_2 \wedge dy_2 \wedge dx_3 \wedge dy_3, \\
\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge dy_3, & \\
\theta \wedge dx_1 \wedge dx_2 \wedge dy_1 \wedge d y_3 , & \\
\theta \wedge dx_1 \wedge dx_2 \wedge dy_2 \wedge d y_3 , & \\
\theta \wedge dx_1 \wedge dx_3 \wedge dy_1 \wedge d y_2 , & \\
\theta \wedge dx_1 \wedge dx_3 \wedge dy_2 \wedge d y_3 , & \\
\theta \wedge dx_1 \wedge dy_1 \wedge dy_2 \wedge d y_3 , & \\
\theta \wedge dx_2 \wedge dx_3 \wedge dy_1 \wedge d y_2 , & \\
\theta \wedge dx_2 \wedge dx_3 \wedge dy_1 \wedge d y_3, & \\
\theta \wedge dx_2 \wedge dy_1 \wedge dy_2 \wedge d y_3 , & \\
\theta \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge d y_3 &
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\end{obs}
\begin{comment
\begin{align*}
(21) \ \Omega^5= \spn \{
& dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge dy_2 , \ & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3 , \ & dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta , \}\\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge \ } dy_3 & \phantom{dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge \ } \theta & \\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge \ } \theta & \phantom{dx_2 \wedge dx_3 \wedge dy_1 \wedge \ } dy_3 \wedge \theta & \\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ } dy_2 \wedge \theta & \phantom{dx_2 \wedge dx_3 \wedge \ } dy_2 \wedge dy_3 \wedge \theta & \\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_2 \wedge \ } \theta & \phantom{dx_2 \wedge \ } dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta &\\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ } dy_3 \wedge \theta & & \\
& \phantom{dx_1 \wedge dx_2 \wedge \ } d y_1 \wedge dy_2 \wedge dy_3 & & \\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_2 \wedge \ } \theta & & \\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ } dy_3 \wedge \theta & & \\
& \phantom{dx_1 \wedge dx_2 \wedge \ } d y_2 \wedge dy_3 \wedge \theta & & \\
& \phantom{dx_1 \wedge \ } dx_3 \wedge d y_1 \wedge dy_2 \wedge dy_3 & & \\
& \phantom{dx_1 \wedge dx_3 \wedge d y_1 \wedge dy_2 \wedge \ } \theta & & \\
& \phantom{dx_1 \wedge dx_3 \wedge d y_1 \wedge \ } dy_3 \wedge \theta & & \\
& \phantom{dx_1 \wedge dx_3 \wedge \ } d y_2 \wedge dy_3 \wedge \theta & & \\
& \phantom{dx_1 \wedge \ } dy_1 \wedge d y_2 \wedge dy_3 \wedge \theta & & \\
\end{align*}
$\Longrightarrow$
\begin{align*}
(14) \ J^5 = \spn \{
& \theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 , \ & \theta \wedge dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 - \theta \wedge dx_1 \wedge dy_1 \wedge dx_3 \wedge dy_3, \}\\
& \phantom{\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge \ } dy_2 & \phantom{ \theta \wedge dx_1 \wedge dy_1 \wedge dy_2 \ } - \theta \wedge dx_2 \wedge dy_2 \wedge dx_3 \wedge dy_3 \\
& \phantom{ \theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge \ } dy_3 & \\
& \phantom{ \theta \wedge dx_1 \wedge dx_2 \wedge \ } dy_1 \wedge d y_3 & \\
& \phantom{ \theta \wedge dx_1 \wedge dx_2 \wedge \ } dy_2 \wedge d y_3 & \\
& \phantom{ \theta \wedge dx_1 \wedge\ } dx_3 \wedge dy_1 \wedge d y_3 & \\
& \phantom{ \theta \wedge dx_1 \wedge dx_3 \wedge \ } dy_2 \wedge d y_3 & \\
& \phantom{ \theta \wedge dx_1 \wedge\ } dy_1 \wedge dy_2 \wedge d y_3 & \\
& \phantom{ \theta \wedge \ } dx_2 \wedge dx_3 \wedge dy_1 \wedge d y_2 & \\
& \phantom{ \theta \wedge dx_1 \wedge dx_3 \wedge dy_1 \wedge \ } d y_3 & \\
& \phantom{ \theta \wedge dx_2 \wedge \ } dy_1 \wedge dy_2 \wedge d y_3 & \\
& \phantom{ \theta \wedge \ } dx_3 \wedge dy_1 \wedge dy_2 \wedge d y_3 & \\
\end{align*}
\end{comment}
\begin{obs}
\textbf{Case $k=6$.}
$$
\dim \Omega^6 =7 \text{ and } \dim J^6 =6.
$$
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
\Omega^6 = \spn
\begin{Bmatrix}
dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge dy_2 \wedge dy_3 , & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta , \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge dy_2 \wedge \theta , & \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge dy_3 \wedge \theta , & \\
dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_2 \wedge dy_3 \wedge \theta , & \\
dx_1 \wedge dx_2 \wedge d y_1 \wedge dy_2 \wedge dy_3 \wedge \theta , & \\
dx_1 \wedge dx_3 \wedge d y_1 \wedge dy_2 \wedge dy_3 \wedge \theta &
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\centerline{
\begin{minipage}{\linewidth}
\begin{align*}
J^6 = \spn
\begin{Bmatrix}
\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge d y_2 , \\
\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge dy_3 , \\
\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_2 \wedge dy_3 , \\
\theta \wedge dx_1 \wedge dx_2 \wedge dy_1 \wedge dy_2 \wedge d y_3 , \\
\theta \wedge dx_1 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge d y_3 , \\
\theta \wedge dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge d y_3
\end{Bmatrix}
\end{align*}
\end{minipage}
}
\end{obs}
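These dimension counts can be cross-checked with a few binomial coefficients. The sketch below is a sanity check of our own, assuming the standard counts $\dim \Omega^k = \binom{7}{k}$, $\dim \Omega^k/I^k = \binom{6}{k} - \binom{6}{k-2}$ for $k \le 3$, and $\dim J^k = \binom{6}{k-1} - \binom{6}{k+1}$ for $k \ge 4$ on $\mathbb{H}^3$; all function names are ours.

```python
from math import comb

def c(n, k):
    # Binomial coefficient extended by zero outside 0 <= k <= n.
    return comb(n, k) if 0 <= k <= n else 0

def dim_omega(k, n=3):
    # All k-forms on H^n (topological dimension 2n + 1 = 7 for H^3).
    return c(2 * n + 1, k)

def dim_quotient(k, n=3):
    # dim Omega^k / I^k for k <= n, where I^k = theta ^ Omega^{k-1} + dtheta ^ Omega^{k-2}.
    return c(2 * n, k) - c(2 * n, k - 2)

def dim_J(k, n=3):
    # dim J^k for k >= n + 1: J^k = {theta ^ beta : dtheta ^ beta = 0},
    # i.e. horizontal (k-1)-forms killed by wedging with the symplectic form.
    return c(2 * n, k - 1) - c(2 * n, k + 1)

# Dimensions along the Rumin complex on H^3.
rumin_dims = [1] + [dim_quotient(k) for k in (1, 2, 3)] + [dim_J(k) for k in (4, 5, 6, 7)]
print(rumin_dims)  # [1, 6, 14, 14, 14, 14, 6, 1]
```

The output reproduces the dimensions $1, 6, 14, 14, 14, 14, 6, 1$ of the complex, and `dim_omega(6) == 7`, `dim_J(6) == 6` match the observation above.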
\begin{comment}
\begin{align*}
(7) \ \Omega^6= \spn \{
& dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge dy_2 \wedge dy_3 , \ & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta , \}\\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge dy_2 \wedge \ } \theta & \\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge \ } dy_3 \wedge \theta & \\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ } dy_2 \wedge dy_3 \wedge \theta & \\
& \phantom{dx_1 \wedge dx_2 \wedge \ } d y_1 \wedge dy_2 \wedge dy_3 \wedge \theta & \\
& \phantom{dx_1 \wedge \ } dx_3 \wedge d y_1 \wedge dy_2 \wedge dy_3 \wedge \theta & \\
\end{align*}
$\Longrightarrow$
\begin{align*}
(6) \ J^6 = \spn \{
& \theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge d y_2 , \}\\
& \phantom{\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge \ } dy_3 \\
& \phantom{ \theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge \ } d y_2 \wedge dy_3 \\
& \phantom{ \theta \wedge dx_1 \wedge dx_2 \wedge \ } dy_1 \wedge dy_2 \wedge d y_3 \\
& \phantom{ \theta \wedge dx_1 \wedge \ } dx_3 \wedge dy_1 \wedge dy_2 \wedge d y_3 \\
& \phantom{ \theta \wedge \ } dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge d y_3
\end{align*}
\end{comment}
\begin{comment}
\subsection{Rumin Complex in $\mathbb{H}^3$ (different notation)}
\begin{ex}
\begin{align*}
& dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \\
& \phantom{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ } dy_2 \\
& \phantom{dx_1 \wedge dx_2 \wedge \ } d y_1 \wedge dy_2 \\
& \phantom{dx_1 \wedge dx_2 \wedge d y_1 \wedge \ } \theta \\
& \phantom{dx_1 \wedge \ } dx_3 \wedge d y_1 \wedge dy_2 \\
& \phantom{dx_1 \wedge dx_3 \wedge d y_1 \wedge \ } \theta \\
& \phantom{dx_1 \wedge dx_3 \wedge \ } d y_2 \wedge dy_3 \\
& \phantom{dx_1 \wedge dx_3 \wedge d y_2 \wedge \ } \theta \\
& \phantom{dx_1 \wedge dx_3 \wedge \ } d y_3 \wedge \theta
\end{align*}
would stand for
\begin{align*}
& dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ } } dy_2 \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} d y_1 \wedge dy_2 \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d y_1 \wedge \ }} \theta \\
& \mathbin{\color{red}{dx_1 \wedge \ }} dx_3 \wedge d y_1 \wedge dy_2 \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge d y_1 \wedge \ }} \theta \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} d y_2 \wedge dy_3 \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge d y_2 \wedge \ }} \theta \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} d y_3 \wedge \theta
\end{align*}
\end{ex}
$$
0 \to \mathbb{R} \to \underbrace{C^\infty}_{dim=1} \stackrel{d_Q}{\to} \underbrace{\frac{\Omega^1}{I^1}}_{dim=6} \stackrel{d_Q}{\to} \underbrace{\frac{\Omega^2}{I^2}}_{dim=14} \stackrel{d_Q}{\to} \underbrace{\frac{\Omega^3}{I^3}}_{dim=14} \stackrel{D}{\to} \underbrace{J^{4}}_{dim=14} \stackrel{d_Q}{\to} \underbrace{J^{5}}_{dim=14} \stackrel{d_Q}{\to} \underbrace{J^{6}}_{dim=6}\stackrel{d_Q}{\to} \underbrace{J^{7}}_{dim=1} \to 0
$$
where
$$
(7) \ \Omega^1 = \spn \{ dx_1, dx_2, dx_3, dy_1, dy_2, dy_3, \theta \}
$$
$$
(1) \ I^1=\spn \{ \theta \}
$$
$\Longrightarrow$
$$
(6) \ \frac{\Omega^1}{I^1} = \spn \{dx_1, dx_2, dx_3, dy_1, dy_2, dy_3 \} .
$$
\begin{align*}
(21) \ \Omega^2 = \spn \{ & dx_1 \wedge dx_2, \ & dx_2 \wedge dx_3 , \ & dx_3 \wedge dy_1, \ & dy_1 \wedge dy_2, \ & dy_2 \wedge dy_3 , \ dy_3 \wedge \theta \}\\
& \mathbin{\color{red}{dx_1 \wedge \ }} dx_3 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_1 & \mathbin{\color{red}{dx_3 \wedge \ }} dy_2 & \mathbin{\color{red}{dy_1 \wedge \ }} dy_3 & \mathbin{\color{red}{dy_2 \wedge \ }} \theta \\
& \mathbin{\color{red}{dx_1 \wedge \ }} d y_1 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_2 & \mathbin{\color{red}{dx_3 \wedge \ }} dy_3 & \mathbin{\color{red}{dy_1 \wedge \ }} \theta &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} dy_2 & \mathbin{\color{red}{dx_2 \wedge \ } } dy_3 & \mathbin{\color{red}{dx_3 \wedge \ } } \theta & &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_2 \wedge \ }} \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} \theta & & & &
\end{align*}
\begin{align*}
(7) \ I^2 = \spn \{ & \theta \wedge dx_1, \ dx_1 \wedge dy_1+ dx_2 \wedge dy_2 + dx_3 \wedge dy_3 \}\\
& \mathbin{\color{red}{ \theta \wedge \ }} dx_2\\
& \mathbin{\color{red}{ \theta \wedge \ }} dx_3\\
& \mathbin{\color{red}{ \theta \wedge \ }} dy_1\\
& \mathbin{\color{red}{ \theta \wedge \ }} dy_2\\
& \mathbin{\color{red}{ \theta \wedge \ }} dy_3
\end{align*}
$\Longrightarrow$
\begin{align*}
(14) \ \frac{\Omega^2}{I^2} = \spn \{
& dx_1 \wedge dx_2, \ & dx_2 \wedge dx_3 , \ & dx_3 \wedge dy_1, \ & dy_1 \wedge dy_2, \ & dy_2 \wedge dy_3, \ dx_1 \wedge dy_1 - dx_2 \wedge dy_2, \}\\
& \mathbin{\color{red}{dx_1 \wedge \ }} dx_3 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_1 & \mathbin{\color{red}{dx_3 \wedge \ }} dy_2 & \mathbin{\color{red}{dy_1 \wedge \ }} dy_3 , & \phantom{ \ dy_2 \wedge dy_3, \ } \ dx_1 \wedge dy_1 - dx_3 \wedge dy_3 \\
& \mathbin{\color{red}{dx_1 \wedge \ }} dy_2 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_3 & & \\
& \mathbin{\color{red}{dx_1 \wedge \ }} d y_3 & & &
\end{align*}
\begin{align*}
(35) \ \Omega^3 = \spn \{
& dx_1 \wedge dx_2 \wedge dx_3 , \ & dx_2 \wedge dx_3 \wedge dy_1 , \ & dx_3 \wedge dy_1 \wedge dy_2, \ & dy_1 \wedge dy_2 \wedge dy_3 , \ & dy_2 \wedge dy_3 \wedge \theta \}\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} d y_1 & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge \ }} dy_2 & \mathbin{\color{red}{dx_3 \wedge dy_1 \wedge \ }} dy_3 & \mathbin{\color{red}{dy_1 \wedge dy_2 \wedge \ }} \theta &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} dy_2 & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_3 \wedge dy_1 \wedge \ }} \theta & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge \ }} \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} dx_3 \wedge d y_1 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_1 \wedge dy_2 & \mathbin{\color{red}{dx_3 \wedge \ }} dy_2 \wedge dy_3 & \mathbin{\color{red}{dy_1 \wedge \ }} dy_3 \wedge \theta &\\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} dy_2 & \mathbin{\color{red}{dx_2 \wedge dy_1 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_3 \wedge dy_2 \wedge \ }} \theta & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_2 \wedge dy_1 \wedge \ }} \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} dy_1 \wedge dy_2 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_2 \wedge dy_3 & \mathbin{\color{red}{dx_3 \wedge \ }} dy_3 \wedge \theta & &\\
& \mathbin{\color{red}{dx_1 \wedge dy_1 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_2 \wedge dy_2 \wedge \ }} \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge dy_1 \wedge \ }} \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} dy_2 \wedge dy_3 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_3 \wedge \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge dy_2 \wedge \ }} \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} dy_3 \wedge \theta & & & &
\end{align*}
\begin{align*}
(21) \ I^3 = \spn \{
& \theta \wedge dx_1 \wedge dx_2 , \ & dx_1 \wedge dy_1 \wedge dx_2 + dx_3 \wedge dy_3 \wedge dx_2 , \ & dx_2 \wedge dy_2 \wedge dx_1 + dx_3 \wedge dy_3 \wedge dx_1, \}\\
& \mathbin{\color{red}{\theta \wedge dx_1 \wedge \ }} d x_3 & dx_1 \wedge dy_1 \wedge dy_2 + dx_3 \wedge dy_3 \wedge dy_2 , \ & dx_2 \wedge dy_2 \wedge dy_1 + dx_3 \wedge dy_3 \wedge dy_1 \\
& \mathbin{\color{red}{\theta \wedge dx_1 \wedge \ }} d y_1 & dx_1 \wedge dy_1 \wedge dx_3 + dx_2 \wedge dy_2 \wedge dx_3 & \\
& \mathbin{\color{red}{\theta \wedge dx_1 \wedge \ }} dy_2 & dx_1 \wedge dy_1 \wedge dy_3 + dx_2 \wedge dy_2 \wedge dy_3 & \\
& \mathbin{\color{red}{\theta \wedge dx_1 \wedge \ }} dy_3 & & \\
& \mathbin{\color{red}{\theta \wedge \ }} dx_2 \wedge dx_3 & & \\
& \mathbin{\color{red}{\theta \wedge dx_2 \wedge \ }} dy_1 & & \\
& \mathbin{\color{red}{\theta \wedge dx_2 \wedge \ }} dy_2 & & \\
& \mathbin{\color{red}{\theta \wedge dx_2 \wedge \ }} dy_3 & & \\
& \mathbin{\color{red}{\theta \wedge \ }} dx_3 \wedge dy_1 & & \\
& \mathbin{\color{red}{\theta \wedge dx_3 \wedge \ }} dy_2 & & \\
& \mathbin{\color{red}{\theta \wedge dx_3 \wedge \ }} dy_3 & & \\
& \mathbin{\color{red}{\theta \wedge \ }} dy_1 \wedge dy_2 & & \\
& \mathbin{\color{red}{\theta \wedge dy_1 \wedge \ }} dy_3 & & \\
& \mathbin{\color{red}{\theta \wedge \ }} dy_2 \wedge dy_3 & &
\end{align*}
$\Longrightarrow$
\begin{align*}
(35) \ \frac{\Omega^3}{I^3} = \spn \{
& dx_1 \wedge dx_2 \wedge dx_3 , \ & dx_2 \wedge dx_3 \wedge dy_1 , \ & dx_3 \wedge dy_1 \wedge dy_2, \ & dy_1 \wedge dy_2 \wedge dy_3 , \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} dy_3 & & & \\
& \mathbin{\color{red}{dx_1 \wedge \ }} dx_3 \wedge d y_2 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_1 \wedge dy_3 & & \\
& \mathbin{\color{red}{dx_1 \wedge \ }} dy_2 \wedge dy_3 & & & \\
\end{align*}
(here we have two lines, don't get confused with the notation)
\begin{align*}
& dx_1 \wedge dy_1 \wedge dx_2 - dx_3 \wedge dy_3 \wedge dx_2 , \ & dx_2 \wedge dy_2 \wedge dx_1 - dx_3 \wedge dy_3 \wedge dx_1, \}\\
& dx_1 \wedge dy_1 \wedge dy_2 - dx_3 \wedge dy_3 \wedge dy_2 , \ & dx_2 \wedge dy_2 \wedge dy_1 - dx_3 \wedge dy_3 \wedge dy_1 \\
& dx_1 \wedge dy_1 \wedge dx_3 - dx_2 \wedge dy_2 \wedge dx_3 & \\
& dx_1 \wedge dy_1 \wedge dy_3 - dx_2 \wedge dy_2 \wedge dy_3 & \\
\end{align*}
$(35) \ \Omega^4 =$
\begin{align*}
= \spn \{
& dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 , \ & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 , \ & dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3, \ & dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta , & \}\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ }} dy_2 & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge dy_1 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_3 \wedge dy_1 \wedge dy_2 \wedge \ }} \theta & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge dy_1 \wedge \ }} \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ }} \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} d y_1 \wedge dy_2 & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge \ }} dy_2 \wedge dy_3 & \mathbin{\color{red}{dx_3 \wedge dy_1 \wedge \ }} dy_3 \wedge \theta & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d y_1 \wedge \ } } dy_3 & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge dy_2 \wedge \ }} \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d y_1 \wedge \ }} \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} d y_2 \wedge dy_3 & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge \ }} dy_3 \wedge \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d y_2 \wedge \ }} \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }}d y_3 \wedge \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} dx_3 \wedge d y_1 \wedge dy_2 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_1 \wedge dy_2 \wedge dy_3 & \mathbin{\color{red}{dx_3 \wedge \ }} dy_2 \wedge dy_3 \wedge \theta & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge d y_1 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_2 \wedge dy_1 \wedge dy_2 \wedge \ }} \theta & & & \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge d y_1 \wedge \ }} \theta & & & & \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} d y_2 \wedge dy_3 & \mathbin{\color{red}{dx_2 \wedge dy_1 \wedge \ }} dy_3 \wedge \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge d y_2 \wedge \ }} \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} d y_3 \wedge \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} d y_1 \wedge dy_2 \wedge dy_3 & \mathbin{\color{red}{dx_2 \wedge \ }} dy_2 \wedge dy_3 \wedge \theta & & &\\
& \mathbin{\color{red}{dx_1 \wedge d y_1 \wedge dy_2 \ }} \wedge \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge d y_1 \wedge \ }} dy_3 \wedge \theta & & & &\\
& \mathbin{\color{red}{dx_1 \wedge \ }} d y_2 \wedge dy_3 \wedge \theta & & & &
\end{align*}
$\Longrightarrow$
\begin{align*}
(14) \ J^4 = \spn \{
& \theta \wedge dx_1 \wedge dx_2 \wedge dx_3, \ & \theta \wedge dx_1 \wedge dy_1 \wedge dx_2 - \theta \wedge dx_3 \wedge dy_3 \wedge dx_2 , \}\\
& \mathbin{\color{red}{\theta \wedge dx_1 \wedge dx_2 \wedge \ }} dy_3 & \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dy_1 \wedge \ }} dy_2 - \mathbin{\color{red}{ \theta \wedge dx_3 \wedge dy_3 \wedge \ }} dy_2 \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge \ }} dx_3 \wedge dy_2 & \mathbin{\color{red}{\theta \wedge dx_1 \wedge dy_1 \wedge \ }} dx_3 - \mathbin{\color{red}{ \theta \wedge dx_3 \wedge dy_3 \wedge \ }} dx_3 \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge \ }} dy_2 \wedge d y_3 & \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dy_1 \wedge \ }} dy_3 - \mathbin{\color{red}{ \theta \wedge dx_3 \wedge dy_3 \wedge \ }} dy_3 \\
& \mathbin{\color{red}{ \theta \wedge \ }} dx_2 \wedge dx_3 \wedge d y_1 & \mathbin{\color{red}{ \theta \wedge \ }} dx_2 \wedge dy_2 \wedge dx_1 - \mathbin{\color{red}{ \theta \wedge \ }} dx_3 \wedge dy_3 \wedge dx_1 \\
& \mathbin{\color{red}{ \theta \wedge dx_2 \wedge \ }} d y_1 \wedge dy_3 & \mathbin{\color{red}{ \theta \wedge dx_2 \wedge dy_2 \wedge \ }} dy_1 - \mathbin{\color{red}{ \theta \wedge dx_3 \wedge dy_3 \wedge \ }} dy_1 \\
& \mathbin{\color{red}{ \theta \wedge \ }} dx_3 \wedge dy_1 \wedge d y_2 & \\
& \mathbin{\color{red}{ \theta \wedge \ }} dy_1 \wedge dy_2 \wedge dy_3 &
\end{align*}
\begin{align*}
(21) \ \Omega^5= \spn \{
& dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge dy_2 , \ & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3 , \ & dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta , \}\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge \ }} dy_3 & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge \ }} \theta & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge \ }} \theta & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge dy_1 \wedge \ }} dy_3 \wedge \theta & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ }} dy_2 \wedge \theta & \mathbin{\color{red}{dx_2 \wedge dx_3 \wedge \ }} dy_2 \wedge dy_3 \wedge \theta & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_2 \wedge \ }} \theta & \mathbin{\color{red}{dx_2 \wedge \ }} dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta &\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ }} dy_3 \wedge \theta & & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} d y_1 \wedge dy_2 \wedge dy_3 & & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge dy_2 \wedge \ }} \theta & & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ }} dy_3 \wedge \theta & & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} d y_2 \wedge dy_3 \wedge \theta & & \\
& \mathbin{\color{red}{dx_1 \wedge \ }} dx_3 \wedge d y_1 \wedge dy_2 \wedge dy_3 & & \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge d y_1 \wedge dy_2 \wedge \ }} \theta & & \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge d y_1 \wedge \ }} dy_3 \wedge \theta & & \\
& \mathbin{\color{red}{dx_1 \wedge dx_3 \wedge \ }} d y_2 \wedge dy_3 \wedge \theta & & \\
& \mathbin{\color{red}{dx_1 \wedge \ }} dy_1 \wedge d y_2 \wedge dy_3 \wedge \theta & & \\
\end{align*}
$\Longrightarrow$
\begin{align*}
(14) \ J^5 = \spn \{
& \theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 , \ & \theta \wedge dx_1 \wedge dy_1 \wedge dx_2 \wedge dy_2 - \theta \wedge dx_1 \wedge dy_1 \wedge dx_3 \wedge dy_3, \}\\
& \mathbin{\color{red}{\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge \ }} dy_2 & \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dy_1 \wedge dy_2 \ }} - \theta \wedge dx_2 \wedge dy_2 \wedge dx_3 \wedge dy_3 \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge \ }} dy_3 & \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dx_2 \wedge \ }} dy_1 \wedge d y_3 & \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dx_2 \wedge \ }} dy_2 \wedge d y_3 & \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge\ }} dx_3 \wedge dy_1 \wedge d y_3 & \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dx_3 \wedge \ }} dy_2 \wedge d y_3 & \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge\ }} dy_1 \wedge dy_2 \wedge d y_3 & \\
& \mathbin{\color{red}{ \theta \wedge \ }} dx_2 \wedge dx_3 \wedge dy_1 \wedge d y_2 & \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dx_3 \wedge dy_1 \wedge \ }} d y_3 & \\
& \mathbin{\color{red}{ \theta \wedge dx_2 \wedge \ }} dy_1 \wedge dy_2 \wedge d y_3 & \\
& \mathbin{\color{red}{ \theta \wedge \ }} dx_3 \wedge dy_1 \wedge dy_2 \wedge d y_3 & \\
\end{align*}
\begin{align*}
(7) \ \Omega^6= \spn \{
& dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge dy_2 \wedge dy_3 , \ & dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge dy_3 \wedge \theta , \}\\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge dy_2 \wedge \ }} \theta & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge d y_1 \wedge \ }} dy_3 \wedge \theta & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge d x_3 \wedge \ }} dy_2 \wedge dy_3 \wedge \theta & \\
& \mathbin{\color{red}{dx_1 \wedge dx_2 \wedge \ }} d y_1 \wedge dy_2 \wedge dy_3 \wedge \theta & \\
& \mathbin{\color{red}{dx_1 \wedge \ }} dx_3 \wedge d y_1 \wedge dy_2 \wedge dy_3 \wedge \theta & \\
\end{align*}
$\Longrightarrow$
\begin{align*}
(6) \ J^6 = \spn \{
& \theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge d y_2 , \}\\
& \mathbin{\color{red}{\theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge d y_1 \wedge \ }} dy_3 \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dx_2 \wedge dx_3 \wedge \ }} d y_2 \wedge dy_3 \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge dx_2 \wedge \ }} dy_1 \wedge dy_2 \wedge d y_3 \\
& \mathbin{\color{red}{ \theta \wedge dx_1 \wedge \ }} dx_3 \wedge dy_1 \wedge dy_2 \wedge d y_3 \\
& \mathbin{\color{red}{ \theta \wedge \ }} dx_2 \wedge dx_3 \wedge dy_1 \wedge dy_2 \wedge d y_3
\end{align*}
\end{comment}
\section{Explicit Computation of the Commutation between Pullback and Rumin Complex}\label{explicitcommutation}
Let $f : \mathbb{H}^1 \to \mathbb{H}^1$ be a smooth contact map and consider the Rumin complex. The following proposition is already contained in Theorem \ref{Fdc=dcF}, but here we prove it by an explicit computation.\\\\
We recall the result in $\mathbb{H}^1$:
\begin{prop}\label{commuH1}
Consider a contact map $f: U \subset \mathbb{H}^1 \to \mathbb{H}^1$. Then the following hold:
$\blacktriangleright$ For all $ \omega = g \in \mathcal{D}_{\mathbb{H}}^0( U )= C^\infty ( U )$,
\begin{equation}\label{k=1}
(\Lambda^1 df) d_Q \omega = d_Q (\Lambda^0 df \omega), \quad \text{ i.e.,} \quad f^* d_Q \omega = d_Q f^* \omega.
\end{equation}
$\blacktriangleright$ For all $ \omega \in \mathcal{D}_{\mathbb{H}}^1(U) =\frac{\Omega^1}{I^1} \cong \spn \{ dx , dy \}$,
\begin{equation}\label{k=2}
(\Lambda^2 df) D \omega = D (\Lambda^1 df \omega), \quad \text{ i.e.,} \quad f^* D \omega = D f^* \omega.
\end{equation}
$\blacktriangleright$ For all $ \omega \in \mathcal{D}_{\mathbb{H}}^2(U) = J^2 \cong \spn \{ dx \wedge \theta, dy \wedge \theta \}$,
\begin{equation}\label{k=3}
(\Lambda^3 df) d_Q \omega = d_Q (\Lambda^2 df \omega) , \quad \text{ i.e.,} \quad f^* d_Q \omega = d_Q f^* \omega.
\end{equation}
\noindent
Namely, the pullback by a contact map $f$ commutes with the differential operators of the Rumin complex.
\end{prop}
\begin{no}
Recalling Notation \ref{callL} and Lemma \ref{T3=XY12}, we have
$$
\lambda (1,f):= Xf^1 Yf^2 - Xf^2 Yf^1 = T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 .
$$
In this section, for brevity we will denote this simply as
$$
\lambda (f):=\lambda (1,f).
$$
\end{no}
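As a quick symbolic sanity check of this identity, the sketch below verifies $Xf^1 Yf^2 - Xf^2 Yf^1 = Tf^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1$ on two sample contact maps, a dilation and a polynomial shear. The coordinate convention for $X,Y,T$ (which may differ from the paper's by signs) and the sample maps are our assumptions.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
r = sp.symbols('r', positive=True)

# One common convention for the horizontal fields on H^1, with [X, Y] = T.
X = lambda g: sp.diff(g, x) - y / 2 * sp.diff(g, t)
Y = lambda g: sp.diff(g, y) + x / 2 * sp.diff(g, t)
T = lambda g: sp.diff(g, t)

def lam_horizontal(f1, f2, f3):
    # f3 is unused here; kept so both sides take the same arguments.
    return X(f1) * Y(f2) - X(f2) * Y(f1)

def lam_vertical(f1, f2, f3):
    return T(f3) - f1 * T(f2) / 2 + f2 * T(f1) / 2

# Sample contact maps: the dilation delta_r and a polynomial shear.
samples = [
    (r * x, r * y, r**2 * t),
    (x, y + x**2, t + x**3 / 6),
]
for f in samples:
    assert sp.simplify(lam_horizontal(*f) - lam_vertical(*f)) == 0
print("lambda(1,f) identity verified on both samples")
```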
\begin{proof}[ Proof of equation \eqref{k=1} in Proposition \ref{commuH1}]
By Lemma \ref{composition_contact},
\begin{align*}
d_Q (\Lambda^0 df (g)) =& d_Q f^* g = d_Q (g \circ f) = X(g \circ f) dx + Y(g \circ f) dy \\
=& ( Xg X f^1 + Yg X f^2 )dx +( Xg Y f^1 + Yg Y f^2 )dy.
\end{align*}
On the other hand
\begin{align*}
(\Lambda^1 df) d_Q g &= f^* d_Q g = f^* (Xg dx + Yg dy) = Xg f^* dx + Yg f^* dy \\
&
= Xg ( Xf^1 dx + Y f^1 dy ) + Yg( Xf^2 dx + Y f^2 dy)\\
&
= ( Xg X f^1 + Yg X f^2 )dx +( Xg Y f^1 + Yg Y f^2 )dy.
\end{align*}
So the first equality is verified.
\end{proof}
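This computation is a direct instance of the horizontal chain rule, and it can be replayed symbolically. The sketch below checks $f^* d_Q g = d_Q f^* g$ componentwise for a sample polynomial contact map; the convention for $X,Y$ and the map itself are our assumptions, not taken from the paper.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
X = lambda g: sp.diff(g, x) - y / 2 * sp.diff(g, t)
Y = lambda g: sp.diff(g, y) + x / 2 * sp.diff(g, t)

# A polynomial shear, contact for the convention theta = dt - (x dy - y dx)/2.
f1, f2, f3 = x, y + x**2, t + x**3 / 6

def at_f(expr):
    # Evaluate an expression at f: the "( . )_f" of the proof.
    return expr.subs({x: f1, y: f2, t: f3}, simultaneous=True)

g = x * y + t * x**2  # arbitrary polynomial test function

# d_Q f^* g has components X(g o f), Y(g o f); f^* d_Q g has the chain-rule ones.
lhs_dx, lhs_dy = X(at_f(g)), Y(at_f(g))
rhs_dx = at_f(X(g)) * X(f1) + at_f(Y(g)) * X(f2)
rhs_dy = at_f(X(g)) * Y(f1) + at_f(Y(g)) * Y(f2)
assert sp.expand(lhs_dx - rhs_dx) == 0 and sp.expand(lhs_dy - rhs_dy) == 0
print("f* d_Q g = d_Q f* g verified componentwise")
```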
\noindent
We move on to the third equality.
\begin{proof}[Proof of equation \eqref{k=3} in Proposition \ref{commuH1}]
Consider $\omega \in \mathcal{D}_{\mathbb{H}}^2(\mathbb{H}^1)$, namely, $\omega = \omega_1 dx \wedge \theta + \omega_2 dy \wedge \theta$.
\begin{align*}
&(\Lambda^3 df) d_Q \omega = f^* d_Q \omega =f^* ( (X\omega_2-Y\omega_1) dx \wedge dy \wedge \theta)=\\
&=(X\omega_2-Y\omega_1)_f ( Xf^1 Yf^2 - Yf^1 Xf^2) \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dx \wedge dy \wedge \theta \\
&=(X\omega_2-Y\omega_1)_f \lambda^2 (f) dx \wedge dy \wedge \theta,
\end{align*}
where we use equation \eqref{3dim} for the second line.\\\\% and $\lambda =\left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) = ( Xf^1 Yf^2 - Yf^1 Xf^2)$ for the third line.\\\\
On the other hand, thanks to equations \eqref{2dimx} and \eqref{2dimy},
\begin{align*}
d_Q (\Lambda^2 df \omega) =& d_Q f^* \omega =d_Q f^*( \omega_1 dx \wedge \theta + \omega_2 dy \wedge \theta)\\
=&d_Q \left ( (\omega_1)_f f^*( dx \wedge \theta ) + (\omega_2)_f f^*( dy \wedge \theta ) \right ) \\
=&d_Q \Big ( (\omega_1)_f \left ( Xf^1 \lambda (f) dx \wedge \theta + Y f^1 \lambda (f) dy \wedge \theta \right ) \\
& + (\omega_2)_f \left ( Xf^2 \lambda (f) dx \wedge \theta + Y f^2 \lambda (f) dy \wedge \theta \right ) \Big ) \\
=&d_Q \Big ( \lambda (f) \left ( (\omega_1)_f Xf^1 + (\omega_2)_f Xf^2 \right ) dx \wedge \theta \\
&+\lambda (f) \left ( (\omega_1)_f Yf^1 + (\omega_2)_f Yf^2 \right ) dy \wedge \theta \Big ) \\
=&\Big ( X \left (\lambda (f) \left ( (\omega_1)_f Yf^1 + (\omega_2)_f Yf^2 \right ) \right) \\
&- Y \left ( \lambda (f) \left ( (\omega_1)_f Xf^1 + (\omega_2)_f Xf^2 \right ) \right )
\Big ) dx \wedge dy \wedge \theta\\
=& \left ( X \left (\lambda (f) \cdot \mathcal{E}_{\omega,f} \right) - Y \left ( \lambda (f) \cdot \mathcal{F}_{\omega,f} \right ) \right ) dx \wedge dy \wedge \theta\\
=& \Big ( X (\lambda (f) ) \cdot \mathcal{E}_{\omega,f} + \lambda (f) \cdot X(\mathcal{E}_{\omega,f})\\
& - Y ( \lambda (f) ) \cdot \mathcal{F}_{\omega,f} - \lambda (f) \cdot Y(\mathcal{F}_{\omega,f}) \Big ) dx \wedge dy \wedge \theta,
\end{align*}
where one denotes
$$
\begin{cases}
\mathcal{E}_{\omega,f}:=\mathcal{E}(\omega,f):=(\omega_1)_f Yf^1 + (\omega_2)_f Yf^2,\\
\mathcal{F}_{\omega,f}:=\mathcal{F}(\omega,f):=(\omega_1)_f Xf^1 + (\omega_2)_f Xf^2.
\end{cases}
$$
\noindent
We consider these terms one by one. Recall that $X(\lambda (f))$ and $Y(\lambda (f))$ were already computed in Lemma \ref{XCYC}. We first compute $X (\mathcal{E}_{\omega,f})$:
\begin{align*}
X (\mathcal{E}_{\omega,f})=&X \left ( (\omega_1)_f Yf^1 + (\omega_2)_f Yf^2 \right )\\
=&X ((\omega_1)_f ) Yf^1 + (\omega_1)_f XYf^1 + X ( (\omega_2)_f ) Yf^2 + (\omega_2)_f XYf^2\\
=&X (\omega_1 \circ f ) Yf^1 + (\omega_1)_f XYf^1 + X ( \omega_2 \circ f ) Yf^2 + (\omega_2)_f XYf^2\\
=& \left ( ( X\omega_1)_f X f^1 + (Y\omega_1)_f X f^2 \right ) Yf^1 + \left ( ( X\omega_2)_f X f^1 + (Y\omega_2)_f X f^2 \right ) Yf^2 \\
&+ (\omega_1)_f XYf^1 + (\omega_2)_f XYf^2,
\end{align*}
where the last line follows by Lemma \ref{composition_contact}.\\\\
The term $Y ( \mathcal{F}_{\omega,f} )$ is computed similarly:
\begin{align*}
Y ( \mathcal{F}_{\omega,f} ) =& Y \left ( (\omega_1)_f Xf^1 + (\omega_2)_f Xf^2 \right )\\
=&Y ((\omega_1)_f ) Xf^1 + (\omega_1)_f YXf^1 + Y ( (\omega_2)_f ) Xf^2 + (\omega_2)_f YXf^2\\
=&Y (\omega_1 \circ f ) Xf^1 + (\omega_1)_f YXf^1 + Y ( \omega_2 \circ f ) Xf^2 + (\omega_2)_f YXf^2\\
=& \left ( ( X \omega_1)_f Y f^1 + (Y\omega_1)_f Y f^2 \right ) Xf^1 + \left ( ( X\omega_2)_f Y f^1 + (Y\omega_2)_f Y f^2 \right ) Xf^2 \\
&+ (\omega_1)_f YXf^1 + (\omega_2)_f YX f^2,
\end{align*}
where the last line follows again by Lemma \ref{composition_contact}.\\\\
Now we can compute $X (\mathcal{E}_{\omega,f}) - Y ( \mathcal{F}_{\omega,f} )$, remembering that $XYf^i - YXf^i = Tf^i$:
\begin{align*}
X (\mathcal{E}_{\omega,f}) - Y ( \mathcal{F}_{\omega,f} ) =&\left ( ( X\omega_1)_f X f^1 + (Y\omega_1)_f X f^2 \right ) Yf^1 + \left ( ( X\omega_2)_f X f^1 + (Y\omega_2)_f X f^2 \right ) Yf^2 \\
&+ (\omega_1)_f XYf^1 + (\omega_2)_f XYf^2 -\left ( ( X \omega_1)_f Y f^1 + (Y\omega_1)_f Y f^2 \right ) Xf^1\\
& - \left ( ( X\omega_2)_f Y f^1 + (Y\omega_2)_f Y f^2 \right ) Xf^2 - (\omega_1)_f YXf^1 - (\omega_2)_f YX f^2\\
=& (\omega_1)_f (XYf^1 - YXf^1) + (\omega_2)_f (XYf^2- YX f^2)\\
&+ ( X\omega_1)_f ( X f^1 Yf^1 -Y f^1 Xf^1 ) + (Y\omega_1)_f ( X f^2 Yf^1 - Y f^2 Xf^1 )\\
&+( X\omega_2)_f ( X f^1 Yf^2 - Y f^1 Xf^2 )+ (Y\omega_2)_f ( X f^2 Yf^2 - Y f^2 Xf^2 )\\
=& (\omega_1)_f Tf^1 + (\omega_2)_f T f^2+ \lambda (f) \left ( ( X\omega_2)_f - (Y\omega_1)_f \right ).
\end{align*}
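The identity just obtained can be tested symbolically on concrete polynomial data. In the sketch below the contact map, the coefficients $\omega_1,\omega_2$, and the convention for $X,Y,T$ are our own sample assumptions.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
X = lambda g: sp.diff(g, x) - y / 2 * sp.diff(g, t)
Y = lambda g: sp.diff(g, y) + x / 2 * sp.diff(g, t)
T = lambda g: sp.diff(g, t)

f1, f2, f3 = x, y + x**2, t + x**3 / 6  # sample polynomial contact map
at_f = lambda e: e.subs({x: f1, y: f2, t: f3}, simultaneous=True)

w1, w2 = x * t + y, y**2 + x * t  # arbitrary polynomial coefficients of omega
lam = X(f1) * Y(f2) - X(f2) * Y(f1)

E = at_f(w1) * Y(f1) + at_f(w2) * Y(f2)  # the quantity E_{omega,f}
F = at_f(w1) * X(f1) + at_f(w2) * X(f2)  # the quantity F_{omega,f}

lhs = X(E) - Y(F)
rhs = at_f(w1) * T(f1) + at_f(w2) * T(f2) + lam * (at_f(X(w2)) - at_f(Y(w1)))
assert sp.expand(lhs - rhs) == 0
print("X(E) - Y(F) identity verified")
```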
Next, we compute $X (\lambda (f) ) \cdot \mathcal{E}_{\omega,f} - Y ( \lambda (f) ) \cdot \mathcal{F}_{\omega,f}$:
\begin{align*}
X (\lambda (f) ) \cdot \mathcal{E}_{\omega,f} & - Y ( \lambda (f) ) \cdot \mathcal{F}_{\omega,f}=\\
=& ( Xf^2 Tf^1 -Tf^2Xf^1 ) \left ( (\omega_1)_f Yf^1 + (\omega_2)_f Yf^2 \right )\\
&-( Yf^2 Tf^1 -Tf^2 Yf^1 ) \left ( (\omega_1)_f Xf^1 + (\omega_2)_f Xf^2 \right )\\
=&(\omega_1)_f \left ( Xf^2 Tf^1 Yf^1 - Tf^2 Xf^1 Yf^1 - Yf^2 Tf^1 Xf^1+ Tf^2 Yf^1 Xf^1 \right ) \\
&+(\omega_2)_f \left ( Xf^2 Tf^1 Yf^2 - Tf^2 Xf^1 Yf^2 -Yf^2 Tf^1 Xf^2+ Tf^2 Yf^1 Xf^2 \right )\\
=& (\omega_1)_f Tf^1 \left ( Xf^2 Yf^1 - Yf^2 Xf^1 \right ) +(\omega_2)_f Tf^2 \left ( Yf^1 Xf^2 - Xf^1 Yf^2 \right )\\
=& - \lambda (f) \left ( (\omega_1)_f Tf^1 + (\omega_2)_f Tf^2 \right ).
\end{align*}
\noindent
Finally we can put all the terms together and calculate the coefficient of $dx \wedge dy \wedge \theta$:
\begin{align*}
X (\lambda (f) ) \cdot \mathcal{E}_{\omega,f} - Y &( \lambda (f) ) \cdot \mathcal{F}_{\omega,f} + \lambda (f) ( X(\mathcal{E}_{\omega,f})- Y(\mathcal{F}_{\omega,f}) )=\\
& - \lambda (f) \left [ (\omega_1)_f Tf^1 + (\omega_2)_f Tf^2 \right ] \\
&+ \lambda (f) \left [ (\omega_1)_f Tf^1 + (\omega_2)_f T f^2+ \lambda (f) ( ( X\omega_2)_f - (Y\omega_1)_f ) \right ]\\
=&\lambda ^2(f) \left ( ( X\omega_2)_f - (Y\omega_1)_f \right ) .
\end{align*}
This completes the proof.
\end{proof}
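The proof above condenses into a single symbolic check: for a sample contact map and polynomial coefficients, the $dx \wedge dy \wedge \theta$ coefficients of $f^* d_Q \omega$ and $d_Q f^* \omega$ agree. The convention for $X,Y$ and the sample data are assumptions on our part.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
X = lambda g: sp.diff(g, x) - y / 2 * sp.diff(g, t)
Y = lambda g: sp.diff(g, y) + x / 2 * sp.diff(g, t)

f1, f2, f3 = x, y + x**2, t + x**3 / 6  # sample polynomial contact map
at_f = lambda e: e.subs({x: f1, y: f2, t: f3}, simultaneous=True)

w1, w2 = x**2 * t, y * t + x**3  # omega = w1 dx ^ theta + w2 dy ^ theta
lam = X(f1) * Y(f2) - X(f2) * Y(f1)

# Coefficient of dx ^ dy ^ theta in f^* d_Q omega:
pullback_of_dQ = at_f(X(w2) - Y(w1)) * lam**2
# Coefficient of dx ^ dy ^ theta in d_Q f^* omega:
E = at_f(w1) * Y(f1) + at_f(w2) * Y(f2)
F = at_f(w1) * X(f1) + at_f(w2) * X(f2)
dQ_of_pullback = X(lam * E) - Y(lam * F)

assert sp.expand(pullback_of_dQ - dQ_of_pullback) == 0
print("f* d_Q omega = d_Q f* omega verified for 2-forms")
```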
\begin{proof}[Proof of equation \eqref{k=2} in Proposition \ref{commuH1}]
Let $\omega= \omega_1 dx + \omega_2 dy \in \mathcal{D}_{\mathbb{H}}^1(U)$. Then
\begin{align*}
f^* D \omega=& f^* \left [ (XX \omega_2 -XY\omega_1 -T\omega_1 ) dx \wedge \theta+ (YX \omega_2 - YY\omega_1 - T \omega_2) dy \wedge \theta \right ]\\
=&(XX \omega_2 -XY\omega_1 -T\omega_1 )_f \left ( X f^1 \lambda (f) dx \wedge \theta + Y f^1 \lambda (f) dy \wedge \theta \right )\\
&+(YX \omega_2 - YY\omega_1 - T \omega_2)_f \left ( X f^2 \lambda (f) dx \wedge \theta + Y f^2 \lambda (f) dy \wedge \theta \right )\\
=& \left [
(XX \omega_2 -XY\omega_1 -T\omega_1 )_f X f^1 + (YX \omega_2 - YY\omega_1 - T \omega_2)_f X f^2 \right ] \lambda (f) dx \wedge \theta\\
&+ \left [ (XX \omega_2 -XY\omega_1 -T\omega_1 )_f Y f^1 + (YX \omega_2 - YY\omega_1 - T \omega_2)_f Y f^2 \right ] \lambda (f) dy \wedge \theta\\
=& \Big [ - X f^1 ( XY\omega_1 +T\omega_1 )_f - X f^2 (YY\omega_1 )_f + X f^1 (XX \omega_2 )_f \\
& + X f^2 (YX \omega_2 - T \omega_2)_f \Big ] \lambda (f) dx \wedge \theta + \Big [ - Y f^1 ( XY\omega_1 +T\omega_1 )_f \\
& - Y f^2 (YY\omega_1 )_f + Y f^1 (XX \omega_2 )_f + Y f^2 (YX \omega_2 - T \omega_2)_f \Big ] \lambda (f) dy \wedge \theta.
\end{align*}
On the other hand, $D f^* \omega$ is more involved:
\begin{align*}
D f^* \omega =& D \left ( (\omega_1)_f ( Xf^1 dx + Y f^1 dy ) +(\omega_2)_f ( Xf^2 dx + Y f^2 dy ) \right )\\
=& D \left ( ( (\omega_1)_f Xf^1 +(\omega_2)_f Xf^2 )dx + ( (\omega_1)_f Yf^1 +(\omega_2)_f Yf^2 ) dy \right )\\
=& \Big [ XX \left ( (\omega_1)_f Yf^1 +(\omega_2)_f Yf^2 \right ) - XY \left ( (\omega_1)_f Xf^1 +(\omega_2)_f Xf^2 \right )\\
& - T \left ( (\omega_1)_f Xf^1 +(\omega_2)_f Xf^2 \right ) \Big ] dx \wedge \theta +\Big [ YX \left ( (\omega_1)_f Yf^1 +(\omega_2)_f Yf^2 \right ) \\
& - YY \left ( (\omega_1)_f Xf^1 +(\omega_2)_f Xf^2 \right ) - T \left ( (\omega_1)_f Yf^1 +(\omega_2)_f Yf^2 \right ) \Big ] dy \wedge \theta.
\end{align*}
\noindent
Given the length of the coefficients, we consider them piece by piece. Consider only the coefficient of $dx \wedge \theta$ and divide it according to the number $k$ of derivatives that hit the functions $\omega_1$ and $\omega_2$ (the $T$ derivative counts as two): we denote such parts as $\mathcal{Z}^{(k)} \omega_i$, with $k=0,1,2$ and $i=1,2$.\\
Then the coefficient of $dx \wedge \theta$ becomes
\begin{align*}
(\dots) dx \wedge \theta &= \left ( \mathcal{Z}^{(0)} \omega_1 + \mathcal{Z}^{(0)} \omega_2 + \mathcal{Z}^{(1)} \omega_1 +\mathcal{Z}^{(1)} \omega_2 + \mathcal{Z}^{(2)} \omega_1 + \mathcal{Z}^{(2)} \omega_2 \right ) dx \wedge \theta
\end{align*}
with
\begin{align*}
\mathcal{Z}^{(0)} \omega_1 &:= (\omega_1)_f XXYf^1 - (\omega_1)_f XYXf^1 - (\omega_1)_f TXf^1 \\
\mathcal{Z}^{(0)} \omega_2 &:= (\omega_2)_f XXYf^2 - (\omega_2)_f XYXf^2 - (\omega_2)_f TXf^2 \\
\mathcal{Z}^{(1)} \omega_1 &:= 2 X \left ( (\omega_1)_f \right ) XYf^1 - Y \left ( (\omega_1)_f \right ) XXf^1 - X \left ( (\omega_1)_f \right ) YXf^1 \\
\mathcal{Z}^{(1)} \omega_2 &:= 2 X \left ( (\omega_2)_f \right ) XYf^2 - Y \left ( (\omega_2)_f \right ) XXf^2 - X \left ( (\omega_2)_f \right ) YXf^2 \\
\mathcal{Z}^{(2)} \omega_1 &:= XX \left ( (\omega_1)_f \right ) Yf^1 - XY \left ( (\omega_1)_f \right ) Xf^1 - T \left ( (\omega_1)_f \right ) Xf^1 \\
\mathcal{Z}^{(2)} \omega_2 &:= XX \left ( (\omega_2)_f \right ) Yf^2 - XY \left ( (\omega_2)_f \right ) Xf^2 - T \left ( (\omega_2)_f \right ) Xf^2.
\end{align*}
Compute:
\begin{align*}
\mathcal{Z}^{(0)} \omega_1 &= (\omega_1)_f XXYf^1 - (\omega_1)_f XYXf^1 - (\omega_1)_f TXf^1\\
&
=(\omega_1)_f X ( XY-YX )f^1- (\omega_1)_f TXf^1\\
&
= (\omega_1)_f (XT-TX) f^1 = (\omega_1)_f [X,T] f^1 =0.
\end{align*}
Notice that, thanks to the symmetry, one obtains $\mathcal{Z}^{(0)} \omega_2=0$ in the same way.\\
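The vanishing of $\mathcal{Z}^{(0)} \omega_1$ and $\mathcal{Z}^{(0)} \omega_2$ rests only on the commutation relations $[X,Y]=T$ and $[X,T]=[Y,T]=0$. These can be checked symbolically; the sketch below is only a sanity check, using the coordinate expressions $X=\partial_x-\frac{y}{2}\partial_t$, $Y=\partial_y+\frac{x}{2}\partial_t$, $T=\partial_t$ of the frame dual to $(dx,dy,\theta)$ in our convention.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
phi = sp.Function('phi')(x, y, t)  # a generic smooth function

# frame dual to (dx, dy, theta), with theta = dt + (y/2)dx - (x/2)dy
X = lambda h: sp.diff(h, x) - y/2 * sp.diff(h, t)
Y = lambda h: sp.diff(h, y) + x/2 * sp.diff(h, t)
T = lambda h: sp.diff(h, t)

# commutation relations used throughout: [X,Y] = T, [X,T] = [Y,T] = 0
assert sp.simplify(X(Y(phi)) - Y(X(phi)) - T(phi)) == 0
assert sp.simplify(X(T(phi)) - T(X(phi))) == 0
assert sp.simplify(Y(T(phi)) - T(Y(phi))) == 0

# hence the Z^(0) coefficient  XXY phi - XYX phi - TX phi  vanishes identically
assert sp.simplify(X(X(Y(phi))) - X(Y(X(phi))) - T(X(phi))) == 0
```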
Next, by Lemma \ref{doublederivativecomposition}, one has the following equations in the case $n=1$
\begin{align*}
\begin{aligned}
XX (g \circ f) =& \left [ (XXg)_f X f^1 + ( YXg)_f X f^2 \right ] X f^1 + (Xg)_f X X f^1 \\
&+ \left [ (XYg)_f X f^1 + ( YYg)_f X f^2 \right ] X f^2 + (Yg)_f X X f^2,
\end{aligned}
\end{align*}
\begin{align*}
\begin{aligned}
XY(g \circ f)=&\left [ (XXg)_f X f^1 + ( YXg)_f X f^2 \right ] Y f^1 + (Xg)_f X Y f^1\\
& + \left [ (XYg)_f X f^1 + ( YYg)_f X f^2 \right ] Y f^2 + (Yg)_f X Y f^2,
\end{aligned}
\end{align*}
\begin{align*}
\begin{aligned}
T(g \circ f) =& (Xg)_f Tf^1 + (Yg)_f Tf^2 +\lambda (1,f) (Tg)_f,
\end{aligned}
\end{align*}
where $\lambda (1,f):=Xf^1 Yf^2-Yf^1Xf^2$. Using them, one gets
\begin{align*}
\mathcal{Z}^{(2)} \omega_1 =& XX \left ( (\omega_1)_f \right ) Yf^1 - XY \left ( (\omega_1)_f \right ) Xf^1 - T \left ( (\omega_1)_f \right ) Xf^1 \\
=& XX \left ( \omega_1 \circ f \right ) Yf^1 - XY \left ( \omega_1 \circ f \right ) Xf^1 - T \left ( \omega_1 \circ f \right ) Xf^1\\
=& \bigg \{ \left [ (XX\omega_1)_f X f^1 +( YX\omega_1)_f X f^2 \right ] X f^1 + (X\omega_1)_f X X f^1\\
&+ \left [ (XY\omega_1)_f X f^1 + ( YY\omega_1)_f X f^2 \right ] X f^2 + (Y\omega_1)_f X X f^2 \bigg \} Yf^1 \\
&- \bigg \{ \left [ (XX\omega_1)_f X f^1 + ( YX\omega_1)_f X f^2 \right ] Y f^1 + (X\omega_1)_f X Y f^1\\
&+ \left [ (XY\omega_1)_f X f^1 + ( YY\omega_1)_f X f^2 \right ] Y f^2 + (Y\omega_1)_f X Y f^2 \bigg \} Xf^1 \\
&- \bigg \{ (X\omega_1)_f Tf^1 + (Y\omega_1)_f Tf^2 +\lambda (f) (T\omega_1)_f \bigg \} Xf^1\\
=& (XY\omega_1)_f X f^1 ( Xf^2 Yf^1 - Yf^2 Xf^1 ) + ( YY\omega_1)_f X f^2 ( Xf^2 Yf^1 - Yf^2 Xf^1 )\\
&+ (X\omega_1)_f \left [ X X f^1 Yf^1 -( X Y f^1 + Tf^1 ) Xf^1 \right ] \\
&+ (Y\omega_1)_f \left [ X X f^2 Yf^1 -( X Y f^2 + Tf^2) Xf^1 \right ] - (T\omega_1)_f \lambda (f) Xf^1\\
=& - \lambda (f) \left [ (XY\omega_1)_f X f^1 + ( YY\omega_1)_f X f^2 + (T\omega_1)_f Xf^1 \right ] \\
&+ (X\omega_1)_f \left [ X X f^1 Yf^1 -( X Y f^1 + Tf^1 ) Xf^1 \right ]\\
& + (Y\omega_1)_f \left [ X X f^2 Yf^1 -( X Y f^2 + Tf^2) Xf^1 \right ] \\
=& - \lambda (f) \left [ ( YY\omega_1)_f X f^2 +\left ( (XY\omega_1)_f + (T\omega_1)_f \right ) Xf^1 \right ] \\
&+ (X\omega_1)_f \left [ X X f^1 Yf^1 -( X Y f^1 + Tf^1 ) Xf^1 \right ] \\
&+ (Y\omega_1)_f \left [ X X f^2 Yf^1 -( X Y f^2 + Tf^2) Xf^1 \right ].
\end{align*}
Notice that $ - \lambda (f) \left [ ( YY\omega_1)_f X f^2 +\left ( (XY\omega_1)_f + (T\omega_1)_f \right ) Xf^1 \right ]$ is exactly the $ \omega_1$-part of the coefficient of $ dx \wedge \theta$ in $f^*D \omega$.\\
Having found the desired $\omega_1$-part of the coefficient of $dx \wedge \theta$, we now show that the remaining terms vanish, namely that
\begin{align}\label{II+=0}
\begin{aligned}
\mathcal{Z}^{(1)} \omega_1 &+ (X\omega_1)_f \left [ X X f^1 Yf^1 -( X Y f^1 + Tf^1 ) Xf^1 \right ] \\
&+ (Y\omega_1)_f \left [ X X f^2 Yf^1 -( X Y f^2 + Tf^2) Xf^1 \right ] \stackrel{Th}{=}0.
\end{aligned}
\end{align}
Indeed
\begin{align*}
\mathcal{Z}^{(1)} \omega_1 =& 2 X \left ( (\omega_1)_f \right ) XYf^1 - Y \left ( (\omega_1)_f \right ) XXf^1 - X \left ( (\omega_1)_f \right ) YXf^1 \\
=& X \left ( (\omega_1)_f \right ) ( XY-YX) f^1 + X \left ( (\omega_1)_f \right ) XYf^1 - Y \left ( (\omega_1)_f \right ) XXf^1 \\
=& X \left ( \omega_1 \circ f \right ) \left ( Tf^1 + XYf^1 \right ) - Y \left ( \omega_1 \circ f \right ) XXf^1\\
=& \left ( (X\omega_1)_f X f^1 + (Y\omega_1)_f X f^2 \right ) \left ( Tf^1 + XYf^1 \right ) \\
&- \left ( (X\omega_1)_f Y f^1 + (Y\omega_1)_f Y f^2 \right ) XXf^1\\
=& (X\omega_1)_f [ X f^1 \left ( Tf^1 + XYf^1 \right ) - Y f^1 XXf^1 ] \\
&+ (Y\omega_1)_f [ X f^2 \left ( Tf^1 + XYf^1 \right ) - Y f^2 XXf^1 ],
\end{align*}
where we used Lemma \ref{composition_contact} in the second-to-last line.\\\\
With the appropriate cancellations, equation \eqref{II+=0} is reduced to
\begin{equation}
X X f^2 Yf^1 -( X Y f^2 + Tf^2) Xf^1 + X f^2 \left ( Tf^1 + XYf^1 \right ) - Y f^2 XXf^1 =0.
\end{equation}
Rearranging the left-hand side, notice that
\begin{align*}
& X X f^2 Yf^1 + X f^2 XYf^1 - X Y f^2 Xf^1- Y f^2 XXf^1 - Tf^2 Xf^1 + X f^2 Tf^1=\\
& = X (X f^2 Yf^1) - X (Y f^2 Xf^1) - Tf^2 Xf^1 + X f^2 Tf^1\\
& = X (X f^2 Yf^1 - Y f^2 Xf^1) - Tf^2 Xf^1 + X f^2 Tf^1\\
& = - X (\lambda (f)) - Tf^2 Xf^1 + X f^2 Tf^1\\
&= - ( X f^2 Tf^1 - Tf^2 Xf^1 ) - Tf^2 Xf^1 + X f^2 Tf^1=0,
\end{align*}
where we used a previous calculation of $ X (\lambda (f))$ (see Lemma \ref{XCYC}).\\\\
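As a further sanity check, the reduced identity above can be verified symbolically on an explicit contact map. The polynomial shear $f=(x,\,y+x^2,\,t+x^3/6)$ used below is an illustrative choice (one checks directly that $\mathcal A=\mathcal B=0$); contactness is needed, since the cancellation relies on Lemma \ref{XCYC}.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
X = lambda h: sp.diff(h, x) - y/2 * sp.diff(h, t)
Y = lambda h: sp.diff(h, y) + x/2 * sp.diff(h, t)
T = lambda h: sp.diff(h, t)

# an explicit polynomial contact map of H^1 (illustrative choice)
f1, f2, f3 = x, y + x**2, t + x**3/6

# contactness: A = B = 0
A = X(f3) + sp.Rational(1, 2)*(f2*X(f1) - f1*X(f2))
B = Y(f3) + sp.Rational(1, 2)*(f2*Y(f1) - f1*Y(f2))
assert sp.simplify(A) == 0 and sp.simplify(B) == 0

# the reduced cancellation identity
expr = X(X(f2))*Y(f1) - (X(Y(f2)) + T(f2))*X(f1) \
     + X(f2)*(T(f1) + X(Y(f1))) - Y(f2)*X(X(f1))
assert sp.simplify(expr) == 0
```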
This shows that the $\omega_1$-parts of the coefficient of $dx \wedge \theta$ coincide in $f^* D \omega $ and $D f^* \omega$. To complete the proof of $f^* D \omega= D f^* \omega$, one proves the same for the $\omega_2$-part of the coefficient of $dx \wedge \theta$, and then for the whole coefficient of $dy \wedge \theta$; this can be done in a similar way.
\end{proof}
\mycomment{
\section{Proofs of Section \ref{dercompuspul}}\label{explicitpp}
\begin{obs}[Proof of Lemma \ref{composition}]\label{proofcomposition}
Let $j=1,\dots,2n$. Define
$$
\tilde w_{ j}:=
\begin{cases}
w_{n+j}, \quad \text{if} \ j=1,\dots,n\\
-w_{j-n}, \quad \text{if} \ j=n+1,\dots,2n.
\end{cases}
$$
Then
\begin{align*}
W_j (g \circ f) &= \left (\partial_{w_j}- \frac{1}{2} \tilde w_{ j} \partial_t \right ) (g \circ f)\\
&= \sum_{l=1}^{2n+1} (\partial_{w_l} g)_f \partial_{w_{j}} f^l - \frac{1}{2} \tilde w_{ j} \sum_{l=1}^{2n+1} (\partial_{w_l} g)_f \partial_{t} f^l \\
&= \sum_{l=1}^{2n+1} \left ( \partial_{w_{j}} f^l - \frac{1}{2} \tilde w_{ j} \partial_{t} f^l \right ) (\partial_{w_l} g)_f = \sum_{l=1}^{2n+1} W_{j} f^l (\partial_{w_l} g)_f\\
&= \sum_{l=1}^{2n} W_{j} f^l \left ( W_l g+ \frac{1}{2} \tilde w_{ l} T g \right )_f + W_{j} f^{2n+1} (Tg)_f \\
&= \sum_{l=1}^{2n} ( W_l g)_f W_{j} f^l + (Tg)_f \left ( W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{j} f^l \right ) .
\end{align*}
\end{obs}
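For $n=1$ and $j=1$, the lemma reads $X(g\circ f)=(Xg)_f\,Xf^1+(Yg)_f\,Xf^2+(Tg)_f\,\big(Xf^3+\tfrac12 f^2 Xf^1-\tfrac12 f^1 Xf^2\big)$, with no contactness assumed. A symbolic spot check (the polynomial $f$ and $g$ below are arbitrary illustrative choices):

```python
import sympy as sp

x, y, t, u, v, w = sp.symbols('x y t u v w')
X  = lambda h: sp.diff(h, x) - y/2 * sp.diff(h, t)   # frame in the source
Xt = lambda h: sp.diff(h, u) - v/2 * sp.diff(h, w)   # same frame in target coords (u,v,w)
Yt = lambda h: sp.diff(h, v) + u/2 * sp.diff(h, w)
Tt = lambda h: sp.diff(h, w)

# arbitrary (non-contact) polynomial map and function -- illustrative choices
f1, f2, f3 = x*y, t + x**2, x*t + y
g = u*v*w + u**2
P = lambda h: h.subs({u: f1, v: f2, w: f3}, simultaneous=True)  # evaluate along f

A1 = X(f3) + sp.Rational(1, 2)*(f2*X(f1) - f1*X(f2))  # = A(1,f)
lhs = X(P(g))
rhs = P(Xt(g))*X(f1) + P(Yt(g))*X(f2) + P(Tt(g))*A1
assert sp.expand(lhs - rhs) == 0
```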
\begin{obs}[Proof of Proposition \ref{pushforwardgeneral}]\label{proofpushforwardgeneral}
The proof follows immediately from Lemma \ref{composition}, recalling that $(f_* W_j )h=W_j(h \circ f)$. Alternatively, one can carry out the computation directly as follows. Let $j=1,\dots,2n$. Define
$$
\tilde w_{ j}:=
\begin{cases}
w_{n+j}, \quad \text{if} \ j=1,\dots,n\\
-w_{j-n}, \quad \text{if} \ j=n+1,\dots,2n.
\end{cases}
$$
Then
\begin{align*}
f_* W_j &=f_* \left (\partial_{w_j}- \frac{1}{2} \tilde w_{ j} \partial_t \right )\\
&= \sum_{l=1}^{2n+1} \partial_{w_{j}} f^l \partial_{w_l} - \frac{1}{2} \tilde w_{ j} \sum_{l=1}^{2n+1} \partial_{t} f^l \partial_{w_l}\\
&= \sum_{l=1}^{2n+1} \left ( \partial_{w_{j}} f^l - \frac{1}{2} \tilde w_{ j} \partial_{t} f^l \right ) \partial_{w_l} = \sum_{l=1}^{2n+1} W_{j} f^l \partial_{w_l}\\
&= \sum_{l=1}^{2n} W_{j} f^l \left ( W_l + \frac{1}{2} \tilde w_{ l} T \right ) + W_{j} f^{2n+1} T \\
&= \sum_{l=1}^{2n} W_{j} f^l W_l + \left ( W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} W_{j} f^l \right ) T .
\end{align*}
\end{obs}
\begin{comment}
Let $f: \mathbb{H}^1 \to \mathbb{H}^1$, $f=(f^1,f^2,f^3 )$ be a $C^1_\mathbb{H}$ function.
\begin{rec}
Recall from \ref{callA} that
$$
\begin{cases}
\mathcal{A}= Xf^3 + \frac{1}{2}f^2 Xf^1 - \frac{1}{2}f^1 Xf^2\\
\mathcal{B}= Yf^3 + \frac{1}{2}f^2 Yf^1 - \frac{1}{2}f^1 Yf^2\\
\mathcal{C}(f)= Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2.
\end{cases}
$$
Also
$$
\begin{cases}
f_* \partial_x =(\partial_x f^1) \partial_x + (\partial_x f^2)\partial_y + (\partial_x f^3)\partial_t\\
f_* \partial_y =(\partial_y f^1) \partial_x + (\partial_y f^2)\partial_y + (\partial_y f^3)\partial_t\\
f_* \partial_t =(\partial_t f^1) \partial_x + (\partial_t f^2)\partial_y + (\partial_t f^3)\partial_t
\end{cases}
$$
and
$$
\begin{cases}
f^* dx=(\partial_x f^1) dx + (\partial_y f^1) dy + (\partial_t f^1) dt\\
f^* dy = (\partial_x f^2) dx + (\partial_y f^2)dy + (\partial_t f^2) dt\\
f^* dt = (\partial_x f^3) dx + (\partial_y f^3) dy + (\partial_t f^3)dt.
\end{cases}
$$
\end{rec}
\begin{obs}[Proof of Proposition \ref{pushforwardgeneral}]\label{proofpushforwardgeneral}
Let $j=1,\dots,2n$. Define
$$
\tilde w_{ j}:=
\begin{cases}
w_{n+j}, \quad \text{if} \ j=1,\dots,n\\
-w_{j-n}, \quad \text{if} \ j=n+1,\dots,2n.
\end{cases}
$$
Then
\begin{align*}
f_* W_j &=f_* \left (\partial_{w_j}- \frac{1}{2} \tilde w_{ j} \partial_t \right )\\
&= \sum_{l=1}^{2n+1} \partial_{w_{j}} f^l \partial_{w_l} - \frac{1}{2} \tilde w_{ j} \sum_{l=1}^{2n+1} \partial_{t} f^l \partial_{w_l}\\
&= \sum_{l=1}^{2n+1} \left ( \partial_{w_{j}} f^l - \frac{1}{2} \tilde w_{ j} \partial_{t} f^l \right ) \partial_{w_l}\\
&= \sum_{l=1}^{2n+1} W_{j} f^l \partial_{w_l}\\
&= \sum_{l=1}^{2n} W_{j} f^l \left ( W_l + \frac{1}{2} \tilde w_{ l} T \right ) + W_{j} f^{2n+1} T \\
&= \sum_{l=1}^{2n} W_{j} f^l W_l + \left ( W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} W_{j} f^l \right ) T .
\end{align*}
Using equation \eqref{compoT},
\begin{align*}
(f_* T)h&=T(h \circ f)\\
&= \sum_{l=1}^{2n} \left (W_l h \right )_f T f^l + (Th)_f \mathcal{C}_f .
\end{align*}
So
$$
f_* T= \sum_{l=1}^{2n} T f^l W_l + \mathcal{C}_f T .
$$
\end{obs}
\begin{obs}[Proof of Proposition \ref{pullbackgeneral}]\label{proofpullbackgeneral}
\begin{align*}
f^* dx&=(\partial_x f^1) dx + (\partial_y f^1) dy + (\partial_t f^1) \left ( \theta + \frac{1}{2}f^1 dy - \frac{1}{2}f^2 dx \right ) \\
&
=
Xf^1 dx + Y f^1 dy + T f^1 \theta.
\end{align*}
\begin{align*}
f^* dy& = (\partial_x f^2) dx + (\partial_y f^2)dy + (\partial_t f^2) \left ( \theta + \frac{1}{2}f^1 dy - \frac{1}{2}f^2 dx \right ) \\
&
=
Xf^2 dx + Y f^2 dy + T f^2 \theta.
\end{align*}
\begin{align*}
f^* \theta =& f^* \left ( dt - \frac{1}{2}x dy + \frac{1}{2}y dx \right ) \\
=&(\partial_x f^3) dx + (\partial_y f^3) dy + (\partial_t f^3) \left ( \theta + \frac{1}{2}f^1 dy - \frac{1}{2}f^2 dx \right )\\
&
- \frac{1}{2}f^1 \left ( Xf^2 dx + Y f^2 dy + T f^2 \theta \right ) + \frac{1}{2}f^2 \left ( Xf^1 dx + Y f^1 dy + T f^1 \theta \right )\\
=& \left ( X f^3 - \frac{1}{2}f^1 Xf^2 + \frac{1}{2}f^2 Xf^1 \right ) dx +\left ( Y f^3 - \frac{1}{2}f^1 Yf^2 + \frac{1}{2}f^2 Yf^1 \right ) dy \\
+\left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) \theta\\
= & \mathcal{A} dx + \mathcal{B} dy + \mathcal{C} \theta.
\end{align*}
\end{obs}
\end{comment}
\begin{comment}
So the explicit matrices are:
\begin{align*}
f_*=
\begin{pmatrix}
Xf^1 &Yf^1 & Tf^1 \\
Xf^2 & Yf^2 & Tf^2 \\
\left ( Xf^3 + \frac{1}{2}f^2 Xf^1 - \frac{1}{2}f^1 Xf^2 \right ) &\left ( Yf^3 + \frac{1}{2}f^2 Yf^1 - \frac{1}{2}f^1 Yf^2 \right )
& \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right )
\end{pmatrix}
=
\end{align*}
\begin{align*}
=
\begin{pmatrix}
Xf^1 &Yf^1 & Tf^1 \\
Xf^2 & Yf^2 & Tf^2 \\
\mathcal{A} & \mathcal{B} & \mathcal{C}
\end{pmatrix}
\end{align*}
and
\begin{align*}
f^*=
\begin{pmatrix}
Xf^1 & Xf^2 & \left ( Xf^3 + \frac{1}{2}f^2 Xf^1 - \frac{1}{2}f^1 Xf^2 \right ) \\
Yf^1 & Yf^2 & \left ( Yf^3 + \frac{1}{2}f^2 Yf^1 - \frac{1}{2}f^1 Yf^2 \right ) \\
Tf^1 & Tf^2 & \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right )
\end{pmatrix}
=
\begin{pmatrix}
Xf^1 & Xf^2 & \mathcal{A} \\
Yf^1 & Yf^2 & \mathcal{B} \\
Tf^1 & Tf^2 & \mathcal{C}
\end{pmatrix}
\end{align*}
\subsection{$f$ contact map}
$f$ is a contact map if
$$
\begin{cases}
\mathcal{A}= Xf^3 + \frac{1}{2}f^2 Xf^1 - \frac{1}{2}f^1 Xf^2 =0 \\
\mathcal{B}= Yf^3 + \frac{1}{2}f^2 Yf^1 - \frac{1}{2}f^1 Yf^2 =0\\
\mathcal{C}= Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \neq 0
\end{cases}
$$
\begin{rem}\label{TTTT}
Moreover, if $f$ is a contact map we have that
$$
\mathcal{C} = Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 = X f^1 Y f^2 - X f^2 Y f^1.
$$
\end{rem}
\end{comment}
\begin{obs}[Proof of Lemma \ref{T3=XY12}]\label{proofT3=XY12}
\begin{align*}
T& f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) T f^l = \\
=& \left ( W_j W_{n+j} - W_{n+j} W_{j} \right ) f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) \left ( W_j W_{n+j} - W_{n+j} W_{j} \right ) f^l \\
=& W_j W_{n+j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_j W_{n+j} f^l
- W_{n+j} W_{j} f^{2n+1} - \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{n+j} W_{j} f^l \\
=& W_j \left ( W_{n+j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{n+j} f^l \right ) - \frac{1}{2} \sum_{l=1}^{2n} W_j \tilde w_{ l} (f) W_{n+j} f^l \\
& - W_{n+j} \left ( W_{j} f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) W_{j} f^l \right) + \frac{1}{2} \sum_{l=1}^{2n} W_{n+j} \tilde w_{ l} (f) W_{j} f^l \\
=& W_j \mathcal{A}(n+j,f) - \frac{1}{2} \sum_{l=1}^{2n} W_j \tilde w_{ l} (f) W_{n+j} f^l - W_{n+j}\mathcal{A}(j,f) + \frac{1}{2} \sum_{l=1}^{2n} W_{n+j} \tilde w_{ l} (f) W_{j} f^l \\
=& \frac{1}{2} \sum_{l=1}^{2n} W_{n+j} \tilde w_{ l} (f) W_{j} f^l - \frac{1}{2} \sum_{l=1}^{2n} W_j \tilde w_{ l} (f) W_{n+j} f^l \\
=& \frac{1}{2} \sum_{l=1}^{n} \left ( W_{n+l} f^{n+l} W_{j} f^l - W_{n+j} f^l W_{j} f^{n+l} \right )
- \frac{1}{2} \sum_{l=1}^{n} \left ( W_j f^{n+l} W_{n+j} f^l - W_j f^l W_{n+j} f^{n+l} \right ) \\
=& \sum_{l=1}^{n} \left ( W_j f^l W_{n+j} f^{n+l} - W_{n+j} f^{l}W_{j} f^{n+l} \right )= \lambda (j,f).
\end{align*}
\end{obs}
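For $n=1$ the lemma states $\mathcal C(f)=Tf^3+\tfrac12 f^2Tf^1-\tfrac12 f^1Tf^2=\lambda(1,f)$ whenever $f$ is contact. A symbolic check on an explicit contact map (a dilation $\delta_r$ composed with a polynomial shear; an illustrative choice, not part of the proof):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
r = sp.symbols('r', positive=True)
X = lambda h: sp.diff(h, x) - y/2 * sp.diff(h, t)
Y = lambda h: sp.diff(h, y) + x/2 * sp.diff(h, t)
T = lambda h: sp.diff(h, t)

# dilation delta_r followed by a shear: an explicit contact map (illustrative)
f1 = r*x
f2 = r*y + (r*x)**2
f3 = r**2*t + (r*x)**3/6

A = X(f3) + sp.Rational(1, 2)*(f2*X(f1) - f1*X(f2))
B = Y(f3) + sp.Rational(1, 2)*(f2*Y(f1) - f1*Y(f2))
C = T(f3) + sp.Rational(1, 2)*(f2*T(f1) - f1*T(f2))
lam = X(f1)*Y(f2) - Y(f1)*X(f2)

assert sp.simplify(A) == 0 and sp.simplify(B) == 0  # f is contact
assert sp.simplify(C - lam) == 0                    # C(f) = lambda(1,f)
assert sp.simplify(lam - r**2) == 0                 # here lambda(1,f) = r^2
```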
\begin{obs}[Proof of Lemma \ref{nabla_comp1}]\label{proofnabla_comp1}
Consider a horizontal vector $V$ and compute the scalar product of the Heisenberg gradient of the composition (which is horizontal by definition) against such a vector $V$:
$$
\langle \nabla_{\mathbb{H}} (g \circ f) ,V \rangle_H = \langle d_Q (g \circ f) \vert V \rangle .
$$
Note that here we may replace $d_Q$ by $d$ (and vice versa), because the last component of the differential plays no role when the computation involves only horizontal objects. Formally we have
\begin{align*}
\langle d \phi \vert V \rangle
= & \langle \sum_{j=1}^{n} \left ( X_j \phi d x_j + Y_j \phi dy_j \right ) + T\phi \theta \vert \sum_{j=1}^{n} \left ( V_j X_j + V_{n+j} Y_j \right ) \rangle \\
=& \langle \sum_{j=1}^{n} \left ( X_j \phi d x_j + Y_j \phi dy_j \right ) \vert \sum_{j=1}^{n} \left ( V_j X_j + V_{n+j} Y_j \right ) \rangle \\
=& \langle d_Q \phi \vert V \rangle.
\end{align*}
We can repeat this below for $f_* V $, since $f_* V $ is still a horizontal vector field. Then
\begin{align*}
\langle \nabla_{\mathbb{H}} (g \circ f) ,V \rangle_H =& \langle d_Q (g \circ f) \vert V \rangle = \langle d (g \circ f) \vert V \rangle =
(g \circ f)_* ( V ) \\
=& ( (g_*)_f \circ f_* ) ( V ) = (d g)_f ( f_* V ) =
\langle (d g)_f \vert f_* V \rangle \\
=& \langle (d_Q g)_f \vert f_* V \rangle = \langle (\nabla_{\mathbb{H}} g)_f , f_* V \rangle_H = \langle f_{*}^T (\nabla_{\mathbb{H}} g)_f, V \rangle_H .
\end{align*}
Then, since $V$ is a general horizontal vector,
$$
\nabla_{\mathbb{H}} (g \circ f) = f_*^T (\nabla_{\mathbb{H}} g)_f .
$$
\end{obs}
\begin{obs}[Proof of Lemma \ref{doublederivativecomposition}]\label{proofdoublederivativecomposition}
Remember that
$$
W_j (g \circ f) = \sum_{l=1}^{2n} \left (W_l g \right )_f W_j f^l.
$$
Then
\begin{align*}
W_j W_i (g \circ f) =& \sum_{l=1}^{2n} W_j \left ( \left ( W_l g \circ f \right ) W_i f^l \right ) \\
=& \sum_{l=1}^{2n} \left [ W_j \left (W_l g \circ f \right ) W_i f^l + \left (W_l g \right )_f W_j W_i f^l
\right ]\\
=& \sum_{l=1}^{2n} \left [ \sum_{h=1}^{2n} \left (
\left (W_h W_l g \right )_f W_j f^h \right ) W_i f^l + \left (W_l g \right )_f W_j W_i f^l
\right ].
\end{align*}
\begin{align*}
T(g \circ f)
=& \left (W_j W_{n+j} - W_{n+j} W_{j} \right ) (g\circ f)\\
=& \sum_{l=1}^{2n} \Bigg [
\left ( \sum_{h=1}^{2n} \left (W_h W_l g \right )_f W_j f^h \right ) W_{n+j} f^l + \left (W_l g \right )_f W_j W_{n+j} f^l \\
&- \left ( \sum_{h=1}^{2n} \left (W_h W_l g \right )_f W_{n+j} f^h \right ) W_j f^l - \left (W_l g \right )_f W_{n+j} W_j f^l \Bigg ],\\
=& \sum_{l=1}^{2n} \left [
\sum_{h=1}^{2n} \left (W_h W_l g \right )_f \left ( W_j f^h W_{n+j} f^l - W_{n+j} f^h W_j f^l \right )
+ \left (W_l g \right )_f T f^l \right ]\\
=& \sum_{l=1}^{2n} \left (W_l g \right )_f T f^l + \sum_{l,h=1}^{2n} \left (W_h W_l g \right )_f \left ( W_j f^h W_{n+j} f^l - W_{n+j} f^h W_j f^l \right ) .
\end{align*}
Note that every term with $l=h$ vanishes. Furthermore, the term corresponding to a pair $(l,h)$ equals the term of the pair $(h,l)$ with opposite sign. Then we can rewrite as
\begin{align*}
T(g \circ f)
=& \sum_{l=1}^{2n} \left (W_l g \right )_f T f^l +
\sum_{\mathclap{\substack{ l,h=1\\ l<h }}}^{2n}
\left (W_h W_l g - W_l W_h g \right )_f \left ( W_j f^h W_{n+j} f^l - W_{n+j} f^h W_j f^l \right ) .
\end{align*}
Then notice that all the terms in the second sum vanish except those with $h=n+l$. So we can finally write
\begin{align*}
T(g \circ f)
=& \sum_{l=1}^{2n} \left (W_l g \right )_f T f^l +
(Tg)_f \sum_{ l=1}^{n}
\left ( W_j f^l W_{n+j} f^{n+l} - W_{n+j} f^l W_j f^{n+l} \right ) .
\end{align*}
\end{obs}
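For $n=1$, the second-derivative formula of the lemma (the one used in the main proof) can be spot-checked symbolically. The lemma uses the contact chain rule, so $f$ must be contact; the shear $f=(x,\,y+x^2,\,t+x^3/6)$ and the function $g=uw$ below are illustrative choices.

```python
import sympy as sp

x, y, t, u, v, w = sp.symbols('x y t u v w')
X  = lambda h: sp.diff(h, x) - y/2 * sp.diff(h, t)
Xt = lambda h: sp.diff(h, u) - v/2 * sp.diff(h, w)  # the frame in target coords
Yt = lambda h: sp.diff(h, v) + u/2 * sp.diff(h, w)

# explicit polynomial contact map (illustrative) and test function
f1, f2, f3 = x, y + x**2, t + x**3/6
g = u*w
P = lambda h: h.subs({u: f1, v: f2, w: f3}, simultaneous=True)

# XX(g o f) = [ (XXg)_f Xf^1 + (YXg)_f Xf^2 ] Xf^1 + (Xg)_f XXf^1
#           + [ (XYg)_f Xf^1 + (YYg)_f Xf^2 ] Xf^2 + (Yg)_f XXf^2
lhs = X(X(P(g)))
rhs = (P(Xt(Xt(g)))*X(f1) + P(Yt(Xt(g)))*X(f2))*X(f1) + P(Xt(g))*X(X(f1)) \
    + (P(Xt(Yt(g)))*X(f1) + P(Yt(Yt(g)))*X(f2))*X(f2) + P(Yt(g))*X(X(f2))
assert sp.expand(lhs - rhs) == 0
```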
\begin{obs}[Proof of Proposition \ref{pushforwardcontact}]\label{proofpushforwardcontact}
The proof of equation \eqref{pushforwardWj} follows immediately from the definition of contactness. To prove equation \eqref{pushforwardT}, we use equation \eqref{compoT}, remembering that $(f_* T)h=T(h \circ f)$:
\begin{align*}
(f_* T)h&=T(h \circ f)\\
&= \sum_{l=1}^{2n} \left (W_l h \right )_f T f^l + (Th)_f \lambda (f) .
\end{align*}
So
$$
f_* T= \sum_{l=1}^{2n} T f^l W_l + \lambda (f) T .
$$
Alternatively, to prove the statement directly, one works exactly as in the proof of equation \eqref{compoT}.
\end{obs}
\begin{obs}[Proof of Observation \ref{spamT}]\label{proofspamT}
From equation \eqref{pushforwardWj}, we can write $f_* W_j = \sum_{l=1}^{2n} W_{j} f^l W_l $, with $ j=1,\dots,2n .$ Furthermore, since $f$ is a diffeomorphism, $f_* ([W_j,W_{n+j}]) = [f_* W_j,f_* W_{n+j}]$. Then
\begin{align*}
f_* T =&f_* ([W_j,W_{n+j}]) = [f_* W_j,f_* W_{n+j}]= \left [ \sum_{l=1}^{2n} W_{j} f^l W_l , \sum_{h=1}^{2n} W_{n+j} f^h W_h \right ]\\
=& \sum_{l=1}^{2n} W_{j} f^l W_l \sum_{h=1}^{2n} W_{n+j} f^h W_h - \sum_{h=1}^{2n} W_{n+j} f^h W_h \sum_{l=1}^{2n} W_{j} f^l W_l \\
=& \sum_{l,h=1}^{2n} \left ( ( W_{j} f^l W_l ) ( W_{n+j} f^h W_h ) - ( W_{n+j} f^h W_h ) ( W_{j} f^l W_l ) \right ) \\
=& \sum_{l,h=1}^{2n} \left ( W_{j} f^l W_{n+j} f^h W_l W_h - W_{n+j} f^h W_{j} f^l W_h W_l \right ) \\
=& \sum_{l,h=1}^{2n} W_{j} f^l W_{n+j} f^h \left ( W_l W_h - W_h W_l \right ) .
\end{align*}
The terms of this sum all vanish except when $(l,h)=(n+h,h)$, with $h=1,\dots,n$, or $(l,h)=(l,n+l)$, with $l=1,\dots,n$. So
\begin{align*}
f_* T =& \sum_{h=1}^{n} W_{j} f^{n+h} W_{n+j} f^h \left ( W_{n+h} W_h - W_h W_{n+h} \right ) \\
&+ \sum_{l=1}^{n} W_{j} f^l W_{n+j} f^{n+l} \left ( W_l W_{n+l} - W_{n+l} W_l \right ) \\
=& - \sum_{l=1}^{n} W_{j} f^{n+l} W_{n+j} f^l T + \sum_{l=1}^{n} W_{j} f^l W_{n+j} f^{n+l} T \\
=& \sum_{l=1}^{n} \left ( W_{j} f^l W_{n+j} f^{n+l} - W_{j} f^{n+l} W_{n+j} f^l \right ) T .
\end{align*}
\end{obs}
\begin{comment}
Then
\begin{align*}
f_*=
\begin{pmatrix}
Xf^1 &Yf^1 & Tf^1 \\
Xf^2 & Yf^2 & Tf^2 \\
0 & 0 & Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \neq 0
\end{pmatrix}
=
\begin{pmatrix}
Xf^1 &Yf^1 & Tf^1 \\
Xf^2 & Yf^2 & Tf^2 \\
0 & 0 & \lambda \neq 0
\end{pmatrix}
\end{align*}
and
$$
f_* X = Xf^1 X + Xf^2 Y.
$$
$$
f_* Y= Yf^1 X + Yf^2 Y.
$$
$$
f_* T= Tf^1 X + Tf^2 Y + \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) T=
$$
$$
= Tf^1 X + Tf^2 Y + \mathcal{C} T.
$$
\begin{align*}
f^*=
\begin{pmatrix}
Xf^1 & Xf^2 & 0 \\
Yf^1 & Yf^2 & 0\\
Tf^1 & Tf^2 & \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right )
\end{pmatrix}
=
\begin{pmatrix}
Xf^1 & Xf^2 & 0 \\
Yf^1 & Yf^2 & 0\\
Tf^1 & Tf^2 & \mathcal{C}
\end{pmatrix}
\end{align*}
and
$$
f^* dx= Xf^1 dx + Y f^1 dy + T f^1 \theta.
$$
$$
f^* dy = Xf^2 dx + Y f^2 dy + T f^2 \theta.
$$
$$
f^* \theta = \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) \theta = \mathcal{C} \theta
$$
\subsection{$f$ contact map and diffeomorphism}
$f$ is a contact map and a diffeomorphism iff
$$
\begin{cases}
\mathcal{A} = Xf^3 + \frac{1}{2}f^2 Xf^1 - \frac{1}{2}f^1 Xf^2 =0 \\
\mathcal{B} = Yf^3 + \frac{1}{2}f^2 Yf^1 - \frac{1}{2}f^1 Yf^2 =0\\
\mathcal{C} = Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \neq 0 \Rightarrow \ \mathcal{C} = Tf^3 \neq 0 \\
T f^1=0\\
T f^2 =0\\
\end{cases}
$$
Then
\begin{align*}
f_*=
\begin{pmatrix}
Xf^1 &Yf^1 & 0\\
Xf^2 & Yf^2 & 0 \\
0 & 0 & Tf^3 \neq 0
\end{pmatrix}
\end{align*}
$$
f_* X = Xf^1 X + Xf^2 Y.
$$
$$
f_* Y= Yf^1 X + Yf^2 Y.
$$
$$
f_* T= Tf^3 T.
$$
\begin{align*}
f^*=
\begin{pmatrix}
Xf^1 & Xf^2 & 0 \\
Yf^1 & Yf^2 & 0\\
0 & 0 & Tf^3 .
\end{pmatrix}
\end{align*}
$$
f^* dx= Xf^1 dx + Y f^1 dy.
$$
$$
f^* dy = Xf^2 dx + Y f^2 dy.
$$
$$
f^* \theta = T f^3 \theta
$$
\begin{rem}
See 5.4.1 in \cite{BFP} for another explicit condition of contact map.
\end{rem}
\subsection{$f_*$ and $f^*$ for higher dimension, $f$ contact map}
\begin{rec}
Recall the Remark \ref{T3=XY12}:
$$
\mathcal{C}= Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 = X f^1 Y f^2 - X f^2 Y f^1.
$$
\end{rec}
\begin{rem}\label{XCYC}
$$
X(\mathcal{C})= Xf^1 Tf^2 - Tf^1 Xf^2
$$
and
$$
Y(\mathcal{C})= Yf^1 Tf^2 - Tf^1 Yf^2 .
$$
\end{rem}
\end{comment}
\begin{obs}[Proof of Lemma \ref{XCYC}]\label{proofXCYC}
Observe first that
$$
T(f^l W_j f^{n+l})=Tf^l W_j f^{n+l} + f^l TW_j f^{n+l} = Tf^l W_j f^{n+l} + f^l W_j T f^{n+l},
$$
meaning
$$
- f^l W_j T f^{n+l} =- T(f^l W_j f^{n+l}) + Tf^l W_j f^{n+l} .
$$
Likewise
$$
f^{n+l} W_j T f^{l} = T(f^{n+l} W_j f^{l})- Tf^{n+l} W_j f^{l} .
$$
\begin{align*}
W_j& (\lambda (h,f))=\\
=& W_j \left ( T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \tilde w_{ l} (f) T f^l \right )\\
=& W_j T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{2n} \left ( W_j( \tilde w_{ l} (f)) T f^l +
\tilde w_{ l} (f) W_j T f^l
\right ) \\
=& W_j T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{n} \left (
W_j( \tilde w_{ l} (f)) T f^l +\tilde w_{ l} (f) W_j T f^l
+ W_j( \tilde w_{ {n+l}} (f)) T f^{n+l} +\tilde w_{ {n+l}} (f) W_j T f^{n+l}
\right ) \\
=& W_j T f^{2n+1} + \frac{1}{2} \sum_{l=1}^{n} \left (
W_j f^{n+l} T f^l + f^{n+l} W_j T f^l
- W_j f^l T f^{n+l} - f^l W_j T f^{n+l}
\right ) \\
=& T W_j f^{2n+1} + \frac{1}{2} \sum_{l=1}^{n} \left (
T(f^{n+l} W_j f^{l}) - T(f^l W_j f^{n+l})
\right )
+ \sum_{l=1}^{n} \left ( W_j f^{n+l} T f^l - Tf^{n+l} W_j f^{l} \right ) \\
=& \sum_{l=1}^{n} \left ( W_j f^{n+l} T f^l - Tf^{n+l} W_j f^{l} \right )\\
=& \sum_{l=1}^{2n} W_j( \tilde w_{ l} (f) ) T f^l ,
\end{align*}
where we used $ T W_j f^{2n+1} + \frac{1}{2} \sum_{l=1}^{n} \left ( T(f^{n+l} W_j f^{l}) - T(f^l W_j f^{n+l}) \right ) = T (\mathcal{A}(j,f))=0 $.
\end{obs}
\begin{comment}
\begin{align*}
X(\lambda )=& X \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right )= X T f^3 - \frac{1}{2}X(f^1 Tf^2) + \frac{1}{2}X(f^2 Tf^1 )\\
= & X T f^3 + \frac{1}{2} [ - Xf^1 Tf^2
\underbrace{ - f^1 XTf^2 + (-Tf^1 X f^2 }_{ -T(f^1 Xf^2) }
+Tf^1 X f^2)\\
&
+ Xf^2 Tf^1 +
\underbrace{ f^2 XTf^1 + (Tf^2Xf^1}_{T(f^2Xf^1)}
-Tf^2Xf^1) ]\\
=&\underbrace{ X T f^3 - \frac{1}{2}T(f^1 Xf^2) + \frac{1}{2} T(f^2Xf^1)}_{T(X f^3 - \frac{1}{2}(f^1 Xf^2) + \frac{1}{2} (f^2Xf^1))=0} + \frac{1}{2} ( Xf^2 Tf^1 -Tf^2Xf^1 )2\\
=& Xf^2 Tf^1 -Tf^2Xf^1 .
\end{align*}
In the same way, given the perfect simmetry, one has that $Y(\mathcal{C})$:
\begin{align*}
Y(\mathcal{C})&=Y \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right )=...=\\
&
= Yf^2 Tf^1 -Tf^2 Yf^1 .
\end{align*}
\end{comment}
\begin{comment}
\begin{align}
\begin{aligned}
f_* W_j &= \sum_{l=1}^{2n} W_{j} f^l W_l , \quad j=1,\dots,2n .
\end{aligned}
\end{align}
\begin{align}
\begin{aligned}
f_* T
&= \sum_{l=1}^{2n} T f^l W_l + \sum_{l=1}^{n} \left ( W_h f^l W_{n+h} f^{n+l} - W_{n+h} f^{l}W_{h} f^{n+l} \right ) T \\
&= \sum_{l=1}^{2n} T f^l W_l + \mathcal{C}(h,f) T .
\end{aligned}
\end{align}
-------------------------
\begin{align*}
f_* (W_j \wedge W_i)=& f_* W_j \wedge f_* W_i \\
=& \sum_{l=1}^{2n} W_{j} f^l W_l \wedge \sum_{h=1}^{2n} W_{i} f^h W_h \\
=& \sum_{l,h=1}^{2n} W_{j} f^l W_{i} f^h W_l \wedge W_h .
\end{align*}
For each pair $(l,h)$ there is a pair $(h,l)$ with same same coefficient and vectors in the opposite order. So we can write:
\begin{align*}
f_* (W_j \wedge W_i)=& \sum_{\mathclap{\substack{ l,h=1\\ l<h }}}^{2n} \left ( W_{j} f^l W_{i} f^h - W_{j} f^h W_{i} f^l \right ) W_l \wedge W_h.
\end{align*}
\begin{align*}
f_* (W_j \wedge T)=& f_* W_j \wedge f_* T \\
=& \sum_{l=1}^{2n} W_{j} f^l W_l \wedge \left ( \sum_{h=1}^{2n} T f^h W_hl + \mathcal{C}(i,f) T \right ) \\
=&\\
=&\\
=&\\
=& \sum_{l,h=1}^{2n} W_{j} f^l W_{i} f^h W_l \wedge W_h .
\end{align*}
\begin{align*}
f_* (X \wedge T)=&f_* X \wedge f_* T = ( Xf^1 Tf^2 - Tf^1 Xf^2 ) X\wedge Y \\
+Xf^1 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) X\wedge T\\
+ Xf^2 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) Y\wedge T\\
=& ( Xf^1 Tf^2 - Tf^1 Xf^2 ) X\wedge Y +Xf^1 \mathcal{C} X\wedge T+ Xf^2 \mathcal{C} Y\wedge T\\
=& X(\mathcal{C}) X\wedge Y +Xf^1 \mathcal{C} X\wedge T+ Xf^2 \mathcal{C} Y\wedge T.
\end{align*}
--------------------------
\end{comment}
\begin{obs}[Proof of Proposition \ref{highdimensioncontact}]\label{proofhighdimensioncontact}
$$
f_* (X \wedge Y)= f_* X \wedge f_* Y =( Xf^1 Yf^2 - Yf^1 Xf^2) X\wedge Y =\lambda (1,f) X\wedge Y.
$$
\begin{align*}
f_* (X \wedge T)=&f_* X \wedge f_* T = ( Xf^1 Tf^2 - Tf^1 Xf^2 ) X\wedge Y \\
&+Xf^1 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) X\wedge T\\
&+ Xf^2 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) Y\wedge T\\
=& ( Xf^1 Tf^2 - Tf^1 Xf^2 ) X\wedge Y +Xf^1 \lambda (1,f) X\wedge T+ Xf^2 \lambda (1,f) Y\wedge T\\
=& X(\lambda (1,f)) X\wedge Y +Xf^1 \lambda (1,f) X\wedge T+ Xf^2 \lambda (1,f) Y\wedge T.
\end{align*}
\begin{align*}
f_* (Y \wedge T)=& f_* Y \wedge f_* T = ( Yf^1 Tf^2 - Tf^1 Yf^2 ) X\wedge Y \\
&+Yf^1 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) X\wedge T\\
&+ Yf^2 \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) Y\wedge T\\
=& ( Yf^1 Tf^2 - Tf^1 Yf^2 ) X\wedge Y + Yf^1 \lambda (1,f) X\wedge T+ Yf^2 \lambda (1,f) Y\wedge T\\
=&Y(\lambda (1,f)) X\wedge Y + Yf^1 \lambda (1,f) X\wedge T+ Yf^2 \lambda (1,f) Y\wedge T.
\end{align*}
\begin{align*}
f_* (X \wedge Y \wedge T)=&f_* X \wedge f_* Y \wedge f_* T \\
=&( Xf^1 Yf^2 - Yf^1 Xf^2) \left ( Tf^3 + \frac{1}{2}f^2 Tf^1 - \frac{1}{2}f^1 Tf^2 \right ) X \wedge Y \wedge T\\
=& \lambda (1,f)^2 X \wedge Y \wedge T.
\end{align*}
\begin{align*}
f^* (dx \wedge dy)=&f^* dx \wedge f^* dy = ( Xf^1 Yf^2 - Yf^1 Xf^2) dx \wedge dy \\
+ ( Xf^1 Tf^2 - Tf^1 Xf^2 )dx \wedge \theta+ ( Yf^1 Tf^2 - Tf^1 Yf^2 )dy \wedge \theta\\
=& \lambda (1,f) dx \wedge dy + ( Xf^1 Tf^2 - Tf^1 Xf^2 )dx \wedge \theta\\
+ ( Yf^1 Tf^2 - Tf^1 Yf^2 )dy \wedge \theta=\\
=& \lambda (1,f) dx \wedge dy + X(\lambda (1,f)) dx \wedge \theta+ Y(\lambda (1,f)) dy \wedge \theta.
\end{align*}
\begin{align*}
f^* (dx \wedge \theta)=& Xf^1 \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dx \wedge \theta \\
+Y f^1 \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dy \wedge \theta\\
=&Xf^1 \lambda (1,f) dx \wedge \theta + Y f^1\lambda (1,f) dy \wedge \theta.
\end{align*}
\begin{align*}
f^* (dy \wedge \theta)= &Xf^2 \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dx \wedge \theta \\
+ Y f^2 \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dy \wedge \theta\\
=& Xf^2 \lambda (1,f) dx \wedge \theta + Y f^2 \lambda (1,f) dy \wedge \theta.
\end{align*}
\begin{align*}
f^* (dx \wedge dy \wedge \theta )=& ( Xf^1 Yf^2 - Yf^1 Xf^2) \left ( T f^3 - \frac{1}{2}f^1 Tf^2 + \frac{1}{2}f^2 Tf^1 \right ) dx \wedge dy \wedge \theta\\
=& \lambda (1,f)^2 dx \wedge dy \wedge \theta.
\end{align*}
\end{obs}
}
\section{Calculations for the Möbius Strip}\label{appMo}
\begin{obs}\label{Hcoordinates1}
We write $\vec\gamma_r (r,s)$ in Heisenberg coordinates:
\begin{align*}
\vec\gamma_r (r,s) =&
\bigg ( - \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \cos r - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r , \\
& - \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \sin r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r , \
\frac{s}{2} \cos \left ( \frac{r}{2} \right )
\bigg ) \\
=& \left (
- \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \cos r -
\left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]
\sin r \right ) \partial_x \\
& +
\left (
- \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \sin r +
\left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]
\cos r \right ) \partial_y
+
\left ( \frac{s}{2} \cos \left ( \frac{r}{2} \right ) \right )\partial_t \\
=& \left (
- \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \cos r -
\left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]
\sin r \right ) X \\
& +
\left(
-
\frac{s}{2} \sin \left ( \frac{r}{2} \right ) \sin r +
\left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]
\cos r \right ) Y \\
& +
\bigg [
\frac{s}{2} \cos \left ( \frac{r}{2} \right )
+
\left (
- \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \cos r -
\left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]
\sin r \right )
\frac{1}{2}y\\
& -
\left (
- \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \sin r +
\left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]
\cos r \right )
\frac{1}{2}x
\bigg ] T .
\end{align*}
The third component can be simplified as
\begin{align*}
\bigg [ & \frac{s}{2} \cos \left ( \frac{r}{2} \right ) + \left ( - \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \cos r - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r \right ) \frac{1}{2}y\\
& - \left ( - \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \sin r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r \right ) \frac{1}{2}x \bigg ] \\
=& \bigg [ \frac{s}{2} \cos \left ( \frac{r}{2} \right ) + \left ( - \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \cos r - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r \right ) \frac{1}{2} \left [R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r \\
& - \left ( - \frac{s}{2} \sin \left ( \frac{r}{2} \right ) \sin r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r \right ) \frac{1}{2} \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r \bigg ] \\
=& \left [ \frac{s}{2} \cos \left ( \frac{r}{2} \right ) - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \sin^2 r \frac{1}{2} - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \cos^2 r \frac{1}{2} \right ] \\
=& \left [\frac{s}{2} \cos \left ( \frac{r}{2} \right ) -\left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2\frac{1}{2} \right ] .
\end{align*}
\end{obs}
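This simplification can be confirmed symbolically from the parametrization $x(r,s)=[R+s\cos(r/2)]\cos r$, $y(r,s)=[R+s\cos(r/2)]\sin r$, $t(r,s)=s\sin(r/2)$ (implicit in the components of $\vec\gamma_r$ and $\vec\gamma_s$); a quick sketch:

```python
import sympy as sp

r, s, R = sp.symbols('r s R')
rho = R + s*sp.cos(r/2)
xc, yc, tc = rho*sp.cos(r), rho*sp.sin(r), s*sp.sin(r/2)  # the parametrization

# T-component of gamma_r: t_r + (y/2) x_r - (x/2) y_r
Tcomp = sp.diff(tc, r) + yc/2*sp.diff(xc, r) - xc/2*sp.diff(yc, r)
target = s/2*sp.cos(r/2) - rho**2/2
assert sp.simplify(Tcomp - target) == 0
```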
\begin{obs}\label{Hcoordinates2}
We write $\vec\gamma_s (r,s)$ in Heisenberg coordinates, remembering that $x(r,s)= \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r $ and $ y(r,s) = \left [R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r $:
\begin{align*}
\vec\gamma_s (r,s)=& \cos \left ( \frac{r}{2} \right ) \cos r \partial_x
+
\cos \left ( \frac{r}{2} \right ) \sin r \partial_y
+
\sin \left ( \frac{r}{2} \right ) \partial_t \\
=& \cos \left ( \frac{r}{2} \right ) \cos r X
+
\cos \left ( \frac{r}{2} \right ) \sin r Y \\
&
+
\left ( \sin \left ( \frac{r}{2} \right )
+ \cos \left ( \frac{r}{2} \right ) \cos r \frac{1}{2}y
- \cos \left ( \frac{r}{2} \right ) \sin r \frac{1}{2}x
\right ) T \\
=& \cos \left ( \frac{r}{2} \right ) \cos r X
+
\cos \left ( \frac{r}{2} \right ) \sin r Y\\
&
+
\bigg ( \sin \left ( \frac{r}{2} \right )
+ \cos \left ( \frac{r}{2} \right ) \frac{ \cos r}{2} \left [R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r \\
&- \cos \left ( \frac{r}{2} \right ) \frac{ \sin r}{2} \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r
\bigg ) T \\
=& \cos \left ( \frac{r}{2} \right ) \cos r X
+
\cos \left ( \frac{r}{2} \right ) \sin r Y
+
\sin \left ( \frac{r}{2} \right ) T .
\end{align*}
\end{obs}
\begin{obs}\label{TvecN}
We compute the $T$ component of $\vec{N}= \vec\gamma_r \times_\mathbb{H} \vec\gamma_s= \vec{N}_1 X + \vec{N}_2 Y + \vec{N}_3 T$.
\begin{align*}
\vec{N}_3 =& \left ( - \frac{1}{2}s \sin \left ( \frac{r}{2} \right ) \cos \left ( \frac{r}{2} \right ) \sin r \cos r
- \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos \left ( \frac{r}{2} \right ) \sin^2 r \right )\\
&
-
\left( - \frac{1}{2}s \sin \left ( \frac{r}{2} \right ) \cos \left ( \frac{r}{2} \right ) \sin r \cos r
+ \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos \left ( \frac{r}{2} \right ) \cos^2 r \right ) \\
=& - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos \left ( \frac{r}{2} \right ) .
\end{align*}
\end{obs}
\begin{obs}\label{XvecN}
We compute the $X$ component of $\vec{N}$.
\begin{align*}
\vec{N}_1 =& \left( - \frac{1}{2}s \sin \left ( \frac{r}{2} \right ) \sin r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r \right ) \sin \left ( \frac{r}{2} \right ) \\
&
-
\left ( s \frac{1}{2} \cos \left ( \frac{r}{2} \right ) - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \frac{1}{2} \right )
\cos \left ( \frac{r}{2} \right ) \sin r\\
=&
- \frac{1}{2}s \sin^2 \left ( \frac{r}{2} \right ) \sin r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r \sin \left ( \frac{r}{2} \right ) \\
&
- s \frac{1}{2} \cos^2 \left ( \frac{r}{2} \right ) \sin r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \frac{1}{2} \cos \left ( \frac{r}{2} \right ) \sin r \\
=&
- \frac{1}{2}s \sin r
+ \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \cos r \sin \left ( \frac{r}{2} \right )
+ \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \frac{1}{2} \cos \left ( \frac{r}{2} \right ) \sin r .
\end{align*}
\end{obs}
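The reduction via $\sin^2 \alpha + \cos^2 \alpha = 1$ can be verified with SymPy (a sketch with $a = r/2$; the symbol names are ours):

```python
import sympy as sp

a, s, R = sp.symbols('a s R')      # a = r/2
r = 2*a
B = R + s*sp.cos(a)

# First line of the computation of N_1
N1_start = ((-s/2*sp.sin(a)*sp.sin(r) + B*sp.cos(r))*sp.sin(a)
            - (s/2*sp.cos(a) - B**2/2)*sp.cos(a)*sp.sin(r))
# Final line
N1_final = (-s/2*sp.sin(r) + B*sp.cos(r)*sp.sin(a)
            + B**2/2*sp.cos(a)*sp.sin(r))

assert sp.simplify(N1_start - N1_final) == 0
```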
\begin{obs}\label{YvecN}
We compute the $Y$ component of $\vec{N}$, remembering again $\sin (2 \alpha)=2 \sin \alpha \cos \alpha$ and $\cos (2 \alpha) = 2 \cos^2 \alpha -1$, and naming $\alpha= \frac{r}{2}$.
\begin{align*}
\vec{N}_2 =& \left (
\frac{s}{2} \sin \left ( \frac{r}{2} \right ) \cos r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r
\right )
\sin \left ( \frac{r}{2} \right ) \\
&
+
\left (
\frac{s}{2} \cos \left ( \frac{r}{2} \right ) - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \frac{1}{2}
\right )
\cos \left ( \frac{r}{2} \right ) \cos r \\
=& \frac{s}{2} \sin^2 \left ( \frac{r}{2} \right ) \cos r + \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r \sin \left ( \frac{r}{2} \right ) \\
&
+
\frac{s}{2} \cos^2 \left ( \frac{r}{2} \right ) \cos r - \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \frac{1}{2}
\cos \left ( \frac{r}{2} \right ) \cos r \\
=& \frac{s}{2} \cos r
+ \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ] \sin r \sin \left ( \frac{r}{2} \right )
- \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \frac{1}{2}
\cos \left ( \frac{r}{2} \right ) \cos r\\
=& \frac{s}{2} ( 2 \cos^2 \alpha -1 )
+ \left ( R+s \cos \alpha \right ) ( 2 \sin \alpha \cos \alpha ) \sin \alpha \\
&
- \left ( R+s \cos\alpha \right )^2 \frac{1}{2} \cos \alpha ( 2 \cos^2 \alpha -1 )\\
=& s \cos^2 \alpha - \frac{s}{2}
+ 2 ( R+s \cos \alpha ) \sin^2 \alpha \cos \alpha \\
&
- ( R+s \cos\alpha )^2 \cos^3 \alpha
+ \frac{1}{2} [ R+s \cos\alpha ]^2 \cos \alpha\\
=& s \cos^2 \alpha - \frac{s}{2}
+ 2 ( R+s \cos \alpha ) ( 1- \cos^2 \alpha ) \cos \alpha\\
&
- ( R+s \cos\alpha )^2 \cos^3 \alpha
+ \frac{1}{2} [ R+s \cos\alpha ]^2 \cos \alpha \\
=& sz^2 - \frac{s}{2} + 2 ( R+s z ) ( 1- z^2 ) z - ( R+sz )^2 z^3 + \frac{1}{2} ( R+s z )^2 z\\
=& sz^2
- \frac{s}{2}
+ 2 ( R+s z ) ( z- z^3 )
- ( R^2+ 2sRz + s^2 z^2 ) z^3
+ \frac{1}{2} ( R^2+ 2sRz + s^2 z^2 ) z\\
=&
sz^2
- \frac{1}{2} s
+ 2 R ( z- z^3 )
+ 2 s ( z^2- z^4 )
- R^2 z^3 - 2sRz^4 - s^2 z^5
+ \frac{1}{2} R^2 z+ sRz^2 + \frac{1}{2} s^2 z^3 \\
=&
\left ( - z^5 + \frac{1}{2} z^3 \right ) s^2
+ \left ( - 2 (R+1)z^4 + ( R+3) z^2 - \frac{1}{2} \right ) s
- ( R^2 + 2 R) z^3 \\
& + \left ( \frac{1}{2} R^2 + 2 R \right ) z ,
\end{align*}
where we used the facts that $ \sin^2 \alpha = 1- \cos^2 \alpha $ and $\cos \alpha =z $.
\end{obs}
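The passage from the trigonometric expression for $\vec{N}_2$ to the polynomial in $z=\cos\alpha$ can be verified with SymPy (a sketch; the symbol names are ours):

```python
import sympy as sp

a, s, R = sp.symbols('a s R')      # a = r/2
z = sp.cos(a)
B = R + s*z

# N_2 in trigonometric form (third line of the computation), with r = 2a
N2_trig = (s/2*sp.cos(2*a)
           + B*sp.sin(2*a)*sp.sin(a)
           - B**2/2*z*sp.cos(2*a))

# Final polynomial form in z = cos(a)
N2_poly = ((-z**5 + z**3/2)*s**2
           + (-2*(R + 1)*z**4 + (R + 3)*z**2 - sp.Rational(1, 2))*s
           - (R**2 + 2*R)*z**3 + (R**2/2 + 2*R)*z)

assert sp.simplify(sp.expand_trig(N2_trig - N2_poly)) == 0
```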
\begin{obs}\label{N1=0N2=0}
We solve the system $\{\vec{N}_1 =0, \ \vec{N}_2 =0 \}$.\\
First impose $\vec{N}_1 =0$ and, using again $\sin (2 \alpha)=2 \sin \alpha \cos \alpha$ and $\cos (2 \alpha) = 2 \cos^2 \alpha -1$, with $\alpha= \frac{r}{2}$, and recalling that $z=\cos \alpha $, we find that:
\end{obs}
\begin{align*}
- \frac{1}{2}s \sin r
+ \Big [ R &+s \cos \left ( \frac{r}{2} \right ) \Big ] \cos r \sin \left ( \frac{r}{2} \right )
+ \left [ R+s \cos \left ( \frac{r}{2} \right ) \right ]^2 \frac{1}{2} \cos \left ( \frac{r}{2} \right ) \sin r =0,
\\
- s \sin \alpha \cos \alpha &
+ \left [ R+s \cos \alpha \right ] ( 2 \cos^2 \alpha -1 ) \sin \alpha
+ \left [ R+s \cos \alpha \right ]^2 \cos^2 \alpha \sin \alpha =0,
\\
\sin \alpha =0
\quad &\vee \quad
- s \cos \alpha
+ \left [ R+s \cos \alpha \right ] ( 2 \cos^2 \alpha -1 )
+ \left [ R+s \cos \alpha \right ]^2 \cos^2 \alpha =0,
\\
\frac{r}{2}= \alpha = k\pi
\quad &\vee \quad
- s z
+ [ R+s z ] ( 2 z^2 -1 )
+ [ R+s z ]^2 z^2 =0,
\\
r = 2k\pi
\quad &\vee \quad
- s z + 2R z^2+2s z^3 - R - s z + R^2 z^2 + 2sRz^3 +s^2 z^4 =0,
\\
r = 2k\pi
\quad &\vee \quad
s^2 z^4 + 2s(R+1) z^3 + ( R^2 +2R ) z^2 - 2s z - R =0
\end{align*}
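The expansion leading to the quartic in $z$ can be verified with SymPy (a sketch; the symbol names are ours):

```python
import sympy as sp

z, s, R = sp.symbols('z s R')

# Second condition before and after expanding
lhs = -s*z + (R + s*z)*(2*z**2 - 1) + (R + s*z)**2*z**2
rhs = s**2*z**4 + 2*s*(R + 1)*z**3 + (R**2 + 2*R)*z**2 - 2*s*z - R

assert sp.expand(lhs - rhs) == 0
```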
\noindent
The second condition is an equation of degree four in $z$ and of degree two in $s$, so we solve it for $s$:
$$
r = 2k\pi
\quad \vee \quad
s^2 z^4
+ ( 2(R+1) z^3 - 2 z ) s
+ ( R^2 +2R ) z^2 - R =0
$$
Now consider only the second condition. If $z=0$, one has that
$R=0$,
which is impossible.\\
If $z\neq 0$, we solve the second equation as a quadratic in $s$ by computing its discriminant:
\begin{align*}
\Delta &= ( 2(R+1) z^3 - 2 z )^2 -4 z^4 ( ( R^2 +2R ) z^2 - R ) \\
&=4(R+1)^2 z^6 + 4z^2 - 8(R+1)z^4 -4 ( R^2 +2R ) z^6 +4 Rz^4\\
&=4z^6 -4( R +2 ) z^4 + 4z^2\\
&=4z^2 ( z^4 -( R +2 ) z^2 +1).
\end{align*}
Then
\begin{align*}
s& = \frac{ - ( 2(R+1) z^3 - 2 z ) \pm \sqrt{ 4z^2 ( z^4 -( R +2 ) z^2 +1) } }{2 z^4}\\
&= \frac{ - (R+1) z^2 + 1 \pm \sqrt{ z^4 -( R +2 ) z^2 +1 } }{ z^3}.
\end{align*}
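The discriminant computation and the two roots in $s$ can be verified with SymPy (a sketch; the symbol names are ours):

```python
import sympy as sp

z, s, R = sp.symbols('z s R', positive=True)

quad = s**2*z**4 + (2*(R + 1)*z**3 - 2*z)*s + (R**2 + 2*R)*z**2 - R

# Discriminant as computed above
disc = (2*(R + 1)*z**3 - 2*z)**2 - 4*z**4*((R**2 + 2*R)*z**2 - R)
assert sp.expand(disc - 4*z**2*(z**4 - (R + 2)*z**2 + 1)) == 0

# Both roots solve the quadratic
A = sp.sqrt(z**4 - (R + 2)*z**2 + 1)
for sign in (1, -1):
    root = (-(R + 1)*z**2 + 1 + sign*A)/z**3
    assert sp.simplify(quad.subs(s, root)) == 0
```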
Since $z=\cos \frac{r}{2}$ and $r \in [0,2\pi)$, we have $z \neq 0 $ if and only if $ r \neq \pi.$\\
As a summary, the $X$ component of the normal vector $\vec{N}$ is zero at the points\\
$(x(r,s),y(r,s),t(r,s))$ with
\begin{align}\label{1solution}
(r,s)=(0,s), \quad s \in [-w,w], \quad \text{or}
\end{align}
\begin{align}\label{2solution}
(r,s)= \left (r, \frac{ -(R+1) z^2 + 1 \pm \sqrt{ z^4 -( R+2 ) z^2 +1 } }{ z^3} \right ),
\quad r \in [0, 2 \pi), \ r \neq \pi, \ z=\cos \frac{r}{2}.
\end{align}
Now we check whether the parameters \eqref{1solution} and \eqref{2solution} also force $ \vec{N}_2 (r,s) $, the coefficient of the $Y$ component of $\vec{N}$, to be zero.
We will show that $ \vec{N}_2 (r,s) $ vanishes for at most finitely many (in fact, at most two) of the parameters \eqref{1solution} and \eqref{2solution}.\\\\
$\blacktriangleright$
The case $(r,s)=(0,s)$
gives $z
= \cos \frac{r}{2} = \cos 0=1$ and we obtain that
\begin{align*}
\vec{N}_2 (r,s) =& \left ( - 1 + \frac{1}{2} \right ) s^2 + \left ( - 2 (R+1) + ( R+3) - \frac{1}{2} \right ) s - R^2 - 2 R + \frac{1}{2} R^2 + 2 R \\
=& - \frac{1}{2} s^2 + \left ( - R + \frac{1}{2} \right ) s - \frac{1}{2} R^2 .
\end{align*}
So we get that $\vec{N}_2 (r,s) =0$ if and only if $ s^2 + \left ( 2R -1 \right ) s + R^2 =0$. Then
$$
\Delta = ( 2R -1 )^2 -4R^2 = -4R+1
$$
and so
$$
s_1,s_2= \frac{ -2R+1 \pm \sqrt{-4R+1} }{2}.
$$
This proves that $\vec{N}_1 (r,s)$ and $\vec{N}_2 (r,s)$, the $X$ and $Y$ components of the normal vector $\vec{N}$, are both zero at least in the two cases $(0,s_1)$ and $(0, s_2)$.\\\\
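That $s_1$ and $s_2$ indeed annihilate $\vec{N}_2$ at $r=0$ (i.e.\ $z=1$) can be verified with SymPy (a sketch; the symbol names are ours):

```python
import sympy as sp

s, R = sp.symbols('s R')

# N_2 restricted to r = 0, i.e. z = 1
N2_at_z1 = -s**2/2 + (-R + sp.Rational(1, 2))*s - R**2/2

# The two roots s_1, s_2 found above
for sign in (1, -1):
    si = (-2*R + 1 + sign*sp.sqrt(1 - 4*R))/2
    assert sp.simplify(N2_at_z1.subs(s, si)) == 0
```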
$\blacktriangleright$ The second case gives
$$
s = \frac{ - (R+1) z^2 + 1 \pm \sqrt{ z^4 -( R +2 ) z^2 +1 } }{ z^3} =
\frac{ - (R+1) z^2 + 1 +A(z) }{ z^3},
$$
where $A(z):= \pm \sqrt{ z^4 -( R +2 ) z^2 +1 } $.
Then
\begin{align*}
\vec{N}_2& (r,s)=\\
=& \left ( - z^5 + \frac{1}{2} z^3 \right ) s^2 + \left ( - 2 (R+1)z^4 + ( R+3) z^2 - \frac{1}{2} \right ) s - ( R^2 + 2 R) z^3 + \left ( \frac{1}{2} R^2 + 2 R \right ) z \\
=& \left ( - z^5 + \frac{1}{2} z^3 \right ) \left ( \frac{ - (R+1) z^2 + 1 +A(z) }{ z^3} \right )^2 \\
&+ \left ( - 2 (R+1)z^4 + ( R+3) z^2 - \frac{1}{2} \right ) \frac{ - (R+1) z^2 + 1 +A(z) }{ z^3} \\
& - ( R^2 + 2 R) z^3 + \left ( \frac{1}{2} R^2 + 2 R \right ) z \\
=&\left ( - \frac{1}{z} + \frac{1}{2 z^3} \right ) \left ( (R+1)^2 z^4 + 1 +A^2 (z) -2 (R+1) z^2 +2A(z) - 2(R+1) A(z)z^2 \right ) \\
&+ \left ( - 2 (R+1)z + \frac{( R+3)}{z} - \frac{1}{2 z^3} \right ) ( - (R+1) z^2 + 1 +A(z) ) - ( R^2 + 2 R) z^3 \\
& + \left ( \frac{1}{2} R^2 + 2 R \right ) z \\
=& -(R+1)^2 z^3 - \frac{1}{z} - \frac{A^2 (z)}{z}
+ \frac{(R+1)^2 z}{2 } + \frac{A^2 (z)}{2 z^3} - \frac{ (R+1)}{ z}
+ \frac{A(z)}{ z^3} + 2 (R+1)^2 z^3 \\
& - (R+1) ( R+3)z + \frac{ R+3}{z}
+ \frac{(R+1)}{2 z} - \frac{A(z) }{2 z^3}
- ( R^2 + 2 R) z^3 + \left ( \frac{1}{2} R^2 + 2 R \right ) z \\
=& \left ( -(R+1)^2 + 2 (R+1)^2 - R^2 - 2 R \right ) z^3 \\
& +\left ( \frac{(R+1)^2 }{2 } - (R+1) ( R+3) + \frac{1}{2} R^2 + 2 R \right ) z \\
& + \left ( - 1 - R -1 + R+3 + \frac{R}{2 } + \frac{1}{2 } \right ) \frac{1}{z}
- \frac{A^2 (z)}{z} + \frac{A^2 (z)}{2 z^3}
+ \frac{A(z)}{ z^3} - \frac{A(z) }{2 z^3}
\end{align*}
Here we use that $A^2 (z) = z^4 -( R +2 ) z^2 +1 $ and thus
\begin{align*}
\vec{N}_2 (r,s) =& z^3 +\left ( -R -\frac{5 }{2 } \right ) z + \frac{R+3}{2z} - z^3 + (R+2)z - \frac{1 }{z} + \frac{ 1 }{2 } z - \frac{ (R+2) }{2 z} + \frac{1 }{2 z^3} + \frac{A(z) }{2 z^3} \\
=&- \frac{ 1 }{2z} + \frac{1 }{2 z^3} + \frac{A(z) }{2 z^3} .\\
\end{align*}
Since $z\neq 0$, $\vec{N}_2 (r,s) =0$ if and only if
\begin{align*}
A(z) & = z^2 -1,\\
A^2(z) & = ( z^2 -1 )^2,\\
z^4 -(R+2)z^2 +1 &= z^4 - 2z^2 + 1,\\
-R z^2 &=0,\\
z &=0,
\end{align*}
\begin{comment}
$\blacktriangleright$
the case $(r,s)=(0,s)$
gives $z
= \cos \frac{r}{2} = \cos 0=1$ and we obtain that $\vec{N}_2 (r,s) =0$ if and only if
\begin{align*}
&
\left ( - 1 + \frac{1}{2} \right ) s^2
+ \left ( - 2 (R+1) + ( R+3) - \frac{1}{2} \right ) s
- R^2 - 2 R + \frac{1}{2} R^2 + 2 R =0,
\\
&
- \frac{1}{2} s^2
+ \left ( - R + \frac{1}{2} \right ) s
- \frac{1}{2} R^2 =0,
\\
& - s^2
+ \left ( - 2R +1 \right ) s
- R^2 =0,
\\
&
s^2
+ \left ( 2R -1 \right ) s
+ R^2 =0.
\end{align*}
Then
$$
\Delta = ( 2R -1 )^2 -4R^2
-4R+1
$$
and so
$$
s_1,s_2= \frac{ -2R+1 \pm \sqrt{-4R+1} }{2}.
$$
This proves that $\vec{N}_1 (r,s)$ and $\vec{N}_2 (r,s)$, the component in $X$ and $Y$ of the normal vector $\vec{N}$ are both zero at least in the two cases $(0,s_1)$ and $(0, s_2)$.\\\\
$\blacktriangleright$ The second case gives
$$
s = \frac{ - (R+1) z^2 + 1 \pm \sqrt{ z^4 -( R +2 ) z^2 +1 } }{ z^3} =
\frac{ - (R+1) z^2 + 1 +A(z) }{ z^3}
$$
where $A(z):= \pm \sqrt{ z^4 -( R +2 ) z^2 +1 } $.
Then $\vec{N}_2 (r,s) =0$ if and only if
\begin{align*}
&
\left ( - z^5 + \frac{1}{2} z^3 \right ) s^2
+ \left ( - 2 (R+1)z^4 + ( R+3) z^2 - \frac{1}{2} \right ) s
- ( R^2 + 2 R) z^3 \\
&
+ \left ( \frac{1}{2} R^2 + 2 R \right ) z =0,
\end{align*}
\begin{align*}
&\left ( - z^5 + \frac{1}{2} z^3 \right ) \left ( \frac{ - (R+1) z^2 + 1 +A(z) }{ z^3} \right )^2 \\
&
+
\left ( - 2 (R+1)z^4 + ( R+3) z^2 - \frac{1}{2} \right ) \frac{ - (R+1) z^2 + 1 +A(z) }{ z^3} \\
&
- ( R^2 + 2 R) z^3 + \left ( \frac{1}{2} R^2 + 2 R \right ) z =0,
\end{align*}
\begin{align*}
&\left ( - z^5 + \frac{1}{2} z^3 \right ) \frac{ \left ( - (R+1) z^2 + 1 +A(z) \right )^2 }{ z^6} \\
&
+
\left ( - 2 (R+1)z^4 + ( R+3) z^2 - \frac{1}{2} \right ) \frac{ - (R+1) z^2 + 1 +A(z) }{ z^3} \\
&
- ( R^2 + 2 R) z^3 + \left ( \frac{1}{2} R^2 + 2 R \right ) z =0,
\end{align*}
\begin{align*}
&\left ( - \frac{1}{z} + \frac{1}{2 z^3} \right ) \left ( - (R+1) z^2 + 1 +A(z) \right )^2 \\
&
+
\left ( - 2 (R+1)z + \frac{( R+3)}{z} - \frac{1}{2 z^3} \right ) ( - (R+1) z^2 + 1 +A(z) ) \\
&
- ( R^2 + 2 R) z^3 + \left ( \frac{1}{2} R^2 + 2 R \right ) z =0,
\end{align*}
\begin{align*}
&\left ( - \frac{1}{z} + \frac{1}{2 z^3} \right ) \left ( (R+1)^2 z^4 + 1 +A^2 (z) -2 (R+1) z^2 +2A(z) - 2(R+1) A(z)z^2 \right ) \\
&
+ \left ( - 2 (R+1)z + \frac{( R+3)}{z} - \frac{1}{2 z^3} \right ) ( - (R+1) z^2 + 1 +A(z) ) - ( R^2 + 2 R) z^3 \\
&
+ \left ( \frac{1}{2} R^2 + 2 R \right ) z =0,
\end{align*}
\begin{align*}
& -(R+1)^2 z^3 - \frac{1}{z} - \frac{A^2 (z)}{z} +2 (R+1) z - \frac{2A(z)}{z} + 2(R+1) A(z)z
+ \frac{(R+1)^2 z}{2 } + \frac{1}{2 z^3} \\
&
+ \frac{A^2 (z)}{2 z^3} - \frac{ (R+1)}{ z} + \frac{A(z)}{ z^3} - \frac{(R+1) A(z)}{ z} + 2 (R+1)^2 z^3 - 2 (R+1)z - 2 (R+1)A(z) z \\
&
- (R+1) ( R+3)z + \frac{ R+3}{z} + \frac{( R+3)A(z)}{z} + \frac{(R+1)}{2 z} - \frac{1}{2 z^3} - \frac{A(z) }{2 z^3} - ( R^2 + 2 R) z^3 \\
&
+ \left ( \frac{1}{2} R^2 + 2 R \right ) z =0,
\end{align*}
\begin{align*}
& -(R+1)^2 z^3 - \frac{1}{z} - \frac{A^2 (z)}{z} - \frac{2A(z)}{z}
+ \frac{(R+1)^2 z}{2 } + \frac{A^2 (z)}{2 z^3} - \frac{ (R+1)}{ z} + \frac{A(z)}{ z^3} - \frac{(R+1) A(z)}{ z} \\
&
+ 2 (R+1)^2 z^3 - (R+1) ( R+3)z + \frac{ R+3}{z} + \frac{( R+3)A(z)}{z} + \frac{(R+1)}{2 z} - \frac{A(z) }{2 z^3} \\
&
- ( R^2 + 2 R) z^3 + \left ( \frac{1}{2} R^2 + 2 R \right ) z =0,
\end{align*}
\begin{align*}
& -(R+1)^2 z^3 - \frac{1}{z} - \frac{A^2 (z)}{z}
+ \frac{(R+1)^2 z}{2 } + \frac{A^2 (z)}{2 z^3} - \frac{ (R+1)}{ z}
+ \frac{A(z)}{ z^3} + 2 (R+1)^2 z^3 \\
&
- (R+1) ( R+3)z + \frac{ R+3}{z}
+ \frac{(R+1)}{2 z} - \frac{A(z) }{2 z^3}
- ( R^2 + 2 R) z^3 + \left ( \frac{1}{2} R^2 + 2 R \right ) z =0,
\end{align*}
\begin{align*}
& \left ( -(R+1)^2 + 2 (R+1)^2 - R^2 - 2 R \right ) z^3 \\
&
+\left ( \frac{(R+1)^2 }{2 } - (R+1) ( R+3) + \frac{1}{2} R^2 + 2 R \right ) z \\
&
+ \left ( - 1 - R -1 + R+3 + \frac{R}{2 } + \frac{1}{2 } \right ) \frac{1}{z}
- \frac{A^2 (z)}{z} + \frac{A^2 (z)}{2 z^3}
+ \frac{A(z)}{ z^3} - \frac{A(z) }{2 z^3}
=0,
\end{align*}
\begin{align*}
& \left ( R^2+2R+1 - R^2 - 2 R \right ) z^3
+\left ( \frac{1 }{2 } R^2+ R+\frac{1 }{2 } - R^2 -4R -3 + \frac{1}{2} R^2 + 2 R \right ) z \\
&
+ \left ( \frac{R}{2 } + \frac{3}{2 } \right ) \frac{1}{z}
- \frac{ z^4-(R+2)z^2+1 }{z} + \frac{ z^4-(R+2)z^2+1 }{2 z^3}
+ \frac{A(z) }{ 2z^3}
=0,
\end{align*}
\begin{align*}
z^3
+\left ( -R -\frac{5 }{2 } \right ) z
+ \frac{R+3}{2z}
- z^3 + (R+2)z - \frac{1 }{z} +
\frac{ 1 }{2 } z - \frac{ (R+2) }{2 z} + \frac{1 }{2 z^3}
+ \frac{A(z) }{2 z^3}
=0,
\end{align*}
\begin{align*}
z^3
+\left ( -R -\frac{5 }{2 } \right ) z
+ \frac{R+3}{2z}
- z^3 + (R+2)z - \frac{1 }{z}
+ \frac{ 1 }{2 } z - \frac{ (R+2) }{2 z} + \frac{1 }{2 z^3}
+ \frac{A(z) }{2 z^3}
=0,
\end{align*}
\begin{align*}
& \frac{R+3 -2 -R-2}{2z}
+ \frac{1 }{2 z^3}
+ \frac{A(z) }{2 z^3}
=0,\\
&- \frac{ 1 }{2z}
+ \frac{1 }{2 z^3}
+ \frac{A(z) }{2 z^3}
=0,\\
&- z^2
+ 1
+ A(z)
=0,\\
& A(z) = z^2 -1,\\
& A^2(z) = ( z^2 -1 )^2,\\
&z^4 -(R+2)z^2 +1 = z^4 - 2z^2 + 1,\\
&-R z^2 =0,\\
&z =0,
\end{align*}
\end{comment}
which is impossible because this is the case $z \neq 0$.\\\\
So the second solution of the equation $\vec{N}_1 (r,s)=0$ is never a solution for the equation $\vec{N}_2(r,s)=0$.
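The long substitution above can also be double-checked numerically; a small Python sketch (the sample values of $R$ and $r$ are ours, chosen so that the square root is real):

```python
import math

def check(R, r, sign):
    """Verify N_2 at the second-case s equals -1/(2z) + 1/(2z^3) + A/(2z^3)."""
    z = math.cos(r/2)
    A = sign*math.sqrt(z**4 - (R + 2)*z**2 + 1)
    s = (-(R + 1)*z**2 + 1 + A)/z**3
    # Polynomial form of N_2 in z = cos(r/2), as obtained above
    N2 = ((-z**5 + z**3/2)*s**2
          + (-2*(R + 1)*z**4 + (R + 3)*z**2 - 0.5)*s
          - (R**2 + 2*R)*z**3 + (R**2/2 + 2*R)*z)
    target = -1/(2*z) + 1/(2*z**3) + A/(2*z**3)
    assert abs(N2 - target) < 1e-9, (R, r, sign, N2, target)

for R in (0.1, 0.2):
    for r in (1.5, 2.0, 2.5):
        for sign in (1, -1):
            check(R, r, sign)
```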
\begin{obs}\label{partialconclusion}
So far we have obtained that the Möbius strip has at most two critical points and, in this parametrization, they are those obtained by
$$
(r,s_1)= \left (0,\frac{ -2R+1 - \sqrt{-4R+1} }{2} \right ) \text{ and } (r,s_2)=\left (0,\frac{ -2R+1 + \sqrt{-4R+1} }{2} \right ) .
$$
In particular, there are two critical points if $R < \frac{1}{4}$, one if $R = \frac{1}{4}$ and none if $R > \frac{1}{4}$, which could be the most frequent case. Changing the parametrization will change the coefficients, but not the topological properties.\\\\
Now it is easy to see that, if $R = \frac{1}{4}$, then
$$
s_1=s_2=\frac{ -2\frac{1}{4}+1 }{2} = \frac{1}{4},
$$
but we know that $s \in[-w,w]$ with $w <R= \frac{1}{4}$, so this solution is actually impossible.\\
On the other hand, if $R < \frac{1}{4}$, then $ -2R+1 > - \frac{1}{2} +1=\frac{1}{2}$. So
$$
s_2=\frac{ -2R+1 + \sqrt{-4R+1} }{2} > \frac{ \frac{1}{2} + 0 }{2} =\frac{1}{4},
$$
but again $s \in[-w,w]$ with $w <R< \frac{1}{4}$, so this solution is impossible.\\
Finally, there are no limitations for $s_1$ given by $R< \frac{1}{4}$; $s_1$ can be any real number in $[-w,w]$ with $w <R< \frac{1}{4}$, so the only condition one can ask is:
\begin{align*}
-\frac{1}{4} < s_1 < \frac{1}{4} ,& \\
-\frac{1}{4} < \frac{ -2R+1 - \sqrt{-4R+1} }{2} < \frac{1}{4},&
\\
- 1 < -4R+2 - 2\sqrt{-4R+1} <1,&
\\
- 1 < -4R+2 - 2\sqrt{-4R+1}
\quad &\land \quad
-4R+2 - 2\sqrt{-4R+1} <1,
\\
2\sqrt{-4R+1} < -4R+3
\quad &\land \quad
2\sqrt{-4R+1} > - 4R +1,
\\
4(-4R+1) < (-4R+3)^2
\quad &\land \quad
4 ( -4R+1 ) > (- 4R +1)^2,
\\
-16R+4 < 16R^2 -24R +9
\quad &\land \quad
-16R+ 4 > 16R^2 -8R +1 ,
\\
16R^2 -8R +5 >0
\quad &\land \quad
16R^2 +8R -3 <0.
\end{align*}
Now we apply some basic rules for quadratic inequalities. For the first inequality we compute
$$
\frac{\Delta}{4}= 16 - 16\cdot 5=-16 \cdot 4<0,
$$
which shows that the inequality is always satisfied.\\
The second inequality is solved, instead, by computing
$$
\frac{\Delta}{4}= 16 + 16 \cdot 3=16 \cdot 4 =8^2 ,
$$
and observing that the two solutions of the associated equation are
$$
R_{1,2}= \frac{ -4 \pm 8 }{16},
$$
that is,
$$
R_{1}=- \frac{12}{16}=-\frac{3}{4} \quad \text{ and } \quad R_{2}= \frac{4}{16}=\frac{1}{4} .
$$
So the second inequality (and hence the whole system) has solutions:
$$
-\frac{3}{4} < R < \frac{1}{4},
$$
which does not give more limitations than what was known already, that is: $0 < R < \frac{1}{4}$.\\\\
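The two quadratic inequalities can be checked with SymPy (a sketch; the symbol names are ours):

```python
import sympy as sp

R = sp.symbols('R', real=True)

# 16R^2 - 8R + 5 > 0 holds for every R: negative discriminant
assert sp.discriminant(16*R**2 - 8*R + 5, R) < 0

# 16R^2 + 8R - 3 < 0 exactly on (-3/4, 1/4)
roots = sp.solve(16*R**2 + 8*R - 3, R)
assert set(roots) == {sp.Rational(-3, 4), sp.Rational(1, 4)}

sol = sp.solve_univariate_inequality(16*R**2 + 8*R - 3 < 0, R, relational=False)
assert sol == sp.Interval.open(sp.Rational(-3, 4), sp.Rational(1, 4))
```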
Final conclusion: the Möbius strip has at most one critical point and, in this parametrization, it is the one obtained by
$$
(r,s_1)= \left (0,\frac{ -2R+1 - \sqrt{-4R+1} }{2} \right ),
$$
when $0< R < \frac{1}{4}$.
\end{obs}
\chapter*{Introduction}
\markboth{\MakeUppercase{Introduction}}{\MakeUppercase{Introduction}}
\addcontentsline{toc}{chapter}{Introduction}
\noindent
The purpose of this study is to analyse two related topics: the Rumin cohomology and the orientability of a surface in the most classic example of Sub-Riemannian geometry, the Heisenberg group $\mathbb{H}^n$.\\
Our work begins with a quick definition of Lie groups, Carnot groups and left translation operators, then moves on to define the Heisenberg group and its properties. There are many
references for an introduction to the Heisenberg group; here we used, for example, parts of \cite{GCmaster}, \cite{CDPT}, \cite{FSSC} and \cite{TRIP}. The Heisenberg Group $\mathbb{H}^n$, $n \geq 1$, is a $(2n+1)$-dimensional manifold denoted $ ( \mathbb{R}^{2n+1}, * , d_{cc})$ where the group operation $*$ is given by
$$
(x,y,t)*(x',y',t') := \left (x+x',y+y', t+t'- \frac{1}{2} \langle J
\begin{pmatrix}
x \\
y
\end{pmatrix}
,
\begin{pmatrix}
x' \\
y'
\end{pmatrix} \rangle_{\mathbb{R}^{2n}} \right )
$$
with $x,y,x',y' \in \mathbb{R}^n$, $t,t' \in \mathbb{R}$ and $J=
\begin{pmatrix}
0 & I_n \\
-I_n & 0
\end{pmatrix} $.
Additionally, the Heisenberg Group
is a Carnot group of step $2$ with algebra $\mathfrak{h} = \mathfrak{h}_1 \oplus \mathfrak{h}_2$. The first layer $\mathfrak{h}_1$ has a standard orthonormal basis of left invariant vector fields which are called \textit{horizontal}:
$$
\begin{cases}
X_j=\partial_{x_j} -\frac{1}{2} y_j \partial_t, \\
Y_j=\partial_{y_j} +\frac{1}{2} x_j \partial_t, \ \ \ \ j=1,\dots,n.
\end{cases}
$$
They satisfy the core property that $[X_j, Y_j] = \partial_t=:T $ for each $j$, where $T$ alone spans the second layer $\mathfrak{h}_2$ and is called the \textit{vertical} direction. By definition, the horizontal subbundle changes inclination at every point (see Figure \ref{fig:balls}),
\begin{figure}[!ht]
\centering
{\includegraphics[width= 5 cm]{./horizontalvf.jpg}}
\caption[Caption for LOF]{Horizontal subbundle in the first Heisenberg Group $\mathbb{H}^1$.\footnotemark}
\label{fig:balls}
\end{figure}
allowing movement from any point to any other point following only horizontal paths. The Carnot--Carathéodory distance $d_{cc}$, then, measures the distance between any two points along curves whose tangent vectors are horizontal.\\
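The bracket relation can be checked symbolically by letting the fields act on a generic function; a SymPy sketch for $n=1$ (the names are ours):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)

# Horizontal vector fields of H^1 acting as first-order differential operators
X = lambda g: sp.diff(g, x) - sp.Rational(1, 2)*y*sp.diff(g, t)
Y = lambda g: sp.diff(g, y) + sp.Rational(1, 2)*x*sp.diff(g, t)

# [X, Y]f = X(Yf) - Y(Xf) should equal Tf = f_t
bracket = sp.simplify(X(Y(f)) - Y(X(f)))
assert sp.simplify(bracket - sp.diff(f, t)) == 0
```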
The topological dimension of the Heisenberg group is $2n+1$, while its Hausdorff dimension with respect to the Carnot--Carathéodory distance is $2n+2$. This dimensional difference leads to the existence of a natural cohomology
called Rumin cohomology and introduced by M. Rumin in 1994 (see \cite{RUMIN}), whose behaviour is significantly different from the standard de Rham one.
This is not the only effect of the dimensional difference: another is that there exist surfaces regular in the Heisenberg sense but fractal in the Euclidean sense (see \cite{KSC}).\\
\noindent
With a dual argument, one can associate to the vector fields $X_j$, $Y_j$ and $T$ the corresponding differential forms: $dx_j$ and $dy_j$ for the $X_j$'s and $Y_j$'s respectively, and
$$
\theta:=dt- \frac{1}{2} \sum_{j=1}^n (x_j dy_j - y_j dx_j )
$$
for $T$. They also divide as \emph{horizontal} and \emph{vertical} in the same way as before.
These differential forms compose the complexes that, in the Heisenberg group, are described by the Rumin cohomology (see \cite{RUMIN} and 5.8 in \cite{FSSC}).
Rumin forms
are compactly supported on an open set $U$ and their spaces are denoted by $\mathcal{D}_{\mathbb{H}}^{k} (U)$, where
\footnotetext{pictures shown with permission of the author Anton Lukyanenko.}
$$
\begin{cases}
\mathcal{D}_{\mathbb{H}}^{k} (U) := \frac{\Omega^k}{I^k}, \ \ \text{for } k=1,\dots,n \\
\mathcal{D}_{\mathbb{H}}^{k} (U) := J^k, \ \ \text{for } k=n+1,\dots,2n+1 ,
\end{cases}
$$
with $\Omega^k$ the space of $k$-differential forms, $I^k = \{ \alpha \wedge \theta + \beta \wedge d \theta \ / \ \alpha \in \Omega^{k-1}, \ \beta \in \Omega^{k-2} \}$ and $J^k =\{ \alpha \in \Omega^{k} \ / \ \alpha \wedge \theta =0, \ \alpha \wedge d\theta=0 \}$.\\
The Rumin cohomology is the cohomology of this complex:
$$
0 \to \mathbb{R} \stackrel{ d_Q }{\to} \mathcal{D}_{\mathbb{H}}^{1} (U) \stackrel{ d_Q }{\to} \dots \stackrel{ d_Q }{\to} \mathcal{D}_{\mathbb{H}}^{n} (U)
\stackrel{ D }{\to} \mathcal{D}_{\mathbb{H}}^{n+1} (U) \stackrel{ d_Q }{\to} \mathcal{D}_{\mathbb{H}}^{n+2} (U) \stackrel{ d_Q }{\to} \dots \stackrel{ d_Q }{\to} \mathcal{D}_{\mathbb{H}}^{2n+1} (U) \to 0
$$
where $d$ is the standard differential operator and, for $k < n$,
$
d_Q( [\alpha]_{I^k} ) := [d \alpha]_{I^{k+1}},
$
while, for $k \geq n +1$,
$
d_Q := d_{\vert_{J^k}}.
$
The second order differential operator $D$ is defined as
$$
D( [\alpha]_{I^n} ) : = d \left ( \alpha + L^{-1} \left (- (d \alpha)_{\vert_{ {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 }} \right ) \wedge \theta \right )
$$
whose presence reflects the difference between the topological and Hausdorff dimensions of the space. In the definition above $L: {\prescript{}{}\bigwedge}^{n-1} \mathfrak{h}_1 \to {\prescript{}{}\bigwedge}^{n+1} \mathfrak{h}_1 $, $L(\omega):=d\theta \wedge \omega$, is an isomorphism between spaces of differential forms.\\
In Chapter \ref{DFaRC} we carefully describe the cohomology and show its complete behaviour in the cases $n=1$ and $n=2$. In particular, we show how to compute the second order operator $D$ explicitly. In the appendices to this chapter we follow the presentation in \cite{TRIP} and explain how it is possible to write the Rumin differential operators as a single operator $d_c$, thus reducing the complex to $\left ( \mathcal{D}_{\mathbb{H}}^{k} (U), d_c \right )$ (Appendix \ref{A}). We also discuss the dimensions of the spaces in the Rumin complex (Appendix \ref{B}).\\
The differential operators $d_Q$
and $D$ look much more complicated than the standard operator $d$, and one could wonder whether they also enjoy the property of commuting with the pullback by a mapping. We show in Chapter \ref{PPHN} that this is true for contact maps, that is, maps whose pushforward sends horizontal vectors to horizontal vectors. Namely, for a contact map $f: \mathbb{H}^n \to \mathbb{H}^n$ the following relations hold:
$$
\begin{cases}
f^* d_Q = d_Q f^* \ \ \ \ \text{ for } k \neq n,\\
f^* D = D f^* \ \ \ \ \text{ for } k = n.
\end{cases}
$$
We also show the behaviour of pushforward and pullback in several situations in this setting, for which a useful starting point is \cite{KORR}.\\
Differential forms can be used to define the notion of orientability, so it is natural to ask whether the Rumin forms provide a different kind of orientability with respect to the standard definition. In Chapter \ref{orient4} we show that this is indeed the case.
First, we have to notice that in the Heisenberg group it is natural to give an ad hoc definition of regularity for surfaces, the $\mathbb{H}$-regularity (see \cite{FSSC2001} and \cite{FSSC}) which, roughly speaking, locally requires the surface to be a level set of a function with non-vanishing horizontal gradient. The points where such a gradient vanishes are called characteristic (see, for instance, \cite{BAL} and \cite{MAG}) and must usually be avoided.
For such surfaces we give a new definition of orientability ($\mathbb{H}$-orientability) along with some properties. In particular we show that it behaves well with respect to the left-translations and the anisotropic dilation $\delta_r(x,y,t)=(rx,ry,r^2t)$. Furthermore, we prove that $\mathbb{H}$-orientability implies standard orientability, while the opposite is not always true. Finally we show that, up to one point, a Möbius Strip in $\mathbb{H}^1$ is a $\mathbb{H}$-regular surface and we use this fact to prove that there exist $\mathbb{H}$-regular non-$\mathbb{H}$-orientable surfaces, at least in the case $n=1$. \\
Apart from its connection with differential forms, another reason to study orientability is that it plays an important role in the theory of currents. Surfaces connected to currents are usually, but not always, orientable: in Riemannian settings there exists a notion of currents (currents mod $2$) made for surfaces that are not necessarily orientable (see \cite{MORGAN2}). The existence of $\mathbb{H}$-regular non-$\mathbb{H}$-orientable surfaces implies that the same kind of analysis would be meaningful in the Heisenberg group.
\include{./1_Preliminaries}
\include{./2_Differential_forms_and_Rumin_Cohomology}
\include{./3_Pushforward_Pullback}
\include{./4_Orientability}
\include{./50_appendices}
% Source: https://arxiv.org/abs/1910.01164 -- Insight in the Rumin Cohomology and Orientability Properties of the Heisenberg Group
% Source: https://arxiv.org/abs/math/0608118 -- Finiteness of Hilbert functions and bounds for Castelnuovo-Mumford regularity of initial ideals
\begin{abstract}
Bounds for the Castelnuovo-Mumford regularity and Hilbert coefficients are given in terms of the arithmetic degree (if the ring is reduced) or in terms of the defining degrees. From this it follows that there exists only a finite number of Hilbert functions associated to reduced algebras over an algebraically closed field with a given arithmetic degree and dimension. A good bound is also given for the Castelnuovo-Mumford regularity of initial ideals which depends neither on term orders nor on the coordinates, and holds for any field.
\end{abstract}
\section*{Introduction}\smallskip
In the famous book SGA6, Kleiman proved that given two positive
integers $e$ and $d$, there exists only a finite number of Hilbert
functions associated to reduced and equidimensional $K$-algebras
$S$ over an algebraically closed field such that $\operatorname{deg} S \le e$
and $\dim S =d$ (see [K, Corollary 6.11]). An easier and
more elegant proof of this result can be found in a recent paper by
M. Rossi, N. V. Trung and G. Valla [RTV2]. Moreover, the paper
[RTV2] gives a rather general approach to derive the finiteness of
Hilbert functions. It is shown that this problem (for a certain
class of ideals) is equivalent to the boundedness of the
Castelnuovo-Mumford regularity and the embedding dimension (see
[RTV2, Theorem 2.3]). The first main purpose of this paper is to
extend Kleiman's result to reduced $K$-algebras. A key point is
to find a suitable invariant to replace the degree. Of course, a
so-called extended degree is a choice, see [RVV, Corollary 4.4],
but such an invariant is very big. It turns out that in our
situation one can take the so-called arithmetic degree - a notion
which maybe reflects better the complexity of ideals than the
usual degree (see [BM, Section 3] and [V, Chapter 9]).
\begin{Theorem}\label{I1} Let $a$ and $d$ be two positive integers and assume that $K$ is
an algebraically closed field. Then there exists only a finite number of Hilbert
functions associated to reduced $K$-algebras $S$ such that $\operatorname{adeg} S \le a$ and $\dim S =d$.
\end{Theorem}
Note that the above result does not hold for an arbitrary algebra
(however see [RVV, Corollary 4.4] and [RTV2, Theorem 3.1] for a
possible generalization). As mentioned above, the main point in
the proof of Theorem \ref{I1} is to bound the Castelnuovo-Mumford
regularity. This is not hard to do (see Remark \ref{C6b}). However, a careful analysis allows us to
establish the following explicit bound:
\begin{Theorem}\label{C4} Let $K$ be an arbitrary field and $I$ an arbitrary homogeneous ideal
of $R= K[x_1,...,x_n]$. Assume that $S= R/I$ is a
reduced ring of dimension $d\ge 2$ and degree $e$. Then
$$\operatorname{reg} I \le (\frac{e(e-1)}{2} + \operatorname{adeg} I)^{2^{d-2}}.$$
\end{Theorem}
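To get a feeling for the growth of this bound in the dimension, here is a purely illustrative evaluation (the sample values of $e$, $\operatorname{adeg} I$ and $d$ are ours):

```python
def reg_bound(e, adeg, d):
    """Bound of the theorem: (e(e-1)/2 + adeg)^(2^(d-2)), for d >= 2."""
    assert d >= 2
    return (e*(e - 1)//2 + adeg)**(2**(d - 2))

# Sample values, chosen only to show the doubly exponential growth in d
assert reg_bound(3, 3, 2) == 6
assert reg_bound(3, 3, 3) == 36
assert reg_bound(3, 3, 4) == 1296
```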
Applied to the case of reduced and equidimensional algebras, the
bound of Theorem \ref{C4} is better than the one given in [RTV2,
Theorem 3.1 and Lemma 3.3]. In view of the Eisenbud-Goto
conjecture, the above bound is still too big, but it is a first
explicit bound stated in terms of the arithmetic degree. In order to
prove it, as in Lecture 14 of [M] (see also [K], [BM] and [RTV2]), we proceed by induction on the
dimension. However, there is a different point: we simultaneously bound this invariant and the
length of graded components of certain local cohomology modules (see Theorem
\ref{C6} and also Theorem \ref{E5}). \par
The above technique can also be used to estimate the
Castelnuovo-Mumford regularity of arbitrary homogeneous ideals in
terms of the maximal degree $\Delta$ of minimal generators of $I
\subset R=K[x_1,...,x_n]$. If $K$ is any field of characteristic
zero, it follows from Giusti's paper [Gi] that $\operatorname{reg}(I)
\le (2\Delta)^{2^{n-2}}$. Bayer and Mumford suggested that this
bound holds in any characteristic (see the comment after Theorem
3.7 in [BM]). Not long ago, G. Caviglia and E. Sbarra proved that
this is indeed the case:
$$\operatorname{reg} I \le (\Delta^c + \Delta c - c+1 )^{2^{d-1}},$$
where $c= n-d$ (see [CS, Corollary 2.6]). In this paper we will give a completely
different proof for a slight improvement of this result (see Theorem \ref{E1}). \par
The next problem we are interested in is to give good bounds for
the Castelnuovo-Mumford regularity of initial ideals $\operatorname{in} (I)$
with respect to {\it any term order} and in {\it any coordinates}.
Inspired by a result of Chardin and Moreno-Sosias, it was shown in
[HH] that if $R/I$ is a Cohen-Macaulay ring of multiplicity $e\ge
2$ and $d \geq 2$, then $\operatorname{reg}(\operatorname{in}(I)) \le
e^{2^{d-1}}/{2^{2^{d-2}}}$. Long before that M. Giusti [Gi] showed
that in characteristic zero we have $\operatorname{reg} \operatorname{in}(I) \leq (2\Delta)^{2^{n-1} }$, provided the coordinates
are chosen generically and the term order is the lexicographic
order. Combining these facts with the above mentioned result of [CS] and [MM, II.2.2], one may ask
whether such a bound still
holds for $\operatorname{reg}(\operatorname{in} I)$ in general. Our second main result confirms this:
\begin{Theorem}\label{I2} Let $K$ be an arbitrary field and $I$ an arbitrary homogeneous
ideal of $R$.
With respect to any term order and any coordinates we have
$$\operatorname{reg}(\operatorname{in} I) \le (\frac{3}{2}\Delta^c+ \Delta)^{d2^{d-1}}.$$
Moreover, if $R/I$ is a reduced algebra, then we also have
$$\operatorname{reg}(\operatorname{in} I) \le (\operatorname{adeg} I)^{(n-1)2^{d-1}}.$$
\end{Theorem}
An immediate consequence of this theorem is that the maximal degree
of a reduced Gr\"obner basis, with respect to any term order and
any coordinates, is bounded by $ (\frac{3}{2}\Delta^c+ \Delta)^{d2^{d-1}}$. In
view of a remarkable example due to Mayr and Meyer, this bound is
nearly the best possible (see, e.g., Example 3.9 and Proposition
3.11 in [BM]).\par
In order to prove this theorem we develop further the method in
[HH]. Instead of initial ideals we consider a much bigger class:
the class of all ideals $J$ having the same Hilbert function as
$I$. So, although the title of the paper mentions initial ideals, we in fact deal
with them only indirectly. The advantage of doing so is that one can use Gotzmann's regularity theorem to
bound $\operatorname{reg} J$ in terms of some data of $I$. Then, by virtue of
Theorem \ref{C4} and Theorem \ref{E1}, we will see that
the only thing left is to estimate the Hilbert coefficients $e_i$
in terms of $\Delta$ or $\operatorname{adeg} (I)$ (see Lemmas \ref{F1} and
\ref{F3}). This problem is also of independent interest. Main
steps to do it may be explained as follows. First, using a recent
result by Herzog, Popescu and Vladoiu [HPV] one can bound
cohomological Hilbert functions (i.e. the length of graded
components of local cohomology modules) in terms of the
Castelnuovo-Mumford regularity. From that we get bounds for the
Hilbert coefficients by the Castelnuovo-Mumford regularity (see
theorems \ref{B3} and \ref{B3b}). The existence of such a bound
was predicted by [RTV2, Theorem 2.3], and this approach is
somewhat new, because usually one tries to estimate the latter
invariant by the former ones (see, e.g., [K] and [BrS, Section
17.2]). However, it is a surprising fact that the relationships
between these invariants in theorems \ref{B3} and \ref{B3b} are rather
simple. Let us state here a simplified version of these results:
\begin{Theorem}\label{I3} Let $e_0=e,...,e_{d-1}$ be the Hilbert coefficients of $R/I$ and $b=
\max\{ \Delta, \, \operatorname{adeg} I\}$. Then
\begin{itemize}
\item[(i)] $|e_1| \leq b^c\operatorname{reg} I$.
\item[(ii)] For $i\ge 2$, $|e_i| \le \frac{3}{2} b^c ( \operatorname{reg} I)^i$.
\end{itemize}
\end{Theorem}
Combining theorems \ref{B3} and \ref{B3b} with results on the Castelnuovo-Mumford
regularity found earlier we get bounds for $|e_i|$ in terms of
$\Delta$ or $\operatorname{adeg} (I)$ (see propositions \ref{C10} and \ref{E7}).
These bounds are huge: they are double exponential functions of
$i$. But they are good enough to prove
Theorem \ref{I2}. Furthermore, theorems \ref{B3} and \ref{B3b}
sometimes give really good bounds for
$|e_i|$ if we already know a good estimation for the Castelnuovo-Mumford regularity
(see corollaries \ref{B6} and \ref{B7}).\par
We now give a brief outline of the paper. We
prove Theorem \ref{C4} in Section \ref{C}, and reprove in Section \ref{E} the
Caviglia-Sbarra bound on the Castelnuovo-Mumford regularity of an
arbitrary homogeneous ideal in terms of the degrees of its
defining equations (Theorem \ref{E1}). Section \ref{A} is devoted
to bounding Hilbert cohomological functions in terms of the
Castelnuovo-Mumford regularity (see Theorem \ref{A4}). Bounds on
Hilbert coefficients are given in Section \ref{B}. Putting
results of the sections \ref{C} and \ref{B} together we are able
to prove Theorem \ref{I1} without using [RTV2]. This is done
in Section \ref{D}. Theorem \ref{I2} is proved in the last
Section \ref{F}. We refer the readers to Eisenbud's book [E] for
unexplained terminology. \vskip0.5cm
\noindent {\bf Acknowledgment}: The author would like to thank
the Centre de Recerca Matematica (Spain) for the financial support and
hospitality during the final preparation of this article. He is grateful to the referee
for his/her useful remarks and suggestions which led to an improvement of some main results of the
paper.
\section{Bounds in terms of the arithmetic degree} \label{C}\smallskip
Throughout this paper, if not otherwise stated, $K$ is an
arbitrary field, $R= K[x_1,...,x_n]$ is a polynomial ring
and $I \subset R$ is a homogeneous ideal of dimension $d$. However, all invariants considered in
this paper are unchanged under the passage from $K$ to $K(u)$, where $u$ is a new indeterminate.
Hence, in proofs we may always assume that $K$ is an infinite field. This assumption guarantees
the existence of generic elements.
Let $c=
n-d$. Note that $c$ is the true codimension of $I$ if $I$ does not
contain a linear form. Let ${\frak m} = (x_1,...,x_n)$ denote the
maximal homogeneous ideal of $R$ and set $S= R/I$. Let us recall
some notions.\par
For an artinian $\Bbb Z$-graded module $N$, let
$$\text{\rm end}(N) = \max \{ t;\ N_t \neq 0\}$$
(with the convention $\max \emptyset = - \infty$). Further, let
$$a_i(R/I) = \text{\rm end}( H^i_{\frak m}(R/I)),$$
where $H^i_{\frak m}(R/I)$ is the local cohomology module with the
support in ${\frak m}$ and $ 0 \leq i\leq d$.
The {\it Castelnuovo-Mumford regularity} is the number $$\operatorname{reg}(R/I) =
\max\{ a_i(R/I) + i;\ 0 \leq i\leq d\}.$$
Note that $\operatorname{reg}(I) = \operatorname{reg}(R/I) +1$. Sometimes we also use the
notation
\begin{eqnarray} \label{reg1}
\operatorname{reg}_k(R/I) =
\max\{ a_i(R/I) + i;\ k \leq i\leq d\},
\end{eqnarray}
where $k$ is a non-negative integer.\par
Following Brodmann and Sharp [BrS], the function
$$h^i_S(t) := \ell (H^i_{\frak m}(S)_t)$$
is called the $i$-th {\it Hilbert cohomological function} of $S$,
where $\ell(.)$ denotes the dimension of a vector space over $K$.
Let $H_S(t)$ and $P_S(t)$ denote the Hilbert function and the
Hilbert polynomial of $S$, respectively. We will often use the
Grothendieck-Serre formula
\begin{eqnarray} \label{EB1}
P_S(t) - H_S(t) = \sum_{i=0}^d (-1)^{i+1}h^i_S(t).
\end{eqnarray}\par
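For a minimal illustration of these notions, let $R = K[x]$ and $I = (x^{\delta})$ with $\delta \ge 1$. Then $d = 0$ and $H^0_{\frak m}(S) = S$, so that $a_0(R/I) = \delta - 1$; hence $\operatorname{reg}(R/I) = \delta - 1$ and $\operatorname{reg}(I) = \delta$, the degree of the generator. Moreover, $P_S(t) = 0$ and $h^0_S(t) = H_S(t)$, so both sides of (\ref{EB1}) equal $-H_S(t)$.\par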
The leading coefficient of $P_S(t)$, multiplied by $(d-1)!$, is called the degree of $S$ and denoted
by $\operatorname{deg} S$. We also denote $\operatorname{deg} S$ by $e(S)$, or just by $e$.
The arithmetic degree is defined as follows:
$$\operatorname{adeg} S = \operatorname{adeg} I = \sum_{{\frak p} \in \text{Ass}(R/I)} \ell(H^0_{{\frak m}_{\frak p}}(R_{\frak p}/I_{\frak p}))
\operatorname{deg}(R/{\frak p}),$$
(see [BM, Definition 3.4] and [V, Definition 9.13]). The number $
\ell(H^0_{{\frak m}_{\frak p}}(R_{\frak p}/I_{\frak p}))$ is the multiplicity of the component
${\frak p}$ with respect to $I$. In this definition ${\frak p}$ runs over all
associated primes of $S$, while the usual degree $\operatorname{deg} S$ can be
computed by a similar formula, but the sum is only taken over
primes of the highest dimension. Thus
$$\operatorname{adeg} S \ge \operatorname{deg} S,$$
and the equality holds if and only if $S$ is a pure-dimensional
ring. \par
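For instance, let $R = K[x,y,z]$ and $I = (xy,xz) = (x) \cap (y,z)$. Then $\text{Ass}(R/I) = \{(x),\ (y,z)\}$, each component occurring with multiplicity $\ell(H^0_{{\frak m}_{\frak p}}(R_{\frak p}/I_{\frak p})) = 1$, so
$$\operatorname{adeg} S = \operatorname{deg} R/(x) + \operatorname{deg} R/(y,z) = 2, \ \ \text{while}\ \ \operatorname{deg} S = \operatorname{deg} R/(x) = 1;$$
the inequality is strict here precisely because $S$ is not pure-dimensional.\par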
In this section we prove Theorem \ref{C4}. We need some auxiliary
results.
\begin{Lemma}\label{C1} Let $S$ be a one-dimensional Cohen-Macaulay ring.
Then
$$h^1_S(0) + \cdots + h^1_S(\operatorname{reg} S-1) \le e(e-1)/2.$$
\end{Lemma}
\begin{pf} Since $P_S(t) = e$, from the Grothendieck-Serre formula (\ref{EB1}) we have
$$h^1_S(t) = e - H_S(t).$$
Let $r= \operatorname{reg} S$. Since $S$ is a Cohen-Macaulay ring, its Hilbert-Poincare series can be
written in the form
$$HP_{S}(z) := \sum_{i\ge 0} H_{S}(i)z^i = \frac{1 + h_1z + \cdots + h_rz^r}{1-z},$$
where $h_1,...,h_r$ are positive integers (see, e.g., [V, p.
240]). From this it follows that
$$H_S(t) = 1+ h_1 + \cdots + h_t \ge t+1$$
for all $t\le r$.
Moreover, under the Cohen-Macaulay assumption, $r \le e-1$. Hence
$$h^1_S(0) + \cdots + h^1_S(\operatorname{reg} S-1) \le re - (1+ \cdots + r) = r(2e-r-1)/2 \le e(e-1)/2.$$
\end{pf}
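The bound of Lemma \ref{C1} is sharp: for $S = K[x,y]/(xy)$ we have $e = 2$ and $\operatorname{reg} S = 1$, while $h^1_S(0) = e - H_S(0) = 2 - 1 = 1 = e(e-1)/2$.\par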
\begin{Lemma}\label{C2} Assume that $S= R/I$ is a reduced ring of dimension at least two. Then
$$h^1_S(-1) \le \operatorname{adeg} I - e.$$
\end{Lemma}
\begin{pf} Since $S$ is reduced, one may write $I = J \cap Q$, where $J$ is the intersection
of all associated primes of $R/I$ of dimension at least 2, and $Q$
is the intersection of all associated primes of $R/I$ of dimension
1. By [HSV, Lemma 1] we have $h^1_{R/J}(-1) = 0$. Thus if $Q=R$,
then $h^1_S(-1) =0$. Assume that $Q \neq R$. Since $J \neq R$ and
$R/I$ has no embedded primes, $J+Q $ is an ${\frak m}$-primary ideal,
i.e. $\dim R/(J+Q) = 0$. The exact sequence
$$0 \to S \to R/J \oplus R/Q \to R/(J+Q) \to 0$$
implies
$$h^1_S(-1) = h^1_{R/J}(-1) + h^1_{R/Q}(-1) = h^1_{R/Q}(-1).$$
Note that $\operatorname{deg} R/Q = \operatorname{adeg} I - \operatorname{adeg} J \le \operatorname{adeg} I -e.$ Since
$R/Q$ is a one-dimensional ring, by the Grothendieck-Serre
formula, we have
$$h^1_{R/Q}(-1) = \operatorname{deg} R/Q \le \operatorname{adeg} I -e.$$
\end{pf}
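The bound of Lemma \ref{C2} is attained, for example, by $S = K[x,y,z]/(xy,xz)$: in the notation of the proof, $J = (x)$, $Q = (y,z)$, $e = 1$ and $\operatorname{adeg} I = 2$, while $h^1_S(-1) = h^1_{R/Q}(-1) = \operatorname{deg} R/Q = 1 = \operatorname{adeg} I - e$.\par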
The proof of Theorem \ref{C4} proceeds by induction. The next
two lemmas make this induction possible. The first one concerns
the behavior of the arithmetic degree under a generic hyperplane section, which is
more subtle than that of the usual degree, see [MVY]. However we have
\begin{Lemma}\label{B2} Let $K$ be an infinite field, and $S=R/I$ an arbitrary ring of
dimension at least
two and positive depth. Assume that $x_n$ is chosen generically.
Let $T=R/((I,x_n):{\frak m}^{\infty})$ and $r= \operatorname{reg} T$. Then:
\begin{itemize}
\item[(i)] $\operatorname{reg} T \le \operatorname{reg} S$.
\item[(ii)] $\operatorname{adeg} T \le \operatorname{adeg} S$.
\end{itemize}
\end{Lemma}
\begin{pf}
(i) Since $x_n$ is generic, it is a regular element on $S$. We have
$$\operatorname{reg} T = \operatorname{reg}_1 S/x_nS \le \operatorname{reg} S/x_nS = \operatorname{reg} S.$$
(ii) For an $R$-module $M$ and $r\ge -1$, let
$$\operatorname{adeg}_r (M) = \sum_{{\frak p} \in \text{Ass}(M),\ \dim R/{\frak p} = r+1} \ell(H^0_{{\frak m}_{\frak p}}(M_{\frak p}))
\operatorname{deg}(R/{\frak p})$$ (see [BM, Definition 3.4]). Since $x_n$ is generic, by
the prime avoidance lemma,
we may assume
$$x_n \not\in \cup\{ {\frak p};\ {\frak m} \neq {\frak p} \in \text{Ass}(S)\cup_{j\ge 1}
\text{Ass}( \text{Ext}^{n-j}_{R}(S,R))\}.$$
By [MVY, Corollary 2.5] it
follows that
$$\operatorname{adeg}_{r-1}(T) = \operatorname{adeg}_r(S) \ \ \text{for\ all} \ r \ge 1.$$
Since $S$ and $T$ have no zero-dimensional component, we get
$$\begin{array}{ll}
\operatorname{adeg} T &= \operatorname{adeg}_0(T) + \cdots + \operatorname{adeg}_{d-1}(T)\\
&= \operatorname{adeg}_1(S) +\cdots + \operatorname{adeg}_d(S) \le \operatorname{adeg} S.
\end{array}$$
\end{pf}
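The inequality in (ii) may be strict. For instance, if $S = K[x,y,z]/((x) \cap (y,z))$, then $\operatorname{adeg} S = 2$, while a generic hyperplane section misses the one-dimensional component defined by $(y,z)$, so that $\operatorname{adeg} T = 1$.\par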
The first three
statements of the next lemma are contained in the proof of Mumford's theorem on page 101 of the
book [M]
(cf. also [K, Proposition 1.4], [RTV1, Theorem 1.4] and [RTV2, Theorem 1.3]). In order
to make the paper more self-contained, we give here a sketch of
the proof. The proof of (iii) here is also simpler.
\begin{Lemma}\label{C5} Let $K$ be an infinite field and $S= R/I$ a reduced ring of dimension
at least two.
Assume that $x_n$ is chosen generically. Let
$T=R/((I,x_n):{\frak m}^{\infty})$ and $r= \operatorname{reg} T$. Then $T$ is also a
reduced ring and we have
\begin{itemize}
\item[(i)] $\operatorname{reg}_2(S) \le r$ (see the definition in (\ref{reg1})).
\item[(ii)] $h^1_S(t) \ge h^1_S(t+1)$ for all $t \ge r-1$.
\item[(iii)] $\operatorname{reg} S \le r + h^1_S(r-1)$. \item[(iv)] $h^1_S(t)
\le h^1_T(0) + \cdots + h^1_T(t) + \operatorname{adeg} I - e$, for all $t\ge 0$.
\end{itemize}
\end{Lemma}
\begin{pf} Note that $T$ can be considered as the homogeneous coordinate ring of a generic
hyperplane
section of the scheme $\text{Proj}(S)$. Since $K$ is an infinite
field and $x_{n}$ is generic, by Bertini's theorem [FOV,
Corollary 3.4.14] it follows that $T$ is reduced.
The long exact sequence
\begin{eqnarray}
0 \to H^0_{{\frak m}}(S/x_nS)_{t} & \to & H^1_{{\frak m}}(S)_{t-1} \to H^1_{{\frak m}}(S)_{t}
{\overset{\varphi_t}{\longrightarrow}} H^1_{{\frak m}}(S/x_nS)_t = H^1_{{\frak m}}(T)_{t}
\label{EC1}\\
& \to & H^2_{{\frak m}}(S)_{t-1}\to H^2_{{\frak m}}(S)_{t} \to \cdots \nonumber
\end{eqnarray}
implies (i) and the short exact sequence
$$0 \to H^0_{{\frak m}}(S/x_nS)_{t} \to H^1_{{\frak m}}(S)_{t-1} \to H^1_{{\frak m}}(S)_{t} \to 0$$
for all $t \ge r$. This yields (ii). If $h^1_S(t_0-1) =
h^1_S(t_0)$ for some $t_0 \ge r+1$, then
$h^0_{S/x_nS}(t_0) = 0$. Since $\operatorname{reg}_1(S/x_nS) = \operatorname{reg} T = r$, this
implies that $\operatorname{reg} (S/x_nS) \le t_0$. Hence $h^1_S(t_0) =
h^1_S(t_0+1) = \cdots = 0$. Therefore $h^1_S(t)$ strictly
decreases until it reaches zero for $t \ge r$, which implies (iii).
It remains to show (iv). From the exact sequence (\ref{EC1}) we have
$$h^1_S(u) - h^1_S(u-1) = \ell( \text{Im}(\varphi_u)) - h^0_{S/x_nS}(u) \le h^1_T(u)$$
for all $u \in \Bbb Z$. Adding these inequalities and using
Lemma \ref{C2} we get
$$\begin{array}{ll}
h^1_S(t) & \le h^1_T(0) + \cdots + h^1_T(t) + h^1_S(-1) \\
& \le h^1_T(0) + \cdots + h^1_T(t) + \operatorname{adeg} I - e
\end{array}$$
for all $t\ge 0$.
\end{pf}
Theorem \ref{C4} is a part of the following result. If $ a\in \Bbb R$, we denote by $[a]$
the largest integer not exceeding $a$.
\begin{Theorem}\label{C6} Assume that $S= R/I$ is a reduced ring of dimension at least two. Let
$$m= \frac{e(e-1)}{2} + \operatorname{adeg} I.$$
Then
\begin{itemize}
\item[(i)] $\operatorname{reg} S \le m^{2^{d-2}} -1$. \item[(ii)] For all $t\ge
0$, we have $h^1_S(t) \le m^{2^{d-2}} - e\cdot m^{[2^{d-3}]} $.
\end{itemize}
\end{Theorem}
\begin{pf} We may assume that $x_n$ is generic and choose $T$ as in the previous lemma.
Hence $T$ is a reduced ring. Set $r= \operatorname{reg} T$.
Let $d=2$. In order to show (ii), by Lemma \ref{C5}(ii), we may
assume that $t\le r-1$. Note that $T$ is a Cohen-Macaulay ring and
$e(T) = e$. Then Lemma \ref{C5}(iv) and Lemma \ref{C1} yield:
$$\begin{array}{ll}
h^1_S(t) & \le h^1_T(0) + \cdots + h^1_T(t) + \operatorname{adeg} I - e \\
& \le h^1_T(0) + \cdots + h^1_T(r-1) + \operatorname{adeg} I - e \\
& \le \frac{e(e-1)}{2} + \operatorname{adeg} I - e = m - e.
\end{array}$$
Using this inequality and the fact that $r \le e-1$ (since $T$ is
a Cohen-Macaulay ring), by Lemma \ref{C5}(iii) we get
$$\operatorname{reg} S \le e-1 + m-e = m-1.$$
Thus the case $d=2$ is proven.
Let $d\ge 3$. Since $\dim T = d-1, \ e(T) = e$, and $\operatorname{adeg} T \le
\operatorname{adeg} S$ (by Lemma \ref{B2}(ii)), the induction hypothesis gives
\begin{eqnarray}\label{EC2}
r \le m^{2^{d-3}} -1,
\end{eqnarray}
and for all $t\ge 0$
\begin{eqnarray}\label{EC3}
h^1_T(t) \le m^{2^{d-3}} - e\cdot m^{[2^{d-4}]} \le m^{2^{d-3}} -e
.
\end{eqnarray}
In order to prove (ii), again by Lemma \ref{C5}(ii), we may assume
that $t\le r-1$. Then, by Lemma \ref{C5}(iv), for all $t\ge 0$ we
have
$$\begin{array}{ll}
h^1_S(t) & \le h^1_T(0) + \cdots + h^1_T(t) + \operatorname{adeg} I - e \\
& \le r(m^{2^{d-3}} -e)+ \operatorname{adeg} I - e \ \ \text{(by \ (\ref{EC3}))} \\
& \le (m^{2^{d-3}} -1)(m^{2^{d-3}} -e) + \operatorname{adeg} I - e \ \ \text{(by \ (\ref{EC2}))} \\
& = m^{2^{d-2}} - e\cdot m^{2^{d-3}} - m^{2^{d-3}} + \operatorname{adeg} I\\
& \le m^{2^{d-2}} - e\cdot m^{2^{d-3}}.
\end{array}$$
To prove (i) we use (ii) and Lemma \ref{C5}(iii) :
$$\operatorname{reg} S \le r + h^1_S(r-1) \le m^{2^{d-3}} -1 + m^{2^{d-2}} - e\cdot m^{2^{d-3}} \le
m^{2^{d-2}}-1.$$
\end{pf}
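As a quick check of Theorem \ref{C6}(i), take the reduced ring $S = K[x,y,z]/(xy,xz)$, for which $d = 2$, $e = 1$ and $\operatorname{adeg} I = 2$. Then $m = 2$ and the theorem gives $\operatorname{reg} S \le m^{2^0} - 1 = 1$; in fact $\operatorname{reg} S = 1$, as one sees from the resolution $0 \to R(-3) \to R(-2)^2 \to R \to S \to 0$, so the bound is attained here.\par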
\begin{Remark}\label{C6b} {\rm In order to get a bound for $\operatorname{reg} S$ in terms of $\operatorname{adeg} S$ and
$d$, by induction on $d$, it suffices to estimate $h^1_S(t)$ for $t\geq 0$. This can be easily done
by using the following well-known inequality
$$h^1_S(t) + H_S(t) \leq \operatorname{adeg} S {t+d-1 \choose d-1}$$
(recall that $S$ is reduced). This was pointed out by the referee. However a direct application of this
inequality would only lead to a bound of the following type
$$\operatorname{reg} I \leq (\operatorname{adeg} I)^{(d-1)!}.$$}
\end{Remark}
If $K$ is an algebraically closed field, then a result of Gruson,
Lazarsfeld and Peskine [GLP] yields a better bound for the case
$d=2$ as shown in the following statement.
\begin{Proposition}\label{C7} Let $K$ be an algebraically closed field. Assume that $R/I$
is a reduced ring of dimension two. Then $\operatorname{reg} I \le \operatorname{adeg} I$.
\end{Proposition}
\begin{pf} Write $I = J \cap Q$ as in the proof of Lemma \ref{C2}. Then we have an exact sequence:
$$0 \to H^0_{{\frak m}}(R/J+Q)_t \to H^1_{{\frak m}}(R/I)_t \to H^1_{{\frak m}}(R/J)_t \oplus
H^1_{{\frak m}}(R/Q)_t \to 0,$$
and
$$H^2_{{\frak m}}(R/I)_t \cong H^2_{{\frak m}}(R/J)_t \oplus H^2_{{\frak m}}(R/Q)_t.$$
By [GLP, Theorem 1.1] (see also Remark on p. 497 there), $\operatorname{reg} R/J
\le e-1 \le \operatorname{adeg} I -1$. So we may assume that $Q \neq R$. Since
$R/Q$ is one-dimensional and reduced, it is a Cohen-Macaulay ring.
Hence $\operatorname{reg} R/Q \le \operatorname{adeg} R/Q -1 < \operatorname{adeg} I$. To complete the proof
it suffices to show that
$$H^0_{{\frak m}}(R/J+Q)_t = 0 \ \ \text{for \ all}\ t \ge \operatorname{adeg} I -1.$$
Since $\operatorname{reg} J \le e$, $J$ is generated by elements of degree $\le
e$. Hence one may choose an element $x\in J$ of degree $e$ such
that $x$ does not belong to any prime in $Q$, i.e. $x$ is a
regular element on $R/Q$. Then we have
$$\operatorname{reg} R/(Q,x) = \operatorname{reg} R/Q + e-1 \le \operatorname{deg} R/Q -1 + e-1 \le \operatorname{adeg} I -2.$$
Since $R/(Q,x)$ is a zero-dimensional ring, this means
$(R/(Q,x))_t =0$ for all $t \ge \operatorname{adeg} I -1$. The inequality
$\ell((R/J+Q)_t) \le \ell((R/(Q,x))_t)$ gives us $H^0_{{\frak m}}(R/J+Q)_t =
(R/J+Q)_t = 0$ for all $t \ge \operatorname{adeg} I -1$, as required.
\end{pf}
The above proposition says that in dimension two one can replace $m$ by $\operatorname{adeg} S$ in Theorem
\ref{C6}(i). However this does not work for Theorem \ref{C6}(ii) as shown by the following example.
\begin{Example}\label{C8} {\rm Let $e\ge 6$ and $S= K[x^e, x^{e-1}y, xy^{e-1}, y^e]$.
Then for $0\le t \le e-2$ one can show that $h^1_S(t) = te+1 - (t+1)^2$, while $\operatorname{adeg} S - e = 0$.
Taking $t_0 = [\frac{e-2}{2}]$, one can see that $h^1_S(t_0)$ is approximately half of the bound
in Theorem \ref{C6}(ii).
}\end{Example}
Theorem \ref{C4} does not hold if the ring $R/I$ is not reduced.
\begin{Example}\label{C9} {\rm (see [V, Example 9.3.1]) Let $S= K[x,y,u,v]/((x,y)^2,
xu^t+yv^t),\ t\ge 1$. Then $\operatorname{adeg} S = e = 2$, while $\operatorname{reg} S = t$ can be arbitrarily large.
}\end{Example}
\section{Bounds in terms of degrees of defining equations} \label{E}\smallskip
In this section we study arbitrary homogeneous ideals. We will
always write the degrees of polynomials in a minimal homogeneous
basis of $I$ in a decreasing sequence
$$\Delta := \delta_1 \ge \delta_2 \ge \cdots $$
and assume $\Delta \ge 2$. As mentioned in the introduction,
G. Caviglia and E. Sbarra already proved that
$$\operatorname{reg} I \le (\Delta^c + \Delta c - c+1 )^{2^{d-1}}$$
(see [CS, Corollary 2.6]). The purpose of this section is to prove
the following theorem which is a slight improvement of the above
result.
\begin{Theorem}\label{E1} Let $K$ be an arbitrary field and $I$ be an arbitrary homogeneous
ideal of dimension $d\ge 1$. Then
$$\operatorname{reg} I \le (\delta_1 \cdots \delta_c + \Delta -1)^{2^{d-1}}
\le (\Delta^c +\Delta -1)^{2^{d-1}}.$$
\end{Theorem}
The proof in [CS] uses properties of Borel-fixed ideals. The proof
given here is completely different and simpler. The main
idea of the proof is similar to that of Theorem \ref{C4}. We need some technical lemmas. For
short, set
$$\sigma = \delta_1 +\cdots+ \delta_c - c \ \ \text{and} \ \ \pi =
\delta_1 \cdots \delta_c .$$
If $S= R/I$, then we also write $\Delta = \Delta(S),\
\delta_1 = \delta_1(S), ...$ to emphasize their dependence on $S$ (or $I$). The
following result was pointed out to the author by the referee. Consequently, it slightly improves our
original Theorem \ref{E5} and Theorem \ref{F4}.
\begin{Lemma} {\rm [Sj, Theorem 2]}\label{E2} If $\dim S \le 1$, then $\operatorname{reg} S \le \sigma + \Delta
-1$.
\end{Lemma}
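For example, if $S = K[x,y]/(x^a, y^b)$ with $a \ge b \ge 2$, then $\dim S = 0$, $\sigma = a+b-2$ and $\Delta = a$, while $\operatorname{reg} S$ equals the top socle degree $(a-1)+(b-1) = \sigma \le \sigma + \Delta - 1$.\par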
The next result is a special case of [HH, Lemma 3].
\begin{Lemma}\label{E3} Assume that $d=1$. Then for all $t\ge 1$, $h^0_S(t) \le \pi -1$.
\end{Lemma}
Recall that an element $x\in {\frak m}$ is called {\it filter regular}
on $S$ if $0 :_S x$ is of finite length.
\begin{Lemma}\label{E4} Assume that $\dim S\ge 1$ and $x=x_n$ is a filter regular
element on $S$. Let $T=S/xS$ and $r \ge \max\{ \operatorname{reg} T,\ \Delta-1\}$. Then
\begin{itemize}
\item[(i)] $\operatorname{reg}_1(S) \le r$ (see the definition in (\ref{reg1})).
\item[(ii)] $h^0_S(t) \ge h^0_S(t+1)$ for all $t \ge r$.
\item[(iii)] $\operatorname{reg} S \le r + h^0_S(r)$. \item[(iv)] $h^0_S(t) \le
h^0_T(1) + \cdots + h^0_T(t) $, for all $t\ge 1$.
\end{itemize}
\end{Lemma}
\begin{pf} (i)-(iii) were shown in the proof of [BM, Proposition 3.8]. They follow from
the exact sequence
$$ 0 \to (0:x)_{t-1} \to H^0_{{\frak m}}(S)_{t-1} \to H^0_{{\frak m}}(S)_{t}
{\overset{\varphi_t}{\longrightarrow}} H^0_{{\frak m}}(T)_{t} \to H^1_{{\frak m}}(S)_{t-1}
\to H^1_{{\frak m}}(S)_{t} \to \cdots $$
For (iii) we need also the assumption $r\ge \Delta -1$ in order to
apply the regularity criterion of [BS, Theorem 1.10].
From the above exact sequence we have
$$h^0_S(u) - h^0_S(u-1) = \ell( \text{Im}(\varphi_u)) - \ell((0:x)_{u-1}) \le h^0_T(u)$$
for all $u \in \Bbb Z$. Since $h^0_S(0) = 0$, adding these
inequalities gives us (iv).
\end{pf}
Theorem \ref{E1} is a part of the following result.
\begin{Theorem}\label{E5} Let $d\ge 1$. Then
\begin{itemize}
\item[(i)] $\operatorname{reg} S \le (\pi + \Delta -1)^{2^{d-1}} -1$.
\item[(ii)]
For all $t\ge 1$, we have $h^0_S(t) \le (\pi + \Delta -1)^{2^{d-1}} -
(\pi + \Delta -1)^{[2^{d-2}]} $.
\end{itemize}
\end{Theorem}
\begin{pf} Keep the notation of Lemma \ref{E4}. Let $I'$ denote the image of $I$
in $K[x_1,...,x_n]/(x_n) \cong K[x_1,...,x_{n-1}] =: R'$. Then $T \cong R'/I'$
and it is clear that $\sigma(I') \le \sigma$ and $\pi(I') \le \pi$.
First let $d=1$. With the above remark, Lemma \ref{E3} yields
$h^0_S(t) \le \pi -1$ for all $t\ge 1$. Thus (ii) holds. By Lemma
\ref{E2},
$$\operatorname{reg} S \le \delta_1 + \delta_2 +\cdots + \delta_c - c +\Delta -1.$$
By induction on $c$ we get
$$\operatorname{reg} S \le (\delta_1\cdots \delta_{c-1} -1) + (\delta_c -1) + \Delta -1 \le \delta_1\cdots \delta_c -1
+ \Delta -1 = \pi +\Delta - 2.$$
Thus the case $d=1$ is proven.
Now let $d\ge 2$. With the remark at the beginning of the proof
and by the induction hypothesis we may assume that
$$ \operatorname{reg} T \le (\pi + \Delta -1)^{2^{d-2}} -1,$$ and
$$h^0_T(t) \le (\pi + \Delta -1)^{2^{d-2}} - (\pi + \Delta -1)^{[2^{d-3}]}
\le (\pi + \Delta -1)^{2^{d-2}} -1. $$
Let $r= (\pi + \Delta -1)^{2^{d-2}} $. Obviously $r \ge \Delta $. In
order to prove (ii), by Lemma \ref{E4}(ii), we may assume that
$t\le r$. Then, by Lemma \ref{E4} (iv) and the induction
hypothesis, we have
$$\begin{array}{ll} h^0_S(t) & \le r ((\pi + \Delta -1)^{2^{d-2}} -1) \le
(\pi + \Delta -1)^{2^{d-2}} ( (\pi + \Delta -1)^{2^{d-2}} -1) \\
&= (\pi + \Delta -1)^{2^{d-1}} - (\pi + \Delta -1)^{2^{d-2}}.
\end{array}$$
Thus (ii) is proven. Using this and Lemma \ref{E4}(iii) we
immediately get (i).
\end{pf}
\begin{Remark}\label{E6} {\rm a) The bound in Theorem \ref{E1} is nearly the best possible.
It was shown that there is an ideal $I$, due to Mayr and Meyer, generated by $10n-6$
forms of degree at most 4 in $10n +1$ variables such that
$\operatorname{reg}(I) > 4^{2^{n-1}} + 1$ (see, e.g., [BM, Example 3.9 and
Proposition 3.11]). \par
b) In this paper we are not interested in giving the best possible bounds for $\operatorname{reg} S$, which would then
be more complicated to formulate. On the other hand, for rings of small dimension, there are
also some bounds which are much better than the ones in Theorem \ref{E1}. See, e.g., a recent paper
[CF] for $d\leq 2$. However an application of such results in our proof does not significantly improve
the bound in Theorem \ref{E5} for larger
$d$.}\end{Remark}
\section{Hilbert cohomological functions} \label{A}\smallskip
In this section we give a bound on $h^i_S(t)$. First, we do this
for Borel-fixed ideals. We need some notation and results from
[HPV]. Let $I\neq 0$ be a monomial ideal. Denote by $G(I)$ the
unique set of monomial generators of $I$. For a monomial $u$, let
$m(u)$ be the maximal index of a variable appearing in $u$. Set
$$m(I) = \max \{ m(u); \ u \in G(I)\}.$$
We recursively define an ascending chain of monomial ideals
$$I = I_0 \subset I_1 \subset \cdots \subset I_{l+1} = R$$
as follows: let $I_0=I$. Suppose $I_j$ is already defined. If $I_j
= R$, then the chain ends. Otherwise, let $n_j = m(I_j)$ and set
$$I_{j+1} = I_j : x_{n_j}^\infty := \cup_{k=1}^\infty I_j : x_{n_j}^k.$$
An ideal which is stable under the action of upper triangular matrices is
called {\it Borel-fixed}. It is always a monomial ideal. If $I$ is
a Borel-fixed ideal, then $(x_1,...,x_c)$ is the unique minimal
associated prime of $R/I$ (see [E, Corollary 15.25]). Hence in
this case $n \geq n_0 > n_1 > \cdots > n_l =c$. For $j=0,...,l$,
let $J_j \subset K[x_1,...,x_{n_j}]$ be the monomial ideal with
$G(I_j)=G(J_j)$. Denote by
$$J_j^{sat} = J_j :
(x_1,...,x_{n_j})^\infty$$
the {\it saturation} of $J_j$. Then by [HPV,
Corollary 2.6] and local duality we have
\begin{Lemma}\label{A1} Let $I\neq 0$ be a Borel-fixed ideal. Then $H^j_{\frak m}(S) = 0$
if $j \not\in \{n-n_0,...,n-n_l\}$, and we have an isomorphism of $\Bbb Z$-graded
$R$-modules: $$H^j_{\frak m}(S) \cong (J_i^{sat}/J_i)[x_{n_i+1}^{-1},...,x_n^{-1}],$$
if $j=n-n_i$ for some $i=0,...,l$.
\end{Lemma}
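To illustrate the chain and Lemma \ref{A1}, take $I = (x_1^2, x_1x_2) \subset R = K[x_1,x_2]$, a Borel-fixed ideal with $c = 1$. Here $n_0 = m(I) = 2$, $I_1 = I : x_2^{\infty} = (x_1)$, $n_1 = 1$ and $I_2 = R$, so $l = 1$. Consequently $J_0 = I$ with $J_0^{sat} = (x_1)$, and $J_1 = (x_1) \subset K[x_1]$ with $J_1^{sat} = K[x_1]$. Lemma \ref{A1} then gives $H^0_{\frak m}(S) \cong (x_1)/(x_1^2, x_1x_2)$, which is one-dimensional and concentrated in degree $1$, and $H^1_{\frak m}(S) \cong (K[x_1]/(x_1))[x_2^{-1}]$, as expected since $S/H^0_{\frak m}(S) \cong K[x_2]$.\par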
In the sequel, for a Borel-fixed ideal $I$ let us denote
\begin{eqnarray}\label{EA1}
B := B(I) = \ell(R/(I,x_{c+1},...,x_n)).
\end{eqnarray}
For short, set $e = \operatorname{deg}(I)$. Note that $B \geq e$.
\begin{Lemma}\label{A2} Let $I\neq 0$ be a Borel-fixed ideal. Then
\begin{itemize}
\item[(i)] $\ell(J_l^{sat}/J_l) =e$. \item[(ii)] For $i< l$ and
all $t \ge 0$ we have
$$\ell([J_i^{sat}/J_i]_t) \le (B-1){t+n_i-c-2\choose n_i-c-1}.$$
\end{itemize}
\end{Lemma}
\begin{pf} Let $M= J_i^{sat}/J_i$ and $R' = K[x_1,...,x_c]$. Since $(x_1,...,x_c)$
is the unique minimal associated prime of $R/I$, by the
construction we have $I \subseteq I_i \subseteq (x_1,...,x_c)$.
Let $i=l$. We have $J_l^{sat} = R'$ and $J_l = I: (x_{c+1},...,x_{n})^\infty$. Hence
$$\ell(M) = \ell(R'/I: (x_{c+1},...,x_{n})^\infty) = \ell((R/I)_{(x_{c+1},...,x_{n})}) = e.$$
Let $i<l$. Set $R" =K[x_1,...,x_{n_i}]$. By definition, $J_i =
G(I_i)R"$. Hence $x_{c+1},...,x_{n_i}$ is a system of parameters (s.o.p.) of $R"/J_i$,
and $ I \cap R' \subseteq J_i \cap R'.$ This implies
\begin{eqnarray}\label{EA0}
\ell(\frac{R"}{(J_i, x_{c+1},...,x_{n_i})}) = \ell(\frac{R'}{J_i
\cap R'}) \le \ell(\frac{R'}{I\cap R'}) = \ell(\frac{R}{(I,
x_{c+1},...,x_{n})}) = B.
\end{eqnarray}
On the other side, the inclusion $J_i^{sat} \subseteq
(x_1,...,x_c)$ yields
$$\ell(M_t) \leq \ell\Big(\Big[\frac{(x_1,...,x_c)R"}{J_i}\Big]_t\Big) = \ell\Big(\Big[\frac{R"}{J_i}\Big]_t\Big)
- \ell\Big(\Big[\frac{R"}{(x_1,...,x_c)R"}\Big]_t\Big) =
\ell\Big(\Big[\frac{R"}{J_i}\Big]_t\Big) - {t+n_i-c-1\choose n_i-c-1}.$$ By [RVV,
Proposition 2.4] and (\ref{EA0}) we have
$$\begin{array}{ll}
\ell\Big(\Big[\frac{R"}{J_i}\Big]_t\Big) &\le (\ell( \frac{R"}{(J_i, x_{c+1},...,x_{n_i})}) -1)
{t+n_i-c-2\choose n_i-c-1} + {t+n_i-c-1\choose n_i-c-1}\\
&\le (B-1){t+n_i-c-2\choose n_i-c-1} + {t+n_i-c-1\choose n_i-c-1}.
\end{array}$$
Hence $\ell(M_t) \le (B-1){t+n_i-c-2\choose n_i-c-1}$.
\end{pf}
\begin{Lemma}\label{A3} Let $I\neq 0$ be a Borel-fixed ideal and $S=R/I$. Then
\begin{itemize}
\item[(i)] $h^0_S(t) \le (B-1){t+d-2\choose d-1} $ for all $t\ge
0$. \item[(ii)] For $1 \le j \le d-1$ and $t \le \operatorname{reg} S$:
$$h^j_S(t) \le (B-1){\operatorname{reg} S+d-j-2\choose d-j-1} {\operatorname{reg} S-t\choose j}.$$
\item[(iii)] For $t< \operatorname{reg} S$:
$$h^d_S(t) \le e{\operatorname{reg} S-t-1\choose d-1} \le B{\operatorname{reg} S-t-1\choose d-1}.$$
\end{itemize}
\end{Lemma}
\begin{pf} By virtue of Lemma \ref{A1}, (i) is a special case of Lemma \ref{A2}
(when $i=0$). Let $j\ge 1$. By Lemma \ref{A1} we may assume that
$j=n-n_i$ for some $i>0$. Let $M = J_i^{sat}/J_i$. Lemma \ref{A1}
implies
\begin{eqnarray}
h^j_S(t) &=& \sum_{u= 1}^{\text{\rm end}(M)} \ell(M_u){u-t-1\choose j-1} \label{EA2}\\
&\le& [\max_{1\le u\le \text{\rm end}(M)}\ell(M_u)] \sum_{v=
0}^{\text{\rm end}(M)-j-t}{v+j-1\choose j-1} \nonumber\\&=&
[\max_{1\le u\le \text{\rm end}(M)}\ell(M_u)] {\text{\rm
end}(M)-t\choose j}. \label{EA3}
\end{eqnarray}
(In the above calculation we set ${a\choose b} =0$ if $b\geq 0$
and $a<b$.) Moreover, again by Lemma \ref{A1}, $\text{\rm end}(M)
= a_j(S) + j \le \operatorname{reg} S$ (see also [HPV, Corollary 2.7]). Since
$n_i-c = n-j-c = d-j$, Lemma \ref{A2}(ii) yields
$$\max_{1\le u\le \text{\rm end}(M)}\ell(M_u) \le (B-1) \max_{1\le u\le \operatorname{reg} S}
{u+d-j-2\choose d-j-1} =
(B-1) {\operatorname{reg} S+d-j-2\choose d-j-1}.$$ From this and (\ref{EA3}) we
get (ii).
Let $j=d$. Then $n_i=c$. From (\ref{EA2}) we have
$$\begin{array}{ll}
h^d_S(t) & \le [\max_{1\le u\le \text{\rm end}(M)}{u - t -1\choose d-1}]
\cdot \sum_{u= 1}^{\text{\rm end}(M)} \ell(M_u)\\
& \le {\operatorname{reg} S-t-1\choose d-1} \cdot \ell(M)\\
&= e{\operatorname{reg} S-t-1\choose d-1} \ \ \text{\rm (by \ Lemma \
\ref{A2}(i))}.
\end{array}$$
\end{pf}
Now we can bound the Hilbert cohomological functions of an
arbitrary homogeneous ideal $I$. Recall that the defining degrees
of $I$ are written in a decreasing sequence
$$\Delta := \delta_1 \ge \delta_2 \ge \cdots,$$
and assume $\Delta \ge 2$.
In the proof of the following theorem, a result of Vasconcelos on
the reduction number plays an essential role. We use initial ideals in order to go back to the
situation of the previous result.
\begin{Theorem}\label{A4} Let $I$ be an arbitrary homogeneous ideal of $R$ and $S=R/I$. Let
$$b= \min\{ \delta_1 \cdots \delta_c,\ (\operatorname{adeg} I)^c\}.$$
Then
\begin{itemize}
\item[(i)] $h^0_S(t) \le (b-1){t+d-2\choose d-1} $ for all $t\ge
0$. \item[(ii)] For $1 \le j \le d-1$ and $t \le \operatorname{reg} S$:
$$h^j_S(t) \le (b-1){\operatorname{reg} S+d-j-2\choose d-j-1} {\operatorname{reg} S-t\choose j}.$$
\item[(iii)] For $t< \operatorname{reg} S$:
$$h^d_S(t) \le e{\operatorname{reg} S-t-1\choose d-1} \le b{\operatorname{reg} S- t -1\choose d-1}.$$
\end{itemize}
\end{Theorem}
\begin{pf} Let $\operatorname{Gin} I$ denote the generic initial ideal of $I$ with respect to
the reverse lexicographic order. Then $\operatorname{Gin} I$ is a Borel-fixed ideal. Moreover we
may assume that the coordinates $x_1,...,x_n$ are chosen generically. By
[BS, Lemma 2.2 and Theorem 2.4] we have
$$\ell(R/(I, x_{c+1},...,x_{n})) = \ell(R/(\operatorname{Gin} I, x_{c+1},...,x_{n})),$$
and
$$\operatorname{reg}(R/I) = \operatorname{reg}(R/\operatorname{Gin} I).$$
By Macaulay's theorem, $e(R/I) = e(R/\operatorname{Gin} I)$. Moreover, by [S,
Theorem 2.4]
$$h^i_{R/I}(t) \le h^i_{R/\operatorname{Gin} I}(t) $$
for all $i\ge 0$ and $t\in \Bbb Z$. Hence, the theorem immediately
follows from the previous lemma if we can show that
$$ B:= \ell(R/(I, x_{c+1},...,x_{n})) \le b.$$
a) Let $I'$ denote the image of $I$ in $R' = R/(x_{c+1},...,x_{n})
\cong K[x_1,...,x_c]$. It is a $(x_1,...,x_c)$-primary ideal.
Since $I$ can be generated by elements of degrees $d_1\le
\delta_1, d_2\le \delta_2, ...$, $I'$ contains a regular sequence
consisting of forms $f_1,...,f_c$ of degrees $d'_1\le d_1\le
\delta_1, ..., d'_c\le d_c\le \delta_c$. Hence
$$B= \ell(R'/I') \le \ell(R'/(f_1,...,f_c)) =d'_1\cdots d'_c \le \delta_1 \cdots \delta_c.$$
b) Since $x_{c+1},...,x_{n}$ is a s.o.p. (system of parameters) of $R/\operatorname{Gin}(I)$, it is
also a s.o.p. of $R/I$. Hence it is a minimal reduction of the
algebra $R/I$. By [V, Theorem 9.3.4]
$$x_i^{\operatorname{adeg}(I)} \in (I, x_{c+1},...,x_{n}) , \ \ \text{\rm for \ all} \ i \ge 1.$$
This means $x_1^{\operatorname{adeg}(I)},...,x_c^{\operatorname{adeg}(I)}$ form a regular
sequence in $I'$. The above argument gives $B \le (\operatorname{adeg} I)^c$.
\end{pf}
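Step a) uses the fact that the colength of a monomial complete intersection is the product of the degrees of its generators, and that enlarging the ideal only decreases the colength. A small brute-force illustration (the helper \texttt{colength} is ours, not from the text):

```python
from itertools import product

def colength(gens, box=10):
    """Colength of a monomial ideal in K[x_1,...,x_c], the generators given
    as exponent vectors: count the monomials divisible by no generator.
    (Assumes the ideal contains a power of every variable, so the count is
    finite and the bounding box is large enough.)"""
    c = len(gens[0])
    def in_ideal(b):
        return any(all(bi >= gi for bi, gi in zip(b, g)) for g in gens)
    return sum(1 for b in product(range(box), repeat=c) if not in_ideal(b))

# a complete intersection (x^3, y^4) has colength 3 * 4 = 12 ...
assert colength([(3, 0), (0, 4)]) == 12
# ... and, as in step a), a larger ideal has smaller colength:
assert colength([(3, 0), (0, 4), (1, 2)]) <= 12
```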
\begin{Remark}\label{A5} {\rm i) In the above theorem we may replace $b$ by
$(\operatorname{reg} I)^c$ in order to get a bound for $h^j_S(t)$, which depends
only on $\operatorname{reg} I$ and $d,c$. \par
ii) Hilbert cohomological functions are of reverse polynomial
type, i.e. for each $i\ge 0$ there is a polynomial $p^i_S(t)$ such
that $h^i_S(t)= p^i_S(t)$ for all $t\ll 0$ (see [BrS, Theorem
17.1.9]). The number
$$\nu_S^i = \min\{ t\in {\Bbb Z}; \ h^i_S(t)\neq p^i_S(t)\} -1$$
is called the $i$-th cohomological postulation number of $S$ (see
[BrL]). Thus, if $H^i_{\frak m}(S)$, $i<d$, is of finite length, then all
graded components $H^i_{\frak m}(S)_t$ vanish below $\nu_S^i $. Brodmann
and Lashgari proved that all $-\nu_S^i, \ i\le d, $ can be bounded
by a polynomial (of huge degree) in the numbers
$h^1_S(0),...,h^d_S(-d+1)$ (see [BrL, Theorem 4.6]). Combining
their result with Theorem \ref{A4} we see that $-\nu_S^i $ can be
bounded by a polynomial in $\operatorname{reg} S$. Thus, the number of
``irregular'' negative components of local cohomology modules is
governed by the Castelnuovo-Mumford regularity. }\end{Remark}
\section{Hilbert coefficients} \label{B}\smallskip
Write the Hilbert polynomial in the form:
$$P_{S}(t)= e_0{t+d-1 \choose d-1} -e_1 {t+d-2 \choose d-2}+ \cdots + (-1)^{d-1}e_{d-1}.$$
Then $e_0, e_1,...,e_{d-1}$ are called {\it Hilbert coefficients}
of $S$. Note that $e_0=e$. Sometimes we also write $e_i = e_i(S)$
to emphasize its dependence on $S$.\par
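Since the binomial basis ${t+d-1-i\choose d-1-i}$ is triangular with respect to finite differences, the coefficients $e_i$ can be peeled off a given Hilbert polynomial one at a time. A small illustrative routine (our own sketch, not from the text):

```python
from math import comb

def hilbert_coefficients(P, d):
    """Recover e_0,...,e_{d-1} from a Hilbert polynomial P (a callable on
    integers) of a d-dimensional graded ring, written as
    P(t) = e_0 C(t+d-1,d-1) - e_1 C(t+d-2,d-2) + ... + (-1)^{d-1} e_{d-1}."""
    es = []
    for i in range(d):
        k = d - 1 - i
        def rem(t):
            # subtract the basis terms whose coefficients are already known
            return P(t) - sum((-1) ** j * es[j] * comb(t + d - 1 - j, d - 1 - j)
                              for j in range(i))
        vals = [rem(t) for t in range(k + 1)]
        for _ in range(k):                    # k-fold forward difference
            vals = [b - a for a, b in zip(vals, vals[1:])]
        es.append((-1) ** i * vals[0])        # differences kill lower-order terms
    return es

# P(t) = C(t+2, 2), the polynomial ring in 3 variables: e = (1, 0, 0)
assert hilbert_coefficients(lambda t: comb(t + 2, 2), 3) == [1, 0, 0]
# P(t) = 3 C(t+1, 1) + 5 gives e_0 = 3, e_1 = -5
assert hilbert_coefficients(lambda t: 3 * (t + 1) + 5, 2) == [3, -5]
```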
We first estimate $|e_i|$ in terms of the arithmetic degree. For
later applications, the following result is formulated in a
rather technical way.
\begin{Theorem}\label{B3} Let $K$ be an infinite field and $I$ an arbitrary homogeneous ideal.
Assume that
$x_{c+1},...,x_{n}$ are chosen generically. Let $T_d = R/
(I:{\frak m}^\infty),\ T_{d-1} = R/((I,x_n):{\frak m}^\infty), ...$. Then
\begin{itemize}
\item[(i)] $|e_1| \le (\operatorname{adeg} I)^c (\operatorname{reg} T_2 +1) \leq (\operatorname{adeg} I)^c
\operatorname{reg} I$.
\item[(ii)] For $i\ge 2$, $|e_i| \le \frac{3}{2}(\operatorname{adeg}
I)^c (\operatorname{reg} T_{i+1} +1)^i\le \frac{3}{2}(\operatorname{adeg} I)^c ( \operatorname{reg} I)^i$.
\end{itemize}
\end{Theorem}
\begin{pf} The second inequalities in both (i) and (ii) follow from Lemma \ref{B2}(i).
Let us prove the first ones. Set $T = T_d$. Let $H_T(t)$ denote
the Hilbert function of $T$. Since $S$ and $T$ have the same
Hilbert polynomial, $e_{i}= e_i(T)$ for all $i\ge 0$. From the
Grothendieck-Serre formula
$$ P_T(t) - H_T(t) = \sum_{i=0}^d (-1)^{i+1}h^i_T(t),$$
we get (setting $t=-1$):
$$(-1)^{d-1}e_{d-1} = C - D,$$
where
$$C= h^1_T(-1) + h^3_T(-1) + \cdots ,$$
and
$$D= h^2_T(-1) + h^4_T(-1) + \cdots .$$
Hence
$$|e_{d-1}| \le \max\{C,\ D\}.$$
For short, set $b = (\operatorname{adeg} I)^c$. If $d=2$, then by Theorem
\ref{A4} we have $C = h^1_T(-1) \le (b-1)(\operatorname{reg} T +1)$ and $D=
h^2_T(-1) \le b\cdot \operatorname{reg} T$. Therefore $|e_1| \le b\cdot (\operatorname{reg}
T_2 +1)$.
Let $d\ge 3$. Since $x_n$ is generic, it is a regular element on
$T$. Since $P_{T_{d-1}}(t) = P_{T/x_nT}(t)$, we have $e_i =
e_i(T) = e_i(T_{d-1})$ for all $i \le d-2$. The corresponding
sequence of rings constructed for $T_{d-1}$ as above is exactly
the rings $T_{d-1}, T_{d-2},...,T_1$. By Lemma \ref{B2}(ii),
$\operatorname{adeg} T_{d-1} \le \operatorname{adeg} T$. Hence, by the induction hypothesis, it
remains to prove (ii) for $i=d-1$. Note that
$${v+u-1 \choose u} \le v^u,\ \ \text{and} \ \ {v+1 \choose u} \le (v+1) \frac{v^{u-1}}{u!}.$$
Let $r = \operatorname{reg} T$. If $d= 2k+1$, where $k\ge 1$, then Theorem
\ref{A4} yields
$$\begin{array}{ll}
C &\le (b-1)(r+1)r^{d-2}\{ 1 + \frac{1}{3!} + \cdots + \frac{1}{(2k-1)!}\} + b
\frac{r^{d-1}}{(2k)!} \\
&\le b(r+1) r^{d-2}\{ 1 + \frac{1}{3!} + \cdots + \frac{1}{(2k-1)!} + \frac{1}{(2k)!}\}\\
& \le \frac{3}{2} b(r+1)^{d-1},
\end{array}$$
and
$$D \le (b-1)(r+1) r^{d-2}\{ \frac{1}{2!} + \cdots + \frac{1}{(2k)!}\} \le
(b-1)(r+1)^{d-1}.$$
Hence $|e_{d-1}| \le \frac{3}{2} b(r+1)^{d-1}$.
The inequality in the case $d= 2k,\
k\ge 2$, can be shown similarly.
\end{pf}
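The proof above uses the two elementary binomial estimates displayed before it, together with the fact that the truncated factorial sums stay below $\frac{3}{2}$ (with equality at $k=1$, i.e. $d=3$). A numerical check, for illustration only:

```python
from math import comb, factorial
from fractions import Fraction

# the two elementary binomial estimates quoted in the proof
for v in range(1, 16):
    for u in range(1, 8):
        assert comb(v + u - 1, u) <= v ** u
        # C(v+1, u) <= (v+1) v^(u-1) / u!, cleared of denominators:
        assert factorial(u) * comb(v + 1, u) <= (v + 1) * v ** (u - 1)

# the constant 3/2:  1 + 1/3! + ... + 1/(2k-1)! + 1/(2k)! <= 3/2, equality at k = 1
for k in range(1, 12):
    s = 1 + sum(Fraction(1, factorial(m)) for m in range(3, 2 * k, 2))
    s += Fraction(1, factorial(2 * k))
    assert s <= Fraction(3, 2)
```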
\begin{Remark}\label{B5} {\rm a) In the above proof, if $h^1_T(-1)= 0$, then $C \le
(\operatorname{adeg} I)^c(r+1)^{d-1}$. Hence, if $h^1_{T_{i+1}} (-1) = 0$, then
$$|e_i| \le (\operatorname{adeg} I)^c (\operatorname{reg} T_{i+1} +1)^i\le (\operatorname{adeg} I)^c ( \operatorname{reg} I)^i.$$
b) Consider again Example \ref{C9}: $S= K[x,y,u,v]/((x,y)^2,
xu^t+yv^t),\ t\ge 1$. We have $ e_1 = - (t+1)$, while $\operatorname{reg} (S) = t,\ \operatorname{adeg} S = 2$ and the bound in (i)
of the above theorem is $4(t+1)$. Thus one cannot avoid $\operatorname{reg} I$ in the above theorem.
}\end{Remark}
Note that $\dim T_{i+1} = i+1$. Combining Theorem \ref{B3} and
Theorem \ref{C4} we get
\begin{Proposition}\label{C10} Let $S$ be a reduced ring of dimension at least two. Then
\begin{itemize}
\item[(i)] $|e_1| \le (\operatorname{adeg} S)^c(\frac{e(e-1)}{2} + \operatorname{adeg} S)$.
\item[(ii)] $|e_i| \le \frac{3}{2} (\operatorname{adeg} S)^c(\frac{e(e-1)}{2} +
\operatorname{adeg} S)^{i2^{i-1}}$ if $i\ge 2$.
\end{itemize}
\end{Proposition}
Another consequence of Theorem \ref{B3} is:
\begin{Corollary}\label{B6} Assume that $K$ is an algebraically closed field of
characteristic 0 and $\text{\rm Proj}(R/I)$ is a reduced and
pure-dimensional smooth subscheme in ${\Bbb P}^{n-1}$. Then for
all $i\ge 1$ we have
$$ |e_i| < (i+2)^i e^{c+i}.$$
\end{Corollary}
\begin{pf} By Bertini's theorems (see [FOV, Corollary 3.4.6 and Corollary 3.4.14]) we may
assume that all $\text{\rm Proj}(T_{i})$ are reduced and pure-dimensional
smooth subschemes. By Mumford's bound: $\operatorname{reg} T_{i+1} \le (i+2)(e-2) + 1$
(see [BM, Theorem 3.12(ii)]). Moreover, in this case $h^1_{ T_{i+1}}(-1) = 0$
for all $i \ge 1$ and $\operatorname{adeg} I = e$. Hence, by Theorem \ref{B3} and Remark \ref{B5}, we get
$$|e_i| \le e^c ((i+2)(e-2) + 2)^i < (i+2)^i e^{c+i}.$$
\end{pf}
\begin{Remark}\label{C11} {\rm Let $S$ be a reduced ring of dimension at least two.
i) It is known that for any $K$-algebra $S$, $e_1 \le e(e-1)/2$
(see [Bl, Remark 3.10]). Hence in the statement (i) of
Proposition \ref{C10} only the following inequality is new: $e_1
\ge - (\operatorname{adeg} S)^c(\frac{e(e-1)}{2} + \operatorname{adeg} S)$.\par
ii) Let us recall\par
\noindent {\bf Eisenbud-Goto conjecture} [EG]: Let $K$ be an
algebraically closed field. If $I$ is a prime ideal containing no
linear form, then $\operatorname{reg} R/I \le e - c$.\par
If this conjecture holds true, then by Remark \ref{B5}, $|e_i| \le
(\operatorname{deg} S)^{c+i}$ provided $S$ is a domain. Note that the Eisenbud-Goto conjecture is close to being
proved for smooth varieties of dimension at most 6 over a field of characteristic zero, by the work of
several people including Lazarsfeld, Ran and Kwak.
This indicates that the bounds in
Theorem \ref{C4} and Proposition \ref{C10} are probably far from being
sharp.\par
iii) There is a bound on $|e_i|$ in terms of the so-called
homological degree which also holds for any standard graded
algebra over an artinian ring, see [RVV, Theorem 4.3]. However the
homological degree is very big.}\end{Remark}
We now estimate $|e_i|$ by means of the defining degrees. Recall
that homogeneous elements $y_1,...,y_m$ of $S$ form a {\it filter
regular sequence} if $[(y_1,...,y_{i-1}): y_i]_t =
(y_1,...,y_{i-1})_t$ for all $t \gg 0$ and $i=1,...,m$. In other
words, $y_i$ is a filter regular element on
$S/(y_1,...,y_{i-1})S$.
\begin{Theorem}\label{B3b} Let $I$ be an arbitrary homogeneous ideal.
Assume that $d\ge 2$ and $x_{c+1},...,x_n$ is a filter regular
sequence on $S$. Let $S_d = S,\ S_{d-1} = S/x_nS_d,\ ... $ Set
$\pi = \delta_1 \cdots \delta_c$. Then
\begin{itemize}
\item[(i)] $|e_1| \le \pi\cdot (\operatorname{reg} S_2 +1) \leq \pi\cdot \operatorname{reg}
I$. \item[(ii)] For $i\ge 2$, $|e_i| \le \frac{3}{2}\pi\cdot (\operatorname{reg}
S_{i+1} +1)^i\le \frac{3}{2}\pi\cdot ( \operatorname{reg} I)^i$.
\end{itemize}
\end{Theorem}
\begin{pf}
The proof is similar to that of Theorem \ref{B3} after noticing that
$\operatorname{reg} S_i \le \operatorname{reg} S_{i+1}$ and $\delta_1(S_i) \le
\delta_1(S_{i+1}) ,..., \delta_c(S_i) \le \delta_c(S_{i+1})$ for
all $i \ge 2$.
\end{pf}
Combining it with Theorem \ref{E1} we immediately get
\begin{Proposition}\label{E7}
Let $d\ge 1$. Then
\begin{itemize}
\item[(i)] $|e_1| \le \pi(\pi + \Delta -1)^2$.
\item[(ii)] For all
$i\ge 2$, we have $|e_i| \le \frac{3}{2}\pi (\pi + \Delta -1)^{i2^i}$.
\end{itemize}
In particular $|e_i| < (\frac{3}{2}\Delta^c + \Delta)^{1+i2^i}$ for all $i\ge 1$.
\end{Proposition}
A direct application of Theorem \ref{B3b} sometimes gives much better bounds than the ones in
the previous proposition. For example, using [BEL] and the second
inequality in Theorem \ref{B3b}(ii), one immediately gets that
$$|e_i| \le \frac{3}{2}\pi (\operatorname{reg} I)^i \le \frac{3}{2} \Delta^c(c(\Delta-1))^i <
\frac{3}{2} c^i\Delta^{c+i},$$
provided $\text{\rm Proj}(R/I)$ is a
reduced and pure-dimensional smooth subscheme. Another case is
\begin{Corollary}\label{B7} Let $I$ be an ideal generated by monomials of degree at
most $\Delta$ in $n$ variables. Then for all $i\ge 1$ we have
$$|e_i| \le \frac{3}{2} \min\{ (\operatorname{adeg} I)^{c+i}, \ n^i\Delta^{c+i} \}.$$
\end{Corollary}
\begin{pf} By [HT, Theorem 1.1], $\operatorname{reg} I \le \operatorname{adeg} I$ and by Taylor's resolution (see also [HT,
Theorem 1.2]), $\operatorname{reg} I \le n\Delta$.
Hence the statement follows from Theorems \ref{B3} and \ref{B3b}.
\end{pf}
The following example shows that the bounds in Theorem \ref{B3b}
and Corollary \ref{B7} are rather good.
\begin{Example}\label{B4} {\rm Let $n> c+1$ and
$$I= (x_1,...,x_c) \cap (x_1^r,..., x_c^r, x_{c+1}^{r-1},...,x_{n-1}^{r-1}).$$
Using the exact sequence
$$0 \to R/I \to R/P \oplus R/J \to R/(P+J) \to 0,$$
where $P = (x_1,...,x_c)$ and $J = (x_1^r,..., x_c^r, x_{c+1}^{r-1},...,x_{n-1}^{r-1})$,
one can check that
$$\operatorname{reg} R/I = (n-1)r-2n+c+2,$$
and
$$P_{R/I}(t) = {t+d-1 \choose d-1} + [ r^c(r-1)^{d-1} - (r-1)^{d-1}].$$
Hence $|e_{d-1}| = (r^c-1)(r-1)^{d-1}$, while by Corollary
\ref{B7} $|e_{d-1}| \le 3r^{c+d-1}n^{d-1}/2$, and by Theorem
\ref{B3b} $|e_{d-1}| \le 3 r^c[(n-1)r-2n+c+3]^{d-1}/2$.
}\end{Example}
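The claims of the example can be checked by brute force in a small instance; here we take $n=4$, $c=2$, $r=3$ (so $d=2$ and the claimed regularity of $R/I$ is $5$). The Hilbert function, counted directly, agrees with the claimed Hilbert polynomial above the regularity and disagrees just below it (a sanity check of this one instance, not a proof of the example):

```python
from math import comb

n, c, r = 4, 2, 3
d = n - c                                   # = 2

def in_I(a, b, c3, c4):
    # I = (x1, x2) ∩ (x1^r, x2^r, x3^(r-1)) for these parameters
    return (a >= 1 or b >= 1) and (a >= r or b >= r or c3 >= r - 1)

def H(t):
    """Hilbert function of R/I: number of degree-t monomials outside I."""
    count = 0
    for a in range(t + 1):
        for b in range(t + 1 - a):
            for c3 in range(t + 1 - a - b):
                c4 = t - a - b - c3         # exponent of x4 is forced
                if not in_I(a, b, c3, c4):
                    count += 1
    return count

def P(t):
    # claimed Hilbert polynomial of R/I; here |e_{d-1}| = (r^c - 1)(r-1)^(d-1) = 16
    return comb(t + d - 1, d - 1) + (r ** c - 1) * (r - 1) ** (d - 1)

reg = (n - 1) * r - 2 * n + c + 2           # claimed regularity, = 5 here
for t in range(reg + 1, reg + 8):
    assert H(t) == P(t)
assert H(reg - 1) != P(reg - 1)             # the two disagree just below the regularity
```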
\section{Finiteness of Hilbert functions } \label{D}\smallskip
In this section we prove Theorem \ref{I1}. We need some further
preliminary results. The following result extends an estimation of $H_S(t)$ mentioned in Remark
\ref{C6b} to arbitrary ideals.
\begin{Lemma}\label{D1} Let $I$ be an arbitrary homogeneous ideal. Let
$$b = \min \{ \delta_1\cdots \delta_c,\ (\operatorname{adeg} S)^c\}.$$
For all $t\ge 0$ we have
$$H_S(t) \le (b-1){t+d-2\choose d-1} + {t+d-1\choose d-1}.$$
\end{Lemma}
\begin{pf} We may assume that $x_{c+1},...,x_n$ are chosen generically. In particular,
$x_{c+1},...,x_n$ form a s.o.p. of $S$. Set
$B = \ell(S/(x_{c+1},...,x_n)S)$. By [RVV, Proposition 2.4] for all $t\ge 0$ we have
$$H_S(t) \le (B -1){t+d-2\choose d-1} + {t+d-1\choose d-1}.$$
As shown in the proof of Theorem \ref{A4}, $B\le b$. Hence the
lemma is proven.
\end{pf}
\begin{Lemma}\label{D2} Assume that $K$ is an algebraically closed field, $I$
is an intersection of prime ideals and $I$ contains no linear
form. Then $c \le d(\operatorname{adeg} I -1)$.
\end{Lemma}
\begin{pf} By the assumption
$$ I = \cap_{i=1}^s {\frak p}_i,$$
where ${\frak p}_i$ are prime ideals of height at least $c$. Since $s
\le \operatorname{adeg} I$, the statement is derived from the following
inequality
$$c \le \operatorname{adeg} I - s + (s-1)d.$$
We prove this inequality by induction on $s$. The case $s=1$ is
well known. Let $s>1$. Put $J = \cap_{i=1}^{s-1} {\frak p}_i.$ Let $a$
and $b$ be the maximal number of independent linear forms
contained in $J$ and ${\frak p}_s$, respectively. By the induction hypothesis, we have
$c- a \le \operatorname{adeg} J - (s-1) + (s-2)d$ and $ c-b \le e(R/{\frak p}_s)-1$.
Since $\operatorname{adeg} I = \operatorname{adeg} J + e(R/{\frak p}_s)$, we get
$$2c \le \operatorname{adeg} I -s + (s-2)d + a+b.$$
If $a+b> n$, then there would be a linear form in $J \cap
{\frak p}_s = I$, a contradiction. Hence $a+b\le n = d+c$. The above
inequality then yields $c \le \operatorname{adeg} I - s + (s-1)d.$
\end{pf}
As mentioned in the introduction, Theorem \ref{I1} is an
immediate consequence of Theorem \ref{C4}, Lemma \ref{D2} and
[RTV2, Theorem 2.3]. We give here a direct proof without the use
of [RTV2].\vskip0.5cm
\noindent {\it Proof of Theorem \ref{I1}.} Without loss of
generality we may assume from the beginning that $I$ contains no
linear form. Note that $e \le \operatorname{adeg} S$. Therefore, by Proposition
\ref{C10} and Lemma \ref{D2}, there are only finitely many Hilbert
polynomials associated to reduced algebras such that $\operatorname{adeg} S \le
a$ and $\dim S \le d$. By Lemmas \ref{D1} and \ref{D2}, there are
only finitely many choices for the initial values of Hilbert
functions, while Theorem \ref{C4} says that for $t \ge
(\frac{e(e-1)}{2} + \operatorname{adeg} S)^{2^{d-2}}$ each Hilbert function
agrees with the corresponding Hilbert polynomial. This implies the
finiteness of the number of Hilbert functions. \hfill $\square$
\vskip0.5cm
Example \ref{C9} shows that without the assumption that $S$ is a
reduced ring, Theorem \ref{I1} does not hold.
Applying Proposition \ref{E7} and Theorem \ref{E1}, as in the
proof of Theorem \ref{I1}, we get a similar finiteness result in
terms of the defining degrees.
\begin{Corollary}\label{E8} Given two numbers $\delta$ and $n$, there exist
only finitely many Hilbert functions associated to homogeneous ideals generated
by forms of degrees at most $\delta$ in at most $n$ variables.
\end{Corollary}
\section{Castelnuovo-Mumford regularity of initial ideals} \label{F}\smallskip
In this last section we apply results in the previous sections to
study the Castelnuovo-Mumford regularity of an initial ideal $\operatorname{in}
(I)$ of $I$ with respect to any given term order and coordinates. We
even consider a much bigger class: the class of all ideals $J$
having the same Hilbert function as $I$. Then one can easily
bound $\operatorname{reg} J$ in terms of some data of $I$. This approach was
initiated in [CM] and developed further in [HH]. Let us recall
some notation. The Hilbert polynomial can be uniquely written in
the form
$$P_{R/I}(t)= {c_1+ t \choose t} + {c_2+ t-1 \choose t-1}+ \cdots + {c_s+ t-s+1 \choose t-s+1},$$
where $c_1\geq c_2 \ge \cdots \ge c_s \geq 0$ are integers (see,
e.g., [V, Section B6]). For $0\le i \le d-1$ set
$$B_i = \sharp\{j;\ c_j \ge (d-1)- i\}.$$
Thus in the above notations, $s= B_{d-1}$ (for convenience, we set
$B_{-1} = 0$). The following result easily follows from Gotzmann's
regularity theorem:
\begin{Lemma}\label{F1} {\rm [HH, Lemma 5]} Let $I,J$ be homogeneous ideals having
the same Hilbert function. Then
$$\operatorname{reg} J \le \max\{ \operatorname{reg} I,\ B_{d-1}\}.$$
\end{Lemma}
Since we already know bounds for $\operatorname{reg}(I)$ (see Theorems \ref{C4}
and \ref{E1}), we have only to estimate $B_{d-1}$. For this
purpose we need some relations between the invariants $B_i$ just
defined and the Hilbert coefficients which were given in [Bl,
Proposition 3.9] (see also [CM, Lemma 1.5]).
\begin{Lemma}\label{F2} For all $0 \le i\le d-1$ we have
$$B_i = (-1)^ie_i + {B_{i-1}+1 \choose 2} - {B_{i-2}+1 \choose 3} + \cdots +
(-1)^{i+1}{B_{0}+1 \choose i+1}.$$
\end{Lemma}
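For instance, for the Hilbert polynomial $P(t)=t+17$ (the case $n=4$, $c=2$, $r=3$ of Example \ref{B4}, where $d=2$, $e_0=1$, $e_1=-16$), the representation above has $c_1=1$ and $c_2=\cdots=c_{17}=0$, and Lemma \ref{F2} can be checked directly (a numerical illustration; the representation is our own computation):

```python
from math import comb

d = 2
# Gotzmann representation of P(t) = t + 17: c_1 = 1, c_2 = ... = c_17 = 0, s = 17
cs = [1] + [0] * 16

def P_rep(t):
    # sum of C(c_j + t - j + 1, t - j + 1); valid once t >= s - 1
    return sum(comb(c_j + t - j + 1, c_j) for j, c_j in enumerate(cs, 1))

for t in range(len(cs) - 1, len(cs) + 20):
    assert P_rep(t) == t + 17

B = [sum(1 for c_j in cs if c_j >= (d - 1) - i) for i in range(d)]
assert B == [1, 17]                         # B_0 = e = 1, B_{d-1} = s = 17

e1 = -16                                    # Hilbert coefficient of P
assert B[1] == -e1 + comb(B[0] + 1, 2)      # Lemma F2 with i = 1
```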
Note that $B_{d-1} \ge \cdots \ge B_0 = e$. In order to estimate
$B_j$, we need the following combinatorial result.
\begin{Lemma}\label{F3} Assume that
$$|e_i| \le M^{\alpha + i\beta 2^i} \ \ \text{for\ all\ } i\ge 0,$$
where $M\ge 2$ and $\alpha, \beta \ge 1$. Then for all $0 \le j
\le d-1$ we have
$$B_j \le M^{(\alpha+j\beta)2^j}.$$
\end{Lemma}
\begin{pf} We have $B_0 = e = e_0 \le M^\alpha$ by the assumption. By Lemma
\ref{F2}
the following holds
$$B_1 = -e_1 + {B_0 +1 \choose 2} \le |e_1| + \frac{e(e+1)}{2}
< M^{\alpha + 2\beta} + M^{2\alpha} \le M^{2(\alpha + \beta)}\ \
(\text{since}\ M\ge 2).$$
Let $j\ge 2$. Assume that
\begin{eqnarray} \label{EF1} B_{j-l} \le M^{(\alpha+(j-l)\beta)2^{j-l}}
\end{eqnarray}
for all $l\ge 1$. Lemma \ref{F2} yields:
\begin{eqnarray}
B_j &=& (-1)^je_j + {B_{j-1}+1 \choose 2} - {B_{j-2}+1 \choose 3} + \cdots +
(-1)^{j+1}{B_{0}+1 \choose j+1} \nonumber\\
& \le & |e_j| + {B_{j-1}+1 \choose 2} + {B_{j-3}+1 \choose 4} +
\cdots \label{EF2}
\end{eqnarray}
By (\ref{EF1}), for all $1\le l\le j$ we have
\begin{eqnarray} \label{EF4}
{B_{j-l}+1 \choose l+1} \le \frac{(B_{j-l}+1)^{l+1}}{(l+1)!} \le
\frac{(B_{j-l}+1)^{2^l}}{(l+1)!} \le
\frac{M^{(\alpha+j\beta)2^j}}{(l+1)!}.
\end{eqnarray}
From (\ref{EF2}), (\ref{EF4}) and the assumption $|e_j| \le
M^{\alpha + j\beta 2^j}$ it follows that
$$\begin{array}{ll}
B_j & \le M^{\alpha +j\beta2^j} + M^{(\alpha+j\beta)2^j} \{ \frac{1}{2!} +
\frac{1}{4!} + \cdots \}\\
&< M^{\alpha +j\beta2^j} + \frac{2}{3} M^{(\alpha+j\beta)2^j}\\
& \le M^{(\alpha+j\beta)2^j}.
\end{array}$$
\end{pf}
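As a numerical illustration of Lemma \ref{F3} (not a proof), one can run the recursion of Lemma \ref{F2} with coefficients that saturate the hypothesis, the signs chosen adversarially so that $(-1)^je_j$ is always positive:

```python
from math import comb

M, alpha, beta = 2, 1, 1
# coefficients saturating |e_i| <= M^(alpha + i*beta*2^i), adversarial signs
e = [(-1) ** i * M ** (alpha + i * beta * 2 ** i) for i in range(6)]

B = []
for j in range(6):
    # the recursion of Lemma F2
    B_j = (-1) ** j * e[j] + sum((-1) ** (l + 1) * comb(B[j - l] + 1, l + 1)
                                 for l in range(1, j + 1))
    assert B_j <= M ** ((alpha + j * beta) * 2 ** j)   # the bound of Lemma F3
    B.append(B_j)
```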
By Macaulay's theorem $H_{R/\operatorname{in} I}(t) = H_{R/ I}(t)$ for all $t
\in \Bbb Z$. Hence, Theorem \ref{I2} stated in the introduction is
a special case of the following result.
\begin{Theorem}\label{F4} Let $K$ be an arbitrary field. Let $J$ be an arbitrary
homogeneous ideal of $ R=K[x_1,...,x_n]$ such that $H_{R/J}(t) = H_{R/I}(t)$ for all $t$. Then
\begin{itemize}
\item[(i)] $\operatorname{reg}(J) \le (\frac{3}{2} \Delta^c + \Delta)^{d2^{d-1}}.$ \item[(ii)]
Moreover, if $R/I$ is a reduced algebra, then we also have
$$\operatorname{reg}(J) \le (\operatorname{adeg}(I))^{(n-1)2^{d-1}}.$$
\end{itemize}
\end{Theorem}
\begin{pf}
(i) By Proposition \ref{E7}, $|e_i| < (\frac{3}{2}\Delta^c + \Delta)^{1+i2^i}$ for
all $i\ge 0$. Applying Lemma \ref{F3} to $M= \frac{3}{2}\Delta^c + \Delta,\ \alpha
=1,\ \beta = 1$ and $j=d-1$, we get $B_{d-1}\le
(\frac{3}{2}\Delta^c + \Delta)^{d2^{d-1}}.$ Then (i) follows from Lemma \ref{F1} and
Theorem \ref{E1}.
(ii) For short, set $a = \operatorname{adeg} I$. Note that $a \ge e$ and
\begin{eqnarray} \label{EF5}
\frac{e(e-1)}{2} + a \le a^2.
\end{eqnarray}
Hence, by Proposition \ref{C10}(i)
$$|e_1| \le a ^{c+2}.$$
Let $i \ge 2$. Since $\Delta\ge 2$, $a \ge 2$. By Proposition
\ref{C10}(ii) and (\ref{EF5}), we have
$$\begin{array}{ll}
|e_i| &\le \frac{3}{2}a^c (\frac{a(a+1)}{2})^{i2^{i-1}} = a^c
(\frac{a(a+1)}{2})^{i2^{i-1}-4}\cdot [\frac{3}{2}(\frac{a(a+1)}{2})^4]\\
& \le a^{c + 2(i2^{i-1}-4)} a^8 = a^{c+i2^i}.
\end{array}$$
Thus, applying Lemma \ref{F3} to $M=a,\ \alpha = c,\ \beta =1$ and
$j = d-1$, we get $B_{d-1} \le a^{(n-1)2^{d-1}}$. By Lemma \ref{F1}
and Theorem \ref{C4} this implies $\operatorname{reg} J\le a^{(n-1)2^{d-1}}$.
\end{pf}
Note that if $R/I$ is a Cohen-Macaulay ring of dimension $d\ge 2$
(but not necessarily reduced), then one can get a slightly
better bound (see [HH, Theorem 9]):
$$\operatorname{reg} J \le e^{2^{d-1}}/ 2^{2^{d-2}}.$$
\begin{Example}\label{F5} {\rm Let $I^{lex} $ denote the lex-segment ideal
associated to the Hilbert function $H_{R/I}(t)$. This is the ideal
generated by the first $H_{I}(m)$ monomials of degree $m$ with
respect to the lexicographic order, when $m$ runs through all
positive integers. It has the same Hilbert function as $I$. If
$R/I$ is a Cohen-Macaulay ring of dimension $d\ge 2$, then from
[CM, Theorem 2.5] it follows that $\operatorname{reg} (I^{lex} ) = B_{d-1}$.\par
i) Let $I$ be an ideal generated by a regular sequence consisting
of forms of degrees $\delta_1 \ge \cdots \ge \delta_c$ such that
$c\ge 2$ and $\delta_2\ge 35$ ($d\ge 2$). It was shown in [HH,
Example 13] that
$$\operatorname{reg} (I^{lex} ) \ge 9\frac{\Delta^{c2^{d-1}}}{9^{2^{d-2}}}.$$
This shows that the bound in Theorem \ref{F4}(i) is close to being sharp.
ii) Let $S$ be a Veronese subring $K[y_1,...,y_d]^{(p)}$, i.e.
$S_1$ is generated by all monomials of degree $p$ in the variables
$y_1,...,y_d$, where $ d\ge 3$. This is a Cohen-Macaulay domain
and $P_S(t) = {pt +d-1 \choose d-1}$. Hence $\operatorname{adeg} S = e= p^{d-1}$
and $e_1 = dp^{d-2}(p-1)/2$. Let $p\ge 35$. Then $e_1 < e^2/36$ and
$e\ge 35^2$. Let $S= K[x_1,...,x_q]/I$, where $q = {p+d-1\choose
d-1}$. By [HH, Proposition 12] we get
$$\operatorname{reg} (I^{lex} ) \ge 9\frac{e^{2^{d-1}}}{9^{2^{d-2}}}.$$
This shows that the bound in the second part of Theorem \ref{F4}
is close to being sharp as well. }
\end{Example}
Since $\operatorname{reg} (\operatorname{in} I) \ge \operatorname{reg} I$, the ideals of Mayr and Meyer
again show that the bound $(2\Delta^c)^{d2^{d-1}}$ of Theorem
\ref{I2} is rather good (see Remark \ref{E6}). We do not know
whether one can construct a reduced algebra $R/I$ such that there
is a term order with $\operatorname{reg}(\operatorname{in} I)$ close to
$(\operatorname{adeg}(I))^{(n-1)2^{d-1}}$.
Finally we would like to make the following remark: In the proof
of Theorem \ref{I2} we used very rough estimates for $\operatorname{reg} I$ and
$|e_i|$. One might suspect that if $\operatorname{reg} I$ and $|e_i|$ are small,
then one could get a bound for $\operatorname{reg}(\operatorname{in} I)$ which is singly
exponential in $d$. But this is not the case, as shown by [HH,
Section 4].
\section*{References}\smallskip
\begin{itemize}
\item[[BM]] D. Bayer and D. Mumford, {\it What can be computed in
algebraic geometry?} Computational algebraic geometry and
commutative algebra (Cortona, 1991), 1--48, Sympos. Math., XXXIV,
Cambridge Univ. Press, Cambridge, 1993; MR 95d:13032.
\item[[BS]] D. Bayer and M. Stillman, {\it A criterion for
detecting $m$-regularity}, Invent. Math. {\bf 87}(1987), no. 1,
1--11; MR 87k:13019.
\item[[BEL]] A. Bertram, L. Ein and R. Lazarsfeld, {\it Vanishing
theorems, a theorem of Severi, and the equations defining
projective varieties}, J. Amer. Math. Soc. {\bf 4}(1991),
587--602; MR 92g:14014.
\item[[Bl]] C. Blancafort, {\it Hilbert functions of
graded algebras over Artinian rings}, J. Pure Appl. Algebra {\bf
125}(1998), no. 1-3, 55--78; MR 98m:13023.
\item[[BrL]] M. P. Brodmann and A. F. Lashgari, {\it A diagonal
bound for cohomological postulation numbers of projective
schemes,} J. Algebra {\bf 265}(2003), 631--650; MR 2004f:14030.
\item[[BrS]] M. P. Brodmann and R. Y. Sharp, Local cohomology:
an algebraic introduction with geometric applications. Cambridge
Studies in Advanced Mathematics, 60. Cambridge University Press,
Cambridge, 1998; MR 99h:13020.
\item[[CS]] G. Caviglia and E. Sbarra, {\it Characteristic-free
bounds for Castelnuovo-Mumford regularity}, Compos. Math. 141 (2005), no. 6, 1365--1373; MR
2006i:13032.
\item[[CF]] M. Chardin and A. L. Fall, {\it Sur la r\'egularit\'e de Castelnuovo-Mumford de id\'eaux
en dimension 2}, C. R. Math. Acad. Sci. Paris 341 (2005), no. 4, 233--238; MR 2164678.
\item[[CM]] M. Chardin and G. Moreno-Socias, {\it Regularity of
lex-segment ideals: some closed formulas and applications},
Proc. Amer. Math. Soc. {\bf 131}(2003), no. 4, 1093--1102; MR 2003m:13014.
\item[[E]] D. Eisenbud, Commutative algebra. With a view toward
algebraic geometry. Graduate Texts in Mathematics, 150.
Springer-Verlag, New York, 1995; MR 97a:13001.
\item[[EG]] D. Eisenbud and S. Goto, {\it Linear free resolutions
and minimal multiplicity}, J. Algebra {\bf 88}(1984), no. 1,
89--133; MR 85f:13023.
\item[[FOV]] H. Flenner, L. O'Carroll and W. Vogel, Joins and
intersections. Springer Monographs in Mathematics.
Springer-Verlag, Berlin, 1999; MR 2001b:14010.
\item[[Gi]] M. Giusti, {\it Some effectivity problems in
polynomial ideal theory.} EUROSAM 84 (Cambridge, 1984), 159--171,
Lecture Notes in Comput. Sci., {\bf 174}, Springer, Berlin, 1984;
MR 86d:12001.
\item[[GLP]] L. Gruson, R. Lazarsfeld and C. Peskine, {\it On a
theorem of Castelnuovo, and the equations defining space curves},
Invent. Math. {\bf 72}(1983), no. 3, 491--506; MR 85g:14033.
\item[[HPV]] J. Herzog, D. Popescu and M. Vladoiu, {\it On
the Ext-modules of ideals of Borel type}, Commutative algebra
(Grenoble/Lyon, 2001), 171--186, Contemp. Math., 331, Amer. Math.
Soc., Providence, RI, 2003; MR 2013165.
\item[[HH]] L. T. Hoa and E. Hyry, {\it Castelnuovo-Mumford
regularity of initial ideals}, J. Symb. Comp. {\bf 38}(2004), 1327--1341; MR 2168718.
\item[[HSV]] L. T. Hoa, J. St\"uckrad and W. Vogel, {\it Towards
a structure theory for projective varieties of
degree = codimension + 2}, J. Pure Appl. Algebra
{\bf 71}(1991), 203--231; MR 92f:14002.
\item[[HT]] L. T. Hoa and N. V. Trung, {\it On the
Castelnuovo-Mumford regularity and the arithmetic degree of
monomial ideals,} Math. Z. {\bf 229}(1998), no. 3, 519--537; MR
99k:13034.
\item[[K]] S. L. Kleiman, {\it Les th\'eor\`emes de finitude
pour le foncteur de Picard} , in: "Th\'eorie des intersections et
th\'eor\`eme de Riemann-Roch" (French) S\'eminaire de
G\'eom\'etrie Alg\'ebrique du Bois-Marie 1966--1967 (SGA 6), pp.
616--666. Lecture Notes in Mathematics, Vol. {\bf 225}.
Springer-Verlag, Berlin-New York, 1971; MR 50 $\sharp$7133.
\item[[MVY]] C. Miyazaki, W. Vogel and K. Yanagawa, {\it
Associated primes and arithmetic degrees}, J. Algebra {\bf
192}(1997), no. 1, 166--182; MR 98i:13036.
\item[[MM]] H. M. M\"oller and F. Mora, {\it Upper and lower bounds for the degree of Gr\"obner
bases}, EUROSAM 84 (Cambridge, 1984), 172--183,
Lecture Notes in Comput. Sci., {\bf 174}, Springer, Berlin, 1984;
MR 86k:13008.
\item[[M]] D. Mumford, Lectures on curves on an algebraic surface, Princeton Univ. Press,
Princeton 1966; MR 35$\sharp$187.
\item[[RTV1]] M. E. Rossi, N. V. Trung and G. Valla, {\it
Castelnuovo-Mumford regularity and extended degree,} Trans. Math.
Amer. Soc. {\bf 355}(2003), no. 5, 1773--1786; MR 2004b:13020.
\item[[RTV2]] M. E. Rossi, N. V. Trung and G. Valla, {\it
Castelnuovo-Mumford regularity and finiteness of Hilbert
functions,} Lect. Notes Pure Appl. Math., 244, Chapman \& Hall/CRC, Boca Raton, FL, 2006; MR
2184798.
\item[[RVV]] M. E. Rossi, G. Valla and W. V. Vasconcelos, {\it
Maximal Hilbert functions,} Results Math. {\bf 39}(2001), no.
1-2, 99--114; MR 2001m:13020.
\item[[S]] E. Sbarra, {\it Upper bounds for local
cohomology for rings with given Hilbert function}, Comm. Algebra
{\bf 29}(2001), no. 12, 5383--5409; MR 2002j:13024.
\item[[Sj]] R. Sj\"ogren, {\it On the regularity of graded $k$-algebras of Krull dimension $\le 1$},
Math. Scand. {\bf 71}(1992), 167--172; MR 94b:13010.
\item[[V]] W. V. Vasconcelos, Computational methods in
commutative algebra and algebraic geometry. With chapters by D.
Eisenbud, D. R. Grayson, J. Herzog and M. Stillman. Algorithms and
Computation in Mathematics, 2. Springer-Verlag, Berlin, 1998; MR
99c:13048.
\end{itemize}
\end{document}
| {
"timestamp": "2006-08-04T07:30:14",
"yymm": "0608",
"arxiv_id": "math/0608118",
"language": "en",
"url": "https://arxiv.org/abs/math/0608118",
"abstract": "Bounds for the Castelnuovo-Mumford regularity and Hilbert coefficients are given in terms of the arithmetic degree (if the ring is reduced) or in terms of the defining degrees. From this it follows that there exists only a finite number of Hilbert functions associated to reduced algebras over an algebraically closed field with a given arithmetic degree and dimension. A good bound is also given for the Castelnuovo-Mumford regularity of initial ideals which depends neither on term orders nor on the coordinates, and holds for any field.",
"subjects": "Commutative Algebra (math.AC); Algebraic Geometry (math.AG)",
"title": "Finiteness of Hilbert functions and bounds for Castelnuovo-Mumford regularity of initial ideals"
} |
https://arxiv.org/abs/2012.10350 | Scaling Laws for Gaussian Random Many-Access Channels | This paper considers a Gaussian multiple-access channel with random user activity where the total number of users $\ell_n$ and the average number of active users $k_n$ may grow with the blocklength $n$. For this channel, it studies the maximum number of bits that can be transmitted reliably per unit-energy as a function of $\ell_n$ and $k_n$. When all users are active with probability one, i.e., $\ell_n = k_n$, it is demonstrated that if $k_n$ is of an order strictly below $n/\log n$, then each user can achieve the single-user capacity per unit-energy $(\log e)/N_0$ (where $N_0/2$ is the noise power) by using an orthogonal-access scheme. In contrast, if $k_n$ is of an order strictly above $n/\log n$, then the capacity per unit-energy is zero. Consequently, there is a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate per unit-energy is infeasible. It is further demonstrated that orthogonal-access schemes in combination with orthogonal codebooks, which achieve the capacity per unit-energy when the number of users is bounded, can be strictly suboptimal. When the user activity is random, i.e., when $\ell_n$ and $k_n$ are different, it is demonstrated that if $k_n\log \ell_n$ is sublinear in $n$, then each user can achieve the single-user capacity per unit-energy $(\log e)/N_0$. Conversely, if $k_n\log \ell_n$ is superlinear in $n$, then the capacity per unit-energy is zero. Consequently, there is again a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate is infeasible that depends on the asymptotic behaviours of both $\ell_n$ and $k_n$. It is further demonstrated that orthogonal-access schemes, which are optimal when $\ell_n = k_n$, can be strictly suboptimal.
| \section{Capacity per Unit-Energy of Non-Random Many-Access Channels}
\label{Sec_nonrandom}
\input{nonrandom.tex}
\input{random.tex}
\input{discussion.tex}
\section{Conclusion}
\label{Sec_conclusion}
In this work, we analyzed scaling laws
of a Gaussian random \edit{MnAC} where the total number of users as well as the average number of active users may grow with the blocklength. In particular, we characterized the behaviour of the capacity per unit-energy as a function of the order of growth of the number of users for two notions of probability of error: the classical JPE and the APE proposed by Polyanskiy in~\cite{Polyanskiy17}. For both cases, we demonstrated that there is a sharp transition between orders of growth where all users can achieve the single-user capacity per unit-energy and orders of growth where no positive rate per unit-energy is feasible.
When all users are active with probability one, we showed that the transition threshold separating the two regimes is at the order of growth $n/\log n$ for JPE, and at the order of growth $n$ for APE. While the qualitative behaviour of the capacity per unit-energy remains the same in both cases, there are some interesting differences between JPE and APE in some other aspects. For example, we showed that an orthogonal-access scheme together with orthogonal codebooks is optimal for APE, but it is suboptimal for JPE. Furthermore, when the number of users is unbounded in $n$, the strong converse holds for JPE, but it does not hold for APE. For MnACs where the number of users grows linearly in $n$ and APE---the most common assumptions in the literature---our results imply that a positive rate per unit-energy is infeasible if we require the APE to vanish asymptotically. In contrast, due to the absence of a strong converse, a positive $\epsilon$-rate per unit-energy is feasible. To this end, however, it is necessary that the energy $E_n$ and the payload $\log M_n$ are bounded in $n$.
For the case of random user activity and JPE, we characterized the behaviour of the capacity per unit-energy in terms of the total number of users $\ell_n$ and the average number of active users $k_n$. We showed that, if $k_n\log \ell_n$ is sublinear in $n$, then all users can achieve the single-user capacity per unit-energy, and if $k_n \log \ell_n$ is superlinear in $n$, then the capacity per unit-energy is zero. Consequently, there is again a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where no positive rate per unit-energy is feasible, and the transition threshold separating these two regimes depends in this case on the orders of growth of both $\ell_n$ and $k_n$.
\emph{Inter alia}, this result recovers our characterization of the non-random-access case ($\alpha_n=1$), since $k_n \log k_n = \Theta(n)$ is equivalent to $k_n = \Theta(n/\log n)$.
Our result further implies that
the orders of growth of $\ell_n$ for which interference-free communication is feasible are in general larger than $n/\log n$, and the orders of growth of $k_n$ for which interference-free communication is feasible \edit{may be} smaller than $n/\log n$. This suggests that treating a random MnAC with total number of users $\ell_n$ and average number of active users $k_n$ as a non-random MnAC with $k_n$ users \edit{may be} overly optimistic.
We finally showed that, under JPE, orthogonal-access schemes achieve the single-user capacity per unit-energy when the order of growth of $\ell_n$ is strictly below $n/ \log n$, and they cannot achieve a positive rate per unit-energy when the order of growth of $\ell_n$ is strictly above $n/\log n$, irrespective of the behaviour of $k_n$. Intuitively, by using an orthogonal-access scheme, we treat the random MnAC as if it were non-random. We conclude that orthogonal-access schemes are optimal when all users are active with probability one. However, for general $\alpha_n$, non-orthogonal-access schemes are necessary to achieve the capacity per unit-energy.
\begin{appendices}
\input{appendix.tex}
\end{appendices}
\section{Proof of Lemma~\ref{Lem_energy_infty}}
\label{Sec_energy_infty}
The probability of error of the Gaussian MnAC cannot be smaller than that of the Gaussian point-to-point channel. Indeed, suppose a genie informs the receiver about all transmitted codewords except that of user $i$. Then the receiver can subtract the known codewords from the received vector, resulting in a point-to-point Gaussian channel. Since access to additional information does not increase the probability of error, the claim follows.
We next note that, for a Gaussian point-to-point channel, any $(n,M_n, E_n, \epsilon)$-code satisfies~\cite[Th.~2]{PolyanskiyPV11}
\begin{align}
\frac{1}{M_n} \geq Q\left(\sqrt{\frac{2E_n}{N_0}}+Q^{-1}(1-\epsilon)\right). \label{Eq_finite_energy}
\end{align}
Solving~\eqref{Eq_finite_energy} for $\epsilon$ yields
\begin{align}
\epsilon &\geq 1 - Q\left(Q^{-1}\left(\frac{1}{M_n}\right) - \sqrt{\frac{2E_n}{N_0}}\right). \nonumber
\end{align}
It follows that the probability of error tends to zero as $n \to \infty$ only if $Q^{-1}\left(1/M_n\right) - \sqrt{\frac{2E_n}{N_0}} \rightarrow -\infty$.
Since $Q^{-1}\left(1/M_n\right) \geq 0$ for $M_n\geq 2 $, this in turn is only the case if
$E_n \rightarrow \infty$. This proves Lemma~\ref{Lem_energy_infty}.
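As a numerical sanity check of this argument (a sketch with helper names of our own; $Q^{-1}$ is computed by bisection), the following snippet evaluates the lower bound $1 - Q\bigl(Q^{-1}(1/M_n) - \sqrt{2E_n/N_0}\bigr)$:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Q_inv(p):
    """Inverse of Q via bisection (Q is strictly decreasing)."""
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def eps_lower_bound(M, E, N0=1.0):
    """Converse bound: any code with M messages and energy E for the Gaussian
    point-to-point channel satisfies eps >= 1 - Q(Q^{-1}(1/M) - sqrt(2E/N0))."""
    return 1.0 - Q(Q_inv(1.0 / M) - math.sqrt(2.0 * E / N0))
```

For fixed energy the bound approaches one as $M$ grows, while for fixed $M$ it decays as $E$ grows; vanishing error with a growing number of messages therefore forces $E_n \to \infty$, as claimed.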
\section{Proof of Lemma~\ref{Lem_ortho_code}}
\label{Sec_AWGN_ortho_code}
The upper bounds on the probability of error presented in~\eqref{Eq_orth_sinlg_uppr1} and \eqref{Eq_orth_sinlg_uppr2} are proved in Appendix~\ref{Sec_uppr}. The lower bounds are proved in Appendix~\ref{Sec_lowr}.
\subsection{Upper bounds}
\label{Sec_uppr}
An upper bound on the probability of error for $M$ orthogonal codewords of maximum energy $E$ can be found in~\cite[Sec.~2.5]{ViterbiO79}:
\begin{align}
P_{e,1} & \leq (M-1)^{\rho} \exp\left[-\frac{E}{N_0} \left (\frac{\rho}{\rho+1}\right)\right] \notag\\
& \leq \exp\left[-\frac{E}{N_0} \left(\frac{\rho}{\rho+1}\right)+ \rho \ln M\right], \quad \mbox{for } 0 \leq \rho\leq 1.\label{Eq_enrgy_AWGN}
\end{align}
For the rate per unit-energy $\dot{R} = \frac{ \log M}{E}$, it follows from~\eqref{Eq_enrgy_AWGN}
that
\begin{align}
P_{e,1} & \leq \exp[-E E_0(\rho, \dot{R})], \quad \mbox{for } 0 \leq \rho \leq 1 \label{Eq_achvbl_err_exp}
\end{align}
where
\begin{align}
E_0(\rho, \dot{R}) & \triangleq \left(\frac{1}{N_0} \frac{\rho}{\rho+1} -\frac{\rho \dot{R}}{\log e}\right). \label{Eq_err_exp}
\end{align}
When $ 0 < \dot{R} \leq \frac{1}{4} \frac{\log e}{N_0} $, the maximum of $E_0(\rho, \dot{R})$ over all $0 \leq \rho \leq 1$ is achieved for $\rho=1$.
When $\frac{1}{4} \frac{\log e}{N_0} \leq \dot{R} \leq \frac{\log e}{N_0}$, the maximum of $ E_0(\rho, \dot{R}) $ is achieved for $\rho = \sqrt{ \frac{\log e}{N_0} \frac{1}{\dot{R}} } -1 \in[0,1]$.
So we have
\begin{align}
\max_{0 \leq \rho \leq 1 } E_0(\rho, \dot{R}) =
\begin{cases}
\frac{1}{2 N_0} - \frac{\dot{R}}{\log e}, & 0 < \dot{R} \leq \frac{1}{4} \frac{\log e}{N_0} \\
\left(\sqrt{\frac{1}{N_0}} - \sqrt{ \frac{\dot{R}}{\log e}}\right)^2, & \frac{1}{4} \frac{\log e}{N_0} \leq \dot{R} \leq \frac{\log e}{N_0} .
\end{cases}
\label{Eq_err_exp_achv}
\end{align}
Since $E = \frac{\log M}{\dot{R}}$, we obtain from \eqref{Eq_achvbl_err_exp} and \eqref{Eq_err_exp_achv} that
\begin{align*}
P_{e,1} \leq
& \exp\left[- \frac{\ln M}{\dot{R}}\left(\frac{\log e}{2 N_0} - \dot{R} \right) \right], \quad \text{if } 0 < \dot{R} \leq \frac{1}{4} \frac{\log e}{N_0}
\end{align*}
and
\begin{align*}
P_{e,1} \leq & \exp\left[- \frac{\ln M}{\dot{R}} \left(\sqrt{\frac{\log e}{N_0}} - \sqrt{ \dot{R}}\right)^2\right], \quad \text{if } \frac{1}{4} \frac{\log e}{N_0} \leq \dot{R} \leq \frac{\log e}{N_0}.
\end{align*}
This proves the upper bounds on the probability of error in~\eqref{Eq_orth_sinlg_uppr1} and \eqref{Eq_orth_sinlg_uppr2}.
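The case distinction in \eqref{Eq_err_exp_achv} can be verified numerically. The following sketch (helper names ours; $N_0 = 1$ for illustration) compares the closed-form maximum with a brute-force grid search over $\rho$:

```python
import math

N0 = 1.0                    # illustrative noise level
LOG2E = math.log2(math.e)   # "log e" in the paper's base-2 notation

def E0(rho, Rdot):
    """Exponent E0(rho, Rdot) = (1/N0) * rho/(rho+1) - rho * Rdot / log e."""
    return (1.0 / N0) * rho / (rho + 1.0) - rho * Rdot / LOG2E

def E0_max(Rdot):
    """Closed-form maximum of E0 over 0 <= rho <= 1."""
    if Rdot <= 0.25 * LOG2E / N0:
        # low rates: the maximizer is rho = 1
        return 1.0 / (2.0 * N0) - Rdot / LOG2E
    # higher rates: interior maximizer rho = sqrt(log e / (N0 * Rdot)) - 1
    return (math.sqrt(1.0 / N0) - math.sqrt(Rdot / LOG2E)) ** 2

def E0_max_grid(Rdot, steps=10000):
    """Brute-force check: maximize E0 over a grid of rho in [0, 1]."""
    return max(E0(k / steps, Rdot) for k in range(steps + 1))
```

For any rate per unit-energy in $(0, (\log e)/N_0]$, the two agree to within the grid resolution.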
\subsection{Lower bounds}
\label{Sec_lowr}
To prove the lower bounds on the probability of error presented in~\eqref{Eq_orth_sinlg_uppr1} and~\eqref{Eq_orth_sinlg_uppr2}, we first argue that, for an orthogonal codebook, the optimal probability of error is achieved by codewords of equal energy. Then, for any given $\dot{R}$ and an orthogonal codebook where all codewords have equal energy, we derive the lower bound in~\eqref{Eq_orth_sinlg_uppr2}, which is optimal at high rates. We further obtain an improved lower bound on the probability of error for low rates.
Finally, we obtain the lower bound in~\eqref{Eq_orth_sinlg_uppr1} by showing that a suitable combination of these two lower bounds is again a lower bound on the probability of error.
\subsubsection{Equal-energy codewords are optimal}
We shall argue that, for an orthogonal code with energy upper-bounded by $E_n$, there is no loss in optimality in assuming that all codewords have energy $E_n$. To this end,
we first note that, without loss of generality, we can restrict ourselves to codewords of the form
\begin{align}
{\bf x}_m = (0,\ldots, \sqrt{E_{{\bf x}_m}},\ldots,0), \quad m=1, \ldots, M \label{Eq_ortho_code}
\end{align}
where $E_{{\bf x}_m}\leq E_n$ denotes the energy of codeword ${\bf x}_m$. Indeed, any orthogonal codebook can be multiplied by an orthogonal matrix to obtain this form. Since the additive Gaussian noise ${\bf Z}$ is zero mean and has a diagonal covariance matrix, this does not change the probability of error.
To argue that equal energy codewords are optimal, let us consider a code $\mbox{$\cal{C}$}$ for which some codewords have energy strictly less than $E_n$. From $\mbox{$\cal{C}$}$, we can construct a new code $\mbox{$\cal{C}$}'$ by multiplying each codeword ${\bf x}_m$ by $\sqrt{E_n/E_{{\bf x}_m}}$. Clearly, each codeword in $\mbox{$\cal{C}$}'$ has energy $E_n$. Let ${\bf Y}$ and ${\bf Y}'$ denote the channel outputs when we transmit codewords from $\mbox{$\cal{C}$}$ and $\mbox{$\cal{C}$}'$, respectively, and let $P_e(\mbox{$\cal{C}$})$ and $P_e(\mbox{$\cal{C}$}')$ denote the corresponding minimum probabilities of error. By multiplying each dimension of the channel output ${\bf Y}'$ by $\sqrt{E_{{\bf x}_m}/E_n}$ and adding independent Gaussian noise of zero mean and variance $\frac{N_0}{2}\bigl(1-E_{{\bf x}_m}/E_n\bigr)$, we can construct a new channel output $\tilde{{\bf Y}}$ that has the same distribution as ${\bf Y}$. Consequently, $\mbox{$\cal{C}$}'$ can achieve the same probability of error as $\mbox{$\cal{C}$}$ by applying the decoding rule of $\mbox{$\cal{C}$}$ to $\tilde{{\bf Y}}$. It follows that $P_e(\mbox{$\cal{C}$}')\leq P_e(\mbox{$\cal{C}$})$. We conclude that, in order to find lower bounds on the probability of error, it suffices to consider codes whose codewords all have energy $E_n$.
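The variance bookkeeping behind this reduction can be checked directly. The following sketch (with illustrative values for $N_0$, $E_n$, and $E_{{\bf x}_m}$) verifies that rescaling the boosted output and adding an independent Gaussian of variance $\frac{N_0}{2}(1-E_{{\bf x}_m}/E_n)$ restores both the signal and the original noise level:

```python
import math

# Illustrative values: noise variance per dimension is N0/2; a codeword of
# energy E_x is boosted to energy E_n in the code C' (E_x <= E_n).
N0, E_n, E_x = 2.0, 9.0, 4.0

scale = math.sqrt(E_x / E_n)               # rescale Y' by sqrt(E_x/E_n)

# Signal component: sqrt(E_n/E_x) * x rescaled by sqrt(E_x/E_n) gives back x.
signal_gain = math.sqrt(E_n / E_x) * scale

# Noise component: variance (N0/2)*(E_x/E_n) after rescaling; adding an
# independent Gaussian with variance (N0/2)*(1 - E_x/E_n) restores N0/2.
var_rescaled = (N0 / 2.0) * (E_x / E_n)
var_added = (N0 / 2.0) * (1.0 - E_x / E_n)
```

Since `signal_gain` is exactly one and `var_rescaled + var_added` equals $N_0/2$, the constructed output $\tilde{{\bf Y}}$ indeed has the distribution of ${\bf Y}$.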
\subsubsection{High-rate lower bound}
\edit{We next derive lower bound \eqref{Eq_orth_sinlg_uppr2}, which applies to high rates per unit-energy. To obtain this bound,} we follow the analysis given in~\cite{ShannonGB67} (see also~\cite[Sec.~3.6.1]{ViterbiO79}). \edit{To this end}, we shall first derive a lower bound on the maximum probability of error
\begin{align*}
P_{e_{\max}} & \triangleq \max_{m} P_{e_m}
\end{align*}
where $P_{e_m}$ denotes the probability of error in decoding message $m$.
In a second step, we derive from this bound a lower bound on the average probability of error $P_{e,1}$ by means of expurgation.
For $P_{e_{\max}}$, it was shown that at least one of the
following two inequalities is always satisfied~\cite[Sec.~3.6.1]{ViterbiO79}:
\begin{align}
1/M & \geq \frac{1}{4} \exp\left[\mu(s) - s\mu'(s) -s \sqrt{2\mu''(s)}\right] \label{Eq_lowr_first} \\
P_{e_{\max}} & \geq \frac{1}{4} \exp\left[\mu(s) + (1-s) \mu'(s) -(1-s) \sqrt{2\mu''(s)}\right] \label{Eq_lowr_second}
\end{align}
for all $0 \leq s \leq 1$, where
\begin{align}
\mu(s) & =-\frac{E}{N_0} s(1-s), \label{Eq_const1} \\
\mu'(s) & = -\frac{E}{N_0}(1-2s),\label{Eq_const2}\\
\mu''(s) & = \frac{2E}{N_0}. \label{Eq_const3}
\end{align}
By substituting these values in \eqref{Eq_lowr_first}, we obtain
\begin{align*}
\ln M \leq \frac{E}{N_0}\left[s^2 + \frac{2s}{\sqrt{E/N_0}}+ \frac{ \ln 4}{E/N_0}\right].
\end{align*}
Using that $0 \leq s \leq 1$ and that $E = \frac{\log M}{\dot{R}}$, this yields
\begin{align}
\dot{R} & \leq \frac{\log e}{N_0} \left[s^2 + \frac{2}{\sqrt{E/N_0}}+ \frac{ \ln 4}{E/N_0}\right]. \label{Eq_uppr_rate}
\end{align}
Similarly, substituting \eqref{Eq_const1}-\eqref{Eq_const3} in \eqref{Eq_lowr_second} yields
\begin{align}
P_{e_{\max}} & \geq \exp\left[- \frac{E}{N_0}(1-s)^2 - 2(1-s)\sqrt{\frac{E}{N_0}} - \ln 4\right] \notag\\
& \geq \exp\left[- \frac{E}{N_0}\left((1-s)^2 + \frac{2}{\sqrt {E/N_0}} +\frac{ \ln 4}{E/N_0}\right)\right]. \label{Eq_low_bnd2_err_prob}
\end{align}
For a given $E$, let $\delta_E$ be defined as $\delta_E \triangleq 2\left(\frac{2}{\sqrt {E/N_0}} +\frac{ \ln 4}{E/N_0}\right)$,
and let $s_{E} \triangleq \sqrt{\dot{R} \frac{N_0}{\log e}- \delta_E}$.
For $s=s_{E}$, \edit{the bound \eqref{Eq_uppr_rate}}, and hence also~\eqref{Eq_lowr_first}, is violated which implies that \eqref{Eq_low_bnd2_err_prob} must be satisfied for $s=s_{E}$. By substituting $s=s_E$ in~\eqref{Eq_low_bnd2_err_prob}, we obtain
\begin{align}
&P_{e_{\max}}
\geq \exp\left[- E\left(\left(\sqrt{ \frac{1}{N_0}}- \sqrt{ \frac{\dot{R}}{\log e} - \frac{\delta_E}{N_0} }\right)^2 + \frac{\delta_E}{2N_0} \right) \right]. \label{Eq_err_exp_upp}
\end{align}
We next use~\eqref{Eq_err_exp_upp} \edit{and expurgation} to derive a lower bound on $P_{e,1}$. Indeed, we divide the codebook $\mbox{$\cal{C}$}$ with $M$ messages into two codebooks $\mbox{$\cal{C}$}_1$ and $\mbox{$\cal{C}$}_2$ of $M/2$ messages each, such that $\mbox{$\cal{C}$}_1$ contains the codewords with the smallest probability of error $P_{e_m}$ and $\mbox{$\cal{C}$}_2$ contains the codewords with the largest $P_{e_m}$. It then holds that each codeword in $\mbox{$\cal{C}$}_1$ has a probability of error satisfying $P_{e_m} \leq 2 P_{e,1}$. Consequently, the largest error probability of code $\mbox{$\cal{C}$}_1$, denoted as $P_{e_{\max}}(\mbox{$\cal{C}$}_1)$, and the average error probability
of code $\mbox{$\cal{C}$}$, denoted as $P_e(\mbox{$\cal{C}$})$, satisfy
\begin{align}
P_e(\mbox{$\cal{C}$}) \geq \frac{1}{2} P_{e_{\max}}(\mbox{$\cal{C}$}_1). \label{Eq_max_half}
\end{align}
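The expurgation step uses a Markov-type fact: among $M$ codewords, each of the $M/2$ with the smallest error probabilities satisfies $P_{e_m} \leq 2 P_{e,1}$, since otherwise more than half of the codewords would exceed twice the average. A minimal check (helper name ours):

```python
def expurgate(p_errs):
    """Return the better half of the codewords (smallest error probabilities)."""
    s = sorted(p_errs)
    return s[: len(s) // 2]

# Example: the average here is 0.3825, so every retained codeword must have
# error probability at most 2 * 0.3825 = 0.765.
p_errs = [0.01, 0.02, 0.03, 0.4, 0.5, 0.6, 0.7, 0.8]
p_avg = sum(p_errs) / len(p_errs)
best_half = expurgate(p_errs)
```

Every element of `best_half` is bounded by `2 * p_avg`, which is exactly the property used to pass from $P_{e_{\max}}(\mbox{$\cal{C}$}_1)$ to $P_{e,1}$.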
Applying \eqref{Eq_err_exp_upp} for $\mbox{$\cal{C}$}_1$, and using that the rate per unit-energy of $\mbox{$\cal{C}$}_1$ satisfies $\dot{R}' = \frac{\log M/2}{E} = \dot{R} - \frac{1}{E}$, we obtain
\begin{align*}
& P_{e_{\max}} (\mbox{$\cal{C}$}_1)
\geq\exp\left[- E\left(\left(\sqrt{ \frac{1}{N_0}}- \sqrt{ \frac{\dot{R}}{\log e} - \frac{1}{E} - \frac{\delta_E}{N_0} }\right)^2 + \frac{\delta_E}{2N_0} \right) \right].
\end{align*}
Together with \eqref{Eq_max_half}, this yields
\begin{align}
P_{e,1} & \geq \exp\left[- E\left(\left(\sqrt{ \frac{1}{N_0}}- \sqrt{ \frac{\dot{R}}{\log e} - \frac{1}{E} - \frac{\delta_E}{N_0} }\right)^2 + \frac{\delta_E}{2N_0} - \frac{\ln 2}{E} \right) \right].\label{Eq_prob_low_bnd}
\end{align}
Let $\delta'_E \triangleq \frac{1}{E} + \frac{\delta_E}{N_0} $. Then
\begin{align*}
\sqrt{ \frac{\dot{R}}{\log e} - \frac{1}{E} - \frac{\delta_E}{N_0} } &= \sqrt{ \frac{\dot{R}}{\log e} - \delta'_E}\\
& = \sqrt{ \frac{\dot{R}}{\log e} } + O(\delta'_{E})\\
&= \sqrt{ \frac{\dot{R}}{\log e} } + O\left(\frac{1}{\sqrt{E}}\right)
\end{align*}
where the last step follows by noting that $O(\delta'_E)=O(\delta_E)=O(1/\sqrt{E})$. Further defining $ \delta''_E \triangleq \frac{\delta_E}{2N_0} - \frac{\ln 2}{E}$, we may write \eqref{Eq_prob_low_bnd} as
\begin{align}
P_{e,1} &\geq \exp\left[- E\left(\left(\sqrt{ \frac{1}{N_0}}- \sqrt{ \frac{\dot{R}}{\log e}} + O\left(\frac{1}{\sqrt{E}}\right)\right)^2 + \delta''_E \right) \right] \notag \\
& = \exp\left[- E\left(\left(\sqrt{ \frac{1}{N_0}}- \sqrt{ \frac{\dot{R}}{\log e}} \right)^2 +O\left(\frac{1}{\sqrt{E}} \right)\right) \right] \label{Eq_prob_low_bnd1}
\end{align}
since $O(\delta''_E)=O(\delta_E)=O(1/\sqrt{E})$. By substituting $E = \frac{\log M}{\dot{R}}$, \eqref{Eq_prob_low_bnd1} yields
\begin{align}
P_{e,1} & \geq \exp\left[- \frac{\ln M}{\dot{R}}\left(\left(\sqrt{\frac{\log e}{N_0}}- \sqrt{\dot{R} }\right)^2 + O\left(\frac{1}{\sqrt{E}}\right) \right) \right].\label{Eq_err_exp_lwr}
\end{align}
We can thus find a function $E\mapsto \beta'_E$
\edit{of order $O(1/\sqrt{E})$} for which the lower bound in~\eqref{Eq_orth_sinlg_uppr2} holds.
\subsubsection{Low-rate lower bound}
\edit{We next derive a lower bound on the probability of error that applies to low rates per unit-energy. This bound will then be used later to derive the lower bound~\eqref{Eq_orth_sinlg_uppr1}. To obtain this bound,} we first derive a lower bound on $P_{e,1}$ that, for low rates \edit{per unit-energy}, is tighter than \eqref{Eq_err_exp_lwr}. This bound is based on the fact that for $M$ codewords of energy $E$, the minimum Euclidean distance $d_{\min}$ between codewords is upper-bounded by $\sqrt{2EM/(M-1)}$~\cite[Sec.~3.7.1]{ViterbiO79}. Since, for the Gaussian channel, the maximum error probability is lower-bounded by the largest pairwise error probability, it follows that
\begin{align}
P_{e_{\max}} & \geq Q\left( \frac{d_{\min}}{\sqrt{2 N_0}}\right) \notag \\
& \geq Q\left( \sqrt{ \frac{ EM}{N_0(M-1)}}\right) \notag\\
& \geq \left(1-\frac{1}{EM/(N_0(M-1))}\right) \frac{e^{-\frac{EM}{2N_0(M-1)}}}{\sqrt{2 \pi} \sqrt{ EM/(N_0(M-1))}} \label{Eq_Low_rate_low}
\end{align}
where the last inequality follows because~\cite[Prop.~19.4.2]{Lapidoth17}
\begin{align*}
Q(\beta) \geq \left(1-\frac{1}{\beta^2}\right) \frac{e^{-\beta^2/2}}{\sqrt{2 \pi} \beta}, \quad \beta>0.
\end{align*}
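This Gaussian-tail lower bound is easy to confirm numerically; in the sketch below (helper names ours), $Q$ is computed from the complementary error function:

```python
import math

def Q(x):
    """Gaussian tail Q(x) = P(N(0,1) > x) = erfc(x/sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Q_lower(beta):
    """Lower bound (1 - 1/beta^2) * exp(-beta^2/2) / (sqrt(2*pi)*beta), beta > 0."""
    return (1.0 - 1.0 / beta**2) * math.exp(-beta**2 / 2.0) \
        / (math.sqrt(2.0 * math.pi) * beta)
```

The bound holds for all $\beta > 0$ (trivially so for $\beta < 1$, where the right-hand side is negative) and becomes tight as $\beta$ grows; at $\beta = 10$ the two sides agree to within a fraction of a percent.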
Let $\beta_E \triangleq \sqrt{EM/(N_0(M-1))}$. It follows that
\begin{align}
\sqrt{E/N_0} \leq \beta_E \leq \sqrt{2E/N_0}, \quad M\geq 2. \label{Eq_beta_bounds}
\end{align}
Applying \eqref{Eq_beta_bounds} to \eqref{Eq_Low_rate_low} yields
\begin{align}
P_{e_{\max}} & \geq \frac{1}{ \sqrt{2 \pi}} \exp\left[-E\left(\frac{1}{2N_0}\left(1+ \frac{1}{M-1}\right)\right)\right]
\exp\left[\ln \left(\frac{1}{\beta_E} - \frac{1}{\beta_E^3}\right)\right] \ \notag \\
& \geq \frac{1}{ \sqrt{2 \pi}} \exp\left[-E\left(\frac{1}{2N_0}\left(1+ \frac{1}{M-1}\right)\right)\right]
\exp\left[-E \frac{\frac{3}{2}\ln (2E/N_0)- \ln(E/N_0 -1)}{E}\right] \notag\\
& = \exp\left[-E\left(\frac{1}{2N_0}\left(1+ \frac{1}{M-1}\right) + O\left(\frac{\ln E }{E}\right)\right)\right].\label{Eq_lowr_prob}
\end{align}
Following similar steps of expurgation as before, we obtain from \eqref{Eq_lowr_prob} the lower bound
\begin{align}
P_{e,1} &\geq \exp\left[-E\left(\frac{1}{2N_0}\left(1+ \frac{1}{\frac{M}{2}-1}\right) + O\left(\frac{\ln E }{E}\right)\right)\right]. \notag
\end{align}
By using that $M = 2^{\dot{R} E}$, it follows that
\begin{align}
P_{e,1} &\geq \exp\left[-E\left(\frac{1}{2N_0}\left(1+ \frac{1}{\frac{2^{\dot{R} E}}{2}-1}\right) + O\left(\frac{\ln E }{E}\right)\right)\right]\label{Eq_low_any_rate}
\end{align}
from which we obtain that, for any rate per unit-energy $\dot{R} > 0$,
\begin{align}
P_{e,1} &\geq \exp\left[-E\left(\frac{1}{2N_0} + O\left(\frac{\ln E}{E}\right)\right)\right]. \label{Eq_low_any_rate1}
\end{align}
\subsubsection{Combining the high-rate and the low-rate bounds}
We finally show that a combination of the lower bounds \eqref{Eq_prob_low_bnd1} and \eqref{Eq_low_any_rate} yields the lower bound in \eqref{Eq_orth_sinlg_uppr1}.
Let $P_e^{\bot}(E,M)$ denote the smallest probability of error that can be achieved by an orthogonal codebook with $M$ codewords of energy $E$.
We first note that $P_e^{\bot}(E,M)$ is monotonically increasing in $M$. Indeed, without loss of optimality, we can restrict ourselves to codewords of the form \eqref{Eq_ortho_code}, all having energy $E$. In this case, the probability of correctly decoding message $m$ is given by~\cite[Sec.~8.2]{Gallager68}
\begin{align}
P_{c,m}^{\bot} & = \textnormal{Pr}\biggl(\bigcap_{i\neq m} \{Y_m > Y_i\}\biggm|{\bf X}={\bf x}_m\biggr) \notag \\
& =\frac{1}{\sqrt{\pi N_0}}\int_{-\infty}^{\infty} \exp\left[ -\frac {(y_m - \sqrt{E})^2}{N_0} \right] \textnormal{Pr}\biggl(\bigcap_{i\neq m}\{Y_i < y_m\}\biggm|{\bf X}={\bf x}_m\biggr) d y_m \notag\\
& =\frac{1}{\sqrt{\pi N_0}}\int_{-\infty}^{\infty} \exp\left[ -\frac {(y_m - \sqrt{E})^2}{N_0} \right] \left(1-Q\left(y_m\sqrt{2/N_0}\right)\right)^{M-1} d y_m
\label{Eq_prob_ortho_corrct}
\end{align}
where $Y_i$ denotes the $i$-th component of the received vector ${\bf Y}$. In the last step of \eqref{Eq_prob_ortho_corrct}, we have used that, conditioned on ${\bf X}={\bf x}_m$, the events $\{Y_i<y_m\}$, $i\neq m$ are independent and that, since $Y_i$ is Gaussian with zero mean and variance $N_0/2$, we have $\textnormal{Pr}(Y_i < y_m|{\bf X}={\bf x}_m) = 1-Q\bigl(y_m\sqrt{2/N_0}\bigr)$.
Since $P_{c,m}^{\bot}$ is the same for all $m$, we have $P_e^{\bot}(E,M) = 1 -P_{c,m}^{\bot}$. The claim then follows by observing that \eqref{Eq_prob_ortho_corrct} is monotonically decreasing in $M$.
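The monotonicity in $M$ can also be observed numerically. In normalized form, with $t = y_m\sqrt{2/N_0}$ and $a = \sqrt{2E/N_0}$, the integral becomes $\int \varphi(t-a)\,(1-Q(t))^{M-1}\,dt$, where $\varphi$ is the standard normal density; the sketch below (helper names ours) evaluates it by the trapezoidal rule:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def phi(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def P_correct(E, M, N0=1.0, steps=20000):
    """P(correct) for M orthogonal codewords of energy E, via the normalized
    integral int phi(t - a) * (1 - Q(t))^(M-1) dt with a = sqrt(2E/N0)."""
    a = math.sqrt(2.0 * E / N0)
    lo, hi = -10.0, a + 10.0        # the integrand is negligible outside
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        t = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * phi(t - a) * (1.0 - Q(t)) ** (M - 1)
    return h * total

# P(correct) decreases with M, so P_e^bot(E, M) increases with M.
pcs = [P_correct(4.0, M) for M in (2, 4, 8, 16)]
```

For $M=2$ the value agrees with the familiar error probability $Q(\sqrt{E/N_0})$ of binary orthogonal signaling.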
Let $\tilde{M}$ be the largest power of 2 less than or equal to $M$. It follows by the monotonicity of $P_e^{\bot}(E,M)$ \edit{in $M$} that
\begin{align}
P_e^{\bot}(E,M) \geq P_e^{\bot}(E, \tilde{M}).\label{Eq_ortho_two_code1}
\end{align}
We next show that, for every $E_1$ and $E_2$ satisfying $E=E_1+E_2$, we have
\begin{align}
P_e^{\bot}(E, \tilde{M}) \geq P_e(E_{1}, \tilde{M}, L)P_e(E_{2}, L+1) \label{Eq_prob_prod1}
\end{align}
where $P_e(E_1, \tilde{M}, L)$ denotes the smallest probability of error that can be achieved by a codebook with $\tilde{M}$ codewords of energy $E_1$ and a list decoder of list size $L$, and $P_e(E_{2}, L+1)$ denotes the smallest probability of error that can be achieved by a codebook with $L+1$ codewords of energy $E_2$.
To prove \eqref{Eq_prob_prod1}, we follow along the lines of \cite{ShannonGB67}, which showed the corresponding result for codebooks of a given blocklength rather than a given energy. Specifically, it was shown in \cite[Th.~1]{ShannonGB67} that, for every codebook $\mbox{$\cal{C}$}$ with $M$ codewords of blocklength $n$, and for any $n_1$ and $n_2$ satisfying $n=n_1+n_2$, we can lower-bound the probability of error by
\begin{equation}
P_e(\mbox{$\cal{C}$}) \geq P_e(n_1, M,L )P_e(n_2, L+1) \label{Eq_prob_prod2}
\end{equation}
where $P_e(n_1, M, L )$ denotes the smallest probability of error that can be achieved by a codebook with $M$ codewords of blocklength $n_1$ and a list decoder of list size $L$, and $P_e(n_2, L+1)$ denotes the smallest probability of error that can be achieved by a codebook with $L+1$ codewords of blocklength $n_2$.
This result follows by writing the codewords ${\bf x}_m$ of blocklength $n$ as concatenations of the vectors
\begin{align*}
{\bf x}'_m =(x_{m,1}, x_{m,2}, \ldots, x_{m,n_1})
\end{align*}
and
\begin{align*}
{\bf x}''_m =(x_{m,n_1+1}, x_{m,n_1+2}, \ldots,x_{m,n_1+n_2})
\end{align*}
and, likewise, by writing the received vector ${\bf y}$ as the concatenation of the vectors ${\bf y}'$ and ${\bf y}''$ of length $n_1$ and $n_2$, respectively. Defining $\Delta_m$ as the decoding region for message $m$ and $\Delta''_m({\bf y}')$ as the decoding region for message $m$ when ${\bf y}'$ was received, we can then write $P_e(\mbox{$\cal{C}$})$ as
\begin{align}
\label{eq:SGB67}
P_e(\mbox{$\cal{C}$}) & = \frac{1}{M} \sum_{m=1}^M \sum_{{\bf y}'} p({\bf y}'|{\bf x}'_m) \sum_{{\bf y}''\in\bar{\Delta}''_m}p({\bf y}''|{\bf x}''_m)
\end{align}
where $\bar{\Delta}''_m$ denotes the complement of $\Delta''_m$. Lower-bounding first the inner-most sum in \eqref{eq:SGB67} and then the remaining terms, one can prove \eqref{Eq_prob_prod2}.
A codebook with $\tilde{M}$ codewords of the form \eqref{Eq_ortho_code} can be transmitted in $\tilde{M}$ time instants, since in the remaining time instants all codewords are zero. We can thus assume without loss of optimality that the codebook's blocklength is $\tilde{M}$. Unfortunately, when the codewords are of the form~\eqref{Eq_ortho_code}, the above approach yields \eqref{Eq_prob_prod1} only in the trivial cases where either $E_1=0$ or $E_2=0$. Indeed, $E_1$ and $E_2$ correspond to the energies of the vectors ${\bf x}'_m$ and ${\bf x}''_m$, respectively, and for \eqref{Eq_ortho_code} we have ${\bf x}'_m=\mathbf{0}$ if $m>n_1$ and ${\bf x}''_m=\mathbf{0}$ if $m\leq n_1$, where $\mathbf{0}$ denotes the all-zero vector. We sidestep this problem by multiplying the codewords by a normalized Hadamard matrix. A Hadamard matrix, denoted by $H_{j}$, is a square matrix of size $j \times j$ with entries $\pm 1$ whose rows are mutually orthogonal. Sylvester's construction shows that there exists a Hadamard matrix of order $j$ if $j$ is a power of 2. Recalling that $\tilde{M}$ is a power of $2$, we can thus find a normalized Hadamard matrix \[\tilde{H}\triangleq\frac{1}{\sqrt{\tilde{M}}} H_{\tilde{M}}.\] Since the rows of $\tilde{H}$ are orthonormal, it follows that the matrix $\tilde{H}$ is orthogonal. Further noting that the additive Gaussian noise ${\bf Z}$ is zero mean and has a diagonal covariance matrix, we conclude that the set of codewords $\{\tilde{H}{\bf x}_m,\,m=1,\ldots,\tilde{M}\}$ achieves the same probability of error as the set of codewords $\{{\bf x}_m,\,m=1,\ldots,\tilde{M}\}$. Thus, without loss of generality, we can restrict ourselves to codewords of the form $\tilde{{\bf x}}_m=\tilde{H}{\bf x}_m$, where ${\bf x}_m$ is as in \eqref{Eq_ortho_code}. Such codewords have constant modulus, i.e., $|\tilde{x}_{m,k}| = \sqrt{E/\tilde{M}}$ for $k=1, \ldots,\tilde{M}$. This has the advantage that the energies of the vectors
\begin{align*}
\tilde{{\bf x}}'_m =(\tilde{x}_{m,1}, \tilde{x}_{m,2}, \ldots, \tilde{x}_{m,n_1})
\end{align*}
and
\begin{align*}
\tilde{{\bf x}}''_m =(\tilde{x}_{m,n_1+1}, \tilde{x}_{m,n_1+2}, \ldots,\tilde{x}_{m,n_1+n_2})
\end{align*}
are proportional to $n_1$ and $n_2$, respectively. Thus, by emulating the proof of \eqref{Eq_prob_prod2}, we can show that for every $n_1$ and $n_2$ satisfying $\tilde{M}=n_1+n_2$ and $E_i=E n_i/\tilde{M}$, $i=1,2$, we have
\begin{equation}
\label{Eq_prob_prod3}
P_e^{\bot}(E, \tilde{M}) \geq P_e(E_{1},n_1,\tilde{M}, L)P_e(E_{2},n_2,L+1)
\end{equation}
where $P_e(E_{1},n_1,\tilde{M}, L)$ denotes the smallest probability of error that can be achieved by a codebook with $\tilde{M}$ codewords of energy $E_1$ and blocklength $n_1$ and a list decoder of list size $L$, and $P_e(E_{2}, n_2, L+1)$ denotes the smallest probability of error that can be achieved by a codebook with $L+1$ codewords of energy $E_2$ and blocklength $n_2$. We then obtain \eqref{Eq_prob_prod1} from \eqref{Eq_prob_prod3} because
\begin{equation*}
P_e(E_{1},n_1,\tilde{M}, L) \geq P_e(E_{1},\tilde{M}, L) \quad \textnormal{and} \quad P_e(E_{2},n_2,L+1) \geq P_e(E_{2},L+1).
\end{equation*}
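The Sylvester construction and the constant-modulus property it provides can be sketched in a few lines (helper names ours):

```python
import math

def sylvester_hadamard(m):
    """Hadamard matrix of order 2**m via Sylvester's construction:
    H_{2j} = [[H_j, H_j], [H_j, -H_j]], starting from H_1 = [[1]]."""
    H = [[1]]
    for _ in range(m):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

M = 8                                  # Mtilde, a power of 2
E = 4.0                                # codeword energy
H = sylvester_hadamard(3)
scale = 1.0 / math.sqrt(M)             # normalized Hadamard matrix Htilde = H/sqrt(M)

# Rows of Htilde are orthonormal, so Htilde is an orthogonal matrix.
for i in range(M):
    for j in range(M):
        dot = scale * scale * sum(H[i][k] * H[j][k] for k in range(M))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12

# The image of the codeword (0, ..., sqrt(E), ..., 0) under Htilde has
# constant modulus sqrt(E/M) in every coordinate.
m_idx = 2                              # position of the nonzero entry
x_tilde = [scale * H[k][m_idx] * math.sqrt(E) for k in range(M)]
```

Constant modulus ensures that the energy of any block of $n_1$ consecutive coordinates is $E n_1/\tilde{M}$, which is precisely what the blocklength-splitting argument requires.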
We next give a lower bound on $ P_e(E_{1}, \tilde{M}, L)$. Indeed, for list decoding of list size $L$, the inequalities \eqref{Eq_lowr_first} and \eqref{Eq_lowr_second} can be replaced by~\cite[Lemma~3.8.1]{ViterbiO79}
\begin{align}
L/M & \geq \frac{1}{4} \exp\left[\mu(s) - s\mu'(s) -s \sqrt{2\mu''(s)}\right] \label{Eq_lowr_first_list} \\
P_{e_{\max}} & \geq \frac{1}{4} \exp\left[\mu(s) + (1-s) \mu'(s) -(1-s) \sqrt{2\mu''(s)}\right]. \label{Eq_lowr_second_list}
\end{align}
Let $\dot{R}_1 \triangleq \frac{ \log (M/L)}{E_{1}}$ and $\tilde{\dot{R}}_1 \triangleq \frac{ \log (\tilde{M} /L)}{E_{1}}$. From the definition of $\tilde{M} $, we have $\tilde{M} \leq M \leq 2\tilde{M}$. Consequently,
\begin{align}
\dot{R}_1 - \frac{1}{E_1} \leq \tilde{\dot{R}}_1 \leq \dot{R}_1. \label{Eq_rate_low_high}
\end{align}
By following the steps that led to \eqref{Eq_prob_low_bnd1}, we thus obtain
\begin{align}
P_e(E_{1}, \tilde{M}, L) &\geq \exp\left[- E_{1}\left(\left(\sqrt{ \frac{1}{N_0}}- \sqrt{ \frac{\tilde{\dot{R}}_1}{\log e}}\right)^2 + O\left(\frac{1}{\sqrt{E_1}}\right) \right) \right] \notag \\
& = \exp\left[- E_{1}\left(\left(\sqrt{ \frac{1}{N_0}}- \sqrt{ \frac{\dot{R}_1}{\log e}}\right)^2 + O\left(\frac{1}{\sqrt{E_1}}\right) \right) \right]. \label{Eq_prob_low_bnd2_list}
\end{align}
To lower-bound $P_e(E_{2}, L+1)$, we apply \eqref{Eq_low_any_rate} with $\dot{R}_2 \triangleq \frac{\log (L+1)}{E_2}$. \edit{This yields}
\begin{align}
P_e(E_{2}, L+1) & \geq \exp\left[-E_2\left(\frac{1}{2N_0}\left(1+ \frac{1}{\frac{2^{\dot{R}_2 E_2}}{2}-1}\right) + O\left(\frac{\ln E_2 }{E_2}\right)\right)\right]. \label{Eq_low_rate_lowr_bnd}
\end{align}
Let \[\Xi_1(\dot{R}_1)\triangleq\left(\sqrt{ \frac{1}{N_0}}- \sqrt{ \frac{\dot{R}_1}{\log e}}\right)^2\] and \[\Xi_2(\dot{R}_2) \triangleq \frac{1}{2N_0}\left(1+ \frac{1}{\frac{2^{\dot{R}_2 E_2}}{2}-1}\right).\] Then, by substituting \eqref{Eq_prob_low_bnd2_list} and \eqref{Eq_low_rate_lowr_bnd} in \eqref{Eq_prob_prod1}, and by using \eqref{Eq_ortho_two_code1}, we get
\begin{align}
P_e^{\bot}(E, M) & \geq \exp\left[- E_{1}\left(\Xi_1(\dot{R}_1)+ O\left(\frac{1}{\sqrt{E_1}}\right) \right) \right] \exp\left[- E_{2} \left(\Xi_2(\dot{R}_2)+O\left(\frac{\ln E_2 }{E_2}\right)\right) \right]. \label{Eq_ortho_lowr}
\end{align}
Applying \eqref{Eq_ortho_lowr} with a clever choice of $E_1$ and $E_2$, we can show that the error exponent of $P_e^{\bot}(E, M)$ is upper-bounded by a convex combination of $\Xi_1(\dot{R}_1)$ and $\Xi_2(\dot{R}_2)$. Indeed, let $\lambda \triangleq \frac{E_{1}}{E}$. Then, \edit{\eqref{Eq_ortho_lowr} can be written as}
\begin{align}
P_e^{\bot}(E, M) \geq \exp\left[ -E \left(\lambda \Xi_1(\dot{R}_1)+ (1 - \lambda) \Xi_2(\dot{R}_2)+ O\left(\frac{1}{\sqrt{E}}\right) \right) \right] \label{Eq_prob_prod_low}
\end{align}
and
\begin{align*}
\frac{\log M}{E} & = \frac{\log (M/L) + \log L}{E}\\
& = \lambda \frac{\log (M/L) }{E_{1}} + (1-\lambda)\frac{\log L}{E_{2}} \\
& = \lambda \dot{R}_1 + (1-\lambda) \dot{R}_2.
\end{align*}
Let $\dot{R}\triangleq\frac{\log M}{E} \leq \frac{\log e}{4N_0}$ and $\gamma_E\triangleq\min\left\{\frac{1}{\sqrt{E}},\frac{\dot{R}}{2}\right\}$. We conclude the proof of the lower bound in \eqref{Eq_orth_sinlg_uppr1} by \edit{choosing in~\eqref{Eq_prob_prod_low}}
\edit{
\begin{align*}
\lambda = \lambda_E & \triangleq \frac{\dot{R} - \gamma_E}{\frac{\log e}{4N_0} - \gamma_E}
\end{align*}
}
and the rates per unit-energy $\dot{R}_1 = \frac{1}{4} \frac{\log e}{N_0}$ and $\dot{R}_2 =\gamma_E$. It follows that
\begin{align}
P_e^{\bot}(E, M) & \geq \exp\left[-E \left(\frac{\lambda_E}{4N_0}+ \frac{1-\lambda_E}{2N_0}+ \frac{1-\lambda_E}{2N_0(\frac{2^{\gamma_E (1-\lambda_E)E}}{2}-1)}+O\left(\frac{1}{\sqrt{E}}\right)\right)\right]\notag\\
& = \exp\left[-E \left(\frac{1}{2N_0}- \frac{\lambda_E}{4N_0}+ \frac{1-\lambda_E}{2N_0(\frac{2^{\gamma_E (1-\lambda_E)E}}{2}-1)}+O\left(\frac{1}{\sqrt{E}}\right)\right)\right]. \label{Eq_low_mid_rate}
\end{align}
Noting that $\lambda_E = \frac{4N_0\dot{R}}{\log e} + O\left(\frac{1}{\sqrt{E}}\right)$, \edit{and that $\frac{1}{2^{\gamma_E (1-\lambda_E)E}} = O(1/\sqrt{E})$,}
\eqref{Eq_low_mid_rate} can be written as
\begin{align}
P_e^{\bot}(E, M) \geq \exp \left[-E \left( \frac{1}{2 N_0} - \frac{\dot{R}}{\log e}+ O\left(\frac{1}{\sqrt{E}}\right)\right)\right], \quad 0 < \dot{R} \leq \frac{1}{4} \frac{\log e}{N_0}. \label{Eq_lowrtae_lowr}
\end{align}
We can thus find a function $E \mapsto \beta_E$ \edit{of order $O(1/\sqrt{E})$} for which the lower bound in~\eqref{Eq_orth_sinlg_uppr1} holds.
\section{Proof of Lemma~\ref{Lem_usr_detect}}
\label{Sec_Lem_detct_proof}
\edit{
To prove Lemma~\ref{Lem_usr_detect}, we treat the cases where $\ell_n=O(1)$ and where $\ell_n=\omega(1)$ separately. In the former case, each user is assigned an exclusive channel use to convey whether it is active or not. The probability of a detection error $P(\mbox{$\cal{D }$})$ can then be analyzed by steps similar to those in the proof of Theorem~\ref{Thm_ortho_accs}. In the latter case, we proceed similarly as in the proof of \cite[Th.~2]{ChenCG17}. That is, we draw signatures i.i.d.\ at random according to a zero-mean Gaussian distribution, followed by a truncation step to ensure that the energy of each signature is upper-bounded by $E_n''$. The decoder then produces a vector of length $\ell_n$ with zeros and ones, where a one in the $i$-th position indicates that user $i$ is active. To this end, it chooses the vector that, among all zero-one vectors with not more than a predefined number of ones, approximates the received symbols best in terms of Euclidean distance. The probability of a detection error $P(\mbox{$\cal{D }$})$ can then be analyzed by following similar steps as in the proof of \cite[Th.~2]{ChenCG17}.}
\edit{\subsection{Bounded $\ell_n$}}
\edit{We first prove the lemma when $\ell_n$ is bounded in $n$.}
In this case, one can employ a scheme where each user gets an exclusive channel use to convey whether it is active or not. For such a scheme, it is easy to show that (see the proof of Theorem~\ref{Thm_ortho_accs}) the probability of a detection error $P(\mbox{$\cal{D }$})$ is upper-bounded by
\begin{align*}
P(\mbox{$\cal{D }$}) & \leq \ell_n e^{-E_n'' t}
\end{align*}
for some $t > 0$. \edit{Clearly, when $\ell_n$ is bounded, we have $k_n\log\ell_n =o(n)$. Thus, the energy $E_n''$ used for detection is given by $b c_n \ln \ell_n$ and tends to infinity since $c_n \to \infty$ as $n \to\infty$.} It follows that $P(\mbox{$\cal{D }$})$ tends to zero as $n \to \infty$.
\edit{\subsection{Unbounded $\ell_n$}}
Next we prove Lemma~\ref{Lem_usr_detect} for the case where $\ell_n \to \infty$ as $n \to \infty$. To this end, we closely follow the proof of~\cite[Th.~2]{ChenCG17}, but with the power constraint replaced by an energy constraint.
Specifically, we analyze $P(\mbox{$\cal{D }$})$ for the user-detection scheme given in~\cite{ChenCG17}, where signatures are drawn i.i.d. according to a zero-mean Gaussian distribution.
Note that the proof in~\cite{ChenCG17} assumes that
\begin{align}
\lim\limits_{n \to \infty} \ell_n e^{-\delta k_n} =0 \label{Eq_Guo_cond}
\end{align}
for all $\delta > 0$. However, in our case this assumption is not necessary.
To show that all signatures satisfy the energy constraint, we follow the technique used in the proof of Lemma~\ref{Lem_err_expnt}. Similar to Lemma~\ref{Lem_err_expnt}, we denote by $\tilde{q}(\cdot)$ the probability density function of a zero-mean Gaussian random variable with variance $E_n''/(2n'')$. We further let
\begin{align*}
\tilde{{\bf q}}({\bf u}) & = \prod_{i=1}^{n''} \tilde{q}(u_i), \quad {\bf u} =(u_1,\ldots,u_{n''})
\end{align*}
and
\begin{align*}
{\bf q}({\bf u}) & = \frac{1}{\mu} \I{ \|{\bf u}\|^2 \leq E_n''} \tilde{{\bf q}}({\bf u})
\end{align*}
where
\begin{align*}
\mu & = \int \I{ \|{\bf u}\|^2 \leq E_n''} \; \tilde{{\bf q}}({\bf u}) d {\bf u}
\end{align*}
is a normalizing constant.
Clearly, any vector ${\bf S}_i$ distributed according to ${\bf q}(\cdot)$ satisfies the energy constraint $E''_n$ with probability one.
For any index set $\mbox{$\cal{I}$} \subseteq \{1,\ldots, \ell_n\}$, let the matrices $\underline{{\bf S}}_{\mbox{$\cal{I}$}}$ and $ \tilde{\underline{{\bf S}}}_{\mbox{$\cal{I}$}}$ collect the signatures of the users in $\mbox{$\cal{I}$}$, distributed respectively as
\begin{align*}
\underline{{\bf S}}_{\mbox{$\cal{I}$}} & \sim \prod_{i\in \mbox{$\cal{I}$}} {\bf q}({\bf S}_i)
\end{align*}
and
\begin{align*}
\tilde{\underline{{\bf S}}}_{\mbox{$\cal{I}$}} & \sim \prod_{i\in \mbox{$\cal{I}$}} \tilde{{\bf q}}({\bf S}_i).
\end{align*}
As noted in the proof of Lemma~\ref{Lem_err_expnt}, we have
\begin{align}
{\bf q}({\bf s}_i) & \leq \frac{1}{\mu} \tilde{{\bf q}}( {\bf s}_i). \label{Eq_prob_signt_uppr}
\end{align}
To analyze the detection error probability, we first define the $\ell_n$-length vector ${\bf D}^a$ as
\begin{align*}
{\bf D}^a \triangleq ( \I{W_1\neq0}, \ldots, \I{W_{\ell_n} \neq 0}).
\end{align*}
\edit{
For some $c''>0$, let
\begin{align*}
v_n \triangleq k_n(1 + c'').
\end{align*}
}
Further let
\begin{align*}
\mbox{$\cal{B}$}^n(v_n) \triangleq \{ {\bf d} \in \{0,1 \}^{\ell_n} : 1 \leq |{\bf d}| \leq v_n \}
\end{align*}
where $|{\bf d}|$ denotes the number of $1$'s in ${\bf d}$. We denote by ${\bf S}^a$ the matrix of signatures of all users, which are generated independently according to ${\bf q}(\cdot)$, and
we denote by $\mathbf{Y}^a$ the first $n''$ received symbols, based on which the receiver performs user detection. The receiver outputs the $\hat{{\bf d}}$ given by
\begin{align}
\hat{{\bf d}} = \mathop{\mathrm{arg\,min}}_{ {\bf d} \in \mbox{$\cal{B}$}^n(v_n) } \| {\bf Y}^a - {\bf S}^a {\bf d} \| \label{Eq_decod_rule}
\end{align}
as a length-$\ell_n$ vector \edit{guessing} the set of active users. \edit{By the union bound}, the probability of a detection error $P(\mbox{$\cal{D }$})$ is upper-bounded by
\begin{align}
P(\mbox{$\cal{D }$})
& \leq \text{Pr} (|{\bf D}^a| > v_n ) + \sum_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \text{Pr} (\mbox{$\cal{E}$}_d|{\bf D}^a = {\bf d}) \text{Pr} ({\bf D}^a = {\bf d}) + \text{Pr} (\mbox{$\cal{E}$}_d| |{\bf D}^a| = 0) \text{Pr} (|{\bf D}^a| = 0) \label{Eq_detect_err_uoor}
\end{align}
where \edit{$|{\bf D}^a|$ denotes the number of $1$'s in ${\bf D}^a$ and
$\mbox{$\cal{E}$}_d$ denotes the event that there is a detection error}. Next, we show that each term on the RHS of~\eqref{Eq_detect_err_uoor} vanishes as $n \to \infty$.
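Before turning to the analysis, the exhaustive minimum-distance search in~\eqref{Eq_decod_rule} can be illustrated by the following toy sketch (not part of the proof; the dimensions, seed, and noise level below are chosen arbitrarily for illustration):

```python
import itertools
import numpy as np

def detect_active(Y, S, v):
    """Return the 0-1 vector d with 1 <= |d| <= v minimizing ||Y - S d||,
    by exhaustive search over supports (exponential in ell; toy sketch)."""
    n, ell = S.shape
    best_d, best_dist = None, float("inf")
    for w in range(1, v + 1):
        for support in itertools.combinations(range(ell), w):
            d = np.zeros(ell)
            d[list(support)] = 1.0
            dist = np.linalg.norm(Y - S @ d)
            if dist < best_dist:
                best_d, best_dist = d, dist
    return best_d

rng = np.random.default_rng(0)
ell, n, v = 6, 32, 3
S = rng.normal(size=(n, ell))               # i.i.d. Gaussian signatures
d_true = np.array([1.0, 0, 1, 0, 0, 0])     # users 1 and 3 are active
Y = S @ d_true + 0.1 * rng.normal(size=n)   # received symbols plus noise
d_hat = detect_active(Y, S, v)
assert np.array_equal(d_hat, d_true)
```

At these benign noise levels the search recovers the true activity pattern; the case $|{\bf D}^a| = 0$ is excluded from the search set, matching the definition of $\mbox{$\cal{B}$}^n(v_n)$, and is treated separately below.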
Using the Chernoff bound for the binomial distribution, we have
\edit{
\begin{align}
\text{Pr} (|{\bf D}^a| > v_n) & \leq \exp(-k_n c''/3)
\end{align}
which vanishes as $n \to \infty$ if $k_n$ is unbounded. For bounded $k_n$, this probability of error vanishes by first letting $n \to \infty$ and then letting $c'' \to \infty$.
}
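As a quick numerical sanity check (not part of the formal argument), the Chernoff bound above can be compared against the exact binomial tail of $|{\bf D}^a|$; all parameter values below are illustrative:

```python
import math

def binom_tail(n, p, t):
    """Exact Pr(X > t) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(int(t) + 1, n + 1))

# Illustrative values: ell_n = 200 users, alpha_n = 0.1, so k_n = 20.
ell, alpha = 200, 0.1
k = int(ell * alpha)
for c2 in (1.0, 2.0, 3.0):         # c'' >= 1
    v = int(k * (1 + c2))          # v_n = k_n (1 + c'')
    exact = binom_tail(ell, alpha, v)
    chernoff = math.exp(-k * c2 / 3)
    assert exact <= chernoff       # bound holds for these values
```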
We continue with the term $\text{Pr} (\mbox{$\cal{E}$}_d|{\bf D}^a = {\bf d})$. For a given ${\bf D}^a = {\bf d}$, let $\kappa_1$ and $\kappa_2$ denote the number of missed detections and false alarms, respectively, i.e.,
\begin{align*}
\kappa_1 &= |\{ j: d_j \neq 0, \hat{d}_j = 0 \} |\\
\kappa_2 &= |\{ j: d_j = 0, \hat{d}_j \neq 0 \} |
\end{align*}
where $d_j$ and $\hat{d}_j$ denote the $j$-th components of the \edit{vectors ${\bf d}$ and $\hat{{\bf d}}$, respectively}. An error occurs only if $\kappa_1$ or $\kappa_2$ (or both) is strictly positive. The number of users that are either active or declared active by the receiver satisfies $|{\bf d}|+\kappa_2 = |\hat{{\bf d}}|+\kappa_1$, so
\begin{align*}
|{\bf d}|+ \kappa_2 & \leq v_n + \kappa_1
\end{align*}
since $|\hat{{\bf d}}|$ is upper-bounded by $v_n$ by the decoding rule~\eqref{Eq_decod_rule}.
So, the pair $(\kappa_1, \kappa_2)$ belongs to the following set:
\begin{align}
\mbox{$\cal{W}$}^{\ell_n}_{{\bf d}} = & \left\{(\kappa_1,\kappa_2): \kappa_1 \in \{0,1,\ldots, |{\bf d}| \}, \kappa_2 \in \{0,1,\ldots,v_n\}, \kappa_1+\kappa_2 \geq 1, |{\bf d}|+\kappa_2 \leq v_n + \kappa_1 \right\}. \label{Eq_decision_sets}
\end{align}
Let $\text{Pr} (\mbox{$\cal{E}$}_{\kappa_1,\kappa_2}|{\bf D}^a = {\bf d})$ be the probability of having exactly $\kappa_1$ missed detections and $\kappa_2$ false alarms when ${\bf D}^a = {\bf d}$.
For given $ {\bf d}$ and $\hat{{\bf d}}$, let $\mbox{$\cal{A}$}^* \triangleq \{j : d_j \neq 0 \}$ and $\mbox{$\cal{A}$} \triangleq \{j : \hat{d}_j \neq 0 \}$. We further define $\mbox{$\cal{A}$}_1 \triangleq \mbox{$\cal{A}$}^* \setminus \mbox{$\cal{A}$}$, $\mbox{$\cal{A}$}_2 \triangleq \mbox{$\cal{A}$} \setminus \mbox{$\cal{A}$}^*$, and
\begin{align*}
T_{\mbox{$\cal{A}$}} & \triangleq \|{\bf Y}^a - \sum_{j \in \mbox{$\cal{A}$}} {\bf S}_j \|^2 - \|{\bf Y}^a - \sum_{j \in \mbox{$\cal{A}$}^*} {\bf S}_j \|^2.
\end{align*}
Using the analysis that led to~\cite[eq.~(67)]{ChenCG17}, we obtain
\begin{align}
\text{Pr} (\mbox{$\cal{E}$}_{\kappa_1,\kappa_2}|{\bf D}^a = {\bf d}) & \leq \binom{|\mbox{$\cal{A}$}^*|}{\kappa_1} \binom{\ell_n}{\kappa_2} \mathrm{E}_{\underline{{\bf S}}_{\mbox{$\cal{A}$}^*}, {\bf Y}} \{ [\mathrm{E}_{\underline{{\bf S}}_{\mbox{$\cal{A}$}_2}}\{ \I{T_{\mbox{$\cal{A}$}} \leq 0} |\underline{{\bf S}}_{\mbox{$\cal{A}$}^*}, {\bf Y} \}]^{\rho}\} \notag \\
& \leq \binom{|\mbox{$\cal{A}$}^*|}{\kappa_1} \binom{\ell_n}{\kappa_2} \left(\frac{1}{\mu} \right)^{\rho \kappa_2} \mathrm{E}_{\underline{{\bf S}}_{\mbox{$\cal{A}$}^*}, {\bf Y}} \{ [\mathrm{E}_{\underline{\tilde{{\bf S}}}_{\mbox{$\cal{A}$}_2}}\{ \I{T_{\mbox{$\cal{A}$}} \leq 0} |\underline{{\bf S}}_{\mbox{$\cal{A}$}^*}, {\bf Y} \}]^{\rho}\} \notag\\
& \leq \binom{|\mbox{$\cal{A}$}^*|}{\kappa_1} \binom{\ell_n}{\kappa_2} \left(\frac{1}{\mu}\right)^{|\mbox{$\cal{A}$}^*|} \left(\frac{1}{\mu} \right)^{\rho \kappa_2} \mathrm{E}_{\underline{\tilde{{\bf S}}}_{\mbox{$\cal{A}$}^*}, {\bf Y}} \{ [\mathrm{E}_{\underline{\tilde{{\bf S}}}_{\mbox{$\cal{A}$}_2}}\{ \I{T_{\mbox{$\cal{A}$}} \leq 0} |\underline{\tilde{{\bf S}}}_{\mbox{$\cal{A}$}^*}, {\bf Y} \}]^{\rho} \} \label{Eq_detect_err_new_distr2}
\end{align}
where in the second inequality we used that
\begin{align}
{\bf q}( \underline{{\bf s}}_{\mbox{$\cal{A}$}_2}) \leq \left( \frac{1}{\mu}\right)^{\kappa_2}\prod_{i \in \mbox{$\cal{A}$}_2} \tilde{{\bf q}}({\bf s}_i ) \label{Eq_Q_uppr1}
\end{align}
and in the third inequality we used that
\begin{align}
{\bf q}(\underline{{\bf s}}_{\mbox{$\cal{A}$}^*}) \leq \left( \frac{1}{\mu}\right)^{|\mbox{$\cal{A}$}^*|}\prod_{i \in \mbox{$\cal{A}$}^*} \tilde{{\bf q}}({\bf s}_i ). \label{Eq_Q_uppr2}
\end{align}
Here, \eqref{Eq_Q_uppr1} and~\eqref{Eq_Q_uppr2} follow from~\eqref{Eq_prob_signt_uppr}.
For every $\rho \in [0,1]$ and $\lambda \geq 0$, we obtain from~\cite[eq.~(78)]{ChenCG17} that
\edit{
\begin{align}
\binom{|\mbox{$\cal{A}$}^*|}{\kappa_1} \binom{\ell_n}{\kappa_2} \mathrm{E}_{\underline{\tilde{{\bf S}}}_{\mbox{$\cal{A}$}^*}, {\bf Y}} \{ [\mathrm{E}_{\underline{\tilde{{\bf S}}}_{\mbox{$\cal{A}$}_2}}\{ \I{T_{\mbox{$\cal{A}$}} \leq 0} |\underline{\tilde{{\bf S}}}_{\mbox{$\cal{A}$}^*}, {\bf Y} \}]^{\rho} \}& \leq \exp[ -\tilde{E}_n g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) ] \label{Eq_detect_prob_old_distrb}
\end{align}
}
where
\begin{align}
\tilde{E}_n \triangleq & E_n''/2, \notag \\
g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) \triangleq & -\frac{(1-\rho)}{2 \tilde{E}_n} \log (1+\lambda \kappa_2 \tilde{E}_n/n'') + \frac{n''}{2\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) \notag \\
& - \frac{|{\bf d}|}{\tilde{E}_n} H_2\left(\frac{\kappa_1}{|{\bf d}|}\right) - \frac{\rho \ell_n}{\tilde{E}_n} H_2\left(\frac{\kappa_2}{\ell_n}\right). \label{Eq_err_exp}
\end{align}
It thus follows from~\eqref{Eq_detect_err_new_distr2} and \eqref{Eq_detect_prob_old_distrb} that
\begin{align}
\text{Pr} (\mbox{$\cal{E}$}_{\kappa_1,\kappa_2}|{\bf D}^a = {\bf d}) & \leq \left(\frac{1}{\mu}\right)^{|\mbox{$\cal{A}$}^*| + \rho\kappa_2} \exp[ -\tilde{E}_n g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) ].\label{Eq_err_mu_uppr}
\end{align}
\edit{We next} show that the RHS of~\eqref{Eq_err_mu_uppr} vanishes as $n \to \infty$. To this end, we first show that $\left(\frac{1}{\mu}\right)^{|\mbox{$\cal{A}$}^*| + \rho\kappa_2} \to 1$ as $n \to \infty$ uniformly in $(\kappa_1, \kappa_2) \in \mbox{$\cal{W}$}^{\ell_n}_{{\bf d}}$ and ${\bf d} \in \mbox{$\cal{B}$}^n(v_n) $.
From the definition of $\mu$, we have
\begin{align*}
\mu & = 1 - \text{Pr} \left(\|\tilde{{\bf S}}_1\|_2^2 > E_n''\right).
\end{align*}
Furthermore, by defining $\tilde{{\bf S}}_0 \triangleq \frac{2 n''}{E_n''} \|\tilde{{\bf S}}_1\|_2^2$ and following the steps that led to~\eqref{Eq_mu_uppr2}, we obtain that, \edit{for $(\kappa_1, \kappa_2) \in \mbox{$\cal{W}$}^{\ell_n}_{{\bf d}}$ and ${\bf d} \in \mbox{$\cal{B}$}^n(v_n) $},
\begin{align}
1 & \leq \left(\frac{1}{\mu}\right)^{|\mbox{$\cal{A}$}^*| + \rho\kappa_2} \notag \\
& \leq \edit{\left(\frac{1}{\mu}\right)^{2 v_n}} \notag\\
& \leq \edit{\left(1 - \exp \left[-\frac{n''}{2} \tau \right]\right)^{-2 v_n}} \label{Eq_mu_uppr}
\end{align}
where $\tau = (1 - \ln 2)$. Here, in the second inequality we used that \edit{$|\mbox{$\cal{A}$}^*| =|{\bf d}| \leq v_n$ and $\rho \kappa_2 \leq v_n$.
Since $k_n \log \ell_n = O(n)$, we have $k_n = o(n)$. This implies that, for every fixed $c''>0$, we have $v_n = o(n)$ because $v_n = \Theta(k_n)$.}
Furthermore, $n'' = \Theta(n)$.
As noted before, for any two non-negative sequences $\{a_n\}$ and $\{b_n\}$ satisfying $a_n\to 0$ and $a_nb_n \to 0$ as $n \to \infty$, it holds that $(1-a_n)^{-b_n} \to 1$ as $n \to \infty$. It follows that the RHS of~\eqref{Eq_mu_uppr} tends to one as $n \to \infty$ uniformly in $(\kappa_1, \kappa_2)\in \mbox{$\cal{W}$}_{{\bf d}}^{\ell_n}$ and ${\bf d} \in \mbox{$\cal{B}$}^n(v_n)$. Consequently, there exists a positive constant $n_0$ that is independent of $\kappa_1$, $\kappa_2$, and ${\bf d}$ and satisfies
\begin{align}
\left( \frac{1}{\mu} \right)^{|\mbox{$\cal{A}$}^*| + \rho\kappa_2} & \leq 2, \quad (\kappa_1, \kappa_2)\in \mbox{$\cal{W}$}_{{\bf d}}^{\ell_n}, {\bf d} \in \mbox{$\cal{B}$}^n(v_n), n \geq n_0. \label{Eq_mu_lowr}
\end{align}
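As a quick numerical illustration (not part of the proof), the convergence of the RHS of~\eqref{Eq_mu_uppr} to one can be observed for growing $n$; the choices $n''=n$ and $v_n=\sqrt{n}$ below are illustrative instances of $n''=\Theta(n)$ and $v_n=o(n)$:

```python
import math

tau = 1 - math.log(2)                 # tau = 1 - ln 2, as in the proof

def mu_bound(n, v):
    """(1 - exp(-n tau / 2))^(-2 v): the upper bound in Eq_mu_uppr."""
    return (1 - math.exp(-n * tau / 2)) ** (-2 * v)

vals = [mu_bound(n, math.sqrt(n)) for n in (10, 100, 1000)]
assert all(v >= 1 for v in vals)      # the bound never drops below 1
assert vals[-1] - 1 < 1e-6            # and it approaches 1 as n grows
```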
\edit{To bound the exponential term on the RHS of \eqref{Eq_err_mu_uppr}, we need the following lemma.
\begin{lemma}
\label{Lem_err_exp}
If $k_n \log \ell_n = O(n)$, and if $c'$ and $c''$ are sufficiently large, then there exist two positive constants $\gamma >0$ and $n'_0$ such that $g^n_{\frac{2}{3}, \frac{3}{4}}(\kappa_1,\kappa_2, {\bf d}) $, i.e., \eqref{Eq_err_exp} evaluated at $\lambda=2/3$ and $\rho=3/4$, satisfies
\begin{align}
\min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{(\kappa_1,\kappa_2) \in \mbox{$\cal{W}$}^{\ell_n}_{{\bf d}}} g^n_{\frac{2}{3}, \frac{3}{4}}(\kappa_1,\kappa_2, {\bf d}) \geq \gamma, \quad n \geq n'_0. \label{Eq_detect_err_exp}
\end{align}
\end{lemma}
\begin{proof}
See Appendix~\ref{Sec_err_exp}.
\end{proof}
}
\edit{Lemma~\ref{Lem_err_exp}} implies that $\text{Pr} (\mbox{$\cal{E}$}_{\kappa_1,\kappa_2}| {\bf D}^a = {\bf d})$ vanishes as $n \to\infty$ uniformly in ${\bf d} \in \mbox{$\cal{B}$}^n(v_n)$. Indeed, if ${\bf d} \in \mbox{$\cal{B}$}^n(v_n)$, then $|{\bf d}| \leq v_n$, which implies that $\kappa_1 \leq v_n$. Furthermore, since the decoder outputs a vector in $\mbox{$\cal{B}$}^n(v_n)$, we also have $\kappa_2 \leq v_n$. It thus follows from \eqref{Eq_err_mu_uppr}, \eqref{Eq_mu_lowr}, and \eqref{Eq_detect_err_exp} that
\begin{align}
\text{Pr} (\mbox{$\cal{E}$}_d| {\bf D}^a = {\bf d}) &= \sum_{(\kappa_1,\kappa_2) \in \mbox{$\cal{W}$}_{{\bf d}}^{\ell_n}} \text{Pr} (\mbox{$\cal{E}$}_{\kappa_1,\kappa_2}| {\bf D}^a = {\bf d}) \notag \\
& \leq 2 v_n^2 \exp[-\tilde{E}_n \gamma ] \notag \\
& = 2 \exp\left[ -\tilde{E}_n \left(\gamma - \frac{ 2 \ln v_n}{\tilde{E}_n}\right)\right], \quad {\bf d} \in \mbox{$\cal{B}$}^n(v_n), n \geq \max(n_0,n'_0). \label{Eq_detect_err_uppr}
\end{align}
By the definition of $v_n$ and $\tilde{E}_n$,
\edit{
\begin{equation}
\frac{2\ln v_n}{\tilde{E}_n} = \frac{4 \ln (1+c'')}{b c_n \ln \ell_n} + \frac{ 4 \ln k_n}{b c_n \ln \ell_n}. \label{Eq_sn_En2}
\end{equation}
The first term on the RHS of~\eqref{Eq_sn_En2} vanishes as $n \to \infty$ since $\ell_n$ is unbounded. The second term on the RHS of~\eqref{Eq_sn_En2} is upper-bounded by $4/(b c_n)$ since $\ln k_n \leq \ln \ell_n$. This vanishes as $c_n\to\infty$ (which is the case when $k_n\log\ell_n=o(n)$), or it can be made arbitrarily small by choosing $c_n=c'$ sufficiently large (when $k_n\log\ell_n=\Theta(n)$). Consequently, we obtain that $\frac{2\ln v_n}{\tilde{E}_n} < \gamma$ for sufficiently large $n$ and $c_n$, which implies that the RHS of~\eqref{Eq_detect_err_uppr} tends to zero as $n \to \infty$.
}
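The decomposition in~\eqref{Eq_sn_En2} and the subsequent bound can be verified for sample values (illustrative only; here $\tilde{E}_n = b c_n \ln \ell_n/2$ and $v_n = k_n(1+c'')$ as defined above):

```python
import math

# Illustrative parameter values
b, c_n, k_n, ell_n, c2 = 2.0, 5.0, 30, 10**4, 3.0
v_n = k_n * (1 + c2)
E_tilde = b * c_n * math.log(ell_n) / 2

lhs = 2 * math.log(v_n) / E_tilde
rhs = (4 * math.log(1 + c2) + 4 * math.log(k_n)) / (b * c_n * math.log(ell_n))
assert abs(lhs - rhs) < 1e-12                      # Eq_sn_En2 as an identity

# the second term is at most 4/(b c_n) because ln k_n <= ln ell_n
assert 4 * math.log(k_n) / (b * c_n * math.log(ell_n)) <= 4 / (b * c_n)
```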
We finish the proof of Lemma~\ref{Lem_usr_detect} by analyzing the third term on the RHS of~\eqref{Eq_detect_err_uoor}, namely, $\text{Pr} (\mbox{$\cal{E}$}_d| |{\bf D}^a| = 0) \text{Pr} (|{\bf D}^a| = 0)$. \edit{Since $ \text{Pr} (|{\bf D}^a| = 0)= \left((1-\alpha_n)^{\frac{1}{\alpha_n}}\right)^{k_n}$, this term is given by}
\begin{align*}
\text{Pr} (\mbox{$\cal{E}$}_d| |{\bf D}^a| = 0) \left((1-\alpha_n)^{\frac{1}{\alpha_n}}\right)^{k_n}
\end{align*}
and vanishes if $k_n$ is unbounded. Next we show that this term also vanishes when $k_n$ is bounded. When $|{\bf D}^a| = 0$, an error occurs only if there are false alarms. For $\kappa_2$ false alarms, let $\bar{{\bf S}} \triangleq \sum_{j=1}^{\kappa_2} {\bf S}_j$, and let $S'_i$ denote the $i$-th component of $\bar{{\bf S}}$. From~\cite[eq.~(303)]{ChenCG17}, we obtain the following upper bound on the probability that there are $\kappa_2$ false alarms when $|{\bf D}^a| = 0$:
\begin{align*}
P(\mbox{$\cal{E}$}_{\kappa_2}| |{\bf d}| =0) & \leq \binom{\ell_n}{\kappa_2} \mathrm{E}_{\underline{{\bf S}}_{\mbox{$\cal{A}$}_2}} \left[ \text{Pr} \left\{ \sum_{i=1}^{n''} Z_i S'_i\geq \frac{1}{2} \|\bar{{\bf S}} \|^2 \right\} \bigg| \bar{{\bf S}} \right] \notag \\
& \leq \left(\frac{1}{\mu}\right)^{\kappa_2} \binom{\ell_n}{\kappa_2} \mathrm{E}_{\tilde{\underline{{\bf S}}}_{\mbox{$\cal{A}$}_2}} \left[ \text{Pr} \left\{ \sum_{i=1}^{n''} Z_i S'_i \geq \frac{1}{2} \|\bar{{\bf S}} \|^2 \right\} \bigg|\bar{{\bf S}} \right]
\end{align*}
where in the last inequality we used~\eqref{Eq_prob_signt_uppr}. By following the analysis that led to~\cite[eq.~(309)]{ChenCG17}, we obtain
\begin{align}
P(\mbox{$\cal{E}$}_{\kappa_2}| |{\bf d}| =0) & \leq \left(\frac{1}{\mu}\right)^{\kappa_2} \exp \left[ -\tilde{E}_n (q'_n(\kappa_2) - u_n'(\kappa_2) ) \right] \notag
\end{align}
where
\begin{align}
q'_n(\kappa_2) &\triangleq \frac{n''}{2\tilde{E}_n}\log \left( 1+ \frac{\kappa_2 \tilde{E}_n}{4 n''} \right) \notag
\end{align}
and
\begin{align}
u'_n(\kappa_2) & \triangleq \frac{ \ell_n}{\tilde{E}_n} H_2\left(\frac{\kappa_2}{\ell_n}\right). \notag
\end{align}
\edit{As in~\eqref{Eq_mu_lowr}}, we upper-bound $ \left(\frac{1}{\mu}\right)^{\kappa_2} \leq 2$ uniformly in $\kappa_2$ for $n \geq n_0$. Furthermore, we observe that the behaviours of $q'_n(\kappa_2)$ and $u'_n(\kappa_2)$ are similar to those of $q_n(\kappa_2)$ and $u_n(\kappa_2)$ given \edit{later} in~\eqref{Eq_qn} and in~\eqref{Eq_sn}, respectively. \edit{So by following those steps}, we can show that
\begin{align}
\liminf_{n \to \infty} \min_{1 \leq \kappa_2 \leq v_n} q'_n(\kappa_2) > 0 \notag
\end{align}
and
\begin{align}
\lim_{n \to \infty} \min_{1 \leq \kappa_2 \leq v_n} \frac{u'_n(\kappa_2)}{ q'_n(\kappa_2)} <1. \notag
\end{align}
It follows that there exist positive constants $\tau'$ and $ \tilde{n}_0$ such that
\begin{align}
P(\mbox{$\cal{E}$}_d | |{\bf d}| =0) & = \sum_{\kappa_2=1}^{v_n}P(\mbox{$\cal{E}$}_{\kappa_2} | |{\bf d}| =0) \notag \\
& \leq 2 v_n \exp \left[ -\tilde{E}_n \tau' \right], \quad n\geq \max(n_0, \tilde{n}_0). \notag
\end{align}
We have already shown that $v_n^2 \exp [ -\tilde{E}_n \gamma ]$ vanishes as $n \to \infty$ (cf.\ \eqref{Eq_detect_err_uppr}--\eqref{Eq_sn_En2}); the same argument with $\gamma$ replaced by $\tau'$ shows that $2 v_n \exp [ -\tilde{E}_n \tau' ]$ vanishes, too, as $n \to \infty$. It thus follows that $P(\mbox{$\cal{E}$}_d | |{\bf d}| =0)$ tends to zero as $n \to \infty$. This was the last step required to prove Lemma~\ref{Lem_usr_detect}.
\subsection{Proof of Lemma~\ref{Lem_err_exp}}
\label{Sec_err_exp}
We first note that
\begin{align}
\min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{(\kappa_1,\kappa_2) \in \mbox{$\cal{W}$}^{\ell_n}_{{\bf d}}} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) & = \min \{ \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} g^n_{\lambda, \rho}(\kappa_1,0,{\bf d}), \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ 1 \leq \kappa_2 \leq v_n } g^n_{\lambda, \rho}(0,\kappa_2, {\bf d}), \notag \\
& \qquad \qquad \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ \substack{ 1 \leq \kappa_1 \leq v_n \\ 1 \leq \kappa_2 \leq v_n}} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) \}. \label{Eq_inf_gn}
\end{align}
Then, we show that, for $\lambda = 2/3$ and $\rho = 3/4$,
\begin{align}
\liminf_{n \rightarrow \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} g^n_{\lambda, \rho}(\kappa_1, 0,{\bf d}) & > 0 \label{Eq_w1_lowr} \\
\liminf_{n \rightarrow \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ 1 \leq \kappa_2 \leq v_n } g^n_{\lambda, \rho}(0,\kappa_2,{\bf d}) & > 0 \label{Eq_w2_lowr} \\
\liminf_{n \rightarrow \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ \substack{ 1 \leq \kappa_1 \leq v_n \\ 1 \leq \kappa_2 \leq v_n}} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) & > 0 \label{Eq_w1w2_lowr}
\end{align}
from which Lemma~\ref{Lem_err_exp} follows.
In order to prove \eqref{Eq_w1_lowr}--\eqref{Eq_w1w2_lowr}, we first lower-bound $g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d})$ by using that, for $0 \leq \lambda \rho \leq 1$, we have
\begin{align}
& 2 \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) \notag \\
& \qquad \qquad \geq \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right) + \log \left(1+\lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right). \label{Eq_log_lowr}
\end{align}
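The inequality~\eqref{Eq_log_lowr} is an instance of $2\log(1+a+b) \geq \log(1+a) + \log(1+b)$ for $a, b \geq 0$, which holds because $(1+a+b)^2 \geq (1+a)(1+b)$. A quick numerical check (illustrative values only, not part of the proof):

```python
import itertools
import math

def lhs(a, b):
    return 2 * math.log2(1 + a + b)

def rhs(a, b):
    return math.log2(1 + a) + math.log2(1 + b)

# (1 + a + b)^2 = (1 + a)(1 + b) + a + b + ab + a^2 + b^2 >= (1 + a)(1 + b)
for a, b in itertools.product([0.0, 0.1, 1.0, 10.0, 1e3], repeat=2):
    assert lhs(a, b) >= rhs(a, b)
```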
Using~\eqref{Eq_log_lowr} in the second term on the RHS of~\eqref{Eq_err_exp}, we obtain that
\begin{align}
g^n_{\lambda, \rho}(\kappa_1, \kappa_2,{\bf d}) & \geq a^n_{\lambda, \rho}(\kappa_1,{\bf d}) + b^{n}_{\lambda, \rho}(\kappa_2)
\label{Eq_gn_lowr}
\end{align}
where
\begin{align*}
a^n_{\lambda, \rho}(\kappa_1,{\bf d}) \triangleq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) - \frac{|{\bf d}|}{\tilde{E}_n} H_2\left(\frac{\kappa_1}{|{\bf d}|}\right)
\end{align*}
and
\begin{align*}
b^{n}_{\lambda, \rho}(\kappa_2) \triangleq \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right) -\frac{(1-\rho)}{2 \tilde{E}_n} \log (1+\lambda \kappa_2 \tilde{E}_n/n'') - \frac{\rho \ell_n}{\tilde{E}_n} H_2\left(\frac{\kappa_2}{\ell_n}\right).
\end{align*}
\subsubsection{Proof of~\eqref{Eq_w1_lowr}}
We have
\begin{align}
g^n_{\lambda, \rho}(\kappa_1, 0,{\bf d}) & \geq a^n_{\lambda, \rho}(\kappa_1,{\bf d}) + b^{n}_{\lambda, \rho}(0) \notag \\
& \geq a^n_{\lambda, \rho}(\kappa_1,{\bf d}) \label{Eq_gn_w1_lowr}
\end{align}
by~\eqref{Eq_gn_lowr} and \edit{because} $b^{n}_{\lambda, \rho}(0) =0$.
Consequently,
\begin{align*}
\min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} g^n_{\lambda, \rho}(\kappa_1, 0,{\bf d}) & \geq \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} a^n_{\lambda, \rho}(\kappa_1,{\bf d})
\end{align*}
\edit{and}~\eqref{Eq_w1_lowr} follows by showing that
\begin{align}
\liminf_{n \rightarrow \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} a^n_{\lambda, \rho}(\kappa_1,{\bf d}) > 0. \label{Eq_first_lowr}
\end{align}
To this end, let
\begin{align*}
i_n(\kappa_1) & \triangleq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) \\
j_n(\kappa_1,{\bf d}) & \triangleq \frac{|{\bf d}|}{\tilde{E}_n} H_2\left(\frac{\kappa_1}{|{\bf d}|}\right)
\end{align*}
so that
\begin{align}
a^n_{\lambda, \rho}(\kappa_1,{\bf d}) = i_n(\kappa_1) \left(1- \frac{j_n(\kappa_1,{\bf d})}{i_n(\kappa_1)}\right). \label{Eq_an}
\end{align}
Note that
\begin{equation}
i_n(\kappa_1) \geq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right), \quad 1 \leq \kappa_1 \leq v_n \label{Eq_in_lowr}
\end{equation}
and
\begin{align}
\frac{j_n(\kappa_1,{\bf d})}{i_n(\kappa_1)} & = \frac{4 |{\bf d}| H_2\left(\frac{\kappa_1}{|{\bf d}|}\right)}{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) } \notag \\
& = \frac{4 \kappa_1 \log (|{\bf d}|/\kappa_1) + 4 |{\bf d}|(\kappa_1/|{\bf d}| - 1) \log (1 -\kappa_1/|{\bf d}|) }{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right) } \label{Eq_jn_in_ratio}.
\end{align}
Next, we upper-bound $(\kappa_1/|{\bf d}| - 1) \log (1 -\kappa_1/|{\bf d}|)$. To this end, we note that the function $f(p) = p - (p-1) \ln (1-p)$, $0 \leq p < 1$ satisfies $f(0)=0$ and is monotonically increasing in $p$, since $f'(p) = -\ln(1-p) \geq 0$. \edit{It follows that} $(p-1) \ln (1-p) \leq p$, $0\leq p < 1$ (which extends to $p=1$ by continuity), which for $p=\kappa_1/|{\bf d}|$ gives
\begin{align}
(\kappa_1/|{\bf d}| - 1) \log (1 -\kappa_1/|{\bf d}|) \leq (\log e) \kappa_1/|{\bf d}|. \label{Eq_fn_uppr}
\end{align}
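A quick numerical check of~\eqref{Eq_fn_uppr} (illustrative, not part of the proof):

```python
import math

log2e = math.log2(math.e)
for i in range(1, 1000):
    p = i / 1000.0                     # p = kappa_1 / |d| in (0, 1)
    # (p - 1) log2(1 - p) <= (log2 e) p, i.e., f(p) = p - (p-1) ln(1-p) >= 0
    assert (p - 1) * math.log2(1 - p) <= log2e * p
```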
Using~\eqref{Eq_fn_uppr} in~\eqref{Eq_jn_in_ratio}, we obtain that
\begin{align}
\frac{j_n(\kappa_1,{\bf d})}{i_n(\kappa_1)}
& \leq \frac{ 4 \log (|{\bf d}|/\kappa_1) + 4 \log e}{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) \kappa_1 \tilde{E}_n/n''\right)/\kappa_1 } \notag \\
& \leq \frac{4 \log (|{\bf d}|/\kappa_1) +4 \log e }{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)/v_n} \notag \\
& \edit{\leq \frac{v_n (4 \log (v_n) +4 \log e) }{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}} \label{Eq_new1} \\
& = \frac{4 \log v_n + 4 \log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \label{Eq_ratio_uppr4}
\end{align}
where the second inequality follows because $\frac{\log (1+x)}{x}$ is monotonically decreasing in $x > 0$, and the subsequent inequality follows because $|{\bf d}| \leq v_n$ and $1 \leq \kappa_1 \leq v_n$. Combining~\eqref{Eq_an}, \eqref{Eq_in_lowr}, and~\eqref{Eq_ratio_uppr4}, $ a^n_{\lambda, \rho}(\kappa_1,{\bf d})$ can thus be lower-bounded by
\begin{align}
a^n_{\lambda, \rho}(\kappa_1,{\bf d}) &\geq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right)\left(1- \frac{4 \log v_n + 4 \log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \right). \label{Eq_an_lowr}
\end{align}
Note that the RHS of~\eqref{Eq_an_lowr} is independent of $\kappa_1$ and ${\bf d}$.
\edit{To show~\eqref{Eq_first_lowr}, we consider the following two cases:
\subsubsection*{Case 1---$k_n \log \ell_n = o(n)$} Recall that if $k_n \log \ell_n = o(n)$, then $ \tilde{E}_n = bc_n \ln \ell_n/2$.
}
It follows that the term
\begin{align}
\frac{ v_n \tilde{E}_n}{n''} & = \frac{ (1+c'')}{2} c_n \frac{k_n \ln \ell_n}{n} \label{Eq_En_sn_zero2}
\end{align}
tends to zero as $n \to \infty$ since $c_n = \ln \left(\frac{n}{k_n \ln \ell_n}\right)$ and $\frac{k_n \ln \ell_n}{n} = o(1)$ by assumption. \edit{This also implies that $\tilde{E}_n/n'' \to 0$ as $n \to \infty$ since $v_n =\Omega(1)$.} We further have that $\tilde{E}_n\to\infty$ and \edit{$\frac{\log v_n}{\tilde{E}_n} \to 0$ as $n \to \infty$ since $v_n = \Theta(k_n)$ and $\tilde{E}_n = \omega(\log \ell_n)$.} It follows that
\begin{align}
& \liminf_{n \to \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} a^n_{\lambda, \rho}(\kappa_1,{\bf d}) \notag \\
& \qquad \qquad \geq \lim_{n \to \infty} \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right) \lim_{n \to \infty} \left(1- \frac{ 4\log v_n + 4\log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \right) \notag \\
& \qquad \qquad= \frac{(\log e) \; \lambda \rho (1-\lambda \rho)}{4} \label{Eq_new7}
\end{align}
which for $\lambda=2/3$ and $\rho = 3/4$ is equal to $(\log e)/16$.
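The value $(\log e)/16$ can be double-checked numerically ($\lambda\rho = 1/2$ for $\lambda = 2/3$ and $\rho = 3/4$; illustrative check only):

```python
import math

lam, rho = 2 / 3, 3 / 4
lr = lam * rho                               # lambda * rho = 1/2
value = math.log2(math.e) * lr * (1 - lr) / 4
assert abs(value - math.log2(math.e) / 16) < 1e-15

# the prefactor is also the small-x limit of (1/(4x)) log2(1 + lr (1 - lr) x)
x = 1e-8
approx = math.log1p(lr * (1 - lr) * x) / math.log(2) / (4 * x)
assert abs(approx - value) < 1e-6
```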
\edit{\subsubsection*{Case 2---$k_n\log \ell_n = \Theta(n)$} To analyze this case, we first note that, by the definition of $v_n$, and because $\log k_n\leq \log\ell_n$, the numerator in \eqref{Eq_new1} satisfies $v_n (4 \log (v_n) +4 \log e) = O(k_n\log \ell_n)$. Since $k_n\log \ell_n = \Theta(n)$, and since $n'' =\Theta(n)$, this further implies that there exist $a_2>0$ and $\tilde{n}'_0>0$ such that
\begin{align}
\frac{v_n (4 \log (v_n) +4 \log e) }{n''} \leq a_2, \quad n \geq \tilde{n}'_0. \label{Eq_new8}
\end{align}
We have
\begin{align}
a^n_{\lambda, \rho}(\kappa_1,{\bf d}) &\geq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right)\left(1- \frac{v_n (4 \log (v_n) +4 \log e) }{n'' \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)} \right). \label{Eq_new9}
\end{align}
The RHS of \eqref{Eq_new9} is independent of $\kappa_1$ and ${\bf d}$. We next show that it is bounded away from zero.
Recall that, if $k_n\log\ell_n = \Theta(n)$, then $\tilde{E}_n = b c' \ln\ell_n /2$. We then choose $c'$ sufficiently large such that, for some $\tilde{n}''_0\geq \tilde{n}_0'$,
\begin{equation}
\label{eq:Lemma14_make_a_choice}
\log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right) >a_2, \quad n \geq \tilde{n}''_0.
\end{equation}
Such a choice is possible since we have
\begin{equation*}
\frac{v_n \tilde{E}_n}{n''} = \frac{1+c''}{2} c' \frac{k_n\ln \ell_n}{n}
\end{equation*}
and, by assumption, $\frac{k_n\ln\ell_n}{n}=\Theta(1)$. Consequently, for sufficiently large $n$, $v_n\tilde{E}_n/n''$ is monotonically increasing in $c'$ and ranges from zero to infinity.
Combining \eqref{eq:Lemma14_make_a_choice} with \eqref{Eq_new8} implies that the expression inside the large parentheses on the RHS of \eqref{Eq_new9} is bounded away from zero for $n \geq \tilde{n}''_0$.
We next consider the remaining term on the RHS of \eqref{Eq_new9}. To this end, we note that, if $k_n$ is unbounded, then the assumption $k_n\log\ell_n=\Theta(n)$ implies that $\ln\ell_n=o(n)$. It follows that $\frac{\tilde{E}_n}{n''}=c'\frac{\ln\ell_n}{2n}\to 0$ as $n\to\infty$. If $k_n$ is bounded, then $\ln\ell_n=\Theta(n)$, so $\frac{\tilde{E}_n}{n''}$ is bounded. In both cases, $\frac{n''}{4\tilde{E}_n} \log(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n'')$ tends to a positive value as $n \to \infty$.
Applying the above lines of argument to \eqref{Eq_new9}, we obtain that
\begin{align}
\liminf_{n \to \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_1 \leq v_n} a^n_{\lambda, \rho}(\kappa_1,{\bf d}) >0 \label{Eq_new3}
\end{align}
which concludes the analysis of the second case.
The claim \eqref{Eq_w1_lowr} follows now by combining the above two cases, i.e., \eqref{Eq_new7} and~\eqref{Eq_new3}.
}
\subsubsection{Proof of~\eqref{Eq_w2_lowr}}
Since $a^{n}_{\lambda, \rho}(0,{\bf d}) =0 $, we have that
\begin{align*}
\min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{1 \leq \kappa_2 \leq v_n} g^n_{\lambda, \rho}(0, \kappa_2,{\bf d}) & \geq \min_{1 \leq \kappa_2 \leq v_n} b^n_{\lambda, \rho}(\kappa_2).
\end{align*}
Thus, \eqref{Eq_w2_lowr} follows by showing that
\begin{align}
\liminf_{n \rightarrow \infty} \min_{1 \leq \kappa_2 \leq v_n} b^n_{\lambda, \rho}(\kappa_2) > 0. \label{Eq_w2_lowr1}
\end{align}
To prove \eqref{Eq_w2_lowr1}, we define
\begin{align}
q_n(\kappa_2) & \triangleq \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right) \label{Eq_qn} \\
r_n(\kappa_2) & \triangleq \frac{(1-\rho)}{2 \tilde{E}_n} \log (1+\lambda \kappa_2 \tilde{E}_n/n'') \\
u_n(\kappa_2) & \triangleq \frac{\rho \ell_n}{\tilde{E}_n} H_2\left(\frac{\kappa_2}{\ell_n}\right). \label{Eq_sn}
\end{align}
Then,
\begin{align}
b^n_{\lambda, \rho}(\kappa_2) & = q_n(\kappa_2) \left(1- \frac{r_n(\kappa_2)}{q_n(\kappa_2)} - \frac{u_n(\kappa_2)}{q_n(\kappa_2)}\right). \label{Eq_bn_lowr}
\end{align}
Note that
\begin{equation}
q_n(\kappa_2) \geq \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right), \quad 1 \leq \kappa_2 \leq v_n. \label{Eq_qn_lowr}
\end{equation}
Furthermore,
\begin{align}
\frac{r_n(\kappa_2)}{ q_n(\kappa_2)} &= \frac{\frac{(1-\rho)}{2 \tilde{E}_n} \log (1+\lambda \kappa_2 \tilde{E}_n/n'')}{\frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right)} \notag \\
& \leq \frac{\frac{(1-\rho)}{2 \tilde{E}_n} \log (1+\lambda v_n \tilde{E}_n/n'')}{\frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)} \notag \\
& = \frac{\frac{(1-\rho)v_n}{2 n'' } \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{4\tilde{E}_n/n''}}, \quad 1 \leq \kappa_2 \leq v_n. \label{Eq_rn_qn_ratio}
\end{align}
Finally,
\begin{align}
\frac{u_n(\kappa_2)}{ q_n(\kappa_2)} & = \frac{4 \rho \ell_n H_2\left(\frac{\kappa_2}{\ell_n}\right)}{n'' \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right)} \notag \\
& = \frac{4 \rho \left[ \kappa_2 \log (\ell_n/\kappa_2) + \ell_n (\kappa_2/\ell_n -1) \log (1 -\kappa_2/\ell_n) \right] }{n'' \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right)}\notag\\
& \leq \frac{4 \rho \left[ \kappa_2 \log (\ell_n/\kappa_2) + \kappa_2 \log e \right] }{n'' \log \left(1+ \lambda(1-\lambda \rho)\kappa_2 \tilde{E}_n/n'' \right)} \notag\\
& \leq \edit{ \frac{4 \rho v_n \left[ \log \ell_n + \log e \right] }{n'' \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)} }\label{Eq_new2} \\
& = \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}}, \quad 1 \leq \kappa_2 \leq v_n \label{Eq_sn_qn_uppr}
\end{align}
where the first inequality follows from~\eqref{Eq_fn_uppr}.
Combining~\eqref{Eq_qn_lowr}--\eqref{Eq_sn_qn_uppr} with~\eqref{Eq_bn_lowr} yields the lower bound
\begin{align}
b^n_{\lambda, \rho}(\kappa_2) & \geq \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)\left(1 - \frac{\frac{(1-\rho)v_n}{2 n'' } \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{4\tilde{E}_n/n''}} - \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}} \right) \label{Eq_bn_lowr_bnd}
\end{align}
which is independent of $\kappa_2$ and ${\bf d}$.
\edit{To prove \eqref{Eq_w2_lowr}, we first note that, since $k_n = \Omega(1)$, we have $v_n \geq 1 - \lambda\rho$ for $c''$ sufficiently large. Thus, since $x\mapsto\frac{\log (1+x)}{x}$ is monotonically decreasing,
\begin{equation}
\frac{ \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{\tilde{E}_n/n''}} \leq \frac{1}{1-\lambda \rho}, \quad v_n \geq 1-\lambda\rho. \label{Eq_new12}
\end{equation}
Furthermore, the term
\begin{align}
\frac{2(1-\rho)v_n}{n''} & = \frac{2(1-\rho) k_n(1+c'')}{bn} \label{Eq_new11}
\end{align}
vanishes as $n\to\infty$ since $k_n = o(n)$ by the lemma's assumption that $k_n \log \ell_n = O(n)$. Consequently, the RHS of \eqref{Eq_rn_qn_ratio} tends to zero as $n\to\infty$. It follows that
\begin{equation}
\liminf_{n \rightarrow \infty} \min_{1 \leq \kappa_2 \leq v_n} b^n_{\lambda, \rho}(\kappa_2) \geq \lim_{n \to \infty} \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right) \left(1-\limsup_{n\to\infty} \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}}\right). \label{Eq_new10}
\end{equation}
To show that the RHS of \eqref{Eq_new10} is positive, we consider the following two cases:}
\subsubsection*{Case 1---$k_n \log \ell_n = o(n)$} Recall that, in this case, $\tilde{E}_n=b c_n \ln \ell_n /2$ and $c_n\to\infty$ as $n\to\infty$. This implies that $\tilde{E}_n \to \infty$, $\frac{\log \ell_n}{ \tilde{E}_n} \to 0$, and $v_n\tilde{E}_n/n'' \to 0$ as $n \to \infty$. It follows that the RHS of \eqref{Eq_sn_qn_uppr} vanishes as $n \to \infty$, so \eqref{Eq_new10} becomes
\begin{align}
\liminf_{n \rightarrow \infty} \min_{1 \leq \kappa_2 \leq v_n} b^n_{\lambda, \rho}(\kappa_2) & \geq \lim_{n \to \infty} \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right) \notag \\
& = \frac{(\log e) \; \lambda(1-\lambda \rho)}{4} \label{Eq_new10_b}
\end{align}
which for $\lambda= 2/3$ and $\rho =3/4$ is equal to $(\log e)/12$. Here, the last step follows by noting that $v_n \tilde{E}_n/n''\to 0$ implies that $\tilde{E}_n/n''\to 0$ since $v_n=\Omega(1)$.
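In more detail, since $v_n \tilde{E}_n/n'' \to 0$ implies $\tilde{E}_n/n'' \to 0$, the limit can be evaluated by writing $x_n \triangleq \lambda(1-\lambda \rho) \tilde{E}_n/n''$, which vanishes as $n\to\infty$:
\begin{align*}
\frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right) & = \frac{\lambda(1-\lambda \rho)}{4} \cdot \frac{\log (1+x_n)}{x_n} \to \frac{(\log e)\, \lambda(1-\lambda \rho)}{4}
\end{align*}
since $\frac{\log(1+x)}{x} \to \log e$ as $x \to 0$.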
\edit{
\subsubsection*{Case 2---$k_n \log \ell_n = \Theta(n)$} We first note that, in this case, $4 \rho v_n \left[ \log \ell_n + \log e \right] = \Theta(n)$. Thus, there exist two positive constants $a_3$ and $\tilde{n}'''_0$ such that
\begin{align*}
\frac{4 \rho v_n \left[ \log \ell_n + \log e \right] }{n''} & \leq a_3, \quad n \geq \tilde{n}'''_0.
\end{align*}
By the same arguments that demonstrate \eqref{eq:Lemma14_make_a_choice}, we can show that $c'$ can be chosen sufficiently large so that, for some $\bar{n}_0\geq \tilde{n}'''_0$,
\begin{equation*}
\log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right) >a_3, \quad n \geq \bar{n}_0.
\end{equation*}
For such a $c'$, since $\log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right) \geq \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right) > a_3$, the RHS of \eqref{Eq_new2} is strictly less than one. Consequently, the expression inside the large parentheses on the RHS of \eqref{Eq_new10} is bounded away from zero for $n\geq\bar{n}_0$. Furthermore, as noted in the proof of \eqref{Eq_w1_lowr}, when $k_n\log\ell_n=\Theta(n)$, the expression $\frac{n''}{4\tilde{E}_n} \log(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n'')$ tends to a positive limit as $n \to \infty$. It follows that
\begin{align}
\liminf_{n \rightarrow \infty} \min_{1 \leq \kappa_2 \leq v_n} b^n_{\lambda, \rho}(\kappa_2) & > 0 \label{Eq_new4}
\end{align}
which concludes the analysis of the second case.
The claim \eqref{Eq_w2_lowr} follows now by combining the above two cases, i.e., \eqref{Eq_new10_b} and \eqref{Eq_new4}.
}
\subsubsection{Proof of~\eqref{Eq_w1w2_lowr}} We use~\eqref{Eq_gn_lowr}, \eqref{Eq_an_lowr}, and \eqref{Eq_bn_lowr_bnd} to lower-bound
\begin{align}
g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) & \geq \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right)\left(1- \frac{ 4\log v_n + 4\log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \right) \notag \\
& + \frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)\left(1 - \frac{\frac{(1-\rho)v_n}{2 n'' } \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{4\tilde{E}_n/n''}} - \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}} \right) \notag
\end{align}
which is independent of $\kappa_1, \kappa_2$, and ${\bf d}$. \edit{It follows that
\begin{equation*}
\liminf_{n \to \infty} \min_{{\bf d} \in \mbox{$\cal{B}$}^n(v_n)} \min_{ \substack{ 1 \leq \kappa_1 \leq v_n \\ 1 \leq \kappa_2 \leq v_n}} g^n_{\lambda, \rho}(\kappa_1,\kappa_2, {\bf d}) \geq \underline{a} + \underline{b}
\end{equation*}
where
\begin{equation*}
\underline{a} \triangleq \liminf_{n \to \infty}\left\{ \frac{n''}{4\tilde{E}_n} \log \left(1 + \lambda \rho (1-\lambda \rho) \tilde{E}_n/n''\right)\left(1- \frac{ 4\log v_n + 4\log e }{\tilde{E}_n \frac{ \log \left(1 + \lambda \rho (1-\lambda \rho) v_n \tilde{E}_n/n''\right)}{v_n\tilde{E}_n/n''}} \right)\right\}
\end{equation*}
and
\begin{equation*}
\underline{b} \triangleq \liminf_{n \to \infty} \left\{\frac{n''}{4\tilde{E}_n} \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)\left(1 - \frac{\frac{(1-\rho)v_n}{2 n'' } \frac{ \log (1+\lambda v_n \tilde{E}_n/n'')}{\tilde{E}_nv_n/n''}}{ \frac{ \log \left(1+ \lambda(1-\lambda \rho) \tilde{E}_n/n'' \right)}{4\tilde{E}_n/n''}} - \frac{4 \rho \left[ \log \ell_n + \log e \right] }{\tilde{E}_n \frac{ \log \left(1+ \lambda(1-\lambda \rho)v_n \tilde{E}_n/n'' \right)}{v_n\tilde{E}_n/n''}} \right)\right\}.
\end{equation*}
We then obtain \eqref{Eq_w1w2_lowr} by noting that it was shown in the proof of \eqref{Eq_w1_lowr} that $\underline{a}>0$, and in the proof of \eqref{Eq_w2_lowr} that $\underline{b}>0$.
Since \eqref{Eq_w1_lowr}--\eqref{Eq_w1w2_lowr} prove Lemma~\ref{Lem_err_exp}, this concludes the proof.
}
\section{Proof of Lemma~\ref{Lem_energy_bound}}
\label{Append_prob_lemma}
Let $\mbox{$\cal{W}$}$ denote the set of the $(M_n+1)^{\ell_n}$ messages of all users. To prove Lemma~\ref{Lem_energy_bound}, we represent each ${\bf w} \in \mbox{$\cal{W}$}$ using a \edit{length-$\ell_n$} vector such that the $i$-th position of the vector is set to $j$ if user $i$ has message $j$.
The Hamming distance $d_H$ between two messages ${\bf w}=(w_1,\ldots,w_{\ell_n})$ and ${\bf w}'=(w'_1,\ldots,w'_{\ell_n})$ is defined as the number of positions at which ${\bf w}$ differs from ${\bf w}'$, i.e.,
$d_H({\bf w},{\bf w}') \triangleq \left|\{i: w_i\neq w'_i \}\right|$.
We first group the set $\mbox{$\cal{W}$}$ into $\ell_n +1$ subgroups.
Two messages ${\bf w}, {\bf w}' \in \mbox{$\cal{W}$}$ belong to the same subgroup if they have the same number of zeros. Note that all the messages in a subgroup have the same probability, since the probability of a message ${\bf w}$ is determined by the number of zeros in it.
Let $\cT_{t}$ denote the set of messages ${\bf w} \in \mbox{$\cal{W}$}$ with $t$ non-zero entries, where $t=0, \ldots, \ell_n$.
Further let
\begin{align}
\text{Pr}(\cT_{t}) \triangleq \text{Pr}({\bf W} \in \cT_{t})\notag
\end{align}
which can be evaluated as
\begin{equation}
\text{Pr}(\cT_{t}) = (1-\alpha_n)^{\ell_n-t} \left( \frac{\alpha_n}{M_n}\right)^{t} |\cT_{t}|. \label{Eq_type_prob}
\end{equation}
We define
\begin{align}
P_e(\cT_{t}) \triangleq \frac{1}{|\cT_{t}|} \sum_{{\bf w}\in \cT_{t}} P_e({\bf w}) \label{Eq_type_err_prob}
\end{align}
where $P_e({\bf w})$ denotes the probability of error in decoding the set of messages ${\bf w}=(w_1,\ldots,w_{\ell_n})$.
It follows that
\begin{align}
P_{e}^{(n)} & = \sum_{{\bf w} \in \mbox{$\cal{W}$}} \text{Pr}({\bf W} ={\bf w}) P_e({\bf w}) \notag \\
& = \sum_{t=0}^{\ell_n} \sum_{{\bf w}\in \cT_{t}} (1-\alpha_n)^{\ell_n-t} \left( \frac{\alpha_n}{M_n}\right)^{t} |\cT_{t}| \frac{1}{|\cT_{t}|} P_e({\bf w})\notag \\
& = \sum_{t=0}^{\ell_n} \text{Pr}( \cT_{t}) \frac{1}{|\cT_{t}|} \sum_{{\bf w}\in \cT_{t}} P_e({\bf w}) \notag \\
& = \sum_{t=0}^{\ell_n} \text{Pr}(\cT_{t}) P_e(\cT_{t}) \notag \\
& \geq \sum_{t=1}^{\ell_n} \text{Pr}(\cT_{t}) P_e(\cT_{t}) \label{Eq_avg_prob_err3}
\end{align}
where we have used \eqref{Eq_type_prob} and the definition of $P_e(\cT_{t})$ in \eqref{Eq_type_err_prob}.
To prove Lemma~\ref{Lem_energy_bound}, we next show that
\begin{align}
P_e(\cT_{t}) \geq 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n}, \quad t=1,\ldots, \ell_n. \label{Eq_prob_typ_lowr}
\end{align}
To this end, we partition each $\cT_{t}, t=1,\ldots, \ell_n $ into $D_t$ sets $\mbox{$\cal{S}$}_d^t$. For every $ t=1,\ldots, \ell_n $, \edit{we then show that this partition satisfies}
\begin{align}
\frac{1}{|\mbox{$\cal{S}$}_d^t|} \sum_{{\bf w} \in \mbox{$\cal{S}$}_d^t} P_e({\bf w}) \geq 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n}. \label{Eq_avg_prob_lowr}
\end{align}
This yields~\eqref{Eq_prob_typ_lowr} since
\begin{align}
P_e(\cT_{t}) & = \sum_{d=1}^{D_t} \frac{|\mbox{$\cal{S}$}_d^t|}{|\cT_{t}|} \frac{1}{|\mbox{$\cal{S}$}_d^t|} \sum_{{\bf w} \in \mbox{$\cal{S}$}_d^t} P_e({\bf w}).
\end{align}
Before we define the sets $\mathcal{S}_d^t$, we note that
\begin{align}
M_n \geq 2 \label{Eq_seqMn_assum}
\end{align}
since $M_n=1$ would contradict the \edit{lemma's first assumption} that $\dot{R}>0$.
We further have that
\begin{align}
\ell_n \geq 5 \label{Eq_seqln_assum}
\end{align}
\edit{by the lemma's second assumption.}
We next define a partition of $\cT_{t}, t=1,\ldots, \ell_n $ that satisfies the following:
\begin{align}
|\mbox{$\cal{S}$}_d^t| \geq \ell_n +1, \quad d=1, \ldots, D_t \label{Eq_set_lowr}
\end{align}
and
\begin{align}
d_{H}({\bf w},{\bf w}') \leq 8, \quad {\bf w}, {\bf w}' \in \mbox{$\cal{S}$}_d^t. \label{Eq_distnce_uppr}
\end{align}
To this end, we consider the following four cases:
\subsubsection*{Case 1---$t=1$} For $t=1$, we do not partition the set, i.e., $\mbox{$\cal{S}$}_1^1 = \mbox{$\cal{T}$}_ {1}$. Thus, we have $|\mbox{$\cal{S}$}_1^1| = \ell_nM_n$. From~\eqref{Eq_seqMn_assum} and~\eqref{Eq_seqln_assum}, it follows that $|\mbox{$\cal{S}$}_1^1| \geq \ell_n +1$. Since any two messages ${\bf w}, {\bf w}' \in \mbox{$\cal{T}$}_ {1}$
have only one non-zero entry, we further have that $d_{H}({\bf w},{\bf w}') \leq 2$. Consequently, \eqref{Eq_set_lowr} and~\eqref{Eq_distnce_uppr} are satisfied.
\subsubsection*{Case 2---$t=2,\ldots,\ell_n -2$}
In this case, we obtain a partition by finding a code $\mbox{$\cal{C}$}_t$ in $\mbox{$\cal{T}$}_ {t}$ that has minimum Hamming distance $5$ and is such that, for every ${\bf w}\in \cT_{t}$, there exists at least one codeword in $\mbox{$\cal{C}$}_t$ at Hamming distance at most 4 from it.
Such a code exists because, if for some ${\bf w}\in \cT_{t}$ all codewords were at Hamming distance 5 or more, then we could add ${\bf w}$ to $\mbox{$\cal{C}$}_t$ without affecting its minimum distance.
Thus, for all ${\bf w} \notin \mbox{$\cal{C}$}_t$, there exists at least one index $j$ such that $d_H({\bf w},{\bf c}_t(j)) \leq 4$, where
${\bf c}_t(1),\ldots, {\bf c}_t(|\mbox{$\cal{C}$}_t|)$ denote the codewords of \edit{the} code $\mbox{$\cal{C}$}_t$. With this code $\mathcal{C}_t$, we partition $\mbox{$\cal{T}$}_t$ into the sets $\mathcal{S}_d^t$, $d=1,\ldots,D_t$ \edit{with $D_t = |\mbox{$\cal{C}$}_t|$} using the following procedure:
\begin{enumerate}
\item For a given $d=1, \ldots,D_t$, we assign ${\bf c}_t(d)$ to $\mbox{$\cal{S}$}_d^t$ as well as all ${\bf w} \in \cT_{t}$ that satisfy $d_H({\bf w}, {\bf c}_t(d))\leq 2$. These assignments are unique since the code $\mbox{$\cal{C}$}_t$ has minimum Hamming distance 5.
\item We then consider all ${\bf w}\in \cT_{t}$ for which no codeword ${\bf c}_t(j)$, $j=1, \ldots, D_t$ satisfies $d_H({\bf w}, {\bf c}_t(j))\leq 2$ and assign them to the set \edit{$\mbox{$\cal{S}$}_d^t$} with index $d = \min \{j=1,\ldots, D_t: d_H({\bf w}, {\bf c}_t(j)) \leq 4 \}$.
\end{enumerate}
In this way, we obtain a partition of $\cT_{t}$.
Since any two ${\bf w}, {\bf w}' \in \mbox{$\cal{S}$}_d^t$ are at Hamming distance at most 4 from the codeword ${\bf c}_t(d)$, the triangle inequality yields
$d_{H}({\bf w},{\bf w}') \leq 8$. Consequently, \eqref{Eq_distnce_uppr} is satisfied.
To show that~\eqref{Eq_set_lowr} is satisfied, too, we use the following fact:
\begin{align}
\text{For two natural numbers } a \text{ and } b, \text { if } a \geq 4 \text{ and } 2\leq b \leq a-2, \text{ then } b(a-b) \geq a. \label{Eq_prod_seq}
\end{align}
This fact follows since $b(a-b)$ is increasing \edit{in $b$} from $b=2$ to $b = \lfloor a/2\rfloor$ and is
decreasing \edit{in $b$} from $b = \lfloor a/2\rfloor$ to $b=a-2$. So $b(a-b)$ is minimized at $b=2$ and $b=a-2$, where it has the value $2a-4$. For $a\geq 4$, this value is greater than or equal to $a$, hence the claim follows.
From~\eqref{Eq_prod_seq}, it follows that, if $|\mbox{$\cal{S}$}_d^t| \geq 1+ t(\ell_n - t)$, then $|\mbox{$\cal{S}$}_d^t|\geq 1+ \ell_n$. It thus remains to show that $|\mbox{$\cal{S}$}_d^t| \geq 1+ t(\ell_n - t)$.
To this end, for every codeword ${\bf c}_t(d)$, consider all sequences in $\cT_{t}$ which differ exactly in one non-zero position and in one zero position from ${\bf c}_t(d)$. There are $ t(\ell_n - t)M_n$ such sequences in $\cT_{t}$, which can be lower-bounded as
\begin{align}
t(\ell_n - t)M_n & \geq t(\ell_n - t) \notag \\
& \geq \ell_n \label{Eq_set_lowr2}
\end{align}
by~\eqref{Eq_seqMn_assum}, \eqref{Eq_seqln_assum}, and~\eqref{Eq_prod_seq} applied with $a=\ell_n$ and $b=t$.
Since the codeword ${\bf c}_t(d)$ also belongs to $\mbox{$\cal{S}$}_d^t$, it follows from~\eqref{Eq_set_lowr2} that
\begin{align}
| \mbox{$\cal{S}$}_d^t| &\geq \ell_n +1. \notag
\end{align}
\subsubsection*{Case 3---$t=\ell_n -1$}
We obtain a partition by defining a code $\mbox{$\cal{C}$}_t$ in $ \mbox{$\cal{T}$}_{\ell_n -1}$
that has
the same properties as the code used for Case 2. We then use the same procedure as in Case 2 to assign messages in ${\bf w} \in\mbox{$\cal{T}$}_{\ell_n -1}$ to the sets $\mbox{$\cal{S}$}_d^t$, $d=1,\ldots,D_t$. This gives a partition of $\mbox{$\cal{T}$}_{\ell_n -1}$ where any two ${\bf w}, {\bf w}'\in\mbox{$\cal{S}$}_d^t$ satisfy $d_H(\mathbf{w},\mathbf{w}')\leq 8$. Consequently, this partition satisfies~\eqref{Eq_distnce_uppr}.
We next show that this partition also satisfies~\eqref{Eq_set_lowr}. To this end, for every codeword ${\bf c}_t(d)$, consider all the sequences which differ exactly in two non-zero positions from ${\bf c}_t(d)$.
There are $ \binom{\ell_n-1}{2} (M_n-1)^2$ such sequences in $ \mbox{$\cal{T}$}_{\ell_n -1}$. Since $\mbox{$\cal{S}$}_d^t$ also contains the codeword ${\bf c}_t(d)$, we obtain that
\begin{align*}
| \mbox{$\cal{S}$}_d^t| & \geq \notag
\binom{\ell_n-1}{2} (M_n-1)^2 + 1\\
& \geq \binom{\ell_n-1}{2} +1 \\
& \geq \ell_n +1
\end{align*}
by~\eqref{Eq_seqMn_assum} and~\eqref{Eq_seqln_assum}.
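Here, the last inequality can be verified directly: by~\eqref{Eq_seqln_assum}, we have $\ell_n \geq 5$, so
\begin{align*}
\binom{\ell_n-1}{2} = \frac{(\ell_n-1)(\ell_n-2)}{2} \geq \ell_n
\end{align*}
since $\ell_n^2 - 5\ell_n + 2 \geq 0$ for all $\ell_n \geq 5$.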
\subsubsection*{Case 4---$t=\ell_n$} We obtain a partition by defining a code $\mathcal{C}_t$ in $\mbox{$\cal{T}$}_{\ell_n}$ that has the same properties as the code used in Case 2. We then use the same procedure as in Case 2 to assign messages $\mathbf{w}\in\mbox{$\cal{T}$}_t$ to the sets $\mathcal{S}_d^t$, $d=1,\ldots,D_t$. This gives a partition of $\mbox{$\cal{T}$}_t$ where any two $\mathbf{w}, \mathbf{w}'\in\mathcal{S}_d^t$ satisfy $d_H(\mathbf{w},\mathbf{w}')\leq 8$. Consequently, this partition satisfies~\eqref{Eq_distnce_uppr}.
We next show that this partition also satisfies~\eqref{Eq_set_lowr}. To this end, for every codeword ${\bf c}_t(d)$, consider all sequences which are at Hamming distance $1$ from ${\bf c}_t(d)$. There are $\ell_n(M_n-1)$ such sequences. Since $\mbox{$\cal{S}$}_d^t$ also contains the codeword, we have
\begin{align}
| \mbox{$\cal{S}$}_d^t| & \geq 1+ \ell_n(M_n-1) \notag \\
& \geq 1+\ell_n \notag
\end{align}
by~\eqref{Eq_seqMn_assum}.
Having obtained a partition of $\mbox{$\cal{T}$}_t$ that satisfies~\eqref{Eq_set_lowr} and~\eqref{Eq_distnce_uppr}, we next derive the lower bound~\eqref{Eq_avg_prob_lowr}. To this end, we use a stronger form of Fano's inequality known as Birg\'e's inequality.
\begin{lemma}[Birg\'e's inequality]
\label{Lem_Berge}
Let $(\mbox{$\cal{Y}$}, \mbox{$\cal{B}$})$ be a measurable space, and let $P_1,\ldots, P_N$ be probability measures defined on $\mbox{$\cal{B}$}$. Further let $\mbox{$\cal{A}$}_i$, $i=1, \ldots,N$ denote $N$ disjoint events in $\mbox{$\cal{B}$}$, where $N\geq 2$.
Then
\begin{align*}
\frac{1}{N} \sum_{i=1}^{N} P_i(\mbox{$\cal{A}$}_i) \leq \frac{\frac{1}{N^2} \sum_{i,j} D(P_i\|P_j)+\log 2}{\log (N-1)}.
\end{align*}
\end{lemma}
\begin{proof}
See~\cite{Yatracos88} and references therein.
\end{proof}
To apply Lemma~\ref{Lem_Berge} to the problem at hand, we set $N=|\mbox{$\cal{S}$}_d^t|$ and $P_j = P_{{\bf Y}|{\bf X}}(\cdot|{\bf x}(j))$, where ${\bf x}(j)$ denotes the set of codewords transmitted to convey the set of messages $j \in \mbox{$\cal{S}$}_d^t$.
We further define $\mbox{$\cal{A}$}_j$ as the subset of $\mbox{$\cal{Y}$}^n$ for which the decoder declares the set of messages $j\in\mbox{$\cal{S}$}_d^t$. Then, the probability of
error in decoding messages $j\in\mbox{$\cal{S}$}_d^t$ is given by $P_e(j) =1-P_j(\mbox{$\cal{A}$}_j)$, and $\frac{1}{|\mbox{$\cal{S}$}_d^t|} \sum_{j\in \mbox{$\cal{S}$}_d^t} P_j(\mbox{$\cal{A}$}_j)$ denotes the average probability of correctly decoding a \edit{set of messages} in $\mbox{$\cal{S}$}_d^t$.
For two multivariate Gaussian distributions \mbox{${\bf Z}_1 \sim \mbox{$\cal{N}$}(\boldsymbol {\mu_1 }, \frac{N_0}{2}I)$}
and ${\bf Z}_2 \sim \mbox{$\cal{N}$}(\boldsymbol {\mu_2}, \frac{N_0}{2}I)$ (where $I$ denotes the identity matrix),
the relative entropy $D({\bf Z}_1\| { \bf Z}_2)$ is given by $ \frac{ ||\boldsymbol {\mu_1 - \mu_2}||^2}{N_0}$. We next note that $P_{{\bf w}} = \mbox{$\cal{N}$}(\overline{{\bf x}}({\bf w}), \frac{N_0}{2}I)$ and $P_{{\bf w}'} = \mbox{$\cal{N}$}(\overline{{\bf x}}({\bf w}'), \frac{N_0}{2}I)$, where $\overline{{\bf x}}(j)$ denotes the sum of codewords contained in ${\bf x}(j)$.
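The expression for the relative entropy follows from a direct computation: since both distributions share the covariance matrix $\frac{N_0}{2}I$,
\begin{align*}
D({\bf Z}_1\| {\bf Z}_2) & = \mathbb{E}\left[\frac{\|{\bf Z}_1-\boldsymbol{\mu}_2\|^2 - \|{\bf Z}_1-\boldsymbol{\mu}_1\|^2}{N_0}\right] \\
& = \frac{\|\boldsymbol{\mu}_1-\boldsymbol{\mu}_2\|^2 + 2\, \mathbb{E}\bigl[{\bf Z}_1-\boldsymbol{\mu}_1\bigr]^{\mathsf{T}}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)}{N_0} = \frac{\|\boldsymbol{\mu}_1-\boldsymbol{\mu}_2\|^2}{N_0}
\end{align*}
where the expectation is with respect to the law of ${\bf Z}_1$, under which $\mathbb{E}[{\bf Z}_1-\boldsymbol{\mu}_1]={\bf 0}$. (Here, the relative entropy is expressed in nats.)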
\edit{By construction}, any two messages ${\bf w}, {\bf w}' \in \mbox{$\cal{S}$}_d^t$ are at a Hamming distance of at most 8. Without loss of generality, let us assume that $w_j = w'_j$ for $j=9, \ldots, \ell_n$. Then
\begin{align}
\big\|\sum_{j=1}^{\ell_n} {\bf x}_j(w_j) -\sum_{j=1}^{\ell_n} {\bf x}_j(w'_j)\big\|^2 & = \big\|\sum_{j=1}^{8} \bigl({\bf x}_j(w_j) - {\bf x}_j(w'_j)\bigr)\big\|^2 \notag \\
& \leq \biggl(\sum_{j=1}^{8} \big\|{\bf x}_j(w_j) - {\bf x}_j(w'_j)\big\|\biggr)^2 \notag\\
& \leq (8 \times 2\sqrt{E_n})^2 \notag \\
& = 256 E_n \notag
\end{align}
where we have used the triangle inequality and the fact that the energy of each user's codeword is upper-bounded by $E_n$, so $\big\|{\bf x}_j(w_j) - {\bf x}_j(w'_j)\big\| \leq 2\sqrt{E_n}$. Thus, $D(P_{{\bf w}}\| P_{{\bf w}'}) \leq 256 E_n/N_0$.
It follows from Birg\'e's inequality that
\begin{align}
\frac{1}{|\mbox{$\cal{S}$}_d^t|} \sum_{{\bf w} \in \mbox{$\cal{S}$}_d^t} P_e({\bf w}) & \geq 1 - \frac{ 256 E_n/N_0+\log 2}{\log (|\mbox{$\cal{S}$}_d^t|-1)} \notag \\
& \geq 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n } \label{Eq_prob_set_lowr}
\end{align}
where the last step holds because $ |\mbox{$\cal{S}$}_d^t|-1 \geq \ell_n$. This proves~\eqref{Eq_avg_prob_lowr} and hence also~\eqref{Eq_prob_typ_lowr}.
Combining~\eqref{Eq_prob_typ_lowr} and~\eqref{Eq_avg_prob_err3}, we obtain
\begin{align*}
P_{e}^{(n)} & \geq \left( 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n } \right) \sum_{i=1}^{\ell_n} \text{Pr}(\mbox{$\cal{T}$}_{i}) \\
& = \left( 1 - \frac{ 256 E_n/N_0+\log 2}{\log \ell_n } \right) (1-\text{Pr}(\mbox{$\cal{T}$}_{0} )).
\end{align*}
\edit{
The probability $\textnormal{Pr}(\mathcal{T}_0)=\left((1-\alpha_n)^{\frac{1}{\alpha_n}}\right)^{k_n}$ is upper-bounded by $e^{-k_n}$. Hence, by the lemma's assumption that $k_n=\Omega(1)$,
\begin{equation*}
\limsup_{n\to\infty} \textnormal{Pr}(\mathcal{T}_0) < 1.
\end{equation*}
}
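For completeness, the bound $\textnormal{Pr}(\mathcal{T}_0) \leq e^{-k_n}$ can be verified as follows. Since each of the $\ell_n$ users is inactive with probability $1-\alpha_n$, independently of the others, and since $k_n = \alpha_n \ell_n$,
\begin{align*}
\textnormal{Pr}(\mathcal{T}_0) = (1-\alpha_n)^{\ell_n} = \left((1-\alpha_n)^{\frac{1}{\alpha_n}}\right)^{k_n} \leq e^{-k_n}
\end{align*}
where the inequality follows from $(1-x)^{1/x} \leq e^{-1}$ for $0 < x \leq 1$.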
Consequently, $P_{e}^{(n)}$ \edit{may tend to zero as $n \to \infty$} only if
\begin{align}
E_n & = \Omega\left(\log \ell_n \right).\notag
\end{align}
This proves Lemma~\ref{Lem_energy_bound}.
\comment{
\section{Proof of Lemma~\ref{Lem_detect_uppr}}
\label{Sec_appnd_ortho}
Let ${\bf Y}_1$ denote the received vector of length $n/\ell_n$ corresponding to user 1 in the orthogonal-access scheme.
From the pilot signal, which is the first symbol $Y_{11} $ of ${\bf Y}_1$, the receiver guesses whether user 1 is active or not. Specifically, the user is estimated as active if $Y_{11} > \frac{\sqrt{tE_n}}{2}$ and as inactive otherwise.
If the user is declared as active, then the receiver decodes the message from the rest of ${\bf Y}_1$.
Let $\text{Pr} ( \hat{W}_1 \neq w |W_1 = w)$ denote the decoding error probability when message $w,w=0, \ldots, M_n$ was transmitted.
Then, $P_1$ is given by
\begin{align}
P_1 & = (1-\alpha_n)\text{Pr} ( \hat{W}_1 \neq 0) + \frac{\alpha_n}{M_n} \sum_{w=1}^{M_n} \text{Pr} ( \hat{W}_1 \neq w |W_1 = w) \notag \\
& \leq \text{Pr} ( \hat{W}_1 \neq 0|W_1=0) + \frac{1}{M_n} \sum_{w=1}^{M_n} \text{Pr} ( \hat{W}_1 \neq w | W_1 = w). \label{Eq_err_prob_uppr}
\end{align}
If $W_1=0$, then an error occurs if $Y_{11} > \frac{\sqrt{tE_n}}{2}$. So, we have
\begin{align}
\text{Pr} ( \hat{W}_1 \neq 0|W_1=0) & = Q\left( \frac{\sqrt{tE_n}}{2} \right). \label{Eq_err_prob_uppr2}
\end{align}
Let $\mbox{$\cal{E}$}_{11}$ denote the event $Y_{11} \leq \frac{\sqrt{tE_n}}{2}$ and $D_w$ denote the error event in decoding message $w$ for the transmission scheme described in Section~\ref{Sec_ortho_access} when the user is known to be active. Then, for every $w=1,\ldots,M_n$
\begin{align}
\text{Pr} ( \hat{W}_1 \neq w |W_1 = w) & = \text{Pr} (\mbox{$\cal{E}$}_{11} \cup \{ \mbox{$\cal{E}$}_{11}^c \cap \hat{W}_1 \neq w \}| W_1 = w) \notag \\
& \leq \text{Pr} (\mbox{$\cal{E}$}_{11}| W_1 = w) + \text{Pr} ( \mbox{$\cal{E}$}_{11}^c | W_1 = w) \text{Pr} ( \hat{W}_1 \neq w | W_1 = w, \mbox{$\cal{E}$}_{11}^c ) \notag \\
& \leq \text{Pr} (\mbox{$\cal{E}$}_{11}| W_1 = w) + \text{Pr} ( D_w | W_1 = w) \notag
\end{align}
where the last step follows because $\text{Pr} (\mbox{$\cal{E}$}_{11}^c|W_1=w)\leq 1$ and by the definition of $D_w$.
We next define $\text{Pr} (D) = \frac{1}{M_n} \sum_{w=1}^{M_n} \text{Pr} (D_w)$. Since $P(\mbox{$\cal{E}$}_{11} | W_1 =w) = Q\left( \frac{\sqrt{tE_n}}{2} \right)$,
it follows from~\eqref{Eq_err_prob_uppr} that
\begin{align}
P_1 & \leq 2 Q\left( \frac{\sqrt{tE_n}}{2} \right)+ P(D). \label{Eq_singl_usr_uppr}
\end{align}
We next upper-bound $P(D)$. To this end, we use the following upper bound on the average probability of error $P(\mathcal{E})$ of the Gaussian point-to-point channel for a code of blocklength $n$ with power $P$~\cite[Section~7.4]{Gallager68}
\begin{align}
P(\mbox{$\cal{E}$}) & \leq M_n^{ \rho} \exp[-nE_0(\rho, P)], \; \mbox{ for every } 0< \rho \leq 1 \label{Eq_upp_dec_AWGN}
\end{align}
where
\begin{align}
E_0(\rho, P) & \triangleq \frac{\rho}{2} \ln \left(1+\frac{2P}{(1+\rho)N_0}\right). \notag
\end{align}
By substituting in~\eqref{Eq_upp_dec_AWGN} $n$ by $\frac{n}{\ell_n} - 1$ and $P$ by $P_n = \frac{(1-t)E_n}{\frac{n}{\ell_n} -1}$, we obtain that $P(D)$ can be upper-bounded in terms of the rate per unit-energy $\dot{R}=\frac{\log M_n}{E_n}$ as follows:
\begin{align}
P(D) & \leq M_n^{ \rho} \exp\left[-\left(\frac{ n}{\ell_n}-1\right)E_0(\rho, P_n)\right] \nonumber \\
& = \exp\left[ \rho \ln M_n - \left(\frac{ n}{\ell_n}-1\right) \frac{\rho}{2} \ln \left(1+\frac{ 2E_n(1-t)}{ \left(\frac{ n}{\ell_n}-1\right)(1+\rho)N_0}\right) \right] \nonumber \\
& = \exp\left[ -E_n(1-t) \rho \left( \frac{\ln \left(1+\frac{ 2E_n(1-t)}{ \left(\frac{ n}{\ell_n}-1\right)(1+\rho)N_0}\right)}{ \frac{2E_n(1-t)}{ \left(\frac{ n}{\ell_n}-1\right)} } -\frac{\dot{R}}{(1-t) \log e} \right)\right]. \label{Eq_err_uppr}
\end{align}
We next choose $E_n = c_n \ln n$ with $c_n \triangleq \ln\bigl(\frac{n}{\ell_n\ln n}\bigr)$. Since, by assumption, $\ell_n = o(n / \log n)$, this implies that $\frac{\ell_nE_n}{n} \to 0$ as $n \to \infty$, hence
$\frac{E_n}{n/\ell_n -1} \to 0$. Thus,
the first term in the innermost bracket in \eqref{Eq_err_uppr} tends to $1/((1+\rho)N_0)$ as $n \to \infty$. It follows that for $\dot{R} < \frac{\log e}{N_0}$, there exists a sufficiently large $n'_0$, a $ t > 0$, a $\rho > 0$, and a $\delta>0$ such that, for $n\geq n'_0$, the RHS of \eqref{Eq_err_uppr} is upper-bounded by $\exp[-E_n(1-t) \rho \delta]$. Consequently, for our choice $E_n=c_n\ln n$, we have for $n\geq n_0'$
\begin{align}
P(D) & \leq \exp \left[ \ln \left(\frac{1}{n}\right)^{c_n\delta \rho(1-t)} \right]. \notag
\end{align}
Since $c_n \to \infty $ as $n\to\infty$, and hence also
$c_n\delta \rho(1-t) \to \infty$, this yields
\begin{align}
P(D) & \leq \frac{1}{n^2} \label{Eq_act_dec_uppr}
\end{align}
for sufficiently large $n \geq n_0'$.
Similarly, for $n\geq \tilde{n}_0$ and sufficiently large $\tilde{n}_0$, we can upper-bound
\begin{equation}
2 Q\left(\frac{\sqrt{t E_n}}{2}\right) \leq \frac{1}{n^2} \label{Eq_usr_det_uppr}
\end{equation}
by upper-bounding the $Q$-function as $Q(\beta)\leq \frac{e^{-\beta^2/2}}{\sqrt{2\pi}\beta}$ and evaluating the resulting bound for $E_n=c_n\ln n$.
Using~\eqref{Eq_act_dec_uppr} and~\eqref{Eq_usr_det_uppr} in~\eqref{Eq_singl_usr_uppr}, we obtain for $n \geq \max(\tilde{n}_0,n_0')$ that
\begin{align}
P_1 \leq \frac{2}{n^2}. \notag
\end{align}
This proves Lemma~\ref{Lem_detect_uppr}.
}
\section*{Acknowledgment}
The authors wish to thank the Associate Editor A.~Anastasopoulos and the anonymous referees for their valuable comments.
\section{\edit{Comparison With the Polyanskiy Setting of Many-Access Channels}}
\label{Sec_discuss}
\edit{In this paper, we basically follow the setting of the MnAC introduced by Chen \emph{et al.} \cite{ChenCG17}. That is, we assume that each user has a different codebook and require the probability of error to vanish as $n\to\infty$. By Lemma~\ref{Lem_energy_infty}, the latter requirement can only be satisfied if $E_n\to\infty$ as $n\to\infty$, which for a fixed rate per unit-energy implies that $M_n\to\infty$. In other words, each user's payload tends to infinity as $n\to\infty$.
In an attempt to introduce a notion of a random-access code that is appealing to the different communities interested in the multiple-access problem, Polyanskiy \cite{Polyanskiy17} proposed a different setting, where
\begin{enumerate}
\item all encoders use the same codebook;
\item the decoding is up to permutations of messages;
\item the probability of error is not required to vanish as $n\to\infty$.
\end{enumerate}
He further introduced the per-user probability of error
\begin{equation}
\label{eq:PUPE}
\frac{1}{k_n} \sum_{i=1}^{k_n} \text{Pr}\bigl(\{\hat{W}_i \neq W_i\} \cup \{\text{$W_j=W_i$ for some $j\neq i$}\}\bigr).
\end{equation}
As argued in \cite{Polyanskiy17}, the probability that two messages are equal is typically small, in which case the event \[\{\text{$W_j=W_i$ for some $j\neq i$}\}\] can be ignored and \eqref{eq:PUPE} is essentially equivalent to the APE defined in \eqref{eq_Pe_A}.
The setting where all encoders use the same codebook and decoding is up to permutations of messages is sometimes also referred to as \emph{unsourced multiple-access}. Unsourced multiple-access has two benefits: it may be more practical in scenarios with a large number of users, and many popular schemes, such as slotted ALOHA and coded slotted ALOHA, become achievability bounds and can be compared against each other and against information-theoretic benchmarks.
By not requiring the probability of error to vanish as $n\to\infty$, it is not necessary to let $M_n\to\infty$ with the blocklength. So, in the above setting, the payload can be of fixed size, which may be appealing from a practical perspective.
In \cite{Polyanskiy17}, Polyanskiy presented a random-coding achievability bound and used it as a benchmark for the performance of practical schemes, including coded slotted ALOHA, treating interference as noise, and time-division multiple-access. He further studied the minimum energy-per-bit that can be achieved by an $(n,M_n,E_n,\epsilon)$ code for APE when each user has a different codebook, the payload $M_n$ and the probability of error $\epsilon$ are fixed, and the number of users grows linearly with the blocklength, i.e., $k_n=\mu n$ for some $0<\mu\ll 1$. The bounds obtained in \cite{Polyanskiy17} and in the follow-up work \cite{ZadikPT19} suggest that, whenever $\mu$ is below some critical value, the minimum energy-per-bit is independent of $\mu$. In other words, there exists a critical density of users below which interference-free communication is feasible. This is consistent with the conclusions we drew from Theorems~\ref{Thm_nonrandom} and \ref{Thm_random_JPE} for JPE, and from Theorems~\ref{Thm_capac_APE} and \ref{Thm_capac_PUPE} for APE. However, these theorems also demonstrate that there is an important difference: According to Theorems~\ref{Thm_nonrandom} and \ref{Thm_capac_APE}, a linear growth of the number of users in $n$ implies that the capacity per unit-energy $\dot{C}$ is zero, irrespective of the value of $\mu$, and irrespective of whether JPE or APE is considered. Since rate per unit-energy is the reciprocal of energy-per-bit, this implies that the minimum energy-per-bit is infinite. In contrast, the bounds presented in \cite{Polyanskiy17} and \cite{ZadikPT19} show that the minimum energy-per-bit for a fixed probability of error $\epsilon$ is finite or, equivalently, that the $\epsilon$-capacity per unit-energy $\dot{C}_{\epsilon}$ is strictly positive. Thus, the capacity per unit-energy is strictly smaller than the $\epsilon$-capacity per unit-energy, which implies that, for APE, the strong converse does not hold.
}
\edit{In order to explore this point further, we discuss in the rest of this section how the largest achievable rate per unit-energy changes if we allow for a non-vanishing error probability. For the sake of simplicity, we shall assume throughout the section that users are active with probability one, i.e., $\alpha_n=1$.}
\edit{We first argue that, when the number of users is bounded in $n$, then a simple orthogonal-access scheme achieves an $\epsilon$-capacity per unit-energy that can even be larger than the single-user capacity per unit-energy $\frac{\log e}{N_0}$, irrespective of whether JPE or APE is assumed.} We shall do so by means of the following example.
\begin{example}
\label{Ex_finite_users}
Consider a $k$-user Gaussian MAC with normalized noise variance $N_0/2=1$ and where the number of users is independent of $n$. Suppose that each user wishes to transmit one of $M_n=2$ messages using energy $E_n=1$. Further consider an orthogonal-access scheme where each user gets one channel use and remains silent in the remaining channel uses. In this channel use, each user transmits either $+1$ or $-1$ to convey its message. Since the access scheme is orthogonal, the receiver can perform independent decoding for each user, which yields $\text{Pr} (\hat{W}_i \neq W_i)=Q(1)$. Consequently, we can achieve the rate per unit-energy $\frac{\log M_n}{E_n}=1$ at APE $P_{e,A}^{(n)}=Q(1)$ and at JPE $P_{e}^{(n)}=1 - (1 -Q(1))^k$. \edit{Since $\frac{\log e}{N_0}=\frac{\log e}{2}\approx 0.7213$, we conclude that, if $\epsilon\geq Q(1)$ (for APE) or $\epsilon\geq 1 - (1 -Q(1))^k$ (for JPE), then the $\epsilon$-capacity per unit-energy exceeds the single-user capacity per unit-energy.}
\end{example}
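The numbers in this example can be reproduced with a short computation. The sketch below evaluates the APE, the JPE, and the two rates per unit-energy being compared; the user count $k=100$ is an arbitrary choice of ours.

```python
from math import e, erfc, log2, sqrt

# Sketch of the example: antipodal signalling with E_n = 1 over one channel
# use per user, noise variance N_0/2 = 1 (so N_0 = 2).
Q = lambda x: 0.5 * erfc(x / sqrt(2.0))  # Gaussian tail function

ape = Q(1.0)                      # per-user error probability, Q(1) ~ 0.1587
k = 100                           # number of users (our arbitrary choice)
jpe = 1.0 - (1.0 - ape) ** k      # joint probability of error

rate_per_unit_energy = log2(2) / 1.0   # log M_n / E_n = 1 bit per unit energy
single_user_limit = log2(e) / 2.0      # (log e)/N_0 ~ 0.7213 bits per unit energy
```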
\begin{remark}
\label{remark2}
A crucial ingredient in the above scheme is that the energy $E_n$ is bounded in $n$. Indeed, it follows from~\cite[Th.~3]{PolyanskiyPV11} that, if $E_n \to \infty$ as $n \to \infty$, as required, e.g., in~\cite[Def.~2]{Verdu90} (see Remark~\ref{remark}), then the $\epsilon$-capacity per unit-energy of the Gaussian single-user channel is equal to $\frac{\log e}{N_0}$, irrespective of \mbox{$0<\epsilon<1$}. The genie argument provided at the beginning of \edit{the} proof of Theorem~\ref{Thm_capac_APE} then yields that the same is true for the Gaussian MnAC.
\end{remark}
\edit{In the following two subsections,} we discuss the $\epsilon$-capacity per unit-energy when the number of users $k_n$ tends to infinity as $n$ tends to infinity. Specifically, in Subsection~\ref{Sec_non_vanish_JPE} we demonstrate that, irrespective of the order of growth of $k_n$, the $\epsilon$-capacity per unit-energy for JPE \edit{is the same as $\dot{C}$, i.e., the strong converse holds in this case.}
In Subsection~\ref{Sec_non_vanish_APE}, \edit{we consider the case where $k_n=\mu n$ and show by means of a simple example that, for some fixed payload $M_n$ and sufficiently small $\mu$, the $\epsilon$-capacity per unit-energy for APE is indeed independent of $\mu$, as suggested by the bounds in \cite{Polyanskiy17} and \cite{ZadikPT19}.}
\subsection{Non-Vanishing JPE}
\label{Sec_non_vanish_JPE}
The following theorem characterizes the behavior of the $\epsilon$-capacity per unit-energy for JPE and an unbounded number of users.
\begin{theorem}
\label{Thm_non_vanish_JPE}
The \mbox{$\epsilon$-capacity} per unit-energy $\dot{C}_{\epsilon}$ \edit{of the non-random MnAC with JPE} has the following behavior:
\begin{enumerate}
\item If $k_n=\omega(1)$ and $k_n = o(n/\log n)$, then $\dot{C}_{\epsilon}=\frac{\log e}{N_0}$ for every $0 < \epsilon<1$. \label{Thm_eps_part1}
\item If $k_n = \omega(n/ \log n)$, then $\dot{C}_{\epsilon}=0$ for every $0 < \epsilon<1$.\label{Thm_eps_part2}
\end{enumerate}
\end{theorem}
\begin{proof}
We first prove Part~\ref{Thm_eps_part1}). It follows from~\eqref{Eq_prob_typ_lowr} in the proof of Lemma~\ref{Lem_energy_bound} that, for \edit{$M_n\geq 2$ and $k_n\geq 5$},
\begin{equation}
P_{e}^{(n)} \geq 1 - \frac{256 E_n/N_0+\log 2}{\log k_n}.\label{Eq_joint_lowr}
\end{equation}
This implies that $P_{e}^{(n)}$ tends to one unless $E_n = \Omega(\log k_n)$. Since, by the theorem's assumption, \edit{we have} $k_n=\omega(1)$, it follows that $E_n\to\infty$ is necessary to achieve a JPE strictly smaller than one. As argued in Remark~\ref{remark2} (see also Remark~\ref{remark}), if $E_n\to\infty$ as $n\to\infty$, then the $\epsilon$-capacity per unit-energy of the Gaussian MnAC cannot exceed the single-user capacity per unit-energy $\frac{\log e}{N_0}$. Furthermore, by Theorem~\ref{Thm_capac_APE}, if $k_n=o(n/\log n)$, then any rate per unit-energy satisfying $\dot{R}<\frac{\log e}{N_0}$ is achievable, hence it is also $\epsilon$-achievable. We thus conclude that, if $k_n=\omega(1)$ and $k_n=o(n/\log n)$, then $\dot{C}_{\epsilon}=\frac{\log e}{N_0}$ for every $0 < \epsilon<1$.
To prove Part~\ref{Thm_eps_part2}), we use the upper bound~\eqref{Eq_R_avg1}, namely
\begin{equation}
\dot{R}\leq \frac{ \frac{1}{k_nE_n} + \frac{n}{2 k_nE_n}\log(1+\frac{ 2k_nE_n}{nN_0})}{1 -P_{e}^{(n)}}.\label{Eq_R_avg_JPE}
\end{equation}
By~\eqref{Eq_joint_lowr}, $P_{e}^{(n)}$ tends to one unless $E_n = \Omega(\log k_n)$. For $k_n=\omega(n/\log n)$, this implies that \mbox{$k_nE_n/n \to \infty$} as $n \to \infty$, so the RHS of \eqref{Eq_R_avg_JPE} vanishes as $n$ tends to infinity. We thus conclude that, if $k_n=\omega(n/\log n)$, then $\dot{C}_{\epsilon} = 0$ for every $0<\epsilon<1$.
\end{proof}
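To see numerically how~\eqref{Eq_joint_lowr} forces the energy to grow like $\log k_n$, the following sketch evaluates its right-hand side with $N_0=2$ and all logarithms to base 2 (an assumption of ours; the qualitative behavior is base-independent). The specific energies and values of $\log_2 k_n$ are illustrative choices.

```python
def jpe_lower_bound(E, log2_k, N0=2.0):
    # Right-hand side of P_e >= 1 - (256 E/N_0 + log 2)/log k_n, taking all
    # logarithms to base 2 and parameterizing by log2(k_n) so that
    # astronomically large user numbers can be plugged in.
    return 1.0 - (256.0 * E / N0 + 1.0) / log2_k

# Fixed energy: the bound drives the JPE to one as k_n grows.
fixed_E = [jpe_lower_bound(10.0, L) for L in (2e3, 2e4, 2e5)]
# Energy proportional to log k_n: the bound stays away from one.
scaled_E = [jpe_lower_bound(L / 256.0, L) for L in (2e3, 2e4, 2e5)]
```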
Theorems~\ref{Thm_nonrandom} and~\ref{Thm_non_vanish_JPE} demonstrate that
$\dot{C}_{\epsilon}=\dot{C}$ for every $0 < \epsilon < 1$, provided that the number of users is unbounded in $n$. Consequently, the strong converse holds for JPE. As argued \edit{in the proof of Theorem~\ref{Thm_non_vanish_JPE}}, this result hinges on the fact that the probability of error can be strictly smaller than one only if the energy tends to infinity as $n \to \infty$. As explained in Remarks~\ref{remark} and \ref{remark2}, in this case the capacity per unit-energy cannot exceed $\frac{\log e}{N_0}$. As we shall see in the next subsection, an APE strictly smaller than one can also be achieved at a positive rate per unit-energy if the energy is bounded in $n$. \edit{This allows for a positive $\epsilon$-capacity per unit-energy for APE when $k_n$ grows linearly in $n$.}
\subsection{Non-Vanishing APE}
\label{Sec_non_vanish_APE}
In this subsection, \edit{we focus on the case where $k_n=\mu n$ and show that, when the payload of each user is $1$ bit and $\mu\leq 1$, the $\epsilon$-capacity per unit-energy for APE is indeed independent of $\mu$. This supports the conjecture in \cite{ZadikPT19} that there exists a critical density of users below which interference-free communication is feasible.}
\edit{Let $\mbox{$\cal{E}$}^*(M,\mu,\epsilon)$ denote the minimum energy-per-bit required to send $M$ messages at an APE not exceeding $\epsilon$ when the number of users is given by $k_n=\mu n$. While it is difficult to obtain an exact closed-form expression for $\mbox{$\cal{E}$}^*(M,\mu,\epsilon)$ for general $M$, $\mu$, and $\epsilon$, tight upper and lower bounds on $\mbox{$\cal{E}$}^*(M,\mu,\epsilon)$ were derived in~\cite{Polyanskiy17, ZadikPT19}. Furthermore, as we shall argue next, if the payload of each user is 1 bit and $\mu\leq 1$, then $\mbox{$\cal{E}$}^*(M,\mu,\epsilon)$ can be evaluated in closed form.}
For simplicity, assume that $N_0/2=1$. Then,
\begin{equation}
\label{eq:1bit_nomu}
\mbox{$\cal{E}$}^*(2,\mu, \epsilon) = \left( \max\{0, Q^{-1}(\epsilon)\}\right)^2, \quad 0 < \mu \leq 1
\end{equation}
where $Q^{-1}$ denotes the inverse of the $Q$-function.
Indeed, that $\mbox{$\cal{E}$}^*(2,\mu, \epsilon)\geq(\max\{0, Q^{-1}(\epsilon)\})^2$ follows from \eqref{eq:LB_P2P}. Furthermore, if $\mu\leq 1$, then we can assign each user one channel use. Following the orthogonal-access scheme presented in Example~\ref{Ex_finite_users}, but where each user transmits either $+\sqrt{E}$ or $-\sqrt{E}$ (instead of $+1$ or $-1$) with energy $E=(\max\{0,Q^{-1}(\epsilon)\})^2$, we can achieve \edit{$P_{e,A}^{(n)}\leq\epsilon$}. \edit{Thus, with energy \eqref{eq:1bit_nomu} we can send $2$ messages at an APE not exceeding $\epsilon$.}
Observe that the RHS of~\eqref{eq:1bit_nomu} does not depend on $\mu$ and agrees with the minimum energy-per-bit required to send one bit over the Gaussian single-user channel with error probability $\epsilon$. Thus, when $\mu\leq 1$, we can send one bit free of interference.
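The closed form~\eqref{eq:1bit_nomu} and the accompanying achievability argument can be sanity-checked numerically. The sketch below (with $N_0/2=1$; the target $\epsilon=0.05$ is our choice) confirms that antipodal signalling with energy $E=(\max\{0,Q^{-1}(\epsilon)\})^2$ meets the target APE.

```python
from math import erfc, sqrt
from statistics import NormalDist

Q = lambda x: 0.5 * erfc(x / sqrt(2.0))          # Gaussian tail function
Qinv = lambda p: NormalDist().inv_cdf(1.0 - p)   # inverse Q-function, 0 < p < 1

def min_energy_per_bit(eps):
    """E*(2, mu, eps) for mu <= 1 and N_0/2 = 1, cf. the closed form above."""
    return max(0.0, Qinv(eps)) ** 2

eps = 0.05                        # target APE (our choice)
E = min_energy_per_bit(eps)
achieved_ape = Q(sqrt(E))         # APE of antipodal signalling +/- sqrt(E)
```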
Further observe that~\eqref{eq:1bit_nomu} is finite for every positive $\epsilon$. Consequently, the $\epsilon$-capacity per unit-energy, which is given by the reciprocal of~\eqref{eq:1bit_nomu}, is strictly positive. \edit{This is in contrast to the capacity per unit-energy which}, by Part~\ref{Thm_APE_conv_part}) of Theorem~\ref{Thm_capac_APE}, is zero. \edit{Thus,} the strong converse does not hold \edit{for APE} when the number of users grows linearly in $n$.
As mentioned \edit{in the previous subsection}, to achieve a positive rate per unit-energy, it is crucial that the energy $E_n$ and payload $\log M_n$ are bounded in $n$. Indeed, for $k_n=\mu n$, the RHS of~\eqref{eq_Part2_Th1_end1} vanishes as $E_n$ tends to infinity, \edit{in which case} no positive rate per unit-energy is $\epsilon$-achievable. Moreover, for $k_n=\mu n$ and a bounded $E_n$, \eqref{eq_rate_APE_uppr}
implies that the payload $\log M_n$ is bounded, too. We conclude that arguably the most common assumptions in the literature on MnACs---linear growth of the number of users, a non-vanishing APE, and a fixed payload---are the only set of assumptions under which a positive rate per unit-energy is achievable, unless we consider \edit{sublinear} growth of $k_n$.
\section{Introduction}
Chen~\emph{et al.} \cite{ChenCG17} introduced the many-access channel (MnAC) as a multiple-access channel (MAC) where the number of users grows with the blocklength and each user is active with a given probability. This model is motivated by systems consisting of a single receiver and many transmitters, the number of which is comparable to or even larger than the blocklength. This situation may occur, \emph{e.g.}, in a machine-to-machine communication scenario with many thousands of devices in a given cell that are active only sporadically. In \cite{ChenCG17}, Chen~\emph{et al.} considered a Gaussian MnAC with $\ell_n$ users, each of which is active with probability $\alpha_n$, and determined the number of messages $M_n$ each user can transmit reliably with a codebook of average power not exceeding $P$. Since then, MnACs have been studied in various papers under different settings.
An example of a MAC is the uplink connection in a cellular network. Current cellular networks follow \emph{grant-based} access protocols, i.e., an active device has to obtain permission from the base station to transmit data. In MnACs, this leads to a large signalling overhead. \emph{Grant-free} access protocols, where active devices can access the network without permission, were proposed to overcome this~\cite{LiuPopovski18}.
The synchronization issues \edit{arising} in such scenarios have been studied by Shahi \emph{et al.}~\cite{ShahiTD18}.
In some MnAC applications, such as sensor networks, detecting the identity of the device that sent a particular message may not be important. This scenario was studied under the name of \emph{unsourced} massive access by Polyanskiy~\cite{Polyanskiy17}, who further introduced the notion of \emph{per-user probability of error}.
Specifically, \cite{Polyanskiy17} analyzed the minimum \emph{energy-per-bit} required to reliably transmit a message over an unsourced Gaussian MnAC where the number of active users grows linearly in the blocklength and each user's payload is fixed. Low-complexity schemes for this setting were studied in many works~\cite{OrdentlichP17, Vem19, Amalladinne20, Fengler19, Fengler20}. Generalizations to quasi-static fading MnACs can be found in~\cite{KowshikPISIT19,KowshikP19,KowshikTCOM20}. Zadik \emph{et al.}~\cite{ZadikPT19} presented improved bounds on the tradeoff between user density and energy-per-bit for the many-access channel introduced in~\cite{Polyanskiy17}.
Related to energy per-bit is the \emph{capacity per unit-energy} $\dot{C}$ which is defined as the largest number of bits per unit-energy that can be transmitted reliably over a channel. Verd\'u \cite{Verdu90} showed that $\dot{C}$ can be obtained from the capacity-cost function $C(P)$, defined as the largest number of bits per channel use that can be transmitted reliably with average power per symbol not exceeding $P$, as
\begin{align*}
\dot{C}=\sup_{P>0} \frac{C(P)}{P}.
\end{align*}
For the Gaussian channel with noise power $N_0/2$, this is equal to $\frac{\log e}{N_0}$. Verd\'u further showed that the capacity per unit-energy can be achieved by a codebook that is orthogonal in the sense that the nonzero components of different codewords do not overlap. \edit{Such a codebook corresponds to pulse position modulation (PPM) in either the time or the frequency domain.} In general, we shall say that a codebook is orthogonal if the inner product between different codewords is zero. The two-user Gaussian multiple access channel (MAC) was also studied in~\cite{Verdu90}, and it was demonstrated that both users can achieve the single-user capacity per unit-energy by timesharing the channel between the users, i.e., while one user transmits the other user remains silent. This is an \emph{orthogonal-access scheme} in the sense that the inner product between codewords of different users is zero.\footnote{Note, however, that in an orthogonal-access scheme the codebooks are not required to be orthogonal. That is, codewords of different codebooks are orthogonal to each other, but codewords of the same codebook need not be.} To summarize, in a two-user Gaussian MAC, both users can achieve the rate per unit-energy $\frac{\log e}{N_0}$ by combining an orthogonal-access scheme with orthogonal codebooks. This result can be directly generalized to any finite number of users.
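For the Gaussian channel, the value $\frac{\log e}{N_0}$ quoted above follows directly: with $C(P)=\frac{1}{2}\log\bigl(1+\frac{2P}{N_0}\bigr)$, the ratio $C(P)/P$ is decreasing in $P$, so the supremum is attained in the limit of vanishing power,
\begin{align*}
\dot{C}=\lim_{P\to 0}\frac{\frac{1}{2}\log\left(1+\frac{2P}{N_0}\right)}{P}=\frac{\log e}{N_0},
\end{align*}
where the last step uses $\log(1+x)=x\log e+o(x)$ as $x\to 0$.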
The picture changes when the number of users grows without bound with the blocklength $n$.
In this paper, we consider a setting where the total number of users $\ell_n$ may grow as an arbitrary function of the blocklength and the probability $\alpha_n$ that a user is active may be a function of the blocklength, too. Contributions of this paper are as follows.
\begin{enumerate}
\item First, we consider the capacity per unit-energy of the Gaussian MnAC as a function of the order of growth of the number of users when all users are active with probability one. In Theorem~\ref{Thm_nonrandom}, we show that, if the order of growth is above $n/ \log n$, then the capacity per unit-energy is zero, and if the order of growth is below $n/ \log n$, then each user can achieve the single-user capacity per unit-energy $\frac{\log e}{N_0}$. Thus, there is a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate is infeasible. \edit{
We further show that, if the order of growth is proportional to $n/\log n$, then the capacity per unit-energy is strictly between zero and $\frac{\log e}{N_0}$. Finally, we show that the capacity per unit-energy can be achieved by an orthogonal-access scheme.}
\item Since an orthogonal-access scheme in combination with orthogonal codebooks is optimal in achieving the capacity per unit-energy for a finite number of users, we study the performance of such a scheme for an unbounded number of users. In particular, we characterize in Theorem~\ref{Thm_ortho_code} the largest rate per unit-energy that can be achieved with an orthogonal-access scheme and orthogonal codebooks. Our characterization shows that this scheme is only optimal if the number of users grows more slowly than any positive power of $n$.
\item We then analyze the behavior of the capacity per unit-energy of the Gaussian MnAC as a function of \edit{the} order of growth of the number of users for the per-user probability of error, which, in this paper, we shall refer to as \emph{average probability of error} (APE). In contrast, we refer to the classical probability of error as \emph{joint probability of error} (JPE). We demonstrate that, if the order of growth is sublinear, then each user can achieve the capacity per unit-energy $\frac{\log e}{N_0}$ of the single-user Gaussian channel. Conversely, if the growth is linear or above, then the capacity per unit-energy is zero (Theorem~\ref{Thm_capac_APE}). Comparing with the results in Theorem~\ref{Thm_nonrandom}, we observe that relaxing the error probability from JPE to APE shifts the transition threshold separating the two regimes of interference-free communication and no reliable communication from $n/\log n$ to $n$.
\item We next consider MnACs with random user activity. As before, we consider a setting where the total number of users $\ell_n$ may grow as an arbitrary function of the blocklength. Furthermore, the probability $\alpha_n$ that a user is active may be a function of the blocklength, too. Let $k_n = \alpha_n \ell_n$ denote the average number of active users. We demonstrate in Theorem~\ref{Thm_random_JPE} that, if $k_n \log \ell_n$ is sublinear in $n$, then each user can achieve the single-user capacity per unit-energy. Conversely, if $k_n \log \ell_n$ is superlinear in $n$, then the capacity per unit-energy is zero. \edit{We also demonstrate that, if $k_n \log \ell_n$ is linear in $n$, then the capacity per unit-energy is strictly between zero and $\frac{\log e}{N_0}$.}
\item We further show in Theorem~\ref{Thm_ortho_accs} that orthogonal-access schemes, which are optimal when $\alpha_n=1$, are strictly suboptimal when $\alpha_n \to 0$. In Theorem~\ref{Thm_capac_PUPE}, we then characterize the behaviour of \edit{the} random MnAC under APE.
\item \edit{We conclude the paper with a comparison of the setting considered in this paper and the setting proposed by Polyanskiy in \cite{Polyanskiy17}. Since an important aspect of the Polyanskiy setting is that the probability of error does not vanish as the blocklength tends to infinity, we briefly discuss the behavior of the $\epsilon$-capacity per unit-energy, i.e., the largest rate per unit-energy for which the error probability does not exceed a given $\epsilon$.} For the case where the users are active with probability one, we show that, for JPE and an unbounded number of users, the $\epsilon$-capacity per unit energy coincides with $\dot{C}$. In other words, the strong converse holds in this case. In contrast, for APE, the $\epsilon$-capacity per unit-energy can be strictly larger than $\dot{C}$, so the strong converse does not hold.
\end{enumerate}
The rest of the paper is organized as follows. Section~\ref{Sec_model} introduces the system model and the different notions of probability of error. Section~\ref{Sec_nonrandom} presents our results for
the case where all users are active with probability one (``non-random MnAC"). Section~\ref{sec_random_MnAC} presents our results for the case where the user activity is random (``random MnAC").
\edit{
Section~\ref{Sec_discuss}
briefly compares our results with those obtained in~\cite{Polyanskiy17} under a non-vanishing probability of error.} Section~\ref{Sec_conclusion} concludes the paper with a summary and discussion of our results.
\section{Problem Formulation and Preliminaries}
\label{Sec_model}
\subsection{Model and Definitions}
\label{Sec_Def}
Consider a network with $\ell$ users that, if they are active, wish to transmit their messages $W_i, i=1, \ldots, \ell$ to one common receiver, see Fig.~\ref{Fig_many_acc}. The messages are assumed to be independent and uniformly distributed on $ \mbox{$\cal{M }$}_n^{(i)} \triangleq \{1,\ldots,M_n^{(i)}\}$. To transmit their messages, the users send a codeword of $n$ symbols over the channel, where $n$ is referred to as the \emph{blocklength}. We consider a many-access scenario where the number of users $\ell$ may grow with $n$, hence, we denote it as $\ell_n$.
We assume that a user is active with probability $\alpha_n$.
\edit{
We denote the average number of active users at blocklength $n$ by $k_n$, i.e., $k_n = \alpha_n \ell_n$.}
Let $\mbox{$\cal{U}$}_n$ denote the set of active users at blocklength $n$, defined as
\begin{align*}
\mbox{$\cal{U}$}_n \triangleq \{\edit{i=1,\ldots, \ell_n} : \mbox{user } i \mbox{ is active} \}.
\end{align*}
We consider a Gaussian channel model where the received vector ${\bf Y}$ is given by
\begin{align}
{\bf Y} & = \sum_{i\in \mbox{$\cal{U}$}_n } {\bf x}_i(W_i) + {\bf Z}. \label{Eq_model}
\end{align}
Here $ {\bf x}_i(W_i)$ is the length-$n$ transmitted codeword from user $i$ for message $W_i$, and ${\bf Z}$ is
a vector of $n$ i.i.d. Gaussian components $Z_j \sim \mbox{$\cal{N}$}(0, N_0/2)$ (where $\mbox{$\cal{N}$}(\mu, \sigma^2)$ denotes the Gaussian distribution with mean $\mu$ and variance $\sigma^2$) independent of ${\bf X}_i\triangleq {\bf x}_i(W_i)$. The decoder produces
\edit{an estimate of the users that are active and estimates of their transmitted messages.}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{MnAC_Model.pdf}
\caption{Many-access channel with $\ell_n$ users at blocklength $n$.}
\label{Fig_many_acc}
\end{figure}
There are different ways to define the overall probability of error.
Let $\hat{\mathcal{U}}_n$ and $\hat{W}_i$ denote the decoder's \edit{estimates} of the set of active users and the message transmitted by user $i$, respectively. Further let $\mathcal{D}_E$ denote the event \edit{that} the set of active users was detected erroneously, i.e., that $\hat{\mathcal{U}}_n \neq \mathcal{U}_n$, and let $\mathcal{M}_E$ denote the event
that $\hat{W}_i\neq W_i$ for some $i\in\mathcal{U}_n$ (where we set $\hat{W}_i=0$ for every $i\notin \hat{\mathcal{U}}_n$). One possibility to measure the likelihood of these two events is via the probability of error
\begin{equation}
P_e^{(n)} = \text{Pr} (\mathcal{D}_E \cup\mathcal{M}_E). \label{Eq_prob_err_union}
\end{equation}
Another possibility is to consider
\begin{equation}
P_{\textnormal{max}}^{(n)} = \max(\text{Pr} (\mbox{$\cal{D }$}_E),\text{Pr} (\mbox{$\cal{M }$}_E)).
\end{equation}
We have \edit{$P_{\textnormal{max}}^{(n)} \leq P_e^{(n)} \leq 2 P_{\textnormal{max}}^{(n)}$}. Indeed, the left inequality follows since $\text{Pr} (\mbox{$\cal{D }$}_E\cup \mathcal{M}_E)\geq \text{Pr} (\mbox{$\cal{D }$}_E ) $ and $\text{Pr} (\mbox{$\cal{D }$}_E\cup \mbox{$\cal{M }$}_E)\geq \text{Pr} (\mbox{$\cal{M }$}_E)$. The right inequality follows because, by the union bound, $P_e^{(n)}\leq \text{Pr} (\mathcal{D}_E)$ + $\text{Pr} (\mathcal{M}_E) \leq 2 P_{\textnormal{max}}^{(n)}$. So, in general, $P_{\textnormal{max}}^{(n)}$ is more optimistic than $P_{e}^{(n)}$. However, if we wish the probability of error to vanish as $n\to\infty$, then the two definitions are equivalent since $P_{e}^{(n)}$ vanishes if, and only if, $P_{\textnormal{max}}^{(n)}$ vanishes.
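The chain of inequalities can be checked on a toy example. The sketch below assumes, purely for illustration, that the two error events are independent with hypothetical probabilities $p$ and $q$; the inequalities themselves hold without this assumption.

```python
# Hypothetical error probabilities p = Pr(D_E) and q = Pr(M_E); independence
# of the two events is a simplifying assumption for this illustration only.
for p in (0.01, 0.2, 0.7):
    for q in (0.01, 0.2, 0.7):
        P_e = p + q - p * q       # Pr(D_E union M_E) under independence
        P_max = max(p, q)
        assert P_max <= P_e <= 2.0 * P_max
```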
In this paper, we will mainly consider the more pessimistic definition of probability of error $P_e^{(n)}$. For this definition, one can model an inactive user by an active user that transmits message $W_i=0$ and an encoder that maps the zero message to the all-zero codeword. The decoder then simply guesses the transmitted message, and the error probability $P_e^{(n)}$ is given by the probability that the decoder's guess $\hat{W}_i$ is different from $W_i$. Mathematically, this can be described as follows. We enhance the message set to
\begin{align*}
\overline{\mbox{$\cal{M }$}}_n^{(i)}\triangleq \mbox{$\cal{M }$}_n^{(i)} \cup \{0\}
\end{align*}
and define the distribution of the $i$-th user's message as
\begin{align}
\text{Pr} \{W_i = w\} =
\begin{cases}
1 - \alpha_n, & \quad w=0 \\
\frac{\alpha_n}{M_n^{(i)}}, & \quad w \in \{1,\ldots,M_n^{(i)}\}.
\end{cases}
\label{Eq_messge_def}
\end{align}
We assume that the codebook is such that message $0$ is mapped to the all-zero codeword. Then, the channel model~\eqref{Eq_model} can be written as
\begin{align*}
{\bf Y} & = \sum_{i=1}^{\ell_n} {\bf x}_i(W_i) + {\bf Z}.
\end{align*}
We next introduce the notion of an $(n,\bigl\{M_n^{(\cdot)}\bigr\},\bigl\{E_n^{(\cdot)}\bigr\}, \epsilon)$ code.
\begin{definition}
\label{Def_nMCode}
For $0 \leq \epsilon \leq 1$, an $(n,\bigl\{M_n^{(\cdot)}\bigr\},\bigl\{E_n^{(\cdot)}\bigr\}, \epsilon)$ code for the Gaussian MnAC consists of:
\begin{enumerate}
\item Encoding functions $f_i: \{0, 1,\ldots,M_n^{(i)}\} \rightarrow \mathbb{R}^n$, \mbox{$i =1,\ldots, \ell_n$} which map user $i$'s message to the codeword ${\bf x}_i(W_i)$, satisfying the energy constraint
\begin{align}
\label{Eq_energy_consrnt}
\sum_{j=1}^{n} x_{ij}^2(W_i) \leq E_n^{(i)}, \quad \textnormal{ with probability one}
\end{align}
where $x_{ij}(W_i)$ is the $j$-th symbol of the transmitted codeword. We set $x_{ij}(0) = 0$, $j=1, \ldots, n$ for all users $i=1,\ldots, \ell_n$.
\item Decoding function $g: \mathbb{R}^n \rightarrow \{ 0,1,\ldots,M_n^{(1)}\} \times \ldots \times \{ 0,1,\ldots,M_n^{(\ell_n)}\} $ which maps the received vector ${\bf Y}$ to the messages of all users and whose probability of error $P_{e}^{(n)}$ satisfies
\end{enumerate}
\begin{align}
\label{Eq_prob_err}
P_{e}^{(n)} \triangleq \text{Pr} \{ g({\bf Y}) \neq (W_1,\ldots,W_{\ell_n}) \} \leq \epsilon.
\end{align}
\end{definition}
\edit{The probability of error in~\eqref{Eq_prob_err} is equal to $P_e^{(n)}$ defined in~\eqref{Eq_prob_err_union}. Indeed, the event $g({\bf Y}) \neq (W_1,\ldots,W_{\ell_n})$ occurs if, and only if, there exists at least one index $i=1,\ldots, \ell_n$ for which $\hat{W_i}\neq W_i$. This in turn implies that either event $\mbox{$\cal{D }$}_E$ occurs (if $W_i=0$) or event $\mbox{$\cal{M }$}_E$ occurs (if $W_i \neq 0$). Conversely, if the event $\mbox{$\cal{D }$}_E \cup \mbox{$\cal{M }$}_E$ occurs, then there exists either some $i \notin \mbox{$\cal{U}$}_n$ for which $\hat{W}_i \neq 0$ or some $i \in \mbox{$\cal{U}$}_n$ for which $\hat{W}_i \neq W_i$. Consequently, there exists at least one index $i=1,\ldots, \ell_n$ for which $\hat{W_i}\neq W_i$. It follows that the events $ g({\bf Y}) \neq (W_1,\ldots,W_{\ell_n})$ and $\mbox{$\cal{D }$}_E \cup \mbox{$\cal{M }$}_E$ are equivalent.}
\edit{We shall say that the \emph{codebook of user $i$ is orthogonal} if the inner product between ${\bf x}_i(w)$ and ${\bf x}_i(w')$ is zero for every $w\neq w'$, where $w,w'=1,\ldots,M_n^{(i)}$. Similarly, we shall say that an \emph{access scheme is orthogonal} if, for any two users $i$ and $j$, the inner product between ${\bf x}_i(w)$ and ${\bf x}_j(w')$ is zero for every $w=1,\ldots,M_n^{(i)}$ and $w'=1,\ldots, M_n^{(j)}$.}
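As an illustration of these two definitions, the following sketch builds an orthogonal-access scheme in which every codebook is also orthogonal, via pulse position modulation. This is a toy construction of our own (all names are hypothetical), not the codebooks used in the proofs.

```python
def ppm_codebook(n, num_users, M, amplitude):
    """Toy PPM construction (our illustration): user i is assigned a slot of
    M channel uses, and message w puts all of the energy on one position."""
    assert num_users * M <= n
    books = []
    for i in range(num_users):
        book = []
        for w in range(M):
            x = [0.0] * n
            x[i * M + w] = amplitude
            book.append(x)
        books.append(book)
    return books

dot = lambda a, b: sum(u * v for u, v in zip(a, b))

books = ppm_codebook(n=12, num_users=3, M=4, amplitude=2.0)
# Orthogonal codebooks: codewords of the same user have zero inner product.
assert all(dot(books[0][w], books[0][v]) == 0.0
           for w in range(4) for v in range(4) if w != v)
# Orthogonal access: codewords of different users have zero inner product.
assert all(dot(books[i][w], books[j][v]) == 0.0
           for i in range(3) for j in range(3) if i != j
           for w in range(4) for v in range(4))
```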
An $(n,\{M_n^{(\cdot)}\},\{E_n^{(\cdot)}\}, \epsilon)$ code is said to be \emph{symmetric} if $M_n^{(i)} = M_n$ and $E_n^{(i)} = E_n$ for all $i=1, \ldots, \ell_n$. For compactness, we denote such a code by $(n, M_n, E_n, \epsilon)$. In this paper, we restrict ourselves to symmetric codes.
\begin{definition}
\label{Def_Sym_Rate_Cost}
For a symmetric code, the rate per unit-energy $\dot{R}$ is said to be $\epsilon$-achievable if for every $\delta > 0$ there exists an $n_0$ such that, if $n \geq n_0$, then an $(n,M_n,E_n, \epsilon)$ code can be found whose rate per unit-energy satisfies $\frac{\log M_n}{ E_n} > \dot{R} - \delta$. Furthermore, $\dot{R}$ is said to be achievable if it is $\epsilon$-achievable for all $0 < \epsilon < 1$. The capacity per unit-energy $\dot{C}$ is the supremum of all achievable rates per unit-energy. The $\epsilon$-capacity per unit-energy $\dot{C}_{\epsilon}$ is the supremum of all $\epsilon$-achievable rates per unit-energy.
\end{definition}
\begin{remark}
\label{remark}
In \cite[Def.~2]{Verdu90}, a rate per unit-energy $\dot{R}$ is said to be $\epsilon$-achievable if for every $\alpha>0$ there exists an $E_0$ such that, if $E \geq E_0$, then an $(n,M,E,\epsilon)$ code can be found whose rate per unit-energy satisfies $\frac{\log M}{E} > \dot{R} - \alpha$. \edit{Thus, in contrast to Definition~\ref{Def_Sym_Rate_Cost}, the energy $E$ is required to be larger than some threshold, rather than the blocklength $n$.} For the MnAC, where the number of users grows with the blocklength, we believe it is more natural to impose \edit{a threshold on $n$.} Definition~\ref{Def_Sym_Rate_Cost} is also consistent with the definition of energy-per-bit in \cite{Polyanskiy17, ZadikPT19}. Further note that, for the capacity per unit-energy, where a vanishing error probability is required,
Definition~\ref{Def_Sym_Rate_Cost} is in fact equivalent to
\cite[Def.~2]{Verdu90}, since $P_{e}^{(n)}\to 0$ only if $E_n \to \infty$ (see Lemma~\ref{Lem_energy_infty} ahead).
\end{remark}
\begin{remark}
Many works in the literature on many-access channels, including \cite{Polyanskiy17, OrdentlichP17,ZadikPT19,KowshikPISIT19,KowshikP19,KowshikTCOM20}, consider a \emph{per-user probability of error}
\begin{equation}
\label{eq_Pe_A}
P_{e,A}^{(n)} \triangleq \frac{1}{\ell_n} \sum_{i=1}^{\ell_n} \textnormal{Pr}\{\hat{W_i} \neq W_i\}
\end{equation}
rather than the probability of error in~\eqref{Eq_prob_err}. In this paper, we shall refer to \eqref{Eq_prob_err} as the \emph{joint probability of error} (JPE) and to \eqref{eq_Pe_A} as the \emph{average probability of error} (APE). While we mainly consider the JPE, we also discuss the capacity per unit-energy for APE. To this end, we define an $(n,\{M_n^{(\cdot)}\},\{E_n^{(\cdot)}\}, \epsilon)$ code for APE with the same encoding and decoding functions described in Definition~\ref{Def_nMCode}, but with the probability of error \eqref{Eq_prob_err} replaced with~\eqref{eq_Pe_A}. The capacity per unit-energy \edit{and the $\epsilon$-capacity per unit-energy} for APE, denoted by $\dot{C}^A$ \edit{and $\dot{C}_{\epsilon}^A$, respectively, are} then defined as in Definition~\ref{Def_Sym_Rate_Cost}.
\end{remark}
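The difference between the two criteria is visible in a toy computation. The sketch below assumes i.i.d. per-user errors with a hypothetical probability $p$, under which the APE stays fixed while the JPE approaches one as the number of users grows.

```python
# With i.i.d. per-user errors of hypothetical probability p, the APE stays
# at p while the JPE tends to one as the number of users l grows.
p = 0.01
results = []
for l in (1, 10, 1000):
    ape = p                       # (1/l) * (l identical per-user terms)
    jpe = 1.0 - (1.0 - p) ** l    # at least one of the l users errs
    results.append((l, ape, jpe))
    assert ape <= jpe < 1.0
```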
\subsection{Order Notation}
Let $\{a_n\}$ and $\{b_n\}$ be two sequences of nonnegative real numbers.
We write $a_n = O(b_n)$ if $\limsup_{n \to \infty} \frac{a_n}{b_n} < \infty$. Similarly, we write $a_n = o(b_n)$ if $ \lim_{n\rightarrow \infty} \frac{a_n}{b_n} = 0$, and $a_n = \Omega(b_n)$ if $\liminf\limits_{n \rightarrow \infty} \frac{a_n}{b_n} >0$.
The notation $a_n = \Theta (b_n)$ indicates that $a_n = O(b_n)$ and $a_n = \Omega(b_n)$.
Finally, we write
$a_n = \omega (b_n)$ if $\lim\limits_{n\rightarrow \infty} \frac{a_n}{b_n} = \infty$.
\subsection{Main Results}
\label{Sec_feasble_infeasble}
\begin{theorem}
\label{Thm_nonrandom}
The capacity per unit-energy of the \edit{Gaussian} non-random \edit{MnAC} has the following behavior:
\begin{enumerate}
\item If $k_n = o(n/\log n)$, then any rate per unit-energy satisfying $\dot{R} < \frac{\log e}{N_0}$ is achievable. Moreover, this rate can be achieved by an orthogonal-access scheme. \label{Thm_Infeasble_achv}
\item If $k_n =\omega(n / \log n)$, then $\dot{C} =0$. In words, if the order of $k_n$ is strictly above $n/\log n$, then no coding scheme achieves a positive rate per unit-energy. \label{Thm_Infeasble_convrs}
\edit{\item If $k_n = \Theta( \frac{n}{\log n})$, then $0< \dot{C} < \frac{\log e}{N_0} $. In words, if the order of $k_n$ is exactly $n/\log n$, then a positive rate per unit-energy, but strictly less than $\frac{\log e}{N_0}$, is achievable.
\label{Thm_exact_order}}
\end{enumerate}
\end{theorem}
\begin{proof}
See Subsection~\ref{Sec_proof_Thm1}.
\end{proof}
Theorem~\ref{Thm_nonrandom} demonstrates that there is a sharp transition between orders of growth of $k_n$ where
each user can achieve the single-user capacity per unit-energy $\frac{\log e}{N_0}$, i.e., \edit{where} users can communicate as if free of interference, and orders of growth where no positive rate per unit-energy is feasible. The transition threshold separating these two regimes is at the order of growth $n/ \log n$.
The capacity per unit-energy can be achieved using an orthogonal-access scheme where each user is assigned an exclusive time slot.
As we shall show in Section~\ref{sec_random_MnAC}, such an access scheme is wasteful in terms of resources and strictly suboptimal when users are active only sporadically.
\edit{The theorem also demonstrates that, when the order of growth of $k_n$ is exactly equal to $n/\log n$, the rate per unit-energy is strictly positive, but also strictly less than $\frac{\log e}{N_0}$.}
As mentioned in the introduction, when the number of users is finite, all users can achieve the single-user capacity per unit-energy $\frac{\log e}{N_0}$ by an orthogonal-access scheme where each user uses an orthogonal codebook. In the following theorem, we show that this is not necessarily the case \edit{anymore} when the number of users grows with the blocklength.
\begin{theorem}
\label{Thm_ortho_code}
The largest rate per unit-energy $\dot{C}_{\bot \bot}$ achievable with an orthogonal-access scheme and orthogonal codebooks has the following behaviour:
\begin{enumerate}[1)]
\item If $k_n = o(n^{c})$ for every $c>0$, then $\dot{C}_{\bot \bot} = \frac{\log e}{N_0}$. \label{Thm_ortho_part1}
\item If $k_n=\Theta\left({n^c}\right)$, then
\begin{equation*}
\dot{C}_{\bot \bot} = \begin{cases} \frac{\log e}{N_0} \frac{1}{\left(1+\sqrt{\frac{c}{1-c}}\right)^2}, \quad & \textnormal{if $0<c\leq 1/2$} \\ \frac{\log e}{2 N_0} (1-c), \quad & \textnormal{if $1/2<c<1$}.\end{cases}
\end{equation*}
\label{Thm_ortho_part2}
\end{enumerate}
\end{theorem}
\begin{proof}
See Subsection~\ref{sec_ortho}.
\end{proof}
Theorem~\ref{Thm_ortho_code} shows that an orthogonal-access scheme in combination with orthogonal codebooks is optimal only if $k_n$ grows more slowly than any positive power of $n$. Part~\ref{Thm_ortho_part2}) of Theorem~\ref{Thm_ortho_code} gives the largest rate per unit-energy achievable when the order of $k_n$ is a positive power of $n$.
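The piecewise expression in Part~\ref{Thm_ortho_part2}) can be evaluated numerically. The following sketch (with the normalization $\frac{\log e}{N_0}=1$; an illustration only) shows that the two branches agree at $c=1/2$, where both equal $1/4$, and that $\dot{C}_{\bot \bot}$ decreases in $c$:

```python
import math

# Numerical sketch of the piecewise formula in Theorem 2, Part 2,
# normalized so that log(e)/N0 = 1; illustration only, not part of the proof.
def C_orth(c):
    if 0 < c <= 0.5:
        return 1.0 / (1.0 + math.sqrt(c / (1.0 - c)))**2
    elif 0.5 < c < 1:
        return 0.5 * (1.0 - c)
    raise ValueError("c must lie in (0, 1)")

# The two branches agree at c = 1/2 (both give 1/4), so C_orth is continuous.
print(C_orth(0.5), C_orth(0.5 + 1e-12))
```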
\edit{
\begin{remark}
\label{Rem_orth_code}
Observe that the behavior of $\dot{C}_{\bot \bot}$ as a function of $c$ can be divided into two regimes: if $1/2<c<1$, then $\dot{C}_{\bot \bot}$ decays linearly in $c$; if $0<c\leq 1/2$, then the dependence of $\dot{C}_{\bot \bot}$ on $c$ is nonlinear. This is a consequence of the behavior of the error exponent achievable with orthogonal codebooks. More specifically, Theorem~\ref{Thm_ortho_code} follows from lower and upper bounds on the probability of error that become asymptotically tight as $E\to\infty$; see Lemma~\ref{Lem_ortho_code}. The lower bound follows from the sphere-packing bound \cite{ShannonGB67}. The upper bound is obtained by applying Gallager's $\rho$-trick to improve upon the union bound \cite[Sec.~2.5]{ViterbiO79}, followed by an optimization over the parameter $0\leq\rho\leq 1$. When the rate per unit-energy is smaller than $\frac{1}{4} \frac{\log e}{N_0}$, the optimal value of $\rho$ is $1$, and the exponent of the upper bound depends linearly on the rate per unit-energy. For rates per unit-energy above $\frac{1}{4} \frac{\log e}{N_0}$, the optimal value of $\rho$ depends on the rate per unit-energy, which results in a nonlinear dependence of the exponent on the rate per unit-energy. This behavior of the error exponent as a function of the rate per unit-energy translates to the two regimes of $\dot{C}_{\bot \bot}$ observed in Theorem~\ref{Thm_ortho_code}.
\end{remark}
}
Next we discuss the behaviour of the capacity per unit-energy for APE. We
show that, if the order of growth of $k_n$ is sublinear,
then each user can achieve the single-user capacity per unit-energy $\frac{\log e}{N_0}$. Conversely, if the growth of $k_n$ is linear or above, then the capacity per unit-energy is zero. We have the following theorem.
\begin{theorem}
\label{Thm_capac_APE}
The capacity per unit-energy $\dot{C}^A$ for APE has the following behavior:
\begin{enumerate}
\item If $k_n = o(n)$, then $\dot{C}^A = \frac{\log e}{N_0}$. Furthermore, the capacity per unit-energy can be achieved by an orthogonal-access scheme where each user uses an orthogonal codebook. \label{Thm_APE_achv_part}
\item If $k_n = \Omega(n)$, then $\dot{C}^A =0$. \label{Thm_APE_conv_part}
\end{enumerate}
\end{theorem}
\begin{proof}
See Subsection~\ref{sec_APE}.
\end{proof}
Theorem~\ref{Thm_capac_APE} demonstrates that under APE the capacity per unit-energy has a similar behaviour as under JPE. Again, there is a sharp transition between orders of growth of $k_n$ where interference-free communication is possible and orders of growth where no positive rate per unit-energy is feasible. The main difference is that the transition threshold is shifted from $n/\log n$ to $n$. Such an improvement on the order of growth is possible because, for the probability of error to vanish as $n\to \infty$, the energy $E_n$ needs to satisfy different necessary constraints under JPE and APE. Indeed, we show in the proof of Theorem~\ref{Thm_nonrandom} that
the JPE vanishes only if the energy $E_n$ scales logarithmically in the number of users (Lemma~\ref{Lem_convrs_err_prob}), and a positive rate per unit-energy is feasible only if the total power $k_n E_n/n$ is bounded in $n$. No sequence $\{E_n\}$ can satisfy both these conditions if $k_n = \omega(n/\log n)$. In contrast, for the APE to vanish asymptotically, the energy $E_n$ does not need to grow logarithmically in the number of users; it suffices that it tends to infinity as $n \to \infty$. We can then find sequences $\{E_n\}$ that tend to infinity and for which $k_nE_n/n$ is bounded if, and only if, $k_n$ is sublinear in $n$. Also note that, for APE, an orthogonal-access scheme with orthogonal codebooks is optimal for all orders of $k_n$, whereas for JPE it is only optimal if the order of $k_n$ is not a positive power of $n$.
\subsection{Proof of Theorem~\ref{Thm_nonrandom}}
\label{Sec_proof_Thm1}
We first give an outline of the proof of Theorem~\ref{Thm_nonrandom}. To prove Part~\ref{Thm_Infeasble_achv}), we use an orthogonal-access scheme where the total number of channel uses is divided equally among all the users. Each user
uses the same single-user code in the assigned channel uses. The receiver decodes the message of each user separately, which is possible because the access scheme is orthogonal. We \edit{next} express the overall probability of error in terms of the number of users $k_n$ and the
probability of error achieved by the single-user code in an AWGN channel, which we then show vanishes as $n \to \infty$ if $k_n = o(n/\log n)$. The proof of Part~\ref{Thm_Infeasble_convrs}) hinges mainly on two facts. The first one is that the probability of error vanishes only if the energy $E_n$ scales at least
\edit{logarithmically} in the number of users, i.e., $E_n = \Omega(\log k_n)$.\footnote{A similar bound was presented in \cite[p.~82]{Polyanskiy18} for the case where $M_n=2$.} The second one is that we have $\dot{R}>0$ only if the total power $k_nE_n/n$ is bounded as $n \to \infty$, which
is a direct consequence of Fano's inequality. If $k_n = \omega(n/\log n)$, then there is no sequence $\{E_n\}$ that simultaneously satisfies these two conditions. \edit{Part~\ref{Thm_exact_order}) follows by revisiting the proofs of Parts~\ref{Thm_Infeasble_achv}) and \ref{Thm_Infeasble_convrs}) for the case where $k_n=\Theta(n/\log n)$.}
\subsubsection{Proof of Part~\ref{Thm_Infeasble_achv})} The achievability uses an orthogonal-access scheme \edit{where, in each time step,} only one user transmits \edit{while the} other users remain silent. We first note that the probability of correct decoding of any orthogonal-access scheme is given by
\begin{align*}
P_c^{(n)} = \prod_{i=1}^{k_n}\left(1-P_{e,i}\right)
\end{align*}
where $P_{e,i} = \text{Pr} (\hat{W}_i \neq W_i)$ denotes the probability of error in decoding user $i$'s message.
In addition, if each user follows the same coding scheme, then the probability of correct decoding is given by
\begin{align}
P_c^{(n)} & = \left(1-P_{e,1}\right)^{k_n}. \label{Eq_ortho_prob_err}
\end{align}
For a Gaussian point-to-point channel with \edit{ blocklength $N$ and power constraint $P$, i.e., $\frac{E_N}{N} \leq P$,} there exists an encoding and decoding scheme whose
average probability of error is upper-bounded by
\begin{align}
P(\mbox{$\cal{E}$}) & \leq M_N^{ \rho} \exp[-NE_0(\rho, P)], \; \mbox{ for every } 0< \rho \leq 1 \label{Eq_upp_prob_AWGN}
\end{align}
where
\begin{align}
E_0(\rho, P) & \triangleq \frac{\rho}{2} \ln \left(1+\frac{2P}{(1+\rho)N_0}\right). \notag
\end{align}
This bound is due to Gallager and can be found in~\cite[Sec.~7.4]{Gallager68}.
Now let us consider an orthogonal-access scheme in which each user gets $n/k_n$ channel uses, and we timeshare between users. Each user follows
the coding scheme that achieves~\eqref{Eq_upp_prob_AWGN} with power constraint $P_n = \frac{E_n}{n/k_n}$. Note that this coding scheme also satisfies the energy constraint~\eqref{Eq_energy_consrnt}.
\edit{
Then, for a fixed rate per unit-energy $\dot{R} = \frac{\log M_n}{E_n}$, we obtain the following upper bound on $P_{e,1}$ by substituting $n/k_n$ for $N$ and $P_n = \frac{E_n}{n/k_n}$ for $P$ in~\eqref{Eq_upp_prob_AWGN}:}
\begin{align}
P_{e,1} & \leq M_n^{ \rho} \exp\left[-\frac{ n}{k_n}E_0(\rho, P_n)\right] \nonumber \\
& = \exp\left[ \rho \ln M_n - \frac{ n}{k_n} \frac{\rho}{2} \ln \left(1+\frac{ 2E_nk_n/n}{(1+\rho)N_0}\right) \right] \nonumber \\
& = \exp\left[ -E_n \rho \left( \frac{\ln (1+\frac{ 2E_nk_n/n}{(1+\rho)N_0})}{2E_nk_n/n } -\frac{\dot{R}}{ \log e} \right)\right]. \label{Eq_err_uppr1}
\end{align}
Combining \eqref{Eq_err_uppr1} with \eqref{Eq_ortho_prob_err}, we obtain that the probability of correct decoding can be lower-bounded as
\begin{align}
& 1 -P_e^{(n)} \geq \Biggl(1 - \exp\Biggl[ -E_n \rho \Biggl( \frac{\ln (1+\frac{ 2E_nk_n/n}{(1+\rho)N_0})}{2E_nk_n/n } -\frac{\dot{R}}{ \log e} \Biggr)\Biggr]\Biggr)^{k_n}. \label{Eq_ortho_prob_lower}
\end{align}
We next choose $E_n = c_n \ln n$ with $c_n \triangleq \ln\bigl(\frac{n}{k_n\ln n}\bigr)$. Since, by assumption, $k_n = o(n / \log n)$, this implies that $\frac{k_nE_n}{n} \to 0$ as $n \to \infty$. Consequently, the first term in the inner-most bracket in \eqref{Eq_ortho_prob_lower} tends to $1/((1+\rho)N_0)$ as $n \to \infty$. It follows that for $\dot{R} < \frac{\log e}{N_0}$, there exists a sufficiently large $n_0$, a $0<\rho \leq 1$, and a $\delta>0$ such that, for all $n\geq n_0$, the right-hand side (RHS) of \eqref{Eq_ortho_prob_lower} is lower-bounded by $\left(1-\exp[-E_n \rho \delta]\right)^{k_n}$. Since $c_n\delta \rho \to \infty$ as $n\to\infty$, we have
\begin{align}
\left(1-\exp[-E_n \rho \delta]\right)^{k_n}& \geq \left(1-\frac{1}{n^{2}}\right)^{k_n} \notag \\
& \geq \left(1-\frac{1}{n^{2}}\right)^{\frac{n}{\log n}} \notag \\
& = \left[\left(1 - \frac{1}{n^{2}}\right)^{n^{2}}\right]^{\frac{1}{n\log n}} \label{Eq_prob_corrct}
\end{align}
for all $n \geq n_0$, with $n_0$ chosen sufficiently large that $c_n\delta \rho\geq2$ and $k_n \leq \frac{n}{\log n}$. Noting that $(1 - \frac{1}{n^{2}})^{n^{2}} \to 1/e$ and $\frac{1}{n\log n} \to 0$ as $n\to\infty$, we obtain that the RHS of~\eqref{Eq_prob_corrct} tends to one as $n \to \infty$. This implies that, if $k_n = o(n/\log n)$, then any rate per unit-energy $\dot{R} < \frac{\log e}{ N_0} $ is achievable.
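The convergence of the lower bound $(1-\frac{1}{n^2})^{n/\log n}$ to one can also be observed numerically. A quick sketch (natural logarithms used for concreteness; illustration only, not part of the proof):

```python
import math

# Numeric illustration (not part of the proof): the lower bound
# (1 - 1/n^2)^(n/log n) on the probability of correct decoding tends to 1.
def lower_bound(n):
    return (1.0 - 1.0 / n**2) ** (n / math.log(n))

vals = [lower_bound(n) for n in (10, 10**3, 10**6)]
print(vals)
```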
\subsubsection{Proof of Part~\ref{Thm_Infeasble_convrs})}
Let ${\bf W}$ and ${\bf \hat{W}}$ denote the vectors $(W_1,\ldots, W_{k_n})$ and $(\hat{W}_1,\ldots,\hat{W}_{k_n})$, respectively. Then
\begin{align}
k_n \log M_n& = H({\bf W}) \nonumber\\
& = H({\bf W}|{\bf \hat{W}})+I({\bf W};{\bf \hat{W}})\nonumber\\
& \leq 1+P_e^{(n)}k_n \log M_n + I({\bf X};{\bf Y}) \nonumber
\end{align}
by Fano's inequality and the data processing inequality. By following~\cite[Sec.~15.3]{CoverJ06},
it can be shown that
\mbox{$I({\bf X};{\bf Y}) \leq \frac{n}{2} \log \left(1+\frac{2 k_nE_n}{nN_0}\right)$}. Consequently,
\begin{equation}
\frac{\log M_n }{E_n} \leq \frac{1}{k_nE_n}+ \frac{ P_e^{(n)} \log M_n}{E_n} + \frac{n}{2 k_nE_n} \log \left(\!1+\frac{ 2k_nE_n}{nN_0}\!\right)\!. \notag
\end{equation}
This implies that the rate per unit-energy $\dot{R}=(\log M_n)/E_n$ is upper-bounded by
\begin{align}
\dot{R}\leq \frac{ \frac{1}{k_nE_n} + \frac{n}{2 k_nE_n}\log(1+\frac{ 2k_nE_n}{nN_0})}{1 -P_e^{(n)}}.\label{Eq_R_avg1}
\end{align}
We next show by contradiction that, if $k_n =\omega(n / \log n)$, then $P_e^{(n)} \to 0$ as $n \to \infty$ only if $\dot{C}=0$. Thus, assume that $k_n =\omega(n / \log n)$ and that there exists a code with rate per unit-energy $\dot{R} >0$ such that $P_e^{(n)} \to 0$ as $n \to \infty$.
To prove that there is a contradiction we need the following lemma.
\begin{lemma}
\label{Lem_energy_infty}
If $M_n \geq 2$, then $P_{e}^{(n)} \to 0$ only if $E_n \to \infty$.
\end{lemma}
\begin{proof}
See Appendix~\ref{Sec_energy_infty}.
\end{proof}
By the assumption $\dot{R} > 0$, we have that $M_n \geq 2$.
Since we further assumed that
$P_{e}^{(n)} \to 0$, Lemma~\ref{Lem_energy_infty} implies that $E_n \to \infty$.
Together with \eqref{Eq_R_avg1}, this in turn implies that $\dot{R} > 0$ is only possible if
$k_nE_n/n$ is bounded in $n$. Thus,
\begin{align}
E_n = O(n/k_n). \label{Eq_energy_bnd}
\end{align}
The next lemma presents another necessary condition on the order of $E_n$ which contradicts \eqref{Eq_energy_bnd}.
\begin{lemma}
\label{Lem_convrs_err_prob}
If $\dot{R} > 0$ and $k_n\geq 5$, then $P_{e}^{(n)} \to 0$ only if $E_n = \Omega(\log k_n)$.
\end{lemma}
\begin{proof}
This lemma is a special case of Lemma~\ref{Lem_energy_bound} stated in the proof of Theorem~\ref{Thm_random_JPE} in Section~\ref{sec_random_MnAC} and proven in Appendix~\ref{Append_prob_lemma}.
\end{proof}
We finish the proof by showing that, if $k_n =\omega(n / \log n)$, then there exists no sequence $\{E_n\}$ of order $\Omega(\log k_n)$ that satisfies \eqref{Eq_energy_bnd}. Indeed, $E_n = \Omega(\log k_n)$ and $k_n =\omega(n / \log n)$ imply that
\begin{align}
E_n =\Omega(\log n) \label{Eq_energy_bnd1}
\end{align}
because the order of $E_n$ is lower-bounded by the order of $\log n - \log \log n$, and $\log n - \log \log n = \Theta(\log n)$. Furthermore, \eqref{Eq_energy_bnd} and $k_n =\omega(n / \log n)$ imply that
\begin{align}
E_n & = o(\log n). \label{Eq_energy_bnd2}
\end{align}
Since no sequence $\{E_n\}$ can simultaneously satisfy~\eqref{Eq_energy_bnd1} and~\eqref{Eq_energy_bnd2}, this contradicts the assumption that there exists a code with a positive rate per unit-energy such that the probability of error vanishes as $n$ tends to infinity. Consequently,
if $k_n = \omega(n/\log n)$, then no positive rate per unit-energy is achievable. This proves Part~\ref{Thm_Infeasble_convrs}) of Theorem~\ref{Thm_nonrandom}.
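The incompatibility of the two conditions on $E_n$ can be illustrated numerically. For a concrete sequence $k_n = n \log\log n/\log n$ (a hypothetical choice of order $\omega(n/\log n)$; the sketch is an illustration, not part of the proof), the ratio between the upper bound $O(n/k_n)$ and the lower bound $\Omega(\log k_n)$ on $E_n$ shrinks with $n$:

```python
import math

# Numeric illustration (hypothetical sequence, not part of the proof): for
# k_n = n * loglog(n) / log(n), which is omega(n / log n), the ratio between
# the upper bound O(n/k_n) and the lower bound Omega(log k_n) on E_n
# decreases with n, so no sequence E_n can satisfy both conditions.
def ratio(n):
    k = n * math.log(math.log(n)) / math.log(n)
    return (n / k) / math.log(k)

vals = [ratio(10**p) for p in (3, 6, 12, 24)]
print(vals)
```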
\edit{
\subsubsection{Proof of Part~\ref{Thm_exact_order})}
\label{Sec_prop_exact}
To show that $\dot{C}>0$, we use the same orthogonal-access scheme as in the proof of Part~\ref{Thm_Infeasble_achv}) of Theorem~\ref{Thm_nonrandom}. Thus, each user is assigned $n/k_n$ channel uses, and only one user transmits at a time. We further assume that each user uses energy $E_n=c\log n$, where $c$ is some positive constant to be determined later. By the assumption $k_n = \Theta(n/\log n)$, there exist $n_0>0$ and $0 < a_1 \leq a_2$ such that, for all $n \geq n_0$, we have
\begin{equation}
\label{eq:kn_Theta(n/logn)}
a_1 \frac{n}{\log n} \leq k_n\leq a_2 \frac{n}{\log n}.
\end{equation}
The probability of error in decoding the first user's message is then given by \eqref{Eq_err_uppr1}, namely,
\begin{align}
P_{e,1} & \leq \exp\left[ -E_n \rho \left( \frac{\ln \bigl(1+\frac{ 2E_nk_n/n}{(1+\rho)N_0}\bigr)}{2E_nk_n/n } -\frac{\dot{R}}{ \log e} \right)\right] \nonumber\\
& \leq \exp\left[ -c \log n \, \rho \left( \frac{\ln \bigl(1+\frac{ 2a_2 c}{(1+\rho)N_0}\bigr)}{2 a_2 c} -\frac{\dot{R}}{ \log e} \right)\right], \quad \textnormal{for every $0<\rho \leq 1, \; n \geq n_0$}\label{eq:blabla2_t1}
\end{align}
where the last inequality follows since $k_nE_n/n \leq a_2c$ for $n \geq n_0$.
We next set
\begin{equation*}
\dot{R} = \frac{\log e}{2} \frac{\ln \bigl(1+\frac{ 2a_2 c}{(1+\rho)N_0}\bigr)}{2a_2 c}
\end{equation*}
which is clearly positive for fixed $a_2$, $c$, and $\rho$. The upper bound \eqref{eq:blabla2_t1} then becomes
\begin{equation}
\label{eq:blabla3_t1}
P_{e,1} \leq \exp\left[ -\log n \, \frac{\rho}{2} c\frac{\ln \bigl(1+\frac{ 2a_2 c}{(1+\rho)N_0}\bigr)}{2a_2 c}\right], \quad n \geq n_0.
\end{equation}
For every fixed $a_2$ and $\rho$, the term
\begin{equation*}
\frac{\rho}{2} c\frac{\ln \bigl(1+\frac{ 2a_2c}{(1+\rho)N_0}\bigr)}{2a_2 c} = \frac{\rho}{2}\frac{\ln \bigl(1+\frac{ 2a_2 c}{(1+\rho)N_0}\bigr)}{2a_2}
\end{equation*}
is a continuous, monotonically increasing, function of $c$ that is independent of $n$ and ranges from zero to infinity. We can therefore find a $c $ such that \eqref{eq:blabla3_t1} simplifies to
\begin{equation*}
P_{e,1} \leq \exp[-\ln n] = \frac{1}{n}, \quad n \geq n_0.
\end{equation*}
The above scheme has a positive rate per unit-energy. It remains to show that this rate per unit-energy is also achievable, i.e., that the overall probability of correct decoding tends to one as $n\to\infty$. To this end, we use \eqref{Eq_ortho_prob_err} to obtain that
\begin{align}
1- P_e^{(n)} &= (1-P_{e,1})^{k_n} \notag\\
&\geq \left(1-\frac{1}{n}\right)^{a_2n/\log n}, \quad n \geq n_0. \label{eq:blabla4_t1}
\end{align}
Since $(1-\frac{1}{n})^n\to 1/e$ and $\frac{a_2}{\log n}\to 0$ as $n\to\infty$, the RHS of \eqref{eq:blabla4_t1} tends to one as $n\to\infty$, hence so does the probability of correct decoding.
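The existence of the required constant $c$ can be illustrated by bisection. In the sketch below (illustrative values $\rho=1$, $a_2=2$, $N_0=1$ and target value $1$; these parameters are assumptions for the illustration, not from the paper), the exponent factor $g(c) = \frac{\rho}{2}\frac{\ln(1+\frac{2a_2c}{(1+\rho)N_0})}{2a_2}$ is continuous and strictly increasing in $c$, so any positive target can be met:

```python
import math

# Sketch with illustrative parameters (not from the paper): the exponent
# factor g(c) = (rho/2) * ln(1 + 2*a2*c/((1+rho)*N0)) / (2*a2) is continuous
# and strictly increasing in c, ranging from 0 to infinity, so a target
# value (here 1.0) can be met by bisection.
rho, a2, N0 = 1.0, 2.0, 1.0

def g(c):
    return (rho / 2.0) * math.log(1.0 + 2.0 * a2 * c / ((1.0 + rho) * N0)) / (2.0 * a2)

lo, hi = 1e-9, 1e9
for _ in range(200):          # bisection for g(c) = 1
    mid = 0.5 * (lo + hi)
    if g(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(lo, g(lo))
```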
We next show that $\dot{C} <\frac{\log e}{N_0}$. Lemma~\ref{Lem_convrs_err_prob} implies that, if $k_n = \Theta(n/\log n)$, then
$P_{e}^{(n)} $ vanishes only if $E_n = \Omega(\log n)$. Furthermore, if $E_n = \omega(\log n)$, then it follows from~\eqref{Eq_R_avg1} that $\dot{C} =0$ since, in this case, $k_nE_n/n$ tends to infinity as $n \to \infty$. Without loss of generality, we can thus assume that $E_n$ must satisfy $E_n = \Theta(\log n)$. Thus, there exist $n'_0>0$ and $0<l_1 \leq l_2$ such that, for all $n \geq n'_0$, we have $l_1\log n \leq E_n \leq l_2 \log n$. Together with \eqref{eq:kn_Theta(n/logn)}, this implies that $\frac{k_n E_n}{n} \geq a_1 l_l$ for all $n \geq \max(n_0,n'_0)$. The claim that $\dot{C}<\frac{\log e}{N_0}$ follows then directly from \eqref{Eq_R_avg1}. Indeed, using that $\frac{\log (1+x)}{x}< \log e$ for every $x>0$, we obtain that
\begin{equation}
\label{eq:blabla_t1}
\frac{n}{2 k_nE_n} \log \left(1+\frac{ 2k_nE_n}{nN_0}\right) \leq \frac{1}{2 a_1 l_1}\log\left(1+\frac{2 a_1 l_1}{N_0}\right)< \frac{\log e}{N_0}, \quad n \geq \max(n_0,n'_0).
\end{equation}
By \eqref{Eq_R_avg1}, in the limit as $P_e^{(n)}\to 0$ and $E_n\to \infty$, the rate per unit-energy is upper-bounded by \eqref{eq:blabla_t1}. It thus follows that $\dot{C} < \frac{\log e}{N_0}$, which concludes the proof of Part~\ref{Thm_exact_order}) of Theorem~\ref{Thm_nonrandom}.
}
\subsection{Proof of Theorem~\ref{Thm_ortho_code}}
\label{sec_ortho}
The proof of Theorem~\ref{Thm_ortho_code} is based on the following lemma, which presents bounds on the probability of error achievable over a Gaussian point-to-point channel with an orthogonal codebook.
\begin{lemma}
\label{Lem_ortho_code}
The probability of error $P_{e,1} = \text{Pr} (\hat{W} \neq W )$ achievable over a Gaussian point-to-point channel with an orthogonal codebook with $M$ codewords and energy less than or equal to $E$ satisfies the following bounds:
\begin{enumerate}
\item For $0 < \dot{R} \leq \frac{1}{4} \frac{\log e}{N_0}$,
\begin{align}
& \exp\left[- \frac{\ln M}{\dot{R}}\left(\frac{\log e}{2 N_0} - \dot{R} + \beta_E \right) \right] \leq P_{e,1} \leq \exp\left[- \frac{\ln M}{\dot{R}}\left(\frac{\log e}{2 N_0} - \dot{R} \right) \right]. \label{Eq_orth_sinlg_uppr1}
\end{align}
\item For $\frac{1}{4} \frac{\log e}{N_0} \leq \dot{R} \leq \frac{\log e}{N_0}$,
\begin{align}
& \exp\left[- \frac{\ln M}{\dot{R}}\left(\left(\sqrt{\frac{\log e}{N_0}}- \sqrt{\dot{R} }\right)^2 + \beta'_E \right) \right] \leq P_{e,1} \leq \exp\left[- \frac{\ln M}{\dot{R}} \left(\sqrt{\frac{\log e}{N_0}} - \sqrt{ \dot{R}}\right)^2 \right]. \label{Eq_orth_sinlg_uppr2}
\end{align}
\end{enumerate}
\edit{
In~\eqref{Eq_orth_sinlg_uppr1} and~\eqref{Eq_orth_sinlg_uppr2}, $\beta_E$ and $\beta'_E$ are some constants of order $O(\frac{1}{\sqrt{E}})$.}
\end{lemma}
\begin{proof}
\edit{ The upper bounds in \eqref{Eq_orth_sinlg_uppr1} and \eqref{Eq_orth_sinlg_uppr2} are obtained by upper-bounding the probability of error using Gallager's $\rho$-trick to improve upon the union bound \cite[Sec.~2.5]{ViterbiO79}, followed by a maximization over $\rho$. For $0 < \dot{R} \leq \frac{1}{4} \frac{\log e}{N_0}$, the optimal $\rho$ is equal to $1$; for $\frac{1}{4} \frac{\log e}{N_0} \leq \dot{R} \leq \frac{\log e}{N_0}$, the optimal $\rho$ is a function of $\dot{R}$. Hence, the upper bounds in \eqref{Eq_orth_sinlg_uppr1} and~\eqref{Eq_orth_sinlg_uppr2} have different dependencies on $\dot{R}$. The lower bounds in \eqref{Eq_orth_sinlg_uppr1} and \eqref{Eq_orth_sinlg_uppr2} follow from the sphere-packing bound by Shannon, Gallager, and Berlekamp \cite{ShannonGB67}. However, their approach to improve the sphere-packing bound at low rates by writing codewords as concatenations of subcodewords and lower-bounding the error exponent by the convex combination of the error exponents of these subcodewords does not directly apply to our setting where $\log M/E$ is held fixed and $E\to\infty$ (rather than $\log M/n$ is held fixed and $n\to\infty$). The reason is that, for some orthogonal codebooks, the energy of one of the subcodebooks is always zero, resulting in a trivial case where the Shannon-Gallager-Berlekamp approach cannot improve upon the original sphere-packing bound. To sidestep this problem, we lower-bound the probability of error by first rotating the orthogonal codebook in such a way that the energy of each subcodeword is proportional to its blocklength, after which the Shannon-Gallager-Berlekamp approach can be applied. For a full proof of Lemma~\ref{Lem_ortho_code}, see Appendix~\ref{Sec_AWGN_ortho_code}.}
\end{proof}
Next, we define
\begin{align}
a \triangleq \left\{
\begin{array}{cl}
\frac{\left(\frac{\log e}{2 N_0} - \dot{R} \right) }{\dot{R}}, & \quad \mbox{if } 0 < \dot{R} \leq \frac{1}{4} \frac{\log e}{N_0} \\
\frac{ \left(\sqrt{\frac{\log e}{N_0}} - \sqrt{ \dot{R}}\right)^2 }{\dot{R}}, & \quad \mbox{if } \frac{1}{4} \frac{\log e}{N_0} \leq \dot{R} \leq \frac{\log e}{N_0}
\end{array}\right. \label{Eq_def_a}
\end{align}
and let $a_E \triangleq a + \max \{ \beta_E, \beta'_E\} $.
Then, the bounds in Lemma~\ref{Lem_ortho_code} can be written as
\begin{align}
1/M^{a_E} & \leq P_{e,1} \leq 1/M^{a}. \label{Eq_prob_err_singl}
\end{align}
Now let us consider the case where the users apply an orthogonal-access scheme together with orthogonal codebooks.
For such a scheme, the collection of codewords from all users is orthogonal, hence there are at most $n$ codewords of length $n$. Since, with a symmetric code, each user transmits the same number of messages, it follows that each user transmits $M_n=n/k_n$ messages with
codewords of energy less than or equal to $E_n$.
In this case, we obtain from \eqref{Eq_ortho_prob_err} and \eqref{Eq_prob_err_singl} that
\begin{align*}
\left(1-\left(\frac{k_n}{n}\right)^{a}\right)^{k_n} \leq \left(1- P_{e,1}\right)^{k_n} \leq \left(1-\left(\frac{k_n}{n}\right)^{a_{E_n}}\right)^{k_n}
\end{align*}
which, denoting $a_n \triangleq a_{E_n}$, can be written as
\begin{align}
& \left[\left(1-\left(\frac{k_n}{n}\right)^a\right)^{(\frac{n}{k_n})^a}\right]^{\frac{k_n^{1+a}}{n^a}}\leq \left(1- P_{e,1}\right)^{k_n}
\leq \left[\left(1-\left(\frac{k_n}{n}\right)^{a_n}\right)^{(\frac{n}{k_n})^{a_n}}\right]^ {\frac{k_n^{1+a_n}}{n^{a_n}}}. \label{Eq_ortho_code_upp_low}
\end{align}
Since Theorem~\ref{Thm_ortho_code}
only concerns a sublinear number of users, we have
\begin{align*}
\lim\limits_{n\to \infty} \left(1-\left(\frac{k_n}{n}\right)^a\right)^{(\frac{n}{k_n})^a}
& = \frac{1}{e}.
\end{align*}
Furthermore, if $P_e^{(n)} \to 0$ then, by Lemma~\ref{Lem_energy_infty}, $E_n \to \infty$ as $n \to \infty$. In this case, $a_n$ converges to the finite value $a$ as $n \to \infty$, and we obtain
\begin{align*}
\lim\limits_{n \to \infty} \left(1-\left(\frac{k_n}{n}\right)^{a_n}\right)^{(\frac{n}{k_n})^{a_n}} & = \frac{1}{e}.
\end{align*}
So \eqref{Eq_ortho_code_upp_low} implies that
$P_e^{(n)} \to 0$ as $n \to \infty$ if
\begin{align}
\lim\limits_{n \to \infty} {\frac{k_n^{1+a}}{n^a}} = 0\label{Eq_ordr_lowr}
\end{align}
and only if
\begin{align}
\lim\limits_{n \to \infty} {\frac{k_n^{1+a_n}}{n^{a_n}}} = 0. \label{Eq_ordr_uppr}
\end{align}
We next use these observations to prove Parts~\ref{Thm_ortho_part1}) and \ref{Thm_ortho_part2}) of Theorem~\ref{Thm_ortho_code}. We begin with Part~\ref{Thm_ortho_part1}). Let $\dot{R} < \frac{\log e}{N_0}$. Thus, we have $a>0$ which implies that we can find a constant $\eta < a/(1+a)$ such that $n^{\eta (1+a)}/n^a \to 0$ as $n \to \infty$. Since, by assumption, $k_n =o(n^c)$ for every $c> 0$, it follows that there exists an $n_0$ such that, for all $n\geq n_0$, we have $k_n \leq n ^{\eta(1+a)}$. This implies that \eqref{Eq_ordr_lowr} is satisfied, from which Part~\ref{Thm_ortho_part1}) follows.
We next prove Part~\ref{Thm_ortho_part2}) of Theorem~\ref{Thm_ortho_code}.
Indeed, if $k_n=\Theta\left({n^c}\right)$, $0<c<1$, then there exist $0<l_1\leq l_2$ and $n_0$ such that, for all $n\geq n_0$, we have $(l_1n)^c\leq k_n \leq (l_2n)^c$. Consequently,
\begin{align}
{\frac{(l_1 n)^{c(1+a_n)}}{n^{a_n}}} \leq {\frac{k_n^{1+a_n}}{n^{a_n}}}
\leq {\frac{(l_2 n)^{c(1+a_n)}}{n^{a_n}}}. \label{Eq_ortho_err_uppr}
\end{align}
If $P_e^{(n)} \to 0$, then from~\eqref{Eq_ordr_uppr} we have ${\frac{k_n^{1+a_n}}{n^{a_n}}}\to 0$. Thus, \eqref{Eq_ortho_err_uppr} implies that $c(1+a_n) - a_n$ converges to a negative value.
Since $c(1+a_n) - a_n$ tends to $c(1+a) - a$ as $n \to \infty$, it follows that $P_e^{(n)} \to 0$ only if $c(1+a) - a < 0$, which is the same as $a > c/(1-c)$.
Using similar arguments, it follows from \eqref{Eq_ordr_lowr} that if $a > c/(1-c)$, then $P_e^{(n)} \to 0$. Hence, $P_e^{(n)} \to 0$ if, and only if, $a > c/(1-c)$.
It can be observed from~\eqref{Eq_def_a} that $a$ is a monotonically decreasing function of $\dot{R}$. So for $k_n=\Theta\left({n^c}\right), 0<c<1$, the capacity per unit-energy $\dot{C}_{\bot \bot}$ is given by
\begin{align}
\dot{C}_{\bot \bot} = \sup \{\dot{R}\geq 0 : a(\dot{R}) > c/(1-c)\} \notag
\end{align}
where we write $a(\dot{R})$ to make it clear that $a$ as defined in \eqref{Eq_def_a} is a function of $\dot{R}$.
This supremum can be computed as
\begin{equation*}
\dot{C}_{\bot \bot} = \begin{cases} \frac{\log e}{N_0} \left(\frac{1}{1+\sqrt{\frac{c}{1-c}}}\right)^2, & \quad \mbox{if } 0 < c \leq 1/2\\
\frac{\log e}{2 N_0} (1-c), & \quad \mbox{if } 1/2<c < 1
\end{cases}
\end{equation*}
which proves Part~\ref{Thm_ortho_part2}) of Theorem~\ref{Thm_ortho_code}.
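This last computation can be verified numerically: solving $a(\dot{R}) = c/(1-c)$ for the decreasing function $a$ in \eqref{Eq_def_a} recovers the closed form above. The sketch below (with the normalization $\frac{\log e}{N_0}=1$; an illustration only, not part of the proof) checks the agreement for several values of $c$:

```python
import math

# Illustration with log(e)/N0 normalized to 1: plugging the closed-form
# capacity per unit-energy into the function a from (Eq_def_a) recovers
# the threshold c/(1-c). Not part of the proof.
def a(R):
    if 0 < R <= 0.25:
        return (0.5 - R) / R
    if 0.25 <= R <= 1.0:
        return (1.0 - math.sqrt(R))**2 / R
    raise ValueError("R must lie in (0, 1]")

def C_closed(c):
    if 0 < c <= 0.5:
        return 1.0 / (1.0 + math.sqrt(c / (1.0 - c)))**2
    return 0.5 * (1.0 - c)

for c in (0.1, 0.3, 0.5, 0.7, 0.9):
    R = C_closed(c)
    print(c, R, a(R), c / (1.0 - c))
```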
\subsection{Proof of Theorem~\ref{Thm_capac_APE}}
\label{sec_APE}
\subsubsection{Proof of Part~\ref{Thm_APE_achv_part})}
We first argue that $P_{e,A}^{(n)} \to 0$ only if $E_n \to \infty$, and that in this case, $\dot{C}^A \leq \frac{\log e}{N_0}$. Indeed, let \text{$P_{e,i} \triangleq \textnormal{Pr}\{\hat{W}_i\neq W_i\}$} denote the probability that message $W_i$ is decoded erroneously. We then have that $P_{e,A}^{(n)} \geq \min_{i} P_{e,i}$. Furthermore, $P_{e,i}$ is lower-bounded by the error probability of the Gaussian single-user channel, since a single-user channel can be obtained from the MnAC if a genie informs the receiver about the codewords transmitted by users $j \neq i$. By applying the lower bound \cite[eq.~(30)]{PolyanskiyPV11} on the error probability of the Gaussian single-user channel, we thus obtain
\begin{equation}
\label{eq:LB_P2P}
P_{e,A}^{(n)} \geq Q\left(\sqrt{\frac{2E_n}{N_0}}\right), \quad M_n \geq 2
\end{equation}
where $Q$ denotes \edit{the $Q$-function, i.e.,} the tail distribution function of the standard Gaussian distribution.
Hence, $P_{e,A}^{(n)} \to 0$ only if $E_n \to \infty$. As mentioned in Remark~\ref{remark}, when $E_n$ tends to infinity as $n\to\infty$, the capacity per unit-energy $\dot{C}^A$ coincides with the capacity per unit-energy defined in \cite{Verdu90}, which for the Gaussian single-user channel is given by $\frac{\log e}{N_0}$ \cite[Ex.~3]{Verdu90}. Furthermore, if $P_{e,A}^{(n)} \to 0$ as $n\to\infty$, then there exists at least one user $i$ for which $P_{e,i} \to 0$ as $n\to\infty$. By the above genie argument, this user's rate per unit-energy is upper-bounded by the capacity per unit-energy of the Gaussian single-user channel. Since, for the class of symmetric codes considered in this paper, each user transmits at the same rate per unit-energy, we conclude that $\dot{C}^A \leq \frac{\log e}{N_0}$.
We next show that any rate per unit-energy $\dot{R} < \frac{\log e}{N_0}$ is achievable by an orthogonal-access scheme where each user uses an orthogonal codebook of blocklength $n/k_n$.
To transmit message $w_i$, user $i$ sends in his assigned slot the codeword ${\bf x}(w_i) = (x_1(w_i), \ldots, x_{n/k_n }(w_i))$, which is given by
\begin{align*}
x_{j}(w_i) = \begin{cases}
\sqrt{E_n}, & \text{ if } j=w_i\\
0, & \text{ otherwise}.
\end{cases}
\end{align*}
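This pulse-position construction can be sketched concretely. In the toy example below (illustrative sizes $M=8$ codewords of length $8$ and energy $E=4$; the values are assumptions for the illustration), the codewords are mutually orthogonal and each has energy exactly $E$:

```python
import math

# Toy pulse-position codebook (illustrative sizes): M codewords of length M;
# codeword w has a single pulse of amplitude sqrt(E) in position w.
M, E = 8, 4.0

codebook = [[math.sqrt(E) if j == w else 0.0 for j in range(M)] for w in range(M)]

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

print([inner(codebook[w], codebook[w]) for w in range(M)])  # per-codeword energies
print(inner(codebook[0], codebook[1]))                      # orthogonality
```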
To show that the probability of error vanishes, we use the following bound from Lemma~\ref{Lem_ortho_code}:
\begin{align}
P_{e,i} \leq
\begin{cases}
\exp\left\{- \frac{\ln M_n}{\dot{R}}\left(\frac{\log e}{2 N_0} - \dot{R} \right) \right\}, & \text{ if } 0 < \dot{R} \leq \frac{1}{4} \frac{\log e}{N_0}\\
\exp\left\{- \frac{\ln M_n}{\dot{R}} \left(\sqrt{\frac{\log e}{N_0}} - \sqrt{ \dot{R}}\right)^2\right\}, & \text{ if } \frac{1}{4} \frac{\log e}{N_0} \leq \dot{R} \leq \frac{\log e}{N_0}.
\end{cases}
\label{Eq_ortho_prob_uppr1}
\end{align}
It follows from~\eqref{Eq_ortho_prob_uppr1} that, if $\dot{R} < \frac{\log e}{N_0}$ and $M_n \to \infty$ as $n \to \infty$, then $P_{e,i}$, $i=1,\ldots, k_n$, tends to zero as $n \to \infty$. Since $k_n =o(n)$, it follows that $M_n = n/k_n$ tends to infinity as $n \to \infty$. Thus, for any $\dot{R} < \frac{\log e}{N_0}$, the probability of error $P_{e,i}$ vanishes. This implies that $P_{e,A}^{(n)}$ also vanishes as $n \to \infty$, thus proving Part~\ref{Thm_APE_achv_part}).
\subsubsection{Proof of Part~\ref{Thm_APE_conv_part})}
Fano's inequality yields that
\begin{equation*}
\log M_n \leq 1+ P_{e,i}\log M_n+ I(W_i; \hat{W}_i), \quad \mbox{for } i=1,\ldots, k_n.
\end{equation*}
Averaging over all $i$'s then gives
\begin{IEEEeqnarray}{lCl}
\log M_n & \leq & 1+ \frac{1}{k_n} \sum_{i=1}^{k_n} P_{e,i}\log M_n+ \frac{1}{k_n} I({\bf W}; {\bf \hat{W}}) \nonumber\\
& \leq & 1+P_{e,A}^{(n)}\log M_n+ \frac{1}{k_n} I({\bf W}; {\bf Y}) \nonumber\\
& \leq & 1 + P_{e,A}^{(n)} \log M_n+\frac{n}{2k_n} \log \left(1+\frac{2 k_nE_n}{nN_0}\right) \IEEEeqnarraynumspace\label{eq_rate_APE_uppr}
\end{IEEEeqnarray}
where the first inequality follows because the messages $W_i, i=1, \ldots, k_n$ are independent and because conditioning reduces entropy, the second inequality follows from the definition of $P_{e,A}^{(n)}$ and the data processing inequality, and the third inequality follows by upper-bounding $I({\bf W};{\bf Y})$ by $\frac{n}{2} \log \bigl(1+\frac{2 k_nE_n}{nN_0}\bigr)$.
Dividing both sides of \eqref{eq_rate_APE_uppr} by $E_n$ and solving the inequality for $\dot{R}^{A}$, we obtain the upper bound
\begin{equation}
\label{eq_Part2_Th1_end1}
\dot{R}^{A}\leq \frac{ \frac{1}{E_n} + \frac{n}{2 k_nE_n}\log(1+\frac{ 2k_nE_n}{nN_0})}{1 -P_{e,A}^{(n)}}.
\end{equation}
As argued at the beginning of the proof of Part~\ref{Thm_APE_achv_part}), we have $P_{e,A}^{(n)} \to 0$ only if $E_n \to \infty$. If $k_n = \Omega(n)$, then this implies that $k_nE_n/n \to \infty$ as $n \to \infty$. It thus follows from \eqref{eq_Part2_Th1_end1} that, if $k_n = \Omega(n)$, then $\dot{C}^A=0$, which is Part~\ref{Thm_APE_conv_part}) of Theorem~\ref{Thm_capac_APE}.
\section{Capacity per Unit-Energy of Random Many-Access Channels}
\label{sec_random_MnAC}
In this section, we \edit{consider} the case where the users' activation probability can be strictly smaller than $1$. In Subsection~\ref{Sec_results}, we discuss the capacity per unit-energy of random MnACs. In particular, we present our main result in Theorem~\ref{Thm_random_JPE}, which characterizes the capacity per unit-energy in terms of $\ell_n$ and $k_n$. Then, in
Theorem~\ref{Thm_ortho_accs}, we analyze the largest rate per unit-energy achievable using an orthogonal-access scheme. Finally, in Theorem~\ref{Thm_capac_PUPE}, we briefly discuss the behavior of the capacity per unit-energy of the random MnAC for APE. The proofs of Theorems~\ref{Thm_random_JPE}--\ref{Thm_capac_PUPE} are presented in Subsections~\ref{Sec_proof_random}, \ref{Sec_ortho_access}, and \ref{sec_average}, respectively.
\subsection{Capacity per Unit-Energy of Random MnAC}
\label{Sec_results}
Before presenting our results, we first note that the case where $k_n$ vanishes as $n \to \infty$ is uninteresting.
Indeed, this case only happens if $\alpha_n \to 0$.
Then, the probability that all the users are inactive, given by $\bigl( (1-\alpha_n)^{\frac{1}{\alpha_n}}\bigr)^{k_n}$,
tends to one since $(1-\alpha_n)^{\frac{1}{\alpha_n}} \to 1/e $ and $k_n \to 0$. Consequently, if each user employs a code with $M_n=2$ and $E_n =0$ for all $n$, and if the decoder always declares that all users are inactive, then the probability of error $P_{e}^{(n)}$ vanishes as $n \to \infty$. This implies that $\dot{C}= \infty$. \edit{ In the following, we avoid this trivial case and assume that $\ell_n$ and $\alpha_n$ are such that $k_n = \Omega(1)$. This implies that the inverse of $\alpha_n$ is upper-bounded by the order of $\ell_n$, i.e., $\frac{1}{\alpha_n} = O(\ell_n)$.} We have the following theorem.
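For intuition, this degenerate regime can be checked numerically. The sketch below uses the hypothetical scaling $\ell_n = n$ and $\alpha_n = 1/n^2$ (so that $k_n = 1/n \to 0$), and evaluates the probability $(1-\alpha_n)^{\ell_n}$ that all users are inactive:

```python
import math

# Sanity check with illustrative parameters (not from the paper): when
# alpha_n -> 0 fast enough that k_n = alpha_n * ell_n -> 0, the probability
# that *all* ell_n users are inactive, (1 - alpha_n)^ell_n, tends to one,
# so a decoder that always declares "all inactive" succeeds.
def prob_all_inactive(ell_n, alpha_n):
    return (1.0 - alpha_n) ** ell_n

# ell_n = n, alpha_n = 1/n^2  =>  k_n = 1/n -> 0
for n in (10**2, 10**4, 10**6):
    p = prob_all_inactive(n, 1.0 / n**2)   # approximately exp(-1/n)
```

Since $(1-\alpha_n)^{\ell_n} \approx e^{-\alpha_n \ell_n} = e^{-k_n}$, the probability approaches one exactly when $k_n \to 0$.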
\begin{theorem}
\label{Thm_random_JPE}
Assume that $k_n =\Omega(1)$. Then the capacity per unit-energy of the Gaussian random MnAC has the following behavior:
\begin{enumerate}
\item If $k_n \log \ell_n = o(n)$, then $\dot{C} = \frac{\log e}{N_0}$. \label{Thm_achv_part}
\item If $k_n \log \ell_n = \omega(n)$, then $\dot{C} =0$. \label{Thm_conv_part}
\edit{\item If $k_n \log \ell_n = \Theta(n)$, then $ 0< \dot{C} < \frac{\log e}{N_0}$.
\label{Thm_exact_order}
}
\end{enumerate}
\end{theorem}
\begin{proof}
See Subsection~\ref{Sec_proof_random}.
\end{proof}
Theorem~\ref{Thm_random_JPE} demonstrates that there is a sharp transition between orders of growth of $k_n$ where interference-free communication is feasible and orders of growth where no positive rate per unit-energy is feasible. Recall that the same behavior was observed for the non-random-access case ($\alpha_n=1$), where the transition threshold separating these two regimes is at the order of growth $n/ \log n$, as shown in Theorem~\ref{Thm_nonrandom}. For \edit{a} general $\alpha_n$, this transition threshold depends both on $\ell_n$ and $k_n$.
\edit{However, when $\liminf_{n\to\infty} \alpha_n >0$, then $k_n = \Theta(\ell_n)$ and the order of growth of $k_n \log \ell_n$ coincides with that of both $ k_n \log k_n$ and $\ell_n \log \ell_n$. It follows that, in this case, the transition thresholds for both $ k_n$ and $\ell_n$ are also at $n /\log n$, since $k_n \log k_n = \Theta(n)$ is equivalent to $k_n = \Theta(n/\log n)$.
}
When $\alpha_n \to 0$, the orders of growth of $k_n$ and $\ell_n$ are different and the transition threshold for $\ell_n$ is in general larger than $n / \log n$.
\edit{For example, when $\ell_n = n$ and $\alpha_n = \frac{1}{\sqrt{n}}$, then $k_n \log \ell_n =\sqrt{n} \log n = o(n)$, so all users can communicate without interference.} Thus, random user-activity enables interference-free communication at an order of growth above the limit $n/ \log n$. Similarly, when $\alpha_n \to 0$, the transition threshold for $k_n$ \edit{may be smaller than $n/ \log n$, even though this is only the case if $\ell_n$ is superpolynomial in $n$. For example, when $\ell_n=2^n$ and $\alpha_n=\frac{\sqrt{n}}{2^n \log n }$, then $k_n=\frac{\sqrt{n}}{\log n}=o(n/\log n)$ and $k_n\log \ell_n=\frac{n^{3/2}}{\log n}=\omega(n)$, so no positive rate per unit-energy is feasible.} This implies that treating a random MnAC with $\ell_n$ users as a non-random MnAC with $k_n$ users may be overly optimistic, since it suggests that interference-free communication is feasible at orders of growth of $k_n$ where actually no positive rate per unit-energy is feasible.
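The two examples above can be checked numerically. The sketch below (logs base 2; the base does not affect the order of growth) evaluates the normalized quantity $k_n \log \ell_n / n$ for both sequences:

```python
import math

# Numeric illustration of the two example sequences from the text.
def ratio_interference_free(n):
    # ell_n = n, alpha_n = 1/sqrt(n)  =>  k_n = sqrt(n)
    k_n = math.sqrt(n)
    return k_n * math.log2(n) / n        # -> 0, i.e. k_n log ell_n = o(n)

def ratio_zero_rate(n):
    # ell_n = 2^n, alpha_n = sqrt(n)/(2^n log n)  =>  k_n = sqrt(n)/log n
    k_n = math.sqrt(n) / math.log2(n)
    log_ell_n = n                        # log2(2^n) = n
    return k_n * log_ell_n / n           # -> infinity, i.e. omega(n)
```

The first ratio equals $\log_2 n / \sqrt{n}$ and vanishes, while the second equals $\sqrt{n}/\log_2 n$ and diverges, matching Parts~1) and~2) of the theorem.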
In the proof of Part~\ref{Thm_Infeasble_achv}) of Theorem~\ref{Thm_nonrandom}, we have shown that, when \mbox{$k_n =o(n/ \log n)$} and \mbox{$\alpha_n =1$}, an orthogonal-access scheme achieves the capacity per unit-energy. It turns out that this is not necessarily the case anymore when $\alpha_n \to 0$, as we show in the following theorem.
\begin{theorem}
\label{Thm_ortho_accs}
Assume that $k_n = \Omega(1)$. The largest rate per unit-energy $\dot{C}_{\bot}$ achievable with an orthogonal-access scheme satisfies the following:
\begin{enumerate}[1)]
\item If $ \ell_n = o(n/ \log n)$, then $\dot{C}_{\bot} = \frac{\log e }{N_0}$. \label{Thm_ortho_accs_achv}
\item If $ \ell_n = \omega(n/ \log n)$, then $\dot{C}_{\bot} =0$. \label{Thm_ortho_accs_conv}
\edit{ \item
If $\ell_n = \Theta( \frac{n}{\log n})$, then $0 < \dot{C}_{\bot} < \frac{\log e}{N_0}$.
\label{Thm_exact_ortho}}
\end{enumerate}
\end{theorem}
\begin{proof}
See Subsection~\ref{Sec_ortho_access}.
\end{proof}
Observe that there is again a sharp transition between the orders of growth of $\ell_n$ where interference-free communication is feasible and orders of growth where no positive rate per unit-energy is feasible. In contrast to the optimal transmission scheme, the transition threshold for the orthogonal-access schemes is located at $n/ \log n$, irrespective of the behavior of $\alpha_n$. Thus, by using an orthogonal-access scheme, we treat the random MnAC as if it were a non-random MnAC. This also implies that there are orders of growth of $\ell_n$ and $k_n$ where non-orthogonal-access schemes are necessary to achieve the capacity per unit-energy.
Next we present our results on the behavior of the capacity per unit-energy for APE. To this end, we first note that, if $\alpha_n \to 0$ as $n \to \infty$, then $\text{Pr} \{W_i =0\} \to 1$ for all $i=1,\ldots,\ell_n$.
Consequently, if each user employs a code with $M_n =2$ and $E_n =0$ for all $n$, and if the decoder always declares that all users are inactive, then the APE vanishes as $n \to \infty$. This implies that $\dot{C}^A = \infty$.
In the following, we avoid this trivial case and assume that $\alpha_n$ is bounded away from zero.
For $\alpha_n=1$ (non-random-access case) and APE, we showed in Theorem~\ref{Thm_capac_APE} that if the number of users grows sublinearly in $n$, then each user can achieve the single-user capacity per unit-energy, and if the order of growth is linear or superlinear, then the capacity per unit-energy is zero. Perhaps not surprisingly, the same result holds in the random-access case since, when $\alpha_n$ is bounded away from zero, $k_n$ is of the same order as $\ell_n$. We have the following theorem.
\begin{theorem}
\label{Thm_capac_PUPE}
If \edit{$\liminf_{n\to\infty} \alpha_n > 0$}, then $\dot{C}^A$ has the following behavior:
\begin{enumerate}
\item If $\ell_n = o(n)$, then $\dot{C}^A = \frac{\log e}{N_0}$. \label{Thm__avg_achv_part}
\item If $\ell_n = \Omega(n)$, then $\dot{C}^A =0$. \label{Thm__avg_conv_part}
\end{enumerate}
\end{theorem}
\begin{proof}
See Subsection~\ref{sec_average}.
\end{proof}
\subsection{Proof of Theorem~\ref{Thm_random_JPE}}
\label{Sec_proof_random}
We first give an outline of the proof. The achievability scheme to show Part~\ref{Thm_achv_part}) is a non-orthogonal-access scheme where the codewords of all users are of length $n$ and the codebooks of different users may be different. In each codebook, the codewords consist of two parts. The first $n''$ symbols are a signature part that is used to convey to the receiver that the user is active. The remaining $n-n''$ symbols are used to send the message. The decoder follows a two-step decoding process. First, it determines which users are active, then it decodes the messages of all users that are estimated as active. For such a two-step decoding process, we analyze two types of errors: the detection error and the decoding error. We show that, if $k_n \log \ell_n$ is sublinear in $n$, then the probability of detection error and the probability of decoding error tend to zero as $n \to \infty$. The proof of Part~\ref{Thm_conv_part}) follows along similar lines as that of Part~\ref{Thm_Infeasble_convrs}) of Theorem~\ref{Thm_nonrandom}. We first show that the probability of error vanishes only if the energy $E_n$ scales at least \edit{logarithmically} in the total number of users, i.e., $E_n = \Omega(\log \ell_n)$. We \edit{then} show that a positive rate per unit-energy is achievable only if the total power of the active users, given by $k_n E_n/n$, is bounded as $n \to \infty$. \edit{The proof of Part~\ref{Thm_conv_part}) concludes by noting that, if $k_n\log \ell_n$ is superlinear in $n$, then there is no $E_n$ that can simultaneously satisfy these two conditions. Part~\ref{Thm_exact_order}) follows by revisiting the proofs of Parts~\ref{Thm_achv_part}) and \ref{Thm_conv_part}) for the case where $k_n\log \ell_n=\Theta(n)$.}
\subsubsection{Proof of Part~\ref{Thm_achv_part})}
We use an achievability scheme with a decoding process consisting of two steps. First, the receiver determines which users are active. \edit{It then fixes an arbitrary positive integer $\xi$, based on which it decides whether it will decode the messages of all active users, or whether it will declare an error. Specifically, if the number of estimated active users is less than or equal to $\xi k_n$, then the receiver decodes the messages of all active users.} If the number of estimated active users is greater than $\xi k_n$, then it declares an error.\footnote{\edit{The threshold $\xi k_n$ is never exceeded when $\alpha_n$ is bounded away from zero. Indeed, the number of estimated active users is a random variable taking value in $\{0,\ldots,\ell_n\}$. When $\alpha_n$ is bounded away from zero, $\ell_n = k_n/\alpha_n$ is bounded by $\xi k_n$ for some positive integer $\xi$. Hence, in this case we can find a threshold $\xi$ such that the receiver will never have to declare an error.}} \edit{By the union bound,} the total error probability of this scheme can be upper-bounded by
\begin{align*}
P(\mbox{$\cal{D }$}) + \sum_{k'_n=1}^{\xi k_n}\text{Pr} \{K'_n=k_n'\}P_m(k'_n)+ \text{Pr} \{K'_n>\xi k_n\}
\end{align*}
where $K'_n$ \edit{is a random variable describing the number} of active users, $P(\mbox{$\cal{D }$})$ is the probability of a detection error, and $P_m(k'_n)$ is the probability of a decoding error when the receiver has correctly detected that there are $k'_n$ active users. In the following, we show that these probabilities vanish as $n\to\infty$ for any fixed, positive integer $\xi$. Furthermore, by Markov's inequality, we have that $\text{Pr} \{K'_n>\xi k_n\}\leq 1/\xi$. It thus follows that the total probability of error vanishes as we let first $n\to\infty$ and then $\xi\to\infty$.
To enable user detection at the receiver, out of $n$ channel uses, each user uses the first $n''$ channel uses to send its signature and $n'=n -n''$ channel uses for sending the message. The
signature uses energy $E_n''$ out of $E_n$, while the energy used for sending the message is given by $E_n' = E_n -E_n''$.
Let ${\bf s}_i$ denote the signature of user $i$ and $\tilde{{\bf x}}_i(w_i)$ denote the codeword of length $n'$ for sending the message $w_i$, where $w_i =1,\ldots, M_n$. Then, the codeword ${\bf x}_i(w_i)$ is given by the concatenation of ${\bf s}_i$ and $\tilde{{\bf x}}_i(w_i)$, denoted as
\begin{align*}
{\bf x}_i(w_i) = ({\bf s}_i, \tilde{{\bf x}}_i(w_i)).
\end{align*}
For an arbitrary $0 < b < 1$, we let
\begin{equation}
n'' = bn, \label{Eq_channel_choice}
\end{equation}
\label{Page_energy}
\begin{equation}
\label{Eq_energy_choice}
E_n'' = bE_n, \quad E_n = c_n \ln \ell_n
\end{equation}
with $c_n = \ln (\frac{n}{k_n\ln \ell_n})$.\footnote{\edit{In our scheme, a fraction of the total energy must be assigned to the signature part in order to ensure that the detection error probability vanishes as $n\to\infty$. However, this incurs a loss in rate per unit-energy, so this fraction will be made arbitrarily small at the end of the proof. Alternatively, one could consider a sequence $\{b_n\}$ that satisfies $b_n\to 0$ and $b_n c_n \to\infty$ as $n\to\infty$.}}
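The choices \eqref{Eq_channel_choice} and \eqref{Eq_energy_choice} can be sketched as follows (illustrative regime $k_n=\sqrt{n}$, $\ell_n=n$, so that $k_n\log\ell_n=o(n)$; the total power $k_nE_n/n$ then vanishes, a fact used later in the proof):

```python
import math

# Sketch of the channel-use and energy split in (Eq_channel_choice) and
# (Eq_energy_choice), for the hypothetical sublinear regime
# k_n = sqrt(n), ell_n = n, with signature fraction b = 0.1.
def split_parameters(n, b=0.1):
    k_n, ell_n = math.sqrt(n), n
    c_n = math.log(n / (k_n * math.log(ell_n)))   # c_n = ln(n / (k_n ln ell_n))
    E_n = c_n * math.log(ell_n)                   # E_n = c_n ln ell_n
    n_sig = b * n                                 # n''   = b n   (signature)
    E_sig = b * E_n                               # E_n'' = b E_n (signature)
    return dict(n_sig=n_sig, n_msg=n - n_sig,
                E_sig=E_sig, E_msg=E_n - E_sig,
                total_power=k_n * E_n / n)        # vanishes as n grows
```

Since $k_n \ln \ell_n = o(n)$ here, $c_n \to \infty$ while $k_n E_n / n = x_n \ln(1/x_n)$ with $x_n = k_n \ln \ell_n / n \to 0$, so the total power indeed vanishes.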
Based on the first $n''$ received symbols, the receiver detects which users are active. We need the following lemma to show that the detection error probability vanishes as $n \to \infty$.
\edit{
\begin{lemma}
\label{Lem_usr_detect}
Assume that $k_n \log \ell_n = O(n)$, and let $E_n = c_n \ln \ell_n$, where
\begin{equation*}
c_n =
\begin{cases}
\ln\left(\frac{n}{k_n \log \ell_n}\right), &\text{ if } k_n \log \ell_n = o(n)\\
c', &\text{ if } k_n \log \ell_n = \Theta(n)
\end{cases}
\end{equation*}
for some positive constant $c'$ that is independent of $n$. If $c'$ is sufficiently large, then there exist signatures ${\bf s}_i, i=1, \ldots, \ell_n$ with $n''=bn$ channel uses and energy $E_n''=bE_n$ such that $P(\mbox{$\cal{D }$})$ vanishes as $n \to \infty$.
\end{lemma}
}
\begin{proof}
\edit{The proof of Lemma~\ref{Lem_usr_detect} follows along similar lines as that of~\cite[Th.~2]{ChenCG17}. However, there are some differences in the settings considered. Here, the goal is to achieve user detection with minimum energy, whereas in \cite{ChenCG17} the goal is to achieve user detection with the minimum number of channel uses. Furthermore, the energy we assign to the signature part is proportional to the total energy $E_n$, cf.~\eqref{Eq_energy_choice}, whereas in \cite{ChenCG17} the energy assigned to the signature part is proportional to the number of channel uses. These differences have the positive effect that, in our proof, the condition \cite[Eq.~(19)]{ChenCG17}, namely that
\begin{equation*}
\lim_{n\to\infty} \ell_n e^{-\delta k_n} =0
\end{equation*}
for all $\delta>0$, is not necessary. For a full proof of Lemma~\ref{Lem_usr_detect}, see Appendix~\ref{Sec_Lem_detct_proof}.
}
\end{proof}
We next use the following lemma to show that $P_m(k'_n)$ vanishes as $n \to \infty$ uniformly in $k'_n \in \mbox{$\cal{K}$}_n$, where $\mbox{$\cal{K}$}_n \triangleq \{1, \ldots, \xi k_n\}$.
\begin{lemma}
\label{Lem_err_expnt}
Let $A_{k'_n} \triangleq \frac{1}{k_n'} \sum_{i=1}^{k_n'} \I{ \hat{W}_i \neq W_i}$ and \mbox{$\mbox{$\cal{A}$}_{k'_n} \triangleq \{1/k_n', \ldots,1 \}$}, where $\I{\cdot}$ denotes the indicator function. Then, for any arbitrary $0<\rho \leq 1$, we have
\begin{equation}
\textnormal{Pr}\{A_{k'_n} = a\} \leq \left(\frac{1}{\mu}\right)^{2k'_n} {k_n' \choose a k_n'} M_n^{a k_n' \rho} e^{-n'E_{0,k_n'}(a, \rho,n)}, \quad a\in\mbox{$\cal{A}$}_{k'_n}\label{Eq__random_prob_err}
\end{equation}
where
\begin{align}
E_{0,k_n'}(a, \rho,n) \triangleq \frac{\rho}{2} \ln \left(1+\frac{2 a k_n' E_n'}{n'(\rho +1)N_0}\right) \label{Eq_random_expnt}
\end{align}
and
\begin{align}
\mu & \triangleq \int \I{ \|{\bf v}\|^2 \leq E_n'} \prod_{i=1}^{n'} \tilde{q}(v_i) d {\bf v} \label{Eq_def_mu}
\end{align}
is a normalizing constant. In~\eqref{Eq_def_mu}, $\tilde{q}(\cdot)$ denotes the probability density function of a zero-mean Gaussian random variable with variance $E_n'/(2n')$ and ${\bf v} = (v_1,v_2,\ldots, v_{n'})$.
\end{lemma}
\begin{proof}
The upper bound in~\eqref{Eq__random_prob_err} without the factor $(1/\mu)^{2k'_n}$ can be obtained using random coding with i.i.d. Gaussian inputs~\cite[Th.~2]{Gallager85}. However, while i.i.d. Gaussian codebooks satisfy the energy constraint on average (averaged over all codewords), there may be some codewords in the codebook that violate it.
We therefore need to adapt the proof of~\cite[Th.~2]{Gallager85} as follows. Let
\begin{align*}
\tilde{{\bf q}}({\bf v}) & = \prod_{i=1}^{n'} \tilde{q}(v_i).
\end{align*}
For codewords $\tilde{{\bf X}}_i,i=1,\ldots,k'_n$ which are distributed according to $\tilde{{\bf q}}(\cdot)$, the probability $\text{Pr} (A_{k'_n}=a)$ can be upper-bounded as~\cite[Th.~2]{Gallager85}
\begin{align}
\text{Pr} (A_{k'_n} = a) & \leq {k_n' \choose a k_n'} M_n^{a k_n' \rho} \int \tilde{{\bf q}}(\tilde{{\bf x}}_{ak'_n+1}) \cdots \tilde{{\bf q}}(\tilde{{\bf x}}_{k'_n}) \; G ^{1+\rho} \; d\tilde{{\bf x}}_{ak'_n+1} \cdots d\tilde{{\bf x}}_{k'_n} \;d\tilde{{\bf y}} \label{eq_a1}
\end{align}
where
\begin{align*}
G & = \int \tilde{{\bf q}}(\tilde{{\bf x}}_1) \cdots \tilde{{\bf q}}(\tilde{{\bf x}}_{ak'_n}) \left( p(\tilde{{\bf y}} \mid \tilde{{\bf x}}_{1},\cdots, \tilde{{\bf x}}_{k'_n})\right) ^{\frac{1}{1+\rho}} d\tilde{{\bf x}}_{1} \cdots d\tilde{{\bf x}}_{ak'_n} \notag.
\end{align*}
Using the fact that the channel is memoryless, the RHS of~\eqref{Eq__random_prob_err} without the factor $(1/\mu)^{2k'_n}$ follows from~\eqref{eq_a1}. The case of $k'_n =2$ was analyzed in~\cite[Eq.~(2.33)]{Gallager85}.
Now suppose that all codewords are generated according to the distribution
\begin{align*}
{\bf q}({\bf v}) & = \frac{1}{\mu} \I{ \|{\bf v}\|^2 \leq E_n'} \tilde{{\bf q}}({\bf v}).
\end{align*}
Clearly, such codewords satisfy the energy constraint $E'_n$ with probability one. Furthermore,
\begin{align}
{\bf q}({\bf v}) & \leq \frac{1}{\mu} \tilde{{\bf q}}({\bf v}). \label{Eq_prob_signt_uppr1}
\end{align}
By replacing $\tilde{{\bf q}}(\cdot)$ in~\eqref{eq_a1} by ${\bf q}(\cdot)$, and upper-bounding ${\bf q}(\cdot)$ by~\eqref{Eq_prob_signt_uppr1}, we obtain that
\begin{align}
\textnormal{Pr}\{A_{k'_n} = a\} \leq \left(\frac{1}{\mu}\right)^{(1+\rho)(ak'_n)}\left(\frac{1}{\mu}\right)^{k'_n - ak'_n} {k_n' \choose a k_n'} M_n^{a k_n' \rho} e^{-n'E_{0,k_n'}(a, \rho,n)}, \quad a\in\mbox{$\cal{A}$}_{k'_n}. \label{Eq_Prob_An_uppr}
\end{align}
From the definition of $\mu$, we have that $0 < \mu \leq 1$.
Since we further have $\rho\leq 1$ and $a\leq 1$, it follows that
$(1/\mu)^{(1+\rho)(ak'_n)} \leq (1/\mu)^{a k_n'+k_n'} $. Using this bound in~\eqref{Eq_Prob_An_uppr}, we obtain~\eqref{Eq__random_prob_err}.
\end{proof}
Next, we show that $\left(\frac{1}{\mu}\right)^{2k'_n} \to 1$ as $n \to \infty$ uniformly in $k'_n \in \mbox{$\cal{K}$}_n$. Let ${\bf H}_1$ be a Gaussian vector which is distributed according to $\tilde{{\bf q}}(\cdot)$. Then, by the definition of $\mu$, we have
\begin{align}
\mu & = 1 - \text{Pr} \left(\|{\bf H}_1\|_2^2 > E_n'\right) \notag
\end{align}
so $(1/\mu)^{2 k'_n} \geq 1$.
Let us consider ${\bf H}_0 \triangleq \frac{2 n'}{E_n'} \|{\bf H}_1\|_2^2$, which has a central chi-square distribution with $n'$ degrees of freedom. Then,
\begin{align*}
\text{Pr} \left(\|{\bf H}_1\|_2^2 > E_n'\right) & = \text{Pr} ({\bf H}_0 > 2 n').
\end{align*}
So, from
the Chernoff bound we obtain that
\begin{align*}
\text{Pr} ({\bf H}_0 > a) & \leq \frac{E(e^{t{\bf H}_0})}{e^{ta}} \\
& = \frac{(1-2t)^{-n'/2}}{e^{ta}}
\end{align*}
for every $t > 0$. By choosing $a= 2 n'$ and $t= \frac{1}{4}$, this yields
\begin{align}
\text{Pr} ({\bf H}_0 > 2 n') & \leq \frac{ \left(\frac{1}{2}\right)^{-n'/2}}{ \exp(n'/2) } \notag \\
& = \exp \left[-\frac{n'}{2} \tau \right] \notag
\end{align}
where $\tau \triangleq \left( 1 - \ln 2 \right)$ is strictly positive. Thus,
\begin{align}
1 & \leq \left(\frac{1}{\mu}\right)^{2k'_n} \notag \\
& \leq \left(\frac{1}{\mu}\right)^{2\xi k_n} \notag \\
& \leq \left(1 - \exp \left[-\frac{n'}{2} \tau \right]\right)^{-(2\xi k_n)}, \quad k'_n \in \mbox{$\cal{K}$}_n. \label{Eq_mu_uppr2}
\end{align}
By assumption, we have that $k_n=o(n)$ and $n' = \Theta(n)$.
Since for any two non-negative sequences $\{a_n\}$ and $\{b_n\}$ satisfying $a_n\to 0$ and $a_nb_n \to 0$ as $n \to \infty$, it holds that $(1-a_n)^{-b_n} \to 1$ as $n \to \infty$, we obtain that the RHS of~\eqref{Eq_mu_uppr2} tends to one as $n \to \infty$ uniformly in $k'_n\in \mbox{$\cal{K}$}_n$. So there exists a positive constant $n_0$ that is independent of $k'_n$ and satisfies
\begin{align}
\left(\frac{1}{\mu}\right)^{2k'_n} \leq 2, \quad k'_n \in \mbox{$\cal{K}$}_n, n \geq n_0. \label{Eq_mu_uppr1}
\end{align}
The probability of error $P_m(k'_n)$ can be written as
\begin{equation}
P_m(k'_n)= \sum\limits_{a\in\mbox{$\cal{A}$}_{k'_n}} \textnormal{Pr}\{A_{k'_n} = a\}. \label{Eq_prob_err_def}
\end{equation}
From Lemma~\ref{Lem_err_expnt} and~\eqref{Eq_mu_uppr1}, we obtain
\begin{align}
\textnormal{Pr}\{A_{k'_n} = a\} & \leq 2 {k_n' \choose a k_n'} M_n^{a k_n' \rho} \exp[-n'E_{0,k_n'}(a, \rho,n)] \notag\\
& \leq 2 \exp\left[ k_n'H_2(a) + a \rho k_n' \log M_n - n'E_{0,k_n'}(a, \rho,n) \right]\notag \\
& = 2 \exp \left[-E_n'f_{k'_n}(a, \rho,n)\right], \quad n \geq n_0 \label{eq:why_is_this_unnumbered}
\end{align}
where \edit{$H_2(\cdot)$ denotes the binary entropy function, and}
\begin{align}
f_{k'_n}(a, \rho,n) \triangleq \frac{n'E_{0,k_n'}(a, \rho,n)}{E_n'} - \frac{a \rho k_n' \log M_n}{E_n'} - \frac{k_n' H_2(a)}{E_n'}. \label{Eq_fn_def}
\end{align}
We next show that, for sufficiently large $n$, we have
\begin{align}
\textnormal{Pr}\{A_{k'_n} = a\} \leq 2 \exp \left[-E_n'f_{\xi k_n}(1/(\xi k_n), \rho,n)\right], \quad a\in\mbox{$\cal{A}$}_{k'_n}, k'_n \in \mbox{$\cal{K}$}_n. \label{Eq_err_upp_bnd}
\end{align}
To this end, we lower-bound
\begin{align}
\frac{d f_{k'_n}(a, \rho,n)}{da} & \geq \rho k'_n \left[ \frac{1}{1+\frac{2k'_nE'_n}{n'(\rho+1)N_0}} \frac{1}{(1+\rho)N_0} - \frac{\dot{R}}{(1-b)\log e} \right]\nonumber \\
& \geq \rho \left[ \frac{1}{1+\frac{2 \xi k_nE'_n}{n'(\rho+1)N_0}} \frac{1}{(1+\rho)N_0} - \frac{\dot{R}}{(1-b)\log e} \right] \label{Eq_new16}
\end{align}
by using simple algebra. This implies that
for any fixed value of $\rho$ and our choices of $E_n'$ and \mbox{$\dot{R} = \frac{(1-b)\log e}{(1+\rho)N_0} - \delta$} (for some arbitrary $0<\delta<\frac{(1-b)\log e}{(1+\rho)N_0}$),
\begin{equation*}
\liminf_{n\to\infty} \min_{k'_n \in \mbox{$\cal{K}$}_n} \min_{a \in \mbox{$\cal{A}$}_{k'_n}} \frac{d f_{k'_n}(a, \rho,n)}{da} > 0.
\end{equation*}
Indeed, \edit{the RHS of \eqref{Eq_new16} is independent of $a$ and $k'_n$ and tends to the strictly positive constant $\frac{\rho\delta}{(1-b)\log e}$ as $n\to\infty$}, since $\frac{k_n E_n'}{n'} \to 0$ by our choice of $E_n'$ and because $k_n = o(n / \log n)$, which follows from $k_n \log \ell_n = o(n)$ since $k_n \leq \ell_n$. \edit{Thus, for sufficiently large $n$ and a given $\rho$, the function $a\mapsto f_{k'_n}(a, \rho,n)$ is monotonically increasing on $\mbox{$\cal{A}$}_{k'_n}$ for every $k'_n \in \mbox{$\cal{K}$}_n$.}
It follows that there exists a positive constant $n'_0$ that is independent of $k'_n$ and satisfies
\begin{equation*}
\min_{a \in \mbox{$\cal{A}$}_{k'_n}} f_{k'_n}(a, \rho,n) = f_{ k'_n}(1/k'_n, \rho,n), \quad k'_n \in \mbox{$\cal{K}$}_n, n \geq n'_0.
\end{equation*}
It further follows from the definition of $f_{k'_n}(a, \rho,n) $ in \eqref{Eq_fn_def} that, for $a = 1/k'_n$ and a given $\rho$, $f_{ k'_n}(a, \rho,n)$ is decreasing in $k'_n$, since in this case the first two terms on the RHS of~\eqref{Eq_fn_def} are independent of $k'_n$ and the third term is increasing in $k'_n$.
Hence, we can further lower-bound
\begin{equation*}
\min_{a \in \mbox{$\cal{A}$}_{k'_n}} f_{k'_n}(a, \rho,n) \geq f_{\xi k_n}(1/(\xi k_n), \rho,n), \quad k'_n \in \mbox{$\cal{K}$}_n, n \geq n'_0.
\end{equation*}
Next, we show that, for our choice of $E_n'$ and \mbox{$\dot{R}$}, we have
\begin{equation}
\label{eq:lim_pos}
\liminf_{n \rightarrow \infty} f_{\xi k_n}(1/(\xi k_n), \rho,n) >0.
\end{equation}
Let
\begin{IEEEeqnarray}{rCl}
i_n(\rho) & \triangleq & \frac{n' E_{0,\xi k_n}(1/(\xi k_n),\rho,n)}{E_n'}\label{Eq_new17} \\
j(\rho) & \triangleq & \frac{\rho \dot{R}}{(1-b)\log e} \label{Eq_new18}\\
h_n(1/(\xi k_n)) & \triangleq & \frac{\xi k_n H_2(1/(\xi k_n))}{E_n'}. \label{Eq_new19}
\end{IEEEeqnarray}
Note that $\frac{h_n(1/(\xi k_n))}{j(\rho)}$ vanishes as $n \to \infty$ for our choice of $E_n'$.
Consequently,
\begin{IEEEeqnarray*}{lCl}
\liminf_{n \rightarrow \infty} f_{\xi k_n}(1/(\xi k_n), \rho,n)
& = & j(\rho) \biggl\{\liminf_{n\to\infty} \frac{i_n(\rho)}{j(\rho)} - 1 \biggr\}.
\end{IEEEeqnarray*}
The term $j(\rho) =\rho \dot{R}/((1-b)\log e)$ is bounded away from zero for our choice of $\dot{R}$ and $\delta < \frac{(1-b)\log e}{(1+\rho)N_0}$. Furthermore, since $E_n'/n' \to 0$, we get
\begin{equation}
\label{eq:we_get_this}
\lim_{n\to\infty} \frac{i_n(\rho)}{j(\rho)} = \frac{(1-b)\log e}{(1+\rho)N_0 \dot{R}}
\end{equation}
which is strictly larger than $1$ for our choice of $\dot{R}$. So, \eqref{eq:lim_pos} follows.
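The limit \eqref{eq:we_get_this} can be checked numerically. The sketch below (hypothetical values $N_0=1$, $b=0.1$, $\rho=0.5$, $\delta=0.1$; $\log$ taken base 2) evaluates $i_n(\rho)/j(\rho)$ for a large blocklength with small $E_n'/n'$ and confirms that the ratio exceeds $1$ for $\dot R = \frac{(1-b)\log e}{(1+\rho)N_0}-\delta$:

```python
import math

# Numeric check of (eq:we_get_this) with hypothetical parameters.
# At a = 1/(xi k_n), the exponent reduces to
#   E_0 = (rho/2) ln(1 + 2 E_n'/(n'(1+rho) N0)),
# so i_n(rho) = (n'/E_n') * E_0 and j(rho) = rho * Rdot / ((1-b) log2 e).
def ratio_i_over_j(n_prime, E_prime, rho, N0, b, Rdot):
    log2e = math.log2(math.e)
    i_n = (n_prime / E_prime) * (rho / 2.0) \
          * math.log(1.0 + 2.0 * E_prime / (n_prime * (1.0 + rho) * N0))
    j = rho * Rdot / ((1.0 - b) * log2e)
    return i_n / j

N0, b, rho, delta = 1.0, 0.1, 0.5, 0.1
Rdot = (1.0 - b) * math.log2(math.e) / ((1.0 + rho) * N0) - delta
```

As $E_n'/n' \to 0$, $\ln(1+x)/x \to 1$ and the ratio tends to $\frac{(1-b)\log e}{(1+\rho)N_0\dot R}$, which is strictly above $1$ by the choice of $\dot R$.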
We conclude that there exist two positive constants $\gamma$ and $n''_0 \geq \max(n_0,n_0')$ that are independent of $k'_n$ and satisfy $f_{k'_n}(a, \rho,n) \geq \gamma $ for $a\in \mbox{$\cal{A}$}_{k'_n}$, $k'_n\in \mbox{$\cal{K}$}_n$, and $n \geq n''_0$.
Consequently, \edit{it follows from \eqref{eq:why_is_this_unnumbered} that}, for $n \geq n''_0$,
\begin{align}
\textnormal{Pr}\{A_{k'_n} = a\} \leq 2 e^{-E_n'\gamma}, \quad a \in \mbox{$\cal{A}$}_{k'_n}, k'_n \in \mbox{$\cal{K}$}_n. \label{Eq_type_uppr}
\end{align}
Since $|\mbox{$\cal{A}$}_{k'_n}| = k_n'$, \eqref{Eq_prob_err_def} and \eqref{Eq_type_uppr} yield that
\begin{align}
P_m(k'_n)\leq 2 k_n' e^{-E_n'\gamma}, \quad k'_n \in \mbox{$\cal{K}$}_n, n \geq n''_0. \notag
\end{align}
Further upper-bounding $k'_n \leq \xi k_n$, this implies that
\begin{align}
\sum_{k'_n=1}^{\xi k_n}\text{Pr} \{K'_n=k_n'\}P_m(k'_n)& \leq 2 \xi k_n e^{-E_n'\gamma}, \quad n \geq n''_0. \label{Eq_sum_prob_uppr}
\end{align}
Since $E_n' = (1-b)c_n \ln \ell_n$ and $k_n = O(\ell_n)$, it follows that the RHS of~\eqref{Eq_sum_prob_uppr} tends to 0 as $n \to \infty$ for our choice of \mbox{$\dot{R} = \frac{(1-b)\log e}{(1+\rho)N_0} - \delta$}.
Since $\rho,\delta,$ and $b$ are arbitrary, any rate $\dot{R} < \frac{\log e}{N_0}$ is thus achievable. This proves Part~\ref{Thm_achv_part}) of Theorem~\ref{Thm_random_JPE}.
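The final limiting step can be illustrated numerically (hypothetical values; $\log$ taken base 2, $N_0=1$): as $b$, $\rho$, and $\delta$ shrink, the achievable rate per unit-energy approaches $\frac{\log e}{N_0}$:

```python
import math

# Achievable rate per unit-energy Rdot = (1-b) log e / ((1+rho) N0) - delta
# from the proof of Part 1); as b, rho, delta -> 0 it approaches log2(e)/N0.
def achievable_rate(b, rho, delta, N0=1.0):
    return (1.0 - b) * math.log2(math.e) / ((1.0 + rho) * N0) - delta
```

For instance, with $b=\rho=\delta=10^{-3}$ the rate is already within one percent of $\log_2(e) \approx 1.4427$ bits per unit energy (for $N_0=1$).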
\subsubsection{Proof of Part~\ref{Thm_conv_part})}
Let $\hat{W}_i$ denote the receiver's estimate of $W_i$, and
denote by ${\bf W}$ and ${\bf \hat{W}}$ the vectors $(W_1,\ldots, W_{\ell_n})$ and $(\hat{W_1},\ldots,\hat{W}_{\ell_n})$, respectively.
The messages $W_1,\ldots, W_{\ell_n}$ are independent, so it follows from~\eqref{Eq_messge_def} that
\begin{align*}
H({\bf W}) = \ell_n H(W_1) = \ell_n \left(H_2(\alpha_n) + \alpha_n \log M_n \right).
\end{align*}
Since $H({\bf W}) = H({\bf W}|{\bf Y})+I({\bf W};{\bf Y})$, we obtain
\begin{align}
\ell_n \left(H_2(\alpha_n) + \alpha_n \log M_n \right) & =H({\bf W}|{\bf Y})+I({\bf W};{\bf Y}). \label{Eq_messge_entrpy}
\end{align}
To bound the RHS of~\eqref{Eq_messge_entrpy}, we use the upper bounds~\cite[Lemma~2]{ChenCG17}
\begin{align}
H({\bf W}|{\bf Y}) \leq & \log 4 + 4 P_{e}^{(n)}\big(k_n \log M_n + k_n + \ell_n H_2(\alpha_n) + \log M_n \big) \label{Eq_messg_cond_entrpy}
\end{align}
and~\cite[Lemma~1]{ChenCG17}
\begin{align}
I({\bf W};{\bf Y}) \leq \frac{n}{2} \log \left(1+\frac{ 2k_nE_n}{nN_0}\right). \label{Eq_mutl_info_uppr}
\end{align}
Using~\eqref{Eq_messg_cond_entrpy} and~\eqref{Eq_mutl_info_uppr} in~\eqref{Eq_messge_entrpy}, rearranging terms, and dividing by $k_nE_n$, yields
\begin{IEEEeqnarray}{lCl}
\left(1-4 P_{e}^{(n)}(1+1/k_n)\right) \dot{R} &\leq & \frac{\log 4}{k_nE_n} + \frac{H_2(\alpha_n)}{\alpha_n E_n} \! \left(4 P_{e}^{(n)} -1\right) \nonumber \\
& & {} + 4 P_{e}^{(n)} (1/E_n + 1/k_n) +\frac{n}{2 k_nE_n} \log \left(1+\frac{ 2k_nE_n}{nN_0}\right). \label{Eq_rate_joint_uppr}
\end{IEEEeqnarray}
We next show that, if $k_n \log \ell_n = \omega(n)$, then the RHS of~\eqref{Eq_rate_joint_uppr} tends to a non-positive value. To this end, we need the following lemma.
\begin{lemma}
\label{Lem_energy_bound}
If $\dot{R} > 0$ \edit{and $\ell_n \geq 5$}, then $P_{e}^{(n)}$ vanishes as $n\to\infty$ only if \mbox{$E_n = \Omega(\log \ell_n)$}.
\end{lemma}
\begin{proof}
\edit{ See Appendix~\ref{Append_prob_lemma}.}
\end{proof}
Part~\ref{Thm_conv_part}) of Theorem~\ref{Thm_random_JPE} follows now by contradiction. Indeed, let us assume that $k_n \log \ell_n = \omega(n)$, $P_{e}^{(n)} \to 0$, and $\dot{R} >0$. \edit{The assumption $k_n\log\ell_n=\omega(n)$ implies that $\ell_n\to\infty$ as $n\to\infty$.} Then, Lemma~\ref{Lem_energy_bound} together with the assumption that $k_n = \Omega(1)$ implies that \edit{$E_n\to\infty$ and $k_n E_n=\omega(n)$.} It follows that the last term on the RHS of~\eqref{Eq_rate_joint_uppr} tends to zero as $n \to \infty$. \edit{Furthermore,} together with the assumption that $k_n = \Omega(1)$,
and since $P_{e}^{(n)}$ tends to zero as $n \to \infty$, this implies that the first and third terms on the RHS of~\eqref{Eq_rate_joint_uppr} vanish as $n \to \infty$. Finally,
$\frac{H_2(\alpha_n)}{\alpha_n E_n}$ is a sequence of non-negative numbers and $(4 P_{e}^{(n)} -1) \to -1$ as $n \to \infty$, so the second term converges to a non-positive value. \edit{Noting that, by the assumption $k_n=\Omega(1)$, the term $(1-4 P_{e}^{(n)}(1+1/k_n))$ tends to one as $P_{e}^{(n)} \to 0$}, we thus obtain from \eqref{Eq_rate_joint_uppr} that $\dot{R}$ tends to a non-positive value as $n \to \infty$. This contradicts the assumption $\dot{R} > 0$, so Part~\ref{Thm_conv_part}) of Theorem~\ref{Thm_random_JPE} follows.
\edit{\subsubsection{Proof of Part~\ref{Thm_exact_order})}
\label{Sec_exact_random}
To show that $\dot{C}>0$ when $k_n\log\ell_n=\Theta(n)$, we use the same achievability scheme and follow the same analysis as in the proof of Part~\ref{Thm_achv_part}) of Theorem~\ref{Thm_random_JPE}. That is, each user uses $n''=b n$ channel uses for sending a signature and $n'=n - n''$ channel uses for sending the message. Furthermore, the decoding process consists of two steps. First, the receiver determines which users are active. If the number of estimated active users is less than or equal to $\xi k_n$, for some arbitrary positive integer $\xi$, then the receiver decodes in a second step the messages of all active users. If the number of estimated active users is greater than $\xi k_n$, then the receiver declares an error. We set $E_n = c' \ln \ell_n$ for some $c'>0$ chosen sufficiently large so that, by Lemma~\ref{Lem_usr_detect}, the probability of a detection error vanishes as $n \to \infty$. We next show that there exists an $\dot{R}>0$ such that the probability of a decoding error also vanishes as $n \to \infty$. To this end, we first argue that, if $k_n \log \ell_n = \Theta(n)$, then we can find an $\dot{R}>0$ such that $f_{k'_n}(a, \rho,n)$ defined in~\eqref{Eq_fn_def} satisfies
\begin{align}
\liminf_{n\to\infty} \min_{k'_n \in \mbox{$\cal{K}$}_n} \min_{a \in \mbox{$\cal{A}$}_{k'_n}} f_{k'_n}(a, \rho,n) & > 0. \label{Eq_new13}
\end{align}
In Part~\ref{Thm_achv_part}) of Theorem~\ref{Thm_random_JPE}, we proved \eqref{Eq_new13} by first showing that there exists a positive constant $n'_0$ such that
\begin{align}
\min_{k'_n \in \mbox{$\cal{K}$}_n} \min_{a \in \mbox{$\cal{A}$}_{k'_n}} f_{k'_n}(a, \rho,n) \geq f_{\xi k_n}(1/(\xi k_n), \rho,n), \quad n \geq n'_0 \label{Eq_new14}
\end{align}
and then
\begin{align}
\liminf_{n \rightarrow \infty} f_{\xi k_n}(1/(\xi k_n), \rho,n) >0.\label{Eq_new15}
\end{align}
We follow the same steps here, too. Since, by assumption, $k_n E'_n = k_n (1-b) c'\ln\ell_n=\Theta(n)$ for every fixed $c'>0$ and $n'=\Theta(n)$, there exist $r_1>0$ and $\tilde{n}_0>0$ such that
\begin{align*}
\frac{k_nE'_n}{n'} & \leq r_1, \quad n\geq \tilde{n}_0.
\end{align*}
It then follows from \eqref{Eq_new16} that, for every
\begin{align}
0< \dot{R} < \frac{ (1-b)\log e}{(\rho+1)N_0+2\xi r_1} \label{Eq_new21}
\end{align}
we have that
\begin{align*}
\liminf_{n\to\infty} \min_{k'_n \in \mbox{$\cal{K}$}_n} \min_{a \in \mbox{$\cal{A}$}_{k'_n}} \frac{d f_{k'_n}(a, \rho,n) }{d a} >0.
\end{align*}
Thus, for sufficiently large $n$ and a given $\rho$, the function $a\mapsto f_{k'_n}(a, \rho,n)$ is monotonically increasing on $\mbox{$\cal{A}$}_{k'_n}$ for every $k'_n \in \mbox{$\cal{K}$}_n$. This gives \eqref{Eq_new14}.
To prove \eqref{Eq_new15}, we write
\begin{equation}
\label{eq:f(bla,bla)}
f_{\xi k_n}(1/(\xi k_n), \rho,n) = j(\rho)\left( \frac{i_n(\rho)}{j(\rho)} - 1 - \frac{h_n(1/(\xi k_n))}{j(\rho)} \right)
\end{equation}
where $i_n(\rho)$, $j(\rho)$, and $h_n(1/(\xi k_n))$ are defined in \eqref{Eq_new17}, \eqref{Eq_new18}, and \eqref{Eq_new19}, respectively. We consider two cases:
\subsubsection*{Case 1---$k_n$ is unbounded} In this case, the assumption $k_n\log\ell_n=\Theta(n)$ implies that $\log\ell_n=o(n)$. Since $E_n'=\Theta(\log \ell_n)$ for every fixed $c'>0$, it follows that $E'_n/n'\to 0$. We thus get \eqref{eq:we_get_this}, namely
\begin{equation*}
\lim_{n \to \infty} \frac{i_n(\rho)}{j(\rho)} = \frac{ (1-b)\log e}{\dot{R}(\rho+1)N_0}.
\end{equation*}
If $\dot{R}$ satisfies \eqref{Eq_new21}, then this is strictly larger than $1$. Furthermore, if $k_n$ is unbounded, then we can make $\frac{h_n(1/(\xi k_n)) }{j(\rho)}$ arbitrarily small by choosing $c'$ sufficiently large. Since $j(\rho)$ is bounded away from zero for every positive $\rho$ and $\dot{R}$, we then obtain \eqref{Eq_new15} from \eqref{eq:f(bla,bla)}.
\subsubsection*{Case 2---$k_n$ is bounded} In this case, $\log \ell_n=\Theta(k_n\log\ell_n)=\Theta(n)$, so for every fixed $c'>0$ we have $E'_n = \Theta(n)$. It follows that we can find $r_2 >0$ and $\tilde{n}'_0$ such that
\begin{equation*}
\frac{2 E'_n}{n'(1+\rho)N_0} \leq r_2, \quad n\geq \tilde{n}'_0.
\end{equation*}
If we choose
\begin{equation}
\dot{R} < \frac{\ln (1+r_2)}{r_2}\frac{(1-b)\log e}{(\rho+1) N_0} \label{Eq_new22}
\end{equation}
then
\begin{align}
\lim_{n \to \infty} \frac{i_n(\rho)}{j(\rho)} >1.
\end{align}
Furthermore, if $k_n$ is bounded, then $h_n(1/(\xi k_n))$ vanishes as $n \to \infty$, since $\xi k_n$ is bounded and $E'_n \to \infty$. Recalling that $j(\rho)$ is bounded away from zero for every positive $\rho$ and $\dot{R}$, we then again obtain \eqref{Eq_new15} from \eqref{eq:f(bla,bla)}.
From \eqref{Eq_new14} and \eqref{Eq_new15}, it follows that, for every positive $\dot{R}$ satisfying both \eqref{Eq_new21} and \eqref{Eq_new22}, there exist two positive constants $\gamma$ and $n_0''\geq \max(n_0,n_0',\tilde{n}_0')$ (where $n_0$ is as in \eqref{eq:why_is_this_unnumbered}) that are independent of $k'_n$ and satisfy $f_{k'_n}(a, \rho,n) \geq \gamma $ for $a\in \mbox{$\cal{A}$}_{k'_n}$, $k'_n\in \mbox{$\cal{K}$}_n$, and $n \geq n''_0$. It follows from \eqref{eq:why_is_this_unnumbered} that
\begin{align}
\sum_{k'_n=1}^{\xi k_n}\text{Pr} \{K'_n=k_n'\}P_m(k'_n) & \leq \xi k_n 2 e^{-E_n'\gamma} \notag \\
& = 2 \xi \exp\left[-E_n'\left(\gamma - \frac{\ln k_n}{E_n'}\right)\right], \quad n \geq n_0''. \label{Eq_new24}
\end{align}
The term
\begin{align*}
\frac{\ln k_n}{E'_n} & = \frac{\ln k_n}{(1-b)c'\ln \ell_n}
\end{align*}
can be made arbitrarily small by choosing $c'$ sufficiently large since $\ln k_n \leq \ln \ell_n$. We thus have that $\frac{\ln k_n}{E'_n} < \gamma$ for sufficiently large $c'$, in which case the RHS of~\eqref{Eq_new24} vanishes as $n \to \infty$. Since, by Markov's inequality, we further have that $\text{Pr} \{K_n'>\xi k_n\}\leq 1/\xi$, we conclude that the probability of a decoding error vanishes as we let first $n\to\infty$ and then $\xi\to\infty$. Consequently, if $k_n \log \ell_n = \Theta(n)$, then $\dot{C}>0$.
To prove that $\dot{C} < \frac{\log e}{N_0}$, we first note that the assumption $k_n \log \ell_n = \Theta(n)$ implies that $\ell_n \to \infty$ as $n \to \infty$. Then, Lemma~\ref{Lem_energy_bound}
shows that $P_e^{(n)}$ vanishes as $n \to \infty$ only if $E_n = \Omega(\log \ell_n)$. This further implies that $P_e^{(n)}\to 0$ only if $k_n E_n = \Omega(n)$. If $k_n E_n = \omega(n)$, then it follows from the proof of Part~\ref{Thm_conv_part}) of Theorem~\ref{Thm_random_JPE} that $\dot{C}=0$. We can thus assume without loss of optimality that $k_n E_n = \Theta(n)$. In this case, by following the arguments given in the proof of
Part~\ref{Thm_conv_part}) of Theorem~\ref{Thm_random_JPE}, we obtain that the first and the third term on the RHS of \eqref{Eq_rate_joint_uppr} vanish as $n \to \infty$. Furthermore, the second term tends to a non-positive value, and the factor $(1-4 P_{e}^{(n)}(1+1/k_n))$ tends to one. It then follows from \eqref{Eq_rate_joint_uppr} that
\begin{equation}
\label{eq:Th7_juhuu}
\dot{R} \leq \limsup_{n\to\infty} \frac{n}{2 k_nE_n} \log \left(1+\frac{ 2k_nE_n}{nN_0}\right).
\end{equation}
Since $k_n E_n = \Theta(n)$, there exist $n_0>0$ and $l_1>0$ such that, for $n \geq n_0$, we have $\frac{k_nE_n}{n} \geq l_1$. By noting that $\frac{\log (1+x)}{x} < \log e$ for every $x >0$, we thus obtain that the RHS of \eqref{eq:Th7_juhuu} is strictly less than $\frac{\log e}{N_0}$ for $n \geq n_0$. Hence $\dot{C}< \frac{\log e}{N_0}$, which concludes the proof of Part~\ref{Thm_exact_order}) of Theorem~\ref{Thm_random_JPE}.
}
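The elementary inequality $\frac{\log(1+x)}{x}<\log e$ for every $x>0$, used in the last step above, can be sanity-checked numerically (a quick sketch, not part of the proof; the grid of test points is arbitrary):

```python
import math

def rate_factor(x: float) -> float:
    """(1/x) * log2(1 + x); the proof compares this against log2(e)."""
    return math.log2(1.0 + x) / x

# log(1+x)/x < log e for every x > 0, with the gap widening as x grows;
# this is what keeps the upper bound on the rate per unit-energy strictly
# below (log e)/N0 once k_n E_n / n is bounded away from zero.
log2_e = math.log2(math.e)
for x in [1e-3, 0.1, 1.0, 10.0, 1e6]:
    assert rate_factor(x) < log2_e
```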
\subsection{Proof of Theorem~\ref{Thm_ortho_accs}}
\label{Sec_ortho_access}
\subsubsection{Proof of Part~\ref{Thm_ortho_accs_achv})}
To prove Part~\ref{Thm_ortho_accs_achv}) of Theorem~\ref{Thm_ortho_accs}, we present a scheme that is similar to the one used in the proof of Part~\ref{Thm_Infeasble_achv}) of Theorem~\ref{Thm_nonrandom}. Specifically, each user is assigned $n/\ell_n$ channel uses, out of which the first one is used for sending a pilot signal and the rest are used for sending the message. Out of the available energy $E_n$, $t E_n$ (for some arbitrary $0 < t < 1$ \edit{to be determined later}) is used for the pilot signal and $(1-t)E_n$ is used for sending the message. Let $\tilde{{\bf x}}(w) $ denote the codeword of length $\frac{n}{\ell_n}-1$ for sending message $w$. Then,
user $i$ sends in his assigned slot the codeword
\begin{align*}
{\bf x}(w_i) = \left(\sqrt{t E_n}, \tilde{{\bf x}}(w_i)\right).
\end{align*}
The receiver first detects from the pilot signal whether user $i$ is active or not. If the user is estimated as active, then the receiver decodes the user's message.
Let $P_{e,i} = \textnormal{Pr}\{\hat{W_i} \neq W_i\}$ denote the probability that user $i$'s message is decoded erroneously.
Since all users follow the same coding scheme, the probability of correct decoding is given by
\begin{align}
P_c^{(n)} = \left(1-P_{e,1}\right)^{\ell_n}. \label{Eq_ortho_corrct}
\end{align}
By employing the transmission scheme that was used to prove Theorem~\ref{Thm_nonrandom}, we get an upper bound on the probability of error $P_{e,1}$ as follows. Let ${\bf Y}_1$ denote the received vector of length $n/\ell_n$ corresponding to user 1 in the orthogonal-access scheme.
From the pilot signal, which is the first symbol $Y_{11} $ of ${\bf Y}_1$, the receiver guesses whether user 1 is active or not. Specifically, the user is estimated as active if $Y_{11} > \frac{\sqrt{tE_n}}{2}$ and as inactive otherwise.
If the user is declared as active, then the receiver decodes the message from the rest of ${\bf Y}_1$.
Let $\text{Pr} ( \hat{W}_1 \neq w |W_1 = w)$ denote the decoding error probability when message $w$, $w=0, \ldots, M_n$, was transmitted.
Then, $P_{e,1}$ is given by
\begin{align}
P_{e,1} & = (1-\alpha_n)\text{Pr} ( \hat{W}_1 \neq 0|W_1=0) + \frac{\alpha_n}{M_n} \sum_{w=1}^{M_n} \text{Pr} ( \hat{W}_1 \neq w |W_1 = w) \notag \\
& \leq \text{Pr} ( \hat{W}_1 \neq 0|W_1=0) + \frac{1}{M_n} \sum_{w=1}^{M_n} \text{Pr} ( \hat{W}_1 \neq w | W_1 = w). \label{Eq_err_prob_uppr}
\end{align}
If $W_1=0$, then an error occurs if $Y_{11} > \frac{\sqrt{tE_n}}{2}$. So, we have
\begin{align}
\text{Pr} ( \hat{W}_1 \neq 0|W_1=0) & = Q\left( \frac{\sqrt{tE_n}}{2} \right). \label{Eq_err_prob_uppr2}
\end{align}
If $w=1,\ldots,M_n$, then an error happens either \edit{by declaring the user as inactive} or by erroneously decoding the message. An active user is declared as inactive if $Y_{11} < \frac{\sqrt{tE_n}}{2}$. So, \edit{by the union bound}
\begin{align}
\frac{1}{M_n} \sum_{w=1}^{M_n} \text{Pr} ( \hat{W}_1 \neq w | W_1 = w)& \leq Q\left( \frac{\sqrt{tE_n}}{2} \right) + \edit{P_m}
\end{align}
\edit{where $P_m$ is the probability that the decoder correctly declares user 1 as active but erroneously decodes its message.} It then follows from~\eqref{Eq_err_prob_uppr} and \eqref{Eq_err_prob_uppr2} that
\begin{align}
P_{e,1} & \leq 2 Q\left( \frac{\sqrt{tE_n}}{2} \right)+ \edit{P_m}. \label{Eq_singl_usr_uppr}
\end{align}
By choosing $E_n=c_n \ln n$ with $c_n=\ln\left(\frac{n}{\ln n}\right)$, we can upper-bound \edit{$P_m$} by following the steps that led to~\eqref{Eq_prob_corrct}. Thus, \edit{for every $\dot{R}<\frac{\log e}{N_0}$, there exists a sufficiently large $n_0$ and a $0<t<1$ such that}
\begin{align}
P_m & \leq \frac{1}{n^2}, \quad n\geq n_0. \label{Eq_prob_decd}
\end{align}
Furthermore, for the above choice of $E_n$, there exists a sufficiently large $n_0'$ such that
\begin{align}
2 Q\left( \frac{\sqrt{tE_n}}{2} \right) & \leq \frac{1}{n^2}, \quad n\geq n'_0. \label{Eq_detect_prob_uppr}
\end{align}
Using~\eqref{Eq_prob_decd} and~\eqref{Eq_detect_prob_uppr}
in~\eqref{Eq_singl_usr_uppr}, we then obtain from~\eqref{Eq_ortho_corrct} that
\begin{align}
P_c^{(n)} & \geq \left(1-\frac{2}{n^{2}}\right)^{\ell_n} \notag \\
& \geq \left(1-\frac{2}{n^{2}}\right)^{\frac{n}{\log n}}, \quad \edit{n \geq \max(n_0,n'_0)}\notag
\end{align}
which tends to one as $n \to \infty$. This proves Part~\ref{Thm_ortho_accs_achv}) of Theorem~\ref{Thm_ortho_accs}.
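The closing limit can be illustrated numerically. A sketch (assuming base-2 logarithms in $n/\log n$, which only rescales the exponent) of the final lower bound on the probability of correct decoding:

```python
import math

def correct_decoding_lower_bound(n: int) -> float:
    """Evaluate (1 - 2/n^2)^(n / log2 n), the final lower bound on P_c^(n)."""
    return (1.0 - 2.0 / n**2) ** (n / math.log2(n))

# The exponent grows only like n/log n while the base approaches 1 at rate
# 1/n^2, so the bound increases towards one as n grows.
vals = [correct_decoding_lower_bound(n) for n in (10**3, 10**4, 10**6)]
assert vals[0] < vals[1] < vals[2] < 1.0
assert vals[2] > 1.0 - 1e-6
```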
\subsubsection{Proof of Part~\ref{Thm_ortho_accs_conv})}
\edit{Recall that} we consider symmetric codes, i.e., the pair $(M_n,E_n)$ is the same for all users. However, each user may be assigned different numbers of channel uses. Let $n_i$ denote the number of channel uses assigned to user $i$. For an orthogonal-access scheme, if $\ell_n = \omega(n/ \log n)$, then there exists at least one user, say $i=1$, such that $n_i = o(\log n)$.
Using that $H(W_1 | W_1 \neq 0 ) = \log M_n$, it follows from Fano's inequality that
\begin{align}
\log M_n & \leq 1+P_{e,1} \log M_n + \frac{n_1}{2 }\log\left(1+\frac{ 2 E_n}{n_1N_0}\right). \nonumber
\end{align}
This implies that the rate per unit-energy $\dot{R}=(\log M_n)/E_n$ for user 1 is upper-bounded by
\begin{align}
\dot{R} \leq \frac{ \frac{1}{E_n} + \frac{n_1}{2 E_n}\log\left(1+\frac{ 2E_n}{n_1N_0}\right)}{1 -P_{e,1}}.\label{Eq_R_avg}
\end{align}
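To see why \eqref{Eq_R_avg} forces $E_n = O(n_1)$ whenever $\dot{R}>0$: the Fano term $\frac{n_1}{2E_n}\log\left(1+\frac{2E_n}{n_1 N_0}\right)$ depends on $E_n$ and $n_1$ only through the ratio $r=E_n/n_1$ and vanishes as $r\to\infty$. A small numerical sketch (with an illustrative choice $N_0=1$):

```python
import math

N0 = 1.0  # illustrative noise level; any positive value gives the same trend

def fano_term(r: float) -> float:
    """(n1/(2 En)) * log2(1 + 2 En/(n1 N0)) written in terms of r = En/n1."""
    return (1.0 / (2.0 * r)) * math.log2(1.0 + 2.0 * r / N0)

# The term decays to zero as r grows, so the upper bound on the rate per
# unit-energy collapses unless En stays of the order of n1.
assert fano_term(1e2) > fano_term(1e4) > fano_term(1e8)
assert fano_term(1e8) < 1e-6
```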
Since $\ell_n = \omega(n/ \log n)$, it follows from Lemma~\ref{Lem_energy_bound} that $P_{e}^{(n)}$ goes to zero only if
\begin{align}
E_n = \Omega(\log n). \label{Eq_ortho_enrg_lowr}
\end{align}
Furthermore, \eqref{Eq_R_avg} implies that $\dot{R}>0$ only if
$E_n = O(n_1)$. Since $n_1 = o(\log n)$, this further implies that
\begin{align}
E_n = o(\log n). \label{Eq_ortho_enrg_uppr}
\end{align}
No sequence $\{E_n\}$ can satisfy both~\eqref{Eq_ortho_enrg_uppr} and~\eqref{Eq_ortho_enrg_lowr} simultaneously. We thus obtain that if $\ell_n =\omega(n/ \log n)$, then the capacity per unit-energy is zero. This is Part~\ref{Thm_ortho_accs_conv}) of Theorem~\ref{Thm_ortho_accs}.
\edit{
\subsubsection{Proof of Part~\ref{Thm_exact_ortho})}
\label{Sec_exact_ortho}
To show that $\dot{C}_{\bot}>0$, we use the same achievability scheme given in the proof of Part~\ref{Thm_ortho_accs_achv}) of Theorem~\ref{Thm_ortho_accs}. That is, each user is assigned $n/\ell_n$ channel uses, out of which one is used for sending a pilot signal and the rest are used for sending the message. Out of the available energy $E_n$, $tE_n$ (for some arbitrary $0<t<1$ to be determined later) is used for the pilot signal and $(1-t)E_n$ is used for sending the message. We choose $E_n = c'\log n$, where $c' = \frac{c}{1-t}$ and $c$ is chosen as in the proof of Part~\ref{Thm_exact_order}) of Theorem~\ref{Thm_nonrandom}. The probability of error in decoding user 1's message is then upper-bounded by \eqref{Eq_singl_usr_uppr}, namely,
\begin{align}
P_{e,1} & \leq 2 Q\left(\frac{\sqrt{t E_n}}{2}\right) +P_m \label{Eq_ortho_exct1}
\end{align}
where $P_m$ denotes the probability that the decoder correctly declares user 1 as active but makes an error in decoding its message.
By the assumption $\ell_n = \Theta(n/\log n)$, there exist $n_0>0$ and $0<a_1\leq a_2$ such that, for $n \geq n_0$, we have $a_1 \frac{n}{\log n} \leq \ell_n \leq a_2 \frac{n}{\log n}$. Since $\alpha_n \leq 1$, it follows that $k_n \leq a_2 \frac{n}{\log n}$ for $n \geq n_0$. By following the proof of
Part~\ref{Thm_exact_order}) of Theorem~\ref{Thm_nonrandom}, we then obtain that one can set
\begin{equation*}
\dot{R} = (1-t)\frac{\log e}{2} \frac{\ln \bigl(1+\frac{ 2a_2 c}{(1+\rho)N_0}\bigr)}{2a_2 c}
\end{equation*}
(for an arbitrary $0<\rho \leq 1$) and find a $c$ independent of $n$ and $t$ such that
\begin{align}
P_m & \leq \frac{1}{n}, \quad n \geq n_0. \label{Eq_ortho_exct2}
\end{align}
Furthermore, for $E_n=c'\log n$, the upper bound $Q(x) \leq \frac{1}{2} e^{-x^2/2}$, $x\geq 0$ yields that
\begin{equation*}
2 Q\left(\frac{\sqrt{t E_n}}{2}\right) \leq \exp\left[-\log n \frac{t}{1-t}\frac{c}{8}\right].
\end{equation*}
For every fixed $c$, the term $\frac{t}{1-t}\frac{c}{8}$ is a continuous, monotonically increasing function of $t$ that is independent of $n$ and ranges from zero to infinity.
We can therefore find a $0<t<1$ such that
\begin{align*}
2 Q\left(\frac{\sqrt{t E_n}}{2}\right) & \leq \frac{1}{n}.
\end{align*}
Together with~\eqref{Eq_ortho_exct1} and~\eqref{Eq_ortho_exct2}, this implies that
\begin{align}
P_{e,1} & \leq \frac{2}{n}, \quad n\geq n_0. \label{Eq_ortho_exct3}
\end{align}
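The Gaussian tail bound $Q(x) \leq \frac{1}{2}e^{-x^2/2}$ invoked in this step can be sanity-checked numerically via the identity $Q(x) = \frac{1}{2}\operatorname{erfc}(x/\sqrt{2})$ (a quick sketch; the test points are arbitrary):

```python
import math

def Q(x: float) -> float:
    """Gaussian tail probability, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Chernoff-type bound Q(x) <= 0.5 * exp(-x^2/2) for x >= 0, with equality
# only at x = 0.
for x in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]:
    assert Q(x) <= 0.5 * math.exp(-x * x / 2.0)
```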
The above scheme has a positive rate per unit-energy. It remains to show that this rate per unit-energy is also achievable. To this end, we note that,
for an orthogonal-access scheme, the probability of correct decoding is given by $P_c^{(n)} = (1-P_{e,1})^{\ell_n}$. It therefore follows from~\eqref{Eq_ortho_exct3} that
\begin{align}
P_c^{(n)} & \geq \left(1- \frac{2}{n} \right)^{a_2\frac{n}{\log n}},\quad n\geq n_0. \label{Eq_ortho_exct4}
\end{align}
Since $ \left(1- \frac{2}{n} \right)^{n/2} \to 1/e$ and $\frac{2a_2}{\log n}\to 0$ as $n \to \infty$, the RHS of~\eqref{Eq_ortho_exct4} tends to one as $n \to \infty$. This implies that the probability of correct decoding tends to one as $n\to\infty$, hence the rate per unit-energy is indeed achievable. Thus, if $\ell_n = \Theta(n/\log n)$, then $\dot{C}_{\bot} >0$.
We next show that $\dot{C}_{\bot} < \frac{\log e}{N_0}$. To this end, we first note that, if $\ell_n = \Theta( \frac{n}{\log n})$, and if we employ an orthogonal-access scheme, then there exists at least one user, say $i=1$, such that $n_1=O(\log n)$. That is, there exist $n_0>0$ and $a>0$ such that, for all $n\geq n_0$, we have $n_1 \leq a \log n$. Furthermore, Lemma~\ref{Lem_energy_bound} implies that, if $\ell_n = \Theta(n/\log n)$, then $P_e^{(n)}$ vanishes only if $E_n = \Omega(\log n)$. If $E_n = \omega(\log n)$, then it follows from~\eqref{Eq_R_avg} that a positive $\dot{R}$ is achievable only if $n_1 = \omega(\log n)$, which contradicts the fact that $n_1=O(\log n)$. We can thus assume without loss of optimality that $E_n = \Theta(\log n)$, i.e., there exist $n'_0>0$ and $0<l_1\leq l_2$ such that, for all $n \geq n'_0$, we have $l_1 \log n \leq E_n \leq l_2 \log n$. Consequently, $\frac{E_n}{n_1} \geq \frac{l_1}{a}$ for $n \geq \max(n_0,n_0')$. The claim that $\dot{C}_{\bot}<\frac{\log e}{N_0}$ follows then directly from \eqref{Eq_R_avg}. Indeed, using that $\frac{\log (1+x)}{x}< \log e$ for every $x>0$, we obtain that
\begin{align}
\frac{n_1}{2 E_n}\log\left(1+\frac{ 2E_n}{n_1N_0}\right) & \leq \frac{a}{2l_1} \log \left(1+\frac{2l_1}{a N_0}\right) < \frac{\log e}{N_0}, \quad n \geq \max(n_0,n_0'). \label{Eq_R_ortho}
\end{align}
By \eqref{Eq_R_avg}, in the limit as $P_{e,1} \to 0$ and $E_n \to \infty$, the rate per unit-energy is upper-bounded by~\eqref{Eq_R_ortho}. It thus follows that $\dot{C}_{\bot} < \frac{\log e}{N_0}$, which concludes the proof of Part~\ref{Thm_exact_ortho}) of Theorem~\ref{Thm_ortho_accs}.
}
\subsection{Proof of Theorem~\ref{Thm_capac_PUPE}}
\label{sec_average}
The proofs of Part~\ref{Thm__avg_achv_part}) and Part~\ref{Thm__avg_conv_part}) follow along similar lines to those of Part~\ref{Thm_APE_achv_part}) and Part~\ref{Thm_APE_conv_part}) of Theorem~\ref{Thm_capac_APE}, respectively.
\subsubsection{Proof of Part~\ref{Thm__avg_achv_part})} We first argue that $P_{e,A}^{(n)} \to 0$ only if $E_n \to \infty$, and that in this case $\dot{C}^A \leq \frac{\log e}{N_0}$. Indeed, we have
\begin{align*}
P_{e,A}^{(n)} & \geq \min_{i} \text{Pr}\{\hat{W}_i\neq W_i\} \\
& \geq \alpha_n \text{Pr}(\hat{W_i} \neq W_i | W_i \neq 0) \; \text{ for some } i.
\end{align*}
\edit{Since $\liminf_{n \to \infty} \alpha_n >0$}, this implies that
$P_{e,A}^{(n)} $ vanishes only if $ \text{Pr}(\hat{W_i} \neq W_i | W_i \neq 0)$ vanishes. We next note that \mbox{$\text{Pr}(\hat{W_i} \neq W_i | W_i \neq 0)$} is lower-bounded by the error probability of the Gaussian single-user channel. By following the arguments presented at the beginning of the proof of Theorem~\ref{Thm_capac_APE}, we obtain that $P_{e,A}^{(n)} \to 0$ only if $E_n \to \infty$, which also implies that $\dot{C}^A \leq \frac{\log e}{N_0}$.
For the achievability in Part~\ref{Thm__avg_achv_part}), we use an orthogonal-access scheme where each user uses an orthogonal codebook of blocklength $n/\ell_n$.
Out of these $n/\ell_n$ channel uses, the first one is used for sending a pilot signal to convey that the user is active, and the remaining channel uses are used to send the message. Specifically, the codeword ${\bf x}_i(j)$ sent by user $i$ to convey message $j$ is given by
\begin{align*}
x_{ik}(j) = \begin{cases}
\sqrt{t E_n}, & \text{ if } k=1 \\
\sqrt{(1-t) E_n}, & \text{ if } k=j+1\\
0, & \text{ otherwise}
\end{cases}
\end{align*}
for some arbitrary $0 < t <1$. From the pilot signal, the receiver first detects whether the user is active or not. For this detection method, as noted before, the probability of detection error is given by $2Q\left(\frac{\sqrt{tE_n}}{2} \right)$. Since
\begin{equation*}
E_n = \frac{\log M_n}{\dot{R}} = \frac{\log ( \frac{n}{\ell_n} -1)}{\dot{R}}
\end{equation*}
and since $\ell_n$ is sublinear in $n$, $E_n$ tends to infinity as $n \to \infty$. \edit{This} implies that the detection error vanishes as $n \to \infty$. If $\dot{R} < \frac{\log e}{N_0}$, then the probability of erroneously decoding the message also vanishes for this code, which follows from the proof of Theorem~\ref{Thm_capac_APE}. This proves Part~\ref{Thm__avg_achv_part}) of Theorem~\ref{Thm_capac_PUPE}.
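The pilot-plus-pulse-position codeword used in this achievability argument can be sketched as follows (function and variable names are illustrative, not from the paper; positions are 0-based in the list, so message $j$ occupies slot $j+1$ in the 1-based indexing of the text):

```python
import math

def ppm_codeword(j, slots, E, t):
    """Sketch of user i's codeword x_i(j): a pilot of amplitude sqrt(t*E) in
    the first slot and a single message pulse of amplitude sqrt((1-t)*E) in
    slot j+1 (1-based), zeros elsewhere."""
    x = [0.0] * slots
    x[0] = math.sqrt(t * E)            # pilot: announces the user is active
    x[j] = math.sqrt((1.0 - t) * E)    # pulse position encodes the message j
    return x

x = ppm_codeword(j=3, slots=8, E=4.0, t=0.25)
assert abs(sum(v * v for v in x) - 4.0) < 1e-9   # total energy equals E
assert sum(1 for v in x if v != 0.0) == 2        # pilot plus one pulse
```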
\subsubsection{Proof of Part~\ref{Thm__avg_conv_part})} Fano's inequality yields that $H(\hat{W}_i|W_i) \leq 1+ P_{e,i}\log M_n$. Since $H(W_i) = H_2(\alpha_n) + \alpha_n \log M_n$, we have
\begin{equation*}
H_2(\alpha_n) + \alpha_n \log M_n \leq 1+ P_{e,i}\log M_n+ I(W_i; \hat{W}_i)
\end{equation*}
for $i=1,\ldots, \ell_n$. Averaging over all $i$'s then gives
\begin{align}
H_2(\alpha_n) + \alpha_n \log M_n & \leq 1+ \frac{1}{\ell_n} \sum_{i=1}^{\ell_n} P_{e,i}\log M_n+ \frac{1}{\ell_n} I({\bf W}; {\bf \hat{W}}) \nonumber\\
& \leq 1+P_{e,A}^{(n)}\log M_n+ \frac{1}{\ell_n} I({\bf W}; {\bf Y}) \nonumber\\
& \leq 1 + P_{e,A}^{(n)} \log M_n+\frac{n}{2\ell_n} \log \left(1+\frac{2 k_nE_n}{nN_0}\right). \label{Eq_avg_prob_uppr}
\end{align}
Here, the first inequality follows because the messages $W_i, i=1, \ldots, \ell_n$ are independent and because conditioning reduces entropy, the second inequality follows from the definition of $P_{e,A}^{(n)}$ and the data processing inequality, and the third inequality follows from~\eqref{Eq_mutl_info_uppr}.
Dividing both sides of \eqref{Eq_avg_prob_uppr} by $E_n$, and rearranging terms, yields the following upper-bound on the rate per unit-energy $\dot{R}^A$:
\begin{equation}
\label{eq:Part2_Th1_end}
\dot{R}^{A}\leq \frac{ \frac{1 - H_2(\alpha_n)}{E_n} + \frac{n}{2 \ell_nE_n}\log(1+\frac{ 2k_nE_n}{nN_0})}{\alpha_n -P_{e,A}^{(n)}}.
\end{equation}
As noted before, $P_{e,A}^{(n)} \to 0$ only if $E_n \to \infty$.
It follows that $\frac{1 - H_2(\alpha_n)}{E_n}$ vanishes as $n \to \infty$.
Furthermore, together with the assumptions $\ell_n=\Omega(n)$ and
\edit{$\liminf_{n \to \infty} \alpha_n >0$}, $E_n\to\infty$ \edit{implies} that $k_nE_n/n=\alpha_n \ell_nE_n/n$ tends to infinity as $n\to\infty$. This in turn implies that
\begin{equation*}
\frac{n}{2\ell_n E_n} \log\left(1+\frac{2 k_n E_n}{n N_0}\right)=\frac{n \alpha_n}{2 k_n E_n}\log\left(1+\frac{2k_n E_n}{n N_0}\right)
\end{equation*}
vanishes as $n\to \infty $. It thus follows from~\eqref{eq:Part2_Th1_end} that $\dot{R}^A$ vanishes as $n\to\infty$, thereby proving Part~\ref{Thm__avg_conv_part}) of Theorem~\ref{Thm_capac_PUPE}.
| {
"timestamp": "2021-09-03T02:23:12",
"yymm": "2012",
"arxiv_id": "2012.10350",
"language": "en",
"url": "https://arxiv.org/abs/2012.10350",
"abstract": "This paper considers a Gaussian multiple-access channel with random user activity where the total number of users $\\ell_n$ and the average number of active users $k_n$ may grow with the blocklength $n$. For this channel, it studies the maximum number of bits that can be transmitted reliably per unit-energy as a function of $\\ell_n$ and $k_n$. When all users are active with probability one, i.e., $\\ell_n = k_n$, it is demonstrated that if $k_n$ is of an order strictly below $n/\\log n$, then each user can achieve the single-user capacity per unit-energy $(\\log e)/N_0$ (where $N_0/ 2$ is the noise power) by using an orthogonal-access scheme. In contrast, if $k_n$ is of an order strictly above $n/\\log n$, then the capacity per unit-energy is zero. Consequently, there is a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate per unit-energy is infeasible. It is further demonstrated that orthogonal-access schemes in combination with orthogonal codebooks, which achieve the capacity per unit-energy when the number of users is bounded, can be strictly suboptimal.When the user activity is random, i.e., when $\\ell_n$ and $k_n$ are different, it is demonstrated that if $k_n\\log \\ell_n$ is sublinear in $n$, then each user can achieve the single-user capacity per unit-energy $(\\log e)/N_0$. Conversely, if $k_n\\log \\ell_n$ is superlinear in $n$, then the capacity per unit-energy is zero. Consequently, there is again a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate is infeasible that depends on the asymptotic behaviours of both $\\ell_n$ and $k_n$. It is further demonstrated that orthogonal-access schemes, which are optimal when $\\ell_n = k_n$, can be strictly suboptimal.",
"subjects": "Information Theory (cs.IT)",
"title": "Scaling Laws for Gaussian Random Many-Access Channels",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540680555949,
"lm_q2_score": 0.7248702821204019,
"lm_q1q2_score": 0.7099046596072224
} |
https://arxiv.org/abs/2005.11751 | On 3-strand singular pure braid group | In the present paper we study the singular pure braid group $SP_{n}$ for $n=2, 3$. We find generators, defining relations and the algebraical structure of these groups. In particular, we prove that $SP_{3}$ is a semi-direct product $SP_{3} = \widetilde{V}_3 \leftthreetimes \mathbb{Z}$, where $\widetilde{V}_3$ is an HNN-extension with base group $\mathbb{Z}^2 * \mathbb{Z}^2$ and cyclic associated subgroups. We prove that the center $Z(SP_3)$ of $SP_3$ is a direct factor in $SP_3$. | \section{Introduction}
E. Artin introduced the braid group in 1925 and gave a presentation for this group in 1947 \cite{A}. The braid groups have applications in a wide variety of areas of mathematics, physics and biology. The notion of singular braids was introduced independently by John Baez \cite{Baez} and Joan Birman \cite{Bir}. It was shown that such braids form a monoid $SB_n$ which contains the Artin braid group $B_n$. The research on singular knots has been mostly motivated by the study of Vassiliev invariants, which led to an increase of research activity in knot theory at the time. Singular braids and singular knots have been investigated by many authors \cite{C1}-\cite{JL}, \cite{Z}.
The singular braid group $SG_n$ was introduced by R.~Fenn, E.~Keyman and C.~Rourke \cite{FKR}. They proved that the singular braid monoid $SB_n$ embeds into the group $SG_n$. The word problem for the singular braid monoid with three strings was solved by A.~Jarai \cite{J} (see also \cite{DG1}). R.~Corran solved the word \cite{C1} and conjugacy \cite{C2} problems for the singular Artin monoid. L.~Paris proved Birman's conjecture that the desingularization map $\eta \colon S{B_n} \to \mathbb{Z}\left[ {{B_n}} \right]$ is injective \cite{Par}. This also gives a solution of the word problem in the singular braid monoid.
O.~Dasbach and B.~Gemein \cite{DG} introduced the singular pure braid group that is a generalization of the pure braid group $P_n$, found the set of generators and defining relations for this group and established that this group can be constructed using successive HNN-extensions. Investigation of homological properties of singular braids was started in \cite{V1}. We refer to \cite{V2,V3} for more details on singular braid groups and also other generalized braid groups.
The monoid and group of pseudo braids were introduced in \cite{BJW}, where it was proved that they are isomorphic to the monoid and group of singular braids, respectively.
The paper is organized as follows. In Section \ref{prelim} we give basic definitions of the braid group $B_n$, the pure braid group $P_n$ and the singular braid monoid $SB_n$, and we recall the Reidemeister--Schreier method. In Section \ref{pure} we find presentations of the singular pure braid groups $SP_2$ and $SP_3$, using the Reidemeister--Schreier method. Properties of braid groups with a small number of strings usually differ from those in the general case (see for example \cite{BMVW} and \cite{DG1}). Another generating set for $SP_3$ was found in \cite{DG}.
Since $SP_3$ is normal in $SG_3$, conjugation by the generators of $SG_3$ induces automorphisms of $SP_3$. We find these automorphisms in Proposition \ref{p4.1}.
Then we study the structure of $SP_{3}$ (see Theorem \ref{th}) and prove that $SP_{3}$ is a semi-direct product $SP_{3} = \widetilde{V}_3 \leftthreetimes \mathbb{Z}$, where $\widetilde{V}_3$ is an HNN-extension with base group $\mathbb{Z}^2 * \mathbb{Z}^2$ and cyclic associated subgroups. Also, we establish that the center $Z(SG_3) = Z(SP_3)$ is a direct factor in $SP_3$ (see Theorem~\ref{t3} and Corollary~\ref{c3}).
\section{Basic definitions}\label{prelim}
\subsection{Artin braid groups}\label{Artin }
The braid group $B_n$, $n\geq 2$, on $n$ strings can be defined as
a group generated by $\sigma_1,\sigma_2,\ldots,\sigma_{n-1}$ with the defining relations
\begin{center}
$\sigma_i \, \sigma_{i+1} \, \sigma_i = \sigma_{i+1} \, \sigma_i \, \sigma_{i+1},~~~ i=1,2,\ldots,n-2, $
\end{center}
\begin{center}
$\sigma_i \, \sigma_j = \sigma_j \, \sigma_i,~~~|i-j|\geq 2. $
\end{center}
There exists a homomorphism of $B_n$ onto the symmetric group $S_n$ on
$n$ letters. This homomorphism maps
$\sigma_i$ to the transposition $(i,i+1)$, $i=1,2,\ldots,n-1$.
The kernel of this homomorphism is called the
{\it pure braid group} and denoted by
$P_n$. The group $P_n$ is generated by $a_{ij}$, $1\leq i < j\leq n$.
These generators can be expressed in terms of the generators of
$B_n$ as follows:
$$
a_{i,i+1}=\sigma_i^2,
$$
$$
a_{ij} = \sigma_{j-1} \, \sigma_{j-2} \ldots \sigma_{i+1} \, \sigma_i^2 \, \sigma_{i+1}^{-1} \ldots
\sigma_{j-2}^{-1} \, \sigma_{j-1}^{-1},~~~i+1< j \leq n.
$$
The subgroup $P_n$ is normal in $B_n$, and the quotient $B_n / P_n$ is the symmetric group $S_n$. The generators of $B_n$ act on the generator $a_{ij} \in P_n$ by the rules:
\begin{center}
$\sigma_k^{-1} a_{ij} \sigma_k = a_{ij}, ~\mbox{for}~k \not= i-1, i, j-1, j,$ \\
$\sigma_{i}^{-1} a_{i,i+1} \sigma_{i} = a_{i,i+1}, $ \\
$ \sigma_{i-1}^{-1} a_{ij} \sigma_{i-1} = a_{i-1,j},$ \\
$ \sigma_{i}^{-1} a_{ij} \sigma_{i} = a_{i+1,j} [a_{i,i+1}^{-1}, a_{ij}^{-1}], ~\mbox{for}~j \not= i+1, $\\
$ \sigma_{j-1}^{-1} a_{ij} \sigma_{j-1} = a_{i,j-1},$ \\
$\sigma_{j}^{-1} a_{ij} \sigma_{j} = a_{ij} a_{i,j+1} a_{ij}^{-1},$
\end{center}
where $[a, b] = a^{-1} b^{-1} a b = a^{-1} a^b$.
Denote by
$$
U_{i} = \langle a_{1i}, a_{2i}, \ldots, a_{i-1,i} \rangle,~~~i = 2, \ldots, n,
$$
a subgroup of $P_n$.
It is known that $U_i$ is a free group of rank $i-1$. The pure braid group $P_n$ is defined by the relations (for $\varepsilon = \pm 1$):
\begin{center}
$a_{ik}^{-\varepsilon} a_{kj} a_{ik}^{\varepsilon} = (a_{ij} a_{kj})^{\varepsilon} a_{kj} (a_{ij} a_{kj})^{-\varepsilon}, $ \\
$ a_{km}^{-\varepsilon} a_{kj} a_{km}^{\varepsilon} = (a_{kj} a_{mj})^{\varepsilon} a_{kj} (a_{kj} a_{mj})^{-\varepsilon}, ~\mbox{for}~m < j, $ \\
$ a_{im}^{-\varepsilon} a_{kj} a_{im}^{\varepsilon} = [a_{ij}^{-\varepsilon}, a_{mj}^{-\varepsilon}]^{\varepsilon} a_{kj} [a_{ij}^{-\varepsilon}, a_{mj}^{-\varepsilon}]^{-\varepsilon}, ~\mbox{for}~i < k < m, $ \\
$a_{im}^{-\varepsilon} a_{kj} a_{im}^{\varepsilon} = a_{kj}, ~\mbox{for}~k < i < m < j ~\mbox{or}~ m < k. $
\end{center}
The group $P_n$ is the semi--direct product of the normal subgroup
$U_n$ and $P_{n-1}$. Similarly, $P_{n-1}$ is the semi--direct product of the free group
$U_{n-1}$ and $P_{n-2},$ and so on.
Therefore, $P_n$ is decomposable (see \cite{Mar}) into the following semi--direct product
$$
P_n=U_n\rtimes (U_{n-1}\rtimes (\ldots \rtimes
(U_3\rtimes U_2))\ldots),~~~U_i\simeq F_{i-1}, ~~~i=2,3,\ldots,n.
$$
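That each $a_{ij}$ indeed lies in the pure braid group can be verified mechanically: under the homomorphism $B_n \to S_n$ its image is a product of transpositions that cancels to the identity. A small check (a sketch; permutations are encoded 0-based as tuples of images):

```python
from functools import reduce

def transposition(n: int, i: int):
    """Image of sigma_i in S_n: swap strands i and i+1 (1-based)."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(p, q):
    """Composition p after q, both tuples of images on {0, ..., n-1}."""
    return tuple(p[q[k]] for k in range(len(p)))

def image_of_a(n: int, i: int, j: int):
    """Image in S_n of a_{ij} = s_{j-1}...s_{i+1} s_i^2 s_{i+1}^{-1}...s_{j-1}^{-1}.
    Transpositions are involutions, so the inverses use the same generators."""
    word = list(range(j - 1, i, -1)) + [i, i] + list(range(i + 1, j))
    return reduce(compose, (transposition(n, k) for k in word))

n = 5
identity = tuple(range(n))
for i in range(1, n):
    for j in range(i + 1, n + 1):
        assert image_of_a(n, i, j) == identity  # a_{ij} is a pure braid
```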
\subsection{Singular braid groups}\label{Singular }
{\it The Baez--Birman monoid}
\cite{Baez, Bir} or {\it the singular braid monoid} $SB_n$ is generated
(as a monoid) by elements $\sigma_i,$ $\sigma_i^{-1}$, $\tau_i$, $i = 1, 2, \ldots, n-1$.
The elements $\sigma_i,$ $\sigma_i^{-1}$ generate the braid group
$B_n$. The generators $\tau_i$ satisfy the defining relations
\begin{center}
$\tau_i \, \tau_j = \tau_j \, \tau_i, ~~~|i - j| \geq 2, $
\end{center}
The other relations are mixed:
\begin{center}
$\tau_{i} \, \sigma_{j} = \sigma_{j} \, \tau_{i}, ~~~|i - j| \geq 2, $
\end{center}
\begin{center}
$\tau_{i} \, \sigma_{i} = \sigma_{i} \, \tau_{i},~~~ i=1,2,\ldots,n-1, $
\end{center}
\begin{center}
$\sigma_{i} \, \sigma_{i+1} \, \tau_i = \tau_{i+1} \, \sigma_{i} \, \sigma_{i+1},~~~ i=1,2,\ldots,n-2,$
\end{center}
\begin{center}
$\sigma_{i+1} \, \sigma_{i} \, \tau_{i+1} = \tau_{i} \,
\sigma_{i+1} \, \sigma_{i}, ~~~ i=1,2,\ldots,n-2.$
\end{center}
In the work \cite{FKR} it was proved that the singular braid monoid $SB_n$ is embedded into
the group $SG_n$ which is called the {\it
singular braid group} and has the same defining relations as $SB_n$.
As a monoid, $SB_n$ is generated by the unit element, the classical elementary braids $\sigma_i$ together with their inverses, and the corresponding elementary singular braids $\tau_i$ (see Figure~\ref{figure}):
\begin{figure}[h]
\centering{
\includegraphics[totalheight=4.cm]{pic.pdf}
\caption{The elementary braids $\sigma_i$ and $\tau_i$.} \label{figure}
}
\end{figure}
\subsection{Singular pure braid group}\label{pure}
Define the map
$$
\pi : SG_n \longrightarrow S_n
$$
of $SG_n$ onto the symmetric group $S_n$ on $n$ symbols by its action on the generators
$$
\pi(\sigma_i) = \pi(\tau_i) = (i, i+1), ~~~ i = 1, 2, \ldots, n-1.
$$
The kernel $\mbox{ker}(\pi)$ of this map is called the
{\it singular pure braid group} and denoted by $SP_n$ (see \cite{DG}).
It is clear that $SP_n$ is a normal subgroup of index $n!$ of $SG_n$ and we have the short exact sequence
$$
1 \to SP_n \to SG_n \to S_n \to 1.
$$
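For $\pi$ to be well defined, every defining relation of $SG_n$ must hold after replacing both $\sigma_i$ and $\tau_i$ by the transposition $(i,i+1)$; since $\pi(\sigma_i)=\pi(\tau_i)$, the mixed relations reduce to the braid and commutation relations in $S_n$. A quick check in $S_4$ (a sketch; permutations are 0-based tuples of images):

```python
def transposition(n: int, i: int):
    """The common image of sigma_i and tau_i: swap strands i and i+1."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def evaluate(n: int, *gens):
    """Image in S_n of a word in the generators, read left to right."""
    out = tuple(range(n))
    for g in gens:
        t = transposition(n, g)
        out = tuple(out[t[k]] for k in range(n))
    return out

n = 4
# Braid relation s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}; it also covers the
# mixed relations sigma_i sigma_{i+1} tau_i = tau_{i+1} sigma_i sigma_{i+1}
# because both kinds of generators have the same image.
for i in range(1, n - 1):
    assert evaluate(n, i, i + 1, i) == evaluate(n, i + 1, i, i + 1)
# Far commutation s_i s_j = s_j s_i for |i - j| >= 2:
assert evaluate(n, 1, 3) == evaluate(n, 3, 1)
```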
To find a presentation of $SP_n$ we will use
the Reidemeister--Schreier method (see, for example, \cite[Ch. 2.2]{KMS}).
\subsection{Reidemeister--Schreier method}\label{method}
Let $m_{kl} = \sigma_{k-1} \, \sigma_{k-2} \ldots \sigma_l$ for $l < k$ and $m_{kl} = 1$
in other cases. Then the set
$$
\Lambda_n = \left\{ \prod\limits_{k=2}^n m_{k,j_k} \vert 1 \leq j_k
\leq k \right\}
$$
is a Schreier set of coset representatives of $SP_n$ in $SG_n$.
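One can check directly that the $n!$ products above have pairwise distinct images in $S_n$, so that $\Lambda_n$ meets every coset of $SP_n$ exactly once. A small script (illustrative only; it encodes $m_{kl}$ as the word $\sigma_{k-1} \cdots \sigma_l$) verifies this for $n = 3$ and $n = 4$:

```python
from itertools import product
from math import factorial

def transposition(i, n):
    """Image tuple of the adjacent transposition (i, i+1), with 1-indexed i."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def perm_of_word(word, n):
    """Image in S_n of a word in the sigma_i, given as a list of indices i."""
    result = tuple(range(n))
    for i in word:
        t = transposition(i, n)
        result = tuple(result[t[x]] for x in range(n))
    return result

def schreier_words(n):
    """All products prod_{k=2}^{n} m_{k, j_k} with m_{k,l} = sigma_{k-1} ... sigma_l."""
    words = []
    choices = [range(1, k + 1) for k in range(2, n + 1)]
    for js in product(*choices):
        word = []
        for k, j in zip(range(2, n + 1), js):
            word.extend(range(k - 1, j - 1, -1))  # sigma_{k-1}, ..., sigma_j (empty if j = k)
        words.append(word)
    return words

for n in (3, 4):
    words = schreier_words(n)
    images = {perm_of_word(w, n) for w in words}
    # n! words with n! distinct images in S_n: one representative per coset of SP_n
    assert len(words) == factorial(n) and len(images) == factorial(n)
```

For $n = 3$ this reproduces exactly the six words of $\Lambda_3$ used below.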
Define the map $^- : SG_n \longrightarrow \Lambda_n$ which takes an element
$w \in SG_n$
into the representative $\overline{w}$ from $\Lambda_n$. In this case the element
$w \overline{w}^{-1}$ belongs to $SP_n$. By \cite[Theorem 2.7]{KMS}
the group $SP_n$ is generated by
$$
S_{\lambda, a} = \lambda a \cdot (\overline{\lambda a})^{-1},
$$
where $\lambda$ runs over the set $\Lambda_n$ and $a$ runs over the set of generators of
$SG_n$.
To find the defining relations of $SP_n$ we define
a rewriting process $\tau$. It allows us to rewrite a word which is written in the generators
of $SG_n$ and represents an element of $SP_n$ as a word in the generators of $SP_n$.
Let us associate to the reduced word
$$
u = a_1^{\varepsilon_1} \, a_2^{\varepsilon_2} \ldots
a_{\nu}^{\varepsilon_{\nu}},~~~\varepsilon_l = \pm 1,~~~a_l \in
\{\sigma_1, \sigma_2, \ldots, \sigma_{n-1}, \tau_1, \tau_2, \ldots, \tau_{n-1}
\},
$$
the word
$$
\tau(u) = S_{k_1,a_1}^{\varepsilon_1} \, S_{k_2,a_2}^{\varepsilon_2}
\ldots S_{k_{\nu},a_{\nu}}^{\varepsilon_{\nu}}
$$
in the generators of $SP_n$, where $k_j$ is a representative of the ($j-1$)th
initial segment
of the word $u$ if $\varepsilon_j = 1$ and $k_j$ is a representative of the $j$th
initial segment of
the word $u$ if
$\varepsilon_j = -1$.
By \cite[Theorem 2.9]{KMS}, the group $SP_n$ is defined by relations
$$
r_{\mu,\lambda} = \tau (\lambda \, r_{\mu} \, \lambda^{-1}),~~~\lambda \in
\Lambda_n,
$$
where $r_{\mu}$ runs over the defining relations of $SG_n$.
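As a concrete illustration, here is a minimal sketch of the rewriting process for $n = 2$ (our own illustrative code, with the hypothetical names \texttt{s1}, \texttt{t1} for $\sigma_1$, $\tau_1$): a coset representative in $\Lambda_2$ is determined by the parity of the image of an initial segment in $S_2$.

```python
# generators of SG_2; 's1', 't1' are our shorthand for sigma_1, tau_1,
# and both map to the transposition in S_2
IS_SWAP = {'s1': True, 't1': True}

def representative(segment):
    """Schreier representative in Lambda_2 = {1, sigma_1} of an initial segment,
    read off from the parity of its image in S_2 (exponent signs do not matter,
    since a transposition is an involution)."""
    odd = sum(1 for letter, _ in segment if IS_SWAP[letter]) % 2 == 1
    return 's1' if odd else '1'

def rewrite(word):
    """The rewriting process tau: a word [(letter, eps), ...] in the generators
    of SG_2 becomes a list of symbols (lambda, a, eps) standing for S_{lambda,a}^{eps}."""
    symbols = []
    for j, (letter, eps) in enumerate(word):
        # eps = +1: representative of the (j-1)th initial segment;
        # eps = -1: representative of the jth initial segment (letter included)
        segment = word[:j] if eps == 1 else word[:j + 1]
        symbols.append((representative(segment), letter, eps))
    return symbols

# the relation r = sigma_1 tau_1 sigma_1^{-1} tau_1^{-1} of SG_2 rewrites as
# S_{1,sigma_1} S_{sigma_1,tau_1} S_{sigma_1,sigma_1}^{-1} S_{1,tau_1}^{-1}
r = [('s1', 1), ('t1', 1), ('s1', -1), ('t1', -1)]
assert rewrite(r) == [('1', 's1', 1), ('s1', 't1', 1), ('s1', 's1', -1), ('1', 't1', -1)]
```

The resulting symbol sequence agrees with the computation of $\tau(r)$ for $SP_2$ carried out in the next section.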
\section{Generators and defining relations for $SP_2$ and $SP_3$}
In this section we will use the Reidemeister--Schreier method, which we described in the previous section.
\subsection{Case $n=2$.} In this case
$$
SG_2 = \langle \sigma_1, \tau_1~|~\sigma_1 \tau_1 = \tau_1 \sigma_1 \rangle \cong \mathbb{Z} \times \mathbb{Z}.
$$
The set of coset representatives:
$$
\Lambda_2 = \{ 1, \sigma_1 \}.
$$
The group $SP_2$ is generated by elements
$$
S_{\lambda,a} = \lambda a \cdot (\overline{\lambda a})^{-1},~~~\lambda \in \Lambda_2,~~a \in \{ \sigma_1, \tau_1 \}.
$$
Hence,
$$
S_{1,\sigma_1} = \sigma_1 \cdot (\overline{\sigma_1})^{-1} = \sigma_1 \cdot \sigma_1^{-1} = 1,
$$
$$
S_{1,\tau_1} = \tau_1 \cdot (\overline{\tau_1})^{-1} = \tau_1 \cdot \sigma_1^{-1},
$$
$$
S_{\sigma_1,\sigma_1} = \sigma_1^2 \cdot \overline{\sigma_1^2}^{-1} = \sigma_1^2 \cdot 1 = \sigma_1^2,
$$
$$
S_{\sigma_1,\tau_1} = \sigma_1 \tau_1 \cdot (\overline{\sigma_1 \tau_1})^{-1} = \sigma_1 \tau_1.
$$
We see that $SP_2$ is generated by three elements:
$$
S_{1,\tau_1} = \tau_1 \sigma_1^{-1},~~~S_{\sigma_1,\sigma_1} = \sigma_1^2,~~~S_{\sigma_1,\tau_1} = \sigma_1 \tau_1.
$$
The element $a_{12} = \sigma_1^2$ is a generator of the pure braid group $P_2$.
To find the set of defining relations for $SP_2$, take the relation $r = \sigma_1 \tau_1 \sigma_1^{-1} \tau_1^{-1}$; applying the rewriting process, we get
$$
r = S_{1,\sigma_1} S_{\sigma_1,\tau_1} S_{\sigma_1,\sigma_1}^{-1} S_{1,\tau_1}^{-1} = S_{\sigma_1,\tau_1} S_{\sigma_1,\sigma_1}^{-1} S_{1,\tau_1}^{-1} = 1.
$$
Using this relation we can remove the generator
$$
S_{1,\tau_1} = S_{\sigma_1,\tau_1} S_{\sigma_1,\sigma_1}^{-1}.
$$
Relation
$$
\sigma_1 r \sigma_1^{-1} = S_{1,\sigma_1} S_{\sigma_1,\sigma_1} S_{1,\tau_1} S_{1,\sigma_1}^{-1} S_{\sigma_1,\tau_1}^{-1} S_{1,\sigma_1}^{-1} =
S_{\sigma_1,\sigma_1} S_{1,\tau_1} S_{\sigma_1,\tau_1}^{-1} = 1.
$$
Applying the previous relation we get
$$
S_{\sigma_1,\sigma_1} S_{\sigma_1,\tau_1} = S_{\sigma_1,\tau_1} S_{\sigma_1,\sigma_1}.
$$
Put $a_{12} = S_{\sigma_1,\sigma_1}$, $b_{12} = S_{\sigma_1,\tau_1}$. Then, using the fact that $SG_2$ is abelian, we get
\begin{lemma} \label{l3.1}
1) $SP_2 = \langle a_{12}, b_{12}~|~a_{12} \, b_{12} = b_{12} \, a_{12} \rangle \cong \mathbb{Z} \times \mathbb{Z};$

2) $SP_2$ is normal in $SG_2$ and the action of $SG_2$ on $SP_2$ is defined by the formulas
$$
a_{12}^{\sigma_1} = a_{12},~~~b_{12}^{\sigma_1} = b_{12},
$$
$$
a_{12}^{\tau_1} = a_{12},~~~b_{12}^{\tau_1} = b_{12}.
$$
\end{lemma}
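Since $SG_2 \cong \mathbb{Z} \times \mathbb{Z}$, the lemma can be double-checked on exponent vectors $(e_{\sigma_1}, e_{\tau_1})$: the sketch below (illustrative, with our own variable names) verifies the removal formula for $S_{1,\tau_1}$ and that $\langle a_{12}, b_{12} \rangle$ has index $2 = |S_2|$ in $SG_2$.

```python
# exponent vectors (e_sigma1, e_tau1) in SG_2 = Z x Z; the names are ours
S_1_t1 = (-1, 1)    # S_{1,tau_1} = tau_1 sigma_1^{-1}
a12 = (2, 0)        # S_{sigma_1,sigma_1} = sigma_1^2
b12 = (1, 1)        # S_{sigma_1,tau_1} = sigma_1 tau_1

# removal formula S_{1,tau_1} = S_{sigma_1,tau_1} S_{sigma_1,sigma_1}^{-1}
assert S_1_t1 == (b12[0] - a12[0], b12[1] - a12[1])

# [SG_2 : SP_2] equals the index of the sublattice spanned by a12 and b12,
# i.e. |det| of the 2x2 matrix with rows a12 and b12
index = abs(a12[0] * b12[1] - a12[1] * b12[0])
assert index == 2  # = |S_2|, as the short exact sequence predicts
```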
\subsection{Case $n=3$.} In this case $SG_3$ is generated by elements
$$
\sigma_1, \sigma_2, \tau_1, \tau_2,
$$
and is defined by relations
$$
\sigma_1 \tau_1 = \tau_1 \sigma_1,~~~\sigma_1 \sigma_2 \sigma_1 = \sigma_2 \sigma_1 \sigma_2,~~~\sigma_2 \tau_2 = \tau_2 \sigma_2,~~~
\sigma_1 \sigma_2 \tau_1 = \tau_2 \sigma_1 \sigma_2,~~~\sigma_2 \sigma_1 \tau_2 = \tau_1 \sigma_2 \sigma_1.
$$
The set of coset representatives:
$$
\Lambda_3 = \{ 1, \sigma_1, \sigma_2, \sigma_1 \sigma_2, \sigma_2 \sigma_1, \sigma_1 \sigma_2 \sigma_1 \}.
$$
The group $SP_3$ is generated by elements
$$
S_{\lambda,a} = \lambda a \cdot (\overline{\lambda a})^{-1},~~~\lambda \in \Lambda_3,~~a \in \{ \sigma_1, \sigma_2, \tau_1, \tau_2 \}.
$$
We find these elements
\begin{align*}
& S_{1,\sigma_1} = \sigma_1 \cdot (\overline{\sigma_1})^{-1} = \sigma_1 \cdot \sigma_1^{-1} = 1,\\
& S_{1,\sigma_2} = \sigma_2 \cdot (\overline{\sigma_2})^{-1} = \sigma_2 \cdot \sigma_2^{-1} = 1,\\
& S_{1,\tau_1} = \tau_1 \cdot (\overline{\tau_1})^{-1} = \tau_1 \cdot \sigma_1^{-1},\\
& S_{1,\tau_2} = \tau_2 \cdot (\overline{\tau_2})^{-1} = \tau_2 \cdot \sigma_2^{-1},\\
\end{align*}
\begin{align*}
& S_{\sigma_1,\sigma_1} = \sigma_1^2 \cdot \overline{\sigma_1^2}^{-1} = \sigma_1^2 \cdot 1 = \sigma_1^2,\\
& S_{\sigma_1,\sigma_2} = \sigma_1 \sigma_2 \cdot (\overline{\sigma_1 \sigma_2})^{-1} = 1,\\
& S_{\sigma_1,\tau_1} = \sigma_1 \tau_1 \cdot (\overline{\sigma_1 \tau_1})^{-1} = \sigma_1 \tau_1,\\
& S_{\sigma_1,\tau_2} = \sigma_1 \tau_2 \cdot (\overline{\sigma_1 \tau_2})^{-1} = \sigma_1 \tau_2 \sigma_2^{-1} \sigma_1^{-1},\\
\end{align*}
\begin{align*}
& S_{\sigma_2,\sigma_1} = \sigma_2 \sigma_1 \cdot (\overline{\sigma_2 \sigma_1})^{-1} = 1,\\
& S_{\sigma_2,\sigma_2} = \sigma_2^2 \cdot \overline{\sigma_2^2}^{-1} = \sigma_2^2 \cdot 1 = \sigma_2^2,\\
& S_{\sigma_2,\tau_1} = \sigma_2 \tau_1 \cdot (\overline{\sigma_2 \tau_1})^{-1} = \sigma_2 \tau_1 \sigma_1^{-1} \sigma_2^{-1},\\
& S_{\sigma_2,\tau_2} = \sigma_2 \tau_2,\\
\end{align*}
\begin{align*}
& S_{\sigma_1 \sigma_2,\sigma_1} = \sigma_1 \sigma_2 \sigma_1 \cdot (\overline{\sigma_1 \sigma_2 \sigma_1})^{-1} = 1,\\
& S_{\sigma_1 \sigma_2,\sigma_2} = \sigma_1 \sigma_2^2 \sigma_1^{-1},\\
& S_{\sigma_1 \sigma_2,\tau_1} = \sigma_1 \sigma_2 \tau_1 \sigma_1^{-1} \sigma_2^{-1} \sigma_1^{-1},\\
&S_{\sigma_1 \sigma_2,\tau_2} = \sigma_1 \sigma_2 \tau_2 \sigma_1^{-1},\\
\end{align*}
\begin{align*}
& S_{\sigma_2 \sigma_1,\sigma_1} = \sigma_2 \sigma_1^2 \sigma_2^{-1},\\
& S_{\sigma_2 \sigma_1,\sigma_2} = \sigma_2 \sigma_1 \sigma_2 \sigma_1^{-1} \sigma_2^{-1} \sigma_1^{-1},\\
& S_{\sigma_2 \sigma_1,\tau_1} = \sigma_2 \sigma_1 \tau_1 \sigma_2^{-1},\\
& S_{\sigma_2 \sigma_1,\tau_2} = \sigma_2 \sigma_1 \tau_2 \sigma_1^{-1} \sigma_2^{-1} \sigma_1^{-1},\\
\end{align*}
\begin{align*}
& S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1} = \sigma_1 \sigma_2 \sigma_1^2 \sigma_2^{-1} \sigma_1^{-1},\\
& S_{\sigma_1 \sigma_2 \sigma_1, \sigma_2} = \sigma_1 \sigma_2 \sigma_1 \sigma_2 \sigma_1^{-1} \sigma_2^{-1},\\
& S_{\sigma_1 \sigma_2 \sigma_1, \tau_1} = \sigma_1 \sigma_2 \sigma_1 \tau_1 \sigma_2^{-1} \sigma_1^{-1},\\
& S_{\sigma_1 \sigma_2 \sigma_1, \tau_2} = \sigma_1 \sigma_2 \sigma_1 \tau_2 \sigma_1^{-1} \sigma_2^{-1}.\\
\end{align*}
Hence, $SP_3$ is generated by elements
$$
S_{1,\tau_1},~~S_{1,\tau_2},~~~S_{\sigma_1,\sigma_1},~~~S_{\sigma_1,\tau_1},~~~S_{\sigma_1,\tau_2},~~~S_{\sigma_2,\sigma_2},~~~S_{\sigma_2,\tau_1},~~~
S_{\sigma_2,\tau_2},~~~S_{\sigma_1 \sigma_2,\sigma_2},~~~S_{\sigma_1 \sigma_2,\tau_1},~~~S_{\sigma_1 \sigma_2,\tau_2},
$$
$$
S_{\sigma_2 \sigma_1,\sigma_1},~~~S_{\sigma_2 \sigma_1,\sigma_2},~~~S_{\sigma_2 \sigma_1,\tau_1},~~~S_{\sigma_2 \sigma_1,\tau_2},~~~
S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1},~~~S_{\sigma_1 \sigma_2 \sigma_1, \sigma_2},~~~S_{\sigma_1 \sigma_2 \sigma_1, \tau_1},~~~S_{\sigma_1 \sigma_2 \sigma_1, \tau_2}.
$$
We see that elements
$$
S_{\sigma_1,\sigma_1},~~~S_{\sigma_2,\sigma_2},~~~S_{\sigma_1 \sigma_2,\sigma_2},~~~S_{\sigma_2 \sigma_1,\sigma_1},~~~S_{\sigma_2 \sigma_1,\sigma_2},~~~
S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1},~~~S_{\sigma_1 \sigma_2 \sigma_1, \sigma_2},
$$
contain only the generators $\sigma_1$ and $\sigma_2$. We will show that these elements generate the pure braid group $P_3$.
Next we find the set of defining relations.
\textbf{1)} Take the relation $r_1 = \sigma_1 \tau_1 \sigma_1^{-1} \tau_1^{-1}$. We considered the relations $r_1 = 1$ and $\sigma_1 r_1 \sigma_1^{-1} = 1$ in the previous case. Conjugating $r_1$ by the other coset representatives, we get
$$
r_{1,\sigma_2} = \sigma_2 r_1 \sigma_2^{-1} = S_{1,\sigma_2} S_{\sigma_2,\sigma_1} S_{\sigma_2 \sigma_1,\tau_1} S_{\sigma_2 \sigma_1,\sigma_1}^{-1} S_{\sigma_2, \tau_1}^{-1} S_{1,\sigma_2}^{-1} = S_{\sigma_2 \sigma_1,\tau_1} S_{\sigma_2 \sigma_1,\sigma_1}^{-1} S_{\sigma_2, \tau_1}^{-1} = 1.
$$
From this relation
$$
S_{\sigma_2, \tau_1} = S_{\sigma_2 \sigma_1,\tau_1} S_{\sigma_2 \sigma_1, \sigma_1}^{-1}.
$$
Hence, we can remove this relation and the generator $S_{\sigma_2, \tau_1}$.
Relation
$$
r_{1,\sigma_1\sigma_2} = \sigma_1 \sigma_2 r_1 \sigma_2^{-1} \sigma_1^{-1} = S_{1,\sigma_1} S_{\sigma_1,\sigma_2} S_{\sigma_1 \sigma_2, \sigma_1} S_{\sigma_1 \sigma_2 \sigma_1,\tau_1} S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1}^{-1} S_{\sigma_1 \sigma_2, \tau_1}^{-1} S_{\sigma_1, \sigma_2}^{-1} S_{1,\sigma_1}^{-1} =
$$
$$
= S_{\sigma_1 \sigma_2 \sigma_1,\tau_1} S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1}^{-1} S_{\sigma_1 \sigma_2, \tau_1}^{-1} = 1.
$$
From this relation
$$
S_{\sigma_1 \sigma_2, \tau_1} = S_{\sigma_1 \sigma_2 \sigma_1,\tau_1} S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1}^{-1}.
$$
Hence, we can remove this relation and the generator $S_{\sigma_1 \sigma_2, \tau_1}$.
From the relation
\[
r_{1,\sigma_2\sigma_1} = S_{\sigma_2\sigma_1,\sigma_1} S_{\sigma_2,\tau_1} S_{\sigma_2\sigma_1,\tau_1}^{-1} = 1
\]
it follows that
\[
S_{\sigma_2\sigma_1,\tau_1} = S_{\sigma_2\sigma_1,\sigma_1} S_{\sigma_2,\tau_1}.
\]
Using the formula for $S_{\sigma_2,\tau_1}$, we get the relation
\[
S_{\sigma_2\sigma_1,\tau_1} S_{\sigma_2\sigma_1,\sigma_1} = S_{\sigma_2\sigma_1,\sigma_1} S_{\sigma_2\sigma_1,\tau_1}.
\]
Relation
$$
r_{1,\sigma_1\sigma_2\sigma_1} = \sigma_1 \sigma_2 \sigma_1 r_1 \sigma_1^{-1} \sigma_2^{-1} \sigma_1^{-1} =
S_{1,\sigma_1} S_{\sigma_1,\sigma_2} S_{\sigma_1 \sigma_2, \sigma_1} S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1}
S_{\sigma_1 \sigma_2,\tau_1} \cdot
$$
$$
\cdot S_{\sigma_1 \sigma_2, \sigma_1}^{-1} S_{\sigma_1 \sigma_2 \sigma_1, \tau_1}^{-1} S_{\sigma_1 \sigma_2, \sigma_1}^{-1} S_{\sigma_1, \sigma_2}^{-1} S_{1,\sigma_1}^{-1} =
$$
$$
= S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1}
S_{\sigma_1 \sigma_2,\tau_1} S_{\sigma_1 \sigma_2 \sigma_1, \tau_1}^{-1} = 1.
$$
Since
$$
S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1} = S_{\sigma_2, \sigma_2},~~~ S_{\sigma_1 \sigma_2,\tau_1} = S_{\sigma_1 \sigma_2 \sigma_1, \tau_1}
S_{\sigma_2, \sigma_2}^{-1},
$$
we get the relation
$$
S_{\sigma_2, \sigma_2} S_{\sigma_1 \sigma_2 \sigma_1, \tau_1} S_{\sigma_2, \sigma_2}^{-1} = S_{\sigma_1 \sigma_2 \sigma_1, \tau_1}.
$$
\medskip
\begin{lemma} \label{l1}
From the relation $r_1 = \sigma_1 \tau_1 \sigma_1^{-1} \tau_1^{-1}$ we obtain 6 relations; applying them, we can remove the generators:
$$
S_{1,\tau_1} = S_{\sigma_1,\tau_1} S_{\sigma_1,\sigma_1}^{-1},~~~S_{\sigma_2, \tau_1} = S_{\sigma_2 \sigma_1,\tau_1} S_{\sigma_2 \sigma_1, \sigma_1}^{-1},~~~S_{\sigma_1 \sigma_2, \tau_1} = S_{\sigma_1 \sigma_2 \sigma_1,\tau_1} S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1}^{-1},
$$
and we get 3 relations:
$$
S_{\sigma_1,\sigma_1} S_{\sigma_1,\tau_1} = S_{\sigma_1,\tau_1} S_{\sigma_1,\sigma_1},
$$
$$
{S_{{\sigma _2}{\sigma _1},{\tau _1}}}{S_{{\sigma _2}{\sigma _1},{\sigma _1}}} = {S_{{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _2}{\sigma _1},{\tau _1}}},
$$
$$
S_{\sigma_2, \sigma_2} S_{\sigma_1 \sigma_2 \sigma_1, \tau_1} S_{\sigma_2, \sigma_2}^{-1} = S_{\sigma_1 \sigma_2 \sigma_1, \tau_1}.
$$
\end{lemma}
\medskip
\textbf{2)} Take the relation $r_2 = \sigma_1 \sigma_2 \sigma_1 \sigma_2^{-1} \sigma_1^{-1} \sigma_2^{-1}$. Then
$$
r_2 = r_{2,1} = S_{1,\sigma_1} S_{\sigma_1,\sigma_2} S_{\sigma_1 \sigma_2, \sigma_1} S_{\sigma_2 \sigma_1, \sigma_2}^{-1}
S_{\sigma_2, \sigma_1}^{-1} S_{1, \sigma_2}^{-1} = S_{\sigma_2 \sigma_1, \sigma_2}^{-1} = 1,
$$
i.e. $S_{\sigma_2 \sigma_1, \sigma_2} = 1$ and we can remove this generator.
Conjugating this relation by $\sigma_1$, we get
$$
r_{2,\sigma_1} = S_{1,\sigma_1} S_{\sigma_1,\sigma_1} S_{1,\sigma_2} S_{\sigma_2, \sigma_1} S_{\sigma_1 \sigma_2 \sigma_1, \sigma_2}^{-1}
S_{\sigma_1 \sigma_2, \sigma_1}^{-1} S_{\sigma_1, \sigma_2}^{-1} S_{1, \sigma_1}^{-1} = S_{\sigma_1, \sigma_1} S_{\sigma_1 \sigma_2 \sigma_1, \sigma_2}^{-1} = 1,
$$
i.e. $S_{\sigma_1 \sigma_2 \sigma_1, \sigma_2} = S_{\sigma_1, \sigma_1}$ and we can remove $S_{\sigma_1 \sigma_2 \sigma_1, \sigma_2}$.
Conjugating $r_2$ by $\sigma_2$, we get
$$
r_{2,\sigma_2} = S_{1,\sigma_2} S_{\sigma_2,\sigma_1} S_{\sigma_2 \sigma_1, \sigma_2} S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1}
S_{\sigma_1, \sigma_2}^{-1} S_{1,\sigma_1}^{-1} S_{\sigma_2, \sigma_2}^{-1} S_{1,\sigma_2}^{-1} = S_{\sigma_2 \sigma_1, \sigma_2} S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1} S_{ \sigma_2, \sigma_2}^{-1} = 1.
$$
Since $S_{\sigma_2 \sigma_1, \sigma_2} = 1$, from this relation it follows that
$S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1} = S_{\sigma_2, \sigma_2}$ and we can remove $S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1}$.
Conjugating $r_2$ by $\sigma_1 \sigma_2$, we get the relation
\[
r_{2,\sigma_1\sigma_2} = S_{\sigma_1\sigma_2\sigma_1,\sigma_2} S_{\sigma_2\sigma_1,\sigma_1} S_{\sigma_1,\sigma_1}^{-1} S_{\sigma_1\sigma_2,\sigma_2}^{-1} = 1,
\]
which gives the relation
\[
S_{\sigma_1\sigma_2\sigma_1,\sigma_2} S_{\sigma_2\sigma_1,\sigma_1} = S_{\sigma_1\sigma_2,\sigma_2} S_{\sigma_1,\sigma_1},
\]
which is equivalent to
$$
{S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _2}{\sigma _1},{\sigma _1}}} = {S_{{\sigma _1}{\sigma _2},{\sigma _2}}}{S_{{\sigma _1},{\sigma _1}}}.
$$
From it we can remove the generator
$$
S_{{\sigma _1}{\sigma _2},{\sigma _2}} = S_{{\sigma _1},{\sigma _1}} S_{{\sigma _2}{\sigma _1},{\sigma _1}} S_{{\sigma _1},{\sigma _1}}^{-1}.
$$
Conjugating by $\sigma_2 \sigma_1$ we get
\[{r_{2,{\sigma _2}{\sigma _1}}} = {S_{{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _2},{\sigma _2}}}S_{{\sigma _1}{\sigma _2},{\sigma _2}}^{ - 1}S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _1}}^{ - 1} = 1\]
or
\[{S_{{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _2},{\sigma _2}}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _1}{\sigma _2},{\sigma _2}}}.\]
Using the previous relations, we get
\[ S_{{\sigma _1},{\sigma _1}} {S_{{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _1},{\sigma _1}}^{-1}} = {S_{{\sigma _2},{\sigma _2}}^{-1}} {S_{{\sigma _2}{\sigma _1}, {\sigma _1}}}{S_{{\sigma _2},{\sigma _2}}}.\]
We see that this is a relation in $P_3$:
$$
a_{12} a_{13} a_{12}^{-1} = a_{23}^{-1} a_{13} a_{23}.
$$
Take the relation:
\[{r_{2,{\sigma _1}{\sigma _2}{\sigma _1}}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _1}{\sigma _2},{\sigma _2}}}{S_{{\sigma _1},{\sigma _1}}}S_{{\sigma _2},{\sigma _2}}^{ - 1}S_{{\sigma _2}{\sigma _1},{\sigma _1}}^{ - 1}S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _2}}^{ - 1} = 1,\]
that is equivalent to
\[{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _1}{\sigma _2},{\sigma _2}}}{S_{{\sigma _1},{\sigma _1}}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _2}}}{S_{{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _2},{\sigma _2}}}\]
or
\[{S_{{\sigma _2},{\sigma _2}}}{S_{{\sigma _1}{\sigma _2},{\sigma _2}}}{S_{{\sigma _1},{\sigma _1}}} = {S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _2},{\sigma _2}}}.\]
Hence we get
\[{S_{{\sigma _2},{\sigma _2}}}{S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _2}{\sigma _1},{\sigma _1}}} = {S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _2},{\sigma _2}}}.\]
This is a relation in $P_3$:
$$
a_{23} a_{12} a_{13} = a_{12} a_{13} a_{23}.
$$
Taking into account the previous relation we get the following conjugation rule
$$
a_{12} a_{23} a_{12}^{-1} = a_{23}^{-1} a_{13}^{-1} a_{23} a_{13} a_{23}.
$$
\medskip
\begin{lemma} \label{l2}
From the relation $r_2 = \sigma_1 \sigma_2 \sigma_1 \sigma_2^{-1} \sigma_1^{-1} \sigma_2^{-1}$ we obtain 6 relations; applying them, we can remove 4 generators:
$$
S_{\sigma_2 \sigma_1, \sigma_2} = 1,~~~S_{\sigma_1 \sigma_2 \sigma_1, \sigma_2} = S_{\sigma_1, \sigma_1},~~~S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1} = S_{\sigma_2, \sigma_2},~~~S_{{\sigma _1}{\sigma _2},{\sigma _2}} = S_{{\sigma _1},{\sigma _1}} S_{{\sigma _2}{\sigma _1},{\sigma _1}} S_{{\sigma _1},{\sigma _1}}^{-1},
$$
and we get 2 relations:
$$
S_{{\sigma _1},{\sigma _1}} {S_{{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _1},{\sigma _1}}^{-1}} = {S_{{\sigma _2},{\sigma _2}}^{-1}} {S_{{\sigma _2}{\sigma _1}, {\sigma _1}}}{S_{{\sigma _2},{\sigma _2}}},
$$
$$
{S_{{\sigma _2},{\sigma _2}}}{S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _2}{\sigma _1},{\sigma _1}}} = {S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _2}{\sigma _1},{\sigma _1}}}{S_{{\sigma _2},{\sigma _2}}}.
$$
\end{lemma}
From the analysis of the relations $r_{2,\lambda}$ we obtain the following.
\begin{corollary}
The generators
$$
S_{{\sigma _1},{\sigma _1}} = a_{12},~~~S_{{\sigma _2}{\sigma _1}, {\sigma _1}} = a_{13},~~~S_{{\sigma _2},{\sigma _2}} = a_{23},
$$
satisfy relations
$$
a_{12} a_{13} a_{12}^{-1} = a_{23}^{-1} a_{13} a_{23},~~~a_{12} a_{23} a_{12}^{-1} = a_{23}^{-1} a_{13}^{-1} a_{23} a_{13} a_{23}.
$$
\end{corollary}
\medskip
\textbf{3)} Consider the relation $r_3 = \sigma_2 \tau_2 \sigma_2^{-1} \tau_2^{-1}$. From it we get
\[
r_{3,1} = S_{\sigma_2,\tau_2} S_{\sigma_2,\sigma_2}^{-1} S_{1,\tau_2}^{-1} = 1,
\]
and we can remove the generator
\[
S_{1,\tau_2} = S_{\sigma_2,\tau_2} S_{\sigma_2,\sigma_2}^{-1}.
\]
Conjugating by $\sigma_1$ we get
\[
r_{3,\sigma_1} = S_{\sigma_1\sigma_2,\tau_2} S_{\sigma_1\sigma_2,\sigma_2}^{-1} S_{\sigma_1,\tau_2}^{-1} = 1,
\]
or
\[
S_{\sigma_1,\tau_2} = S_{\sigma_1\sigma_2,\tau_2} S_{\sigma_1\sigma_2,\sigma_2}^{-1}.
\]
Taking into account the formula
$$
S_{{\sigma _1}{\sigma _2},{\sigma _2}} = S_{{\sigma _1},{\sigma _1}} S_{{\sigma _2}{\sigma _1},{\sigma _1}} S_{{\sigma _1},{\sigma _1}}^{-1},
$$
we can remove the generator
$$
{S_{{\sigma _1},{\tau _2}}} = {S_{{\sigma _1}{\sigma _2},{\tau _2}}} S_{{\sigma _1},{\sigma _1}} S_{{\sigma _2}{\sigma _1},{\sigma _1}}^{-1} S_{{\sigma _1},{\sigma _1}}^{-1}.
$$
Conjugating by $\sigma_2$ we get
\[{r_{3,{\sigma _2}}} = {S_{{\sigma _2},{\sigma _2}}}{S_{1,{\tau _2}}}S_{{\sigma _2},{\tau _2}}^{ - 1} =
{S_{{\sigma _2},{\sigma _2}}} ( {S_{\sigma_2,{\tau _2}}} S_{{\sigma _2},{\sigma _2}}^{ - 1} ) S_{{\sigma _2},{\tau _2}}^{ - 1} =1\]
and we have the relation
\[ {S_{{\sigma _2},{\sigma _2}}} {S_{{\sigma _2},{\tau _2}}} = {S_{{\sigma _2},{\tau _2}}} {S_{\sigma_2, {\sigma _2}}}.\]
Conjugating by $\sigma_1 \sigma_2$ we get
\[{r_{3,{\sigma _1}{\sigma _2}}} = {S_{{\sigma _1}{\sigma _2},{\sigma _2}}}{S_{{\sigma _1},{\tau _2}}}S_{{\sigma _1}{\sigma _2},{\tau _2}}^{ - 1} = 1\]
or
\[{S_{{\sigma _1}, {\sigma_1}}} {S_{{\sigma _2}{\sigma _1},{\sigma _1}}} {S_{{\sigma _1}, {\sigma_1}}^{-1}} \cdot {S_{{\sigma _1}{\sigma _2},{\tau _2}}} \cdot
{S_{{\sigma _1}, {\sigma_1}}} {S_{{\sigma _2}{\sigma _1},{\sigma _1}}^{-1}} {S_{{\sigma _1}, {\sigma_1}}^{-1}} = {S_{{\sigma _1}{\sigma _2},{\tau _2}}}.\]
This is a relation in $SP_3$.
Conjugating by $\sigma_2 \sigma_1$ we get
\[{r_{3,{\sigma _2}{\sigma _1}}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}}S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _2}}^{ - 1}S_{{\sigma _2}{\sigma _1},{\tau _2}}^{ - 1} = 1.\]
We can remove the generator
\[{S_{{\sigma _2}{\sigma _1},{\tau _2}}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}}S_{{\sigma _1},{\sigma _1}}^{ - 1}.\]
Conjugating by $\sigma_1 \sigma_2 \sigma_1$ we get
\[{r_{3,{\sigma _1}{\sigma _2}{\sigma _1}}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _2}}}{S_{{\sigma _2}{\sigma _1},{\tau _2}}}S_{{\sigma _2}{\sigma _1},{\sigma _2}}^{ - 1}S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}^{ - 1} = 1\]
or
\[{r_{3,{\sigma _1}{\sigma _2}{\sigma _1}}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _2}}}{S_{{\sigma _2}{\sigma _1},{\tau _2}}}S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}^{ - 1} = 1,\]
i.e.
\[{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}} = {S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _2}{\sigma _1},{\tau _2}}}.\]
Using the previous relation we get
\[{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}}{S_{{\sigma _1},{\sigma _1}}} = {S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}}.\]
\medskip
\begin{lemma} \label{l3}
From the relation $r_3 = \sigma_2 \tau_2 \sigma_2^{-1} \tau_2^{-1}$ we obtain 6 relations; applying them, we can remove 3 generators:
$$
{S_{1,{\tau _2}}} = {S_{{\sigma _2},{\tau _2}}}S_{{\sigma _2},{\sigma _2}}^{ - 1},~~~{S_{{\sigma _1},{\tau _2}}} = {S_{{\sigma _1}{\sigma _2},{\tau _2}}} S_{{\sigma _1},{\sigma _1}} S_{{\sigma _2}{\sigma _1},{\sigma _1}}^{-1} S_{{\sigma _1},{\sigma _1}}^{-1},~~~
{S_{{\sigma _2}{\sigma _1},{\tau _2}}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}}S_{{\sigma _1},{\sigma _1}}^{ - 1},
$$
and we get 3 relations:
\[ {S_{{\sigma _2},{\sigma _2}}} {S_{{\sigma _2},{\tau _2}}} = {S_{{\sigma _2},{\tau _2}}} {S_{\sigma_2, {\sigma _2}}},\]
$$
{S_{{\sigma _1}, {\sigma_1}}} {S_{{\sigma _2}{\sigma _1},{\sigma _1}}} {S_{{\sigma _1}, {\sigma_1}}^{-1}} \cdot {S_{{\sigma _1}{\sigma _2},{\tau _2}}} \cdot
{S_{{\sigma _1}, {\sigma_1}}} {S_{{\sigma _2}{\sigma _1},{\sigma _1}}^{-1}} {S_{{\sigma _1}, {\sigma_1}}^{-1}} = {S_{{\sigma _1}{\sigma _2},{\tau _2}}},
$$
$$
{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}}{S_{{\sigma _1},{\sigma _1}}} = {S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}}.
$$
\end{lemma}
\textbf{4)} Take the relation $r_4 = \sigma_1 \sigma_2 \tau_1 \sigma_2^{-1} \sigma_1^{-1} \tau_2^{-1}$ and rewrite it in the new generators:
\[{r_{4,1}} = {S_{{\sigma _1}{\sigma _2},{\tau _1}}}S_{1,{\tau _2}}^{ - 1} = 1\]
or
\[{S_{1,{\tau _2}}} = {S_{{\sigma _1}{\sigma _2},{\tau _1}}}.\]
Using the formulas for these generators from Lemmas \ref{l1} and \ref{l3} we get
\[{S_{{\sigma _2},{\tau _2}}} {S_{{\sigma _2},{\sigma _2}}^{-1}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _1}}}
{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _1}}^{-1}}.\]
Since
$$
{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _1}}} = {S_{{\sigma _2}, {\sigma _2}}},
$$
we get the relation
$$
{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _1}}} = {S_{{\sigma _2},{\tau _2}}}.
$$
Applying this relation, we can remove ${S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _1}}}$ from the generating set of $SP_3$.
Conjugating by $\sigma_1$ we get
\[{r_{4,{\sigma _1}}} = {S_{{\sigma _1},{\sigma _1}}}{S_{{\sigma _2},{\tau _1}}}S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _2}}^{ - 1} S_{{\sigma _1},{\tau _2}}^{ - 1} =
S_{{\sigma _1},{\sigma _1}} \cdot S_{{\sigma _2}{\sigma _1}, {\tau _1}}S_{{\sigma _2}{\sigma _1},{\sigma _1}}^{-1} \cdot S_{{\sigma _1},{\sigma _1}}^{ - 1} \cdot S_{{\sigma _1},{\sigma _1}} S_{{\sigma _2}{\sigma _1},{\sigma _1}} S_{{\sigma _1},{\sigma _1}}^{ - 1}
S_{{\sigma _1}{\sigma _2}, {\tau_2}}^{ - 1}= 1\]
and we can remove the generator
$$
S_{{\sigma _1}{\sigma _2}, {\tau _2}} = S_{{\sigma _1},{\sigma _1}} S_{{\sigma _2}{\sigma _1}, {\tau _1}} S_{{\sigma _1},{\sigma _1}}^{ - 1}.
$$
Next relation
\[{r_{4,{\sigma _2}}} = {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _1}}} S_{{\sigma _2},{\tau _2}}^{ - 1} = {S_{{\sigma _2},{\tau _2}}} S_{{\sigma _2},{\tau _2}}^{ - 1} = 1\]
is the trivial relation.
Also, it is easy to see that ${r_{4,{\sigma _1}{\sigma _2}}} = 1$ is the trivial relation.
Consider the relation
\[{r_{4,{\sigma _2}{\sigma _1}}} = {S_{{\sigma _2}{\sigma _1},{\sigma _1}}} S_{{\sigma _2}, {\sigma _2}} {S_{{1}, {\tau _1}}}
S_{{\sigma _1}{\sigma _2}, {\sigma _2}}^{ - 1} S_{{\sigma _1}{\sigma _2}{\sigma _1}, {\sigma _1}}^{ - 1} S_{{\sigma _2}{\sigma _1},{\tau _2}}^{ - 1} = 1.\]
Using formulas from the previous lemmas, we get
\[{S_{{\sigma _2}{\sigma _1}, {\sigma _1}}} {S_{{\sigma _2},{\sigma _2}}} \cdot {S_{{\sigma _1},{\tau _1}}} S_{{\sigma _1},{\sigma _1}}^{ - 1} \cdot
S_{{\sigma _1},{\sigma _1}} S_{{\sigma _2}{\sigma _1}, {\sigma _1}}^{ - 1} S_{{\sigma _1}, {\sigma _1}}^{ - 1}
S_{{\sigma _2}, {\sigma _2}}^{ - 1} \cdot
{S_{{\sigma _1}, {\sigma _1}}} {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}^{-1}} = 1.\]
Applying this relation we can remove
$$
S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}} = {S_{{\sigma _2}{\sigma _1}, {\sigma _1}}} {S_{{\sigma _2},{\sigma _2}}} {S_{{\sigma _1},{\tau _1}}} S_{{\sigma _2}{\sigma _1}, {\sigma _1}}^{ - 1} S_{{\sigma _1}, {\sigma _1}}^{ - 1}
S_{{\sigma _2}, {\sigma _2}}^{ - 1}
{S_{{\sigma _1}, {\sigma _1}}}.
$$
The relation $r_{4,{\sigma _1}{\sigma _2}{\sigma _1}} = 1$ gives the trivial relation.
Hence, we have proven
\medskip
\begin{lemma} \label{l4}
From the relation $r_4 = \sigma_1 \sigma_2 \tau_1 \sigma_2^{-1} \sigma_1^{-1} \tau_2^{-1}$ we obtain 3 non-trivial relations; applying them, we can remove 3 generators:
$$
{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _1}}} = {S_{{\sigma _2},{\tau _2}}},~~~S_{{\sigma _1}{\sigma _2}, {\tau _2}} = S_{{\sigma _1},{\sigma _1}} S_{{\sigma _2}{\sigma _1}, {\tau _1}} S_{{\sigma _1},{\sigma _1}}^{ - 1},
$$
$$
S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}} = {S_{{\sigma _2}{\sigma _1}, {\sigma _1}}} {S_{{\sigma _2},{\sigma _2}}} {S_{{\sigma _1},{\tau _1}}} S_{{\sigma _2}{\sigma _1}, {\sigma _1}}^{ - 1} S_{{\sigma _1}, {\sigma _1}}^{ - 1}
S_{{\sigma _2}, {\sigma _2}}^{ - 1}
{S_{{\sigma _1}, {\sigma _1}}}.
$$
\end{lemma}
\medskip
\textbf{5)} Take the relation $r_5 = \sigma_2 \sigma_1 \tau_2 \sigma_1^{-1} \sigma_2^{-1} \tau_1^{-1}$ and rewrite it in the new generators:
\[{r_{5,1}} = S_{{\sigma _2}{\sigma _1},{\tau _2}} S_{1,{\tau _1}}^{ - 1} =
S_{{\sigma _1}{\sigma _2}{\sigma _1}, {\tau _2}} S_{{\sigma _1},{\tau _1}}^{ - 1}= 1.\]
Using the formulas from the previous lemmas we get the relation
$$
(a_{13} a_{23}) S_{{\sigma _1},{\tau _1}} = S_{{\sigma _1},{\tau _1}} (a_{13} a_{23}).
$$
Relation
\[{r_{5,{\sigma _1}}} = S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}
S_{{\sigma _1},{\tau _1}}^{ - 1} = 1\]
is the same as the previous relation.
Relation
\[{r_{5,{\sigma _2}}} = {S_{{\sigma _2},{\sigma _2}}} {S_{\sigma_1,{\tau _2}}} S_{{\sigma _1}{\sigma _2}{\sigma _1}, {\sigma _1}}^{ - 1} S_{{\sigma _2}, {\tau _1}}^{ - 1} = {S_{{\sigma _2},{\sigma _2}}} {S_{{\sigma _1},{\tau _2}}} S_{{\sigma _2}, {\sigma _2}}^{ - 1} S_{{\sigma _2}, {\tau _1}}^{ - 1}.\]
Using the formulas from the previous lemmas we get the relation
$$
S_{{\sigma _2} {\sigma _1}, {\tau _1}} a_{13}^{-1} = a_{23} a_{12} (S_{{\sigma _2} {\sigma _1}, {\tau _1}} a_{13}^{-1}) a_{12}^{-1} a_{23}^{-1}.
$$
From the relation ${r_{5,{\sigma _1}{\sigma _2}}} = 1$ follows the relation
$$
a_{12}^{-1} S_{{\sigma _2}, {\tau _2}} a_{12} = a_{13} S_{{\sigma _2}, {\tau _2}} a_{13}^{-1}.
$$
From $r_{5,{\sigma _2}{\sigma _1}} = 1$ follows the relation
$$
a_{12} S_{{\sigma _2}{\sigma _1}, {\tau _1}} a_{12}^{-1} = a_{23}^{-1} S_{{\sigma _2}{\sigma _1}, {\tau _1}} a_{23}.
$$
From ${r_{5,{\sigma _1}{\sigma _2}{\sigma _1}}} = 1$ follows the relation
$$
a_{12}^{-1} S_{{\sigma _2}, {\tau _2}} a_{12} = a_{13} S_{{\sigma _2}, {\tau _2}} a_{13}^{-1}
$$
and we see that it is the same relation which follows from ${r_{5,{\sigma _1}{\sigma _2}}} = 1$.
Hence we have proven
\medskip
\begin{lemma} \label{l5}
From the relation $r_5 = \sigma_2 \sigma_1 \tau_2 \sigma_1^{-1} \sigma_2^{-1} \tau_1^{-1}$ we obtain the following relations:
$$
(a_{13} a_{23}) S_{{\sigma _1},{\tau _1}} = S_{{\sigma _1},{\tau _1}} (a_{13} a_{23}),
$$
$$
S_{{\sigma _2} {\sigma _1}, {\tau _1}} a_{13}^{-1} = a_{23} a_{12} (S_{{\sigma _2} {\sigma _1}, {\tau _1}} a_{13}^{-1}) a_{12}^{-1} a_{23}^{-1},
$$
$$
a_{12}^{-1} S_{{\sigma _2}, {\tau _2}} a_{12} = a_{13} S_{{\sigma _2}, {\tau _2}}a_{13}^{-1},
$$
$$
a_{12} S_{{\sigma _2}{\sigma _1}, {\tau _1}} a_{12}^{-1} = a_{23}^{-1} S_{{\sigma _2}{\sigma _1}, {\tau _1}} a_{23}.
$$
\end{lemma}
\medskip
Let us introduce the following notation:
$$
a_{12} = S_{{\sigma _1},{\sigma _1}} = \sigma_1^2,~~~a_{13} = S_{{\sigma _2}{\sigma _1}, {\sigma _1}} = \sigma_2 \sigma_1^2 \sigma_2^{-1},~~~a_{23} = S_{{\sigma _2},{\sigma _2}} = \sigma_2^2,
$$
$$
b_{12} = S_{{\sigma _1}, {\tau _1}} = \sigma _1 \tau _1,~~~{b_{13}} = {S_{{\sigma _2}{\sigma _1},{\tau _1}}} = {\sigma _2}{\sigma _1}{\tau _1}\sigma _2^{ - 1},~~~b_{23} = S_{{\sigma _2}, {\tau _2}} = \sigma _2 \tau _2.
$$
Then we can express the other generators of $SP_3$ in terms of these.
\begin{lemma} The following equalities hold:
\begin{align*}
& {S_{1,{\tau _1}}} = {\tau _1} \cdot \sigma _1^{ - 1} = {b_{12}}a_{12}^{ - 1},\\
& {S_{1,{\tau _2}}} = {\tau _2} \cdot \sigma _2^{ - 1} = {b_{23}}a_{23}^{ - 1},\\
& {S_{{\sigma _1},{\sigma _1}}} = \sigma _1^2 = {a_{12}},\\
&{S_{{\sigma _1},{\tau _1}}} = {\sigma _1}{\tau _1} = {b_{12}},\\
&{S_{{\sigma _1},{\tau _2}}} = {\sigma _1}{\tau _2}\sigma _2^{ - 1}\sigma _1^{ - 1} = {a_{23}^{-1}}{b_{13}}a_{13}^{ - 1}a_{23}, \\
&{S_{{\sigma _2},{\sigma _2}}} = \sigma _2^2 = {a_{23}}, \\
& {S_{{\sigma _2},{\tau _1}}} = {\sigma _2}{\tau _1}\sigma _1^{ - 1}\sigma _2^{ - 1} = {b_{13}}a_{13}^{ - 1},\\
& {S_{{\sigma _2},{\tau _2}}} = {\sigma _2}{\tau _2} = {b_{23}},\\
& {S_{{\sigma _1}{\sigma _2},{\sigma _2}}} = {\sigma _1}\sigma _2^2\sigma _1^{ - 1} = {a_{23}^{ - 1}}{a_{13}}a_{23},\\
& {S_{{\sigma _1}{\sigma _2},{\tau _1}}} = {\sigma _1}{\sigma _2}{\tau _1}\sigma _1^{ - 1}\sigma _2^{ - 1}\sigma _1^{ - 1} = {b_{23}}a_{23}^{ - 1},\\
&{S_{{\sigma _1}{\sigma _2},{\tau _2}}} = {\sigma _1}{\sigma _2}{\tau _2}\sigma _1^{ - 1} = {a_{23}^{ - 1}}{b_{13}}a_{23},\\
& {S_{{\sigma _2}{\sigma _1},{\sigma _1}}} = {\sigma _2}\sigma _1^2\sigma _2^{ - 1} = {a_{13}},\\
& {S_{{\sigma _2}{\sigma _1},{\tau _1}}} = {\sigma _2}{\sigma _1}{\tau _1}\sigma _2^{ - 1} = {b_{13}},\\
& {S_{{\sigma _2}{\sigma _1},{\tau _2}}} = {\sigma _2}{\sigma _1}{\tau _2}\sigma _1^{ - 1}\sigma _2^{ - 1}\sigma _1^{ - 1} = {b_{12}}a_{12}^{ - 1},\\
&{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _1}}} = {\sigma _1}{\sigma _2}\sigma _1^2\sigma _2^{ - 1}\sigma _1^{ - 1} = {a_{23}},\\
& {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\sigma _2}}} = {\sigma _1}{\sigma _2}{\sigma _1}{\sigma _2}\sigma _1^{ - 1}\sigma _2^{ - 1} = {a_{12}},\\
&{S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _1}}} = {\sigma _1}{\sigma _2}{\sigma _1}{\tau _1}\sigma _2^{ - 1}\sigma _1^{ - 1} = {b_{23}},\\
& {S_{{\sigma _1}{\sigma _2}{\sigma _1},{\tau _2}}} = {\sigma _1}{\sigma _2}{\sigma _1}{\tau _2}\sigma _1^{ - 1}\sigma _2^{ - 1} = {b_{12}}.\\
\end{align*}
\end{lemma}
The group $SP_3$ has the following presentation.
\begin{theorem} \label{t1}
The singular pure braid group $SP_3$ is generated by elements
$$
a_{12},~~a_{13},~~a_{23},~~b_{12},~~b_{13},~~b_{23},
$$
and is defined by relations:
$$
a_{12} a_{13} a_{12}^{-1} = a_{23}^{-1} a_{13} a_{23},~~~a_{12} a_{23} a_{12}^{-1} = a_{23}^{-1} a_{13}^{-1} a_{23} a_{13} a_{23},
$$
-- these are the relations in $P_3$;
$$
a_{12} b_{12} = b_{12} a_{12}
$$
-- this is the relation in $SP_2$;
$$
a_{13} b_{13} = b_{13} a_{13},
$$
$$
a_{23} b_{23} = b_{23} a_{23},
$$
$$
b_{12} (a_{13} a_{23}) b_{12}^{-1} = a_{13} a_{23},
$$
$$
a_{12} b_{13} a_{12}^{-1} = a_{23}^{-1} b_{13} a_{23},
$$
$$
a_{12} b_{23} a_{12}^{-1} = a_{23}^{-1} a_{13}^{-1} b_{23} a_{13} a_{23}.
$$
\end{theorem}
\begin{corollary} \label{c2}
From these relations we obtain the conjugation rules
$$
a_{12}^{-1} a_{13} a_{12} = a_{13} a_{23} a_{13} a_{23}^{-1} a_{13}^{-1},~~~a_{12}^{-1} a_{23} a_{12}= a_{13} a_{23} a_{13}^{-1},
$$
$$
a_{12}^{-1} b_{13} a_{12} = a_{13} a_{23} b_{13} a_{23}^{-1} a_{13}^{-1},~~~a_{12}^{-1} b_{23} a_{12}= a_{13} b_{23} a_{13}^{-1}.
$$
\end{corollary}
\section{Some properties of $SP_3$}
\medskip
\subsection{Conjugations by elements of $SG_3$} Since $SP_3$ is normal in $SG_3$, conjugation by elements of $SG_3$ induces automorphisms of $SP_3$. The following proposition gives formulas for conjugating the generators $a_{12}$, $a_{13}$, $a_{23}$, $b_{12}$, $b_{13}$, $b_{23}$ by the generators of $SG_3$.
\begin{proposition} \label{p4.1}
Generators of $SG_3$ act on the generators of $SP_3$ by the rules:
-- action of $\sigma_1$:
$$
a_{12}^{\sigma_1} = a_{12},~~~a_{13}^{\sigma_1} = a_{13} a_{23} a_{13}^{-1},~~~ a_{23}^{\sigma_1} = a_{13};
$$
$$
b_{12}^{\sigma_1} = b_{12},~~~b_{13}^{{\sigma _1}} = {a_{13}}{b_{23}}a_{13}^{ - 1},~~~ b_{23}^{\sigma _1} = b_{13};
$$
-- action of $\sigma_2$:
$$
a_{12}^{\sigma_2} = a_{23}^{-1} a_{13} a_{23},~~~a_{13}^{\sigma_2} = a_{12},~~~ a_{23}^{\sigma_2} = a_{23};
$$
$$
b_{12}^{{\sigma _2}} = a_{23}^{ - 1}{b_{13}}{a_{23}},~~~b_{13}^{{\sigma _2}} = {b_{12}},~~~ b_{23}^{\sigma_2} = b_{23};
$$
-- action of $\tau_1$:
$$
a_{12}^{\tau_1} = a_{12},~~~a_{13}^{\tau_1} = b_{12}^{-1} a_{23} b_{12},~~~a_{23}^{\tau_1} = b_{12}^{-1} a_{23}^{-1} a_{13} a_{23} b_{12},
$$
$$
b_{12}^{\tau_1} = b_{12},~~~b_{13}^{{\tau _1}} = b_{12}^{ - 1}{b_{23}}{b_{12}},~~~b_{23}^{{\tau _1}} = b_{12}^{ - 1}{a_{12}}{b_{13}}a_{12}^{ - 1}{b_{12}},
$$
-- action of $\tau_2$:
$$
a_{12}^{\tau _2} = b_{23}^{ - 1}{a_{13}}{b_{23}},~~~
a_{13}^{\tau _2} = b_{23}^{ - 1}{a_{23}}{a_{12}}a_{23}^{ - 1}{b_{23}},~~~a_{23}^{\tau _2} = a_{23},
$$
$$
b_{12}^{\tau _2} = b_{23}^{ - 1}{b_{13}}{b_{23}},~~~b_{13}^{\tau _2} = b_{23}^{-1} a_{23} b_{12} a_{23}^{ - 1}{b_{23}},~~~
b_{23}^{\tau_2} = b_{23}.
$$
\end{proposition}
\begin{proof}
The formulas of conjugations of $a_{12}$, $a_{13}$, $a_{23}$ by $\sigma_1$ and $\sigma_2$ follow from conjugation rules in $B_3$.
Let us prove the other formulas:
$$
\tau_1^{-1} a_{13} \tau_1 = \tau_1^{-1} \sigma_2 \sigma_1 \sigma_1 \sigma_2^{-1} \tau_1 = S_{\sigma_1, \tau_1}^{-1}
S_{\sigma_1 \sigma_2 \sigma_1, \sigma_1}S_{\sigma_1, \tau_1} = b_{12}^{-1} a_{23} b_{12},
$$
$$
\tau_1^{-1} a_{23} \tau_1 = \tau_1^{-1} \sigma_2 \sigma_2 \tau_1 = S_{\sigma_1, \tau_1}^{-1}
S_{\sigma_1 \sigma_2, \sigma_2} S_{\sigma_1, \tau_1} = b_{12}^{-1} a_{12} a_{13} a_{12}^{-1} b_{12} = b_{12}^{-1} a_{23}^{-1} a_{13} a_{23} b_{12},
$$
\begin{align*}
\tau_2^{-1} a_{13} \tau_2 &= \tau_2^{-1} \sigma_2 \sigma_1 \sigma_1 \sigma_2^{-1} \tau_2 = S_{\sigma_2,\tau_2}^{-1} S_{\sigma_1\sigma_2\sigma_1,\sigma_1} S_{\sigma_1\sigma_2\sigma_1,\sigma_2} S_{\sigma_1\sigma_2\sigma_1,\sigma_1}^{-1} S_{\sigma_2,\tau_2}\\
&= S_{\sigma_2,\tau_2}^{-1} S_{\sigma_2,\sigma_2} S_{\sigma_1,\sigma_1} S_{\sigma_2,\sigma_2}^{-1} S_{\sigma_2,\tau_2} = b_{23}^{-1} a_{23} a_{12} a_{23}^{-1} b_{23},
\end{align*}
$$
\tau_2^{-1} b_{12} \tau_2 = \tau_2^{-1} \sigma_1 \tau_1 \tau_2 = S_{\sigma_2,\tau_2}^{-1} S_{\sigma_2\sigma_1,\tau_1} S_{\sigma_2,\tau_2} = b_{23}^{-1} b_{13} b_{23},
$$
\begin{align*}
\tau_2^{-1} b_{13} \tau_2 &= \tau_2^{-1} \sigma_2 \sigma_1 \tau_1 \sigma_2^{-1} \tau_2 = S_{\sigma_2,\tau_2}^{-1} S_{\sigma_1\sigma_2\sigma_1,\sigma_1} S_{\sigma_1,\tau_1} S_{\sigma_1\sigma_2\sigma_1,\sigma_1}^{-1} S_{\sigma_2,\tau_2}\\
&= S_{\sigma_2,\tau_2}^{-1} S_{\sigma_2,\sigma_2} S_{\sigma_1,\tau_1} S_{\sigma_2,\sigma_2}^{-1} S_{\sigma_2,\tau_2} = b_{23}^{-1} a_{23} b_{12} a_{23}^{-1} b_{23}.
\end{align*}
\end{proof}
\subsection{Decomposition of $SP_3$} Define the following subgroups of $SP_3$:
$$
V_2 = SP_2 = \langle a_{12}, b_{12} \rangle,~~~V_3 = \langle a_{13}, a_{23}, b_{13}, b_{23} \rangle,~~~\widetilde{V}_3 = \langle V_3, b_{12} \rangle.
$$
As was proved in Lemma \ref{l3.1},
$$
V_2 = SP_2 = \langle a_{12}, b_{12}~||~ a_{12} b_{12} = b_{12} a_{12} \rangle \cong \mathbb{Z} \times \mathbb{Z}.
$$
There is a homomorphism $SP_3 \to V_2$ that sends the generators $a_{13}, a_{23}, b_{13}, b_{23}$ to 1 and fixes the generators $a_{12}, b_{12}$. It is easy to see that the image of this homomorphism is $SP_2$ and that the kernel is the normal closure of $V_3$ in $SP_3$.
The following theorem describes a structure of $SP_3$.
\begin{theorem} \label{th}
1) $SP_3 = \widetilde{V}_3 \leftthreetimes \mathbb{Z}$, where $\mathbb{Z} = \langle a_{12} \rangle$;
2) $\widetilde{V}_3$ has a presentation
$$
\widetilde{V}_3 = \langle a_{13}, a_{23}, b_{13}, b_{23}, b_{12}~||~ [a_{13}, b_{13}] = [a_{23}, b_{23}] = [a_{13} a_{23}, b_{12}]= 1 \rangle
$$
and is an HNN-extension with base group $V_3$, stable letter $b_{12}$ and associated subgroups $A \cong B = \langle a_{13} a_{23} \rangle$ and identity isomorphism $A \to B$:
$$
\widetilde{V}_3 = \langle V_3, b_{12}~|~rel(V_3),~~~b_{12}^{-1} (a_{13} a_{23}) b_{12} = a_{13} a_{23} \rangle,
$$
where $rel(V_3)$ is the set of relations in $V_3$.
3) $V_3 = \langle a_{13}, a_{23}, b_{13}, b_{23} ~||~[a_{13}, b_{13}] = [a_{23}, b_{23}] = 1 \rangle \cong \mathbb{Z}^2 * \mathbb{Z}^2$.
\end{theorem}
\begin{proof}
1) It follows from the defining relations of $SP_3$ that $\widetilde{V}_3$ is normal in $SP_3$ and that $\langle a_{12} \rangle$ acts on this subgroup by automorphisms.
Since $SP_3 = \widetilde{V}_3 \cdot \langle a_{12} \rangle$ and $\widetilde{V}_3 \cap \langle a_{12} \rangle = 1$, we have the desired decomposition.
2) To find a presentation of $\widetilde{V}_3$, define an endomorphism $\varphi : SP_3 \to \langle a_{12} \rangle$ that sends the generators $b_{12}, a_{13}, a_{23}, b_{13}, b_{23}$ to 1 and sends the generator $a_{12}$ to itself. To find $\mathrm{Ker}\,\varphi$ we use the Reidemeister--Schreier method. The kernel
is the normal closure of $\widetilde{V}_3$ in $SP_3$; since $\widetilde{V}_3$ is normal in $SP_3$, it follows that $\mathrm{Ker}\,\varphi = \widetilde{V}_3$. To find defining relations of $\widetilde{V}_3$, take the set $\Lambda = \{ a_{12}^k ~|~k \in \mathbb{Z} \}$ as coset representatives of $\widetilde{V}_3$ in $SP_3$. Then $\widetilde{V}_3$ is defined by the relations $\lambda r \lambda^{-1} = 1$, where $r=1$ is a relation in $SP_3$ and $\lambda \in \Lambda$. It follows from Theorem \ref{t1} and Corollary \ref{c2} that all these relations are consequences of
$$
[a_{13}, b_{13}] = [a_{23}, b_{23}] = [a_{13} a_{23}, b_{12}]= 1.
$$
The decomposition as an HNN-extension follows from the definition.
3) By the properties of HNN-extensions, the base group embeds into the HNN-extension.
\end{proof}
\subsection{The center of $SG_n$ }\label{center}
It is well-known that the center $Z(B_n) = Z(P_n)$ is an infinite cyclic group generated by
$$
\delta_n = (\sigma_1 \sigma_2 \ldots \sigma_{n-1})^n = a_{12} (a_{13} a_{23}) \ldots (a_{1n} a_{2n} \ldots a_{n-1,n}).
$$
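For $n=3$, this identity can be verified directly (a short check we include for convenience). Writing $\Delta = \sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$ and using $\sigma_1\Delta = \Delta\sigma_2$, we get
$$
a_{12} a_{13} a_{23} = \sigma_1^2 (\sigma_2 \sigma_1^2 \sigma_2^{-1}) \sigma_2^2 = \sigma_1 (\sigma_1\sigma_2\sigma_1) \sigma_1 \sigma_2 = \sigma_1 \Delta \sigma_1 \sigma_2 = \Delta \sigma_2 \sigma_1 \sigma_2 = \Delta^2 = (\sigma_1\sigma_2)^3 = \delta_3.
$$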
It was shown that $Z(B_n) \cong Z(SG_n)$ (see \cite{FRZ,V4}). On the other hand, M.~V.~Neshchadim \cite{N1,N2} proved that $Z(P_n)$ is a direct factor in $P_n$.
\begin{question}
Is it true that $Z(SG_n)$ is a direct factor in $SP_n$?
\end{question}
We will prove that this is true for $n = 3$. To do this, set $\delta = \delta_3 = a_{12} a_{13} a_{23}$. Eliminating the generator $a_{12} = \delta a_{23}^{-1} a_{13}^{-1}$, we obtain the following presentation of $SP_3$.
\begin{theorem} \label{t3}
The singular pure braid group $SP_3$ is generated by elements
$$
\delta,~~a_{13},~~a_{23},~~b_{12},~~b_{13},~~b_{23},
$$
and is defined by relations:
$$
\delta b_{12} = b_{12} \delta,~~~\delta a_{13} = a_{13} \delta,~~~\delta a_{23} = a_{23} \delta,~~~\delta b_{13} = b_{13} \delta,~~~
\delta b_{23} = b_{23} \delta,
$$
-- these are the relations expressing commutativity with $\delta$;
$$
a_{13} b_{13} = b_{13} a_{13},~~~
a_{23} b_{23} = b_{23} a_{23},~~~
b_{12} (a_{13} a_{23}) b_{12}^{-1} = a_{13} a_{23},
$$
-- these are the defining relations of $\widetilde{V}_3$.
\end{theorem}
From this presentation we get
\begin{corollary} \label{c3}
$SP_3$ is the direct product
$$
SP_3 = Z \times \widetilde{V}_3,
$$
where $Z = \langle \delta \rangle$ is the center of $SP_3$ and
$
\widetilde{V}_3 = \langle a_{13}, a_{23}, b_{12}, b_{13}, b_{23} \rangle.
$
\end{corollary}
\emph{Acknowledgements. }The first and second named authors acknowledge support from the Russian Science Foundation (project No.~19-41-02005).
| {
"timestamp": "2020-05-26T02:17:01",
"yymm": "2005",
"arxiv_id": "2005.11751",
"language": "en",
"url": "https://arxiv.org/abs/2005.11751",
"abstract": "In the present paper we study the singular pure braid group $SP_{n}$ for $n=2, 3$. We find generators, defining relations and the algebraical structure of these groups. In particular, we prove that $SP_{3}$ is a semi-direct product $SP_{3} = \\widetilde{V}_3 \\leftthreetimes \\mathbb{Z}$, where $\\widetilde{V}_3$ is an HNN-extension with base group $\\mathbb{Z}^2 * \\mathbb{Z}^2$ and cyclic associated subgroups. We prove that the center $Z(SP_3)$ of $SP_3$ is a direct factor in $SP_3$.",
"subjects": "Group Theory (math.GR)",
"title": "On 3-strand singular pure braid group"
} |
https://arxiv.org/abs/1712.07801 | Density Estimation with Contaminated Data: Minimax Rates and Theory of Adaptation | This paper studies density estimation under pointwise loss in the setting of contamination model. The goal is to estimate $f(x_0)$ at some $x_0\in\mathbb{R}$ with i.i.d. observations, $$ X_1,\dots,X_n\sim (1-\epsilon)f+\epsilon g, $$ where $g$ stands for a contamination distribution. In the context of multiple testing, this can be interpreted as estimating the null density at a point. We carefully study the effect of contamination on estimation through the following model indices: contamination proportion $\epsilon$, smoothness of target density $\beta_0$, smoothness of contamination density $\beta_1$, and level of contamination $m$ at the point to be estimated, i.e. $g(x_0)\leq m$. It is shown that the minimax rate with respect to the squared error loss is of order $$ [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge m)^2]\vee[n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}], $$ which characterizes the exact influence of contamination on the difficulty of the problem. We then establish the minimal cost of adaptation to contamination proportion, to smoothness and to both of the numbers. It is shown that some small price needs to be paid for adaptation in any of the three cases. Variations of Lepski's method are considered to achieve optimal adaptation.The problem is also studied when there is no smoothness assumption on the contamination distribution. This setting that allows for an arbitrary contamination distribution is recognized as Huber's $\epsilon$-contamination model. The minimax rate is shown to be $$ [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee [\epsilon^{\frac{2\beta_0}{\beta_0+1}}]. $$ The adaptation theory is also different from the smooth contamination case. While adaptation to either contamination proportion or smoothness only costs a logarithmic factor, adaptation to both numbers is proved to be impossible. 
| \section{Results for Arbitrary Contamination}\label{sec:ma}
\subsection{Minimax Rates}
In this section, we study the contamination model without any structural assumption on the contamination distribution:
\begin{align*}
X_1,\dots,X_n \sim (1-\epsilon)P_f+\epsilon G
\end{align*}
where $P_f$ is a distribution on $\mathbb{R}$ that has a density function $f$, and $G$ is an arbitrary contamination distribution. This leads to the following model space
$$\mathcal{M}(\epsilon,\beta_0,L_0)=\left\{(1-\epsilon)P_f+\epsilon G\Big| f\in\mathcal{P}(\beta_0,L_0)\text{ and } G\text{ is an arbitrary distribution}\right\}.$$
This is often referred to as Huber's $\epsilon$-contamination model \citep{huber1964robust,huber1965robust}. Nonparametric function estimation under Huber's $\epsilon$-contamination model has recently been studied by \cite{chen2016general,gao2017robust} for global loss functions. In this paper, our focus is on the local estimation of $f(0)$. The corresponding minimax risk is defined by
$$\mathcal{R}(\epsilon,\beta_0,L_0)=\inf_{\widehat f(0)}\sup_{p(\epsilon,f,g)\in {\mathcal M}(\epsilon,\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2.$$
In contrast to the minimax rate studied in Section \ref{sec:minimax-s}, we only have one parameter $\epsilon$ that indexes the influence of the contamination for $\mathcal{R}(\epsilon,\beta_0,L_0)$.
\begin{thm}\label{thm:minimax-arb}
Under the setting above, we have
\begin{equation}
\mathcal{R}(\epsilon,\beta_0,L_0)\asymp [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^{\frac{2\beta_0}{\beta_0+1}}].\label{eq:easy-minimax}
\end{equation}
\end{thm}
The minimax rate given by Theorem \ref{thm:minimax-arb} only involves two terms. The first term $n^{-\frac{2\beta_0}{2\beta_0+1}}$ is the classical minimax rate for nonparametric estimation. The second term $\epsilon^{\frac{2\beta_0}{\beta_0+1}}$ characterizes the influence of contamination. It is worth noticing that the smoothness index of $f$ appears both in $n^{-\frac{2\beta_0}{2\beta_0+1}}$ and $\epsilon^{\frac{2\beta_0}{\beta_0+1}}$. A larger value of $\beta_0$ implies less influence of the contamination. This is in contrast to the rate of $\mathcal{R}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)$ in Theorem \ref{thm:minimax-rate}.
The phase transition boundary of $\mathcal{R}(\epsilon,\beta_0,L_0)$ occurs at $\epsilon=n^{-\frac{\beta_0+1}{2\beta_0+1}}$. Below this level, we have $\mathcal{R}(\epsilon,\beta_0,L_0)\asymp n^{-\frac{2\beta_0}{2\beta_0+1}}$, and the contamination has no influence on the classical minimax rate. When $\epsilon$ is above $n^{-\frac{\beta_0+1}{2\beta_0+1}}$, the rate becomes $\epsilon^{\frac{2\beta_0}{\beta_0+1}}$, dominated by the contamination of data. Since we have about $n\epsilon$ contaminated observations in expectation, an optimal procedure can achieve the classical minimax rate $n^{-\frac{2\beta_0}{2\beta_0+1}}$ with at most $n\epsilon\leq n^{\frac{\beta_0}{2\beta_0+1}}$ contaminated data points. Note that the number $n^{\frac{\beta_0}{2\beta_0+1}}$ is an increasing function of $\beta_0$.
For the upper bound of the minimax rate, we again consider the kernel density estimator
$\widehat{f}_h(0)=\frac{1}{n}\sum_{i=1}^n\frac{1}{h}K\left(\frac{X_i}{h}\right)$.
The error $\widehat{f}_h(0)-f(0)$ can be decomposed as $(\widehat{f}_h(0)-\mathbb{E}\widehat{f}_h(0))+(\mathbb{E}\widehat{f}_h(0)-f(0))$. Then, a direct analysis shows that the risk can be bounded by three terms,
\begin{equation}
\mathbb{E}\left(\widehat{f}_h(0)-f(0)\right)^2\lesssim \frac{1}{nh}\vee h^{2\beta_0}\vee \frac{\epsilon^2}{h^2},\label{eq:RD-for-arb}
\end{equation}
which leads to the optimal choice of bandwidth $h=n^{-\frac{1}{2\beta_0+1}}\vee \epsilon^{\frac{1}{\beta_0+1}}$. It is interesting to note that this choice of bandwidth is always larger than or equal to $n^{-\frac{1}{2\beta_0+1}}$. Recall that when the contamination is smooth, the optimal bandwidth in Theorem \ref{thm:upperbound} is smaller than $n^{-\frac{1}{2\beta_0+1}}$. Thus, when there is contamination in the data, one may need to use a larger or smaller bandwidth compared with $n^{-\frac{1}{2\beta_0+1}}$ depending on the assumption of contamination.
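As a quick numerical sanity check (our illustration; the values $n=10^4$ and $\beta_0=1$ are arbitrary choices), one can confirm that this bandwidth balances the three terms of (\ref{eq:RD-for-arb}) up to constants:

```python
# Numerical sanity check (illustrative parameter values): the bandwidth
# h* = max(n^{-1/(2*b0+1)}, eps^{1/(b0+1)}) minimizes the risk bound
# 1/(n*h) + h^{2*b0} + eps^2/h^2 up to a constant factor.

def risk_bound(h, n, eps, b0):
    return 1.0 / (n * h) + h ** (2 * b0) + eps ** 2 / h ** 2

def opt_bandwidth(n, eps, b0):
    return max(n ** (-1.0 / (2 * b0 + 1)), eps ** (1.0 / (b0 + 1)))

n, b0 = 10_000, 1.0                              # assumed sample size and smoothness
grid = [2.0 ** (-j / 4) for j in range(1, 60)]   # fine bandwidth grid in (0, 1)
for eps in (0.0, 1e-3, 5e-2):
    h_star = opt_bandwidth(n, eps, b0)
    best = min(risk_bound(h, n, eps, b0) for h in grid)
    # h* is within a constant factor of the best bandwidth on the grid
    assert risk_bound(h_star, n, eps, b0) <= 4 * best
```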
The lower bound part of Theorem \ref{thm:minimax-arb} can be viewed as an application of Theorem 5.1 in \cite{chen2015robust}. A general lower bound for Huber's $\epsilon$-contamination model in \cite{chen2015robust} reveals a critical quantity called modulus of continuity, defined as
$$\omega(\epsilon)=\sup\left\{|f(0)-\widetilde{f}(0)|^2: {\sf TV}(P_f,P_{\widetilde{f}})\leq \epsilon/(1-\epsilon), f,\widetilde{f}\in\mathcal{P}(\beta_0,L_0)\right\}.$$
The definition of modulus of continuity goes back to \cite{donoho1994statistical,donoho1991geometrizing}, and its relation to Huber's $\epsilon$-contamination model is characterized in \cite{chen2015robust}. In the current setting, it can be shown that $\omega(\epsilon)\asymp \epsilon^{\frac{2\beta_0}{\beta_0+1}}$, which leads to the lower bound part of Theorem \ref{thm:minimax-arb}. In Section \ref{sec:pf-miss}, we will give an alternative self-contained proof of the lower bound.
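For intuition (a heuristic sketch we add here, with constants omitted), the order of $\omega(\epsilon)$ follows from a standard bump perturbation. For a suitable $f\in\mathcal{P}(\beta_0,L_0)$ bounded away from zero near the origin, take
$$
\widetilde{f}(x) = f(x) + \rho\,\psi\big(x/\rho^{1/\beta_0}\big),
$$
where $\psi$ is a fixed smooth bump with $\psi(0)=1$ and $\int\psi=0$; the width $\rho^{1/\beta_0}$ is exactly the scaling under which the perturbation has H\"{o}lder-$\beta_0$ norm of constant order. Then
$$
|f(0)-\widetilde{f}(0)| = \rho, \qquad {\sf TV}(P_f,P_{\widetilde{f}}) = \frac{1}{2}\int|f-\widetilde{f}| \asymp \rho\cdot\rho^{1/\beta_0} = \rho^{\frac{\beta_0+1}{\beta_0}}.
$$
Setting $\rho^{(\beta_0+1)/\beta_0} \asymp \epsilon$ yields $\rho \asymp \epsilon^{\frac{\beta_0}{\beta_0+1}}$, and hence $\omega(\epsilon) \asymp \rho^2 = \epsilon^{\frac{2\beta_0}{\beta_0+1}}$.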
\subsection{Adaptation to Either Contamination Proportion or Smoothness}\label{sec:ad-hb}
The key to adaptation to either contamination proportion or smoothness is the risk decomposition (\ref{eq:RD-for-arb}) of the kernel density estimator $\widehat{f}_h(0)=\frac{1}{n}\sum_{i=1}^n\frac{1}{h}K\left(\frac{X_i}{h}\right)$. We write (\ref{eq:RD-for-arb}) as the sum of two terms. That is,
\begin{equation}
\frac{1}{nh}\vee h^{2\beta_0}\vee \frac{\epsilon^2}{h^2}\asymp \left(\frac{\epsilon^2}{h^2}+\frac{1}{nh}\right) + h^{2\beta_0}.\label{eq:increasing-decreasing}
\end{equation}
The first term $\frac{\epsilon^2}{h^2}+\frac{1}{nh}$ is a decreasing function of $h$ with a possibly unknown $\epsilon$, while the second term $h^{2\beta_0}$ is an increasing function of $h$ with a possibly unknown $\beta_0$. If we know $\epsilon$ but do not know $\beta_0$, then we can use Lepski's method with $\frac{\epsilon^2}{h^2}+\frac{1}{nh}$ as a reference curve. On the other hand, if we know $\beta_0$ but do not know $\epsilon$, we can then use a reverse version of Lepski's method with $h^{2\beta_0}$ as a reference curve. Specifically, when $\epsilon$ is known but $\beta_0$ is unknown, we use
\begin{equation}
\widehat h=\max\left\{h\in\mathcal{H}:|\widehat f_h(0)-\widehat f_l(0)|\leq c_1\left(\sqrt{\frac{\log n}{nl}}+\frac{\epsilon}{l}\right), \forall l\leq h, l\in\mathcal{H}\right\}.\label{eq:h-lep-epsilon}
\end{equation}
If the set that is maximized over is empty, we take $\widehat{h}=\frac{1}{n}$.
When $\beta_0$ is known but $\epsilon$ is unknown, we use
\begin{equation}
\widehat h=\min\Bigg\{h\in \mathcal{H}:|\widehat f_h(0)-\widehat f_l(0)|\leq c_1l^{\beta_0}, \forall l\geq h, l\in\mathcal{H}\Bigg\}.\label{eq:h-lep-beta}
\end{equation}
If the set that is minimized over is empty, we take $\widehat{h}=1$.
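To make the second rule concrete, the following sketch (ours, not the paper's implementation; the Gaussian kernel, the constant $c_1=1$, and the simulated mixture are illustrative choices) implements (\ref{eq:h-lep-beta}) on data from $(1-\epsilon)N(0,1)$ plus a distant point mass:

```python
import math, random

def kde_at_zero(xs, h):
    """Kernel density estimate of f(0) with a Gaussian kernel (illustrative choice)."""
    c = 1.0 / math.sqrt(2 * math.pi)
    return sum(c * math.exp(-(x / h) ** 2 / 2) for x in xs) / (len(xs) * h)

def reverse_lepski(xs, beta0, c1=1.0):
    """Rule (h-lep-beta): the smallest h in H = {1, 1/2, ..., 2^{-m}} such that
    |f_h(0) - f_l(0)| <= c1 * l^{beta0} for every larger l in H (h = 1 if none)."""
    m = math.ceil(math.log2(len(xs)))           # so that 2^{-m} <= 1/n
    H = [2.0 ** (-j) for j in range(m + 1)]     # decreasing: 1, 1/2, ..., 2^{-m}
    est = {h: kde_at_zero(xs, h) for h in H}
    chosen = 1.0
    for h in H:                                 # scan from largest to smallest
        if all(abs(est[h] - est[l]) <= c1 * l ** beta0 for l in H if l >= h):
            chosen = h                          # keep the smallest admissible h
    return chosen

random.seed(0)
n, eps = 2000, 0.01
# data from (1 - eps) N(0,1) + eps * (point mass at 50): arbitrary contamination
xs = [50.0 if random.random() < eps else random.gauss(0.0, 1.0) for _ in range(n)]
h_hat = reverse_lepski(xs, beta0=1.0)
f_hat = kde_at_zero(xs, h_hat)   # estimate of f(0) = 1/sqrt(2*pi)
```

Note that the admissibility condition is trivially satisfied at $h=1$, so the selected bandwidth is always well defined.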
Before stating the guarantee for $\widehat f_{\widehat h}(0)$, we want to emphasize that whether the contamination proportion $\epsilon$ is known or not is more than a matter of normalization. As a comparison, recall the risk decomposition for a kernel density estimator with structured contamination in (\ref{eq:RD-s}). There, both $h^{2\beta_0}$ and $\epsilon^2h^{2\beta_1}$ are increasing functions of $h$. This implies that simultaneous adaptation to both $\epsilon$ and the smoothness is possible through Lepski's method, and whether $\epsilon$ is given or not only affects the normalization of the kernel density estimator, which is not the case for arbitrary contamination because of (\ref{eq:increasing-decreasing}).
\begin{thm}\label{thm:lepski3}
Consider the adaptive kernel density estimator $\widehat{f}(0)=\widehat{f}_{\widehat{h}}(0)$ with the bandwidth $\widehat{h}$ given by (\ref{eq:h-lep-epsilon}) or (\ref{eq:h-lep-beta}). In either case, we set $\mathcal{H}=\left\{1,\frac{1}{2},\cdots,\frac{1}{2^m}\right\}$ such that $\frac{1}{2^m}\leq \frac{1}{n}<\frac{1}{2^{m-1}}$ and $c_1$ to be a sufficiently large constant. The kernel $K$ is selected from $\mathcal{K}_l(L)$ with a large constant $l\geq \floor{\beta_0}$. Then, we have
$$\sup_{p(\epsilon,f,g)\in \mathcal M(\epsilon,\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\lesssim \left(\frac{\log n}{n}\right)^{\frac{2\beta_0}{2\beta_0+1}}\vee \epsilon^{\frac{2\beta_0}{\beta_0+1}}.$$
\end{thm}
With one of $\epsilon$ and $\beta_0$ given, Theorem \ref{thm:lepski3} guarantees adaptive estimation with the rate $\left(\frac{\log n}{n}\right)^{\frac{2\beta_0}{2\beta_0+1}}\vee \epsilon^{\frac{2\beta_0}{\beta_0+1}}$. Compared with the minimax rate in Theorem \ref{thm:minimax-arb}, we have an extra logarithmic factor due to the ignorance of either $\epsilon$ or $\beta_0$. This logarithmic factor cannot be removed by any adaptive procedure in view of the results of \cite{brown1996constrained,lepski1997optimal,cai2003rates}.
\subsection{Adaptation to Both Contamination Proportion and Smoothness?}
When both contamination proportion and smoothness are unknown, the adaptation theory with arbitrary contamination is completely different from the case with structured contamination. Since there is no constraint on the contamination distribution, a model with $(\epsilon,\beta_0)$ can also be written as a different model with $(\widetilde{\epsilon},\widetilde{\beta}_0)$. As a consequence, we can prove the following lower bound.
\begin{lemma}\label{thm:unidentifiable}
For any constants $c_1,c_2>0$, there exists a constant $c_0$, such that for any $\beta_0,\widetilde{\beta}_0\leq c_1$, and any $L_0, \widetilde{L_0}\geq c_2$, and any estimator $\widehat{f}(0)$, one of the following lower bounds must be true,
\begin{eqnarray*}
\sup_{p(\epsilon,f,g)\in\mathcal{M}(\epsilon,\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat{f}(0)-f(0)\right)^2 \geq c_0\epsilon^{\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}}, \\
\sup_{p(0,f,g)\in\mathcal{M}(0,\widetilde{\beta}_0,\widetilde{L}_0)}\mathbb{E}_{p^n}\left(\widehat{f}(0)-f(0)\right)^2 \geq c_0\epsilon^{\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}}.
\end{eqnarray*}
\end{lemma}
Lemma \ref{thm:unidentifiable} says that in order for any estimator to adapt to two classes with different contamination proportions and smoothness indices, say $\mathcal{M}(\epsilon,\beta_0,L_0)$ and $\mathcal{M}(0,\widetilde{\beta}_0,\widetilde{L}_0)$, it is impossible to achieve a rate that is better than $\epsilon^{\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}}$ across both classes. The lower bound $\epsilon^{\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}}$ is a function of both $\epsilon$, the contamination proportion of the first class $\mathcal{M}(\epsilon,\beta_0,L_0)$, and $\widetilde{\beta}_0$, the smoothness index of the second class $\mathcal{M}(0,\widetilde{\beta}_0,\widetilde{L}_0)$. As we will show in the following, this specific form has a profound implication, in that an adaptive estimation rate that is a function of an individual class is impossible!
As a first step, the following definition formulates what adaptivity means in our specific setting.
\begin{definition}
An estimator $\widehat f(0)$ is called $(c_1,c_2,c_3,r_1(\cdot),r_2(\cdot))$ rate adaptive if the following holds: for any $n\geq 1$, any $\epsilon\leq 1/2$, any $\beta_0\leq c_1$ and any $L_0\leq c_2$, we have
\begin{equation}
\sup_{p(\epsilon,f,g)\in\mathcal{M}(\epsilon,\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat{f}(0)-f(0)\right)^2\leq c_3n^{-r_1(\beta_0)}\vee\epsilon^{r_2(\beta_0)}.\label{eq:adaptive-def}
\end{equation}
\end{definition}
As concrete examples, when the contamination distribution is restricted to those with density functions that are H\"{o}lder smooth, it is shown in Theorem \ref{thm:lepski} that adaptive estimation is possible with some $r_1(\beta_0)<\frac{2\beta_0}{2\beta_0+1}$ and $r_2(\beta_0)=2$. When the contamination distribution is arbitrary, Theorem \ref{thm:lepski3} shows that adaptive estimation is possible over $(\epsilon,\beta_0)$ if either $\epsilon$ or $\beta_0$ is fixed (known) with some $r_1(\beta_0)<\frac{2\beta_0}{2\beta_0+1}$ and $r_2(\beta_0)=\frac{2\beta_0}{\beta_0+1}$.
In contrast, the following theorem shows that such a goal is impossible for any $r_1(\cdot)$ and $r_2(\cdot)$ when both $\epsilon$ and $\beta_0$ are unknown.
\begin{thm}\label{thm:impossible}
For any constants $c_1,c_2,c_3>0$ and any positive functions $r_1(\cdot)$ and $r_2(\cdot)$, there is no estimator $\widehat f(0)$ that is $(c_1,c_2,c_3,r_1(\cdot),r_2(\cdot))$ rate adaptive.
\end{thm}
The impossibility result of Theorem \ref{thm:impossible} is a consequence of Lemma \ref{thm:unidentifiable}. The lower bound $\epsilon^{\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}}$ in Lemma \ref{thm:unidentifiable} involves an $\epsilon$ and a $\widetilde{\beta}_0$ from two different classes. This leads to a contradiction given the definition of adaptivity in (\ref{eq:adaptive-def}). A rigorous proof of this argument will be given in Section \ref{sec:pf-imp}.
In conclusion, when the contamination is arbitrary, the theory of adaptation to both contamination proportion and smoothness is qualitatively different from adaptation to only one of them. In comparison, when the contamination is structured, that difference is just quantitative according to the results in Section \ref{sec:as}. Therefore, in order to achieve sensible error rates adaptively in a robust density estimation context, we need to either assume a given contamination proportion, a given smoothness index, or a structured contamination distribution.
\section{Discussion}\label{sec:discussion}
\iffalse
\subsection{Optimal Choice of Bandwidth}
When there is no contamination, or $\epsilon=0$, it is well known that the optimal choice of bandwidth is $h=n^{-\frac{1}{2\beta_0+1}}$. For robust density estimation with contamination, should one use a larger or a smaller bandwidth? The question does not seem to have a simple and intuitive answer. An argument for using a larger bandwidth is to include more good observations. On the other hand, however, one can also argue that a smaller bandwidth helps to reduce the number of contaminated observations. Our results reveal that the answer depends on the source of contamination. When the contamination density is H\"{o}lder smooth, the optimal bandwidth is
$$h=n^{-\frac{1}{2\beta_0+1}}\wedge n^{-\frac{1}{2\beta_1+1}}\epsilon^{-\frac{2}{2\beta_1+1}},$$
always smaller than or equal to $n^{-\frac{1}{2\beta_0+1}}$. In contrast, when the contamination distribution is arbitrary, the optimal bandwidth is
$$h=n^{-\frac{1}{2\beta_0+1}}\vee \epsilon^{\frac{1}{\beta_0+1}},$$
always larger than or equal to $n^{-\frac{1}{2\beta_0+1}}$. This interesting phenomenon results from the very different bias-variance trade-off in the two settings.
\fi
\subsection{Extensions to Multivariate Settings}
The results in the paper can all be extended to robust multivariate density estimation. We define a $d$-dimensional isotropic H\"{o}lder class as follows,
$$\Sigma_d(\beta,L)=\left\{f:\mathbb{R}^d\rightarrow\mathbb{R}\Bigg|\max_{l\in I(\beta)}\left|\nabla_lf(x_1)-\nabla_lf(x_2)\right|\leq L\|x_1-x_2\|^{\beta-\floor{\beta}}\text{ for any }x_1,x_2\in\mathbb{R}^d\right\},$$
where we use $I(\beta)$ to denote the set of multi-indices $\{l=(l_1,...,l_d)\big| l_1+\cdots+l_d=\floor{\beta}\}$. The class of density functions is defined as
$$\mathcal{P}_d(\beta,L)=\left\{f:\mathbb{R}^d\rightarrow[0,\infty)\Bigg| f\in\Sigma_d(\beta,L), \int f=1\right\}.$$
Note that the dimension $d$ is assumed to be a constant. Then, the two contamination models considered in the paper are extended as
$$\mathcal{M}_d(\epsilon,\beta_0,\beta_1,L_0,L_1,m)=\left\{(1-\epsilon)f+\epsilon g\Big| f\in \mathcal{P}_d(\beta_0,L_0), g\in \mathcal{P}_d(\beta_1,L_1), g(0)\leq m\right\},$$
and
$$\mathcal{M}_d(\epsilon,\beta_0,L_0)=\left\{(1-\epsilon)P_f+\epsilon G\Big| f\in\mathcal{P}_d(\beta_0,L_0)\text{ and } G\text{ is an arbitrary distribution}\right\}.$$
Similarly, we can define the corresponding minimax rates $\mathcal R_d(\epsilon,\beta_0,\beta_1,L_0,L_1,m)$ and $\mathcal R_d(\epsilon,\beta_0,L_0)$.
\begin{thm}\label{thm:multi}
For the two contamination models on $\mathbb{R}^d$, we have
$$\mathcal R_d(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\asymp [n^{-\frac{2\beta_0}{2\beta_0+d}}]\vee[\epsilon^2(1\wedge m)^2]\vee[n^{-\frac{2\beta_1}{2\beta_1+d}}\epsilon^{\frac{2d}{2\beta_1+d}}],$$
and
$$\mathcal R_d(\epsilon,\beta_0,L_0)\asymp [n^{-\frac{2\beta_0}{2\beta_0+d}}]\vee [\epsilon^{\frac{2\beta_0}{\beta_0+d}}].$$
\end{thm}
The extra factor of dimension $d$ makes the interpretation of results even more interesting. For example, the phase transition boundary of $\mathcal{R}_d(\epsilon,\beta_0,L_0)$ now occurs at $\epsilon=n^{-\frac{\beta_0+d}{2\beta_0+d}}$. This implies that the influence of contamination becomes more severe as the dimension grows. In contrast, the minimax rate of $\mathcal R_d(\epsilon,\beta_0,\beta_1,L_0,L_1,m)$ leads to a completely different interpretation. For example, when $m\geq 1$, we have
$$\mathcal R_d(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\asymp n^{-\frac{2\beta_0}{2\beta_0+d}}\vee\epsilon^2.$$
The second term $\epsilon^2$ does not change with the dimension $d$, and the phase transition boundary between $n^{-\frac{2\beta_0}{2\beta_0+d}}$ and $\epsilon^2$ is at $\epsilon=n^{-\frac{\beta_0}{2\beta_0+d}}$, which increases with respect to $d$. This suggests that the influence of contamination becomes less severe as $d$ grows. In short, the contamination influence on density estimation can be drastically different in a multivariate setting, depending on whether the contamination distribution is structured or arbitrary.
\subsection{Consistency in the Hardest Scenario}
When there is no constraint on the contamination distribution, adaptation is impossible over both contamination proportion and smoothness in the sense of (\ref{eq:adaptive-def}). One may wonder whether anything can still be done in such a scenario, where almost nothing is assumed.
In this section, we show that consistency is still possible under this hardest scenario.
Before introducing the procedure, we remark that achieving consistency without knowing $\epsilon$ and $\beta_0$ is a non-trivial problem due to the risk decomposition (\ref{eq:RD-for-arb}) for a kernel density estimator. According to (\ref{eq:RD-for-arb}), a choice of bandwidth that leads to consistency must satisfy $nh\rightarrow\infty$, $h\rightarrow 0$ and $h/\epsilon\rightarrow\infty$. Note that the first and the second requirements can be satisfied easily with a choice of $h$ that does not depend on any model parameter. For example, one can choose $h=n^{-1/2}$. However, the third requirement $h/\epsilon\rightarrow\infty$ is problematic without the knowledge of $\epsilon$. For any choice of $h\rightarrow 0$, there is an adversarial $\epsilon$ to make $h/\epsilon\rightarrow\infty$ fail.
Despite the above difficulty, we show that a data-driven bandwidth leads to consistency if we know that the smoothness $\beta_0$ has a lower bound $\widetilde{\beta}_0$. We consider a kernel density estimator $\widehat f_h(0)=\frac{1}{n}\sum_{i=1}^n\frac{1}{h}K\left(\frac{X_i}{h}\right)$. Then, we choose $h$ by the reverse version of Lepski's method that is similar to (\ref{eq:h-lep-beta}). We define $\widehat{h}$ by
\begin{equation}
\widehat h=\min\Bigg\{h\in \mathcal{H}:|\widehat f_h(0)-\widehat f_l(0)|\leq c_1l^{\widetilde\beta_0}, \forall l\geq h, l\in\mathcal{H}\Bigg\}.\label{eq:h-lep-beta-wt}
\end{equation}
Again, we use the convention that if the set that is minimized over is empty, we take $\widehat{h}=1$.
\begin{thm}\label{thm:arbitrarylepski}
Consider the kernel density estimator $\widehat{f}(0)=\widehat{f}_{\widehat{h}}(0)$ with the bandwidth $\widehat{h}$ given by (\ref{eq:h-lep-beta-wt}). We set $\mathcal{H}=\left\{1,\frac{1}{2},\cdots,\frac{1}{2^m}\right\}$ such that $\frac{1}{2^m}\leq \frac{1}{n}<\frac{1}{2^{m-1}}$ and $c_1$ to be a sufficiently large constant. The kernel $K$ is selected from $\mathcal{K}_l(L)$ with a large constant $l\geq \floor{\beta_0}$. Then, as $n\rightarrow\infty$ and $\epsilon\rightarrow 0$, we have
$$\sup_{p(\epsilon,f,g)\in \mathcal M(\epsilon,\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\rightarrow 0,$$
if $\beta_0\geq \widetilde{\beta}_0$.
\end{thm}
Note that the requirements $n\rightarrow\infty$ and $\epsilon\rightarrow 0$ are necessary conditions of consistency given the minimax rate (\ref{eq:easy-minimax}). The procedure does not require knowledge of $\epsilon$ or $\beta_0$, and thus consistency can be achieved without knowing $\epsilon$ and $\beta_0$ even if adaptation is impossible. The procedure (\ref{eq:h-lep-beta-wt}) uses a conservative $\widetilde{\beta}_0$ in the reverse version of Lepski's method, and can be viewed as an extension of (\ref{eq:h-lep-beta}) that uses the true smoothness index $\beta_0$.
\section{Introduction}
Nonparametric density estimation is a well-studied classical topic \citep{silverman1986density,devroyecombinatorial,tsybakov09}. In this paper, we consider this classical statistical task with a modern twist. Instead of assuming i.i.d. observations from a true density $f$, we assume
\begin{equation}
X_1,...,X_n\sim (1-\epsilon)f+\epsilon g,\label{eq:huber}
\end{equation}
where $g$ is a density not related to $f$, and the goal is to estimate $f(x_0)$ at some $x_0\in\mathbb{R}$. In other words, for each observation, there is an $\epsilon$ probability that the observation is sampled from a distribution not related to the density of interest.
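Before proceeding, it may help to see the data generating process (\ref{eq:huber}) concretely. The following sketch (ours, not part of the paper) draws from the mixture with hypothetical choices of $f$ and $g$, namely a standard normal target and a shifted normal contamination:

```python
import random

def sample_contaminated(n, eps, sample_f, sample_g, rng):
    """Draw n observations from the mixture (1 - eps) * f + eps * g:
    each point independently comes from g with probability eps."""
    return [sample_g() if rng.random() < eps else sample_f() for _ in range(n)]

rng = random.Random(0)
# Hypothetical choices: the target f is N(0, 1), the contamination g is N(5, 1).
xs = sample_contaminated(
    10000, 0.1,
    sample_f=lambda: rng.gauss(0.0, 1.0),
    sample_g=lambda: rng.gauss(5.0, 1.0),
    rng=rng,
)
frac_far = sum(x > 2.5 for x in xs) / len(xs)  # roughly the contaminated mass
```

With $\epsilon=0.1$ and the contamination centered at $5$, roughly a tenth of the sample lands far in the right tail; this is the distortion an estimator of $f(x_0)$ must tolerate.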
This problem naturally appears in both robust statistics and multiple testing literature. In robust statistics literature, $g$ has the name ``contamination", and the task is interpreted as robustly estimating a density $f$ with contaminated data points \citep{chen2016general}. In multiple testing literature, $f$ and $g$ are respectively called null density and alternative density, and the task is interpreted as estimating null density at a point \citep{efron2004large}. In this paper, we use the name ``contamination" to refer to both $g$ and the observations generated from it.
The nature of the problem heavily depends on the assumptions put on $f$ and $g$. When there is no constraint on the contamination distribution $g$, the data generating process (\ref{eq:huber}) is also recognized as Huber's $\epsilon$-contamination model \citep{huber1964robust,huber1965robust}. Recent work on nonparametric estimation in such a setting includes \cite{chen2016general,gao2017robust}, and the influence of contamination on minimax rates is investigated by \cite{chen2015robust,chen2016general}. On the other hand, in the literature of multiple testing, it is more common to put parametric structural assumptions on the alternative $g$, and optimal rates of estimating the null density $f$ are investigated by \cite{jin2007estimating,cai2010optimal}.
In this paper, we explore this problem with connections to nonparametric density estimation literature in mind. Specifically, the density function $f$ is assumed to have a H\"{o}lder smoothness $\beta_0$. Both structured and arbitrary contamination are considered, and the fundamental limits of the problem are studied by establishing minimax rates. In the structured contamination case, the contamination distribution $g$ is endowed with a $\beta_1$ H\"{o}lder smoothness, and the contamination level at the point $x_0$ is assumed to satisfy $g(x_0)\leq m$. The minimax rate of estimating $f(x_0)$ with respect to the squared error loss is shown to be of order
\begin{equation}
[n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge m)^2]\vee[n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}].\label{eq:minimax-stru}
\end{equation}
The minimax rate involves three terms, and the influence of contamination on estimation is precisely characterized. The first term $n^{-\frac{2\beta_0}{2\beta_0+1}}$ corresponds to the classical minimax rate of nonparametric estimation when there is no contamination. The second term $\epsilon^2(1\wedge m)^2$ is determined by the contamination at $x_0$. It depends on both the contamination proportion $\epsilon$ and the contamination level $m$. The last term $n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}$ is caused by contamination in a neighborhood of $x_0$, which is present even if the contamination level $m$ is zero. In the arbitrary contamination case, or equivalently under Huber's $\epsilon$-contamination model, the minimax rate is of order
\begin{equation}
[n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee [\epsilon^{\frac{2\beta_0}{\beta_0+1}}].\label{eq:minimax-arb-con}
\end{equation}
Compared with (\ref{eq:minimax-stru}), the rate (\ref{eq:minimax-arb-con}) is easier to understand in terms of the influence of the contamination. It is interesting to note that even though $\beta_0$ is the smoothness index of $f$, it still appears on the second term in (\ref{eq:minimax-arb-con}). Thus, when the contamination is arbitrary, its influence on estimation is also determined by the smoothness of the target density.
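As a quick numerical reading of (\ref{eq:minimax-arb-con}) (our illustration, not a procedure from the paper), one can evaluate the two competing terms to see which regime dominates:

```python
def rate_arbitrary(n, eps, beta0):
    """Minimax rate n^{-2b/(2b+1)} v eps^{2b/(b+1)} for arbitrary contamination."""
    sample_term = n ** (-2 * beta0 / (2 * beta0 + 1))
    contam_term = eps ** (2 * beta0 / (beta0 + 1))
    return max(sample_term, contam_term)

# With beta0 = 1 the contamination term is simply eps, so for n = 10**6 and
# eps = 0.05 the contamination dominates the sampling error n^{-2/3} = 1e-4.
r = rate_arbitrary(10 ** 6, 0.05, 1.0)  # 0.05
```

This makes concrete the point above: even moderate contamination can swamp the classical $n^{-\frac{2\beta_0}{2\beta_0+1}}$ term once $n$ is large.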
We also thoroughly investigate the theory of adaptation in both settings of contamination models. Depending on specific settings, various adaptation costs are necessary. For the contamination model with structured contamination, when the contamination proportion is unknown, an optimal adaptive procedure can achieve the rate (\ref{eq:minimax-stru}) with $\epsilon^2(1\wedge m)^2$ replaced by $\epsilon^2$. When the smoothness is unknown, an optimal adaptive procedure can achieve the rate (\ref{eq:minimax-stru}) with $n$ replaced by $n/\log n$. Similarly, for the contamination model with arbitrary contamination, the rate (\ref{eq:minimax-arb-con}) can be achieved up to a logarithmic factor when either $\epsilon$ or $\beta_0$ is unknown. On the other hand, however, when both the contamination proportion and the smoothness are unknown, the adaptation theories are completely different for the two contamination models. For structured contamination, the adaptation cost is just the combination of the cost of unknown contamination proportion and that of unknown smoothness. In contrast, for arbitrary contamination, we show that adaptation is simply impossible when both $\epsilon$ and $\beta_0$ are unknown. In other words, it is impossible to adaptively achieve a rate of the form $n^{-r_1(\beta_0)}\vee \epsilon^{r_2(\beta_0)}$ with any two functions $r_1(\cdot)$ and $r_2(\cdot)$.
The theory of adaptation in nonparametric functional estimation without contamination is well studied in the literature.
It is shown by \cite{brown1996constrained,lepski1997optimal,cai2006optimal} that a logarithmic factor must be paid for estimating a point of a density function when smoothness is not known. Adaptation costs of estimating other nonparametric functionals have been investigated in \cite{lepskii1991problem,tribouley2000adaptive,johnstone2001chi,cai2003rates,cai2005adaptive}. Compared with the results in the literature, the presence of contamination brings extra complication to the problem of adaptation. It is remarkable that the adaptation cost depends very sensitively on each specific setting and contamination model. The new phenomena revealed in our paper for adaptation with contamination have not been discovered before.
The rest of the paper is organized as follows. The contamination model with structured contamination is studied in Section \ref{sec:ms} and Section \ref{sec:as}. Results of minimax rates and costs of adaptation are given in Section \ref{sec:ms} and Section \ref{sec:as}, respectively. The corresponding theory of contamination model with arbitrary contamination is investigated in Section \ref{sec:ma}. In Section \ref{sec:discussion}, we discuss extensions of our results to multivariate density estimation and a consistent procedure in the hardest scenario where adaptation is impossible. All proofs are given in Section \ref{sec:proof}.
We close this section by introducing notations that will be used later. For $a,b\in\mathbb{R}$, let $a\vee b=\max(a,b)$ and $a\wedge b=\min(a,b)$. For an integer $m$, $[m]$ denotes the set $\{1,2,...,m\}$.
For a positive real number $x$, $\ceil{x}$ is the smallest integer no smaller than $x$ and $\floor{x}$ is the largest integer no larger than $x$. For two positive sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n\lesssim b_n$ or $a_n=O(b_n)$ if $a_n\leq Cb_n$ for all $n$ with some constant $C>0$ independent of $n$. The notation $a_n\asymp b_n$ means we have both $a_n\lesssim b_n$ and $b_n\lesssim a_n$.
Given a set $S$, $|S|$ denotes its cardinality, and $\mathbbm{1}_S$ is the associated indicator function. We use $\mathbb{P}$ and $\mathbb{E}$ to denote generic probability and expectation whose distribution is determined from the context. The notation $\mathbb{E}(X:S)$ stands for $\mathbb{E}(X\mathbbm{1}_S)$.
The class of infinitely differentiable functions on $\mathbb R$ is denoted by $\mathcal{C}^{\infty}(\mathbb R)$.
For two probability measures $\mathbb{P}$ and $\mathbb{Q}$, the chi-squared divergence is defined as $\chi^2(\mathbb{P},\mathbb{Q})=\int \frac{(d\mathbb{P})^2}{d\mathbb{Q}}-1$, and the total variation distance is defined as ${\sf TV}(\mathbb{P},\mathbb{Q})=\sup_B|\mathbb{P}(B)-\mathbb{Q}(B)|$. Throughout the paper, $C$, $c$ and their variants denote generic constants that do not depend on $n$. Their values may change from place to place.
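For concreteness, both divergences can be evaluated exactly on finite supports directly from the definitions above; the sketch and the two-point distributions below are ours, not the paper's:

```python
def chi_squared(p, q):
    """chi^2(P, Q) = sum_x p(x)**2 / q(x) - 1 on a finite support with q > 0."""
    return sum(px * px / qx for px, qx in zip(p, q)) - 1.0

def total_variation(p, q):
    """TV(P, Q) = sup_B |P(B) - Q(B)| = (1/2) * sum_x |p(x) - q(x)|."""
    return 0.5 * sum(abs(px - qx) for px, qx in zip(p, q))

# Hypothetical two-point distributions.
p = [0.5, 0.5]
q = [0.25, 0.75]
chi2 = chi_squared(p, q)  # 1.0 + 1/3 - 1 = 1/3
tv = total_variation(p, q)  # 0.5 * (0.25 + 0.25) = 0.25
```

The identity ${\sf TV}=\frac{1}{2}\int|p-q|$ used in `total_variation` is the standard reformulation of the supremum definition for densities.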
\section*{Acknowledgement}
The authors thank Zhao Ren for reading the manuscript and for his helpful comments.
The research of CG is supported in part by NSF grant DMS-1712957.
\bibliographystyle{plainnat}
\section{Proofs}\label{sec:proof}
\subsection{Proofs of Theorem \ref{thm:upperbound} and Theorem \ref{thm:fixedbandwidth}}\label{sec:pf-upper-structure}
\begin{proof}[Proof of Theorem \ref{thm:upperbound}]
Decompose the error as
\begin{align*}
\widehat{f}(0)-f(0)=(\widehat{f}(0)-\mathbb{E}\widehat{f}(0))+\left(\mathbb{E}\widehat{f}(0)-f(0)-\frac{\epsilon}{1-\epsilon}g(0)\right)+\frac{\epsilon}{1-\epsilon}g(0),
\end{align*}
where the first term is the stochastic error, the second term stands for bias, and the third term is the misspecification error caused by contamination.
For the variance term, we have
\begin{align*}
\mathbb{E}(\widehat{f}(0)-\mathbb{E}\widehat{f}(0))^2=\Var\left(\frac{\sum_{i=1}^n \frac{1}{h}K\left(\frac{X_i}{h}\right)}{n(1-\epsilon)}\right)=\frac{\Var(\frac{1}{h}K(\frac{X}{h}))}{n(1-\epsilon)^2},
\end{align*}
where
\begin{align*}
\Var\left(\frac{1}{h}K\left(\frac{X}{h}\right)\right)\leq \int\frac{1}{h^2}K^2\left(\frac{x}{h}\right)((1-\epsilon)f(x)+\epsilon g(x))dx\lesssim \frac{1}{h}\int \frac{1}{h}K^2\left(\frac{x}{h}\right)dx\lesssim \frac{1}{h}.
\end{align*}
This gives the variance bound
\begin{equation}\label{eq:term1}
\mathbb{E}(\widehat{f}(0)-\mathbb{E}\widehat{f}(0))^2\lesssim \frac{1}{nh}.
\end{equation}
For the bias term we have
\begin{align*}
\mathbb{E} \widehat{f}(0)=\int \frac{1}{h}K\left(\frac{x}{h}\right)f(x)dx+\frac{\epsilon}{1-\epsilon}\int \frac{1}{h}K\left(\frac{x}{h}\right)g(x)dx.
\end{align*}
Since $f\in\mathcal{P}(\beta_0,L_0)$ and $g\in\mathcal{P}(\beta_1,L_1)$, we have $|\int \frac{1}{h}K\left(\frac{x}{h}\right)(f(x)-f(0))dx|\lesssim h^{\beta_0}$ and $|\int \frac{1}{h}K\left(\frac{x}{h}\right)(g(x)-g(0))dx|\lesssim h^{\beta_1}$. See \cite[Chapter 1.2]{tsybakov09} for an explicit bias calculation. Adding up the two bias bounds, we get
\begin{equation}\label{eq:term2}
\left|\mathbb{E}\widehat{f}(0)-f(0)-\frac{\epsilon}{1-\epsilon}g(0)\right|\lesssim h^{\beta_0}+\epsilon h^{\beta_1}.
\end{equation}
For the last term, it is easy to see that
\begin{equation}\label{eq:term3}
\left(\frac{\epsilon}{1-\epsilon}g(0)\right)^2\lesssim \epsilon^2(m\wedge 1)^2,
\end{equation}
since $g(0)\leq m$ by the assumption and $g(0)\lesssim 1$ by the fact that $g\in\mathcal{P}(\beta_1,L_1)$.
With the relation $\mathbb{E}(A_1+A_2+A_3)^2\lesssim \mathbb{E}A_1^2+ \mathbb{E}A_2^2 + \mathbb{E}A_3^2$ and the three bounds in (\ref{eq:term1}), (\ref{eq:term2}) and (\ref{eq:term3}), we conclude the proof by the specific choice of $h=n^{-\frac{1}{2\beta_0+1}}\wedge n^{-\frac{1}{2\beta_1+1}}\epsilon^{-\frac{2}{2\beta_1+1}}$.
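To see that this choice of $h$ balances the three bounds (an elementary computation that we record for convenience), note that the squared bias satisfies $(h^{\beta_0}+\epsilon h^{\beta_1})^2\lesssim h^{2\beta_0}+\epsilon^2h^{2\beta_1}$, so the risk is bounded by a constant multiple of $\frac{1}{nh}+h^{2\beta_0}+\epsilon^2h^{2\beta_1}+\epsilon^2(m\wedge 1)^2$. The bandwidth $h_1=n^{-\frac{1}{2\beta_0+1}}$ balances the first two terms,
$$\frac{1}{nh_1}=h_1^{2\beta_0}=n^{-\frac{2\beta_0}{2\beta_0+1}},$$
and the bandwidth $h_2=n^{-\frac{1}{2\beta_1+1}}\epsilon^{-\frac{2}{2\beta_1+1}}=(n\epsilon^2)^{-\frac{1}{2\beta_1+1}}$ balances the first and the third,
$$\frac{1}{nh_2}=\epsilon^2h_2^{2\beta_1}=n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}.$$
With $h=h_1\wedge h_2$, the variance term $\frac{1}{nh}$ equals the larger of these two rates while both bias terms can only decrease, which yields the rate (\ref{eq:minimax-stru}).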
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:fixedbandwidth}]
The error decomposes as
\begin{align*}
\widehat f(0)-f(0)=(\widehat f(0)-\mathbb{E}\widehat f(0))+(\mathbb{E}\widehat f(0)-(1-\epsilon)f(0)-\epsilon g(0))+\epsilon(g(0)-f(0)).
\end{align*}
Using the same argument that leads to (\ref{eq:term1}), we have $\mathbb{E}(\widehat f(0)-\mathbb{E}\widehat f(0))^2\lesssim \frac{1}{nh}$ for the variance term.
The bias term $(\mathbb{E}\widehat f(0)-(1-\epsilon)f(0)-\epsilon g(0))$ can be further decomposed as
\begin{align*}
(1-\epsilon)\int \frac{1}{h}K\left(\frac{x}{h}\right)(f(x)-f(0))dx+\epsilon\int \frac{1}{h}K\left(\frac{x}{h}\right)(g(x)-g(0))dx.
\end{align*}
Therefore, the same argument that leads to (\ref{eq:term2}) also gives the bound
\begin{align*}
|\mathbb{E}\widehat f(0)-(1-\epsilon)f(0)-\epsilon g(0)|\lesssim h^{\beta_0} + \epsilon h^{\beta_1}.
\end{align*}
For the last term, we have
$\epsilon|g(0)-f(0)|\lesssim\epsilon$. Combining the three bounds above, we have
$$\mathbb{E}\left(\widehat f(0)-f(0)\right)^2\lesssim \frac{1}{nh}+h^{2\beta_0}+\epsilon^2.$$
Choose $h=n^{-\frac{1}{2\beta_0+1}}$, and the proof is complete.
\end{proof}
\subsection{Proof of Theorem \ref{thm:smoothlowerbound}}\label{sec:pf-lower-structure}
The proof of Theorem \ref{thm:smoothlowerbound} mainly relies on Le Cam's two-point argument. The method is summarized by the following lemma.
\begin{lemma}\label{lem:lowerbound}
Consider two distributions $P_{\theta_0}$ and $P_{\theta_1}$ whose parameters of interest are separated by $\Delta=|T_{\theta_0}-T_{\theta_1}|$. Assume
$\chi^2\left(P_{\theta_0},P_{\theta_1}\right)\leq \alpha$.
Then, we have
$$\inf_{\widehat{T}}\sup_{\theta\in\{\theta_0,\theta_1\}}\mathbb{E}_{\theta}\left(\widehat{T}-T_{\theta}\right)^2\geq \frac{1}{8}e^{-\alpha}\Delta^2.$$
\end{lemma}
We refer the readers to \cite{yu1997assouad} and \cite[Chapter 2.3]{tsybakov09} for rigorous proofs.
In the setting of Theorem \ref{thm:smoothlowerbound}, we need to find two pairs of density functions $(f,g)$ and $(\widetilde f,\widetilde g)$ that satisfy $f,\widetilde f\in \mathcal{P}(\beta_0,L_0)$, $g,\widetilde g\in \mathcal{P}(\beta_1,L_1)$ and $g(0)\vee \widetilde g(0)\leq m$. Since we are working with i.i.d. observations, it is sufficient to show that
\begin{align*}
\chi^2\left(p(\epsilon,\widetilde f,\widetilde g),p(\epsilon,f,g)\right)\lesssim n^{-1}.
\end{align*}
Then, Lemma \ref{lem:lowerbound} implies $\mathcal R(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\gtrsim |f(0)-\widetilde f(0)|^2$.
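For completeness, we record the standard tensorization identity behind this reduction. Writing $p=p(\epsilon,f,g)$ and $q=p(\epsilon,\widetilde f,\widetilde g)$, the chi-squared divergence between the $n$-fold product measures satisfies
$$\chi^2\left(q^{\otimes n},p^{\otimes n}\right)=\left(1+\chi^2(q,p)\right)^n-1\leq e^{n\chi^2(q,p)}-1,$$
which is bounded by a constant whenever $\chi^2(q,p)\lesssim n^{-1}$, so that Lemma \ref{lem:lowerbound} applies with a constant $\alpha$.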
The lower bound of Theorem \ref{thm:smoothlowerbound} contains three terms. We thus split the proof into three parts, and then combine the three arguments in the end.
\begin{lemma}\label{lem:term1}
We have
\begin{align*}
\mathcal R(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\gtrsim n^{-\frac{2\beta_0}{2\beta_0+1}}.
\end{align*}
\end{lemma}
\begin{proof}
The proof uses an argument similar to that in \cite[Chapter 2.5]{tsybakov09}. Since we are dealing with a setting with contamination, we still give a proof to be self-contained.
We define the following four functions,
\begin{align*}
g(x)&=\widetilde g(x)=c_{1}a(c_{1}x),\\
f(x)&=f_0(x),\\
\widetilde f(x)&=f_0(x)+c_{2}h^{\beta_0}b\left(\frac{x}{h}\right).
\end{align*}
Here, we take $f_0$ as the density function of some normal distribution with mean zero so that $f_0\in\mathcal{P}(\beta_0,L_0/2)$. The functions $a(x)$ and $b(x)$ are given by Lemma \ref{lem:a} and Lemma \ref{lem:b}.
We first verify that for appropriate choices of $c_1,c_2$ and $h\leq 1$, the constructed functions are well-defined densities in the desired parameter spaces.
\begin{itemize}
\item We have $f\in \mathcal{P}(\beta_0,L_0)$ by construction. Since $h\leq 1$, $b(x/h)$ is compactly supported on an area where $f_0$ is lower bounded by some positive constant. Thus, with a $c_2>0$ that is sufficiently small, $\widetilde f$ is nonnegative. The fact $\int \widetilde{f}=1$ can be derived from the property of $b$ in Lemma \ref{lem:b}. Hence, $\widetilde f\in \mathcal{P}(\beta_0,L_0)$ when $c_{2}$ is small enough.
\item With a sufficiently small $c_1>0$, we have $g,\widetilde{g}\in \mathcal{P}(\beta_1,L_1)$.
\item Since $a(0)=0$ according to Lemma \ref{lem:a}, we have $|g(0)|\vee |\widetilde g(0)|=0\leq m$.
\end{itemize}
We use the notation $p=(1-\epsilon)f+\epsilon g$ and $q=(1-\epsilon)\widetilde{f}+\epsilon\widetilde{g}$. Note that $p$ can be lower bounded by a positive constant on the interval $[-1,1]$ according to its definition. Moreover, we have
$$p(x)-q(x)=-(1-\epsilon)c_{2}h^{\beta_0}b\left(\frac{x}{h}\right),$$
and the support of $b\left(\frac{x}{h}\right)$ is $[-h,h]\subset[-1,1]$. This leads to the bound
$$\chi^2(q,p)=\int_{-1}^1\frac{(p-q)^2}{p}\lesssim \int (p-q)^2 \asymp h^{2\beta_0}\int b^2\left(\frac{x}{h}\right)\asymp {h}^{2\beta_0+1}.$$
In order that $n\chi^2(q,p)\lesssim 1$, we can choose $h=n^{-\frac{1}{2\beta_0+1}}$. This leads to
$$|f(0)-\widetilde f(0)|\asymp n^{-\frac{\beta_0}{2\beta_0+1}}.$$
Use Lemma \ref{lem:lowerbound}, and the proof is complete.
\end{proof}
\begin{lemma}\label{lem:term2}
We have
\begin{align*}
\mathcal R(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\gtrsim \epsilon^2(1\wedge m)^2.
\end{align*}
\end{lemma}
\begin{proof}
By \cite{tsybakov09}, for any $p\in\mathcal{P}(\beta,L)$, there exists a constant $p_{max}$ such that $\sup_x|p(x)|\leq p_{max}$. Therefore, it is sufficient to consider $m$ that is bounded by some constant, say $m\leq 1$.
Consider the following four functions,
\begin{align*}
f(x)&=f_0(x),\\
\widetilde f(x) &=f_0(x)+c_1\frac{\epsilon}{1-\epsilon}mb(x),\\
g(x)&=c_2a(c_2x)+c_1mb(x),\\
\widetilde g(x)&=c_2a(c_2x).
\end{align*}
Here, we take $f_0$ as the density function of some normal distribution with mean zero so that $f_0\in\mathcal{P}(\beta_0,L_0/2)$. The functions $a(x)$ and $b(x)$ are given by Lemma \ref{lem:a} and Lemma \ref{lem:b}.
With appropriate choices of the constants $c_1,c_2>0$, $f,\widetilde{f},g,\widetilde{g}$ are well-defined density functions that belong to the desired function classes.
\begin{itemize}
\item We have $f_0\in\mathcal{P}(\beta_0,L_0/2)\subset \mathcal{P}(\beta_0,L_0)$ by construction. Since $f_0$ is strictly positive on $[-1,1]$ and $b$ is compactly supported on $[-1,1]$, we have $\widetilde f\in \mathcal{P}(\beta_0,L_0)$ for some sufficiently small constant $c_1>0$ according to the properties of $b$ listed in Lemma \ref{lem:b}.
\item We have $\widetilde g \in \mathcal{P}(\beta_1,L_1/2)$ for some sufficiently small $c_2>0$ according to Lemma \ref{lem:a}. Since $b(x)$ only takes negative values where $c_2a(c_2x)$ is lower bounded by a positive constant, $g$ is nonnegative and $g\in \mathcal{P}(\beta_1,L_1)$ when $c_1$ is small enough.
\item We also have $|g(0)|\vee |\widetilde g(0)|\leq m$ for a sufficiently small $c_1$ because $a(0)=0$ and $|b(0)|$ is bounded by a constant according to Lemma \ref{lem:a} and Lemma \ref{lem:b}.
\end{itemize}
In summary, we have
$$(1-\epsilon)f+\epsilon g, (1-\epsilon)\widetilde{f}+\epsilon\widetilde{g}\in \mathcal{M}(\epsilon,\beta_0,\beta_1,L_0,L_1,m).$$
Moreover, according to our construction, we have
$$(1-\epsilon)f+\epsilon g= (1-\epsilon)\widetilde{f}+\epsilon\widetilde{g},$$
and
$$|f(0)-\widetilde{f}(0)|=c_1\frac{\epsilon}{1-\epsilon}m|b(0)|\gtrsim m\epsilon,$$
where we have used $|b(0)|\gtrsim 1$ by Lemma \ref{lem:b}.
Finally, using Lemma \ref{lem:lowerbound}, we obtain the desired lower bound result.
\end{proof}
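The heart of this two-point argument is that the two contaminated mixtures coincide exactly, so no test can tell them apart, while the targets $f(0)$ and $\widetilde f(0)$ differ by order $m\epsilon$. The toy check below (ours; the stand-in functions are hypothetical and are not those of Lemma \ref{lem:a} and Lemma \ref{lem:b}) verifies this algebra pointwise:

```python
import math

# Stand-ins (hypothetical): f0 is a Gaussian profile, bump plays the role of
# c1*m*b(x), and tail plays the role of c2*a(c2*x) in the construction.
eps = 0.2

def f0(x):
    return math.exp(-x * x)

def bump(x):
    return 0.05 * max(0.0, 1 - x * x)

def tail(x):
    return 0.3 * max(0.0, 1 - abs(x - 3))

def mix(x):        # (1 - eps) * f + eps * g with g = tail + bump
    return (1 - eps) * f0(x) + eps * (tail(x) + bump(x))

def mix_tilde(x):  # (1 - eps) * f_tilde + eps * g_tilde, bump moved into f_tilde
    return (1 - eps) * (f0(x) + eps / (1 - eps) * bump(x)) + eps * tail(x)

max_gap = max(abs(mix(x) - mix_tilde(x))
              for x in [-1.0, -0.5, 0.0, 0.5, 1.0, 3.0])
```

The two mixtures agree up to floating-point rounding, while $\widetilde f(0)-f(0)=\frac{\epsilon}{1-\epsilon}\,\mathrm{bump}(0)>0$, mirroring the separation $\Delta\gtrsim m\epsilon$ in the proof.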
\begin{lemma}\label{lem:term3}
Assume $\beta_1\leq \beta_0$ and $n\epsilon^2\geq 1$. Then, we have
\begin{align*}
\mathcal R(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\gtrsim n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}.
\end{align*}
\end{lemma}
\begin{proof}
Consider the following four functions,
\begin{align*}
f(x) &=f_0(x),\\
\widetilde f(x) &=f_0(x)+c_{2}\frac{\epsilon}{1-\epsilon}\left[h^{\beta_0}l\left(\frac{x}{h}\right)-h^{\beta_0}l\left(\frac{2(x-c_{4})}{h}\right)-h^{\beta_0}l\left(\frac{2(x+c_{4})}{h}\right)\right],\\
g(x)&=c_{1}a(c_{1}x)+c_{2}\left[h^{\beta_0}l\left(\frac{x}{h}\right)-h^{\beta_0}l\left(\frac{2(x-c_{4})}{h}\right)-h^{\beta_0}l\left(\frac{2(x+c_{4})}{h}\right)\right]-c_{3}\widetilde h^{\beta_1}b\left(\frac{x}{\widetilde h}\right),\\
\widetilde g(x)&=c_{1}a(c_{1}x).
\end{align*}
Since the proof relies on perturbing a density at a point where it is $0$, the verification of nonnegativity is more delicate, which motivates another tuning constant controlling the center of the negative part of the perturbation.
Here, we take $f_0$ as the density function of some normal distribution with mean zero so that $f_0\in\mathcal{P}(\beta_0,L_0/2)$. The functions $a(x)$ and $b(x)$ are given by Lemma \ref{lem:a} and Lemma \ref{lem:b}. The numbers $h$ and $\widetilde{h}$ are chosen so that the following equation is satisfied:
\begin{equation}\label{eq:relation}
c_{2}h^{\beta_0}l(0)=c_{3}\widetilde h^{\beta_1}b(0).
\end{equation}
Now, we verify that with appropriate choices of constants $c_1,c_2,c_3,c_4$, the constructed functions belong to the parameter spaces.
\begin{itemize}
\item The functions $f$ and $\widetilde g$ are automatically density functions by definition. Note that we can choose a small constant $c_4$ so that the negative perturbation $-h^{\beta_0}l\left(\frac{2(x-c_{4})}{h}\right)-h^{\beta_0}l\left(\frac{2(x+c_{4})}{h}\right)$ has a support in a region where both $f_0$ and $c_1a(c_1x)$ are bounded below by a positive constant. This immediately implies that $\widetilde{f}(x)\geq 0$ for all $x$ with a sufficiently small constant $c_2$. Similarly, the support of $-c_{3}\widetilde h^{\beta_1}b\left(\frac{x}{\widetilde h}\right)$ is $[-\widetilde{h},\widetilde{h}]$, which is contained in a region where $c_1a(c_1x)$ is bounded below by a positive constant for a sufficiently small $\widetilde{h}$. Therefore, $g(x)\geq 0$ for all $x$ with a sufficiently small constant $c_3$. We also note that $\int \widetilde{f}=\int g=1$ according to the definitions.
\item When $c_{1},c_{2},c_{3}$ are chosen small enough, we have $f,\widetilde f\in \mathcal{P}(\beta_0,L_0)$ and $g,\widetilde g \in \mathcal{P}(\beta_1,L_1)$. Here $g\in \mathcal{P}(\beta_1,L_1)$ is a consequence of the assumption that $\beta_1\leq\beta_0$.
\item Finally, we have $l(2c_4/h)=l(-2c_4/h)=0$ for a sufficiently small $h$. This implies $g(0)=\widetilde g(0)=0$ because of (\ref{eq:relation}). Therefore, $g(0)\vee \widetilde g(0)\leq m$.
\end{itemize}
In summary, we have
$$(1-\epsilon)f+\epsilon g, (1-\epsilon)\widetilde{f}+\epsilon\widetilde{g}\in \mathcal{M}(\epsilon,\beta_0,\beta_1,L_0,L_1,m).$$
Besides the properties listed above, we also note that both $f$ and $g$ can be bounded from below by some positive constant on the interval $[-1,1]$, if the constants $c_2,c_3$ are sufficiently small. This implies that the density $(1-\epsilon)f+\epsilon g$ is lower bounded by some positive constant on the interval $[-1,1]$.
Now, according to the above construction, for $p=(1-\epsilon)f+\epsilon g$ and $q=(1-\epsilon)\widetilde{f}+\epsilon\widetilde{g}$, we have
\begin{align*}
p(x)-q(x)=-\epsilon c_{3}\widetilde h^{\beta_1}b\left(\frac{x}{\widetilde h}\right).
\end{align*}
Given that the support of $b\left(\frac{x}{\widetilde h}\right)$ is within $[-\widetilde{h},\widetilde{h}]\subset[-1,1]$ with a sufficiently small $\widetilde{h}$, we have
$$\chi^2(q,p)=\int_{-1}^1\frac{(p-q)^2}{p}\lesssim \int (p-q)^2 \asymp \epsilon^2 \widetilde{h}^{2\beta_1}\int b^2\left(\frac{x}{\widetilde h}\right)\asymp \epsilon^2\widetilde{h}^{2\beta_1+1}.$$
In order that $n\chi^2(q,p)\lesssim 1$, it is sufficient to choose $\widetilde{h}\asymp\left(n\epsilon^2\right)^{-\frac{1}{2\beta_1+1}}$. The condition $n\epsilon^2\geq 1$ implies that $\widetilde{h}$ can be picked sufficiently small.
Moreover, with the relation (\ref{eq:relation}), we have
$$|f(0)-\widetilde{f}(0)|=c_2\frac{\epsilon}{1-\epsilon}h^{\beta_0}l(0)\asymp \epsilon h^{\beta_0}\asymp \epsilon \widetilde{h}^{\beta_1}\asymp\epsilon^{\frac{1}{2\beta_1+1}}n^{-\frac{\beta_1}{2\beta_1+1}}.$$
Finally, using Lemma \ref{lem:lowerbound}, we obtain the desired lower bound result.
\end{proof}
We combine the results of Lemma \ref{lem:term1}, Lemma \ref{lem:term2} and Lemma \ref{lem:term3}.
\begin{proof}[Proof of Theorem \ref{thm:smoothlowerbound}]
In order that the third term $n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}$ dominates the other two, it is necessary that $\epsilon^2\geq n^{\frac{2\beta_1-2\beta_0}{2\beta_0+1}}$. This implies both $\beta_1\leq \beta_0$ and $n\epsilon^2\geq 1$. By Lemma \ref{lem:term3}, we have
$$\mathcal R(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\gtrsim n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}.$$
When the first or the second term dominates, we use Lemma \ref{lem:term1} and Lemma \ref{lem:term2}, and obtain
$$\mathcal R(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\gtrsim [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge m)^2].$$
Hence, the proof is complete.
\end{proof}
\subsection{Proofs of Theorem \ref{thm:smadapt1} and Theorem \ref{thm:smadapt2}}\label{sec:pf-cri}
The proofs of both theorems rely on the following constrained risk inequality by \cite{brown1996constrained}.
\begin{lemma}\label{lem:crineq}
Consider two distributions $P_{\theta_0}$ and $P_{\theta_1}$ whose parameters of interest are separated by $\Delta=|T_{\theta_0}-T_{\theta_1}|$. For any estimator $\widehat{T}$, assume
$$
\mathbb{E}_{\theta_0}(\widehat T-T_{\theta_0})^2\leq \delta^2.
$$
Then, whenever $\delta I\leq \Delta$, we have
$$
\mathbb{E}_{\theta_1}(\widehat T-T_{\theta_1})^2\geq (\Delta-\delta I)^2,
$$
where $I=\sqrt{\int \frac{(dP_{\theta_1})^2}{dP_{\theta_0}}}$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:smadapt1}]
We consider the following four functions,
\begin{align*}
f&=f_0,\\
g&=c_1a(c_1x),\\
\widetilde f&= \frac{1-\epsilon}{1-\widetilde\epsilon}f_0+\frac{\epsilon-\widetilde\epsilon}{1-\widetilde\epsilon}c_1a(c_1x),\\
\widetilde g&=c_1a(c_1x).
\end{align*}
Here, we take $f_0$ as the density function of some normal distribution with mean zero so that $f_0\in\mathcal{P}(\beta_0,L_0/2)$. The function $a(\cdot)$ is given by Lemma \ref{lem:a}. The constant $c_1$ is sufficiently small so that $c_1a(c_1x)$ belongs to both $\mathcal{P}(\beta_0,L_0/2)$ and $\mathcal{P}(\beta_1,L_1/2)$. Now it is easy to check that $f,\widetilde f\in \mathcal{P}(\beta_0,L_0)$, $g,\widetilde g\in \mathcal{P}(\beta_1,L_1)$ and $g(0)\vee\widetilde g(0)=0\leq m$, so that the constructed functions are well-defined densities in the parameter spaces.
It is easy to check that
$$(1-\epsilon)f+\epsilon g=(1-\widetilde\epsilon)\widetilde{f}+\widetilde\epsilon \widetilde{g}.$$
This implies $\int q^2/p=1$ for $p=(1-\epsilon)f+\epsilon g$ and $q=(1-\widetilde\epsilon)\widetilde{f}+\widetilde\epsilon \widetilde{g}$. We also have
$$\left|f(0)-\widetilde{f}(0)\right|=\frac{\epsilon-\widetilde\epsilon}{1-\widetilde\epsilon}f(0).$$
According to Lemma \ref{lem:crineq}, if there is an estimator $\widehat{f}(0)$ that satisfies $\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\leq C\widetilde\epsilon^2$, then we must have
$$\mathbb{E}_{q^n}(\widehat{f}(0)-\widetilde{f}(0))^2\geq \left(\frac{\epsilon-\widetilde\epsilon}{1-\widetilde\epsilon}f(0)-C^{1/2}\widetilde\epsilon\right)^2.$$
Therefore, there exists a constant $C'>0$, such that for $\epsilon\geq C'\widetilde\epsilon$, $\mathbb{E}_{q^n}(\widehat{f}(0)-\widetilde{f}(0))^2\gtrsim {\epsilon}^2$, and the proof is complete.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:smadapt2}]
We construct the following four functions
\begin{align*}
\widetilde f(x) &=f_0(x),\\
f(x) &=f_0(x)-c_{2}\frac{\epsilon}{1-\epsilon}\left[h^{\beta_0}l\left(\frac{x}{h}\right)-h^{\beta_0}l\left(\frac{2(x-c_{4})}{h}\right)-h^{\beta_0}l\left(\frac{2(x+c_{4})}{h}\right)\right],\\
\widetilde g(x)&=c_{1}a(c_{1}x),\\
{g}(x)&=c_{1}a(c_{1}x)+c_{2}\left[h^{\beta_0}l\left(\frac{x}{h}\right)-h^{\beta_0}l\left(\frac{2(x-c_{4})}{h}\right)-h^{\beta_0}l\left(\frac{2(x+c_{4})}{h}\right)\right]-c_{3}\widetilde h^{\beta_1}b\left(\frac{x}{\widetilde h}\right).
\end{align*}
The construction is similar to that in the proof of Lemma \ref{lem:term3}. The difference is that the perturbation is now put on both ${f}$ and ${g}$. Here, we take $f_0$ as the density function of some normal distribution with mean zero so that $f_0\in\mathcal{P}(\beta_0,L_0/2)$. The functions $a(x)$ and $b(x)$ are given by Lemma \ref{lem:a} and Lemma \ref{lem:b}. The numbers $h$ and $\widetilde{h}$ are chosen so that the following equation is satisfied:
\begin{equation}\label{eq:relation2}
c_{2}h^{{\beta}_0}l(0)=c_{3}\widetilde h^{{\beta}_1}b(0).
\end{equation}
Similar to the argument used in Lemma \ref{lem:term3}, it is not hard to check that with appropriate choices of the constants $c_1,c_2,c_3$, we have $\widetilde f\in\mathcal{P}(\widetilde\beta_0,\widetilde L_0)$, $\widetilde g\in\mathcal{P}(\widetilde\beta_1,\widetilde L_1)$, ${f}\in\mathcal{P}({\beta}_0,{L}_0)$ and ${g}\in\mathcal{P}({\beta}_1,{L}_1)$, given that $\widetilde\beta_0\geq {\beta}_0\geq{\beta}_1$ and $\widetilde{\beta}_1>\beta_1$. The numbers $h$ and $\widetilde{h}$ are both required to be sufficiently small. We also have $g(0)=\widetilde{g}(0)=0$ according to the definition with an appropriate choice of $c_4$. Then, the constructed functions are well-defined densities in the parameter spaces.
With the notation $p=(1-\epsilon)f+\epsilon g$ and $q=(1-\epsilon)\widetilde{f}+\epsilon\widetilde{g}$, we check the quantities in Lemma \ref{lem:crineq}. Note that
$$|p(x)-q(x)|=c_{3}\epsilon\widetilde h^{\beta_1}b\left(\frac{x}{\widetilde h}\right).$$
With a similar argument in the proof of Lemma \ref{lem:term3}, the function $b\left(\frac{x}{\widetilde h}\right)$ is supported within $[-\widetilde{h},\widetilde{h}]\subset[-1,1]$, and $p(x)$ is lower bounded by some constant uniformly over $x\in[-1,1]$. This implies,
$$I=\left(\int\frac{q^2}{p}\right)^{\frac{n}{2}}=\left(1+\int \frac{(q-p)^2}{p}\right)^{\frac{n}{2}}\leq \exp\left(\frac{C_1n}{2}\int (p-q)^2\right)\leq \exp\left(C_1'n\epsilon^2\widetilde{h}^{2{\beta}_1+1}\right).$$
Moreover, we also have
$$\Delta=|f(0)-\widetilde{f}(0)|=c_2\frac{\epsilon}{1-\epsilon}h^{{\beta}_0}l(0),$$
and
$$\delta=C^{1/2}\left(\frac{n}{\log n}\right)^{-\frac{\widetilde\beta_1}{2\widetilde\beta_1+1}}\epsilon^{\frac{1}{2\widetilde\beta_1+1}}.$$
In order that $I\leq \left(\frac{n\epsilon^2}{\log n}\right)^c$ for some sufficiently small constant $c>0$, we can choose $\widetilde{h}\asymp \left(\frac{n\epsilon^2}{\log n}\right)^{-\frac{1}{2{\beta}_1+1}}$, which is always possible with the condition $n\epsilon^2\geq (\log n)^2$. According to the relation (\ref{eq:relation2}), we have $\Delta\asymp \epsilon^{\frac{1}{2{\beta}_1+1}}\left(\frac{n}{\log n}\right)^{-\frac{{\beta}_1}{2{\beta}_1+1}}$.
Plugging these quantities into the constrained risk inequality in Lemma \ref{lem:crineq} and using $\beta_1<\widetilde{\beta}_1$, we get the desired lower bound.
\end{proof}
\subsection{Proofs of Theorem \ref{thm:lepski2} and Theorem \ref{thm:lepski}}
The proofs of the two theorems are similar. Thus, we give a detailed proof of Theorem \ref{thm:lepski} first, and then sketch the proof of Theorem \ref{thm:lepski2}.
\begin{proof}[Proof of Theorem \ref{thm:lepski}]
For every bandwidth $h$, the error decomposes as
\begin{equation}\label{eq:riskdecomposition}
\widehat f_h(0)-f(0)=(\widehat f_h(0)-\mathbb{E}\widehat f_h(0))+(\mathbb{E}\widehat f_h(0)-(1-\epsilon)f(0)-\epsilon g(0))+\epsilon(g(0)-f(0)),
\end{equation}
where the three terms correspond to a stochastic part that depends on $h$, a deterministic part that depends on $h$, and a deterministic part that does not depend on $h$. With the same argument as in the proof of Theorem \ref{thm:fixedbandwidth}, we have
$$\mathbb{E}(\widehat f_h(0)-\mathbb{E}\widehat f_h(0))^2\lesssim \frac{1}{nh},$$
$$|\mathbb{E}\widehat f_h(0)-(1-\epsilon)f(0)-\epsilon g(0)|\lesssim h^{\beta_0} + \epsilon h^{\beta_1},$$
and
$$\epsilon|g(0)-f(0)|\lesssim\epsilon.$$
Define the oracle bandwidth $h_*$ to be the largest $h\in \mathcal{H}$ such that
\begin{align*}
h^{\beta_0} + \epsilon h^{\beta_1}\leq c\sqrt{\frac{\log n}{nh}},
\end{align*}
where the constant $c>0$ will be determined later.
Then it is easy to see that $h_*$ satisfies
\begin{equation}\label{eq:optimaltradeoff}
c'\sqrt{\frac{\log n}{nh_*}}\leq h_*^{\beta_0}+\epsilon h_*^{\beta_1} \leq c\sqrt{\frac{\log n}{nh_*}},
\end{equation}
for some constant $c'$ that only depends on $c$.
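The oracle bandwidth $h_*$ can be found by a direct scan over a dyadic grid, since the left side of the defining inequality increases in $h$ while the right side decreases; a minimal sketch (ours, with illustrative inputs):

```python
import math

def oracle_bandwidth(n, eps, beta0, beta1, c=1.0):
    """Largest dyadic h with h**beta0 + eps * h**beta1 <= c * sqrt(log(n)/(n*h)).

    The left side increases and the right side decreases in h, so the set of
    admissible bandwidths is an initial segment of the grid from below."""
    grid = [2.0 ** -k for k in range(int(math.log2(n)) + 1)]  # 1 down to ~1/n
    ok = [h for h in grid
          if h ** beta0 + eps * h ** beta1 <= c * math.sqrt(math.log(n) / (n * h))]
    return max(ok) if ok else min(grid)

h_star = oracle_bandwidth(1024, 0.0, 1.0, 1.0)  # no contamination: 0.125
```

Raising the contamination proportion shrinks the oracle bandwidth, reflecting the extra bias term $\epsilon h^{\beta_1}$ in the trade-off.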
We proceed to prove that $\widehat h\geq h_*$ with high probability. By the definition of $\widehat{h}$, we have
\begin{align*}
\mathbb{P}(\widehat h< h_*)&\leq \mathbb{P}\left(\exists l\leq h_*\text{ and }l\in\mathcal{H} \mbox{ s.t. } |\widehat f_{h_*}(0)-\widehat f_l(0)|> c_1\sqrt{\frac{\log n}{nl}}\right)\\
&\leq \sum_{l\leq h_*,l\in\mathcal{H}}\mathbb{P}\left(|\widehat f_{h_*}(0)-\widehat f_l(0)|> c_1\sqrt{\frac{\log n}{nl}}\right).
\end{align*}
We derive a bound for $\mathbb{P}\left(|\widehat f_{h_*}(0)-\widehat f_l(0)|> c_1\sqrt{\frac{\log n}{nl}}\right)$ for each $l\leq h_*$ and $l\in\mathcal{H}$.
Due to the error decomposition (\ref{eq:riskdecomposition}), we have:
\begin{align*}
|\widehat f_{h_*}(0)-\widehat f_l(0)|\leq C(h_*^{\beta_0}+\epsilon h_*^{\beta_1})+|\widehat f_{h_*}(0)-\mathbb{E}\widehat f_{h_*}(0)|+|\widehat f_l(0)-\mathbb{E}\widehat f_l(0)|,
\end{align*}
for some constant $C>0$.
By (\ref{eq:optimaltradeoff}),
the bias term can be controlled as
$$C(h_*^{\beta_0}+\epsilon h_*^{\beta_1})\leq C\times c\sqrt{\frac{\log n}{nh_*}}\leq \frac{c_1}{2}\sqrt{\frac{\log n}{nl}},$$
for a sufficiently small $c>0$.
Thus, we have
\begin{align*}
\mathbb{P}(\widehat h< h_*)&\leq \sum_{l\leq h_*,l\in\mathcal{H}}\mathbb{P}\left(|\widehat f_{h_*}(0)-\mathbb{E}\widehat f_{h_*}(0)|+|\widehat f_l(0)-\mathbb{E}\widehat f_l(0)|\geq \frac{c_1}{2}\sqrt{\frac{\log n}{nl}}\right) \\
& \leq \sum_{l\leq h_*,l\in\mathcal{H}}\mathbb{P}\left(|\widehat f_{h_*}(0)-\mathbb{E}\widehat f_{h_*}(0)|\geq \frac{c_1}{4}\sqrt{\frac{\log n}{nh_*}}\right) \\
& + \sum_{l\leq h_*,l\in\mathcal{H}}\mathbb{P}\left(|\widehat f_{l}(0)-\mathbb{E}\widehat f_{l}(0)|\geq \frac{c_1}{4}\sqrt{\frac{\log n}{nl}}\right).
\end{align*}
For any $l\leq h_*$ and $l\in\mathcal{H}$, we use Bernstein's inequality, and get
\begin{eqnarray*}
&& \mathbb{P}\left(|\widehat f_{l}(0)-\mathbb{E}\widehat f_{l}(0)|\geq t\right) \\
&\leq& \mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^nl^{-1}K(X_i/l)-\mathbb{E}l^{-1}K(X/l)\right|\geq t\right) \\
&\leq& 2\exp\left(-\frac{nt^2/2}{\sigma^2+Mt/3}\right),
\end{eqnarray*}
where we choose $t=\frac{c_1}{4}\sqrt{\frac{\log n}{nl}}$, and $\sigma^2$ and $M$ have bounds
$$\sigma^2\leq \mathbb{E}l^{-2}K^2(X/l)\lesssim l^{-1}\text{ and }M\lesssim l^{-1}.$$
This implies the bound
\begin{equation}
\mathbb{P}\left(|\widehat f_{l}(0)-\mathbb{E}\widehat f_{l}(0)|\geq \frac{c_1}{4}\sqrt{\frac{\log n}{nl}}\right)\leq 2\exp\left(-C'\log n\right),\label{eq:haoyang-nb}
\end{equation}
where the constant $C'>0$ can be arbitrarily large given a sufficiently large $c_1>0$.
For example, we set a large enough $c_1>0$ so that $C'=3$. This gives
$$\mathbb{P}(\widehat h< h_*)\leq 4|\mathcal{H}|n^{-3}\lesssim n^{-3}\log n.$$
Now, on the event $\{\widehat h\geq h_*\}$, the risk decomposes as
\begin{align*}
|\widehat f_{\widehat h}(0)-f(0)|\leq |\widehat f_{\widehat h}(0)-\widehat f_{h_*}(0)|+|\widehat f_{h_*}(0)-f(0)|.
\end{align*}
Due to the definition of $\widehat h$, the first term satisfies
\begin{equation}\label{eq:oraclebandwidth1}
|\widehat f_{\widehat h}(0)-\widehat f_{h_*}(0)|\leq c_1\sqrt{\frac{\log n}{nh_*}}.
\end{equation}
For the second term, the error decomposition and the relation (\ref{eq:optimaltradeoff}) imply
$$\mathbb{E}|\widehat f_{h_*}(0)-f(0)|^2\lesssim \frac{\log n}{nh_*}+\epsilon^2.$$
Therefore, we have
\begin{eqnarray*}
&& \mathbb{E}(\widehat f_{\widehat h}(0)-f(0))^2 \\
&\leq& \mathbb{E}((\widehat f_{\widehat h}(0)-f(0))^2:\widehat h\geq h_*)+\mathbb{E}((\widehat f_{\widehat h}(0)-f(0))^2:\widehat h< h_*)\\
&\leq& 2\mathbb{E}((\widehat f_{\widehat h}(0)-\widehat f_{h_*}(0))^2:\widehat h\geq h_*)+2\mathbb{E}((\widehat f_{h_*}(0)-f(0))^2:\widehat h\geq h_*)+O\left(n^2\mathbb{P}(\widehat h< h_*)\right)\\
&\lesssim& \frac{\log n}{nh_*} +\epsilon^2 + \frac{\log n}{n} \\
&\lesssim& \left(\frac{\log n}{n}\right)^{\frac{2\beta_0}{2\beta_0+1}}+\epsilon^2.
\end{eqnarray*}
The last inequality above is by realizing that $h_*\asymp \left(\frac{n}{\log n}\right)^{-\frac{1}{2\beta_0+1}}$ from the relation (\ref{eq:optimaltradeoff}). The proof is complete.
\end{proof}
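The bandwidth selection rule analyzed in this proof is a Lepski-type procedure: take the largest bandwidth in the grid that agrees with every smaller one up to the stochastic tolerance $c_1\sqrt{\log n/(nl)}$. The following minimal numerical sketch illustrates the rule; the Gaussian kernel, the dyadic grid, and the constant $c_1=1$ are our own illustrative choices, not prescribed by the theorem.

```python
import numpy as np

def kde_at_zero(x, h):
    # Kernel density estimate of f(0) with a Gaussian kernel and bandwidth h.
    return np.mean(np.exp(-(x / h) ** 2 / 2) / (h * np.sqrt(2 * np.pi)))

def lepski_bandwidth(x, c1=1.0):
    # \hat h = largest h in the grid such that, for every smaller l in the
    # grid, |f_h(0) - f_l(0)| <= c1 * sqrt(log n / (n l)).
    n = len(x)
    grid = sorted(2.0 ** (-j) for j in range(1, int(np.log2(n)) + 1))
    est = {h: kde_at_zero(x, h) for h in grid}
    h_hat = grid[0]
    for h in grid:  # increasing order, so the last admissible h wins
        if all(abs(est[h] - est[l]) <= c1 * np.sqrt(np.log(n) / (n * l))
               for l in grid if l <= h):
            h_hat = h
    return h_hat

rng = np.random.default_rng(0)
x = rng.normal(size=2000)  # clean N(0,1) sample, f(0) ~ 0.399
h_hat = lepski_bandwidth(x)
```

The selected $\widehat h$ plays the role of $\widehat h$ in the proof; the theoretical guarantees of course require $c_1$ to be large enough, as quantified above.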
\begin{proof}[Proof of Theorem \ref{thm:lepski2}]
The proof for Theorem \ref{thm:lepski2} follows the same argument as that of Theorem \ref{thm:lepski}. The only difference lies in the normalization, which leads to the error decomposition
$$\widehat{f}_h(0)-f(0)=(\widehat{f}_h(0)-\mathbb{E}\widehat{f}_h(0))+\left(\mathbb{E}\widehat{f}_h(0)-f(0)-\frac{\epsilon}{1-\epsilon}g(0)\right)+\frac{\epsilon}{1-\epsilon}g(0).$$
The rest of the details are the same and are omitted.
\end{proof}
\subsection{Proof of Theorem \ref{thm:minimax-arb}}\label{sec:pf-miss}
We split the proof into upper and lower bounds. We first prove the following upper bound.
\begin{thm}
For the estimator $\widehat{f}(0)=\widehat{f}_h(0)$ with some $K\in \mathcal{K}_{\left \lfloor{\beta_0}\right \rfloor}(L)$ and $h=n^{-\frac{1}{2\beta_0+1}}\vee \epsilon^{\frac{1}{\beta_0+1}}$, we have
$$\sup_{p(\epsilon,f,g)\in \mathcal M(\epsilon,\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\lesssim [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^{\frac{2\beta_0}{\beta_0+1}}].$$
\end{thm}
\begin{proof}
Decompose the error as
\begin{align*}
\widehat{f}_h(0)-f(0)=(\widehat{f}_h(0)-\mathbb{E}\widehat{f}_h(0))+(\mathbb{E}\widehat{f}_h(0)-f(0)),
\end{align*}
where the first term is the stochastic error and the second term is the bias.
For the first term, we have
\begin{align*}
\mathbb{E}(\widehat{f}_h(0)-\mathbb{E}\widehat{f}_h(0))^2=\frac{1}{n}\Var\left(\frac{1}{h}K\left(\frac{X}{h}\right)\right),
\end{align*}
and
\begin{eqnarray*}
\Var\left(\frac{1}{h}K\left(\frac{X}{h}\right)\right) &\leq& (1-\epsilon)\int\frac{1}{h^2}K^2\left(\frac{x}{h}\right)f(x)dx + \epsilon\int\frac{1}{h^2}K^2\left(\frac{x}{h}\right)dG(x) \\
&\lesssim& \frac{1}{h}\int \frac{1}{h}K^2\left(\frac{x}{h}\right)dx+\frac{\epsilon}{h^2}\int dG(x) \\
&\lesssim& \frac{1}{h} + \frac{\epsilon}{h^2}.
\end{eqnarray*}
Therefore, we have
\begin{equation}\label{eq:term1new}
\mathbb{E}(\widehat{f}_h(0)-\mathbb{E}\widehat{f}_h(0))^2\lesssim\frac{1}{nh}+\frac{\epsilon}{nh^2}.
\end{equation}
For the bias term, we have
\begin{align*}
\mathbb{E} \widehat{f}_h(0)-f(0)=(1-\epsilon)\int \frac{1}{h}K\left(\frac{x}{h}\right)(f(x)-f(0))dx+\epsilon\int \frac{1}{h}K\left(\frac{x}{h}\right)dG(x)-\epsilon f(0),
\end{align*}
where the first term has bound
$$\left|\int \frac{1}{h}K\left(\frac{x}{h}\right)(f(x)-f(0))dx\right|\lesssim h^{\beta_0},$$
by \cite[Chapter 1.2]{tsybakov09}, and the remaining two terms can be bounded as
$$\left|\epsilon\int \frac{1}{h}K\left(\frac{x}{h}\right)dG(x)-\epsilon f(0)\right|\lesssim \frac{\epsilon}{h}\int dG(x) +\epsilon f(0)\lesssim \frac{\epsilon}{h}.$$
Therefore, we have
\begin{equation}\label{eq:term2new}
|\mathbb{E}\widehat{f}_h(0)-f(0)|\lesssim h^{\beta_0}+\frac{\epsilon}{h}.
\end{equation}
Combining the two bounds (\ref{eq:term1new}) and (\ref{eq:term2new}) and choosing $h=n^{-\frac{1}{2\beta_0+1}}\vee \epsilon^{\frac{1}{\beta_0+1}}$ completes the proof.
\end{proof}
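For the reader's convenience, the balancing behind this choice of $h$ can be spelled out; the following is a routine verification with constants suppressed, not part of the original argument:

```latex
\begin{align*}
&\text{If } h=n^{-\frac{1}{2\beta_0+1}} \text{ (so that } \epsilon\leq h^{\beta_0+1}\text{):}
&& \frac{1}{nh}=h^{2\beta_0}=n^{-\frac{2\beta_0}{2\beta_0+1}},\quad
\frac{\epsilon}{h}\leq h^{\beta_0},\quad
\frac{\epsilon}{nh^2}\leq \frac{1}{nh}.\\
&\text{If } h=\epsilon^{\frac{1}{\beta_0+1}} \text{ (so that } h\geq n^{-\frac{1}{2\beta_0+1}}\text{):}
&& h^{\beta_0}=\frac{\epsilon}{h}=\epsilon^{\frac{\beta_0}{\beta_0+1}},\quad
\frac{1}{nh}\leq h^{2\beta_0},\quad
\frac{\epsilon}{nh^2}\leq h^{3\beta_0}.
\end{align*}
```

In either case, the variance bound (\ref{eq:term1new}) and the square of the bias bound (\ref{eq:term2new}) are both of order $n^{-\frac{2\beta_0}{2\beta_0+1}}\vee\epsilon^{\frac{2\beta_0}{\beta_0+1}}$.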
Now we state the lower bound.
\begin{thm}\label{thm:arbitrarylowerbound}
We have
$$
{\mathcal R}(\epsilon,\beta_0,L_0)\gtrsim [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^{\frac{2\beta_0}{\beta_0+1}}].
$$
\end{thm}
Before proving this theorem, we need the following lemma.
\begin{lemma}\label{lem:dif}
A function $d(x)$ can be written as the difference of two density functions if and only if
$$\int d=0\quad\text{and}\quad \int|d|\leq 2.$$
\end{lemma}
\begin{proof}
The ``only if'' part is obvious. Now assume the two conditions hold; then for any density function $f$, we have the following decomposition of $d$,
\begin{align*}
d=\left[d_++\left(1-\frac{1}{2}\int |d|\right)f\right]-\left[d_-+\left(1-\frac{1}{2}\int |d|\right)f\right],
\end{align*}
where $d_+$ and $d_-$ are the positive and negative parts of $d$. The first condition implies $\int d_+=\int d_-=\frac{1}{2}\int |d|$. Thus,
\begin{align*}
\int \left[d_++\left(1-\frac{1}{2}\int |d|\right)f\right]=\int\left[d_-+\left(1-\frac{1}{2}\int |d|\right)f\right]=1.
\end{align*}
The second condition guarantees that both $d_++\left(1-\frac{1}{2}\int |d|\right)f$ and $d_-+\left(1-\frac{1}{2}\int |d|\right)f$ are nonnegative. Thus, the proof is complete.
\end{proof}
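Lemma \ref{lem:dif} and its constructive proof can be checked numerically. The sketch below is our own illustration; the grid-based integration and the particular choice $d=\phi_{0,1}-\phi_{3,2}$ (a difference of two normal densities) are arbitrary. It verifies the two conditions and reconstructs the two densities exactly as in the proof.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-((x - mu) / sigma) ** 2 / 2) / (sigma * np.sqrt(2 * np.pi))

# Grid-based integration: integral(u) ~ sum(u) * dx on a wide grid.
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

f = normal_pdf(x, 0.0, 1.0)                            # reference density f
d = normal_pdf(x, 0.0, 1.0) - normal_pdf(x, 3.0, 2.0)  # candidate difference

# The two conditions of the lemma: \int d = 0 and \int |d| <= 2.
int_d = d.sum() * dx
tv = np.abs(d).sum() * dx

# Reconstruction from the proof: g1 - g2 = d with
# g1 = d_+ + (1 - tv/2) f  and  g2 = d_- + (1 - tv/2) f.
g1 = np.maximum(d, 0.0) + (1 - tv / 2) * f
g2 = np.maximum(-d, 0.0) + (1 - tv / 2) * f
```

Both reconstructed functions are nonnegative, integrate to one, and differ exactly by $d$, as the proof asserts.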
\begin{proof}[Proof of Theorem \ref{thm:arbitrarylowerbound}]
For the lower bound of the first term $n^{-\frac{2\beta_0}{2\beta_0+1}}$, see the proof of Lemma \ref{lem:term1}. We give a proof for the second term. Consider the following two functions
\begin{align*}
f&=f_0,\\
\widetilde f &=f_0+ch^{\beta_0}b\left(\frac{x}{h}\right).
\end{align*}
Here, we take $f_0$ as the density function of some normal distribution with mean zero so that $f_0\in\mathcal{P}(\beta_0,L_0/2)$. The function $b$ is defined in Lemma \ref{lem:b}.
The constant $c$ is chosen small enough so that $\widetilde f\in \mathcal{P}(\beta_0,L_0)$. In order that there exist $g$ and $\widetilde g$ so that
\begin{align*}
(1-\epsilon)f+\epsilon g=(1-\epsilon)\widetilde f+\epsilon\widetilde g,
\end{align*}
it suffices to verify the existence of densities $g$ and $\widetilde g$ such that
\begin{align*}
g(x)-\widetilde g(x)=\frac{1-\epsilon}{\epsilon}\left(\widetilde{f}(x)-f(x)\right)=c\frac{1-\epsilon}{\epsilon}h^{\beta_0}b\left(\frac{x}{h}\right).
\end{align*}
By Lemma \ref{lem:dif}, it further suffices to verify the condition
\begin{align*}
c\frac{1-\epsilon}{\epsilon}\int h^{\beta_0}\left|b\left(\frac{x}{h}\right)\right|dx\leq 2,
\end{align*}
and this is guaranteed by taking some $h\asymp \epsilon^{\frac{1}{\beta_0+1}}$. Now we have $g$ and $\widetilde{g}$ such that $(1-\epsilon)f+\epsilon g=(1-\epsilon)\widetilde f+\epsilon\widetilde g$ holds. Moreover,
$$|f(0)-\widetilde{f}(0)|=ch^{\beta_0}b(0)\asymp \epsilon^{\frac{\beta_0}{\beta_0+1}}.$$
Apply Lemma \ref{lem:lowerbound}, and the proof is complete.
\end{proof}
\subsection{Proofs of Theorem \ref{thm:lepski3} and Theorem \ref{thm:arbitrarylepski}}
We first prove Theorem \ref{thm:arbitrarylepski}. Then, the proof of Theorem \ref{thm:lepski3} will be sketched using arguments in the proofs of Theorem \ref{thm:lepski} and Theorem \ref{thm:arbitrarylepski}.
\begin{proof}[Proof of Theorem \ref{thm:arbitrarylepski}]
We consider observations $X_1,...,X_n$, and assume that $X_1,...,X_a$ are generated from the density $f$ while the remaining observations $X_{a+1},...,X_n$ are generated from the contamination, where $a\sim\text{Binomial}(n,1-\epsilon)$. This is without loss of generality, because the definition of $\widehat{f}$ does not depend on the order of the data $X_1,...,X_n$. Applying Bernstein's inequality, we get
$$\mathbb{P}\left(\frac{n-a}{n}\geq 2\epsilon\right)\leq \exp\left(-\frac{3}{8}n\epsilon\right).$$
From now on, we assume that $\epsilon\geq \frac{8\log n}{n}$, so that $\frac{n-a}{n}\leq 2\epsilon$ with probability at least $1-n^{-3}$. The case $\epsilon< \frac{8\log n}{n}$ will be considered at the end of the proof. Moreover, the following analysis conditions on the event $\left\{\frac{n-a}{n}\leq 2\epsilon\right\}$, and we use $\bar{\mathbb{P}}$ and $\bar{\mathbb{E}}$ to denote probability and expectation conditional on the random variable $a$.
We start with the following error decomposition,
\begin{eqnarray*}
\widehat f_h(0)-f(0) &=& \frac{1}{n}\sum_{i=1}^a\left(h^{-1}K(X_i/h)-\mathbb{E}_{X\sim f}h^{-1}K(X/h)\right) \\
&& + \frac{a}{n}\left(\mathbb{E}_{X\sim f}h^{-1}K(X/h) - f(0)\right) \\
&& + \frac{1}{n}\sum_{i=a+1}^n h^{-1}K(X_i/h) - \frac{n-a}{n}f(0).
\end{eqnarray*}
By arguments similar to those used in the proof of Theorem \ref{thm:upperbound}, we have
$$\bar{\mathbb{E}}\left(\frac{1}{n}\sum_{i=1}^a\left(h^{-1}K(X_i/h)-\mathbb{E}_{X\sim f}h^{-1}K(X/h)\right)\right)^2\lesssim \frac{1}{nh},$$
and
$$\left|\mathbb{E}_{X\sim f}h^{-1}K(X/h) - f(0)\right|\lesssim h^{\beta_0}.$$
Moreover, $\frac{n-a}{n}\leq 2\epsilon$ implies that
$$\frac{1}{n}\sum_{i=a+1}^n h^{-1}K(X_i/h)\lesssim \frac{\epsilon}{h},$$
and $\frac{n-a}{n}f(0)\lesssim \epsilon$. These bounds motivate us to define an oracle bandwidth $h_*$ that is the smallest $h\in\mathcal{H}$ such that
\begin{align*}
\frac{\epsilon}{h}+ \sqrt{\frac{\log n}{nh}}\leq h^{\widetilde{\beta}_0}.
\end{align*}
Then it is obvious that $h_*$ satisfies
\begin{equation}\label{eq:hstarstar}
ch_*^{\widetilde{\beta}_0}\leq\frac{\epsilon}{h_*}+ \sqrt{\frac{\log n}{nh_*}}\leq h_*^{\widetilde{\beta}_0},
\end{equation}
with some constant $c>0$. Now we prove that $\widehat h\leq h_*$ holds with high probability. According to the definition of $\widehat{h}$, we have
\begin{eqnarray*}
\bar{\mathbb{P}}(\widehat{h}>h_*) &\leq& \bar{\mathbb{P}}\left(\exists l\geq h_*\text{ and }l\in \mathcal{H}\text{ s.t. } |\widehat f_{h_*}(0)-\widehat f_l(0) |\geq c_1l^{\widetilde \beta_0}\right) \\
&\leq&\sum_{l\geq h_*,l\in\mathcal{H}}\bar{\mathbb{P}}\left(|\widehat f_{h_*}(0)-\widehat f_l(0) |\geq c_1l^{\widetilde \beta_0}\right).
\end{eqnarray*}
By the risk decomposition, for $l\geq h_*$ and $l\in\mathcal{H}$, the difference $|\widehat f_{h_*}(0)-\widehat f_l(0) |$ is bounded as
\begin{eqnarray*}
|\widehat f_{h_*}(0)-\widehat f_l(0)| &\leq& \left|\frac{1}{n}\sum_{i=1}^a\left(h_*^{-1}K(X_i/h_*)-\mathbb{E}_{X\sim f}h_*^{-1}K(X/h_*)\right)\right| \\
&& + \left|\frac{1}{n}\sum_{i=1}^a\left(l^{-1}K(X_i/l)-\mathbb{E}_{X\sim f}l^{-1}K(X/l)\right)\right| \\
&& + C\left(\frac{\epsilon}{h_*}+l^{\beta_0}\right),
\end{eqnarray*}
for some constant $C>0$. According to (\ref{eq:hstarstar}) and the condition $\widetilde{\beta}_0\leq \beta_0$, we have
$$C\left(\frac{\epsilon}{h_*}+l^{\beta_0}\right)\leq C\left(h_*^{\widetilde{\beta}_0}+l^{\widetilde{\beta}_0}\right)\leq \frac{c_1}{4}h_*^{\widetilde{\beta}_0}+\frac{c_1}{4}l^{\widetilde{\beta}_0},$$
where the last inequality holds for a sufficiently large $c_1$.
Thus, we have the bound
\begin{eqnarray*}
\bar{\mathbb{P}}(\widehat{h}>h_*) &\leq& \sum_{l\geq h_*,l\in\mathcal{H}}\bar{\mathbb{P}}\left(|\widehat f_{h_*}(0)-\widehat f_l(0) |\geq \frac{c_1}{2}l^{\widetilde \beta_0}+\frac{c_1}{2}h_*^{\widetilde \beta_0}\right) \\
&\leq& \sum_{l\geq h_*,l\in\mathcal{H}}\bar{\mathbb{P}}\left( \left|\frac{1}{n}\sum_{i=1}^a\left(h_*^{-1}K(X_i/h_*)-\mathbb{E}_{X\sim f}h_*^{-1}K(X/h_*)\right)\right|\geq \frac{c_1}{4}h_*^{\widetilde{\beta}_0}\right) \\
&& + \sum_{l\geq h_*,l\in\mathcal{H}}\bar{\mathbb{P}}\left( \left|\frac{1}{n}\sum_{i=1}^a\left(l^{-1}K(X_i/l)-\mathbb{E}_{X\sim f}l^{-1}K(X/l)\right)\right|\geq \frac{c_1}{4}l^{\widetilde{\beta}_0}\right) \\
&\leq& \sum_{l\geq h_*,l\in\mathcal{H}}\bar{\mathbb{P}}\left( \left|\frac{1}{n}\sum_{i=1}^a\left(h_*^{-1}K(X_i/h_*)-\mathbb{E}_{X\sim f}h_*^{-1}K(X/h_*)\right)\right|\geq \frac{c_1}{4}\sqrt{\frac{\log n}{nh_*}}\right) \\
&& + \sum_{l\geq h_*,l\in\mathcal{H}}\bar{\mathbb{P}}\left( \left|\frac{1}{n}\sum_{i=1}^a\left(l^{-1}K(X_i/l)-\mathbb{E}_{X\sim f}l^{-1}K(X/l)\right)\right|\geq \frac{c_1}{4}\sqrt{\frac{\log n}{nl}}\right),
\end{eqnarray*}
where the last inequality is by (\ref{eq:hstarstar}) and the observation that
$$l^{\widetilde{\beta}_0}\geq h_*^{\widetilde{\beta}_0}\geq \sqrt{\frac{\log n}{nh_*}}\geq \sqrt{\frac{\log n}{nl}}.$$
Using Bernstein's inequality in the same way as in the derivation of (\ref{eq:haoyang-nb}), we obtain the bound
$$\bar{\mathbb{P}}\left( \left|\frac{1}{n}\sum_{i=1}^a\left(l^{-1}K(X_i/l)-\mathbb{E}_{X\sim f}l^{-1}K(X/l)\right)\right|\geq \frac{c_1}{4}\sqrt{\frac{\log n}{nl}}\right)\leq 2n^{-3},$$
when the constant $c_1$ is chosen to be sufficiently large. Then, we have
$$\bar{\mathbb{P}}(\widehat{h}>h_*)\lesssim n^{-3}\log n.$$
On the event $\{\widehat h\leq h_*\}$, the error decomposes as
\begin{align*}
|\widehat f_{\widehat h}(0)-f(0)|\leq |\widehat f_{\widehat h}(0)-\widehat f_{h_*}(0)|+|\widehat f_{h_*}(0)-f(0)|.
\end{align*}
Due to the definition of $\widehat h$, the first term is bounded as
\begin{align*}
|\widehat f_{\widehat h}(0)-\widehat f_{h_*}(0)|\leq c_1h_*^{\widetilde \beta_0}.
\end{align*}
The second term involves only the oracle bandwidth $h_*$. Then, we have
\begin{align*}
\bar{\mathbb{E}}(\widehat f_{\widehat h}(0)-f(0))^2&\leq \bar{\mathbb{E}}((\widehat f_{\widehat h}(0)-f(0))^2:\widehat h\leq h_*)+\bar{\mathbb{E}}((\widehat f_{\widehat h}(0)-f(0))^2:\widehat h> h_*)\\
&\lesssim \bar{\mathbb{E}}((\widehat f_{\widehat h}(0)-\widehat f_{h_*}(0))^2:\widehat h\leq h_*)+\bar{\mathbb{E}}((\widehat f_{h_*}(0)-f(0))^2:\widehat h\leq h_*)+n^2\bar{\mathbb{P}}(\widehat h> h_*)\\
&\lesssim h_*^{2\widetilde \beta_0} + \frac{1}{nh_*} + h_*^{2\beta_0} + \frac{\epsilon^2}{h_*^2} + n^{-1}\log n\\
&\lesssim \left(\frac{\log n}{n}\right)^{\frac{2\widetilde\beta_0}{2\widetilde\beta_0+1}}\vee \epsilon^{\frac{2\widetilde\beta_0}{\widetilde\beta_0+1}},
\end{align*}
where we have used (\ref{eq:hstarstar}) in the last inequality. Integrating over the random variable $a$, we have
\begin{eqnarray}
\nonumber \mathbb{E}(\widehat f_{\widehat h}(0)-f(0))^2 &\leq& \mathbb{E}\left((\widehat f_{\widehat h}(0)-f(0))^2: \frac{n-a}{n}< 2\epsilon\right) + \mathbb{E}\left((\widehat f_{\widehat h}(0)-f(0))^2: \frac{n-a}{n}\geq 2\epsilon\right) \\
\nonumber &\lesssim& \left(\frac{\log n}{n}\right)^{\frac{2\widetilde\beta_0}{2\widetilde\beta_0+1}}\vee \epsilon^{\frac{2\widetilde\beta_0}{\widetilde\beta_0+1}} + n^2\mathbb{P}\left(\frac{n-a}{n}\geq 2\epsilon\right) \\
\label{eq:error-ac-bound0} &\lesssim& \left(\frac{\log n}{n}\right)^{\frac{2\widetilde\beta_0}{2\widetilde\beta_0+1}}\vee \epsilon^{\frac{2\widetilde\beta_0}{\widetilde\beta_0+1}}.
\end{eqnarray}
Finally, we consider the situation when $\epsilon< \frac{8\log n}{n}$. In this case, for any contamination distribution $g$, there is another $\widetilde{g}$ such that
$$(1-\epsilon)f+\epsilon g= \left(1-\frac{8\log n}{n}\right)f+\frac{8\log n}{n}\widetilde{g}.$$
See \cite{chen2015robust} for a rigorous argument of the above equality. Then, we can equivalently analyze the risk with contamination proportion $\frac{8\log n}{n}$. This leads to the error bound
\begin{equation}
\left(\frac{\log n}{n}\right)^{\frac{2\widetilde\beta_0}{2\widetilde\beta_0+1}}\vee \left(\frac{\log n}{n}\right)^{\frac{2\widetilde\beta_0}{\widetilde\beta_0+1}}\asymp \left(\frac{\log n}{n}\right)^{\frac{2\widetilde\beta_0}{2\widetilde\beta_0+1}}\asymp \left(\frac{\log n}{n}\right)^{\frac{2\widetilde\beta_0}{2\widetilde\beta_0+1}}\vee \epsilon^{\frac{2\widetilde\beta_0}{\widetilde\beta_0+1}}.\label{eq:error-ac-bound}
\end{equation}
Hence, we let $n\rightarrow \infty$ and $\epsilon\rightarrow 0$, and the proof is complete.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:lepski3}]
For the estimator that uses (\ref{eq:h-lep-beta}), the result is a special case of Theorem \ref{thm:arbitrarylepski} by letting $\widetilde{\beta}_0=\beta_0$ in view of the bounds (\ref{eq:error-ac-bound0}) and (\ref{eq:error-ac-bound}). For the estimator that uses (\ref{eq:h-lep-epsilon}), the result follows the same argument as the proof of Theorem \ref{thm:lepski}.
\end{proof}
\subsection{Proofs of Lemma \ref{thm:unidentifiable} and Theorem \ref{thm:impossible}}\label{sec:pf-imp}
\begin{proof}[Proof of Lemma \ref{thm:unidentifiable}]
We use $\phi(\cdot)$ to denote the density of $N(0,1)$. Then, define
\begin{align*}
f(x)=&c_{3}\phi(c_{3}x),\\
g(x)=&\frac{c_4}{\epsilon^{\frac{1}{\widetilde{\beta}_0+1}}}\phi\left(\frac{c_{4}x}{\epsilon^{\frac{1}{\widetilde{\beta}_0+1}}}\right),\\
\widetilde f(x)=&(1-\epsilon)f(x)+\epsilon g(x),\\
\widetilde g(x)=&\phi(x).
\end{align*}
First, there exists a constant $c_3$ depending on $c_1,c_2$ such that for any $\beta_0,\widetilde{\beta}_0\leq c_1$ and $L_0,\widetilde{L}_0\geq c_2$, we have $f\in \mathcal{P}(\beta_0,L_0)\cap \mathcal{P}(\widetilde{\beta}_0,\widetilde L_0/2)$. This is due to the fact that $\phi^{(\alpha)}(x)$ is uniformly bounded for all $\alpha\leq c$ when $c$ is some constant. By definition,
\begin{align*}
\epsilon g(x)=c_{4}\epsilon^{\frac{\widetilde{\beta}_0}{\widetilde{\beta}_0+1}}\phi\left(\frac{c_{4}x}{\epsilon^{\frac{1}{\widetilde{\beta}_0+1}}}\right).
\end{align*}
For the same reason as above, there exists a constant $c_4$ depending on $c_1,c_2$ such that for any $\widetilde{\beta}_0\leq c_1$ and $\widetilde{L}_0\geq c_2$, we have $\epsilon g\in \Sigma(\widetilde{\beta}_0,\widetilde L_0/2)$, which then implies $\widetilde f\in\mathcal{P}(\widetilde{\beta}_0,\widetilde L_0)$. Now we note that
$$(1-\epsilon)f+ \epsilon g = (1-0)\widetilde{f}+ 0\widetilde{g},$$
and
$$\left|\widetilde{f}(0)-f(0)\right|=\epsilon|f(0)-g(0)|\geq c_0\epsilon^{\frac{\widetilde{\beta}_0}{\widetilde{\beta}_0+1}},$$
when $\epsilon$ is smaller than a constant, where $c_0$ is a constant depending on $c_3$ and $c_4$.
Thus for any estimator $\widehat{f}(0)$,
$$\left[\sup_{p(\epsilon,f,g)\in\mathcal{M}(\epsilon,\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat{f}(0)-f(0)\right)^2\right]\vee\left[\sup_{p(0,f,g)\in\mathcal{M}(0,\widetilde{\beta}_0,\widetilde{L}_0)}\mathbb{E}_{p^n}\left(\widehat{f}(0)-f(0)\right)^2\right]\geq c_0\epsilon^{\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}},$$
by applying Lemma \ref{lem:lowerbound}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:impossible}]
For any constants $c_1,c_2$, let $c_0$ be the constant guaranteed to exist by Lemma \ref{thm:unidentifiable}, and assume there exists an estimator $\widehat f(0)$ that is $(c_1,c_2,c_3,r_1(\beta_0),r_2(\beta_0))$ rate adaptive. With $L_0=c_2$, we consider two models with respective parameters $(n,\epsilon,\beta_0,L_0)$ and $(n,0,\widetilde\beta_0,L_0)$, where the specific values of $n,\epsilon,\beta_0,\widetilde\beta_0$ will be chosen later. By the definition of rate adaptivity (\ref{eq:adaptive-def}), we have:
\begin{align*}
\sup_{p(\epsilon,f,g)\in\mathcal{M}(\epsilon,\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat{f}(0)-f(0)\right)^2&\leq c_3[\epsilon^{r_2(\beta_0)}\vee n^{-r_1(\beta_0)}],\\
\sup_{p(0,f,g)\in\mathcal{M}(0,\widetilde\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat{f}(0)-f(0)\right)^2&\leq c_3n^{-r_1(\widetilde\beta_0)}.
\end{align*}
On the other hand, Lemma \ref{thm:unidentifiable} asserts that for any small enough $\epsilon$, any large enough $n$, and any $\beta_0,\widetilde\beta_0\leq c_1$, we have
\begin{align*}
\sup_{p(\epsilon,f,g)\in\mathcal{M}(\epsilon,\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat{f}(0)-f(0)\right)^2\vee \sup_{p(0,f,g)\in\mathcal{M}(0,\widetilde\beta_0,L_0)}\mathbb{E}_{p^n}\left(\widehat{f}(0)-f(0)\right)^2& \geq c_0\epsilon^{\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}}.
\end{align*}
Together this yields
\begin{equation}\label{eq:contradiction}
c_3[\epsilon^{r_2(\beta_0)}\vee n^{-r_1(\beta_0)}\vee n^{-r_1(\widetilde\beta_0)}]\geq c_0\epsilon^{\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}}.
\end{equation}
Now we choose $n,\beta_0,\widetilde\beta_0,\epsilon$ within the allowed ranges so that this inequality is violated. First, we fix some $\beta_0\leq c_1$. Then we choose $\epsilon$ small enough that $c_3\epsilon^{r_2(\beta_0)}\leq c_0 \epsilon^a$ for some $a>0$; indeed, whenever $\epsilon^{r_2(\beta_0)}<\frac{c_0}{c_3}$, such an $a>0$ exists (take $a$ small enough). Since $a>0$, we can then choose $\widetilde\beta_0$ small enough that $\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}<a$. Finally, since $r_1(\beta_0),r_1(\widetilde\beta_0)>0$, we can choose $n$ large enough that $c_3[n^{-r_1(\beta_0)}\vee n^{-r_1(\widetilde{\beta}_0)}]< c_0\epsilon^{\frac{2\widetilde{\beta}_0}{\widetilde{\beta}_0+1}}$. With these choices, inequality (\ref{eq:contradiction}) is violated, which gives the desired contradiction.
\end{proof}
\subsection{Proof of Theorem \ref{thm:multi}}
The proofs are exactly the same as in the one-dimensional case. For the lower bounds, we only need to replace the mollifier function $l(x)$ by its multivariate extension $l_d(x)=l(\|x\|)$. The upper bounds are achieved by
$\widehat{f}_h(0)=\frac{1}{n(1-\epsilon)}\sum_{i=1}^n\frac{1}{h^d}K_d\left(\frac{X_i}{h}\right)$,
where the bandwidth is $h=n^{-\frac{1}{2\beta_0+d}}\wedge n^{-\frac{1}{2\beta_1+d}}\epsilon^{-\frac{2}{2\beta_1+d}}$ for structured contamination and is $h=n^{-\frac{1}{2\beta_0+d}}\vee \epsilon^{\frac{1}{\beta_0+d}}$ for arbitrary contamination. We can use a product kernel for $K_d$. See \cite[Chapter 12]{devroyecombinatorial} for details.
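To make the multivariate estimator concrete, here is a minimal numerical sketch; the Gaussian product kernel, the dimension $d=2$, the densities, and the use of the classical bandwidth term $n^{-1/(2\beta_0+d)}$ are our own illustrative choices.

```python
import numpy as np

def product_kernel_kde_at_zero(X, h, eps):
    # Multivariate estimator with a Gaussian product kernel:
    # (1/(n(1-eps))) * sum_i h^{-d} prod_j K(X_ij / h).
    n, d = X.shape
    kernel_vals = np.prod(np.exp(-(X / h) ** 2 / 2) / np.sqrt(2 * np.pi), axis=1)
    return kernel_vals.sum() / (n * (1 - eps) * h ** d)

rng = np.random.default_rng(2)
n, d, eps, beta0 = 200000, 2, 0.05, 1.0
clean = rng.normal(0.0, 1.0, size=(n, d))    # f = N(0, I_2), f(0) = 1/(2*pi)
contam = rng.normal(6.0, 1.0, size=(n, d))   # contamination far from the origin
X = np.where(rng.random((n, 1)) < eps, contam, clean)

# Classical bandwidth term n^{-1/(2*beta0+d)} (our choice for this sketch).
h = n ** (-1 / (2 * beta0 + d))
estimate = product_kernel_kde_at_zero(X, h, eps)
```

With contamination supported far from the origin, the estimate concentrates around $f(0)=1/(2\pi)\approx 0.159$.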
\section{Minimax Rates with Structured Contamination}\label{sec:ms}
\subsection{Results and Implications}\label{sec:minimax-s}
Consider i.i.d. observations $X_1,...,X_n\sim (1-\epsilon)f+\epsilon g$.
In other words, for every $i\in[n]$, we have $X_i\sim f$ with probability $1-\epsilon$ and $X_i\sim g$ with probability $\epsilon$, so that approximately $n\epsilon$ observations are unrelated to the density function $f$; these observations are referred to as contamination.
The goal is to estimate $f$ at a given point, and without loss of generality, we aim to estimate $f(0)$.
To study the fundamental limit of estimating $f$ with contaminated data, we need to specify appropriate regularity conditions on both $f$ and $g$.
We first define the H\"{o}lder class by
$$\Sigma(\beta,L)=\left\{f:\mathbb{R}\rightarrow\mathbb{R}\Bigg|\left|f^{(\floor{\beta})}(x_1)-f^{(\floor{\beta})}(x_2)\right|\leq L|x_1-x_2|^{\beta-\floor{\beta}}\text{ for any }x_1,x_2\in\mathbb{R}\right\}.$$
Here, $\beta$ stands for the smoothness parameter, and $L$ stands for the radius of the function space.
The H\"{o}lder class of density functions is defined as
$$\mathcal{P}(\beta,L)=\left\{f:\mathbb{R}\rightarrow[0,\infty)\Bigg| f\in\Sigma(\beta,L), \int f=1\right\}.$$
Finally, we define the class of mixtures in the form of $(1-\epsilon)f+\epsilon g$ by
$$\mathcal{M}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)=\left\{(1-\epsilon)f+\epsilon g\Big| f\in \mathcal{P}(\beta_0,L_0), g\in \mathcal{P}(\beta_1,L_1), g(0)\leq m\right\}.$$
This class is indexed by several numbers. Throughout the paper, we refer to $\epsilon$ as the contamination proportion and to $m$ as the contamination level at $0$. The pair $(\beta_0,L_0)$ controls the smoothness of the density function $f$ that we want to estimate, and the pair $(\beta_1,L_1)$ controls the smoothness of the contamination density $g$. Among the six numbers, $\epsilon$ and $m$ are allowed to depend on the sample size $n$, while $\beta_0,\beta_1,L_0,L_1$ are assumed throughout the paper to be constants that do not depend on $n$. It is also assumed that $\epsilon\leq 1/2$.
The minimax risk of estimation is defined as (notice that we suppress the dependence on $n$ for $\mathcal{R}$)
$$\mathcal{R}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)=\inf_{\widehat f(0)}\sup_{p(\epsilon,f,g)\in \mathcal M(\epsilon,\beta_0,\beta_1,L_0,L_1,m)}\mathbb{E}_{X_1,\dots,X_n\sim p}\left(\widehat f(0)-f(0)\right)^2,$$
where the notation $p(\epsilon,f,g)$ is used to denote the density $(1-\epsilon)f+\epsilon g$.
Later in the paper, we will shorthand $\mathbb{E}_{X_1,\dots,X_n\sim p}$ by $\mathbb{E}_{p^n}$.
Obviously, the minimax risk becomes smaller as $\epsilon$ gets smaller or $n$ gets larger. Besides $\epsilon$ and $n$, the other model indices are also expected to affect the difficulty of the problem, as follows.
\begin{itemize}
\item The smoothness of $f$: From classical density estimation theory, we know the smoother $f$ is, the easier it is to estimate $f(0)$.
\item The level of $g(0)$: Intuitively, the smaller $g(0)$ is, the smaller its influence is on $f(0)$, and thus the easier the problem is.
\item The smoothness of $g$: Intuitively, the smoother $g$ is, the less the contamination effect can spread, and thus the easier it is to account for the effect of $g$ in the contamination model.
\end{itemize}
We now present the minimax rate, which justifies the intuition above.
\begin{thm}\label{thm:minimax-rate}
Under the setting above, we have
\begin{equation}
\mathcal{R}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\asymp [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge m)^2]\vee[n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}].\label{eq:minimax-rate}
\end{equation}
In other words, $\mathcal{R}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)$ can be upper and lower bounded by the right hand side of (\ref{eq:minimax-rate}) up to a constant that only depends on $\beta_0,\beta_1,L_0,L_1$.
\end{thm}
Theorem \ref{thm:minimax-rate} completely characterizes the difficulty of estimating $f(0)$ with contaminated data. The three terms in the rate (\ref{eq:minimax-rate}) have different but very clear meanings. The first term $n^{-\frac{2\beta_0}{2\beta_0+1}}$ is the classical minimax rate of estimating a smooth function at a given point without contamination. The second term $\epsilon^2(1\wedge m)^2$ is the square of the product of the contamination proportion and the contamination level. The last term $n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}$ is perhaps the most interesting. Here the effect of $\epsilon$ is raised to an exponent depending on $\beta_1$, and this term captures the interaction between the contamination proportion and the contamination smoothness. The fact that it does not depend on $m$ implies that we have to pay this price with contaminated data even if $g(0)=0$.
To further understand the implications of Theorem \ref{thm:minimax-rate}, we present the following illustrative special cases of the minimax rate (\ref{eq:minimax-rate}). First, when $\epsilon=0$, we get
$$\mathcal{R}(0,\beta_0,\beta_1,L_0,L_1,m)\asymp n^{-\frac{2\beta_0}{2\beta_0+1}}.$$
This is simply the classical minimax rate of estimating $f(0)$ without contamination.
Next, to understand the role of $m$, we consider two extreme cases of $m=0$ and $m=\infty$. From (\ref{eq:minimax-rate}), we have
$$\mathcal{R}(\epsilon,\beta_0,\beta_1,L_0,L_1,0)\asymp [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}],$$
and
$$\mathcal{R}(\epsilon,\beta_0,\beta_1,L_0,L_1,\infty)\asymp [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee\epsilon^2.$$
The case of $m=0$ is particularly interesting. It implies $g(0)=0$, and one may expect the contamination to have no influence on the minimax rate. This intuition fails because of the term $n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}$. Since nonparametric estimation of $f(0)$ also depends on the values of the density function in a neighborhood of $0$, the contamination from $g$ can still affect this neighborhood even though $g(0)=0$. A smaller value of $\beta_1$ allows a greater perturbation by $g$ on the neighborhood of $0$. When $m=\infty$, the minimax rate has the simple form $[n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee\epsilon^2$: the influence of the contamination on the minimax rate is always $\epsilon^2$, regardless of the smoothness $\beta_1$.
Finally, we consider the cases of $\beta_1=0$ and $\beta_1=\infty$. Strictly speaking, the H\"{o}lder class $\Sigma(\beta_1,L_1)$ with $\beta_1=\infty$ is not well defined, but the discussion below still holds for a sufficiently large constant $\beta_1$. From (\ref{eq:minimax-rate}), we have
$$\mathcal{R}(\epsilon,\beta_0,0,L_0,L_1,m)\asymp [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee\epsilon^2,$$
and
$$\mathcal{R}(\epsilon,\beta_0,\infty,L_0,L_1,m)\asymp [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge m)^2].$$
The influence of the contamination takes the forms of $\epsilon^2$ and $\epsilon^2(1\wedge m)^2$ for the two extreme cases. This immediately implies that for any values of $\epsilon,\beta_0,\beta_1,L_0,L_1,m$, we have
$$[n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge m)^2]\lesssim \mathcal{R}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\lesssim [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee\epsilon^2.$$
In other words, the influence of contamination on the minimax rate is sandwiched between $\epsilon^2(1\wedge m)^2$ and $\epsilon^2$.
\subsection{Upper Bounds}
The minimax rate (\ref{eq:minimax-rate}) can be achieved by a simple kernel density estimator that takes the form
\begin{equation}
\widehat{f}_h(0)=\frac{1}{n(1-\epsilon)}\sum_{i=1}^n\frac{1}{h}K\left(\frac{X_i}{h}\right).\label{eq:def-KDE}
\end{equation}
This estimator differs slightly from the classical kernel density estimator in that it is normalized by $\frac{1}{n(1-\epsilon)}$ instead of $\frac{1}{n}$. Knowledge of the contamination proportion $\epsilon$ is critical for achieving the minimax rate (\ref{eq:minimax-rate}): we will show in Section \ref{sec:un-epsilon} that the rate (\ref{eq:minimax-rate}) cannot be achieved if $\epsilon$ is unknown.
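As a concrete illustration of the estimator (\ref{eq:def-KDE}) (a numerical sketch of our own; the Gaussian kernel, the Gaussian contamination, and the bandwidth are illustrative choices), the normalization by $1-\epsilon$ corrects the downward bias of the plain average when the contamination places little mass near $0$:

```python
import numpy as np

def kde_at_zero_debiased(x, h, eps):
    # Estimator (eq:def-KDE): kernel density estimate at 0, normalized by
    # 1/(n(1-eps)) rather than 1/n; eps is the known contamination proportion.
    kernel_vals = np.exp(-(x / h) ** 2 / 2) / np.sqrt(2 * np.pi)  # Gaussian K
    return kernel_vals.sum() / (len(x) * (1 - eps) * h)

rng = np.random.default_rng(1)
n, eps = 50000, 0.1
clean = rng.normal(0.0, 1.0, size=n)    # f = N(0,1), so f(0) = 0.3989...
contam = rng.normal(5.0, 1.0, size=n)   # g = N(5,1), negligible mass near 0
x = np.where(rng.random(n) < eps, contam, clean)

estimate = kde_at_zero_debiased(x, h=n ** (-1 / 3), eps=eps)
```

If the contamination does put mass near $0$, the extra $\frac{\epsilon}{1-\epsilon}g(0)$ term in the decomposition (\ref{eq:error-decomp}) appears, exactly as the bias analysis predicts.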
We introduce the following class of kernel functions.
\begin{eqnarray*}
\mathcal{K}_{l}(L) &=& \Bigg\{K:\mathbb{R}\rightarrow\mathbb{R}\Big| \int K=1, \int x^jK(x)dx=0 \text{ for all } j\in[l], \\
&& \quad\quad\|K\|_{\infty}\vee \int K^2 \vee \int|x|^l|K(x)|dx\leq L \Bigg\}.
\end{eqnarray*}
The class $\mathcal{K}_l(L)$ collects all bounded and square-integrable kernel functions of order $l$. The number $L>0$ is assumed to be a constant throughout the paper. We refer to \cite{devroyecombinatorial} for examples of kernel functions in the class $\mathcal{K}_l(L)$.
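As an illustration (a hypothetical sketch, not the paper's code), the estimator (\ref{eq:def-KDE}) together with the bandwidth of Theorem \ref{thm:upperbound} can be written in a few lines; the Gaussian kernel used here is an order-1 member of $\mathcal{K}_1(L)$ for $L\geq 1$, so it only covers small smoothness, and the function names are our own:

```python
import numpy as np

def gauss_kernel(u):
    # Gaussian kernel: integrates to 1 and has zero first moment (order 1).
    return np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)

def kde_at_zero(X, h, eps, K=gauss_kernel):
    # Estimator of f(0): note the 1/(n(1-eps)) normalization,
    # which requires knowing the contamination proportion eps.
    n = len(X)
    return float(np.sum(K(X / h) / h)) / (n * (1 - eps))

def minimax_bandwidth(n, eps, b0, b1):
    # h = n^{-1/(2b0+1)} ∧ n^{-1/(2b1+1)} * eps^{-2/(2b1+1)}, as in the theorem.
    return min(n ** (-1 / (2 * b0 + 1)),
               n ** (-1 / (2 * b1 + 1)) * eps ** (-2 / (2 * b1 + 1)))
```

For very small $\epsilon$ the second candidate bandwidth blows up, so the minimum recovers the classical choice $n^{-1/(2\beta_0+1)}$.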
\begin{thm}\label{thm:upperbound}
For the estimator $\widehat{f}(0)=\widehat{f}_h(0)$ with some $K\in \mathcal{K}_{\left \lfloor{\beta_0\vee \beta_1}\right \rfloor}(L)$ and $h=n^{-\frac{1}{2\beta_0+1}}\wedge n^{-\frac{1}{2\beta_1+1}}\epsilon^{-\frac{2}{2\beta_1+1}}$, we have
$$\sup_{p(\epsilon,f,g)\in \mathcal M(\epsilon,\beta_0,\beta_1,L_0,L_1,m)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\lesssim [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge m)^2]\vee[n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}].$$
\end{thm}
Theorem \ref{thm:upperbound} reveals an interesting choice of the bandwidth
$h=n^{-\frac{1}{2\beta_0+1}}\wedge n^{-\frac{1}{2\beta_1+1}}\epsilon^{-\frac{2}{2\beta_1+1}}$.
Compared with the optimal bandwidth of order $n^{-\frac{1}{2\beta_0+1}}$ in classical nonparametric function estimation, the bandwidth $h$ in the structured contamination setting is always smaller. This choice of bandwidth is a consequence of the specific bias-variance tradeoff under the structured contamination model. As an interesting contrast, in the case of arbitrary contamination, the optimal choice of bandwidth is always larger than the usual one; see Section \ref{sec:ma}.
The error bound in Theorem \ref{thm:upperbound} can be found through a classical bias-variance tradeoff argument. We can decompose the difference $\widehat{f}(0)-f(0)$ as
\begin{equation}
(\widehat{f}(0)-\mathbb{E}\widehat{f}(0))+\left(\mathbb{E}\widehat{f}(0)-f(0)-\frac{\epsilon}{1-\epsilon}g(0)\right)+\frac{\epsilon}{1-\epsilon}g(0).\label{eq:error-decomp}
\end{equation}
Here, the first term is the stochastic error. The second term gives the approximation error of the kernel convolution. The last term is caused by the contamination at $0$. Direct analysis of the three terms gives the bound
\begin{equation}
\mathbb{E}\left(\widehat f(0)-f(0)\right)^2\lesssim \frac{1}{nh}+(h^{2\beta_0}+\epsilon^2h^{2\beta_1})+\epsilon^2(m\wedge 1)^2.\label{eq:RD-s}
\end{equation}
Now with the choice $h=n^{-\frac{1}{2\beta_0+1}}\wedge n^{-\frac{1}{2\beta_1+1}}\epsilon^{-\frac{2}{2\beta_1+1}}$, we obtain the error bound in Theorem \ref{thm:upperbound}. For detailed derivation, see the proof of Theorem \ref{thm:upperbound} in Section \ref{sec:pf-upper-structure}.
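As a quick numeric sanity check (ours, not part of the paper), one can verify that plugging the stated bandwidth into the right-hand side of (\ref{eq:RD-s}) is dominated, up to the constant factor $4$, by the claimed rate; the function names below are our own:

```python
def error_bound(n, eps, b0, b1, m):
    # RHS of the bias-variance bound (eq:RD-s) with the bandwidth from the theorem.
    h = min(n ** (-1 / (2 * b0 + 1)),
            n ** (-1 / (2 * b1 + 1)) * eps ** (-2 / (2 * b1 + 1)))
    return (1 / (n * h) + h ** (2 * b0) + eps ** 2 * h ** (2 * b1)
            + eps ** 2 * min(m, 1.0) ** 2)

def minimax_rate(n, eps, b0, b1, m):
    return max(n ** (-2 * b0 / (2 * b0 + 1)),
               eps ** 2 * min(m, 1.0) ** 2,
               n ** (-2 * b1 / (2 * b1 + 1)) * eps ** (2 / (2 * b1 + 1)))

# Each of the four terms in the bound is dominated by one term of the rate,
# so the bound never exceeds 4 times the rate.
for n in (10 ** 3, 10 ** 6):
    for eps in (1e-4, 1e-2, 0.3):
        for b0, b1 in ((0.5, 2.0), (1.0, 1.0), (3.0, 0.5)):
            for m in (0.1, 10.0):
                assert error_bound(n, eps, b0, b1, m) <= 4 * minimax_rate(n, eps, b0, b1, m)
```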
\subsection{Lower Bounds}
In this section, we study the lower bound part of the minimax rate (\ref{eq:minimax-rate}). We first state a theorem.
\begin{thm}\label{thm:smoothlowerbound}
We have
$$
\mathcal R(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\gtrsim [n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge m)^2]\vee[n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}].
$$
\end{thm}
The first term $n^{-\frac{2\beta_0}{2\beta_0+1}}$ is the classical minimax lower bound for nonparametric estimation. Thus, we will only give here an overview of how to derive the second and the third terms.
Two specific functions are used as building blocks for our construction, and their definitions and properties are summarized in the following two lemmas.
\begin{lemma}\label{lem:a}
Let $l(x)=e^{-\frac{1}{1-x^2}}\mathbbm{1}_{\{|x|\leq 1\}}$. Define
$$a(x)=\begin{cases}
c_0l(x+1), & -2\leq x\leq 0, \\
c_0l(x-1), & 0\leq x\leq 2.
\end{cases}$$
The constant $c_0$ is chosen so that $\int a=1$. The function $a$ satisfies the following properties:
\begin{enumerate}
\item $a$ is an even density function compactly supported on $[-2,2]$.
\item $a(0)=0$.
\item For any constants $\beta,L>0$, there exists a constant $c>0$, such that $ca(cx)\in \mathcal{P}(\beta,L)\cap \mathcal{C}^{\infty}(\mathbb R)$.
\item For any small constant $c>0$, $a$ is uniformly lower bounded by a positive constant on $[-1,-c]\cup [c,1]$, and it is uniformly upper bounded by a positive constant on $\mathbb{R}$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{lem:b}
Let $l(x)=e^{-\frac{1}{1-x^2}}\mathbbm{1}_{\{|x|\leq 1\}}$. Define
$$b(x)=\begin{cases}
-l\left(4x+3\right), & -1\leq x\leq -\frac{1}{2}, \\
l(2x), & |x|\leq \frac{1}{2},\\
-l\left(4x-3\right), & \frac{1}{2}\leq x\leq 1.
\end{cases}$$
It satisfies the following properties:
\begin{enumerate}
\item $b$ is an even function compactly supported on $[-1,1]$.
\item For any $\beta,L>0$, there exists a constant $c>0$ such that $cb\in \Sigma(\beta,L)\cap \mathcal{C}^{\infty}(\mathbb R)$.
\item $b$ is uniformly lower bounded by a positive constant on $[-\frac{1}{4},\frac{1}{4}]$, and $|b|$ is uniformly upper bounded by a positive constant on $\mathbb{R}$.
\item $\int b=0$.
\end{enumerate}
\end{lemma}
The proofs of both the second and the third terms in the lower bound involve careful constructions of two pairs of densities $(f,g)$ and $(\widetilde{f},\widetilde{g})$. In order to show $\mathcal{R}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\gtrsim \epsilon^2(1\wedge m)^2$, we consider the following constructions,
\begin{align*}
f(x)&=f_0(x),\\
\widetilde f(x) &=f_0(x)+c_1\frac{\epsilon}{1-\epsilon}(m\wedge 1)b(x),\\
g(x)&=c_2a(c_2x)+c_1(m\wedge 1)b(x),\\
\widetilde g(x)&=c_2a(c_2x).
\end{align*}
Here, the constants $c_1,c_2$ are chosen so that the constructed functions $f,\widetilde{f},g,\widetilde{g}$ are well-defined densities in the desired parameter spaces. It is easy to check that with the above construction,
$$(1-\epsilon)f+\epsilon g=(1-\epsilon)\widetilde{f}+\epsilon \widetilde{g}.$$
This implies that in the presence of contamination, an estimator $\widehat{f}(0)$ cannot distinguish between the two data generating processes $(1-\epsilon)f+\epsilon g$ and $(1-\epsilon)\widetilde{f}+\epsilon \widetilde{g}$. As a consequence, an error of order $|f(0)-\widetilde{f}(0)|^2\asymp \epsilon^2(1\wedge m)^2$ cannot be avoided.
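The cancellation can also be checked numerically. The sketch below (an illustration under simplifying assumptions that are ours, not the paper's: unnormalized constants, $c_2=1$, and a standard normal $f_0$) implements the bump functions of Lemma \ref{lem:a} and Lemma \ref{lem:b} and verifies that the two mixtures coincide:

```python
import numpy as np

def l(x):
    # Smooth bump l(x) = exp(-1/(1-x^2)) on (-1,1), zero outside.
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def a(x):
    # Lemma (a) shape (unnormalized): even, supported on [-2,2], a(0)=0.
    return l(x + 1.0) + l(x - 1.0)

def b(x):
    # Lemma (b): even, supported on [-1,1], integral zero, b(0)>0.
    return l(2.0 * x) - l(4.0 * x + 3.0) - l(4.0 * x - 3.0)

x = np.linspace(-3.0, 3.0, 40001)
eps, m, c1 = 0.1, 0.5, 0.2
mm = min(m, 1.0)
f0 = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)  # illustrative choice of f0

f = f0
f_tilde = f0 + c1 * eps / (1 - eps) * mm * b(x)
g = a(x) + c1 * mm * b(x)          # c2 = 1 for simplicity
g_tilde = a(x)

mix = (1 - eps) * f + eps * g
mix_tilde = (1 - eps) * f_tilde + eps * g_tilde
# mix and mix_tilde agree pointwise, so no estimator can tell f from f_tilde.
```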
The derivation of the lower bound $\mathcal{R}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)\gtrsim n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}$ is more intricate. Consider the following four functions,
\begin{align*}
f(x) &=f_0(x),\\
\widetilde f(x) &=f_0(x)+\frac{\epsilon}{1-\epsilon}{\color{blue}c_{2}\left[h^{\beta_0}l\left(\frac{x}{h}\right)-h^{\beta_0}l\left(\frac{2(x-c_{4})}{h}\right)-h^{\beta_0}l\left(\frac{2(x+c_{4})}{h}\right)\right]},\\
g(x)&=c_{1}a(c_{1}x){+\color{blue}c_{2}\left[h^{\beta_0}l\left(\frac{x}{h}\right)-h^{\beta_0}l\left(\frac{2(x-c_{4})}{h}\right)-h^{\beta_0}l\left(\frac{2(x+c_{4})}{h}\right)\right]}{-\color{red}c_{3}\widetilde h^{\beta_1}b\left(\frac{x}{\widetilde h}\right)},\\
\widetilde g(x)&=c_{1}a(c_{1}x),
\end{align*}
where the definitions of the functions $l,a,b$ are given in Lemma \ref{lem:a} and Lemma \ref{lem:b}. Again, the constants $c_1,c_2,c_3,c_4$ are chosen properly so that the constructed functions are well-defined densities in the desired function classes.
A dominant feature of this construction is that $g$ is a perturbation of $\widetilde{g}$ at two levels, with bandwidths $h$ and $\widetilde{h}$ respectively, whereas the usual lower bound proofs in nonparametric estimation perturb a function at a single bandwidth level. The first level of perturbation $h^{\beta_0}l\left(\frac{x}{h}\right)$ serves to cancel the effect of the corresponding perturbation on $f$, while the second perturbation $-\widetilde{h}^{\beta_1}b\left(\frac{x}{\widetilde h}\right)$ serves to ensure the constraint on the contamination level. Indeed, if we relate $h$ and $\widetilde{h}$ through the equation $h^{\beta_0}\asymp \widetilde{h}^{\beta_1}$, then it follows directly that $\widetilde{g}(0)=g(0)=0$. In other words, the constructed contamination density functions $g$ and $\widetilde{g}$ both have contamination level $0$. An illustration of this construction with a two-level perturbation is given in Figure \ref{fig:hyl}.
\begin{figure}
\caption{An illustration of the construction of $g$.}\label{fig:hyl}
\centering
\includegraphics[width=7.5cm,height=5cm]{g}
\end{figure}
The colors of the plot correspond to those in the formulas.
With the above construction, it is not hard to check that
$$p(\epsilon,f,g)-p(\epsilon,\widetilde{f},\widetilde{g})=-c_{3}\epsilon\widetilde h^{\beta_1}b\left(\frac{x}{\widetilde h}\right).$$
In order that an estimator cannot distinguish between the two densities $p(\epsilon,f,g)=(1-\epsilon)f+\epsilon g$ and $p(\epsilon,\widetilde{f},\widetilde{g})=(1-\epsilon)\widetilde{f}+\epsilon \widetilde g$, a sufficient condition is $\chi^2\left(p(\epsilon,\widetilde{f},\widetilde{g}),p(\epsilon,f,g)\right)\lesssim n^{-1}$ (see Lemma \ref{lem:lowerbound}), which leads to the choice of $\widetilde{h}$ at the order $\widetilde{h}\asymp\left(n\epsilon^2\right)^{-\frac{1}{2\beta_1+1}}$. As a consequence, an error of order
$$|f(0)-\widetilde{f}(0)|^2\asymp \epsilon^2 h^{2\beta_0}\asymp \epsilon^2 \widetilde{h}^{2{\beta}_1}\asymp\epsilon^{\frac{2}{2\beta_1+1}}n^{-\frac{2\beta_1}{2\beta_1+1}}$$
cannot be avoided. A rigorous proof of Theorem \ref{thm:smoothlowerbound} will be given in Section \ref{sec:pf-lower-structure}.
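The order of $\widetilde{h}$ above can be seen from a heuristic $\chi^2$ computation (a sketch, under the assumption that the mixture density $p(\epsilon,f,g)$ is bounded away from zero on the support of the perturbation):

```latex
\chi^2\left(p(\epsilon,\widetilde{f},\widetilde{g}),\,p(\epsilon,f,g)\right)
  \lesssim \int \frac{c_3^2\,\epsilon^2\,\widetilde{h}^{2\beta_1}\,b\bigl(x/\widetilde{h}\bigr)^2}
                     {p(\epsilon,f,g)(x)}\,dx
  \lesssim \epsilon^2\,\widetilde{h}^{2\beta_1+1},
```

so equating $\epsilon^2\widetilde{h}^{2\beta_1+1}\asymp n^{-1}$ gives the stated order $\widetilde{h}\asymp(n\epsilon^2)^{-\frac{1}{2\beta_1+1}}$.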
\section{Adaptation Theory with Structured Contamination}\label{sec:as}
\subsection{Summary of Results}
To achieve the minimax rate in Theorem \ref{thm:minimax-rate}, the kernel density estimator (\ref{eq:def-KDE}) requires the knowledge of contamination proportion $\epsilon$ and smoothness $(\beta_0,\beta_1)$. In this section, we discuss adaptive procedures to estimate $f(0)$ without the knowledge of these parameters. However, adaptation to $\epsilon$ or to $(\beta_0,\beta_1)$ is not free, and one can only achieve slower rates than the minimax rate (\ref{eq:minimax-rate}). The adaptation cost varies for each different scenario. A summary of our results is listed below.
\begin{itemize}
\item When the contamination proportion is unknown, the best possible rate is
\begin{align*}
n^{-\frac{2\beta_0}{2\beta_0+1}}\vee\epsilon^2.
\end{align*}
\item When the smoothness parameters are unknown, the best possible rate is
\begin{align*}
\left[\left(\frac{n}{\log n}\right)^{-\frac{2\beta_0}{2\beta_0+1}}\right]\vee \left[\epsilon^2(1\wedge m)^2\right]\vee\left[\left(\frac{n}{\log n}\right)^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}\right].
\end{align*}
\item
When both the contamination proportion and the smoothness are unknown, the best possible rate becomes
\begin{align*}
\left(\frac{n}{\log n}\right)^{-\frac{2\beta_0}{2\beta_0+1}}\vee\epsilon^2.
\end{align*}
\end{itemize}
Compared with the minimax rate (\ref{eq:minimax-rate}), not knowing the contamination proportion replaces $m$ by $1$ in the rate, while not knowing the smoothness replaces $n$ by $n/\log n$ in the rate.
\subsection{Unknown Contamination Proportion}\label{sec:un-epsilon}
The kernel density estimator (\ref{eq:def-KDE}) depends on $\epsilon$ in two ways: the normalization through $\frac{1}{n(1-\epsilon)}$ and the optimal choice of bandwidth $h$.
Without the knowledge of $\epsilon$, we consider the following estimator
\begin{equation}
\widehat f_h(0)=\frac{1}{n}\sum_{i=1}^n \frac{1}{h}K\left(\frac{X_i}{h}\right).\label{eq:def-KDE-n}
\end{equation}
The first difference between (\ref{eq:def-KDE-n}) and (\ref{eq:def-KDE}) is the normalization. When $\epsilon$ is not given, we can only use $\frac{1}{n}$ in (\ref{eq:def-KDE-n}). Moreover, the choice of $h$ in (\ref{eq:def-KDE-n}) cannot depend on $\epsilon$.
\begin{thm}\label{thm:fixedbandwidth}
For the estimator $\widehat{f}(0)=\widehat{f}_h(0)$ with some $K\in \mathcal{K}_{\left \lfloor{\beta_0\vee \beta_1}\right \rfloor}(L)$ and $h=n^{-\frac{1}{2\beta_0+1}}$, we have
$$\sup_{p(\epsilon,f,g)\in \mathcal M(\epsilon,\beta_0,\beta_1,L_0,L_1,m)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\lesssim n^{-\frac{2\beta_0}{2\beta_0+1}}\vee\epsilon^2.$$
\end{thm}
With the choice $h=n^{-\frac{1}{2\beta_0+1}}$, $\widehat{f}_h$ becomes the classical nonparametric density estimator.
The contamination results in an extra $\epsilon^2$ in the rate compared with the classical nonparametric minimax rate, regardless of the values of $m$ and $\beta_1$. Note that in the current setting, the error $\widehat f_h(0)-f(0)$ has the following decomposition,
\begin{equation}
(\widehat f_h(0)-\mathbb{E}\widehat f_h(0))+(\mathbb{E}\widehat f_h(0)-(1-\epsilon)f(0)-\epsilon g(0))+\epsilon(g(0)-f(0)).\label{eq:error-decomp2}
\end{equation}
The difference between (\ref{eq:error-decomp}) and (\ref{eq:error-decomp2}) results from the different normalizations in (\ref{eq:def-KDE}) and (\ref{eq:def-KDE-n}).
Some standard calculation gives the bound
\begin{align*}
\mathbb{E}(\widehat f_h(0)-f(0))^2\lesssim \frac{1}{nh}\vee h^{2\beta_0}\vee \epsilon^2,
\end{align*}
which implies the optimal choice of bandwidth $h=n^{-\frac{1}{2\beta_0+1}}$, and thus the rate in Theorem \ref{thm:fixedbandwidth}. A detailed proof is given in Section \ref{sec:pf-upper-structure}.
In view of the form of the minimax rate (\ref{eq:minimax-rate}), the rate given by Theorem \ref{thm:fixedbandwidth} can be obtained by replacing the $\epsilon^2(1\wedge m)^2$ in (\ref{eq:minimax-rate}) with $\epsilon^2$. A matching lower bound for adaptivity to $\epsilon$ is given by the following theorem.
\begin{thm}\label{thm:smadapt1}
Consider two models $\mathcal{M}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)$ and $\mathcal{M}(\widetilde\epsilon,\beta_0,\beta_1,L_0,L_1,m)$ with different contamination proportions. For any estimator $\widehat{f}(0)$ that satisfies
\begin{align*}
\sup_{p(\epsilon,f,g)\in\mathcal{M}(\widetilde\epsilon,\beta_0,\beta_1,L_0,L_1,m)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\leq C\widetilde\epsilon^2,
\end{align*}
for some constant $C>0$, there must exist another constant $C'>0$, such that for $\epsilon\geq C'\widetilde\epsilon$, we have
\begin{align*}
\sup_{p(\epsilon,f,g)\in\mathcal{M}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\gtrsim \epsilon^2.
\end{align*}
\end{thm}
Theorem \ref{thm:smadapt1} shows that it is impossible to achieve a rate that is faster than $\epsilon^2$ even over only two different contamination proportions. The proof of Theorem \ref{thm:smadapt1} relies on the following construction,
\begin{align*}
f&=f_0,\\
g&=c_1a(c_1x),\\
\widetilde f&= \frac{1-\epsilon}{1-\widetilde\epsilon}f_0+\frac{\epsilon-\widetilde\epsilon}{1-\widetilde\epsilon}c_1a(c_1x),\\
\widetilde g&=c_1a(c_1x).
\end{align*}
With an appropriate choice of the constant $c_1>0$, we have $(1-\epsilon)f+\epsilon g\in\mathcal{M}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)$ and $(1-\widetilde{\epsilon})\widetilde{f}+\widetilde{\epsilon}\widetilde g\in \mathcal{M}(\widetilde\epsilon,\beta_0,\beta_1,L_0,L_1,m)$. Moreover, it is easy to check that
$$(1-\epsilon)f+\epsilon g=(1-\widetilde{\epsilon})\widetilde{f}+\widetilde{\epsilon}\widetilde g.$$
In other words, a model with contamination proportion $\epsilon$ can also be written as a mixture that uses a different $\widetilde{\epsilon}$. Unless the contamination proportion is specified, one cannot tell the difference between $(1-\epsilon)f+\epsilon g$ and $(1-\widetilde{\epsilon})\widetilde{f}+\widetilde{\epsilon}\widetilde g$. This leads to a lower bound of the error, which is of order $|f(0)-\widetilde{f}(0)|^2\asymp\epsilon^2$. A rigorous proof of Theorem \ref{thm:smadapt1} that uses a constrained risk inequality in \cite{brown1996constrained} is given in Section \ref{sec:pf-cri}.
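For completeness, the identity follows by direct substitution of the construction:

```latex
(1-\widetilde{\epsilon})\widetilde{f}+\widetilde{\epsilon}\,\widetilde{g}
  =(1-\epsilon)f_0+(\epsilon-\widetilde{\epsilon})\,c_1a(c_1x)+\widetilde{\epsilon}\,c_1a(c_1x)
  =(1-\epsilon)f+\epsilon g.
```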
\subsection{Unknown Smoothness}\label{sec:un-beta}
In this section, we consider the case that the smoothness numbers are unknown, but the contamination proportion is given.
In view of the kernel density estimator (\ref{eq:def-KDE}) that achieves the minimax rate, we can still use the normalization by $\frac{1}{n(1-\epsilon)}$ because of the knowledge of $\epsilon$, but the bandwidth $h$ needs to be picked in a data-driven way. For a given $h$, define
\begin{align*}
\widehat f_h(0)=\frac{1}{n(1-\epsilon)}\sum_{i=1}^n \frac{1}{h}K\left(\frac{X_i}{h}\right).
\end{align*}
With a discrete set $\mathcal{H}$ and some constant $c_1>0$, Lepski's method \citep{lepskii1991problem,lepskii1992asymptotically,lepskii1993asymptotically} selects a data-driven bandwidth through the following procedure,
\begin{equation}
\widehat h=\max\left\{h\in\mathcal{H}:|\widehat f_h(0)-\widehat f_l(0)|\leq c_1\sqrt{\frac{\log n}{nl}}, \forall l\leq h, l\in\mathcal{H}\right\}.\label{eq:h-lep}
\end{equation}
In words, we choose the largest bandwidth below which the variance dominates. If the set that is maximized over is empty, we will use the convention $\widehat{h}=\frac{1}{n}$. The estimator $\widehat{f}_{\widehat{h}}(0)$ that uses a data-driven bandwidth enjoys the following guarantee.
\begin{thm}\label{thm:lepski2}
Consider the adaptive kernel density estimator $\widehat{f}(0)=\widehat{f}_{\widehat{h}}(0)$ with the bandwidth defined by (\ref{eq:h-lep}). In (\ref{eq:h-lep}), we set $\mathcal{H}=\left\{1,\frac{1}{2},\cdots,\frac{1}{2^N}\right\}$ with $N$ such that $\frac{1}{2^N}\leq \frac{1}{n}<\frac{1}{2^{N-1}}$, and $c_1$ to be a sufficiently large constant. The kernel $K$ is selected from $\mathcal{K}_l(L)$ with a large constant $l\geq \floor{\beta_0\vee\beta_1}$. Then, we have
\begin{eqnarray*}
&& \sup_{p(\epsilon,f,g)\in \mathcal M(\epsilon,\beta_0,\beta_1,L_0,L_1,m)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2 \\
&\lesssim& \left[\left(\frac{n}{\log n}\right)^{-\frac{2\beta_0}{2\beta_0+1}}\right]\vee \left[\epsilon^2(1\wedge m)^2\right]\vee\left[\left(\frac{n}{\log n}\right)^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}\right].
\end{eqnarray*}
\end{thm}
Lepski's method is known to be adaptive over various nonparametric classes, and it can achieve minimax rates up to a logarithmic factor without knowing the smoothness parameter \citep{lepski1997optimal}. Theorem \ref{thm:lepski2} shows that this is also the case with contaminated observations. With an adaptive kernel density estimator normalized by $\frac{1}{n(1-\epsilon)}$, the minimax rate (\ref{eq:minimax-rate}) is achieved up to a logarithmic factor in Theorem \ref{thm:lepski2}.
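In pseudocode terms, the selection rule (\ref{eq:h-lep}) may be sketched as follows (a minimal illustration with the estimates $\widehat{f}_h(0)$ assumed precomputed on a dyadic grid; the function name and the default $c_1$ are our own):

```python
import math

def lepski_bandwidth(est, n, c1=1.0):
    """est: dict mapping bandwidth h -> precomputed estimate f_hat_h(0).
    Keep the largest h whose estimate agrees with every estimate at a
    smaller (or equal) bandwidth l up to the noise level c1*sqrt(log n/(n*l))."""
    H = sorted(est)
    admissible = [
        h for h in H
        if all(abs(est[h] - est[l]) <= c1 * math.sqrt(math.log(n) / (n * l))
               for l in H if l <= h)
    ]
    return max(admissible) if admissible else 1.0 / n  # convention h_hat = 1/n
```

The bias at a too-large bandwidth reveals itself as a deviation exceeding the stochastic noise level, so that bandwidth is rejected and a smaller one is returned.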
A comparison between the adaptive rate given by Theorem \ref{thm:lepski2} and the minimax rate (\ref{eq:minimax-rate}) reveals two differences. The first adaptation cost is given by $\left(\frac{n}{\log n}\right)^{-\frac{2\beta_0}{2\beta_0+1}}$, compared with $n^{-\frac{2\beta_0}{2\beta_0+1}}$ in (\ref{eq:minimax-rate}). Previous work in adaptive nonparametric estimation \citep{brown1996constrained,lepski1997optimal,cai2003rates} implies that this cost is unavoidable for adaptation to smoothness.
The second adaptation cost is given by $\left(\frac{n}{\log n}\right)^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}$, compared with $n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}$ in (\ref{eq:minimax-rate}). In the next theorem, we show that this adaptation cost is also unavoidable without the knowledge of the smoothness parameters.
\begin{thm}\label{thm:smadapt2}
Consider two models $\mathcal{M}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)$ and $\mathcal{M}(\epsilon,\widetilde\beta_0,\widetilde\beta_1,\widetilde L_0,\widetilde L_1,m)$ with different smoothness parameters. Assume that $\beta_0\leq\widetilde \beta_0$, $\beta_1<\widetilde \beta_1$, ${\beta}_0\geq{\beta}_1$ and $n\epsilon^2\geq(\log n)^2$. For any estimator $\widehat{f}(0)$ that satisfies
$$\sup_{p(\epsilon,f,g)\in \mathcal{M}(\epsilon,\widetilde\beta_0,\widetilde\beta_1,\widetilde L_0,\widetilde L_1,m)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\leq C\left(\frac{n}{\log n}\right)^{-\frac{2\widetilde\beta_1}{2\widetilde\beta_1+1}}\epsilon^{\frac{2}{2\widetilde\beta_1+1}},$$
for some constant $C>0$, we must have
$$\sup_{p(\epsilon,f,g)\in \mathcal{M}(\epsilon,\beta_0,\beta_1,L_0,L_1,m)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\gtrsim \left(\frac{n}{\log n}\right)^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}.$$
\end{thm}
Similar to the statement of Theorem \ref{thm:smadapt1}, Theorem \ref{thm:smadapt2} shows that it is impossible to achieve a rate that is faster than $\left(\frac{n}{\log n}\right)^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}$ across two function classes with different smoothness parameters.
We remark that the assumptions ${\beta}_0\geq {\beta}_1$ and $n\epsilon^2\geq(\log n)^2$ in Theorem \ref{thm:smadapt2} are necessary conditions for $\left(\frac{n}{\log n}\right)^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}$ to dominate $\left(\frac{n}{\log n}\right)^{-\frac{2\beta_0}{2\beta_0+1}}$. Without these two conditions, $\left(\frac{n}{\log n}\right)^{-\frac{2\beta_0}{2\beta_0+1}}$ is the larger term between the two, and the lower bound is already in the literature.
In conclusion, the rate in Theorem \ref{thm:lepski2} achieved by Lepski's method cannot be improved unless smoothness parameters are given.
\subsection{Unknown Contamination Proportion and Unknown Smoothness}
When both the contamination proportion and the smoothness are unknown, we consider Lepski's method with a kernel density estimator normalized by $\frac{1}{n}$. Define
\begin{align*}
\widehat f_h(0)=\frac{1}{n}\sum_{i=1}^n \frac{1}{h}K\left(\frac{X_i}{h}\right).
\end{align*}
Then, a data-driven bandwidth $\widehat{h}$ is selected according to (\ref{eq:h-lep}). Again, if the set that is maximized over is empty in (\ref{eq:h-lep}), we will use the convention $\widehat{h}=\frac{1}{n}$. Note that this is a fully data-driven estimator that is adaptive to both the contamination proportion and the smoothness. It enjoys the following guarantee.
\begin{thm}\label{thm:lepski}
Consider the adaptive kernel density estimator $\widehat{f}(0)=\widehat{f}_{\widehat{h}}(0)$ with the bandwidth defined by (\ref{eq:h-lep}). In (\ref{eq:h-lep}), we set $\mathcal{H}=\left\{1,\frac{1}{2},\cdots,\frac{1}{2^N}\right\}$ with $N$ such that $\frac{1}{2^N}\leq \frac{1}{n}<\frac{1}{2^{N-1}}$, and $c_1$ to be a sufficiently large constant. The kernel $K$ is selected from $\mathcal{K}_l(L)$ with a large constant $l\geq \floor{\beta_0\vee\beta_1}$. Then, we have
$$\sup_{p(\epsilon,f,g)\in \mathcal M(\epsilon,\beta_0,\beta_1,L_0,L_1,m)}\mathbb{E}_{p^n}\left(\widehat f(0)-f(0)\right)^2\lesssim \left(\frac{n}{\log n}\right)^{-\frac{2\beta_0}{2\beta_0+1}}\vee\epsilon^2.$$
\end{thm}
Compared with the minimax rate in Theorem \ref{thm:minimax-rate}, the rate in Theorem \ref{thm:lepski} can be understood as replacing $n$ and $\epsilon^2(1\wedge m)^2$ respectively by $n/\log n$ and $\epsilon^2$ in (\ref{eq:minimax-rate}). In view of the results in both Section \ref{sec:un-epsilon} and Section \ref{sec:un-beta}, this rate $\left(\frac{n}{\log n}\right)^{-\frac{2\beta_0}{2\beta_0+1}}\vee\epsilon^2$ in Theorem \ref{thm:lepski} cannot be improved by any procedure that is adaptive to both contamination proportion and smoothness.
| {
"timestamp": "2018-07-30T02:04:39",
"yymm": "1712",
"arxiv_id": "1712.07801",
"language": "en",
"url": "https://arxiv.org/abs/1712.07801",
"abstract": "This paper studies density estimation under pointwise loss in the setting of contamination model. The goal is to estimate $f(x_0)$ at some $x_0\\in\\mathbb{R}$ with i.i.d. observations, $$ X_1,\\dots,X_n\\sim (1-\\epsilon)f+\\epsilon g, $$ where $g$ stands for a contamination distribution. In the context of multiple testing, this can be interpreted as estimating the null density at a point. We carefully study the effect of contamination on estimation through the following model indices: contamination proportion $\\epsilon$, smoothness of target density $\\beta_0$, smoothness of contamination density $\\beta_1$, and level of contamination $m$ at the point to be estimated, i.e. $g(x_0)\\leq m$. It is shown that the minimax rate with respect to the squared error loss is of order $$ [n^{-\\frac{2\\beta_0}{2\\beta_0+1}}]\\vee[\\epsilon^2(1\\wedge m)^2]\\vee[n^{-\\frac{2\\beta_1}{2\\beta_1+1}}\\epsilon^{\\frac{2}{2\\beta_1+1}}], $$ which characterizes the exact influence of contamination on the difficulty of the problem. We then establish the minimal cost of adaptation to contamination proportion, to smoothness and to both of the numbers. It is shown that some small price needs to be paid for adaptation in any of the three cases. Variations of Lepski's method are considered to achieve optimal adaptation.The problem is also studied when there is no smoothness assumption on the contamination distribution. This setting that allows for an arbitrary contamination distribution is recognized as Huber's $\\epsilon$-contamination model. The minimax rate is shown to be $$ [n^{-\\frac{2\\beta_0}{2\\beta_0+1}}]\\vee [\\epsilon^{\\frac{2\\beta_0}{\\beta_0+1}}]. $$ The adaptation theory is also different from the smooth contamination case. While adaptation to either contamination proportion or smoothness only costs a logarithmic factor, adaptation to both numbers is proved to be impossible.",
"subjects": "Statistics Theory (math.ST); Methodology (stat.ME)",
"title": "Density Estimation with Contaminated Data: Minimax Rates and Theory of Adaptation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540668504082,
"lm_q2_score": 0.7248702821204019,
"lm_q1q2_score": 0.7099046587336183
} |
https://arxiv.org/abs/1509.09272 | On stationary solutions of KdV and mKdV equations | Stationary solutions on a bounded interval for an initial-boundary value problem to Korteweg--de~Vries and modified Korteweg--de~Vries equation (for the last one both in focusing and defocusing cases) are constructed. The method of the study is based on the theory of conservative systems with one degree of freedom. The obtained solutions turn out to be periodic. Exact relations between the length of the interval and coefficients of the equations which are necessary and sufficient for existence of nontrivial solutions are established. | \section*{}
Both the Korteweg--de~Vries equation (KdV)
$$
u_t+au_x+u_{xxx}+uu_x=0
$$
and the modified Korteweg--de~Vries equation (mKdV)
$$
u_t+au_x+u_{xxx}\pm u^2u_x=0
$$
(the sign ``$+$'' stands for the focusing case and the sign ``$-$'' for the defocusing one) describe propagation of long nonlinear waves in dispersive media. We assume $a$ to be an arbitrary real constant. If these equations are considered on a bounded interval $(0,L)$, then for well-posedness of an initial-boundary value problem, besides an initial profile, one must set certain boundary conditions, for example,
$$
u\big|_{x=0} = u\big|_{x=L} =u_x\bigl|_{x=L}=0
$$
(see \cite{Kh, F01, BSZ, F07} and others).
It follows from the results of \cite{FL} that such a problem for the KdV equation possesses certain internal dissipation:
under some relations between $a$ and $L$ and for sufficiently small initial data, solutions decay at large time. Similar properties hold for the mKdV equation. In order to answer the question of whether the smallness is essential, one has to construct non-decaying solutions. The simplest such solutions are stationary solutions: $u=u(x)$. In this situation the considered equations reduce to the following ordinary differential equations:
\begin{equation}\label{1}
u''' + au' + uu' =0,
\end{equation}
\begin{equation}\label{2}
u''' + au' + u^2u' =0,
\end{equation}
\begin{equation}\label{3}
u''' + au' - u^2u' =0,
\end{equation}
and the boundary conditions --- to the following ones:
\begin{equation}\label{4}
u(0)=u(L)=u'(L)=0.
\end{equation}
The goal of the present paper is to investigate existence of nontrivial solutions to these problems under different relations between $a$ and $L$. The method of the study is based on the qualitative theory of conservative systems with one degree of freedom (see, for example, \cite{A}).
The first example of such a solution for equation (\ref{1}) was constructed by this method in the case $a=0$, $L=2$ in \cite{GS}. In the recent paper \cite{DNa}, also for equation (\ref{1}), such solutions were constructed for $a=1$ and $L\in (0,2\pi)$, and exact formulas via elliptic Jacobi functions were obtained. In the present paper these special functions are not used.
\begin{lemma}\label{L1}
If $u\in C^3[0,L]$ is a solution to any of the problems (\ref{1}), (\ref{4}); (\ref{2}), (\ref{4}); or (\ref{3}), (\ref{4}), then it is infinitely smooth and periodic with period $L$.
\end{lemma}
\begin{proof}
Integrating each of the equations (\ref{1})--(\ref{3}) we obtain that the function $u$ satisfies an equation
\begin{equation}\label{5}
u'' + F'(u) =0, \quad F(0)=0, \quad F\in C^\infty.
\end{equation}
Following \cite{A}, introduce a ``full energy'' $E(x) \equiv \frac12 \bigl(u'(x)\bigr)^2 +F\bigl(u(x)\bigr)$. Then (\ref{5}) yields that $E'(x)\equiv 0$, that is, $E(x)\equiv \mathrm{const}$. By virtue of (\ref{4}), $E(L)=0$, therefore $E(0)=0$ and so $u'(0)=0$. The end of the proof is obvious.
\qed
\end{proof}
In what follows, the fundamental period of a nontrivial periodic function denotes the minimal possible positive value of its period.
By the symbol $u_{a,T}$ we denote a nontrivial solution to any of the considered problems with the fundamental period $T$.
\begin{theorem}\label{T1}
If $aL^2 \ne 4\pi^2$ then there exists a unique solution $u_{a,L}$ to problem (\ref{1}), (\ref{4}). If $aL^2 = 4\pi^2$ such a solution does not exist.
\end{theorem}
\begin{theorem}\label{T2}
If $aL^2 < 4\pi^2$ then there exists a unique up to the sign solution $u_{a,L}$ to problem (\ref{2}), (\ref{4}).
If $aL^2 \geq 4\pi^2$ such solutions do not exist.
\end{theorem}
\begin{theorem}\label{T3}
If $aL^2 > 4\pi^2$ then there exists a unique up to the sign solution $u_{a,L}$ to problem (\ref{3}), (\ref{4}).
If $aL^2 \leq 4\pi^2$ such solutions do not exist.
\end{theorem}
\begin{remark}\label{R1}
If $aL^2 \ne 4\pi^2 n^2$ for a certain natural $n\geq 2$ then obviously the function $u(x) \equiv n^2 u_{a/n^2,L}(nx)$ is a solution to problem (\ref{1}), (\ref{4}) with the fundamental period $T=L/n$. If $aL^2 < 4\pi^2 n^2$ for a certain natural $n$ then the function $u(x) \equiv n u_{a/n^2,L}(nx)$ is a solution to problem (\ref{2}), (\ref{4}) with the fundamental period $T=L/n$.
In particular, nontrivial solutions to problems (\ref{1}), (\ref{4}) and (\ref{2}), (\ref{4}) exist for any $a$ and positive $L$.
If $aL^2 \leq 4\pi^2$ then nontrivial solutions to problem (\ref{3}), (\ref{4}) do not exist.
\end{remark}
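The first scaling in Remark~\ref{R1} is verified by direct substitution: writing $v=u_{a/n^2,L}$ and $u(x)=n^2v(nx)$,

```latex
u'''(x)+au'(x)+u(x)u'(x)
  = n^5 v'''(nx)+an^3 v'(nx)+n^5 v(nx)v'(nx)
  = n^5\Bigl(v'''+\frac{a}{n^2}\,v'+vv'\Bigr)(nx)=0,
```

and $u$ inherits the fundamental period $L/n$ from the period $L$ of $v$. The mKdV scaling $u(x)=n\,u_{a/n^2,L}(nx)$ is checked in the same way.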
For convenience, we now pass from the segment $[0,L]$ to the segment $[-1,1]$. For $x\in [-1,1]$, in the case of equation (\ref{1}) make the substitution $y(x) \equiv \frac {L^2}4 u\bigl(\frac L2 (x+1)\bigr)$, while in the case of equations (\ref{2}) and (\ref{3}) --- the substitution $y(x) \equiv \frac L2 u\bigl(\frac L2 (x+1)\bigr)$. Then for $b=\frac{L^2}4 a$ these equations transform respectively into the following ones:
\begin{equation}\label{6}
y''' + by' + yy' =0,
\end{equation}
\begin{equation}\label{7}
y''' + by' + y^2y' =0,
\end{equation}
\begin{equation}\label{8}
y''' + by' - y^2y' =0,
\end{equation}
and consider periodic solutions to these equations with the fundamental period $T=2$ such that
\begin{equation}\label{9}
y(-1) = y'(-1) =0.
\end{equation}
We apply the following lemma in the spirit of the qualitative theory of conservative systems with one degree of freedom.
\begin{lemma}\label{L2}
Consider an initial value problem
\begin{equation}\label{10}
y'' + F'(y) =0, \qquad y(-1) = y'(-1) =0,
\end{equation}
where $F\in C^\infty$, $F(0)=0$. Then a nontrivial periodic solution to problem (\ref{10}) with the fundamental period $T=2$ exists if and only if $F'(0) \ne 0$ and there exists $y_0 \ne 0$ such that $F(y_0)=0$, $F'(y_0) \ne 0$, $F(y)<0$ for $y\in (0,y_0)$ if $y_0>0$, $F(y)<0$ for $y\in (y_0,0)$ if $y_0<0$, and
\begin{equation}\label{11}
\int_0^{y_0} \frac{dy}{\sqrt{-2F(y)}} =1 \quad \mathrm{if}\ y_0>0,\qquad
\int_{y_0}^0 \frac{dy}{\sqrt{-2F(y)}} =1 \quad \mathrm{if}\ y_0<0.
\end{equation}
\end{lemma}
\begin{proof}
First of all, note that, similarly to (\ref{5}), $E(x)\equiv \frac12 \bigl(y'(x)\bigr)^2 +F\bigl(y(x)\bigr) \equiv 0$ if $y(x)$ is a solution to problem (\ref{10}). Due to the uniqueness of solutions to the initial value problem, the condition $F'(0) \ne 0$ is necessary for the existence of nontrivial solutions.
Consider, for example, the case $F'(0)<0$. If the function $F$ is negative for all $y>0$ then it is easy to see that there is no periodic solution to problem (\ref{10}). Therefore, the existence of a positive $y_0$ such that $F(y_0)=0$ and $F(y)<0$ for $y\in (0,y_0)$ is necessary.
Uniqueness of the solution implies that the function $y(x)$ is even (if it exists). Then it is easy to see that it possesses the following properties: $y'(x)>0$ for $x\in (-1,0)$, $y'(x)<0$ for $x\in (0,1)$, $y(0)=y_0$, $y'(0)=0$. Again, due to uniqueness, $F'(y_0)\ne 0$.
Therefore, for $x\in [0,1]$ the function $y(x)$ satisfies the following conditions:
$$
\frac{dy}{dx} = -\sqrt{-2F(y)},\quad y(0)=y_0, \quad y(1)=0.
$$
Integrating we obtain that $\displaystyle\int_0^{y_0} \frac{dy}{\sqrt{-2F(y)}} =1$.
It is easy to see that under these assumptions the desired solution exists. The case $F'(0)>0$ is treated in a similar way (then $y_0<0$).
\qed
\end{proof}
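The lemma can be illustrated numerically. In the sketch below (an illustration we add; the toy potential and the solver tolerances are our choices), $F(y)=\frac{\pi^2}{2}\,y(y-1)$ satisfies all hypotheses with $y_0=1$, the integral in (\ref{11}) equals $1$ exactly, and the corresponding solution of (\ref{10}) is $y(x)=\frac12\bigl(1-\cos\pi(x+1)\bigr)$, with fundamental period $T=2$.

```python
# Numerical illustration of Lemma 2 for the sample potential
# F(y) = (pi^2/2) * y * (y - 1): F(0) = 0, F'(0) = -pi^2/2 != 0,
# y0 = 1, F < 0 on (0, 1), F'(1) = pi^2/2 != 0.
import numpy as np
from scipy.integrate import quad, solve_ivp

PI = np.pi

def F(y):
    return 0.5 * PI**2 * y * (y - 1.0)

def dF(y):
    return 0.5 * PI**2 * (2.0 * y - 1.0)

y0 = 1.0

# Travel-time integral from condition (11); the substitution y = y0*sin^2(s)
# removes the inverse-square-root singularities at both endpoints.
def integrand(s):
    y = y0 * np.sin(s)**2
    return 2.0 * y0 * np.sin(s) * np.cos(s) / np.sqrt(-2.0 * F(y))

half_time = quad(integrand, 0.0, PI / 2)[0]   # equals 1, so the period is T = 2

# Cross-check: integrate y'' = -F'(y) with y(-1) = y'(-1) = 0
sol = solve_ivp(lambda x, u: [u[1], -dF(u[0])], (-1.0, 1.0), [0.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
y_end = sol.y[0, -1]      # returns to zero at x = 1
y_mid = sol.sol(0.0)[0]   # maximum y0 = 1 attained at x = 0
```

The substitution $y=y_0\sin^2 s$ used in \texttt{integrand} regularizes the endpoint singularities; the same device evaluates the integrals $I(b,c)$ appearing in the proofs below.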
Now we can prove our theorems.
\begin{proof}[Proof of Theorem~\ref{T1}]
Equation (\ref{6}) is equivalent to equation
\begin{equation}\label{12}
y'' + by + \frac 12 y^2 =c
\end{equation}
for some real constant $c$. Therefore, the construction of a solution reduces to finding a constant $c$ such that for the function
$$
F(y) \equiv \frac16 y^3 + \frac{b}2 y^2 -cy = \frac 16 y(y^2+3by-6c) \equiv \frac16 y F_0(y)
$$
the hypothesis of Lemma~\ref{L2} is satisfied. Note that $F'(y) = \frac12 y^2 +by-c$. Therefore, the condition $F'(0)\ne 0$ implies that $c\ne 0$.
Real simple nonzero roots of the function $F_0$ exist if and only if $D=9b^2+24c>0$ and then these roots are expressed by formulas $y_0 = \frac12 (-3b +\sqrt{D})$, $y_1 = -\frac12 (3b+\sqrt{D})$.
It is easy to see that if $c>0$ then for any $b$ the root $y_0>0$, $F(y)<0$ for $y\in (0,y_0)$, $F'(y_0)\ne 0$. If $c\in (-3b^2/8,0)$ then for $b>0$ the root $y_0<0$, $F(y)<0$ for $y\in (y_0,0)$, $F'(y_0)\ne 0$.
Therefore, we have to find the constant $c$ for which condition (\ref{11}) is satisfied. Note that
$$
-2F(y) = \frac 13 y (y_0-y)(y-y_1).
$$
After the change of variable $y=y_0t$, each of the equations in (\ref{11}) reduces to the equation
$$
I(b,c) \equiv \sqrt{3} \int_0^1 \frac{dt}{\sqrt{t(1-t)(y_0t-y_1)}} =1.
$$
Since $y_0t-y_1 = \frac12(\sqrt{D}-3b)t+\frac12(\sqrt{D}+3b)$, it is easy to see that for fixed $b$ the function $I(b,c)$ monotonically decreases in $c$. Moreover, $\displaystyle\lim\limits_{c\to+\infty} I(b,c)=0$ and for $b>0$
$$
\lim\limits_{c\to -\frac38 b^2+0} I(b,c) =\sqrt{\frac2b}\int_0^1 \frac{dt}{\sqrt{t}(1-t)}=+\infty,\
\lim\limits_{c\to 0} I(b,c) =\frac1{\sqrt{b}}\int_0^1 \frac{dt}{\sqrt{t(1-t)}}=\frac\pi{\sqrt{b}},
$$
for $b= 0$
$$
\lim\limits_{c\to 0+0} I(b,c) = \lim\limits_{c\to 0+0}\frac{\sqrt{3}}{(6c)^{1/4}}\int_0^1 \frac{dt}{\sqrt{t(1-t)(t+1)}} =+\infty,
$$
for $b< 0$
$$
\lim\limits_{c\to 0+0} I(b,c) = \frac1{\sqrt{|b|}}\int_0^1 \frac{dt}{t\sqrt{1-t}} =+\infty.
$$
Therefore, the desired value of $c$ exists and is unique if $b\ne \pi^2$, while for $b=\pi^2$ such a value does not exist.
\qed
\end{proof}
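The scheme of this proof can be checked numerically. The following sketch (our illustration; the sample value $b=1$ and the bracketing interval are assumptions) evaluates $I(b,c)$ via the substitution $t=\sin^2 s$, solves $I(b,c)=1$ for $c$, and verifies that the resulting solution of (\ref{12}) satisfies the period-2 conditions $y(\pm1)=y'(\pm1)=0$:

```python
# Numerical sketch of the proof of Theorem 1 for y'' + b*y + y^2/2 = c.
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.optimize import brentq

def I(b, c):
    # I(b,c) = sqrt(3) * int_0^1 dt / sqrt(t(1-t)(y0*t - y1)); the
    # substitution t = sin^2(s) removes the endpoint singularities:
    # I(b,c) = 2*sqrt(3) * int_0^{pi/2} ds / sqrt(y0*sin^2(s) - y1).
    D = 9.0 * b**2 + 24.0 * c
    y0 = 0.5 * (-3.0 * b + np.sqrt(D))
    y1 = -0.5 * (3.0 * b + np.sqrt(D))
    return quad(lambda s: 2.0 * np.sqrt(3.0)
                / np.sqrt(y0 * np.sin(s)**2 - y1), 0.0, np.pi / 2)[0]

b = 1.0                          # b < pi^2, so a root c > 0 is expected
I_small_c = I(b, 1e-10)          # approaches pi/sqrt(b) as c -> 0+
c_star = brentq(lambda c: I(b, c) - 1.0, 1e-8, 1e4)   # unique c with I = 1

# cross-check: the corresponding solution of (12) has fundamental period 2
sol = solve_ivp(lambda x, u: [u[1], c_star - b * u[0] - 0.5 * u[0]**2],
                (-1.0, 1.0), [0.0, 0.0], rtol=1e-11, atol=1e-13)
y_end, dy_end = sol.y[0, -1], sol.y[1, -1]   # both return to ~0 at x = 1
```

The bracket for \texttt{brentq} uses the monotonicity of $I(b,\cdot)$ established in the proof: $I\to\pi/\sqrt{b}>1$ as $c\to 0+$ and $I\to 0$ as $c\to+\infty$.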
\begin{remark}\label{R2}
The substitution $u(x)=a_0+v(x-x_0)$ under the appropriate choice of the parameters $a_0$ and $x_0$ transforms any periodic solution of equation (\ref{1}) with the period $L$ to a solution of an equation $v'''+(a+a_0)v'+vv'=0$ satisfying the conditions $v(0)=v'(0)=v(L)=v'(L)=0$. Therefore, any solution of equation (\ref{1}) with the fundamental period $L$ can be expressed in this way via the functions $u_{a+a_0,L}$. Solutions similar to the functions $u_{a,L}$ were also considered in \cite{Ne}. In \cite{AnBS} a representation of periodic solutions of equation (\ref{1}) is given in terms of Jacobi elliptic functions. The advantage of our approach is that it gives a transparent description of the solutions.
Consider, for example, the case $b>0$. Then for $b\in (0,\pi^2)$ the constructed solution of problem (\ref{6}), (\ref{9}) is an even ``hill'' of the height $y_0 =\frac 12(-3b+\sqrt{9b^2+24c})>0$, while for $b>\pi^2$ it is an even ``hole'' of the depth $y_0<0$. Note that $I_c(b,c)<0$, $I_b(b,c)<0$. Therefore, the equation $I(b,c)=1$ determines a smooth decreasing function $c(b)$. Since $I(\pi^2,0)=1$ we have $c(\pi^2)=0$. Returning to equation (\ref{1}), let $a>0$. If
$u_0=\frac12(-3a+\sqrt{9a^2 +384c L^{-4}})$, where $c=c(L^2a/4)$, then for $L<2\pi/\sqrt{a}$ the solution $u_{a,L}$ to problem (\ref{1}), (\ref{4}) is a ``hill'' of the height $u_0>0$, and for $L>2\pi/\sqrt{a}$ it is a ``hole'' of the depth $u_0<0$ (in both cases centered at the point $L/2$). In addition, $u_0\to+\infty$ as $L\to 0$, $u_0\to 0$ as $L\to 2\pi/\sqrt{a}$, and $u_0\to 0$ as $L\to +\infty$.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{T2}]
Equation (\ref{7}) is equivalent to equation
\begin{equation}\label{13}
y'' + by + \frac 13 y^3 =c
\end{equation}
for some real constant $c$. Let
$$
F(y) \equiv \frac1{12} y^4 + \frac{b}2 y^2 -cy = \frac 1{12} y(y^3+6by-12c) \equiv \frac1{12} y F_0(y).
$$
Note that the substitution $z(x)\equiv -y(x)$ leads to an equation of the same form as (\ref{13}) with $c$ replaced by $-c$. Therefore, in what follows it suffices to assume that $c>0$ (if $c=0$ then $F'(0)=0$).
Similarly to the proof of Theorem~\ref{T1}, we need to find the roots of the function $F_0$. We apply Cardano's formulas. Let
$D=8b^3+36c^2$,
$$
p= \sqrt[3]{6c +\sqrt{D}},\quad q= \sqrt[3]{6c -\sqrt{D}} \quad \mathrm{if}\ D\geq 0,
$$
$$
p= \sqrt[3]{6c +i\sqrt{|D|}}= \sqrt{2|b|}e^{\frac{i}3\arccos(3c/\sqrt{2|b|^3})},\quad q=\overline{p} \quad
\mathrm{if}\ D<0.
$$
Then the function $F_0$ has a real root $y_0=p+q>0$. Moreover, if $D>0$ there are two complex conjugate roots with negative real parts, and if $D\leq 0$ (which is possible only for $b<0$) there are two negative real roots $y_1$ and $y_2$ ($y_1=y_2$ if $D=0$).
According to Vi\`ete's formulas, $y_1+y_2=-y_0$ and $y_1y_2=6b-y_0y_1-y_0y_2= 6b+y_0^2$, and then
$$
-2F(y) = \frac 16 y (y_0-y)(y^2+y_0y+y_0^2+6b).
$$
After the change of variable $y=y_0t$, the first equation in (\ref{11}) reduces to the equation
$$
I(b,c) \equiv \sqrt{6} \int_0^1 \frac{dt}{\sqrt{t(1-t)(y_0^2t^2+y_0^2t+y_0^2+6b)}} =1.
$$
It is easy to see that for fixed $b$ the function $y_0(c)$ monotonically increases and $y_0(c)\to +\infty$ as
$c\to +\infty$ (note that $y_0=\sqrt{8|b|}\cos\bigl(\frac13\arccos(3c/\sqrt{2|b|^3})\bigr)$ if $D<0$).
Then for fixed $b$ the function $I(b,c)$ monotonically decreases and $\displaystyle\lim\limits_{c\to+\infty} I(b,c)=0$.
Moreover, if $c\to 0+0$ then $y_0(c)\to 0$ for $b\geq 0$ and $y_0(c)\to\sqrt{6|b|}$ for $b<0$. Therefore,
$$
\lim\limits_{c\to 0+0} I(b,c) =\frac1{\sqrt{b}}\int_0^1 \frac{dt}{\sqrt{t(1-t)}}=\frac\pi{\sqrt{b}}\quad \mathrm{if}\ b>0,
$$
$$
\lim\limits_{c\to 0+0} I(b,c) =\lim\limits_{c\to 0+0} \frac{\sqrt{6}}{\sqrt[3]{12c}} \int_0^1\frac{dt}{\sqrt{t(1-t^3)}}
=+\infty\quad \mathrm{if}\ b=0,
$$
$$
\lim\limits_{c\to 0+0} I(b,c) = \frac1{\sqrt{|b|}}\int_0^1 \frac{dt}{t\sqrt{1-t^2}} =+\infty\quad \mathrm{if}\ b<0.
$$
Hence, the desired positive value of $c$ exists and is unique if $b< \pi^2$, while for $b\geq \pi^2$ such a value does not exist.
\qed
\end{proof}
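As for Theorem 1, the argument admits a numerical check (our illustration; the sample value $b=1$ and the bracketing interval are assumptions). The real root $y_0$ is computed by Cardano's formula, which here always applies since $D=8b^3+36c^2>0$ for $b>0$:

```python
# Numerical sketch of the proof of Theorem 2 for y'' + b*y + y^3/3 = c.
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.optimize import brentq

def y0_root(b, c):
    # Cardano: the real root of F0(y) = y^3 + 6*b*y - 12*c
    # (for b > 0 we have D = 8*b^3 + 36*c^2 > 0, so p + q is real)
    D = 8.0 * b**3 + 36.0 * c**2
    return np.cbrt(6.0 * c + np.sqrt(D)) + np.cbrt(6.0 * c - np.sqrt(D))

def I(b, c):
    # I(b,c) = sqrt(6) int_0^1 dt / sqrt(t(1-t)(y0^2 t^2 + y0^2 t + y0^2 + 6b)),
    # regularized by t = sin^2(s)
    y0 = y0_root(b, c)
    g = lambda s: 2.0 * np.sqrt(6.0) / np.sqrt(
        y0**2 * (np.sin(s)**4 + np.sin(s)**2 + 1.0) + 6.0 * b)
    return quad(g, 0.0, np.pi / 2)[0]

b = 1.0                                    # b < pi^2: a unique c > 0 exists
c_star = brentq(lambda c: I(b, c) - 1.0, 1e-8, 1e4)
y0 = y0_root(b, c_star)
I_crit = I(np.pi**2, 1.0)                  # stays below 1 when b = pi^2

sol = solve_ivp(lambda x, u: [u[1], c_star - b * u[0] - u[0]**3 / 3.0],
                (-1.0, 1.0), [0.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
y_end = sol.y[0, -1]         # back to ~0 at x = 1
y_mid = sol.sol(0.0)[0]      # equals the root y0
```

The value \texttt{I\_crit} illustrates the nonexistence part: for $b\ge\pi^2$ the decreasing function $I(b,\cdot)$ never reaches $1$, since its limit as $c\to 0+$ is $\pi/\sqrt{b}\le 1$.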
\begin{proof}[Proof of Theorem~\ref{T3}]
Equation (\ref{8}) is equivalent to equation
\begin{equation}\label{14}
y'' + by - \frac 13 y^3 =c
\end{equation}
for some real constant $c$. Let
$$
F(y) \equiv -\frac1{12} y^4 + \frac{b}2 y^2 -cy = -\frac 1{12} y(y^3-6by+12c) \equiv -\frac1{12} y F_0(y).
$$
As in the proof of Theorem~\ref{T2}, we consider only the case $c>0$.
We again apply Cardano's formulas. Let $D=-8b^3+36c^2$,
$$
p= \sqrt[3]{-6c +\sqrt{D}},\quad q= \sqrt[3]{-6c -\sqrt{D}} \quad \mathrm{if}\ D\geq 0,
$$
$$
p= \sqrt[3]{-6c +i\sqrt{|D|}}= \sqrt{2b}e^{\frac{i}3\bigl(\pi+\arccos(3c/\sqrt{2b^3})\bigr)},\quad q=\overline{p} \quad \mathrm{if}\ D<0.
$$
If $D>0$ then the function $F_0$ has a real root $y_0=p+q<0$ and two complex conjugate roots $y_1$ and $y_2$. If $D=0$ then again the function $F_0$ has a real root $y_0=p+q<0$ and a double real root $y_1=y_2>0$. Neither of these cases satisfies the hypothesis of Lemma~\ref{L2}: since $F'(0)=-c<0$, a positive simple root $y_0$ with the properties listed there would be required.
It remains to consider the case $D<0$ (which is possible only if $b>0$); then $c\in (0,\frac{\sqrt{2}}3 b^{3/2})$. Here the function $F_0$ has three distinct real roots: a root
$y_0=p+q=\sqrt{8b}\cos\bigl(\frac{\pi}3+\frac13\arccos(3c/\sqrt{2b^3})\bigr)>0$, a root $y_1<0$ and a root $y_2>y_0$. We have $y_1+y_2=-y_0$, $y_1y_2=-6b+y_0^2$, and then
$$
-2F(y) = \frac 16 y (y_0-y)(6b-y_0^2-y_0y-y^2).
$$
After the change of variable $y=y_0t$, the first equation in (\ref{11}) reduces to the equation
$$
I(b,c) \equiv \sqrt{6} \int_0^1 \frac{dt}{\sqrt{t(1-t)(6b-y_0^2(1+t+t^2))}} =1.
$$
As in the previous theorem, for fixed $b$ the function $y_0(c)$ monotonically increases; therefore, unlike in the previous theorem, the function $I(b,c)$ also monotonically increases. It is easy to see that
$$
\lim\limits_{c\to 0+0} I(b,c) =\frac1{\sqrt{b}}\int_0^1 \frac{dt}{\sqrt{t(1-t)}}=\frac\pi{\sqrt{b}},
$$
$$
\lim\limits_{c\to \frac{\sqrt{2}}3 b^{3/2}-0} I(b,c) = \sqrt{\frac3b}\int_0^1 \frac{dt}{(1-t)\sqrt{t(t+2)}} =+\infty.
$$
Hence, the desired positive value of $c$ exists and is unique if $b> \pi^2$, while for $b\leq \pi^2$ such a value does not exist.
\qed
\end{proof}
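The defocusing case can be checked in the same fashion (our illustration; the sample value $b=12>\pi^2$ is an assumption). Here $y_0$ is taken from the trigonometric form of Cardano's formula valid for $D<0$, i.e. for $c\in(0,\frac{\sqrt2}{3}b^{3/2})$:

```python
# Numerical sketch of the proof of Theorem 3 for y'' + b*y - y^3/3 = c.
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.optimize import brentq

def y0_root(b, c):
    # y0 = sqrt(8b) * cos(pi/3 + arccos(3c/sqrt(2 b^3))/3): the smallest
    # positive root of F0(y) = y^3 - 6*b*y + 12*c in the case D < 0
    return np.sqrt(8.0 * b) * np.cos(
        np.pi / 3.0 + np.arccos(3.0 * c / np.sqrt(2.0 * b**3)) / 3.0)

def I(b, c):
    # I(b,c) = sqrt(6) int_0^1 dt / sqrt(t(1-t)(6b - y0^2(1+t+t^2))),
    # regularized by t = sin^2(s)
    y0 = y0_root(b, c)
    g = lambda s: 2.0 * np.sqrt(6.0) / np.sqrt(
        6.0 * b - y0**2 * (1.0 + np.sin(s)**2 + np.sin(s)**4))
    return quad(g, 0.0, np.pi / 2)[0]

b = 12.0                                   # b > pi^2 ~ 9.87
c_max = np.sqrt(2.0) / 3.0 * b**1.5        # upper end of the admissible range
c_star = brentq(lambda c: I(b, c) - 1.0, 1e-8, c_max * (1.0 - 1e-6))

sol = solve_ivp(lambda x, u: [u[1], c_star - b * u[0] + u[0]**3 / 3.0],
                (-1.0, 1.0), [0.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
y_end = sol.y[0, -1]
y_mid = sol.sol(0.0)[0]
```

Here the bracket exploits the monotone increase of $I(b,\cdot)$ from $\pi/\sqrt{b}<1$ at $c\to 0+$ to $+\infty$ at $c\to\frac{\sqrt2}{3}b^{3/2}-0$.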
\begin{remark}\label{R3}
In \cite{AnNa, An, Na} periodic solutions of equations (\ref{2}) and (\ref{3}) were considered in the case when the constant $c=0$ in equations (\ref{13}) and (\ref{14}). Therefore, the periodic solutions constructed in the present paper do not coincide with the solutions from those papers.
\end{remark}
\begin{acknowledgement}
The first author was supported by Project 333, State Assignment in the field of scientific activity implementation of Russia.
\end{acknowledgement}
\input{referenc}
\end{document}
% arXiv:1407.5280 --- Density and spectrum of minimal submanifolds in space forms
\begin{abstract}
Let $M^m$ be a minimal properly immersed submanifold in an ambient space close, in a suitable sense, to the space form $\mathbb{N}^n_k$ of curvature $-k\le 0$. In this paper, we are interested in the relation between the density function $\Theta(r)$ of $M^m$ and the spectrum of the Laplace--Beltrami operator. In particular, we prove that if $\Theta(r)$ has subexponential growth (when $k<0$) or sub-polynomial growth ($k=0$) along a sequence, then the spectrum of $M^m$ is the same as that of the space form $\mathbb{N}^m_k$. Notably, the result applies to Anderson's (smooth) solutions of Plateau's problem at infinity on the hyperbolic space $\mathbb{H}^n$, independently of their boundary regularity. We also give a simple condition on the second fundamental form that ensures $M$ to have finite density. In particular, we show that minimal submanifolds of $\mathbb{H}^n$ with finite total curvature have finite density.
\end{abstract}
\section{Introduction}
\label{intro}
Let $M^m$ be a minimal, properly immersed submanifold in a complete ambient space $N^n$. In the present paper, we are interested in the case when $N$ is close, in a sense made precise below, to a space form $\mathbb{N}_k^n$ of curvature $-k\le 0$. In particular, our focus is the study of the spectrum of the Laplace--Beltrami operator $-\Delta$ on $M$ and its relationship with the density at infinity of $M$, that is, the limit as $r \rightarrow +\infty$ of the (monotone) quantity
\begin{equation}\label{def_densityinfty_hyp}
\Theta(r) \doteq \frac{\mathrm{vol}(M \cap B_r)}{V_k(r)},
\end{equation}
where $B_r$ indicates a geodesic ball of radius $r$ in $N^n$ and $V_k(r)$ is the volume of a geodesic ball of radius $r$ in $\mathbb{N}^m_k$. Hereafter, we will say that $M$ has finite density if
$$
\Theta(+\infty) \doteq \lim_{r \rightarrow +\infty} \Theta(r) <+\infty.
$$
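For orientation (a numerical aside we add, not part of the original text): if $M=\mathbb H^m\subset\mathbb H^n$ is totally geodesic and passes through the center of the balls $B_r$, then $\mathrm{vol}(M\cap B_r)=V_k(r)$ and $\Theta(r)\equiv 1$. The normalizing volume is $V_k(r)=\omega_{m-1}\int_0^r \bigl(\sinh(\sqrt{k}\,t)/\sqrt{k}\bigr)^{m-1}\,dt$, with $\omega_{m-1}$ the area of the unit sphere $S^{m-1}$; a sketch, checked against closed forms:

```python
# Volume V_k(r) of a geodesic ball of radius r in the space form N^m_k of
# curvature -k <= 0; for a totally geodesic H^m in H^n through the center
# of the balls, vol(M cap B_r) = V_k(r), so the density Theta(r) is 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def V(m, k, r):
    omega = 2.0 * np.pi**(m / 2.0) / gamma(m / 2.0)   # area of unit S^{m-1}
    if k > 0:
        snk = lambda t: np.sinh(np.sqrt(k) * t) / np.sqrt(k)
    else:
        snk = lambda t: t                              # Euclidean case k = 0
    return omega * quad(lambda t: snk(t)**(m - 1), 0.0, r)[0]

# sanity checks against the classical closed forms
v_hyp2 = V(2, 1.0, 2.0)   # 2*pi*(cosh(2) - 1) in the hyperbolic plane
v_euc3 = V(3, 0.0, 2.0)   # (4/3)*pi*2^3 in R^3
```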
To properly put our results into perspective, we briefly recall a few facts about the spectrum of the Laplacian on a geodesically complete manifold. It is known by works of P. Chernoff \cite{chernoff} and R.S. Strichartz \cite{strichartz} that $-\Delta$ on a complete manifold is essentially self-adjoint on the domain $C^\infty_c(M)$, and thus it admits a unique self-adjoint extension, which we still call $-\Delta$. Since $-\Delta$ is positive and self-adjoint, its spectrum is the set of $\lambda \geq0$ such that
$\Delta +\lambda I$ does not have a bounded inverse. Sometimes we say the spectrum of $M$ rather than the spectrum of $-\Delta$, and we denote it by $\sigma(M)$.
The well-known Weyl's characterization for the spectrum of a self-adjoint operator in a Hilbert space implies the following
\begin{lemma}\cite[Lemma 4.1.2]{Davies}\label{lem_weyl}
\label{lem3}
A number $\lambda \in \mathbb{R}$ lies in $\sigma(M)$ if and only if there exists a sequence of nonzero functions
$u_j\in \mathrm{Dom}(-\Delta)$ such that
\begin{equation}\label{L2norm}
\|\Delta u_j+ \lambda u_j\|_{2} = o\big( \|u_j\|_2\big) \qquad \text{as } \, j \rightarrow +\infty.
\end{equation}
\end{lemma}
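As a hedged numerical illustration of Lemma \ref{lem_weyl} (ours, in the simplest case $M=\mathbb R$, where $\sigma(\mathbb R)=[0,+\infty)$): for any $\lambda>0$, the Gaussian-modulated waves $u_j(x)=e^{-x^2/(2j^2)}\cos(\sqrt{\lambda}\,x)$ satisfy $\|u_j''+\lambda u_j\|_2 = O(j^{-1})\,\|u_j\|_2$, exhibiting \eqref{L2norm}:

```python
# Weyl sequence for lambda = 2 on M = R: u_j(x) = exp(-x^2/(2 j^2)) cos(k x),
# k = sqrt(lambda).  Since u_j'' + lambda*u_j = phi'' cos(kx) - 2k phi' sin(kx)
# for the envelope phi = exp(-x^2/(2 j^2)), the L^2 ratio decays like 1/j.
import numpy as np

lam = 2.0
k = np.sqrt(lam)

def ratio(j, n=400_001):
    x = np.linspace(-12.0 * j, 12.0 * j, n)   # window wide enough for phi
    dx = x[1] - x[0]
    phi = np.exp(-x**2 / (2.0 * j**2))
    dphi = -x / j**2 * phi
    d2phi = (x**2 / j**4 - 1.0 / j**2) * phi
    u = phi * np.cos(k * x)
    resid = d2phi * np.cos(k * x) - 2.0 * k * dphi * np.sin(k * x)
    l2 = lambda f: np.sqrt(np.sum(f**2) * dx)
    return l2(resid) / l2(u)

ratios = [ratio(j) for j in (5, 20, 80)]   # decays roughly like 1/j
```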
In the literature, characterizations of the whole $\sigma(M)$ are known only in a few special cases. Among them are the Euclidean space, for which $\sigma(\mathbb R^m) = [0,+\infty)$, and the hyperbolic space $\mathbb H^m_k$, for which
\begin{equation}\label{spectrum_Hm}
\sigma(\mathbb H^m_k) = \left[ \frac{(m-1)^2k}{4}, +\infty \right)\!.
\end{equation}
The approach to guarantee that $\sigma(M) = [c, +\infty)$, for some $c \ge 0$, usually splits into two parts. The first one is to show that $\inf \sigma(M) \ge c$ via, for instance, the Laplacian comparison theorem from below (\cite{mckean}, \cite{GPB}), and the second one is to produce a sequence as in Lemma \ref{lem3} for each $\lambda>c$. This step is accomplished by considering radial functions of compact support, and, at least in the first results on the topic like the one in \cite{donnelly}, uses the comparison theorems on both sides for $\Delta \rho$, $\rho$ being the distance from a fixed origin $o \in M$. Therefore, the method needs both a pinching on the sectional curvature and the smoothness of $\rho$, that is, that $o$ is a pole of $M$ (see \cite{donnelly}, \cite{escobarfreire}, \cite{jli} and Corollary 2.17 in \cite{BMR_memoirs}), which is a severe topological restriction. Since then, various efforts were made to weaken both the curvature and the topological assumptions. We briefly overview some of the main achievements.\par
In \cite{Kumura}, Kumura observed that to perform the second step (and just for it) it is enough that there exists a relatively compact, mean convex, smooth open set $\Omega$ with the property that the normal exponential map realizes a global diffeomorphism $\partial \Omega \times \mathbb R_0^+ \rightarrow M \backslash \Omega$. Conditions of this kind seem, however, unavoidable for his techniques to work. On the other hand, in \cite{Kumura_2} the author drastically weakened the curvature requirements needed to establish Step 2, by replacing the two-sided pinching on the sectional curvature with a combination of a lower bound on a suitably weighted volume and an $L^p$-bound on the Ricci curvature.
Regarding the need for a pole, major recent improvements have been made in a series of papers (\cite{sturm}, \cite{wang}, \cite{Lu}, \cite{charalambouslu}): their guiding idea was to replace the $L^2$-norm in relation \eqref{L2norm} with the $L^1$-norm, which via a trick in \cite{wang}, \cite{Lu} enables one to use smoothed distance functions to construct sequences as in Lemma \ref{lem_weyl}. Building on deep function-theoretic results due to Sturm \cite{sturm} and Charalambous-Lu \cite{charalambouslu}, in \cite{wang}, \cite{Lu} the authors proved that $\sigma(M) =[0,+\infty)$ when
\begin{equation}\label{iporicci}
\liminf_{\rho(x) \rightarrow +\infty} \mathrm{Ricc}_x = 0
\end{equation}
in the sense of quadratic forms, without any topological assumption. This remarkable result improves on \cite{jli} and \cite{escobarfreire} (see also Corollary 2.17 in \cite{BMR_memoirs}), where $M$ was assumed to have a pole. Further refinements of \eqref{iporicci} have been given in \cite{charalambouslu}. However, when \eqref{iporicci} does not hold, the situation is more delicate and is still the subject of an active area of research. In this respect, we also quote the general function-theoretic criteria developed by H. Donnelly \cite{donnelly_exha}, and K.D. Elworthy and F-Y. Wang \cite{elworthywang} to ensure that a half-line belongs to the spectrum of $M$.\par
The main concern in this paper is to achieve, in the above-mentioned setting of minimal submanifolds $\varphi : M \rightarrow N$, a characterization of the whole $\sigma(M)$ free from curvature or topological conditions on $M$ (in this respect, observe that the completeness of $M$ follows from that of $N$ and the properness of $\varphi$). It is known by \cite{Cheung} and \cite{GPB} that for a minimal immersion $\varphi : M^m \rightarrow \mathbb{N}^n_k$ the fundamental tone of $M$, $\inf \sigma(M)$, is at least that of $\mathbb{N}^m_k$, i.e.,
\begin{equation}\label{infspec_intro}
\inf \sigma(M) \ge \frac{(m-1)^2k}{4}.
\end{equation}
Moreover, as a corollary of \cite{Kumura} and \cite{BJM}, \cite{BC}, if the second fundamental form $\mathrm{II}$ satisfies the decay estimate
\begin{equation}\label{intro_decayII}
\begin{array}{ll}
\displaystyle \lim_{\rho(x) \rightarrow +\infty} \rho(x)|\mathrm{II}(x)| =0 & \quad \text{if } \, k=0 \\[0.3cm]
\displaystyle \lim_{\rho(x) \rightarrow +\infty} |\mathrm{II}(x)| =0 & \quad \text{if } \, k>0 \\[0.2cm]
\end{array}
\end{equation}
($\rho(x)$ being the intrinsic distance with respect to some fixed origin $o \in M$), then $M$ has the same spectrum that a totally geodesic submanifold $\mathbb{N}^m_k \subset \mathbb{N}^n_k$, that is,
\begin{equation}\label{spectrum_McomeHm}
\sigma(M) = \left[ \frac{(m-1)^2k}{4}, +\infty \right)\!.
\end{equation}
According to \cite{Anderson_prep}, \cite{filho}, \eqref{intro_decayII} is ensured when $M$ has finite total curvature, that is, when
\begin{equation}\label{finite_total}
\int_M |\mathrm{II}|^m < +\infty.
\end{equation}
\begin{remark}
\emph{A characterization of the essential spectrum, similar to \eqref{spectrum_McomeHm}, also holds for submanifolds of the hyperbolic space $\mathbb H^n_k$ with constant (normalized) mean curvature $H<\sqrt{k}$. There, condition \eqref{finite_total} is replaced by the finiteness of the $L^m$-norm of the traceless second fundamental form. For further details, see \cite{castillon}.
}
\end{remark}
Condition \eqref{intro_decayII} is quite a restrictive requirement for \eqref{spectrum_McomeHm} to hold, since it needs a pointwise control of the second fundamental form, and the search for more manageable conditions is at the heart of the present paper. Here, we identify a suitable growth condition on the density function $\Theta(r)$ \emph{along a sequence} as a natural candidate to replace it, see \eqref{bellissima}. As a very special case, \eqref{spectrum_McomeHm} holds when $M$ has finite density. It might be interesting that just a volume growth condition along a sequence can control the whole spectrum of $M$; for this to happen, the minimality condition enters in a crucial and subtle way.
Regarding the relation between \eqref{finite_total} and the finiteness of $\Theta(+\infty)$, we remark that their interplay has been investigated in depth for minimal submanifolds of $\mathbb R^n$, but the case of $\mathbb H^n_k$ seems to be partly unexplored. In the next section, we will briefly discuss the state of the art, to the best of our knowledge. As a corollary of Theorem \ref{teo_finitedens} below, we will show the following
\begin{corollary}\label{cor_densitycurvature}
Let $M^m$ be a minimal properly immersed submanifold in $\mathbb H^n_k $. If $M$ has finite total curvature, then $\Theta(+\infty)<+\infty$.
\end{corollary}
As far as we know, this result was previously known just in dimension $m=2$ via a Chern-Osserman type inequality, see the next section for further details.\par
We now come to our results, beginning by defining the ambient spaces we are interested in: these are manifolds with a pole, whose radial sectional curvature is suitably pinched to that of the model $\mathbb{N}^n_k$.
\begin{definition}\label{def_closeHn}
Let $N^n$ possess a pole $\bar o$ and denote with $\bar \rho$ the distance function from $\bar o$. Assume that the radial sectional curvature $\bar K_\mathrm{rad}$ of $N$, that is, the sectional curvature restricted to planes $\pi$ containing $\bar\nabla \bar \rho$, satisfies
\begin{equation}\label{pinchsectio}
- G\big( \bar \rho(x) \big) \le \bar K_\mathrm{rad}(\pi_x) \le -k \le 0 \qquad \forall \, x \in N \backslash \{\bar o\},
\end{equation}
for some $G \in C^0(\mathbb R^+_0)$. We say that
\begin{itemize}
\item[$(i)$] \emph{$N$ has a pointwise (respectively, integral) pinching to $\mathbb R^n$ if $k=0$ and
$$
sG(s) \rightarrow 0 \ \text{ as } \, s \rightarrow +\infty \qquad \big(\textrm{respectively, $\, sG(s) \in L^1(+\infty)$}\big);
$$
}
\item[$(ii)$] \emph{$N$ has a pointwise (respectively, integral) pinching to $\mathbb H^n_k$ if $k>0$ and
$$
G(s)-k \rightarrow 0 \ \text{ as } \, s \rightarrow +\infty \qquad \big(\textrm{respectively, $\, G(s)-k \in L^1(+\infty)$}\big).
$$
}
\end{itemize}
\end{definition}
Hereafter, given an ambient manifold $N$ with a pole $\bar o$, the density function $\Theta(r)$ will always be computed by taking extrinsic balls centered at $\bar o$.\par
Our main achievements are the following two theorems. The first one characterizes $\sigma(M)$ when the density of $M$ grows subexponentially (respectively, sub-polynomially) along a sequence. Condition \eqref{bellissima} below is very much in the spirit of a classical growth requirement due to R. Brooks \cite{brooks} and Y. Higuchi \cite{higuchi} to bound from above the infimum of the essential spectrum of $-\Delta$. However, we stress that our Theorem \ref{teo_spectrum} seems to be the first result in the literature characterizing the whole spectrum of $M$ under just a mild volume assumption.
\begin{theorem}\label{teo_spectrum}
Let $\varphi : M^m \rightarrow N^n$ be a minimal properly immersed submanifold, and suppose that $N$ has a pointwise or an integral pinching to a space form. If either
\begin{equation}\label{bellissima}
\begin{array}{ll}
\text{$N$ is pinched to $\mathbb H^n_k$, and} & \qquad \displaystyle \liminf_{s \rightarrow +\infty} \frac{\log \Theta(s)}{s} = 0, \quad \text{or } \\[0.4cm]
\text{$N$ is pinched to $\mathbb R^n$, and} & \qquad \displaystyle \liminf_{s \rightarrow +\infty} \frac{\log \Theta(s)}{\log s} = 0.
\end{array}
\end{equation}
then
\begin{equation}\label{wholespectrum}
\sigma(M) = \left[ \frac{(m-1)^2k}{4}, +\infty\right)\!.
\end{equation}
\end{theorem}
The above theorem is well suited for minimal submanifolds constructed via Geometric Measure Theory since, typically, their existence is guaranteed by controlling the density function $\Theta(r)$. As an important example, Theorem \ref{teo_spectrum} applies to all solutions of Plateau's problem at infinity $M^m \rightarrow \mathbb H^n_k$ constructed in \cite{Anderson}, provided that they are smooth. Indeed, because of their construction, $\Theta(+\infty)<+\infty$ (see \cite{Anderson}, part [A] at p. 485) and they are proper (it can also be deduced as a consequence of $\Theta(+\infty)<+\infty$, see Remark \ref{rem_proper}). By standard regularity theory, smoothness of $M^m$ is automatic if $m \le 6$.
\begin{corollary}\label{cor_plateau}
Let $\Sigma \subset \partial_\infty \mathbb H^n_k$ be a closed, integral $(m-1)$ current in the boundary at infinity of $\mathbb H^n_k$ such that, for some neighbourhood $U\subset \mathbb H^ n_k$ of $\operatorname{supp}(\Sigma)$, $\Sigma$ does not bound in $U$, and let $M^m \hookrightarrow \mathbb H^n_k$ be the solution of Plateau's problem at infinity constructed in \cite{Anderson} for $\Sigma$. If $M$ is smooth, then \eqref{wholespectrum} holds.
\end{corollary}
An interesting fact of Corollary \ref{cor_plateau} is that $M$ is \emph{not} required to be regular up to $\partial_\infty \mathbb H^n_k$; in particular, it might have infinite total curvature. In this respect, we observe that if $M$ were $C^2$ up to $\partial_\infty \mathbb H^n_k$, then $M$ would have finite total curvature (Lemma \ref{prop_mazet} in Appendix 1). By deep regularity results, this is the case if, for instance, $M^m \rightarrow \mathbb H^{m+1}_k$ is a smooth hypersurface that solves Plateau's problem for $\Sigma$, and $\Sigma$ is a $C^{2,\alpha}$ (for $\alpha>0$), embedded compact hypersurface of $\partial_\infty \mathbb H^n_k$. See Appendix 1 for details.\par
The spectrum of solutions of Plateau's problems has also been considered in \cite{PLM} for minimal surfaces in $\mathbb R^3$. In this respect, it is interesting to compare Corollary \ref{cor_plateau} with $(3)$ of Corollary 2.6 therein.
\begin{remark}
\emph{The solution $M$ of Plateau's problem in \cite{Anderson} is constructed as a weak limit of a sequence $M_j$ of minimizing currents for suitable boundaries $\Sigma_j$ converging to $\Sigma$, and the property $\Theta(+\infty)<+\infty$ is a consequence of a uniform upper bound for the mass of the sequence $M_j$ (part [A], p. 485 in \cite{Anderson}). Such a bound is achieved because of the way the boundaries $\Sigma_j$ are constructed, in particular, since they are all sections of the same cone. One might wonder whether $\Theta(+\infty)<+\infty$, or at least the subexponential growth in \eqref{bellissima}, is satisfied by all solutions of Plateau's problem. In this respect, we just make this simple observation: in the hypersurface case $n=m+1$, if $M \cap B^{m+1}_r$ is volume-minimizing then clearly
$$
\Theta(r) = \frac{\mathrm{vol}(M \cap B^{m+1}_r)}{V_k(r)} \le \frac{\mathrm{vol}( \partial B_r^{m+1} \subset \mathbb H_k^{m+1})}{V_k(r)} = c_k \frac{\sinh^m(\sqrt{k}r)}{V_k(r)},
$$
but this last expression diverges exponentially fast as $r \rightarrow +\infty$ (differently from its Euclidean analogue, which is finite). This might suggest that a general solution of Plateau's problem does not automatically satisfy $\Theta(+\infty)<+\infty$, and maybe not even \eqref{bellissima}.
}
\end{remark}
In our second result we focus on the particular case when $\Theta(+\infty) <+\infty$, and we give a sufficient condition for its validity in terms of the decay of the second fundamental form. Towards this aim, we shall restrict to ambient spaces with an integral pinching.
\begin{theorem}\label{teo_finitedens}
Let $\varphi : M^m \rightarrow N^n$ be a minimal immersion, and suppose that $N$ has an integral pinching to a space form. Denote by $\rho(x)$ the intrinsic distance from some reference origin $o \in M$. Assume that there exist $c>0$ and $\alpha>1$ such that the second fundamental form satisfies, for $\rho(x) \gg 1$,
\begin{equation}\label{approaching_hyp}
\begin{array}{ll}
\displaystyle |\mathrm{II}(x)|^2 \le \frac{c}{\rho(x) \log^{\alpha}\rho(x)} & \qquad \text{if $N$ is pinched to $\mathbb H^n_k$;} \\[0.4cm]
\displaystyle |\mathrm{II}(x)|^2 \le \frac{c}{\rho(x)^2 \log^{\alpha}\rho(x)} & \qquad \text{if $N$ is pinched to $\mathbb R^n$.}
\end{array}
\end{equation}
Then, $\varphi$ is proper, $M$ is diffeomorphic to the interior of a compact manifold with boundary, and $\Theta(+\infty)<+\infty$.
\end{theorem}
The assertions that $\varphi$ is proper and that $M$ has finite topology are well known under assumptions even weaker than \eqref{approaching_hyp}, not necessarily requiring minimality; see for instance \cite{BJM}, \cite{BC}. Former results are due to \cite{Anderson_prep} ($N= \mathbb R^n$) and \cite{filho}, \cite{castillon} ($N=\mathbb H^n_k$). Here, our original contribution is to show that $M$ has finite density. Because of a result in \cite{filho}, \cite{PigolaVeronelli}, if $\varphi : M \rightarrow \mathbb H^n_k$ has finite total curvature then $|\mathrm{II}(x)| = o(\rho(x)^{-1})$ as $\rho(x) \rightarrow +\infty$. Hence, \eqref{approaching_hyp} is met and Corollary \ref{cor_densitycurvature} follows at once.\par
We briefly describe the strategy of the proof of Theorem \ref{teo_spectrum}. In view of \eqref{infspec_intro}, it is enough to show that each $\lambda > (m-1)^2 k/4$ lies in $\sigma(M)$. To this end, we follow an approach inspired by a general result due to K.D. Elworthy and F-Y. Wang \cite{elworthywang}. However, Elworthy-Wang's theorem is not sufficient to conclude, and we need to considerably refine the criterion in order to fit the present setting. To construct a sequence as in Lemma \ref{lem_weyl}, a key step is to couple the volume growth requirement \eqref{bellissima} with a sharpened form of the monotonicity formula for minimal submanifolds, which improves on the classical ones in \cite{simon}, \cite{Anderson}. Indeed, in Proposition \ref{prop_monotonicity} we describe three monotone quantities other than $\Theta(s)$ that might be useful beyond the purpose of the present paper. For example, in the very recent \cite{GimenoMarkvosen} the authors discovered and used some of the relations in Proposition \ref{prop_monotonicity} to show interesting comparison results for the capacity and the first eigenvalue of minimal submanifolds.
\subsection{Finite density and finite total curvature in $\mathbb R^n$ and $\mathbb H^n$}
The first attempt to extend the classical theory of finite total curvature surfaces in $\mathbb R^n$ (see \cite{Osserman_book}, \cite{JorgeMeeks}, \cite{ChernOsserman_1}, \cite{ChernOsserman_2}) to the higher-dimensional case is due to M.T. Anderson. In \cite{Anderson_prep}, the author drew from \eqref{finite_total} a number of topological and geometric consequences, and here we focus on those useful to highlight the relationship between total curvature and density. First, he showed that \eqref{finite_total} implies the decay
\begin{equation}\label{decay_Rn}
\lim_{\rho(x) \rightarrow +\infty} \rho(x)|\mathrm{II}(x)| =0,
\end{equation}
where $\rho(x)$ is the intrinsic distance from a fixed origin, and as a consequence $M$ is proper, the extrinsic distance function $r$ has no critical points outside some compact set and $|\nabla r| \rightarrow 1$ as $r$ diverges, so by Morse theory $M$ is diffeomorphic to the interior of a compact manifold with boundary. Moreover, he proved that $M$ has finite density via a higher-dimensional extension of the Chern-Osserman identity \cite{ChernOsserman_1}, \cite{ChernOsserman_2}, namely the following relation linking the Euler characteristic $\chi(M)$ and the Pfaffian form $\Omega$ (\cite{Anderson_prep}, Theorem 4.1):
\begin{equation}\label{chern_oss_Rm}
\chi(M) = \int_M \Omega + \lim_{r \rightarrow +\infty} \frac{\mathrm{vol}(M \cap \partial B_r)}{V_0'(r)}.
\end{equation}
Observe that, since $|\nabla r| \rightarrow 1$, by the coarea formula the limit on the right-hand side coincides with $\Theta(+\infty)$. We underline that the property $\Theta(+\infty)<+\infty$ plays a fundamental role in applying the machinery of manifold convergence to get information on the limit structure of the ends of $M$ (\cite{Anderson_prep}, \cite{ShenZhu}, \cite{Tysk}). For instance, $\Theta(+\infty)$ is related to the number $\mathcal{E}(M)$ of ends of $M$: if we denote by $V_1, \ldots, V_{\mathcal{E}(M)}$ the (finitely many) ends of $M$, \eqref{finite_total} implies for $m \ge 3$ the identities
\begin{equation}\label{numberfinals}
\Theta(+\infty) = \sum_{i=1}^{\mathcal{E}(M)} \lim_{r \rightarrow +\infty} \frac{\mathrm{vol}(V_i \cap \partial B_r)}{V_0'(r)} \equiv \mathcal{E}(M),
\end{equation}
and thus $M$ is totally geodesic provided that it has only one end and finite total curvature (\cite{Anderson_prep}, Theorem 5.1 and its proof). Further information on the mutual relationship between the finiteness of the total curvature and the property $\Theta(+\infty)<+\infty$ can be deduced under the additional requirement that $M$ is stable or has finite stability index. For example, by work of J. Tysk \cite{Tysk}, if $M^m$ has finite index and $m \le 6$, then
\begin{equation}\label{equiindex}
\Theta(+\infty)<+\infty \qquad \text{if and only if} \qquad \int_M |\mathrm{II}|^m <+\infty.
\end{equation}
\begin{remark}
\emph{Indeed, the main result in \cite{Tysk} states that, when $\Theta(+\infty)<+\infty$ and $m \le 6$, $M$ has finite index if and only if it has finite total curvature. However, since the finite total curvature condition alone implies both that $M$ has finite index and that $\Theta(+\infty)<+\infty$ (in any dimension\footnote{As said, finite total curvature implies $\Theta(+\infty)<+\infty$ by \eqref{chern_oss_Rm}, while the finiteness of the index can be seen as an application of the generalized Cwikel-Lieb-Rozenbljum inequality (see \cite{liyau}) to the stability operator $L = -\Delta -|\mathrm{II}|^2$, recalling that a minimal submanifold $M^m \rightarrow \mathbb R^n$ satisfies a Sobolev inequality. We refer to \cite{PRS} for further details.}), the characterization in \eqref{equiindex} is equivalent to Tysk's theorem. We underline that it is still a deep open problem whether or not, for $m \ge 3$, stability or finite index alone implies the finiteness of the density at infinity.
}
\end{remark}
Since then, efforts were made to investigate analogous properties for minimal submanifolds of finite total curvature immersed in $\mathbb H^n_k$. There, some aspects show strong analogy with the $\mathbb R^n$ case, while others are strikingly different. For instance, minimal immersions $\varphi : M^m \rightarrow \mathbb H^n_k$ with finite total curvature enjoy the same decay property \eqref{decay_Rn} with respect to the intrinsic distance $\rho(x)$ (\cite{filho}, see also \cite{PigolaVeronelli}), which is enough to deduce that they are properly immersed and diffeomorphic to the interior of a compact manifold with boundary. Moreover, Anderson \cite{Anderson} proved the monotonicity of $\Theta(r)$ in \eqref{def_densityinfty_hyp}. In order to show (among other things) that complete, finite total curvature surfaces $M^2 \hookrightarrow \mathbb H^n$ have finite density, in \cite{Chen}, \cite{ChenCheng} the authors obtained the following Chern-Osserman type inequality:
\begin{equation}\label{chern_osserman_hyp}
\chi(M) \ge - \frac{1}{4\pi} \int_M |\mathrm{II}|^2 + \Theta(+\infty),
\end{equation}
see also \cite{GimenoPalmer_2}. However, in the higher-dimensional case we found no analogue of \eqref{chern_oss_Rm}, \eqref{chern_osserman_hyp} in the literature, and adapting the proof of \eqref{chern_oss_Rm} to the hyperbolic ambient space seems subtler than expected. In fact, an \emph{equality} like \eqref{chern_oss_Rm} cannot even be obtained, since there exist minimal submanifolds of $\mathbb H^n_k$ with finite density whose density at infinity depends on the chosen reference origin \cite{Gimeno_private}. We point out that, on the contrary, inequality \eqref{chern_osserman_hyp} holds for each choice of the reference origin in $\mathbb R^n$. This motivated the different route that we follow to prove Theorem \ref{teo_finitedens} and Corollary \ref{cor_densitycurvature}.
Among the results in \cite{Anderson_prep} that admit no counterpart in $\mathbb H_k^n$, in view of the solvability of Plateau's problem at infinity on $\mathbb H_k^n$ we stress that a relation like \eqref{numberfinals} cannot hold for every minimal submanifold of $\mathbb H^n_k$ with finite total curvature. Indeed, there exists a wealth of properly immersed minimal submanifolds in $\mathbb H^n_k$ with finite total curvature and one end: for example, referring to the upper half-space model, the graphical solution of Plateau's problem for $\Sigma^{m-1} \subset \partial_\infty \mathbb H^n_k$ being the boundary of a convex set (constructed at the end of \cite{Anderson}) has finite total curvature, as follows from Lemma \ref{prop_mazet} and the regularity results recalled in Appendix 1. It should be observed, however, that when $\mathrm{II}$ decays sufficiently fast at infinity with respect to the extrinsic distance function $r(x)$:
\begin{equation}\label{decay_Hn}
\lim_{r(x) \rightarrow +\infty} e^{2 \sqrt{k}r(x)}|\mathrm{II}(x)| =0,
\end{equation}
then the inequality $\Theta(+\infty) \le \mathcal{E}(M)$ still holds for minimal hypersurfaces in $\mathbb H^n_k$, as shown in \cite{GimPal}, and in particular $M$ is totally geodesic provided that it has only one end, as first observed in \cite{KasueSugahara}, \cite{KasueSugahara_2}. We remark that there exists an infinite family of complete minimal cylinders $\varphi_\lambda : \mathbb S^1 \times \mathbb R \rightarrow \mathbb H^3$ whose second fundamental form $\mathrm{II}_\lambda$ decays exactly at the order of $\exp\{-2r(x)\}$, see \cite{Mori}.
\section{Preliminaries}
Let $\varphi : (M^m, \langle \, , \, \rangle) \rightarrow (N^n, ( \, , \, ))$ be an isometric immersion of a complete $m$-dimensional Riemannian manifold $M$ into an ambient manifold $N$ of dimension $n$ possessing a pole $\bar o$. We denote by $\nabla, \mathrm{Hess}\, , \Delta$ the connection, the Riemannian Hessian and the Laplace-Beltrami operator on $M$, while quantities related to $N$ will be marked with a bar. For instance, $\bar\nabla, \overline{\mathrm{dist}}, \overline{\mathrm{Hess}\, }$ will denote the connection, the distance function and the Hessian in $N$. Let $\bar \rho(x)= \overline{\mathrm{dist}}(x,\bar o)$ be the distance function from $\bar o$. Geodesic balls in $N$ of radius $R$ and center $y$ will be denoted by $B_R^N(y)$. Moreover, set
\begin{equation}\label{def_r}
r \ : \ M \rightarrow \mathbb R, \qquad r(x) = \bar\rho\big(\varphi(x)\big),
\end{equation}
for the extrinsic distance from $\bar o$. We denote by $\Gamma_{\! s}$ the extrinsic spheres restricted to $M$:
$\Gamma_{\! s} \doteq \{x\in M;\;r(x)=s\}$. In what follows, we shall also consider the intrinsic distance function $\rho(x) = \mathrm{dist}(x,o)$ from a fixed base point $o \in M$.
\subsection{Target spaces}
Hereafter, we consider an ambient space $N$ possessing a pole $\bar o$ and, setting $\bar \rho(x) \doteq \overline{\mathrm{dist}}(x, \bar o)$, we assume that \eqref{pinchsectio} is met for some $k \ge 0$ and some $G \in C^0(\mathbb R^+_0)$. Let $\mathrm{sn}_k(t)$ be the solution of
\begin{equation}
\left\{\begin{array}{l}
\mathrm{sn}_k'' - k\, \mathrm{sn}_k = 0 \quad \text{on } \, \mathbb R^+, \\[0.1cm]
\mathrm{sn}_k(0)=0, \quad \mathrm{sn}_k'(0)=1,
\end{array}\right.
\end{equation}
that is
\begin{equation}\label{def_snk}
\mathrm{sn}_k(t) = \left\{ \begin{array}{ll} t & \quad \text{if } \, k=0, \\[0.1cm]
\sinh(\sqrt{k}t)/\sqrt{k} & \quad \text{if } \, k>0.
\end{array}\right.
\end{equation}
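As a quick consistency check of \eqref{def_snk} in the case $k>0$: differentiating twice,
$$
\frac{\mathrm{d}^2}{\mathrm{d} t^2}\,\frac{\sinh(\sqrt{k}t)}{\sqrt{k}} = \sqrt{k}\,\sinh(\sqrt{k}t) = k\,\frac{\sinh(\sqrt{k}t)}{\sqrt{k}},
$$
while $\mathrm{sn}_k(0)=\sinh(0)/\sqrt{k}=0$ and $\mathrm{sn}_k'(0)=\cosh(0)=1$, so the stated function indeed solves the Cauchy problem.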
Observe that $\mathbb R^n$ and $\mathbb H^n_k$ can be written as the differentiable manifold $\mathbb R^n$ equipped with the metric given, in polar geodesic coordinates $(\rho, \theta) \in \mathbb R^+ \times \mathbb S^{n-1}$ centered at some origin, by
$$
\mathrm{d} s^2_k = \mathrm{d} \rho^2 + \mathrm{sn}^2_k(\rho)\,\mathrm{d} \theta^2,
$$
$\mathrm{d} \theta^2$ being the metric on the unit sphere $\mathbb S^{n-1}$.\\
We also consider the model $M^n_g$ associated with the lower bound $-G$ for $\bar K_\mathrm{rad}$, that is, we let $g \in C^2(\mathbb R^+_0)$ be the solution of
\begin{equation}\label{def_g}
\left\{\begin{array}{l}
g'' - Gg = 0 \quad \text{on } \, \mathbb R^+, \\[0.1cm]
g(0)=0, \quad g'(0)=1,
\end{array}\right.
\end{equation}
and we define $M^n_g$ as being $(\mathbb R^n, \mathrm{d} s^2_g)$ with the $C^2$-metric $\mathrm{d} s_g^2 = \mathrm{d} \rho^2 + g^2(\rho) \mathrm{d} \theta^2$ in polar coordinates. Condition \eqref{pinchsectio} and the Hessian comparison theorem (Theorem 2.3 in \cite{PRS}, or Theorem 1.15 in \cite{BMR_memoirs}) imply
\begin{equation}\label{hessiancomp}
\frac{\mathrm{sn}_k'(\bar\rho)}{\mathrm{sn}_k(\bar\rho)} \Big( ( \, , \, ) - \mathrm{d} \bar \rho \otimes\mathrm{d} \bar\rho\Big) \le \overline{\mathrm{Hess}\, }(\bar\rho) \le \frac{g'(\bar \rho)}{g(\bar \rho)}\Big( ( \, , \, ) - \mathrm{d} \bar \rho \otimes\mathrm{d} \bar\rho\Big).
\end{equation}
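For instance, when $N = \mathbb H^n_k$ with $k>0$ (so that $G \equiv k$ and $g = \mathrm{sn}_k$), the two bounds in \eqref{hessiancomp} coincide and reduce to the classical identity
$$
\overline{\mathrm{Hess}\, }(\bar\rho) = \sqrt{k}\coth(\sqrt{k}\bar\rho)\Big( ( \, , \, ) - \mathrm{d} \bar \rho \otimes\mathrm{d} \bar\rho\Big),
$$
since $\mathrm{sn}_k'/\mathrm{sn}_k = \sqrt{k}\coth(\sqrt{k}\bar\rho)$; analogously, for $N = \mathbb R^n$ the common value is $\bar\rho^{-1}\big( ( \, , \, ) - \mathrm{d} \bar \rho \otimes\mathrm{d} \bar\rho\big)$.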
The next proposition investigates the ODE properties that follow from the assumptions of pointwise or integral pinching.
\begin{proposition}\label{prop_pinching}
Let $N^n$ satisfy \eqref{pinchsectio}, and let $\mathrm{sn}_k,g$ be solutions of \eqref{def_snk}, \eqref{def_g}. Define
\begin{equation}\label{rel_useful}
\zeta(s) \doteq \frac{g'(s)}{g(s)} - \frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}.
\end{equation}
Then $\zeta(0^+)=0$ and $\zeta \ge 0$ on $\mathbb R^+$. Moreover,
\begin{itemize}
\item[$(i)$] If $N$ has a pointwise pinching to $\mathbb H^n_k$ or $\mathbb R^n$, then $\zeta(s) \rightarrow 0$ as $s \rightarrow +\infty$.
\item[$(ii)$] If $N$ has an integral pinching to $\mathbb H^n_k$ or $\mathbb R^n$, then $g/\mathrm{sn}_k \rightarrow C$ as $s \rightarrow +\infty$ for some $C \in \mathbb R^+$, and
\begin{equation}\label{intebounds_zeta}
\zeta(s) \in L^1(\mathbb R^+), \qquad \zeta(s) \frac{\mathrm{sn}_k(s)}{\mathrm{sn}_k'(s)} \rightarrow 0 \ \text{ as } \, s \rightarrow +\infty.
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof}
The non-negativity of $\zeta$, which in particular implies that $g/\mathrm{sn}_k$ is non-decreasing, follows from $G\ge k$ via Sturm comparison, while $\zeta(0^+)=0$ follows from the asymptotic relations $\mathrm{sn}_k'/\mathrm{sn}_k = s^{-1} + o(1)$ and $g'/g = s^{-1} + o(1)$ as $s \rightarrow 0^+$, which are direct consequences of the ODEs satisfied by $\mathrm{sn}_k$ and $g$. To show $(i)$, differentiating $\zeta$ we get
\begin{equation}\label{relazeta}
\zeta'(s) = R(s) - \zeta(s) B(s),
\end{equation}
where $R(s) \doteq G(s) - k$ and $\displaystyle B(s) \doteq \frac{g'(s)}{g(s)} + \frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}$.
Thus, integrating on $[1,s]$, we can rewrite $\zeta$ as follows:
\begin{equation}\label{rewritezeta}
\zeta(s) = \zeta(1)e^{ - \int_1^sB} + e^{ - \int_1^sB}\int_1^s R(\sigma)e^{\int_1^\sigma B}\mathrm{d} \sigma.
\end{equation}
Using that $B \not \in L^1([1,+\infty))$ and applying de l'H\^opital's rule, we infer
$$
\lim_{s \rightarrow +\infty} \zeta(s) = \lim_{s \rightarrow +\infty}\frac{R(s)}{B(s)} \le \lim_{s \rightarrow +\infty} \frac{\mathrm{sn}_k(s)[G(s)-k]}{\mathrm{sn}_k'(s)}.
$$
In our pointwise pinching assumptions on $G(s)$, for both $k=0$ and $k>0$ the last limit is zero, hence $\zeta(s) \rightarrow 0$ as $s$ diverges. To show $(ii)$, suppose that $N$ has an integral pinching to $\mathbb H^n_k$ or to $\mathbb R^n$. We first observe that the boundedness of $g/\mathrm{sn}_k$ on $\mathbb R^+$ is equivalent to the property $\zeta \in L^1(+\infty)$, as follows from
\begin{equation}
\log \frac{g(s)}{\mathrm{sn}_k(s)} = \int_0^{s} \frac{\mathrm{d}}{\mathrm{d} \sigma}\log\left(\frac{g(\sigma)}{\mathrm{sn}_k(\sigma)}\right)\mathrm{d} \sigma = \int_0^{s}\zeta
\end{equation}
(we used that $(g/\mathrm{sn}_k)(0^+) =1$). The boundedness of $g/\mathrm{sn}_k$ is the content of Corollary 4 and Remark 16 in \cite{BMR_Yamabe},
but we prefer here to present a direct proof. Integrating \eqref{rewritezeta} on $[1,s]$ and using Fubini's theorem, the monotonicity of $g/\mathrm{sn}_k$ and the expression of $B$ we obtain
\begin{equation}\label{equa_bonita}
\begin{array}{lcl}
\displaystyle \int_1^s \zeta & = & \displaystyle \zeta(1) \int_1^s \frac{g(1)\mathrm{sn}_k(1)}{g(\sigma)\mathrm{sn}_k(\sigma)}\,\mathrm{d} \sigma +
\int_1^s e^{ - \int_1^\sigma B}\int_1^\sigma R(\tau)e^{\int_1^\tau B}\mathrm{d} \tau \, \mathrm{d} \sigma \\[0.5cm]
& \le & \displaystyle \zeta(1)\mathrm{sn}_k(1)^2 \int_1^s \frac{\mathrm{d} \sigma}{\mathrm{sn}_k^2(\sigma)} + \int_1^s \left[\int_\tau^s e^{ - \int_1^\sigma B}R(\tau)e^{\int_1^\tau B}\mathrm{d} \sigma\right] \mathrm{d} \tau \\[0.5cm]
& \le & \displaystyle C + \int_1^s R(\tau)g(\tau)\mathrm{sn}_k(\tau)\left[\int_\tau^s \frac{\mathrm{d} \sigma}{g(\sigma)\mathrm{sn}_k(\sigma)}\right] \mathrm{d} \tau \\[0.5cm]
& \le & \displaystyle C + \int_1^s R(\tau)g(\tau)\mathrm{sn}_k(\tau)\left[\int_\tau^{+\infty} \frac{\mathrm{d} \sigma}{g(\sigma)\mathrm{sn}_k(\sigma)}\right] \mathrm{d} \tau \\[0.5cm]
\end{array}
\end{equation}
for some $C>0$, where we have used that $\mathrm{sn}_k^{-2}, g^{-1}\mathrm{sn}_k^{-1} \in L^1(+\infty)$. Next, since $g\,\mathrm{sn}_k/\mathrm{sn}_k^2$ is non-decreasing, Proposition 3.12 in \cite{BMR_memoirs} ensures the validity of the following inequality:
$$
g(\tau)\mathrm{sn}_k(\tau)\left[\int_\tau^{+\infty} \frac{\mathrm{d} \sigma}{g(\sigma)\mathrm{sn}_k(\sigma)}\right] \le \mathrm{sn}_k^2(\tau)\left[\int_\tau^{+\infty} \frac{\mathrm{d} \sigma}{\mathrm{sn}_k^2(\sigma)}\right].
$$
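Indeed, the right-hand side can be computed explicitly. For $k=0$,
$$
\mathrm{sn}_k^2(\tau)\int_\tau^{+\infty} \frac{\mathrm{d} \sigma}{\mathrm{sn}_k^2(\sigma)} = \tau^2 \int_\tau^{+\infty} \frac{\mathrm{d} \sigma}{\sigma^2} = \tau,
$$
while for $k>0$, using $\int_\tau^{+\infty} \sinh^{-2}(\sqrt{k}\sigma)\,\mathrm{d} \sigma = \big[\coth(\sqrt{k}\tau)-1\big]/\sqrt{k}$,
$$
\mathrm{sn}_k^2(\tau)\int_\tau^{+\infty} \frac{\mathrm{d} \sigma}{\mathrm{sn}_k^2(\sigma)} = \frac{\sinh^2(\sqrt{k}\tau)}{\sqrt{k}}\big[\coth(\sqrt{k}\tau)-1\big] = \frac{1-e^{-2\sqrt{k}\tau}}{2\sqrt{k}} \le \frac{1}{2\sqrt{k}}.
$$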
It is easy to show that the last expression is bounded if $k>0$, and diverges at the order of $\tau$ if $k=0$. In other words, it can be bounded by $C_1\mathrm{sn}_k/\mathrm{sn}_k'$ on $[1,+\infty)$, for some large $C_1>0$. Therefore, by \eqref{equa_bonita}
$$
\displaystyle \int_1^s \zeta \le \displaystyle C + C_1\int_1^s R(\tau)\frac{\mathrm{sn}_k(\tau)}{\mathrm{sn}_k'(\tau)} \mathrm{d} \tau = \displaystyle C + C_1\int_1^s \big[G(\tau)-k\big]\frac{\mathrm{sn}_k(\tau)}{\mathrm{sn}_k'(\tau)} \mathrm{d} \tau.
$$
In our integral pinching assumptions, both for $k=0$ and for $k>0$ we have $(G-k)\mathrm{sn}_k/\mathrm{sn}_k' \in L^1(+\infty)$, and thus $\zeta \in L^1(+\infty)$. Next, we use \eqref{relazeta} and the non-negativity of $\zeta,B$ to obtain
$$
\begin{array}{lcl}
\displaystyle \left( \frac{\zeta(s)\mathrm{sn}_k(s)}{\mathrm{sn}_k'(s)}\right)' & = & \displaystyle \big[G(s)-k - \zeta(s)B(s)\big]\frac{\mathrm{sn}_k(s)}{\mathrm{sn}_k'(s)} + \zeta(s)\left[ 1 - k \left(\frac{\mathrm{sn}_k(s)}{\mathrm{sn}_k'(s)}\right)^2\right] \\[0.5cm]
& \le & \displaystyle \frac{\big[G(s)-k\big]\mathrm{sn}_k(s)}{\mathrm{sn}_k'(s)} + \zeta(s) \in L^1(+\infty),
\end{array}
$$
hence $\zeta \mathrm{sn}_k/\mathrm{sn}_k' \in L^\infty(\mathbb R^+)$ by integrating. This implies that the function $B$ in \eqref{relazeta} satisfies $B \le C\mathrm{sn}_k'/\mathrm{sn}_k$ for some constant $C>0$. Therefore, from \eqref{relazeta} we get $\zeta' \ge -\zeta B \ge - C \zeta \mathrm{sn}_k'/\mathrm{sn}_k$. Integrating on $[s,t]$ and using the monotonicity of $\mathrm{sn}_k'/\mathrm{sn}_k$ we obtain
$$
- C\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}\int_s^t \zeta \le \zeta(t) -\zeta(s).
$$
Since $\zeta \in L^1(\mathbb R^+)$, we can choose a divergent sequence $\{t_j\}$ such that $\zeta(t_j) \rightarrow 0$ as $j \rightarrow +\infty$. Setting $t=t_j$ into the above inequality and taking limits we deduce
$$
\zeta(s) \le C\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}\int_s^{+\infty} \zeta,
$$
thus letting $s \rightarrow +\infty$ we get the second relation in \eqref{intebounds_zeta}.
\end{proof}
\subsection{A transversality lemma}
This subsection is devoted to an estimate of the measure of the critical set
$$
S_{t,s} = \Big\{ x \in M \ : \ t \le r(x) \le s, \ |\nabla r(x)|=0 \Big\},
$$
with the purpose of justifying some coarea formulas for integrals over extrinsic annuli. We begin with the following lemma.
\begin{lemma}\label{lem_coarea}
Let $\varphi : M^m \rightarrow N^n$ be an isometric immersion, and let $r(x) = \overline{\mathrm{dist}}(\varphi(x), \bar o)$ be the extrinsic distance function from $\bar o \in N$. Set $\Gamma_{\!\sigma} \doteq \{x\in M;\; r(x)=\sigma\}$. Then, for each $f \in L^1(\{t \le r \le s\})$,
\begin{equation}\label{coarea_strong}
\int_{\{t \le r \le s\}} f \,\mathrm{d} x = \int_{S_{t,s}}\!f \,\mathrm{d} x + \int_t^s \left[ \int_{\Gamma_{\!\sigma}} \frac{f}{|\nabla r|}\right] \mathrm{d} \sigma.
\end{equation}
In particular, if
\begin{equation}\label{assu_Hm}
\mathrm{vol}(S_{t,s})=0,
\end{equation}
then
\begin{equation}\label{coarea_strong2}
\int_{\{t \le r \le s\}} f \,\mathrm{d} x = \int_t^s \left[ \int_{\Gamma_{\!\sigma}} \frac{f}{|\nabla r|}\right] \mathrm{d} \sigma.
\end{equation}
\end{lemma}
\begin{proof}
We prove \eqref{coarea_strong} for $f \ge 0$; the general case follows by considering the positive and negative parts of $f$. By the coarea formula, for each $g \in L^1(\{t \le r \le s\})$,
\begin{equation}\label{classical_coarea}
\int_{\{t \le r \le s\}} g|\nabla r| \,\mathrm{d} x = \int_t^s \left[ \int_{\Gamma_{\!\sigma}} g\right] \mathrm{d} \sigma.
\end{equation}
Fix $j$ and consider $A_j = \{ |\nabla r|> 1/j\}$ and the function $$g = f1_{A_j}/|\nabla r| \in L^1(\{t \le r \le s\}).$$ Applying \eqref{classical_coarea}, letting $j \rightarrow +\infty$ and using the monotone convergence theorem we deduce
\begin{equation}\label{limit}
\int_{\{t \le r \le s\} \backslash S_{t,s}} f \,\mathrm{d} x = \int_t^s \left[ \int_{\Gamma_{\!\sigma} \backslash S_{t,s}} \frac{f}{|\nabla r|}\right] \mathrm{d} \sigma = \int_t^s \left[ \int_{\Gamma_{\!\sigma}} \frac{f}{|\nabla r|}\right] \mathrm{d} \sigma,
\end{equation}
where the last equality follows since $\Gamma_\sigma \cap S_{t,s} = \emptyset$ for a.e. $\sigma \in [t,s]$, in view of Sard's theorem. Formula \eqref{coarea_strong} follows at once.
\end{proof}
Let now $N$ possess a pole $\bar o$ and satisfy \eqref{pinchsectio}, and consider a minimal immersion $\varphi: M\rightarrow N$. Since, by the Hessian comparison theorem, geodesic spheres in $N$ centered at $\bar o$ are positively curved, it is reasonable to expect that the ``transversality'' condition \eqref{assu_Hm} holds. This is the content of the next proposition.
\begin{proposition}\label{important!}
Let $\varphi: M^{m} \rightarrow N^n$ be a minimal immersion, where $N$ possesses a pole $\bar o$ and satisfies \eqref{pinchsectio}.
Then,
\begin{equation}
\mathrm{vol}(S_{0,+\infty}) = 0.
\end{equation}
\end{proposition}
\begin{proof}
Suppose by contradiction that $\mathrm{vol}(S_{0,+\infty})>0$. By Stampacchia and Rademacher's theorems,
\begin{equation}\label{stampa}
\nabla |\nabla r|(x) = 0 \qquad \text{for a.e. } \, x \in S_{0,+\infty}.
\end{equation}
Pick one such $x$ and a local Darboux frame $\{e_i\}, \{e_\alpha\}$, $1 \le i \le m$, $m+1 \le \alpha \le n$ around $x$, that is, $\{e_i\}$ is a local orthonormal frame for $TM$ and $\{e_\alpha\}$ is a local orthonormal frame for the normal bundle $TM^\perp$. Since $\nabla r(x)=0$, we have $\bar \nabla \bar \rho(x) \in T_xM^\perp$. Up to rotating $\{e_\alpha\}$, we can suppose that $\bar \nabla \bar \rho(x) = e_n(x)$. Fix $i$ and consider a unit speed geodesic $\gamma: (-\varepsilon,\varepsilon) \rightarrow M$ such that $\gamma(0)=x$, $\dot \gamma(0)=e_i$. Identify $\gamma$ with its image $\varphi \circ \gamma$ in $N$. By Taylor's formula and \eqref{stampa},
$$
|\nabla r|(\gamma(t)) = o(t) \qquad \text{as } \, t \rightarrow 0^+.
$$
Using that $|\nabla r| = \sqrt{ 1- \sum_\alpha( \bar \nabla \bar \rho, e_\alpha)^2}$, we deduce
\begin{equation}\label{buono_0}
1 - \sum_\alpha( \bar \nabla \bar \rho, e_\alpha)_{\gamma(t)}^2 = o(t^2).
\end{equation}
Since $\bar \nabla \bar \rho(x) = e_n(x)$, we deduce from \eqref{buono_0} that also
\begin{equation}\label{buono}
u(t) \doteq 1 - ( \bar \nabla \bar \rho, e_n)_{\gamma(t)}^2 = o(t^2),
\end{equation}
thus $\dot u(0) = \ddot u(0)=0$. Computing,
$$
\begin{array}{lcl}
\dot u(t) & = & 2(\bar \nabla \bar \rho, e_n) \left[ (\bar \nabla_{\dot \gamma} \bar \nabla \bar \rho, e_n) + ( \bar \nabla \bar \rho, \bar \nabla_{\dot \gamma} e_n)\right] \\[0.2cm]
\ddot u(t) & = & 2 \left[ (\bar \nabla_{\dot \gamma} \bar \nabla \bar \rho, e_n) + ( \bar \nabla \bar \rho, \bar \nabla_{\dot \gamma} e_n)\right]^2 \\[0.2cm]
& & + 2 (\bar \nabla \bar \rho, e_n)\left[\displaystyle ( \bar \nabla_{\dot \gamma} \bar \nabla_{\dot \gamma} \bar \nabla \bar \rho, e_n) + 2 (\bar \nabla_{\dot \gamma} \bar \nabla \bar \rho, \bar \nabla_{\dot \gamma} e_n) + (\bar \nabla \bar \rho, \bar \nabla_{\dot \gamma} \bar \nabla_{\dot \gamma} e_n)\right].
\end{array}
$$
Evaluating at $t=0$ we deduce
$$
0 = \ddot u(0)/2 = \displaystyle ( \bar \nabla_{e_i} \bar \nabla_{e_i} \bar \nabla \bar \rho, \bar \nabla \bar \rho) + 2 (\bar \nabla_{e_i} \bar \nabla \bar \rho, \bar \nabla_{e_i} e_n) + (e_n, \bar \nabla_{e_i} \bar \nabla_{e_i} e_n).
$$
Differentiating twice $1 = |e_n|^2 = |\bar \nabla \bar \rho|^2$ along $e_i$ we deduce the identities $(e_n, \bar \nabla_{e_i} \bar \nabla_{e_i} e_n) = -|\bar \nabla_{e_i} e_n|^2$ and $( \bar \nabla_{e_i} \bar \nabla_{e_i} \bar \nabla \bar \rho, \bar \nabla \bar \rho) = -|\bar \nabla_{e_i} \bar \nabla \bar \rho|^2$, hence
$$
0 = \ddot u(0)/2 = \displaystyle -|\bar \nabla_{e_i}\bar \nabla \bar \rho|^2 + 2 (\bar \nabla_{e_i} \bar \nabla \bar \rho, \bar \nabla_{e_i} e_n) - |\bar \nabla_{e_i} e_n|^2 = - \big| \bar \nabla_{e_i} \bar \nabla \bar \rho - \bar \nabla_{e_i} e_n\big|^2,
$$ which implies $\bar \nabla_{e_i} \bar \nabla \bar \rho = \bar \nabla_{e_i} e_n$. Therefore, at $x$,
$$
(\mathrm{II}(e_i,e_i),e_n) = - (\bar \nabla_{e_i} e_n, e_i) = - (\bar \nabla_{e_i} \bar \nabla \bar \rho, e_i) = \overline{\mathrm{Hess}\, }(\bar \rho)(e_i,e_i).
$$
Tracing with respect to $i$, using that $M$ is minimal and \eqref{hessiancomp} we conclude that
$$
0 \ge \frac{\mathrm{sn}_k'(r(x))}{\mathrm{sn}_k(r(x))} (m- |\nabla r(x)|^2) = m\frac{\mathrm{sn}_k'(r(x))}{\mathrm{sn}_k(r(x))} >0,
$$
a contradiction.
\end{proof}
\section{Monotonicity formulae and conditions equivalent to $\Theta(+\infty)<+\infty$}\label{sec_mono}
Our first step is to improve the classical monotonicity formula for $\Theta(r)$, which can be found in \cite{simon} (for $N=\mathbb R^n$) and \cite{Anderson} (for $N=\mathbb H^n_k$). For $k \ge 0$, let $v_k, V_k$ denote the volume functions of geodesic spheres and balls, respectively, in the space form of sectional curvature $-k$ and dimension $m$, i.e.,
\begin{equation}\label{def_vk}
v_k(s) = \omega_{m-1}\mathrm{sn}_k(s)^{m-1}, \qquad V_k(s) = \int_0^s v_k(\sigma) \mathrm{d} \sigma,
\end{equation}
where $\omega_{m-1}$ is the volume of the unit sphere $\mathbb S^{m-1}$. Although we shall not use all four of the monotone quantities appearing below, they have independent interest, and for this reason we state the result in its full strength. We define the \emph{flux} $J(s)$ of $\nabla r$ over the extrinsic sphere $\Gamma_{\! s}$:
\begin{equation}\label{def_J}
J(s) \doteq \frac{1}{v_k(s)} \int_{\Gamma_s} |\nabla r|.
\end{equation}
\begin{proposition}[The monotonicity formulae]\label{prop_monotonicity}
Suppose that $N$ has a pole $\bar o$ and satisfies \eqref{pinchsectio}, and let $\varphi : M^m \rightarrow N^n$ be a proper minimal immersion. Then, the functions
\begin{equation}\label{monotones}
\Theta(s), \qquad \frac{1}{V_k(s)}\int_{\{0 \le r \le s\}}|\nabla r|^2
\end{equation}
are absolutely continuous and monotone non-decreasing. Moreover, $J(s)$ coincides, on an open set of full measure, with the absolutely continuous function
$$
\bar J(s) \doteq \frac{1}{v_k(s)}\int_{\{r \le s\}} \Delta r
$$
and $\bar J(s)$, $V_k(s)\big[ \bar J(s)-\Theta(s)\big]$ are non-decreasing. In particular, $J(s) \ge \Theta(s)$ a.e. on $\mathbb R^+$.
\end{proposition}
\begin{remark}\label{rem_tkachev}
\emph{To the best of our knowledge, the monotonicity of $J(s)$ (aside from its differentiability properties) was first shown, in the Euclidean setting, in a paper by V. Tkachev \cite{tkachev}.
}
\end{remark}
\begin{proof}
We first observe that, in view of Lemma \ref{lem_coarea} and Proposition \ref{important!} applied with $f = \Delta r$,
\begin{equation}\label{usefullll}
v_k(s)\bar J(s) \doteq \int_{\{r \le s\}} \Delta r \equiv \int_0^s \left[\int_{\Gamma_\sigma}\frac{\Delta r}{|\nabla r|}\right]\mathrm{d} \sigma
\end{equation}
is absolutely continuous, and by the divergence theorem it coincides with $v_k(s)J(s)$ whenever $s$ is a regular value of $r$. Consider
\begin{equation}\label{def_f}
f(s) = \int_0^s \frac{V_k(\sigma)}{v_k(\sigma)} \mathrm{d} \sigma = \int_0^s \frac{1}{v_k(\sigma)} \left[\int_0^\sigma v_k(\tau) \mathrm{d} \tau \right]\mathrm{d} \sigma
\end{equation}
which is a $C^2$ solution of
$$
f'' + (m-1) \frac{\mathrm{sn}_k'}{\mathrm{sn}_k} f' = 1 \quad \text{on } \, \mathbb R^+, \quad f(0)=0, \quad f'(0)=0,
$$
and define $\psi(x) = f(r(x)) \in C^2(M)$. Let $\{e_i\}$ be a local orthonormal frame on $M$. Since $\varphi$ is minimal, by the chain rule and the lower bound in the Hessian comparison \eqref{hessiancomp}
\begin{equation}\label{basica}
\Delta r = \sum_{j=1}^m \overline{\mathrm{Hess}\, }(\bar \rho)\big(\mathrm{d} \varphi(e_j), \mathrm{d} \varphi(e_j)\big) \ge \frac{\mathrm{sn}_k'(r)}{\mathrm{sn}_k(r)} \big(m -|\nabla r|^2\big).
\end{equation}
We then compute
\begin{equation}\label{deltaf}
\begin{array}{lcl}
\displaystyle \Delta \psi & = & \displaystyle f''|\nabla r|^2 + f'\Delta r \ge \displaystyle f''|\nabla r|^2 + f'\frac{\mathrm{sn}_k'}{\mathrm{sn}_k}(m-|\nabla r|^2) \\[0.3cm]
& = & \displaystyle \displaystyle 1 + \left(1-|\nabla r|^2\right)\left(f'(r)\frac{\mathrm{sn}_k'(r)}{\mathrm{sn}_k(r)}- f''(r)\right).
\end{array}
\end{equation}
It is not hard to show that the function
$$
z(s) \doteq f'(s)\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}- f''(s) = \frac{m}{m-1}\frac{V_k(s) v_k'(s)}{v_k^2(s)} - 1
$$
is non-negative and non-decreasing on $\mathbb R^+$. Indeed, from
\begin{equation}\label{derizeta}
z(0)=0, \qquad z'(s) = \frac{m}{v_k(s)} \left[k V_k(s) - \frac{1}{m-1} v_k'(s)z(s)\right]
\end{equation}
we deduce that $z'>0$ when $z<0$, which proves that $z \ge 0$ on $\mathbb R^+$.
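For completeness, the closed-form expression for $z$ used above follows directly from the ODE for $f$: since $f' = V_k/v_k$, $f'' = 1 - (m-1)\frac{\mathrm{sn}_k'}{\mathrm{sn}_k} f'$ and $v_k' = (m-1)\frac{\mathrm{sn}_k'}{\mathrm{sn}_k} v_k$, we compute
$$
z = f'\frac{\mathrm{sn}_k'}{\mathrm{sn}_k} - f'' = m\,\frac{\mathrm{sn}_k'}{\mathrm{sn}_k}\,\frac{V_k}{v_k} - 1 = \frac{m}{m-1}\frac{V_k v_k'}{v_k^2} - 1.
$$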
Fix $0<t<s$ regular values for $r$. Integrating \eqref{deltaf} on the smooth compact set $\{t \le r \le s\}$ and using the divergence theorem we deduce
\begin{equation}\label{mono}
\frac{V_k(s)}{v_k(s)} \int_{\Gamma_{\! s}} |\nabla r| - \frac{V_k(t)}{v_k(t)} \int_{\Gamma_t} |\nabla r| \ge \mathrm{vol}\big(\{t \le r \le s\}\big).
\end{equation}
By the definition of $J(s)$ and $\Theta(s)$, and since $J(s) \equiv \bar J(s)$ for regular values, the above inequality rewrites as follows:
$$
V_k(s)\bar J(s) - V_k(t)\bar J(t) \ge V_k(s)\Theta(s) - V_k(t)\Theta(t),
$$
or in other words,
$$
V_k(s)\big[\bar J(s)-\Theta(s)\big] \ge V_k(t)\big[\bar J(t)-\Theta(t)\big].
$$
Since all the quantities involved are continuous, the above relation extends to all $t,s \in \mathbb R^+$, which proves the monotonicity of $V_k[\bar J -\Theta]$. Letting $t \rightarrow 0$ we then deduce that $\bar J(s) \ge \Theta(s)$ on $\mathbb R^+$. Next, by using $f \equiv 1$ and $f \equiv |\nabla r|^2$ in Lemma \ref{lem_coarea} and exploiting again Proposition \ref{important!} we get
\begin{equation}\label{inevolume}
\mathrm{vol}\big(\{t \le r \le s\}\big) = \int_t^s \left[\int_{\Gamma_{\!\sigma}} \frac{1}{|\nabla r|}\right]\mathrm{d} \sigma, \qquad \int_{\{0\le r\le s\}}|\nabla r|^2 = \int_0^s \left[\int_{\Gamma_\sigma}|\nabla r|\right]\mathrm{d} \sigma,
\end{equation}
showing that the two quantities in \eqref{monotones} are absolutely continuous. Plugging into \eqref{mono}, letting $t \rightarrow 0$ and using that $z \ge 0$ we deduce
\begin{equation}\label{fromthis}
\frac{V_k(s)}{v_k(s)} \int_{\Gamma_{\! s}} |\nabla r| \ge \int_0^s \left[\int_{\Gamma_{\!\sigma}} \frac{1}{|\nabla r|}\right]\mathrm{d} \sigma,
\end{equation}
for regular $s$, which together with the trivial inequality $|\nabla r|^{-1} \ge |\nabla r|$ and with \eqref{inevolume} gives
\begin{equation}\label{dueineq}
\begin{array}{l}
\quad \displaystyle V_k(s)\int_{\Gamma_{\! s}} |\nabla r| \ge v_k(s) \int_0^s \left[\int_{\Gamma_{\!\sigma}} |\nabla r|\right]\mathrm{d} \sigma, \\[0.4cm]
\quad \displaystyle V_k(s)\left[ \frac{\mathrm{d}}{\mathrm{d} s} \mathrm{vol}\big( \{ r \le s\}\big)\right] \ge v_k(s) \mathrm{vol}\big(\{r \le s\}\big).
\end{array}
\end{equation}
Integrating the second inequality we obtain the monotonicity of $\Theta(s)$, while integrating the first one and using \eqref{inevolume} we obtain the monotonicity of the second quantity in \eqref{monotones}. To show the monotonicity of $\bar J(s)$, by \eqref{basica} and using the full information coming from \eqref{hessiancomp} we obtain
\begin{equation}\label{deltar}
\frac{\mathrm{sn}_k'(r)}{\mathrm{sn}_k(r)}\big(m-|\nabla r|^2\big) \le \Delta r \le \frac{g'(r)}{g(r)}\big(m-|\nabla r|^2\big).
\end{equation}
In view of the identity \eqref{usefullll}, we consider regular $s>0$, we divide \eqref{deltar} by $|\nabla r|$ and integrate on $\Gamma_s$ to get
\begin{equation}\label{diff_vol}
\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}\int_{\Gamma_{\! s}}\frac{m-|\nabla r|^2}{|\nabla r|} \le \big(v_k(s)\bar J(s)\big)' \le \frac{g'(s)}{g(s)}\int_{\Gamma_{\! s}}\frac{m-|\nabla r|^2}{|\nabla r|}.
\end{equation}
Writing $m-|\nabla r|^2 = m(1-|\nabla r|^2) + (m-1)|\nabla r|^2$, setting for convenience
\begin{equation}\label{def_Is}
v_g(s) = \omega_{m-1}g(s)^{m-1}, \qquad T(s) \doteq \frac{\int_{\Gamma_{\! s}}|\nabla r|^{-1}}{\int_{\Gamma_{\! s}}|\nabla r|}-1,
\end{equation}
rearranging we deduce the two inequalities
\begin{equation}\label{diff_vol_2}
\begin{array}{rcl}
\big(v_k(s)\bar J(s)\big)' & \ge & \displaystyle v_k'(s)\bar J(s) + m\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}T(s)v_k(s) \bar J(s)\\[0.4cm]
\big(v_k(s)\bar J(s)\big)' & \le & \displaystyle \frac{v_g'(s)}{v_g(s)}v_k(s)\bar J(s) + m\frac{g'(s)}{g(s)}T(s)v_k(s) \bar J(s).
\end{array}
\end{equation}
Expanding the derivative on the left-hand side, we deduce
\begin{equation}\label{diff_vol_3}
\begin{array}{rcl}
\bar J'(s) & \ge & \displaystyle m \frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}T(s) \bar J(s), \\[0.4cm]
\displaystyle \left( \frac{v_k(s)}{v_g(s)}\bar J(s)\right)' & \le & \displaystyle m \frac{g'(s)}{g(s)}T(s) \left( \frac{v_k(s)}{v_g(s)}\bar J(s)\right).
\end{array}
\end{equation}
The first inequality, together with the non-negativity of $T$, implies the desired $\bar J' \ge 0$, concluding the proof. The second inequality in \eqref{diff_vol_3}, on the other hand, will be useful in a moment.
\end{proof}
\begin{remark}\label{rem_proper}
\emph{The properness of $\varphi$ is essential in the above proof to justify the integrations by parts. However, if $\varphi$ is non-proper, at least when $N$ is Cartan-Hadamard with sectional curvature $\bar K \le -k$ the function $\Theta$ is still monotone in an extended sense. In fact, as observed in \cite{Tysk} for $N=\mathbb R^{m+1}$, $\Theta(s)= +\infty$ for each $s$ such that $\{r <s\}$ contains a limit point of $\varphi$. Briefly, if $\bar x \in N$ is a limit point with $\bar \rho(\bar x) < s$, choose $\varepsilon>0$ such that $2\varepsilon < s - \bar \rho(\bar x)$, and a diverging sequence $\{x_j\}\subset M$ such that $\varphi(x_j) \rightarrow \bar x$. We can assume that the balls $B_\varepsilon(x_j) \subset M$ are pairwise disjoint. Since $\overline{\mathrm{dist}}(\varphi(x), \varphi(x_j)) \le \mathrm{dist}(x,x_j)$, we deduce that $\varphi(B_\varepsilon(x_j)) \subset \{r <s\}$ for $j$ large enough, and thus
$$
\mathrm{vol}\big(\{r \le s\}\big) \ge \sum_j \mathrm{vol}(B_\varepsilon(x_j)).
$$
However, using that $\bar K \le -k$ and that $N$ is Cartan-Hadamard, we can apply the intrinsic monotonicity formula (see Proposition \ref{prop_monointri} in Appendix 2 below) with origin $\varphi(x_j)$ to deduce that $\mathrm{vol}(B_\varepsilon(x_j)) \ge V_k(\varepsilon)$ for each $j$, whence $\mathrm{vol}(\{r \le s\}) = +\infty$.
}
\end{remark}
We next investigate conditions equivalent to the finiteness of the density.
\begin{proposition}\label{prop_equivalence}
Suppose that $N$ has a pole and satisfies \eqref{pinchsectio}. Let $\varphi : M^m \rightarrow N^n$ be a proper minimal immersion. Then, the following properties are equivalent:
\begin{itemize}
\item[(1)] $\Theta(+\infty)< +\infty$;
\item[(2)] $\bar J(+\infty)<+\infty$.
\end{itemize}
Moreover, both $(1)$ and $(2)$ imply that
\begin{equation}\label{integrabilitystrange}\tag{$3$}
\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}\left[\frac{\int_{\Gamma_{\! s}}|\nabla r|^{-1}}{\int_{\Gamma_{\! s}}|\nabla r|}-1\right] \in L^1(\mathbb R^+).
\end{equation}
If further $N$ has an integral pinching to $\mathbb R^n$ or $\mathbb H^n_k$, then $(1) \Leftrightarrow (2) \Leftrightarrow (3)$.
\end{proposition}
\begin{proof}
We refer to the proof of the previous proposition for notation and formulas.\\
$(2) \Rightarrow (1)$ is obvious since, by the previous proposition, $\bar J(s) \ge \Theta(s)$.\\
$(1) \Rightarrow (2)$. Note that the limit in $(2)$ exists since $\bar J$ is monotone. Suppose by contradiction that $\bar J(+\infty)=+\infty$, let $c>0$ and fix $s_c$ large enough that $\bar J(s) \ge c$ for $s\ge s_c$. From \eqref{inevolume} and \eqref{def_J}, and since $\bar J \equiv J$ a.e.,
$$
\begin{array}{lcl}
\Theta(s) & = & \displaystyle \frac{1}{V_k(s)} \int_0^s \left[ \int_{\Gamma_{\!\sigma}} \frac{1}{|\nabla r|} \right]\mathrm{d} \sigma \ge \frac{1}{V_k(s)}\int_0^s v_k(\sigma)J(\sigma)\mathrm{d} \sigma \\[0.5cm]
& \ge & \displaystyle \frac{1}{V_k(s)}\int^s_{s_c} v_k(\sigma)J(\sigma)\mathrm{d} \sigma \ge c\frac{V_k(s)-V_k(s_c)}{V_k(s)}.
\end{array}
$$
Letting $s \rightarrow +\infty$ we get $\Theta(+\infty) \ge c$, hence $\Theta(+\infty)=+\infty$ by the arbitrariness of $c$, contradicting $(1)$.\\
$(2) \Rightarrow (3)$. Integrating \eqref{diff_vol_3} on $[1,s]$ we obtain
\begin{equation}\label{diff_vol_4}
c_1 \exp\left\{m\int_1^s \frac{\mathrm{sn}_k'(\sigma)}{\mathrm{sn}_k(\sigma)}T(\sigma)\mathrm{d} \sigma\right\} \le \bar J(s) \le c_2 \frac{v_g(s)}{v_k(s)}\exp\left\{m\int_1^s\left[\frac{g'(\sigma)}{g(\sigma)}\right]T(\sigma)\mathrm{d} \sigma \right\},
\end{equation}
for some constants $c_1,c_2>0$, where $v_g(s)$ and $T(s)$ are as in \eqref{def_Is}. The validity of $(2)$ and the first inequality show that $\mathrm{sn}_k'T/\mathrm{sn}_k \in L^1(+\infty)$, that is, $(3)$ is satisfied.\\
$(3) \Rightarrow (2)$. In our pinching assumptions on $N$, $(ii)$ in Proposition \ref{prop_pinching} gives
$$
\frac{g'}{g} = \frac{\mathrm{sn}_k'}{\mathrm{sn}_k} + \zeta, \quad \text{with} \quad \zeta \le C \frac{\mathrm{sn}_k'}{\mathrm{sn}_k} \ \text{ on } \, \mathbb R^ +, \quad \text{and} \quad g \le C\mathrm{sn}_k \ \text{ on } \, \mathbb R^+,
$$
for some $C>0$. Plugging into \eqref{diff_vol_4} and recalling the definition of $v_g$ we obtain
$$
\displaystyle \bar J(s) \le c_3\exp\left\{c_4\int_1^s\left[\frac{\mathrm{sn}_k'(\sigma)}{\mathrm{sn}_k(\sigma)}\right]T(\sigma)\mathrm{d} \sigma \right\},
$$
for some $c_3,c_4>0$, and $(3) \Rightarrow (2)$ follows by letting $s \rightarrow +\infty$.
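Explicitly, since $(3)$ means that $\mathrm{sn}_k' T/\mathrm{sn}_k \in L^1(\mathbb R^+)$, the last display yields the uniform bound
$$
\bar J(s) \le c_3\exp\left\{c_4\int_1^{+\infty}\frac{\mathrm{sn}_k'(\sigma)}{\mathrm{sn}_k(\sigma)}T(\sigma)\mathrm{d} \sigma \right\} < +\infty \qquad \text{for every } \, s \ge 1,
$$
and the finiteness of $\bar J(+\infty)$ follows from the monotonicity of $\bar J$.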
\end{proof}
\begin{remark}
\emph{A version of Propositions \ref{prop_monotonicity} and \ref{prop_equivalence} that covers most of the material presented above has also been independently proved in the very recent \cite{GimenoMarkvosen}, see Theorems 2.1 and 6.1 therein. We mention that their results are stated for more general ambient spaces subject to specific function-theoretic requirements, and that, in Proposition \ref{prop_equivalence}, in fact $\bar J(+\infty) = \Theta(+\infty)$ holds. For an interesting characterization, when $N=\mathbb R^n$, of the limit $\bar J(+\infty)$ in terms of an invariant called the projective volume of $M$ we refer to \cite{tkachev}.
}
\end{remark}
\section{Proof of Theorem 1}\label{sec_proof}
Let $M^m$ be a minimal properly immersed submanifold in $N^n$, and suppose that $N$ has a pointwise or integral pinching to a space form. Because of the upper bound in \eqref{pinchsectio}, by \cite{Cheung} and \cite{GPB} the bottom of $\sigma(M)$ satisfies
\begin{equation}\label{lowbound_spec}
\inf \sigma(M) \ge \frac{(m-1)^2k}{4}.
\end{equation}
Briefly, the lower bound in \eqref{deltar} implies
$$
\Delta r \ge (m-1) \frac{\mathrm{sn}_k'(r)}{\mathrm{sn}_k(r)} \ge (m-1) \sqrt{k} \qquad \text{on } \, M.
$$
Integrating over a relatively compact, smooth open set $\Omega$ and using the divergence theorem together with $|\nabla r| \le 1$, we deduce $\mathcal{H}^{m-1}(\partial\Omega) \ge (m-1) \sqrt{k}\,\mathrm{vol}(\Omega)$. The desired \eqref{lowbound_spec} then follows from Cheeger's inequality:
$$
\inf \sigma(M) \ge \frac{1}{4}\left(\inf_{\Omega \Subset M} \frac{\mathcal{H}^{m-1}(\partial \Omega)}{\mathrm{vol}(\Omega)}\right)^2 \ge \frac{(m-1)^2k}{4}.
$$
To complete the proof of the theorem, since $\sigma (M)$ is closed it is sufficient to show that each $\lambda > (m-1)^2k/4$ lies in $\sigma(M)$.
Set for convenience $\beta \doteq \sqrt{\lambda - (m-1)^2k/4}$ and, for $0 \le t<s$, let $A_{t,s}$ denote the extrinsic annulus
$$
A_{t,s} \doteq \big\{ x \in M \ : \ r(x) \in [t,s]\big\}.
$$
Define the weighted measure $\mathrm{d} \mu_k \doteq v_k(r)^{-1}\mathrm{d} x$ on $\{r \ge 1\}$. Hereafter, we will always restrict to this set. Consider
\begin{equation}\label{ODEpsi}
\psi(s) \doteq \frac{e^{i \beta s}}{\sqrt{v_k(s)}}, \qquad \text{which solves} \qquad \psi'' + \psi' \frac{v_k'}{v_k} + \lambda \psi = a(s) \psi,
\end{equation}
where
\begin{equation}\label{def_as}
a(s) \doteq \frac{(m-1)^2 k}{4} + \frac{1}{4}\left(\frac{v_k'(s)}{v_k(s)}\right)^2 - \frac{1}{2}\frac{v_k''(s)}{v_k(s)} \rightarrow 0
\end{equation}
as $s \rightarrow +\infty$. For technical reasons, fix $R>1$ large such that $\Theta(R)>0$. Fix $t,s,S$ such that
$$
R+1 < t < s < S-1,
$$
and let $\eta \in C^\infty_c(\mathbb R)$ be a cut-off function satisfying
$$
\begin{array}{l}
\displaystyle 0 \le \eta \le 1, \quad \eta\equiv 0 \ \text{ outside of } \, (t-1,S), \quad \eta \equiv 1 \ \text{ on } \, (t,s), \\[0.2cm]
|\eta'|+ |\eta''| \le C_0 \ \ \text{ on } \, [t-1,s], \qquad |\eta'| + |\eta''| \le \frac{C_0}{S-s} \ \ \text{ on } \, [s,S]
\end{array}
$$
for some absolute constant $C_0$ (the last relation is possible since $S-s \ge 1$). The value $S$ will be chosen later in dependence of $s$. Set $u_{t,s} \doteq \eta(r)\psi(r)\in C^\infty_c(M)$. Then, by \eqref{ODEpsi},
$$
\begin{array}{lcl}
\Delta u_{t,s} + \lambda u_{t,s} & = & \displaystyle (\eta''\psi + 2\eta' \psi' + \eta \psi'')|\nabla r|^2 + (\eta'\psi + \eta \psi') \Delta r + \lambda \eta \psi\\[0.2cm]
& = & \displaystyle \left(\eta''\psi + 2\eta' \psi' -\frac{v_k'}{v_k} \eta \psi' - \lambda \eta \psi + a \eta \psi\right)(|\nabla r|^2-1) + a\eta \psi \\[0.3cm]
& & \displaystyle + (\eta' \psi+ \eta \psi')\left(\Delta r - \frac{v_k'}{v_k}\right) + \displaystyle \left(\eta''\psi + 2\eta' \psi' + \eta' \psi \frac{v_k'}{v_k}\right).\\[0.3cm]
\end{array}
$$
Using that there exists an absolute constant $c$ for which $|\psi|+ |\psi'| \le c/\sqrt{v_k}$, the following inequality holds:
$$
\begin{array}{lcl}
\|\Delta u_{t,s} + \lambda u_{t,s}\|^2_2 & \le & \displaystyle C \left( \int_{A_{t-1,S}}\left[(1-|\nabla r|^2)^2+ \left(\Delta r - \frac{v_k'}{v_k}\right)^2 + a(r)^2\right]\mathrm{d} \mu_k \right.\\[0.5cm]
& & \displaystyle \left. + \frac{\mu_k(A_{s,S})}{(S-s)^2} + \mu_k(A_{t-1,t}) \right),
\end{array}
$$
for some suitable $C$ depending on $c,C_0$. Since $\|u_{t,s}\|^2_2 \ge \mu_k(A_{t,s})$ and $(1-|\nabla r|^2)^2 \le 1-|\nabla r|^2$, we obtain
\begin{equation}\label{weelldone!}
\begin{array}{lcl}
\displaystyle\frac{\|\Delta u_{t,s} + \lambda u_{t,s}\|^2_2}{\|u_{t,s}\|_2^2} & \le & \displaystyle C \left( \frac{1}{\mu_k(A_{t,s})}\int_{A_{t-1,S}}\left[1-|\nabla r|^2+ \left(\Delta r - \frac{v_k'}{v_k}\right)^2 + a(r)^2\right]\mathrm{d} \mu_k \right.\\[0.5cm]
& & \displaystyle \left. + \frac{1}{(S-s)^2}\frac{\mu_k(A_{s,S})}{\mu_k(A_{t,s})} + \frac{\mu_k(A_{t-1,t})}{\mu_k(A_{t,s})} \right).
\end{array}
\end{equation}
Next, using \eqref{hessiancomp},
$$
\begin{array}{lcl}
\Delta r & = & \displaystyle \sum_{i=1}^m \overline{\mathrm{Hess}\, }(\bar \rho)(e_i,e_i) = \frac{\mathrm{sn}_k'(r)}{\mathrm{sn}_k(r)}(m-|\nabla r|^2) + \mathcal{P}(x) \\[0.3cm]
& = & \displaystyle \frac{v_k'(r)}{v_k(r)} + \frac{\mathrm{sn}_k'(r)}{\mathrm{sn}_k(r)}(1-|\nabla r|^2) + \mathcal{P}(x),
\end{array}
$$
where, by Proposition \ref{prop_pinching},
\begin{equation}\label{esti_T}
\begin{array}{lcl}
0 \le \mathcal{P}(x) & \doteq & \displaystyle \sum_{i=1}^m \overline{\mathrm{Hess}\, }(\bar \rho)(e_i,e_i) - \frac{\mathrm{sn}_k'(r)}{\mathrm{sn}_k(r)}(m-|\nabla r|^2) \\[0.3cm]
& \le & \displaystyle \left(\frac{g'(r)}{g(r)} - \frac{\mathrm{sn}_k'(r)}{\mathrm{sn}_k(r)}\right)(m-|\nabla r|^2) = \zeta(r)(m-|\nabla r|^2) \le m \zeta(r).
\end{array}
\end{equation}
We thus obtain, on the set $\{r \ge 1\}$,
\begin{equation}\label{stimapadrao_0}
\begin{array}{lcl}
\displaystyle \left(\Delta r - \frac{v_k'}{v_k}\right)^2 + 1-|\nabla r|^2 + a(r)^2 & \le & \displaystyle \left[\frac{\mathrm{sn}_k'(r)}{\mathrm{sn}_k(r)}(1-|\nabla r|^2) + m\zeta(r)\right]^2 \\[0.4cm]
& & \displaystyle + 1-|\nabla r|^2 + a(r)^2 \\[0.3cm]
& \le & \displaystyle C \Big( \zeta(r)^2 + 1-|\nabla r|^2 + a(r)^2\Big)
\end{array}
\end{equation}
for some absolute constant $C$. Note that, in both the pointwise and the integral pinching assumptions on $N$, by Proposition \ref{prop_pinching} it holds that $\zeta(s) \rightarrow 0$ as $s \rightarrow +\infty$. Set
$$
F(t) \doteq \sup_{\sigma \in [t-1,+\infty)}[a(\sigma)^2+ \zeta(\sigma)^2],
$$
and note that $F(t) \rightarrow 0$ monotonically as $t \rightarrow +\infty$. Integrating \eqref{stimapadrao_0} we get the existence of $C>0$ independent of $s,t$ such that
\begin{equation}\label{simplify!}
\begin{array}{l}
\displaystyle \int_{A_{t-1,S}} \left[\left(\Delta r - \frac{v_k'}{v_k}\right)^2 + 1-|\nabla r|^2 + a(r)^2\right]\mathrm{d} \mu_k \\[0.5cm]
\qquad \qquad \le \displaystyle C \left( F(t)\int_{A_{t-1,S}}\frac{1}{v_k(r)} + \int_{A_{t-1,S}}\frac{1-|\nabla r|^2}{v_k(r)}\right).
\end{array}
\end{equation}
Using the coarea formula and the transversality lemma, for each $0 \le a<b$,
\begin{equation}\label{radialize}
\mu_k(A_{a,b}) = \int_{A_{a,b}} \frac{1}{v_k(r)} = \int_{a}^b J\big[1+T\big], \qquad \int_{A_{a,b}} \frac{1-|\nabla r|^2}{v_k(r)} = \int_{a}^b JT,
\end{equation}
where $J$ and $T$ are defined, respectively, in \eqref{def_J} and \eqref{def_Is}. Summarizing, in view of \eqref{simplify!} and \eqref{radialize} we deduce from \eqref{weelldone!} the following inequalities:
\begin{equation}\label{verygood}
\begin{array}{lcl}
\displaystyle\frac{\|\Delta u_{t,s} + \lambda u_{t,s}\|^2_2}{\|u_{t,s}\|_2^2} & \le & \displaystyle C \left( \frac{1}{\int_t^{s} J\big[1+T\big]}\left[ F(t)\int_{t-1}^SJ\big[1+T\big] + \int_{t-1}^S JT\right]\right.\\[0.5cm]
& & \displaystyle \left. + \frac{\int_{s}^S J\big[1+T\big]}{(S-s)^2\int_{t}^{s} J\big[1+T\big]} + \frac{\int_{t-1}^t J\big[1+T\big]}{\int_t^{s} J\big[1+T\big]} \right) \doteq \mathcal{Q}(t,s).
\end{array}
\end{equation}
If we can guarantee that
\begin{equation}\label{liminfcond}
\liminf_{t \rightarrow +\infty} \liminf_{s \rightarrow +\infty} \frac{\|\Delta u_{t,s} + \lambda u_{t,s}\|^2_2}{\|u_{t,s}\|_2^2} = 0,
\end{equation}
then we are able to construct a sequence of approximating eigenfunctions for $\lambda$ as follows: fix $\varepsilon>0$. By \eqref{liminfcond} there exists a divergent sequence $\{t_i\}$ such that, for $i \ge i_\varepsilon$,
$$
\liminf_{s \rightarrow +\infty} \frac{\|\Delta u_{t_i,s} + \lambda u_{t_i,s}\|^2_2}{\|u_{t_i,s}\|_2^2} < \varepsilon/2.
$$
For $i=i_\varepsilon$, pick a sequence $\{s_j\}$ realizing the liminf, so that for $j \ge j_\varepsilon(i_\varepsilon,\varepsilon)$
\begin{equation}\label{relautisj}
\|\Delta u_{t_i,s_j} + \lambda u_{t_i,s_j}\|^2_2 < \varepsilon \|u_{t_i,s_j}\|_2^2.
\end{equation}
Writing $u_\varepsilon \doteq u_{t_{i_\varepsilon},s_{j_\varepsilon}}$, by \eqref{relautisj} from the set $\{u_\varepsilon\}$ we can extract a sequence of approximating eigenfunctions for $\lambda$, concluding the proof that $\lambda \in \sigma(M)$. To show \eqref{liminfcond}, by \eqref{verygood} it is enough to prove that
\begin{equation}\label{desiredliminfs}
\liminf_{t \rightarrow +\infty} \liminf_{s \rightarrow +\infty} \mathcal{Q}(t,s) = 0.
\end{equation}
Suppose, by contradiction, that \eqref{desiredliminfs} fails. Then, there exists a constant $\delta>0$ such that, for each $t \ge t_\delta$, $\liminf_{s \rightarrow +\infty} \mathcal{Q}(t,s) \ge 2\delta$, and thus for $t \ge t_\delta$ and $s \ge s_\delta(t)$
\begin{equation}\label{contraddi}
F(t)\int_{t-1}^SJ\big[1+T\big] + \int_{t-1}^S JT + \int_{s}^S \frac{J\big[1+T\big]}{(S-s)^2} + \int_{t-1}^t J\big[1+T\big] \ge \delta \int_t^{s} J\big[1+T\big],
\end{equation}
and rearranging
\begin{equation}\label{firstrelation_3}
(F(t)+1)\int_{t-1}^SJ\big[1+T\big] - \int_{t-1}^S J + \int_{s}^S\frac{J\big[1+T\big]}{(S-s)^2} + \int_{t-1}^t J\big[1+T\big] \ge \delta \int_t^{s} J\big[1+T\big].
\end{equation}
We rewrite the above integrals in order to make $\Theta(s)$ appear. Integrating by parts and using again the coarea formula and the transversality lemma,
\begin{equation}\label{bellaidentita}
\begin{array}{lcl}
\displaystyle \int_a^b J\big[1+T\big] & = & \displaystyle \int_{A_{a,b}} \frac{1}{v_k(r)} = \displaystyle \int_a^b \frac{1}{v_k(\sigma)} \left[\int_{\Gamma_\sigma} \frac{1}{|\nabla r|}\right]\mathrm{d} \sigma = \int_a^b \frac{\big(V_k(\sigma)\Theta(\sigma)\big)'}{v_k(\sigma)}\mathrm{d} \sigma \\[0.5cm]
& = & \displaystyle \frac{V_k(b)}{v_k(b)} \Theta(b)- \frac{V_k(a)}{v_k(a)} \Theta(a)+ \int_{a}^{b} \frac{V_k v_k'}{v_k^2} \Theta.
\end{array}
\end{equation}
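Here, the third equality in the first line uses the coarea identity already exploited in the proof of Proposition \ref{prop_equivalence}, namely
$$
V_k(\sigma)\Theta(\sigma) = \mathrm{vol}\big(\{r \le \sigma\}\big) = \int_0^\sigma \left[\int_{\Gamma_{\!\tau}} \frac{1}{|\nabla r|}\right] \mathrm{d} \tau, \qquad \text{whence} \qquad \big(V_k(\sigma)\Theta(\sigma)\big)' = \int_{\Gamma_{\!\sigma}} \frac{1}{|\nabla r|} \ \text{ for a.e. } \sigma.
$$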
To deal with the term containing the integral of $J$ alone in \eqref{firstrelation_3}, we use the inequality $J(s) \ge \Theta(s)$ coming from the monotonicity formulae in Proposition \ref{prop_monotonicity}; this step is crucial to conclude. Inserting \eqref{bellaidentita} and $J \ge \Theta$ into \eqref{firstrelation_3} we get
\begin{equation}\label{firstrelation_5}
\begin{array}{l}
\displaystyle (F(t)+1)\displaystyle \frac{V_k(S)}{v_k(S)} \Theta(S)- (F(t)+1)\frac{V_k(t-1)}{v_k(t-1)} \Theta(t-1) + (F(t)+1)\int_{t-1}^{S} \frac{V_k v_k'}{v_k^2} \Theta \\[0.5cm]
\displaystyle - \int_{t-1}^S \Theta + \displaystyle \frac{1}{(S-s)^2}\left[\frac{V_k(S)}{v_k(S)} \Theta(S)- \frac{V_k(s)}{v_k(s)} \Theta(s)+ \int_{s}^{S} \frac{V_k v_k'}{v_k^2} \Theta\right] + \frac{V_k(t)}{v_k(t)} \Theta(t)\\[0.5cm]
\displaystyle - \frac{V_k(t-1)}{v_k(t-1)} \Theta(t-1) + \int_{t-1}^{t} \frac{V_k v_k'}{v_k^2} \Theta \\[0.5cm]
\qquad \qquad \qquad \ge \quad \delta \displaystyle \frac{V_k(s)}{v_k(s)} \Theta(s)- \delta\frac{V_k(t)}{v_k(t)} \Theta(t)+ \delta\int_{t}^{s} \frac{V_k v_k'}{v_k^2} \Theta.
\end{array}
\end{equation}
The idea to reach the desired contradiction is to prove that, as a consequence of \eqref{firstrelation_5},
\begin{equation}\label{inttheyta}
\int_{t-1}^S \Theta
\end{equation}
(hence, $\Theta(S)$) must grow faster as $S \rightarrow +\infty$ than the bound in \eqref{bellissima}. To do so, we need to simplify \eqref{firstrelation_5} in order to find a suitable differential inequality for \eqref{inttheyta}.\\
We first observe that, both for $k>0$ and for $k=0$, there exists an absolute constant $\hat c$ such that $\hat c^{-1} \le V_kv_k'/v_k^2 \le \hat c$ on $[1,+\infty)$. Furthermore, by the monotonicity of $\Theta$,
\begin{equation}\label{stimasemplice!}
\int_{s}^{S} \frac{V_k v_k'}{v_k^2} \Theta \le \hat c(S-s) \Theta(S).
\end{equation}
Next, we deal with the two terms in the left-hand side of \eqref{firstrelation_5} that involve \eqref{inttheyta}:
$$
\begin{array}{lcl}
\displaystyle (F(t)+1)\int_{t-1}^{S} \frac{V_k v_k'}{v_k^2} \Theta - \int_{t-1}^S \Theta & = & \displaystyle F(t)\int_{t-1}^{S} \frac{V_k v_k'}{v_k^2} \Theta + \int_{t-1}^S \frac{V_k v_k'-v_k^2}{v_k^2} \Theta \\[0.5cm]
& \le & \displaystyle \hat c F(t)\int_{t-1}^{S}\Theta + \int_{t-1}^S \frac{V_k v_k'-v_k^2}{v_k^2} \Theta.
\end{array}
$$
The key point is the following relation:
\begin{equation}\label{maggica}
\frac{V_k(s) v_k'(s)-v_k(s)^2}{v_k(s)^2} \left\{ \begin{array}{ll}
= -1/m & \quad \text{if } \, k =0; \\[0.2cm]
\rightarrow 0 \ \text{as } \, s \rightarrow +\infty, & \quad \text{if } \, k>0.
\end{array}\right.
\end{equation}
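The case $k=0$ of \eqref{maggica} is a direct computation: denoting by $\omega_m$ the volume of the unit ball in $\mathbb R^m$, we have $V_k(s) = \omega_m s^m$ and $v_k(s) = m\omega_m s^{m-1}$, hence
$$
\frac{V_k(s) v_k'(s)-v_k(s)^2}{v_k(s)^2} = \frac{V_k(s) v_k'(s)}{v_k(s)^2} - 1 = \frac{m-1}{m} - 1 = -\frac{1}{m}.
$$
For $k>0$, the claim follows from the asymptotics $v_k'/v_k \rightarrow (m-1)\sqrt{k}$ and $V_k/v_k \rightarrow \big((m-1)\sqrt{k}\big)^{-1}$ as $s \rightarrow +\infty$.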
Define
$$
\omega(t) \doteq \sup_{[t-1,+\infty)} \frac{V_k v_k'-v_k^2}{v_k^2}, \qquad \chi(t) \doteq \hat c F(t) + \omega(t).
$$
Again by the monotonicity of $\Theta$,
\begin{equation}\label{cisiamoallafine!!}
\begin{array}{lcl}
\displaystyle (F(t)+1)\int_{t-1}^{S} \frac{V_k v_k'}{v_k^2} \Theta - \int_{t-1}^S \Theta & \le & \displaystyle \big[\hat c F(t)+\omega(t)\big]\int_{t-1}^{S}\Theta = \chi(t)\int_{t-1}^{S}\Theta \\[0.3cm]
& \le & \displaystyle \chi(t)\Theta(t) + \chi(t)\int_{t}^{S}\Theta.
\end{array}
\end{equation}
For simplicity, hereafter we collect all the terms independent of $s$ in a function that we call $h(t)$, which may vary from line to line. Inserting \eqref{stimasemplice!} and \eqref{cisiamoallafine!!} into \eqref{firstrelation_5} we infer
\begin{equation}
\begin{array}{l}
\displaystyle \left[\left(F(t)+1+ \frac{1}{(S-s)^2}\right)\frac{V_k(S)}{v_k(S)} + \frac{\hat c}{S-s}\right]\displaystyle \Theta(S) + \chi(t)\int_{t}^{S}\Theta \\[0.5cm]
\displaystyle \ge h(t) + \left(\delta + \frac{1}{(S-s)^2}\right) \frac{V_k(s)}{v_k(s)}\Theta(s) + \delta\hat c^{-1}\int_{t}^{s} \Theta.
\end{array}
\end{equation}
Adding $\delta \hat{c}^{-1}(S-s)\Theta(S)$ to both sides of the above inequality, using the monotonicity of $\Theta$ and discarding the non-negative term containing $\Theta(s)$, we obtain
\begin{equation}\label{firstrelation_7}
\begin{array}{l}
\displaystyle \left[\left(F(t)+1+ \frac{1}{(S-s)^2}\right)\frac{V_k(S)}{v_k(S)} + \frac{\hat c}{S-s} + \delta \hat
c^{-1}(S-s)\right]\displaystyle \Theta(S) + \chi(t)\int_{t}^{S}\Theta \\[0.4cm]
\displaystyle \ge h(t) + \delta\hat c^{-1}\int_{t}^{S} \Theta.
\end{array}
\end{equation}
Using \eqref{maggica}, the definition of $\chi(t)$ and the properties of $\omega(t),F(t)$, we can choose $t_\delta$ sufficiently large to guarantee that
\begin{equation}\label{defck}
\delta\hat c^{-1} - \chi(t) \ge c_k \doteq \left\{ \begin{array}{ll} \frac{1}{m} + \frac{\delta \hat c^{-1}}{2} & \quad \text{if } \, k=0, \\[0.2cm]
\frac{\delta \hat c^{-1}}{2} & \quad \text{if } \, k>0,
\end{array}\right.
\end{equation}
hence
\begin{equation}\label{firstrelation_9}
\displaystyle \left[\left(F(t)+1+ \frac{1}{(S-s)^2}\right)\frac{V_k(S)}{v_k(S)} + \frac{\hat c}{S-s} + \delta \hat
c^{-1}(S-s)\right]\displaystyle \Theta(S) \ge h(t) + c_k\int_{t}^{S} \Theta.
\end{equation}
We now specify $S(s)$ depending on whether $k>0$ or $k=0$.\\[0.2cm]
\noindent \emph{The case $k>0$.}\\
We choose $S \doteq s+1$. In view of the fact that $V_k/v_k$ is bounded above on $\mathbb R^+$, \eqref{firstrelation_9} becomes
\begin{equation}\label{fine_iperbolica}
\bar c\Theta(s+1) \ge h(t) + c_k\int_{t}^{s+1} \Theta \ge \frac{c_k}{2}\int_{t}^{s+1} \Theta,
\end{equation}
for some $\bar c$ independent of $t,s$. Note that the last inequality is satisfied provided $s \ge s_\delta(t)$ is chosen to be sufficiently large, since the monotonicity of $\Theta$ implies that $\Theta \not \in L^1(\mathbb R^+)$. Integrating and using again the monotonicity of $\Theta$, we get
$$
(s+1-t)\Theta(s+1) \ge \int_t^{s+1}\Theta \ge \left[\int_{t}^{s_0+1}\Theta\right] \exp\left\{ \frac{c_k}{2\bar c}(s-s_0)\right\},
$$
hence $\Theta(s)$ grows exponentially. Ultimately, this contradicts our assumption \eqref{bellissima}.\\[0.2cm]
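For the last step, note that $G(\sigma) \doteq \int_t^{\sigma+1}\Theta$ satisfies, by \eqref{fine_iperbolica},
$$
G'(\sigma) = \Theta(\sigma+1) \ge \frac{c_k}{2\bar c}\, G(\sigma) \qquad \text{for } \, \sigma \ge s_0,
$$
and integrating this differential inequality on $[s_0,s]$ gives the exponential lower bound displayed above.\\[0.2cm]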
\noindent \emph{The case $k=0$.}\\
We choose $S \doteq s + \sqrt{s}$. Since $V_k(S)/v_k(S) = S/m$, from \eqref{firstrelation_9} we infer
\begin{equation}\label{firstrelation_25}
\displaystyle \left[\left(F(t)+1+ \frac{1}{s}\right)\frac{S}{m} + \frac{\hat c}{\sqrt{s}} + \delta \hat
c^{-1}\sqrt{s}\right]\displaystyle \Theta(S) \ge h(t) + c_k\int_{t}^{S} \Theta.
\end{equation}
Using the expression of $c_k$ and the fact that $F(t) \rightarrow 0$, up to choosing $t_\delta$ and then $s_\delta(t)$ large enough we can ensure the validity of the following inequality:
$$
\left[\left(F(t)+1+ \frac{1}{s}\right)\frac{S}{m} + \frac{\hat c}{\sqrt{s}} + \delta \hat
c^{-1}\sqrt{s}\right] < \left[\frac{1}{m} + \frac{\delta \hat c^{-1}}{4}\right]S = \left[c_k - \frac{\delta \hat c^{-1}}{4}\right]S
$$
for $t \ge t_\delta$ and $s \ge s_\delta(t)$. Plugging into \eqref{firstrelation_9}, and using that $\Theta \not \in L^1(\mathbb R^+)$,
$$
S\Theta(S) \ge h(t) + \frac{c_k}{c_k- \delta \hat{c}^{-1}/4} \int_t^S\Theta \ge (1+\varepsilon)\int_t^S\Theta,
$$
for a suitable $\varepsilon>0$ independent of $t,S$, and provided that $S \ge s_\delta(t)$ is large enough. Integrating and using again the monotonicity of $\Theta$,
$$
S\Theta(S) \ge (S-t)\Theta(S) \ge \int_t^S\Theta \ge \left[\int_t^{S_0}\Theta\right]\left(\frac{S}{S_0}\right)^{1+\varepsilon},
$$
hence $\Theta(S)$ grows polynomially at least with power $\varepsilon$, contradicting \eqref{bellissima}.\\
Concluding, both for $k>0$ and for $k=0$ assuming \eqref{contraddi} leads to a contradiction with our assumption \eqref{bellissima}, hence \eqref{liminfcond} holds, as required.
\section{Proof of Theorem 2}
We first show that $\varphi$ is proper and that $M$ is diffeomorphic to the interior of a compact manifold with boundary. Both properties are consequences of the following lemma due to \cite{BC}, which improves on \cite{Anderson_prep}, \cite{filho}, \cite{castillon}, \cite{BJM}.
\begin{lemma}\label{lem_proper}
Let $\varphi : M^m \rightarrow N^n$ be an immersed submanifold into an ambient manifold $N$ with a pole and suppose that $N$ satisfies \eqref{pinchsectio} for some $k \ge 0$.
Denote by $B_s = \{x \in M;\; \rho(x) \le s\}$ the intrinsic ball on $M$. Assume that
\begin{equation}\label{assu_II}
\begin{array}{rll}
(i) &\quad \displaystyle \limsup_{s \rightarrow +\infty} s\|\mathrm{II}\|_{L^\infty(\partial B_s)} < 1 & \quad \text{if } \, k = 0 \text{ in \eqref{pinchsectio}, or} \\[0.4cm]
(ii) & \quad \displaystyle \limsup_{s \rightarrow +\infty} \|\mathrm{II}\|_{L^\infty(\partial B_s)} < \sqrt{k} & \quad \text{if } \, k > 0 \text{ in \eqref{pinchsectio}}.
\end{array}
\end{equation}
Then, $\varphi$ is proper and there exists $R>0$ such that $|\nabla r|>0$ on $\{r \ge R\}$, where $r$ is the extrinsic distance function. Consequently, the flow
\begin{equation}\label{def_flow}
\Phi : \mathbb R^+ \times \{r=R\} \rightarrow \{r \ge R\}, \qquad \frac{\mathrm{d}}{\mathrm{d} s}\Phi_s(x) = \frac{\nabla r}{|\nabla r|^2}\big(\Phi_s(x)\big)
\end{equation}
is well defined, and $M$ is diffeomorphic to the interior of a compact manifold with boundary.
\end{lemma}
The properness of $\varphi$ enables us to apply Proposition \ref{prop_equivalence}. Therefore, to show that $\Theta(+\infty)<+\infty$ it is enough to check that
\begin{equation}\label{rapido}
\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)} \frac{\int_{\Gamma_s} \big[|\nabla r|^{-1} - |\nabla r|\big]}{ \int_{\Gamma_s}|\nabla r|} \in L^1(+\infty).
\end{equation}
To achieve \eqref{rapido}, we need to control from above the rate at which $|\nabla r|$ approaches $1$ along the flow $\Phi$ in Lemma \ref{lem_proper}. We begin with the following
\begin{lemma}\label{lem_computations}
Suppose that $N$ has a pole and radial sectional curvature satisfying \eqref{pinchsectio}, and that $\varphi: M^m \rightarrow N^n$ is a proper minimal immersion such that $|\nabla r|>0$ outside of some compact set $\{r \le R\}$. Let $\Phi$ denote the flow of $\nabla r/|\nabla r|^2$ as in \eqref{def_flow} and let $\gamma : [R, +\infty) \rightarrow M$ be a flow line starting from some $x_0 \in \{r=R\}$. Then, along $\gamma$,
\begin{equation}\label{belleequation}
\displaystyle \frac{\mathrm{d}}{\mathrm{d} s}\big( \mathrm{sn}_k(r)\sqrt{1-|\nabla r|^2}\big) \le \displaystyle \mathrm{sn}_k(r) |\mathrm{II}(\gamma(s))|.
\end{equation}
\end{lemma}
\begin{proof}
Observe that $r(\gamma(s))=s$, since $\gamma(R)=x_0 \in \{r=R\}$ and $\frac{\mathrm{d}}{\mathrm{d} s}\, r(\gamma(s)) = \langle \nabla r, \dot \gamma \rangle = 1$. By the chain rule and the Hessian comparison \eqref{hessiancomp},
$$
\begin{array}{lcl}
\displaystyle \frac{\mathrm{d}}{\mathrm{d} s}|\nabla r|^2 &= & \displaystyle 2\mathrm{Hess}\, r(\nabla r, \dot \gamma) = \frac{2}{|\nabla r|^2} \mathrm{Hess}\, r(\nabla r, \nabla r) \\[0.4cm]
& = & \displaystyle \frac{2}{|\nabla r|^2} \overline{\mathrm{Hess}\, }(\bar \rho) \big(\mathrm{d} \varphi(\nabla r), \mathrm{d} \varphi(\nabla r) \big) + \frac{2}{|\nabla r|^2} \big( \bar \nabla \bar \rho, \mathrm{II}(\nabla r, \nabla r)\big) \\[0.4cm]
& \ge & \displaystyle 2\frac{\mathrm{sn}'_k(r)}{\mathrm{sn}_k(r)}(1-|\nabla r|^2) - 2|\bar \nabla^\perp \bar \rho||\mathrm{II}|,
\end{array}
$$
where $\bar \nabla^\perp \bar \rho$ is the component of $\bar \nabla \bar \rho$ perpendicular to $\mathrm{d} \varphi(TM)$ and $|\bar \nabla^\perp \bar \rho| = \sqrt{1-|\nabla r|^2}$. Then,
$$
\displaystyle \frac{\mathrm{d}}{\mathrm{d} s}|\nabla r|^2 \ge \displaystyle 2\frac{\mathrm{sn}'_k(r)}{\mathrm{sn}_k(r)}(1-|\nabla r|^2) - 2|\mathrm{II}|\sqrt{1-|\nabla r|^2}.
$$
Multiplying by $\mathrm{sn}_k^2(r)$ gives
$$
\displaystyle \frac{\mathrm{d}}{\mathrm{d} s}\big(\mathrm{sn}^2_k(r)(1-|\nabla r|^2)\big) \le 2\mathrm{sn}_k^2(r)|\mathrm{II}|\sqrt{1-|\nabla r|^2},
$$
which implies \eqref{belleequation}.
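Indeed, by the chain rule, wherever $|\nabla r| < 1$,
$$
\frac{\mathrm{d}}{\mathrm{d} s}\big( \mathrm{sn}_k(r)\sqrt{1-|\nabla r|^2}\big) = \frac{1}{2\,\mathrm{sn}_k(r)\sqrt{1-|\nabla r|^2}}\,\frac{\mathrm{d}}{\mathrm{d} s}\big(\mathrm{sn}^2_k(r)(1-|\nabla r|^2)\big) \le \mathrm{sn}_k(r)|\mathrm{II}|.
$$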
\end{proof}
The above lemma relates the behaviour of $|\nabla r|$ to that of the second fundamental form. The next result makes this relation explicit in the two cases considered in Theorem \ref{teo_finitedens}.
\begin{proposition}\label{prop_inte}
Under the assumptions of Lemma \ref{lem_computations}, suppose further that either
\begin{equation}\label{assu_IIdecay_1}
\begin{array}{rll}
(i) & \quad \displaystyle \|\mathrm{II}\|_{L^\infty(\partial B_s)} \le \frac{C}{s \log^{\alpha/2} s} & \quad \text{if } \, k = 0 \text{ in \eqref{pinchsectio}, or} \\[0.4cm]
(ii) & \quad \displaystyle \|\mathrm{II}\|_{L^\infty(\partial B_s)} \le \frac{C}{\sqrt{s} \log^{\alpha/2} s} & \quad \text{if } \, k > 0 \text{ in \eqref{pinchsectio},}
\end{array}
\end{equation}
for $s \ge 1$ and some constants $C>0$ and $\alpha>0$. Here, $\partial B_s$ is the boundary of the intrinsic ball $B_s(o)$. Then, $|\nabla r|(\gamma(s)) \rightarrow 1$ as $s$ diverges, and if $s>2R$ and $R$ is sufficiently large,
\begin{equation}\label{integrability}
\begin{array}{ll}
\displaystyle \text{in the case $(i)$,} & \displaystyle \qquad 1-|\nabla r(\gamma(s))|^2 \le \frac{\hat C}{\log^\alpha s} \\[0.4cm]
\displaystyle \text{in the case $(ii)$,} & \displaystyle \qquad 1-|\nabla r(\gamma(s))|^2 \le \frac{\hat C}{s\log^\alpha s}
\end{array}
\end{equation}
for some constant $\hat C$ depending on $R$.
\end{proposition}
\begin{proof}
We begin by observing that, in \eqref{assu_IIdecay_1}, $\partial B_s$ can be replaced by $\Gamma_s$. Indeed, since $r(x) \le r(o) + \rho(x)$, we can choose $R$ large enough depending on $r(o),\alpha$ in such a way that, for instance in $(i)$,
$$
|\mathrm{II}(x)| \le \frac{C}{\rho(x)\log^{\alpha/2}\rho(x)} \le \frac{C_1}{r(x)\log^{\alpha/2}r(x)}
$$
for some absolute $C_1$ and for each $r\ge R$. Thus, from $(i)$ and $(ii)$ we infer the bounds
\begin{equation}\label{better}
\|\mathrm{II}\|_{L^\infty(\Gamma_s)} \le \frac{C_1}{s \log^{\alpha/2} s} \quad \text{for } \, (i), \qquad \|\mathrm{II}\|_{L^\infty(\Gamma_s)} \le \frac{C_1}{\sqrt{s} \log^{\alpha/2} s} \quad \text{for } \, (ii).
\end{equation}
Because of \eqref{better}, up to enlarging $R$ further there exists a uniform constant $C_2>0$ such that, on $[R, +\infty)$,
\begin{equation}\label{realnalaysis}
\mathrm{sn}_k(s)|\mathrm{II}(\gamma(s))| \le \left\{ \begin{array}{ll} \displaystyle \frac{C_1}{\log^{\alpha/2}s} \le C_2 \frac{\mathrm{d}}{\mathrm{d} s}\left(\frac{s}{\log^{\alpha/2}s}\right) & \quad \text{if } \, k=0;\\[0.5cm]
\displaystyle \frac{C_1 \mathrm{sn}_k(s)}{\sqrt{s} \log^{\alpha/2}s} \le C_2 \frac{\mathrm{d}}{\mathrm{d} s}\left(\frac{\mathrm{sn}_k(s)}{\sqrt{s} \log^{\alpha/2}s}\right) & \quad \text{if } \, k>0.
\end{array}\right.
\end{equation}
Integrating on $[R,s]$ and using \eqref{belleequation} we get
$$
\sqrt{1-|\nabla r(\gamma(s))|^2} \le \left\{ \begin{array}{ll}
\displaystyle \frac{C_3(R)}{s} + \frac{C_4}{\log^{\alpha/2} s} \le \frac{C_5}{\log^{\alpha/2}s} & \quad \text{if } \, k=0, \\[0.5cm]
\displaystyle \frac{C_3(R)}{\mathrm{sn}_k(s)} + \frac{C_4}{\sqrt{s}\log^{\alpha/2} s} \le \frac{C_5}{\sqrt{s}\log^{\alpha/2}s} & \quad \text{if } \, k>0,
\end{array}\right.
$$
for some absolute constants $C_4,C_5>0$ and if $s > 2R$ and $R$ is large enough. The desired \eqref{integrability} follows by taking squares.
\end{proof}
We are now ready to conclude the proof of Theorem \ref{teo_finitedens} by showing that $M$ has finite density or, equivalently, that \eqref{rapido} holds.\par
Let $\eta(s)$ be either
\begin{equation}\label{def_eta}
\frac{1}{\log^\alpha s} \ \, \text{ when } k=0, \text{ or } \, \frac{1}{s\log^\alpha s} \ \, \text{ when } k>0,
\end{equation}
where $\alpha>1$. In our assumptions, we can apply Lemma \ref{lem_computations} and Proposition \ref{prop_inte} to deduce, according to \eqref{integrability}, that, for large enough $R$,
$$
1-|\nabla r(\gamma(s))|^2 \le C\eta(s) \qquad \text{on } \, (R, +\infty),
$$
where $\gamma(s)$ is a flow curve of $\Phi$ in \eqref{def_flow} and $C=C(R)$ is a large constant. In particular, $|\nabla r(\gamma(s))| \rightarrow 1$ as $s \rightarrow +\infty$. We therefore deduce the existence of a constant $C_2(R)>0$ such that
$$
\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)} \frac{\int_{\Gamma_s} \big[|\nabla r|^{-1} - |\nabla r|\big]}{ \int_{\Gamma_s}|\nabla r|} \le C\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)}\eta(s) \frac{\int_{\Gamma_s} |\nabla r|^{-1}}{\int_{\Gamma_s}|\nabla r|} \le C_2\frac{\mathrm{sn}_k'(s)}{\mathrm{sn}_k(s)} \eta(s).
$$
In both our cases $k=0$ and $k>0$, since $\alpha >1$ it is immediate to check that $\mathrm{sn}_k'\eta/\mathrm{sn}_k \in L^1(+\infty)$, proving \eqref{rapido}.
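Explicitly, with the standard convention $\mathrm{sn}_k(s)=s$ for $k=0$ and $\mathrm{sn}_k(s)=\sinh(\sqrt{k}s)/\sqrt{k}$ for $k>0$: in the first case $\mathrm{sn}_k'(s)\eta(s)/\mathrm{sn}_k(s) = 1/(s\log^\alpha s)$, while in the second $\mathrm{sn}_k'/\mathrm{sn}_k = \sqrt{k}\coth(\sqrt{k}s)$ is bounded on $[1,+\infty)$ and $\eta(s) = 1/(s\log^\alpha s)$. In both cases, the conclusion reduces to the elementary fact that
$$
\int_2^{+\infty}\frac{\mathrm{d} s}{s\log^\alpha s} = \frac{\log^{1-\alpha} 2}{\alpha-1} < +\infty \qquad \text{for } \, \alpha>1.
$$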
\section*{Appendix 1: finite total curvature solutions of Plateau's problem}
In this appendix, we show that (smooth) solutions of Plateau's problem at infinity $M^m \rightarrow \mathbb H^n$ have finite total curvature whenever $M$ is a hypersurface and the boundary datum $\Sigma \subset \partial_\infty \mathbb H^n$ is sufficiently regular. Consider the Poincar\'e model of $\mathbb H^n$, and let $M \rightarrow \mathbb H^n$ be a proper minimal submanifold. We say that $M$ is $C^{k,\alpha}$ up to $\partial_\infty \mathbb H^n$ if its closure $\overline{M}$ in the topology of the closed unit ball $\overline{\mathbb H^n} = \mathbb H^n \cup \partial_\infty \mathbb H^n$ is a $C^{k,\alpha}$-manifold with boundary. We begin with a lemma, whose proof was suggested to the second author by L. Mazet.
\begin{lemma}\label{prop_mazet}
Let $\varphi : M^m \rightarrow \mathbb H^n$ be a proper minimal submanifold. If $M$ is of class $C^2$ up to $\partial_\infty \mathbb H^n$, then $M$ has finite total curvature.
\end{lemma}
\begin{proof}
The Euclidean metric $\overline{\langle \, , \, \rangle}$ is related to the Poincar\'e metric $\langle \, , \, \rangle$ by the formula
$$
\overline{\langle \, , \, \rangle} = \lambda^2 \langle \, , \, \rangle, \qquad \text{with} \quad \lambda = \frac{1-|x|^2}{2}.
$$
Given a proper, minimal submanifold $\varphi : (M^m,g) \rightarrow (\mathbb H^n, \langle \, , \, \rangle)$, we associate the isometric immersion $\bar \varphi : (M, (\lambda^2 \circ \varphi) g) \rightarrow (\mathbb H^{n}, \overline{\langle \, , \, \rangle})$, $\bar \varphi(x) \doteq \varphi(x)$.
Fix a local Darboux frame $\{e_i, e_\alpha\}$ on $(M,g)$ for $\varphi$, with $\{e_i\}$ tangent to $M$ and $\{e_\alpha\}$ in the normal bundle, and let $\bar e_i = e_i/\lambda$, $\bar e_\alpha = e_\alpha/\lambda$ be the corresponding Darboux frame on $(M, \lambda^2g)$ for $\bar \varphi$. Let $\mathrm{d} V$ and $\mathrm{d} \bar V = \lambda^{m} \mathrm{d} V$ be the volume forms of $(M,g)$ and $(M, \lambda^2g)$, and denote by $h^\alpha_{ij}$ and $\bar h^\alpha_{ij}$ the coefficients of the second fundamental forms of $\varphi$ and $\bar \varphi$, respectively. A standard computation shows that
$$
\bar h^\alpha_{ij} = \frac{1}{\lambda} h^\alpha_{ij} - \frac{\lambda_\alpha}{\lambda} \delta_{ij},
$$
where $\lambda_\alpha = e_\alpha(\lambda)$. Evaluating the norms of $\mathrm{II}$ and $\bar{\mathrm{II}}$, since $h^\alpha_{ij}$ is trace-free by minimality we obtain
$$
|\bar{\mathrm{II}}|^2 = \lambda^{-2}|\mathrm{II}|^2 + m |\nabla^\perp \log \lambda|^2 \ge \lambda^{-2}|\mathrm{II}|^2,
$$
and thus $|\bar{\mathrm{II}}|^m \mathrm{d} \bar V \ge |\mathrm{II}|^m \mathrm{d} V$. Integrating over $M$, we obtain
$$
\int_M |\mathrm{II}|^m \mathrm{d} V \le \int_M |\bar{\mathrm{II}}|^m \mathrm{d} \bar V.
$$
$$
The latter integral is finite since $M$ is $C^2$ up to $\partial_\infty \mathbb H^n$, and thus $\varphi$ has finite total curvature.
\end{proof}
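For the reader's convenience, here is the elementary norm computation used in the proof: writing $|\bar{\mathrm{II}}|^2 = \sum_{\alpha,i,j} (\bar h^\alpha_{ij})^2$ and using that $\sum_i h^\alpha_{ii} = 0$ by minimality, the cross term vanishes:

```latex
\[
|\bar{\mathrm{II}}|^2
= \sum_{\alpha,i,j} \Big( \frac{h^\alpha_{ij}}{\lambda} - \frac{\lambda_\alpha}{\lambda}\,\delta_{ij} \Big)^2
= \frac{|\mathrm{II}|^2}{\lambda^2}
- \frac{2}{\lambda^2} \sum_\alpha \lambda_\alpha \sum_i h^\alpha_{ii}
+ m \sum_\alpha \frac{\lambda_\alpha^2}{\lambda^2}
= \frac{|\mathrm{II}|^2}{\lambda^2} + m\,|\nabla^\perp \log \lambda|^2,
\]
```

where we used $\sum_{i,j}\delta_{ij}^2 = m$ and $\sum_\alpha (\lambda_\alpha/\lambda)^2 = |\nabla^\perp \log\lambda|^2$.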
In view of Lemma \ref{prop_mazet}, we briefly survey some boundary regularity results for solutions of Plateau's problem. To the best of our knowledge, such regularity results are only available for hypersurfaces. Let $M^m \rightarrow \mathbb H^{m+1}$ be a solution of Plateau's problem for a compact, $(m-1)$-dimensional submanifold $\Sigma^{m-1} \subset \partial_\infty \mathbb H^{m+1}$. A classical result of Hardt and Lin \cite{HardtLin} states that if $\Sigma^{m-1} \hookrightarrow \partial_\infty \mathbb H^{m+1}$ is properly embedded and $C^{1,\alpha}$, with $0 \le \alpha \le 1$, then near $\Sigma$ each solution $M^m \rightarrow \mathbb H^{m+1}$ of Plateau's problem is a finite collection of $C^{1,\alpha}$-manifolds with boundary, which are disjoint except at the boundary. Therefore, near $\Sigma$, $M$ can locally be described as a graph, and the higher regularity theory in \cite{Lin, Lin2, tonegawa2, tonegawa} applies to give the following: if $\Sigma$ is $C^{j,\alpha}$, then $M$ is $C^{j,\alpha}$ up to $\partial_\infty \mathbb H^{m+1}$ whenever
\begin{itemize}
\item[-] $1 \le j \le m-1$ and $0 \le \alpha \le 1$, or
\item[-] $j=m$ and $0 < \alpha < 1$, or
\item[-] $j \ge m+1$ and $0<\alpha<1$ (if $m$ is odd, under a further condition on $\Sigma$).
\end{itemize}
The reader can consult the statement and references in \cite{Lin2}. In particular, because of Lemma \ref{prop_mazet}, if $\Sigma$ is $C^{2,\alpha}$ for some $0<\alpha<1$ then $M$ has finite total curvature (provided that it is smooth).
\section*{Appendix 2: the intrinsic monotonicity formula}
We conclude by recalling an intrinsic version of the monotonicity formula. To state it, we first record the following observation due to H. Donnelly and N. Garofalo, Proposition 3.6 in \cite{donnellygarofalo}.
\begin{proposition}\label{prop_garo}
For $k \ge 0$, the function
\begin{equation}\label{monoto_vk}
\frac{V_k(s)}{v_k(s)} \qquad \text{is non-decreasing on } \, \mathbb R^+.
\end{equation}
\end{proposition}
\begin{proof}
The ratio $v_k'/v_k$ is monotone decreasing by the very definition of $v_k$. Then, since $v_k'>0$, the desired monotonicity follows from a lemma at p. 42 of \cite{cheegergromovtaylor}.
\end{proof}
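As a numerical sanity check of Proposition \ref{prop_garo} (not needed for the proof), one can verify \eqref{monoto_vk} in the model case, where, up to a dimensional constant, $v_k(s)=\sinh^{m-1}(\sqrt{k}\,s)$ for $k>0$ and $V_k(s)=\int_0^s v_k$; these normalizations are our assumption in the sketch below.

```python
import math

def check_Vk_over_vk(m=3, k=1.0, s_max=5.0, n=2000):
    """Numerically verify that V_k/v_k is non-decreasing, assuming
    v_k(s) = sinh(sqrt(k) s)^(m-1) up to a constant (curvature -k < 0)."""
    h = s_max / n
    V, v_prev, prev_ratio, ok = 0.0, 0.0, None, True  # v_k(0) = 0
    for i in range(1, n + 1):
        s = i * h
        v = math.sinh(math.sqrt(k) * s) ** (m - 1)   # geodesic-sphere volume, up to a constant
        V += (v + v_prev) / 2 * h                    # trapezoid rule: V_k(s) = integral of v_k
        v_prev = v
        ratio = V / v
        if prev_ratio is not None and ratio < prev_ratio - 1e-12:
            ok = False
        prev_ratio = ratio
    return ok
```

The check simply marches along a grid, accumulates $V_k$ by the trapezoid rule, and tests that the ratio never decreases.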
\begin{proposition}[The intrinsic monotonicity formula]\label{prop_monointri}
Suppose that $N$ has a pole $\bar o$ and satisfies \eqref{pinchsectio}, and let $\varphi : M^m \rightarrow N^n$ be a complete, minimal immersion. Suppose that $\bar o \in \varphi(M)$, and choose $o \in M$ such that $\varphi(o) = \bar o$. Then, denoting with $\rho$ the intrinsic distance function from $o$ and with $B_s = \{ \rho \le s\}$,
\begin{equation}\label{intrinsicquotient}
\frac{\mathrm{vol}(B_s)}{V_k(s)}
\end{equation}
is monotone non-decreasing on $\mathbb R^+$.
\end{proposition}
\begin{proof}
We refer to Proposition \ref{prop_monotonicity} for definitions and computations. We know that the function $\psi= f\circ r$, with $f$ as in \eqref{def_f}, solves $\Delta \psi \ge 1$ on $M$. Integrating on $B_s$ and using the definition of $\psi$ we obtain
$$
\mathrm{vol}(B_s) \le \int_{B_s}\Delta \psi = \int_{\partial B_s}\langle \nabla \psi, \nabla \rho \rangle \le \int_{\partial B_s} \frac{V_k(r)}{v_k(r)}.
$$
Next, since $\bar o = \varphi(o)$, we have $r(x) \le \rho(x)$ on $M$, and in particular $r \le s$ on $\partial B_s$. Using Proposition \ref{prop_garo}, we deduce
$$
\mathrm{vol}(B_s) \le \frac{V_k(s)}{v_k(s)} \mathrm{vol}(\partial B_s).
$$
Since $\mathrm{vol}(\partial B_s) = \frac{\mathrm{d}}{\mathrm{d}s}\mathrm{vol}(B_s)$ for a.e. $s$, the last inequality says that $\frac{\mathrm{d}}{\mathrm{d}s} \log \mathrm{vol}(B_s) \ge \frac{v_k(s)}{V_k(s)} = \frac{\mathrm{d}}{\mathrm{d}s} \log V_k(s)$; integrating, we obtain the desired monotonicity of \eqref{intrinsicquotient}.
\end{proof}
\vspace{0.5cm}
\noindent \textbf{Acknowledgements}.\\
The second author is supported by the grant PRONEX - N\'ucleo de An\'alise Geom\'etrica e Aplicac\~oes Processo nº PR2-0054-00009.01.00/11. The third author is partially supported by CNPq. The second author would like to thank L. Hauswirth and L. Mazet for an interesting discussion on finite total curvature submanifolds of $\mathbb H^n$, A. Figalli and G.P. Bessa for a hint, P. Castillon for a bibliographical suggestion and V. Gimeno for pleasant conversations and various comments that led to several improvements after we posted a first version of the paper on arXiv.
% Source: "Density and spectrum of minimal submanifolds in space forms", arXiv:1407.5280 (math.DG).
% Source: "Census of bounded curvature paths", arXiv:2005.13210.
\begin{abstract}
A bounded curvature path is a continuously differentiable piecewise $C^2$ path with bounded absolute curvature connecting two points in the tangent bundle of a surface. These paths have been widely considered in computer science and engineering, since the bound on curvature models the trajectory of the motion of robots under turning circle constraints. Analyzing global properties of spaces of bounded curvature paths is not a simple matter, since the length variation between length minimizers of arbitrarily close endpoints or directions is in many cases discontinuous. In this note, we develop a simple technology allowing us to partition the space of spaces of bounded curvature paths into one-parameter families. These families of spaces are classified in terms of the type of connected components their elements have (homotopy classes, isotopy classes, or isolated points) as we vary a parameter defined in the reals. Consequently, we answer a question raised by Dubins (Pac J Math 11(2), 1961).
\end{abstract}
\section{Prelude}
It is well known that any two plane curves, whether closed or with differing endpoints, are homotopic. In 1937, Graustein and Whitney independently proved that not every pair of closed planar curves is regularly homotopic (homotopic through immersions) \cite{whitney}. In 1889, Markov considered several optimization problems relating a bound on curvature to the design of railroads \cite{markov}. But it was only in 1957 that bounded curvature paths were rigorously introduced by Dubins, when bounded curvature paths of minimal length were first characterized \cite{dubins 1}.
Fix two elements in the tangent bundle of the Euclidean plane $(x,X),(y,Y)\in T{\mathbb R}^2$. Informally, a planar bounded curvature path is a $C^1$ and piecewise $C^2$ path starting at $x$ and finishing at $y$, with tangent vectors $X$ and $Y$ at these points respectively, and having absolute curvature bounded by $\kappa=\frac{1}{r}>0$. Here $r$ is the minimum allowed radius of curvature. The piecewise $C^2$ property arises naturally from the nature of the length minimizers \cite{dubins 1}\footnote{Dubins proved that bounded curvature paths of minimal length are concatenations of two arcs of a circle with a line segment in between, or three arcs of a circle, or any subset of these: the so-called {\sc csc}-{\sc ccc} paths.}.
In 1961 Dubins raised fundamental questions about the topology of the spaces of bounded curvature paths \cite{dubins 2}: ``Here we only begin the exploration, raise some questions that we hope will prove stimulating, and invite others to discover the proofs of the definite theorems, proofs that have eluded us'' (see p. 471 in \cite{dubins 2}). Fifty years later, the fundamental questions proposed by Dubins were answered in the papers \cite{paperb, papera, paperc, paperd}. In addition, the classification of the homotopy classes of curves with bounded absolute curvature, having fixed initial and final positions and variable initial and final directions, was achieved in \cite{papere}.
In this note, we develop an elementary framework enabling us to parametrize families of spaces of bounded curvature paths. These families share similar types of connected components, namely isolated points, homotopy classes, or isotopy classes, see Theorem \ref{maincensus1} and Definition \ref{def:spaces}. In particular, we answer a question raised by Dubins in 1961 \cite{dubins 2} by explicitly describing the set of endpoints $(x,X),(y,Y)\in T{\mathbb R}^2$ for which the space of bounded curvature paths starting at $x$ and finishing at $y$, with tangent vectors $X$ and $Y$ at these points respectively, admits a bounded isotopy class, see Theorem \ref{noopnoclofib} and Corollary \ref{noopnoclo}. We conclude by presenting an updated (parametric) version of the classification theorem for homotopy classes of bounded curvature paths in \cite{paperd}, incorporating the results obtained here, see Theorem \ref{paramclass}. Our results can be extended without much effort to paths in the hyperbolic 2-space.
This article is the culmination of a program devoted to classifying the homotopy classes of bounded curvature paths \cite{paperc, paperd} and the minimal length elements in homotopy classes \cite{paperb, papera}. We recommend that the reader refer from time to time to our previous work \cite{paperb, papere, papera, paperc, paperd}. We conclude by presenting an Appendix that can be read independently. The Appendix considers further examples and questions about computational aspects of connected components and deformations of piecewise constant bounded curvature paths.
There is a vast literature on bounded curvature paths from the theoretical computer science point of view. We encourage the reader to refer to \cite{aga1, baker, buasanei2, bui, fortune, jacobs, lavalle, reif, rus}. Bounded curvature paths have been applied to many real-life problems, since a bound on curvature models the trajectory of the motion of wheeled vehicles and drones, also called unmanned aerial vehicles (UAVs). We mention only \cite{brazil 1, chang, duindan, ny, owen1, soures, tso1}. Literature on the topology and geometry of spaces of bounded curvature paths can be found in \cite{paperb, papere, papera, paperc, paperd, dubins 1, dubins 2, reeds, saldanha, sus}.
The illustrations here presented have been imported from Dubins Explorer, a software for bounded curvature paths \cite{dubinsexplorer}.
\section{On spaces of bounded curvature paths}
For the convenience of the reader, we include relevant material from our previous work in \cite{paperb, papere, papera, paperc, paperd}. Denote by $T{\mathbb R}^2$ the tangent bundle of ${\mathbb R}^2$. Recall that the elements in $T{\mathbb R}^2$ are pairs $(x,X)$ denoted here for short by {\sc x}. The first coordinate of such a pair corresponds to a point in ${\mathbb R}^2$ and the second to a tangent vector to ${\mathbb R}^2$ at $x$.
\begin{definition} \label{defbcp} Given $(x,X),(y,Y) \in T{\mathbb R}^2$, a path $\gamma: [0,s]\rightarrow {\mathbb R}^2$ connecting these points is a {\it bounded curvature path} if:
\begin{itemize}
\item $\gamma$ is $C^1$ and piecewise $C^2$;
\item $\gamma$ is parametrized by arc length (i.e., $||\gamma'(t)||=1$ for all $t\in [0,s]$);
\item $\gamma(0)=x$, $\gamma'(0)=X$; $\gamma(s)=y$, $\gamma'(s)=Y$;
\item $||\gamma''(t)||\leq \kappa$ for all $t\in [0,s]$ when defined, where $\kappa>0$ is a constant.
\end{itemize}
\end{definition}
The first item means that a bounded curvature path has continuous first derivative and piecewise continuous second derivative. Minimal length elements in spaces of paths satisfying the last three items in Definition \ref{defbcp} are in fact $C^1$ and piecewise $C^2$. For the third item, without loss of generality, we extend the domain of $\gamma$ to $(-\epsilon,s+\epsilon)$ for $\epsilon>0$. Sometimes we describe the third item as the endpoint condition. The fourth item means that bounded curvature paths have absolute curvature bounded above by a positive constant. Without loss of generality, we consider $\kappa=1$.
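As a concrete instance of Definition \ref{defbcp} (with $\kappa=1$), the arc-length parametrized arc of a unit circle $\gamma(t)=(\sin t, 1-\cos t)$, which starts at the origin with $\gamma'(0)=(1,0)$, satisfies all four items. The finite-difference sketch below (an illustration of ours, not part of the text) checks $||\gamma'||=1$ and $||\gamma''||=1\leq\kappa$ numerically.

```python
import math

def gamma(t):
    # unit-radius circular arc through (0,0) with gamma'(0) = (1,0)
    return (math.sin(t), 1.0 - math.cos(t))

def norm(v):
    return math.hypot(v[0], v[1])

def check_arc(kappa=1.0, s=math.pi / 2, n=200, h=1e-5):
    """Check, by central finite differences, that gamma is unit speed
    and that its acceleration stays within the curvature bound kappa."""
    ok = True
    for i in range(n):
        t = s * i / n
        p0, p1, p2 = gamma(t - h), gamma(t), gamma(t + h)
        d1 = ((p2[0] - p0[0]) / (2 * h), (p2[1] - p0[1]) / (2 * h))   # gamma'
        d2 = ((p2[0] - 2 * p1[0] + p0[0]) / h ** 2,
              (p2[1] - 2 * p1[1] + p0[1]) / h ** 2)                   # gamma''
        ok &= abs(norm(d1) - 1.0) < 1e-6 and norm(d2) <= kappa + 1e-4
    return ok
```

The tolerances absorb the floating-point error of the second-order difference quotients.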
The unit tangent bundle $UT\mathbb R^2$ is equipped with a natural projection $p : UT{\mathbb R}^2 \rightarrow {\mathbb R}^2$. Note that $p^{-1}(y)$ is $\mathbb S^1$ for all $y \in{\mathbb R}^2$. The space of endpoints is a circle bundle over ${\mathbb R}^2$.
\begin{remark}\label{coo}{\it (Coordinate system and angle orientation).}
\begin{itemize}
\item For the given $(x,X), (y,Y) \in T{\mathbb R}^2$ in Definition \ref{defbcp} we consider a coordinate system so that the origin is identified with $x$, and $X$ with the first canonical vector in the standard basis $\{X=e_1,e_2 \}$ for $\mathbb R^2$.
\item When measuring angles we consider the positive orientation to be traveled counterclockwise.
\end{itemize}
\end{remark}
Dubins \cite{dubins 1} proved that length-minimizing bounded curvature paths are necessarily a concatenation of an arc of a unit radius circle, followed by a line segment, followed by an arc of a unit radius circle (the so-called {\sc csc} paths), or a concatenation of three arcs of unit radius circles (the so-called {\sc ccc} paths). After letting {\sc r} denote a circle traveled to the right and {\sc l} a circle traveled to the left, we obtain six possible types of paths, namely {\sc lsl}, {\sc rsr}, {\sc lsr}, {\sc rsl}, {\sc lrl} and {\sc rlr}. Paths of one of these types are here called Dubins paths. Note that we consider Dubins paths to be local, not necessarily global, minima of length.
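To make the {\sc csc} description concrete, the sketch below computes the length of the {\sc rsr} candidate with unit turning radius, for endpoints encoded as (position, heading) triples. This is a standard elementary computation included as an illustration; the function name and the encoding are ours.

```python
import math

def rsr_length(start, goal):
    """Length of the RSR Dubins candidate with unit turning radius.
    start and goal are (x, y, heading) with heading in radians."""
    x0, y0, t0 = start
    x1, y1, t1 = goal
    # centers of the right-hand unit circles: 90 degrees clockwise from the heading
    c0 = (x0 + math.sin(t0), y0 - math.cos(t0))
    c1 = (x1 + math.sin(t1), y1 - math.cos(t1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    p = math.hypot(dx, dy)              # length of the straight segment
    if p < 1e-12:
        return None                     # degenerate: coincident right circles
    phi = math.atan2(dy, dx)            # heading along the straight segment
    arc0 = (t0 - phi) % (2 * math.pi)   # right turns decrease the heading
    arc1 = (phi - t1) % (2 * math.pi)
    return arc0 + p + arc1
```

For instance, collinear endpoints with equal headings give the Euclidean distance, since both arcs degenerate.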
In the following paragraphs, we illustrate through examples the richness of the theory of bounded curvature paths. Its features come from the constraints these curves satisfy. These constraints lead to interesting interactions between metric geometry and computational mathematics.
\begin{example} \label{ex:1}\hfill
\begin{enumerate}
\item Recall that length minimizers are considered for establishing distance between points in a manifold. This approach is not suitable when considering bounded curvature paths, since in many cases, the length variation between length minimizers of arbitrarily close endpoints or directions is discontinuous.
Consider $(x,X),(y,Y_\theta) \in T \mathbb R^2$, $\theta \in \mathbb R$, with $\kappa=1$:
\begin{itemize}
\item $x=(0,0)$; $X=e^{2\pi i}\in T_x\mathbb R^2$.
\item $y=(1,1)$; $\mbox{\it Y}_{\theta}=e^{\theta i}\in T_y\mathbb R^2$.
\end{itemize}
\noindent Discontinuities for the length of the length minimizers happen when perturbing around $\mbox{\it Y}_{\frac{\pi}{2}}$, see Fig.~\ref{figdiscde}. The sudden jumps in length suggest the existence of isolated points, see Theorem 3.9 in \cite{paperc}. In fact, the path in Fig. \ref{figdiscde} left is an isolated point in the space of bounded curvature paths from $(x,X)$ to $(y,Y_{\frac{\pi}{2}})$.
The path in Fig.~\ref{figdiscde} right illustrates a discontinuity after perturbing the final location to $y'=(1-\epsilon,1-\epsilon)$ for $\epsilon>0$ small. Note that length minimizers may not be embedded paths.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{figdiscde4}
\caption{Examples of length minimizers in their respective path space. The three paths at the right are the result of small perturbations to the final position or direction of $(y,Y_{\frac{\pi}{2}})$, for $\epsilon>0$. After applying Dubins' characterization for the length minimizers \cite{dubins 1} a simple numerical experiment shows the existence of length discontinuities.}
\label{figdiscde}
\end{figure}
\item Spaces of bounded curvature paths have several local minima of length, see Fig. \ref{sixdubb1}.
Consider $\mbox{\sc x}=(x,X),\mbox{\sc y}=(y,Y)\in T\mathbb R^2$ with $\kappa=1$:
\begin{itemize}
\item $x=(0,0)$; $X=e^{2\pi i}\in T_x\mathbb R^2$.
\item $y=(-2,1)$; $Y=e^{-\frac{\pi}{4} i}\in T_y\mathbb R^2$.
\end{itemize}
By recursively applying the methods in \cite{papera} for obtaining the {\sc csc}-{\sc ccc} characterization of the length minimizers \cite{paperb, papera, buasanei1, dubins 1, johnson}, we obtain all the local minima of length.
By Proposition 4.4 in \cite{paperd} the paths $\gamma_0$ and $\gamma_5$ are homotopic without violating the curvature bound throughout the deformation. The same applies for $\gamma_1$ and $\gamma_4$. By Proposition 4.3 in \cite{paperd} the paths $\gamma_2$ and $\gamma_3$ lie in the same homotopy class of bounded curvature paths.
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth,angle=0]{sixdubbbyn2}
\caption{Spaces of bounded curvature paths have several local minima of length. Paths with matching colors are homotopic without violating the curvature bound throughout the deformation.}
\label{sixdubb1}
\end{figure}
\item Another interesting feature is that the symmetry property satisfied by metrics is in general violated. For example, the length minimizer from $(x,X)$ to $(y,\mbox{\it Y}_{\frac{\pi}{2}})$ has length $\frac{\pi }{4}$, see Fig.~\ref{figdiscde} left. On the other hand, the length minimizer from $(y,\mbox{\it Y}_{\frac{\pi}{2}})$ to $(x,X)$ has length $\frac{3\pi }{4}$, see Theorem 4.6 in \cite{paperb}.
\item The classification of the homotopy classes of bounded curvature paths was obtained in \cite{paperd}. A crucial step was to prove that for certain $(x,X),(y,Y)\in T{\mathbb R}^2$ there exists a bounded region $\Omega\subset \mathbb R^2$ that ``traps'' embedded bounded curvature paths. That is, no embedded bounded curvature path whose image is in $\Omega$ can be deformed (while preserving the curvature bound throughout the deformation) to a path having a point not in $\Omega$, see Definition 4.1 in \cite{paperc}. In \cite{paperd} we proved that these ``trapped regions'' are the domain of elements in isotopy classes of bounded curvature paths, we refer to these as {\bf bounded isotopy classes}. Discontinuities may also occur in the formation of trapped regions. These ideas will be discussed in subsection \ref{construct}.
Consider $(x,X),(y,Y)\in T\mathbb R^2$ with $\kappa=1$. For $\epsilon>0$ small, the spaces of bounded curvature paths satisfying:
\begin{itemize}
\item $x=(0,0)$; $X=e^{2\pi i}\in T_x\mathbb R^2$
\item $y=(1+\epsilon,1+\epsilon)$; $Y=e^{\frac{\pi}{2}i}\in T_y\mathbb R^2$
\end{itemize}
\noindent have an associated region that ``traps'' embedded bounded curvature paths, see Fig. \ref{motiv} right. More generally, spaces satisfying the previous two conditions admit a bounded isotopy class of bounded curvature paths, see Theorem 5.4 in \cite{paperd}. If $\epsilon=0$, the space satisfying the previous two conditions admits an isolated point, see Fig. \ref{figdiscde} left.
\item Consider $(x,X),(y,Y)\in T \mathbb R^2$ with $\kappa=1$:
\begin{itemize}
\item $x=(0,0)$; $X=e^{2\pi i}\in T_x\mathbb R^2$.
\item $y=(4-\epsilon,0)$; $0<\epsilon<4$; $Y=e^{2\pi i}\in T_y\mathbb R^2$.
\end{itemize}
Intimately related to the previous observation is the fact that the path $\gamma$ shown in Fig. \ref{motiv} (a non-embedded path) is not homotopic to the line segment connecting $(x,X)$ to $(y,Y)$ while preserving the curvature bound throughout the deformation, see Corollary 7.13 in \cite{paperc}. In contrast, if $\epsilon\leq 0$, then these two paths are homotopic without violating the curvature bound \cite{paperd}. This fact is related to the existence of trapped regions \cite{paperc}. If a bound on curvature is not under consideration, then $\gamma$ and the line segment connecting $(x,X)$ to $(y,Y)$ are regularly homotopic, see Fig. \ref{motiv}.
\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth,angle=0]{motivfinal}
\caption{Right: A (zoomed out) trapped region $\Omega\subset \mathbb R^2$ obtained after perturbing the final position of a bounded curvature path that is an isolated point, see Fig. \ref{figdiscde} left. Suddenly, the topology of the path space changes from a space of paths admitting an isolated point into a space of paths admitting a bounded isotopy class with non-empty interior. The elements of this isotopy class are defined only in $\Omega$. Left: An illustration of (5) in Example \ref{ex:1}. If $d(x,y)<4$, then $\gamma$ is not homotopic, while preserving the curvature bound throughout the deformation, to the line segment (the length minimizer) from $(x,X)$ to $(y,Y)$.}
\label{motiv}
\end{figure}
\end{enumerate}
\end{example}
\subsection{Spaces of bounded curvature paths}
\begin{definition} \label{admsp} Given $\mbox{\sc x,y}\in T{\mathbb R}^2$. The space of bounded curvature paths from {\sc x} to {\sc y} is denoted by $\Gamma(\mbox{\sc x,y})$.
\end{definition}
In this note we consider $\Gamma(\mbox{\sc x,y})$ with the topology induced by the $C^1$ metric. It is important to note that properties (among many others) such as types of connected components, or the number of local (global) minima in $\Gamma(\mbox{\sc x,y})$ depend on the endpoints in $T{\mathbb R}^2$ under consideration.
Next, we make use of the fibre bundle structure of $T\mathbb R^2$ to describe families of spaces of bounded curvature paths.
\begin{definition}\label{famspa} Choose $\mbox{\sc x}\in T{\mathbb R}^2$ and $y \in \mathbb R^2$. Consider the family of pairs $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$ with $\mbox{\sc y}_\theta=(y,Y_\theta)$; $ Y_\theta=e^{\theta i}\in T_y\mathbb R^2$, $\theta \in \mathbb R$. The one-parameter family of spaces of bounded curvature paths starting at $\mbox{\sc x}$ and finishing at $\mbox{\sc y}_\theta$ is called a {\it fiber} and is denoted by $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{definition}
Whenever we write: $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$, $\theta \in \mathbb R$, we mean a family of pairs of endpoints so that: $\mbox{\sc x}\in T\mathbb R^2$ and $y\in \mathbb R^2$ are arbitrary but fixed while $\theta$ varies in the reals. Note that a space $\Gamma(\mbox{\sc x,y})$ is a representative of a family of spaces parametrized in the reals.
\begin{definition}\label{gammaparameter} Given $\mbox{\sc x}\in T{\mathbb R}^2$ we define:
$$\Gamma=\bigcup_{\substack{{y \in \mathbb R^2}\\ \theta \in \mathbb R}}\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta).$$
\end{definition}
In this note we develop a method for parametrizing the fibers in $\Gamma$ in terms of the types of connected components in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$, $\theta\in \mathbb R$.
When a path is continuously deformed under parameter $p$ we reparametrize each of the deformed paths by its arc-length. In this fashion, $\gamma: [0,s_p]\rightarrow {\mathbb R}^2$ represents a deformed path at parameter $p$, with $s_p$ corresponding to its arc-length.
\begin{definition} \label{hom_adm} Given $\gamma,\eta \in \Gamma(\mbox{\sc x,y})$, a {\it bounded curvature homotopy} between $\gamma: [0,s_0] \rightarrow {\mathbb R^2}$ and $\eta: [0,s_1] \rightarrow {\mathbb R^2}$ is a continuous one-parameter family of immersed paths ${\mathcal H}_t(p)$, $p \in [0,1]$, such that:
\begin{itemize}
\item ${\mathcal H}_t(p): [0,s_p] \rightarrow {\mathbb R}^2$, $t\in [0,s_p]$, is an element of $\Gamma(\mbox{\sc x,y})$ for all $p\in [0,1]$;
\item ${\mathcal H}_t(0)=\gamma(t)$ for $t\in [0,s_0]$, and ${\mathcal H}_t(1)=\eta(t)$ for $t\in [0,s_1]$.
\end{itemize}
\end{definition}
A bounded-curvature isotopy is a continuous one-parameter family of embedded bounded curvature paths. Two paths in $\Gamma(\mbox{\sc x,y})$ are {\it bounded-homotopic (bounded-isotopic)} if there exists a bounded curvature homotopy (isotopy) from one to the other. A {\it homotopy (isotopy) class} is a maximal path connected set in $\Gamma(\mbox{\sc x,y})$.
\begin{definition} \label{def:boundediso} Let $\Delta(\mbox{\sc x,y})$ be a non-empty bounded isotopy class of paths in $\Gamma(\mbox{\sc x,y})$. Let $\mathcal B$ denote the set of pairs $\mbox{\sc x,y} \in T\mathbb R^2$ for which $\Gamma(\mbox{\sc x,y})$ possesses such a bounded isotopy class.
\end{definition}
In \cite{paperc} we proved that $\mathcal B\neq \emptyset$ by establishing the existence of non-empty bounded isotopy classes. Whenever we refer to $\Delta(\mbox{\sc x,y})$ we imply that $\Delta(\mbox{\sc x,y})$ is non-empty. In this note, we give necessary and sufficient conditions for $\mbox{\sc x},\mbox{\sc y} \in T\mathbb R^2$ to be an element of $\mathcal B$. As a consequence, we answer a question raised by Dubins on p. 480 of \cite{dubins 2}. We establish that $\mathcal B$ is a bounded subset of $T\mathbb R^2$ that is neither open nor closed, see Theorem \ref{noopnoclofib} and Corollary \ref{noopnoclo}.
\subsection{Proximity of endpoints}\label{proxcon}
Here we analyze the configurations of distinguished pairs of circles in $\mathbb R^2$. This approach permits us to reduce the configurations of endpoints in $T\mathbb R^2$ into a finite number of cases up to isometries.
Consider $\mbox{\sc x}\in T\mathbb R^2$. Let $\mbox{\sc C}_ l(\mbox{\sc x})$ be the unit radius circle tangent to $x$ and to the left of $X$. The meaning of $\mbox{\sc C}_ r(\mbox{\sc x})$, $\mbox{\sc C}_ l(\mbox{\sc y})$ and $\mbox{\sc C}_ r(\mbox{\sc y})$ should be obvious. These circles are called {\it adjacent circles}. Denote the centers of the adjacent circles with lowercase letters. So, the center of $\mbox{\sc C}_ l(\mbox{\sc x})$ is $c_l(\mbox{\sc x})$, see Fig. \ref{fig:Cr} right. The other cases are analogous.
We concentrate on the following configurations for the adjacent circles.
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))\geq 4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))\geq4 \label{con_a}\tag{i}\end{equation}
\vspace{-1.5em}
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))< 4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))\geq 4 \label{con_b}\tag{ii} \end{equation}
\vspace{-1.5em}
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))\geq4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))< 4 \label{con_b'}\tag{iii} \end{equation}
\vspace{-1.5em}
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))< 4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))< 4 \label{con_c}\tag{iv}
\end{equation}
The conditions (i)-(iv) have been used in different contexts throughout \cite{paperb, papera, paperc, paperd}. They give information about the topology and geometry of $\Gamma(\mbox{\sc x,y})$. Note that, as planar configurations, (ii) and (iii) are equivalent up to isometries.
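For concreteness, conditions (i)-(iv) can be decided directly from the endpoints: with $\kappa=1$ and endpoints encoded as (position, heading) triples, the centers of the adjacent circles are obtained by rotating the heading by $\pm\pi/2$. The helper below is our illustration, not code from the literature.

```python
import math

def adjacent_centers(e):
    """Centers of the left/right unit circles adjacent to (x, y, heading)."""
    x, y, t = e
    left = (x - math.sin(t), y + math.cos(t))    # 90 degrees counterclockwise
    right = (x + math.sin(t), y - math.cos(t))   # 90 degrees clockwise
    return left, right

def proximity_case(ex, ey):
    """Return which of the configurations (i)-(iv) holds for endpoints ex, ey."""
    (lx, rx), (ly, ry) = adjacent_centers(ex), adjacent_centers(ey)
    dl = math.dist(lx, ly)
    dr = math.dist(rx, ry)
    if dl >= 4 and dr >= 4:
        return "i"
    if dl < 4 and dr >= 4:
        return "ii"
    if dl >= 4 and dr < 4:
        return "iii"
    return "iv"
```

For instance, far-apart endpoints with equal headings fall in case (i), while the endpoints of Example \ref{ex:1}(4) fall in case (iv).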
\subsection{Trapped regions and bounded isotopy classes}\label{construct}
In Theorem 5.4 in \cite{paperd} we proved that for certain $\mbox{\sc x,y}\in T{\mathbb R}^2$, the associated space $\Gamma(\mbox{\sc x,y})$ admits a bounded isotopy class $\Delta(\mbox{\sc x,y})$. It turns out that paths in $\Delta(\mbox{\sc x,y})$ are defined exclusively in a bounded region $\Omega\subset \mathbb R^2$. The shape of $\Omega$ depends on the initial and final positions and directions in $T{\mathbb R}^2$, see Fig. \ref{regparam}.
For a precise explanation on how these regions $\Omega\subset \mathbb R^2$ are constructed, we strongly suggest the reader refer to Section 4 in \cite{paperc}. We call these regions {\bf trapped regions}.
It is important to note that:
\begin{itemize}
\item Embedded paths in $\Omega$ cannot be deformed without violating the curvature bound to a path with a self-intersection, see Corollary 7.13 in \cite{paperc}.
\item Embedded paths in $\Omega$ are not bounded-homotopic to paths having a point not in $\Omega$, see Theorem 8.1 in \cite{paperc}.
\item The existence of isolated points in spaces of bounded curvature paths was proved in Theorem 3.9 in \cite{paperc}. These correspond to arcs of a unit circle of length less than $\pi$, called {\bf {\sc c} isolated points}. Similarly, concatenations of two arcs of unit circles, each of length less than $\pi$, are called {\bf {\sc cc} isolated points}. Isolated points in $\Gamma(\mbox{\sc x,y})$ are bounded isotopy classes with empty interior, see Fig. \ref{figdiscde} left, and Fig. \ref{figgenpos} left. In addition, bounded curvature paths of length zero are also isolated points. This observation becomes interesting after recalling the concept of simple connectedness: closed bounded curvature paths are not bounded-homotopic to a single point.
\end{itemize}
\begin{remark}\label{rem:empty} Suppose that for $\mbox{\sc x,y}\in T{\mathbb R}^2$ we have that $\Gamma(\mbox{\sc x,y})$ does not admit a bounded isotopy class $\Delta(\mbox{\sc x,y})$. Then, embedded trapped paths cannot exist. We adopt the notation $\Delta(\mbox{\sc x,y})$, rather than $\Delta(\Omega)$ as we did in \cite{paperc, paperd}, since our emphasis now is on the endpoints rather than the regions $\Omega\subset \mathbb R^2$. We prefer to write $\Omega$ instead of $\Omega(\mbox{\sc x,y})$.
\end{remark}
The classification theorem for the homotopy classes in \cite{paperd} required the proximity conditions A, B, C, and D, see \cite{paperc,paperd}. Next, we redefine conditions C and D in terms of the existence of bounded isotopy classes, see Fig. \ref{figproxcondabcd}.
\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth,angle=0]{figproxcondabcd3}
\caption{Examples of bounded curvature paths in spaces satisfying conditions A, B, C and D.}
\label{figproxcondabcd}
\end{figure}
\begin{definition}\label{procon}
If $\mbox{\sc x,y}\in T{\mathbb R}^2$ satisfies:
\begin{itemize}
\item (i) then $\Gamma({\mbox{\sc x,y}})$ is said to satisfy proximity condition {\sc A}.
\item (ii) or (iii) then $\Gamma({\mbox{\sc x,y}})$ is said to satisfy proximity condition {\sc B}.
\item (iv) and there is no bounded isotopy class $\Delta({\mbox{\sc x,y}})$ then $\Gamma({\mbox{\sc x,y}})$ is said to satisfy proximity condition {\sc C}.
\item (iv) and there exists a bounded isotopy class $\Delta({\mbox{\sc x,y}})$ then $\Gamma({\mbox{\sc x,y}})$ is said to satisfy proximity condition {\sc D}.
\end{itemize}
\end{definition}
\noindent In Theorem \ref{maincensus1} we clarify for what $\mbox{\sc x,y}\in T\mathbb R^2$ we have that $\Delta(\mbox{\sc x,y})\subset \Gamma(\mbox{\sc x,y})$. To this end, we group spaces of bounded curvature paths in terms of the type of connected components that they have.
\section{An underlying discrete structure}\label{underlying} \label{Crl}
Next, we describe the coordinates of distinguished points in $\mathbb R^2$. The configurations of these points reveal interesting features of $\Gamma(\mbox{\sc x,y})$, $\mbox{\sc x,y}\in T\mathbb R^2$. In particular, these points completely characterize the regions $\Omega\subset \mathbb R^2$ whenever they exist.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{Cr11}
\caption{Right: Notation associated with $\Omega\subset \mathbb R^2$. Left: The angles involved when computing the coordinates of $p,q\in \mathbb R^2$.}
\label{fig:Cr}
\end{figure}
Consider unit radius circles $A$ and $B$ with centers $a=(a_1,a_2)$ and $b=(b_1,b_2)$ respectively. Consider a unit radius circle $C$ with center $c$ tangent to $A$ and $B$ at $p$ and $q$ respectively, see Figure \ref{fig:Cr} left. Also, set $\mbox{\sc x,y}\in T\mathbb R^2$ so that a {\sc ccc} path is obtained. Suppose the coordinates of $a$ and $b$ are known. Next we determine the coordinates of the points $p$ and $q$.
Consider the triangle whose vertices are $a$, $b$, and $c$. Denote by $\theta$ the smallest angle made by the line passing through $a$ and $b$ and the horizontal axis according to Remark \ref{coo}, see Figure \ref{fig:Cr} left. Here $\ell_1$ and $\ell_2$ are parallel to the horizontal axis. Denote by $\delta$ the smallest angle made by the line passing through $b$ and $c$ and the horizontal axis. It is easy to see that $d(a,c)=d(c,b)=2$.
After applying the law of cosines we immediately obtain that:
\begin{equation*}
\alpha =\arccos\bigg({\frac{\sqrt{(b_1-a_1)^2+(b_2-a_2)^2}}{4}}\bigg)
\end{equation*}
\begin{equation*}
\theta = \arctan\bigg({\frac{b_2-a_2}{b_1-a_1}}\bigg)
\end{equation*}
\begin{equation*}
\delta = \arctan\bigg({\frac{b_2-c_2}{b_1-c_1}}\bigg)
\end{equation*}
\begin{equation*}
\label{eq:cruv}
c=(a_1 + 2\cos(\alpha+\theta), a_2 + 2\sin(\alpha+\theta))
\end{equation*}
\vspace{0em}
\begin{equation}
\label{eq:i1}
p = (a_1 + \cos({\alpha+\theta}), a_2 + \sin({\alpha+\theta}))
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:i3}
q= (b_1 + \cos{\delta},b_2 + \sin{\delta})
\end{equation}
\vspace{0.1em}
By letting $A=\mbox{\sc C}_ r(\mbox{\sc x})$ and $B=\mbox{\sc C}_ r(\mbox{\sc y})$ we find explicit formulas for the points $p$ between $A$ and $C$ and $q$ between $C$ and $B$, see Fig. \ref{fig:Cr} left. Observe that the coordinates of $c_r(\mbox{\sc x})$ and $c_ r(\mbox{\sc y})$ are easily obtained since $\mbox{\sc x,y}\in T\mathbb R^2$ are given.
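The construction above is easy to check numerically. The following is a minimal Python sketch (not part of the construction itself): atan2 replaces arctan so the angles land in the correct quadrant, and $\delta$ is taken as the direction from $b$ toward $c$ so that $q$ is the tangency point between $C$ and $B$.

```python
import math

def ccc_tangency_points(a, b):
    """Given the centers a, b of the unit circles A and B, return the center c
    of a unit circle C tangent to A and B, together with the tangency points
    p (between A and C) and q (between C and B)."""
    a1, a2 = a
    b1, b2 = b
    d = math.hypot(b1 - a1, b2 - a2)          # distance between the centers
    alpha = math.acos(d / 4.0)                # law of cosines; d(a,c) = d(c,b) = 2
    theta = math.atan2(b2 - a2, b1 - a1)      # angle of the line through a and b
    c = (a1 + 2 * math.cos(alpha + theta), a2 + 2 * math.sin(alpha + theta))
    p = (a1 + math.cos(alpha + theta), a2 + math.sin(alpha + theta))
    delta = math.atan2(c[1] - b2, c[0] - b1)  # direction from b toward c
    q = (b1 + math.cos(delta), b2 + math.sin(delta))
    return c, p, q
```

For instance, with $a=(0,0)$ and $b=(2,0)$ the circle $C$ is centered at $(1,\sqrt3)$ and $p$, $q$ lie at unit distance from all the relevant centers, as tangency requires.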
Analogously, by letting $A=\mbox{\sc C}_ l(\mbox{\sc x})$ and $B=\mbox{\sc C}_ l(\mbox{\sc y})$, and by applying the same reasoning as before, we find formulas for the points $p'$ between $A$ and $C'$ and $q'$ between $C'$ and $B$ (see Fig. \ref{fig:Cr} right):
\begin{equation*}
\alpha' =\arccos\bigg({\frac{\sqrt{(b_1-a_1)^2+(b_2-a_2)^2}}{4}}\bigg)
\end{equation*}
\begin{equation*}
\theta' = \arctan\bigg({\frac{b_2-a_2}{b_1-a_1}}\bigg)
\end{equation*}
\begin{equation*}
\delta' = \arctan\bigg({\frac{b_2-c_2}{b_1-c_1}}\bigg)
\end{equation*}
\begin{equation*}
\label{eq:cluv}
c'=(a_1 + 2\cos(\alpha'+\theta'), a_2 + 2\sin(\alpha'+\theta'))
\end{equation*}
\vspace{0em}
\begin{equation}
\label{eq:i2}
p' = (a_1 + \cos({\alpha'+\theta'}), a_2 + \sin({\alpha'+\theta'}))
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:i4}
q' = (b_1 + \cos{\delta'},b_2 + \sin{\delta'})
\end{equation}
\begin{definition}\label{boundomega} Let:
\begin{itemize}
\item $w_1$ be the {\sc rlr} path consisting of an arc from $x$ to $p$ in $\mbox{\sc C}_ r(\mbox{\sc x})$; an arc from $p$ to $q$ in $C$; and an arc from $q$ to $y$ in $\mbox{\sc C}_ r(\mbox{\sc y})$, see equations (\ref{eq:i1}) and (\ref{eq:i3}).
\item $w_2$ be the {\sc lrl} path consisting of an arc from $x$ to $p'$ in $\mbox{\sc C}_ l(\mbox{\sc x})$; an arc from $p'$ to $q'$ in $C'$; and an arc from $q'$ to $y$ in $\mbox{\sc C}_ l(\mbox{\sc y})$, see equations (\ref{eq:i2}) and (\ref{eq:i4}).
\end{itemize}
\end{definition}
Next we make use of the formulae (\ref{eq:i1})-(\ref{eq:i4}) to characterize $\Omega\subset \mathbb R^2$ in terms of the coordinates of distinguished points.
\begin{definition}\label{omegadfn} Assume $\Gamma(\mbox{\sc x}, \mbox{\sc y})$ satisfies condition {\sc D}. Let $\Omega\subset \mathbb R^2$ be the bounded region whose boundary is given by the union of $w_1$ and $w_2$ in Definition \ref{boundomega}, see Fig. \ref{fig:Cr} right. In this case we say that $\mbox{\sc x,y}\in T\mathbb R^2$ {\it carries a region}.
\end{definition}
\section{Motivation through examples }\label{moduli}
In narrative terms, here we present facts in reverse-chronology. By considering this strategy, we are telling the reader ``the end of the story'' through various examples, with the intention to motivate the more technical steps and proofs.
We study the fibers in $\Gamma$ by fixing $\mbox{\sc x}=(x,X)\in T\mathbb R^2$ and a final position $y\in \mathbb R^2$ while varying the final direction $Y_{\theta}\in T_y\mathbb R^2$, $\theta \in \mathbb R$. In Section \ref{classrangesec} we construct a function, called the {\bf class range}, that assigns to each final position $y\in \mathbb R^2$ a non-negative real number. This number is called the class value and gives the range $\theta$ can vary so that the spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\theta})$ have the same types of connected components.
Firstly, we would like to point out that for a fixed $\theta\in \mathbb R$, the spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{k\theta})$ and $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{j\theta})$ may eventually be different for $j\neq k\in \mathbb Z$. Secondly, consider $\mbox{\sc x}, \mbox{\sc y}_{\theta}\in T\mathbb R^2$, so that $\theta=\pm \pi$. Since the initial and final tangent vectors are parallel having opposite sense, the pairs $\mbox{\sc x}, \mbox{\sc y}_{\pm \pi}\in T\mathbb R^2$ do not carry a region $\Omega\subset \mathbb R^2$. This is due to the existence of parallel tangents, see \cite{papere}.
\begin{definition}\label{defw}Consider $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$, $\theta\in (-\pi,\pi)$, so that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\theta})$ satisfy proximity condition {\sc D}. Let
\begin{itemize}
\item $\omega_- $ be the smallest value in $(-\pi,\pi)$ so that there exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item $\omega_+ $ be the greatest value in $(-\pi,\pi)$ so that there exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
An interval whose endpoints are $\omega_-$ and $\omega_+$ is denoted by $I(y)$. We refer to $\omega_- $ and $\omega_+$ as critical angles.
\end{definition}
\begin{remark}\hfill
\begin{itemize}
\item The critical angles $\omega_-$ and $\omega_+ $ depend on $y\in \mathbb R^2$ since in Remark \ref{coo} we established that $(x,X)\in T\mathbb R^2$ is fixed. Sometimes we write $\omega_-=\omega_-(y)$ and $\omega_+=\omega_+(y)$.
\item From the way we construct the class range function (see Definition \ref{classrange+}) the existence of $\omega_-$ and $\omega_+$ is guaranteed.
\end{itemize}
\end{remark}
The examples in \ref{paramex} and the illustrations in Fig. \ref{regparam} have been obtained computationally \cite{dubinsexplorer} by evaluating the class range function in Definition \ref{classrange+} via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}). Throughout this note, whenever we consider examples obtained computationally, angles will be measured in degrees. The following ideas will be formalized in Sections \ref{classrangesec} and \ref{classdomain}, compare Definition \ref{def:spaces}.
\begin{example}[\bf parametrizing fibers]\label{paramex}
In Fig. \ref{regparam} top, we show a sequence for the variation of $\mbox{\sc x}, \mbox{\sc y}_\theta\in T\mathbb R^2$ while $\theta$ ranges over $I(y) \subsetneq (-180^{\circ},180^{\circ})$ illustrating the following example.
Consider $x=(0,0)\in \mathbb R^2$, $X=(1,0)\in T_x\mathbb R^2$, $y=(2.82,0)\in\mathbb R^2$, $Y_{\theta}=e^{i\theta}\in T_y\mathbb R^2$. We determine that $I(y)=[-109.47^{\circ},109.47^{\circ}]$ and conclude that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ admits (from left to right and counterclockwise) spaces of bounded curvature paths being:
\begin{itemize}
\item An isolated point for $\theta=-109.47^{\circ}$, see Theorem 3.9 in \cite{paperc}.
\item There exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in (-109.47^{\circ},109.47^{\circ})$ i.e., a one-parameter family of bounded isotopy classes, see Theorem \ref{existvect}. Also see Theorem 8.1 in \cite{paperc}.
\item An isolated point for $\theta=109.47^{\circ}$.
\item If $\theta \notin [-109.47^{\circ},109.47^{\circ}]$ then there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
In this case we say that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a {\bf fiber of type I}.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{regparam}
\caption{The grey regions are examples of $\Omega\subset \mathbb R^2$. We illustrate the examples in Example \ref{paramex} computed and plotted according to Definition \ref{classrange+} via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}). The range where $\theta$ can vary is depicted in dark green.}
\label{regparam}
\end{figure}
In Fig. \ref{regparam} middle we show a sequence for the variation of $\mbox{\sc x}, \mbox{\sc y}_\theta\in T\mathbb R^2$ while $\theta$ ranges over $I(y) \subsetneq (-180^{\circ},180^{\circ})$ illustrating the following example.
Consider $x=(0,0)\in \mathbb R^2$, $X=(1,0)\in T_x\mathbb R^2$, $y=(2.5,-2)\in \mathbb R^2$, $Y_{\theta}=e^{i\theta}\in T_y\mathbb R^2$. We determine that $I(y)=[-48.36^{\circ},30.30^{\circ})$ and conclude that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ contains (from left to right and counterclockwise) spaces of bounded curvature paths such that:
\begin{itemize}
\item There exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in [-48.36^{\circ},30.30^{\circ})$ i.e., a one-parameter family of bounded isotopy classes.
\item An isolated point for $\theta =30.30^{\circ}$.
\item If $\theta \notin [-48.36^{\circ},30.30^{\circ})$ then there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
In this case we say that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a {\bf fiber of type II}.
In Fig. \ref{regparam} bottom we show a sequence for the variation of $\mbox{\sc x}, \mbox{\sc y}_\theta\in T\mathbb R^2$ while $\theta$ ranges over $I(y) \subsetneq (-180^{\circ},180^{\circ})$ illustrating the following example.
Consider $x=(0,0)\in \mathbb R^2$, $X=(1,0)\in T_x\mathbb R^2$, $y=(3,0.5)\in \mathbb R^2$ and $Y_{\theta}=e^{i\theta}\in T_y\mathbb R^2$. We determine that $I(y)=[-80.42^{\circ},60.55^{\circ}]$ and conclude that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ contains (from left to right and counterclockwise) spaces of bounded curvature paths being:
\begin{itemize}
\item There exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in [-80.42^{\circ},60.55^{\circ}]$ i.e., a one-parameter family of bounded isotopy classes.
\item If $\theta \notin [-80.42^{\circ},60.55^{\circ}]$ then there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
In this case we say that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a {\bf fiber of type III}.
\end{example}
There are two more types of fibers; these will be discussed in Definition \ref{def:spaces}.
In Section \ref{classdomain} we characterize a region $B\subset \mathbb R^2$ so that the class range is well defined. This plane region corresponds exactly to the location for the final positions $y\in \mathbb R^2$ so that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ admits a family of bounded isotopy classes of bounded curvature paths.
Next we explain two types of configurations that will be of relevance when determining the extreme values of the class range function.
\section{Topological transitions}\label{crit}
Consider a {\sc cc} isolated point as shown in Fig. \ref{figgenpos} left. After a small clockwise continuous perturbation on the final direction (while fixing the final position) the resultant endpoints define a space that does not admit an isolated point, see Fig. \ref{figgenpos} middle and right. This is true since the paths at middle and right (the length minimisers in their respective space) are parallel homotopic to paths of arbitrary length due to the existence of parallel tangents, see Corollary 3.4 and Proposition 3.8 in \cite{papere}. By Corollary 7.13 in \cite{paperc} these paths are not elements in bounded isotopy classes.
It is fairly easy to see that a small counterclockwise perturbation on the final direction of the {\sc cc} isolated point in Fig. \ref{figgenpos} leads to spaces admitting a bounded isotopy class.
\begin{itemize}
\item[(1)] For certain fibers, the {\sc cc} isolated points are transitions between spaces with different types of connected components.
\end{itemize}
The previous observations say implicitly that for certain fibers the critical values $\omega_-$ and $\omega_+$ are achieved at spaces admitting {\sc cc} isolated points.
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth,angle=0]{figgenpos}
\caption{Two types of discontinuities. When varying the final vector of an isolated point (at the left) we obtain a length discontinuity. The third path shows that length discontinuities not only happen when perturbing directions of isolated points. Here $x=(0,0)\in \mathbb R^2$, $X=(1,0)\in T_x\mathbb R^2$, $y=(2,2)\in \mathbb R^2$, and $e^{i \theta }=Y_\theta \in T_y\mathbb R^2$ with $\theta_0=0^{\circ}$, $\theta_1=-6^{\circ}$, $\theta_2=-12^{\circ}$.}
\label{figgenpos}
\end{figure}
Recall that a necessary condition for the existence of a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y})$ is that $\mbox{\sc x,y}\in T\mathbb R^2$ satisfy:
$$d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))< 4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))< 4.$$
This is easy to see since: if $d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))\geq 4$, then a unit disk can be placed in the line joining $c_l(\mbox{\sc x})$ to $c_l(\mbox{\sc y})$ without overlapping with $C_l(\mbox{\sc x})$ or $C_l(\mbox{\sc y})$. This implies that bounded curvature paths may escape $\Omega\subset \mathbb R^2$ (after applying an operation of type {\sc II} in \cite{paperd} to the length minimiser in $\Omega$) contradicting Theorem 8.1 in \cite{paperc}. For details we recommend the reader refer to Section 4 in \cite{paperc}. The same applies for $d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))\geq 4$.
\begin{itemize}
\item[(2)] For certain fibers, the condition
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))=4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))=4 \label{con_t}\end{equation}
is considered a transition between spaces with different types of connected components.
\end{itemize}
We proved in Theorem 5.3 in \cite{paperd} that spaces $\Gamma(\mbox{\sc x},\mbox{\sc y})$ satisfying proximity condition A, that is:
\begin{equation*} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))\geq4 \quad \mbox{or}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))\geq 4 \label{con_g}\end{equation*}
do not admit isotopy classes. Therefore, $\Omega=\emptyset$.
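The necessary condition above is mechanical to test from the endpoints. A sketch in Python, assuming the usual convention for the centres of the adjacent circles (rotate the unit tangent by $+90^{\circ}$ for the left circle, $-90^{\circ}$ for the right one, and step one radius from the base point):

```python
import math

def adjacent_centers(x, X):
    """Centers c_l and c_r of the left and right adjacent unit circles at
    (x, X) in TR^2, under the stated (assumed) convention."""
    (x1, x2), (X1, X2) = x, X
    return (x1 - X2, x2 + X1), (x1 + X2, x2 - X1)   # (c_l, c_r)

def may_carry_bounded_class(x, X, y, Y):
    """Necessary condition for a bounded isotopy class Delta(x, y):
    d(c_l(x), c_l(y)) < 4  and  d(c_r(x), c_r(y)) < 4."""
    clx, crx = adjacent_centers(x, X)
    cly, cry = adjacent_centers(y, Y)
    return math.dist(clx, cly) < 4 and math.dist(crx, cry) < 4
```

For the endpoints of the first example in Section \ref{moduli} ($x=(0,0)$, $X=Y=(1,0)$, $y=(2.82,0)$) both distances equal $2.82<4$ and the condition holds, consistent with the existence of a bounded isotopy class there.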
\section{Angular formulae}\label{trans}
We describe two types of auxiliary triangles that allow us to obtain (via continuous variations of their angles) the values of the class range function. These triangles are constructed out of information obtained from the given endpoints in $T\mathbb R^2$. We establish a correlation between the angle variation in these auxiliary triangles and the types of connected components in $\Gamma(\mbox{\sc x}, \mbox {\sc y}_\theta)$, $\mbox{\sc x},\mbox{\sc y}_\theta\in T\mathbb R^2$, for each $\theta \in (-\pi,\pi)$.
Next we consider fibers whose critical values $\omega_-$ and $\omega_+$ are achieved at spaces admitting {\sc cc} isolated points as discussed in (1) in Section \ref{crit}.
\subsection{Short triangles}\label{short} Suppose that for some $\theta\in (-\pi,\pi)$ the adjacent circles $\mbox{\sc C}_l(\mbox{\sc x})$ and $\mbox{\sc C}_r(\mbox{\sc y}_{\theta})$ (or $\mbox{\sc C}_r(\mbox{\sc x})$ and $\mbox{\sc C}_l(\mbox{\sc y}_{\theta})$) intersect at a single point. Then $\theta=\omega_-$ or $\theta=\omega_+$ respectively, see Fig. \ref{shortriang1B}. For $\theta=\omega_-$, construct a triangle whose vertices are $c_l({\mbox{\sc x}}), c_r(\mbox{\sc y}_{\omega_-})$ and $y$. It is immediate that $d(c_l({\mbox{\sc x}}), c_r(\mbox{\sc y}_{\omega_-}))=2$ and that $d(c_r(\mbox{\sc y}_{\omega_-}),y)=1$. For $\theta=\omega_+$, construct the triangle whose vertices are $c_r({\mbox{\sc x}}), c_l(\mbox{\sc y}_{\omega_+})$ and $y$, see Fig. \ref{shortriang2}. It is immediate that $d(c_r({\mbox{\sc x}}), c_l(\mbox{\sc y}_{\omega_+}))=2$ and that $d(c_l(\mbox{\sc y}_{\omega_+}),y)=1$.
The obvious observation that a triangle with sides of length $1$ and $2$ cannot have a third side of length greater than $3$ leads us to analyze the transitions in (2) in Section \ref{crit}.
\subsection{Long triangles}\label{long} Consider $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$, $\theta \in (-\pi,\pi)$ so that the adjacent circles $\mbox{\sc C}_l(\mbox{\sc x})$ and $\mbox{\sc C}_r(\mbox{\sc y}_{{\theta}})$ do not intersect. We construct the triangle whose vertices are $c_l({\mbox{\sc x}}), c_l(\mbox{\sc y}_\theta)$ and $y$, see Fig. \ref{fig:Triangle3}. Note that $d(c_l(\mbox{\sc y}_\theta),y)=1$, and $d(c_l({\mbox{\sc x}}), y)>3$, see Fig. \ref{fig:Triangle3}.
In case the adjacent circles $\mbox{\sc C}_r(\mbox{\sc x})$ and $\mbox{\sc C}_l(\mbox{\sc y}_\theta)$ do not intersect, we construct the triangle whose vertices are $c_r({\mbox{\sc x}}), c_r(\mbox{\sc y}_\theta)$ and $y$, see Fig. \ref{fig:Triangle4}. Note that $d(c_r(\mbox{\sc y}_\theta),y)=1$, and $d(c_r({\mbox{\sc x}}), y)>3$.
Next, we look closely at short and long triangles. Their sides are denoted by capital letters while the lengths of their sides are denoted in lowercase, i.e., the side $S_i$ has length $s_i$.
We keep a certain degree of detail for {short triangles} and subsequently reduce the details in the discussions for {long triangles} assuming the analogy in ideas and notation with short triangles.
\subsection{Angular formulae for short triangles} \label{angformshort}
To avoid confusion, the abscissa and ordinate in the coordinate system in Remark \ref{coo} are denoted by $u$-axis and $v$-axis respectively.
Using the notation from Fig. \ref{shortriang1B} left, consider the triangle whose vertices are $c_l({\mbox{\sc x}}), c_r(\mbox{\sc y})$ and $y$. It is easy to see that this triangle has sides of length $a_1=2$, $b_1=1$, and $c_1=d(c_l(\mbox{\sc x}),y)$. In addition, since the endpoints are given, we can easily obtain the coordinates of the vertices of the triangle under consideration. By the law of cosines we can obtain the angles $\alpha_1$, $\beta_1$, and $\gamma_1$.
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{shortriangle11}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.87\linewidth]{shortriang1BBBB}
\label{fig:sfig2}
\end{subfigure}
\caption{Notation for short triangles. Right: The angular formulae apply whether $\delta_1$ and $\omega_{-}$ (or $\omega_{+}$) have the same or opposite signs. Here $\ell$ is a line parallel to the horizontal axis.}
\label{shortriang1B}
\end{figure}
Let $\delta_{1}$ be the smallest angle made by the line joining $c_l(\mbox{\sc x})$ to $y$ and the $u$-axis. The angle $\delta_1=\arctan\big(\frac{v-1}{u}\big)$ is easy to obtain since $y=(u,v)\in \mathbb R^2$ is given.
\begin{remark}(Ruling out indeterminacies).\label{indet} Note that the standard arctan function allows us to compute angles in $(-\frac{\pi}{2}, \frac{\pi}{2})$. Since our computations involve angles in $(-\pi,\pi)$ we make use of the arctan2 function, which allows us to calculate the arctangent in all four quadrants, see equations (\ref{eq:delta_a}) and (\ref{eq:delta_xa}).
\end{remark}
Depending on the final position, the angles $\delta_1$ and $\omega_-$ may have the same or different sign. In Fig. \ref{shortriang1B} left we illustrate the case when $\delta_1<0$ and $\omega_-<0$. In Fig. \ref{shortriang1B} right we illustrate the case where $\delta_1<0$ and $\omega_->0$. Since $\alpha_{1}$ and $\alpha'_{1}$ are supplementary we have that $\alpha'_{1} = \pi - \alpha_{1}$.
From the previous analysis we obtain that $\omega_- = \delta_{1}-\alpha'_{1}+\frac{\pi}{2}$ or equivalently:
\begin{equation}
\label{eq:thetar_b}
\omega_- = \delta_{1}+\alpha_{1}-\frac{\pi}{2}.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.57\textwidth,angle=0]{shortriang33}
\caption{A critical configuration for short triangles. Note that $\delta_2>0$ and $\omega_+>0$.}
\label{shortriang2}
\end{figure}
Now we obtain a formula for $\omega_+$, see Fig. \ref{shortriang2}. Since $\alpha_{2}$ and $\alpha'_{2}$ are supplementary we have that $\alpha'_{2} = \pi - \alpha_{2}$.
We obtain that $\omega_+ = \delta_{2}+\alpha'_{2}-\frac{\pi}{2}$ or equivalently:
\begin{equation}
\label{eq:thetal_a}
\omega_{+} = \delta_{2}-\alpha_{2}+\frac{\pi}{2}.
\end{equation}
Here $\delta_{2}$ is the smaller angle made by the $u$-axis and the line joining $c_r(\mbox{\sc x})$ to $y$. It is easy to obtain the length of the side $C_2$.
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth,angle=0]{longtriang1111}
\caption{Notation for long triangles.}
\label{fig:Triangle3}
\end{figure}
\subsection{Angular formulae for long triangles}
Using the notation from Fig. \ref{fig:Triangle3} we present the following formulae:
\begin{equation}
\label{eq:thetar_c}
\omega_{-} = \delta_{3}-\alpha_{3}+\frac{\pi}{2}.
\end{equation}
Similarly we obtain,
\begin{equation}
\label{eq:thetal_b}
\omega_{+} = \delta_{4}+\alpha_{4}-\frac{\pi}{2}.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth,angle=0]{longtriang22}
\caption{Notation for long triangles.}
\label{fig:Triangle4}
\end{figure}
We put together equations (\ref{eq:thetar_b})-(\ref{eq:thetal_b}) to give explicit formulae for $\omega_-$ and $\omega_+$.
\begin{equation}
\label{eq:wmin}
\omega_{-} =
\begin{cases}
\delta_{1} + \alpha_{1} - \frac{\pi}{2} & \text{if } \,\,\,d(c_l(\mbox{\sc x}),y) < 3 \\
\delta_{3} - \alpha_{3} + \frac{\pi}{2} & \text{if } \,\,\,d(c_l(\mbox{\sc x}),y) \geq 3
\end{cases}
\end{equation}
\begin{equation}
\label{eq:wmax}
\omega_{+} =
\begin{cases}
\delta_{2} - \alpha_{2} + \frac{\pi}{2} & \text{if } \,\,\,d(c_r(\mbox{\sc x}),y) < 3 \\
\delta_{4} + \alpha_{4} - \frac{\pi}{2} & \text{if } \,\,\,d(c_r(\mbox{\sc x}),y) \geq 3
\end{cases}
\end{equation}
\section{The class range} \label{classrangesec}
Choose $(x,X), (y,Y) \in T{\mathbb R}^2$ so that the origin is identified with $x$ as in Remark \ref{coo}. We consider the formulae (\ref{eq:wmin}) and (\ref{eq:wmax}) as starting point for obtaining the class range function. The class value gives the range that $\theta$ can continuously vary so that the spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\theta})$ have the same types of connected components. The class value corresponds to the length of the maximal subinterval $I(y)\subset (-\pi,\pi)$ so that there is a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$, $\theta \in I(y)$.
Next we express the angles $\delta_i$ and $\alpha_i$ in (\ref{eq:wmin}) and (\ref{eq:wmax}) in terms of a generic $y=(u,v)\in \mathbb R^2$. Observe that $\delta_1=\delta_3$ since both are the acute angles made by the line joining $c_l(\mbox{\sc x})$ and $y$ with the $u$-axis. In addition, $\delta_2=\delta_4$ since both are the acute angles made by the line joining $c_r(\mbox{\sc x})$ and $y$ with the $u$-axis. Note that $\tan (\delta_1)=\frac{v-1}{u}$, and $\tan (\delta_2)=\frac{v+1}{u}$.
We use the arctan2 function in (\ref{eq:delta_a}) and (\ref{eq:delta_xa}) instead of the standard arctan function to determine the angles $\delta_1$ and $\delta_2$, see Remark \ref{indet}. We obtain the following formulae:
\begin{equation}
\label{eq:delta_a}
\delta_{1}(u,v) =
\begin{cases}
\arctan \frac{v-1}{u} & \text{if } u > 0 \\
\arctan \frac{v-1}{u} + \pi & \text{if } u < 0 \,\, \text{and}\,\, v \geq 1 \\
\arctan \frac{v-1}{u} - \pi & \text{if } u < 0 \,\, \text{and}\,\, v<1\\
\frac{\pi}{2} & \text{if } u = 0 \,\, \text{and}\,\, v>1\\
-\frac{\pi}{2} & \text{if } u = 0 \,\, \text{and}\,\, v<1\\
\end{cases}
\end{equation}
\begin{equation}
\label{eq:delta_xa}
\delta_{2}(u,v) =
\begin{cases}
\arctan \frac{v+1}{u} & \text{if } u > 0 \\
\arctan \frac{v+1}{u} + \pi & \text{if } u < 0 \,\, \text{and}\,\, v \geq -1 \\
\arctan \frac{v+1}{u} - \pi & \text{if } u < 0 \,\, \text{and}\,\, v<-1\\
\frac{\pi}{2} & \text{if } u = 0 \,\, \text{and}\,\, v>-1\\
-\frac{\pi}{2} & \text{if } u = 0 \,\, \text{and}\,\, v<-1\\
\end{cases}
\end{equation}
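Each case analysis of this kind is precisely the two-argument arctangent. A quick numerical sanity check in Python (the helper \texttt{delta\_piecewise} is ours, with $s=\pm1$ the ordinate of the relevant adjacent centre):

```python
import math

def delta_piecewise(u, v, s):
    """The case analysis of the text for the angle of the line joining the
    centre (0, s) to (u, v): arctan corrected by +/- pi when u < 0, and
    +/- pi/2 on the vertical axis."""
    w = v - s
    if u > 0:
        return math.atan(w / u)
    if u < 0:
        return math.atan(w / u) + math.pi if w >= 0 else math.atan(w / u) - math.pi
    return math.pi / 2 if w > 0 else -math.pi / 2

# Every branch above agrees with the two-argument arctangent atan2(v - s, u):
for u in (-2.0, -0.5, 0.0, 0.5, 2.0):
    for v in (-3.0, -1.0, 0.0, 1.5, 3.0):
        for s in (-1.0, 1.0):
            if u == 0 and v == s:
                continue  # the centre itself; excluded in the text
            assert abs(delta_piecewise(u, v, s) - math.atan2(v - s, u)) < 1e-12
```

This is exactly why the arctan2 function of Remark \ref{indet} removes the quadrant ambiguity of the standard arctan.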
Now we determine the angles $\alpha_{i}$ in (\ref{eq:wmin}) and (\ref{eq:wmax}). To this end, we apply the law of cosines. Here we are not considering degenerate triangles, so the following formulae are never undetermined.
\begin{equation*}
\alpha_{i} = \arccos\bigg(\frac{b_i^2+c_i^2-a_i^2}{2b_ic_i}\bigg)
\end{equation*}
Since the initial and final positions are given, the coordinates of the centres of the adjacent circles are easily obtained. In consequence, the lengths of the sides of the short and long triangles are easily obtained.
We can express the angles $\alpha_{i}$ as a function of $y=(u,v)\in \mathbb R^2$. That is:
\begin{equation}
\label{eq:alpha_1}
\alpha_{1}(u,v) = \arccos\bigg(\frac{u^2+(1-v)^2-3}{2\sqrt{u^2+(1-v)^2}}\bigg)
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:alpha_2}
\alpha_{2}(u,v) = \arccos\bigg(\frac{u^2+(-1-v)^2-3}{2\sqrt{u^2+(-1-v)^2}}\bigg)
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:alpha_3}
\alpha_{3}(u,v) = \arccos\bigg(\frac{u^2+(1-v)^2-15}{2\sqrt{u^2+(1-v)^2}}\bigg)
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:alpha_4}
\alpha_{4}(u,v) = \arccos\bigg(\frac{u^2+(-1-v)^2-15}{2\sqrt{u^2+(-1-v)^2}}\bigg)
\end{equation}
Note that we have expressed all the angles $\alpha_i$ and $\delta_i$ as functions of the variables $u$ and $v$. In addition, recall that in Definition \ref{defw} we considered the concept of critical angles $\omega_-$ and $\omega_+$. We abuse notation and define the functions $\omega_-:\mathbb R^2\to\mathbb R$ and $\omega_+:\mathbb R^2\to\mathbb R$. They have been constructed to match Definition \ref{defw}. These functions assign to each final position $y=(u,v)\in \mathbb R^2$ its respective critical angle $\omega_-(y)$ and $\omega_+(y)$.
We consider equations (\ref{eq:delta_a})-(\ref{eq:alpha_4}) according to equations (\ref{eq:wmin}) and (\ref{eq:wmax}) to obtain:
\begin{equation}
\label{eq:Wmin}
\omega_-(u,v) =
\begin{cases}
\begin{cases}
\arctan (\frac{v-1}{u}) + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } u > 0 \\
\begin{cases}
\arctan (\frac{v-1}{u}) + \pi + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } v \geq 1 \\
\arctan (\frac{v-1}{u}) - \pi + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } v < 1
\end{cases} & \text{if } u < 0 \\
\begin{cases}
\frac{\pi}{2} + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } v>1\\
-\frac{\pi}{2} + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } v<1\\
\end{cases} & \text{if } u = 0 \\
\end{cases} & \text{if } d(\mbox{\it c}_l(\mbox{\sc x}), y) < 3 \\
\begin{cases}
\arctan (\frac{v-1}{u}) - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } u > 0 \\
\begin{cases}
\arctan (\frac{v-1}{u}) + \pi - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } v \geq 1 \\
\arctan (\frac{v-1}{u}) - \pi - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } v < 1
\end{cases} & \text{if } u < 0 \\
\begin{cases}
\frac{\pi}{2} - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } v>1\\
-\frac{\pi}{2} - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } v<1\\
\end{cases} & \text{if } u = 0 \\
\end{cases} & \text{if } d(\mbox{\it c}_l(\mbox{\sc x}), y) \geq 3
\end{cases}
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:Wmax}
\omega_+(u,v) =
\begin{cases}
\begin{cases}
\arctan (\frac{v+1}{u}) - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } u > 0 \\
\begin{cases}
\arctan (\frac{v+1}{u}) + \pi - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } v \geq -1 \\
\arctan (\frac{v+1}{u}) - \pi - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } v < -1
\end{cases} & \text{if } u < 0\\
\begin{cases}
\frac{\pi}{2} - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } v>-1\\
-\frac{\pi}{2} - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } v<-1\\
\end{cases} & \text{if } u = 0 \\
\end{cases} & \text{if } d(\mbox{\it c}_r(\mbox{\sc x}), y) < 3 \\
\begin{cases}
\arctan (\frac{v+1}{u}) + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } u > 0 \\
\begin{cases}
\arctan (\frac{v+1}{u}) + \pi + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } v \geq -1\\
\arctan (\frac{v+1}{u}) - \pi + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } v < -1
\end{cases} & \text{if } u < 0\\
\begin{cases}
\frac{\pi}{2} + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } v>-1\\
-\frac{\pi}{2} + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } v<-1\\
\end{cases} & \text{if } u = 0 \\
\end{cases} & \text{if } d(\mbox{\it c}_r(\mbox{\sc x}), y) \geq 3
\end{cases}
\end{equation}
The expressions $\frac{v-1}{u}$ and $\frac{v+1}{u}$ are undetermined only at the centres of $C_l(\mbox{\sc x})$ and $C_r(\mbox{\sc x})$. By Corollary 3.4 in \cite{papere}, no path in a bounded isotopy class can pass through the points $(0,1)$ or $(0,-1)$. This is due to the existence of parallel tangents, contradicting Theorem 7.12 in \cite{paperc}.
\begin{definition} \label{classrange+} The class range function $\Theta: \mathbb R^2 \to \mathbb R$ is defined to be:
\label{eq:angularrange_1}
$$\Theta(y)= \omega_+(y) - \omega_-(y)\geq 0.$$
\end{definition}
It is important to note that the critical angles $\omega_-(y)$ and $\omega_+(y)$ are chosen so that $\omega_+(y)\geq\omega_-(y)$, see Definition \ref{defw}. In addition, note that $\omega_-(y)$ and $\omega_+(y)$ lie in the closure of the interval $I(y)$. Of course, if $\omega_+(y) - \omega_-(y)<0$ we have that $I(y)=\emptyset$, and so $\Omega=\emptyset$, and so there is no bounded isotopy class $\Delta(\mbox{\sc x,y})$. The case where $\Theta(y)=0$ is discussed below.
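The nested case analysis in (\ref{eq:Wmin}) and (\ref{eq:Wmax}) collapses considerably once the $\pm\pi$ corrections are absorbed into a two-argument arctangent. The following Python sketch assumes the convention $c_l(\mbox{\sc x})=(0,1)$, $c_r(\mbox{\sc x})=(0,-1)$ implicit in those formulas; note also that the final position $y=(2.82,0)$ of Example \ref{paramex} is the rounding of $y=(2\sqrt2,0)$, for which $\omega_{\mp}=\mp109.47^{\circ}$.

```python
import math

def omega_minus(u, v):
    """Sketch of (eq:Wmin): d = d(c_l(x), y) with c_l(x) = (0, 1); the
    short-triangle branch applies for d < 3, the long-triangle one otherwise.
    The sign cases of the printed formula are collapsed into atan2."""
    d = math.hypot(u, v - 1)
    if d < 3:
        return math.atan2(v - 1, u) + math.acos((d*d - 3) / (2*d)) - math.pi/2
    return math.atan2(v - 1, u) - math.acos((d*d - 15) / (2*d)) + math.pi/2

def omega_plus(u, v):
    """Sketch of (eq:Wmax): the mirror of omega_minus, with d = d(c_r(x), y)
    and c_r(x) = (0, -1)."""
    d = math.hypot(u, v + 1)
    if d < 3:
        return math.atan2(v + 1, u) - math.acos((d*d - 3) / (2*d)) + math.pi/2
    return math.atan2(v + 1, u) + math.acos((d*d - 15) / (2*d)) - math.pi/2

def class_range(u, v):
    """Theta(y) = omega_+(y) - omega_-(y), returned in degrees."""
    return math.degrees(omega_plus(u, v) - omega_minus(u, v))
```

Evaluating at the final positions of Example \ref{paramex} reproduces the stated intervals: $(\omega_-,\omega_+)\approx(-109.47^{\circ},109.47^{\circ})$ at $(2\sqrt2,0)$; $(-48.36^{\circ},30.3^{\circ})$ at $(2.5,-2)$; and $(-80.42^{\circ},60.55^{\circ})$ at $(3,0.5)$.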
The interior, closure, boundary and complement of a set $B$ are denoted by $int(B)$, $cl(B)$, $\partial(B)$, and $B^c$ respectively.
Next we present data obtained after plotting the values of $\omega_+(y) - \omega_-(y)$.
\subsection{Facts about the class range function}\label{rem:data} (see Fig. \ref{figraf}).
\begin{itemize} \label{facts}
\item $\Theta$ is continuous.
\item The domain of $\Theta$ is a bounded set $B\subset \mathbb R^2$. In Section \ref{classdomain} we determine $B$ and its subdivisions. In these subdivisions lie the final positions $y\in \mathbb R^2$ so that $\Gamma(\mbox{\sc x},\mbox{\sc y}_\theta)$ are fibers of the same type, $\mbox{\sc y}_\theta=(y,Y_\theta)$, see Definition \ref{def:spaces}.
\item If $y\in int(B)$, then $\Theta(y)>0$.
\item If $y \in \partial (cl(B))$, then $\Theta(y)=0$.
\item If $y\in B^c$, then $\omega_+(y) - \omega_-(y)<0$. In this case, $I(y)=\emptyset$, and so $\Omega=\emptyset$ or equivalently there is no bounded isotopy class $\Delta(\mbox{\sc x,y})$.
\item The range of $\Theta$ is the interval $[0, 2\arctan \big(\frac{1}{4}\sqrt{2}\big)+\pi]$.
\item $\Theta$ attains the minima at the final positions $y\in \mathbb R^2$ for {\sc c} (or {\sc cc}) isolated points. Here we have that $\Theta(y)=0$.
\item $\Theta$ attains a maximum at $y=(2\sqrt{2},0)$ with $\Theta(y)= 2\arctan \big(\frac{1}{4}\sqrt{2}\big)+\pi$. In Fig. \ref{regparam} top we illustrate the class range for $y=(2\sqrt{2},0)$.
\end{itemize}
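As a quick numerical sanity check (our own sketch, not part of the paper's computations), the claimed maximum value $\arctan \big(\frac{1}{4}\sqrt{2}\big)+\pi$ of the class range function can be evaluated in a few lines:

```python
import math

# Claimed maximum of the class range function Theta,
# attained at y = (0, 2*sqrt(2)): arctan(sqrt(2)/4) + pi.
theta_max = math.atan(math.sqrt(2) / 4) + math.pi

# The maximizing final position y = (0, 2*sqrt(2)).
y = (0.0, 2.0 * math.sqrt(2))

# y lies strictly inside the circle u^2 + v^2 = 16 (the outer boundary
# circle of B), consistent with Theta(y) > 0 for interior points of B.
inside_outer_circle = y[0]**2 + y[1]**2 < 16

print(round(theta_max, 4), inside_outer_circle)
```

The maximum evaluates to approximately $3.4815$, slightly above $\pi$.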
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{graph}
\caption{The graph of the class range function, see Definition \ref{classrange+}. Note that the class range is constructed out of (\ref{eq:Wmin}) and (\ref{eq:Wmax}) and that these functions are obtained by combining (\ref{eq:wmin}) and (\ref{eq:wmax}).}
\label{figraf}
\end{figure}
\section{Class domain}\label{classdomain}
The obvious observation that a triangle with sides of length $1$ and $2$ cannot have a third side of length greater than $3$ leads us to study the cases:
\begin{equation}\label{ineq:1} d(c_l(\mbox{\sc x}),y) < 3
\end{equation}
\begin{equation}\label{ineq:2} d(c_l(\mbox{\sc x}),y) \geq 3
\end{equation}
\begin{equation}\label{ineq:3} d(c_r(\mbox{\sc x}),y) < 3
\end{equation}
\begin{equation}\label{ineq:4} d(c_r(\mbox{\sc x}),y) \geq 3
\end{equation}
After looking at the four possible combinations for:
\begin{equation} \label{eq:0}
\omega_-=\omega_+
\end{equation}
in equations (\ref{eq:wmin}) and (\ref{eq:wmax}) we obtain:
\begin{equation} \label{eq:1}
\delta_{1} + \alpha_{1} - \frac{\pi}{2} = \delta_{2} - \alpha_{2} + \frac{\pi}{2}
\end{equation}
\vspace{-.7em}
\begin{equation} \label{eq:2}
\delta_{3} - \alpha_{3} + \frac{\pi}{2} = \delta_{2} - \alpha_{2} + \frac{\pi}{2}
\end{equation}
\vspace{-.7em}
\begin{equation}\label{eq:3}
\delta_{1} + \alpha_{1} - \frac{\pi}{2} = \delta_{4} + \alpha_{4} - \frac{\pi}{2}
\end{equation}
\begin{equation}\label{eq:4}
\delta_{3} - \alpha_{3} + \frac{\pi}{2} = \delta_{4} + \alpha_{4} - \frac{\pi}{2}
\end{equation}
The following observations regarding the circles (\ref{eq:circ_a})-(\ref{eq:circ_g}) can be checked by direct evaluation. We leave the details to the reader.
Consider $\mbox{\sc x}\in T\mathbb R^2$ according to Remark \ref{coo}. The portion of the circle (\ref{eq:circ_a}) with $u\geq0$ consists of final positions $y=(u,v)\in \mathbb R^2$ whose associated triangle angles (according to Section \ref{trans}) satisfy (\ref{eq:4}), see Figs. \ref{fig:Triangle3}-\ref{fig:Triangle4}. In addition, $\Theta(u,v)=0$ for points of (\ref{eq:circ_a}) with $u\geq0$.
\begin{equation}
\label{eq:circ_a}
u^2+v^2=16.
\end{equation}
The portions of the circles (\ref{eq:circ_d}) and (\ref{eq:circ_e}) with $u\geq 0$ consist of final positions $y=(u,v)\in \mathbb R^2$ whose associated angles, according to subsection \ref{short}, satisfy (\ref{eq:1}). In addition, $\Theta(y)=0$ for points of (\ref{eq:circ_d}) and (\ref{eq:circ_e}) with $u\geq0$.
\begin{equation}
\label{eq:circ_d}
u^2+(v-1)^2=1
\end{equation}
\begin{equation}
\label{eq:circ_e}
u^2+(v+1)^2=1
\end{equation}
The portion of the circle (\ref{eq:circ_f}) with $u\leq 0$ consists of final positions $y=(u,v)\in \mathbb R^2$ whose associated angles, according to subsections \ref{short} and \ref{long}, satisfy (\ref{eq:2}).
The portion of the circle (\ref{eq:circ_g}) with $u\leq 0$ consists of final positions $y=(u,v)\in \mathbb R^2$ whose associated angles, according to subsections \ref{short} and \ref{long}, satisfy (\ref{eq:3}). In addition, $\Theta(y)=0$ for points of (\ref{eq:circ_f}) and (\ref{eq:circ_g}) with $u\leq0$.
\begin{equation}
\label{eq:circ_f}
u^2+(v-3)^2=1
\end{equation}
\begin{equation}
\label{eq:circ_g}
u^2+(v+3)^2=1
\end{equation}
The circles (\ref{eq:circ_b}) and (\ref{eq:circ_c}) are trivially extracted out of relations (\ref{ineq:1})-(\ref{ineq:4}):
\begin{equation}
\label{eq:circ_b}
u^2+(v-1)^2=9
\end{equation}
\vspace{-2em}
\begin{equation}
\label{eq:circ_c}
u^2+(v+1)^2=9
\end{equation}
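The boundary circles fit together neatly: for instance, (\ref{eq:circ_b}) is internally tangent to (\ref{eq:circ_a}) at $(0,4)$, since the distance between their centers equals the difference of their radii. A short sketch (ours, using only the circle data above) verifies this:

```python
import math

# Circles as (center, radius), taken from the equations above:
circ_a = ((0.0, 0.0), 4.0)   # u^2 + v^2 = 16
circ_b = ((0.0, 1.0), 3.0)   # u^2 + (v-1)^2 = 9
circ_c = ((0.0, -1.0), 3.0)  # u^2 + (v+1)^2 = 9

def internally_tangent(c1, c2):
    """Two circles are internally tangent iff the distance between
    their centers equals the difference of their radii."""
    (x1, y1), r1 = c1
    (x2, y2), r2 = c2
    d = math.hypot(x1 - x2, y1 - y2)
    return math.isclose(d, abs(r1 - r2))

print(internally_tangent(circ_a, circ_b))  # circ_b touches circ_a at (0, 4)
print(internally_tangent(circ_a, circ_c))  # circ_c touches circ_a at (0, -4)
```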
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{figRegHeat4}
\caption{Left: The domain $B\subset \mathbb R^2$ of $\Theta$. Note that $B$ is bounded by the circles (\ref{eq:circ_a})-(\ref{eq:circ_c}). Right: The temperature gives the length of $I(y)$ for $y\in B$.}
\label{figRegionB}
\end{figure}
\subsection{Description of $B\subset \mathbb R^2$}\label{descB}
We obtain the domain of the class range function by evaluating (\ref{eq:Wmin}) and (\ref{eq:Wmax}) according to Definition \ref{classrange+}; this planar set is represented by the colored portion in Fig. \ref{figRegionB} left. In Fig. \ref{figRegionB} right we show a heat map of the class values.
\begin{definition}The domain $B\subset \mathbb R^2$ of the class range function $\Theta:\mathbb R^2\to\mathbb R$ corresponds to the open bounded region enclosed by the simple closed curve formed by the semicircles (\ref{eq:circ_d}) and (\ref{eq:circ_e}) for $u\geq0$; (\ref{eq:circ_f}) and (\ref{eq:circ_g}) for $u\leq0$; and (\ref{eq:circ_a}) for $u\geq0$, together with the semicircles (\ref{eq:circ_d}) and (\ref{eq:circ_e}) for $u>0$ and the origin.
\end{definition}
Observe that the semicircles (\ref{eq:circ_d}) and (\ref{eq:circ_e}) for $u>0$ are the locations where {\sc c} isolated points are defined. In addition, $\Theta$ is continuous but not differentiable on the intersection of $B$ with the circles (\ref{eq:circ_f}) and (\ref{eq:circ_g}), see the list of facts in \ref{facts}.
\begin{definition} \label{cbc} Let $\mbox{\sc x,y}\in T\mathbb R^2$. Then
\begin{itemize}
\item $y\in B_{1}\subset{B}$ if $d(\mbox{\sc c}_l(\mbox{\sc x}), y) < 3$ and $d(\mbox{\sc c}_r(\mbox{\sc x}), y) < 3$ are satisfied.
\item $y\in B_{2}\subset{B}$ if $d(\mbox{\sc c}_l(\mbox{\sc x}), y) < 3$ and $d(\mbox{\sc c}_r(\mbox{\sc x}), y) \geq 3$, or \\
$d(\mbox{\sc c}_l(\mbox{\sc x}), y) \geq 3$ and $d(\mbox{\sc c}_r(\mbox{\sc x}), y) < 3$, are satisfied.
\item $y\in B_{3}\subset{B}$ if $d(\mbox{\sc c}_l(\mbox{\sc x}), y) \geq 3$ and $d(\mbox{\sc c}_r(\mbox{\sc x}), y) \geq 3$ are satisfied.
\item Set $B_4=B^c$.
\end{itemize}
\end{definition}
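Definition \ref{cbc} translates directly into a small computation. The sketch below (our own illustration) classifies a final position $y$ into $B_1$, $B_2$ or $B_3$; following Remark \ref{coo} we assume the adjacent circles of {\sc x} have centers $c_l=(0,1)$ and $c_r=(0,-1)$, an assumption consistent with unit radius circles tangent at the origin.

```python
import math

# Assumed centers of the left and right adjacent circles of x
# (Remark (coo): x at the origin, X = e_1, unit radius circles).
C_L = (0.0, 1.0)
C_R = (0.0, -1.0)

def subregion(y):
    """Classify y by the distance inequalities of Definition (cbc).
    Membership in B itself must be checked separately against the
    boundary curves of the class domain."""
    dl = math.dist(C_L, y)
    dr = math.dist(C_R, y)
    if dl < 3 and dr < 3:
        return "B1"   # two short triangles
    if dl >= 3 and dr >= 3:
        return "B3"   # two long triangles
    return "B2"       # one short and one long triangle

print(subregion((0.0, 0.5)))   # close to both centers
print(subregion((0.0, -3.5)))  # far from c_l, close to c_r
print(subregion((0.0, 5.0)))   # far from both centers
```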
\begin{theorem}\label{existvect} Given $\mbox{\sc x} \in T\mathbb R^2$ and $y\in B$, there exists a family $Y_\theta=e^{i\theta}\in T_y\mathbb R^2$, $\theta\in (-\pi,\pi)$, such that for $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a one-parameter family of bounded isotopy classes.
\end{theorem}
\begin{proof} Consider $\mbox{\sc x} \in T\mathbb R^2$. Recall that the values of $\Theta$ are determined by a combination of short and long triangles, according to subsections \ref{short} and \ref{long}.
Since $y\in B$, we have $\Theta(y)\geq0$, see the facts in \ref{rem:data}.
Suppose that $\Theta(y)>0$; then $\omega_-\neq \omega_+$. This immediately implies that $Y_{\omega_-}\neq Y_{\omega_+}$, and we conclude that the bounded classes $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ and $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ are distinct. Since $\Theta$ is continuous, by the intermediate value theorem the result follows. If $\Theta(y)=0$, then $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\theta})$ is a {\sc c} isolated point with $I(y)$ being a single point.
\end{proof}
\begin{definition}\label{def:spaces}\hfill
\begin{enumerate}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ such that:
\begin{itemize}
\item for $\theta=\omega_-$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ is a {\sc cc} isolated point in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$;
\item for $\theta \in (\omega_-,\omega_+)$ there exists a bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item for $\theta=\omega_+$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ is a {\sc cc} isolated point in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$;
\item for $\theta \notin [\omega_-,\omega_+]$ there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$
\end{itemize}
is called a {\bf fiber of type I}.
\vspace{.2cm}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ such that:
\begin{itemize}
\item for $\theta=\omega_-$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ is a {\sc cc} isolated point in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$;
\item for $\theta \in (\omega_-,\omega_+]$ there exists a bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item for $\theta \notin (\omega_-,\omega_+]$ there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
Or,
\begin{itemize}
\item for $\theta=\omega_+$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ is a {\sc cc} isolated point in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$;
\item for $\theta \in [\omega_-,\omega_+)$ there exists a bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item for $\theta \notin [\omega_-,\omega_+)$ there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$
\end{itemize}
is called a {\bf fiber of type II}.
\vspace{.2cm}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ such that:
\begin{itemize}
\item for $\theta \in [\omega_-,\omega_+]$ there exists a bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item for $\theta \notin [\omega_-,\omega_+]$ there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$
\end{itemize}
is called a {\bf fiber of type III}.
\vspace{.2cm}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ such that:
\begin{itemize}
\item there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for all $\theta \in (-\pi,\pi]$
\end{itemize}
is called a {\bf fiber of type IV}.
\vspace{.2cm}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is called a {\bf fiber of type V} if $x=y$. In this case, each $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$, $\theta\in (-\pi,\pi]$, admits an isolated point, being a path of length zero. In addition, there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for any $\theta \in (-\pi,\pi]$.
\end{enumerate}
\end{definition}
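The five fiber types differ only in which endpoints of $[\omega_-,\omega_+]$ carry a bounded class. The following sketch (our own illustration, with hypothetical critical angles) encodes when $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is bounded for each type:

```python
def has_bounded_class(fiber_type, theta, w_minus=None, w_plus=None):
    """Return True when Delta(x, y_theta) is a bounded class, per the
    fiber types of Definition (def:spaces). For type II we take the
    half-open case (omega_-, omega_+]; the mirrored case is analogous."""
    if fiber_type == "I":    # CC isolated points at both endpoints
        return w_minus < theta < w_plus
    if fiber_type == "II":   # CC isolated point at one endpoint only
        return w_minus < theta <= w_plus
    if fiber_type == "III":  # bounded classes on the closed interval
        return w_minus <= theta <= w_plus
    # Types IV and V admit no bounded class for any theta.
    return False

# Hypothetical critical angles, for illustration only.
w_m, w_p = -0.5, 1.2
print(has_bounded_class("I", w_m, w_m, w_p))    # endpoint: isolated point
print(has_bounded_class("III", w_p, w_m, w_p))  # endpoint: still bounded
print(has_bounded_class("IV", 0.0))             # never bounded
```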
\begin{theorem}\label{maincensus1}Consider $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ with $x\neq y$.
Suppose $y\in B\subset \mathbb R^2$, then:
\begin{itemize}
\item If $y\in B_1$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type I.
\item If $y\in B_2$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type II.
\item If $y\in B_3$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type III.
\end{itemize}
If $y\in B_4=B^c$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type IV.
\noindent If $x=y$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type V.
\end{theorem}
\begin{proof} If $y\in B_1$, the endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ have two associated short triangles, according to equations (\ref{eq:thetar_b}) and (\ref{eq:thetal_a}). Via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}) we obtain the values $\omega_-$ and $\omega_+$. Since $\Theta(y)>0$ (see the facts in \ref{rem:data}), Theorem \ref{existvect} guarantees the existence of a family of bounded isotopy classes $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $ \theta \in (\omega_-,\omega_+)=I(y)$. Note that by construction $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ and $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ each admit a {\sc cc} isolated point.
If $y\in B_2$, then the endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ have one associated short and one associated long triangle, via one of the equations (\ref{eq:thetar_b}) or (\ref{eq:thetal_a}) combined with one of (\ref{eq:thetar_c}) or (\ref{eq:thetal_b}). Via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}) we obtain the values $\omega_-$ and $\omega_+$. Since $\Theta(y)>0$ (see the facts in \ref{rem:data}), Theorem \ref{existvect} guarantees the existence of a family $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in [\omega_-,\omega_+)=I(y)$ (or $\theta \in (\omega_-,\omega_+]$). Note that by construction $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ or $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ admits a {\sc cc} isolated point.
If $y\in B_3$, the endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ have two associated long triangles, according to equations (\ref{eq:thetar_c}) and (\ref{eq:thetal_b}). Via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}) we obtain the values $\omega_-$ and $\omega_+$. Since $\Theta(y)>0$, again Theorem \ref{existvect} guarantees the existence of a family $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in [\omega_-,\omega_+]=I(y)$.
If $y\in B_4=B^c$ we have that $\omega_+-\omega_-<0$, implying that there is no bounded isotopy class.
If $x=y$, then by Theorem 3.9 in \cite{paperc} we conclude that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$, $\theta\in (-\pi,\pi]$, admits an isolated point, being a path of length zero. Since $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ admits only closed paths, these have parallel tangents, see \cite{papere}. Therefore, these closed paths are bounded-homotopic to paths of arbitrary length, see Proposition 3.8 in \cite{papere}. By Theorem 7.12 in \cite{paperc} none of these paths can be in a bounded isotopy class. Therefore, there is no bounded isotopy class for any $\theta \in (-\pi,\pi]$.
\end{proof}
It is easy to see that there is a natural correspondence between $B\times I(y)$ and $\mathcal B$; equivalently, a correspondence between $B\times I(y)$ and the endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ for which there exists a bounded isotopy class.
\begin{theorem} \label{noopnoclofib} The set of endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ so that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type:
\begin{itemize}
\item I, II, or III is bounded, neither open nor closed in $T\mathbb R^2$.
\item IV is unbounded, neither open nor closed in $T\mathbb R^2$.
\item V is a unit circle.
\end{itemize}
\end{theorem}
\begin{proof} Consider $\mbox{\sc x} \in T\mathbb R^2$ and $y\in B$. Since $B\subset \mathbb R^2$ and $I(y)\subset (-\pi,\pi)$ are both bounded, $B\times I(y)$ is bounded. Note that $B$ is not open, since it contains the portions with positive abscissa of the circles (\ref{eq:circ_d}) and (\ref{eq:circ_e}), i.e., the image of {\sc c} isolated points. The set $B$ is not closed since the point $y=(0,1)$ is in the closure of $B$ but not in $B$, due to the existence of parallel tangents, see Proposition 3.8 in \cite{papere}.
For the second statement, note that the set of endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ so that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type $IV$ is unbounded, since such fibers have their final positions in $B_4=B^c$, which is an unbounded set. Since $B$ is neither open nor closed, neither is its complement. Therefore $B_4\times (-\pi,\pi]$ is unbounded and neither open nor closed.
Recall that the set of endpoints $(x,X),(y,Y_\theta)\in T\mathbb R^2$ with $x=y$ are such that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type $V$. Since $(x,X)$ remains fixed while $Y_\theta=e^{i\theta}$, $\theta\in (-\pi,\pi]$, the result follows.
\end{proof}
Next, we establish that $\mathcal B$ is bounded and neither open nor closed, answering a question raised by Dubins on p. 480 in \cite{dubins 2}.
\begin{corollary} \label{noopnoclo} $\mathcal B\subset T\mathbb R^2$ is neither open nor closed.
\end{corollary}
\begin{proof} Immediate from Theorem \ref{noopnoclofib} and the obvious correspondence between $\mathcal B$ and $B\times I(y)$.
\end{proof}
\begin{corollary}\label{cor:param}Consider $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$ with $y\in B$. Then:
\begin{itemize}
\item isolated points of zero length are parametrized in the unit circle.
\item isolated points of type {\sc c} are parametrized in $(0,\pi)\sqcup (0,\pi)$.
\item isolated points of type {\sc cc} are parametrized in $$ (0,\pi)\times (0,\pi) \sqcup (0,\pi)\times (0,\pi).$$
\item The bounded isotopy classes are parametrized in $B\times I(y)$.
\end{itemize}
\end{corollary}
\begin{proof} The first and fourth statements were proven in Theorem \ref{noopnoclofib}. The second and third statements are immediate.
\end{proof}
\section{On the classification of the homotopy classes of bounded curvature paths}
Next we present an updated version of Theorem 6.2 in \cite{paperd} by considering the existence of isotopy classes in terms of the values of the class range function.
We first revise Remark \ref{gammaparameter}. Given $\mbox{\sc x}\in T{\mathbb R}^2$,
$$\Gamma=\bigcup_{\substack{{y \in \mathbb R^2}\\ \theta \in (-\pi,\pi]}}\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta).$$
Given $\mbox{\sc x}, \mbox{\sc y} \in T\mathbb R^2$. Let,
$$\Gamma(\mbox{\sc x}, \mbox{\sc y})=\bigcup_{\substack{n \in \mathbb Z}}\Gamma(n)$$
where
$$\Gamma(n)=\{\gamma\in \Gamma(\mbox{\sc x,y}): \tau(\gamma)=n, n\in \mathbb Z\},$$
with $\tau(\gamma)$ being the turning number\footnote{In \cite{paperd} we used the analogous idea of turning number by considering closed paths.} of $\gamma$, see Definition 4.1 in \cite{paperb}.
For each $\theta \in (-\pi,\pi]$ we have that,
$$\Gamma_n(\mbox{\sc x}, \mbox{\sc y}_\theta)=\{\gamma\in \Gamma(\mbox{\sc x},\mbox{\sc y}_\theta): \tau(\gamma)=n, n\in \mathbb Z\}.$$
Suppose that $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma_k(\mbox{\sc x}, \mbox{\sc y}_\theta)$, for some $\theta\in (-\pi,\pi]$, $k\in \mathbb Z$. The space $\Delta'(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma_k(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is the space of paths bounded-homotopic to paths with self-intersections. In \cite{paperd} we proved that:
$$\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\cup \Delta'(\mbox{\sc x}, \mbox{\sc y}_\theta)=\Gamma_k(\mbox{\sc x}, \mbox{\sc y}_\theta).$$
The proof of Theorem \ref{paramclass} is immediate from the facts in \ref{rem:data}, Theorem \ref{noopnoclofib} and Theorem 6.2 in \cite{paperd}.
\begin{theorem}\label{paramclass} For $\mbox{\sc x}, \mbox{\sc y}_{\theta} \in T\mathbb R^2$ we have that:
\begin{equation}
\label{eq:main1}
\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)=\bigcup_{\substack{n \in \mathbb Z}}\Gamma_n(\mbox{\sc x}, \mbox{\sc y}_\theta),\hspace{.2cm} \theta\in(-\pi,\pi].
\end{equation}
\begin{enumerate}
\item If $\Theta(y)>0$ there exists a family of bounded isotopy classes $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ so that:
\begin{equation}
\label{eq:main2}
\Gamma_k(\mbox{\sc x}, \mbox{\sc y}_\theta)=\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta) \cup \Delta'(\mbox{\sc x}, \mbox{\sc y}_\theta), \hspace{.2cm} \mbox{for}\hspace{.2cm} \theta\in I(y), \hspace{.2cm} \mbox{and some}\hspace{.2cm} k\in \mathbb Z.
\end{equation}
In particular, if:
\begin{itemize}
\item If $y\in B_1$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type I.
\item If $y\in B_2$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type II.
\item If $y\in B_3$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type III.
\item If $y\in B_4$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type IV.
\item If $y=x$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type V.
\end{itemize}
\item If $y\in B$, $y\neq x$ and $\Theta(y)=0$ we may have a {\sc c} or a {\sc cc} isolated point.
\item If $\omega_+(y)-\omega_-(y)<0$ we have that there is no bounded isotopy class.
\end{enumerate}
\end{theorem}
\begin{center} {\sc Appendix\\ Homotopy classes and deformations of Dubins paths}
\end{center}
We would like to motivate a theory analyzing algorithmic aspects of deformations of bounded curvature paths of piecewise constant curvature. Many standard questions in computational geometry can be adapted for this class of paths.
In \cite{paperd} we defined operations on bounded curvature paths that are finite concatenations of line segments and arcs of unit radius circles, the so-called $cs$ paths. The line segments and arcs of circles are called components. The number of components is called the complexity of the path\footnote{Dubins paths have complexity at most 3.}. Also in \cite{paperd}, we proved that a $cs$ path can be constructed arbitrarily close to any given bounded curvature path. It is of interest to study the computational complexity of deforming $cs$ paths.
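To make the notion concrete, a $cs$ path can be stored as a list of components, each a line segment or a unit radius circular arc; the complexity is the number of components and the length is the sum of component lengths. The following sketch is our own encoding, not taken from \cite{paperd}:

```python
import math

# A cs path as a list of components:
#   ("s", length)   a line segment of the given length
#   ("c", angle)    an arc of a unit radius circle subtending the given
#                   (positive) angle; its length equals the angle itself
def complexity(path):
    return len(path)

def length(path):
    return sum(value for _, value in path)

# A csc path: quarter turn, straight segment of length 3, quarter turn.
csc = [("c", math.pi / 2), ("s", 3.0), ("c", math.pi / 2)]

print(complexity(csc))        # 3, the maximal complexity of a Dubins path
print(round(length(csc), 4))  # 3 + pi
```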
It is not hard to see that for any given $\mbox{\sc x,y}\in T\mathbb R^2$ the space $\Gamma(\mbox{\sc x,y})$ has a finite number of Dubins paths. In Example \ref{ex:spaces} we index Dubins paths according to their length. The length minimizer in $\Gamma(\mbox{\sc x,y})$ is denoted by $\gamma_0$.
Next, we relate the types of connected components, the numbers of local and global minima of length, the existence of local maxima, and deformations of $cs$ paths. In Fig. \ref{fig:spaces} we consider seven illustrations, and in Example \ref{ex:spaces} we consider seven items. We associate illustrations with items in the obvious way.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{spaces8}
\caption{A schematic representation for spaces of bounded curvature paths. Each of the seven illustrations represent the connected components $\Gamma(n)\subset \Gamma(\mbox{\sc x,y})$ admitting Dubins paths, here represented by points. Points with the same color suggest that the associated paths are bounded-homotopic, see Figs. \ref{sixdubb1}, \ref{fourdub}-\ref{sixdubb3}.}
\label{fig:spaces}
\end{figure}
\begin{example}\label{ex:spaces}Consider $x=(0,0)$, $X=e^{2\pi i}\in T_x\mathbb R^2$. In Figure \ref{fig:spaces} we illustrate spaces $\Gamma(\mbox{\sc x,y})$ such that: \hfill
\begin{enumerate}
\item $y=(z,0)$, $z\in\mathbb R^+$, $Y=e^{2\pi i}\in T_y\mathbb R^2$. This example corresponds to the Euclidean geometry case (up to isometries) where the single length minimizer between any two points is a line segment.
\item $y=(4,-8)$, $Y=e^{2\pi i}\in T_y\mathbb R^2$. There are four Dubins paths, $\gamma_0$ being the length minimizer, see Fig. \ref{fourdub}. The paths $\gamma_0$ and $\gamma_3$ are bounded-homotopic. This is checked in Proposition 4.3 and Fig. 13 in \cite{paperd}.
\item $y=(z,0)$, $z\geq 4$, $Y=e^{\pi i}\in T_y\mathbb R^2$. There are four Dubins paths, with $\gamma_0$ and $\gamma_1$ being length minimizers. These four paths are not bounded-homotopic one to the other. In Fig. 1 in \cite{paperb} we illustrate the two length minimizers.
\item $y=(-2,1)$, $Y=e^{-\frac{\pi}{4} i}\in T_y\mathbb R^2$. There are six Dubins paths, one being the length minimizer, see Fig. \ref{sixdubb1}. The paths $\gamma_0$ and $\gamma_5$; $\gamma_1$ and $\gamma_4$, and $\gamma_2$ and $\gamma_3$ are pair-wise bounded-homotopic. This can be verified by applying Proposition 4.4 in \cite{paperd}.
\item $y=(3,0)$, $Y=e^{\frac{\pi}{3} i}\in T_y\mathbb R^2$. Since $y\in B$, then $\Theta(y)>0$, so we have that there exists a bounded isotopy class $\Delta(\mbox{\sc x,y})$, or equivalently $\Omega\neq \emptyset$. In this case, the length minimizer in $\Gamma(\mbox{\sc x,y})$ is a unique {\sc csc} path and it is an element of $\Delta(\mbox{\sc x,y})$. This is a consequence of Proposition 2.13 in \cite{papera} and Theorem 8.1 in \cite{paperc}.
Note that there are eight Dubins paths, one being the length minimizer ($\gamma_0$ lies in $\Omega$), see Fig. \ref{sixdubb2}. In addition, the paths $\gamma_0$, $w_1$ and $w_2$ are bounded-isotopic one to the other since they are paths in $\Delta(\mbox{\sc x,y})$, see Theorem 5.4 in \cite{paperd}. It seems plausible to think that $w_1$ and $w_2$ are local maxima (not local minima) of length. In addition, the path $\gamma_3$ is the length minimizer in $\Delta'(\mbox{\sc x,y})$.
\item There are six Dubins paths, the length minimizer being an isolated point, see Fig. \ref{sixdubb3} and Theorem 3.9 in \cite{paperc}. By a similar argument to the one in Proposition 4.4 in \cite{paperd} we conclude that $\gamma_1$ is bounded-homotopic to $\gamma_4$.
\item $x=y$ and $X=Y$. Closed bounded curvature paths are not bounded-homotopic to a single point. In this case, the length minimizer $\gamma_0$ is an isolated point of length zero, see Theorem 3.9 in \cite{paperc}. There are two non-trivial length minimizers, say $\gamma_1$ and $\gamma_2$. These paths lie in the adjacent circles $C_l(\mbox{\sc x})$ and $C_r(\mbox{\sc x})$ respectively. It is easy to see that $\gamma_1$ and $\gamma_2$ lie in different homotopy classes since they have winding number $1$ and $-1$ respectively, see Theorem 4.6 in \cite{paperb}.
\end{enumerate}
\end{example}
After the previous examples, a natural task would be to determine for any pair of endpoints $\mbox{\sc x,y}\in T\mathbb R^2$ the exact number of homotopy classes admitting Dubins paths. This should be done after first describing all the possible scenarios for homotopies between Dubins paths.
A closely related problem is the following. Given $\mbox{\sc x},\mbox{\sc y}_\theta\in T\mathbb R^2$, describe how the number of Dubins paths varies as $\theta$ varies in $(-\pi,\pi]$. Also, describe how the types of all (up to eight?) Dubins paths vary across the fibers. A description of the type of the global minimum (first Dubins path) has been obtained in \cite{bui}.
Given $\mbox{\sc x},\mbox{\sc y}\in T\mathbb R^2$, what are the minimal length $cs$ paths of complexity $n>3$? For certain pairs the answer is trivial. What if $d(x,y)<4$?
Given two $cs$ paths with prescribed complexity lying in the same homotopy class, what is the minimal number of operations (or moves) needed to deform one path into the other? What are these moves?
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth,angle=0]{fourdub}
\caption{Spaces of bounded curvature paths may have four local minima of length. Note that $\gamma_0$ and $\gamma_3$ are bounded-homotopic, see Proposition 4.3 in \cite{paperd}.}
\label{fourdub}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{sixdubbbyn1}
\caption{Spaces of bounded curvature paths may have up to eight {\sc csc-ccc} paths, local minima (or maxima) of length. Note that $\gamma_3$ and $\gamma_6$ are bounded-homotopic.}
\label{sixdubb2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth,angle=0]{sixdubbbyn3}
\caption{Spaces of bounded curvature paths may have six local minima of length. Note that $\gamma_1$ and $\gamma_4$ are bounded-homotopic. It is of interest to classify the fibers $\Gamma(\mbox{\sc x},\mbox{\sc y}_\theta)$ in terms of the way the type and number of Dubins paths changes as $\theta$ varies, see also Figs. \ref{sixdubb1}, \ref{fourdub} and \ref{sixdubb2}. }
\label{sixdubb3}
\end{figure}
\bibliographystyle{amsplain}
\section{Prelude}
It is well known that any two plane curves, both closed or both with endpoints, are homotopic. Graustein and Whitney, in 1937, independently proved that not any two planar closed curves are regularly homotopic (homotopic through immersions) \cite{whitney}. Markov in 1857 considered several optimization problems relating a bound on curvature to the design of railroads \cite{markov}. But it was only in 1957 that bounded curvature paths were rigorously introduced by Dubins, when bounded curvature paths of minimal length were first characterized \cite{dubins 1}.
Fix two elements in the tangent bundle of the Euclidean plane $(x,X),(y,Y)\in T{\mathbb R}^2$. Informally, a planar bounded curvature path is a $C^1$ and piecewise $C^2$ path starting at $x$ and finishing at $y$, with tangent vectors $X$ and $Y$ at these points respectively, having absolute curvature bounded by $\kappa=\frac{1}{r}>0$. Here $r$ is the minimum allowed radius of curvature. The piecewise $C^2$ property comes naturally due to the nature of the length minimizers \cite{dubins 1}\footnote{Dubins proved that bounded curvature paths of minimal length are concatenations of two arcs of a circle with a line segment in between, or three arcs of a circle, or any subset of these. The so-called {\sc csc}-{\sc ccc} paths.}.
In 1961 Dubins raised fundamental questions about the topology of the spaces of bounded curvature paths \cite{dubins 2}. ``Here we only begin the exploration, raise some questions that we hope will prove stimulating, and invite others to discover the proofs of the definite theorems, proofs that have eluded us'' see pp. 471 in \cite{dubins 2}. Fifty years later the fundamental questions proposed by Dubins were answered through the papers \cite{paperb, papera, paperc, paperd}. In addition, the classification of the homotopy classes of curves with bounded absolute curvature, having fixed initial and final positions, and variable initial and final directions was achieved in \cite{papere}.
In this note, we develop an elementary framework enabling us to parametrize families of spaces of bounded curvature paths. These families share similar types of connected components, namely isolated points, homotopy classes, or isotopy classes, see Theorem \ref{maincensus1} and Definition \ref{def:spaces}. In particular, we answer a question raised by Dubins in 1961 \cite{dubins 2} by explicitly describing the set of endpoints $(x,X),(y,Y)\in T{\mathbb R}^2$ so that the space of bounded curvature paths starting at $x$ and finishing at $y$, with tangent vectors $X$ and $Y$ at these points respectively, admits a bounded isotopy class, see Theorem \ref{noopnoclofib} and Corollary \ref{noopnoclo}. We conclude by presenting an updated (parametric) version of the classification theorem for homotopy classes of bounded curvature paths in \cite{paperd}, by incorporating the results here obtained, see Theorem \ref{paramclass}. Our results can be extended without much effort to paths in the hyperbolic 2-space.
This article is the culmination of a program devoted to classifying the homotopy classes of bounded curvature paths \cite{paperc, paperd}, and the minimal length elements in homotopy classes \cite{paperb, papera}. We recommend that the reader refer from time to time to our previous work \cite{paperb, papere, papera, paperc, paperd}. We conclude by presenting an Appendix that can be read independently. The Appendix considers further examples and questions about computational aspects of connected components, and deformations of bounded curvature paths of piecewise constant curvature.
There is a vast literature on bounded curvature paths from the theoretical computer science point of view. We encourage the reader to refer to \cite{aga1, baker, buasanei2, bui, fortune, jacobs, lavalle, reif, rus}. Bounded curvature paths have been applied to many real-life problems, since a bound on curvature models the trajectory of the motion of wheeled vehicles and drones, also called unmanned aerial vehicles (UAV). We mention only \cite{brazil 1, chang, duindan, ny, owen1, soures, tso1}. Literature on the topology and geometry of spaces of bounded curvature paths can be found in \cite{paperb, papere, papera, paperc, paperd, dubins 1, dubins 2, reeds, saldanha, sus}.
The illustrations here presented have been imported from Dubins Explorer, a software for bounded curvature paths \cite{dubinsexplorer}.
\section{On spaces of bounded curvature paths}
For the convenience of the reader, we include relevant material from our previous work in \cite{paperb, papere, papera, paperc, paperd}. Denote by $T{\mathbb R}^2$ the tangent bundle of ${\mathbb R}^2$. Recall that the elements in $T{\mathbb R}^2$ are pairs $(x,X)$ denoted here for short by {\sc x}. The first coordinate of such a pair corresponds to a point in ${\mathbb R}^2$ and the second to a tangent vector to ${\mathbb R}^2$ at $x$.
\begin{definition} \label{defbcp} Given $(x,X),(y,Y) \in T{\mathbb R}^2$, a path $\gamma: [0,s]\rightarrow {\mathbb R}^2$ connecting these points is a {\it bounded curvature path} if:
\begin{itemize}
\item $\gamma$ is $C^1$ and piecewise $C^2$;
\item $\gamma$ is parametrized by arc length (i.e., $||\gamma'(t)||=1$ for all $t\in [0,s]$);
\item $\gamma(0)=x$, $\gamma'(0)=X$; $\gamma(s)=y$, $\gamma'(s)=Y$;
\item $||\gamma''(t)||\leq \kappa$, for all $t\in [0,s]$ when defined, with $\kappa>0$ a constant.
\end{itemize}
\end{definition}
The first item means that a bounded curvature path has continuous first derivative and piecewise continuous second derivative. Minimal length elements in spaces of paths satisfying the last three items in Definition \ref{defbcp} are in fact $C^1$ and piecewise $C^2$. For the third item, without loss of generality, we extend the domain of $\gamma$ to $(-\epsilon,s+\epsilon)$ for $\epsilon>0$. Sometimes we describe the third item as the endpoint condition. The fourth item means that bounded curvature paths have absolute curvature bounded above by a positive constant. Without loss of generality, we consider $\kappa=1$.
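The curvature bound in Definition \ref{defbcp} can be checked numerically on a discretized path: for an arc-length parametrized path, $||\gamma''||$ is approximated by a centered second difference. A sketch of ours, with $\kappa=1$, applied to a sampled unit circle:

```python
import math

def max_curvature(points, h):
    """Approximate max ||gamma''|| for points sampled at arc-length
    spacing h, via the second difference (p[i-1] - 2p[i] + p[i+1]) / h^2."""
    worst = 0.0
    for i in range(1, len(points) - 1):
        ddx = (points[i-1][0] - 2*points[i][0] + points[i+1][0]) / h**2
        ddy = (points[i-1][1] - 2*points[i][1] + points[i+1][1]) / h**2
        worst = max(worst, math.hypot(ddx, ddy))
    return worst

# A unit radius circle parametrized by arc length has curvature 1,
# so it sits exactly at the bound kappa = 1.
h = 0.01
circle = [(math.cos(t * h), math.sin(t * h)) for t in range(700)]

print(round(max_curvature(circle, h), 3))  # close to 1
```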
The unit tangent bundle $UT\mathbb R^2$ is equipped with a natural projection $p : UT{\mathbb R}^2 \rightarrow {\mathbb R}^2$. Note that $p^{-1}(y)$ is $\mathbb S^1$ for all $y \in{\mathbb R}^2$. The space of endpoints is a circle bundle over ${\mathbb R}^2$.
\begin{remark}\label{coo}{\it (Coordinate system and angle orientation).}
\begin{itemize}
\item For the given $(x,X), (y,Y) \in T{\mathbb R}^2$ in Definition \ref{defbcp}, we consider a coordinate system whose origin is identified with $x$ and in which $X$ is the first vector of the standard basis $\{e_1,e_2\}$ for $\mathbb R^2$, i.e., $X=e_1$.
\item When measuring angles, we take the counterclockwise direction as positive.
\end{itemize}
\end{remark}
Dubins \cite{dubins 1} proved that length-minimizing bounded curvature paths are necessarily either a concatenation of an arc of a unit radius circle, followed by a line segment, followed by an arc of a unit radius circle (the so-called {\sc csc} paths), or a concatenation of three arcs of unit radius circles (the so-called {\sc ccc} paths). Writing {\sc r} for a circle traveled to the right and {\sc l} for a circle traveled to the left, we obtain six possible types of paths, namely {\sc lsl}, {\sc rsr}, {\sc lsr}, {\sc rsl}, {\sc lrl} and {\sc rlr}. Paths of one of these types are here called Dubins paths. Note that we consider Dubins paths to be local, not necessarily global, minimizers of length.
In the following paragraphs, we illustrate through examples the richness of the theory of bounded curvature paths. Its features come from the constraints these curves satisfy. These constraints lead to interesting interactions between metric geometry and computational mathematics.
\begin{example} \label{ex:1}\hfill
\begin{enumerate}
\item Recall that length minimizers are considered for establishing the distance between points in a manifold. This approach is not suitable when considering bounded curvature paths since, in many cases, the length of the length minimizer varies discontinuously under arbitrarily small perturbations of the endpoints or directions.
Consider $(x,X),(y,Y_\theta) \in T \mathbb R^2$, $\theta \in \mathbb R$, with $\kappa=1$:
\begin{itemize}
\item $x=(0,0)$; $X=e^{2\pi i}\in T_x\mathbb R^2$.
\item $y=(1,1)$; $\mbox{\it Y}_{\theta}=e^{\theta i}\in T_y\mathbb R^2$.
\end{itemize}
\noindent Discontinuities for the length of the length minimizers happen when perturbing around $\mbox{\it Y}_{\frac{\pi}{2}}$, see Fig.~\ref{figdiscde}. The sudden jumps in length suggest the existence of isolated points, see Theorem 3.9 in \cite{paperc}. In fact, the path in Fig. \ref{figdiscde} left is an isolated point in the space of bounded curvature paths from $(x,X)$ to $(y,Y_{\frac{\pi}{2}})$.
The path in Fig.~\ref{figdiscde} right illustrates a discontinuity after perturbing the final location to $y'=(1-\epsilon,1-\epsilon)$ for $\epsilon>0$ small. Note that length minimizers may not be embedded paths.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{figdiscde4}
\caption{Examples of length minimizers in their respective path space. The three paths at the right are the result of small perturbations to the final position or direction of $(y,Y_{\frac{\pi}{2}})$, for $\epsilon>0$. After applying Dubins' characterization for the length minimizers \cite{dubins 1} a simple numerical experiment shows the existence of length discontinuities.}
\label{figdiscde}
\end{figure}
\item Spaces of bounded curvature paths have several local minima of length, see Fig. \ref{sixdubb1}.
Consider $\mbox{\sc x}=(x,X),\mbox{\sc y}=(y,Y)\in T\mathbb R^2$ with $\kappa=1$:
\begin{itemize}
\item $x=(0,0)$; $X=e^{2\pi i}\in T_x\mathbb R^2$.
\item $y=(-2,1)$; $Y=e^{-\frac{\pi}{4} i}\in T_y\mathbb R^2$.
\end{itemize}
By recursively applying the methods in \cite{papera} for obtaining the {\sc csc}-{\sc ccc} characterization for the length minimizers \cite{paperb, papera, buasanei1, dubins 1, johnson} we obtain all the local minima of length.
By Proposition 4.4 in \cite{paperd} the paths $\gamma_0$ and $\gamma_5$ are homotopic without violating the curvature bound throughout the deformation. The same applies for $\gamma_1$ and $\gamma_4$. By Proposition 4.3 in \cite{paperd} the paths $\gamma_2$ and $\gamma_3$ lie in the same homotopy class of bounded curvature paths.
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth,angle=0]{sixdubbbyn2}
\caption{Spaces of bounded curvature paths have several local minima of length. Paths with matching colors are homotopic without violating the curvature bound throughout the deformation.}
\label{sixdubb1}
\end{figure}
\item Another interesting feature is that the symmetry property satisfied by metrics is in general violated. For example, the length minimizer from $(x,X)$ to $(y,\mbox{\it Y}_{\frac{\pi}{2}})$ has length $\frac{\pi }{4}$, see Fig.~\ref{figdiscde} left. On the other hand, the length minimizer from $(y,\mbox{\it Y}_{\frac{\pi}{2}})$ to $(x,X)$ has length $\frac{3\pi }{4}$, see Theorem 4.6 in \cite{paperb}.
\item The classification of the homotopy classes of bounded curvature paths was obtained in \cite{paperd}. A crucial step was to prove that for certain $(x,X),(y,Y)\in T{\mathbb R}^2$ there exists a bounded region $\Omega\subset \mathbb R^2$ that ``traps'' embedded bounded curvature paths. That is, no embedded bounded curvature path whose image is in $\Omega$ can be deformed (while preserving the curvature bound throughout the deformation) to a path having a point not in $\Omega$, see Definition 4.1 in \cite{paperc}. In \cite{paperd} we proved that these ``trapped regions'' are the domain of elements in isotopy classes of bounded curvature paths, we refer to these as {\bf bounded isotopy classes}. Discontinuities may also occur in the formation of trapped regions. These ideas will be discussed in subsection \ref{construct}.
Consider $(x,X),(y,Y)\in T\mathbb R^2$ with $\kappa=1$. For $\epsilon>0$ small, the spaces of bounded curvature paths satisfying:
\begin{itemize}
\item $x=(0,0)$; $X=e^{2\pi i}\in T_x\mathbb R^2$
\item $y=(1+\epsilon,1+\epsilon)$; $Y=e^{\frac{\pi}{2}i}\in T_y\mathbb R^2$
\end{itemize}
\noindent have associated a region that ``traps'' embedded bounded curvature paths, see Fig. \ref{motiv} right. More generally, spaces satisfying the previous two conditions admit a bounded isotopy class of bounded curvature paths, see Theorem 5.4 in \cite{paperd}. If $\epsilon=0$, the space satisfying the previous two conditions admits an isolated point, see Fig. \ref{figdiscde} left.
\item Consider $(x,X),(y,Y)\in T \mathbb R^2$ with $\kappa=1$:
\begin{itemize}
\item $x=(0,0)$; $X=e^{2\pi i}\in T_x\mathbb R^2$.
\item $y=(4-\epsilon,0)$; $0<\epsilon<4$; $Y=e^{2\pi i}\in T_y\mathbb R^2$.
\end{itemize}
Intimately related to the previous observation is that the path $\gamma$ shown in Fig. \ref{motiv} (a non-embedded path) is not homotopic (while preserving the curvature bound throughout the deformation) to the line segment connecting $(x,X)$ to $(y,Y)$, see Corollary 7.13 in \cite{paperc}. In contrast, if $\epsilon\leq 0$, then these two paths are homotopic without violating the curvature bound \cite{paperd}. This fact is related to the existence of trapped regions \cite{paperc}. If a bound on curvature is not under consideration, then $\gamma$ and the line segment connecting $(x,X)$ to $(y,Y)$ are regularly homotopic, see Fig. \ref{motiv}.
\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth,angle=0]{motivfinal}
\caption{Right: A (zoomed out) trapped region $\Omega\subset \mathbb R^2$ obtained after perturbing the final position of a bounded curvature path that is an isolated point, see Fig. \ref{figdiscde} left. Suddenly, the topology of the path space changes from a space of paths admitting an isolated point into a space of paths admitting a bounded isotopy class with non-empty interior. The elements of this isotopy class are only defined in $\Omega$. Left: An illustration of (5) in Example \ref{ex:1}. If $d(x,y)<4$, then $\gamma$ is not homotopic, while preserving the curvature bound throughout the deformation, to the line segment (the length minimizer) from $(x,X)$ to $(y,Y)$.}
\label{motiv}
\end{figure}
\end{enumerate}
\end{example}
\subsection{Spaces of bounded curvature paths}
\begin{definition} \label{admsp} Given $\mbox{\sc x,y}\in T{\mathbb R}^2$, the space of bounded curvature paths from {\sc x} to {\sc y} is denoted by $\Gamma(\mbox{\sc x,y})$.
\end{definition}
In this note we consider $\Gamma(\mbox{\sc x,y})$ with the topology induced by the $C^1$ metric. It is important to note that properties (among many others) such as types of connected components, or the number of local (global) minima in $\Gamma(\mbox{\sc x,y})$ depend on the endpoints in $T{\mathbb R}^2$ under consideration.
Next, we make use of the fiber bundle structure of $T\mathbb R^2$ to describe families of spaces of bounded curvature paths.
\begin{definition}\label{famspa} Choose $\mbox{\sc x}\in T{\mathbb R}^2$ and $y \in \mathbb R^2$. Consider the family of pairs $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$ with $\mbox{\sc y}_\theta=(y,Y_\theta)$; $ Y_\theta=e^{\theta i}\in T_y\mathbb R^2$, $\theta \in \mathbb R$. The one-parameter family of spaces of bounded curvature paths starting at $\mbox{\sc x}$ and finishing at $\mbox{\sc y}_\theta$ is called a {\it fiber} and is denoted by $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{definition}
Whenever we write: $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$, $\theta \in \mathbb R$, we mean a family of pairs of endpoints so that: $\mbox{\sc x}\in T\mathbb R^2$ and $y\in \mathbb R^2$ are arbitrary but fixed while $\theta$ varies in the reals. Note that a space $\Gamma(\mbox{\sc x,y})$ is a representative of a family of spaces parametrized in the reals.
\begin{definition}\label{gammaparameter} Given $\mbox{\sc x}\in T{\mathbb R}^2$ we define:
$$\Gamma=\bigcup_{\substack{{y \in \mathbb R^2}\\ \theta \in \mathbb R}}\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta).$$
\end{definition}
In this note we develop a method for parametrizing the fibers in $\Gamma$ in terms of the types of connected components in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$, $\theta\in \mathbb R$.
When a path is continuously deformed under parameter $p$ we reparametrize each of the deformed paths by its arc-length. In this fashion, $\gamma: [0,s_p]\rightarrow {\mathbb R}^2$ represents a deformed path at parameter $p$, with $s_p$ corresponding to its arc-length.
\begin{definition} \label{hom_adm} Given $\gamma,\eta \in \Gamma(\mbox{\sc x,y})$. A {\it bounded curvature homotopy} between $\gamma: [0,s_0] \rightarrow {\mathbb R^2}$ and $\eta: [0,s_1] \rightarrow {\mathbb R^2}$ corresponds to a continuous one-parameter family of immersed paths $ {\mathcal H}_t: [0,1] \rightarrow \Gamma(\mbox{\sc x,y})$ such that:
\begin{itemize}
\item ${\mathcal H}_t(p): [0,s_p] \rightarrow {\mathbb R}^2$ for $t\in [0,s_p]$ is an element of $\Gamma(\mbox{\sc x,y})$ for all $p\in [0,1]$.
\item $ {\mathcal H}_t(0)=\gamma(t)$ for $t\in [0,s_0]$ and ${\mathcal H}_t(1)=\eta(t)$ for $t\in [0,s_1]$.
\end{itemize}
\end{definition}
A bounded-curvature isotopy is a continuous one-parameter family of embedded bounded curvature paths. Two paths in $\Gamma(\mbox{\sc x,y})$ are {\it bounded-homotopic (bounded-isotopic)} if there exists a bounded curvature homotopy (isotopy) from one to the other. A {\it homotopy (isotopy) class} is a maximal path connected set in $\Gamma(\mbox{\sc x,y})$.
\begin{definition} \label{defcalb} Let $\Delta(\mbox{\sc x,y})$ be a non-empty bounded isotopy class of paths in $\Gamma(\mbox{\sc x,y})$. Let $\mathcal B$ denote the set of pairs $\mbox{\sc x,y} \in T\mathbb R^2$ for which $\Gamma(\mbox{\sc x,y})$ possesses such a bounded isotopy class.
\end{definition}
In \cite{paperc} we proved that $\mathcal B\neq \emptyset$ by establishing the existence of non-empty bounded isotopy classes. Whenever we refer to $\Delta(\mbox{\sc x,y})$ we imply that $\Delta(\mbox{\sc x,y})$ is non-empty. In this note, we give necessary and sufficient conditions for $\mbox{\sc x},\mbox{\sc y} \in T\mathbb R^2$ to be an element in $\mathcal B$. As a consequence, we answer a question raised by Dubins on p. 480 of \cite{dubins 2}. We establish that $\mathcal B$ is a bounded subset of $T\mathbb R^2$ that is neither open nor closed, see Theorem \ref{noopnoclofib} and Corollary \ref{noopnoclo}.
\subsection{Proximity of endpoints}\label{proxcon}
Here we analyze the configurations of distinguished pairs of circles in $\mathbb R^2$. This approach permits us to reduce the configurations of endpoints in $T\mathbb R^2$ to a finite number of cases up to isometries.
Consider $\mbox{\sc x}\in T\mathbb R^2$. Let $\mbox{\sc C}_ l(\mbox{\sc x})$ be the unit radius circle tangent to $X$ at $x$ and lying to the left of $X$. The meaning of $\mbox{\sc C}_ r(\mbox{\sc x})$, $\mbox{\sc C}_ l(\mbox{\sc y})$ and $\mbox{\sc C}_ r(\mbox{\sc y})$ should be obvious. These circles are called {\it adjacent circles}. Denote the centers of the adjacent circles with lowercase letters. So, the center of $\mbox{\sc C}_ l(\mbox{\sc x})$ is $c_l(\mbox{\sc x})$, see Fig. \ref{fig:Cr} right. The other cases are analogous.
We concentrate on the following configurations for the adjacent circles.
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))\geq 4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))\geq4 \label{con_a}\tag{i}\end{equation}
\vspace{-1.5em}
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))< 4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))\geq 4 \label{con_b}\tag{ii} \end{equation}
\vspace{-1.5em}
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))\geq4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))< 4 \label{con_b'}\tag{iii} \end{equation}
\vspace{-1.5em}
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))< 4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))< 4 \label{con_c}\tag{iv}
\end{equation}
The conditions (i)-(iv) have been used in different contexts throughout \cite{paperb,papera, paperc, paperd}. They give information about the topology and geometry of $\Gamma(\mbox{\sc x,y})$. Note that, as planar configurations, (ii) and (iii) are equivalent up to isometries.
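As a minimal sketch (ours; the helper name is illustrative), conditions (\ref{con_a})-(\ref{con_c}) can be tested directly from the endpoints: with $\kappa=1$, the center of $\mbox{\sc C}_l(\mbox{\sc x})$ lies one unit to the left of $X$ and that of $\mbox{\sc C}_r(\mbox{\sc x})$ one unit to the right.

```python
import math

def proximity_condition(x, X, y, Y):
    """Classify endpoints (x, X), (y, Y) into conditions (i)-(iv) via the
    distances between the centers of adjacent circles (kappa = 1)."""
    cl_x = (x[0] - X[1], x[1] + X[0])  # center of C_l(x), one unit left of X
    cr_x = (x[0] + X[1], x[1] - X[0])  # center of C_r(x), one unit right of X
    cl_y = (y[0] - Y[1], y[1] + Y[0])
    cr_y = (y[0] + Y[1], y[1] - Y[0])
    d_l = math.dist(cl_x, cl_y)
    d_r = math.dist(cr_x, cr_y)
    if d_l >= 4 and d_r >= 4:
        return "i"
    if d_l < 4 and d_r < 4:
        return "iv"
    return "ii" if d_l < 4 else "iii"
```

Note that condition (iv) alone does not distinguish the proximity conditions C and D defined below; that distinction requires deciding whether a bounded isotopy class exists.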
\subsection{Trapped regions and bounded isotopy classes}\label{construct}
In Theorem 5.4 in \cite{paperd} we proved that for certain $\mbox{\sc x,y}\in T{\mathbb R}^2$, the associated space $\Gamma(\mbox{\sc x,y})$ admits a bounded isotopy class $\Delta(\mbox{\sc x,y})$. It turns out that paths in $\Delta(\mbox{\sc x,y})$ are defined exclusively in a bounded region $\Omega\subset \mathbb R^2$. The shape of $\Omega$ depends on the initial and final positions and directions in $T{\mathbb R}^2$, see Fig. \ref{regparam}.
For a precise explanation on how these regions $\Omega\subset \mathbb R^2$ are constructed, we strongly suggest the reader refer to Section 4 in \cite{paperc}. We call these regions {\bf trapped regions}.
It is important to note that:
\begin{itemize}
\item Embedded paths in $\Omega$ cannot be deformed without violating the curvature bound to a path with a self-intersection, see Corollary 7.13 in \cite{paperc}.
\item Embedded paths in $\Omega$ are not bounded-homotopic to paths having a point not in $\Omega$, see Theorem 8.1 in \cite{paperc}.
\item The proof of the existence of isolated points in spaces of bounded curvature paths was given in Theorem 3.9 in \cite{paperc}. These correspond to arcs of a unit circle of length less than $\pi$, called {\bf {\sc c} isolated points}. Similarly, a concatenation of two arcs of unit circles, each of length less than $\pi$, is called a {\bf {\sc cc} isolated point}. Isolated points in $\Gamma(\mbox{\sc x,y})$ are bounded isotopy classes with empty interior, see Fig. \ref{figdiscde} left, and Fig. \ref{figgenpos} left. In addition, bounded curvature paths of length zero are also isolated points. This observation becomes interesting after recalling the concept of simple connectedness: closed bounded curvature paths are not bounded-homotopic to a single point.
\end{itemize}
\begin{remark}\label{rem:empty} Suppose that for $\mbox{\sc x,y}\in T{\mathbb R}^2$ we have that $\Gamma(\mbox{\sc x,y})$ does not admit a bounded isotopy class $\Delta(\mbox{\sc x,y})$. Then, embedded trapped paths cannot exist. We adopt the notation $\Delta(\mbox{\sc x,y})$, rather than $\Delta(\Omega)$ as we did in \cite{paperc, paperd}, since our emphasis now is on the endpoints rather than the regions $\Omega\subset \mathbb R^2$. We prefer to write $\Omega$ instead of $\Omega(\mbox{\sc x,y})$.
\end{remark}
The classification theorem for the homotopy classes in \cite{paperd} required the proximity conditions A, B, C, and D, see \cite{paperc,paperd}. Next, we redefine conditions C and D in terms of the existence of bounded isotopy classes, see Fig. \ref{figproxcondabcd}.
\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth,angle=0]{figproxcondabcd3}
\caption{Examples of bounded curvature paths in spaces satisfying conditions A, B, C and D.}
\label{figproxcondabcd}
\end{figure}
\begin{definition}\label{procon}
If $\mbox{\sc x,y}\in T{\mathbb R}^2$ satisfies:
\begin{itemize}
\item (i) then $\Gamma({\mbox{\sc x,y}})$ is said to satisfy proximity condition {\sc A}.
\item (ii) or (iii) then $\Gamma({\mbox{\sc x,y}})$ is said to satisfy proximity condition {\sc B}.
\item (iv) and there is no bounded isotopy class $\Delta({\mbox{\sc x,y}})$ then $\Gamma({\mbox{\sc x,y}})$ is said to satisfy proximity condition {\sc C}.
\item (iv) and there exists a bounded isotopy class $\Delta({\mbox{\sc x,y}})$ then $\Gamma({\mbox{\sc x,y}})$ is said to satisfy proximity condition {\sc D}.
\end{itemize}
\end{definition}
\noindent In Theorem \ref{maincensus1} we clarify for what $\mbox{\sc x,y}\in T\mathbb R^2$ we have that $\Delta(\mbox{\sc x,y})\subset \Gamma(\mbox{\sc x,y})$. To this end, we group spaces of bounded curvature paths in terms of the type of connected components that they have.
\section{An underlying discrete structure}\label{underlying} \label{Crl}
Next, we describe the coordinates of distinguished points in $\mathbb R^2$. The configurations of these points reveal interesting features of $\Gamma(\mbox{\sc x,y})$, $\mbox{\sc x,y}\in T\mathbb R^2$. In particular, these points completely characterize the regions $\Omega\subset \mathbb R^2$ whenever they exist.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{Cr11}
\caption{Right: Notation associated with $\Omega\subset \mathbb R^2$. Left: The angles involved when computing the coordinates of $p,q\in \mathbb R^2$.}
\label{fig:Cr}
\end{figure}
Consider unit radius circles $A$ and $B$ with centers $a=(a_1,a_2)$ and $b=(b_1,b_2)$ respectively. Consider a unit radius circle $C$ with center $c$ tangent to $A$ and $B$ at $p$ and $q$ respectively, see Figure \ref{fig:Cr} left. Also, set $\mbox{\sc x,y}\in T\mathbb R^2$ so that a {\sc ccc} path is obtained. Suppose the coordinates of $a$ and $b$ are known. Next we determine the coordinates of the points $p$ and $q$.
Consider the triangle whose vertices are $a$, $b$, and $c$. Denote by $\theta$ the smallest angle made by the line passing through $a$ and $b$ and the horizontal axis according to Remark \ref{coo}, see Figure \ref{fig:Cr} left. Here $\ell_1$ and $\ell_2$ are parallel to the horizontal axis. Denote by $\delta$ the smallest angle made by the line passing through $b$ and $c$ and the horizontal axis. It is easy to see that $d(a,c)=d(c,b)=2$.
After applying the law of cosines we immediately obtain that:
\begin{equation*}
\alpha =\arccos\bigg({\frac{\sqrt{(b_1-a_1)^2+(b_2-a_2)^2}}{4}}\bigg)
\end{equation*}
\begin{equation*}
\theta = \arctan\bigg({\frac{b_2-a_2}{b_1-a_1}}\bigg)
\end{equation*}
\begin{equation*}
\delta = \arctan\bigg({\frac{b_2-c_2}{b_1-c_1}}\bigg)
\end{equation*}
\begin{equation*}
\label{eq:cruv}
c=(a_1 + 2\cos(\alpha+\theta), a_2 + 2\sin(\alpha+\theta))
\end{equation*}
\vspace{0em}
\begin{equation}
\label{eq:i1}
p = (a_1 + \cos({\alpha+\theta}), a_2 + \sin({\alpha+\theta}))
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:i3}
q= (b_1 + \cos{\delta},b_2 + \sin{\delta})
\end{equation}
\vspace{0.1em}
By letting $A=\mbox{\sc C}_ r(\mbox{\sc x})$ and $B=\mbox{\sc C}_ r(\mbox{\sc y})$ we find explicit formulas for the points $p$ between $A$ and $C$ and $q$ between $C$ and $B$, see Fig. \ref{fig:Cr} left. Observe that the coordinates of $c_r(\mbox{\sc x})$ and $c_ r(\mbox{\sc y})$ are easily obtained since $\mbox{\sc x,y}\in T\mathbb R^2$ are given.
Analogously, by letting $A=\mbox{\sc C}_ l(\mbox{\sc x})$ and $B=\mbox{\sc C}_ l(\mbox{\sc y})$, and by applying the same reasoning as before, we find formulas for the points $p'$ between $A$ and $C'$ and $q'$ between $C'$ and $B$ (see Fig. \ref{fig:Cr} right):
\begin{equation*}
\alpha' =\arccos\bigg({\frac{\sqrt{(b_1-a_1)^2+(b_2-a_2)^2}}{4}}\bigg)
\end{equation*}
\begin{equation*}
\theta' = \arctan\bigg({\frac{b_2-a_2}{b_1-a_1}}\bigg)
\end{equation*}
\begin{equation*}
\delta' = \arctan\bigg({\frac{b_2-c_2}{b_1-c_1}}\bigg)
\end{equation*}
\begin{equation*}
\label{eq:cluv}
c'=(a_1 + 2\cos(\alpha'+\theta'), a_2 + 2\sin(\alpha'+\theta'))
\end{equation*}
\vspace{0em}
\begin{equation}
\label{eq:i2}
p' = (a_1 + \cos({\alpha'+\theta'}), a_2 + \sin({\alpha'+\theta'}))
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:i4}
q' = (b_1 + \cos{\delta'},b_2 + \sin{\delta'})
\end{equation}
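The computations above translate into a few lines of code. The sketch below (ours; we use the quadrant-aware $\operatorname{atan2}$ in place of $\arctan$, which agrees with the formulas in the configurations considered) returns $c$, $p$ and $q$ for given centers $a$ and $b$:

```python
import math

def third_circle(a, b):
    """Given centers a, b of two unit circles with d(a, b) < 4, return the
    center c of a unit circle tangent to both, and the tangency points
    p (between A and C) and q (between C and B)."""
    d = math.dist(a, b)
    alpha = math.acos(d / 4.0)                    # law of cosines, d(a,c) = d(c,b) = 2
    theta = math.atan2(b[1] - a[1], b[0] - a[0])  # angle of the line through a and b
    c = (a[0] + 2 * math.cos(alpha + theta), a[1] + 2 * math.sin(alpha + theta))
    p = (a[0] + math.cos(alpha + theta), a[1] + math.sin(alpha + theta))
    delta = math.atan2(c[1] - b[1], c[0] - b[0])  # direction from b towards c
    q = (b[0] + math.cos(delta), b[1] + math.sin(delta))
    return c, p, q
```

By construction $d(a,c)=d(c,b)=2$, and $p$, $q$ lie at unit distance from the respective centers; this can be checked directly on any admissible pair of centers.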
\begin{definition}\label{boundomega} Let:
\begin{itemize}
\item $w_1$ be the {\sc rlr} path consisting of an arc from $x$ to $p$ in $\mbox{\sc C}_ r(\mbox{\sc x})$; an arc from $p$ to $q$ in $C$; and an arc from $q$ to $y$ in $\mbox{\sc C}_ r(\mbox{\sc y})$, see equations (\ref{eq:i1}) and (\ref{eq:i3}).
\item $w_2$ be the {\sc lrl} path consisting of an arc from $x$ to $p'$ in $\mbox{\sc C}_ l(\mbox{\sc x})$; an arc from $p'$ to $q'$ in $C'$; and an arc from $q'$ to $y$ in $\mbox{\sc C}_ l(\mbox{\sc y})$, see equations (\ref{eq:i2}) and (\ref{eq:i4}).
\end{itemize}
\end{definition}
Next we make use of the formulae (\ref{eq:i1})-(\ref{eq:i4}) to characterize $\Omega\subset \mathbb R^2$ in terms of the coordinates of distinguished points.
\begin{definition}\label{omegadfn} Assume $\Gamma(\mbox{\sc x}, \mbox{\sc y})$ satisfies condition {\sc D}. Let $\Omega\subset \mathbb R^2$ be the bounded region whose boundary is given by the union of $w_1$ and $w_2$ in Definition \ref{boundomega}, see Fig. \ref{fig:Cr} right. In this case we say that $\mbox{\sc x,y}\in T\mathbb R^2$ {\it carries a region}.
\end{definition}
\section{Motivation through examples }\label{moduli}
In narrative terms, here we present facts in reverse chronology. By adopting this strategy, we are telling the reader ``the end of the story'' through various examples, with the intention of motivating the more technical steps and proofs.
We study the fibers in $\Gamma$ by fixing $\mbox{\sc x}=(x,X)\in T\mathbb R^2$ and a final position $y\in \mathbb R^2$ while varying the final direction $Y_{\theta}\in T_y\mathbb R^2$, $\theta \in \mathbb R$. In Section \ref{classrangesec} we construct a function, called the {\bf class range}, that assigns to each final position $y\in \mathbb R^2$ a non-negative real number. This number is called the class value and gives the range over which $\theta$ can vary so that the spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\theta})$ have the same types of connected components.
Firstly, we would like to point out that for a fixed $\theta\in \mathbb R$, the spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{k\theta})$ and $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{j\theta})$ may be different for $j\neq k\in \mathbb Z$. Secondly, consider $\mbox{\sc x}, \mbox{\sc y}_{\theta}\in T\mathbb R^2$ with $\theta=\pm \pi$. Since the initial and final tangent vectors are parallel with opposite sense, the pairs $\mbox{\sc x}, \mbox{\sc y}_{\pm \pi}\in T\mathbb R^2$ do not carry a region $\Omega\subset \mathbb R^2$. This is due to the existence of parallel tangents, see \cite{papere}.
\begin{definition}\label{defw} Consider $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$, $\theta\in (-\pi,\pi)$, so that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\theta})$ satisfies proximity condition {\sc D}. Let
\begin{itemize}
\item $\omega_- $ be the smallest value in $(-\pi,\pi)$ so that there exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item $\omega_+ $ be the greatest value in $(-\pi,\pi)$ so that there exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
An interval whose endpoints are $\omega_-$ and $\omega_+$ is denoted by $I(y)$. We refer to $\omega_- $ and $\omega_+$ as critical angles.
\end{definition}
\begin{remark}\hfill
\begin{itemize}
\item The critical angles $\omega_-$ and $\omega_+ $ depend on $y\in \mathbb R^2$ since in Remark \ref{coo} we established that $(x,X)\in T\mathbb R^2$ is fixed. Sometimes we write $\omega_-=\omega_-(y)$ and $\omega_+=\omega_+(y)$.
\item From the way we construct the class range function (see Definition \ref{classrange+}) the existence of $\omega_-$ and $\omega_+$ is guaranteed.
\end{itemize}
\end{remark}
The computations in Example \ref{paramex} and the illustrations in Fig. \ref{regparam} have been obtained computationally \cite{dubinsexplorer} by evaluating the class range function in Definition \ref{classrange+} via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}). Throughout this note, whenever we consider examples obtained computationally, angles are measured in degrees. The following ideas will be formalized in Sections \ref{classrangesec} and \ref{classdomain}, compare Definition \ref{def:spaces}.
\begin{example}[\bf parametrizing fibers]\label{paramex}
In Fig. \ref{regparam} top, we show a sequence for the variation of $\mbox{\sc x}, \mbox{\sc y}_\theta\in T\mathbb R^2$ while $\theta$ ranges over $I(y) \subsetneq (-180^{\circ},180^{\circ})$ illustrating the following example.
Consider $x=(0,0)\in \mathbb R^2$, $X=(1,0)\in T_x\mathbb R^2$, $y=(2.82,0)\in\mathbb R^2$, $Y_{\theta}=e^{i\theta}\in T_y\mathbb R^2$. We determine that $I(y)=[-109.47^{\circ},109.47^{\circ}]$ and conclude that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ admits (from left to right and counterclockwise) spaces of bounded curvature paths as follows:
\begin{itemize}
\item An isolated point for $\theta=-109.47^{\circ}$, see Theorem 3.9 in \cite{paperc}.
\item There exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in (-109.47^{\circ},109.47^{\circ})$ i.e., a one-parameter family of bounded isotopy classes, see Theorem \ref{existvect}. Also see Theorem 8.1 in \cite{paperc}.
\item An isolated point for $\theta=109.47^{\circ}$.
\item If $\theta \notin [-109.47^{\circ},109.47^{\circ}]$ then there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
In this case we say that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a {\bf fiber of type I}.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{regparam}
\caption{The grey regions are examples of $\Omega\subset \mathbb R^2$. We illustrate Example \ref{paramex}, computed and plotted according to Definition \ref{classrange+} via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}). The range where $\theta$ can vary is depicted in dark green.}
\label{regparam}
\end{figure}
In Fig. \ref{regparam} middle we show a sequence for the variation of $\mbox{\sc x}, \mbox{\sc y}_\theta\in T\mathbb R^2$ while $\theta$ ranges over $I(y) \subsetneq (-180^{\circ},180^{\circ})$ illustrating the following example.
Consider $x=(0,0)\in \mathbb R^2$, $X=(1,0)\in T_x\mathbb R^2$, $y=(2.5,-2)\in \mathbb R^2$, $Y_{\theta}=e^{i\theta}\in T_y\mathbb R^2$. We determine that $I(y)=[-48.36^{\circ},30.30^{\circ})$ and conclude that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ contains (from left to right and counterclockwise) spaces of bounded curvature paths such that:
\begin{itemize}
\item There exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in [-48.36^{\circ},30.30^{\circ})$ i.e., a one-parameter family of bounded isotopy classes.
\item An isolated point for $\theta =30.30^{\circ}$.
\item If $\theta \notin [-48.36^{\circ},30.30^{\circ})$ then there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
In this case we say that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a {\bf fiber of type II}.
In Fig. \ref{regparam} bottom we show a sequence for the variation of $\mbox{\sc x}, \mbox{\sc y}_\theta\in T\mathbb R^2$ while $\theta$ ranges over $I(y) \subsetneq (-180^{\circ},180^{\circ})$ illustrating the following example.
Consider $x=(0,0)\in \mathbb R^2$, $X=(1,0)\in T_x\mathbb R^2$, $y=(3,0.5)\in \mathbb R^2$ and $Y_{\theta}=e^{i\theta}\in T_y\mathbb R^2$. We determine that $I(y)=[-80.42^{\circ},60.55^{\circ}]$ and conclude that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ contains (from left to right and counterclockwise) spaces of bounded curvature paths such that:
\begin{itemize}
\item There exists a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in [-80.42^{\circ},60.55^{\circ}]$ i.e., a one-parameter family of bounded isotopy classes.
\item If $\theta \notin [-80.42^{\circ},60.55^{\circ}]$ then there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
In this case we say that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a {\bf fiber of type III}.
\end{example}
There are two more types of fibers, these will be discussed in Definition \ref{def:spaces}.
In Section \ref{classdomain} we characterize a region $B\subset \mathbb R^2$ so that the class range is well defined. This plane region corresponds exactly to the location for the final positions $y\in \mathbb R^2$ so that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ admits a family of bounded isotopy classes of bounded curvature paths.
Next we explain two types of configurations that will be of relevance when determining the extreme values of the class range function.
\section{Topological transitions}\label{crit}
Consider a {\sc cc} isolated point as shown in Fig. \ref{figgenpos} left. After a small clockwise continuous perturbation of the final direction (while fixing the final position) the resultant endpoints define a space that does not admit an isolated point, see Fig. \ref{figgenpos} middle and right. This is true since the paths at middle and right (the length minimizers in their respective spaces) are parallel homotopic to paths of arbitrary length due to the existence of parallel tangents, see Corollary 3.4 and Proposition 3.8 in \cite{papere}. By Corollary 7.13 in \cite{paperc} these paths are not elements in bounded isotopy classes.
It is fairly easy to see that a small counterclockwise perturbation on the final direction of the {\sc cc} isolated point in Fig. \ref{figgenpos} leads to spaces admitting a bounded isotopy class.
\begin{itemize}
\item[(1)] For certain fibers, the {\sc cc} isolated points are transitions between spaces with different types of connected components.
\end{itemize}
The previous observations say implicitly that for certain fibers the critical values $\omega_-$ and $\omega_+$ are achieved at spaces admitting {\sc cc} isolated points.
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth,angle=0]{figgenpos}
\caption{Two types of discontinuities. When varying the final vector of an isolated point (at the left) we obtain a length discontinuity. The third path shows that length discontinuities not only happen when perturbing directions of isolated points. Here $x=(0,0)\in \mathbb R^2$, $X=(1,0)\in T_x\mathbb R^2$, $y=(2,2)\in \mathbb R^2$, and $e^{i \theta }=Y_\theta \in T_y\mathbb R^2$ with $\theta_0=0^{\circ}$, $\theta_1=-6^{\circ}$, $\theta_2=-12^{\circ}$.}
\label{figgenpos}
\end{figure}
Recall that a necessary condition for the existence of a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y})$ is that $\mbox{\sc x,y}\in T\mathbb R^2$ satisfy:
$$d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))< 4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))< 4.$$
This is easy to see since: if $d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))\geq 4$, then a unit disk can be placed on the line joining $c_l(\mbox{\sc x})$ to $c_l(\mbox{\sc y})$ without overlapping with $C_l(\mbox{\sc x})$ or $C_l(\mbox{\sc y})$. This implies that bounded curvature paths may escape $\Omega\subset \mathbb R^2$ (after applying an operation of type {\sc II} in \cite{paperd} to the length minimiser in $\Omega$), contradicting Theorem 8.1 in \cite{paperc}. For details we refer the reader to Section 4 in \cite{paperc}. The same applies for $d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))\geq 4$.
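The necessary condition above is easy to check numerically. The following sketch assumes unit-radius adjacent circles and places the left and right centres of a vector $(p,e^{i\theta})$ at $p+(-\sin\theta,\cos\theta)$ and $p+(\sin\theta,-\cos\theta)$ respectively; this orientation convention and the function names are our own.

```python
import math

def adjacent_centres(p, theta):
    """Centres of the two unit circles tangent to the vector (p, e^{i theta}).

    Assumed convention: left centre at p + (-sin theta, cos theta),
    right centre at p + (sin theta, -cos theta).
    """
    px, py = p
    c_l = (px - math.sin(theta), py + math.cos(theta))
    c_r = (px + math.sin(theta), py - math.cos(theta))
    return c_l, c_r

def may_admit_bounded_class(x, y):
    """Necessary condition: d(c_l(x), c_l(y)) < 4 and d(c_r(x), c_r(y)) < 4."""
    (p, th1), (q, th2) = x, y
    cl1, cr1 = adjacent_centres(p, th1)
    cl2, cr2 = adjacent_centres(q, th2)
    return math.dist(cl1, cl2) < 4 and math.dist(cr1, cr2) < 4
```

Note that the condition is only necessary: when it fails, a unit disk fits between the corresponding circles and the space admits no bounded isotopy class.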
\begin{itemize}
\item[(2)] For certain fibers, the condition
\begin{equation} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))=4 \quad \mbox{and}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))=4 \label{con_t}\end{equation}
is a transition between spaces with different types of connected components.
\end{itemize}
We proved in Theorem 5.3 in \cite{paperd} that spaces $\Gamma(\mbox{\sc x},\mbox{\sc y})$ satisfying proximity condition A, that is,
\begin{equation*} d(c_l(\mbox{\sc x}),c_l(\mbox{\sc y}))\geq4 \quad \mbox{or}\quad d(c_r(\mbox{\sc x}),c_r(\mbox{\sc y}))\geq 4 \label{con_g}\end{equation*}
do not admit isotopy classes. Therefore, $\Omega=\emptyset$.
\section{Angular formulae}\label{trans}
We describe two types of auxiliary triangles that allow us to obtain (via continuous variations of their angles) the values of the class range function. These triangles are constructed out of information obtained from the given endpoints in $T\mathbb R^2$. We establish a correspondence between the angle variation in these auxiliary triangles and the types of connected components in $\Gamma(\mbox{\sc x}, \mbox {\sc y}_\theta)$, $\mbox{\sc x},\mbox{\sc y}_\theta\in T\mathbb R^2$, for each $\theta \in (-\pi,\pi)$.
Next we consider fibers whose critical values $\omega_-$ and $\omega_+$ are achieved at spaces admitting {\sc cc} isolated points as discussed in (1) in Section \ref{crit}.
\subsection{Short triangles}\label{short} Suppose $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$ are such that for some $\theta\in (-\pi,\pi)$ the adjacent circles $\mbox{\sc C}_l(\mbox{\sc x})$ and $\mbox{\sc C}_r(\mbox{\sc y}_{\theta})$ intersect at a single point. Then $\theta=\omega_-$ or $\theta=\omega_+$, see Fig. \ref{shortriang1B}. For $\theta=\omega_-$, construct the triangle whose vertices are $c_l({\mbox{\sc x}}), c_r(\mbox{\sc y}_{\omega_-})$ and $y$. It is immediate that $d(c_l({\mbox{\sc x}}), c_r(\mbox{\sc y}_{\omega_-}))=2$ and that $d(c_r(\mbox{\sc y}_{\omega_-}),y)=1$. For $\theta=\omega_+$, construct the triangle whose vertices are $c_r({\mbox{\sc x}}), c_l(\mbox{\sc y}_{\omega_+})$ and $y$, see Fig. \ref{shortriang2}. It is immediate that $d(c_r({\mbox{\sc x}}), c_l(\mbox{\sc y}_{\omega_+}))=2$ and that $d(c_l(\mbox{\sc y}_{\omega_+}),y)=1$.
The obvious observation that a triangle with sides of length $1$ and $2$ cannot have a third side of length greater than $3$ leads us to analyze the transitions in (2) in Section \ref{crit}.
\subsection{Long triangles}\label{long} Consider $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$, $\theta \in (-\pi,\pi)$ so that the adjacent circles $\mbox{\sc C}_l(\mbox{\sc x})$ and $\mbox{\sc C}_r(\mbox{\sc y}_{{\theta}})$ do not intersect. We construct the triangle whose vertices are $c_l({\mbox{\sc x}}), c_l(\mbox{\sc y}_\theta)$ and $y$, see Fig. \ref{fig:Triangle3}. Note that $d(c_l(\mbox{\sc y}_\theta),y)=1$, and $d(c_l({\mbox{\sc x}}), y)>3$, see Fig. \ref{fig:Triangle3}.
In case the adjacent circles $\mbox{\sc C}_r(\mbox{\sc x})$ and $\mbox{\sc C}_l(\mbox{\sc y}_\theta)$ do not intersect, we construct the triangle whose vertices are $c_r({\mbox{\sc x}}), c_r(\mbox{\sc y}_\theta)$ and $y$, see Fig. \ref{fig:Triangle4}. Note that $d(c_r(\mbox{\sc y}_\theta),y)=1$, and $d(c_r({\mbox{\sc x}}), y)>3$.
Next, we look closely at short and long triangles. Their sides are denoted by capital letters, while the lengths of their sides are denoted by lowercase letters, i.e., the side $S_i$ has length $s_i$.
We keep a certain degree of detail for {short triangles} and subsequently reduce the level of detail in the discussion of {long triangles}, relying on the analogy in ideas and notation with short triangles.
\subsection{Angular formulae for short triangles} \label{angformshort}
To avoid confusion, the abscissa and ordinate in the coordinate system in Remark \ref{coo} are denoted by $u$-axis and $v$-axis respectively.
Using the notation from Fig. \ref{shortriang1B} left, consider the triangle whose vertices are $c_l({\mbox{\sc x}}), c_r(\mbox{\sc y})$ and $y$. It is easy to see that this triangle has sides of length $a_1=2$, $b_1=1$, and $c_1=d(c_l(\mbox{\sc x}),y)$. In addition, since the endpoints are given, we can easily obtain the coordinates of the vertices of the triangle under consideration. By the law of cosines we can obtain the angles $\alpha_1$, $\beta_1$, and $\gamma_1$.
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{shortriangle11}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.87\linewidth]{shortriang1BBBB}
\label{fig:sfig2}
\end{subfigure}
\caption{Notation for short triangles. Right: The angular formulae apply when $\delta_1$ and $\omega_{-}$ (or $\omega_{+}$) have the same or opposite sign. Here $\ell$ is a line parallel to the horizontal axis.}
\label{shortriang1B}
\end{figure}
Let $\delta_{1}$ be the smallest angle made by the line joining $c_l(\mbox{\sc x})$ to $y$ and the $u$-axis. The angle $\delta_1=\arctan\big(\frac{v-1}{u}\big)$ is easy to obtain since $y=(u,v)\in \mathbb R^2$ is given.
\begin{remark}(Ruling out indeterminacies).\label{indet} Note that the standard arctan function allows us to compute angles in $(-\frac{\pi}{2}, \frac{\pi}{2})$. Since our computations involve angles in $(-\pi,\pi)$, we make use of the arctan2 function, which allows us to calculate the arctangent in all four quadrants, see equations (\ref{eq:delta_a}) and (\ref{eq:delta_xa}).
\end{remark}
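In code, the four-quadrant arctangent collapses the piecewise case analysis entirely: for a circle centred at $(0,c)$, the angle of the line to $y=(u,v)$ is arctan2$(v-c,\,u)$. A minimal sketch (the function names are ours):

```python
import math

def delta_piecewise(u, v, c):
    """Angle in (-pi, pi] of the line from the centre (0, c) to y = (u, v),
    written as the piecewise arctan definition used in the text."""
    if u > 0:
        return math.atan((v - c) / u)
    if u < 0:
        return math.atan((v - c) / u) + (math.pi if v >= c else -math.pi)
    return math.pi / 2 if v > c else -math.pi / 2   # u == 0, v != c

def delta_atan2(u, v, c):
    # The four-quadrant arctangent gives the same angle in one call.
    return math.atan2(v - c, u)
```

Both versions agree everywhere except at $(u,v)=(0,c)$, the centre itself, where the angle is undefined.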
Depending on the final position, the angles $\delta_1$ and $\omega_-$ may have the same or different sign. In Fig. \ref{shortriang1B} left we illustrate the case when $\delta_1<0$ and $\omega_-<0$. In Fig. \ref{shortriang1B} right we illustrate the case where $\delta_1<0$ and $\omega_->0$. Since $\alpha_{1}$ and $\alpha'_{1}$ are supplementary we have that $\alpha'_{1} = \pi - \alpha_{1}$.
From the previous analysis we obtain that $\omega_- = \delta_{1}-\alpha'_{1}+\frac{\pi}{2}$ or equivalently:
\begin{equation}
\label{eq:thetar_b}
\omega_- = \delta_{1}+\alpha_{1}-\frac{\pi}{2}.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.57\textwidth,angle=0]{shortriang33}
\caption{A critical configuration for short triangles. Note that $\delta_2>0$ and $\omega_+>0$.}
\label{shortriang2}
\end{figure}
Now we obtain a formula for $\omega_+$, see Fig. \ref{shortriang2}. Since $\alpha_{2}$ and $\alpha'_{2}$ are supplementary we have that $\alpha'_{2} = \pi - \alpha_{2}$.
We obtain that $\omega_+ = \delta_{2}+\alpha'_{2}-\frac{\pi}{2}$ or equivalently:
\begin{equation}
\label{eq:thetal_a}
\omega_{+} = \delta_{2}-\alpha_{2}+\frac{\pi}{2}.
\end{equation}
Here $\delta_{2}$ is the smaller angle made by the $u$-axis and the line joining $c_r(\mbox{\sc x})$ to $y$. It is easy to obtain the length of the side $C_2$.
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth,angle=0]{longtriang1111}
\caption{Notation for long triangles.}
\label{fig:Triangle3}
\end{figure}
\subsection{Angular formulae for long triangles}
Using the notation from Fig. \ref{fig:Triangle3} we present the following formulae:
\begin{equation}
\label{eq:thetar_c}
\omega_{-} = \delta_{3}-\alpha_{3}+\frac{\pi}{2}.
\end{equation}
Similarly we obtain,
\begin{equation}
\label{eq:thetal_b}
\omega_{+} = \delta_{4}+\alpha_{4}-\frac{\pi}{2}.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth,angle=0]{longtriang22}
\caption{Notation for long triangles.}
\label{fig:Triangle4}
\end{figure}
We put together equations (\ref{eq:thetar_b})-(\ref{eq:thetal_b}) to give explicit formulae for $\omega_-$ and $\omega_+$.
\begin{equation}
\label{eq:wmin}
\omega_{-} =
\begin{cases}
\delta_{1} + \alpha_{1} - \frac{\pi}{2} & \text{if } \,\,\,d(c_l(\mbox{\sc x}),y) < 3 \\
\delta_{3} - \alpha_{3} + \frac{\pi}{2} & \text{if } \,\,\,d(c_l(\mbox{\sc x}),y) \geq 3
\end{cases}
\end{equation}
\begin{equation}
\label{eq:wmax}
\omega_{+} =
\begin{cases}
\delta_{2} - \alpha_{2} + \frac{\pi}{2} & \text{if } \,\,\,d(c_r(\mbox{\sc x}),y) < 3 \\
\delta_{4} + \alpha_{4} - \frac{\pi}{2} & \text{if } \,\,\,d(c_r(\mbox{\sc x}),y) \geq 3
\end{cases}
\end{equation}
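Note that the two branches in each of (\ref{eq:wmin}) and (\ref{eq:wmax}) agree at the transition distance $3$: there the short-triangle angle degenerates to $\arccos(1)=0$ and the long-triangle angle to $\arccos(-1)=\pi$, so both branches of $\omega_-$ reduce to $\delta-\frac{\pi}{2}$ (and both branches of $\omega_+$ to $\delta+\frac{\pi}{2}$). A quick numerical check of this continuity (the helper names are ours):

```python
import math

def branch_short(delta, d):
    # omega_- = delta + alpha - pi/2 with alpha = arccos((d^2 - 3)/(2 d)).
    return delta + math.acos((d * d - 3) / (2 * d)) - math.pi / 2

def branch_long(delta, d):
    # omega_- = delta - alpha + pi/2 with alpha = arccos((d^2 - 15)/(2 d)).
    return delta - math.acos((d * d - 15) / (2 * d)) + math.pi / 2

# At the transition distance d = 3 both branches give delta - pi/2.
```

Here $d$ stands for the distance $d(c_l(\mbox{\sc x}),y)$ and $\delta$ for the corresponding auxiliary angle.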
\section{The class range} \label{classrangesec}
Choose $(x,X), (y,Y) \in T{\mathbb R}^2$ so that the origin is identified with $x$ as in Remark \ref{coo}. We consider the formulae (\ref{eq:wmin}) and (\ref{eq:wmax}) as the starting point for obtaining the class range function. The class value gives the range over which $\theta$ can vary continuously so that the spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\theta})$ have the same types of connected components. The class value corresponds to the length of the maximal subinterval $I(y)\subset (-\pi,\pi)$ such that there is a bounded isotopy class $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$, $\theta \in I(y)$.
Next we express the angles $\delta_i$ and $\alpha_i$ in (\ref{eq:wmin}) and (\ref{eq:wmax}) in terms of a generic $y=(u,v)\in \mathbb R^2$. Observe that $\delta_1=\delta_3$ since both are the acute angles made by the line joining $c_l(\mbox{\sc x})$ and $y$ with the $u$-axis. In addition, $\delta_2=\delta_4$ since both are the acute angles made by the line joining $c_r(\mbox{\sc x})$ and $y$ with the $u$-axis. Note that $\tan (\delta_1)=\frac{v-1}{u}$ and $\tan (\delta_2)=\frac{v+1}{u}$.
We use the arctan2 function in (\ref{eq:delta_a}) and (\ref{eq:delta_xa}), instead of the standard arctan function, to determine the angles $\delta_1$ and $\delta_2$, see Remark \ref{indet}. We obtain the following formulae:
\begin{equation}
\label{eq:delta_a}
\delta_{1}(u,v) =
\begin{cases}
\arctan \frac{v-1}{u} & \text{if } u > 0 \\
\arctan \frac{v-1}{u} + \pi & \text{if } u < 0 \,\, \text{and}\,\, v \geq 1 \\
\arctan \frac{v-1}{u} - \pi & \text{if } u < 0 \,\, \text{and}\,\, v<1\\
\frac{\pi}{2} & \text{if } u = 0 \,\, \text{and}\,\, v>1\\
-\frac{\pi}{2} & \text{if } u = 0 \,\, \text{and}\,\, v<1\\
\end{cases}
\end{equation}
\begin{equation}
\label{eq:delta_xa}
\delta_{2}(u,v) =
\begin{cases}
\arctan \frac{v+1}{u} & \text{if } u > 0 \\
\arctan \frac{v+1}{u} + \pi & \text{if } u < 0 \,\, \text{and}\,\, v \geq -1 \\
\arctan \frac{v+1}{u} - \pi & \text{if } u < 0 \,\, \text{and}\,\, v<-1\\
\frac{\pi}{2} & \text{if } u = 0 \,\, \text{and}\,\, v>-1\\
-\frac{\pi}{2} & \text{if } u = 0 \,\, \text{and}\,\, v<-1\\
\end{cases}
\end{equation}
Now we determine the angles $\alpha_{i}$ in (\ref{eq:wmin}) and (\ref{eq:wmax}). To this end, we apply the law of cosines. Since we are not considering degenerate triangles, the following formulae are never indeterminate.
\begin{equation*}
\alpha_{i} = \arccos\bigg(\frac{b_i^2+c_i^2-a_i^2}{2b_ic_i}\bigg)
\end{equation*}
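In code this step is a one-liner; e.g., in the $3$-$4$-$5$ right triangle the angle opposite the side of length $3$ is $\arccos(4/5)=\arctan(3/4)$. A sketch (the function name is ours):

```python
import math

def angle_from_sides(a, b, c):
    """Angle opposite side a in a triangle with side lengths a, b, c,
    via the law of cosines.  For a nondegenerate triangle the arccos
    argument lies strictly between -1 and 1."""
    return math.acos((b * b + c * c - a * a) / (2 * b * c))
```

The angles $\alpha_i$ below are all instances of this routine with $a_i\in\{2,4\}$ and $b_i=1$.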
Since the initial and final positions are given, the coordinates of the adjacent circles are easily obtained. In consequence, the lengths of the sides of the short and long triangles are easily obtained.
We can express the angles $\alpha_{i}$ as a function of $y=(u,v)\in \mathbb R^2$. That is:
\begin{equation}
\label{eq:alpha_1}
\alpha_{1}(u,v) = \arccos\bigg(\frac{u^2+(1-v)^2-3}{2\sqrt{u^2+(1-v)^2}}\bigg)
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:alpha_2}
\alpha_{2}(u,v) = \arccos\bigg(\frac{u^2+(-1-v)^2-3}{2\sqrt{u^2+(-1-v)^2}}\bigg)
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:alpha_3}
\alpha_{3}(u,v) = \arccos\bigg(\frac{u^2+(1-v)^2-15}{2\sqrt{u^2+(1-v)^2}}\bigg)
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:alpha_4}
\alpha_{4}(u,v) = \arccos\bigg(\frac{u^2+(-1-v)^2-15}{2\sqrt{u^2+(-1-v)^2}}\bigg)
\end{equation}
Note that we have expressed all the angles $\alpha_i$ and $\delta_i$ as functions of the variables $u$ and $v$. In addition, recall that in Definition \ref{defw} we considered the concept of critical angles $\omega_-$ and $\omega_+$. We abuse notation and define the functions $\omega_-:\mathbb R^2\to\mathbb R$ and $\omega_+:\mathbb R^2\to\mathbb R$. They have been constructed to match Definition \ref{defw}. These functions assign to each final position $y=(u,v)\in \mathbb R^2$ its respective critical angles $\omega_-(y)$ and $\omega_+(y)$.
We substitute equations (\ref{eq:delta_a})-(\ref{eq:alpha_4}) into equations (\ref{eq:wmin}) and (\ref{eq:wmax}) to obtain:
\begin{equation}
\label{eq:Wmin}
\omega_-(u,v) =
\begin{cases}
\begin{cases}
\arctan (\frac{v-1}{u}) + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } u > 0 \\
\begin{cases}
\arctan (\frac{v-1}{u}) + \pi + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } v \geq 1 \\
\arctan (\frac{v-1}{u}) - \pi + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } v < 1
\end{cases} & \text{if } u < 0 \\
\begin{cases}
\frac{\pi}{2} + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } v>1\\
-\frac{\pi}{2} + \arccos(\frac{(u^2+(1-v)^2)-3}{2\sqrt{u^2+(1-v)^2}}) - \frac{\pi}{2} & \text{if } v<1\\
\end{cases} & \text{if } u = 0 \\
\end{cases} & \text{if } d(\mbox{\it c}_l(\mbox{\sc x}), y) < 3 \\
\begin{cases}
\arctan (\frac{v-1}{u}) - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } u > 0 \\
\begin{cases}
\arctan (\frac{v-1}{u}) + \pi - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } v \geq 1 \\
\arctan (\frac{v-1}{u}) - \pi - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } v < 1
\end{cases} & \text{if } u < 0 \\
\begin{cases}
\frac{\pi}{2} - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } v>1\\
-\frac{\pi}{2} - \arccos(\frac{(u^2+(1-v)^2)-15}{2\sqrt{u^2+(1-v)^2}}) + \frac{\pi}{2} & \text{if } v<1\\
\end{cases} & \text{if } u = 0 \\
\end{cases} & \text{if } d(\mbox{\it c}_l(\mbox{\sc x}), y) \geq 3
\end{cases}
\end{equation}
\vspace{0em}
\begin{equation}
\label{eq:Wmax}
\omega_+(u,v) =
\begin{cases}
\begin{cases}
\arctan (\frac{v+1}{u}) - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } u > 0 \\
\begin{cases}
\arctan (\frac{v+1}{u}) + \pi - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } v \geq -1 \\
\arctan (\frac{v+1}{u}) - \pi - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } v < -1
\end{cases} & \text{if } u < 0\\
\begin{cases}
\frac{\pi}{2} - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } v>-1\\
-\frac{\pi}{2} - \arccos(\frac{(u^2+(-1-v)^2)-3}{2\sqrt{u^2+(-1-v)^2}}) + \frac{\pi}{2} & \text{if } v<-1\\
\end{cases} & \text{if } u = 0 \\
\end{cases} & \text{if } d(\mbox{\it c}_r(\mbox{\sc x}), y) < 3 \\
\begin{cases}
\arctan (\frac{v+1}{u}) + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } u > 0 \\
\begin{cases}
\arctan (\frac{v+1}{u}) + \pi + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } v \geq -1\\
\arctan (\frac{v+1}{u}) - \pi + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } v < -1
\end{cases} & \text{if } u < 0\\
\begin{cases}
\frac{\pi}{2} + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } v>-1\\
-\frac{\pi}{2} + \arccos(\frac{(u^2+(-1-v)^2)-15}{2\sqrt{u^2+(-1-v)^2}}) - \frac{\pi}{2} & \text{if } v<-1\\
\end{cases} & \text{if } u = 0 \\
\end{cases} & \text{if } d(\mbox{\it c}_r(\mbox{\sc x}), y) \geq 3
\end{cases}
\end{equation}
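The two displayed formulae can be transcribed numerically; the four-quadrant arctangent absorbs the case analysis on $u$ and $v$. The sketch below assumes, consistently with the signs above, that the adjacent circles of $\mbox{\sc x}$ are centred at $(0,1)$ and $(0,-1)$; the function names are ours.

```python
import math

# Numerical transcription of (eq:Wmin) and (eq:Wmax).  atan2 absorbs the
# case analysis on u and v.  Assumption: the adjacent circles of x are
# centred at (0, 1) and (0, -1), matching the signs in the formulae.

def _alpha(d, k):
    # Law-of-cosines angle arccos((d^2 - k)/(2 d)); k = 3 (short triangle)
    # or k = 15 (long triangle).  Requires d > 0, i.e. y is not a centre.
    return math.acos((d * d - k) / (2 * d))

def omega_minus(u, v):
    d = math.hypot(u, v - 1)                     # d(c_l(x), y)
    delta = math.atan2(v - 1, u)                 # delta_1 = delta_3
    if d < 3:                                    # short triangle
        return delta + _alpha(d, 3) - math.pi / 2
    return delta - _alpha(d, 15) + math.pi / 2   # long triangle

def omega_plus(u, v):
    d = math.hypot(u, v + 1)                     # d(c_r(x), y)
    delta = math.atan2(v + 1, u)                 # delta_2 = delta_4
    if d < 3:
        return delta - _alpha(d, 3) + math.pi / 2
    return delta + _alpha(d, 15) - math.pi / 2

def class_range(u, v):
    # Theta(y) = omega_+(y) - omega_-(y); see the next section.
    return omega_plus(u, v) - omega_minus(u, v)
```

Under this assumed convention the transcription vanishes, up to rounding, at boundary points such as $(4,0)$ and $(1,1)$, and is positive at interior points such as $(3,0)$.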
The indeterminate expressions $\frac{v-1}{u}$ and $\frac{v+1}{u}$ for $u=0$ and $v=\pm1$ can only occur at the centres of $C_l(\mbox{\sc x})$ and $C_r(\mbox{\sc x})$. By Corollary 3.4 in \cite{papere}, no path in a bounded isotopy class can have final position $(0,1)$ or $(0,-1)$. This is due to the existence of parallel tangents, contradicting Theorem 7.12 in \cite{paperc}.
\begin{definition} \label{classrange+} The class range function $\Theta: \mathbb R^2 \to \mathbb R$ is defined to be:
\label{eq:angularrange_1}
$$\Theta(y)= \omega_+(y) - \omega_-(y)\geq 0.$$
\end{definition}
It is important to note that the critical angles $\omega_-(y)$ and $\omega_+(y)$ are chosen so that $\omega_+(y)\geq\omega_-(y)$, see Definition \ref{defw}. In addition, note that $\omega_-(y)$ and $\omega_+(y)$ lie in the boundary of the closure of the interval $I(y)$. Of course, if $\omega_+(y) - \omega_-(y)<0$, then $I(y)=\emptyset$, so $\Omega=\emptyset$ and there is no bounded isotopy class $\Delta(\mbox{\sc x,y})$. The case $\Theta(y)=0$ is discussed below.
The interior, closure, boundary and complement of a set $B$ are denoted by $int(B)$, $cl(B)$, $\partial(B)$, and $B^c$ respectively.
Next we present data obtained after plotting the values of $\omega_+(y) - \omega_-(y)$.
\subsection{Facts about the class range function}\label{rem:data} (see Fig. \ref{figraf}).
\begin{itemize} \label{facts}
\item $\Theta$ is continuous.
\item The domain of $\Theta$ is a bounded set $B\subset \mathbb R^2$. In Section \ref{classdomain} we determine $B$ and its subdivisions. In these subdivisions lie the final positions $y\in \mathbb R^2$ so that $\Gamma(\mbox{\sc x},\mbox{\sc y}_\theta)$ are fibers of the same type, $\mbox{\sc y}_\theta=(y,Y_\theta)$, see Definition \ref{def:spaces}.
\item If $y\in int(B)$, then $\Theta(y)>0$.
\item If $y \in \partial (cl(B))$, then $\Theta(y)=0$.
\item If $y\in B^c$, then $\omega_+(y) - \omega_-(y)<0$. In this case, $I(y)=\emptyset$, and so $\Omega=\emptyset$ or equivalently there is no bounded isotopy class $\Delta(\mbox{\sc x,y})$.
\item The range of $\Theta$ is the interval $[0, \arctan \big(\frac{1}{4}\sqrt{2}\big)+\pi]$.
\item $\Theta$ attains its minimum value at the final positions $y\in \mathbb R^2$ corresponding to {\sc c} (or {\sc cc}) isolated points. Here we have that $\Theta(y)=0$.
\item $\Theta$ attains a maximum at $y=(0,2\sqrt{2})$ with $\Theta(y)= \arctan \big(\frac{1}{4}\sqrt{2}\big)+\pi$. In Fig. \ref{regparam} top we illustrate the class range for $y=(0,2\sqrt{2})$.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{graph}
\caption{The graph of the class range function, see Definition \ref{classrange+}. Note that the class range is constructed out of (\ref{eq:Wmin}) and (\ref{eq:Wmax}) and that these functions are obtained by combining (\ref{eq:wmin}) and (\ref{eq:wmax}).}
\label{figraf}
\end{figure}
\section{Class domain}\label{classdomain}
The obvious observation that a triangle with sides of length $1$ and $2$ cannot have a third side of length greater than $3$ leads us to study the cases:
\begin{equation}\label{ineq:1} d(c_l(\mbox{\sc x}),y) < 3
\end{equation}
\begin{equation}\label{ineq:2} d(c_l(\mbox{\sc x}),y) \geq 3
\end{equation}
\begin{equation}\label{ineq:3} d(c_r(\mbox{\sc x}),y) < 3
\end{equation}
\begin{equation}\label{ineq:4} d(c_r(\mbox{\sc x}),y) \geq 3
\end{equation}
After looking at the four possible combinations for:
\begin{equation} \label{eq:0}
\omega_-=\omega_+
\end{equation}
in equations (\ref{eq:wmin}) and (\ref{eq:wmax}) we obtain:
\begin{equation} \label{eq:1}
\delta_{1} + \alpha_{1} - \frac{\pi}{2} = \delta_{2} - \alpha_{2} + \frac{\pi}{2}
\end{equation}
\vspace{-.7em}
\begin{equation} \label{eq:2}
\delta_{3} - \alpha_{3} + \frac{\pi}{2} = \delta_{2} - \alpha_{2} + \frac{\pi}{2}
\end{equation}
\vspace{-.7em}
\begin{equation}\label{eq:3}
\delta_{1} + \alpha_{1} - \frac{\pi}{2} = \delta_{4} + \alpha_{4} - \frac{\pi}{2}
\end{equation}
\begin{equation}\label{eq:4}
\delta_{3} - \alpha_{3} + \frac{\pi}{2} = \delta_{4} + \alpha_{4} - \frac{\pi}{2}
\end{equation}
The following observations regarding the circles (\ref{eq:circ_a})-(\ref{eq:circ_g}) can be checked by direct evaluation. We leave the details to the reader.
Consider $\mbox{\sc x}\in T\mathbb R^2$ according to Remark \ref{coo}. The final positions $y=(u,v)\in \mathbb R^2$ with $u\geq0$ lying on the circle (\ref{eq:circ_a}) are such that the angles in the associated triangles (constructed according to Section \ref{trans}) satisfy (\ref{eq:4}), see Figs. \ref{fig:Triangle3}-\ref{fig:Triangle4}. In addition, $\Theta(u,v)=0$ for points in (\ref{eq:circ_a}) with $u\geq0$.
\begin{equation}
\label{eq:circ_a}
u^2+v^2=16.
\end{equation}
The loci of the circles (\ref{eq:circ_d}) and (\ref{eq:circ_e}) for $u\geq 0$ are satisfied by the final positions $y=(u,v)\in \mathbb R^2$ so that its associated angles according to subsection \ref{short} satisfy (\ref{eq:1}). In addition, $\Theta(y)=0$ for points in (\ref{eq:circ_d}) and (\ref{eq:circ_e}) with $u\geq0$.
\begin{equation}
\label{eq:circ_d}
u^2+(v-1)^2=1
\end{equation}
\begin{equation}
\label{eq:circ_e}
u^2+(v+1)^2=1
\end{equation}
The locus of the circle (\ref{eq:circ_f}) is satisfied by the final positions $y=(u,v)\in \mathbb R^2$ with $u\leq 0$ whose associated angles according to subsections \ref{short} and \ref{long} satisfy (\ref{eq:2}).
The locus of the circle (\ref{eq:circ_g}) is satisfied by the final positions $y=(u,v)\in \mathbb R^2$ with $u\leq 0$ whose associated angles according to subsections \ref{short} and \ref{long} satisfy (\ref{eq:3}). In addition, $\Theta(y)=0$ for points in (\ref{eq:circ_f}) and (\ref{eq:circ_g}) with $u\leq0$.
\begin{equation}
\label{eq:circ_f}
u^2+(v-3)^2=1
\end{equation}
\begin{equation}
\label{eq:circ_g}
u^2+(v+3)^2=1
\end{equation}
The circles (\ref{eq:circ_b}) and (\ref{eq:circ_c}) are trivially extracted out of relations (\ref{ineq:1})-(\ref{ineq:4}).
\begin{equation}
\label{eq:circ_b}
u^2+(v-1)^2=9
\end{equation}
\vspace{-2em}
\begin{equation}
\label{eq:circ_c}
u^2+(v+1)^2=9
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{figRegHeat4}
\caption{Left: The domain $B\subset \mathbb R^2$ of $\Theta$. Note that $B$ is bounded by the circles (\ref{eq:circ_a})-(\ref{eq:circ_c}). Right: The temperature gives the length of $I(y)$ for $y\in B$.}
\label{figRegionB}
\end{figure}
\subsection{Description of $B\subset \mathbb R^2$}\label{descB}
We obtain the domain of the class range function by evaluating (\ref{eq:Wmin}) and (\ref{eq:Wmax}) according to Definition \ref{classrange+}; this planar set is represented by the coloured portion in Fig. \ref{figRegionB} left. In Fig. \ref{figRegionB} right we show a heat map of the class values.
\begin{definition}The domain $B\subset \mathbb R^2$ of the class range function $\Theta:\mathbb R^2\to\mathbb R$ corresponds to the union of: the open bounded region enclosed by the simple closed curve formed by the semicircles (\ref{eq:circ_d}) and (\ref{eq:circ_e}) for $u\geq0$, (\ref{eq:circ_f}) and (\ref{eq:circ_g}) for $u\leq0$, and (\ref{eq:circ_a}) for $u\geq0$; the semicircles (\ref{eq:circ_d}) and (\ref{eq:circ_e}) for $u>0$; and the origin.
\end{definition}
Observe that the semicircles (\ref{eq:circ_d}) and (\ref{eq:circ_e}) for $u>0$ are the locations where {\sc c} isolated points are defined. In addition, $\Theta$ is continuous but not differentiable at the intersection of $B$ with the circles (\ref{eq:circ_f}) and (\ref{eq:circ_g}), see the list of facts in \ref{facts}.
\begin{definition} \label{cbc} Let $\mbox{\sc x,y}\in T\mathbb R^2$. Then
\begin{itemize}
\item $y\in B_{1}\subset{B}$ if $d(c_l(\mbox{\sc x}), y) < 3$ and $d(c_r(\mbox{\sc x}), y) < 3$;
\item $y\in B_{2}\subset{B}$ if $d(c_l(\mbox{\sc x}), y) < 3$ and $d(c_r(\mbox{\sc x}), y) \geq 3$, or \\
$d(c_l(\mbox{\sc x}), y) \geq 3$ and $d(c_r(\mbox{\sc x}), y) < 3$;
\item $y\in B_{3}\subset{B}$ if $d(c_l(\mbox{\sc x}), y) \geq 3$ and $d(c_r(\mbox{\sc x}), y) \geq 3$;
\item set $B_4=B^c$.
\end{itemize}
\end{definition}
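The case analysis of Definition \ref{cbc} is easy to automate. The following sketch classifies a final position by the two distance conditions only, again assuming the adjacent circles of $\mbox{\sc x}$ are centred at $(0,1)$ and $(0,-1)$; membership in $B$ itself, which separates $B_3$ from $B_4=B^c$, is not tested here, and the names are ours.

```python
import math

def distance_class(u, v):
    """Classify y = (u, v) by the distance conditions of the definition.

    Assumption: c_l(x) = (0, 1) and c_r(x) = (0, -1).  Points outside B
    that satisfy both far conditions would also be reported as "B3" here.
    """
    near_l = math.hypot(u, v - 1) < 3    # d(c_l(x), y) < 3
    near_r = math.hypot(u, v + 1) < 3    # d(c_r(x), y) < 3
    if near_l and near_r:
        return "B1"   # two short triangles  -> fiber of type I
    if near_l or near_r:
        return "B2"   # one short, one long  -> fiber of type II
    return "B3"       # two long triangles   -> fiber of type III
```

The comments record the correspondence with the fiber types established in Theorem \ref{maincensus1} below.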
\begin{theorem}\label{existvect} Given $\mbox{\sc x} \in T\mathbb R^2$ and $y\in B$, there exists a family $e^{i\theta}=Y_\theta\in T_y\mathbb R^2$, $\theta\in (-\pi,\pi)$, such that for $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a one-parameter family of bounded isotopy classes.
\end{theorem}
\begin{proof} Consider $\mbox{\sc x} \in T\mathbb R^2$. Recall that the values of $\Theta$ are determined by a combination of short and long triangles, according to subsections \ref{short} and \ref{long}.
Since $y\in B$, we have $\Theta(y)\geq0$, see the facts in \ref{rem:data}.
Suppose that $\Theta(y)>0$; then $\omega_-\neq \omega_+$. This immediately implies that $Y_{\omega_-}\neq Y_{\omega_+}$. We conclude that the bounded classes $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ and $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ are distinct. Since the family varies continuously with $\theta$, by the intermediate value theorem the result follows. If $\Theta(y)=0$, then $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\theta})$ is a {\sc c} isolated point and $I(y)$ is a single point.
\end{proof}
\begin{definition}\label{def:spaces}\hfill
\begin{enumerate}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ such that:
\begin{itemize}
\item for $\theta=\omega_-$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ is a {\sc cc} isolated point in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$;
\item for $\theta \in (\omega_-,\omega_+)$ there exists a bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item for $\theta=\omega_+$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ is a {\sc cc} isolated point in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$;
\item for $\theta \notin [\omega_-,\omega_+]$ there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$
\end{itemize}
is called a {\bf fiber of type I}.
\vspace{.2cm}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ such that:
\begin{itemize}
\item for $\theta=\omega_-$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ is a {\sc cc} isolated point in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$;
\item for $\theta \in (\omega_-,\omega_+]$ there exists a bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item for $\theta \notin (\omega_-,\omega_+]$ there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$.
\end{itemize}
Or,
\begin{itemize}
\item for $\theta=\omega_+$ we have that $\Delta(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ is a {\sc cc} isolated point in $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$;
\item for $\theta \in [\omega_-,\omega_+)$ there exists a bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item for $\theta \notin [\omega_-,\omega_+)$ there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$
\end{itemize}
is called a {\bf fiber of type II}.
\vspace{.2cm}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ such that:
\begin{itemize}
\item for $\theta \in [\omega_-,\omega_+]$ there exists a bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$;
\item for $\theta \notin [\omega_-,\omega_+]$ there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$
\end{itemize}
is called a {\bf fiber of type III}.
\vspace{.2cm}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ such that:
\begin{itemize}
\item there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for all $\theta \in (-\pi,\pi]$
\end{itemize}
is called a {\bf fiber of type IV}.
\vspace{.2cm}
\item A family of spaces $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is called a {\bf fiber of type V} if $x=y$. In this case, each $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$, $\theta\in (-\pi,\pi]$, admits an isolated point, namely a path of length zero. In addition, there is no bounded $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for any $\theta \in (-\pi,\pi]$.
\end{enumerate}
\end{definition}
\begin{theorem}\label{maincensus1}Consider $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ with $x\neq y$.
Suppose $y\in B\subset \mathbb R^2$, then:
\begin{itemize}
\item If $y\in B_1$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type I.
\item If $y\in B_2$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type II.
\item If $y\in B_3$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type III.
\end{itemize}
If $y\in B_4=B^c$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type IV.
\noindent If $x=y$, then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type V.
\end{theorem}
\begin{proof} If $y\in B_1$, the endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ have two associated short triangles, according to equations (\ref{eq:thetar_b}) and (\ref{eq:thetal_a}). Via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}) we obtain the values $\omega_-$ and $\omega_+$. Since $\Theta(y)>0$ (see the facts in \ref{rem:data}), Theorem \ref{existvect} guarantees the existence of a family of bounded isotopy classes $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $ \theta \in (\omega_-,\omega_+)=I(y)$. Note that by construction $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ and $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ each admit a {\sc cc} isolated point.
If $y\in B_2$, then the endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ have one associated short triangle and one associated long triangle, via a combination of equations (\ref{eq:thetar_b}) or (\ref{eq:thetal_a}) with (\ref{eq:thetar_c}) or (\ref{eq:thetal_b}). Via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}) we obtain the values $\omega_-$ and $\omega_+$. Since $\Theta(y)>0$ (see the facts in \ref{rem:data}), Theorem \ref{existvect} guarantees the existence of a family $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in [\omega_-,\omega_+)=I(y)$ (or $\theta \in (\omega_-,\omega_+]$). Note that by construction $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_-})$ or $\Gamma(\mbox{\sc x}, \mbox{\sc y}_{\omega_+})$ admits a {\sc cc} isolated point.
If $y\in B_3$, the endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ have two associated long triangles, according to equations (\ref{eq:thetar_c}) and (\ref{eq:thetal_b}). Via equations (\ref{eq:Wmin}) and (\ref{eq:Wmax}) we obtain the values $\omega_-$ and $\omega_+$. Since $\Theta(y)>0$, again Theorem \ref{existvect} guarantees the existence of a family $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ for $\theta \in [\omega_-,\omega_+]=I(y)$.
If $y\in B_4=B^c$ we have that $\omega_+-\omega_-<0$, implying that there is no bounded isotopy class.
If $x=y$, then by Theorem 3.9 in \cite{paperc} we conclude that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$, $\theta\in (-\pi,\pi]$, admits an isolated point, namely a path of length zero. Since $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ admits only closed paths, these have parallel tangents, see \cite{papere}. Therefore, these closed paths are bounded-homotopic to paths of arbitrary length, see Proposition 3.8 in \cite{papere}. By Theorem 7.12 in \cite{paperc} none of these paths can be in a bounded isotopy class. Therefore, there is no bounded isotopy class for any $\theta \in (-\pi,\pi]$.
\end{proof}
It is easy to see that there is a natural correspondence between $B\times I(y)$ and $\mathcal B$. Equivalently, there is a correspondence between $B\times I(y)$ and the elements $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ for which there exists a bounded isotopy class.
\begin{theorem} \label{noopnoclofib} The set of endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ so that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type:
\begin{itemize}
\item I, II, or III is bounded and neither open nor closed in $T\mathbb R^2$.
\item IV is unbounded and neither open nor closed in $T\mathbb R^2$.
\item V is a unit circle.
\end{itemize}
\end{theorem}
\begin{proof} Consider $\mbox{\sc x} \in T\mathbb R^2$ and $y\in B$. Since $B\subset \mathbb R^2$ and $I(y)\subset (-\pi,\pi)$ are both bounded, we have that $B\times I(y)$ is bounded. Note that $B$ is not open, since it contains the positive abscissa of the circles (\ref{eq:circ_c}) and (\ref{eq:circ_d}), i.e., the image of {\sc c} isolated points. The set $B$ is not closed since the point $y=(0,1)$ is in the closure of $B$ but not in $B$, due to the existence of parallel tangents, see Proposition 3.8 in \cite{papere}.
For the second statement, note that the set of endpoints $\mbox{\sc x},\mbox{\sc y}_\theta \in T\mathbb R^2$ such that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type $IV$ is unbounded, since fibers of type $IV$ have their final positions in $B_4=B^c$, and this set is unbounded. Since $B$ is neither open nor closed, the same holds for its complement. Therefore $B_4\times (-\pi,\pi]$ is unbounded and neither open nor closed.
Recall that the endpoints $(x,X),(y,Y_\theta)\in T\mathbb R^2$ with $x=y$ are such that $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type $V$. Since $(x,X)$ remains fixed while $Y_\theta=e^{\theta i}$, $\theta\in (-\pi,\pi]$, the result follows.
\end{proof}
Next, we establish that $\mathcal B$ is bounded and neither open nor closed, answering a question raised by Dubins on p. 480 of \cite{dubins 2}.
\begin{corollary} \label{noopnoclo} $\mathcal B\subset T\mathbb R^2$ is neither open nor closed.
\end{corollary}
\begin{proof} Immediate from Theorem \ref{noopnoclofib} and the obvious correspondence between $\mathcal B$ and $B\times I(y)$.
\end{proof}
\begin{corollary}\label{cor:param}Consider $\mbox{\sc x}, \mbox{\sc y}_\theta \in T\mathbb R^2$ with $y\in B$. Then:
\begin{itemize}
\item isolated points of zero length are parametrized in the unit circle.
\item isolated points of type {\sc c} are parametrized in $(0,\pi)\sqcup (0,\pi)$.
\item isolated points of type {\sc cc} are parametrized in $$ (0,\pi)\times (0,\pi) \sqcup (0,\pi)\times (0,\pi).$$
\item The bounded isotopy classes are parametrized in $B\times I(y)$.
\end{itemize}
\end{corollary}
\begin{proof} The first and fourth statements were proven in Theorem \ref{noopnoclofib}. The second and third statements are immediate.
\end{proof}
\section{On the classification of the homotopy classes of bounded curvature paths}
Next we present an updated version of Theorem 6.2 in \cite{paperd} by considering the existence of isotopy classes in terms of the values of the class range function.
We first revise Remark \ref{gammaparameter}. Given $\mbox{\sc x}\in T{\mathbb R}^2$,
$$\Gamma=\bigcup_{\substack{{y \in \mathbb R^2}\\ \theta \in (-\pi,\pi]}}\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta).$$
Given $\mbox{\sc x}, \mbox{\sc y} \in T\mathbb R^2$, let
$$\Gamma(\mbox{\sc x}, \mbox{\sc y})=\bigcup_{\substack{n \in \mathbb Z}}\Gamma(n)$$
where
$$\Gamma(n)=\{\gamma\in \Gamma(\mbox{\sc x,y}): \tau(\gamma)=n, n\in \mathbb Z\},$$
with $\tau(\gamma)$ being the turning number\footnote{In \cite{paperd} we used the analogous idea of turning number by considering closed paths.} of $\gamma$, see Definition 4.1 in \cite{paperb}.
For each $\theta \in (-\pi,\pi]$ we have that,
$$\Gamma_n(\mbox{\sc x}, \mbox{\sc y}_\theta)=\{\gamma\in \Gamma(\mbox{\sc x},\mbox{\sc y}_\theta): \tau(\gamma)=n, n\in \mathbb Z\}.$$
Suppose that $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma_k(\mbox{\sc x}, \mbox{\sc y}_\theta)$, for some $\theta\in (-\pi,\pi]$, $k\in \mathbb Z$. The space $\Delta'(\mbox{\sc x}, \mbox{\sc y}_\theta)\subset \Gamma_k(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is the space of paths bounded-homotopic to paths with self-intersections. In \cite{paperd} we proved that:
$$\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)\cup \Delta'(\mbox{\sc x}, \mbox{\sc y}_\theta)=\Gamma_k(\mbox{\sc x}, \mbox{\sc y}_\theta).$$
The proof of Theorem \ref{paramclass} is immediate from the facts in \ref{rem:data}, Theorem \ref{noopnoclofib} and Theorem 6.2 in \cite{paperd}.
\begin{theorem}\label{paramclass} For $\mbox{\sc x}, \mbox{\sc y}_{\theta} \in T\mathbb R^2$ we have that:
\begin{equation}
\label{eq:main1}
\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)=\bigcup_{\substack{n \in \mathbb Z}}\Gamma_n(\mbox{\sc x}, \mbox{\sc y}_\theta),\hspace{.2cm} \theta\in(-\pi,\pi].
\end{equation}
\begin{enumerate}
\item If $\Theta(y)>0$ there exists a family of bounded isotopy classes $\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta)$ so that:
\begin{equation}
\label{eq:main2}
\Gamma_k(\mbox{\sc x}, \mbox{\sc y}_\theta)=\Delta(\mbox{\sc x}, \mbox{\sc y}_\theta) \cup \Delta'(\mbox{\sc x}, \mbox{\sc y}_\theta), \hspace{.2cm} \mbox{for}\hspace{.2cm} \theta\in I(y), \hspace{.2cm} \mbox{and some}\hspace{.2cm} k\in \mathbb Z.
\end{equation}
In particular:
\begin{itemize}
\item If $y\in B_1$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type I.
\item If $y\in B_2$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type II.
\item If $y\in B_3$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type III.
\item If $y\in B_4$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type IV.
\item If $y=x$ then $\Gamma(\mbox{\sc x}, \mbox{\sc y}_\theta)$ is a fiber of type V.
\end{itemize}
\item If $y\in B$, $y\neq x$ and $\Theta(y)=0$ we may have a {\sc c} or a {\sc cc} isolated point.
\item If $\omega_+(y)-\omega_-(y)<0$ we have that there is no bounded isotopy class.
\end{enumerate}
\end{theorem}
\begin{center} {\sc Appendix\\ Homotopy classes and deformations of Dubins paths}
\end{center}
We would like to motivate a theory analyzing algorithmic aspects of deformations of bounded curvature paths of piecewise constant curvature. Many standard questions in computational geometry can be adapted for this class of paths.
In \cite{paperd} we defined operations on bounded curvature paths given as finite concatenations of line segments and arcs of unit-radius circles, the so-called $cs$ paths. The line segments and arcs of circles are called components. The number of components is called the complexity of the path\footnote{Dubins paths have complexity at most 3.}. Also in \cite{paperd}, we proved that a $cs$ path can be constructed arbitrarily close to any given bounded curvature path. It is of interest to study the computational complexity of deforming $cs$ paths.
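For concreteness, a $cs$ path admits a simple computational representation. The following Python sketch (a hypothetical encoding added for illustration, not taken from \cite{paperd}) stores a path as a list of components and recovers its length and complexity.

```python
import math

# Hypothetical encoding: ('s', length) is a line segment, ('a', signed_angle) is an
# arc of a unit-radius circle traversing the given angle (positive = counterclockwise).
def cs_length(components):
    """Length of a cs path: segments contribute their length, unit arcs |angle|."""
    total = 0.0
    for kind, value in components:
        if kind == 's':
            total += value
        elif kind == 'a':
            total += abs(value)  # arc length = radius * |angle| with radius 1
        else:
            raise ValueError(f"unknown component kind: {kind!r}")
    return total

def complexity(components):
    """Number of components; Dubins paths have complexity at most 3."""
    return len(components)
```

For example, a {\sc csc} path made of a quarter turn, a segment of length $2$, and an opposite quarter turn has complexity $3$ and length $2+\pi$.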
It is not hard to see that for any given $\mbox{\sc x,y}\in T\mathbb R^2$ the space $\Gamma(\mbox{\sc x,y})$ has a finite number of Dubins paths. In Example \ref{ex:spaces} we index Dubins paths according to their length. The length minimizer in $\Gamma(\mbox{\sc x,y})$ is denoted by $\gamma_0$.
Next, we relate the types of connected components, the number of local minima, the number of global minima, the existence of local maxima, and deformations of $cs$ paths. In Fig. \ref{fig:spaces} we consider seven illustrations, and in Example \ref{ex:spaces} we consider seven items; illustrations and items correspond in the natural way.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{spaces8}
\caption{A schematic representation for spaces of bounded curvature paths. Each of the seven illustrations represent the connected components $\Gamma(n)\subset \Gamma(\mbox{\sc x,y})$ admitting Dubins paths, here represented by points. Points with the same color suggest that the associated paths are bounded-homotopic, see Figs. \ref{sixdubb1}, \ref{fourdub}-\ref{sixdubb3}.}
\label{fig:spaces}
\end{figure}
\begin{example}\label{ex:spaces}Consider $x=(0,0)$, $X=e^{2\pi i}\in T_x\mathbb R^2$. In Figure \ref{fig:spaces} we illustrate spaces $\Gamma(\mbox{\sc x,y})$ such that: \hfill
\begin{enumerate}
\item $y=(z,0)$, $z\in\mathbb R^+$, $Y=e^{2\pi i}\in T_y\mathbb R^2$. This example corresponds to the Euclidean geometry case (up to isometries) where the single length minimizer between any two points is a line segment.
\item $y=(4,-8)$, $Y=e^{2\pi i}\in T_y\mathbb R^2$. There are four Dubins paths, $\gamma_0$ being the length minimizer, see Fig. \ref{fourdub}. The paths $\gamma_0$ and $\gamma_3$ are bounded-homotopic. This is checked in Proposition 4.3 and Fig. 13 in \cite{paperd}.
\item $y=(z,0)$, $z\geq 4$, $Y=e^{\pi i}\in T_y\mathbb R^2$. There are four Dubins paths, with $\gamma_0$ and $\gamma_1$ being length minimizers. These four paths are not bounded-homotopic one to the other. In Fig. 1 in \cite{paperb} we illustrate the two length minimizers.
\item $y=(-2,1)$, $Y=e^{-\frac{\pi}{4} i}\in T_y\mathbb R^2$. There are six Dubins paths, one being the length minimizer, see Fig. \ref{sixdubb1}. The pairs $\gamma_0,\gamma_5$; $\gamma_1,\gamma_4$; and $\gamma_2,\gamma_3$ are bounded-homotopic. This can be verified by applying Proposition 4.4 in \cite{paperd}.
\item $y=(3,0)$, $Y=e^{\frac{\pi}{3} i}\in T_y\mathbb R^2$. Since $y\in B$, then $\Theta(y)>0$, so there exists a bounded isotopy class $\Delta(\mbox{\sc x,y})$, or equivalently $\Omega\neq \emptyset$. In this case, the length minimizer in $\Gamma(\mbox{\sc x,y})$ is a unique {\sc csc} path and it is an element in $\Delta(\mbox{\sc x,y})$. This is a consequence of Proposition 2.13 in \cite{papera} and Theorem 8.1 in \cite{paperc}.
Note that there are eight Dubins paths, one being the length minimizer ($\gamma_0$ lies in $\Omega$), see Fig. \ref{sixdubb2}. The paths $\gamma_0$, $w_1$ and $w_2$ are bounded-isotopic to one another since they are paths in $\Delta(\mbox{\sc x,y})$, see Theorem 5.4 in \cite{paperd}. It seems plausible that $w_1$ and $w_2$ are local maxima (not local minima) of length. In addition, the path $\gamma_3$ is the length minimizer in $\Delta'(\mbox{\sc x,y})$.
\item There are six Dubins paths, the length minimizer being an isolated point, see Fig. \ref{sixdubb3} and Theorem 3.9 in \cite{paperc}. By a similar argument to the one in Proposition 4.4 in \cite{paperd} we conclude that $\gamma_1$ is bounded-homotopic to $\gamma_4$.
\item $x=y$ and $X=Y$. Closed bounded curvature paths are not bounded-homotopic to a single point. In this case, the length minimizer $\gamma_0$ is an isolated point of length zero, see Theorem 3.9 in \cite{paperc}. There are two non-trivial length minimizers, say $\gamma_1$ and $\gamma_2$. These paths lie in the adjacent circles $C_l(\mbox{\sc x})$ and $C_r(\mbox{\sc x})$ respectively. It is easy to see that $\gamma_1$ and $\gamma_2$ lie in different homotopy classes since they have winding numbers $1$ and $-1$ respectively, see Theorem 4.6 in \cite{paperb}.
\end{enumerate}
\end{example}
After the previous examples, a natural task would be to determine for any pair of endpoints $\mbox{\sc x,y}\in T\mathbb R^2$ the exact number of homotopy classes admitting Dubins paths. This should be done after first describing all the possible scenarios for homotopies between Dubins paths.
A closely related problem is the following. Given $\mbox{\sc x},\mbox{\sc y}_\theta\in T\mathbb R^2$, describe how the number of Dubins paths varies as $\theta$ ranges over $(-\pi,\pi]$. Also, describe how the types of all (up to eight?) Dubins paths vary for all the fibers. A description of the type of the global minimum (first Dubins path) has been obtained in \cite{bui}.
Given $\mbox{\sc x},\mbox{\sc y}\in T\mathbb R^2$, what are the minimal-length $cs$ paths of complexity $n>3$? For certain pairs the answer is trivial. What if $d(x,y)<4$?
Given two $cs$ paths of prescribed complexity lying in the same homotopy class, what is the minimal number of operations (or moves) needed to deform one path into the other? What are these moves?
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth,angle=0]{fourdub}
\caption{Spaces of bounded curvature paths may have four local minima of length. Note that $\gamma_0$ and $\gamma_3$ are bounded-homotopic, see Proposition 4.3 in \cite{paperd}.}
\label{fourdub}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth,angle=0]{sixdubbbyn1}
\caption{Spaces of bounded curvature paths may have up to eight {\sc csc-ccc} paths, local minima (or maxima) of length. Note that $\gamma_3$ and $\gamma_6$ are bounded-homotopic.}
\label{sixdubb2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth,angle=0]{sixdubbbyn3}
\caption{Spaces of bounded curvature paths may have six local minima of length. Note that $\gamma_1$ and $\gamma_4$ are bounded-homotopic. It is of interest to classify the fibers $\Gamma(\mbox{\sc x},\mbox{\sc y}_\theta)$ in terms of the way the type and number of Dubins paths changes as $\theta$ varies, see also Figs. \ref{sixdubb1}, \ref{fourdub} and \ref{sixdubb2}. }
\label{sixdubb3}
\end{figure}
\bibliographystyle{amsplain}
| {
"timestamp": "2020-05-28T02:11:02",
"yymm": "2005",
"arxiv_id": "2005.13210",
"language": "en",
"url": "https://arxiv.org/abs/2005.13210",
"abstract": "A bounded curvature path is a continuously differentiable piece-wise $C^2$ path with bounded absolute curvature connecting two points in the tangent bundle of a surface. These paths have been widely considered in computer science and engineering since the bound on curvature models the trajectory of the motion of robots under turning circle constraints. Analyzing global properties of spaces of bounded curvature paths is not a simple matter since the length variation between length minimizers of arbitrary close endpoints or directions is in many cases discontinuous. In this note, we develop a simple technology allowing us to partition the space of spaces of bounded curvature paths into one-parameter families. These families of spaces are classified in terms of the type of connected components their elements have (homotopy classes, isotopy classes, or isolated points) as we vary a parameter defined in the reals. Consequently, we answer a question raised by Dubins (Pac J Math 11(2), 1961).",
"subjects": "Metric Geometry (math.MG)",
"title": "Census of bounded curvature paths",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540656452213,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7099046578600143
} |
https://arxiv.org/abs/0912.1825 | Alexandrov curvature of convex hypersurfaces in Hilbert space | It is shown that convex hypersurfaces in Hilbert spaces have nonnegative Alexandrov curvature. This extends an earlier result of Buyalo for convex hypersurfaces in Riemannian manifolds of finite dimension. | \section{Introduction}
In this paper, the following result is established:
\begin{thm}\label{maintheorem} If $C$ is an open set in a Hilbert space $H$
and $\overline C$ is locally convex, then
$\partial C$ is a nonnegatively curved Alexandrov space under the induced length metric.
\end{thm}
Questions of this sort go back to \cite{Alexandrov48}, where Alexandrov defined
Alexandrov
curvature and showed that it characterizes boundaries of locally convex bodies in $\mathbb{R}^3$.
This was generalized by Buyalo to the case of locally convex sets of full dimension in a Riemannian manifold in \cite{Buyalo76}.
If the ambient manifold has a positive lower bound $\kappa$ on sectional curvature, it
has also been shown in \cite{AKP08} that the convex boundary has Alexandrov
curvature $\ge\kappa$.
The proof of Theorem \ref{maintheorem} relies on approximating $\partial C$ by smooth manifolds,
where the connection between curvature and convexity is well understood. Due to the possibly
infinite dimension of $H$, we cannot smooth by integrating over $H$ against a mollifier. As
currently known smoothing operators for infinite dimensional spaces do not preserve convexity,
we proceed by integrating over a suitably chosen finite dimensional subspace. Lemma \ref{quadruple} shows this can be done in such a way that the curvature of $\partial C$ is controlled by the curvature of smooth, finite-dimensional approximating manifolds. A similar
approximation of infinite-dimensional curvature by finite dimensional curvature is outlined
in \cite{Halbeisen00}.
\section{Basic definitions}
We begin by defining curvature in the sense of Alexandrov. There are several equivalent
definitions, and we will find it most convenient to work with comparison angles.
\begin{defn}For three points $x,y,z$ in a metric space $(X,d)$, the
comparison angle $\tilde{\angle} xyz$ is defined as
\[\tilde{\angle} xyz=\arccos\frac{d^2(x,y)-d^2(x,z)+d^2(y,z)}{2d(x,y)d(y,z)}.\]
\end{defn}
Recall that $(X,d)$ is called a length space if the distance between any two points equals
the infimum of the lengths of paths between them.
\begin{defn} A length space $(X,d)$ is said to have nonnegative Alexandrov curvature if
$X$ is locally complete and every $x\in X$ has a neighborhood $U_x$ which satisfies the
quadruple condition:
\[\tilde{\angle} bac + \tilde{\angle} cap +\tilde{\angle} pab \le 2\pi\]
for any quadruple $(a; b,c,p)$ of distinct points in $U_x$.
In this case, $X$ is called a nonnegatively curved Alexandrov space.
\end{defn}
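Both definitions are directly computable. The sketch below (an illustration added here, not part of the paper) evaluates comparison angles via the law of cosines and sums the three comparison angles of a quadruple; for points of Euclidean space, where comparison angles coincide with ordinary angles, the quadruple condition always holds.

```python
import math

def comparison_angle(x, y, z):
    """Comparison angle at the middle vertex y, from the law of cosines."""
    dxy, dyz, dxz = math.dist(x, y), math.dist(y, z), math.dist(x, z)
    c = (dxy**2 + dyz**2 - dxz**2) / (2 * dxy * dyz)
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding error

def quadruple_sum(a, b, c, p):
    """Sum tested by the quadruple condition for the quadruple (a; b, c, p)."""
    return (comparison_angle(b, a, c) + comparison_angle(c, a, p)
            + comparison_angle(p, a, b))
```

For the planar quadruple $a=(0,0)$, $b=(1,0)$, $c=(0,1)$, $p=(-1,1)$ the sum is $3\pi/2\le 2\pi$, consistent with $\mathbb R^2$ being nonnegatively curved.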
If $X$ is a Riemannian manifold, then nonnegative Alexandrov curvature is equivalent
to nonnegative sectional curvature.
It will also be helpful to fix notation for polygonal paths.
\begin{defn}For two points $p,q$ in a vector space $V$, $\sigma_{pq}:[0,1]\to V$
denotes the constant speed linear path:
\[\sigma_{pq}(t)=(1-t)p+tq.\]
\end{defn}
\begin{defn}
A path $\tau:[0,1]\to V$ is called a polygonal path if it can be written in the form
\[\tau(t)=\sum_{i=1}^{k-1} \sigma_{p_i p_{i+1}}\big((k-1)t-(i-1)\big)\, 1_{[(i-1)/(k-1),\,i/(k-1))}(t)\]
for some points $p_1,\dots,p_k\in V$, with the last interval taken closed. Here $1_A$ denotes the characteristic
function of the set $A$.
\end{defn}
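A constant-speed polygonal path through finitely many points can also be evaluated numerically. The following Python sketch (an illustration, not from the paper) maps $[0,1]$ onto the concatenation of the segments $\sigma_{p_i p_{i+1}}$.

```python
def polygonal_path(points):
    """Constant-speed polygonal path t -> R^m through the given points, t in [0, 1]."""
    k = len(points)
    if k < 2:
        raise ValueError("need at least two points")
    def tau(t):
        s = t * (k - 1)          # rescale [0, 1] to [0, k-1]
        i = min(int(s), k - 2)   # segment index 0 .. k-2
        u = s - i                # local parameter in [0, 1] on that segment
        return tuple((1 - u) * a + u * b for a, b in zip(points[i], points[i + 1]))
    return tau
```

For three points the path traverses $p_1\to p_2$ on $[0,\tfrac12]$ and $p_2\to p_3$ on $[\tfrac12,1]$.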
\section{Approximation by smooth manifolds}
In this section, we prove two technical lemmas which allow us to approximate $C^{1,1}$
convex functions $f$ on a Hilbert space by convex functions that are smooth on a finite-dimensional linear subspace. This enables us to control the Alexandrov
curvature of $\graph_f$, the graph of $f$ in $H\times\mathbb{R}$, via
the sectional curvature of the approximating smooth graphs.
\begin{lem}\label{polypath} Let $f:V\to(X,d)$ be a $\lambda$-bi-Lipschitz map from a Banach space $V$
onto a metric space $(X,d)$. For any rectifiable curve
$\sigma:[0,1]\to X$ and any $\varepsilon>0$, there exists a
polygonal path $\tau:[0,1]\to V$ such that
$f\circ\tau(0)=\sigma(0)$, $f\circ\tau(1)=\sigma(1)$, $\forall
t\in[0,1]$, $|\sigma(t)-f\circ\tau(t)|<\varepsilon$ and
$|l(\sigma)-l(f\circ\tau)| <\varepsilon$.
\end{lem}
\begin{proof}
For each rectifiable curve $\sigma_0:[0,1]\to X$ and
$\varepsilon>0$, define
\[B^1_\varepsilon(\sigma_0)
=\{\sigma:[0,1]\to X ; \forall t\in[0,1],
d(\sigma_0(t),\sigma(t))<\varepsilon,
|l(\sigma_0)-l(\sigma)|<\varepsilon\}.\] For each rectifiable
curve $\sigma_0:[0,1]\to V$ and $\varepsilon>0$, define
\[B^2_\varepsilon(\sigma_0)
=\{\sigma:[0,1]\to V ; \forall t\in[0,1],
|\sigma_0(t)-\sigma(t)|<\varepsilon,
|l(\sigma_0)-l(\sigma)|<\varepsilon\}.\]
Fix a rectifiable curve $\sigma_0:[0,1]\to V$ and $\varepsilon>0$.
For any $\sigma\in B^2_\varepsilon(\sigma_0)$, for all
$t\in[0,1]$,
\[|\sigma_0(t)-\sigma(t)|<\varepsilon\implies |f\circ\sigma_0(t)-f\circ\sigma(t)|<\lambda\varepsilon.\]
Furthermore,
\begin{equation*}
\begin{split}
|l(f\circ\sigma_0)-l(f\circ\sigma)|
&\le |l(\sigma_0)-l(\sigma)| + |l(f\circ\sigma_0)-l(\sigma_0)| + |l(f\circ\sigma)-l(\sigma)|\\
&\le \varepsilon + l(f\circ\sigma_0)+l(\sigma_0) + l(f\circ\sigma)+l(\sigma)\\
&\le \varepsilon + \lambda l(\sigma_0)+l(\sigma_0)+ \lambda l(\sigma)+l(\sigma)\\
&\le \varepsilon+(\lambda+1)l(\sigma_0)+(\lambda+1)(l(\sigma_0)+\varepsilon)\\
&\le 2(\lambda+1)(\varepsilon+l(\sigma_0)).
\end{split}
\end{equation*}
So for $\varepsilon'=2(\lambda+1)(\varepsilon+l(\sigma_0))$,
\[B^2_\varepsilon(\sigma_0)\subset f^{-1}(B^1_{\varepsilon'}(f\circ\sigma_0)).\]
By a similar argument, for any rectifiable curve
$\sigma_0:[0,1]\to X$ and $\varepsilon>0$,
\[f^{-1}(B^1_\varepsilon(\sigma_0))\subset B^2_{\varepsilon'}(f^{-1}\circ\sigma_0),\]
for $\varepsilon'=2(\lambda+1)(\varepsilon+l(\sigma_0))$. Thus the
$B^2$'s and $f^{-1}(B^1)$'s determine equivalent topologies on the
space of rectifiable curves $\sigma:[0,1]\to V$. Polygonal paths
are dense under the $B^2$-topology, so they are dense under the
$f^{-1}(B^1)$-topology.
\end{proof}
\begin{lem}\label{quadruple} Let $f:\Omega\to\mathbb{R}$ be a $C^{1,1}$ convex function, where $\Omega$ is a domain in a
Hilbert space $H$. For any $x_0\in\Omega$, there exists $R>0$ such that $Y$, the graph of
$f$ over $B_R(x_0)$, satisfies the quadruple condition
\[\tilde{\angle} bac + \tilde{\angle} cap +\tilde{\angle} pab \le 2\pi\]
for any quadruple $(a; b,c,p)$ of distinct points, under the induced length metric $d$ from $H\times\mathbb{R}$.
\end{lem}
\begin{proof}
$f$ is convex, hence Lipschitz continuous for some Lipschitz constant $L\ge1$.
Let $\hat f:\Omega\to \hat f(\Omega)\subset\graph_f$ be defined
by $\hat f(x)=
(x, f(x))$, and note that $\hat f$ is $\sqrt{1+L^2}$-bi-Lipschitz.
Choose $R>0$ such that $B_{3R}(x_0)\subset\Omega$.
Suppose that
$(a; b,c,p)$ is a quadruple of distinct points such that
\[\tilde{\angle} bac + \tilde{\angle} cap +\tilde{\angle} pab = 2\pi + \varepsilon_0 > 2\pi,\]
where $(a; b,c,p)=(\hat f(a'); \hat f(b'),\hat f(c'),\hat f(p'))$
and $a',b',c',p'\in B_R(x_0)$. The comparison angles vary continuously
in the intrinsic distances, so there exists $\varepsilon>0$ such
that if $(A;B,C,D)$ is a quadruple of points in some other metric
space $(X_1,d_1)$ with
\begin{align*}
|d(a,b)-d_1(A,B)|<\varepsilon,\quad
|d(a,c)-d_1(A,C)|&<\varepsilon,\quad
|d(a,p)-d_1(A,P)|<\varepsilon,\\
|d(b,c)-d_1(B,C)|<\varepsilon,\quad
|d(b.p)-d_1(B,P)|&<\varepsilon,\quad
|d(c,p)-d_1(C,P)|<\varepsilon,
\end{align*}
then
\[\tilde{\angle} BAC + \tilde{\angle} CAP +\tilde{\angle} PAB \ge 2\pi + (\varepsilon_0/2) > 2\pi.\]
By Lemma \ref{polypath}, we may approximate $d(a,b)$ by the length
of the image under $\hat f$ of a polygonal path $\tau_1$ determined by points
$a'=q_1,q_2,\dots,q_{k_1-1},b'=q_{k_1}\in B_{2R}(x_0)$ such that
\[d(a,b) + (\varepsilon/3) \ge
\sum_{i=1}^{k_1-1}l(\hat f\circ\sigma_{q_i q_{i+1}})
= l(\hat f\circ\tau_1)
\ge d(a,b).
\]
Similarly, we may approximate $d(a,c)$ by
the image under $\hat f$ of a polygonal path determined by points $a'=q_{k_1+1},q_{k_1+2},\dots,c'=q_{k_2}\in B_{2R}(x_0)$ such that
\[d(a,c) +(\varepsilon/3) \ge
\sum_{i=k_1+1}^{k_2-1}l(\hat f\circ\sigma_{q_i q_{i+1}})
\ge d(a,c).\]
Continue in this manner choosing $q_{k_2+1},q_{k_2+2}\dots,q_{k_3},\dots,q_{k_6}$ to approximate
the remaining four intrinsic distances.
The $k_6+1$ points $q_1,\dots,q_{k_6},x_0$ lie in a $k_6$-dimensional subspace of
$H$, which we will identify as $\mathbb{R}^n$, $n=k_6$. Let $\varphi_\delta:\mathbb{R}^n\to\mathbb{R}$ be the standard $C^\infty$ mollifier supported on the $\delta$-ball, and define $f_\delta:B_{5R/2}(x_0)\to\mathbb{R}$ by $f_\delta=f\ast\varphi_\delta$,
where the convolution occurs in the $\mathbb{R}^n$-variables and
$\delta<R/2$. Let $\hat f_\delta(x)=(x,f_\delta(x))$.
As $f$ is assumed to be convex and $C^{1,1}$, it is easy to check the following properties:
\begin{enumerate}
\item $f_\delta|_{B_{2LR}(x_0)\cap\mathbb{R}^n}$ is $C^\infty$.
\item $f_\delta|_{B_{2LR}(x_0)\cap\mathbb{R}^n}$ is $L$-Lipschitz.
\item $f_\delta\to f$ pointwise as $\delta\to0$.
\item On $\mathbb{R}^n\cap \overline{B_{2LR}(x_0)}$, $\nabla_{\mathbb{R}^n}f_\delta\to \nabla_{\mathbb{R}^n}f$ uniformly as $\delta\to0$.
\item For every rectifiable curve $\sigma:[0,1]\to B_{2R}(x_0)$,
$l(\hat f_\delta\circ\sigma)
\to l(\hat f\circ\sigma)$. This convergence is uniform on sets
$\{\sigma:[0,1]\to B_{2R}(x_0) ; l(\sigma)<C\}$ with $C\in\mathbb{R}$.
\item $f_\delta$ is convex.
\end{enumerate}
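The smoothing step can be illustrated numerically in one dimension (an added sketch under simplifying assumptions; the paper convolves over a finite-dimensional subspace of $H$). Mollifying a convex function preserves convexity and the Lipschitz constant, and leaves the function unchanged wherever it is affine on a $\delta$-neighborhood:

```python
import math

def mollify(f, x, delta, n=400):
    """Numerically evaluate (f * phi_delta)(x) in one dimension, where phi_delta is
    the standard bump mollifier supported on [-delta, delta], via a midpoint rule.
    A 1-D illustration only, not the construction used in the proof."""
    h = 2 * delta / n
    ts = [-delta + (i + 0.5) * h for i in range(n)]             # quadrature nodes
    weights = [math.exp(-1.0 / (1.0 - (t / delta) ** 2)) for t in ts]
    norm = sum(weights)                                          # numerical normalization
    return sum(w * f(x - t) for w, t in zip(weights, ts)) / norm
```

Applied to the convex function $t\mapsto |t|$, the mollification agrees with $|t|$ away from the kink, lies strictly above it at the kink, and satisfies the midpoint convexity inequality, mirroring properties (2), (3) and (6) above.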
Let
$Y_\delta$ denote the graph of $f_\delta$ over $B_{2R}(x_0)$
with metric $d_\delta$ induced by $H\times\mathbb{R}$, and let
$Y_{\delta,n}$ denote the graph of $f_\delta$ over
$B_{2R}(x_0)\cap\mathbb{R}^n$ with metric $d_{\delta,n}$ induced by
$\mathbb{R}^n\times\mathbb{R}$. Note that $f_\delta|_{B_{2R}(x_0)\cap\mathbb{R}^n}$ is a
$C^\infty$ convex function over a domain in $\mathbb{R}^n$, so
$Y_{\delta,n}$ is a Riemannian manifold of nonnegative sectional
curvature. In particular, it satisfies the quadruple condition. We
will obtain a contradiction by showing
\begin{align*}
|d(a,b)-d_{\delta,n}(\hat f_{\delta}(a'),\hat
f_{\delta}(b'))|&<\varepsilon,\quad
|d(a,c)-d_{\delta,n}(\hat f_{\delta}(a'),\hat f_{\delta}(c'))|<\varepsilon,\\
|d(a,p)-d_{\delta,n}(\hat f_{\delta}(a'),\hat
f_{\delta}(p'))|&<\varepsilon,\quad
|d(b,c)-d_{\delta,n}(\hat f_{\delta}(b'),\hat f_{\delta}(c'))|<\varepsilon,\\
|d(b,p)-d_{\delta,n}(\hat f_{\delta}(b'),\hat
f_{\delta}(p'))|&<\varepsilon,\quad |d(c,p)-d_{\delta,n}(\hat
f_{\delta}(c'),\hat f_{\delta}(p'))|<\varepsilon.
\end{align*}
Let $C=d(a,b)+d(a,c)+\dots+d(c,p)+\varepsilon$. Choosing
$\delta_0$ small with respect to $C$, we have for all
$\delta<\delta_0$,
\[\tau\in\{\sigma:[0,1]\to B_{2R}(x_0) ; l(\sigma)<C\}
\implies |l(\hat f_\delta\circ\tau)-l(\hat f\circ\tau)|\le
\varepsilon/3.\]
Recall that $\tau_1$ is the polygonal path determined by $q_1,\dots,q_{k_1}$.
\[l(\tau_1)\le l(\hat f\circ\tau_1)
=\sum_{i=1}^{k_1-1}l(\hat f\circ\sigma_{q_i q_{i+1}}) \le
d(a,b)+(\varepsilon/3)<C,\] so $l(\hat f_\delta\circ\tau_1)\le
l(\hat f\circ\tau_1)+(\varepsilon/3)$ for $\delta<\delta_0$. $\hat
f_\delta\circ\tau_1:[0,1]\to Y_{\delta,n}$ is a path from $\hat
f_\delta(a')$ to $\hat f_\delta(b')$, so
\[d_{\delta,n}(\hat f_{\delta}(a'),\hat f_{\delta}(b'))
\le l(\hat f_\delta\circ\tau_1) \le l(\hat
f\circ\tau_1)+(\varepsilon/3) \le d(a,b) + (2\varepsilon/3).\]
Applying Lemma \ref{polypath} again, choose $\tau_2:[0,1]\to
B_{2R}(x_0)\cap\mathbb{R}^n$ such that
\[d_{\delta,n}(\hat f_{\delta}(a'),\hat f_{\delta}(b'))
\ge l(\hat f_\delta\circ\tau_2)-(\varepsilon/6).\] Note that
\[l(\tau_2)\le l(\hat f_\delta\circ\tau_2) \le
d_{\delta,n}(\hat f_{\delta}(a'),\hat f_{\delta}(b')) +
(\varepsilon/6) \le d(a,b)+(5\varepsilon/6) < C.\] For
$\delta<\delta_0$,
\[l(\hat f_\delta\circ\tau_2)\ge l(\hat f\circ\tau_2) - (\varepsilon/3),\]
so
\[d_{\delta,n}(\hat f_{\delta}(a'),\hat f_{\delta}(b'))
> l(\hat f\circ\tau_2)-\varepsilon
\ge d(a,b)-\varepsilon.\] The remaining inequalities follow in a
similar manner, for the same choice of $C$ and $\delta_0$. So for
$\delta<\delta_0$, the quadruple $(\hat f_{\delta}(a');\hat
f_{\delta}(b'), \hat f_{\delta}(c'),\hat f_{\delta}(p'))$ violates
the quadruple condition in the Riemannian manifold of nonnegative
sectional curvature $Y_{\delta, n}$. Therefore our original
assumption is false and $Y$ satisfies the quadruple condition.
\end{proof}
\section{Proof of Theorem \ref{maintheorem}}
\begin{proof}[Proof of Theorem \ref{maintheorem}]
We must prove the quadruple condition holds in a neighborhood of
every $x_0\in\partial C$. Let $C'=B_{2\rho}(x_0)\cap C$, where $\rho$
is chosen small enough to make $C'$ convex. Note that the
intrinsic balls of radius $\rho$ about $x_0$ are the same for $C$
and $C'$. Choose a point $y\in C'$, and $r\in(0,\rho/2)$ such that
$B_{2r}(y)\subset C'$. Let $H'$ be the hyperplane through $x_0$
with normal vector $y-x_0$. For any $x\in H'\cap B_{2r}(x_0)$, let
$L_x$ be the line through $x$ spanned by $y-x_0$. $L_x\cap C'$ is
convex and $C'$ is open and bounded, so $L_x\cap C'$ is a bounded
interval. $x+(y-x_0)\in L_x\cap C'$, so $L_x\cap C'\neq\emptyset$.
Considering $y-x_0$ as the upward direction, let $f(x)$ denote the
$\mathbb{R}$-coordinate of the bottom endpoint of $L_x\cap C'$ in
$H'\times \mathbb{R}$. $f:H'\cap B_{2r}(x_0)\to\mathbb{R}$ is then a convex
function, as the epigraph is convex. Furthermore, the graph of $f$
is a neighborhood of $x_0$ in $\partial C'$, and thus also in $\partial
C$ since $2r<\rho$.
$f$ is convex, hence Lipschitz continuous for some Lipschitz constant $L\ge1$.
As shown in \cite{LL86}, for all small enough $\varepsilon>0$, the
inf-sup-convolution
\[g_\varepsilon(x)=\inf_{z\in H'\cap B_{2r}(x_0)} \sup_{y\in H'\cap B_{2r}(x_0)}
\left[f(y)-\frac{\|y-z\|_H^2}{2\varepsilon}+\frac{\|x-z\|_H^2}{\varepsilon}\right]\]
is a $C^{1,1}$ convex function on $H'\cap B_r(x_0)$,
$g_\varepsilon$ is $L$-Lipschitz, and $g_\varepsilon\to f$
uniformly on $H'\cap B_r(x_0)$.
By Lemma \ref{quadruple}, the graph of $g_\varepsilon$ over
$H'\cap B_R(x_0)$ satisfies the quadruple condition for
$R=r/3$. The graph of $f$ over $H'\cap B_R(x_0)$ then satisfies
the quadruple condition by continuity.
\end{proof}
\bibliographystyle{plain}
| {
"timestamp": "2009-12-15T19:43:54",
"yymm": "0912",
"arxiv_id": "0912.1825",
"language": "en",
"url": "https://arxiv.org/abs/0912.1825",
"abstract": "It is shown that convex hypersurfaces in Hilbert spaces have nonnegative Alexandrov curvature. This extends an earlier result of Buyalo for convex hypersurfaces in Riemannian manifolds of finite dimension.",
"subjects": "Metric Geometry (math.MG)",
"title": "Alexandrov curvature of convex hypersurfaces in Hilbert space",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540728763411,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7099046572807716
} |